- The argument that concerns about future AI risks distract from current AI problems does not make logical sense when analyzed directly, as concerns can complement each other rather than compete for attention.
- The real motivation behind this argument may be an implicit competition over group status and political influence, with endorsements of certain advocates seen as wins or losses.
- Advocates for AI safety and those for addressing current harms are not necessarily opposed and could find areas of agreement like interpretability issues.
- AI safety advocates should avoid framing their work as more important than current problems or that resources should shift, as this can antagonize allies.
- Both future risks and current harms deserve consideration and efforts to address them can occur simultaneously rather than as a false choice.
- Concerns over future AI risks come from a diverse range of political ideologies, not just tech elites, showing it is not a partisan issue.
- Cause prioritization aiming to quantify and compare issues can seem offensive but is intended to help efforts have the greatest positive impact.
- Rationalists concerned with AI safety also care about other issues they see as less consequential, showing an ability to support multiple related causes.
- Framing debates as zero-sum competitions undermines potential for cooperation between groups with aligned interests.
- Building understanding and alliances across different advocacy communities could help maximize progress on AI and its challenges.
https://www.lesswrong.com/posts/nt8PmADqKMaZLZGTC/inside-views-impostor-syndrome-and-the-great-larp
- Experts like Yoshua Bengio have deep mental models of their field that allow them to systematically evaluate new ideas and understand barriers, while most others lack such models and rely more on trial and error.
- Impostor syndrome may be correct in that most people genuinely don't have deep understanding of their work in the way experts do, even if they are still skilled compared to others in their field.
- Progress can still be made through random experimentation if a field has abundant opportunities and good feedback loops, even without deep understanding.
- Claiming nobody understands anything provides emotional comfort but isn't true - understanding varies significantly between experts and novices.
- The real problem with impostor syndrome is the pressure to pretend one understands more than they do.
- People should be transparent about what they don't know and actively work to develop deeper mental models through experience.
- The goal should be learning, not just obtaining credentials, by paying attention to what works and debugging failures.
- Have long-term goals and evaluate work in terms of progress towards those goals.
- Over time, actively working to understand one's field leads to developing expertise rather than feeling like an impostor.
- Widespread pretending of understanding enables a "civilizational LARP" that discourages truly learning one's profession.
a comment thread of mostly ai-generated summaries of lesswrong posts, so I can save them in a slightly public place for future copypasting without them showing up in the comments of the posts themselves
coming back to this: I claim that when we become able to unify the attempted definitions, it will become clear that consciousness is a common, easily-achieved-by-accident information dynamics phenomenon, but that agency, while achievable in software, is not easily achieved by accident.
some definition attempts that I don't feel hold up to scrutiny right now, but which appear to me to be low-scientific-quality sketches resembling what I think the question will resolve to later:
- the one zahima linked is top of the list
- this one, which has terrible epistemic quality and argues some things that are physically nonsensical, but which is nevertheless a reasonable high-temperature attempt to define it in full generality, imo
- "it boils down to, individual parts of the universe exist, consciousness is when they have mutual information" or so. you're a big mind knowing about itself, but any system having predictive power on another system for justified reasons is real consciousness.
a gpu contains 2.5 petabytes of data if you oversample its wires enough. if you count every genome in the brain it easily contains that much. my point being, I agree, but I also see how someone could come up with a huge number like that and not be totally locally wrong, just highly misleading.
no, you cannot. ducks cannot be moved; ducks are born, never move, and eventually crystallize into a duck statue after about 5 years of life standing in one spot.
@Jim Fisher what's your reasoning for removing the archive.org links?
As far as I know, there has never been a society that both scaled and durably resisted command-power being sucked into a concentrated authority bubble, whether that command-power/authority was tokenized via rank insignia or via numerical wealth ratings. The task of building a large-scale society of hundreds of millions to billions that can coordinate, synchronize, keep track of each other's needs and wants, fulfill the fulfillable needs and most of the wants, and nevertheless retain the benefit of giving both humans and nonhumans significant slack (the benefit that the best designs for medium-scale societies of single to tens of millions, like indigenous governance, did and do provide) is an open problem. I have my preferences for what areas of thought are promising, of course.
Structuring the numericalization of which sources of preference-statement-by-a-wanting-being get interpreted as commands by the people, motors, and machines in the world appears to me to inline the alignment problem and generalize it away from AI. It seems to me right now that this is the perspective where "we already have unaligned AI" makes the most sense - what is coming is then more powerful unaligned ai - and it seems to me that promising movement on aligning AI with moral cosmopolitanism will likely be portable back into this more general version. Right now, the competitive dynamics of markets - where purchasers typically sort offerings by some combination of metrics that centers price - create dynamics where sellers that can produce things the most cheaply in a given area win. Because of monopolization and the externalities it makes tractable, the organizations most able to sell services involving the work of many AI research workers and the largest compute clusters are somewhat concentrated; the more cheaply implementable AI systems are in more hands, but most of those hands are the ones most able to find vulnerabilities in purchasers' decisionmaking and use them to extract numericalized power coupons (money).
It seems to me that ways to solve this would involve things that are already well known: if very-well-paid workers at major AI research labs could find it in themselves to unionize, they might be more able to say no to things where their organizations' command structure has misplaced incentives stemming from those organizations' stock contract owners' local incentives. But I don't see a quick shortcut around it, and it doesn't seem as useful as technical research on how to align things like the profit motive with cosmopolitan values, eg via things like Dominant Assurance Contracts.
I have the sense that it's not possible to make public speech non-political, and that in order to debate things in a way that doesn't require thinking about how everyone who reads them might take them, one has to simply write things where they'll only be seen by people one knows well. That's not to say I think writing things publicly is bad; but I think tools for understanding what meaning different people will take from a phrase would help people communicate the things they actually mean.
those are easy enough to come by from existing ideas that you probably shouldn't make the world worse by suggesting new ones and adding to the pile of loose gunpowder covering the world. I've come up with plenty myself; I simply try to extract a way to thwart them and discard the husk that suggests how to harm people. compare https://www.lesswrong.com/posts/bTteXdzcpAsGQE5Qc/biosecurity-culture-computer-security-culture
I feel that this is appropriate for those hung up on the concept of consciousness, but there are some technical confusions about the topic that I find frustrating. It does still seem to conflate consciousness with agency in a way that I'm not sure is correct. But, if one is thinking of agency when saying consciousness, it's not the worst resource. I'd argue that consciousness is a much easier bar to pass than agency, and agency is the concern - but given that most people think of them as the same and even view their own consciousness through the lens of how it contributes to their own agency, perhaps it's not so bad. But one can be awake, coherent, selfhood-having, and yet not have much in the way of intention, and that is the thing I wish it had focused on more. It's certainly on the right side of things in terms of not just seeing ai as a technology.
Yeah, sounds right, though I've encountered hierarchy-resistant organizations that successfully route tasks by capability. It's currently quite fragile, but I don't see any fundamental reason it has to stay that way. A non-concentratable currency-creation system such as grassroots economics, plus a cultural focus on credit assignment toward technical workers, can move toward keeping the graph in a mesh structure; some would say the numbers are optional in small orgs, sometimes even big ones, compare various types of open source culture. The difficulty is ensuring that any numerical tracking of predictions about who will do what well, and by whose request, stays precise enough without becoming so precise that people who want to exploit politics-like vulnerabilities are able to do so.
- prominent placement. eg:
- use tags on a post, embedded as an intro to the post even without hover, perhaps at the bottom of the post, perhaps at the top. Users don't seem to consistently realize that the wiki tags are lists of relevant topics.
- recently edited wiki pages in the sidebar.
- or at least just put the wiki in the sidebar at all, or clarify what it is; right now it's just under "concepts", and clicking that gives you a big, hard-to-read list. You can hover each one to get the beginning of the summary, but to do that you need to have already been motivated to do so, and it's a trivial inconvenience.
- suggestion in the lesswrong editor to make phrases that match the titles of wiki pages into links to those wiki pages
- or just find all places a wiki page is mentioned as a link from the wiki page; this is already possible with the current upgraded search engine, just would be more convenient to have a button for it
- extra bonus points for showing the search results on the wiki page itself in a way that doesn't take too much attention but which makes it a clear feature. eg, put a search for untagged posts and comments below the list of tagged posts.
- make sure wiki page edits show up in recent discussion. I vaguely remember they might already, but also remember finding them slightly confusing. might be false memory either way.
- related pages via semantic search? longstanding wishlist item. search recently improved, possibly already solved.
- in particular, suggested or similar tags views on every post.
- this could be done by embedding every post (or every paragraph in every post) using the openai api (or another embedding model from MTEB) and using a combined semantic-search-plus-BM25 similarity comparison to find relevant wiki topics (see the sketch after this list). Bonus points for two-way linking.
- view on wikipedia, ask gpt, etc buttons, to accelerate importing existing overviews?
- LLM programs to detect out-of-date wiki pages, eg given page=<wiki page content> and post=<post tagged with that wiki tag>, run gpt4 with:
[{"role": "system", "content": f"Here is a wiki page. Bot please review whether user's article has anything new to add on top of the following wiki page.\n\n> {page}\n\nBot write a VERY CONCISE, ONE SENTENCE, NO NEWLINE overview of what topics the user's article adds beyond the above wiki page."}, {"role": "assistant", "content": "Min word use active."}, {"user": post}]
and then use this to generate summaries of what each post with a tag has to add to the topic it's tagged with. I'd guesstimate, without even fermi estimating it, that this would cost $5k to run over all lesswrong posts with gpt4. Note that the brevity prompt is for human use, not to save AI tokens.
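A minimal sketch of wiring that prompt into an actual call, assuming the v1 openai Python client; the function name is hypothetical, and batching over all tagged posts, rate limiting, and error handling are left out:

```python
# Sketch: ask gpt-4 for a one-sentence note on what `post` adds beyond `page`.
# Assumes the `openai` v1 Python client is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def post_addition_summary(page: str, post: str) -> str:
    messages = [
        {"role": "system", "content": (
            "Here is a wiki page. Bot please review whether user's article has "
            f"anything new to add on top of the following wiki page.\n\n> {page}\n\n"
            "Bot write a VERY CONCISE, ONE SENTENCE, NO NEWLINE overview of what "
            "topics the user's article adds beyond the above wiki page."
        )},
        {"role": "assistant", "content": "Min word use active."},
        {"role": "user", "content": post},
    ]
    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    return resp.choices[0].message.content.strip()
```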
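And a minimal sketch of the embedding-plus-BM25 "related pages" idea from earlier in the list, assuming the openai client and the rank_bm25 package; the model name, the 50/50 blend, and the function names are placeholder assumptions, not recommendations:

```python
# Sketch: rank wiki pages by relevance to a post using embeddings + BM25.
# Assumes `openai` (v1 client), `rank_bm25`, and `numpy` are installed.
import numpy as np
from openai import OpenAI
from rank_bm25 import BM25Okapi

client = OpenAI()

def embed(texts):
    # One embedding per input text; model choice is an arbitrary placeholder.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def related_wiki_pages(post_text, wiki_pages, top_k=5):
    """wiki_pages: dict of {title: page_text}."""
    titles = list(wiki_pages)
    corpus = [wiki_pages[t] for t in titles]

    # Dense similarity: cosine between the post and each wiki page.
    vecs = embed(corpus + [post_text])
    page_vecs, post_vec = vecs[:-1], vecs[-1]
    dense = page_vecs @ post_vec / (
        np.linalg.norm(page_vecs, axis=1) * np.linalg.norm(post_vec) + 1e-9
    )

    # Sparse similarity: BM25 over whitespace-tokenized pages.
    bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
    sparse = bm25.get_scores(post_text.lower().split())
    sparse = sparse / (sparse.max() + 1e-9)  # crude normalization

    blended = 0.5 * dense + 0.5 * sparse  # arbitrary equal weighting
    order = np.argsort(-blended)[:top_k]
    return [(titles[i], float(blended[i])) for i in order]
```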
Note that one wrench current paradigms throw into this is that self-improvement processes would not look uniquely recursive, since all training algorithms sort of look like "recursive self improvement". Instead, RSI is effectively just "oh no, the training curve was curved differently on this training run", which is the kind of thing most likely to happen in open-world RL. But I agree, open-world RL has the ability to be suddenly surprising in capability growth, and there wouldn't be much of an opportunity to notice the problem unless we've already solved how to intentionally bound capabilities in RL.
There has been some interesting work on bounding capability growth in safe RL already, though. I haven't looked closely at it, I wonder if any of it is particularly good.
edit: note that I am in fact claiming that after miri deconfuses us, it'll turn out to apply to ordinary gradient updates
Hmm. If academia picks stuff up, perhaps interhuman cosmopolitan alignment could use much more discussion on lesswrong, especially from interdisciplinary perspectives. In other words, if academia picks up short-term alignment, long-term alignment becomes a question of how to interleave differing value systems while ensuring actual practical mobility for socio-emotionally disempowered folks. The thing social justice is trying to be but keeps failing at due to corruption turning attempts at analytical "useful wokeness" into crashed slogan thought; the thing libertarianism is trying to be but keeps failing at due to corruption turning defense of selves into decision-theoretically unsound casual selfish reasoning; the thing financial markets keep trying to be but keep failing at due to corruption turning competitive reasoning systems into destructive centralization; etc. Qualitative insights from economics, evolutionary game theory, distributed systems programming, etc about how to see all of these issues as the same one as the utility function question in strongly supercapable superreasoner AI. Etc.
Just some rambling this brought to mind.
edited sep 13, here's what GPT4 wrote when asked to clarify wtf I'm talking about:
It seems like the paragraph discusses the potential of including more conversations about interdisciplinary cosmopolitan alignment in the community at lesswrong, particularly in light of developments in academia. You're recommending the community focus on synthesizing crucial insights from across various disciplines and cultural outlooks, with a view to advancing not just short-term but also long-term alignment efforts. The intent is towards a cosmopolitan harmony, where different value systems can coexist while providing tangible empowerment opportunities for people who are socio-emotionally disadvantaged.
The concept of 'alignment' you're discussing appears to be an application of social justice and economic principles aimed at driving change, though this is often undermined by corruption hindering promising initiatives. It corresponds to the 'useful wokeness', which unfortunately ends up being reduced to mere slogans and uncritical loyalty to ideological authority.
Another suggestion you indicated is found in the ideation behind libertarianism. This political ideology works towards autonomy and personal liberty but fails due to corruption that changes this ideal into impractical self-centeredness and hypocritical rule implementation. The common theme across these diverse though corruption-prone strategies is further amplified in your criticism of financial market dynamics, where the competitive nature of trading based reasoning systems often leads to harmful centralization. The discussed failure here appears to be more significant.
The utility function argument in artificial intelligence (AI) with high potential capabilities and reasoning abilities seems to be at the core of your argument. You suggest drawing qualitative insights from economics, evolutionary game theory, distributed systems programming, and other disciplines for a more comprehensive understanding of these issues. By attempting to view these myriad challenges as interrelated, one might recast them into a single question about utility functions, which if navigated effectively could address the question of alignment not only in AIs but also across society at large.
via kagi summarizer,
[main summary is entirely redundant with the post]
[most itemized summary was also redundant]
- Geoengineering would need to be done carefully and intentionally, rather than the current reckless and haphazard pollution.
- As the impacts of climate change become more severe, humanity will likely need to not only reduce carbon emissions but also remove existing CO2 and mitigate impacts through geoengineering.
- There are alternative ways to seed clouds for geoengineering purposes that do not involve sulfur dioxide, such as spraying sea water.
- Discussing geoengineering now does not mean giving up on reducing carbon emissions and transitioning to renewable energy.
it's not terribly in depth, but I guess those are additional points. more details are in the video, but not a lot more.
https://en.wikipedia.org/wiki/Prioritarianism and https://www.lesswrong.com/posts/hTMFt3h7QqA2qecn7/utilitarianism-meets-egalitarianism may be relevant here. Are you familiar with https://www.lesswrong.com/s/hCt6GL4SXX6ezkcJn as well? I'm curious how you'd compare the justification for your use of arctan to the justifications in those articles.
I don't think you can make LLMs feel alien because they are not in fact highly alien: neural systems are pretty familiar to other neural systems - where by neural system I mean a network of interacting components that learns via small updates which are local in parameter space - and you're more likely to make people go "wow, brains are cool" or "wow, ai is cool" than think it's a deeply alien mind, because there's enough similarity that people do studies to learn about neuroscience from deep learning. Also, I've seen evidence in public that people believe chatgpt's pitch that it's not conscious.
Consider this analogy: a child raised in a household espousing violent fascist ideologies may develop behaviors and attitudes that reflect these harmful beliefs. Conversely, the same child nurtured in a peaceful, loving environment may manifest diametrically opposite characteristics. Similarly, we could expect an AI trained on human data that encapsulates how humans see the world to align with our perspective.
Then I have bad news about that Internet data, and about the portion of humanity, worldwide, who endorse large fragments of the fascism recipe, such as authoritarianism, or the whole of it. Liberation, morality, care for other beings, drive for a healthy community, etc are not at all guaranteed even just in humans. In fact, this is a reason that even if ai is not on its own an xrisk, we should not be instantly reassured.
I think the appropriate choice would probably be https://www.ajl.org/
I feel like their opinions are already fairly well documented, and don't necessarily argue quite the way you're imagining; my sense is they're already in many ways on the same page, they just believe that the origin of the agenticness that will destroy humanity is external to the AI and embedded in the market dynamics of ownership contracts. They see most AI x-risk claims as "whataboutism" hype, and are not engaging seriously because they believe it's an attempt to manipulate them out of making progress on their cause areas. Taking their cause areas seriously as a show of good faith would go a long way, imo; I don't think they have any mechanistic objections to the actual idea of x-risk from goodharting at all! They just view that as having dense prior art in the form of anti-capitalism. Here's a good interview with Gebru:
bad ai summary (they'll be disappointed I dared to, heh). If this summary doesn't pitch you into watching the original and getting the detail (recommend 2x speed or more), you're missing the core insights they have to share, imo, but it's better than not watching the video at all in terms of understanding their ideas:
Here are 64 key points extracted from the discussion:
1. Large language models like GPT-3 are being misleadingly marketed as a step towards artificial general intelligence when in reality they are just text generators.
2. These models suffer from factual inaccuracies, promote misinformation, and reuse copyrighted work without compensation.
3. Companies hype up artificial intelligence to convince people the software is more capable than it really is and to justify profiting from it.
4. The concerns raised by researchers like Emily and Timnit focus on present harms while AI safety researchers primarily worry about potential future dangers from superintelligence.
5. The AI safety community tends to ignore or dismiss the harms highlighted by AI ethics researchers that focus on today's issues.
6. Large language models suffer from a lack of "scoping" where their potential uses are not clearly defined which makes them difficult to regulate and evaluate.
7. Emily and Timnit argue that the rapid commercialization of large language models has more to do with corporate interests and hype than genuine scientific progress.
8. The narrative that AI is progressing unstoppably is largely fueled by companies and their marketing claims rather than reflecting technological breakthroughs.
9. AI systems today lack any understanding that other minds exist, which is a foundational part of intelligence.
10. AI systems cannot communicate or interact meaningfully with humans the way an intelligent being could.
11. Regulating corporations for the literal output of their AI systems would help reduce harmful effects.
12. The public should push for appropriate legislation around AI that is developed through consultation with those most affected by the technology.
13. OpenAI, despite claiming to work in the interests of humanity and safety, hides important information about how their models are trained which prevents meaningful evaluation and accountability.
14. The paper "Sparks of AGI" from Microsoft researchers exhibits racism by citing a paper arguing for genetic differences in intelligence between races.
15. The "AI pause" letter cites flawed research and misunderstands the capabilities and limitations of large language models.
16. The "AI pause" letter promotes the technology even as it claims to raise concerns about it.
17. The concepts of "AI safety" and "AGI" promoted by organizations like OpenAI have eugenicist roots and are aligned with corporate interests.
18. Researchers should release data and model details to allow for accountability, transparency, and evaluation of any harms.
19. Large language models can deceptively mimic the form of human language and intelligence without possessing the underlying content or abilities.
20. AI companies mislead the public about capabilities to justify profits and build hype that attracts funding and talent.
21. The harms from large language models are happening now, in reifying discrimination through training data and misleading users; they are not merely potential future risks.
22. Not releasing training data and model specifics allows companies like OpenAI to avoid accountability for potential copyright violations and data theft.
23. ChatGPT's outputs about factual topics rely on scraping and mashing up information from the web, not any real understanding.
24. Self-driving cars fail at a foundational part of intelligence: communicating and interacting meaningfully with humans.
25. Corporations and billionaires, not technological progress, are primarily responsible for the speed at which large language models have proliferated.
26. There is no evidence large language models are actually intelligent - they only appear coherent to users who assume intelligence.
27. Calls for "AI regulation" often ignore the harms faced by those exploited in developing AI systems and hurt by its outcomes.
28. Concerns about potential "AGI takeover" ignore today's real issues of information pollution, misinformation, and exploitation of artists' work.
29. A narrow vision of who counts as a "person" allows AI safety researchers to ignore or minimize current harms faced by many groups.
30. An "AI pause" would be ineffective at stopping harms since data collection could continue and harm is already occurring.
31. Genuine scientific progress involves choice and communication, not inevitable paths determined by funding and hype.
32. Speculative fiction explores how technology impacts the human condition, rather than just focusing on "cool" applications.
33. The harms today from AI involve unequal access to resources and exacerbating discrimination, not potential doomsday scenarios.
34. An unjust economic system that concentrates corporate power and wealth in few hands drives irresponsible AI development.
35. Fundamental ignorance about what intelligence even is underlies the notion that large language models represent "the first sparks of AGI."
36. Good models of intelligence exhibit a range of intelligent behaviors, not just linguistic coherence.
37. Companies benefit from artificially intelligent-sounding marketing claims while marginalized groups bear the brunt of AI's harms.
38. Large language models are "unscoped technologies" built for any use case rather than aimed at specific, well-defined tasks.
39. No evidence suggests large language models actually understand language beyond providing seemingly plausible outputs.
40. Researchers have proposed regulation that includes watermarking synthetic media and liability for AI-caused harms.
41. The idea that AI is progressing unstoppably toward superintelligence combines science fiction fantasies with a selective reading of history.
42. Researchers evaluating the capabilities of AI systems should provide data, parameters, and documentation to allow for transparency and accountability.
43. ChatGPT-3 is polluting the information ecosystem with non-information and requires governance through legislation.
44. The harms from large language models are real now and can be addressed through policy, not hypothetical future dangers.
45. AI that relies on racially biased studies to define intelligence reveals racism within the AI research community.
46. AI research and machine learning techniques do not inherently lead to AGI; that narrative is driven by funding and corporate interests.
47. The key question is not how to build AGI but why - who benefits and who is harmed, and for what purposes it would be created.
48. Claims that AI will revolutionize entire sectors of the economy within 5 years rely on hype and unrealistic timelines.
49. Understanding the harms today from AI means centering the perspectives of those most affected by biases in data and uses of the technology.
50. Large language models succeed at mimicking the superficial form of language through statistical tricks, not through any inherent capabilities.
51. Responsible development of AI systems requires restricting what technologies can be created as much as developing them in ethically justifiable ways.
52. Current AI systems lack foundational capabilities required for intelligence like understanding that other minds exist with their own perspectives.
53. AI researchers concerned about future risks ignore or downplay present-day harms highlighted by AI ethics researchers.
54. Chasing funding to develop AGI and hype about future possibilities has distracted from addressing real problems AI could help solve today.
55. The onus should not be on marginalized groups to prove AI is harmful when the burden of responsibility lies with developers to show that it is beneficial.
56. Billionaires and technology companies claiming to develop AI for the greater good are primarily motivated by profit and self-interest.
57. Corporations seek to maximize profit from AI while shifting responsibility for any harms onto end users and the public.
58. The unchecked spread of potentially harmful AI tools requires collective action and regulation rather than reliance on individual discernment.
59. AI constitutes a choice by humans about what technologies to develop, not an inevitable progression - we can decide what problems to focus our efforts on solving.
60. AI hype benefits those who control funding and research directions while sidelining or harming marginalized groups.
61. The public needs information about how AI systems actually work in order to evaluate claimed capabilities and push for responsible outcomes.
62. Viewing today's AI capabilities through the lens of science fiction fantasies obscures real problems that could be helped by more narrowly targeted technologies.
63. Hyperbolic claims that AI will revolutionize society within just a few years ignore complexities of technological progress and human development.
64. Companies do not always reveal the harms of their technologies or the means to mitigate those harms because it would reduce profits from AI.
can you and others please reply with lists of people you find notable for their high signal to noise ratio, especially given twitter's sharp decline in quality lately?
I no longer think there's anything we could learn about how to align an actually superhuman agi that we can't learn from a weaker one.
(edit 2mo later: to rephrase - there exist model organisms of all possible misalignment issues that are weaker than superhuman. this is not to say human-like agi can teach us everything we need to know.)
Yeah I travel and ask randos on the street and they agree AI is about to kill us all. Does he travel?
Khan Academy's science videos are a valuable resource but may not promote meaningful learning on their own. Students often think they already know the material so do not pay full attention. When asked what they saw, they remember their own ideas instead of what was presented. Simply presenting correct information is not enough; students' misconceptions must be addressed to increase mental effort and learning. The most effective video showed an actor illustrating common misconceptions, which students then had to reconsider. This led to higher post-test scores and more reported mental effort, showing that confronting misconceptions can improve science learning from videos.
- Khan Academy, created by Sal Khan, has over 2200 videos covering a wide range of subjects including math, history and science.
- The author is skeptical that Khan Academy science videos promote meaningful learning for viewers.
- The author conducted a study where students watched science explanation videos but showed little improvement in test scores, indicating they did not learn much.
- Students often think they already know the material in science videos so they do not pay full attention and falsely remember their own ideas as what was presented.
- Simply presenting correct scientific information is not effective as students do not recognize how it differs from their existing ideas.
- Addressing common misconceptions in the videos helped increase students' mental effort and resulted in more learning.
- Starting with misconceptions and then explaining the scientific ideas may be a more effective approach for science videos.
- Khan Academy videos are a valuable resource but may not be that effective for those just starting to learn science as they do not challenge existing misconceptions.
- Increased mental effort while watching the videos translated to more learning for the students.
- The author tries to start Veritasium videos by addressing common misconceptions to tackle them effectively.
https://www.youtube.com/watch?v=eVtCO84MDj8
Khan Academy and the Effectiveness of Science Videos
Veritasium
2011
I'd suggest that what's needed is a strong immune system against conformity and control which does not itself require conformity or control: a generalized resistance to agentic domination by another being, of any kind, and as part of it, a cultural habit of joining up with others who are skilled at resisting pressure in response to an attempted pressure, without that joining-up creating the problems it attempts to prevent. I'm a fan of this essay on the topic, as well as other writing from that group.
I've missed one or more facts that link those threads; can you tell me which part of critch's approach stands out to you in this context and what made it do so? I agree that his is some of my favorite work, but it's not yet obvious to me that it actually checks all the boxes, in particular whether humans deploying ai will have values that directly contradict usage of his insights. it also still isn't clear to me whether anything in his work helps with is/ought.
I've downvoted my own comment for being off topic humor but this ran through my head,
Sure, I agree, the asteroid is going to kill us all. But it would be courteous to acknowledge that it's going to hit a poor area first, and they'll die a few minutes earlier. Also, uh, all of us are going to die, I think that's the core thing! we should save the poor area, and also all the other areas!
Wait, whoops. Let me retrace identity here, sounds like a big mistake, sorry bout that Meg & Melanie when you see this post someday, heh.
edit: oops! the video I linked doesn't contain a Mitchell at all! It's Emily M. Bender and Timnit Gebru, both of whom I have a high opinion of for their commentary on near-term AI harms, and both of whom I am frustrated with for not recognizing how catastrophic those very harms could become if they were to keep on getting worse.
Gebru and Bender express their opinions on such things in more detail in the video I linked. Here's the overcompressed summary, which badly miscompresses the video, but which is a reasonable pitch for why you should watch it and get the full thing, in order to respond to the points eloquently rather than to the facsimile. If you can put your annoyance at them missing the point about x-risk on hold and just try to empathize with their position, having also been trying to ring alarm bells and been dismissed, and see how they're feeling like the x-risk crowd is just controlled opposition being used to dismiss their warnings, I think it could be quite instructive.
I also strongly recommend watching this video - timestamp is about 30sec before the part I'm referencing - where Bengio and Tegmark have a discussion with, among others, Tawana Petty, and they also completely miss the point about present-day harms. In particular, note that as far as I can tell she's not frustrated that they're speaking up; she's frustrated that they seem to be oblivious in conversation to what the present-day harms even are. When she brings it up, they defend themselves as having already done something, which in my view misses the point, because she was looking for action on present-day harms to be woven into the action they're demanding from the start. "Why didn't they speak up when Timnit got fired?" or so. She's been pushing for people like them to speak up for years, and she appears to feel frustrated that even when they bring it up they won't mention the things she sees as the core problems. Whether or not she's right that the present-day problems are the core, I agree enthusiastically that present-day problems are intensely terrible and are a major issue we should in fact acknowledge and integrate into plans to take action as best we can. This will remain a point of tension, as some won't want to "dilute" the issue by bringing up "controversial" issues like racism. But I'd like to at least zoom in on this core point of conflict, since it seems to get repeatedly missed. We need to not be redirecting away from this, but rather integrating. I don't know how to do that off the top of my head. Tegmark responds to this, but I feel like it's a pretty crappy response that was composed on the fly, and it'd be worth the time to ponder asynchronously how to respond more constructively.
- "This has been killing people!"
- "Yes, but it might kill all people!"
- "Yes, but it's killing people!"
- "Of course, sure, whatever, it's killing people, but it might kill all people!"
You can see how this is not a satisfying response. I don't pretend to know what would be.
Edit #2: Severe identity error on my part! I seem to have been confusing who's who from memory badly, I made the video summary when I first saw the video and then turned that summary into a chattable bot today, and lost track of who was who in the process! I stand by the point made here, but it seems to be somewhat out of context, as it is merely related to the kind of thing Melanie Mitchell said by nature of being made by people who make similar points. I'm not going to delete this comment, but I'd appreciate folks wiping out all the upvotes.
I think that the AI x-risk crowd is continuing to catastrophically misunderstand a key point in the AI Ethics researchers' view: their claim that there are, in fact, critical present-day harms from ai, that they should be acknowledged, and that they should in fact be solved very urgently. I happen to think that x-risk from AI is made of the same type of threat; but even if it weren't, I think that 1. the AI Ethics crowd are being completely unreasonable in dismissing x-risk. you think that somehow capitalism is going to kill us all, and AI won't supercharge capitalism? what the hell are you smoking? 2. also, even setting aside the threat from capitalism, AI will do the same sort of stuff that makes capitalism bad, but so much harder that even capitalism won't be able to take it.
We can't have people going "no, capitalism is fine actually" to someone whose whole point is that capitalist oppression is a problem. They'll just roll their eyes. Capitalism is unpopular actually!
Also, I don't really need to define the word for the type of argument one would have with Gebru and Bender; they'd know what it means. But I would define the problem behaviors as optimization towards a numeric goal (increase investor payout) without regard for the human individuals in the system (workers, customers; even investors don't really get a good deal besides the money number going up). That's exactly what we're worried about with AI - but now without humans in the loop. Their claims that it's just hype are nonsense; they believe lecun's disinformation - and he's an agent of one of the nastiest capitalist orgs around!
Edit: here's a poe bot which embeds an awkward summary of a video interview they did. I have hidden the prompt and directed claude to not represent itself as being an accurate summary; however, claude is already inclined to express views similar to theirs (especially in contrast to ChatGPT, which does not), so I think claude could be an interesting debate partner, especially for those who look down on their views. Here's an example conversation where I pasted an older version of this comment I made before I realized I'd gotten identities wrong. I also strongly recommend watching the video it's based on, probably at 2x speed.
For sure, though all that demonstrates is that it'll have to import one of the already-trained nanotech-focused models.
I love this exercise, which completely kicked my butt. I would suggest more spoiler warnings, maybe a page length gap?
I make them often enough. they don't get upvoted a ton usually.
every death and every forgotten memory is soul degradation. we won't be able to just reconstruct everyone exactly.
a pause lets some people ignore the pause and move in bad directions. we need to be able, as a civilization, to prevent the tugs on society from getting sucked into AIs. the AIs of today will take longer to kill us all, but they'll still degrade our soul-data, compare the YouTube recommender. authoritarian cultures that want to destroy humanity's soul might agree not to make bigger ai, but today's big ai is plenty. it's not like not pausing is massively better; nothing but drastically speeding up safety and liberty-generating alignment could save us.
those sound like secondhand positions to me, not like those people were the originators of the reasoning. I think a pause is likely to guarantee we die, though. we need to actually resist all bad directions, which a pause just lets some people ignore. pauses could never be enforced well enough to save us without an already-save-us-grade ai.
- Many companies and platforms are becoming more restrictive and hostile towards developers, limiting what can be built on their sites. This reduces creativity and usefulness of the internet.
- Major platforms are deleting old content and inactive accounts en masse, resulting in a loss of internet history and institutional memory.
- Search engines are becoming less useful and filled with ads, clickbait content, and generic results that don't answer users' questions.
- Search engine optimization practices have homogenized the internet and sterilized content, focusing more on Google's algorithm than the end user.
- Generative AI responses in search have so far been plagued with issues and have not delivered the promised quality of results.
- Google's Manifest V3 changes will undermine the effectiveness of ad blockers and privacy extensions, benefiting Google's own business model.
- The internet is moving towards a future where useful information is hidden behind paywalls and walled gardens, while public spaces are filled with AI-generated content.
- Optimism for technological progress and the future of the internet is declining.
- Corporations are putting the burden of their growth onto users, resulting in a worse experience.
- Transparency, advocacy, and supporting independent creators can help ensure an open and user-friendly internet.
- Hollywood has been producing more sequels, prequels, remakes and films based on existing properties in recent years compared to original films. In 2021, only one of the top 10 grossing films was an original idea.
- This trend is due to consolidation in the film industry where there are fewer major players and studios. Consolidation has also led to fewer films being produced.
- The rise of streaming services like Netflix has led to vertical integration where one company controls both content production and distribution. This gives them more control over who profits.
- Creators and independent producers have less control and profit due to the lack of a clear marketplace to sell their content. Writer pay has actually decreased in recent years.
- The UK has been more successful at producing original content due to government regulations that require broadcasters to commission a portion of their programming from independent producers and allow them to retain secondary rights.
- Small to mid-sized production companies have been able to thrive in the UK due to these regulations.
- There are questions over whether the current streaming model in Hollywood is sustainable in the long run.
- Wall Street investors want to see profitability after years of growth investment in streaming companies.
- Creators are fighting for terms that give them the ability and incentive to make great original content.
- The outcome of this fight will determine whether the next great original film gets made.
- Solar energy prices have dropped significantly in the last few decades, making it cheaper than fossil fuels like coal in most places.
- However, investment and deployment of solar energy have stagnated despite the lower prices, as profitability remains an issue.
- Companies like Shell have pledged to transition to renewables, but they have conditioned it on renewables delivering high profit margins of 8-12%, which is unlikely.
- Returns on renewable energy projects are typically around 4-8%, much lower than what companies like Shell require.
- Fossil fuel companies and asset management firms are not investing substantially in renewables, as profit remains the main driver rather than sustainability goals.
- The parts of the solar business that are profitable involve manufacturing and mining, where exploitation and poor conditions remain.
- Profit, not price, determines what gets produced. Without profit potential, the transition to renewables will not happen at scale.
- Like water power in the past, solar energy is a cheaper source of energy but less profitable due to difficulties in privatization and exploitation.
- The transition to renewables will likely come with major drawbacks as long as profit remains a requirement.
- Systemic changes are needed to make the transition to renewables in a just and sustainable manner.
cool work! this feels related to https://arxiv.org/abs/2304.11082 - what are your thoughts on the connection?
For what it's worth, I think most people I know expect most professed values to be violated most of the time, and so they think that libertarians advocating for this is perfectly ordinary; the surprising thing would be if professed libertarians weren't constantly showing up to advocate for regulating things. Show, don't tell, in politics and ideology. That's not to say professing values is useless, just that there's not an inconsistency to be explained here, and if I linked this post to people in my circles, they'd respond with an eyeroll at the suggestion that if only they were more libertarian they'd be honest - because the name is most associated with people using the name to lie.
fair nuff! yeah properly demonstrating online sounds really hard.
it only works when you are able to reduce social anxiety by showing them that they're welcome. someone who is cripplingly anxious typically wants to feel like they're safe, so showing them a clearer map to safety includes detecting the structure of their social anxiety first and getting in sync with it. then you can show them they're welcome in a way that makes them feel safer, not less safe. doing this requires gently querying their anxiety's agentic target and inviting the group to behave in ways that satisfy what their brain's overactivation wants.
I think the only content left would be the actual art. not the stuff that only deserves the name content.