BOUNTY AVAILABLE: AI ethicists, what are your object-level arguments against AI notkilleveryoneism?
post by Peter Berggren (peter-berggren) · 2023-07-06T17:32:08.675Z · LW · GW · 6 comments
I am prepared to pay out between $20 and $100 to AI ethicists of the DAIR/"Stochastic Parrots" school of thought who provide their object-level arguments against the idea that preventing AI from killing everyone is a real and important issue. The exact amount will depend on the claimant's notability within AI ethics, as well as the clarity and persuasiveness of their arguments.
Conditions for the bounty
- The bounty must be claimed by an AI ethicist of the DAIR/"Stochastic Parrots" school of thought. Ethicists from other schools of thought (such as the "what if self-driving cars face trolley problems" school of thought) may be given bounties on a case-by-case basis, but probably not. Any member of DAIR or coauthor of the "Stochastic Parrots" paper counts for this, but people outside of these specific circles may qualify at my discretion, if I believe that their intellectual output is similar to or connected with DAIR or the "Stochastic Parrots" coauthors.
- The arguments provided by the claimant must be posted publicly, ideally in the comment section of this thread.
- The arguments provided by the claimant must be object-level. This means that they must discuss concrete subjects specific to the issues at hand. This is in contrast to meta-level arguments, which focus on facts about the question (rather than about the issues it addresses), such as difficulties involved in future prediction, the cultural milieu of contemporary AI notkilleveryoneism, the framing of my questions, etc. Note that I have nothing against meta-level arguments; it's just that I've already seen plenty of meta-level arguments by AI ethicists against AI notkilleveryoneism, and I want to see some object-level arguments.
- The arguments provided by the claimant must be a good-faith summary of the claimant's actual object-level arguments against AI notkilleveryoneism. For example, "AI notkilleveryoneism is unimportant because paperclips are shiny" will not count, even if made by a qualifying claimant, even though it is object-level. I do not expect that I will need to invoke this condition, but I may do so at my discretion.
- The following AI ethicists will be presumptively considered valid claimants, and will fall into the most notable category (meaning that I will pay each of them the maximum $100 bounty assuming they follow all the terms of the bounty, unless I notice loophole abuse):
- Emily Bender
- Timnit Gebru
- Margaret Mitchell
- Melanie Mitchell
- Emile Torres
Note that there is no requirement for the arguments to change my mind, or even to be persuasive in the slightest. The only requirements are the ones above. If someone manages to abuse a loophole to claim the bounty, I will pay them the minimum bounty of $20 and then modify the rules for all future claimants to preempt this loophole.
So far, Emile Torres has responded to the bounty by recommending their book as the place where their object-level arguments are written. I will judge this claim as soon as I am able to check the book out from a library near me.
Note that I may need to close this bounty if I receive too many claims, since I have a limited budget. All the more reason to get your arguments in soon!
6 comments
Comments sorted by top scores.
comment by the gears to ascension (lahwran) · 2023-07-07T04:01:00.343Z · LW(p) · GW(p)
I feel like their opinions are already fairly well documented, and they don't necessarily argue quite the way you're imagining. My sense is they're already in many ways on the same page; they just believe that the origin of the agenticness that will destroy humanity is external to the AI and embedded in the market dynamics of ownership contracts. They see most AI x-risk claims as "whataboutism" hype, and are not engaging seriously because they believe it's an attempt to manipulate them out of making progress on their cause areas. Taking their cause areas seriously as a show of good faith would go a long way, imo. I don't think they have any mechanistic objections to the actual idea of x-risk from goodharting at all; they just view that as having dense prior art in the form of anti-capitalism. Here's a good interview with Gebru:
Bad AI summary follows (they'll be disappointed I dared to, heh). If this summary doesn't pitch you to watch the original and get the detail (I recommend 2x speed or more), you're missing the core insights they have to share, imo, but it's better than not watching the video at all in terms of understanding their ideas:
Here are 64 key points extracted from the discussion:
1. Large language models like GPT-3 are being misleadingly marketed as a step towards artificial general intelligence when in reality they are just text generators.
2. These models suffer from factual inaccuracies, promote misinformation, and reuse copyrighted work without compensation.
3. Companies hype up artificial intelligence to convince people the software is more capable than it really is and to justify profiting from it.
4. The concerns raised by researchers like Emily and Timnit focus on present harms while AI safety researchers primarily worry about potential future dangers from superintelligence.
5. The AI safety community tends to ignore or dismiss the harms highlighted by AI ethics researchers that focus on today's issues.
6. Large language models suffer from a lack of "scoping": their potential uses are not clearly defined, which makes them difficult to regulate and evaluate.
7. Emily and Timnit argue that the rapid commercialization of large language models has more to do with corporate interests and hype than genuine scientific progress.
8. The narrative that AI is progressing unstoppably is largely fueled by companies and their marketing claims rather than reflecting technological breakthroughs.
9. AI systems today lack any understanding that other minds exist, which is a foundational part of intelligence.
10. AI systems cannot communicate or interact meaningfully with humans the way an intelligent being could.
11. Regulating corporations for the literal output of their AI systems would help reduce harmful effects.
12. The public should push for appropriate legislation around AI that is developed through consultation with those most affected by the technology.
13. OpenAI, despite claiming to work in the interests of humanity and safety, hides important information about how their models are trained which prevents meaningful evaluation and accountability.
14. The paper "Sparks of AGI" from Microsoft researchers exhibits racism by citing a paper arguing for genetic differences in intelligence between races.
15. The "AI pause" letter cites flawed research and misunderstands the capabilities and limitations of large language models.
16. The "AI pause" letter promotes the technology even as it claims to raise concerns about it.
17. The concepts of "AI safety" and "AGI" promoted by organizations like OpenAI have eugenicist roots and are aligned with corporate interests.
18. Researchers should release data and model details to allow for accountability, transparency, and evaluation of any harms.
19. Large language models can deceptively mimic the form of human language and intelligence without possessing the underlying content or abilities.
20. AI companies mislead the public about capabilities to justify profits and build hype that attracts funding and talent.
21. The harms from large language models are happening now, in reifying discrimination through training data and misleading users; they are not potential future risks.
22. Not releasing training data and model specifics allows companies like OpenAI to avoid accountability for potential copyright violations and data theft.
23. ChatGPT's outputs about factual topics rely on scraping and mashing up information from the web, not any real understanding.
24. Self-driving cars fail at a foundational part of intelligence: communicating and interacting meaningfully with humans.
25. Corporations and billionaires, not technological progress, are primarily responsible for the speed at which large language models have proliferated.
26. There is no evidence large language models are actually intelligent - they only appear coherent to users who assume intelligence.
27. Calls for "AI regulation" often ignore the harms faced by those exploited in developing AI systems and hurt by their outcomes.
28. Concerns about potential "AGI takeover" ignore today's real issues of information pollution, misinformation, and exploitation of artists' work.
29. A narrow vision of who counts as a "person" allows AI safety researchers to ignore or minimize current harms faced by many groups.
30. An "AI pause" would be ineffective at stopping harms since data collection could continue and harm is already occurring.
31. Genuine scientific progress involves choice and communication, not inevitable paths determined by funding and hype.
32. Speculative fiction explores how technology impacts the human condition, rather than just focusing on "cool" applications.
33. The harms today from AI involve unequal access to resources and exacerbating discrimination, not potential doomsday scenarios.
34. An unjust economic system that concentrates corporate power and wealth in few hands drives irresponsible AI development.
35. Fundamental ignorance about what intelligence even is underlies the notion that large language models represent "the first sparks of AGI."
36. Good models of intelligence exhibit a range of intelligent behaviors, not just linguistic coherence.
37. Companies benefit from artificially intelligent-sounding marketing claims while marginalized groups bear the brunt of AI's harms.
38. Large language models are "unscoped technologies" built for any use case rather than aimed at specific, well-defined tasks.
39. No evidence suggests large language models actually understand language beyond providing seemingly plausible outputs.
40. Researchers have proposed regulation that includes watermarking synthetic media and liability for AI-caused harms.
41. The idea that AI is progressing unstoppably toward superintelligence combines science fiction fantasies with a selective reading of history.
42. Researchers evaluating the capabilities of AI systems should provide data, parameters, and documentation to allow for transparency and accountability.
43. ChatGPT is polluting the information ecosystem with non-information and requires governance through legislation.
44. The harms from large language models are real now and can be addressed through policy, not hypothetical future dangers.
45. AI that relies on racially biased studies to define intelligence reveals racism within the AI research community.
46. AI research and machine learning techniques do not inherently lead to AGI; that narrative is driven by funding and corporate interests.
47. The key question is not how to build AGI but why - who benefits and who is harmed, and for what purposes it would be created.
48. Claims that AI will revolutionize entire sectors of the economy within 5 years rely on hype and unrealistic timelines.
49. Understanding the harms today from AI means centering the perspectives of those most affected by biases in data and uses of the technology.
50. Large language models succeed at mimicking the superficial form of language through statistical tricks, not through any inherent capabilities.
51. Responsible development of AI systems requires restricting what technologies can be created as much as developing them in ethically justifiable ways.
52. Current AI systems lack foundational capabilities required for intelligence like understanding that other minds exist with their own perspectives.
53. AI researchers concerned about future risks ignore or downplay present-day harms highlighted by AI ethics researchers.
54. Chasing funding to develop AGI and hype about future possibilities has distracted from addressing real problems AI could help solve today.
55. The onus should not be on marginalized groups to prove AI is harmful when the burden of responsibility lies with developers to show that it is beneficial.
56. Billionaires and technology companies claiming to develop AI for the greater good are primarily motivated by profit and self-interest.
57. Corporations seek to maximize profit from AI while shifting responsibility for any harms onto end users and the public.
58. The unchecked spread of potentially harmful AI tools requires collective action and regulation rather than reliance on individual discernment.
59. AI constitutes a choice by humans about what technologies to develop, not an inevitable progression - we can decide what problems to focus our efforts on solving.
60. AI hype benefits those who control funding and research directions while sidelining or harming marginalized groups.
61. The public needs information about how AI systems actually work in order to evaluate claimed capabilities and push for responsible outcomes.
62. Viewing today's AI capabilities through the lens of science fiction fantasies obscures real problems that could be helped by more narrowly targeted technologies.
63. Hyperbolic claims that AI will revolutionize society within just a few years ignore complexities of technological progress and human development.
64. Companies do not always reveal the harms of their technologies or the means to mitigate those harms because it would reduce profits from AI.
↑ comment by Peter Berggren (peter-berggren) · 2023-07-07T13:07:23.490Z · LW(p) · GW(p)
Thank you very much! I won't be sending you a bounty, as you're not an AI ethicist of the type discussed here, but I'd be happy to send $50 to a charity of your choice. Which one do you want?
↑ comment by the gears to ascension (lahwran) · 2023-07-07T20:46:06.340Z · LW(p) · GW(p)
I think the appropriate choice would probably be https://www.ajl.org/
comment by RomanHauksson (r) · 2023-07-06T23:01:49.467Z · LW(p) · GW(p)
I don't think it's a good idea to frame this as "AI ethicists vs. AI notkilleveryoneists", as if anyone who cares about issues related to the development of powerful AI has to choose between caring only about existential risk and caring only about other issues. I think this framing unnecessarily excludes AI ethicists from the alignment field, which is unfortunate and counterproductive, since they're otherwise aligned with the broader idea that "AI is going to be a massive force for societal change and we should make sure it goes well".
Suggestion: instead of addressing "AI ethicists" or "AI ethicists of the DAIR / Stochastic Parrots school of thought", why not address "AI X-risk skeptics"?
↑ comment by Peter Berggren (peter-berggren) · 2023-07-06T23:11:29.191Z · LW(p) · GW(p)
I've seen plenty of AI x-risk skeptics present their object-level arguments, and I'm not interested in paying out a bounty for material I already have. I'm most interested in the arguments from this specific school of thought, and that's why I'm offering the terms I do.
↑ comment by RomanHauksson (r) · 2023-07-07T00:46:58.559Z · LW(p) · GW(p)
I see. Maybe you could address it to "DAIR, and related, researchers"? I know that's a clunkier name for the group you're trying to describe, but I don't think the more succinct wording is worth pushing toward a tribal dynamic between researchers who care about x-risk and s-risk and those who care about less extreme risks.