Why Yudkowsky Is Wrong, and Why What He Proposes Could Be More Dangerous
post by idontagreewiththat (vlad-x) · 2023-06-06T17:59:57.486Z · LW · GW · 4 comments
Note: I respect Yudkowsky and I support slowing down the AI arms race.
My problem with that article is that it is misguided and manipulative, and that it proposes dangerous solutions.
In response to Eliezer Yudkowsky's recent article calling for a complete shutdown of artificial intelligence (AI) research due to the potential risks, it's crucial to address the many misconceptions and manipulative tactics employed in his argument.
Yudkowsky's reasoning is driven by panic, fear of the unknown, and a series of cognitive biases that distort his perception of the situation. In reality, the measures he proposes would likely lead to even more human suffering.
First and foremost, Yudkowsky's belief that AI could suddenly surpass human intelligence and become an alien-level threat is fundamentally flawed.
AI systems, such as Large Language Models (LLMs), are trained on human data and designed by human engineers. It's impossible for them to exceed the bounds of human knowledge and expertise, as they're inherently limited by the information they've been exposed to. Yudkowsky's fears are based on a misunderstanding of how AI systems work and are closer to science fiction than to reality.
Next, Yudkowsky's argument is riddled with cognitive biases that cloud his judgment.
The availability heuristic, for example, leads him to focus on the most extreme and disastrous outcomes, while neglecting the many positive advancements that AI has brought and will bring to society. Additionally, the negativity bias causes him to emphasize the potential dangers of AI without considering its benefits, such as improved healthcare, environmental sustainability, and economic growth.
Furthermore, Yudkowsky's narrative is manipulative and designed to induce a strong emotional response from readers. By invoking images of children dying and the end of humanity, he attempts to persuade people through fear rather than rational argumentation.
This approach is not only unethical but also counterproductive, as it prevents readers from thinking logically about the issue at hand.
Manipulating people's emotions is a dangerous game.
When individuals are influenced by strong emotions, they become less rational and are more likely to make poor decisions. Yudkowsky's fear-mongering only serves to create an atmosphere of panic and confusion, making it difficult for policymakers and researchers to address the actual challenges and opportunities presented by AI.
Moreover, Yudkowsky's proposed measures to halt AI research and development would have severe consequences. By stifling innovation, we would stall advances in medicine, environmental protection, and other critical fields. His plan could also trigger a global economic downturn, as the AI industry is a significant driver of growth and job creation.
Ironically, Yudkowsky's extreme approach could even increase the risk of an AI-related catastrophe.
By forcing AI development underground or into the hands of rogue actors, we would lose the ability to regulate and monitor its progress, potentially leading to the very scenarios he fears. Responsible AI research, guided by ethical principles and transparent collaboration, is the best way to ensure that AI technologies are developed safely and for the benefit of all.
Implementing Yudkowsky's proposals would not only have disastrous economic and technological consequences but would also lead to severe political repercussions on a global scale.
The complete shutdown of AI research would create an unprecedented power vacuum in the international arena, as countries that currently lead in AI research, such as the United States and China, would be forced to abandon their competitive advantage. This could lead to destabilization and increased tensions between nations, as they scramble to fill the void left by the absence of AI development.
Moreover, the enforcement of a worldwide moratorium on AI research would require an extraordinary level of international cooperation and trust, which is unlikely to be achieved in the current geopolitical climate. Nations with differing political ideologies and priorities may view the moratorium as an attempt to undermine their sovereignty or as a ploy to maintain the status quo of global power dynamics.
As a result, Yudkowsky's proposals could inadvertently provoke increased mistrust and hostility between countries, exacerbating existing conflicts and potentially sparking new ones.
In addition, Yudkowsky's call for shutting down all large GPU clusters and monitoring the sale of GPUs could lead to a significant erosion of privacy rights and individual freedoms.
To enforce such measures, governments would need to implement invasive surveillance systems and assume unprecedented control over private enterprises and individuals. This could result in a chilling effect on innovation in other fields beyond AI and a general atmosphere of fear and suspicion, as people become increasingly wary of government intrusion into their lives.
Furthermore, the implementation of Yudkowsky's proposals may even encourage a global technological arms race, as nations seek to acquire alternative means of maintaining their power and influence.
In the absence of AI research, countries may invest heavily in other emerging technologies, such as biotechnology or advanced weaponry, which could pose their own set of risks and challenges to global stability.
Lastly, Yudkowsky's proposals, if enacted, could hinder international collaboration on critical global issues that require concerted efforts, such as climate change, public health crises, and humanitarian emergencies.
In summary, the political consequences of implementing Yudkowsky's proposals to shut down AI research would be nothing short of disastrous.
The resulting power vacuum, increased geopolitical tensions, erosion of privacy rights, potential technological arms race, and diminished capacity for international collaboration would only serve to create a more unstable and dangerous world.
Instead of following Yudkowsky's reactionary suggestions, we must engage in thoughtful, responsible AI research and development, guided by international cooperation and a shared commitment to addressing global challenges.
By fostering collaboration and open dialogue, we can work towards harnessing the transformative potential of AI in a way that benefits all of humanity, while mitigating the risks and maintaining a stable global political landscape.
4 comments
comment by AprilSR · 2023-06-06T20:34:03.746Z · LW(p) · GW(p)
AI systems, such as Large Language Models (LLMs), are trained on human data and designed by human engineers. It's impossible for them to exceed the bounds of human knowledge and expertise, as they're inherently limited by the information they've been exposed to.
Maybe, on current algorithms, LLMs run into a plateau around the level of human expertise. That does seem plausible. But if so, it is not because being trained on human data necessarily limits a model to human level!
Accurately predicting human text is much harder than just writing stuff on the internet. If GPT were to perfect the skill it is being trained on, it would have to be much smarter than a human!
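To make the comment's point concrete, here is a minimal sketch (not from the thread; the tokens and probabilities are made up for illustration) of the next-token prediction objective that LLM training minimizes. Perfecting this objective means assigning high probability to whatever actually comes next, which requires modeling the process that produced the text rather than merely matching an average human writer.

```python
import math

def next_token_loss(predicted_probs, actual_next_tokens):
    """Average cross-entropy of a model's next-token predictions."""
    total = 0.0
    for probs, token in zip(predicted_probs, actual_next_tokens):
        # Penalize the model for assigning low probability to the true next token.
        total += -math.log(probs[token])
    return total / len(actual_next_tokens)

# Hypothetical toy numbers, for illustration only.
example_probs = [{"cat": 0.7, "dog": 0.3}, {"sat": 0.9, "ran": 0.1}]
example_tokens = ["cat", "sat"]
print(next_token_loss(example_probs, example_tokens))  # ~0.23; a perfect predictor drives this toward its floor
```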
comment by Raemon · 2023-04-06T20:17:47.628Z · LW(p) · GW(p)
Mod note: after reviewing this post, I don’t think it meets LessWrong’s bar for quality. I'm moving it back to drafts. We're in the process of fine-tuning our moderation policy but you can read up on our thoughts here [LW(p) · GW(p)].
We're slightly more flexible when it comes to criticism, but it's fairly common to see criticism that is some combination of factually mistaken, not well argued, and/or rehashing old arguments without acknowledging and addressing old counterarguments. This post seems to fall into that category.
(It's best to think of LessWrong as somewhere between "a forum" and "a university." A university might include undergrads learning new skills, as well as grad students and professors working on novel research. But everyone there has still been vetted in some way, i.e. by submitting an application and/or passing an entrance exam. You can try again with another comment or post, but I do ask that, before you do, you read through this thread to get a rough sense of what to do differently in the future. You should expect a fair amount of both learning and cultural onboarding in order to be a good fit for LessWrong.)
comment by Rory Vogel (rory-vogel) · 2023-07-03T23:44:18.034Z · LW(p) · GW(p)
Hi, I'm just wondering if there's a real author's name and credentials for this article that I can use. I really like the article and want to use it for a card in LD debate, but I can't use it without proving it's credible, and idontagreewiththat isn't really a name I can use.
↑ comment by David James (david-james) · 2024-06-15T02:05:59.180Z · LW(p) · GW(p)
First, I encourage you to put credence in the current score of -40 and a moderator saying the post doesn't meet LessWrong's quality bar.
By LD you mean Lincoln-Douglas debate, right? If so, please continue reading.
Second, I'd like to put some additional ideas up for discussion and consideration -- not debate -- I don't want to debate you, certainly not in LD style. If you care about truth-seeking, I suggest taking a hard and critical look at LD. To what degree is Lincoln-Douglas debate organized around truth-seeking? How often does a participant in an LD debate change their position based on new evidence? In my understanding, in practice, LD is quite uninterested in the notion of being "less wrong". It seems to be about a particular kind of "rhetorical art" of fortifying one's position as much as possible while attacking another's. One might hope that somehow the LD debate process surfaces the truth. Maybe, in some cases. But generally speaking, I find it to be a woeful distortion of curious discussion and truth-seeking.