Ordinary claims require ordinary evidence
post by blake8086 · 2023-09-05T22:09:16.044Z
(this is a post version of a YouTube video I made)
One common argument is that extraordinary claims, such as the claim that AI will be harmful, require extraordinary evidence. However, I believe that asserting AI's potential to be harmful is not an extraordinary claim at all. Rather, it's grounded in several key axioms that, when examined, are hard to refute.
Why It's Not an Extraordinary Claim
I think the AI Optimist imagines a particular scenario or set of scenarios (perhaps "Terminator" or [insert fictional franchise here]) and says "that seems improbable". Perhaps Eliezer comes along and posits one additional scenario, and the Optimist says "all of those combined are improbable". "Do you have any proof that this [particular tiny set of scenarios] will happen!?" But the space of AI-ruin scenarios is vast, and any single failure scenario would ruin everything.
To me, AI ruin seems to be a natural consequence of five simple processes and conditions:
The Five Core Axioms
1. AI gets better, never worse: AI's intelligence, however you define it, is increasing. As new research emerges, the knowledge becomes a permanent part of the record. Like other technological advances, we build on it rather than regress. People constantly throw more resources at AI, training bigger and bigger models without any regard for safety.
2. Intelligence always helps: Being more intelligent always aids success in the real world. A slight edge in intelligence has allowed humans to dominate the Earth. There is no reason to expect a different outcome with an entity more intelligent than humans.
3. No one knows how to align AI: No one can precisely instruct an AI to align with complex human values or happiness. We can optimize a model to predict the next data point, but no one has written a Python function that ranks outcomes by how positive they are for humanity (see the sketch after this list).
4. Resources are finite: Any AI acting in the real world will inevitably compete with humans for resources. These resources, once consumed by AI, won’t be available for human use, leading to potential conflicts.
5. AI cannot be stopped: Once an AI becomes more intelligent than humans and possibly harmful, it's impossible to halt it. Stopping an unaligned AI requires human decision-making to defeat more-intelligent-than-human decision-making, which isn't possible.
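To make axiom 3 concrete, here is a minimal Python sketch (the function names are my own hypothetical illustrations, not anything anyone has actually built) contrasting the kind of objective we do know how to write and minimize, a prediction loss, with the outcome-ranking function nobody knows how to write:

```python
import math
from typing import Sequence

def next_token_loss(predicted_probs: Sequence[float], target_index: int) -> float:
    """Cross-entropy loss for a single prediction. We know how to write and
    minimize this; it is (roughly) the objective today's large models optimize."""
    return -math.log(predicted_probs[target_index])

def human_value_of_outcome(outcome: str) -> float:
    """Rank a world-state by how good it is for humanity.
    No one knows how to fill in this body; that gap is the alignment problem."""
    raise NotImplementedError("No known implementation.")

if __name__ == "__main__":
    # The first objective is a routine one-liner to compute...
    print(next_token_loss([0.1, 0.7, 0.2], target_index=1))  # ~0.357

    # ...the second has no known body.
    try:
        human_value_of_outcome("a world containing a superintelligent AI")
    except NotImplementedError as e:
        print(e)
```

The asymmetry is the whole point: the first function is trivial, and the second is, in effect, what alignment research is searching for.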
Combined, these axioms point towards AI becoming smarter, outcompeting humans, being unaligned with human interests, taking resources from humanity, and being unstoppable. These all seem pretty straightforwardly true (at least to me).
The Ultimate Challenge for Humanity
In my opinion, these axioms point towards a simple conclusion: AI risk is the ordinary claim, and the claim that AI is "safe" is the extraordinary one, for which no "extraordinary" evidence exists.
Please let me know what mistakes I've made here, or where my arguments are wrong.
Self-promo
I'm working on a little project for like-minded people to hang out and chat. It's at together.lol, please drop by and let me know what you think.
3 comments
comment by TAG · 2023-09-15T13:43:17.494Z
- No one knows how to align AI: No one can precisely instruct AI to align with complex human values or happiness.
OTOH, alignment/control is part of functionality, and any AI that does something useful, that's commercially viable, must be reasonably responsive to its users' wishes: so all commercially viable AIs, such as the GPTs, are aligned at a good-enough level.
The inevitable response to that is that what's good enough for a not-quite-human-level AI is not good enough for a superintelligence ... which presupposes that the ASI is going to emerge suddenly and/or unexpectedly from the AHLI ... in other words, that the ASI is not going to emerge from incremental improvements to both the capabilities and alignment of the seed AI.
And that, of course, is not obvious either.
comment by Jiro · 2023-09-06T21:11:30.724Z
Every rational person who believes in a conclusion thinks that the conclusion is a natural consequence of the existing evidence. If that were enough to make a claim not-extraordinary, no claim would ever be extraordinary by the standards of the person making it.
comment by Odd anon · 2023-09-06T23:19:41.560Z
There are good ways to argue that AI X-risk is not an extraordinary claim, but this is not it. Besides the fact that "a derivation from these 5 axioms" does not make a claim "ordinary", the axioms themselves are pretty suspect, or at least not simple.
"AI gets better, never worse" does not automatically imply to everyone that it gets better forever, or that it will soon surpass humans. "Intelligence always helps" is true, but non-obvious to many people. "No one knows how to align AI" is something that some would strongly disagree with, not having seen their personal idea disproved. "Resources are finite" jumps straight to some conclusions that require justification, including assumptions about the AI's goals. "AI cannot be stopped" is strongly counter-intuitive to most people, especially since they've been watching movies about just that for their whole lives.
And none of these arguments are even necessary, because AI being risky is the normal position in society. The average person believes that there are dangers, even if polls are inconsistent about whether an absolute majority worries specifically about AI wiping out humanity. The AI optimist's position is the "weird", "extraordinary" one.
Contrast the post with the argument from stopai's homepage: "OpenAI, DeepMind, Anthropic, and others are spending billions of dollars to build godlike AI. Their executives say they might succeed in the next few years. They don’t know how they will control their creation, and they admit humanity might go extinct. This needs to stop." In that framing, it is hard to argue that it's an extraordinary claim.