Focus on existential risk is a distraction from the real issues. A false fallacy
post by Nik Samoylov (nik-samoylov) · 2023-10-30T23:42:02.066Z · LW · GW · 11 comments
A retort familiar to people discussing existential risk is that these conversations distract us from the “real issues” and “current harms” of AI.
On the surface this retort is easily debunked:
- What is more “harmful” or an “issue” than extinction of our species?
- The current issues one speaks of affect a smaller number of people than the 100% of people who would be affected by mass death.
- Whataboutism.
- This retort itself is a distraction from the real issue of possible extinction.
- The preparation and effort required to deal with x-risk is greater than the preparation and effort to deal with the current harms.
- We already have laws for current harms, but few regulations to deal with extinction-level threats.
- The urgency of dealing with fast take-off scenarios is greater than with bias and discrimination.
- This list could be extended with more such responses.
Below are three reasons why discussing existential risk distracts us from the real issues and current harms:
First, the statement is factually correct. Humans have only so much attention to pay to AI. If you talk about x-risk, there is less time to talk about current harms. If you pay lawyers to draft a bill aimed at preventing x-risk and focus on that, they may neglect to include provisions addressing current harms. The same goes if you get hold of a politician whose time is very limited and precious.
Second, focusing one’s mind on existential risk is just that: focusing, which usually means saying “no” to other things. Imagine you are buying a ticket to the Arctic to take pictures of blooming tundra. The brochure is all about the pictures and the types of cameras best suited to the most striking shots of tussock grasses, with not a word about how to get there, what to wear, or what visas you need. Just pay $10,000 and focus on the tundra. You call the sales infoline and a robotic voice tells you, “You really need to consider the flowering tundra in its splendour. We will get you there.” “What visa do I need? Which country is it in?” you wonder. “Please focus on the tundra,” the highly convincing salesbot replies. That is what “focusing” on existential risk means: it means risking failure to consider other relevant factors in decisions and policies around AI.
Even if one were to believe in “transformative AI” and “safe AGI”, one needs a plan for getting there. Obstacles such as bias, discrimination, copyright violations, and … must be addressed now, while they are still addressable, not “when we get there”.
Third, “existential risk” characterises the level of risk, not the nature of the risk. Extinction can come in many forms: through discriminatory decision-making (the Australian Robodebt scandal is a good example), through not having anything to eat because of under-employment (which is already starting to affect copywriters and digital artists), or through foom. All these issues can be magnified and intertwingled as AI companies scale AI (in model sizes, use cases, capital, hardware, influence, algorithmic improvements, etc.). Addressing and solving AI issues as and when they emerge is important. We are already behind on that.
We need to address old issues, monitor new issues and react quickly, and work on ways to prevent future issues. Discussion of existential risk (including "alignment plans" if you are writing one) must not be decoupled from remedying other harms and building a solid way forward.
11 comments
Comments sorted by top scores.
comment by geoffreymiller · 2023-10-31T01:31:35.603Z · LW(p) · GW(p)
I'm actually quite confused by the content and tone of this post.
Is it a satire of the 'AI ethics' position?
I speculate that the downvotes might reflect other people being confused as well?
↑ comment by Nik Samoylov (nik-samoylov) · 2023-10-31T02:11:53.680Z · LW(p) · GW(p)
Is it a satire of the 'AI ethics' position?
No, it is not actually.
What is confusing? :-)
↑ comment by rsaarelm · 2023-11-01T07:38:05.124Z · LW(p) · GW(p)
The post reads like a half-assed college essay where you're going through the motions of writing without things really coming together. Heavy on the structure, there's no clear thread of rhetoric progressing through it, and it's hard to get a clear sense where you're coming from with the whole thing. The overall impression is just a list of disjointed arguments, essay over.
↑ comment by Richard_Kennaway · 2023-11-01T10:58:46.409Z · LW(p) · GW(p)
Just like ChatGPT, in other words.
comment by Joseph Miller (Josephm) · 2023-10-31T10:49:15.625Z · LW(p) · GW(p)
I'm interested why this is downvoted so much. Upvote the child comment that best matches your reason for downvoting.
↑ comment by Joseph Miller (Josephm) · 2023-10-31T10:53:05.206Z · LW(p) · GW(p)
Writing style / tone
↑ comment by Joseph Miller (Josephm) · 2023-10-31T10:53:24.505Z · LW(p) · GW(p)
I actually upvoted
↑ comment by Joseph Miller (Josephm) · 2023-10-31T10:50:47.336Z · LW(p) · GW(p)
Other
↑ comment by Joseph Miller (Josephm) · 2023-10-31T10:50:36.782Z · LW(p) · GW(p)
Post is boring / obvious / doesn't have new ideas
↑ comment by Joseph Miller (Josephm) · 2023-10-31T10:49:34.575Z · LW(p) · GW(p)
Post is wrong / misleading
comment by Nik Samoylov (nik-samoylov) · 2023-11-02T08:05:07.639Z · LW(p) · GW(p)
So I am ready for this comment to be downvoted 😀
I realise that what I wrote did not resonate with the readers.
But I am not an inexperienced writer. I would not rate the above piece as below average in substance, precision, or style. Perhaps the main addressees were not clear (even though they are named in the last paragraph).
I am exploring a tension in this post and I feel this very tension has burst out into the comments and votes. This tension wants to be explored further and I will take time to write better about it.