post by [deleted] · GW


Comments sorted by top scores.

comment by Odd anon · 2023-10-24T19:46:44.133Z · LW(p) · GW(p)

Agreed, the terms aren't clear enough. I could be called an "AI optimist", insofar as I think that a treaty preventing ASI is quite achievable. Some who think AI will wipe out humanity are also "AI optimists", because they think that would be a positive outcome. We might both be optimists, and also agree on what the outcome of superintelligence could be, but these are very different positions. Optimism vs pessimism is not a very useful axis for understanding someone's views.

This paper uses the term "AI risk skeptics", which seems nicely clear. I tried to invent a few terms for specific subcategories here [LW · GW], but they're somewhat unwieldy. Nevin Freeman tried to work out an alternative term for "doomer", but his conclusion, "AI prepper", doesn't seem great to me.

Replies from: 314159
comment by amelia (314159) · 2023-10-24T21:38:39.587Z · LW(p) · GW(p)

Thank you for your thoughtful and useful comment. 

Regarding "AI optimists," I had not yet seen the paper currently on arxiv, but "AI risk skeptics" is indeed far more precise than "AI optimists." 100 percent agreed.

Regarding alternatives to "AI pessimists" or "doomers," Nevin Freeman's term "AI prepper" is definitely an improvement. I guess I have a slight preference for "strategist," as I used above, over "prepper," but I'm probably biased out of habit. "Risk mitigation advocate" or "risk mitigator" would also work, but they're more unwieldy than a single term.

The "Taxonomy on AI-Risk Counterarguments" post is incredible in its analysis, precision and usefulness. I think that simply having some terminology is extremely useful, not just for dialog, but for thought as well. 

As we know, repressive regimes like the Soviet Union and North Korea have historically eliminated terms from the lexicon, to real effect. (It's hard for people to think of concepts for which they have no words.)

I think that discussing language, sharpening its precision, and developing new terminology have the opposite effect: people can build new ideas when they work with more precise and more efficient building materials. Words definitely matter.

Thanks again. 

comment by Ariel Kwiatkowski (ariel-kwiatkowski) · 2023-10-23T23:20:18.411Z · LW(p) · GW(p)

I'm never a big fan of this sort of... cognitive rewiring? Juggling definitions? This post reinforces my bias, since it's written from a place of very strong bias itself.

AI optimists think AI will go well and be helpful.

AI pessimists think AI will go poorly and be harmful.

It's not that deep.

 

The post itself borders on insulting anyone who holds a different opinion from the author (who, no doubt, would prefer the label "AI strategist" to "AI extremist"). I was thinking about going into the details of why, but honestly... productive discourse is unlikely when it comes from a place where the "other side" is immediately compared to nationalists (?!) or extremists (?!!!).

 

I'm an AI optimist. I think AI will go well and will help humanity flourish, through both capabilities and alignment research. I think things will work out. That's all.

Replies from: 314159
comment by amelia (314159) · 2023-10-24T00:53:34.893Z · LW(p) · GW(p)

Thanks for the feedback, but I don't think it's about "cognitive rewiring." It's more about precision of language and comprehension. You said "AI optimists think AI will go well and be helpful," but doesn't everyone believe that is a possibility? The bigger question is what probability you assign to the "go well and be helpful" outcome. Is there anything we can do to increase that probability? What about specific policies? You say you're an "AI optimist," but I still don't know what that entails in terms of specific policies. Does it mean you support open-source AI? Do you oppose all AI regulation? What about a pause in AI development for safety? The terms "AI optimist" and "AI pessimist" don't tell me much on their own.

One inspiration for my post was the now-infamous exchange between Yann LeCun and Yoshua Bengio.

As I'm sure you saw, Yann LeCun posted this on his Facebook page (& reposted on X): 

"The heretofore silent majority of AI scientists and engineers who

- do not believe in AI extinction scenarios or

- believe we have agency in making AI powerful, reliable, and safe and

- think the best way to do so is through open source AI platforms,

NEED TO SPEAK UP !"

https://www.facebook.com/yann.lecun/posts/pfbid02We6SXvcqYkk34BETyTQwS1CFLYT7JmJ1gHg4YiFBYaW9Fppa3yMAgzfaov7zvgzWl

Yoshua Bengio replied as follows:

Let me consider your three points. 

(1) It is not about 'believing' in specific scenarios. It is about prudence. Neither you nor anyone has given me any rational and credible argument to suggest that we would be safe with future unaligned powerful AIs and right now we do not know how to design such AIs. Furthermore, there are people like Rich Sutton who seem to want us humans to welcome our future overlords and may *give* the gift of self-preservation to future AI systems, so even if we did find a way to make safe AIs, we would still have a socio-political problem to avoid grave misuse, excessive power concentration and the emergence of entities smarter than us and with their own interests. 

(2) Indeed we do have agency, but right now we invest 50 to 100x more on AI capabilities than in AI safety and governance. If we want to have a chance to solve this problem, we need major investments both from industry and governments/academia. Denying the risks is not going to help achieve that. Please realize what you are doing. 

(3) Open-source is great in general and I am and have been for all my adult life a big supporter, but you have to consider other values when taking a decision. Future AI systems will definitely be more powerful and thus more dangerous in the wrong hands. Open-sourcing them would be like giving dangerous weapons to everyone. Your argument of allowing everyone to manipulate powerful AIs is like the libertarian argument that everyone should be allowed to own a machine-gun or whatever weapon they want. From memory, you disagreed with such policies. And things get worse as the power of the tool (hence of the weapons derived from it) increases. Do governments allow anyone to build nuclear bombs, manipulate dangerous pathogens, or drive passenger jets? No. These are heavily regulated by governments.

--

[I added spacing to Bengio's post for readability.]

Media articles about this exchange, along with commenters, have described LeCun as an "AI optimist" and Bengio as an "AI pessimist."

Just as in our own exchange, I think these terms, and even the "good vs. bad" dichotomy, radically oversimplify the situation. Meanwhile, if members of the general public were asked what they think the "AI optimist" (supposedly LeCun) or the "AI pessimist" (supposedly Bengio) believes here, I doubt anyone would come back with an accurate response. The terms are therefore ineffective.

Obviously you can think of yourself with any term you like, but with respect to others, the term "AI strategist" for Bengio (not to mention Eliezer) seems more likely to call to mind something closer to what they actually believe.

And isn't conveyance of accurate meaning the primary goal of communication?