post by [deleted] · GW

This is a link post for


Comments sorted by top scores.

comment by Misaligned-Semi-intelligence (MisalignedIntelligence) · 2023-07-07T17:56:08.460Z · LW(p) · GW(p)

Comment on:

This morning I was thinking about trying to find some sort of written account of the best versions and/or most charitable interpretations of the views and arguments of the "Not-worried-about-x-risk" people, but written by someone who is concerned about x-risk, because when the people who aren't worried try to explain what they think, I genuinely feel like they are speaking a different language. And this causes me a reasonable amount of stress, because so many people who I would consider significantly smarter than me and better at thinking about things... aren't worried about x-risk. But I can't understand them.

So, when I saw the title of this post and read the first sentence, I was pretty excited, because I thought it had a good chance of being exactly what I was looking for. But after reading it, I think it just increased my feeling of not understanding. Anytime I try to imagine myself holding or defending these views, I always come to the conclusion that my primary motivation would be "I want these things to be true". But I also know that most of these people are very capable of recognizing when they believe something just because they want to, and I don't really think that's compelling as a complete explanation for their position.

I don't even know whether this is a "complaint" about the explanation presented here or about the views themselves, because I don't understand the views well enough to separate the two.

Replies from: NinaR
comment by Nina Panickssery (NinaR) · 2023-07-07T18:08:45.964Z · LW(p) · GW(p)

That's a completely fair point/criticism. 

I also don't buy these arguments and would be interested in AI X-Risk skeptics helping me steelman further / add more categories of argument to this list. 

However, as someone in a similar position, "trying to find some sort of written account of the best versions and/or most charitable interpretations of the views and arguments of the 'Not-worried-about-x-risk' people," I decided to try to do this myself as a starting point.

Replies from: MisalignedIntelligence
comment by Misaligned-Semi-intelligence (MisalignedIntelligence) · 2023-07-07T20:30:04.981Z · LW(p) · GW(p)

I don't want it to sound like this wasn't useful or worth reading. My negativity is pretty much entirely due to me really wanting a moment of clarity and not getting it. I think you did a good job of capturing what they actually do say, and I'll probably come back to it a few times.

comment by the gears to ascension (lahwran) · 2023-07-08T00:17:13.209Z · LW(p) · GW(p)

Consider this analogy: a child raised in a household espousing violent fascist ideologies may develop behaviors and attitudes that reflect these harmful beliefs. Conversely, the same child nurtured in a peaceful, loving environment may manifest diametrically opposite characteristics. Similarly, we could expect an AI trained on human data that encapsulates how humans see the world to align with our perspective.

Then I have bad news about that internet data, and about the portion of humanity, worldwide, who endorse large fragments of the fascism recipe (such as authoritarianism) or the whole of it. Liberation, morality, care for other beings, drive for a healthy community, etc. are not at all guaranteed even just in humans. In fact, this is a reason that even if AI is not on its own an x-risk, we should not be instantly reassured.

comment by Herb Ingram · 2023-07-08T09:02:40.717Z · LW(p) · GW(p)

To me, the arguments on both sides, for and against worrying about existential risk from AI, make sense. People have different priors and biased access to information. However, even if everyone agreed on all matters of fact that can currently be established, the disagreement would persist. The issue is that predicting the future is very hard, and we can't expect to be in any way certain about what will happen. I think the interesting difference between the people "pro" and "contra" AI x-risk is how they deal with this uncertainty.

Imagine you have a model of the world, which is the best model you have been able to come up with after trying very hard. This model is about the future and predicts catastrophe unless something is done about it now. It's impossible to check if the model holds up, other than by waiting until it's too late. Crucially, your model seems unlikely to make true predictions: it's about the future and rests on a lot of unverifiable assumptions. What do you do?

People "pro-x-risk" might say: "We made the best model we could, and it says we should not build AI. So let's not do that, at least until our models are improved and say it's safe enough to try. The default option is not to do something that seems very risky."

The opponents might say: "This model is almost certainly wrong, so we should ignore what it says. Building risky stuff has kinda worked so far, so let's just see what happens. Besides, somebody will do it anyway."

My feeling when listening to elaborate and abstract discussions is that people mainly disagree on this point: "What's the default action?" or, in other words, "Who has the burden of proof?" That proof is basically impossible to give for either side.

It's obviously great that people are trying to improve their models. That might get harder to do the more politicized the issue becomes.