[Link] Musk's non-missing mood
post by jimrandomh · 2021-07-12T22:09:12.165Z · LW · GW · 20 comments
This is a link post for https://lukemuehlhauser.com/musks-non-missing-mood/
Luke Muehlhauser writes:
Over the years, my colleagues and I have spoken to many machine learning researchers who, perhaps after some discussion and argument, claim to think there’s a moderate chance — a 5%, or 15%, or even a 40% chance — that AI systems will destroy human civilization in the next few decades. However, I often detect what Bryan Caplan has called a “missing mood”; a mood they would predictably exhibit if they really thought such a dire future was plausible, but which they don’t seem to exhibit. In many cases, the researcher who claims to think that medium-term existential catastrophe from AI is plausible doesn’t seem too upset or worried or sad about it, and doesn’t seem to be taking any specific actions as a result.
Not so with Elon Musk. Consider his reaction (here and here) when podcaster Joe Rogan asks about his AI doomsaying. Musk stares at the table, and takes a deep breath. He looks sad. Dejected. Fatalistic. Then he says:
20 comments
Comments sorted by top scores.
comment by orthonormal · 2021-07-13T02:07:06.551Z · LW(p) · GW(p)
One of his main steps was founding OpenAI, a decision that looks questionable now from an AI Safety standpoint (as they push the capabilities of language models and reinforcement learning forward while driving their original safety team away) and that looked fishy to me even at the time (simply because more initiatives make coordination harder).
I agree that Musk takes AI risk seriously, and I understand the "try something" mentality. But I suspect he founded OpenAI because he didn't trust a safety project he didn't have his hands on himself; then later he realized OpenAI wasn't working as he hoped, so he drifted away to focus on Neuralink.
comment by adamShimi · 2021-07-13T13:05:51.853Z · LW(p) · GW(p)
I feel like the linked post is extolling the virtue of something that is highly unproductive and self-destructive: using your internal grim-o-meter to measure the state of the world/future. As Nate points out in his post, this is a terrible idea. Maybe Musk can be constantly grim while being productive on AI Alignment, but in my experience, people constantly weighed down by the shit that happens don't do creative research -- they get depressed and angsty. Even if they do some work, they burn out way more often.
That being said, I agree that it makes sense for people really involved in this topic to freak out from time to time (happens to me). But I don't want to make freaking out the thing that every Alignment researcher feels like they have to signal.
comment by jimrandomh · 2021-07-12T22:13:35.925Z · LW(p) · GW(p)
My best guess is that this "missing mood" effect is a defensive reaction to a lack of plausible actionable steps: upon first being convinced people get upset/worried/sad, but they fail to find anything useful to do about the problem, so they move on and build up some psychological defenses against it.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-07-13T05:57:16.079Z · LW(p) · GW(p)
Maybe? But if they are doing AI research, it shouldn't be too hard for them to either (a) stop doing AI research, and thereby stop contributing to the problem, or (b) pivot their AI research more towards safety rather than capabilities, or at least (c) help raise awareness about the problem so that it becomes common knowledge, so that everyone can stop altogether and/or suitable regulation can be designed.
↑ comment by Lukas_Gloor · 2021-07-13T20:39:50.341Z · LW(p) · GW(p)
Edit: Oops, my comment shouldn't be a direct reply here (but it fits into this general comment section, which is why I'm not deleting it). I didn't read the parent comment that Daniel was replying to above and assumed he was replying in a totally different context (Musk not necessarily acting rationally on his non-missing mood, as opposed to Daniel talking about AI researchers and their missing mood).
--
Yeah. I watched a Q&A on YouTube after a talk by Sam Altman, roughly a year or two ago, where Altman implied that Musk had wanted some of OpenAI's top AI scientists because Tesla needed them. It's possible that the reason he left OpenAI was simply related to that, not to any strategic thinking about AI futures, missing moods, etc.
More generally, I feel like a lot of people seem to think that if you run a successful company, you must be brilliant and dedicated in every possible way. No, that's not how it works. You can be a genius at founding and running companies and making lots of money without necessarily being good at careful reasoning about paths to impact other than "making money." Probably these skills even come apart at the tails.
↑ comment by Nisan · 2021-07-13T21:40:39.940Z · LW(p) · GW(p)
Agreed that it shouldn't be hard to do that, but I expect that people will often continue to do what they find intrinsically motivating, or what they're good at, even if it's not overall a good idea. If this article can be believed, a senior researcher said that they work on capabilities because "the prospect of discovery is too sweet".
↑ comment by hg00 · 2021-07-14T06:40:00.104Z · LW(p) · GW(p)
An interesting missing mood I've observed in discussions of AI safety: When a new idea for achieving safe AI is proposed, you might expect that people concerned with AI risk would show a glimmer of eager curiosity. Perhaps the AI safety problem is actually solvable!
But I've pretty much never observed this. A more common reaction seems to be a sort of uneasy defensiveness, sometimes in combination with changing the subject.
Another response I occasionally see is someone mentioning a potential problem in a manner that practically sounds like they are rebuking the person who shared the new idea.
I eventually came to the conclusion that there is some level on which many people in the AI safety community actually don't want to see the problem of AI safety solved, because too much of their self-concept is wrapped up in AI safety being a super difficult problem. I highly doubt this occurs on a conscious level; it's probably due to the same sort of subconscious psychological defenses you describe, e.g. embarrassment at not having seen the solution oneself.
comment by Jiro · 2021-07-14T15:58:46.407Z · LW(p) · GW(p)
The idea of a missing mood, from following the link to Bryan Caplan's article, seems to amount to two ideas:
- "I think it has more costs than other people think, so even if someone thinks the benefits outweigh the costs, if they're not taking the costs seriously enough, they have a missing mood."
- "I think it has more benefits than other people think, so even if someone thinks the costs outweigh the benefits, if they're not taking the benefits seriously enough, they have a missing mood."
These are, of course, two sides of the same coin and have the same problem: You're assuming that the first half of your position (costs in case 1, benefits in case 2) is not only correct, but so obviously correct that nobody can reasonably disagree with it; if someone acts as if they don't believe it, there must be some other explanation. This is better than assuming your entire position is correct, but it's still poor epistemic hygiene. For instance, both the military hawks example (case 1) and the immigration example (case 2) fail if your opponent doesn't value non-Americans very much, so there are lower costs or benefits, respectively.
Beware of starting with disagreement and concluding insincerity.
comment by George3d6 · 2021-07-13T17:23:21.564Z · LW(p) · GW(p)
I'll make an analogy here so as to get around the AI-worship-induced gut reactions:
I think most people are fairly convinced there isn't a moral imperative beyond their own life. That is, even if behaving as though your own life is the ultimate driver of moral value is wrong and ineffective in practice, from a logical standpoint it is: once your conscious experience ends, everything ends.
I'm not saying this is certain. It may be that the line between conscious states is so blurry that continuity between sleep and wakefulness is basically 0, or as much as that between you and other completely different humans (who will be alive even once you die and will keep on flourishing). It may be that there is a ghost in the machine under whatever metaphysical framework you want... but, if I had to take a bet, I'd say there's something like a 15%, 40%, or 60% chance that once you close your eyes it's over, the universe is done for.
I think many people accept this viewpoint, but most of them don't spend even a moment thinking about anti-aging, and even those like myself who do aren't too concerned about death in a "mood" sense. Why would you be? It's inevitable. Like, yeah, your actions might contribute to averting death by 0.x% if you're very lucky, and so you should pursue that area because... well, nothing better to do, right? But it makes no sense to concern oneself with death in an emotional way, since it's likely coming anyway.
After all, the purpose of life is living, and if you're not living because you're worrying about death, you've lost. Even in the case where you were able to defeat death, you still lost: you didn't live, or, less metaphorically, you lived a life of suffering, or of unmet potential.
Nor does it help to be paralyzed by the fear of death every waking moment of one's life. It will likely make you less able to destroy the very evil you are opposing.
Such is the case with every potential horrible inevitability in life: even if it is "absolute" in its badness, being afraid of it will not make it easier to avoid, and it might ultimately defeat the purpose of avoiding it, which is the happiness of you and the people you care about, since all of those will be more miserable if you are paralyzed by fear.
So even if whatever fake model you had assigned a 99.9% chance of our being destroyed by HAL or whatever 10 years from now, the most sensible course of action would still be not to get too emotional about the whole thing.
comment by [deleted] · 2021-07-13T17:09:10.111Z · LW(p) · GW(p)
Is death by AI really any more dire than the default outcome, i.e. the slow and agonizing decay of the body until cancer/Alzheimer's delivers the final blow?
↑ comment by Vladimir_Nesov · 2021-07-13T19:08:52.504Z · LW(p) · GW(p)
Senescence doesn't kill the world.
↑ comment by Vanilla_cabs · 2021-07-13T19:33:50.739Z · LW(p) · GW(p)
And it doesn't expand into the universe to kill all other life.
↑ comment by [deleted] · 2021-07-13T21:39:43.036Z · LW(p) · GW(p)
How strange for us to achieve superintelligence where every other life in the universe has failed, don't you think?
↑ comment by Vanilla_cabs · 2021-07-14T08:02:07.206Z · LW(p) · GW(p)
Well, that's just a variation of the Fermi paradox, isn't it? What's strange is that we don't observe any sign of alien sentience, superintelligence or not. I guess, if we're in the zoo hypothesis, then the aliens will probably step in and stop us from developing a rogue AI (anytime now). But I wouldn't pin my hopes for life in the universe on it.
↑ comment by [deleted] · 2021-07-14T11:38:16.779Z · LW(p) · GW(p)
It was a rhetorical question, there is nothing strange about not observing aliens. I'm an avid critic of the Fermi paradox. You simply update towards their nonexistence and, to a lesser extent, whatever other hypothesis fits that observation. You don't start out with the romantic idea that aliens ought to be out there, living their parallel lives, and then call the lack of evidence thereof a "paradox".
The probability that all sentient life in the observable universe just so happens to invariably reside in the limbo state between nonexistence and total dominance is vanishingly small, to a comical degree. Even on our own Earth, sentient life only occupies a small fragment of our evolutionary history, and intelligent life even more so. Either we're alone, or we're in a zoo/simulation.
Either way, Clippy doesn't kill anyone beyond us.
↑ comment by Vanilla_cabs · 2021-07-14T12:02:12.512Z · LW(p) · GW(p)
But it is surprising that life could only appear on our planet, since it doesn't seem to have unique features. If we're alone, that probably means we're just first. If we just blow ourselves up, another sentient species will probably appear someday somewhere else with a chance to not mess up. But an expanding unaligned AI will wipe out all chance of life appearing in the future. That's a big difference.
↑ comment by [deleted] · 2021-07-14T12:21:39.552Z · LW(p) · GW(p)
But it is surprising that life could only appear on our planet, since it doesn't seem to have unique features.
What does "could appear" mean here? 1 in 10? 1 in a trillion? 1 in 10^50?
Remember we live in a tiny universe with only ~10^23 stars.
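A rough expected-value sketch, with purely illustrative numbers, shows why the answer matters: with roughly 10^23 stars, the expected number of life-bearing systems is about N_stars × p, where p is the per-star chance of life appearing.
p = 10^-13  →  ~10^10 expected life-bearing systems (a crowded universe)
p = 10^-33  →  ~10^-10 expected life-bearing systems (almost certainly alone)
So whether "could appear" means 1 in 10 or 1 in 10^50 flips the conclusion entirely.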
↑ comment by Dagon · 2021-07-14T01:57:17.883Z · LW(p) · GW(p)
To an individual human, death by AI (or by climate catastrophe) is worse than old age "natural" death only to the extent that it comes sooner, and perhaps in being more violent. To someone who cares about others, the large number of looming deaths is pretty bad. To someone who cares about the species, or who cares about quantity of sentient individuals, AI is likely to reduce total utility by quite a bit.
To someone who loves only abstract intelligence and quantifies by some metric I don't quite get, AI may be just as good as (or better than) people.
↑ comment by [deleted] · 2021-07-14T21:18:03.229Z · LW(p) · GW(p)
To an individual human, death by AI (or by climate catastrophe) is worse than old age "natural" death only to the extent that it comes sooner, and perhaps in being more violent.
I would expect death by AI to be very swift but not violent, e.g. nanites releasing neurotoxin into the bloodstream of every human on the planet like Yudkowsky suggested.
To someone who cares about the species, or who cares about quantity of sentient individuals, AI is likely to reduce total utility by quite a bit.
Like I said above, I expect the human species to be doomed by default due to lots of other existential threats, so in the long term superintelligent AI has only upsides.