The deepest atheist: Sam Altman

post by Trey Edwin (Paolo Vivaldi) · 2024-10-10T03:27:34.465Z · LW · GW · 2 comments


I’ve held the prediction for some time that we are likely to see a fall from grace for Sam Altman, akin to Sam Bankman-Fried’s.

This is a difficult intuition to pin down, so I thought I should write it out. I believe it’s also pertinent to the Rationalist community: it is a warning to heed a lesson from SBF that has not yet, I think, been taken to heart.

It’s one of many riffs on ‘the road to hell is paved with good intentions’.

Recently, Joe Carlsmith captured what 'deep atheism' is in relation to AI risk. In particular, he unpacks Yudkowsky’s flavour of deep atheism: how and why it came to be. I think it paints an extraordinarily clear picture of the psychology of individuals like Yudkowsky, but also of SBF and Sam Altman — and much of the broader Rationalist and EA community.

I don’t want to argue about the validity of the deep atheist worldview; I think I am largely a deep atheist myself. What I would like to argue about is one of the effects it seems to have on behaviour: the sad irony that it motivates some individuals to push the world further towards deep atheism's terrors, rather than drawing the world away from them (which is the intention).

FTX was terrifying. OpenAI is terrifying.

I am convinced that, if you were to show early-2010s Sam Altman the OpenAI of today — and perhaps removed him as the figure who has behaved the way he has — he would be appalled. I believe this would go for SBF and FTX too. But the terror — which motivates the beliefs and epistemology of deep atheism — seems to push people towards this behaviour nonetheless.

From Deep Atheism and AI Risk

And once you’ve got a heart, suddenly your own intelligence, at least, is super great. Sure, it’s just a tool in the hands of some contingent, crystallized fragment of a dead-eyed God. And sure, yes, it’s dual use. Gotta watch out. But in your own case, the crystal in question is your heart...

Indeed, for all his incredulity and outrage at human stupidity, Yudkowsky places himself, often, on team humanity. He fights for human values; he identifies with humanism [LW · GW]; he makes Harry’s patronus a human being. And he sees humanity as the key to a good future [LW · GW], too:

Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth… Let go of the steering wheel, and the Future crashes.

Sam Altman did not alienate all the safety-conscious founding members of the team because he does not care about safety. Altman is one of the safety-conscious founding members of the team. Rather, Altman is indelibly informed by Yudkowsky’s deep atheism: a distrust of having his own crystallised fragment of goodness removed from the situation, of casting the fate of a good universe back to Fortune, i.e. the rest of humanity.

It’s because Altman is so safety-informed that he cannot let go of the wheel. Deep atheism, per Carlsmith, is about the deepest sense of distrust. Sam does not trust Nature, and humanity and humans are a part of Nature.

Indeed, Sam is a part of Nature. He does not even trust himself: this is precisely why OpenAI was formed the way it was, to prevent him from wresting control as he has. But as AGI suddenly appears closer, the mistrust rears its head: Sam is not going to let go; he will always trust himself marginally over those around him. I have no doubt at all — and of course he would disagree — that Yudkowsky would do the same in Sam’s situation. What Yudkowsky would miss, in his potential disagreement, is the way in which participating in OpenAI over eight years, as leader, shapes your sense of control over the situation.

The upshot of atheism is a call to yang – a call for responsibility, agency, creation, vigilance. And it’s a call born, centrally, of a lack of safety. Yudkowsky writes [LW · GW]: “You are not safe. Ever… No one begins to truly search for the Way until their parents have failed them, their gods are dead, and their tools have shattered in their hand.”

What, then, are we to make of Altman’s ‘The Intelligence Age’?

There is almost no chance that Altman believes, for technical reasons, that the safety problem has vanished. So the first explanation that comes to mind is that he has slowly become convinced that it has vanished because he needs to be in control so badly that he needs to believe that it has. That is, this transition has been a psychological solution to maintain coherence between his beliefs and motivations.

A more cynical explanation is that this is merely for show: part of the control game, meant to convince investors and the existing and future researchers he needs in the company. But I don't think this passes muster: it's clear as day that Altman, at some level, genuinely believes what he has written.

The other, more convoluted explanation goes like this, and it is a bit of a reach.

Sam is more terrified than ever. What would have been nice, from the view of a decade ago, would be for humanity's brightest to be working jointly and directly on the issue of AI safety by now. And yet the reality couldn’t be further from that: Sam has alienated most AGI folk, including close friends; he's off courting state security; and he sees competitors and enemies all around him, attempting to wrest control of ASI from him.

Sam’s world looks like a nightmare to me. Imagine if you were personally in charge of the Manhattan Project, most of your friends had turned to foes, and they were all off trying to complete the project before you did, on the same national soil as you.

Sam has merely confirmed the world of deep atheism he grew up so terrified by. He can’t trust anyone, he is working against all odds, all sorts of misshapen institutions are bearing down on him, and so on. And yet at no point will his worldview allow him to let go.

What I read in ‘The Intelligence Age’ is someone playing make-believe. He needs to believe that AI will just become tool AI — the safety problem isn’t real — and that humanity will just immediately solve all of its own problems with it, rather harmoniously. If that were true, that would solve his problem today. He’s slowly convinced himself of a false utopia to placate his deep atheism, which has never for a moment left him.

This is certainly the more convoluted explanation, but it matches my observations of SBF's psychology from well before the FTX blowup.

2 comments


comment by gwern · 2024-10-10T15:31:36.388Z · LW(p) · GW(p)

This is certainly the more convoluted explanation, but it matches my observations of SBF's psychology from well before the FTX blowup.

I disagree. I think Altman is, in many respects, the exact opposite of SBF, and your read of his personality is wrong. This is why you can't predict things like Sutskever & Murati leaving OA, without being pushed (and in fact Altman going to lengths to keep them), while I could. I encourage you to go back and reread things like the New Yorker profile or discussions of his highschool career or his abortive political run or UBI experiment with that in mind.

comment by Viliam · 2024-10-10T16:14:58.711Z · LW(p) · GW(p)

Okay, this was quite interesting!

I have no doubt at all — and of course he would disagree — that Yudkowsky would do the same in Sam’s situation. What Yudkowsky would miss, in his potential disagreement, is the way in which participating in OpenAI over eight years, as leader, shapes your sense of control over the situation.

Maybe, maybe not. It is difficult to predict people's reactions to hypothetical situations. Yes, strong temptations exist, but they are not as strict as the laws of physics; there are people out there doing all kinds of things that most people wouldn't do.

And even where human behavior is highly predictable, it seems to me that the usual pattern is not "virtually all people would do X in situation Y" but rather "the kind of people who wouldn't do X are very unlikely to end up in situation Y", either because they don't want to, or because actions very similar to X are already required halfway towards situation Y.

Imagine if you were personally in charge of the Manhattan Project, most of your friends had turned to foes, and they were all off trying to complete the project before you did, on the same national soil as you.

I think the analogy would be even better if it hypothetically turned out that building a nuke is actually surprisingly simple once you understand the trick of how to do it, so your former friends are leaving to start their own private nuclear armies, because it is obviously way more profitable than getting a salary as a scientist.

He needs to believe that AI will just become tool AI — the safety problem isn’t real — and that humanity will just immediately solve all of its own problems with it, rather harmoniously.

Very speculatively, he might be sacrificing the Tegmark universes that he already considers doomed anyway, in return for having more power in the ones where it turns out that there is a reason for the AI to remain a tool.

(That of course would be a behavior comparable to playing quantum-suicide Russian roulette.)