On Lex Fridman’s Second Podcast with Altman

post by Zvi · 2024-03-25T12:20:08.780Z · 10 comments

Last week Sam Altman spent two hours with Lex Fridman (transcript). Given how important it is to understand where Altman’s head is at and learn what he knows, this seemed like another clear case where extensive notes were in order.

Lex Fridman overperformed, asking harder questions than I expected and going deeper than I expected, and succeeded in getting Altman to give a lot of what I believe were genuine answers. The task is ‘get the best interviews you can while still getting interviews’ and this could be close to the production possibilities frontier given Lex’s skill set.

There was not one big thing that stood out given what we have already heard from Altman before. It was more the sum of little things, the opportunity to get a sense of Altman and where his head is at, or at least where he is presenting it as being. To watch him struggle to be as genuine as possible given the circumstances.

One thing that did stand out to me was his characterization of potential loss of human control as a ‘theatrical risk,’ a framing that works to dismiss the concern. I do think that we are underinvesting in preventing loss-of-control scenarios around competitive dynamics that lack bad actors and are far less theatrical than those typically focused on, but the overall characterization here seems like a strategically hostile approach. I am sad about that, whereas I was mostly happy with the rest of the interview.

I will follow my usual format for podcasts: a numbered list of notes, each with a timestamp.

  1. (01:13) They open with the Battle of the Board. Altman starts with how he felt rather than any details, and drops this nugget: “And there were definitely times I thought it was going to be one of the worst things to ever happen for AI safety.” If he truly believed that, why did he not go down a different road? If Altman had come out strongly for a transition to Murati and a search for a new outside CEO, that presumably would have been fine for AI safety. So this, then, is a confession that he was willing to put that into play to keep power.
  2. (2:45) He notes he expected something crazy at some point and it made them more resilient. Yes from his perspective, but potentially very much the opposite from other perspectives.
  3. (3:00) And he says ‘the road to AGI should be a giant power struggle… not should… I expect that to be the case.’ Seems right.
  4. (4:15) He says he was feeling really down and out of it after the whole thing was over. That certainly is not the picture others were painting, given he had his job back. This suggests that he did not see this outcome as such a win at the time.
  5. (5:15) Altman learned a lot about what you need from a board, and says ‘his company nearly got destroyed.’ Again, his choice. What do you think he now thinks he needs from the board?
  6. (6:15) He says he thinks the board members were well-meaning people ‘on the whole’ and under stress and time pressure people make suboptimal decisions, and everyone needs to operate under pressure.
  7. (7:15) He notes that boards are supposed to be powerful but are answerable to shareholders, whereas non-profit boards answer to no one. Very much so. This seems like a key fact about non-profits and a fundamentally unsolved problem. The buck has to stop somewhere. Sam says he’d like the board to ‘answer to the world as a whole’ insofar as that is a practical thing. So, WorldCoin elections? I would not recommend it.
  8. (8:00) What was wrong with the old board? Altman says insufficient size or experience. For new board members, the criteria are now more considered, including different expertise on a variety of fronts and different perspectives on how this will impact society and help people. Says track record is a big deal for board members, much more than for other positions, which says a lot about the board’s old state. Lex asks about technical savvy; Altman says you need some, but not in every member. But who has it right now except for Altman? And even he isn’t that technical.
  9. (12:55) Altman notes this fight played out in public, and was exhausting. He goes on to say that at first, on Friday, he was ready to move on and didn’t consider the possibility of coming back, and was considering doing a very focused AGI research effort. Which indeed would have been quite bad for AI safety. He says he only flipped when he heard the executive team was going to fight back, and then on Saturday the board called to consider bringing Altman back. He says he did not want to come back and wanted to stabilize OpenAI, but if that is true, weren’t there very clear alternative paths he could have taken? He could have told everyone to embrace Emmett Shear’s leadership while they worked things out? He could have come back right away while they worked to find a new board? I don’t understand the story Altman is trying to tell here.
  10. (17:15) Very good gracious words about Mira Murati. Then Altman makes it clear to those who listen that he wants to move on from that weekend. He later (21:30) says he is happy with the new board.
  11. (18:30) Lex asks about Ilya Sutskever. Ilya is not being held hostage, Altman loves Ilya, hopes they work together indefinitely. What did Ilya see? Not AGI. Altman notes he loves that Ilya takes safety concerns very seriously and that they talk a lot about how to get it right, that Ilya is a credit to humanity in how much he wants to get this right. Altman is clearly choosing his words very carefully. The clear implication here is that ‘what Ilya saw’ was something that made Ilya Sutskever concerned from a safety perspective.
  12. (21:10) Why is Ilya still so quiet, Lex asks? Altman doesn’t want to speak for Ilya. Does mention they were at a dinner party recently.
  13. (22:45) Lex asks if the incident made Altman less trusting. Sam instantly says yes, that he thinks he is an extremely trusting person who does not worry about edge cases, and he dislikes that this has made him think more about bad scenarios. So perhaps this could actually be really good? I do not want someone building AGI who does not worry about edge cases, assumes things will work out and trusts fate. I want someone paranoid about things going horribly wrong and not trusting a damn thing without a good reason. Legitimately sorry, Altman, gotta take one for the team on this one.
  14. (24:40) Lex asks about Elon Musk suing OpenAI. Altman says he is not sure what it is really about. That seems like the right answer here. I am sure he strongly suspects what it is about, and Elon has said what it is about, but you don’t want to presume in public, and you can never be sure, given that it definitely isn’t about the claims having legal merit. Says OpenAI started purely as a research lab, then adjusted the structure as things changed, so it got patched together and is kind of weirdly structured.
  15. (28:30) Lex asks what the word Open in OpenAI meant to him at the time. Altman says he’d pick a different name now, and that his job is largely to put the tech in the hands of people for free, notes free ChatGPT has no advertising and GPT-4 is cheap. Says ‘we should open source some stuff and not other stuff… nuance is the right answer.’ Which is wise. Both agree that the lawsuit is legally unserious.
  16. (32:00) Lex mentions Meta opening up Llama, asks about the pros and cons of open sourcing. Altman says there is a place for it, especially for small models, and that a mix is right.
  17. (33:00) Altman outright says, if he knew what he knew now, he would have founded OpenAI as a for-profit company.
  18. (34:45) Transition to Sora. Altman is on Team World Model and thinks the approach will go far. Says ‘more than three’ people work on labeling the data, but a lot of it is self-supervised. Notes efficiency isn’t where it needs to be yet.
  19. (40:00) Asked about whether using copyrighted data for AI is fair use, Altman says the question behind the question is whether the artists who create the data should be paid. And the answer is yes, the model must change, people have to get paid, but it is unclear how. He would want to get paid if someone created art in his style, and would want to be able to opt out of that if he so chose.
  20. (41:00) Sam excitedly says he is not worried that people will stop doing and getting rewarded for cool shit, that’s hardwired, that’s not going away. I agree that we won’t let lack of hard incentives stop us too much, but we do still need the ability to do it.
  21. (42:10) Sam says don’t ask what ‘jobs’ AI can do, ask what individual tasks it can do, making people more efficient, letting people work better and on different kinds of problems. That seems wise in the near term.
  22. (43:30) Both note that humans care deeply about humans, Altman says it seems very deeply wired that this is what we ultimately care about. Play chess, run races, all that. But, character.ai. So we will see if this proxy can get hijacked.
  23. (45:00) Asked about what makes GPT-4 amazing, Altman says it kind of sucks, it’s not where we need to get to. Expects 4→5 to be similar to 3→4. Says he’s lately been using GPT-4 as a brainstorming partner.
  24. (50:00) Altman expects unlimited future context length (his word is billions), you’ll feed in everything. You always find ways to use the exponential.
  25. (53:50) Altman expects great improvement in hallucinations, but does not expect it to be solved this year. How to interpret what that implies about releases?
  26. (56:00) The point of memory is for the model to know you and get more useful over time. User should be able to edit what the AI remembers.
  27. (1:00:20) Felt the love, felt the love. Drink!
  28. (1:00:55) Optimism about getting slower and deeper thinking about (and allocating more compute to) harder problems out of AIs.
  29. (1:02:40) Q*? ‘Not ready to talk about it.’ Also says no secret nuclear facility, but it would be nice. Altman says OpenAI is ‘not a good company at keeping secrets. It would be nice.’ I would think it is going to be highly necessary. If you are playing for these kinds of table stakes you need to be able to keep secrets. Also, we still do not have many of the details of the events of November, so I suppose they can keep at least some secrets?
  30. (1:04:00) Lex asks if there are going to be more leaps similar to ChatGPT. Sam says that’s a good question and pauses to think. There’s plenty of deliberate strategicness to Altman’s answers in general, but also a lot of very clear genuine exploration and curiosity, and that’s pretty great. Altman focuses on the continuous deployment strategy, which he sees as a success by making others pay attention. Which is a double-edged sword. Altman says these leaps suggest there should be more iterative releases, not fewer. Which seems right, given the state of play? At this point might as well ship incrementally?
  31. (1:06:10) When is GPT-5 coming out? Altman says ‘I don’t know, that’s the honest answer.’ I do think that I believe him more because of the second half of that. But what does it mean to not know, beyond the answer not being tomorrow? How much not knowing is required to say you do not know? I don’t know that, either.
  32. (1:06:30) Altman says they will release an amazing new model this year, but he doesn’t know what they’ll call it. Given his statement about the size of the leap from 4→5, presumably this is not a ‘4.5 vs. 5’ question? It’s something else? He says in the coming months they will release ‘many different important things’ before GPT-5.
  33. (1:09:40) Seven trillion dollars! Altman says he never Tweeted that, calls it misinformation. He believes compute will likely be the currency of the future, the most precious commodity, and we should be investing heavily in having more. And it’s a weird market because the demand curve can go out infinitely far at sufficiently low price points. Still believes in fusion, and fission.
  34. (1:12:45) Worry about a fission-style reaction to AI, says some things will go ‘theatrically wrong’ with AI, which seems right, and that he will be at non-zero risk of being shot. Expects it to get caught in left vs. right wars too. Expects far more good than bad from AI, doesn’t talk about what time frame or capability level.
  35. (1:14:45) Competition means better products faster. The downside is a potential increase in an arms race. He says he feels the pressure. Emphasizes the importance of slow takeoff, although he wants short timelines to go with it. Says Elon Musk cares about safety and thus he assumes Elon won’t race unsafely, which strikes me as a sentence not selected for its predictive accuracy. Also not something I would count on. Consider the track record.
  36. (1:18:10) Better search engine? Boring. We want a whole new approach.
  37. (1:20:00) Altman hates ads. Yes, internet needed ads. But ads are terrible. Yes. Altman not ruling ads out, but has what he calls a bias against them. Good.
  38. (1:23:20) Gemini Incident time. They work hard to get this right, as you’d assume. Would be good to write down exactly what outputs you want. Not principles but specific rules: if I ask X, you output Y. You need to say it out loud. Bravo. Of course, writing that down makes you even more blameworthy.
  39. (1:25:50) Is San Francisco an ideological bubble impacting OpenAI? Altman says they have battles over AGI but are blessed not to have big culture war problems, at least not anything like what others experience.
  40. (1:26:45) How to do safety, asks Lex. Altman says, that’s hard, will soon be mostly what the company thinks about. No specifics, but Lex wasn’t asking for them. Altman notes dangers of cybersecurity and model theft, alignment work, impact on society, ‘getting to the good outcome is going to take the whole effort.’ Altman says state actors are indeed trying to hack OpenAI as you would expect.
  41. (1:28:45) What is exciting about GPT-5? Altman again says: That it will be smarter. Which is the right answer. That is what matters most.
  42. (1:31:30) Altman says it would be depressing if we had AGI and the only way to do things in the physical world would be to get a human to go do it, so he hopes we get physical robots. They will return to robots at some point. What will the humans be doing, then?
  43. (1:32:30) When AGI? Altman notes AGI definition is disputed, prefers to discuss capability X, says AGI is a mile marker or a beginning. Expects ‘quite capable systems we look at and say wow that is really remarkable’ by end of decade and possibly sooner. Well, yes, of course, that seems like a given?
  44. (1:34:00) AGI implies transformation to Altman, although not singularity-level, and notes the world and world economy don’t seem that different yet. What would be a huge deal? Advancing the rate of scientific progress. Boink. If he got an AGI he’d ask science questions first.
  45. (1:38:00) What about power? Should we trust Altman? Altman says it is important no one person have total control over OpenAI or AGI. You want a robust governance system. Defends his actions and the outcome of the attempted firing but admits the incident makes his case harder to make. Calls for governments to put rules in place. Both agree balance of power is good. The buck has to stop somewhere, and we need to ensure that this somewhere stays human.
  46. (1:41:30) Speaking of which, what about loss of control concerns? Altman says it is ‘not his top worry’ but he might worry about it more later and we have to work on it super hard and we have to get it right. Calls it a ‘theatrical risk’ and says safety researchers got ‘hung up’ on this problem, although it is good that they focus on it, but we risk not focusing enough on other risks. This is quite the rhetorical set of moves to be pulling here. Feels strategically hostile.
  47. (1:43:00) Lex asks about Altman refusing to use capital letters on Twitter. Altman asks, in a way I don’t doubt is genuine, why anyone cares, why do people keep asking this. One response I would give is that every time he does it, there’s a 50% chance I want to quote him, and then I have to go and fix it, and it’s annoying. Same to everyone else who does this – you are offloading the cognitive processing work, and then the actual work of capitalization, onto other people, and you should feel bad about this. Lex thinks it is about Altman not ‘following the rules’ making people uncomfortable. Altman thinks capitalization is dumb in general, I strongly think he is wrong, it is very helpful for comprehension. I don’t do it in Google Search (which he asks about) but I totally do it when taking private notes I will read later.
  48. (1:46:45) Sora → Simulation++? Altman says yes, somewhat, but not centrally.
  49. (1:49:45) AGI will be a psychedelic gateway to a new reality. Drink!
  50. (1:51:00) Lex ends by asking about… aliens? Altman says he wants to believe, and is puzzled by the Fermi paradox.
  51. (1:52:45) Altman wonders, will AGI be more like one brain or the product of a bunch of components and scaffolding that comes together, similar to human culture?

Was that the most valuable use of two hours talking with Altman? No, of course not. Two hours with Dwarkesh Patel would have been far more juicy. But also Altman is friends with Lex and willing to sit down with him, and provide what is still a lot of good content, and will likely do so again. It is an iterated game. So I am very happy for what we did get. You can learn a lot just by watching.

10 comments

comment by aphyer · 2024-03-26T12:52:26.403Z

They open with the Battle of the Board. Altman starts with how he felt rather than any details, and drops this nugget: “And there were definitely times I thought it was going to be one of the worst things to ever happen for AI safety.” If he truly believed that, why did he not go down a different road? If Altman had come out strongly for a transition to Murati and a search for a new outside CEO, that presumably would have been fine for AI safety. So this, then, is a confession that he was willing to put that into play to keep power.

I don't have a great verbalization of why, but want to register that I find this sort of attempted argument kind of horrifying.

Replies from: Seth Herd
comment by Seth Herd · 2024-03-26T15:00:32.255Z

The argument Zvi is making, or Altman's argument?

Replies from: aphyer
comment by aphyer · 2024-03-26T15:01:52.588Z

The argument Zvi is making.

Replies from: Seth Herd
comment by Seth Herd · 2024-03-26T15:04:23.617Z

Okay, then I can't guess why you find it horrifying, but I'm curious because I think you could be right.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2024-03-27T21:10:08.017Z

There are realistic beliefs Altman could have about what's good or bad for AI safety that would not allow Zvi to draw that conclusion. For instance: 

  • Maybe Altman thinks it's really bad for companies' momentum to go through CEO transitions (and we know that he believes OpenAI having a lot of momentum is good for safety, since he sees them as both adequately concerned about safety and more concerned about it than competitors).
  • Maybe Altman thinks OpenAI would be unlikely to find another CEO who understands the research landscape well enough while also being good at managing, who is at least as concerned about safety as Altman is.
  • Maybe Altman was sort of willing to "put that into play," in a way, but his motivation to do so wasn't a desire for power, nor a calculated strategic ploy, but more the understandable human tendency to hold a grudge (esp. in the short term) against the people who just rejected and humiliated him, so he understandably didn't feel a lot of motivational pull to want to help them look better about the coup they had just attempted for what seemed to him to be unfair/bad reasons. (This still makes Altman look suboptimal, but it's a lot different from "Altman prefers power so much that he'd calculatedly put the world at risk for his short-term enjoyment of power.")
  • Maybe the moments where Altman thought things would go sideways were only very brief, and for the most part, when he was taking actions towards further escalation, he was already very confident that he'd win. 

Overall, the point is that it seems maybe a bit reckless/uncharitable to make strong inferences about someone's rankings of priorities just based on one remark they made being in tension with them pushing in one direction rather than the other in a complicated political struggle.

Replies from: Lukas_Gloor, Seth Herd
comment by Lukas_Gloor · 2024-03-27T21:32:34.455Z

FWIW, one thing I really didn't like about how he came across in the interview is that he seemed to be engaged in framing the narrative one-sidedly in an underhanded way, sneakily rather than out in the open. (Everyone tries to frame the narrative in some way, but it becomes problematic when people don't point out the places where their interpretation differs from others', because then listeners won't easily realize that there are claims they still need to evaluate and think about, rather than take for granted as something everyone else already agrees about.)

He was not highlighting the possibility that the other side's perspective still has validity; instead, he was sweeping that possibility under the carpet. He talked as though (implicitly, not explicitly) it's now officially established or obviously true that the board acted badly (Lex contributed to this by asking easy questions and not pushing back on anything too much). He focused a lot on the support he got during this hard time and people saying good things about him (the eulogy-while-still-alive comparison, highlighting that he thinks there's no doubt about his character), said somewhat condescending things about the former board (about how he thinks they had good intentions, said in that slow voice and thoughtful tone, almost as if they had committed a crime), and then emphasized their lack of experience.

For contrast, here are things he could have said that would have made it easier for listeners to come to the right conclusions (I think anyone who is morally scrupulous about whether they're in the right in situations when many others speak up against them would have highlighted these points a lot more, so the absence of these bits in Altman's interview is telling us something.)

  • Instead of just saying that he believes the former board members came from a place of good intentions, also say whether he believes that some of the things they were concerned about weren't totally unreasonable from their perspective. E.g., acknowledge things he did wrong or things that, while not wrong, understandably would lead to misunderstandings.
  • Acknowledge that just because a decision had been made by the review committee, the matter of his character and suitability for OpenAI's charter is not now settled (esp. given that the review maybe had a somewhat limited scope?). He could point out that it's probably rational (or, if he thinks this is not necessarily mandated, at least flag that he'd understand if some people now feel that way) for listeners of the YouTube interview to keep an eye on him, while explaining how he intends to prove that the review committee came to the right decision.
  • He said the board was inexperienced, but he'd say that in any case, whether or not they were onto something. Why is he talking about their lack of experience so much rather than zooming in on their ability to assess someone's character? It could totally be true that the former board was both inexperienced and right about Altman's unsuitability. Pointing out this possibility himself would be a clarifying contribution, but instead he chose to distract from that entire theme and muddy the waters by making it seem like all that happened was that the board did something stupid out of inexperience, and that's all there was.
  • Acknowledge that it wasn't just an outpouring of support for him; there were also some people who used the occasion to voice critical takes about him (and the Y Combinator thing came to light).

(Caveat that I didn't actually listen to the full interview and therefore may have missed it if he did more signposting and perspective taking and "acknowledging that for-him-inconvenient hypotheses are now out there and important if true and hard to dismiss entirely for at the very least the people without private info" than I would've thought from skipping through segments of the interview and Zvi's summary.)

In reaction to what I wrote here, maybe it's a defensible stance to go like, "ah, but that's just Altman being good at PR; it's just bad PR for him to give any air of legitimacy to the former board's concerns." 

I concede that, in some cases when someone accuses you of something, they're just playing dirty and your best way to make sure it doesn't stick is by not engaging with low-quality criticism. However, there are also situations where concerns have enough legitimacy that sweeping them under the carpet doesn't help you seem trustworthy. In those cases, I find it extra suspicious when someone sweeps the concerns under the carpet and thereby misses the opportunity to add clarity to the discussion, make themselves more trustworthy, and help people form better views on what's the case.

Maybe that's a high standard, but I'd feel more reassured if the frontier of AI research was steered by someone who could talk about difficult topics and uncertainty around their suitability in a more transparent and illuminating way. 

comment by Seth Herd · 2024-04-01T19:49:40.750Z

This is great, thanks for filling in that reasoning. I agree that there are lots of plausible reasons Altman could've made that comment, other than disdain for safety.

comment by lc · 2024-03-25T19:51:09.656Z

Lex asks if the incident made Altman less trusting. Sam instantly says yes, that he thinks he is an extremely trusting person who does not worry about edge cases, and he dislikes that this has made him think more about bad scenarios. So perhaps this could actually be really good? I do not want someone building AGI who does not worry about edge cases, assumes things will work out and trusts fate. I want someone paranoid about things going horribly wrong and not trusting a damn thing without a good reason.

Eh... I think you and him are worried about different things.

comment by Gyrodiot · 2024-03-25T23:43:37.106Z

Typo: Mira Murati, not Mutari.

comment by Victor Ashioya (victor-ashioya) · 2024-03-27T06:24:08.634Z

Lex really asked all the right questions. I liked how he tried to trick Sam with Ilya and Q*.

It would have been easy for Sam to trip up and say something, but he maintained a certain composure, staying very calm throughout the interview.