Comments

Comment by Leksu on Altman firing retaliation incoming? · 2023-11-19T13:25:05.409Z · LW · GW

To the extent that Microsoft pressures the OpenAI board about their decision to oust Altman, won't it be easy to (I think accurately) portray Microsoft's behavior as unreasonable and against the common good?

It seems like the main (I would guess accurate) narrative in the media is that the reason for the board's actions was safety concerns.

Let's say Microsoft pulls whatever investments they can, revokes access to cloud compute resources, and makes efforts to support a pro-Altman faction in OpenAI. What happens if the OpenAI board decides to stall and put out statements to the effect of "we gotta do what we gotta do, it's not just for fun that we made this decision"? I would naively guess the public, the law, the media, and governments would be on the board's side. Unsure how much that would matter though.

To me at least it doesn't seem obviously bad for AI safety if OpenAI collapses or significantly loses employees and market value (would love to hear opinions on this).

Pros:

  • Naively, slowing down the leading capabilities organization slows down AI capabilities (though it's hard to say whether it slows them down on balance, given that some employees will start rival companies, and the slowdown could inspire other actors to invest more in becoming the new frontier)
  • Additional signal to the public and governments that irresponsible safety shortcuts are being taken (increases willingness to regulate AI, which naively seems like a pro?)

Cons:

  • Altman and other employees will likely start other AI companies, which could be less responsible, could push the capabilities frontier faster than the counterfactual, and could make coordination harder...
  • Maybe I'm wrong about how all of this will be painted by the media, and about how the public and governments will perceive it

I feel like I'm probably missing some reason Microsoft has more leverage than I'm assuming? Maybe other people are more worried about fragmentation of the AI landscape, less optimistic about how the public and governments will perceive the situation, and less optimistic about the expected value of the actions they might take because of it?

Comment by Leksu on AI Alignment Breakthroughs this Week [new substack] · 2023-10-02T08:17:08.502Z · LW · GW

Agree that would be better. I think just links are useful too, though, in case adding more context significantly increases the workload in practice.

Comment by Leksu on Atoms to Agents Proto-Lectures · 2023-09-23T16:27:55.172Z · LW · GW

I wonder if there's an AI tool that could post-process it

Comment by Leksu on AI #24: Week of the Podcast · 2023-08-12T18:52:59.202Z · LW · GW

Joscha Bach

Comment by Leksu on Linkpost: We need another Expert Survey on Progress in AI, urgently · 2023-08-12T17:25:43.679Z · LW · GW

Is there a group currently working on such a survey? If not, seems like it wouldn't be very hard to kickstart.

Link to mentioned survey: LINK

Maybe someone from AI Impacts could comment with relevant thoughts (are they planning to re-run the survey soon, would they encourage or discourage another group from doing a similar survey, do they think now is a good time, do they have the right resources for it now, etc.)

Comment by Leksu on AGI safety career advice · 2023-05-05T09:07:01.121Z · LW · GW

Thanks, I think this comment and the subsequent post will be very useful for me!

Comment by Leksu on Possible miracles · 2022-10-10T19:00:41.723Z · LW · GW

Here's what GPT-3 output for me