What's Going on With OpenAI's Messaging?
post by ozziegooen · 2024-05-21T02:22:04.171Z · LW · GW · 13 comments
Comments sorted by top scores.
comment by Odd anon · 2024-05-21T04:40:56.231Z · LW(p) · GW(p)
Meta’s messaging is clearer.
“AI development won’t get us to transformative AI, we don’t think that AI safety will make a difference, we’re just going to optimize for profitability.”
So, Meta's messaging is actually quite inconsistent. Yann LeCun says (when speaking to certain audiences, at least) that current AI is very dumb, and AGI is so far away it's not worth worrying about all that much. Mark Zuckerberg, on the other hand, is quite vocal that their goal is AGI and that they're making real progress towards it, suggesting 5+ year timelines.
↑ comment by Steven Byrnes (steve2152) · 2024-05-21T21:36:36.139Z · LW(p) · GW(p)
I think Yann LeCun thinks "AGI in 2040 is perfectly plausible", AND he believes "AGI is so far away it's not worth worrying about all that much". It's a really insane perspective IMO. As recently as like 2020, "AGI within 20 years" was universally (correctly) considered to be a super-soon forecast calling for urgent action, as contrasted with the people who say "centuries".
↑ comment by ozziegooen · 2024-05-21T05:30:18.132Z · LW(p) · GW(p)
That could be.
My recollection from Zuckerberg was that he was thinking of AGI, at least, as a fairly far-away goal, more like 8 to 20+ years (and I'd assume "transformative AI" would be further out), and that overall, he just hasn't talked much about it.
I wasn't thinking of all of Yann LeCun's statements, in part because he makes radical/nonsensical-to-me statements all over the place (which makes me assume he's not representing the whole department). It's not clear to me how much his views represent Meta's, though I realize he is technically in charge of AI there.
↑ comment by O O (o-o) · 2024-05-21T06:00:47.273Z · LW(p) · GW(p)
He isn't in charge there. He simply offers research directions and probably serves as a link to academia.
↑ comment by cubefox · 2024-05-21T11:34:55.352Z · LW(p) · GW(p)
Do you have a source for that? His website says:
VP and Chief AI Scientist, Facebook
↑ comment by O O (o-o) · 2024-05-25T07:27:22.373Z · LW(p) · GW(p)
https://x.com/ylecun/status/1794248728825524303?s=46&t=lZJAHzXMXI1MgQuyBgEhgA
He’s recently mentioned it again.
comment by Manuel Allgaier (white rabbit) · 2024-05-22T14:16:26.088Z · LW(p) · GW(p)
I've been following Sam Altman's messaging for a while, and it feels like Altman does not have one consistent set of beliefs (like an ethics/safety researcher would) but tends to say different things at different times and in different places, depending on what currently seems most useful for achieving his goals. Many CEOs do that, but he seems to do it more than other OpenAI staff or executives at Anthropic or DeepMind. I agree with your conclusion: pay less attention to their messaging and more to their actions.
comment by alcherblack · 2024-05-22T00:49:12.309Z · LW(p) · GW(p)
Broadly agree except for this part:
It's in an area that some people (not the OpenAI management) think is unusually high-risk,
I really can't imagine that someone who wrote "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity." in 2015, and who occasionally references extinction as a possibility even when not directly asked about it, doesn't think AGI development is high risk.
I'm not sure how to square this circle. I almost hope Sam is being consciously dishonest and has a 4D chess plan, as opposed to deluding himself that, while it's dangerous, the risks are low or somehow worth it. But it seems that the latter is more likely, based on some other stuff he has said, e.g. "What I lose the most sleep over is the hypothetical idea that we already have done something really bad by launching ChatGPT".
comment by PeterH · 2024-05-22T17:26:26.348Z · LW(p) · GW(p)
Flagging the most upvoted comment thread [EA(p) · GW(p)] on EA Forum, with replies from Ozzie, which begins:
This post contains many claims that you interpret OpenAI to be making. However, unless I'm missing something, I don't see citations for any of the claims you attribute to them. Moreover, several of the claims feel like they could potentially be described as misinterpretations of what OpenAI is saying or merely poorly communicated ideas.
comment by Rebecca (bec-hawk) · 2024-05-21T12:31:03.379Z · LW(p) · GW(p)
My impression is that, post board drama, they've de-emphasised the non-profit messaging. Also, in a more recent interview, Sam said basically 'well I guess it turns out the board can't fire me' and that in the long term there should be democratic governance of the company. So I don't think it's true that #8-10 are (still) being pushed simultaneously with the others.
I also haven’t seen anything that struck me as communicating #3 or #11, though I agree it would be in OpenAI’s interest to say those things. Can you say more about where you are seeing that?
comment by ChristianKl · 2024-05-23T12:52:51.269Z · LW(p) · GW(p)
To add from the recent fiasco about Scarlett Johansson:
(1) We are concerned about people developing emotional attachments to our agents
(2) We picked a voice to make it easier for people to develop emotional attachments to our agents.
comment by Review Bot · 2024-05-21T18:13:10.110Z · LW(p) · GW(p)
The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?