How to think about and deal with OpenAI

post by Rafael Harth (sil-ver) · 2021-10-09T13:10:56.091Z · LW · GW · 54 comments

This is a question post.

Contents

  Answers
    45 AnnaSalamon
    14 James_Miller
    13 lincolnquirk
    12 A Ray
    4 iceman
    2 deluks917

Eliezer Yudkowsky writes on twitter:

Nothing else Elon Musk has done can possibly make up for how hard the "OpenAI" launch trashed humanity's chances of survival; previously there was a nascent spirit of cooperation, which Elon completely blew up to try to make it all be about who, which monkey, got the poison banana, and by spreading and advocating the frame that everybody needed their own "demon" (Musk's old term) in their house, and anybody who talked about reducing proliferation of demons must be a bad anti-openness person who wanted to keep all the demons for themselves.

Nobody involved with OpenAI's launch can reasonably have been said to have done anything else of relative importance in their lives. The net impact of their lives is their contribution to the huge negative impact of OpenAI's launch, plus a rounding error.

Previously all the AGI people were at the same conference talking about how humanity was going to handle this together. Elon Musk didn't like Demis Hassabis, so he blew that up. That's the impact of his life. The end.

I've found myself repeatedly uncertain about what to make of OpenAI and their impact. The most recent LessWrong discussion that I'm aware of has happened on the post Will OpenAI's work unintentionally increase existential risk? [LW · GW], but the arguments for negative impact there are different from the thing Eliezer named.

I'm also not entirely sure whether publicly debating sensitive questions like whether a person or organization accidentally increased existential risk is a good idea in the first place. However, spontaneously bringing up the issue in a Twitter thread is unlikely to be optimal. At the very least, it should be beneficial to discuss the meta question, i.e., how we should or shouldn't talk about this. With that in mind, here are three things I would like to understand better:

  1. Concretely speaking, should we be hesitant to talk about this? If so, what kind of discussions are okay?

And -- conditional on discussing them being a good idea:

  2. What is the more detailed story of how the "nascent spirit of cooperation" has degraded or changed since the inception of OpenAI?

  3. What interventions are possible here, if any? (Is it really that difficult to organize some kind of outreach to Elon to try and reverse some of the effects? Naively speaking, my impression has been that our community is sufficiently well connected to do this, and that Elon is amenable to arguments.)

I'm less interested in estimating the total impact of any specific person.

Answers

answer by AnnaSalamon · 2021-10-13T04:32:33.426Z · LW(p) · GW(p)

I think we should not be hesitant to talk about this in public. I used to be of the opposite opinion, believing-as-if there was a benevolent conspiracy that figured out which conversations could/couldn’t nudge AI politics in useful ways, whose upsides were more important than the upsides of LWers/etc. knowing what’s up. I now both believe less in such a conspiracy, and believe more that we need public fora in which to reason because we do not have functional private fora with memory (in the way that a LW comment thread has memory) that span across organizations.

It’s possible I’m still missing something, but if so it would be nice to have it spelled out publicly what exactly I am missing.

I agree with Lincoln Quirk’s comment that things could turn into a kind of culture war, and that this would be harmful. It seems to me it’s worth responding to this by trying unusually hard (on this or other easily politicizable topics) to avoid treating arguments like soldiers [? · GW]. But it doesn’t seem worthwhile to me to refrain from honest attempts to think in public.

comment by lincolnquirk · 2021-10-13T23:05:35.992Z · LW(p) · GW(p)

Ineffective, because the people arguing on the forum are lacking knowledge about the situation. They don't understand OpenAI's incentive structure, plan, etc. Thus any plans they put forward will in all likelihood be useless to OpenAI.

Risky, because (some combination of):

  • it is emotionally difficult to hear that one of your friends is plotting against you (and OpenAI is made up of humans, many of whom came out of this community)
    • it's especially hard if your friend is misinformed and plotting against you; and I think it likely that the OpenAI people believe that Yudkowsky/LW commentators are misinformed or at least under-informed (and they are probably right about this)
  • to manage that emotional situation, you may want to declare war back on them, cut off contact, etc.; any of these actions, if declared as an internal policy, would be damaging to the future relationship between OpenAI and the LW world
  • OpenAI has already had a ton of PR issues over the last few years, so they probably have a pretty well-developed muscle for dealing internally with bad PR, which this would fall under. If true, the muscle probably looks like internal announcements with messages like "ignore those people/stop listening to them, they don't understand what we do, we're managing all these concerns and those people are over-indexing on them anyway"
  • the evaporative cooling effect may eject some people who were already on the fence about leaving, but the people who remain will be more committed to the original mission, more "anti-LW", and less inclined to listen to us in the future
  • hearing bad arguments makes one more resistant to similar (but better) arguments in the future

I want to state for the record that I think OpenAI is sincerely trying to make the world a better place, and I appreciate their efforts. I don't have a settled opinion on the sign of their impact so far.

Replies from: lincolnquirk
comment by lincolnquirk · 2021-10-13T23:32:07.691Z · LW(p) · GW(p)

What should be done instead of a public forum? I don't necessarily think there needs to be a "conspiracy", but I do think that it's a heck of a lot better to have one-on-one meetings with people to convince them of things. At my company, when sensitive things need to be decided or acted on, a bunch of slack DMs fly around until one person is clearly the owner of the problem; they end up in charge of having the necessary private conversations (and keeping stakeholders in the loop). Could this work with LW and OpenAI? I'm not sure.

comment by steven0461 · 2021-10-13T20:06:17.301Z · LW(p) · GW(p)

a benevolent conspiracy that figured out which conversations could/couldn’t nudge AI politics in useful ways

functional private fora with memory (in the way that a LW comment thread has memory) that span across organizations

What's standing in the way of these being created?

Replies from: Raemon
comment by Raemon · 2021-10-13T21:00:47.830Z · LW(p) · GW(p)

Mostly time and attention. This has been on the list of things the LessWrong team has considered working on, and there are just a lot of competing priorities.

Replies from: steven0461
comment by steven0461 · 2021-10-13T23:08:27.401Z · LW(p) · GW(p)

Hmm, I was imagining that in Anna's view, it's not just about what concrete social media or other venues exist, but about some social dynamic that makes even the informal benevolent conspiracy part impossible or undesirable.

answer by James_Miller · 2021-10-09T20:54:12.766Z · LW(p) · GW(p)

How open do we think OpenAI would be to additional research showing the dangers of AGI? If OpenAI is pursuing a perilous course, perhaps this community should prioritize doing the kind of research that would persuade them to slow down. Sam Altman came across to me, at the two SSC talks he gave, as highly rational in the sense this community would define the term.

If this is the correct path, we would benefit from people who have worked at OpenAI explaining what kind of evidence would be needed to influence them towards Eliezer's view of AGI.

comment by DanielFilan · 2021-10-13T19:13:03.420Z · LW(p) · GW(p)

I don't know what 'we' think, but as a person somewhat familiar with OpenAI employees and research output, they are definitely willing to pursue safety and transparency research that's relevant to existential risk, and I don't really know how one could do that without opening oneself up to producing research that provides evidence of AI danger.

answer by lincolnquirk · 2021-10-10T14:09:58.782Z · LW(p) · GW(p)

I'd like to put in my vote for "this should not be discussed in public forums". Whatever is happening, the public forum debate will have no impact on it; but it does create the circumstances for a culture war that seems quite bad.

comment by habryka (habryka4) · 2021-10-10T22:45:57.916Z · LW(p) · GW(p)

Whatever is happening, the public forum debate will have no impact on it;

I think this is wrong. I think a lot of people who care about AI Alignment read LessWrong and might change their relationship to OpenAI depending on what is said here.

comment by AnnaSalamon · 2021-10-13T03:40:03.297Z · LW(p) · GW(p)

I disagree with Lincoln's comment, but I'm confused that when I read it just now it was at -2; it seems like a substantive comment/opinion that deserves to be heard and part of the conversation.

If comments expressing some folks' actual point of view are downvoted below the visibility threshold, it'll be hard to have good substantive conversation.

answer by A Ray · 2022-01-24T04:21:31.165Z · LW(p) · GW(p)

I just saw this, but this feels like a better-late-than-never situation.  I think hard conversations about the possibilities of increasing existential risk should happen.

I work at OpenAI.  I have worked at OpenAI for over five years.

I think we definitely should be willing and able to have these sorts of conversations in public, mostly for the reasons other people have listed. I think AnnaSalamon's is the answer I agree with most.

I also want to add that this has made me deeply uncomfortable and anxious countless times over the past few years. It can be a difficult thing to navigate well or navigate efficiently. I feel like I've gotten better at it, and better at knowing/managing myself. I see newer colleagues also suffering from this. I try to help them when I can.

I'm not sure this is the right answer for all contexts, but I am optimistic for this one. I've found the rationality community and the LessWrong community to be much better than average at dealing with bad-faith arguments and at cutting towards the truth. I think there are communities where it would go poorly enough that it could be net-negative to have the conversation.

Side note: I really don't have a lot of context about the Elon Musk connection, and the guy has not really been involved for years. I think "what things (including things OpenAI is doing) might increase existential risk" is an important conversation to have when analyzing forecasts, predictions, and research plans. I am less optimistic about "what tech executives think about other tech executives" going well.

answer by iceman · 2021-10-14T19:33:58.400Z · LW(p) · GW(p)

I'm skeptical of OpenAI's net impact on the spirit of cooperation because I'm skeptical about the counterfactual prospects of cooperation in the last 6 years had OpenAI not been founded.

The 2000s and early 2010s centralized and intermediated a lot of stuff online, where we trusted centralized parties to be neutral arbiters. We are now experiencing the aftereffects of that naivete: Reddit, Twitter and Facebook are censoring certain parties on social media, and otherwise neutral infrastructure like AWS or Cloudflare kicks off disfavored parties. I am at the point where I am scared of centralized infrastructure and try to minimize my reliance on it, because it allows third parties to apply pressure on me.

These general trends are much larger than Elon Musk and would have happened without him. I'm uncertain to what extent Elon was just reacting to this trend and was ahead of the curve.

It's darkly humorous, but OpenAI's recent destruction of otherwise entirely benign users of GPT-3 is teaching people not to rely on centralized AI. Now that OpenAI is large, the Blue Egregore can apply media pressure on the tech company to extract submission, in this case censorship demands. The downstream effect of this demand to think of the non-existent children was the destruction of trust in AI Dungeon, leading people to mostly switch to various GPT-J alternatives (including local setups), and the shutdown of Samantha, which Mr. Rohrer is taking hard.

answer by sapphire (deluks917) · 2021-10-10T19:31:37.018Z · LW(p) · GW(p)

Releasing GPT-3 non-trivially increased the odds of doomsday. So yeah they are not good actors.

comment by Rafael Harth (sil-ver) · 2021-10-10T19:39:51.834Z · LW(p) · GW(p)

Can you elaborate on that? It seems non-obvious; I feel like I could tell a different story

... like, OpenAI has far fewer resources than DeepMind, so if DeepMind had released GPT-3 first, and if it is the case that scaling up a language model far enough gives you AGI, there would have been less time between knowing what AGI will look like and having AGI.

54 comments

Comments sorted by top scores.

comment by jessicata (jessica.liu.taylor) · 2021-10-09T20:36:42.866Z · LW(p) · GW(p)

Do you think there are reasons to avoid publicly debating OpenAI's impact that wouldn't apply to, say, Microsoft, or to the EU?

Replies from: sil-ver, lincolnquirk
comment by Rafael Harth (sil-ver) · 2021-10-10T08:22:25.129Z · LW(p) · GW(p)

If you're asking me, I could imagine that there is a higher chance of insulting individual people who matter, but really I don't know. I have no qualms if the answer to my first question is 'basically no reason to be hesitant'.

comment by lincolnquirk · 2021-10-13T23:33:36.052Z · LW(p) · GW(p)

Yes: a far higher percentage of OpenAI reads this forum than of the other orgs you mentioned. In some sense OpenAI is friends with LW, in a way that is not true for the others.

Replies from: Benito
comment by Ben Pace (Benito) · 2021-10-13T23:50:15.668Z · LW(p) · GW(p)

Another perspective to "friends" is "trading partners", which is an intuition I use a lot more often.

comment by Mitchell_Porter · 2021-10-10T07:46:55.787Z · LW(p) · GW(p)

What kind of cooperation is OpenAI supposed to have "trashed"? Did they make AI R&D more competitive? Did they prevent AI safety researchers from talking to each other? 

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-10-10T22:54:23.201Z · LW(p) · GW(p)

There is an "AI-arms-race-o'meter", and as a result of OpenAI the meter has gone up. This is real bad, because when we start getting close to AGI/APS-AI/etc. it's gonna be real important that we slow down and go carefully, but we won't do that if we are racing.

Replies from: Mitchell_Porter, rudi-c, TAG
comment by Mitchell_Porter · 2021-10-11T07:08:44.356Z · LW(p) · GW(p)

May I suggest that for an organization which really could create superhuman general AI, more important than a vague vow to go slow and be careful, would be a concrete vow to try to solve the problem of friendliness/safety/alignment. (And I'll promote June Ku's https://metaethical.ai as the best model we have of what that would look like. Not yet practical, but a step in the right direction.) 

Perhaps it is a result of my being located on the margins, but I have never seen slowing the overall AI race as a tactic worth thinking about. That it is already an out-of-control race in which no one even knows all the competitors, has been my model for a long time. I also think one should not proceed on the assumption that there are years (let alone decades) to spare. For all we know, a critical corner could be turned at any moment. 

Perhaps it feels different if you're DeepMind or MIRI; you may feel that you know most of the competitors, and have some chance of influencing them in a collegial way. But was OpenAI's announcement of GPT-3 in 2020 a different kind of escalation than DeepMind's announcement of AlphaGo in 2016? OpenAI has been concerned with AI safety from the beginning; they had Paul Christiano on their alignment team for years.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-10-11T09:08:25.673Z · LW(p) · GW(p)

I am not exactly here to say that DeepMind is that much better! :) One thing I dislike about the OP is that it makes it seem like the problem is specifically with OpenAI compared to other companies. If OpenAI had come first and then Elon had gone and founded DeepMind, that would be approximately just as bad, or even slightly worse.

I agree that maybe an arms race was inevitable, in which case founding OpenAI maybe wasn't a bad thing after all. Maybe. But maybe not.

It's true that OpenAI had some great safety researchers. Now most of them have quit. (There are still some that remain). But they probably could have got jobs at DeepMind, so this isn't relevant to evaluating Elon's decision.

Also, there's the whole openness ideal/norm. Terrible idea, for reasons various people (e.g. Scott Alexander) said at the time. (I can try to remember what the post was called if you like... it made the same point as Yudkowsky here: if we haven't solved alignment yet and we give AI to everyone, then we are killing ourselves. If we have solved alignment, great, but that's the difficult part and we haven't done that yet. That point and a few others.)

Replies from: Mitchell_Porter, RobbBB, TAG
comment by Mitchell_Porter · 2021-10-12T11:51:21.599Z · LW(p) · GW(p)

For the sake of discussion, let's suppose that the next big escalation in AI power is the final one, and that it's less than five years away. Any thoughts on what the best course of action is?

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-10-12T12:56:49.467Z · LW(p) · GW(p)

Warm take: Our main hope lies in the EA/longtermist/AI-safety community. I say this because my first-order answer to your question is "no idea," so instead I go meta and say "Rally the community. Get organized. Create common knowledge. Get the wisest people in a room together to discuss the problem and make a plan. Everybody stick to the plan." (The plan will of course be a living document that updates over time in response to new info. The point is that for a crisis, you need organization; you need a chain of command. It seems to be the main way that humans are able to get large numbers of people working effectively together on short notice. The main problem with organizations is that they tend to become corrupt and decay over time, hence the importance of competition/markets/independence. However this is less of a problem on short timescales and anyway what choice do we have?)

comment by TAG · 2021-10-11T11:42:04.625Z · LW(p) · GW(p)

We have good-enough alignment for the AIs we have. We don't have a general solution to alignment that will work for the ASIs we don't have. We also don't know whether we need one, i.e., we don't know that we need to solve ASI alignment beyond getting ASIs to work acceptably.

I constantly see conflations of AI and ASI. It doesn't give me much faith in amateur (unrelated to industry) efforts at AI safety.

comment by Rudi C (rudi-c) · 2022-01-11T16:33:07.921Z · LW(p) · GW(p)

Taboo “racing”? I don’t understand what concrete actions were thought to have been skipped.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-01-11T17:38:51.474Z · LW(p) · GW(p)

I don't know what you mean by skipped. Here's some more concreteness though:

--Thanks to OpenAI, there is more of an "AI research should be made available to everyone" ethos, more of a "Boo anyone who does AI research and doesn't tell the world what they did or how they did it or even decides not to share the weights!" Insofar as this ethos persists during the crucial period, whichever labs are building AGI will be under more internal and external pressure to publish/share. This makes it harder for them to go slow and be cautious when the stakes are high.

--Thanks to OpenAI, there were two world-leading AGI labs, not one. Obviously it's a lot harder to coordinate two than one. This is not as bad as it sounds because plausibly before the crucial period more AGI labs would have appeared anyway. But still.

--Thanks to OpenAI, scaling laws and GPT tech are public knowledge now. This is a pretty big deal because it's motivating lots of other players to start building AGI or AGI-like things, and because it seems to be opening up lots of new profit opportunities for the AI industry which will encourage further investment, shortening timelines and increasing the number of actors that need to coordinate. Again, presumably this would have happened eventually anyway. But OpenAI made it happen faster.

Replies from: rudi-c, Dirichlet-to-Neumann
comment by Rudi C (rudi-c) · 2022-01-13T10:31:06.923Z · LW(p) · GW(p)

Ironically, I am a believer in FOSS AI models, and I find OpenAI’s influence anything but encouraging in this regard. The only thing they are publicly releasing is marketing nowadays.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-01-13T18:00:18.672Z · LW(p) · GW(p)

Yep! :) But the damage is done; thanks to OpenAI there is now a large(r) group of people who believe in FOSS AI than there otherwise would be, and there are various new actors who have entered the race who wouldn't have if not for OpenAI publications.

To be clear, I'm not confident FOSS AI is bad. I just think it probably is, for reasons mentioned. I won't be too surprised if I turn out to be wrong and actually (e.g. for reasons Dirichlet-to-Neumann mentioned, or because FOSS AI is harder to make a profit on and therefore will get less investment and therefore will be developed later, buying more time for safety research) the FOSS AI ethos was net-positive.

I'd be interested to hear your perspective on FOSS AI.

Replies from: rudi-c
comment by Rudi C (rudi-c) · 2022-01-14T00:33:08.818Z · LW(p) · GW(p)

Epistemic status: I am not an expert on this debate, I have not thought very deeply about it, etc.

  1. I am fairly certain that as long as we don’t fail miserably (i.e., a loose misaligned super AI that collapses our civilization), FOSS AI is extremely preferable to proprietary software. The reasons are common to other software projects, though the usefulness and blackboxy-ness of AGI make this particularly important.
  2. I am skeptical of “conspiracies.” I think a publicly auditable, transparent process with frequent peer feedback on a global scale is much more likely to result in trustable results with fewer unforeseen consequences/edge cases.
  3. I am extremely skeptical of the human incentives that a monopoly on AGI encourages. E.g., when was the single time atomic bombs were used? Exactly when there was a monopoly on them.
  4. I don’t see the current DL approaches as at all near achieving efficient AGI that would be dangerous. AI alignment probably needs more concrete capability research IMO. (At least, more capability research is likely to contribute to safety research as well.) I’d like the world to enjoy having better narrow AI sooner, and I am not convinced delaying things is buying all that much. (Full disclosure: I am weighing the lives of my social bubble and contemporaries more than random future lives. Though if I were to not do this, then it’s likely that intelligence will evolve again in the universe anyhow, and so humanity failing is not that big of a deal? None of our civilization is built that long-termist, either, so it’s pretty out of distribution for me to think about. Related point: I have an unverified impression that the people who advocate slowing capability research are already well off and healthy, so they don’t particularly need technological progress. Perhaps this is an unfair/false intuition I have, but I do have it, and disabusing me of it will change my opinion a bit.)
  5. In a slow takeoff scenario, my intuition is that multiple competing super intelligences will leave us more leverage. (I assume in a fast takeoff scenario the first such intelligence will crush the others in their infancy.)
  6. Safety research seems to be more aligned with academic incentives than business incentives. Proprietary research is less suited to academia though.
Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-01-14T04:26:19.905Z · LW(p) · GW(p)

Thanks! In case you are interested in my opinion: I think I agree with 1 (but I expect us to fail miserably) and 2 and 6. I disagree with 3 (AGI, unlike nukes, can be used in ways that aren't threatening and don't hurt anyone and just make loads of money and save loads of lives. So people will find it hard to resist using it. So the more people have access to it, the more likely it is it'll be used before it is safe.) My timelines are about 50% by 10 years; I gather from point 4 that yours are longer. I think 5 might be true but might not be; history is full of examples of different groups fighting each other yet still managing to conquer and crush some third group. For example, the Spanish conquistadors were fighting each other even as they conquered Mexico and Peru. Maybe humans will be clever enough to play the AIs off against each other in a way that lets us maintain control until we solve alignment... but I wouldn't bet on it.

comment by Dirichlet-to-Neumann · 2022-01-11T19:49:21.014Z · LW(p) · GW(p)

It seems to me that being open about what you are working on, and having a proven record of publishing/sharing critical information, including weights, is a very good way to fight the arms race.

If you don't know where your competitors are, it is much more difficult to stop and think about alignment than to rush toward capability first. If you know where your competitors are, and if you know that you will be at worst a couple of weeks or months late because they always publish and you will thus be able to catch up, you have much more slack to pursue alignment (or speculative research in general).

For the strategic arms reduction treaties signed between Russia and the USA, verification tools were a crucial part of the process, because you need to know what the other is doing to disarm:
https://en.wikipedia.org/wiki/START_I#Verification_tools

https://en.wikipedia.org/wiki/New_START#Monitoring_and_verification

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-01-13T17:56:06.237Z · LW(p) · GW(p)

Yes, when we are getting really close to AGI it will be good for the leading contenders to share info with each other. Even then it won't be a good idea for the leading contenders to publish publicly, because then there'll be way more contenders! And now, when we are not really close to AGI, public publication accelerates research in general and thus shortens timelines, while also bringing more actors into the race.

Replies from: Dirichlet-to-Neumann
comment by Dirichlet-to-Neumann · 2022-01-15T17:57:52.004Z · LW(p) · GW(p)

Trust between partners does not happen overnight. You don't suddenly begin sharing information with competitors when the prize is in sight. We need a history of shared information to build upon, and now - when, as you said, AGI is not really close - is the right time to build it.
Because if you don't trust someone with GPT-3, you are certainly not going to trust them with an AGI.


Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-01-15T18:16:27.834Z · LW(p) · GW(p)

Because if you don't trust someone with GPT-3, you are certainly not going to trust them with an AGI.

Choosing to not release GPT-3's weights to the whole world doesn't imply that you don't trust DeepMind or Anthropic or whoever. It just implies that there exists at least one person in the world you don't trust.

I agree that releasing everything publicly would make it easier/more likely to release crucial things to key competitors when the time comes. Alas, the harms are big enough to outweigh this benefit, I think.

comment by TAG · 2021-10-11T11:25:16.611Z · LW(p) · GW(p)

There is an “AI-arms-race-o’meter”

Literally?

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-10-11T12:11:13.418Z · LW(p) · GW(p)

Not as far as I'm aware. I don't think anyone is quantitatively tracking this phenomenon.

comment by lsusr · 2021-10-10T04:02:11.416Z · LW(p) · GW(p)

Nobody involved with OpenAI's launch can reasonably have been said to have done anything else of relative importance in their lives. The net impact of their lives is their contribution to the huge negative impact of OpenAI's launch, plus a rounding error.

Using this statement to describe Elon Musk and Sam Altman seems to imply that founding a single AI company is much more important than privatizing spaceflight, inventing practical self-driving cars and leading Y-Combinator.

From the little I know about how Elon Musk and Sam Altman see the world, both of them would agree that AI is the most important near-term issue for humanity, and they started OpenAI in order to do something about it. The question isn't whether OpenAI is important. It's whether OpenAI has had a positive or negative effect.

OpenAI is the only organization I know of which is explicitly dedicated to AI safety and is pushing the technical field of AI forward. This seems like a good thing to me. Pushing the technical field of AI forward is how you provide an empirical test of whether you know what you're talking about. If you do AI safety without the technical advancement, you can get lost in an ivory tower.

I'm curious about this whole "nascent spirit of cooperation" thing. We're a species that can kinda sorta cooperate on nuclear Armageddon, carbon emissions and vaccines. Cooperation on AI seems like a much harder problem because the capital expenditures are so low, the strategic advantage is so high and the technology advances so fast.

Replies from: ChristianKl
comment by ChristianKl · 2021-10-11T13:36:11.833Z · LW(p) · GW(p)

Using this statement to describe Elon Musk and Sam Altman seems to imply that founding a single AI company is much more important than privatizing spaceflight, inventing practical self-driving cars and leading Y-Combinator.

From Eliezer's perspective that's the case. Privatized spaceflight or self-driving cars won't significantly change the likelihood that humanity survives, and Eliezer sees the amount of value that could be created if humanity survives and there's FAI as big enough that those other things are relatively unimportant.

comment by UHMWPE-UwU (abukeki) · 2021-10-09T14:21:33.910Z · LW(p) · GW(p)

I agree that we should reach out to him and that the community is connected enough to do so. If he's concerned about AI risk but is either being misguided or doing harm (see e.g. here [LW · GW]/here and here [LW · GW]), then someone should just... talk to him about it? The richest man in the world can do a lot either way. (Especially someone as addicted to launching things as him; who knows what detrimental thing he might do next if we're not more proactive.)

I get the impression the folks at FLI are closest to him so maybe they are the best ones to do that.

Replies from: Ruby
comment by Ruby · 2021-10-10T19:09:57.513Z · LW(p) · GW(p)

I believe people have spoken to him. For one thing, he was on a panel at EAG 2015.

Replies from: abukeki
comment by UHMWPE-UwU (abukeki) · 2021-10-10T19:58:26.721Z · LW(p) · GW(p)

I'm aware. I'm just saying a new effort is still needed: judging from all his recent public comments on the topic and what he's trying to do with Neuralink etc., his thoughts on alignment/AI risk are still clearly very misguided, so someone really needs to reach out and set him straight.

comment by Charlie Steiner · 2021-10-09T18:18:07.269Z · LW(p) · GW(p)

This incident makes me feel sympathy for people whose words have more than personal consequence. And for twitter users. Seems like a tough place.

I assume Eliezer is at least 50% BSing, but now the words are going to get taken out of context and interpreted as some kind of official statement. Which, to be fair, should have been completely obvious.

Replies from: Benito
comment by Ben Pace (Benito) · 2021-10-09T18:53:33.487Z · LW(p) · GW(p)

I don't know why you think Eliezer is BSing at all. I agree he is an incredible troll in-person and is very funny, but this reads to me as straightforwardly describing how he sees the situation.

Replies from: sil-ver, Charlie Steiner
comment by Rafael Harth (sil-ver) · 2021-10-09T19:25:26.549Z · LW(p) · GW(p)

Yeah, I was assuming he was completely serious; otherwise I would have been hesitant to pick out the quote.

comment by Charlie Steiner · 2021-10-09T20:09:26.549Z · LW(p) · GW(p)

I'm guessing he would have a much more nuanced view of the origins of OpenAI off Twitter.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2021-10-09T20:32:21.813Z · LW(p) · GW(p)

Eliezer seriously believing this is consistent with what I heard while working at MIRI at the time of OpenAI's creation.

comment by Dustin · 2021-10-09T15:52:51.801Z · LW(p) · GW(p)

Going off of Musk's public persona, I think there's some not-insignificant chance that calling him out like Yudkowsky did will be counter-productive. If this is accurate, and if Yudkowsky is correct about the negative value of OpenAI, then Yudkowsky just put himself in the same shoes as everyone he's describing as having contributed to OpenAI's success, by causing Musk to dig his heels in on the subject.

Replies from: Vaniver
comment by Vaniver · 2021-10-10T00:05:52.880Z · LW(p) · GW(p)

Note that Musk parted ways with OpenAI back in 2018, in part because of a conflict of interest (between Tesla and OpenAI).

Replies from: Dustin
comment by Dustin · 2021-10-10T01:37:55.506Z · LW(p) · GW(p)

You know, I actually knew this, but kind of forgot about it momentarily! I think it was because of the framing of the post, which almost makes it sound like an ongoing concern of Musk's.

That being said, I wouldn't be surprised to hear if the general point I'm making holds for at least some subset of current OpenAI people.

comment by Taran · 2021-10-14T10:18:07.755Z · LW(p) · GW(p)

A related discussion from 5 years ago: https://www.lesswrong.com/posts/Nqn2tkAHbejXTDKuW/openai-makes-humanity-less-safe

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-10-10T22:57:01.971Z · LW(p) · GW(p)

Anyone want to make a Downfall meme video titled "MIRI reacts to the announcement of OpenAI?" I for one could sure use a bit of humor to be less depressed, and Yudkowsky's tweet provides plenty of juicy source material.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-10-11T11:03:13.739Z · LW(p) · GW(p)

I'm unsure why this is getting downvoted. Is there something wrong with downfall memes in particular, or is it just the general idea of making memes about this situation that people don't like?

Replies from: ChristianKl
comment by ChristianKl · 2021-10-11T14:09:50.151Z · LW(p) · GW(p)

I expect that there are people who think creating such a video would have negative utility. 

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-10-11T15:41:23.708Z · LW(p) · GW(p)

Because it would draw more attention to this whole thing? Because it might make MIRI look bad?

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-11T16:39:04.607Z · LW(p) · GW(p)

I downvoted it because it's unimportant, and I don't want it distracting from the conversation. If I'd found it funny or clever, I'd have upvoted it or abstained.

On the meta level: I tentatively think LW is too sparing with downvotes, and I think asking people to go into why they didn't find your joke funny (or didn't find it useful, or just hate laughter and sunshine), in response to very small amounts of net downvoting (e.g., -5 rather than -30), makes it a bit harder to shift that norm. I guess I feel like I can guess the general character of others' downvotes, and I don't understand the decision-relevance of investigating the details.

Now please downvote this comment to the extent it was bad / you want to see fewer things like it in the future / you want it to be less prominent on this page. :P

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-10-11T17:51:12.758Z · LW(p) · GW(p)

Ahh, this was very helpful, thanks! I guess I just was overreacting / misinterpreting the signal. Cheers!

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-10-13T01:04:35.154Z · LW(p) · GW(p)

K cool :) For the record if I have a more serious disagreement/objection I'm (relatively) more likely to PM or comment about it.

comment by jbash · 2021-10-09T22:19:47.379Z · LW(p) · GW(p)

As far as I can tell, OpenAI is neither "open" nor "AI". I guess that whether or not they were "open" could matter, but only in the context of them becoming "AI".

All I've heard of them doing is throwing ever-larger computers at essentially the same approach to prediction tasks, and selling the results for trivial applications like dialog in video games. Am I misunderstanding what they're about, or am I missing some reason to expect that iterating their approach up to GPT-69 or whatever will somehow produce a superintelligent agent?

Replies from: Kaj_Sotala, quintin-pope
comment by Kaj_Sotala · 2021-10-10T08:00:41.664Z · LW(p) · GW(p)

They've got some interesting work on what exactly those prediction systems end up looking like on the inside.

comment by Quintin Pope (quintin-pope) · 2021-10-09T23:25:51.982Z · LW(p) · GW(p)

Well, the big, obvious, enormous difference between current deep models and the human brain is that the brain uses WAY more compute. Even the most generous estimates put GPT-3 at using something like 1000x less compute than the brain. OpenAI demonstrated, quite decisively, that increasing model size leads to increasing performance.
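
(For concreteness: the relationship OpenAI reported is an empirical power law in parameter count. A minimal sketch of its form, with illustrative constants quoted from memory from the 2020 scaling-laws paper, so treat the exact numbers as approximate:)

```latex
% Approximate parameter-count scaling law (Kaplan et al., 2020).
% L = cross-entropy loss, N = non-embedding parameter count.
% Constants are illustrative, not exact.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076, \quad N_c \approx 8.8 \times 10^{13}
```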

Also, generating dialog in video games is NOT trivial (and is well beyond GPT-3's capabilities). Any AI capable of that would be enormously valuable, since it would need a generalized grasp of language close to human-level proficiency and could be adapted to many text generation tasks (novel/script writing, customer service, chatbot based substitutes for companionship, etc).

Replies from: jbash
comment by jbash · 2021-10-10T00:31:46.147Z · LW(p) · GW(p)

I think the big, obvious, enormous difference between GPT-3 and the human brain is that GPT-3 isn't an agent. It's not trained for behavior; it's adjusted for accuracy. It doesn't even have any agency in choosing its input; it's given a big wodge of training data, and has to ingest it. It has less agency than a slug, and therefore can't really learn to do anything "agenty".

I mean, I could be wrong. Maybe it could do something interesting given 1000 times more compute. But it seems unlikely enough that it doesn't worry me. Things like DeepMind's generalized game-playing agents are a lot scarier to me.

Also, generating dialog in video games is NOT trivial (and is well beyond GPT-3's capabilities).

As I understand it, it's actually generating dialog in commercial games today. There was some kind of big flap about people getting it to generate text involving children and sex.

But I didn't mean "trivial" in the sense of "easy". I meant "trivial" in the sense of "doesn't really matter in the grand scheme of things". Even if you add all the applications you listed, it's still not a big deal by the standards of people who talk about X-risk.

Replies from: Kaj_Sotala, ChristianKl
comment by Kaj_Sotala · 2021-10-10T08:06:54.583Z · LW(p) · GW(p)

I think the big, obvious, enormous difference between GPT-3 and the human brain is that GPT-3 isn't an agent. It's not trained for behavior; it's adjusted for accuracy.

It's true that GPT-3 doesn't do everything that a human brain does, but one of my thoughts when reading Duncan's post on shoulder advisors [LW · GW] was that it really sounds like the brain runs something like GPT-? instances that can be trained on various prediction tasks.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-10-12T17:33:36.290Z · LW(p) · GW(p)

Something of an aside, but what exactly is your definition of 'agent'?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2021-10-12T18:48:52.319Z · LW(p) · GW(p)

Depends on the context. :)

If I had to give a general definition, something like "a system whose behavior can usefully be predicted through the intentional stance" [LW · GW].

comment by ChristianKl · 2021-10-11T13:01:29.330Z · LW(p) · GW(p)

It doesn't even have any agency in choosing its input; it's given a big wodge of training data, and has to ingest it. It has less agency than a slug, and therefore can't really learn to do anything "agenty".

It's quite trivial to change it in a way where its output feeds back into its input, given that its input is text and its output is text.

You can make the output console commands and then feed the resulting console answer back into the model. It likely needs a larger attention field to be practically useful, but more compute and clever ways to handle it could lead there.

Our own thinking process is also a lot about having a short-term memory into which we put new thoughts and based on which our next action/thought gets generated.
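
A minimal sketch of the loop being described, with a stand-in generate() function in place of any real model API (the function names and the toy command handling are assumptions for illustration, not OpenAI's actual interface):

```python
# Sketch of the output-feeds-back-into-input loop described above.
# generate() stands in for a language-model call; it is a toy stub here
# so that the example runs on its own.

def generate(prompt: str) -> str:
    """Placeholder for a language-model completion (hypothetical)."""
    # A real model would continue the prompt; this stub issues one command,
    # then stops once it sees a result in its input.
    if "RESULT:" not in prompt:
        return "COMMAND: echo hello"
    return "DONE"

def run_command(command: str) -> str:
    """Toy 'console' that only understands an echo command."""
    if command.startswith("echo "):
        return command[len("echo "):]
    return "unknown command"

def agent_loop(task: str, max_steps: int = 5) -> str:
    # The growing transcript acts as the short-term memory mentioned above:
    # model output and console answers are appended and fed back in.
    transcript = task
    for _ in range(max_steps):
        output = generate(transcript)
        transcript += "\n" + output
        if output.startswith("COMMAND: "):
            result = run_command(output[len("COMMAND: "):])
            transcript += "\nRESULT: " + result
        else:
            break
    return transcript

if __name__ == "__main__":
    print(agent_loop("Say hello via the console."))
```

The point is only the structure: each output becomes part of the next input, bounded here by a small step limit and, in practice, by the size of the model's attention field.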

comment by lsusr · 2021-10-10T04:01:44.633Z · LW(p) · GW(p)