[Link] Introducing OpenAI

post by Baughn · 2015-12-11T21:54:47.229Z · LW · GW · Legacy · 49 comments


From their site:

OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.

The money quote is at the end, literally—$1B in committed funding from some of the usual suspects.

49 comments

Comments sorted by top scores.

comment by AlexMennen · 2015-12-12T00:26:44.231Z · LW(p) · GW(p)

From their website, it looks like they'll be doing a lot of deep learning research and making the results freely available, which doesn't sound like it would accelerate Friendly AI relative to AI as a whole. I hope they've thought this through.

Edit: It continues to look like their strategy might be counterproductive. [Edited again in response to this.]

Replies from: Kaj_Sotala, Rain, SilentCal, Riothamus, devi, adamzerner, bogus, Furcas
comment by Kaj_Sotala · 2015-12-12T17:45:14.297Z · LW(p) · GW(p)

Edit: It continues to look like they don't know what they're doing.

Please don't use "they don't know what they're doing" as a synonym for "I don't agree with their approach".

comment by Rain · 2015-12-12T03:46:59.387Z · LW(p) · GW(p)

That interview is indeed worrying. I'm surprised by some of the answers.

Replies from: Viliam
comment by Viliam · 2015-12-14T11:46:14.524Z · LW(p) · GW(p)

Like this?

If I’m Dr. Evil and I use it, won’t you be empowering me?

Musk: I think that’s an excellent question and it’s something that we debated quite a bit.

Altman: There are a few different thoughts about this. Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else. If that one thing goes off the rails or if Dr. Evil gets that one thing and there is nothing to counteract it, then we’re really in a bad place.

The first one is a non-answer; the second one suggests that a proper response to Dr. Evil making a machine that transforms the planet into grey goo is Anonymous creating another machine which... transforms the grey goo into a nicer color of goo, I guess?

Replies from: HungryHobo, Lumifer
comment by HungryHobo · 2015-12-15T11:44:16.427Z · LW(p) · GW(p)

If you don't believe that a foom is the most likely outcome (a common and not unreasonable position), then it's probably better to have lots of weakly-superhuman AIs than a single weakly-superhuman AI.

Replies from: Rain
comment by Rain · 2015-12-23T14:36:04.277Z · LW(p) · GW(p)

Even in that case, whichever actor has the most processors would have the largest "AI farm", with commensurate power projection.

comment by Lumifer · 2015-12-14T15:46:02.674Z · LW(p) · GW(p)

the second one suggest...

I think the second one suggests that they don't believe the future AI will be a singleton.

comment by SilentCal · 2015-12-14T19:20:01.834Z · LW(p) · GW(p)

Their statement accords very well with the Hansonian vision of AI progress.

comment by Riothamus · 2015-12-12T14:09:22.937Z · LW(p) · GW(p)

If I am reading that right, they plan to oppose Skynet by giving everyone a Jarvis.

Does anyone know their technical people, and whether they can be profitably exposed to the latest work on safety?

comment by devi · 2015-12-12T17:06:11.792Z · LW(p) · GW(p)

They seem deeply invested in avoiding an AI arms race. This is a good thing, perhaps even if it speeds up research somewhat right now (avoiding increasing speedups later might be the most important thing: e^x vs 2+x etc etc).
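To make the e^x vs 2+x aside concrete, here is a minimal sketch (with made-up growth rates chosen purely for illustration) comparing a one-time additive speedup today against a later increase in the growth rate itself:

```python
import numpy as np

years = np.arange(0, 31)

# Baseline capability growing exponentially at rate r (made-up numbers).
r = 0.25
baseline = np.exp(r * years)

# "2 + x": a one-time additive speedup now.
head_start = baseline + 2.0

# "e^x": the growth rate itself increases later,
# e.g. an arms race kicking in at year 10.
r_race = 0.35
race = np.where(
    years < 10,
    baseline,
    np.exp(r * 10) * np.exp(r_race * (years - 10)),
)

for y in (10, 20, 30):
    print(f"year {y:2d}: baseline {baseline[y]:8.1f}, "
          f"+2 head start {head_start[y]:8.1f}, faster rate {race[y]:8.1f}")
```

With these numbers, the additive head start quickly becomes negligible relative to the curve, while the change in the exponent comes to dominate everything else; that is the sense in which avoiding increased speedups later can matter more than a modest speedup now.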

Note that if the Deep Learning/ML field is talent-limited rather than funding-limited (which seems likely given how much funding it has), the only acceleration effects we should expect are from connectedness and openness (i.e. better institutions). If some of this connectedness comes through collaboration with MIRI, this could very well advance AI Safety Research relative to AI research (via tighter integration of the research programs and of choices of architecture and research direction; this seems especially important for how things play out in the endgame).

In summary, this could actually be really good, it's just too early to tell.

comment by Adam Zerner (adamzerner) · 2015-12-12T16:46:36.232Z · LW(p) · GW(p)

Maybe the apparent incompetence is a publicity game, and they do actually know what they're doing?

comment by bogus · 2015-12-12T16:24:44.463Z · LW(p) · GW(p)

Edit: It continues to look like they don't know what they're doing.

Heh. Keep in mind, we've been through this before.

comment by Furcas · 2015-12-12T15:23:38.885Z · LW(p) · GW(p)

What the hell? There's no sign in that interview that Musk and Altman have read Bostrom or understand the concept of an intelligence explosion.

Replies from: chaosmage, Kaj_Sotala
comment by chaosmage · 2015-12-12T17:40:23.253Z · LW(p) · GW(p)

Musk has read it and has repeatedly and publicly agreed with its key points.

comment by Kaj_Sotala · 2015-12-12T17:42:56.769Z · LW(p) · GW(p)

It seems that they consider a soft takeoff more likely than a hard takeoff, which is still compatible with understanding the concept of an intelligence explosion.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2015-12-12T20:11:51.892Z · LW(p) · GW(p)

Yeah the best argument I can think of for this course is something like: soft takeoff is more likely, and even if hard takeoff is a possibility, preparing for hard takeoff is so terrifically difficult that it doesn't make sense to even try. So let's optimize for the scenario where soft takeoff is what happens.

comment by James_Miller · 2015-12-11T23:10:25.969Z · LW(p) · GW(p)

The variance of outcomes over the next few decades just went way up.

Replies from: None
comment by [deleted] · 2015-12-13T00:27:43.178Z · LW(p) · GW(p)

As a result of this development, and assuming some level of collaboration between MIRI and OpenAI, do you believe the "discount rate" for MIRI donations has increased significantly (i.e. it is even more important to give now than later)?

Replies from: James_Miller
comment by James_Miller · 2015-12-13T00:59:25.980Z · LW(p) · GW(p)

Good question. I'm not sure. Given diminishing marginal returns, if MIRI and OpenAI are doing the same things, then the value of giving to MIRI goes way down. In contrast, if OpenAI is going to speed up the development of AI without putting much thought into friendly AI, then MIRI and OpenAI are complements and it's more important than ever to give lots of money quickly to MIRI.

Replies from: pcm
comment by pcm · 2015-12-13T19:40:42.713Z · LW(p) · GW(p)

Another factor to consider: If AGI is 30+ years away, we're likely to have another "AI winter". Saving money to donate during that winter has some value.

comment by cursed · 2015-12-11T22:37:41.289Z · LW(p) · GW(p)

With Sam Altman (president of Y Combinator) talking so much about AI safety and risk over the last 2-3 months, I was so sure that he was working out a deal to fund MIRI. I wonder why they decided to create their own non-profit instead.

Although on second thought, they're aiming for different goals. While MIRI is focused on safety once strong AI occurs, OpenAI is trying to actually speed up the research of strong AI.

Replies from: Soothsilver, ChristianKl, jacob_cannell
comment by Soothsilver · 2015-12-12T13:46:07.440Z · LW(p) · GW(p)

Nate Soares says there will be some collaboration between OpenAI and MIRI:

https://intelligence.org/2015/12/11/openai-and-other-news/

comment by ChristianKl · 2015-12-11T23:47:32.729Z · LW(p) · GW(p)

It's interesting that their project is called OpenAI while both Facebook and Google open-sourced AI algorithms in the last month and a half.

Neither Google nor Facebook seems to be in the OpenAI list but Amazon Web Services does.

Infosys, the second-largest Indian IT company, is also an interesting entry on the list of funders. There's an article from yesterday about it forming a new company strategy that involves relying heavily on AI.

I expect OpenAI to actually develop software in a way that MIRI doesn't.

comment by jacob_cannell · 2015-12-12T02:49:59.529Z · LW(p) · GW(p)

I was so sure that he was working out a deal to fund MIRI. I wonder why they decided to create their own non-profit instead.

In practice MIRI is more think-tank than research organization. AFAIK MIRI doesn't even yet claim to have a clear research agenda that leads to practical safe AGI. Their research is more abstract/theoretical/pie-in-the-sky and much harder to measure. Given that numerous AI safety think tanks already exist, creating a new actual research non-profit makes sense - it fills an empty niche. Creating a fresh structure gives the organizers/founders more control and allows them to staff it with researchers they believe in.

comment by John_Maxwell (John_Maxwell_IV) · 2015-12-12T03:18:06.419Z · LW(p) · GW(p)

I left this comment on Hacker News exploring whether "AI for everyone" will be a good thing or not. Interested to hear everyone's thoughts.

Replies from: danieldewey, Dr_Manhattan
comment by danieldewey · 2015-12-12T18:12:05.961Z · LW(p) · GW(p)

Very thoughtful post! I was so impressed that I clicked the username to see who it was, only to see the link to your LessWrong profile :)

Replies from: John_Maxwell_IV
comment by Dr_Manhattan · 2015-12-12T15:28:23.284Z · LW(p) · GW(p)

My concern is similar to your last sentence. I think a lot of choices are being made up front without "thinking them through", as you put it. I wish the resources were spent more evenly to enable answering those questions better, with some also allocated to MIRI, which is ironically running a fundraiser right now and getting some 10s and 20s while quite a pile of resources has been allocated, under the safety umbrella, to something I don't (yet) have confidence in.

The good thing is that I hear MIRI is actively in touch with those guys, so I hope the end will be better than the beginning.

comment by devi · 2015-12-12T17:53:58.227Z · LW(p) · GW(p)

It's important to remember the scale we're talking about here. A $1B project (even when considered over its lifetime) in such an explosive field, with such prominent backers, would be interpreted as nothing other than a power grab unless it included a lot of talk about openness (it will still be interpreted that way, but as a less threatening one). Read the interview with Musk and Altman and note how they're talking about sharing data and collaborations. This will include some noticeable short-term benefits for the contributors, and pushing for safety, either by including someone from our circles or by a more safety-focused mission statement, would impede your efforts at gathering such a strong coalition.

It's easy to moan over civilizational inadequacy and moodily conclude that the above shows us how (as a species) we're so obsessed with appropriateness and politics that we will avoid our one opportunity to save ourselves. Sure, do some of that, and then think of the actual effects for a few minutes:

If the Value Alignment research program is solvable in the way we all hope it is (complete with a human-universal CEV, and stable reasoning under self-modification and about other instances of our algorithm), then having lots of implementations running around will be basically the same as distributing the code over lots of computers. If the only problem is that human values won't quite converge, this gives us a physical implementation of the merging algorithm of everyone just doing their own thing and (acausally?) trading with each other.

If we can't quite solve everything that we're hoping for, this does change the strategic picture somewhat. Mainly it seems to push us away from a lot of quick fixes that will likely seem tempting as we approach the explosion: we can't have a sovereign just run the world like some kind of OS that keeps everyone separate, and we'll also be much less likely to make the mistake of creating CelestAI from Friendship is Optimal, something that optimizes most of our goals but has some undesired lock-ins. There are a bunch of variations here, but we seem locked out of strategies that try to achieve some minimum level of the cosmic endowment while possibly failing at getting a substantial constant fraction of our potential, by achieving it at the cost of important values or freedoms.

Whether this is a bad thing or not really depends on how one evaluates two types of risk: (1) the risk of undesired lock-ins from an almost-perfect superintelligence getting too much relative power, and (2) the risk of bad multi-polar traps. Much of (2) seems solvable by robust cooperation, which we seem to be making good progress on. What keeps spooking me are risks due to consciousness: either mistakenly endowing algorithms with it and thereby creating suffering, or evolving to the point that we lose it. These aren't as easily solved by robust cooperation, especially if we don't notice them until it's too late. The real strategic problem right now is that there isn't really anyone we can trust to be unbiased in analyzing the relative dangers of (1) and (2), especially because they pattern-match so well with the ideological split between left and right.

Replies from: Daniel_Burfoot, AlexMennen
comment by Daniel_Burfoot · 2015-12-12T22:53:13.249Z · LW(p) · GW(p)

It's important to remember the scale we're talking about here. A $1B project (...) in such an explosive field

I was sure this sentence was going to complete with something along the lines of "is not such a big deal". Silicon Valley is awash with cash. Mark Zuckerberg paid $22B for a company with 70 employees. Apple has $200B sitting in the bank.

comment by AlexMennen · 2015-12-12T19:05:32.407Z · LW(p) · GW(p)

(2) the risk of bad multi-polar traps. Much of (2) seems solvable by robust cooperation, that we seem to be making good progress on.

Not necessarily. In a multi-polar scenario consisting entirely of Unfriendly AIs, getting them to cooperate with each other doesn't help us.

Replies from: devi
comment by devi · 2015-12-13T00:18:48.497Z · LW(p) · GW(p)

Yes, robust cooperation isn't worth much to us if it's cooperation between the paperclip maximizer and the pencilhead minimizer. But if there are a hundred shards that make up human values, and tens of thousands of people running AIs trying to maximize the values they see fit, it's actually not unreasonable to assume that the outcome, while not exactly what we hoped for, is comparable to incomplete solutions that err on the side of (1) instead.

After having written this I notice that I'm confused and conflating: (a) incomplete solutions in the sense of there not being enough time to do what should be done, and (b) incomplete solutions in the sense of it being actually (provably?) impossible to implement what we right now consider essential parts of the solution. Has anyone got thoughts on (a) vs (b)?

Replies from: AlexMennen
comment by AlexMennen · 2015-12-13T19:11:19.928Z · LW(p) · GW(p)

If value alignment is sufficiently harder than general intelligence, then we should expect that given a large population of strong AIs created at roughly the same time, none of them should be remotely close to Friendly.

comment by casebash · 2015-12-13T02:28:33.005Z · LW(p) · GW(p)

I would argue that this is a terrible, terrible, terrible idea. Once you've got an AI, you could just ask it how to make chemical or biological weapons. Or to hack into various computer systems. Or how to create self-replicating nano-bots. The problem is that not every attack necessarily has a defense; and even if such a defense exists, it is typically much more resource-consuming. For example, if you want to protect your buildings against bomb attacks, an attacker can just target the buildings with minimal defenses. There are also major problems with how good AIs could be at running scams or manipulating people.

comment by turchin · 2015-12-12T09:48:10.272Z · LW(p) · GW(p)

"Musk: I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower. Altman: "I expect that [OpenAI] will [create superintelligent AI], but it will just be open source and useable by everyone <...> Anything the group develops will be available to everyone", "this is probably a multi-decade project <...> there’s all the science fiction stuff, which I think is years off, like The Terminator or something like that. I’m not worried about that any time in the short term"

It's like giving everybody a nuclear reactor and open-source knowledge about how to make a bomb. It looks likely to result in disaster.

I would like to call this type of thinking the "billionaire arrogance" bias. A billionaire thinks that the fact that he is rich is evidence that he is the most clever person in the world. But in fact it is evidence that he was lucky in the past.

Replies from: pico, ChristianKl, Gleb_Tsipursky
comment by pico · 2015-12-12T23:33:04.440Z · LW(p) · GW(p)

Being a billionaire is evidence more of determination than of luck. I also don't think billionaires believe they are the smartest people in the world. But like everyone else, they have too much faith in their own opinions when it comes to areas in which they're not experts. They just get listened to more.

comment by ChristianKl · 2015-12-12T11:42:46.076Z · LW(p) · GW(p)

The whole point of open source is distributed oversight. It also sounds like they will use the Apache license or the MIT license, so there's nothing forcing them to publish everything should they decide that isn't wise in two decades.

Replies from: Viliam
comment by Viliam · 2015-12-14T11:51:23.265Z · LW(p) · GW(p)

It also sounds like they will use the Apache license or the MIT license

Good. I was worried for a moment that our new artificial overlords would transform the whole universe into a zillion tiny copies of the GNU GPL.

comment by Gleb_Tsipursky · 2015-12-12T23:20:02.798Z · LW(p) · GW(p)

I like the "billionaire arrogance" bias, seems it would apply to a lot of areas.

comment by MarkusRamikin · 2016-03-01T08:50:11.992Z · LW(p) · GW(p)

My first thought was: "the way to avoid bad outcomes from bioweapons is to give everyone equal access to bioweapons. Oh, wait..." (Not entirely fair, I know.)

Still, since I heard of this, my quarterly donation to MIRI increased by 5% of my income.

comment by pico · 2015-12-12T05:05:19.444Z · LW(p) · GW(p)

You can tell pretty easily how good research in math or physics is. But in AI safety research, you can fund people working on the wrong things for years and never know, which is exactly the problem MIRI is currently crippled by. I think OpenAI plans to get around this problem by avoiding AI safety research altogether and just building AIs instead. That initial approach seems like the best option. Even if they contribute nothing to AI safety in the near-term, they can produce enough solid, measurable results to keep the organization alive and attract the best researchers, which is half the battle.

What troubles me is that OpenAI could set a precedent for AI safety as a political issue, like global warming. You just have to read the comments on the HN article to find that people don't think they need any expertise in AI safety to have strong opinions about it. In particular, if Sam Altman and Elon Musk have some false belief about AI safety, who is going to prove it to them? You can't just do an experiment like you can in physics. That may explain why they have gotten this far without being able to give well-thought-out answers on some important questions. What MIRI got right is that AI safety is a research problem, so only the opinions of the experts matter. While OpenAI is still working on ML/AI and producing measurable results, it might work to have the people who happened to be wealthy and influential in charge. But if they hope to contribute to AI safety, they will have to hand over control to the people with the correct opinions, and they can't tell who those people are.

Replies from: ChristianKl
comment by ChristianKl · 2015-12-12T10:33:00.124Z · LW(p) · GW(p)

That may explain why they have gotten this far without being able to give well-thought-out answers on some important questions.

Which of the answers do you consider not well-thought-out?

comment by turchin · 2015-12-11T23:39:04.581Z · LW(p) · GW(p)

In fact we have too many good AI projects, which may result in incompatible versions of AI friendliness and wars between AIs. This has often happened in human history before, most typically when two versions of one religion fight each other (like Shi'ah against Sunni, or different versions of Buddhism).

I think it would be much better to concentrate all friendly AI efforts under control of one person or organisation.

Basically we are moving from an underinvestment stage to an overinvestment stage.

Replies from: James_Miller
comment by James_Miller · 2015-12-11T23:44:33.405Z · LW(p) · GW(p)

There is no way around this problem because the run-up to the singularity is going to bring huge economic and military benefits to having slightly better AI than anybody else. Moloch is hard to beat.

Replies from: turchin
comment by turchin · 2015-12-11T23:54:04.326Z · LW(p) · GW(p)

Ok, we have many nuclear powers in the world, but only one main non-proliferation agency, the IAEA, and it somehow works. In the same way, we could have many AI projects in the world, but one agency which provides safety guidelines (and it would be logical for that to be MIRI+Bostrom, as they did the most well-known research on the topic). But if we have many agencies which provide different guidelines, or even several AIs with slightly different friendliness, we are doomed.

Replies from: James_Miller
comment by James_Miller · 2015-12-12T00:57:11.918Z · LW(p) · GW(p)

Strongly disagree that our current nuclear weapons situation "works". At this very moment a large number of hydrogen bombs sit atop missiles, ready at a moment's notice to kill hundreds of millions of people. Letting North Korea get atomic weapons required major civilization-level incompetence.

Replies from: Tem42, turchin
comment by Tem42 · 2015-12-12T01:29:29.650Z · LW(p) · GW(p)

Moreover, the nuclear weapons situation is much simpler than the AI situation. Pretty much everyone agrees that a nuclear weapon going off in an inhabited area is a big deal that can quickly make life worse for all involved. It is not the case that everyone agrees that general AI is such a big deal. All the official nuclear powers know that there will be a significant negative response directed at them if they bomb anyone else. They do not know this about AI.

Replies from: Viliam
comment by Viliam · 2015-12-14T12:08:51.602Z · LW(p) · GW(p)

It will probably be much easier to use an AI against someone secretly.

You could try to drop an atomic bomb on someone without them knowing who dropped the bomb on them. But you cannot drop an atomic bomb on them without them knowing that someone dropped the bomb on them.

But you could give your AI the task of inventing ways to move things closer to your desired outcome without creating suspicion. The obvious options would be to make it happen as a "natural" outcome, or to cast the suspicion on someone else, or maybe to reach the goal in a way that will make people believe it didn't happen or that it wasn't your goal at all. (A superhuman AI could find yet more options; some of them could be incomprehensible to humans. Also options like: the whole world turns into utter chaos; by the way, your original goal is completed, but everyone is now too busy and too confused to even notice it or care about it.) How is anyone going to punish that?

comment by turchin · 2015-12-12T09:38:18.618Z · LW(p) · GW(p)

I agree, it works only in a limited sense: there has been no nuclear war for 70 years, but proliferation and risks still exist and even grow.