OpenAI charter
post by wunan · 2018-04-09T21:02:04.621Z · LW · GW · 2 comments
This is a link post for https://blog.openai.com/openai-charter/
Comments sorted by top scores.
comment by Raemon · 2018-04-10T00:41:29.796Z · LW(p) · GW(p)
This is pretty encouraging – a few years ago OpenAI seemed to have either dubious or confusing goals and models of AGI. (I can't remember whether the "give everyone an AGI" thing was real or just a caricature of their position.)
The principles listed here at least seem reasonable (although I haven't followed OpenAI closely enough to have a sense of their ability as an organization to stick to them).
comment by Rob Bensinger (RobbBB) · 2018-04-10T02:27:39.062Z · LW(p) · GW(p)
I think "give everyone an AGI" comes from this Medium piece that coincided with OpenAI's launch:
Musk: [... W]e want AI to be widespread. There’s two schools of thought — do you want many AIs, or a small number of AIs? We think probably many is good. And to the degree that you can tie it to an extension of individual human will, that is also good. [...]
Altman: We think the best way AI can develop is if it’s about individual empowerment and making humans better, and made freely available to everyone, not a single entity that is a million times more powerful than any human. [...]
Couldn’t your stuff in OpenAI surpass human intelligence?
Altman: I expect that it will, but it will just be open source and useable by everyone instead of useable by, say, just Google. Anything the group develops will be available to everyone. If you take it and repurpose it you don’t have to share that. But any of the work that we do will be available to everyone. [...]
I want to return to the idea that by sharing AI, we might not suffer the worst of its negative consequences. Isn’t there a risk that by making it more available, you’ll be increasing the potential dangers?
Altman: I wish I could count the hours that I have spent with Elon debating this topic and with others as well and I am still not a hundred percent certain. You can never be a hundred percent certain, right? But play out the different scenarios. Security through secrecy on technology has just not worked very often. If only one person gets to have it, how do you decide if that should be Google or the U.S. government or the Chinese government or ISIS or who? There are lots of bad humans in the world and yet humanity has continued to thrive. However, what would happen if one of those humans were a billion times more powerful than another human?
Musk: I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.
I don't think I've ever seen actual OpenAI staff endorse strategies like that, though, and they've always said they consider openness itself conditional. E.g., Andrej Karpathy from a week or two later:
What if OpenAI comes up with a potentially game-changing algorithm that could lead to superintelligence? Wouldn’t a fully open ecosystem increase the risk of abusing the technology?
In a sense it’s kind of like CRISPR. CRISPR is a huge leap for genome editing that’s been around for only a few years, but has great potential for benefiting — and hurting — humankind. Because of these ethical issues there was a recent conference on it in DC to discuss how we should go forward with it as a society.
If something like that happens in AI during the course of OpenAI’s research — well, we’d have to talk about it. We are not obligated to share everything — in that sense the name of the company is a misnomer — but the spirit of the company is that we do by default.
And Greg Brockman from January 2016:
The one goal we consider immutable is our mission to advance digital intelligence in the way that is most likely to benefit humanity as a whole. Everything else is a tactic that helps us achieve that goal.
Today the best impact comes from being quite open: publishing, open-sourcing code, working with universities and with companies to deploy AI systems, etc. But even today, we could imagine some cases where positive impact comes at the expense of openness: for example, where an important collaboration requires us to produce proprietary code for a company. We'll be willing to do these, though only as very rare exceptions and to effect exceptional benefit outside of that company.
In the future, it's very hard to predict what might result in the most benefit for everyone. But we'll constantly change our tactics to match whatever approaches seem most promising, and be open and transparent about any changes in approach (unless doing so seems itself unsafe!). So, we'll prioritize safety given an irreconcilable conflict.