Proposal for "Open Problems in Friendly AI"
post by lukeprog · 2012-06-01T02:06:52.851Z · LW · GW · Legacy · 14 comments
Series: How to Purchase AI Risk Reduction
One more project SI is considering...
When I was hired as an intern for SI in April 2011, one of my first proposals was that SI create a technical document called Open Problems in Friendly Artificial Intelligence. (Here is a preview of what the document would be like.)
When someone becomes persuaded that Friendly AI is important, their first question is often: "Okay, so what's the technical research agenda?"
So You Want to Save the World maps out some broad categories of research questions, but it doesn't explain what the technical research agenda is. In fact, SI hasn't yet explained much of its technical research agenda at all.
Much of the technical research agenda should be kept secret for the same reasons you might want to keep secret the DNA for a synthesized supervirus. But some of the Friendly AI technical research agenda is safe to explain so that a broad research community can contribute to it.
This research agenda includes:
- Second-order logical version of Solomonoff induction.
- Non-Cartesian version of Solomonoff induction.
- Construing utility functions from psychologically realistic models of human decision processes.
- Formalizations of value extrapolation. (Like Christiano's attempt.)
- Microeconomic models of self-improving systems (e.g. takeoff speeds).
- ...and several other open problems.
The goal would be to define the open problems as formally and precisely as possible. Some will be more formalizable than others, at this stage. (As a model for this kind of document, see Marcus Hutter's Open Problems in Universal Induction and Intelligence.)
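To give a flavor of the level of formality we're aiming for, here is a minimal sketch of the standard (Cartesian, computation-based) formulation of Solomonoff induction that the first two items would generalize; this is just the usual textbook construction, stated informally:

```latex
% Solomonoff prior over finite binary strings x, where U is a universal prefix Turing machine
% and \ell(p) is the length of program p in bits. The sum ranges over every program whose
% output begins with x, so shorter explanations of the data get exponentially more weight.
M(x) \;=\; \sum_{p \,:\, U(p)\,=\,x*} 2^{-\ell(p)}

% Prediction by conditioning: the probability that the next bit is b, given observations x_{1:t}.
M(b \mid x_{1:t}) \;=\; \frac{M(x_{1:t}\,b)}{M(x_{1:t})}
```

Roughly, the first item asks what replaces the sum over programs when hypotheses are expressed in second-order logic rather than as computable programs; the second asks what replaces the Cartesian assumption that the inductor sits outside its environment reading bits off an input tape, when in fact it is embedded in (and computed by) the world it is trying to predict.

Similarly, the microeconomic item can be illustrated with a deliberately crude toy model (the functional form and parameters here are placeholders, not a claim about real dynamics): suppose a system's rate of capability growth is proportional to a power of its current capability.

```latex
% Toy self-improvement dynamic: I(t) = capability at time t, c > 0 a constant,
% k = returns on cognitive reinvestment.
\frac{dI}{dt} \;=\; c\, I^k

% k < 1: polynomial (sub-exponential) growth -- a slow-takeoff regime
% k = 1: exponential growth
% k > 1: finite-time blowup, I(t) \to \infty as t \to t^* -- a hard-takeoff regime
```

The microeconomic open problem is, in effect, to replace this toy equation with models disciplined by what we know about returns to hardware, algorithms, and cognitive reinvestment, so that "how fast is takeoff?" becomes a question about parameters rather than a matter of intuition.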
Nobody knows the open problems in Friendly AI research better than Eliezer, so it would probably be best to approach the project this way:
- Eliezer spends a month writing an "Open Problems in Friendly AI" sequence for Less Wrong.
- Luke organizes a (fairly large) research team to present these open problems with greater clarity and thoroughness, in mainstream academic form.
- These researchers collaborate for several months to put together the document, involving Eliezer when necessary.
- SI publishes the final document, possibly in a journal.
Estimated cost:
- 2 months of Eliezer's time.
- 150 hours of Luke's time.
- $40,000 for contributed hours from staff researchers, remote researchers, and perhaps domain experts (as consultants) from mainstream academia.
14 comments
Comments sorted by top scores.
comment by Vladimir_Nesov · 2012-06-01T21:32:44.574Z · LW(p) · GW(p)
The implicit analogy drawn in the introduction between Eliezer Yudkowsky and both Henri Poincaré and David Hilbert gives a bad arrogance vibe.
comment by Normal_Anomaly · 2012-06-01T12:18:12.577Z · LW(p) · GW(p)
This is the sort of thing I want to see more of from SI: both the technical research agenda info and the regular posts by Luke about what SI is doing right now. Thanks!
comment by RobertLumley · 2012-06-01T02:59:59.839Z · LW(p) · GW(p)
Might it be worth tagging each of these potential proposals with a certain tag so we could look at them all and evaluate them comparatively?
Replies from: lukeprog
comment by Wei Dai (Wei_Dai) · 2012-06-06T00:31:22.198Z · LW(p) · GW(p)
- Second-order logical version of Solomonoff induction.
I'm not sure this is the right problem. See this post I just made.
- Non-Cartesian version of Solomonoff induction.
UDT seems to solve this well enough that I no longer consider it a major open problem. Is this not your or Eliezer's evaluation?
comment by asr · 2012-06-01T05:28:19.866Z · LW(p) · GW(p)
Interesting direction.
Couple small questions:
Who is the intended audience for this document? Would you be able to name specific researchers who you are hoping to influence?
As an alternative formulation, what's the community in which you hope to publish this?
Why is Eliezer-time measured in months, and Luke-time in hours?
Do you expect to involve folks who haven't previously been involved with SIAI? If so, when?
How large a research team / author list would you expect the final version to have? Is fairly large "5" or "15"?
↑ comment by Solvent · 2012-06-01T06:24:03.780Z · LW(p) · GW(p)
Why is Eliezer-time measured in months, and Luke-time in hours?
That's a good question, especially considering that 250 hours is on the order of months (6 weeks at 40 hours/week, or 4 weeks at 60 hours/week).
EDIT: Units confusion
Replies from: lukeprog, faul_sname
↑ comment by lukeprog · 2012-06-01T09:45:25.883Z · LW(p) · GW(p)
Oops, I meant 150 hours for me.
Eliezer's time is measured in months because he tracks his time in days, not hours, so it's easier for me to predict how many days (which I can convert to months) something will take him to complete than how many hours it will take.
↑ comment by faul_sname · 2012-06-01T17:32:13.314Z · LW(p) · GW(p)
I get 6 weeks at 40 hours/week.
Replies from: Solvent
↑ comment by lukeprog · 2012-06-01T16:16:19.629Z · LW(p) · GW(p)
Who is the intended audience for this document?
Every smart person who becomes fairly persuaded to care about AI risk and then asks us, "Okay, so what's the technical research agenda?" This is a lot of people.
what's the community in which you hope to publish this?
It doesn't matter much. It's something we would email to particular humans who are already interested.
Why is Eliezer-time measured in months, and Luke-time in hours?
See here.
Do you expect to involve folks who haven't previously been involved with SIAI? If so, when?
Possibly, e.g. domain experts in micro-econ. When we need them.
How large a research team / author list would you expect the final version to have? Is fairly large "5" or "15"?
My guess is 10-ish.
comment by [deleted] · 2015-07-29T10:03:44.788Z · LW(p) · GW(p)
Much of the technical research agenda should be kept secret for the same reasons you might want to keep secret the DNA for a synthesized supervirus. But some of the Friendly AI technical research agenda is safe to explain so that a broad research community can contribute to it.
I'm uncomfortable with this.
Since this was written in 2012, has MIRI updated its stance on self-censoring the AI research agenda, and can this be demonstrated with reference to formerly censored material or otherwise?
If not, are there alternative Friendly-AI-focused organisations that accept donations and censor differently, or don't censor at all?
Thanks for your disclosures Lukeprog, I appreciate the general candor and accountability. It was also nice to read that you were an SI intern in 2011 - quickly you rose to the top! :)
comment by private_messaging · 2012-06-01T12:25:13.145Z · LW(p) · GW(p)
Second-order logical version of Solomonoff induction. Non-Cartesian version of Solomonoff induction.
This raises the question: do you even know what Solomonoff induction is? (edit: to be honest, my best guess is that you don't even know the terms with which to know the terms with which to know the terms... a couple dozen layers deep, with which to know what it is. The topic is pretty complicated, but looks pretty simple.)
Construing utility functions
If you manage to construct a utility function (and by construct, I mean formally define in mathematics, build up from elementary operations) that actually refers to the real-world quantities an agent is to maximize (as opposed to finding maxima of functions in the abstract mathematical sense), that will be a step towards robot apocalypse and away from the current approaches, which are safe precisely because they don't work the way you think a utility maximizer would work (in the sense of not leading to the doom scenarios you predict to arise from 'utility maximization'). (I am pretty sure you won't manage to construct it, though, and even if you do, nobody competent enough to implement it would be dumb enough to do so.)