Announcement: AI alignment prize round 2 winners and next round
post by cousin_it · 2018-04-16T03:08:20.412Z · LW · GW · 29 comments
We (Zvi Mowshowitz and Vladimir Slepnev) are happy to announce the results of the second round of the AI Alignment Prize [LW · GW], funded by Paul Christiano. From January 15 to April 1 we received 37 entries. Once again, we received an abundance of worthy entries. In this post we name five winners who receive $15,000 in total, an increase from the planned $10,000.
We are also announcing the next round of the prize, which will run until June 30th, largely under the same rules as before.
The winners
First prize of $5,000 goes to Tom Everitt (Australian National University and DeepMind) and Marcus Hutter (Australian National University) for the paper The Alignment Problem for History-Based Bayesian Reinforcement Learners. We're happy to see such a detailed and rigorous write-up of possible sources of misalignment, tying together a lot of previous work on the subject.
Second prize of $4,000 goes to Scott Garrabrant (MIRI) for these LW posts:
- Robustness to Scale [LW · GW]
- Sources of Intuitions and Data on AGI [LW · GW]
- Don't Condition on no Catastrophes [LW · GW]
- Knowledge is Freedom [LW · GW]
Each of these represents a small but noticeable step forward, adding up to a sizeable overall contribution. Scott also won first prize in the previous round.
Third prize of $3,000 goes to Stuart Armstrong (FHI) for his post Resolving human values, completely and adequately [LW · GW] and other LW posts during this round. Human values can be under-defined in many possible ways, and Stuart has been very productive at teasing them out and suggesting workarounds.
Fourth prize of $2,000 goes to Vanessa Kosoy (MIRI) for the post Quantilal control for finite MDPs. The idea of quantilization might help mitigate the drawbacks of extreme optimization, and it's good to see a rigorous treatment of it. Vanessa is also a second-time winner.
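For readers who haven't met the idea before: roughly, a q-quantilizer samples an action from the top q fraction of some base distribution, ranked by a proxy utility, instead of taking the argmax, which limits how hard it can exploit errors in that proxy. Here is a toy sketch of that general idea (an illustration only, not code from the paper and not its finite-MDP construction):

```python
import random

def quantilize(actions, proxy_utility, q=0.1, n_samples=1000, rng=random):
    """Toy q-quantilizer: draw candidates from a base distribution (here uniform
    over `actions`), keep the top q fraction by proxy utility, and pick one of
    those uniformly at random instead of taking the argmax."""
    candidates = [rng.choice(actions) for _ in range(n_samples)]
    ranked = sorted(candidates, key=proxy_utility, reverse=True)
    top = ranked[: max(1, int(q * len(ranked)))]
    return rng.choice(top)

# Example: with a proxy utility that rewards extremes, a 5% quantilizer still
# favors high-utility actions but rarely lands on the most extreme one.
action = quantilize(list(range(-100, 101)), proxy_utility=lambda a: a, q=0.05)
```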
Fifth prize of $1,000 goes to Alex Zhu (unaffiliated) for these LW posts:
- Reframing misaligned AGI's: well-intentioned non-neurotypical assistants [LW · GW]
- Metaphilosophical competence can't be disentangled from alignment [LW · GW]
- Corrigible but misaligned: a superintelligent messiah [LW · GW]
- My take on agent foundations: formalizing metaphilosophical competence [LW · GW]
Alex's posts have good framings of several problems related to AI alignment, and led to a surprising amount of good discussion.
We will contact each winner by email to arrange transfer of money.
We would also like to thank everyone else who sent in their work! The only way to make progress on AI alignment is by working on it, so your participation is the whole point.
The next round
We are now also announcing the third round of the AI alignment prize.
We're looking for technical, philosophical and strategic ideas for AI alignment, posted publicly between January 1, 2018 and June 30, 2018 and not submitted for previous iterations of the AI alignment prize. You can submit your entries in the comments here or by email to apply@ai-alignment.com. We may give feedback on early entries to allow improvement, though our ability to do this may become limited by the volume of entries.
The minimum prize pool will again be $10,000, with a minimum first prize of $5,000.
Thank you!
29 comments
comment by Scott Garrabrant · 2018-04-16T03:45:35.466Z · LW(p) · GW(p)
It seems like maybe there should be an archive page for past rounds.
↑ comment by Scott Garrabrant · 2018-04-16T17:56:24.826Z · LW(p) · GW(p)
Maybe in the form of a LW sequence.
comment by Scott Garrabrant · 2018-06-05T18:28:23.839Z · LW(p) · GW(p)
I might post something else later this month, but if not, my submission is my new Prisoners' Dilemma thing [LW · GW].
↑ comment by Scott Garrabrant · 2018-06-27T02:08:35.622Z · LW(p) · GW(p)
Ok, I have two other things to submit:
Counterfactual Mugging Poker Game [LW · GW] and Optimization Amplifies [LW · GW].
I hope that your decision procedure includes a part where, if I win, you choose whichever subset of my posts you most want to draw attention to. I think that a single post would get a larger signal boost than each post in a group of three, and I would not be offended if one or two of my posts were cut from the announcement post to increase the signal for other things.
comment by Qiaochu_Yuan · 2018-04-16T03:13:28.127Z · LW(p) · GW(p)
Congrats everyone! Really good to see this sort of thing happening.
↑ comment by tom4everitt · 2018-05-20T07:40:42.167Z · LW(p) · GW(p)
Thank you! Really inspiring to win this prize. As John Maxwell stated in the previous round, the recognition is more important than the money. Very happy to receive further comments and criticism by email at tom4everitt@gmail.com. Through debate we grow :)
comment by Kaj_Sotala · 2018-06-29T16:27:55.443Z · LW(p) · GW(p)
I submit "Shaping Economic Incentives for Collaborative AGI [LW · GW]" for the next prize.
comment by TurnTrout · 2018-06-16T03:14:24.364Z · LW(p) · GW(p)
My submission is whitelisting [LW · GW]!
↑ comment by TurnTrout · 2018-06-30T22:51:12.635Z · LW(p) · GW(p)
Another one: Overcoming Clinginess in Impact Measures [LW · GW]. Also, please download the latest version of my paper, and don't use the one I emailed earlier.
↑ comment by Roland Pihlakas (roland-pihlakas) · 2018-06-25T20:42:04.675Z · LW(p) · GW(p)
To people who become interested in the topic of side effects and whitelists: I would like to add links to a couple of additional articles from my own past work on related subjects, which may be of interest for developing the ideas further, for discussion, or for cooperation.
The principles are based mainly on the idea of competence-based whitelisting, and on preserving reversibility (keeping future options open) as the primary goal of the AI, while all task-based goals are secondary.
https://medium.com/threelaws/implementing-a-framework-of-safe-robot-planning-43636efe7dd8
More technical details / a possible implementation of the above.
This is intended as a comment, not as a prize submission, since I first published these texts 10 years ago.
comment by Stuart_Armstrong · 2018-04-24T13:26:05.433Z · LW(p) · GW(p)
Hey there!
I don't know if you'll consider these reward-function-learning posts as candidates, or whether you'll see them as rephrasings of old ideas. There are some new results in there - the different value functions, and the fact that Q-learning agents can learn un-riggable learning processes.
https://www.lesswrong.com/posts/55hJDq5y7Dv3S4h49/reward-function-learning-the-value-function
https://www.lesswrong.com/posts/upLot6eG8cbXdKiFS/reward-function-learning-the-learning-process
In any case, thanks for running these competitions!
comment by Charlie Steiner · 2018-04-19T23:44:44.110Z · LW(p) · GW(p)
I made some notes as I read the winner. Is Tom Everitt on here? Should I send these somewhere?
2.2: The description of ξ is a bit odd - it seems like w_ν is your actual prior over models.
2.4: I'm not totally sold on some parts where it seems like you equate agents and policies. I think the thing we want to compare is not the difference in utility given the same policy, but the difference in true utility between policies that are chosen.
If the human and the AI have the same utility-maximizing policy, they can have nonzero "misalignment" if they have different standard deviations in utility. Meanwhile, even if there is zero misalignment on policy π, that might not be the utility-maximizing policy for the AI.
Should footnote 1 read CA rather than MA?
3.5: It seems like an agent is incentivized to create the extra observation systems, and then encode the important facts about the world within delusional high-reward states of the original observation channel, even assuming it still has to use the original observation and action channels. But it might not even require that facade - a corruption-aware system might notice that a policy that involves changing its hardware and how it uses it will get it larger rewards even according to the original utility function over percepts - beneficial corruption, if you will.
5.5: Decoupled data is nice, but I'm concerned that it's not fully addressing the issue it gets brought up to address, which is fundamentally the issue of induction - how to predict reward for novel situations. An agent can learn as much as it wants about human preferences over normal states of the world, you can even tell it "stories" about unseen states, but it can still be misaligned on novel states never mentioned by humans. The issue is not just about the data the agent sees, but about how it interprets it.
5.5.3: For the counterfactual "default" policy, there are still some unsolved problems with matching human intuition. The classic example is an agent whose default policy is no output - and so it thinks that the default state of the world is the programmers acting confused and trying to fix the agent, because that's what happens with the default policy.
5.6: (thinking about fig. 5.2) I'm hesitant to call any of these paths aligned, and I think it has to do with the difference between good agent design and successful learning of human values. This is a bit related to the inductive problem that I don't think decoupled data fully addresses - it's possible for one to code up something that tries to learn from humans, and successfully learns and faithfully maximizes some value function, but that value function is not the human value function. But this problem is quite ill-understood - most current work on it focuses on "corrigibility" in the intuitive sense of being willing to defer to humans even after you think you've learned human preferences.
↑ comment by tom4everitt · 2018-05-03T08:01:58.045Z · LW(p) · GW(p)
Thanks for your comments, much appreciated! I'm currently in the middle of moving continents, will update the draft within a few weeks.
comment by romeostevensit · 2018-04-16T18:41:15.474Z · LW(p) · GW(p)
I'm not sure I see the point of awarding an already-in-the-works 67-page paper that happened to be released at the time of the competition, if the goal of the prize is to stimulate AI work that otherwise would not have happened.
↑ comment by paulfchristiano · 2018-04-18T17:08:43.744Z · LW(p) · GW(p)
Personally, my long-term goal is a world where high-quality work on alignment is consistently funded, and where people doing high-quality work on alignment have plenty of money. I think that an effort to restrict to counterfactually-additional alignment work would "save" some money (in the sense that I'd have the money rather than some researcher who is doing alignment work) but wouldn't be great for that long-term goal.
Also, if you actually think about the dynamics, they are pretty crappy, even if you only avoid "obvious" cases. For example, it would become really hard for anyone to actually assess counterfactual impact, since every winner would need to make it look like there was at least a plausible counterfactual impact. (I already wish there was less implicit social pressure in that direction.)
↑ comment by romeostevensit · 2018-04-18T23:04:21.519Z · LW(p) · GW(p)
On reflection I strongly agree that social pressure around counterfactualness is a net harm for motivation.
↑ comment by Scott Garrabrant · 2018-04-17T01:41:05.795Z · LW(p) · GW(p)
I think you want to reward output, rather than just output that would not have otherwise happened.
This is similar to how, if you want to train calibration, you have to optimize your log score and simply treat your lack of calibration as an opportunity to increase that score.
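To spell out the calibration analogy, here is a toy sketch, assuming a binary-forecast setting: the log score rewards the probability you assigned to what actually happened, and because it is a proper scoring rule, its expectation is maximized by reporting your true probability, so miscalibration shows up directly as lost score.

```python
import math

def log_score(prob, outcome):
    """Logarithmic score for a binary forecast: the log of the probability
    assigned to what actually happened (higher is better, always <= 0)."""
    return math.log(prob if outcome else 1.0 - prob)

def expected_log_score(reported_p, true_p):
    """Expected log score when the event truly occurs with probability true_p
    but the forecaster reports reported_p."""
    return true_p * math.log(reported_p) + (1.0 - true_p) * math.log(1.0 - reported_p)

# The expectation is maximized by reporting the true probability (the score is
# "proper"), so any systematic miscalibration is score left on the table.
true_p = 0.7
best = max((i / 100 for i in range(1, 100)), key=lambda p: expected_log_score(p, true_p))
assert abs(best - true_p) < 0.01
```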
↑ comment by Ofer (ofer) · 2018-04-16T21:46:48.181Z · LW(p) · GW(p)
If I understand correctly, one of the goals of this initiative is to increase the prestige associated with making useful contributions to AI safety. For that purpose, it doesn't matter whether the prize incentivized the winning authors or not. But it is important that enough people trust that the main criterion for selecting the winning works is usefulness.
↑ comment by Raemon · 2018-04-16T22:38:24.033Z · LW(p) · GW(p)
My take on this is that the ideal version of this prize selects for both usefulness and counterfactualness, but selecting for counterfactualness without producing weird side effects seems hard. (I do think it's worth spending an hour or two thinking about how to properly incentivize or reward counterfactualness; it's just that, if you haven't come up with anything, strictly rewarding quality/usefulness seems better.)
↑ comment by romeostevensit · 2018-04-18T01:26:18.938Z · LW(p) · GW(p)
> selecting for counterfactualness without producing weird side effects seems hard
Agreed; I just thought the winner in this case was over the top enough not to be in the fuzzy boundary, but clearly on the other side of it.
↑ comment by cousin_it · 2018-04-18T13:20:51.589Z · LW(p) · GW(p)
Our rules don't draw that boundary at the moment, and I'm not even sure how it could be phrased. Do you have any suggestions?
↑ comment by romeostevensit · 2018-04-18T23:06:30.734Z · LW(p) · GW(p)
I wouldn't be in favor of adding explicit rules, for Goodhart-related reasons. I think prizes and grants should have the minimum rules needed to account for basic logistics, and the rest should be illegible.
comment by Dr_Manhattan · 2018-04-16T16:47:25.088Z · LW(p) · GW(p)
I wonder if it's possible to help with this in a tax-advantaged way. Maybe set up a donation to MIRI earmarked for this kind of thing.
↑ comment by cousin_it · 2018-04-16T22:05:08.417Z · LW(p) · GW(p)
The prize isn't affiliated with MIRI or any other organization, it's just us. But maybe we can figure something out. Can you email me (vladimir.slepnev at gmail)?
↑ comment by habryka (habryka4) · 2018-04-17T00:53:40.144Z · LW(p) · GW(p)
BERI seems like the kind of organization that would be happy to help with this.
comment by Ben Pace (Benito) · 2018-04-16T09:59:41.858Z · LW(p) · GW(p)
Congrats to all winners! :-) I've read about half the winning submissions, and am looking forward to looking through the other half.
comment by Roland Pihlakas (roland-pihlakas) · 2018-07-01T00:16:34.691Z · LW(p) · GW(p)
Hello!
Here are my submissions for this round. They are all strategy-related.
The first one is a project for popularising AI safety topics. The text itself is not technical, but the proposed project is technological in nature.
https://medium.com/threelaws/proposal-for-executable-and-interactive-simulations-of-ai-safety-failure-scenarios-7acab7015be4
As a bonus, I would add a couple of non-technical ideas about possible economic or social partial solutions for slowing down the AI race (which would allow more time for solving AI alignment):
https://medium.com/threelaws/making-the-tax-burden-of-robot-usage-equal-to-the-tax-burden-of-human-labour-c8e97df751a1
https://medium.com/threelaws/starting-a-human-self-sufficiency-movement-the-handicap-principle-eb3a14f7f5b3
The latter text is not totally new - it is a distilled and edited version of one of my older texts, which was originally several times longer and had a narrower goal than the new one.
Regards:
Roland
comment by avturchin · 2018-04-16T14:09:26.790Z · LW(p) · GW(p)
My new submission is "Dangerous AI is possible before 2030"; a stable version is here: https://philpapers.org/rec/TURPOT-5
The draft was first discussed on LW on April 3.
I am also going to submit another text later, which I will link here.