Announcement: AI alignment prize round 3 winners and next round

post by cousin_it · 2018-07-15T07:40:20.507Z · LW · GW · 7 comments

We (Zvi Mowshowitz and Vladimir Slepnev) are happy to announce the results of the third round of the AI Alignment Prize [LW · GW], funded by Paul Christiano. From April 15 to June 30 we received entries from 12 participants, and are awarding $10,000 to two winners.

We are also announcing the fourth round of the prize, which will run until December 31 of this year under slightly different rules. More details below.

The winners

First prize of $7,500 goes to Vanessa Kosoy for The Learning-Theoretic AI Alignment Research Agenda. We feel this is much more accessible than previous writing on this topic, and gives a lot of promising ideas for future research. Most importantly, it explains why she is working on the problems she’s working on, in concrete enough ways to encourage productive debate and disagreement.

Second prize of $2,500 goes to Alexander Turner for the posts Worrying About the Vase: Whitelisting [LW · GW] and Overcoming Clinginess in Impact Measures [LW · GW]. We are especially happy with the amount of good discussion these posts generated.

We will contact each winner by email to arrange the transfer of funds. Many thanks to everyone else who sent in their work!

The next round

We are now announcing the fourth round of the AI Alignment Prize. Due to the drop in the number of entries, we feel that 2.5 months might be too short, so this round will run until the end of this year.

We are looking for technical, philosophical and strategic ideas for AI alignment, posted publicly between July 15 and December 31, 2018. You can submit links to entries by leaving a comment below, or by email to apply@ai-alignment.com. We will try to give feedback on all early entries to allow improvement. Another change from previous rounds is that we ask each participant to submit only one entry (though possibly in multiple parts), rather than a list of several entries on different topics.

The minimum prize pool will again be $10,000, with a minimum first prize of $5,000.

Thank you!

7 comments

Comments sorted by top scores.

comment by Scott Garrabrant · 2018-12-20T03:43:49.952Z · LW(p) · GW(p)

Abram and I submit Embedded Agency [? · GW].

comment by Ben Pace (Benito) · 2018-07-16T08:49:26.370Z · LW(p) · GW(p)

Woop! Congratulations to both :D

comment by interstice · 2018-12-31T22:53:30.147Z · LW(p) · GW(p)

I submit Predictors as Agents [LW · GW].

comment by Charlie Steiner · 2018-07-16T23:28:48.327Z · LW(p) · GW(p)

Congrats Vadim!

comment by Roland Pihlakas (roland-pihlakas) · 2018-11-24T18:58:28.745Z · LW(p) · GW(p)

Submitting my post for early feedback in order to improve it further:

Exponentially diminishing returns and conjunctive goals: Mitigating Goodhart’s law with common sense. Towards corrigibility and interruptibility.

Abstract.

Utility maximising agents have been the Gordian Knot of AI safety. Here, a concrete VNM-rational formula is proposed for satisficing agents, which can be contrasted with the hitherto over-discussed and too general approach of naive maximisation strategies. For example, the 100 paperclip scenario is easily solved by the proposed framework, since infinitely rechecking whether exactly 100 paper clips were indeed produced yields diminishing returns. The formula provides a framework for specifying how we want the agents to simultaneously fulfil, or at least trade off between, the many different common sense considerations, possibly enabling them to even surpass the relative safety of humans. A comparison with the formula introduced in the "Low Impact Artificial Intelligences" paper by S. Armstrong and B. Levinstein is included.
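For readers skimming the thread, here is a minimal illustrative sketch of the general idea of exponentially diminishing per-goal returns combined conjunctively. This is not the formula from the post; the function names and parameters below are hypothetical, chosen only to show why over-optimising any single goal (e.g. endlessly rechecking the paperclip count) stops paying off while neglected goals dominate.

```python
import math

# Hypothetical illustration (not the post's actual formula): each goal's
# utility saturates exponentially, and goals are combined conjunctively,
# so marginal returns on any single goal vanish while neglected goals
# dominate the marginal utility.

def goal_utility(progress, scale=1.0):
    """Exponentially diminishing returns: approaches 1 as progress grows."""
    return 1.0 - math.exp(-progress / scale)

def conjunctive_utility(progress_per_goal):
    """Conjunctive combination: the product is high only if every goal is met."""
    total = 1.0
    for p in progress_per_goal:
        total *= goal_utility(p)
    return total

# Rechecking an already-satisfied goal a tenth time adds almost nothing,
# whereas attending to a neglected goal adds a lot.
print(conjunctive_utility([10.0, 0.1]))  # ~0.095: one goal neglected
print(conjunctive_utility([2.0, 2.0]))   # ~0.748: balanced effort scores higher
```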

comment by Gurkenglas · 2018-07-15T22:17:48.902Z · LW(p) · GW(p)

Oh, right, this. My post [LW · GW] wouldn't have stood a chance, right?

Replies from: Zvi
comment by Zvi · 2018-07-16T17:15:50.894Z · LW(p) · GW(p)

Correct.