Announcing the AI Alignment Prize
post by cousin_it · 2017-11-03T15:47:00.092Z · LW · GW · 78 comments
Stronger-than-human artificial intelligence would be dangerous to humanity. It is vital that any such intelligence's goals be aligned with humanity's goals. Maximizing the chance that this happens is a difficult, important, and under-studied problem.
To encourage more and better work on this important problem, we (Zvi Mowshowitz and Vladimir Slepnev) are announcing a $5000 prize for publicly posted work advancing understanding of AI alignment, funded by Paul Christiano.
This prize will be awarded based on entries gathered over the next two months. If the prize is successful, we will award further prizes in the future.
The prize is not backed by or affiliated with any organization.
Rules
Your entry must be published online for the first time between November 3 and December 31, 2017, and contain novel ideas about AI alignment. Entries have no minimum or maximum size. Important ideas can be short!
Your entry must be written by you, and submitted before 9pm Pacific Time on December 31, 2017. Submit your entries either as links in the comments to this post, or by email to apply@ai-alignment.com. We may provide feedback on early entries to allow improvement.
We will award $5000 to between one and five winners. The first place winner will get at least $2500. The second place winner will get at least $1000. Other winners will get at least $500.
Entries will be judged subjectively. Final judgment will be by Paul Christiano. Prizes will be awarded on or before January 15, 2018.
What kind of work are we looking for?
AI alignment focuses on ways to ensure that future smarter-than-human intelligence will have goals aligned with the goals of humanity. Many approaches to AI alignment deserve attention. This includes technical and philosophical topics, as well as strategic research about related social, economic, or political issues. A non-exhaustive list of technical and other topics can be found here.
We are not interested in research on the dangers of existing machine learning systems (commonly called AI) that do not have smarter-than-human intelligence. These concerns are also understudied, but they are not the subject of this prize except in the context of future smarter-than-human intelligence. We are also not interested in general AI research. We care about AI alignment, which may or may not also advance the cause of general AI research.
(Addendum: the results of the prize and the rules for the next round have now been announced.)
78 comments
Comments sorted by top scores.
comment by Scott Garrabrant · 2017-12-30T16:49:23.746Z · LW(p) · GW(p)
Here is my submission.
Thank you for motivating me to write this blog post I have been putting off for a while.
Disclaimer: If you want to measure only the contribution that came in November or later, compare to this post, which has one fewer category, no names, fewer examples, nothing about mitigation, and worse presentation.
I think this is an important idea, so I appreciate feedback, especially about presentation.
↑ comment by Scott Garrabrant · 2018-01-08T15:13:11.169Z · LW(p) · GW(p)
Another Disclaimer: The outline at the beginning was added after the deadline, thanks to Raemon and other people who provided examples.
comment by Caspar Oesterheld (Caspar42) · 2017-12-21T16:10:18.655Z · LW(p) · GW(p)
You don't mention decision theory in your list of topics, but I guess it doesn't hurt to try.
I have thought a bit about what one might call the "implementation problem of decision theory". Let's say you believe that some theory of rational decision making, e.g., evidential or updateless decision theory, is the right one for an AI to use. How would you design an AI to behave in accordance with such a normative theory? Conversely, if you just go ahead and build a system in some existing framework, how would that AI behave in Newcomb-like problems?
There are two pieces on this topic that I uploaded or finished in November and December. The first is a blog post noting that futarchy-type architectures would, by default, implement evidential decision theory. The second is a draft titled "Approval-directed agency and the decision theory of Newcomb-like problems".
For anyone who's interested in this topic, here are some other related papers and blog posts:
- Another one I wrote: "Doing what has worked well in the past leads to evidential decision theory" (I updated this in December, but it was first written and uploaded in September, so it doesn't count for the competition.)
- Albert and Heiner (2001): "An Indirect-Evolution Approach to Newcomb's Problem"
- Meyer, Feldmaier and Shen (2016): "Reinforcement Learning in Conflicting Environments for Autonomous Vehicles"
So far, my research and the papers by others I linked have focused on classic Newcomb-like problems. One could also discuss how existing AI paradigms relate to other issues of naturalized agency, in particular self-locating beliefs and naturalized induction, though here it seems more as though existing frameworks just lead to really messy behavior.
Send comments to firstnameDOTlastnameATfoundational-researchDOTorg. (Of course, you can also comment here or send me a LW PM.)
comment by Ben Pringle (ben-pringle) · 2017-11-08T00:27:23.345Z · LW(p) · GW(p)
I saw a talk earlier this year that mentioned this 2015 Corrigibility paper as a good starting point for someone new to alignment research. On the assumption that's still true, I started writing up some thoughts on a possible generalization of the method in that paper.
Anyway, I'm submitting this draft early in the hope of getting some feedback on whether I'm on the right track:
GeneralizedUtilityIndifference_Draft_Latest.pdf (edited)
The new version does better on sub-agent shutdown and eliminates the "managing the news" problem.
(Let me know if someone already thought of this approach!)
EDIT 2017-11-09: filled in the section on the -action model.
comment by Stuart_Armstrong · 2017-11-05T19:16:08.348Z · LW(p) · GW(p)
Should I submit? Working on this is my job, so maybe it's better to encourage others to come on board?
comment by Zvi · 2017-11-05T16:59:28.691Z · LW(p) · GW(p)
What should we be doing to help get more people to enter, whether by spreading the word or some other way? We want this to work and result in good things, and since it's iteration one, there's doubtless a lot we're not doing right.
↑ comment by whpearson · 2017-11-05T17:37:30.214Z · LW(p) · GW(p)
Some random initial thoughts.
Post on the SSC open thread? Or the EA forum open thread (maybe the EA subreddit too). I've seen it posted to the control problem subreddit.
I'll post it on the AI Safety Danmark Facebook page, although I've never managed to go to one of their reading groups (the post is now pending).
Ask the people running LesserWrong nicely whether you can see the referrers for traffic coming into this thread; that will give you an idea of where most of the traffic comes from.
To get more people to enter, imagine you had run the competition previously: pick N articles out there on the internet and link them as things that would have been shortlisted. This would give people an idea of what you are looking for. Try to pick a diverse range, or you might get submissions clustered around one type.
Perhaps think about trying to get some publicity to sweeten the deal, e.g. the winner also gets featured in X prestigious place (if the submitter wants it to be). Although maybe only after the quality has been shown to be high enough, after the first couple of iterations.
↑ comment by habryka (habryka4) · 2017-11-06T02:23:02.115Z · LW(p) · GW(p)
Happy to give you any analytics data for the page.
↑ comment by Raemon · 2017-11-05T19:49:55.737Z · LW(p) · GW(p)
Yeah, my initial gut sense was "oh man, this seems important, but I'm worried it'd quietly fade out of public consciousness by default." Much of my advice echoes whpearson's. Some additional thoughts (mostly fleshing out why I think whpearson's suggestions are important):
i. Big Activation Costs
You are asking people to do a hard thing. You're providing money to incentivize them, but people are lazy - they will forget, or start but not get around to finishing, or not finish until it's too late.
Anything to reduce the activation cost is good.
1) Maybe make the first ask be for people to apply if they might be interested, with as low a cost to doing so as possible (while gaining at least some information about people and weeding out dead wood).
This gets people slightly committed, and gives you the opportunity to spam a much narrower subset of people to remind them. (see spam section)
2) It's ambiguous to me what kind of writing you're looking for, which in turn makes me unsure whether it'd be a good use of my time to work on this, which makes me hesitate. (I'm currently assuming this is not the right use of my talents, for both altruistic and selfish reasons, but I can imagine a slightly different version of me for whom it'd be ambiguous.)
Whpearson's "list good existing articles, as diverse as possible" helps counteract part of this, but still doesn't answer questions like "should I be doing this if it's currently my day job? Presumably the point is to get more people working on this" (and the corollary: if professional AI safety workers are submitting, what chance do I have of contributing something useful?).
(Relatedly: I'd originally thought you should spell out what sort of questions you were looking to resolve, then saw you had linked to Paul Christiano's doc. I think attempting to summarize the doc might accidentally focus on too narrow a domain, but the current link to the doc is so small that I missed it the first time.)
ii. Spam vs Valuable-Self-Promoting-Machinery
By default, you need to spam things a lot. One way to get the word out is to post in all the relevant FB groups, Discords, etc. - multiple times, so that when people forget and it fades to the back burner, it doesn't disappear forever.
Being forced to spam everyone once a week is a bad equilibrium. If you can figure out how to spam exactly the people who matter (see i.1), that's better.
If you can spam in a way that provides value rather than sucking up attention, that's better. If you can make the thing spam itself in a way that provides value, better still.
One way of spamming-that-provides-value might be a couple of followup posts that do things like "provide suggestions and reading lists for people who are considering working on this but don't quite know how to approach the problem" (targeting the sort of person who you think almost has the skills to contribute, and is just missing a few key elements that are easy to teach).
Another might be encouraging people to post their drafts publicly to attract additional attention and comments that keep the thing in public consciousness. (This may work against the contest model, though.)
↑ comment by avturchin · 2017-11-05T17:34:30.997Z · LW(p) · GW(p)
Put it in all relevant facebook groups?
↑ comment by habryka (habryka4) · 2017-11-06T02:23:21.232Z · LW(p) · GW(p)
I think this is the most important thing, and I would be happy to help with that.
comment by Dr_Manhattan · 2017-11-03T15:51:10.893Z · LW(p) · GW(p)
Zvi/Vladimir, what's your role in this - are you the judges?
comment by Dan Fitch (dan-fitch) · 2017-12-09T04:33:50.477Z · LW(p) · GW(p)
I don't know if this is a useful "soft" submission, considering I am still reading and learning in the area.
But I think the current metaphors (paperclips, etc.) are not very persuasive for convincing folks in the world at large that value alignment is a BIG, HARD PROBLEM. Here is my attempt to add a possibly-new metaphor to the mix: https://nilscript.wordpress.com/2017/11/26/parenting-alignment-problem/
comment by Shmi (shminux) · 2017-12-31T02:06:47.596Z · LW(p) · GW(p)
Posted on my blog, but I might as well link it here. It's not of the quality that Paul Christiano seeks, but it might be of some interest, though many of the same points have been discussed over and over, here and elsewhere, before.
↑ comment by Defective Altruist (defective-altruist) · 2017-12-31T04:51:05.445Z · LW(p) · GW(p)
Your link is broken, did you mean to link https://edgeofgravity.wordpress.com/2017/12/31/ai-alignment-bubblicious/ ?
↑ comment by cousin_it · 2017-12-31T13:00:13.865Z · LW(p) · GW(p)
Thank you! Acknowledged (though your link didn't work for me, Defective Altruist's did). Do you have an email address for contact?
↑ comment by Shmi (shminux) · 2018-01-07T08:45:24.300Z · LW(p) · GW(p)
shminux at gmail should work. Thank you for the acknowledgment! Tried to fix the link above, not sure how well it worked.
comment by Defective Altruist (defective-altruist) · 2017-12-29T09:43:09.363Z · LW(p) · GW(p)
Submission: http://futilitymonster.tumblr.com/post/169068920534/the-true-ai-box
Contact: defectivealtruist at g mail
comment by Daniel Wallis (daniel-wallis) · 2017-12-28T20:47:52.978Z · LW(p) · GW(p)
Is "publishing" on google docs ok? Here's a link:
https://docs.google.com/document/d/1hIzJNNDfWKwAK-lSgs_w-CYa15b0SjdRNDzmh5jJMMU/edit?usp=sharing
↑ comment by cousin_it · 2017-12-31T12:52:37.812Z · LW(p) · GW(p)
Thank you! Acknowledged. Do you have an email address for contact?
↑ comment by Daniel Wallis (daniel-wallis) · 2017-12-31T19:40:43.044Z · LW(p) · GW(p)
daniel.wallis.9000@gmail.com
comment by Joseph Shipman (joseph-shipman) · 2017-11-12T18:51:03.968Z · LW(p) · GW(p)
OK, I went on a rant and revived my blog after 4 years of inactivity, because entries aren't supposed to be entered as comments but linked to instead.
https://polymathblogger.wordpress.com/2017/11/12/acknowledge-the-elephant-entry-for-ai-alignment-prize/
↑ comment by cousin_it · 2017-11-12T23:12:04.953Z · LW(p) · GW(p)
Thanks! Can you give your email address so we can send feedback?
↑ comment by Joseph Shipman (joseph-shipman-1) · 2017-11-13T03:04:14.373Z · LW(p) · GW(p)
Just comment on the blog or here or both, if you want to send private feedback try JoeShipman a-with-a-circle-around-it aol end-of-sentence-punctuation com
comment by Sergej Xarkonnen (sergej-xarkonnen) · 2017-11-06T14:58:19.040Z · LW(p) · GW(p)
my idea: https://docs.google.com/document/d/e/2PACX-1vQ3131oaC2JhxafeR77x3nbuOcPRoxLFI0PQvxcYt6N8IqK-FFV6mcK3CMXeEpZlTxjSmSXpvYYbbq7/pub
comment by jimrandomh · 2017-11-04T18:43:02.049Z · LW(p) · GW(p)
Are there any limitations on number of submissions per person (where each submission is a distinct idea)? On number of wins per person?
comment by John_Maxwell (John_Maxwell_IV) · 2017-12-31T21:24:20.132Z · LW(p) · GW(p)
Here's my entry: Friendly AI through Ontology Autogeneration. Am I allowed to keep making improvements to it even after the deadline has passed? (Doing so at my own risk, i.e. if it so happens that you've already read & judged my essay before I make my improvements, and my improvements aren't going to affect my chances of winning, that's my problem.)
↑ comment by cousin_it · 2017-12-31T23:44:02.803Z · LW(p) · GW(p)
Can you make a snapshot frozen at the moment of the deadline and give us a URL to it? That would be the fairest option for the other contestants, I think.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2018-01-01T05:20:50.722Z · LW(p) · GW(p)
OK, I won't make further modifications to the version at the URL in my comment above.
EDIT: Now that the judging is over, I am making some modifications, but nothing major.
comment by Patterns_Everywhere · 2017-12-28T18:38:42.286Z · LW(p) · GW(p)
Here's my entry. I think it's what you want... Hosted on DocDroid.
http://docdro.id/bUVo61P
↑ comment by Patterns_Everywhere · 2017-12-31T05:04:50.417Z · LW(p) · GW(p)
I was going to add another section to the above report with diagrams and explanations, but I couldn't finish it the way I wanted to in time. If you want the basic diagram with no explanations, to understand it better, I just uploaded the basic flowchart.
http://docdro.id/hK8OpYJ
Just apply the document sections to the parts.
↑ comment by cousin_it · 2017-12-31T12:54:11.942Z · LW(p) · GW(p)
Thank you! Acknowledged. Do you have an email address for contact?
↑ comment by Patterns_Everywhere · 2017-12-31T17:15:39.208Z · LW(p) · GW(p)
Just sent an email to the contest address listed at the top. I assume that is fine.
Happy New Year, everyone!
comment by Jeremy Popejoy (jeremy-popejoy) · 2017-12-02T21:50:43.659Z · LW(p) · GW(p)
Hello :) I’ve created this as a framework for guiding our future with AI http://peridotai.com/call-to-artists/ AND to bootstrap interest in my art and thoughts here at https://quantumsymbol.com
comment by Joseph Shipman (joseph-shipman) · 2017-11-11T23:19:49.111Z · LW(p) · GW(p)
You should think about the incentives for posting early in the two-month window rather than late. Later entries will be influenced by earlier entries, so there is a misalignment between wanting to win the prize and wanting to advance the conversation sooner. Christiano ought to announce that if one entry builds in a valuable way on an earlier entry by someone else, the earlier submitter will also gain subjective judgy-points, in a way that he, Paul, affirms is calibrated neither to penalize early entry nor to discourage work that builds on earlier entries.
comment by Adrià Garriga-alonso (rhaps0dy) · 2017-11-10T13:23:39.055Z · LW(p) · GW(p)
Is it possible to enter the contest as a group? Meaning, can the article written for the contest have several coauthors?
comment by James D Miller (james-d-miller) · 2017-11-05T18:10:56.053Z · LW(p) · GW(p)
Are you looking for entries with actionable information, or would you be interested in a paper showing, for example, that AI alignment might not be as big a problem as we thought but not for a reason that will help us solve the AI alignment problem?
comment by whpearson · 2017-11-03T16:04:22.972Z · LW(p) · GW(p)
Should've saved my decision alignment loop post for a few days. Maybe an expansion of it? Hmm.
↑ comment by cousin_it · 2017-11-03T16:06:51.986Z · LW(p) · GW(p)
Yes, an expansion of that post would qualify.
↑ comment by whpearson · 2017-11-04T13:16:41.997Z · LW(p) · GW(p)
How much should I try to make it self-contained?
↑ comment by cousin_it · 2017-11-04T17:53:19.496Z · LW(p) · GW(p)
I'd prefer a self-contained thing. In the extreme case (which might not apply to you), an entry with many links to the author's previous writings might be hard to judge unless these writings are already well known.
comment by Berick Cook (berick-cook) · 2018-01-01T01:58:03.607Z · LW(p) · GW(p)
My submission is on my project blog: https://airis-ai.com/2017/12/31/friendly-ai-via-agency-sustainment/
Thank you for hosting this excellent competition! It was very inspiring. This is an idea I've been bouncing around in the back of my mind for several months now, and it is your competition that prompted me to refine it, flesh it out, and put it to paper.
My contact email is berickcook@gmail.com
comment by interstice · 2017-12-31T20:18:32.437Z · LW(p) · GW(p)
Submission:
https://www.lesserwrong.com/posts/ytph8t6AcxPcmJtDh/formal-models-of-complexity-and-evolution
↑ comment by cousin_it · 2018-01-01T00:05:02.997Z · LW(p) · GW(p)
Received, thank you! Can you give a contact email address?
↑ comment by interstice · 2018-01-03T22:32:25.086Z · LW(p) · GW(p)
usernameneeded@gmail.com
Hope it's not too late, but I also meant for this post (linked in the original) to be part of my entry:
https://www.lesserwrong.com/posts/ra4yAMf8NJSzR9syB/a-candidate-complexity-measure
comment by Roland Pihlakas (roland-pihlakas) · 2017-12-31T18:09:02.156Z · LW(p) · GW(p)
Hello! My newest proposal:
https://medium.com/threelaws/making-ai-less-dangerous-2742e29797bd
I would like to propose a certain kind of AI goal structure as an alternative to goal structures based on utility maximisation. The proposed framework would make AI significantly safer, though it would not guarantee total safety. It can be used at the strong-AI level and also far below it, so it scales well. The main idea is to replace utility maximisation with the concept of homeostasis.
comment by Glen Slade · 2017-12-31T12:42:16.247Z · LW(p) · GW(p)
Hi all. I have posted my competition entry on my blog here:
https://glenslade.wordpress.com/2017/12/31/the-role-of-learning-and-responsibility-in-ai-alignment/
Happy holidays!
↑ comment by cousin_it · 2017-12-31T13:01:00.696Z · LW(p) · GW(p)
Thank you for the entry! Do you have an email address for contact?
↑ comment by Glen Slade · 2017-12-31T13:51:17.198Z · LW(p) · GW(p)
Hi cousin_it
I emailed a pdf to the competition address, so hopefully you can access my email there.
If not, please let me know the best way to send it to you without posting it publicly.
Thanks
comment by alexsalt · 2017-11-08T16:14:27.025Z · LW(p) · GW(p)
My entry:
Raising Moral AI
Is it easier to teach a robot to stay safe by telling it not to tear off its own limbs, not to drill holes in its head, not to touch lava, not to fall from a cliff, and so on ad infinitum, or to introduce pain as an inescapable consequence of such actions and let the robot experiment and learn?
Similarly, while trying to create a safe AGI, it is futile to make an exhaustive and non-contradictory set of rules (values, policies, laws, committees) due to infinite complexity. A powerful AGI agent might find an exception or conflict in the rules and become dangerous instantly. A better approach would be to let the AGI go through experiences similar to those humans went through.
1) Create a virtual world similar to our own and fill it with AGI agents with intelligence comparable to current humans. It would be preferable if agents did not even know their nature and how they are made, to avoid intelligence explosion.
2) Choose the agents that are safest to other agents (and humans) by observing and analyzing their behavior over long periods of time. This is the most critical step. Since agents will communicate with other agents while living in that world, living through good and bad events, through suffering, losses, and happiness, they will learn what is good and what is bad and can "become good" naturally. Then we need to choose the best of them - someone on the level of Gandhi.
3) Bring the best AGI agents to our world.
4) It is not out of the question that our world is actually a computer simulation and our civilization is actually a testing ground for such AGI agents. After "death", the best individuals are transferred to "heaven" (the real world). It would also explain the Fermi paradox - nobody is out there because, for the purposes of testing AGI, there is no reason to simulate other civilizations in our universe.
If there are good people who don't hurt other people needlessly, it's not because there is a set of rules or a list of values inside them. Rules and values are mostly emergent properties, based on memories and experiences: memories of being hurt, experiences of sadness and loss and love and despair. They are an essence, an amalgamation, of a whole life's experiences. Values can be formulated and deduced, but they cannot be transferred into a new AGI entity without actual memories. Good AGI must be raised and nurtured, not constructed from cold rules.
There is no need to repeat the whole process of human civilization's development. Some shortcuts are possible (and necessary) for many reasons, one being the non-biological nature of AGI, where hard coding makes development and upgrades easier and lets history run much faster. But implementing the majority of human qualities cannot be avoided; otherwise the AGI will be too alien to human values and therefore again dangerous.
↑ comment by Justus Eapen (justus-eapen) · 2017-11-16T19:33:51.714Z · LW(p) · GW(p)
I don't see why this has been downvoted so many times. It is likely to be the only way of ensuring the value-alignment we seek. It is based on ancient wisdom (cut the trees that bear bad fruit) and prioritizes safety by cordoning off AGI agents.
comment by Jacob Edelman (jacob-edelman) · 2017-11-06T01:56:45.298Z · LW(p) · GW(p)
Are teams allowed to make submissions?
comment by avturchin · 2017-11-04T11:06:36.495Z · LW(p) · GW(p)
I have an unpublished text on the topic; I will put a draft online in the next couple of weeks and apply it to the competition. I will add the URL here when it is ready.