When was the term "AI alignment" coined?
post by David Scott Krueger (formerly: capybaralet) · 2020-10-21T18:27:56.162Z · LW · GW
This is a question post.
Answers
answer by Multicore
The first MIRI paper to use the term is "Aligning Superintelligence with Human Interests: A Technical Research Agenda" from 2014. The original version appears not to exist anywhere on the internet anymore, having been replaced by the 2017 rewrite "Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda". Earlier papers sometimes talked offhandedly about AI being aligned with human values, as one choice of wording among many.
Edit: The earliest citation I can find for Russell talking about alignment is also from 2014.
↑ comment by Rob Bensinger (RobbBB) · 2020-10-23T23:13:56.698Z · LW(p) · GW(p)
Regarding v1 of the "Agent Foundations..." paper (then called "Aligning Superintelligence with Human Interests: A Technical Research Agenda"), the original file is here.
To make it easier to find older versions of MIRI papers and to check whether there are substantive changes between versions (e.g., for purposes of citing a claim), I've made a page at https://intelligence.org/revisions/ listing obsolete versions of a bunch of papers.
Regarding the term "alignment" as a name for the field/problem: my recollection is that Stuart Russell suggested the term to MIRI in 2014, before anyone started using it publicly. We ran with "(AI) alignment" instead of "value alignment" because we didn't want people to equate the value learning problem with the whole alignment problem.
(I also think "value alignment" is confusing because it can be read as saying humans and AI systems both have values, and we're trying to bring the two parties' values into alignment. This conflicts with the colloquial use of "values," which treats it as more of a human thing, compared to more neutral terms like "goals" or "preferences." And Eliezer has historically used "values" to specifically refer to humanity's true preferences.)
↑ comment by Rob Bensinger (RobbBB) · 2020-10-24T23:29:29.849Z · LW(p) · GW(p)
Footnote: It looks like MIRI was using "Friendly AI" in our research agenda drafts as of Oct. 23, and we had switched to "aligned AI" by Nov. 20 (though we were using phrasings like "reliably aligned with the intentions of its programmers" earlier than that).
answer by Ben Pace
I recall Eliezer saying that Stuart Russell named the 'value alignment problem', and that the term 'AI alignment' was derived from that. (Perhaps Eliezer derived it?)
↑ comment by Gurkenglas · 2020-10-21T22:58:59.896Z · LW(p) · GW(p)
I recall Eliezer asking on Facebook for a good word for the field of AI safety research before it was called alignment.
↑ comment by Ben Pace (Benito) · 2020-10-21T23:42:31.760Z · LW(p) · GW(p)
Would be interested in a link if anyone is willing to go look for it.
1 comment
comment by Shmi (shminux) · 2020-10-22T01:15:10.814Z · LW(p) · GW(p)
Google advanced search sucks, but it's clear that "AI friendliness" and "AI safety" became "AI alignment" sometime in 2016.