Comments sorted by top scores.
comment by cousin_it · 2022-02-17T21:42:29.800Z · LW(p) · GW(p)
Can you describe what changed / what made you start feeling that the problem is solvable / what your new attack is, in short?
Replies from: gworley, Stuart_Armstrong
↑ comment by Gordon Seidoh Worley (gworley) · 2022-02-18T01:26:38.884Z · LW(p) · GW(p)
This feels like a key detail that's lacking from this post. I actually downvoted this post because I have no idea if I should be excited about this development or not. I'm pretty familiar with Stuart's work over the years, so I'd be fairly surprised if there were something big here.
Might help if I put this another way. I'd be purely +1 on this project if it was just "hey, I think I've got some good ideas AND I have an idea about why it's valuable to operationalize them as a business, so I'm going to do that". Sounds great. However, the bit about "AND I think I know how to build aligned AI for real this time guys and the answer is [a thing folks have been disagreeing about whether or not it works for years]" makes me -1 unless there's some explanation of how it's different this time.
Sorry if this is a bit harsh. I don't want to be too down on this project, but I feel like a core chunk of the post is that there's some exciting development that leads Stuart to think something new is possible but then doesn't really tell us what that something new is, and I feel that by the standards of LW/AF that's good reason to complain and ask for more info.
↑ comment by Stuart_Armstrong · 2022-02-21T13:03:13.234Z · LW(p) · GW(p)
Firstly, because the problem feels central to AI alignment, in a way that other approaches didn't. So making progress on this is making general AI alignment progress; there won't be such a "one error detected and all the work is useless" problem. Secondly, we've had success generating some [AF · GW] key [AF · GW] concepts [AF · GW], implying the problem is ripe for further progress.
comment by Shmi (shminux) · 2022-02-17T23:31:33.287Z · LW(p) · GW(p)
Value extrapolation is thus necessary for AI alignment. It is also almost [LW · GW] sufficient [LW · GW], since it allows AIs to draw correct conclusions from imperfectly defined human data.
I am missing something... The idea of correctly extrapolating human values is basically the definition of Eliezer's original proposal, CEV. In fact, it's right there in the name. What is the progress over the last decade?
Replies from: Stuart_Armstrong, Evan R. Murphy
↑ comment by Stuart_Armstrong · 2022-02-21T13:18:58.844Z · LW(p) · GW(p)
CEV is based on extrapolating the person; the values are what the person would have had, had they been smarter, known more, had more self-control, etc... Once you have defined the idealised person, the values emerge as a consequence. I've criticised this idea [LW · GW] in the past, mainly because the process to generate the idealised person seems vulnerable to negative attractors (Eliezer's most recent version of CEV has less of this problem).
Value extrapolation and model splintering are based on extrapolating features and concepts in models, to other models. This can be done without knowing human psychology or (initially) anything about humans at all, including their existence. See for example the value extrapolation partially resolves symbol grounding [LW · GW] post; I would never write "CEV partially resolves symbol grounding". On the contrary, CEV needs symbol grounding.
Replies from: shminux
↑ comment by Shmi (shminux) · 2022-02-22T08:23:57.412Z · LW(p) · GW(p)
I don't really understand the symbol grounding issue, but I can see that "value extrapolation" just happened to sound very similar to CEV and hence my confusion.
↑ comment by Evan R. Murphy · 2022-02-18T05:34:07.791Z · LW(p) · GW(p)
I wanted to look up CEV after reading this comment. Here's a link for anyone else looking: https://intelligence.org/files/CEV.pdf
That acronym stands for "Coherent Extrapolated Volition", not "Coherent Extrapolated Values". But from skimming the paper just now, I think I agree with shminux that it's basically the same idea.
Replies from: RobbBB
↑ comment by Rob Bensinger (RobbBB) · 2022-02-19T18:42:51.623Z · LW(p) · GW(p)
A more recent explanation of CEV by Eliezer: https://arbital.com/p/cev/
comment by gwern · 2022-02-17T20:21:48.854Z · LW(p) · GW(p)
Aligned AI is a benefit corporation dedicated to solving the alignment problem
Is this a UK or US public-benefit corporation?
Who are the other founders?
Who has capitalized you, and for how much?
Replies from: anotheragi, Evan R. Murphy, Stuart_Armstrong
↑ comment by anotheragi · 2022-02-17T21:58:54.244Z · LW(p) · GW(p)
Rebecca Gorman (who authored https://arxiv.org/abs/2109.08065 with Stuart Armstrong) is another co-founder according to her LinkedIn page.
↑ comment by Evan R. Murphy · 2022-02-18T05:24:42.155Z · LW(p) · GW(p)
This page says "We are located in Oxford, England." So I think they are a UK public-benefit corporation, but I could be mistaken.
↑ comment by Stuart_Armstrong · 2022-02-21T13:04:38.339Z · LW(p) · GW(p)
UK-based currently; Rebecca Gorman is the other co-founder.
comment by Kaj_Sotala · 2022-02-17T20:08:09.536Z · LW(p) · GW(p)
Best of luck!
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2022-02-21T12:06:15.540Z · LW(p) · GW(p)
Thanks!
comment by Steven Byrnes (steve2152) · 2022-02-21T12:37:00.355Z · LW(p) · GW(p)
Hmm, the only overlap I can see between your recent work and this description (including optimism about very-near-term applications) is the idea of training an ensemble of models on the same data, so that if the models disagree with each other on a new sample, then we're probably out of distribution (kinda like the Yarin Gal dropout ensemble thing and much related work).
And if we discover that we are in fact out of distribution, then … I don't know. Ask a human for help?
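If I'm reading it right, the kind of check I'm imagining is something like the following minimal sketch (entirely my guess, with a made-up encoder interface, threshold, and deferral behaviour; nothing here is taken from the post):

```python
import numpy as np

def ensemble_disagreement(models, x):
    """Average per-class variance of the ensemble's predictions on input x.

    models: list of callables, each mapping an input to a vector of class
            probabilities (e.g. independently trained nets, or multiple
            MC-dropout passes of a single net, as in Yarin Gal's work).
    High disagreement is treated as evidence that x is out of distribution.
    """
    preds = np.stack([m(x) for m in models])  # shape: (n_models, n_classes)
    return preds.var(axis=0).mean()

DISAGREEMENT_THRESHOLD = 0.05  # hypothetical value, tuned on held-out data

def act_or_defer(models, policy, x):
    """Act normally when the ensemble agrees; otherwise flag for a human."""
    if ensemble_disagreement(models, x) > DISAGREEMENT_THRESHOLD:
        return "defer_to_human"
    return policy(x)
```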
If that guess is at all on the right track (very big "if"!), I endorse it as a promising approach well worth fleshing out further (and I myself put a lot of hope on things in that vein working out [LW · GW]). I do, however, think there are AGI-specific issues to think through, and I'm slightly worried that y'all will get distracted by the immediate deployment issues and not make as much progress on AGI-specific stuff. But I'm inclined to trust your judgment :)
comment by J Bostock (Jemist) · 2022-02-18T13:24:32.087Z · LW(p) · GW(p)
Unmentioned but large comparative advantage of this: it's not based in the Bay Area.
The typical alignment pitch of "Come and work on this super-difficult problem you may or may not be well suited for at all" is a hard enough sell for already-successful people (which intelligent people often are) without adding: "Also, you have to move to this one specific area of California, which has a bit of a housing and crime problem and a very particular culture."
Replies from: gwern
↑ comment by gwern · 2022-02-18T16:04:09.077Z · LW(p) · GW(p)
Unmentioned but large comparative advantage of this: it's not based in the Bay Area.
It's based in the Bay Area of England (Oxford), though, with no mention of remote. So, all the same pathologies: extreme liberal politics, high taxes and cost of living, Dutch disease being captured by NIMBYs with a lock on ever escalating real estate prices and banning density, persistent blatant crime and homelessness (in some ways, worse: I was never yelled at by the homeless in SF like I was in Oxford, and one woman tried to scam me twice. I was there for all of 2 weeks!).
Replies from: Owain_Evans, None
↑ comment by Owain_Evans · 2022-02-19T09:24:56.023Z · LW(p) · GW(p)
Taxes in Oxford are more-or-less the same as anywhere else in the UK. These are lower than many European countries but higher than the US (especially states with no income tax).
Rent in SF is more than 2x Oxford (seems roughly right to me) but I agree with what you say on housing.
Having lived in SF and Oxford, the claim about crime and homelessness doesn't match my experience at all (nor any anecdotes I've heard). I'd be very surprised if stats showed more crime in Oxford vs the central parts of SF.
↑ comment by Ben Pace (Benito) · 2022-02-19T09:36:22.501Z · LW(p) · GW(p)
The homeless in Oxford talked to me or followed me more than in Berkeley. (I haven’t spent much time in SF.)
comment by Chris_Leong · 2022-02-18T13:23:45.994Z · LW(p) · GW(p)
If you think this is financially viable, then I'm fairly keen on this, especially if you provide internships and development opportunities for aspiring safety researchers.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2022-02-21T14:22:48.522Z · LW(p) · GW(p)
Yes, those are important to provide, and we will.
comment by Mau (Mauricio) · 2022-02-18T05:29:15.569Z · LW(p) · GW(p)
So we need a way to have alignment deployed throughout the algorithmic world before anyone develops AGI. To do this, we'll start by offering alignment as a service for more limited AIs.
I'm tentatively fairly excited about some version of this, so I'll suggest some tweaks that can hopefully be helpful for your success (or for the brainstorming of anyone else who's thinking about doing something similar in the future).
We will refine and develop this deployment plan, depending on research results, commercial opportunities, feedback, and suggestions.
I suspect there'd be much better commercial/scaling opportunities for a somewhat similar org that offered a more comprehensive, high-quality package of "trustworthy AI services"--e.g., addressing bias, privacy issues, and other more mainstream concerns along with safety/alignment concerns. Then there'd be less of a need to convince companies about paying for some new service--you would mostly just need to convince them that you're the best provider of services that they're already interested in. (Cf. ethical AI consulting companies that already exist.)
(One could ask: But wouldn't the extra price be the same, whether you're offering alignment in a package or separately? Not necessarily--IP concerns and transaction costs incentivize AI companies to reduce the number of third parties they share their algorithms with.)
As an additional benefit, a more comprehensive package of "trustworthy AI services" would be directly competing for customers with ethical AI consulting companies like the ones mentioned above. This might pressure those companies to start offering safety/alignment services--a mechanism for broadening adoption that isn't available to an org that only provides alignment services.
[From the website] We are hiring AI safety researchers, ML engineers and other staff.
Related to the earlier point, given that commercial opportunities are a big potential bottleneck (in other words, given that selling limited alignment services might be as much of a communications and persuasion challenge as it is a technical challenge), my intuition would be to also put significant emphasis on hiring people who will kill it at the persuasion: people who are closely familiar with the market and regulatory incentives faced by relevant companies, people with sales and marketing experience, people with otherwise strong communications skills, etc. (in addition to the researchers and engineers).
Replies from: Tony Barrett, Stuart_Armstrong
↑ comment by Tony Barrett · 2022-02-23T04:09:40.611Z · LW(p) · GW(p)
Adding on to Mauricio's idea: Also explore partnering with companies that offer a well-recognized, high-quality package of mainstream "trustworthy AI services" -- e.g., addressing bias, privacy issues, and other more mainstream concerns -- where you have comparative advantage on safety/alignment concerns and they have comparative advantage on the more mainstream concerns. Together with a partner, you could provide a more comprehensive offering. (That's part of the value proposition for them. Also, of course, be sure to highlight the growing importance of safety/alignment issues, and the expertise you'd bring.) Then you wouldn't have to compete in the areas where they have comparative advantage.
Replies from: rgorman
↑ comment by Stuart_Armstrong · 2022-02-21T14:22:04.663Z · LW(p) · GW(p)
Thanks for the ideas! We'll think on them.
comment by iceman · 2022-02-18T00:22:50.532Z · LW(p) · GW(p)
Given that there's a lot of variation in how humans extrapolate values, whose extrapolation process do you intend to use?
Replies from: yonatan-cale-1, Charlie Steiner, Stuart_Armstrong
↑ comment by Yonatan Cale (yonatan-cale-1) · 2022-02-18T01:05:01.371Z · LW(p) · GW(p)
If that turns out to be the only problem, then we'll be in an amazing situation.
↑ comment by Charlie Steiner · 2022-02-18T03:18:05.548Z · LW(p) · GW(p)
Near future AGI might be aligned to the meta-preferences of MTurkers more than anyone else :P
↑ comment by Stuart_Armstrong · 2022-02-21T14:16:26.995Z · LW(p) · GW(p)
We're aiming to solve the problem in a way that is acceptable to one given human, and then generalise from that.
Replies from: Davidmanheim
↑ comment by Davidmanheim · 2022-03-03T06:20:40.753Z · LW(p) · GW(p)
This seems fragile in ways that make me less optimistic about the approach overall. We have strong reasons to think that value aggregation is intractable, and (by analogy) in some ways the problem of coherence in CEV is the tricky part. That is, the problem of making sure that we're not Dutch-bookable is, IIRC, NP-complete, and even worse, the problem of aggregating preferences has several impossibility results.
Edit: To clarify, I'm excited about the approach overall, and think it's likely to be valuable, but this part seems like a big problem.
↑ comment by Stuart_Armstrong · 2022-03-03T11:01:09.703Z · LW(p) · GW(p)
I've posted [LW · GW] on the theoretical difficulties of aggregating the utilities of different agents. But doing it in practice is much more feasible (scale the utilities to some not-too-unreasonable scale, add them, maximise sum).
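As a minimal sketch of that "scale, add, maximise" recipe (illustrative only; the [0, 1] rescaling below is just one arbitrary choice of "not-too-unreasonable" scale):

```python
import numpy as np

def aggregate_and_choose(utilities):
    """Pick the option that maximises the sum of rescaled utilities.

    utilities: array of shape (n_agents, n_options), one row per agent,
               each row giving that agent's utility over the same options.
    Each row is rescaled to [0, 1], the rescaled rows are summed, and the
    index of the best option is returned.
    """
    u = np.asarray(utilities, dtype=float)
    lo = u.min(axis=1, keepdims=True)
    hi = u.max(axis=1, keepdims=True)
    scaled = (u - lo) / np.where(hi > lo, hi - lo, 1.0)
    return int(scaled.sum(axis=0).argmax())

# Three agents, four options:
print(aggregate_and_choose([[0, 5, 2, 1],
                            [3, 1, 0, 2],
                            [1, 1, 4, 0]]))
```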
But value extrapolation is different from human value aggregation; for example, low power (or low impact) AIs can be defined with value extrapolation, and that doesn't need human value aggregation.
Replies from: Davidmanheim
↑ comment by Davidmanheim · 2022-03-05T19:29:28.790Z · LW(p) · GW(p)
I'm skeptical that many of the problems with aggregation don't both apply to actual individual human values once extrapolated, and generalize to AIs with closely related values, but I'd need to lay out the case for that more clearly. (I did discuss the difficulty of cooperation even given compatible goals a bit in this paper, but it's nowhere near complete in addressing this issue.)
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2022-03-08T10:03:55.291Z · LW(p) · GW(p)
It's worth writing up your point and posting it - that tends to clarify the issue, for yourself as well as for others.
↑ comment by rgorman · 2022-03-03T11:05:16.159Z · LW(p) · GW(p)
Hi David,
As Stuart referenced in his comment to your post here, value extrapolation can be the key to AI alignment *without* using it to deduce the set of human values. See the 'List of partial failures' in the original post: With value extrapolation, these approaches become viable.
comment by StellaAthena · 2022-02-20T08:03:08.120Z · LW(p) · GW(p)
To do this, we'll start by offering alignment as a service for more limited AIs. Value extrapolation scales down as well as up: companies value algorithms that won't immediately misbehave in new situations, algorithms that will become conservative and ask for guidance when facing ambiguity.
What are examples of AIs you think you can currently align and how much (order of magnitude, say) would it cost to have you align one for me? If I have a 20B parameter language model, can you align it for me?
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2022-03-03T10:56:36.458Z · LW(p) · GW(p)
Reach out to my cofounder (Rebecca Gorman) on LinkedIn.
comment by Mitchell_Porter · 2022-02-18T09:29:23.363Z · LW(p) · GW(p)
For some time, I have planned to make a post calling for more people to actually try to solve the problem of alignment. I haven't studied Stuart's work in detail (something to be rectified soon); I always say June Ku's metaethical.ai is the most advanced scheme we have, but as an unapologetic fan of CEV, this talk of value extrapolation seems on the right track. I do wonder to what extent a solution to alignment for autonomous superhuman AI can lead (in advance) to spinoff for narrower and less powerful systems - superhuman alignment seems to require a determination of the full "human utility function", or something similar; I suppose the extrapolation part might be relevant for lesser AI, even if the full set of human values is not - but we shall learn more as Stuart's scheme unfolds.
I will add that I am personally interested in contributing to this kind of research (paid work would be most empowering, but absent that, I will still keep doing what I can, when I can, until we run out of time), but my circumstances are a little unusual, and might be incompatible with what some organizations require. So for now I'll just mention my interest.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2022-02-21T14:17:47.523Z · LW(p) · GW(p)
Thanks. Would you want to send me a message explaining your interest and your unusual circumstances (if relevant)?
comment by Joe Collman (Joe_Collman) · 2022-02-18T07:21:46.072Z · LW(p) · GW(p)
I'm encouraged by your optimism, and wish you the best of luck (British, and otherwise), but I hope you're not getting much of your intuition from the "Humans have demonstrated a skill with value extrapolation..." part. I don't think we have good evidence for this in a broad enough range of circumstances for it to apply well to the AGI case.
We know humans do pretty 'well' at this - when surrounded by dozens of other similar agents, in a game-theoretic context where it pays to cooperate, where it pays to share values with others, and where extreme failure modes usually lead to loss of any significant power before they can lead to terrible abuse of that power.
Absent such game-theoretic constraints, I don't think we know much at all about how well humans do at this.
Further, I don't think I know what it means to do value extrapolation well - beyond something like "you're doing it well if you're winning" (what would it look like for almost all humans to do it badly?). That's fine for situations where cooperation with humans is the best way to win. Not so much where it isn't.
But with luck I'm missing something!
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2022-02-21T14:20:57.627Z · LW(p) · GW(p)
I do not put too much weight on that intuition, except as an avenue to investigate (how do humans do it, exactly? If it depends on the social environment, can the conditions of that be replicated?).
comment by Mike Johnson (mike-johnson) · 2022-03-01T01:56:23.678Z · LW(p) · GW(p)
I’m really glad to see this. I can’t say I fully grasp your particular approach, but what you’ve written about model fragments has really resonated.
My intuition around value extrapolation is that if we extrapolate the topic itself it’ll eventually turn into creating fine models of nervous system dynamics. Will be curious to see how your work intersects and what it assumes about neuroscience, and also what sort of neuroscience progress you think might make your work easier.
Good luck!
comment by Oliver Sourbut · 2022-02-25T09:45:30.700Z · LW(p) · GW(p)
All the best on this new venture!
Regarding 'value extrapolation', I wrote a little on grounding the acquisition of the right priors by 'learning to value learn' last year in Motivations, Natural Selection, and Curriculum Engineering [AF · GW] (section Transmissible Accumulation). It basically just has seeds of ideas, but you may be interested.
comment by Quintin Pope (quintin-pope) · 2022-02-18T19:47:33.721Z · LW(p) · GW(p)
I think value extrapolation is more tractable than many assume, even for very powerful systems. I think this because I expect AI systems to strongly prefer a small number of general explanations over many shallow explanations [LW · GW]. I expect such general explanations for human values are more likely to extend to unusual situations than more shallow explanations.
One approach that seems really underexplored is to directly generate data on how human preferences extend to extreme situations or very capable AIs. OpenAI was able to greatly improve the alignment of current language models by learning a reward model from text examples of current language models following human instructions, ranked by how well the AI's output followed the human's instruction. We should be able to generate a similar values dataset, but for AIs much stronger than current language models. See here for a more extended discussion [LW · GW].
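For concreteness, here's a minimal sketch of the kind of preference-trained reward model I have in mind, using a standard pairwise (Bradley-Terry style) ranking loss; the encoder, dimensions, and names are placeholders, not OpenAI's actual pipeline:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Toy reward model: maps an encoded (situation, response) pair to a scalar."""
    def __init__(self, input_dim=768):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, features):  # features: (batch, input_dim)
        return self.head(features).squeeze(-1)

def preference_loss(reward_model, preferred, rejected):
    """Bradley-Terry loss: push rewards for preferred responses above rejected ones."""
    return -F.logsigmoid(reward_model(preferred) - reward_model(rejected)).mean()

# Stand-in feature vectors for encoded text of human-ranked responses:
model = RewardModel()
preferred = torch.randn(8, 768)  # encodings of responses humans ranked higher
rejected = torch.randn(8, 768)   # encodings of responses humans ranked lower
loss = preference_loss(model, preferred, rejected)
loss.backward()
```

The difference with the values dataset I'm proposing would just be that the ranked pairs cover extreme situations and much more capable AI behaviour, rather than ordinary instruction-following.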
comment by Koen.Holtman · 2022-02-18T15:15:55.394Z · LW(p) · GW(p)
To do this, we'll start by offering alignment as a service for more limited AIs.
Interesting move! Will be interesting to see how you will end up packaging and positioning this alignment as a service, compared to the services offered by more general IT consulting companies. Good luck!
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2022-02-21T14:24:24.951Z · LW(p) · GW(p)
Bedankt!
comment by Jack R (Jack Ryan) · 2022-02-24T03:46:32.243Z · LW(p) · GW(p)
How do you know when you have solved the value extrapolation problem?
One hypothesis I have for what you might say is something like "a training scheme solves the value extrapolation problem when the sequence of inputs that will be seen in deployment by the AI produced by that training scheme leads to outputs which lead to positive outcomes by human lights" though from what I can tell, that's basically the same as having a training scheme that leads to an "impact aligned [LW · GW]" AI*.
If it isn't this, how is your answer different?
*[ETA: the definition of impact alignment that Evan gives in the linked post technically only refers to an AI "which doesn't take actions that we would judge to be bad/problematic/dangerous/catastrophic," but in my comment above, I meant to refer to what I think is the more relevant property for an AI to have, which I'll call (impact aligned)_Jack: an agent is (impact aligned)_Jack to the degree that, by human lights, it doesn't take bad actions and does take good actions. I think that this is more relevant because Evan's definition doesn't distinguish between a rock and an intuitively aligned AI.]
↑ comment by Stuart_Armstrong · 2022-02-24T13:38:28.835Z · LW(p) · GW(p)
Knowing that we've solved the problem relies on knowing the innards of the algorithm we've designed, and proving theorems about it, rather than looking solely at its behaviour.
Replies from: Jack Ryan, Jack Ryan
↑ comment by Jack R (Jack Ryan) · 2022-02-24T21:59:38.907Z · LW(p) · GW(p)
Oh I see -- could you say more about what characteristics you want the innards to have?
↑ comment by Jack R (Jack Ryan) · 2022-03-28T10:13:13.655Z · LW(p) · GW(p)
Ping about my other comment -- FYI, because I am currently concerned that you don't have criteria for the innards in mind, I'm less excited about your agenda than about other alignment theory agendas (though this lack of excitement is somewhat weak, e.g. since I haven't tried to digest your work much yet).
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2022-03-28T22:11:10.112Z · LW(p) · GW(p)
Let me develop the idea a bit more. It is somewhat akin to answering, in 1968, the question "how do you know you've solved the moon landing problem?" In that case, NASA could point to their having solved a host of related problems (getting into space, getting to the moon, module separation, module reconnection), knowing that their lander could theoretically land on the moon (via knowledge of the laws of physics and of their lander design), estimating that the pilots are capable of dealing with likely contingencies, trusting that their model of the lunar landing problem is correct and has covered various likely contingencies, etc... and then putting it all together into a plan where they could say "successful lunar landing is likely".
Note that various parts of the assumptions could be tested; engineers could probe at the plan and say things like "what if the conductivity of the lunar surface is unusual", and try and see if their plan could cope with that.
Back to value extrapolation. We'd be confident that it is likely to work if we had, for example:
- It works well in all situations where we can completely test it (eg we have a list of human moral principles, and we can have an AI successfully run a school using those as input).
- It works well on testable subproblems of more complicated situations (eg we inspect the AI's behaviour in specific situations).
- We have models of how value extrapolation works in extreme situations, and strong theoretical arguments that those models are correct.
- We have developed a much better theoretical understanding of value extrapolation, and are confident that it works.
- We've studied the problem adversarially and failed to break the approach.
- We have deployed interpretability methods to look inside the AI at certain places, and what we've seen is what we expect to see.
These are the sort of things that could make us confident that a new approach could work. Is this what you are thinking?
Replies from: Jack Ryan
↑ comment by Jack R (Jack Ryan) · 2022-03-29T03:20:11.101Z · LW(p) · GW(p)
Thanks for this list!
Though the list still doesn't strike me as very novel -- it feels that most of these conditions are conditions we've been shooting for anyways.
E.g. conditions 1, 2, and 5 are about selecting for behavior we approve of, and condition 6 is just inspection with interpretability tools.
If you feel you have traction on conditions 3 and 4 though, that does seem novel (side-note that condition 4 seems to be a subset of condition 3). I feel skeptical though, since value extrapolation seems like about as hard a problem as understanding machine generalization in general, plus the way a thing behaves in a large class of cases seems to be so complicated a concept that you won't be able to have confident beliefs about it or understand it. I don't have a concrete argument about this though.
Anyways, thanks for responding, and if you have any thoughts about the tractability of conditions 3/4, I'm pretty curious.
↑ comment by Stuart_Armstrong · 2022-03-29T11:06:45.541Z · LW(p) · GW(p)
Yes, the list isn't very novel - I was trying to think of the mix of theoretical and practical results that convince us, in the current world, that a new approach will work. Obviously we want a lot more rigour for something like AI alignment! But there is an urgency to get it fast, too :-(