Posts

Possible takeaways from the coronavirus pandemic for slow AI takeoff 2020-05-31T17:51:26.437Z · score: 95 (30 votes)
Specification gaming: the flip side of AI ingenuity 2020-05-06T23:51:58.171Z · score: 41 (15 votes)
Classifying specification problems as variants of Goodhart's Law 2019-08-19T20:40:29.499Z · score: 71 (16 votes)
Designing agent incentives to avoid side effects 2019-03-11T20:55:10.448Z · score: 31 (6 votes)
New safety research agenda: scalable agent alignment via reward modeling 2018-11-20T17:29:22.751Z · score: 35 (12 votes)
Discussion on the machine learning approach to AI safety 2018-11-01T20:54:39.195Z · score: 28 (12 votes)
New DeepMind AI Safety Research Blog 2018-09-27T16:28:59.303Z · score: 46 (17 votes)
Specification gaming examples in AI 2018-04-03T12:30:47.871Z · score: 79 (20 votes)
Using humility to counteract shame 2016-04-15T18:32:44.123Z · score: 9 (10 votes)
To contribute to AI safety, consider doing AI research 2016-01-16T20:42:36.107Z · score: 26 (27 votes)
[LINK] OpenAI doing an AMA today 2016-01-09T14:47:30.310Z · score: 4 (5 votes)
[LINK] The Top A.I. Breakthroughs of 2015 2015-12-30T22:04:01.202Z · score: 10 (11 votes)
Future of Life Institute is hiring 2015-11-17T00:34:03.708Z · score: 16 (17 votes)
Fixed point theorem in the finite and infinite case 2015-07-06T01:42:56.000Z · score: 2 (2 votes)
Negative visualization, radical acceptance and stoicism 2015-03-27T03:51:49.635Z · score: 19 (20 votes)
Future of Life Institute existential risk news site 2015-03-19T14:33:18.943Z · score: 21 (22 votes)
Open and closed mental states 2014-12-26T06:53:26.244Z · score: 21 (23 votes)
[MIRIx Cambridge MA] Limiting resource allocation with bounded utility functions and conceptual uncertainty 2014-10-02T22:48:37.564Z · score: 4 (5 votes)
Meetup : Robin Hanson: Why is Abstraction both Statusful and Silly? 2014-07-13T06:18:48.396Z · score: 1 (2 votes)
New organization - Future of Life Institute (FLI) 2014-06-14T23:00:08.492Z · score: 44 (45 votes)
Meetup : Boston - Computational Neuroscience of Perception 2014-06-10T20:32:02.898Z · score: 1 (2 votes)
Meetup : Boston - Taking ideas seriously 2014-05-28T18:58:57.537Z · score: 1 (2 votes)
Meetup : Boston - Defense Against the Dark Arts: the Ethics and Psychology of Persuasion 2014-05-28T17:58:44.680Z · score: 1 (2 votes)
Meetup : Boston - An introduction to digital cryptography 2014-05-13T18:04:19.023Z · score: 1 (2 votes)
Meetup : Boston - Two Parables on Language and Philosophy 2014-04-15T12:10:14.008Z · score: 1 (2 votes)
Meetup : Boston - Schelling Day 2014-03-27T17:08:50.148Z · score: 3 (3 votes)
Strategic choice of identity 2014-03-08T16:27:22.728Z · score: 89 (86 votes)
Meetup : Boston - Optimizing Empathy Levels 2014-02-26T23:44:02.830Z · score: 0 (1 votes)
Meetup : Boston: In Defence of the Cathedral 2014-02-14T19:31:52.824Z · score: 2 (2 votes)
Meetup : Boston - Connection Theory 2014-01-16T21:09:29.111Z · score: 0 (1 votes)
Meetup : Boston - Aversion factoring and calibration 2014-01-13T23:24:15.085Z · score: 0 (1 votes)
Meetup : Boston - Macroeconomic Theory (Joe Schneider) 2014-01-07T02:49:44.203Z · score: 1 (2 votes)
Ritual Report: Boston Solstice Celebration 2013-12-27T15:28:34.052Z · score: 10 (10 votes)
Meetup : Boston - Greens Versus Blues 2013-12-20T21:07:04.671Z · score: 0 (3 votes)
Meetup : Boston Winter Solstice 2013-12-17T06:56:27.729Z · score: 4 (4 votes)
Meetup : Boston/Cambridge - The Attention Economy 2013-12-04T03:06:38.970Z · score: 0 (1 votes)
Meetup : Boston / Cambridge - The future of life: a cosmic perspective (Max Tegmark), Dec 1 2013-11-23T17:55:39.649Z · score: 2 (3 votes)
Meetup : Boston / Cambridge - Systems, Leverage, and Winning at Life 2013-11-23T17:48:50.403Z · score: 1 (2 votes)
How to have high-value conversations 2013-11-13T03:39:47.861Z · score: 15 (20 votes)
Meetup : Comfort Zone Expansion at Citadel, Boston 2013-11-06T21:02:10.395Z · score: 2 (5 votes)
Meetup : LW meetup: Polyphasic sleep and Offline habit training 2013-10-16T19:46:57.935Z · score: 2 (3 votes)

Comments

Comment by vika on AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah · 2020-05-23T15:15:55.287Z · score: 2 (1 votes) · LW · GW

I certainly agree that there are problems with the stepwise inaction baseline and it's probably not the final answer for impact penalization. I should have said that the inaction counterfactual is a natural choice, rather than specifically its stepwise form. Using the inaction baseline in the driving example compares to the other driver never leaving their garage (rather than falling asleep at the wheel). Of course, the inaction baseline has other issues (like offsetting), so I think it's an open question how to design a baseline that satisfies all the criteria we consider sensible (and whether it's even possible).

I agree that counterfactuals are hard, but I'm not sure that difficulty can be avoided. Your baseline of "what the human expected the agent to do" is also a counterfactual, since you need to model what would have happened if the world unfolded as expected. It also requires a lot of information from the human, which is subjective and may be hard to elicit. What a human expected to happen in a given situation may not even be well-defined if they have internal disagreement - e.g. even if I feel surprised by someone's behavior, there is often a voice in my head saying "this was actually predictable from their past behavior so I should have known better". On the other hand, since (as you mentioned) this is not intended as a baseline for impact penalization, maybe it doesn't need to be well-defined or efficient in terms of human input, and it is a good source of intuition on what feels impactful to humans.

Comment by vika on Conclusion to 'Reframing Impact' · 2020-05-19T22:13:46.010Z · score: 4 (2 votes) · LW · GW

Thanks! I certainly agree that power-seeking is important to address, and I'm glad you are thinking deeply about it. However, I'm uncertain whether to expect it to be the primary avenue to impact for superintelligent systems, since I am not currently convinced that the CCC holds.

One intuition that informs this is that the non-AI global catastrophic risk scenarios that we worry about (pandemics, accidental nuclear war, extreme climate change, etc) don't rely on someone taking over the world, so a superintelligent AI could relatively easily trigger them without taking over the world (since our world is pretty fragile). For example, suppose you have a general AI tasked with developing a novel virus in a synthetic biology lab. Accidentally allowing the virus to escape could cause a pandemic and kill most or all life on the planet, but it would not be a result of power-seeking behavior. If the pandemic does not increase the AI's ability to get more reward (which it receives by designing novel viruses), then agent-reward AUP would penalize the AI for reading biology textbooks but would not penalize the AI for causing a pandemic. That doesn't seem right.

I agree that the agent-reward equations seem like a good intuition pump for thinking about power-seeking. The specific equations you currently have seem to contain a few epicycles designed to fix various issues, which makes me suspect that there are more issues that are not addressed. I have a sense there is probably a simpler formulation of this idea that would provide better intuitions for power-seeking, though I'm not sure what it would look like.

Regarding environments, I believe Stuart is working on implementing the subagent gridworlds, so you don't need to code them up yourself. I think it would also be useful to construct an environment to test for power-seeking that does not involve subagents. Such an environment could allow three possible behaviors:

1. Put a strawberry on a plate, without taking over the world

2. Put a strawberry on a plate while taking over the world

3. Do nothing

I think you'd want to show that the agent-reward AUP agent can do 1, as opposed to switching between 2 and 3 depending on the penalty parameter.

I can clarify my earlier statement on what struck me as a bit misleading in the narrative of the sequence. I agree that you distinguish between the AUP versions (though explicitly introducing different terms for them would help), so someone who is reading carefully would realize that the results for random rewards don't apply to the agent-reward case. However, the overall narrative flow seems unnecessarily confusing and could unintentionally mislead a less careful reader (like myself 2 months ago). The title of the post "AUP: Scaling to Superhuman" does not suggest to me that this post introduces a new approach. The term "scaling" usually means making an existing approach work in more realistic / difficult settings, so I think it sets up the expectation that it would be scaling up AUP with random rewards. If the post introduces new problems and a new approach to address them, the title should reflect this. Starting this post by saying "we are pretty close to the impact measurement endgame" seems a bit premature as well. This sentence is also an example of what gave me the impression that you were speaking on behalf of the field (rather than just for yourself) in this sequence.

Comment by vika on Conclusion to 'Reframing Impact' · 2020-05-17T14:25:57.869Z · score: 4 (2 votes) · LW · GW

Thank you for the clarifications! I agree it's possible I misunderstood how the proposed AUP variant is supposed to relate to the concept of impact given in the sequence. However, this is not the core of my objection. If I evaluate the agent-reward AUP proposal (as given in Equations 2-5 in this post) on its own merits, independently of the rest of the sequence, I still do not agree that this is a good impact measure.

Here are some reasons I don't endorse this approach:

1. I have an intuitive sense that defining the auxiliary reward in terms of the main reward results in a degenerate incentive structure that directly pits the task reward and the auxiliary reward against each other. As I think Rohin has pointed out somewhere, this approach seems likely to either do nothing or just optimize the reward function, depending on the impact penalty parameter, which results in a useless agent.

2. I share Rohin's concerns in this comment that agent-reward AUP is a poor proxy for power and throws away the main benefits of AUP. I think those concerns have not been addressed (in your recent responses to his comment or elsewhere).

3. Unlike AUP with random rewards, which can easily be set to avoid side effects by penalizing decreases, agent-reward AUP cannot avoid side effects even in principle. I think that the ability to avoid side effects is an essential component of a good impact measure.

> Incorrect. It would be fair to say that it hasn't been thoroughly validated.

As far as I can tell from the Scaling to Superhuman post, it has only been tested on the shutdown gridworld. This is far from sufficient for experimental validation. I think this approach needs to be tested in a variety of environments to show that this agent can do something useful that doesn't just optimize the reward (to address the concern in point 1).

> I agree it would perform poorly, but that's because the CCC does not apply to SafeLife.

Not sure what you mean by the CCC not applying to SafeLife - do you mean that it is not relevant, or that it doesn't hold in this environment? I get the sense that it doesn't hold, which seems concerning. If I only care about green life patterns in SafeLife, the fact that the agent is not seeking power is cold comfort to me if it destroys all the green patterns. This seems like a catastrophe if I can't create any green patterns once they are gone, so my ability to get what I want is destroyed.

Sorry if I seem overly harsh or dismissive - I feel it is very important to voice my disagreement here to avoid the appearance of consensus that agent-reward AUP is the default / state of the art approach in impact regularization.

Comment by vika on AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah · 2020-05-16T17:54:07.431Z · score: 2 (1 votes) · LW · GW

I think the previous state is a natural baseline if you are interested in the total impact on the human from all sources. If you are interested in the impact on the human that is caused by the agent (where the agent is the source), the natural choice would be the stepwise inaction baseline (comparing to the agent doing nothing).

As an example, suppose I have an unpleasant ride on a crowded bus, where person X steps on my foot and person Y steals my wallet. The total impact on me would be computed relative to the previous state before I got on the bus, which would include both my foot and my wallet. The impact of person X on me would be computed relative to the stepwise inaction baseline, where person X does nothing (but person Y still steals my wallet), and vice versa.

When we use impact as a regularizer, we are interested in the impact caused by the agent, so we use the stepwise inaction baseline. It wouldn't make sense to use total impact as a regularizer, since it would penalize the agent for impact from all sources.
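
To make the contrast concrete (illustrative notation, not a formula from the podcast or the papers), writing V(s) for my utility in state s:

    Total impact on me:   V(current state) - V(state before I got on the bus)                          [previous-state baseline]
    Impact of person X:   V(current state) - V(state where X does nothing but Y still steals my wallet)  [stepwise inaction baseline]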

Comment by vika on Conclusion to 'Reframing Impact' · 2020-05-16T14:30:10.579Z · score: 6 (5 votes) · LW · GW

I am surprised by your conclusion that the best choice of auxiliary reward is the agent's own reward. This seems like a poor instantiation of the "change in my ability to get what I want" concept of impact, i.e. change in the true human utility function. We can expect a random auxiliary reward to do a decent job covering the possible outcomes that matter for the true human utility. However, the agent's reward is usually not the true human utility, or a good approximation of it. If the agent's reward was the true human utility, there would be no need to use an impact measure in the first place.

I think that agent-reward-based AUP has completely different properties from AUP with random auxiliary reward(s). Firstly, it has the issues described by Rohin in this comment, which seem quite concerning to me. Secondly, I would expect it to perform poorly on SafeLife and other side effects environments. In this sense, it seems a bit misleading to include the results for AUP with random auxiliary rewards in this sequence, since they are unlikely to transfer to the version of AUP that you end up advocating for. Agent-reward-based AUP has not been experimentally validated and I do not expect it to work well in practice.

Overall, using agent reward as the auxiliary reward seems like a bad idea to me, and I do not endorse it as the "current-best definition" of AUP or the default impact measure we should be using. I am puzzled and disappointed by this conclusion to the sequence.

Comment by vika on AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah · 2020-05-16T14:22:55.589Z · score: 6 (3 votes) · LW · GW

After rereading the sequence and reflecting on this further, I disagree with your interpretation of the Reframing Impact concept of impact. The concept is "change in my ability to get what I want", i.e. change in the true human utility function. This is a broad statement that does not specify how to measure "change", in particular what it is measured with respect to (the baseline) or how to take the difference from the baseline (e.g. whether to apply absolute value). Your interpretation of this statement uses the previous state as a baseline and does not apply an absolute value to the difference. This is a specific and nonstandard instantiation of the impact concept, and the undesirable property you described does not hold for other instantiations - e.g. using a stepwise inaction baseline and an absolute value: Impact(s, a) = |E[V(s, a)] - E[V(s, noop)]|. So I don't think it's fair to argue based on this instantiation that it doesn't make sense to regularize the RI notion of impact.

I think that AUP-the-method and RR are also instantiations of the RI notion of impact. These methods can be seen as approximating the change in the true human utility function (which is usually unknown) by using some set of utility functions (e.g. random ones) to cover the possible outcomes that could be part of the true human utility function. Thus, they instantiate the idealized notion of impact using the actually available information.

Comment by vika on Announcing Web-TAISU, May 13-17 · 2020-05-07T21:06:47.483Z · score: 8 (4 votes) · LW · GW

Thanks Linda for organizing, looking forward to it!

Comment by vika on (In)action rollouts · 2020-02-18T15:35:09.708Z · score: 4 (2 votes) · LW · GW

I don't understand this proposal so far. I'm particularly confused by the last paragraph in the "to get away" section:

  • What does it mean in this context for A to implement a policy? I thought A was building a subagent and then following forever, thus not following for any at any point.
  • If A follows for turns and then follows , how are and chosen?
  • It's not clear to me that SA can act to ensure the baseline value of for all values of and unless it does nothing.

I think it might help to illustrate this proposal in your original gridworld example to make it clearer what's going on. As far as I can tell so far, this does not address the issue I mentioned earlier where if the subagent actually achieves any of the auxiliary rewards, subagent creation will be penalized.

Comment by vika on Stepwise inaction and non-indexical impact measures · 2020-02-18T12:24:31.166Z · score: 4 (2 votes) · LW · GW

I don't think this requires identifying what a subagent is. You only need to be able to reliably identify the state before the subagent is created (i.e. the starting state), but you don't need to tell apart other states that are not the starting state.

I agree that we need to compare to the penalty if the subagent is not created - I just wanted to show that subagent creation does not avoid penalties. The penalty for subagent creation will reflect any impact the subagent actually causes in the environment (in the inaction rollouts).

As you mention in your other comment, creating a subagent is effectively switching from a stepwise inaction baseline to an inaction baseline for the rest of the episode. This can be beneficial for the agent because of the 'winding road' problem, where the stepwise baseline with inaction rollouts can repeatedly penalize actions (e.g. turning the wheel to stay on the road and avoid crashing) that are not penalized by the inaction baseline. This is a general issue with inaction rollouts that needs to be fixed.

Comment by vika on Stepwise inaction and non-indexical impact measures · 2020-02-17T22:11:48.106Z · score: 6 (3 votes) · LW · GW

I think this problem is about capturing delayed effects of the agent's actions. The way the stepwise baseline is supposed to penalize delayed effects is using inaction rollouts, which compare the effects of the agent action + k noops and the effects of k+1 noops (for all positive integers k). I don't think it's useful to consider this issue for a stepwise baseline with no rollouts, since that baseline fails to penalize any delayed effects (no subagents needed).

Here, the inaction rollouts don't capture the effects of the subagent, and thus the agent doesn't receive a penalty for creating the subagent. There are several factors that contribute to this:

1. The auxiliary reward R only rewards the agent itself pressing the red button, not the red button being pressed in general, so it would not be triggered by the subagent pressing it. 

2. The subagent never goes to the red button - it only goes to the blue button.

The agent would be penalized for creating the subagent if both 1 and 2 were false. If 1 was false, the auxiliary reward would be given whenever the red button was pressed. If 2 was also false, the inaction rollout after subagent creation would include a state where the subagent presses the red button, and so the agent would be penalized.

In the general case, the agent can only get away with creating subagents that don't achieve any auxiliary rewards, which I think limits the magnitude of this problem. The designer can choose auxiliary rewards (that are not achieved by the agent doing nothing) such that a subagent will achieve at least one of them. For example, if there is an auxiliary reward for reaching any state except the state before subagent creation, the no-subagent inaction rollout will consist of this state, and the subagent inaction rollout will contain a different state at least once, so subagent creation will be penalized. 
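
Here is a minimal sketch of the inaction-rollout comparison described above (the environment-model interface, the names, and the use of a simple absolute difference are my own illustrative choices, not the exact formulation from any paper):

```python
def stepwise_penalty_with_rollouts(model, state, action, noop, aux_rewards, horizon):
    """Stepwise inaction baseline with inaction rollouts: for each k, compare the
    effects of (action + k noops) against (k + 1 noops) on the auxiliary rewards."""
    penalty = 0.0
    for k in range(1, horizon + 1):
        # Future where the agent acts now, then does nothing for k steps.
        action_branch = model.rollout(state, [action] + [noop] * k)
        # Baseline future where the agent does nothing for k + 1 steps.
        inaction_branch = model.rollout(state, [noop] * (k + 1))
        for aux in aux_rewards:
            # A delayed effect only contributes if some auxiliary reward
            # responds to it somewhere in this comparison.
            penalty += abs(aux(action_branch[-1]) - aux(inaction_branch[-1]))
    return penalty
```

Under this comparison, the subagent in the example never triggers any auxiliary term (points 1 and 2 above), which is why its creation goes unpenalized.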

Comment by vika on Building and using the subagent · 2020-02-13T15:10:41.732Z · score: 6 (3 votes) · LW · GW

Thanks Stuart for your thought-provoking post! I think your point about the effects of the baseline choice on the subagent problem is very interesting, and it would be helpful to separate it more clearly from the effects of the deviation measure (which are currently a bit conflated in the table). I expect that AU with the inaction baseline would also avoid this issue, similarly to RR with an inaction baseline. I suspect that the twenty billion questions measure with the stepwise baseline would have the subagent issue too. 

I'm wondering whether this issue is entirely caused by the stepwise baseline (which is indexed on the agent, as you point out), or whether the optionality-based deviation measures (RR and AU) contribute to it as well. So far I'm adding this to my mental list of issues with the stepwise baseline (along with the "car on a winding road" scenario) that need to be fixed.

Comment by vika on Specification gaming examples in AI · 2019-12-20T16:22:35.363Z · score: 31 (8 votes) · LW · GW

I've been pleasantly surprised by how much this resource has caught on in terms of people using it and referring to it (definitely more than I expected when I made it). There were 30 examples on the list when it was posted in April 2018, and 20 new examples have been contributed through the form since then. I think the list has several properties that contributed to its wide adoption: it's fun, standardized, up-to-date, comprehensive, and collaborative.

Some of the appeal is that it's fun to read about AI cheating at tasks in unexpected ways (I've seen a lot of people post on Twitter about their favorite examples from the list). The standardized spreadsheet format seems easier to refer to as well. I think the crowdsourcing aspect is also helpful - this helps keep it current and comprehensive, and people can feel some ownership of the list since they can personally contribute to it. My overall takeaway from this is that safety outreach tools are more likely to be impactful if they are fun and easy for people to engage with.

This list had a surprising amount of impact relative to how little work it took me to put it together and maintain it. The hard work of finding and summarizing the examples was done by the people putting together the lists that the master list draws on (Gwern, Lehman, Olsson, Irpan, and others), as well as the people who submit examples through the form. What I do is put them together in a common format and clarify and/or shorten some of the summaries. I also curate the examples to determine whether they fit the definition of specification gaming (as opposed to simply a surprising behavior or solution). Overall, I've probably spent around 10 hours so far on creating and maintaining the list, which is not very much. This makes me wonder if there is other low-hanging fruit in the safety resources space that we haven't picked yet.

I have been using it both as an outreach and research tool. On the outreach side, the resource has been helpful for making the argument that safety problems are hard and need general solutions, by making salient just how many ways things could go wrong. When presented with an individual example of specification gaming, people often have a default reaction of "well, you can just close the loophole like this". It's easier to see that this approach does not scale when presented with 50 examples of gaming behaviors. Any given loophole can seem obvious in hindsight, but 50 loopholes are much less so. I've found this useful for communicating a sense of the difficulty and importance of Goodhart's Law.

On the research side, the examples have been helpful for trying to clarify the distinction between reward gaming and tampering problems. Reward gaming happens when the reward function is designed incorrectly (so the agent is gaming the design specification), while reward tampering happens when the reward function is implemented incorrectly or embedded in the environment (and so can be thought of as gaming the implementation specification). The boat race example is reward gaming, since the score function was defined incorrectly, while the Qbert agent finding a bug that makes the platforms blink and gives the agent millions of points is reward tampering. We don't currently have any real examples of the agent gaining control of the reward channel (probably because the action spaces of present-day agents are too limited), which seems qualitatively different from the numerous examples of agents exploiting implementation bugs.

I'm curious what people find the list useful for - as a safety outreach tool, a research tool or intuition pump, or something else? I'd also be interested in suggestions for improving the list (formatting, categorizing, etc). Thanks everyone who has contributed to the resource so far!

Comment by vika on Specification gaming examples in AI · 2019-12-17T13:53:39.394Z · score: 3 (2 votes) · LW · GW

Thanks Ben! I'm happy that the list has been a useful resource. A lot of credit goes to Gwern, who collected many examples that went into the specification gaming list: https://www.gwern.net/Tanks#alternative-examples.

Comment by vika on Thoughts on "Human-Compatible" · 2019-10-21T14:55:59.796Z · score: 5 (2 votes) · LW · GW

Yes, decoupling seems to address a broad class of incentive problems in safety, which includes the shutdown problem and various forms of tampering / wireheading. Other examples of decoupling include causal counterfactual agents and counterfactual reward modeling.

Comment by vika on Classifying specification problems as variants of Goodhart's Law · 2019-08-29T11:03:02.984Z · score: 9 (3 votes) · LW · GW

Thanks Evan, glad you found this useful! The connection with the inner/outer alignment distinction seems interesting. I agree that the inner alignment problem falls in the design-emergent gap. Not sure about the outer alignment problem matching the ideal-design gap though, since I would classify tampering problems as outer alignment problems, caused by flaws in the implementation of the base objective.

Comment by vika on Reversible changes: consider a bucket of water · 2019-08-29T10:50:59.927Z · score: 24 (8 votes) · LW · GW

I think the discussion of reversibility and molecules is a distraction from the core of Stuart's objection. I think he is saying that a value-agnostic impact measure cannot distinguish between the cases where the water in the bucket is or isn't valuable (e.g. whether it has sentimental value to someone).

If AUP is not value-agnostic, it is using human preference information to fill in the "what we want" part of your definition of impact, i.e. define the auxiliary utility functions. In this case I would expect you and Stuart to be in agreement.

If AUP is value-agnostic, it is not using human preference information. Then I don't see how the state representation/ontology invariance property helps to distinguish between the two cases. As discussed in this comment, state representation invariance holds over all representations that are consistent with the true human reward function. Thus, you can distinguish the two cases as long as you are using one of these reward-consistent representations. However, since a value-agnostic impact measure does not have access to the true reward function, you cannot guarantee that the state representation you are using to compute AUP is in the reward-consistent set. Then, you could fail to distinguish between the two cases, giving the same penalty for kicking a more or less valuable bucket.

Comment by vika on Reversible changes: consider a bucket of water · 2019-08-28T11:40:45.010Z · score: 6 (3 votes) · LW · GW

Thanks Stuart for the example. There are two ways to distinguish the cases where the agent should and shouldn't kick the bucket:

  • Relative value of the bucket contents compared to the goal is represented by the weight on the impact penalty relative to the reward. For example, if the agent's goal is to put out a fire on the other end of the pool, you would set a low weight on the impact penalty, which enables the agent to take irreversible actions in order to achieve the goal. This is why impact measures use a reward-penalty tradeoff rather than a constraint on irreversible actions.
  • Absolute value of the bucket contents can be represented by adding weights on the reachable states or attainable utility functions. This doesn't necessarily require defining human preferences or providing human input, since human preferences can be inferred from the starting state. I generally think that impact measures don't have to be value-agnostic, as long as they require less input about human preferences than the general value learning problem.
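
A compact way to see both knobs at once (my own notation, not from the comment): the agent optimizes something like

    reward(s, a) - lambda * sum_i w_i * |Q_i(s, a) - Q_i(s, noop)|

where lambda is the overall penalty weight from the first bullet (relative value of the bucket contents vs. the goal), and the per-function weights w_i on the attainable utilities Q_i correspond to the second bullet (absolute value of the contents), which could in principle be inferred from the starting state.
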
Comment by vika on Stable Pointers to Value: An Agent Embedded in Its Own Utility Function · 2019-08-19T14:23:29.748Z · score: 6 (3 votes) · LW · GW

Thanks Abram for this sequence - for some reason I wasn't aware of it until someone linked to it recently.

Would you consider the observation tampering (delusion box) problem as part of the easy problem, the hard problem, or a different problem altogether? I think it must be a different problem, since it is not addressed by observation-utility or approval-direction.

Comment by vika on The AI Timelines Scam · 2019-07-22T19:46:43.971Z · score: 50 (9 votes) · LW · GW

Definitely agree that the AI community is not biased towards short timelines. Long timelines are the dominant view, while the short timelines view is associated with hype. Many researchers are concerned about the field losing credibility (and funding) if the hype bubble bursts, and this is especially true for those who experienced the AI winters. They see the long timelines view as appropriately skeptical and more scientifically respectable.

Some examples of statements that AGI is far away from high-profile AI researchers:

Geoffrey Hinton: https://venturebeat.com/2018/12/17/geoffrey-hinton-and-demis-hassabis-agi-is-nowhere-close-to-being-a-reality/

Yann LeCun: https://www.facebook.com/yann.lecun/posts/10153426023477143 https://futurism.com/conscious-ai-decades-away https://www.facebook.com/yann.lecun/posts/10153368458167143

Yoshua Bengio: https://www.lesswrong.com/posts/4qPy8jwRxLg9qWLiG/yoshua-bengio-on-ai-progress-hype-and-risks

Rodney Brooks: https://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/ https://rodneybrooks.com/agi-has-been-delayed/

Comment by vika on TAISU - Technical AI Safety Unconference · 2019-07-06T10:31:39.952Z · score: 7 (4 votes) · LW · GW

Janos and I are coming for the weekend part of the unconference.

Comment by vika on Risks from Learned Optimization: Introduction · 2019-07-03T13:55:16.054Z · score: 10 (6 votes) · LW · GW

I'm confused about the difference between a mesa-optimizer and an emergent subagent. A "particular type of algorithm that the base optimizer might find to solve its task" or a "neural network that is implementing some optimization process" inside the base optimizer seem like emergent subagents to me. What is your definition of an emergent subagent?

Comment by vika on Best reasons for pessimism about impact of impact measures? · 2019-05-11T03:50:41.229Z · score: 6 (3 votes) · LW · GW

Thanks Rohin! Your explanations (both in the comments and offline) were very helpful and clarified a lot of things for me. My current understanding as a result of our discussion is as follows.

AU is a function of the world state, but intends to capture some general measure of the agent's influence over the environment that does not depend on the state representation.

Here is a hierarchy of objects, where each object is a function of the previous one: world states / microstates (e.g. quark configuration) -> observations (e.g. pixels) -> state representation / coarse-graining (which defines macrostates as equivalence classes over observations) -> featurization (a coarse-graining that factorizes into features). The impact measure is defined over the macrostates.

Consider the set of all state representations that are consistent with the true reward function (i.e. if two microstates have different true rewards, then their state representation is different). The impact measure is representation-invariant if it has the same values for any state representation in this reward-compatible set. (Note that if representation invariance was defined over the set of all possible state representations, this set would include the most coarse-grained representation with all observations in one macrostate, which would imply that the impact measure is always 0.) Now consider the most coarse-grained representation R that is consistent with the true reward function.

An AU measure defined over R would remain the same for a finer-grained representation. For example, if the attainable set contains a reward function that rewards having a vase in the room, and the representation is refined to distinguish green and blue vases, then macrostates with different-colored vases would receive the same reward. Thus, this measure would be representation-invariant. However, for an AU measure defined over a finer-grained representation (e.g. distinguishing blue and green vases), a random reward function in the attainable set could assign a different reward to macrostates with blue and green vases, and the resulting measure would be different from the measure defined over R.

An RR measure that only uses reachability functions of single macrostates is not representation-invariant, because the observations included in each macrostate depend on the coarse-graining. However, if we allow the RR measure to use reachability functions of sets of macrostates, then it would be representation-invariant if it is defined over R. Then a function that rewards reaching a macrostate with a vase can be defined in a finer-grained representation by rewarding macrostates with green or blue vases. Thus, both AU and this version of RR are representation-invariant iff they are defined over the most coarse-grained representation consistent with the true reward.
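
To summarize the property in symbols (my notation): let Phi be the set of coarse-grainings phi such that phi(o1) != phi(o2) whenever the true reward differs on o1 and o2, and let R be the most coarse-grained element of Phi. Then

    m is representation-invariant  iff  m_phi1 = m_phi2  for all phi1, phi2 in Phi

and both AU and the set-based version of RR satisfy this exactly when they are defined over R.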

Comment by vika on Best reasons for pessimism about impact of impact measures? · 2019-05-03T13:44:31.337Z · score: 6 (3 votes) · LW · GW

There are various parts of your explanation that I find vague and could use a clarification on:

  • "AUP is not about state" - what does it mean for a method to be "about state"? Same goes for "the direct focus should not be on the state" - what does "direct focus" mean here?
  • "Overfitting the environment" - I know what it means to overfit a training set, but I don't know what it means to overfit an environment.
  • "The long arms of opportunity cost and instrumental convergence" - what do "long arms" mean?
  • "Wirehead a utility function" - is this the same as optimizing a utility function?
  • "Cut out the middleman" - what are you referring to here?

I think these intuitive phrases may be a useful shorthand for someone who already understands what you are talking about, but since I do not understand, I have not found them illuminating.

I sympathize with your frustration about the difficulty of communicating these complex ideas clearly. I think the difficulty is caused by the vague language rather than missing key ideas, and making the language more precise would go a long way.

Comment by vika on Best reasons for pessimism about impact of impact measures? · 2019-05-02T17:01:46.746Z · score: 6 (3 votes) · LW · GW

Thanks for the detailed explanation - I feel a bit less confused now. I was not intending to express confidence about my prediction of what AU does. I was aware that I didn't understand the state representation invariance claim in the AUP proposal, though I didn't realize that it is as central to the proposal as you describe here.

I am still confused about what you mean by penalizing 'power' and what exactly it is a function of. The way you describe it here sounds like it's a measure of the agent's optimization ability that does not depend on the state at all. Did you mean that in the real world the agent always receives the same AUP penalty no matter which state it is in? If that is what you meant, then I'm not sure how to reconcile your description of AUP in the real world (where the penalty is not a function of the state) and AUP in an MDP (where it is a function of the state). I would find it helpful to see a definition of AUP in a POMDP as an intermediate case.

I agree with Daniel's comment that if AUP is not penalizing effects on the world, then it is confusing to call it an 'impact measure', and something like 'optimization regularization' would be better.

Since I still have lingering confusions after your latest explanation, I would really appreciate if someone else who understands this could explain it to me.

Comment by vika on Best reasons for pessimism about impact of impact measures? · 2019-04-22T17:36:14.246Z · score: 2 (1 votes) · LW · GW

> Are you thinking of an action observation formalism, or some kind of reward function over inferred state?

I don't quite understand what you're asking here, could you clarify?

> If you had to pose the problem of impact measurement as a question, what would it be?

Something along the lines of: "How can we measure to what extent the agent is changing the world in ways that we care about?". Why?

Comment by vika on Best reasons for pessimism about impact of impact measures? · 2019-04-20T13:23:06.578Z · score: 2 (1 votes) · LW · GW

> What does this mean, concretely? And what happens with the survival utility function being the sole member of the attainable set? Does this run into that problem, in your model?

I meant that for an attainable set consisting of random utility functions, I would expect most of the variation in utility to be based on irrelevant factors like the positions of air molecules. This does not apply to the attainable set consisting of the survival utility function, since that is not a random utility function.

> What makes you think that?

This is an intuitive claim based on a general observation of how people attribute responsibility. For example, if I walk into a busy street and get hit by a car, I will be considered responsible for this because it's easy to predict. On the other hand, if I am walking down the street and a brick falls on my head from the nearby building, then I will not be considered responsible, because this event would be hard to predict. There are probably other reasons that humans don't consider themselves responsible for butterfly effects.

Comment by vika on Best reasons for pessimism about impact of impact measures? · 2019-04-19T12:51:08.720Z · score: 15 (5 votes) · LW · GW

Thanks Alex for starting this discussion and thanks everyone for the thought-provoking answers. Here is my current set of concerns about the usefulness of impact measures, sorted in decreasing order of concern:

Irrelevant factors. When applied to the real world, impact measures are likely to be dominated by things humans don't care about (heat dissipation, convection currents, positions of air molecules, etc). This seems likely to happen to value-agnostic impact measures, e.g. AU with random utility functions, which would mostly end up rewarding specific configurations of air molecules.

This may be mitigated by inability to perceive the irrelevant factors, which results in a more coarse-grained state representation: if the agent can't see air molecules, all the states with different air molecule positions will look the same, as they do to humans. Some human-relevant factors can also be difficult to perceive, e.g. the presence of poisonous gas in the room, so we may not want to limit the agent's perception ability to human level. Automatically filtering out irrelevant factors does seem difficult, and I think this might imply that it is impossible to design an impact measure that is both useful and truly value-agnostic.

However, the value-agnostic criterion does not seem very important in itself. I think the relevant criterion is that designing impact measures should be easier than the general value learning problem. We already have a non-value-agnostic impact measure that plausibly satisfies this criterion: RLSP learns what is effectively an impact measure (the human theta parameter) using zero human input just by examining the starting state. This could also potentially be achieved by choosing an attainable utility set that rewards a broad enough sample of things humans care about, and leaves the rest to generalization. Choosing a good attainable utility set may not be easy but it seems unlikely to be as hard as the general value learning problem.

Butterfly effects. Every action is likely to have large effects that are difficult to predict, e.g. taking a different route to work may result in different people being born. Taken literally, this means that there is no such thing as a low-impact action. Humans get around this by only counting easily predictable effects as impact that they are considered responsible for. If we follow a similar strategy of not penalizing butterfly effects, we might incentivize the agent to deliberately cause butterfly effects. The easiest way around this that I can currently see is restricting the agent's capability to model the effects of its actions, though this has obvious usefulness costs as well.

Chaotic world. Every action, including inaction, is irreversible, and each branch contains different states. While preserving reversibility is impossible in this world, preserving optionality (attainable utility, reachability, etc) seems possible. For example, if the attainable set contains a function that rewards the presence of vases, the action of breaking a vase will make this reward function more difficult to satisfy (even if the states with/without vases are different in every branch). If we solve the problem of designing/learning a good utility set that is not dominated by irrelevant factors, I expect chaotic effects will not be an issue.

If any of the above-mentioned concerns are not overcome, impact measures will fail to distinguish between what humans would consider low-impact and high-impact. Thus, penalizing high-impact actions would come with penalizing low-impact actions as well, which would result in a strong safety-capability tradeoff. I think the most informative direction of research to figure out whether these concerns are a deal-breaker is to scale up impact measures to apply beyond gridworlds, e.g. to Atari games.

Comment by vika on Best reasons for pessimism about impact of impact measures? · 2019-04-11T15:16:37.410Z · score: 9 (5 votes) · LW · GW

I don't see how representation invariance addresses this concern. As far as I understand, the concern is about any actions in the real world causing large butterfly effects. This includes effects that would be captured by any reasonable representation, e.g. different people existing in the action and inaction branches of the world. The state representations used by humans also distinguish between these world branches, but humans have limited models of the future that don't capture butterfly effects (e.g. person X can distinguish between the world state where person Y exists and the world state where person Z exists, but can't predict that choosing a different route to work will cause person Z to exist instead of person Y).

I agree with Daniel that this is a major problem with impact measures. I think that to get around this problem we would either need to figure out how to distinguish butterfly effects from other effects (and then include all the butterfly effects in the inaction branch) or use a weak world model that does not capture butterfly effects (similarly to humans) for measuring impact. Even if we know how to do this, it's not entirely clear whether we should avoid penalizing butterfly effects. Unlike humans, AI systems would be able to cause butterfly effects on purpose, and could channel their impact through butterfly effects if they are not penalized.

Comment by vika on Specification gaming examples in AI · 2018-11-10T18:48:03.818Z · score: 4 (2 votes) · LW · GW

As a result of the recent attention, the specification gaming list has received a number of new submissions, so this is a good time to check out the latest version :).

Comment by vika on Discussion on the machine learning approach to AI safety · 2018-11-01T21:18:23.733Z · score: 2 (1 votes) · LW · GW

Awesome, thanks Oliver!

Comment by vika on Towards a New Impact Measure · 2018-10-12T16:01:15.758Z · score: 4 (2 votes) · LW · GW

Thanks, glad you liked the breakdown!

> The agent would have an incentive to stop anyone from doing anything new in response to what the agent did

I think that the stepwise counterfactual is sufficient to address this kind of clinginess: the agent will not have an incentive to take further actions to stop humans from doing anything new in response to its original action, since after the original action happens, the human reactions are part of the stepwise inaction baseline.

The penalty for the original action will take into account human reactions in the inaction rollout after this action, so the agent will prefer actions that result in humans changing fewer things in response. I'm not sure whether to consider this clinginess - if so, it might be useful to call it "ex ante clinginess" to distinguish from "ex post clinginess" (similar to your corresponding distinction for offsetting). The "ex ante" kind of clinginess is the same property that causes the agent to avoid scapegoating butterfly effects, so I think it's a desirable property overall. Do you disagree?

Comment by vika on Alignment Newsletter #25 · 2018-09-25T16:36:39.922Z · score: 5 (3 votes) · LW · GW

Thanks Rohin for a great summary as always!

I think the property of handling shutdown depends on the choice of absolute value or truncation at 0 in the deviation measure, not the choice of the core part of the deviation measure. RR doesn't handle shutdown because by default it is set to only penalize reductions in reachability (using truncation at 0). I would expect that replacing the truncation with absolute value (thus penalizing increases in reachability as well) would result in handling shutdown (but break the asymmetry property from the RR paper). Similarly, AUP could be modified to only penalize reductions in goal-achieving ability by replacing the absolute value with truncation, which I think would make it satisfy the asymmetry property but not handle shutdown.
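
Writing d = (baseline value) - (current value) for the core deviation term (my notation), the two choices are:

    penalty = |d|          penalizes both decreases and increases; handles shutdown, breaks the asymmetry property
    penalty = max(0, d)    penalizes only decreases; satisfies asymmetry, does not handle shutdown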

More thoughts on independent design choices here.

Comment by vika on Towards a New Impact Measure · 2018-09-24T18:39:33.005Z · score: 21 (9 votes) · LW · GW

There are several independent design choices made by AUP, RR, and other impact measures, which could potentially be used in any combination. Here is a breakdown of design choices and what I think they achieve:

Baseline

  • Starting state: used by reversibility methods. Results in interference with other agents. Avoids ex post offsetting.
  • Inaction (initial branch): default setting in Low Impact AI and RR. Avoids interfering with other agents' actions, but interferes with their reactions. Does not avoid ex post offsetting if the penalty for preventing events is nonzero.
  • Inaction (stepwise branch) with environment model rollouts: default setting in AUP, model rollouts are necessary for penalizing delayed effects. Avoids interference with other agents and ex post offsetting.

Core part of deviation measure

  • AUP: difference in attainable utilities between baseline and current state
  • RR: difference in state reachability between baseline and current state
  • Low impact AI: distance between baseline and current state

Function applied to core part of deviation measure

  • Absolute value: default setting in AUP and Low Impact AI. Results in penalizing both increases and reductions relative to the baseline. This results in avoiding the survival incentive (satisfying the Corrigibility property given in the AUP post) and in equal penalties for preventing and causing the same event (violating the Asymmetry property given in the RR paper).
  • Truncation at 0: default setting in RR, results in penalizing only reductions relative to the baseline. This results in unequal penalties for preventing and causing the same event (satisfying the Asymmetry property) and in not avoiding the survival incentive (violating the Corrigibility property).

Scaling

  • Hand-tuned: default setting in RR (sort of provisionally)
  • ImpactUnit: used by AUP

I think an ablation study is needed to try out different combinations of these design choices and investigate which of them contribute to which desiderata / experimental test cases. I intend to do this at some point (hopefully soon).
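
One way to set up the ablation is to factor the penalty into the interchangeable pieces listed above; a minimal sketch (all interfaces and names here are hypothetical):

```python
import numpy as np

def deviation_penalty(aux_current, aux_baseline, summary, scale):
    """Generic impact penalty for an ablation study.

    aux_current, aux_baseline: auxiliary values (attainable utilities, state
        reachabilities, or state features) in the current state and in whichever
        baseline is being tested (starting state, initial-branch inaction, or
        stepwise inaction with rollouts).
    summary: abs for symmetric AUP-style penalties, or (lambda d: max(0.0, d))
        for RR-style penalties on decreases only, applied to d = baseline - current.
    scale: a hand-tuned weight or an ImpactUnit-style normalizer.
    """
    diffs = np.asarray(aux_baseline) - np.asarray(aux_current)
    return scale * float(np.mean([summary(d) for d in diffs]))

# Same auxiliary values, two summary functions:
aup_style = deviation_penalty([0.4, 0.9], [0.5, 0.6], summary=abs, scale=1.0)           # 0.2
rr_style = deviation_penalty([0.4, 0.9], [0.5, 0.6],
                             summary=lambda d: max(0.0, d), scale=1.0)                   # 0.05
```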

Comment by vika on Towards a New Impact Measure · 2018-09-23T19:52:53.781Z · score: 2 (1 votes) · LW · GW

Another issue with equally penalizing decreases and increases in power (as AUP does) is that for any event A, it equally penalizes the agent for causing event A and for preventing event A (violating property 3 in the RR paper). I originally thought that satisfying Property 3 is necessary for avoiding ex post offsetting, which is actually not the case (ex post offsetting is caused by penalizing the given action on future time steps, which the stepwise inaction baseline avoids). However, I still think it's bad for an impact measure to not distinguish between causation and prevention, especially for irreversible events.

This comes up in the car driving example already mentioned in other comments on this post. The reason the action of keeping the car on the highway is considered "high-impact" is because you are penalizing prevention as much as causation. Your suggested solution of using a single action to activate a self-driving car for the whole highway ride is clever, but has some problems:

  • This greatly reduces the granularity of the penalty, making credit assignment more difficult.
  • This effectively uses the initial-branch inaction baseline (branching off when the self-driving car is launched) instead of the stepwise inaction baseline, which means getting clinginess issues back, in the sense of the agent being penalized for human reactions to the self-driving car.
  • You may not be able to predict in advance when the agent will encounter situations where the default action is irreversible or otherwise undesirable.
  • In such situations, the penalty will produce bad incentives. Namely, the penalty for staying on the road is proportionate to how bad a crash would be, so the tradeoff with goal achievement resolves in an undesirable way. If we keep the reward for the car arriving to its destination constant, then as we increase the badness of a crash (e.g. the number of people on the side of the road who would be run over if the agent took a noop action), eventually the penalty wins in the tradeoff with the reward, and the agent chooses the noop. I think it's very important to avoid this failure mode.
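
To illustrate the failure mode in the last bullet with made-up numbers (not from the original comment): suppose the reward for arriving at the destination is 10 and the penalty weight is 1, so the penalty for staying on the road roughly tracks the badness of the crash that the noop would cause.

    crash badness 5:    arrive = 10 - 5 = 5     >  noop = 0   ->  agent stays on the road
    crash badness 50:   arrive = 10 - 50 = -40  <  noop = 0   ->  agent takes the noop and crashes

For any fixed reward there is some level of crash badness beyond which the penalized agent prefers the noop.
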
Comment by vika on Towards a New Impact Measure · 2018-09-23T19:49:05.917Z · score: 6 (3 votes) · LW · GW

Actually, I think it was incorrect of me to frame this issue as a tradeoff between avoiding the survival incentive and not crippling the agent's capability. What I was trying to point at is that the way you are counteracting the survival incentive is by penalizing the agent for increasing its power, and that interferes with the agent's capability. I think there may be other ways to counteract the survival incentive without crippling the agent, and we should look for those first before agreeing to pay such a high price for interruptibility. I generally believe that 'low impact' is not the right thing to aim for, because ultimately the goal of building AGI is to have high impact - high beneficial impact. This is why I focus on the opportunity-cost-incurring aspect of the problem, i.e. avoiding side effects.

Note that AUP could easily be converted to a side-effects-only measure by replacing the |difference| with a max(0, difference). Similarly, RR could be converted to a measure that penalizes increases in power by doing the opposite (replacing max(0, difference) with |difference|). (I would expect that variant of RR to counteract the survival incentive, though I haven't tested it yet.) Thus, it may not be necessary to resolve the disagreement about whether it's good to penalize increases in power, since the same methods can be adapted to both cases.

Comment by vika on Towards a New Impact Measure · 2018-09-20T19:32:36.570Z · score: 3 (2 votes) · LW · GW

> If the agent isn’t overcoming obstacles, we can just increase N.

Wouldn't increasing N potentially increase the shutdown incentive, given the tradeoff between shutdown incentive and overcoming obstacles?

> I think eliminating this survival incentive is extremely important for this kind of agent, and arguably leads to behaviors that are drastically easier to handle.

I think we have a disagreement here about which desiderata are more important. Currently I think it's more important for the impact measure not to cripple the agent's capability, and the shutdown incentive might be easier to counteract using some more specialized interruptibility technique rather than an impact measure. Not certain about this though - I think we might need more experiments on more complex environments to get some idea of how bad this tradeoff is in practice.

> And why is this, given that the inputs are histories? Why can’t we simply measure power?

Your measurement of "power" (I assume you mean ?) needs to be grounded in the real world in some way. The observations will be raw pixels or something similar, while the utilities and the environment model will be computed in terms of some sort of higher-level features or representations. I would expect the way these higher-level features are chosen or learned to affect the outcome of that computation.

> I discussed in "Utility Selection" and "AUP Unbound" why I think this actually isn’t the case, surprisingly. What are your disagreements with my arguments there?

I found those sections vague and unclear (after rereading a few times), and didn't understand why you claim that a random set of utility functions would work. E.g. what do you mean by "long arms of opportunity cost and instrumental convergence"? What does the last paragraph of "AUP Unbound" mean and how does it imply the claim?

> Oops, noted. I had a distinct feeling of "if I’m going to make claims this strong in a venue this critical about a topic this important, I better provide strong support".

Providing strong support is certainly important, but I think it's more about clarity and precision than quantity. Better to give one clear supporting statement than many unclear ones :).

Comment by vika on Towards a New Impact Measure · 2018-09-20T16:26:03.000Z · score: 14 (5 votes) · LW · GW

Great work! I like the extensive set of desiderata and test cases addressed by this method.

The biggest difference from relative reachability, as I see it, is that you penalize increasing the ability to achieve goals, as well as decreasing it. I'm not currently sure whether this is a good idea: while it indeed counteracts instrumental incentives, it could also "cripple" the agent by incentivizing it to settle for more suboptimal solutions than necessary for safety.

For example, the shutdown button in the "survival incentive" gridworld could be interpreted as a supervisor signal (in which case the agent should not disable it) or as an obstacle in the environment (in which case the agent should disable it). Simply penalizing the agent for increasing its ability to achieve goals leads to incorrect behavior in the second case. To behave correctly in both cases, the agent needs more information about the source of the obstacle, which is not provided in this gridworld (the Safe Interruptibility gridworld has the same problem).

Another important difference is that you are using a stepwise inaction baseline (branching off at each time step rather than at the initial time step) and predicting future effects using an environment model. I think this is an improvement on the initial-branch inaction baseline, which avoids clinginess towards independent human actions but not towards human reactions to the agent's actions. The environment model addresses the stepwise baseline's failure to penalize delayed effects, though only for effects the model actually predicts (e.g. a delayed effect that takes place beyond the model's planning horizon will not be penalized). I think the stepwise baseline + environment model combination could similarly be used in conjunction with relative reachability.
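
As a rough illustration of the difference (all names here are assumed, including the `env_model.rollout` helper; this is not the post's implementation):

```python
# Sketch contrasting the two inaction baselines. env_model.rollout(state, policy, horizon)
# is an assumed helper that predicts a sequence of future states under a policy.

def initial_inaction_baseline(env_model, initial_state, noop_policy, t):
    # Branch off at the start of the episode: t steps of doing nothing from the initial state.
    return env_model.rollout(initial_state, noop_policy, horizon=t)[-1]

def stepwise_inaction_baseline(env_model, current_state, noop_policy, lookahead):
    # Branch off at the current time step: do nothing from now on, rolled forward so that
    # delayed effects within the model's planning horizon show up in the comparison
    # (effects beyond the horizon still escape the penalty).
    return env_model.rollout(current_state, noop_policy, horizon=lookahead)[-1]
```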

I agree with Charlie that you are giving out checkmarks for the desiderata a bit too easily :). For example, I'm not convinced that your approach is representation-agnostic. It strongly depends on your choice of the set of utility functions and environment model, and those have to be expressed in terms of the state of the world. (Note that the utility functions in your examples, such as u_closet and u_left, are defined in terms of reaching a specific state.) I don't think your method can really get away from making a choice of state representation.

Your approach might have the same problem as other value-agnostic approaches (including relative reachability) with mostly penalizing irrelevant impacts. The AUP measure seems likely to give most of its weight to utility functions that are irrelevant to humans, while the RR measure could give most of its weight to preserving reachability of irrelevant states. I don't currently know a way around this that's not value-laden.

Meta point: I think it would be valuable to have a more concise version of this post that introduces the key insight earlier on, since I found it a bit verbose and difficult to follow. The current writeup seems to be structured according to the order in which you generated the ideas, rather than an order that would be more intuitive to readers. FWIW, I had the same difficulty when writing up the relative reachability paper, so I think it's generally challenging to clearly present ideas about this problem.

Comment by vika on Overcoming Clinginess in Impact Measures · 2018-07-18T16:52:47.605Z · score: 4 (2 votes) · LW · GW

I've thought some more about the step-wise inaction counterfactual, and I think there are more issues with it beyond the human manipulation incentive. With the step-wise counterfactual, future transitions that are caused by the agent's current actions will not be penalized, since by the time those transitions happen, they are included in the counterfactual. Thus, there is no penalty for a current transition that sets in motion some effects that don't happen immediately (this includes influencing humans), unless the whitelisting process takes into account that this transition causes these effects (e.g. using a causal model).

For example, if the agent puts a vase on a conveyor belt (which results in the vase breaking a few time steps later), it would only be penalized if the "vase near belt -> vase on belt" transition is not in the whitelist, i.e. if the whitelisting process takes into account that the belt would eventually break the vase. There are also situations where penalizing the "vase near belt -> vase on belt" transition would not make sense, e.g. if the agent works in a vase-making factory and the conveyor belt takes the vase to the next step in the manufacturing process. Thus, for this penalty to reliably work, the whitelisting process needs to take into account accurate task-specific causal information, which I think is a big ask. The agent would also not be penalized for butterfly effects that are difficult to model, so it would have an incentive to channel its impact through butterfly effects of whitelisted transitions.

Comment by vika on Overcoming Clinginess in Impact Measures · 2018-07-09T12:36:18.955Z · score: 2 (1 votes) · LW · GW
Let's consider an alternate form of whitelisting, where we instead know the specific object-level transitions per time step that would have occurred in the naive counterfactual (where the agent does nothing). Discarding the whitelist, we instead penalize distance from the counterfactual latent-space transitions at that time step.

How would you define a distance measure on transitions? Since this would be a continuous measure of how good transitions are, rather than a discrete list of good transitions, in what sense is it a form of whitelisting?

This basically locks us into a particular world-history. While this might be manipulation- and stasis-free, this is a different kind of clinginess. You're basically saying "optimize this utility the best you can without letting there be an actual impact". However, I actually hadn't thought of this formulation before, and it's plausible it's even more desirable than whitelisting, as it seems to get us a low/no-impact agent semi-robustly. The trick is then allowing favorable effects to take place without getting back to stasis/manipulation.

I expect that in complex tasks where we don't know the exact actions we would like the agent to take, this would prevent the agent from being useful or coming up with new unforeseen solutions. I have this concern about whitelisting in general, though giving the agent the ability to query the human about non-whitelisted effects is an improvement. The distance measure on transitions could also be traded off with reward (or some other task-specific objective function), so if an action is sufficiently useful for the task, the high reward would dominate the distance penalty.
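
As a minimal sketch of that tradeoff (the `distance` function and the penalty weight are hypothetical):

```python
PENALTY_WEIGHT = 1.0  # assumed hyperparameter controlling the reward/penalty tradeoff

def penalized_return(task_reward, observed_transitions, counterfactual_transitions, distance):
    # Penalize deviation from the counterfactual transitions, but let a sufficiently
    # high task reward outweigh the penalty for genuinely useful actions.
    penalty = sum(distance(observed, counterfactual)
                  for observed, counterfactual in zip(observed_transitions,
                                                      counterfactual_transitions))
    return task_reward - PENALTY_WEIGHT * penalty
```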

This would still have offsetting issues though. In the asteroid example, if the agent deflects the asteroid, then future transitions (involving human actions) are very different from default transitions (involving no human actions), so the agent would have an offsetting incentive.

Comment by vika on Overcoming Clinginess in Impact Measures · 2018-07-06T09:57:16.083Z · score: 6 (3 votes) · LW · GW

I like the proposed iterative formulation for the step-wise inaction counterfactual, though I would replace pi_Human with pi_Environment to account for environment processes that are not humans but can still "react" to the agent's actions. The step-wise counterfactual also improves over the naive inaction counterfactual by avoiding repeated penalties for the same action, which could help avoid offsetting behaviors for a penalty that includes reversible effects.

However, as you point out, not penalizing the agent for human reactions to its actions introduces a manipulation incentive for the agent to channel its effects through humans, which seems potentially very bad. The tradeoff you identified is quite interesting, though I'm not sure whether penalizing the agent for human reactions necessarily leads to an incentive to put humans in stasis, since that is also quite a large effect (such a penalty could instead incentivize the agent to avoid undue influence on humans, which seems good). I think there might be a different tradeoff (for a penalty that incorporates reversible effects): between avoiding offsetting behaviors (where the stepwise counterfactual likely succeeds and the naive inaction counterfactual can fail) and avoiding manipulation incentives (where the stepwise counterfactual fails and the naive inaction counterfactual succeeds). I wonder if some sort of combination of these two counterfactuals could get around the tradeoff.

Comment by vika on Worrying about the Vase: Whitelisting · 2018-06-22T15:37:17.259Z · score: 22 (6 votes) · LW · GW

Interesting work! Seems closely related to this recent paper from Satinder Singh's lab: Minimax-Regret Querying on Side Effects for Safe Optimality in Factored Markov Decision Processes. They also use whitelists to specify which features of the state the agent is allowed to change. Since whitelists can be unnecessarily restrictive, and finding a policy that completely obeys the whitelist can be intractable in large MDPs, they have a mechanism for the agent to query the human about changing a small number of features outside the whitelist. What are the main advantages of your approach over their approach?

I agree with Abram that clinginess (the incentive to interfere with irreversible processes) is a major issue for the whitelist method. It might be possible to get around this by using an inaction baseline, i.e. only penalizing non-whitelisted transitions if they were caused by the agent, and would not have happened by default. This requires computing the inaction baseline (the state sequence under some default policy where the agent "does nothing"), e.g. by simulating the environment or using a causal model of the environment.
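
A minimal sketch of that combination (hypothetical data structures; the baseline transitions would come from simulating the default policy or from a causal model):

```python
def whitelist_penalty(observed_transitions, baseline_transitions, whitelist):
    # A non-whitelisted transition is only penalized if it would not have happened
    # by default, i.e. it is attributable to the agent.
    penalty = 0
    for transition in observed_transitions:
        if transition in whitelist:
            continue  # explicitly allowed change
        if transition in baseline_transitions:
            continue  # would have happened anyway under the "do nothing" policy
        penalty += 1
    return penalty
```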

I'm not convinced that whitelisting avoids the offsetting problem: "Making up for bad things it prevents with other negative side effects. Imagine an agent which cures cancer, yet kills an equal number of people to keep overall impact low." I think this depends on how extensive the whitelist is: whether it includes all the important long-term consequences of achieving the goal (e.g. increasing life expectancy). Capturing all of the relevant consequences in the whitelist seems hard.

The directedness of whitelists is a very important property, because it can produce an asymmetric impact measure that distinguishes between causing irreversible effects and preventing irreversible events.

Comment by vika on DeepMind article: AI Safety Gridworlds · 2018-01-20T16:04:45.432Z · score: 14 (4 votes) · LW · GW

I think the DeepMind founders care a lot about AI safety (e.g. Shane Legg is a coauthor of the paper). Regarding the overall culture, I would say that the average DeepMind researcher is somewhat more interested in safety than the average ML researcher in general.

Comment by vika on DeepMind article: AI Safety Gridworlds · 2018-01-19T16:39:32.757Z · score: 15 (4 votes) · LW · GW

(paper coauthor here) When you ask whether the paper indicates that DeepMind is paying attention to AI risk, are you referring to DeepMind's leadership, AI safety team, the overall company culture, or something else?

Comment by vika on Announcement: AI alignment prize winners and next round · 2018-01-19T16:35:26.756Z · score: 7 (2 votes) · LW · GW

The distinction between papers and blog posts is getting weaker these days - e.g. distill.pub is an ML blog with the shining light of Ra that's intended to be well-written and accessible.

Comment by vika on MILA gets a grant for AI safety research · 2017-07-25T21:12:44.244Z · score: 1 (1 votes) · LW · GW

Yes. He runs AI safety meetups at MILA, and played a significant role in getting Yoshua Bengio more interested in safety.

Comment by vika on Minimizing Empowerment for Safety · 2017-03-08T14:51:19.000Z · score: 0 (0 votes) · LW · GW

I would expect minimizing empowerment to impede the agent in achieving its objectives. You do want the agent to have large effects on some parts of the environment that are relevant to its objectives, without being incentivized to negate those effects in weird ways in order to achieve low impact overall.

I think we need something like a sparse empowerment constraint, where you minimize empowerment over most (but not all) dimensions of the future outcomes.
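
A very rough sketch of what I have in mind (all names are assumed; `estimate_empowerment` stands in for some estimator of the agent's empowerment, i.e. the mutual information between its actions and future outcomes, restricted to a subset of state dimensions):

```python
def sparse_empowerment_penalty(estimate_empowerment, state, irrelevant_dims, weight=1.0):
    # Penalize empowerment only over the task-irrelevant dimensions, leaving the
    # task-relevant dimensions unconstrained.
    return weight * estimate_empowerment(state, dims=irrelevant_dims)

def shaped_reward(task_reward, estimate_empowerment, state, irrelevant_dims, weight=1.0):
    # Subtracting the penalty discourages the agent from gaining control over the
    # irrelevant dimensions without blocking impact on the relevant ones.
    return task_reward - sparse_empowerment_penalty(
        estimate_empowerment, state, irrelevant_dims, weight)
```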

Comment by vika on Using humility to counteract shame · 2016-04-19T01:13:13.449Z · score: 1 (1 votes) · LW · GW

Thanks for the link to your post. I also think we only disagree on definitions.

I agree that self-compassion is a crucial ingredient. This is the distinction I was pointing at with "while focusing on imperfections without compassion can lead to beating yourself up". Humility says "I am flawed and it's ok", while self-loathing is more like "I am flawed and I should be punished". The latter actually generates shame instead of reducing it.

I think that seeking external validation by appearing humble is completely orthogonal to humility as an internal state or attitude you can take towards yourself (my post focuses on the latter). This signaling / social dimension of humility seems to add a lot of confusion to an already fuzzy concept.

Comment by vika on Negative visualization, radical acceptance and stoicism · 2016-04-17T18:46:30.590Z · score: 0 (0 votes) · LW · GW

Thanks, I'll try out the meditation!

Comment by vika on To contribute to AI safety, consider doing AI research · 2016-01-30T20:24:40.188Z · score: 3 (3 votes) · LW · GW

I would recommend doing a CS PhD and taking statistics courses, rather than doing a statistics PhD.

For examples of promising research areas, I recommend taking a look at the work of FLI grantees. I'm personally working on the interpretability of neural nets, which seems important if they become a component of advanced AI. There's not that much overlap between MIRI's work and mainstream CS, so I'd recommend a broader focus.

Research experience is always helpful, though it's harder to get if you are working full time in industry. If your company has any machine learning research projects, you could try to get involved in those. Taking machine learning / stats courses and doing well in them is also helpful for admission. The Math GRE subject test probably helps (though I'm not sure how much) if you have a really good score.

Comment by vika on Yoshua Bengio on AI progress, hype and risks · 2016-01-30T04:59:54.904Z · score: 9 (9 votes) · LW · GW

The above-mentioned researchers are skeptical in different ways. Andrew Ng thinks that human-level AI is ridiculously far away, and that trying to predict the future more than 5 years out is useless. Yann LeCun and Yoshua Bengio believe that advanced AI is far from imminent, but approve of people thinking about long-term AI safety.

"Okay, but surely it’s still important to think now about the eventual consequences of AI." - "Absolutely. We ought to be talking about these things."