Reframing Impact

post by TurnTrout · 2019-09-20T19:03:27.898Z · LW · GW · 15 comments


Technical Appendix: First safeguard?

This sequence is written to be broadly accessible, although perhaps its focus on capable AI systems assumes familiarity with basic arguments for the importance of AI alignment. The technical appendices are an exception, targeting the technically inclined.

Why do I claim that an impact measure would be "the first proposed safeguard which maybe actually stops a powerful agent with an imperfect objective from ruining things – without assuming anything about the objective"?

The safeguard proposal shouldn't have to say "and here we solve this opaque, hard problem, and then it works". If we have the impact measure, we have the math, and then we have the code.

So what about:

Notes

The best way to use this book is NOT to simply read it or study it, but to read a question and STOP. Even close the book. Even put it away and THINK about the question. Only after you have formed a reasoned opinion should you read the solution. Why torture yourself thinking? Why jog? Why do push-ups?

If you are given a hammer with which to drive nails at the age of three you may think to yourself, "OK, nice." But if you are given a hard rock with which to drive nails at the age of three, and at the age of four you are given a hammer, you think to yourself, "What a marvellous invention!" You see, you can't really appreciate the solution until you first appreciate the problem.

~ Thinking Physics

15 comments


comment by habryka (habryka4) · 2020-03-03T19:55:04.112Z · LW(p) · GW(p)

Promoted to curated: I really liked this sequence. I think in many ways it has helped me think about AI Alignment from a new perspective, and I really like the illustrations and the way it was written, and how it actively helped me think about the problems along the way, instead of just passively telling me solutions.

Now that the sequence is complete, it seemed like a good time to curate the first post in the sequence. 

comment by NaiveTortoise (An1lam) · 2019-09-20T22:38:11.510Z · LW(p) · GW(p)

I enjoyed the post and in particular really liked the illustrated format. Definitely planning to read the rest!

I'm now wishing more technical blog posts were illustrated like this...

Replies from: Raemon
comment by Raemon · 2019-09-20T22:57:37.911Z · LW(p) · GW(p)

Checking that you've read the Embedded Agency sequence?

Replies from: An1lam
comment by NaiveTortoise (An1lam) · 2019-09-21T01:33:27.951Z · LW(p) · GW(p)

Yup, I have (and the untrollable mathematician one). I dashed off that comment but really meant something like, "I hope this trend takes off."

Replies from: ramana-kumar
comment by Ramana Kumar (ramana-kumar) · 2019-10-22T14:53:51.965Z · LW(p) · GW(p)

One misgiving I have about the illustrated format is that it's less accessible than text. I hope the authors of work in this format keep the needs of a wide variety of readers in mind.

Replies from: TurnTrout
comment by TurnTrout · 2019-10-22T15:37:11.792Z · LW(p) · GW(p)

Accessible in what way? I'm planning to put up a full text version at the end.

EDIT: I haven't done this yet, unfortunately. I still want to do it.

Replies from: niplav
comment by niplav · 2021-04-19T20:56:06.464Z · LW(p) · GW(p)

If the question about accessibility hasn't been resolved, I think Ramana Kumar was talking about making the text readable for people with visual impairments.

comment by Rohin Shah (rohinmshah) · 2020-12-02T18:57:56.139Z · LW(p) · GW(p)

I'm nominating the entire sequence because it's brought a lot of conceptual clarity to the notion of "impact", and has allowed me to be much more precise in things I say about "impact".

comment by Logan Riggs (elriggs) · 2020-12-09T23:29:51.567Z · LW(p) · GW(p)

This post (or sequence of posts) not only gave me a better handle on impact and what that means for agents, but it is also a concrete example of de-confusion work. The execution of the explanations gives an "obvious in hindsight" feeling, with "5-minute timer"-like questions which pushed me to actually try to solve the open question of an impact measure. It has even inspired me to apply this approach to other topics in my life that had previously confused me; it gave me the tools and a model to follow.

And, the illustrations are pretty fun and engaging, too.

comment by Zack_M_Davis · 2019-09-22T03:35:22.125Z · LW(p) · GW(p)

(I was briefly confused by the "Think about what Frank brings us for each distance" "slide" because it doesn't include the pinkest marble: I saw the second-pinkest marble (on the largest dotted circle) thinking that it was meant to be the pinkest (because it's rightmost on the "Search radius" legend) and was like, "Wait, why is the pinkest marble closer than the terrorist in this slide when it was farther away in the previous slide?")

Replies from: TurnTrout
comment by TurnTrout · 2019-09-22T03:52:16.894Z · LW(p) · GW(p)

Yeah, the Maximum Pink marble has a sheen on it, but outside of that admittedly obscure cue... there's only so many gradations of pink you can tell apart at once.

comment by jacobjacob · 2021-01-09T23:46:29.575Z · LW(p) · GW(p)

Here are prediction questions for the predictions that TurnTrout himself provided in the concluding post of the Reframing Impact sequence.

comment by sayan · 2019-09-21T06:49:04.336Z · LW(p) · GW(p)

I think this post is broadly making two claims:

  1. Impactful things fundamentally feel different.

  2. A good Impact Measure should be designed so that it strongly safeguards against almost any imperfect objective.

It is also (maybe implicitly) claiming that the three properties mentioned completely specify a good impact measure.

I am looking forward to reading the rest of the sequence with arguments supporting these claims.

Replies from: TurnTrout
comment by TurnTrout · 2019-09-21T15:53:17.192Z · LW(p) · GW(p)

It is also (maybe implicitly) claiming that the three properties mentioned completely specify a good impact measure.

I don't know that I'd claim that these completely specify a good impact measure, but I'd imagine most impact measures satisfying these properties are good (i.e. natural curves fit to those three points end up pretty good, I think).

comment by Gurkenglas · 2019-09-21T10:32:39.015Z · LW(p) · GW(p)

I propose to measure impact by counting bits of optimization power, as in my Oracle question contest submission. Find some distribution over plans we might use if we didn't have an AI, such as stock market trading policies. Have the AI output a program that outputs plans according to some distribution. Measure impact by computing a divergence between the two distributions, such as the maximum pointwise quotient - if no plan becomes more than twice as likely, that's no more than one bit of optimization power. Note that the AI is incentivized to prove its output's impact bound to some dumb proof checker. If the AI cuts away the unprofitable half of policies, that is more than enough to get stupid rich.
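To make the "maximum pointwise quotient" idea concrete, here is a minimal sketch (my own illustration, not Gurkenglas's actual submission) of measuring optimization power in bits over a finite set of candidate plans. The function name and the toy distributions are made up for the example; it simply takes the base-2 log of the worst-case ratio between the AI's plan distribution and the baseline distribution.

```python
import numpy as np

def optimization_power_bits(baseline: np.ndarray, ai_dist: np.ndarray) -> float:
    """Return log2 of the maximum pointwise ratio ai_dist / baseline.

    Both inputs are probability vectors over the same finite set of plans.
    If no plan becomes more than twice as likely under the AI's distribution,
    the result is at most 1 bit.
    """
    # Assumes the baseline assigns nonzero probability to every plan.
    ratios = ai_dist / baseline
    return float(np.log2(ratios.max()))

# Toy example: baseline is uniform over 4 candidate trading policies;
# the AI shifts probability toward the first one.
baseline = np.array([0.25, 0.25, 0.25, 0.25])
ai_dist  = np.array([0.50, 0.30, 0.10, 0.10])
print(optimization_power_bits(baseline, ai_dist))  # 1.0 bit: one plan doubled in probability
```

Because the bound is the worst-case ratio over all plans, restricting a uniform baseline to its more profitable half (so each remaining plan doubles in probability) costs exactly one bit, which matches the comment's closing example.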