Reframing Impact

post by TurnTrout · 2019-09-20T19:03:27.898Z · LW · GW · 11 comments


Technical Appendix: First safeguard?

This sequence is written to be broadly accessible, although perhaps its focus on capable AI systems assumes familiarity with basic arguments for the importance of AI alignment. The technical appendices are an exception, targeting the technically inclined.

Why do I claim that an impact measure would be "the first proposed safeguard which maybe actually stops a powerful agent with an imperfect [LW · GW] objective from ruining things – without assuming anything about the objective"?

The safeguard proposal shouldn't have to say "and here we solve this opaque, hard problem, and then it works". If we have the impact measure, we have the math, and then we have the code.

So what about:

Notes

The best way to use this book is NOT to simply read it or study it, but to read a question and STOP. Even close the book. Even put it away and THINK about the question. Only after you have formed a reasoned opinion should you read the solution. Why torture yourself thinking? Why jog? Why do push-ups?
If you are given a hammer with which to drive nails at the age of three you may think to yourself, "OK, nice." But if you are given a hard rock with which to drive nails at the age of three, and at the age of four you are given a hammer, you think to yourself, "What a marvellous invention!" You see, you can't really appreciate the solution until you first appreciate the problem.
~ Thinking Physics

11 comments

Comments sorted by top scores.

comment by habryka (habryka4) · 2020-03-03T19:55:04.112Z · LW(p) · GW(p)

Promoted to curated: I really liked this sequence. I think in many ways it has helped me think about AI Alignment from a new perspective, and I really like the illustrations and the way it was written, and how it actively helped me think about the problems along the way, instead of just passively telling me solutions.

Now that the sequence is complete, it seemed like a good time to curate the first post in the sequence. 

comment by NaiveTortoise (An1lam) · 2019-09-20T22:38:11.510Z · LW(p) · GW(p)

I enjoyed the post and in particular really liked the illustrated format. Definitely planning to read the rest!

I'm now wishing more technical blog posts were illustrated like this...

comment by Raemon · 2019-09-20T22:57:37.911Z · LW(p) · GW(p)

Just checking: have you read the Embedded Agency sequence [? · GW]?

comment by NaiveTortoise (An1lam) · 2019-09-21T01:33:27.951Z · LW(p) · GW(p)

Yup, I have (and the untrollable mathematician one). I dashed off that comment but really meant something like, "I hope this trend takes off."

comment by xrchz · 2019-10-22T14:53:51.965Z · LW(p) · GW(p)

One misgiving I have about the illustrated format is that it's less accessible than text. I hope the authors of work in this format keep the needs of a wide variety of readers in mind.

comment by TurnTrout · 2019-10-22T15:37:11.792Z · LW(p) · GW(p)

Accessible in what way? I'm planning to put up a full text version at the end.

comment by sayan · 2019-09-21T06:49:04.336Z · LW(p) · GW(p)

I think this post is broadly making two claims -

  1. Impactful things fundamentally feel different.

  2. A good Impact Measure should be designed in a way that it strongly safeguards against almost any imperfect objective.

It is also (maybe implicitly) claiming that the three properties mentioned completely specify a good impact measure.

I am looking forward to reading the rest of the sequence with arguments supporting these claims.

comment by TurnTrout · 2019-09-21T15:53:17.192Z · LW(p) · GW(p)

It is also (maybe implicitly) claiming that the three properties mentioned completely specify a good impact measure.

I don't know that I'd claim that these completely specify a good impact measure, but I'd imagine most impact measures satisfying these properties are good (i.e. natural curves fit to those three points end up pretty good, I think).

comment by Zack_M_Davis · 2019-09-22T03:35:22.125Z · LW(p) · GW(p)

(I was briefly confused by the "Think about what Frank brings us for each distance" "slide" because it doesn't include the pinkest marble: I saw the second-pinkest marble (on the largest dotted circle) thinking that it was meant to be the pinkest (because it's rightmost on the "Search radius" legend) and was like, "Wait, why is the pinkest marble closer than the terrorist in this slide when it was farther away in the previous slide?")

comment by TurnTrout · 2019-09-22T03:52:16.894Z · LW(p) · GW(p)

Yeah, the Maximum Pink marble has a sheen on it, but outside of that admittedly obscure cue... there's only so many gradations of pink you can tell apart at once.

comment by Gurkenglas · 2019-09-21T10:32:39.015Z · LW(p) · GW(p)

I propose to measure impact by counting bits of optimization power, as in my Oracle question contest submission. Find some distribution over plans we might use if we didn't have an AI, such as stock market trading policies. Have the AI output a program that outputs plans according to some distribution. Measure impact by computing a divergence between the two distributions, such as the maximum pointwise quotient - if no plan becomes more than twice as likely, that's no more than one bit of optimization power. Note that the AI is incentivized to prove its output's impact bound to some dumb proof checker. If the AI cuts away the unprofitable half of policies, that is more than enough to get stupid rich.
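The divergence described above can be sketched concretely. This is a minimal illustration, not an implementation of the contest submission: the function name and the toy trading distributions are my own, and real plan spaces would of course be far too large to enumerate like this.

```python
import math

def optimization_power_bits(baseline: dict, proposed: dict) -> float:
    """Upper-bound optimization power in bits using the maximum pointwise
    quotient between the plan distribution we'd use without the AI
    (`baseline`) and the distribution the AI's program outputs (`proposed`).

    If no plan becomes more than twice as likely, the bound is <= 1 bit.
    """
    ratios = []
    for plan, q in proposed.items():
        p = baseline.get(plan, 0.0)
        if q > 0.0 and p == 0.0:
            # The AI proposes a plan we would never have used: unbounded impact.
            return math.inf
        if q > 0.0:
            ratios.append(q / p)
    return math.log2(max(ratios))

# Toy example: the AI shifts probability toward one trading policy,
# at most doubling its likelihood.
baseline = {"hold": 0.5, "buy": 0.25, "sell": 0.25}
proposed = {"hold": 0.25, "buy": 0.5, "sell": 0.25}
print(optimization_power_bits(baseline, proposed))  # 1.0 bit
```

Note that the example also shows the failure mode in the last sentence: zeroing out the "unprofitable half" of policies only doubles the likelihood of the rest, so the bound stays at one bit even though the effect on returns can be large.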