False assumptions and leaky abstractions in machine learning and AI safety

post by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2019-06-28T04:54:47.119Z · LW · GW · 3 comments

3 comments

Comments sorted by top scores.

comment by Gordon Seidoh Worley (gworley) · 2019-06-28T18:16:43.995Z · LW(p) · GW(p)

If you haven't already seen it, you might find this paper I wrote (preprint, in review) relevant to your interest in this topic, since I suspect the issue runs even deeper than technical assumptions, down to philosophical assumptions; or, more precisely, hinge assumptions one must adopt to deal with unknowability (or, if you don't believe in unknowability, then just pragmatic assumptions made to deal with epistemic uncertainty).

I'm curious about your thoughts around the problems of embedded agency. My view is that most of what's meaningful about understanding embedded agency is that it exposes the problems of machine learning models that make strong assumptions about the world that don't hold up, such that the problems of embedded agency are the problems of making overly strong assumptions (cf. AIXI and the anvil problem). Is this what you are pointing to, or were you trying to say something different? Your words are short enough that there's some ambiguity to me about where you suspect the problem-causing assumptions lie.

Replies from: capybaralet
comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2019-06-30T18:36:35.672Z · LW(p) · GW(p)

IIUC, yes, that's basically what I was trying to say about embedded agency.


comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2019-08-06T01:11:35.078Z · LW(p) · GW(p)

A few more examples of important leaky abstractions that we might worry about protecting/enforcing:

  • Causal interventions (as "uncaused causes", à la free will); see the sketch below.
  • Boxes that don't leak information (BoMAI).

Making a more complete list would be a good project.
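
To make the first bullet concrete, here is a minimal sketch (a toy model of my own, not anything from the post or from BoMAI) of how do-style interventions treat the intervener as an uncaused cause, and where that abstraction leaks for an embedded agent whose "interventions" are themselves events inside the world. The variable names and the tiny structural causal model are illustrative assumptions.

```python
# Toy structural causal model: U -> X -> Y, with U also a direct cause of Y.
# do(X = x) is the "uncaused cause" step: X's own mechanism is discarded and
# X is set from outside the modeled system.
import random


def sample_scm(intervene_on_x=None):
    """Sample once from the toy SCM, optionally under do(X = intervene_on_x)."""
    u = random.gauss(0.0, 1.0)                      # exogenous confounder
    if intervene_on_x is None:
        x = 2.0 * u + random.gauss(0.0, 0.1)        # X is caused by U
    else:
        x = intervene_on_x                          # incoming edges to X severed
    y = x + u + random.gauss(0.0, 0.1)
    return u, x, y


observational = [sample_scm() for _ in range(10000)]
interventional = [sample_scm(intervene_on_x=1.0) for _ in range(10000)]

# Conditioning on X near 1 (observational) differs from do(X = 1), because U
# confounds X and Y; this gap is what the intervention abstraction buys us.
cond_y = [y for _, x, y in observational if abs(x - 1.0) < 0.1]
do_y = [y for _, _, y in interventional]
print("E[Y | X ~= 1]    ~", sum(cond_y) / max(len(cond_y), 1))
print("E[Y | do(X = 1)] ~", sum(do_y) / len(do_y))

# The leaky part: for an embedded agent, "do(X = 1)" is carried out by some
# physical process (the agent's own computation), so X's incoming edges were
# never truly cut; the "uncaused cause" is an idealization that can fail.
```

The else-branch is the whole abstraction: severing X's incoming mechanism is exactly the idealization that stops being available once the thing choosing x is itself part of the modeled world.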