List of Problems That Motivated UDT

post by Wei Dai (Wei_Dai) · 2012-06-06T00:26:00.625Z


I noticed that I recently wrote several comments of the form "UDT can be seen as a step towards solving X", and thought it might be a good idea to list in one place all of the problems that helped motivate UDT1 (not including problems that came up after that post).


11 comments


comment by wedrifid · 2013-07-09T05:04:13.481Z

Quantum Immortality/Suicide

This doesn't seem to fit. There isn't anything about Quantum Immortality that requires UDT (except insofar as any decision requires at least some kind of decision theory). The difficulty (and common confusion) is around translating primitive preference-intuitions into preference-beliefs about wavefunctions or branches. Once the values are given, both CDT and EDT will just result in the same decision that UDT would make (unless the specific decision also involves one of the other issues for which UDT is required).

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2013-09-10T20:18:09.259Z

There isn't anything about Quantum Immortality that requires UDT (except insofar as any decision requires at least some kind of decision theory). The difficulty (and common confusion) is around translating primitive preference-intuitions into preference-beliefs about wavefunctions or branches. Once the values are given, both CDT and EDT will just result in the same decision that UDT would make (unless the specific decision also involves one of the other issues for which UDT is required).

I think UDT makes it possible to understand what decisions are, and how wavefunctions can depend on one's decisions. Before I came up with the UDT ideas, this was really unclear to me, and I had considered some other decision-theory approaches in which Quantum Immortality was sort of baked in. For example, I had the idea that the wavefunction couldn't be changed, but that when you make decisions, you're choosing which branch of the wavefunction your consciousness continues into.

comment by reup · 2012-06-08T01:13:23.353Z

Is there a post on the relative strengths/weaknesses of UDT and TDT? I've searched but haven't found one.

comment by private_messaging · 2012-06-06T10:24:14.353Z

Look at how robot controllers are implemented, look at real control theories, and observe that treating copies as extra servos is a trivial change that works. It also works when the copies are not full copies and can distinguish between each other. Also, re-learn that values in a theory are theoretical and are not homologous to the underlying physical implementation; the fact that action A is present in N physically independent systems is of no more interest than the fact that action A is a real number while the hardware uses binary floating point.

Philosophers have a tendency to pick some random, minor implementation detail and manufacture a philosophical problem out of it. For example, the world may be deterministic, a minor implementation detail, and the philosophers go "where's my free will?". It's exactly the same with decision theories. The same theoretical action variable can represent several different objects: it could be two robot arms wired in parallel, or two controllers with identical state each wired to its own robot arm. Everything works the same, but for the latter the philosophers go "where's my causality?". Never mind that physics is reversible at the fundamental level and that, for everyone else, the notion of causality is just a cognitive tool.
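A minimal sketch of the point, in Python with hypothetical toy names: the theory's action variable is one abstract object, and whether it is realized in one controller or in two identical ones is invisible at the level where the theory operates.

```python
# Hypothetical toy example: one abstract decision procedure, physically
# instantiated either once (one controller, two arms wired in parallel)
# or twice (two controllers with identical state). The theory's action
# variable is the same object in both cases.

def controller(observation):
    """The theory's action variable: a map from observations to actions."""
    return "extend_arm" if observation == "target_in_reach" else "hold"

# Case 1: one controller drives both arms, like one servo signal split in two.
arm_a = arm_b = controller("target_in_reach")

# Case 2: two physically independent controllers with identical state.
arm_a = controller("target_in_reach")
arm_b = controller("target_in_reach")

# Either way the behavior is identical; how many physical copies exist is
# no more visible in the theory than the hardware's floating-point format.
assert arm_a == arm_b
```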

Replies from: khafra, Wei_Dai
comment by khafra · 2012-06-06T13:32:18.813Z

This reminds me of the debate between programmers who want to design an elegant system that accomplishes all the desired functions as consequences of a fundamentally simple design, and the programmers who just want to make it work and ship. Depending on the problem you're solving, and the constraints you're working under, I think either approach can be appropriate. Peter Norvig's sudoku solver is in the "elegant" school, but if I were writing one from scratch, I'd do better to build something ugly and keep testing it until it seemed reliable.
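For concreteness, the "ugly" end of that spectrum might look like the following minimal sketch (my own toy code, not Norvig's): plain backtracking with no constraint propagation, which you would then hammer with test grids until it seemed reliable.

```python
# A minimal brute-force sudoku solver: no constraint propagation, just
# backtracking search. grid is a 9x9 list of lists, with 0 marking an
# empty cell.

def valid(grid, r, c, v):
    """Check the row, column, and 3x3 box constraints for placing v at (r, c)."""
    if v in grid[r]:
        return False
    if any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(grid):
    """Find the first empty cell, try each digit, recurse, and backtrack."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if valid(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0
                return False  # no digit fits here: backtrack
    return True  # no empty cells left: solved in place
```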

I'm sorta leaning toward the "natural and elegant" approach for decision theories, since they'd have to face unknown new challenges without breaking, but patching CDT with cybernetics and such might work as well.

Replies from: David_Gerard, private_messaging
comment by David_Gerard · 2012-06-06T15:02:34.963Z

More to the point, some of these problems may well be NP-complete to solve exactly. But what do we and evolution do in practice, when we have to solve the problem and throwing up our hands is not an option? We both use numerical approximations that work pretty darned well. Worse is, in fact, better.
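A minimal sketch of that trade-off (my example, with made-up numbers): 0/1 knapsack is NP-complete, but a greedy value-density heuristic runs in O(n log n) and typically lands close to the optimum.

```python
# Greedy approximation for 0/1 knapsack: sort items by value density and
# take whatever still fits. Not optimal in general, but fast and usually close.

def greedy_knapsack(items, capacity):
    """items: list of (value, weight) pairs. Returns an approximate best value."""
    total = 0
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
        if weight <= capacity:
            capacity -= weight
            total += value
    return total

print(greedy_knapsack([(60, 10), (100, 20), (120, 30)], 50))
# Prints 160; the exact optimum is 220, but the heuristic is cheap and
# "pretty darned good" compared to not solving the problem at all.
```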

comment by private_messaging · 2012-06-07T11:26:44.126Z

This reminds me of the debate between programmers who want to design an elegant system that accomplishes all the desired functions as consequences of a fundamentally simple design, and the programmers who just want to make it work and ship. Depending on the problem you're solving, and the constraints you're working under, I think either approach can be appropriate.

I think the resemblance is only superficial. There is nothing inelegant in treating two wired-in-parallel robotic arms controlled by the same controller the same way regardless of whether the controller is the same 'real physical object', especially considering that we live in a world where, if you have two electrons (or two identical anything), their being separate objects is purely in the eye of the beholder.

The whole point is that you abstract out inelegant details such as whether identical controllers are physically one system or not. This abstraction is not at odds with mathematical elegance; it is the basis for mathematical elegance. It is, however, at odds with philosophical compactness-by-confusion: the abstraction does not allow for a notion of causality that has been oversimplified to the point of irrelevance.

comment by Wei Dai (Wei_Dai) · 2012-06-07T17:05:53.348Z

I'm not sure if you're aware that my interest in these problems is mostly philosophical to begin with. For example, I wrote the post that is the first link in my list in 1997, when I had no interest in AI at all, but was thinking about how humans would deal with probabilities once mind copying becomes possible in the future. Do you object to philosophers trying to solve philosophical problems in general, or just to AI builders making use of philosophical solutions or thinking like philosophers?

Replies from: private_messaging
comment by private_messaging · 2012-06-07T17:58:39.940Z

Philosophical thinking is usually done in terms of concepts that are later found to be irrelevant (or that are known to be irrelevant to begin with). What I object to is philosophers' arrogance, in the form of a gross overestimate of the relevance of philosophical 'problems' and philosophical 'solutions' to anything.

If the philosophical notion of causality has a problem with abstracting away irrelevant low-level details of how a manipulator is controlled, that is a problem with the philosophical notion of causality, not a problem with the design of intelligent systems. Philosophy seems to be an incredibly difficult-to-avoid failure mode of intelligences, whereby the intelligence fails to establish relevant concepts and proceeds to reason in terms of faulty ones.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-06-07T18:59:20.389Z

What's your opinion of von Neumann and Morgenstern's work on decision theory? Do you also consider it to be "later found irrelevant", or do you consider it an exception to "usually"? Or do you not consider it to be "philosophical"? What about philosophical work on logic (e.g., Aristotle's first steps towards formalizing correct reasoning)?

Replies from: private_messaging
comment by private_messaging · 2012-06-08T13:09:59.496Z

The parallel to them seems to be a form of 'but many scientists were ridiculed'. The times when philosophy is useful seem to be restricted to building high-level concepts out of lower-level concepts and adapting high-level concepts to stay relevant, rather than a top-down process that starts from something potentially very confused.

What we see with causality is that in common discourse it is normal to say things like 'because X, Y'. 'Because the algorithm returned one-box, the predictor predicted one-box and the robot took one box' is a perfectly normal, valid statement. It's probably the only kind of causality there can be, given that there is no physical law of causality. The philosophers take that notion of causality, confuse it with some property of the world, and end up having 'does not compute' moments about particular problems like Newcomb's.
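A minimal sketch of that kind of statement (hypothetical code, standard Newcomb payoffs): the predictor and the robot are two call sites of one abstract algorithm, so the algorithm's return value 'causes' both the prediction and the action.

```python
# Hypothetical toy Newcomb setup: both the predictor and the robot evaluate
# the same abstract algorithm, so "because the algorithm returned one-box,
# the predictor predicted one-box and the robot took one box" is just a
# description of two call sites of one function.

def algorithm():
    """The single abstract decision variable."""
    return "one-box"

def newcomb_payoff():
    predicted = algorithm()  # the predictor simulates the algorithm
    opaque_box = 1_000_000 if predicted == "one-box" else 0
    action = algorithm()     # the robot runs the very same algorithm
    return opaque_box if action == "one-box" else opaque_box + 1_000

print(newcomb_payoff())  # 1000000; make algorithm() return "two-box" and it prints 1000
```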