Posts

Two Major Obstacles for Logical Inductor Decision Theory 2017-06-10T05:48:27.711Z
Even when contrarians win, they lose: Jeff Hawkins 2015-04-08T04:54:49.342Z
Are there any Lesswrongers in the Waterloo, Ontario area? 2011-08-24T17:54:41.669Z

Comments

Comment by endoself on We are the Athenians, not the Spartans · 2017-06-13T03:41:30.738Z · LW · GW

Note that I ... wrote the only comment on the IAF post you linked

Yes, I replied to it :)

Unfortunately, I don't expect to have more Eliezer-level explanations of these specific lines of work any time soon. Eliezer has a fairly large amount of content on Arbital that hasn't seen LW levels of engagement either, though I know some people who are reading it and benefiting from it. I'm not sure how LW 2.0 is coming along, but it might be good to have a subreddit for content similar to your recent post on betting. There is an audience for it, as that post demonstrated.

Comment by endoself on We are the Athenians, not the Spartans · 2017-06-12T21:49:46.776Z · LW · GW

Maybe you've heard this before, but the usual story is that the goal is to clarify conceptual questions that exist in both the abstract and more practical settings. We are moving towards considering such things though - the point of the post I linked was to reexamine old philosophical questions using logical inductors, which are computable.

Further, my intuition from studying logical induction is that practical systems will be "close enough" to satisfying the logical induction criterion that many things will carry over (much of this is just intuitions one could also get from online learning theory). E.g. in the logical induction decision theory post, I expect the individual points made using logical inductors to mostly or all apply to practical systems, and you can use the fact that logical inductors are well-defined to test further ideas building on these.

Comment by endoself on We are the Athenians, not the Spartans · 2017-06-11T23:42:45.002Z · LW · GW

Scott Garrabrant and I would be happy to see more engagement with the content on Agent Foundations (IAF). I guess you're right that the math is a barrier. My own recent experiment of linking to Two Major Obstacles for Logical Inductor Decision Theory on IAF was much less successful than your post about betting, but I think that there's something inessential about the inaccessibility.

In that post, for example, I think the math used is mostly within reach for a technical lay audience, except that an understanding of logical induction is assumed, though I may have missed some complexity in looking it over just now. Even for that, it should be possible to explain logical inductors briefly and accessibly enough for someone to understand a version of that post, though I'm not sure whether that has been done. People recommend this talk as the best existing introduction.

Comment by endoself on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-11-30T07:06:53.526Z · LW · GW

I model probabilistic thinking as something you build on top of all this. First you learn to model the world at all (your steps 3-8), then you learn the mathematical description of part of what your brain is doing when it does all this. There are many aspects of normative cognition that Bayes doesn't have anything to say about, but there are also places where you come to understand what your thinking is aiming at. It's a gears model of cognition rather than the object-level phenomenon.

If you don't have gears models at all, then yes, it's just another way to spout nonsense. This isn't because it's useless, it's because people cargo-cult it. Why do people cargo-cult Bayesianism so much? It's not the only thing in the sequences. The first post, The Simple Truth, big parts of Mysterious Answers to Mysterious Questions, and basically all of Reductionism are about the gears-model skill. Even the name rationalism evokes Descartes and Leibniz, who were all about this skill. My own guess is that Eliezer argued more forcefully for Bayesianism than for gears models in the sequences because, of the two, it is the skill that came less naturally to him, and that stuck.

What would cargo-cult gears models look like? Presumably, scientism, physics envy, building big complicated models with no grounding in reality. This too is a failure mode visible in our community.

Comment by endoself on Welcome to Less Wrong! (8th thread, July 2015) · 2015-07-27T21:48:37.899Z · LW · GW

Hi Yaacov!

The most active MIRIx group is at UCLA. Scott Garrabrant would be happy to talk to you if you are considering research aimed at reducing x-risk. Alternatively, some generic advice for improving your future abilities is to talk to interesting people, try to do hard things, and learn about things that people with similar goals do not know about.

Comment by endoself on Even when contrarians win, they lose: Jeff Hawkins · 2015-04-09T00:38:36.264Z · LW · GW

As far as I can tell, you've misunderstood what I was trying to do with this post. I'm not claiming that Hawkins' work is worth pursuing further; passive_fist's analysis seems pretty plausible to me. I was just trying to give people some information that they may not have on how some ideas developed, to help them build a better model of such things.

(I did not downvote you. If you thought that I was arguing for further work towards Hawkins' program, then your comment would be justified, and in any case this is a worthwhile thing for me to explicitly disclaim.)

Comment by endoself on Even when contrarians win, they lose: Jeff Hawkins · 2015-04-09T00:24:36.016Z · LW · GW

Yeah, I didn't mean to contradict any of this. I wonder how much of a role previous arguments from MIRI and FHI played in changing the zeitgeist and contributing to the way Superintelligence was received. There was a slow increase in uninformed fear-of-AI sentiments over the preceding years, which may have put people in more of a position to consider the arguments in Superintelligence. I think that much of this ultimately traces back to MIRI and FHI; for example, many anonymous internet commenters refer to them or use phrasing inspired by them, though many others don't. I'm more sceptical that this change in zeitgeist was helpful, though.

Of course, specific people who interacted with MIRI/FHI more strongly, such as Jaan Tallinn and Peter Thiel, were helpful in bringing the discourse to where it is today.

Comment by endoself on Even when contrarians win, they lose: Jeff Hawkins · 2015-04-08T08:36:57.190Z · LW · GW

The quote from Ng is

The big AI dreams of making machines that could someday evolve to do intelligent things like humans could, I was turned off by that. I didn’t really think that was feasible, when I first joined Stanford. It was seeing the evidence that a lot of human intelligence might be due to one learning algorithm that I thought maybe we could mimic the human brain and build intelligence that’s a bit more like the human brain and make rapid progress. That particular set of ideas has been around for a long time, but [AI expert and Numenta cofounder] Jeff Hawkins helped popularize it.

I think it's pretty clear that he would have worked on different things if not for Hawkins. He's done a lot of work in robotics, for example, so he could have continued working on robotics if he didn't get interested in general AI. Maybe he would have moved into deep learning later in his career, as it started to show big results.

Comment by endoself on Open thread, Feb. 16 - Feb. 22, 2015 · 2015-02-18T22:09:01.967Z · LW · GW

GiveWell, GiveDirectly, Evidence Action/Deworm the World. You can vote for multiple charities.

Comment by endoself on 2014 Less Wrong Census/Survey · 2014-10-24T04:23:01.182Z · LW · GW

I took the survey.

Comment by endoself on Knightian Uncertainty and Ambiguity Aversion: Motivation · 2014-07-22T06:53:51.463Z · LW · GW

I can't see how this would work. Wouldn't the UDT-ish approach be to ask an MMEU agent to pick a strategy once, before making any updates? The MMEU agent would choose a strategy that makes it equivalent to a Bayesian agent, as I describe. The characteristic ambiguity-averse behaviour only appears if the agent is allowed to update.

Given a Cartesian boundary between agent and environment, you could make an agent that prefers to have its future actions be those that are prescribed by MMEU, and you'd then get MMEU-like behaviour persisting upon reflection, but I assume this isn't what you mean since it isn't UDT-ish at all.

Comment by endoself on Knightian Uncertainty and Ambiguity Aversion: Motivation · 2014-07-22T05:01:59.998Z · LW · GW

MMEU isn't stable upon reflection. Suppose that in addition to the mysterious [0.4, 0.6] coin, you had a fair coin, and I tell you that I'll offer bet 1 ("pay 50¢ to be paid $1.10 if the mysterious coin came up heads") if the fair coin comes up heads and bet 2 if the fair coin comes up tails, but you have to choose whether to accept or reject before flipping the fair coin to decide which bet will be chosen. In this case, the Knightian uncertainty cancels out, and your expected winnings are +5¢ no matter which value in [0.4, 0.6] is taken to be the true probability of the mysterious coin, so you would take this bet under MMEU.

Upon seeing how the fair coin turns out, however, MMEU would tell you to reject whichever of bets 1 and 2 is offered. Thus, if I offer to let you see the result of the fair coin before deciding whether to accept the bet, you will actually prefer not to see the coin, for an expected outcome of +5¢, rather than see the coin, reject the bet, and win nothing with certainty. Alternatively, if given the chance, you would prefer to self-modify so as to not exhibit ambiguity aversion in this scenario.
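
Here is a minimal numeric check of those two evaluations (a sketch only; it assumes, as the symmetry requires, that bet 2 is the tails counterpart of bet 1, and the grid over p is just an illustration device):

```python
# Sketch of the MMEU evaluations in the example above. Bet 1 pays $1.10 if
# the mysterious coin came up heads, bet 2 pays $1.10 if it came up tails,
# each costs $0.50; p is the unknown heads probability, known only to lie
# in [0.4, 0.6].

def bet1(p):
    return 1.10 * p - 0.50

def bet2(p):
    return 1.10 * (1 - p) - 0.50

ps = [0.4 + 0.01 * i for i in range(21)]  # candidate values of p

# Before the fair coin is flipped, you are committed to whichever bet comes
# up, so the payoff averages over the fair coin and p cancels out.
print(min(0.5 * bet1(p) + 0.5 * bet2(p) for p in ps))  # ≈ +0.05: MMEU accepts

# After seeing the fair coin, MMEU evaluates the single offered bet against
# the worst-case p and rejects it either way.
print(min(bet1(p) for p in ps))  # ≈ -0.06: reject bet 1
print(min(bet2(p) for p in ps))  # ≈ -0.06: reject bet 2
```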

In general, any agent using a decision rule that is not generalized Bayesian performs strictly worse than some generalized Bayes decision rule. Note, though, that this does not mean that such an agent is forced to accept at least one of bets 1 and 2, since rejecting whichever of them is offered is a Bayes rule; for example, a Bayesian agent who believes that the bookie knows something that they don't will behave in this way. It does mean, though, that there are many situations where MMEU cannot work, such as in my example above, since in such scenarios it is not equivalent to any Bayes rule.

Comment by endoself on Knightian Uncertainty and Ambiguity Aversion: Motivation · 2014-07-22T04:57:24.344Z · LW · GW

This is a very general point. Most of the uncertainty people face is of the sort that they would naively classify as Knightian, so if people actually behaved according to MMEU, then they would essentially be playing minimax against the world.

Comment by endoself on 2013 Less Wrong Census/Survey · 2013-11-24T06:31:11.413Z · LW · GW

I took the census. My answers for MWI and Aliens were conditional on ¬Simulation, since if we are in a simulation where MWI doesn't hold, the simulation is probably intended to provide information about a universe in which MWI does hold.

Comment by endoself on Quantum versus logical bombs · 2013-11-18T07:06:22.750Z · LW · GW

I'm not sure what quantum mechanics has to do with this. Say humanity is spread over 10 planets. Would you rather take a logical 9/10 chance of wiping out humanity, or destroy 9 of the planets with certainty (and also destroy 90% of uninhabited planets to reduce the potential for future growth by the same degree)? Is there any ethically relevant difference between these scenarios?

Comment by endoself on Reduced impact AI: no back channels · 2013-11-18T04:07:09.049Z · LW · GW

even if P is omniscient, P' still has to estimate it's expected output from its own limited perspective. As long as this estimate is reasonable, the omniscience of P doesn't cause a problem (and remember that P is fed noisy data).

Don't you have to get the exact level of noise that will prevent the AI from hiding from P without letting P reconstruct the AI's actions if it does allow itself to be destroyed? An error in either direction can be catastrophic. If the noise is too high, the AI takes over the world. If the noise is too low, E'(P(Sᵃ|X,Oᵃ,B)/P(Sᵃ|¬X,Õᵃ,B) | a) is going to be very far from 1 no matter what, so there is no reason to expect that optimizing it is still equivalent to reducing impact.

Comment by endoself on Lone Genius Bias and Returns on Additional Researchers · 2013-11-02T20:25:58.258Z · LW · GW

He's talking specifically about people donating to AMF. There are more things people can do than donate to AMF and donate to one of MIRI, FHI, CEA, and CFAR.

Comment by endoself on MIRI's 2013 Summer Matching Challenge · 2013-10-18T23:16:23.997Z · LW · GW

Yes, you can take the probability that they will halt given a random input. This is analogous to the case of a universal Turing machine, since the way we ask it to simulate a random Turing machine is by giving it a random input string.
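
As a purely illustrative sketch (the "machine" below and the step cap are my own invented stand-ins, not anything from the discussion): such a probability can be estimated by sampling random input strings and running with a step budget, which gives a lower bound on the true halting probability.

```python
import random

def toy_machine(bits, max_steps=200):
    """Invented stand-in machine: replay the input bit string forever as
    +1/-1 steps of a walk and halt when the walk reaches +5. Some inputs
    never halt; the step cap means we only ever underestimate P(halt)."""
    position = 0
    for step in range(max_steps):
        position += 1 if bits[step % len(bits)] else -1
        if position >= 5:
            return True
    return False

random.seed(0)
trials = 10_000
halts = sum(toy_machine([random.randint(0, 1) for _ in range(16)])
            for _ in range(trials))
print(halts / trials)  # Monte Carlo estimate of P(halt | random 16-bit input)
```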

Comment by endoself on The Cause of Time · 2013-10-06T22:48:44.052Z · LW · GW

Yes, but this is a completely different matter than your original post. Obviously this is how we should handle this weird state of information that you're constructing, but it doesn't have the causal interpretation you give it. You are doing something, but it isn't causal analysis. Also, in the scenario you describe, you have the association information, so you should be using it.

Comment by endoself on The Cause of Time · 2013-10-06T06:57:05.317Z · LW · GW

Causal networks do not make an iid assumption.

Yeah, I guess that's way too strong; there are a lot of alternative assumptions that also justify using them.

What is a sample? How do we know two numbers (or other strings) came from the same sample?

I think we just have to assume this problem solved. Whenever we use causal networks in practice, we know what a sample is. You can try to weaken this and see if you still get anything useful, but this is very different from 'conditioning on time' as you present it in the post.

Since the association contains information separate from the values themselves, how can we incorporate that information into the framework explicitly?

Bayes' theorem? If we have a strong enough prior and enough information to reverse-engineer the association reasonably well, then we might be able to learn something. If you're running a clinical trial and you recorded which drugs were given out, but not to which patients, then you need other information, such as a prior about which side-effects they cause and measurements of side-effects that are associated with specific patients. Otherwise you just don't have the data necessary to construct the model.

Comment by endoself on The Cause of Time · 2013-10-06T04:07:16.773Z · LW · GW

In fact, in order to truly ignore time data, we cannot even order the points according to time! But that means that we no longer have any way to line up the points T0 with e0, T1 with e1, etc.

What? This makes no sense.

I guess you haven't seen this stated explicitly, but the framework of causal networks makes an iid assumption. The idea is that the causal network represents some process that occurs a lot, and we can watch it occur until we get a reasonably good understanding of the joint distribution of variables. Part of this is that it is the same process occurring each time, so there is no time dependence built into the framework.

For some purposes, we can model time by simply including it as an observed variable, which you do in this post. However, the different measurements of each variable are associated because they come from the same sample of the (iid) causal process, whether or not we are conditioning on time. The way you are trying to condition on time isn't correct, and the correlation does exist in both cases. (Really, we care about dependence rather than correlation, but it doesn't make a difference here.)
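
A hedged sketch of this point (the linear model and its numbers are invented purely for illustration): each sample is an iid draw of the whole variable set, time is recorded as just another observed variable, and the X–Y dependence is visible whether or not you pay attention to the time column.

```python
import random

random.seed(0)

def draw_sample(t):
    # One iid draw of the whole causal process; T is just another recorded
    # variable, and Y depends on X within the same sample.
    x = random.gauss(0, 1)
    y = 2 * x + random.gauss(0, 0.1)
    return {"T": t, "X": x, "Y": y}

data = [draw_sample(t) for t in range(1000)]

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    vx = sum((a - mx) ** 2 for a in xs) / n
    vy = sum((b - my) ** 2 for b in ys) / n
    return cov / (vx * vy) ** 0.5

# The association lives inside each sample, so it shows up in the full data
# and within any restriction on T alike.
print(corr([d["X"] for d in data], [d["Y"] for d in data]))
early = [d for d in data if d["T"] < 500]
print(corr([d["X"] for d in early], [d["Y"] for d in early]))
```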

I do think that this is a useful general direction of analysis. If the question is meaningful at all, then the answer is probably that given by Armok_GoB in the original thread, but it would be useful to clarify what exactly the question means. There is probably a lot of work to be done before we really understand such things, but I would advise you to better understand the ideas behind causal networks before trying to contribute.

Comment by endoself on What makes us think _any_ of our terminal values aren't based on a misunderstanding of reality? · 2013-09-26T04:55:58.982Z · LW · GW

http://lesswrong.com/lw/p2/hand_vs_fingers/

Comment by endoself on What makes us think _any_ of our terminal values aren't based on a misunderstanding of reality? · 2013-09-26T04:53:55.860Z · LW · GW

I, for one, have the terminal value of continued personal existence (a.k.a. being alive). On LW I'm learning that continuity, personhood, and existence might well be illusions. If that is the case, my efforts to find ways to survive amount to extending something that isn't there in the first place

I am confused about this as well. I think the right thing to do here is to recognize that there is a lot we don't know about, e.g. personhood, and that there is a lot we can do to clarify our thinking on personhood. When we aren't confused about this stuff anymore, we can look over it and decide what parts we really valued; our intuitive idea of personhood clearly describes something, even recognizing that a lot of the ideas of the past are wrong. Note also that we don't gain anything by remaining ignorant (I'm not sure if you've realized this yet).

Comment by endoself on Lesswrong Philosophy and Personal Identity · 2013-08-28T02:32:22.498Z · LW · GW

Can you elaborate? This sounds interesting.

Comment by endoself on A summary of Savage's foundations for probability and utility. · 2013-08-08T23:30:15.789Z · LW · GW

Neural signals represent things cardinally rather than ordinally, so those voting paradoxes probably won't apply.
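
A small sketch of why cardinal signals sidestep the ordinal paradoxes (the three "submodules" and their scores are invented): majority voting over the rankings below cycles, while summing the cardinal scores gives a transitive ordering.

```python
# Invented cardinal scores for three submodules over options A, B, C.
# Their ordinal rankings form the classic Condorcet cycle.
scores = {
    "m1": {"A": 10, "B": 2, "C": 1},  # A > B > C
    "m2": {"A": 1, "B": 3, "C": 2},   # B > C > A
    "m3": {"A": 2, "B": 1, "C": 3},   # C > A > B
}

def majority_prefers(x, y):
    return sum(s[x] > s[y] for s in scores.values()) > len(scores) / 2

# Ordinal aggregation cycles: A beats B, B beats C, and C beats A.
print(majority_prefers("A", "B"), majority_prefers("B", "C"), majority_prefers("C", "A"))

# Cardinal aggregation just sums the signals, so the result is transitive.
print({opt: sum(s[opt] for s in scores.values()) for opt in "ABC"})  # A wins
```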

Even conditional on humans not having even approximately transitive preferences, I find it likely that it would be useful to come up with some 'transitivization' of human preferences.

Agreed that there's a good chance that game-theoretic reasoning about interacting submodules will be important for clarifying the structure of human preferences.

Comment by endoself on The Empty White Room: Surreal Utilities · 2013-07-26T04:38:15.534Z · LW · GW

What's wrong with the surreals? It's not like we have reason to keep our sets small here. The surreals are prettier, don't require an arbitrary nonconstructive ultrafilter, are more likely to fall out of an axiomatic approach, and can't accidentally end up being too small (up to some quibbles about Grothendieck universes).

Comment by endoself on Evidential Decision Theory, Selection Bias, and Reference Classes · 2013-07-14T07:51:42.376Z · LW · GW

No, that's not what I meant at all. In what you said, the agent needs to be separate from the system in order to perform do-actions. I want an agent that knows it's an agent, so it has to have a self-model and, in particular, has to be inside the system that is modelled by our causal graph.

One of the guiding heuristics in FAI theory is that an agent should model itself the same way it models other things. Roughly, the agent isn't actually tagged as different from nonagent things in reality, so any desired behaviour that depends on correctly making this distinction cannot be regulated with evidence as to whether it is actually making the distinction the way we want it to. A common example of this is the distinction between self-modification and creating a successor AI; an FAI should not need to distinguish these, since they're functionally the same. These sorts of ideas are why I want the agent to be modelled within its own causal graph.

Comment by endoself on Evidential Decision Theory, Selection Bias, and Reference Classes · 2013-07-09T12:09:52.161Z · LW · GW

Look, HIV patients who get HAART die more often (because people who get HAART are already very sick). We don't get to see the health status confounder because we don't get to observe everything we want. Given this, is HAART in fact killing people, or not?

Well, of course I can't give the right answer if the right answer depends on information you've just specified I don't have.

You're sort of missing what Ilya is trying to say. You might have to look at the actual details of the example he is referring to in order for this to make sense. The general idea is that even though we can't observe certain variables, we still have enough evidence to justify the causal model in which HAART leads to fewer deaths, so we can conclude that we should prescribe it.

I would object to Ilya's more general point though. Saying that EDT would use E(death|HAART) to determine whether to prescribe HAART is making the same sort of reference class error you discuss in the post. EDT agents use EDT, not the procedures used by A0 and A1 in the example, so we really need to calculate E(death|EDT agent prescribes HAART). I would expect this to produce essentially the same results as a Pearlian E(death | do(HAART)), and would probably regard it as a failure of EDT if it did not add up to the same thing, but I think that there is value in discovering how exactly this works out, if it does.
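
A minimal sketch of the contrast with invented numbers (none of these probabilities come from the actual example): an "already very sick" confounder makes E(death | HAART) look bad even though intervening to give HAART lowers deaths.

```python
# Invented numbers: U = "already very sick" confounds treatment and death.
p_sick = 0.5
p_haart_given_sick = {True: 0.9, False: 0.1}       # P(HAART | U)
p_death = {(True, True): 0.5, (True, False): 0.8,  # P(death | U, HAART)
           (False, True): 0.1, (False, False): 0.2}

def observational(haart):
    # E(death | HAART = haart): conditioning on treatment shifts P(U).
    joint = {u: (p_sick if u else 1 - p_sick) *
                (p_haart_given_sick[u] if haart else 1 - p_haart_given_sick[u])
             for u in (True, False)}
    z = sum(joint.values())
    return sum(joint[u] / z * p_death[(u, haart)] for u in (True, False))

def interventional(haart):
    # E(death | do(HAART = haart)): the intervention cuts the U -> HAART
    # arrow, so U keeps its prior distribution.
    return sum((p_sick if u else 1 - p_sick) * p_death[(u, haart)]
               for u in (True, False))

print(observational(True), observational(False))    # ≈ 0.46 vs 0.26: HAART "looks" harmful
print(interventional(True), interventional(False))  # 0.30 vs 0.50: HAART actually helps
```

Whether E(death | EDT agent prescribes HAART) recovers the interventional answer is exactly the question left open in the paragraph above.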

Comment by endoself on Evidential Decision Theory, Selection Bias, and Reference Classes · 2013-07-09T09:39:37.818Z · LW · GW

If you want to change what you want, then you've decided that your first-order preferences were bad. EDT recognizing that it can replace itself with a better decision theory is not the same as it getting the answer right; the thing that makes the decision is not EDT anymore.

Comment by endoself on Evidential Decision Theory, Selection Bias, and Reference Classes · 2013-07-08T10:54:46.602Z · LW · GW

No. For example, AIXI is what I would regard as essentially a Bayesian agent, but it has a notion of causality because it has a notion of the environment taking its actions as an input.

This looks like a symptom of AIXI's inability to self-model. Of course causality is going to look fundamental when you think you can magically intervene from outside the system.

Do you share the intuition I mention in my other comment? I feel that the way this post reframes CDT and TDT as attempts to clarify bad self-modelling by naive EDT is very similar to the way I would reframe Pearl's positions as an attempt to clarify bad self-modelling by naive probability theory a la AIXI.

Comment by endoself on Evidential Decision Theory, Selection Bias, and Reference Classes · 2013-07-08T10:37:09.065Z · LW · GW

These three causal graphs cannot be distinguished by the observational statistics. The causal information given in the problem is an essential part of its statement, and no decision theory which ignores causation can solve it.

I think this isn't actually compatible with the thought experiment. Our hypothetical agent knows that it is an agent. I can't yet formalize what I mean by this, but I think that it requires probability distributions corresponding to a certain causal structure, which would allow us to distinguish it from the other graphs. I don't know how to write down a probability distribution that contains myself as I write it, but it seems that such a thing would encode the interventional information about the system that I am interacting with on a purely probabilistic level. If this is correct, you wouldn't need a separate representation of causality to decide correctly.

Comment by endoself on Evidential Decision Theory, Selection Bias, and Reference Classes · 2013-07-08T08:57:20.018Z · LW · GW

UDT corresponds to something more mysterious

Don't update at all, but instead optimize yourself, viewed as a function from observations to actions, over all possible worlds.

There are tons of details, but it doesn't seem impossible to summarize in a sentence.
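
A toy sketch of that one-sentence summary (the worlds and payoffs are invented, counterfactual-mugging-style): enumerate every policy, i.e. every function from observations to actions, and keep the one with the best total payoff across the possible worlds, with no updating step.

```python
from itertools import product

# Invented setup: in world "heads" the agent is asked to pay 1; in world
# "tails" it is paid 10, but only if its policy pays up when asked in "heads".
observations = ["asked", "not asked"]
actions = ["pay", "refuse"]
worlds = ["heads", "tails"]  # treated as equally weighted

def payoff(world, policy):
    if world == "heads":
        return -1 if policy["asked"] == "pay" else 0
    return 10 if policy["asked"] == "pay" else 0

# A policy is a function (here, a dict) from observations to actions.
policies = [dict(zip(observations, choice))
            for choice in product(actions, repeat=len(observations))]

# Optimize the whole policy over all worlds at once, rather than updating
# on "asked" (which would make refusing look better) and then deciding.
best = max(policies, key=lambda pi: sum(payoff(w, pi) for w in worlds))
print(best)  # pays when asked: the -1 in "heads" buys the +10 in "tails"
```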

Comment by endoself on Progress on automated mathematical theorem proving? · 2013-07-04T00:01:39.890Z · LW · GW

I'd like to make explicit the connection of this idea to hard takeoff, since it's something I've thought about before but isn't stated explicitly very often. Namely, this provides some reason to think that by the time an AGI is human-level in the things humans have evolved to do, it will be very superhuman in things that humans have more difficulty with, like math and engineering.

Comment by endoself on Useful Concepts Repository · 2013-06-11T09:40:00.426Z · LW · GW

It provides a useful concept, which can be carried over into other domains. I suppose there are other techniques that use a temperature, but I'm much less familiar with them and they are more complicated. Is understanding other metaheuristics more useful than just understanding simulated annealing for people who aren't actually writing a program that performs some optimization?

Comment by endoself on Useful Concepts Repository · 2013-06-10T21:20:44.056Z · LW · GW

But it's actually important to the example. If someone intends to allocate their time searching for small and large improvements to their life, then simulated annealing suggests that they should make more of the big ones first. (The person you describe may not have done this, since they've settled into a local optimum but now decide to find a completely different point on the fitness landscape, though without more details it's entirely possible they've decided correctly here.)
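
For concreteness, a minimal simulated annealing sketch (the objective and cooling schedule are invented): the decreasing temperature is what makes big, risky jumps happen mostly early and small refinements late, which is the analogy being drawn here.

```python
import math
import random

random.seed(0)

def objective(x):
    # Invented bumpy 1-D "fitness landscape" with many local optima.
    return math.sin(5 * x) - 0.1 * x ** 2

x, value = 0.0, objective(0.0)
temperature = 2.0

for _ in range(2000):
    # Proposal size shrinks with the temperature: big jumps to distant parts
    # of the landscape early, small local tweaks late.
    candidate = x + random.gauss(0, temperature)
    cand_value = objective(candidate)
    # Always accept improvements; accept worsenings with a probability that
    # decays as the temperature drops.
    if cand_value > value or random.random() < math.exp((cand_value - value) / temperature):
        x, value = candidate, cand_value
    temperature *= 0.997  # cooling schedule

print(x, value)
```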

Comment by endoself on Useful Concepts Repository · 2013-06-10T20:47:51.432Z · LW · GW

Your second paragraph could benefit from the concept of simulated annealing.

Comment by endoself on Prisoner's Dilemma (with visible source code) Tournament · 2013-06-06T00:52:50.138Z · LW · GW

I'm not sure what you mean. Do you mean the scores given that you choose to cooperate and defect? There's a lot of complexity hiding in 'given that', and we don't understand a lot of it. This is definitely not a trivial fix to Lumifer's program.

Comment by endoself on Prisoner's Dilemma (with visible source code) Tournament · 2013-06-05T23:55:44.784Z · LW · GW

Another problem is that you cooperate against CooperateBot.

Comment by endoself on Open Thread, June 2-15, 2013 · 2013-06-05T09:20:22.135Z · LW · GW

From If Many-Worlds had Come First:

the thought experiment goes: 'Hey, suppose we have a radioactive particle that enters a superposition of decaying and not decaying. Then the particle interacts with a sensor, and the sensor goes into a superposition of going off and not going off. The sensor interacts with an explosive, that goes into a superposition of exploding and not exploding; which interacts with the cat, so the cat goes into a superposition of being alive and dead. Then a human looks at the cat,' and at this point Schrödinger stops, and goes, 'gee, I just can't imagine what could happen next.' So Schrödinger shows this to everyone else, and they're also like 'Wow, I got no idea what could happen at this point, what an amazing paradox'. Until finally you hear about it, and you're like, 'hey, maybe at that point half of the superposition just vanishes, at random, faster than light', and everyone else is like, 'Wow, what a great idea!'"

Obviously this is a parody and Eliezer is making an argument for many worlds. However, this isn't that far from how the thought experiment is presented in introductory books and even popularizations. Why, then, don't more people realize that many worlds is correct? Why aren't tons of bright middle-school children who read science fiction and popular science spontaneously rediscovering many worlds?

Comment by endoself on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-10T03:58:52.481Z · LW · GW

I agree with this; the 'e.g.' was meant to point toward the most similar theories that have names, not pin down exactly what Eliezer is doing here. I thought that it would be better to refer to the class of similar theories here since there is enough uncertainty that we don't really have details.

Comment by endoself on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-09T02:52:39.532Z · LW · GW

Yeah, this whole line of reasoning fails if you can get to 3^^^3 utilons without creating ~3^^^3 sentients to distribute them among.

Overall I'm having a really surprising amount of difficulty thinking up an example where you have a lot of causal importance but no anthropic counter-evidence.

I'm not sure what you mean. If you use an anthropic theory like what Eliezer is using here (e.g. SSA, UDASSA) then an amount of causal importance that is large compared to the rest of your reference class implies few similar members of the reference class, which is anthropic counter-evidence, so of course it would be impossible to think of an example. Even if nonsentients can contribute to utility, if I can create 3^^^3 utilons using nonsentients, then some other people probably can too, so I don't have a lot of causal importance compared to them.

Anyway, does "anthropic" even really have anything to do with qualia? The way people talk about it it clearly does, but I'm not sure it even shows up in the definition—a non-sentient optimizer could totally make anthropic updates.

This is the contrapositive of the grandparent. I was saying that if we assume that the reference class is sentients, then nonsentients need to reason using different rules i.e. a different reference class. You are saying that if nonsentients should reason using the same rules, then the reference class cannot comprise only sentients. I actually agree with the latter much more strongly, and I only brought up the former because it seemed similar to the argument you were trying to remember.

There are really two separate questions here, that of how to reason anthropically and that of how magic reality-fluid is distributed. Confusing these is common, since the same sort of considerations affect both of them and since they are both badly understood, though I would say that due to UDT/ADT, we now understand the former much better, while acknowledging the possibility of unknown unknowns. (Our current state of knowledge where we confuse these actually feels a lot like people who have never learnt to separate the descriptive and the normative.)

The way Eliezer presented things in the post, it is not entirely clear which of the two he meant to be responsible for the leverage penalty. It seems like he meant for it to be an epistemic consideration due to anthropic reasoning, but this seems obviously wrong given UDT. In the Tegmark IV model that he describes, the leverage penalty is caused by reality-fluid, but it seems like he only intended that as an analogy. It seems a lot more probable to me though, and it is possible that Eliezer would express uncertainty as to whether the leverage penalty is actually caused by reality-fluid, so that it is a bit more than an analogy. There is also a third mathematically equivalent possibility where the leverage penalty is about values, and we just care less about individual people when there are more of them, but Eliezer obviously does not hold that view.

Comment by endoself on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-09T02:14:08.765Z · LW · GW

Maybe I was unclear. I don't dismiss Y=TL4 as wrong, I ignore it as untestable and therefore useless for justifying anything interesting, like how an AI ought to deal with tiny probabilities of enormous utilities.

He's not saying that the leverage penalty might be correct because we might live in a certain type of Tegmark IV, he's saying that the fact that the leverage penalty would be correct if we did live in Tegmark IV + some other assumptions shows (a) that it is a consistent decision procedure and¹ (b) it is the sort of decision procedure that emerges reasonably naturally and is thus a more reasonable hypothesis than if we didn't know it comes up naturally like that.

It is possible that it is hard to communicate here since Eliezer is making analogies to model theory, and I would assume that you are not familiar with model theory.

¹ The word 'and' isn't really correct here. It's very likely that EY means one of (a) and (b), and possibly both.

Comment by endoself on Testing lords over foolish lords: gaming Pascal's mugging · 2013-05-09T02:03:29.323Z · LW · GW

Pascal's mugging is less of a problem if your utility function is bounded, and it completely goes away if the bound is reasonably low, since then there just isn't any amount of utility that would outweigh the improbability of the mugger being truthful.
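
A toy arithmetic illustration (the probability, cost, and bound are all invented): once the utility function is bounded at a reasonably low level, no promised payoff can compensate for the improbability.

```python
# Invented numbers for a Pascal's-mugging-style offer.
p_truthful = 1e-20      # probability the mugger can actually deliver
cost_of_paying = 5.0    # utilons handed over if you pay

def expected_gain(promised_utilons, bound=None):
    payoff = promised_utilons if bound is None else min(promised_utilons, bound)
    return p_truthful * payoff - cost_of_paying

huge_promise = 1e100    # stand-in for an astronomically large promise

print(expected_gain(huge_promise))             # unbounded utility: pay (≈ +1e80)
print(expected_gain(huge_promise, bound=100))  # bounded, low bound: refuse (≈ -5)
```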

Comment by endoself on Testing lords over foolish lords: gaming Pascal's mugging · 2013-05-08T08:10:47.821Z · LW · GW

I'm referring to an infinity of possible outcomes, not an infinity of possible choices. This problem still applies if the agent must pick from a finite list of actions.

Specifically, I'm referring to the problem discussed in this paper, which is mostly the same problem as Pascal's mugging.

Comment by endoself on Testing lords over foolish lords: gaming Pascal's mugging · 2013-05-08T06:54:13.396Z · LW · GW

If Pascal's mugger was a force of nature - a new theory of physics, maybe - then the case for keeping to expected utility maximisation may be quite strong.

There's still the failure of convergence. If the theory that made you think that it would be a good idea to accept Pascal's mugging tells you to sum an infinite series, and that infinite series diverges, then the theory is wrong.
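
A hedged sketch of that failure (the payoff schedule is a St. Petersburg-style stand-in, not the specific series from the paper): if outcome n has probability 2^-n but utility 3^n, the partial sums of the expectation grow without bound, so the expected utility the theory asks for doesn't exist.

```python
# Partial sums of sum_n P(n) * U(n) with P(n) = 2^-n and U(n) = 3^n,
# a stand-in for an expected-utility series that diverges.
def partial_sum(n_terms):
    return sum((0.5 ** n) * (3.0 ** n) for n in range(1, n_terms + 1))

for n in (10, 20, 40, 80):
    print(n, partial_sum(n))  # grows roughly like 1.5^n: the series has no limit
```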

Comment by endoself on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-08T06:34:53.057Z · LW · GW

You still get a probability function without Savage's P6 and P7, you just don't get a utility function with codomain the reals, and you don't get expectations over infinite outcome spaces. If we add real-valued probabilities, for example by assuming Savage's P6', you even get finite expectations, assuming I haven't made an error.

Comment by endoself on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-08T06:21:49.326Z · LW · GW

I can find one discussion where, when the question of bounded utility functions came up, Eliezer responded, "[To avert a certain problem] the bound would also have to be substantially less than 3^^^^3." -- but this indicates a misunderstanding of the idea of utility, because utility functions can be arbitrarily (positively) rescaled or recentered. Individual utility "numbers" are not meaningful; only ratios of utility differences.

I think he was assuming a natural scale. After all, you can just pick some everyday-sized utility difference to use as your unit, and measure everything on that scale. It wouldn't really matter what utility difference you pick as long as it is a natural size, since multiplying by 3^^^3 is easily enough for the argument to go through.

Comment by endoself on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-08T06:14:01.269Z · LW · GW

Quantum mechanics actually has led to some study of negative probabilities, though I'm not familiar with the details. I agree that they don't come up in the standard sort of QM and that they don't seem helpful here.

Comment by endoself on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-08T06:05:32.336Z · LW · GW

IIRC putting all possible observers in the same reference class leads to bizarre conclusions...? I can't immediately re-derive why that would be.

The only reason that I have ever thought of is that our reference class should intuitively consist of only sentient beings, but that nonsentient beings should still be able to reason. Is this what you were thinking of? Whether it applies in a given context may depend on what exactly you mean by a reference class in that context.

Comment by endoself on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-07T16:26:05.221Z · LW · GW

If what he says is true, then there will be 3^^^3 years of life in the universe. Then, assuming this anthropic framework is correct, it's very unlikely that you would find yourself at the beginning rather than at any other point in time, so this provides 3^^^3-sized evidence against this scenario.