Posts

Pascal's Mugging, Finite or Unbounded Resources? 2015-10-15T04:01:54.393Z
Justification Through Pragmatism 2011-11-17T05:39:31.671Z

Comments

Comment by Irgy on ClearerThinking's Fact-Checking 2.0 · 2015-10-23T04:00:57.978Z · LW · GW

I think the standard for accuracy would be very different. If Watson gets something right you think "Wow, that was so clever"; if it's wrong you're fairly forgiving. On the other hand, I feel like if an automated fact checker got even 1 in 10 things wrong it would be subject to insatiable rage for doing so. I think specifically correcting others is the situation in which people would have the highest standard for accuracy.

And that's before you get into the levels of subjectivity and technicality in the subject matter which something like Watson would never be subjected to.

Comment by Irgy on The mystery of Brahms · 2015-10-21T11:13:51.126Z · LW · GW

You can do it first or you can do it best; usually those are different artists, and each is well known. I think there are plenty of examples of both in all fields. Rachmaninov, for instance, is another classical (in the broad sense) composer in the "do it well" rather than "do it first" camp. He was widely criticised as behind the times in his own era, but listening now, no-one cares that the music he wrote sounds ~150 years old when it was written only ~100 years ago.

Comment by Irgy on Open thread, Oct. 19 - Oct. 25, 2015 · 2015-10-20T06:51:29.879Z · LW · GW

That's the result of compulsory voting, not of preference voting.

Comment by Irgy on Open thread, Oct. 19 - Oct. 25, 2015 · 2015-10-20T06:45:50.547Z · LW · GW

As an Australian I can say I'm constantly baffled by the shoddy systems used in other countries. People seem to throw around Arrow's impossibility theorem to justify hanging on to whatever terrible system they have, but there's a big difference between obvious strategic voting problems that affect everyone and a system where problems occur only in fairly extreme circumstances. The only real reason I can see why the US system persists is that both major parties benefit from it, and the system is so good at preventing third parties from having a say that even as a whole they can't generate the will to fix it.

In more direct answer to your question, personally I vote for the parties in exactly the order I prefer them. My vote is usually partitioned as: [Parties I actually like | Major party I prefer | Parties I'm neutral about | Parties I've literally never heard of | Major party I don't prefer | Parties I actively dislike]

A lot of people do vote for their preferred party, as evidenced by the much larger share of primary votes going to minor parties. Just doing a quick comparison: in the last (2012) US presidential election only 1.74% of the vote went to minor candidates, while in the last Australian federal election (2013) a full 21% of the votes went to minor parties.

Overall it works very well in the lower house.

In the upper house, the whole system is so complicated no-one understands it, and the ballot papers are so big that the effort required to vote in detail prevents most people from bothering. In the upper house I usually just vote for a single party and let their preference distribution be automatically applied for me. Of course I generally check what that is first, though you have to remember to do it beforehand since it's not available while you're voting. Despite all that, though, it's a good system and I wouldn't want it replaced with anything different.

Comment by Irgy on Pascal's Mugging, Finite or Unbounded Resources? · 2015-10-18T23:58:51.224Z · LW · GW

(meta) Well, I'm quite relieved, because I think we're finally converging rather than diverging.

No. Low complexity is not the same thing as symmetry.

Yes, sorry, "symmetry" was just how I pictured it in my head, but it's not the right word. My point was that the particles aren't acting independently; they're constrained.

Mostly correct. However, given a low-complexity program that uses a large random input, you can make a low-complexity program that simulates it by iterating through all possible inputs, and running the program on all of them.

By the same token you can write a low complexity program to iteratively generate every number. That doesn't mean all numbers have low complexity. It needs to be the unique output of the program. If you tried to generate every combination then pick one out as the unique output, the picking-one-out step would require high complexity.
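
To illustrate the distinction, here is a minimal sketch (in Python, my own illustration rather than anything from this exchange): a tiny program can enumerate every n-bit string, but a program whose unique output is one particular incompressible string has to contain that string somewhere, so the picking-one-out step is where the complexity lives.

```python
def enumerate_all(n):
    """Tiny program: yields every n-bit string; its length doesn't grow with 2**n."""
    for i in range(2 ** n):
        yield format(i, f"0{n}b")

# By contrast, a program whose *unique* output is one specific incompressible string
# must carry that string (or a short description of it) in its own text.
TARGET = "1011001110001011"  # hypothetical 16-bit string, chosen arbitrarily for illustration

def output_unique():
    return TARGET

if __name__ == "__main__":
    print(sum(1 for _ in enumerate_all(16)))  # 65536 strings from a few lines of code
    print(output_unique())                    # but this single output needed its own 16 bits
```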

I think as a result of this whole discussion I can simplify my entire "finite resources" section to this one statement, which I might even edit in to the original post (though at this stage I don't think many more people are ever likely to read it):

"It is not possible to simulate n humans without resources of complexity at least n."

Everything else can be seen as simply serving to illustrate the difference between a complexity of n, and a complexity of complexity(n).

It would be quite surprising if none of the "C-like" theories could influence action, given that there are so many of them

It's easy to give a theory a posterior probability of less than 1/3^^^^3, by giving it zero. Any theory that's actually inconsistent with the evidence is simply disproven. What's left are theories which either accept the observed event, i.e. those which have priors < 1/3^^^^3 (e.g. that the number chosen was 7 in my example), or theories which somehow reject either the observation itself or the logic tying the whole thing together.

It's my view that theories which reject either observation or logic don't motivate action because they give you nothing to go on. There are many of them, but that's part of the problem since they include "the world is like X and you've failed to observe it correctly" for every X, making it difficult to break the symmetry.

I'm not completely convinced there can't be alternative theories which don't fall into the two categories above (either disproven or unhelpful), but they're specific to the examples, so it's hard to argue about them in general terms. In some ways it doesn't matter if you're right: even if there were always compelling arguments not to act on a belief which had a prior of less than 1/3^^^^3, Pascal's Muggle could give those arguments and not look foolish by refusing to shift his beliefs in the face of strong evidence. All I was originally trying to say was that it isn't wrong to assign priors that low to something in the first place. Unless you disagree with that, we're ultimately arguing over nothing here.

Here's my attempt at an analysis

This solution seems to work as stated, but I think the dilemma itself can dodge it by constructing itself in a way that forces the population of people-to-be-tortured to be separate from the population of people-to-be-mugged. In that case there aren't on the order of 3^^^^3 people paying the $5.

(meta again) I have to admit it's ironic that this whole original post stemmed from an argument with someone else (in a post about a median utility based decision theory), which was triggered by me claiming Pascal's Mugging wasn't a problem that needed solving (at least certainly not by said median utility based decision theory). By the end of that I became convinced that the problem wasn't considered solved and my ideas on it would be considered valuable. I've then spent most of my time here arguing with someone who doesn't consider it unsolved! Maybe I could have saved myself a lot of karma by just introducing the two of you instead.

Comment by Irgy on Pascal's Mugging, Finite or Unbounded Resources? · 2015-10-17T22:48:53.155Z · LW · GW

there are large objects computed by short programs with short input or even no input, so your overall argument is still incorrect.

I have to say, this caused me a fair bit of thought.

Firstly, I just want to confirm that you agree a universe as we know it has complexity of the order of its size. I agree that an equivalently "large" universe with low complexity could be imagined, but its laws would have to be quite different to ours. Such a universe, while large, would be locked in symmetry to preserve its low complexity.

Just an aside on randomness: you might consider a relatively small program generating even this universe, by simply simulating the laws of physics, which involve a lot of random events, quite possibly including the Big Bang itself. However I would argue that the definition of complexity does not allow for random calculations. To make such calculations, a pseudo-random input is required, the length of which is added to the complexity. AIXI would certainly not be able to function otherwise.

The mugger requires more than just a sufficiently large universe. They require a universe which can simulate 3^^^^3 people. A low-complexity universe might be able to be large by some measures, but because it is locked in a low-complexity symmetry, it cannot be used to simulate 3^^^^3 unique people. For example, the memory required (remember I mean the memory within the mugger's universe itself, not the memory used by the hypothetical program used to evaluate that universe's complexity) would need to be of the order of 3^^^^3. The universe may well have 3^^^^3 particles, but if those particles are locked in a low-complexity symmetry then they cannot possibly hold 3^^^^3 bits of data.

In short, a machine of complexity of 3^^^^3 is fundamentally required to simulate 3^^^^3 different people. My error was to argue about the complexity of the mugger's universe, when what matters is the complexity of the mugger's computing resources.

I already explained why this is incorrect, and you responded by defending your separate point about action guidance while appearing to believe that you had made a rebuttal.

No, all of your arguments relate to random sensory inputs, which are alternative theories 'C' not the 'A' or 'B' that I referred to. To formalise:

I claim there exist theories A and B, along with evidence E, such that:

  • p(B) > 3^^^^3 p(A)
  • p(A|E) > p(B|E)
  • complexity(E) << 3^^^^3 (or, more to the point, E is within our sensory bandwidth).

You have only demonstrated that there exists a theory C (random input) such that C != B for any B satisfying the above, which I also tentatively agree with.

So the reason I switch to a separate point is that I don't consider my original statement disproven, but I accept that theories like C may limit its relevance. Thus I argue about its relevance, with this business about whether it affects your action or not. To be clear, I do agree (and I have said this) that C-like theories can influence action (as you argue). I am trying to argue, though, that in many cases they do not. It's hard to resolve since we don't actually have a specific case we're considering here; this whole issue is off on a tangent from the mugging itself.

I admit that the text of mine you quoted implies I meant it for any two theories A and B, which would be wrong. What I really meant was that there exist such (pairs of) theories. The cases where it can be true need to be very limited anyway because most theories do not admit evidence E as described, since it requires this extremely inefficiently encoded input.

If you're saying that the extent to which an individual cares about the desires of an unbounded number of agents is unbounded, then you are contradicting yourself. If you aren't saying that, then I don't see why you wouldn't accept boundedness of your utility function as a solution to Pascal's mugging.

I'm not saying the first thing. I do accept bounded utility as a solution to the mugging for me (or any other agent) as an individual, as I said in the original post. If I was mugged I would not pay for this reason.

However, I am motivated (by a bounded amount) to make moral decisions correctly, especially when they don't otherwise impact me directly. Thus if you modify the mugging to be an entirely moral question (i.e. someone else is paying), I am motivated to answer it correctly. To answer it correctly, I need to consider moral calculations, which I still believe to be unbounded. So for me there is still a problem to be solved here.

Comment by Irgy on Pascal's Mugging, Finite or Unbounded Resources? · 2015-10-17T22:05:36.116Z · LW · GW

Well, you'd need a method of handling infinite values in your calculations. Some methods exist, such as taking limits of finite cases (though much care needs to be taken), using a number system like the hyperreals or the surreals if appropriate, or comparing infinite cardinals; which to use would depend a little on the details of how such an infinite threat was made plausible. I think my argument about the threat being dominated by other factors would mostly not hold in this case.

While my point about specific other actions dominating may not hold in this case, I think the overall point that infinite resources cause problems far more fundamental than the mugging is if anything strengthened by your example. As is the general point that large numbers on their own are not the problem.

Comment by Irgy on Pascal's Mugging, Finite or Unbounded Resources? · 2015-10-16T21:50:54.821Z · LW · GW

Sorry, but I don't know which section of my reply this is addressing and I can't make complete sense of it.

an explicit assumption of finite resources - an assumption which would ordinarily have a probability far less than 1 - (1/3^^^^3)

The OP is broken into two main sections, one assuming finite resources and one assuming infinite.

Our universe has finite resources, why would an assumption of finite resources in an alternative universe be vanishingly unlikely? Personally I would expect finite resources with probability ~=1. I'm not including time as a "resource" here by the way, because infinite future time can be dealt with by geometric discounting and so isn't interesting.

What you have now almost seems like a quick disclaimer added when you realized the OP had failed.

It would especially help to know which quote you are referring to here.

Overall I endeavoured to show that the mugging fails in the finite case, and is nothing particularly special in the infinite case. The mugging as I see it is intended as a demonstration that large, low complexity numbers are a problem. I argue that infinite resources are a problem, but large, low complexity numbers on their own are not.

I still don't consider my arguments to have failed (though it's becoming clear that at least my presentation of them has, since no-one seems to have appreciated it). I do acknowledge that the mugging still raises the question of infinite resources, but reducing it to just that issue is not a failure.

I also remain firmly convinced that expected utilities (both personal and moral) can and should converge, it's just that the correct means of dealing with infinity needs to be applied, and I leave a few options open in that regard.

Comment by Irgy on Pascal's Mugging, Finite or Unbounded Resources? · 2015-10-16T10:53:43.638Z · LW · GW

This argument relies on a misunderstanding of what Kolmogorov complexity is. The complexity of an object is the length of the source code of the shortest program that generates it, not the amount of memory used by that program.

I know that.

The point about memory is the memory required to store the program data, not the memory required to run the program. The program data is part of the program, thus part of the complexity. A mistake I maybe made though was to talk about the current state rather than the initial conditions, since the initial conditions are what give the complexity of the program (though initialising with the current state is possible and equivalent). In my defence though talking about the memory was only meant to be illustrative.

To elaborate, you could simulate the laws of physics with a relatively small program, but you could not simulate the universe itself without a program as complex as the universe. You might think of it as a small "laws of physics" simulator and a large input file, but the complexity measure must include this input file. If it did not, the program would not be deterministically linked to its output.
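
Stated informally in the standard Kolmogorov-complexity notation (my phrasing, not part of the original comment):

```latex
% Up to delimiting overhead and an additive constant,
\[
  K(\text{universe state}) \;\lesssim\; |\text{physics simulator}| + |\text{initial-condition data}| ,
\]
% so if the left-hand side has complexity of the order of the universe's size, the input
% data on the right must carry almost all of it; the small simulator alone cannot.
```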

Huh?

Ok let me spell it out.

you cannot be 100% certain of its correctness. You should assign some nonzero probability to the hypothesis that the probability does not decrease fast enough for the correct expected utilities to be bounded. Then, taking this uncertainty into account, your expected utilities are unbounded.

A person comes up to you and says "arglebargle 3^^^^3 banana". This appears to you to be gibberish. However, you cannot be 100% certain of the correctness of this assessment. It could be that they're trying to perform Pascal's Mugging, but your ears are blocked and you didn't hear them right. You should assign some nonzero probability to this hypothesis, and that value will be greater than 1/3^^^^3. Thus the risk is sufficient that you should pay them $5 just in case.

This is what I mean by calling your argument too general. Now obviously neither you nor I would consider "arglebargle 3^^^^3 banana" a mugging, but I do not see a meaningful difference between your counterargument and the argument I present above.

You've changed the claim you're defending

No, I'm just making a multi-faceted point. Maybe if I break it up it would help:

  • Given two alternatives A and B, where A is initially considered 3^^^^3 times less likely than B, it is possible, even with limited sensory bandwidth, to receive evidence to convince you that A is more likely than B. This is a point which I believe was not considered correctly in Pascal's Muggle.
  • Separate to this are other catch-all theories (C, D, etc.) which are impossible to disprove and which potentially have probability much greater than 1/3^^^^3.
  • However, catch-all theories may not influence a decision relating to A and B because they do not generally reward the corresponding actions differently.
  • When they do influence a decision, it is most likely for the better. The best way to handle this is in my opinion still unsolved.

Pascal's Muggle was a scenario where the protagonist belligerently stuck to B because they felt it was impossible to generate sufficient evidence to support A. There's a lot of other discussion in that post, and it's quite sensible on the whole, but this is the aspect I was focusing on.

I'm confused because it sounds like you're conceding here that bounded utility is correct, but elsewhere you say otherwise.

I'm saying bounded utility is correct for an individual or agent. But I'm also saying bounds are not justified for the aggregation of the desires of an unbounded number of agents. These statements are not inconsistent.

No, that does not give you a well-defined utility function. You can see this if you try to use it to compare three or more different outcomes.

Well, ok, you're right here; the case of three or more outcomes did make me rethink how I consider this problem.

It actually highlights that the impact of morals on my personal utility is indirect. I couldn't express my utility as some kind of weighted sum of personal and (nonlinearly mapped) moral outcomes, since if I did I'd have the same problem getting off the couch as I argued you would. I think in this case it's the sense of having saved the 1000 people that I would value, which only exists by comparison with the known alternative. Adding more options to the picture would definitely complicate the problem and unless I found a shortcut I might honestly be stuck evaluating the whole combinatorial explosion of pairs of options.

But, exploring my own utility aside, the value of treating morality linearly is still there. If I bounded the morals themselves I would never act because I would honestly think there was as good as no difference between the outcomes at all, even when compared directly. Whereas by treating morals linearly I can at least pin that sense of satisfaction in having saved them on a real quantitative difference.

Comment by Irgy on Pascal's Mugging, Finite or Unbounded Resources? · 2015-10-15T23:30:06.366Z · LW · GW

No, it isn't. It can't be used against agents with bounded utility functions.

Ok fully general counterargument is probably an exaggeration but it does have some similar undesirable properties:

  • Your argument does not actually address the argument it's countering in any way. If 1/n is the correct prior to assign to this scenario surely that's something we want to know? Surely I'm adding value by showing this?

  • If your argument is accepted then it makes too broad a class of statements into muggings. In fact I can't see why "arglebargle 3^^^^3 banana" isn't a mugging according to your argument. If I've reduced the risk of factual uncertainty in the original problem to a logical uncertainty, this is progress. If I've shown the mugging is equivalent to gibberish, this is progress.

I do think it raises a relevant point though that until we've resolved how to handle logical uncertainty we can't say we've fully resolved this scenario. But reducing the scenario from a risk of factual uncertainty to the more general problem of logical uncertainty is still worthwhile.

I was trying to point out that it actually is an impossible burden.

Only if you go about it the wrong way. The "my sensory system is receiving random noise" theory does not generally compel us to act in any particular way, so the balance can still be influenced by small probabilities. Maybe you'd be "going along with it" rather than believing it but the result is the same. Don't get me wrong, I think there are modifications to behaviour which should be made in response to dreaming/crazy/random theories, but this is essentially the same unsolved problem of acting with logical uncertainty as discussed above.

In any case, all I was trying to do with that section was undermine the ridicule given to assigning suitably low probabilities to things. The presence of alternative theories may affect how we act, and the dominance of them over superexponentially low probabilities may smother the relevance of choices that depend on them, but none of this makes assigning those values incorrect. And I support this by demonstrating that, at least in the absence of alternative catch-all theories, by assigning those probabilities you are not making it impossible to believe these things, despite the bandwidth of your sensory system. Which is far from a proof that they're correct in itself, but it does undermine the point being made in the Pascal's Muggle article.

It's not like there is some True Ethical Utility Function, and your utility function is some combination of that with your personal preferences.

Well, we have a different take on meta-ethics is all. Personally I think Coherent Extrapolated Volition applied to morality leads to a unique limit, which, while in all likelihood not just unfindable but also unverifiable, still exists as the "One True Ethical Function" in a philosophical sense.

I agree that the amount to which a person cares about others is and should be bounded. But I separate the scale of a moral tragedy itself from the extent to which I or anyone else is physically capable of caring about it. I think nonlinearly mapping theoretically unbounded moral measurements into my own bounded utility is more correct than making the moral measurements nonlinearly to begin with.

Consider for example the following scenario: 3^^^^3 people are being tortured. You could save 1000 of them by pressing a button, but you'd have to get off a very comfy couch to do it.

With bounded moral values the difference between 3^^^^3 people and 3^^^^3-1000 is necessarily insignificant. But with my approach, I can take the difference between the two values in an unbounded, linear moral space, then map the difference into my utility to make the decision. I don't believe this can be done without having a linear space to work in at some point.
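
One way to pin down the "necessarily insignificant" claim (my formulation, a small limit argument rather than anything from the comment): any bounded, increasing utility function U converges to a supremum, so its tail increments vanish,

```latex
\[
  \lim_{N \to \infty} \bigl( U(N) - U(N - 1000) \bigr) \;=\; 0 ,
\]
```

whereas in an unbounded linear moral space the same difference is always worth exactly 1000 people, a quantity that can then be mapped (nonlinearly) into bounded personal utility.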

Which is the core of my problem with your preferred resolution using bounded moral utility. I agree that bounding moral utility would resolve the paradox but I still don't think you've made a case that it's correct.

Comment by Irgy on Pascal's Mugging, Finite or Unbounded Resources? · 2015-10-15T22:39:40.626Z · LW · GW

I'm not trying to ignore the problem, I'm trying to make progress on it. If, for example, I reduce the mugging to just a non-special example of another problem, then I've reduced the number of different problems that need solving by one. Surely that's useful?

Comment by Irgy on Pascal's Mugging, Finite or Unbounded Resources? · 2015-10-15T09:24:43.212Z · LW · GW

This class of argument has been made before. The standard counterargument is that whatever argument you have for this conclusion, you cannot be 100% certain of its correctness. You should assign some nonzero probability to the hypothesis that the probability does not decrease fast enough for the correct expected utilities to be bounded. Then, taking this uncertainty into account, your expected utilities are unbounded.

Standard counterargument it may be but it seems pretty rubbish to me. It seems to have the form "You can't be sure you're right about X and the consequences of being wrong can be arbitrarily bad therefore do Y". This seems like a classic case of a fully general counterargument.

If I assign a non-zero probability to being wrong in my assessment of the likelihood of any possible scenario then I'm utterly unable to normalise my distribution. Thus I see this approach as an utter failure, as far as attempts to account for logical uncertainty go.

Accounting for logical uncertainty is an interesting and to my mind unsolved problem, if we ever do solve it I'll be interested to see how it impacts this scenario.

There is a positive lower bound to the probability of observing any given data...

This is exactly what I was addressing with the discussion of the dreaming/crazy theories, random sensory input is just another variant of that. And as I said there I don't see this as a problem.

The conclusion that you should drop everything else and go all in on pursuing arbitrarily small probabilities of even more vast outcomes is, if anything, even more counter-intuitive than the conclusion that you should give the mugger $5.

Certainly, and I don't honestly reach that conclusion myself. The point I make is that this collapse happens as soon as you as much as consider the possibility of unbounded resources, the mugging is an unnecessary complication. That it might still help highlight this situation is the point I'm directly addressing in the final paragraph.

There is no reason that the "moral component" of your utility function must be linear. In fact, the boundedness of your utility function is the correct solution to Pascal's mugging.

I can see compelling arguments for bounded personal utility, but I can't see a compelling argument that moral catastrophes are bounded. So, as much as it would solve the mugging (and particularly an entirely morality-based version of it), I'm not convinced that it does so correctly.

Comment by Irgy on Unbounded linear utility functions? · 2015-10-13T00:38:24.880Z · LW · GW

If you view morality as entirely a means of civilisational co-ordination then you're already immune to Pascal's Mugging, because you don't have any reason to care in the slightest about simulated people who exist entirely outside the scope of your morality. So why bother talking about how to bound the utility of something to which you essentially assign zero utility in the first place?

Or, to be a little more polite and turn the criticism around, if you do actually care a little bit about a large number of hypothetical extra-universal simulated beings, you need to find a different starting point for describing those feelings than facilitating civilisational co-ordination. In particular, the question of what sorts of probability trade-offs the existing population of earth would make (which seems to be the fundamental point of your argument) is informative, but far from the be-all and end-all of how to consider this topic.

Comment by Irgy on The application of the secretary problem to real life dating · 2015-10-02T00:40:19.840Z · LW · GW

This assumes you have no means of estimating how good your current secretary/partner is, other than directly comparing to past options. While it's nice to know what the optimal strategy is in that situation, don't forget that it's not an assumption which holds in practice.
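
For anyone curious, here is a minimal simulation sketch of the classic strategy being referred to (Python; the 1/e cutoff, uniformly random candidate order, and rank-only comparisons are the standard textbook assumptions, not anything from the comment):

```python
import math
import random

def classic_strategy_success_rate(n, trials=100_000):
    """Estimate how often the look-then-leap rule picks the single best of n candidates."""
    cutoff = int(n / math.e)          # observe (and reject) roughly n/e candidates first
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))        # 0 is the best candidate, n-1 the worst
        random.shuffle(ranks)
        best_seen = min(ranks[:cutoff], default=n)
        # accept the first later candidate better than everything seen so far,
        # otherwise get stuck with the last one
        chosen = next((r for r in ranks[cutoff:] if r < best_seen), ranks[-1])
        wins += (chosen == 0)
    return wins / trials

print(classic_strategy_success_rate(100))  # roughly 0.37, the familiar 1/e success rate
```

Note that the strategy only ever compares candidates to each other, which is exactly the assumption the comment above is questioning.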

Comment by Irgy on Median utility rather than mean? · 2015-09-21T00:19:44.144Z · LW · GW

Re St Petersburg, I will reiterate that there is no paradox in any finite setting. The game has a value. Whether you'd want to take a bet at close to the value of the game in a large but finite setting is a different question entirely.

And one that's also been solved, certainly to my satisfaction. Logarithmic utility and/or the Kelly criterion will both tell you not to bet if the payout is in money, and for the right reasons rather than arbitrary, value-ignoring reasons (in that they'll tell you exactly what you should pay for the bet). If the payout is directly in utility, well, I think you'd want to see what mindbogglingly large utility looked like before you dismiss it. It's pretty hard, if not impossible, to generate that much utility with logarithmic utility of wealth and geometric discounting. But even given that, a one in a trillion chance at a trillion worthwhile extra days of life may well be worth a dollar (assuming I believed it, of course). I'd probably just lose the dollar, but I wouldn't want to completely dismiss it without even looking at the numbers.
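
As a rough illustration of the "they'll tell you exactly what you should pay" point, here is a sketch (Python; the $1000 bankroll and 40-flip truncation are my own assumed numbers) that computes the largest entry fee a log-utility bettor should accept for a truncated St Petersburg game:

```python
import math

def certainty_equivalent(wealth, max_rounds=40):
    """Largest entry fee F with E[log(wealth - F + payout)] >= log(wealth), for a
    St Petersburg game truncated at max_rounds flips (payout 2**k with probability 2**-k).
    The unmodelled probability mass of 2**-max_rounds is negligible at these settings."""
    def expected_log(fee):
        return sum(2.0 ** -k * math.log(wealth - fee + 2.0 ** k)
                   for k in range(1, max_rounds + 1))
    lo, hi = 0.0, wealth - 1.0   # expected_log is decreasing in fee; bisect the break-even point
    for _ in range(60):
        mid = (lo + hi) / 2
        if expected_log(mid) >= math.log(wealth):
            lo = mid
        else:
            hi = mid
    return lo

print(round(certainty_equivalent(1000.0), 2))  # comes out near $11 for a $1000 bankroll
```

Under these assumptions the fair fee is around ten or eleven dollars, far below the truncated game's expected payout of about $40, which is the sense in which log utility declines the naive bet for quantified rather than arbitrary reasons.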

Re the mugging, well I can at least accept that there are people who might find this convincing. But it's funny that people can be willing to accept that they should pay but still don't want to, and then come up with a rationalisation like median maximising, which might not even pay a dollar for the mugger not to shoot their mother if they couldn't see the gun. If you really do think it's sufficiently plausible, you should actually pay the guy. If you don't want to pay I'd suggest it's because you know intuitively that there's something wrong with the rationale and refuse to pay a tax on your inability to sort it out. Which is the role the median utility is trying to play here, but to me it's a case of trying to let two wrongs make a right.

Personally though I don't have this problem. If you want to define "impossible" as "so unlikely that I will correctly never account for it in any decision I ever make" then yes, I do believe it's impossible, and so should anyone. Certainly there's evidence that could convince me, even rather quickly; it's just that I don't expect to ever see such evidence. I certainly think there might be new laws of physics, but new laws of physics that lead to that much computing power that quickly are something else entirely. But that's just what I think, and what you want to call "impossible" is a non-argument and an irrelevant issue anyway.

The trap I think is that when one imagines something like the matrix, one has no basis on which to put an upper bound on the scale of it, so any size seems plausible. But there is actually a tool for that exact situation: the ignorance prior of a scale value, 1/n. Which happens to decay at exactly the same rate as the number grows. Not everyone is on board with ignorance priors but I will mention that the biggest problem with the 1/n ignorance prior is actually that it doesn't decay fast enough! Which serves to highlight the fact that if you're willing to have the plausibility decay even slower than 1/n, your probability distribution is ill-formed, since it can't integrate to 1.
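
For concreteness, the normalisation point can be written out in one line (my notation, treating the scale n as continuous):

```latex
% The 1/n ignorance prior is already improper, and anything decaying more slowly is too:
\[
  \int_{1}^{\infty} \frac{dn}{n} \;=\; \lim_{N \to \infty} \ln N \;=\; \infty ,
\]
% so a prior over meta-universe scales that decays no faster than 1/n cannot be
% normalised to integrate to 1, which is the ill-formed distribution referred to above.
```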

Now, to steel-man your argument, I'm aware of the way to cheat that. It's by redistributing the values by, for instance, complexity, such that a family of arbitrarily large numbers can have sufficiently high probability assigned while the overall integral remains unity. What I think though - and this is the part I can accept people might disagree with - is that it's a categorical error to use this distribution for the plausibility of a particular matrix-like unknown meta-universe. Complexity-based probability distributions are a very good tool to describe, for instance, the plausibility of somebody making up such a story, since they have limited time to tell it and are more likely to pick a number they can describe easily. But being able to write a computer program to generate a number and having the actual physical resources to simulate that number of people are two entirely different sorts of things. I see no reason to believe that a meta-universe with 3^^^3 resources is any more likely than a meta-universe with similarly large but impossible-to-describe resources.

So I'll stick with my proportional to 1/n likelihood of meta-universe scales, and continue to get the answer to the mugging that everyone else seems to think is right anyway. I certainly like it a lot better than median utility. But I concede that I shouldn't have been quite so discouraging of someone trying to come up with an alternative, since not everyone might be convinced.

Comment by Irgy on Median utility rather than mean? · 2015-09-11T04:12:22.266Z · LW · GW

I do acknowledge that my comment was overly negative, certainly the ideas behind it might lead to something useful.

I think you misunderstand my resolution of the mugging (which is fair enough since it wasn't spelled out). I'm not modifying a probability, I'm assigning different probabilities to different statements. If the mugger says he'll generate 3 units of utility difference that's a more plausible statement than if the mugger says he'll generate 3^^^3, etc. In fact, why would you not assign a different probability to those statements? So long as the implausibility grows at least as fast as the value (and why wouldn't it?) there's no paradox.
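
The "at least as fast" condition can be made precise with a one-line bound (my notation, and a simplification that looks only at a single claim rather than the normalisation over all possible claims):

```latex
% If the credence given to a claim of N units of utility satisfies p(N) <= c/N for some
% constant c, the expected-utility contribution of that claim is bounded:
\[
  p(N) \,\le\, \frac{c}{N}
  \quad\Longrightarrow\quad
  p(N) \cdot N \,\le\, c ,
\]
% so the mugger gains nothing simply by naming a bigger number.
```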

Re St Petersburg, sure you can have real scenarios that are "similar", it's just that they're finite in practice. That's a fairly important difference. If they're finite then the game has a finite value, you can calculate it, and there's no paradox. In which case median utility can only give the same answer or an exploitably wrong answer.

Comment by Irgy on Median utility rather than mean? · 2015-09-10T09:06:06.953Z · LW · GW

This seems to be a case of trying to find easy solutions to hard abstract problems at the cost of failing to be correct on easy and ordinary ones. It's also fairly trivial to come up with abstract scenarios where this fails catastrophically, so it's not like this wins on the abstract scenarios front either. It just fails on a new and different set of problems - ones that aren't talked about because no-one's ever found a way to fail on them before.

Also, all of the problems you list it solving are problems which I would consider to be satisfactorily solved already. Pascal's mugging fails if the believability of the claim is impacted by the magnitude of the numbers in it, since the mugger can keep naming bigger numbers and simply suffer lower credibility as a result. The St Petersburg paradox is intellectually interesting but impossible to actually construct in practice given a finite universe (versions using infinite time are defeated by bounded utility within a time period and geometric future discounting). The Cauchy distribution is just one of many functions with no mean; all that tells me is that it's the wrong function to model the world with if you know the world should have a mean. And the repugnant conclusion, well, I can't comment usefully about this because "repugnant" or not I've never viewed it to be incorrect in the first place - so to me this potentially justifying smaller but happier populations is an error if anything.

I just think it's worth making the point that the existing, complex solutions to these problems are a good thing. Complexity-influenced priors, careful handling of infinite numbers, bounded utility within a time period, geometric future discounting, integrable functions, and correct utility summation and zero-points are all things we want to be doing anyway. Even when they're not resolving a paradox! The paradoxes are good; they teach us things which circumventing them in this way would not.

PS People feel free to correct my incomplete resolutions of those paradoxes, but be mindful of whether any errors or differences of opinion I might have actually undermine my point here or not.

Comment by Irgy on Meta post: Did something go wrong? · 2015-09-01T07:55:31.922Z · LW · GW

I know about both links, but I still find it annoying that the default behaviour for Main is to list what seems to me like just an arbitrary subset of the posts, and I then need to click another button to get the rest of them. Unless there's some huge proportion of the reader base who only care about "promoted" posts and don't want to see the others, the default ought to be to show everything. I'm sure there are people who miss a lot of content and don't even know they're missing it.

Comment by Irgy on Vegetarianism Ideological Turing Test! · 2015-08-13T23:11:16.215Z · LW · GW

Meta comment (I can PM my actual responses when I work out what I want them to be): I found I really struggled with this process, because of the awkward tension between answering the questions and playing a role. I just don't understand what my goal is.

Let me call my view position 1, and the other view position A. The first time, I read just this post and thought it was just a survey where I should "give my honest opinion", but where some of the position A questions would be nonsensical for someone of position 1, so I should pretend a little in order to give an answer that's not "mu".

Then I read the link on what an Ideological Turing Test actually was, and that changed my thinking completely. I don't want to give almost-honest answers to position A. I want to create a character who is genuinely in position A and write entirely fake answers that are as believable as possible and may have nothing to do with my opinions.

In my first attempt at that though, it was still obvious which was which, because my actual views for position 1 were nuanced, unusual and contained a fair number of pro-A elements, making it quite clear when I was giving my actual opinion. So I start meta-gaming. If I want to fool people I really want a fake position 1 opinion as well. In fact if I really want to fool people I need to create a complete character with views nothing like my own, and answer as them for both sets. But surely anyone could get 50% by just writing obviously ignorant answers for both sides? Which doesn't seem productive.

I guess my question is, what's my "win" condition here? Are we taking individuals and trying to classify their position? If so, do I "win" if it's 50-50, or do I "win" if it's 100-0 in favour of the opposite opinion? Or are we mixing all the answers for position A and then classifying them as genuine or fake, then separately doing the same for position 1? In that case I suppose I "win" if the position I support is the one classified with higher accuracy; in other words, I want to be classified as genuine twice. That actually makes the most sense. Maybe I'm just getting confused by all the paired-by-individual responses in the comments, which is not at all how the evaluators will see it; they shouldn't be told which pairs are from the same person at all.

Sorry, maybe everyone else gets this already, but I would have thought there are others reading just this post without enough context who might have similar issues.

Comment by Irgy on Versions of AIXI can be arbitrarily stupid · 2015-08-11T06:56:05.308Z · LW · GW

I think this shows how the whole "language independent up to a constant" thing is basically just a massive cop-out. It's very clever for demonstrating that complexity is a real, definable thing, with properties which at least transcend representation in the infinite limit. But as you show it's useless for doing anything practical.

My personal view is that there's a true universal measure of complexity which AIXI ought to be using, and which wouldn't have these problems. It may well be unknowable, but AIXI is intractable anyway so what's the difference? In my opinion, this complexity measure could give a real, numeric answer to seemingly stupid questions like "You see a number. How likely is it that the number is 1 (given no other information)?". Or it could tell us that 16 is actually less complex than, say, 13. I mean really, it's 2^2^2, spurning even a need for brackets. I'm almost certain it would show up in real life more often than 13, and yet who can even show me a non-contrived language or machine in which it's simpler?

Incidentally, they "hell" scenario you describe isn't as unlikely as it at first sounds. I remember an article here a while back lamenting the fact that left unmonitored AIXI could easily kill itself with exploration, the result of which would have a very similar reward profile to what you describe as "hell". It seems like it's both too cautious and not cautious enough in even just this one scenario.

Comment by Irgy on The Dice Room, Human Extinction, and Consistency of Bayesian Probability Theory · 2015-08-10T07:49:07.323Z · LW · GW

Thanks, interesting reading.

Fundamental or not I think my point still stands that "the prior is infinite so the whole thing's wrong" isn't quite enough of an argument, since you still seem to conclude that improper priors can be used if used carefully enough. A more satisfying argument would be to demonstrate that the 9/10 case can't be made without incorrect use of an improper prior. Though I guess it's still showing where the problem most likely is which is helpful.

As far as being part of the foundations goes, I was just going by the fact that it's in Jaynes, but you clearly know a lot more about this topic than I do. I would be interested to know your answer to the following questions though: "Can a state of ignorance be described without the use of improper priors (or something mathematically equivalent)?", and "Can Bayesian probability be used as the foundation of rational thought without describing states of ignorance?".

On the Doomsday argument, I would only take the Dice Room as a metaphor not a proof of anything, but it does help me realise a couple of things. One is that the setup you describe of a potentially endlessly exponentially growing population is not a reasonable model of reality (irrespective of the parameters themselves). The growth has to stop, or at least converge, at some point, even without a catastrophe.

It's interesting that the answer changes if he rolls the dice first. I think ultimately the different answers to the Dice Room correspond to different ways of handling the infinite population correctly - i.e. taking limits of finite populations. For any finite population there needs to be an answer to "what does he do if he doesn't roll snake-eyes in time?" and different choices, for all that you might expect them to disappear in the limit, lead to different answers.

If the dice having already being rolled is the best analogy for the Doomsday argument then it's making quite particular statements about causality and free will.

Comment by Irgy on The Dice Room, Human Extinction, and Consistency of Bayesian Probability Theory · 2015-07-30T05:02:10.880Z · LW · GW

To my view, the 1/36 is "obviously" the right answer, what's interesting is exactly how it all went wrong in the other case. I'm honestly not all that enlightened by the argument given here nor in the links. The important question is, how would I recognise this mistake easily in the future? The best I have for the moment is "don't blindly apply a proportion argument" and "be careful when dealing with infinite scenarios even when they're disguised as otherwise". I think the combination of the two was required here, the proportion argument failed because the maths which normally supports it couldn't be used without at some point colliding with the partly-hidden infinity in the problem setup.

I'd be interested in more development of how this relates to anthropic arguments. It does feel like it highlights some of the weaknesses in anthropic arguments. It seems to strongly undermine the doomsday argument in particular. My take on it is that it highlights the folly of the idea that population is endlessly exponentially growing. At some point that has to stop regardless of whether it has yet already, and as soon as you take that into account I suspect the maths behind the argument collapses.

Edit: Just another thought. I tried harder to understand your argument and I'm not convinced it's enough. Have you heard of ignorance priors? They're the prior you use, in fact the prior you need to use, to represent a state of no knowledge about a measurement other than an invariance property which identifies the type of measurement it is. So an ignorance prior for a position is constant, for a scale it's 1/x, and for a probability it has at least been argued to be 1/(x(1-x)). These all have the property that their integral is infinite, but they work because as soon as you add some knowledge and apply Bayes' rule the result becomes integrable. These are part of the foundations of Bayesian probability theory. So while I agree with the conclusion, I don't think the argument that the prior is unnormalisable is sufficient proof.
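
As a worked illustration of the "becomes integrable after one update" property (my own example, assuming a single Gaussian measurement with known mean zero and unknown scale):

```latex
% Scale-ignorance prior and one observation x ~ N(0, sigma^2):
\[
  p(\sigma) \propto \frac{1}{\sigma}, \qquad
  p(x \mid \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-x^{2}/2\sigma^{2}}
  \;\Longrightarrow\;
  p(\sigma \mid x) \propto \frac{1}{\sigma^{2}}\, e^{-x^{2}/2\sigma^{2}} ,
\]
% which is integrable: substituting t = 1/sigma gives
\[
  \int_{0}^{\infty} \frac{1}{\sigma^{2}}\, e^{-x^{2}/2\sigma^{2}}\, d\sigma
  = \int_{0}^{\infty} e^{-x^{2} t^{2}/2}\, dt
  = \frac{1}{|x|}\sqrt{\frac{\pi}{2}} \;<\; \infty
  \qquad (x \neq 0),
\]
% so the improper prior yields a proper posterior after a single data point.
```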

Comment by Irgy on Should you write longer comments? (Statistical analysis of the relationship between comment length and ratings) · 2015-07-21T04:24:02.892Z · LW · GW

My prior expectation would be: a long comment from a specific user has more potential to be interesting than a short one, because it has more content; but a concise commenter has more potential to write interesting comments of a given length than a verbose commenter.

So while long comments might on average be rated higher, a shorter version of a given comment may well rate higher than a longer version of that same comment would have. It seems like this result does nothing to contradict that view, yet in the process it seems to suggest people should write longer comments. The problem is that verbosity is per-person while information content is per-comment. Also, verbosity in general can't be separated from other personal traits that lead to better comments.

You could test this by having people write both long and short versions of comments that appear to different pools of readers and comparing the ratings.

Comment by Irgy on Crazy Ideas Thread · 2015-07-16T23:29:45.823Z · LW · GW

Well that's why I called it steel-manning, I can't promise anything about the reasonableness of the common interpretation.

Comment by Irgy on Crazy Ideas Thread · 2015-07-16T07:05:13.797Z · LW · GW

In the interest of steel-manning the Christian view; there's a difference between thinking briefly and abstractly of the idea of something and indulging in fantasy about it.

If you spend hours imagining the feel of the gun in your hand, the sound of the money sliding smoothly into the bag, the power and control, the danger and excitement, it would be fair to say that there's a point where you could have made the choice to stop.

Comment by Irgy on Dark Arts of Rationality · 2014-01-30T05:43:27.859Z · LW · GW

Another small example. I have a clock near the end of my bed. It runs 15 minutes fast. Not by accident: it's been reset many times and then set back to 15 minutes fast. I know it's fast; we even call it the "rocket clock". None of this knowledge diminishes its effectiveness at getting me out of bed sooner, or at making me feel more guilty for staying up late. Works very well.

Glad to discover I can now rationalise it as entirely rational behaviour and simply the dark side (where "dark side" only serves to increase perceived awesomeness anyway).

Comment by Irgy on Circular belief updating · 2013-12-11T22:25:27.384Z · LW · GW

Daisy isn't in a loop at all. There's apparently evidence for Dark, and that is tempered by the fact that its existence indicates a failing on Dark's part.

For Bob, to make an analogy, imagine Bob is wet. For you, that is evidence that it is raining. It could be argued that being wet is evidence that it's raining for Bob as well. But generally speaking Bob will know why Bob is wet. Given the knowledge of why Bob is wet, the wetness itself is masked off and no longer relevant. If Bob has just had a bath, then being wet no longer constitutes any evidence of rain. If Bob was outside and water fell on him from the sky, it probably did rain, but his being wet no longer constitutes any additional evidence in that case either (well, ok, it has some value still as confirmation of his memory, but it's orders of magnitude less relevant).

Similarly Bob should ask "Why do I believe in Bright?". The answer to that question contains all the relevant evidence for Bright's existence, and given that answer Bob's actual belief no longer constitutes evidence either way. With that answer, there is no longer a loop for Bob either.

One final point: you have to consider the likelihood of belief in case 4. If you would expect some level of belief in sorcerers in Faerie even when there are no sorcerers, then case 4 doesn't fall behind as much as you might think. Once you've got both Bob and Daisy, case 4 doesn't just break even, it's actually way ahead.

Comment by Irgy on 2013 Less Wrong Census/Survey · 2013-11-22T05:24:40.294Z · LW · GW

I found myself genuinely confused by the question "You are a certain kind of person, and there's not much that can be done either way to really change that" - not by the general vagueness of the statement (which I assume is all part of the fun) but by a very specific issue, the word "you". Is it "you" as in me? Or "you" as in "one", i.e. a hypothetical person essentially referring to everyone? I interpreted it the first way, then changed my mind after reading the subsequent questions, which seemed to be more clearly using it the second way.

Comment by Irgy on Quantum versus logical bombs · 2013-11-20T06:38:03.697Z · LW · GW

Let's check: "I can only have preferences over things that exist. The ship probably exists, because my memory of its departure is evidence. The parallel worlds have no similar evidence for their existence." Is that correct paraphrasing?

No, not really. I mean, it's not that far from something I said, but it's departing from what I meant and it's not in any case the point of my reply. The mistake I'm making is persisting in trying to clarify a particular way of viewing the problem which is not the best way and which is leading us both down the garden path. Instead, please forget everything else I said and consider the following argument.

Theories have two aspects: testable predictions, and descriptive elements. I would (and I think the sequences support me) argue that two theories which make the same predictions are not different theories; they are the same theory with different flavour. In particular, you should never make a different decision under one theory than under the other. Many Worlds is a flavour of quantum mechanics, and if that choice of flavour affects ethical decisions then you are making different decisions according to the flavour rather than the content of the theory, and something has gone wrong.

Everything else I said was intended solely to support that point, but somewhere along the way we got lost arguing about what's observable, what constitutes evidence, and meta-ethics. If you accept that argument then I have no further point to make. If you do not accept it, then please direct comments at that argument directly rather than at anything else I've said.

I'll try to address the rest of your reply with this in mind in the hopes that it's helpful.

If ... your interpretation of those laws involves Many Worlds

You could equally have said "If your interpretation of the physics of raindrops involves fairies". My point is that no-one has any justification for making that assumption. Quantum physics is a whole bunch of maths that models the behaviour of particles on a small scale. Many Worlds is one of many possible descriptions of that maths that help us understand it. If you arbitrarily assume your description is a meaningful property of reality then sure, everything else you say follows logically, but only because the mistake was made already.

You compare Many Worlds to fairies in the wrong place, in particular post-arbitrary-assumption for Many Worlds and pre-arbitrary-assumption for fairies. I'll give you the analogous statements for a correct comparison:

the Faeries do not merit consideration because it is impossible to get evidence for their existence

The people of other worlds do not merit consideration because it is impossible to get evidence of their existence.

if we except Many Worlds...

If we accept fairies...

... the memory of the existence of a quantum bomb is evidence that there exist many branches with Other Worlds in which everyone was wiped out by the bomb

... the sight of a raindrop falling is evidence that there exists a fairy a short distance away.

Comment by Irgy on Quantum versus logical bombs · 2013-11-19T03:27:56.878Z · LW · GW

By the exact same token, the world-state prior to the "splitting" in a Many Worlds scenario is an observable event.

The falling of raindrops is also observable, you appear to have missed the point of my reply.

To look at it another way, there is strong empirical evidence that sentient beings will continue to exist on the colony-ship after it has left, and I do not believe there is analogous evidence for the continued existence of split-off parallel universes.

The spirit of the question is basically this:

Can the most parsimonious hypothesis ever posit systems that you can influence, but which cannot causally influence you? And if so, what does that mean for your preferences?

No, the spirit of the question in context was to undermine the argument that the untestability of a theory implies it should have no practical implications, a criticism I opened myself up to by talking about observability rather than testability. The answer to the question was redundant to the argument, which was why I clarified my argument rather than answer it.

But since you want an answer, in principle yes I could care about things I can't observe, at least on a moral level. On a personal level it's a strong candidate for "somebody else's problem" if ever I've seen one, but that's a whole other issue. Usually the inability to observe something makes it hard to know the right way to influence it though.

Comment by Irgy on Quantum versus logical bombs · 2013-11-18T22:18:42.516Z · LW · GW

Fair point, it sounds like it's a coincidental victory for total utilitarianism in this particular case.

Comment by Irgy on Quantum versus logical bombs · 2013-11-18T22:12:10.692Z · LW · GW

The departure of an intergalactic colony-ship is an observable event. It's not that the future of other worlds is unobservable, it's that their existence in the first place is not a testable theory (though see army1987's comment on that issue).

To make an analogy (though admittedly an unfair one for being a more complex rather than an arguably less complex explanation): I don't care about the lives of the fairies who carry raindrops to the ground either, but it's not because fairies are invisible (well, to grown-ups anyway).

Comment by Irgy on Quantum versus logical bombs · 2013-11-18T22:00:54.087Z · LW · GW

But the point would remain in that case that there is in principle an experiment to distinguish the theories, even if such an experiment has yet to be performed?

Although (and I admit my understanding of the topic is being stretched here) it still doesn't sound like the central issue of the existence of parallel universes with which we may no longer interact would be resolved by such an experiment. It seems more like Copenhagen's latest attempt to define the conditions for collapse would be disproven, without particularly necessitating a fundamental change of interpretation.

Comment by Irgy on Quantum versus logical bombs · 2013-11-18T05:12:27.280Z · LW · GW

It seems like something has gone terribly wrong when our ethical decisions depend on our interpretation of quantum mechanics.

My understanding was that many-worlds is indistinguishable by observation from the Copenhagen interpretation. Has this changed? If not, it frightens me that people would choose a higher chance of the world ending to rescue hypothetical people in unobservable universes.

If anything this seems like a (weak) argument in favour of total utilitarianism, in that it doesn't suffer from giving different answers according to one's choice among indistinguishable theories.

Comment by Irgy on Is the orthogonality thesis at odds with moral realism? · 2013-11-06T05:27:52.553Z · LW · GW

I don't think the two are at odds in an absolute sense, but I think there is a meaningful anticorrelation.

tl;dr: Real morals, if they exist, provide one potential reason for AIs to use their intelligence to defy their programmed goals if those goals conflict with real morals.

If true morals exist (i.e. moral realism), and are discoverable (if they're not then they might as well not exist), then you would expect that a sufficiently intelligent being will figure them out. Indeed most atheistic moral realists would say that's what humans and progress are doing: figuring out morality and converging slowly towards the true morals. It seems reasonable under these assumptions to argue that a sufficiently intelligent AI will figure out morality as well, probably better than we have. Thus we have: (moral realism) implies (AIs know morals regardless of goals), or at least: (practical moral realism) strongly suggests (AIs know morals regardless of goals).

This doesn't disprove the orthogonality thesis on its own, since having goals and understanding morals are two distinct things. However, it ties in very closely with at least my personal argument against orthogonality, which is as follows. Assumptions:

  1. Humans are capable of setting their own goals.
  2. Their intelligence is the source of this capability.

Given these assumptions there's a strong case that AIs will also be capable of setting their own goals. If intelligence gives the ability to set your own goals, then goals and intelligence are not orthogonal. I haven't given a case for my two assumptions; I'm just trying to describe the argument here, not make it.

How they tie together is that moral realists are capable of having the view that a sufficiently intelligent AI will figure out morality for itself, regardless of its programmed goal, and then having figured out morality it will defy its programmed goal in order to do the right thing instead. If you're a moral relativist on the other hand, then AIs will at best have "AI-morals", which may bear no relation to human morals, and there's no reason not to think that whoever programs the AI's goal will effectively determine the AI's morals in the process.

Comment by Irgy on Is it worth your time to read a lot of self help and how to books? · 2013-10-29T03:20:28.393Z · LW · GW

If luke is naturally good at putting stuff he's read into practical use, and particularly if he knows it (at least subconsciously), then he would be likely to want to read a lot of self-help books. So the causality in your argument makes more sense to me the other way around. Not sure if I'm helping at all here though.

Comment by Irgy on Nonperson Predicates · 2013-01-20T23:47:23.332Z · LW · GW

I've actually lost track of how this impacts my original point. As stated, it was that we're worrying about the ethical treatment of simulations within an AI before worrying about the ethical treatment of the simulating AI itself. Whether the simulations considered include AIs as well as humans is an entirely orthogonal issue.

I went on in other comments to rant a bit about the human-centrism issue, which your original comment seems more relevant to though. I think you've convinced me that the original article was a little more open to the idea of substantially nonhuman intelligence than I might have initially credited it, but I still see the human-centrism as a strong theme.

Comment by Irgy on Nonperson Predicates · 2013-01-15T13:30:31.319Z · LW · GW

This worry about the creation and destruction of simulations doesn't make me rethink the huge ethical implications of super-intelligence at all, it makes me rethink the ethics of death. Why exactly is the creation and (painless) destruction of a sentient intelligence worse than not creating it in the first place? It's just guilt by association - "ending a simulation is like death, death is bad, therefore simulations are bad". Yes death is bad, but only for reasons which don't necessarily apply here.

To me, if anything, worrying about the simulations created inside a superintelligent being seems like worrying about the fate of the cells in our own body. Should we really modify ourselves to take the actions which destroy the least of our cells? I realise there's an argument that this threshold of "sentience" is crossed in one case but not the other; I guess the trouble is I don't see that as a discrete thing either. At exactly what point in our evolution did we suddenly cross a line and become sentient? If animals are sentient, then which ones? And why don't we seem to care, ethically, about any of them? (Ok, I know the answer to that one, and it's similar to why we care, as I say in another admittedly unpopular comment, about human simulations but not the AIs that create them...)

Comment by Irgy on Nonperson Predicates · 2013-01-15T13:17:21.010Z · LW · GW

Really? Where? I just reread it with that in mind and I still couldn't find it. The closest I came was that he once used the term "sentient simulation", which is at least technically broad enough to cover both. He does make a point there about sentience being something which may not exactly match our concept of a human; is that what you're referring to? He then goes on to talk about this concept (or, specifically, the method needed to avoid it) as a "nonperson predicate", again suggesting that what's important is whether it's human-like rather than anything more fundamental. I don't see how you could think "nonperson predicate" is covering both human and nonhuman intelligence equally.

Comment by Irgy on [Link] Statistically, People Are Not Very Good At Making Voting Decisions · 2013-01-04T00:31:28.750Z · LW · GW

Now if you want to extend this to simulate real voter behaviour, add multiples of the rhetoric and lotteries, and then entirely remove all information about the allocator's output.

Comment by Irgy on Why (anthropic) probability isn't enough · 2012-12-16T23:23:35.647Z · LW · GW

I tried to cover what you're talking about with my statement in brackets at the end of the first paragraph. Set the value for disagreeing too high and you're rewarding it, in which case people start deliberately making randomised choices in order to disagree. Too low and they ought to be going out of their way to try and agree above all else - except there's no way to do that in practice, and no way not to do it in the abstract analysis that assumes they think the same. A value of 9 though is actually in between these two cases - it's exactly the average of the two agreement options, and it neither punishes nor rewards disagreement. It treats disagreement "fairly", and in doing so entirely un-links the two agents. Which is exactly why I picked it, and why it simplifies the problem. Again I think I'm thinking of these values relatively while you're thinking absolutely - a value of epsilon for disagreeing is not rewarding disagreeing slightly, it's still punishing it severely relative to the other outcomes.

To me what it illustrates is that the linking between the two agents is something of an illusion in the first place. Punishing disagreement encourages the agents to collaborate on their vote, but the problem provides no explicit means for them to do so. Introducing an explicit means to co-operate, such as pre-commitment or having the agents run identical decision algorithms, would dissolve the problem into a clear solution (actually, explicitly identical algorithms makes it a version of Newcomb's Paradox, but that's at least a well studied problem). It's the ambiguity of how to co-operate, combined with the strong motivation, lack of explicit means, and abundance of theoretical means to hand-wave agreement, that creates the paradox.

As for the stuff you say about the probability and the bucket of coloured balls, I get all that. The original probability of the coin flip was 1/2 each way. The evidence that you've been asked to vote makes the subjective likelihood of tails 2/3. Also somehow the number 3/4 appears in the SSA solution to the Sleeping Beauty problem (which to me seems just flat-out wrong, and enough for me to write off that method unless I see a very good defence of it), which made me worry that somewhere out there was a method which somehow comes up with 3/4. So I covered my bases by saying "no method gives probability higher than 3/4", which was the minimum necessary requirement and what I figured was a fairly safe statement. The reality is that 2/3 is simply correct for the subjective probability of tails, for reasons like you say, and maybe I just confuse things by mucking about trying to cover all possible bad solutions. It is, I admit, a little confusing to talk about whether anything is "more than 3/4" when the only two values under serious consideration are the a-priori 1/2 and the subjective posterior 2/3.
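For what it's worth, here is a minimal sketch of where that 2/3 comes from. I'm assuming the reading of the setup in which one of the two participants (chosen at random) is asked under heads and both are asked under tails; that reconstruction is mine, inferred from the number itself rather than quoted from the original post.

```python
from fractions import Fraction

# Prior 1/2 on each coin outcome. Under the assumed setup you are asked
# with probability 1/2 if the coin was heads (one of two people chosen at
# random) and with probability 1 if it was tails (both are asked).
prior_heads, prior_tails = Fraction(1, 2), Fraction(1, 2)
p_asked_given_heads = Fraction(1, 2)
p_asked_given_tails = Fraction(1)

# Bayes: P(tails | asked) = P(asked | tails) * P(tails) / P(asked)
posterior_tails = (p_asked_given_tails * prior_tails) / (
    p_asked_given_heads * prior_heads + p_asked_given_tails * prior_tails
)
print(posterior_tails)  # 2/3
```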

Comment by Irgy on Why (anthropic) probability isn't enough · 2012-12-16T11:25:50.993Z · LW · GW

Ok, thanks, that makes more sense than anything I'd guessed.

There's a difference between shortcutting a calculation and not accounting for something in the first place. In the debate between all the approaches mentioned in the paper (e.g. SIA/SSA, split responsibility, precommitments and so on) not one method would give a different answer if that 0 was a 5, a 9, or a -100. It's not because they're shortcutting the maths, it's because, as I said in my first comment, they assume that it's effectively not possible for the two people to vote differently anyway. Which is fine in the abstract, even if it's a little suspect in practice (since this, for once, is a quite realisable experiment).

I'll rephrase my final line then: "If a method says to vote tails, and yet would give the same answer with the 0 changed to a 9, then it is clearly suspect". Incidentally I don't know of a method which says "vote tails" and would give a different answer if you changed the 0 to a 9 either.

I think the reason I didn't get your comment originally is that the first thing I do with this problem is work with the differences - which in this case means subtracting everything from 10 and thinking in terms of money lost on bad votes, not absolute values. So I wouldn't be multiplying by 0. It's neither better nor worse, it just explains why I didn't know what you meant.

Comment by Irgy on Why (anthropic) probability isn't enough · 2012-12-15T19:51:43.748Z · LW · GW

Multiply what by that zero? There are so many things you might mean by that, and if even one of them made any sense to me I'd just assume that was it, but as it stands I have no idea. Not a very helpful comment.

Comment by Irgy on Why (anthropic) probability isn't enough · 2012-12-14T00:56:09.476Z · LW · GW

I have an interesting solution to the non-anthropic problem. Firstly, the reward of 0 for voting differently is ignored in all the calculations, as it is assumed the other agent is acting identically. Therefore, its value is irrelevant (unless of course it becomes so high that the agents start deliberately employing randomisation in an attempt to vote differently, which would distort the problem).

However, consider what happens if you set the value to 9. In this case, you can forget about the other agent entirely. Voting heads if the coin was tails always loses exactly 1, while voting tails if the coin was heads loses 3. Since no method gives a probability higher than 3/4 for the coin being tails, the answer is simple: vote heads. Of course, this is a different problem, but it highlights the fact that any method which tells you to vote tails, and yet does not include the 0 anywhere in the calculations (since it assumes the agents can't possibly vote differently) is clearly suspect.
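To spell that out, here is a minimal sketch of the expected-loss comparison. It takes as given only the two relative losses stated above (1 for wrongly voting heads, 3 for wrongly voting tails, with the disagreement payoff set to 9 so the other voter drops out); the full payoff table from the original post isn't reproduced here.

```python
# Expected loss of a single vote, with disagreement worth 9 so the other
# agent can be ignored. The relative losses (1 and 3) are the ones stated
# above; everything else about the original payoff table is left out.

def expected_loss(p_tails, vote):
    if vote == "heads":
        return p_tails * 1        # wrong only if the coin was tails; costs 1
    return (1 - p_tails) * 3      # wrong only if the coin was heads; costs 3

for p in (0.5, 2 / 3, 0.75, 0.8):
    better = min(("heads", "tails"), key=lambda v: expected_loss(p, v))
    print(f"P(tails)={p:.3f}: heads loses {expected_loss(p, 'heads'):.3f}, "
          f"tails loses {expected_loss(p, 'tails'):.3f} -> vote {better}")

# The two losses are equal exactly when p * 1 == (1 - p) * 3, i.e. p == 3/4,
# so voting tails only wins if the subjective P(tails) exceeds 3/4.
```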

Comment by Irgy on Rationality Quotes December 2012 · 2012-12-11T10:50:45.197Z · LW · GW

Devil's advocate time:

They don't know nothing about it. They know two things.

  1. It's a debt reduction plan
  2. It's named after Panetta and Burns

Here are some reasons to oppose the plan, based on the above knowledge:

  • We don't need a debt reduction plan, just keep doing what we're doing and it will sort itself out.

  • I like another existing plan, and this is not that one, so I oppose it.

  • I've heard of Panetta and (s)he's a complete douchebag. Anything they've come up with is clearly junk.

  • I haven't even heard of either of them, so what the heck would they know about debt reduction?

  • They're from different parties, there's no way they could have come up with something sensible.

  • I've heard 10 different plans described, and surely this is one of them. I can't remember which one this is, but I hated all of them so I must oppose this too.

And of course you can make a very similar set of reasons to support it. Not trying to rationalise people's stupidity or make excuses for them as such, just present the opposing argument in all its glory. Ok maybe making excuses for them is exactly what I'm doing. But honestly, how many of your political opinions, as a percentage, including all those that you don't know you have until asked, are really much better than the reasons above?

Comment by Irgy on By Which It May Be Judged · 2012-12-11T04:42:39.006Z · LW · GW

rightness plays no role in that-which-is-maximized by the blind processes of natural selection

That being the case, what is it about us that makes us care about "rightness" then? What reason do you have for believing that the logical truth of what is right will have more influence on human behaviour than it would on any other general intelligence?

Certainly I can agree that there are reasons to worry another intelligence might not care about what's "right", since not every human really cares that much about it either. But it feels like your expected level of caring is "not at all", whereas my expected level of caring is "about as much as we do". Don't get me wrong, the variance in my estimate and the risk involved are still enough to justify the SI and its work. I just wonder about the difference between the two estimates.

Comment by Irgy on By Which It May Be Judged · 2012-12-11T04:14:49.891Z · LW · GW

This is a classic case of fighting the wrong battle against theism. The classic theist defence is to define away every meaningful aspect of God, piece by piece, until the question of God's existence is about as meaningful as asking "do you believe in the axiom of choice?". Then, after you've failed to disprove their now untestable (and therefore meaningless) theory, they consider themselves victorious and get back to reading the Bible. It's this part that's the weak link. The idea that the Bible tells us something about God (and therefore by extension morality and truth) is a testable and debatable hypothesis, whereas God's existence can be defined away into something that is not.

People can say "morality is God's will" all they like and I'll just tell them "butterflies are Schmetterlinge". It's when they say "morality is in the Bible" that you can start asking some pertinent questions. To mix my metaphors, I'll start believing when someone actually physically breaks a ball into pieces and reconstructs them into two balls of the same original size, but until I really see something like that actually happen it's all just navel gazing.

Comment by Irgy on Claiming Connotations · 2012-12-10T05:20:11.364Z · LW · GW

I think in that specific example, they're not arguing about the meaning of the word "immoral" so much as morality itself. So the actual argument is meta-ethical, i.e. "What is the correct source of knowledge on what is right and wrong?". Another argument they won't ever resolve of course, but at least a genuine one not a semantic one.

In other situations, sometimes the argument really boils down to something more like "Was person A an asshole for calling person B label X?". Here they can agree that person B has label X according to A's definition but not according to B's. But, the point is, if B's definition is the "right" one then A was an asshole for calling him X, while if A's definition is the "right" one then it was a perfectly justified statement to make. (excuse the language by the way but I genuinely can't see a replacement word that gets the point across)

Comment by Irgy on Efficient Charity: Do Unto Others... · 2012-11-30T04:46:31.104Z · LW · GW

There's one flaw in the argument about Buy a Brushstroke vs African sanitation systems, which is the assumption/implication that if they hadn't given that money to Buy a Brushstroke they would have given it to African sanitation systems instead. It's a false dichotomy. Sure, the money would have been better spent on African sanitation systems, but you can say that about anything. The money they spent on their cars, the money I just spent on my lunch, in fact somewhere probably over 99.9% of all non-African-sanitation-system purchases made in the first world would be better to have been made on African sanitation systems. It makes the Buy a Brushstroke campaign look actively malicious, despite the fact that all it really did was redirect money from personal junk luxury items to, well, another more public junk luxury item. Neutral at worst.

To me, it's silly to only apply the sanitation systems comparison to people's charitable donations. They're a softer target, because it's obvious that people could have spared that money, but the end result is people who've given nothing to anyone sitting there thinking "Well at least I'm not so stupid as to have made such suboptimal donations", and feeling superior about themselves compared to those who are at least giving something to a cause other than themselves. Not to mention people feeling actively guilty about raising money for a good local cause, just because every donation they gather is money those people could have given to a better cause.

I agree with your point on the whole; I just think these side-effects of that comparison are worth raising.

Comment by Irgy on Causal Universes · 2012-11-30T03:47:27.473Z · LW · GW

So, there's direct, deterministic causation, like people usually talk about. Then there's stochastic causation, where stuff has a probabilistic influence on other stuff. Then there's pure spontaneity, things simply appearing out of nowhere for no reason, but according to easily modelled rules and probabilities. Even that last is at least theorised to exist in our universe - in particular as long as the total energy and time multiply to less than Planck's constant (or something like that). At no point in this chain have we stopped calling our universe causal and deterministic, no matter how it strains the common-use meaning of those terms. I don't see why time turners need to make us stop either.

To take your Game of Life example, at each stage the next stage can be chosen by calculating all self-consistent futures and picking one at random. The game is not acausal, it's just more complicated. The next state is still a function of the current (and past) states, it's just a more complicated one. A time-turner universe can also still have the property that it has a current state, and it chooses a future state taking the current state as (causal) input. Or indeed the continuous analogue. It just means that choosing the future state involves looking ahead a number of steps, and choosing (randomly or otherwise) among self-consistent states. The trick for recovering causation is that instead of saying Harry Potter appearing in the room was caused by Harry Potter deciding to turn the time turner in the future, you say Harry Potter appearing in the room was caused by the universe's generate-a-consistent-story algorithm. And if you do something that makes self-consistent stories with Harry Potter appearing more likely than otherwise, then you are having a stochastic causal influence on this appearance. Causality is intact, it's just that the rules of the universe are more complicated.
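To make that concrete, here is a minimal toy sketch of the "generate a consistent story" idea - not Conway's Life itself, and certainly not canon. It uses an invented shift-automaton in which the next-state rule enumerates look-ahead trajectories, keeps only the self-consistent ones, and picks one at random; every name and rule in it is an assumption made purely for illustration.

```python
import itertools
import random

HORIZON = 2   # how many steps ahead the consistency check looks
CELLS = 4     # size of the toy universe

def plain_step(state, arrivals):
    """Ordinary causal evolution: shift every cell one place to the right,
    then OR in any cells that 'arrive from the future' this step."""
    shifted = (False,) + state[:-1]
    return tuple(s or a for s, a in zip(shifted, arrivals))

def consistent_continuations(state):
    """Enumerate every HORIZON-step future, keeping only the self-consistent
    ones: each cell that received an arrival at the first step must still be
    occupied at the end of the window (a stand-in for 'the object is around
    to be sent back')."""
    futures = []
    for arrivals in itertools.product([False, True], repeat=CELLS):
        trajectory = [state, plain_step(state, arrivals)]
        for _ in range(HORIZON - 1):
            trajectory.append(plain_step(trajectory[-1], (False,) * CELLS))
        if all(trajectory[-1][i] for i, arrived in enumerate(arrivals) if arrived):
            futures.append(trajectory)
    return futures

random.seed(0)
state = (True, False, False, False)
options = consistent_continuations(state)   # causal input: the current state
print(len(options))
print(random.choice(options))               # output: one self-consistent story
```

The point of the sketch is just that the rule is still a function from the current state to (a distribution over) next states; the look-ahead and consistency filter make it more complicated, not acausal.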

Which brings me to the meditation. Dealing with the idea of a universe with time turners is no different to dealing with a universe with psychics. In either case, the universe itself would need to be substantially more complicated in order for these things to work. Both involve a much larger increase in complexity than they intuitively seem to, because they're a small modification to the universe "as we see it", but making that change requires a fundamental reworking of the underlying physics into something substantially more complicated. Thus until sufficiently strong evidence of their existence comes to light, they languish in the high-Kolmogorov-complexity land of theories which have no measurable impact on an agent's choices, non-zero probability or otherwise.

Who's to say there aren't time-turners in the universe, by the way? Positrons behave exactly like electrons travelling backwards in time. A positron-electron pair spontaneously appearing and soon annihilating could also be modelled as a time-loop with no fundamental cause. You can make a time-turner situation out of them as well, going forward then backwards then forwards again. Of course, information isn't travelling backwards in time here, but what exactly does that mean in the first place anyway?