## Posts

## Comments

**rstarkov** on Rationality Quotes April 2014 · 2014-04-15T13:59:37.917Z · score: 1 (1 votes) · LW · GW

Indeed, terse "explanations" that handwave more than explain are a pet peeve of mine. They can be outright confusing and cause more harm than good IMO. See this question on phrasing explanations in physics for some examples.

**rstarkov** on A Fervent Defense of Frequentist Statistics · 2014-03-01T14:27:24.645Z · score: 2 (2 votes) · LW · GW

One useful definition of Bayesian vs Frequentist that I've found is the following. Suppose you run an experiment; you have a hypothesis and you gather some data.

- if you try to obtain the probability of the data, given your hypothesis (treating the hypothesis as fixed), then you're doing it the frequentist way
- if you try to obtain the probability of the hypothesis, given the data you have, then you're doing it the Bayesian way.

I'm not sure whether this view holds up to criticism, but if so, I sure find the latter much more interesting than the former.
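The two directions can be sketched numerically. Below is an illustrative Python snippet (the coin data, flat prior, and hypothesis grid are all made up for the example): the frequentist direction computes P(data | hypothesis) with the hypothesis held fixed, while the Bayesian direction turns the same likelihood into P(hypothesis | data) over a range of candidate hypotheses.

```python
from math import comb

# Observed data: 7 heads in 10 flips of a coin with unknown bias.
heads, flips = 7, 10

def likelihood(p):
    # Frequentist direction: P(data | hypothesis), hypothesis fixed at bias p.
    return comb(flips, heads) * p**heads * (1 - p)**(flips - heads)

print(likelihood(0.5))  # how probable is this data if the coin is fair?

# Bayesian direction: P(hypothesis | data) over a grid of candidate biases,
# starting from a flat prior and normalising.
grid = [i / 100 for i in range(101)]
posterior = [likelihood(p) for p in grid]  # flat prior: posterior ∝ likelihood
total = sum(posterior)
posterior = [w / total for w in posterior]

# Probability mass on "the coin is biased towards heads (p > 0.5)":
print(sum(w for p, w in zip(grid, posterior) if p > 0.5))
```

With 7 heads in 10 flips, most of the posterior mass ends up on biases above 0.5 — the kind of statement the frequentist direction never makes directly.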

**rstarkov** on 2013 Less Wrong Census/Survey · 2013-12-05T02:52:21.205Z · score: 16 (16 votes) · LW · GW

This has been the most fun, satisfying survey I've ever been part of :) Thanks for posting this. Can't wait to see the results!

One question I'd find interesting is closely related to the probability of life in the universe. Namely: if we were to meet a randomly sampled spacefaring lifeform, what are the chances that its intelligence would be similar enough to ours, both in its "ways" and in general level of smarts, for us to communicate meaningfully?

Given that I enjoyed taking part in this, may I suggest that more frequent and in-depth surveys on specialized topics might be worth doing?

**rstarkov** on The Robots, AI, and Unemployment Anti-FAQ · 2013-07-26T15:15:31.689Z · score: 8 (10 votes) · LW · GW

> Maybe we've finally reached the point where there's no work left to be done

If so, this is superb! This is the end goal. A world in which there is no work left to be done, so we can all enjoy our lives, free from the requirement to work.

The idea that work is desirable has been hammered into our heads so hard that it sounds like a really, really dubious proposition that the ultimate goal is actually a world where nobody has to work. But it is the goal. Not a world in which everyone works; that world sucks, and it's the world 85% of us live in today.

**rstarkov** on Taboo Your Words · 2013-02-08T07:52:42.025Z · score: 2 (2 votes) · LW · GW

I first read this about two years ago, and it has been an invaluable tool ever since. I'm sure it has saved countless hours of pointless arguments around the world.

When I realise that an inconsistency in how we interpret a specific word is the problem in an argument and apply this tool, it instantly transforms arguments that actually *are* about the meaning of the word into much more productive ones (it can be surprisingly unobvious that the real disagreement is about what a word means). In other cases it simply gets us back on track, instead of letting us be distracted by what we mean by a word when that is beside the point.

It does occasionally take a while to convince the other party that I'm not trying to fool or trick them when I ask that we apply this method. Another observation: the article on Empty Labels has transformed my attitude towards the meaning of words, so when it turns out we disagree only about meanings, I instantly lose interest, and this can confuse the other party.

**rstarkov** on Solving the two envelopes problem · 2012-08-05T18:58:22.389Z · score: 2 (2 votes) · LW · GW

Addressed by making a few edits to the "Solution" section. Thank you!

**rstarkov** on Solving the two envelopes problem · 2012-08-05T15:02:14.904Z · score: 1 (1 votes) · LW · GW

All fair points. I did want to post this to main, but decided against it in the end. Didn't know I could move it to main afterwards. Will work on the title, after I've fixed the error pointed out by VincentYu.

**rstarkov** on Newcomb's Problem and Regret of Rationality · 2011-08-31T15:17:55.652Z · score: 0 (0 votes) · LW · GW

I've reviewed the language of the original statement and it seems that the puzzle is set in essentially the real world with two major givens, i.e. facts in which you have 100% confidence.

Given #1: Omega was correct on the last 100 occurrences.

Given #2: Box B is already empty or already full.

There is no leeway left for quantum effects, or for your choice affecting in any way what's in box B. You cannot make box B full by consciously choosing to one-box. The puzzle says so, after all.

If you read it like this, then I don't see why you would possibly one-box. Given #2 already implies the solution. The 100 successful predictions must have been achieved through a very low probability event, or through a trick, e.g. by offering the bet only to those people whose answer you can already predict, say by reading their LessWrong posts.

If you *don't* read it like this, then we're back to the "gooey vagueness" problem, and I will once again insist that the puzzle needs to be fully defined before it can be attempted. For example, by removing both givens, and instead specifying exactly what you know about those past 100 occurrences. Were they definitely not done on plants? Was there sampling bias? Am I considering this puzzle as an outside observer, or am I imagining myself being *part* of that universe - in the latter case I have to put some doubt into everything, as I can be hallucinating. These things *matter*.

With such clarifications, the puzzle becomes a matter of your confidence in the past statistics vs. your confidence about the laws of physics precluding your choice from actually influencing what's in box B.

**rstarkov** on Newcomb's Problem and Regret of Rationality · 2011-08-31T00:14:24.481Z · score: 0 (0 votes) · LW · GW

I'm not sure I understand correctly, but let me phrase the question differently: what sort of confidence do we have in "99.9%" being an accurate value for Omega's success rate?

From your previous comment I gather the confidence is absolute. This removes one complication while leaving the core of the paradox intact. I'm just pointing out that this isn't very clear in the original specification of the paradox, and that clearing it up is useful.

To explain why it's important, let me indeed think of an AI, as hairyfigment suggested. Suppose someone says they have let 100 previous AIs flip a fair coin 100 times each and it came out heads every single time, because they have magic powers that make it so. This someone presents me with video evidence of this feat.

If faced with this in the real world, an AI coded by me would *still* bet close to 50% on tails if offered to flip its own fair coin against this person, because I have strong evidence that this someone is a cheat, and their video evidence is fake. Just something I know from a huge amount of background information that was not explicitly part of this scenario.

However, when *discussing* such scenarios, it is sometimes useful to assume hypothetical scenarios *unlike* the real world. For example, we could state that this someone has *actually* performed the feat, and that there is absolutely no doubt about that. That's impossible in our real world, but it's useful for the sake of discussing bayesianism. Surely any bayesianist's AI would expect heads with high probability *in this hypothetical universe*.

So, are we looking at "Omega in the real world where someone I don't even know tells me they are really damn good at predicting the future", or "Omega in some hypothetical world where they are actually known with absolute certainty to be really good at predicting the future"?

**rstarkov** on Newcomb's Problem and Regret of Rationality · 2011-08-29T16:38:01.400Z · score: 0 (0 votes) · LW · GW

While I disagree that one-boxing still wins, I'm most interested in seeing the "no future peeking" and the actual Omega success rate being defined as givens. It's important that I can rely on the 99.9% value, rather than wondering whether it is perhaps inferred from their past 100 correct predictions (which could, with a non-negligible probability, have been a fluke).
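To put a number on how plausible a fluke is (illustrative arithmetic only; the accuracy values are made up): if Omega were merely a fallible guesser with per-prediction accuracy p, the chance of a clean 100-for-100 streak is p**100.

```python
# Chance of a 100-for-100 prediction streak for a guesser with
# per-prediction accuracy p (independent trials assumed).
for p in (0.5, 0.9, 0.99):
    print(p, p ** 100)
```

A 99%-accurate predictor produces such a streak roughly a third of the time, so a fluke is indeed non-negligible there, while a coin-flipping guesser essentially never does.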

**rstarkov** on Newcomb's Problem and Regret of Rationality · 2011-05-22T18:28:06.339Z · score: -2 (2 votes) · LW · GW

> It's only controversial because it's dressed up in wooey vagueness

I also happen to think that under-specification of this puzzle adds significantly to the controversy.

What the puzzle doesn't tell us is the properties of the universe in which it is set. Namely, whether the universe permits future to influence the past, which I'll refer to as "future peeking".

(alternatively, whether the universe somehow allows someone within the universe to precisely simulate the future faster than it actually comes - a proposition I don't believe is ever true in *any* universe defined mathematically).

This is important because if the future can't influence the past, then it is known with absolute certainty that taking two boxes won't possibly change what's in them (this is, after all, a basic given of the universe). Whether Omega has predicted something before is completely irrelevant now that the boxes are placed.

Alas, we aren't told what the universe is like. If that is intentionally part of the puzzle then the only way to solve it would be to enumerate all possible universes, assigning each one a probability of being ours based on all the available evidence, and essentially come up with a probability that "future peeking" is impossible in our universe. One would then apply simple arithmetic to calculate the expected winnings.

Unfortunately, P("future peeking allowed") is one of those probabilities that is completely incalculable for any practical purpose. Thus if "no future peeking" isn't a *given*, the best answer is "I don't know if taking two boxes is best, because there's this one probability I can't actually calculate in practice".
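The "simple arithmetic" can be sketched as follows. This is a toy model, not the canonical treatment: q stands for the assumed probability that Omega's prediction actually tracks your choice, and the standard $1,000 / $1,000,000 payoffs are assumed.

```python
# Expected winnings for each strategy, as a function of q: the probability
# that Omega's prediction actually tracks your choice ("future peeking").
# Assumed payoffs: $1,000 in box A; $1,000,000 in box B if you were
# predicted to one-box.
A, B = 1_000, 1_000_000

def ev_one_box(q):
    # With probability q the prediction matches your choice, so B is full.
    return q * B

def ev_two_box(q):
    # With probability q the prediction matches your choice (B is empty);
    # you always get A on top of whatever is in B.
    return A + (1 - q) * B

for q in (0.0, 0.5, 0.999):
    print(q, ev_one_box(q), ev_two_box(q))
```

Under this model two-boxing wins only for small q; the break-even point is q = (A + B) / 2B ≈ 0.5005.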

**rstarkov** on "Is there a God" for noobs · 2011-03-25T11:42:53.444Z · score: 1 (1 votes) · LW · GW

To expand a bit on the first paragraph, I feel that such reasonable arguments are to many people about the same as the proof of Poincaré conjecture is to me: I fully understand the proposition, but I'm not nearly smart enough to follow the proof sufficiently well to be confident it's right.

Importantly, I can also follow the *outline* of the proof, to see how it's intended to work, but this is of course insufficient to establish the validity of the proof.

So the only real reason I happen to trust this proof is that I already have a pre-established trust in the community who reviewed the proof. But of course the same is also true of a believer who has a pre-established trust in the theist community.

So the guide would require a section on "how to pick authorities to trust", which would explain why it's necessary (impractical to verify everything yourself) and why the scientific community is the best one to trust (highest rate of successful predictions and useful conclusions).

**rstarkov** on "Is there a God" for noobs · 2011-03-25T11:37:06.594Z · score: 2 (2 votes) · LW · GW

I have found that a logical approach like this one fails far more often than it works, simply because people can manage not to trust reason, or to doubt the validity of the (more or less obvious) inferences involved.

Additionally, belief is so emotional that even people who see all the logic, and truly seem to appreciate that believing in God is completely silly, still can't rid themselves of the belief. It's like someone who knows household spiders are not dangerous in any way and yet is more terrified of them than of, say, an elephant.

Perhaps what's needed in addition to this is a separate "How to expunge the idea of god from your brain" guide. It would include practical advice collected from various self-admitted ex-believers. Importantly, I think people who have never believed should avoid contributing to such a guide unless they have reason to believe that they have an extraordinary amount of insight into a believer's mind.

**rstarkov** on The "supernatural" category · 2011-03-25T03:54:18.274Z · score: 3 (3 votes) · LW · GW

Of course I'd argue that the game of life is not an isolated universe if one can toggle cells in it, and if you consider the whole lot then there's nothing supernatural about the process of cells being toggled.

But this is a good example. I asked about what others mean by "supernatural" and this sounds very close indeed!

**rstarkov** on The "supernatural" category · 2011-03-25T00:39:43.400Z · score: 1 (1 votes) · LW · GW

Sounds like a reasonable way of putting it. So a weapon shooting invisible (to the human eye) bullets would be classified as "supernatural" by someone from the stone age, because to them, killing someone requires direct contact with a visible weapon or projectile, that has appreciable travel time. Right?

Although "hard science" would have to be excluded from this, even though it contains lots of stuff that doesn't obey the same laws as most stuff we see.

**rstarkov** on The "supernatural" category · 2011-03-24T23:53:08.035Z · score: 1 (1 votes) · LW · GW

I suppose it's not the most concise post I've ever written. Thanks for the feedback!

**rstarkov** on The "supernatural" category · 2011-03-24T23:18:08.664Z · score: 2 (2 votes) · LW · GW

So from the negative votes I'm guessing that this is not something you guys find appropriate in "discussion"? It would help me as a newcomer if you also suggested what makes it bad :)

**rstarkov** on Probability is in the Mind · 2011-03-24T15:36:12.010Z · score: 5 (5 votes) · LW · GW

Even more important, I think, is the realization that, to decide how much you're willing to bet on a specific outcome, all of the following are essentially the same:

- you have the information needed to calculate it, but haven't done the calculation yet
- you don't have the information, but know how to obtain it
- you don't have the information at all

The bottom line is that you *don't know what the next value will be*, and that's the only thing that matters.

**rstarkov** on Bayesianism in the face of unknowns · 2011-03-24T14:11:37.131Z · score: 1 (1 votes) · LW · GW

Thanks for this, it really helped.

> it doesn't guarantee that we have time, resources, or inclination to actually calculate it

Here's how I understand this point, that finally made things clearer:

Yes, there exists a more accurate answer, and we might even be able to discover it by investing some time. But until we do, the fact that such an answer exists is *completely irrelevant*. It is orthogonal to the problem.

In other words, doing the calculations would give us more information to base our prediction on, but *knowing that we can* do the calculation doesn't change it in the slightest.

Thus, we are justified to treat this as "don't know at all", even though it *seems* that we do know something.

> Probability is in the Mind

Great read, and I think things have finally fit into the right places in my head. Now I just need to learn to guesstimate what the maximum entropy distribution might look like for a given set of facts :)

Well, that and how to actually churn out confidence intervals and expected values for experiments like this one, so that I know how much to bet given a particular set of knowledge.

**rstarkov** on Bayesianism in the face of unknowns · 2011-03-13T20:07:10.558Z · score: 0 (0 votes) · LW · GW

Perhaps - obviously each coin is flipped just once, i.e. Binomial(n=1,p), which is the same thing as Bernoulli(p). I was trying to point out that for any other *n* it would work the same as a normal coin, if someone were to keep flipping it.
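The equivalence is easy to check directly. A minimal sketch, with the pmf helpers written out from the standard formulas:

```python
from math import comb

def binomial_pmf(k, n, p):
    # P(k successes in n trials), success probability p per trial.
    return comb(n, k) * p**k * (1 - p)**(n - k)

def bernoulli_pmf(k, p):
    # P(outcome k) for a single trial: k is 1 (success) or 0 (failure).
    return p if k == 1 else 1 - p

# For n = 1 the two distributions coincide for both outcomes:
for p in (0.1, 0.5, 0.9):
    for k in (0, 1):
        assert binomial_pmf(k, 1, p) == bernoulli_pmf(k, p)
```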

**rstarkov** on Bayesianism in the face of unknowns · 2011-03-13T15:26:03.175Z · score: 0 (0 votes) · LW · GW

And just as it gets *really* interesting, that chapter ends. There is no solution provided for stage 4 :/

**rstarkov** on Bayesianism in the face of unknowns · 2011-03-13T14:42:44.601Z · score: 0 (0 votes) · LW · GW

> Bayesianism tells us that there is a unique answer in the form of a probability for the next coin to be heads

I'm obviously new to this whole thing, but is this a largely undebated, widely accepted view on probabilities? That there are NO situations in which you can't meaningfully state a probability?

For example, let's say we have observed 100 samples of a real-valued random variable. We can use the maximum entropy principle, and thus use the normal distribution (which is the maximum-entropy distribution for a given mean and variance). We then use standard methods to estimate the population mean, and can even provide a probability that it's in a certain interval.

But how valid is this result when we knew nothing of the original distribution? What if it *was* something awkward like the Cauchy distribution? It has no mean; so our interval is meaningless. You can't just say that "well, we're 60% certain it's in this interval, that leaves 40% chance of us being wrong" - because it doesn't; the mean isn't *outside* the interval either! A *complete* answer would allow for a third outcome, that the mean isn't defined, but how exactly do you assign a number to this probability?

With this in mind, do we still believe that it's not wrong (or less wrong? :D) to assume a normal distribution, do our calculations, and decide how much to bet that the mean of the next 100,000 samples is in the range -100..100? (The sample mean of a Cauchy distribution fails to converge no matter how many samples you add.)
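The non-convergence is easy to demonstrate (an illustrative simulation; sample sizes and seed are arbitrary). A handy fact: the mean of n i.i.d. standard Cauchy samples is itself standard Cauchy, so the running mean never settles down, whereas the running mean of a normal sample does.

```python
import math
import random

random.seed(0)

def cauchy_sample():
    # Standard Cauchy via the inverse CDF: tan(pi * (U - 1/2)).
    return math.tan(math.pi * (random.random() - 0.5))

n = 100_000

# The mean of n normal samples concentrates around 0.
normal_mean = sum(random.gauss(0, 1) for _ in range(n)) / n

# Running means of a Cauchy sample, recorded every 20,000 draws.
cauchy_means = []
total = 0.0
for i in range(1, n + 1):
    total += cauchy_sample()
    if i % 20_000 == 0:
        cauchy_means.append(total / i)

print(normal_mean)   # should be near 0
print(cauchy_means)  # shows no sign of settling on any value
```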

**rstarkov** on Bayesianism in the face of unknowns · 2011-03-13T14:34:12.343Z · score: 0 (0 votes) · LW · GW

I read this to say that you can't calculate a value that is guaranteed to break even in the long term, because there isn't enough information to do this. (which I tend to agree with)

**rstarkov** on Bayesianism in the face of unknowns · 2011-03-13T11:53:40.925Z · score: 0 (0 votes) · LW · GW

If I were trying to make a profit then I'd need to know how much to charge for entry. If I could calculate that then yes, I'd offer the bet regardless of how many heads came out of 100 trials.

But this is entirely beside the point; the purpose of this thought experiment is for me to show which parts of bayesianism I don't understand and solicit some feedback on those parts.

In particular, a procedure that I could use to actually pick a break-even price of entry would be very helpful.

**rstarkov** on Bayesianism in the face of unknowns · 2011-03-13T11:48:12.242Z · score: 0 (0 votes) · LW · GW

You take the evidence, and you decide that you pay X. Then we run it lots of times. You pay X, I pick a random coin and flip it. I pay your winnings. You pay X again, I pick again, etc. X is fixed.

**rstarkov** on Bayesianism in the face of unknowns · 2011-03-12T23:10:15.962Z · score: 0 (0 votes) · LW · GW

> Preferably, let other people play the game first to gather the evidence at no cost to myself.

For the record, this is not permitted.

> My take at it is basically this: average over all possible distributions

It's easy to say this but I don't think this works when you start doing the maths to get actual numbers out. Additionally, if you really take ALL possible distributions then you're already in trouble, because some of them are pretty weird - e.g. the Cauchy distribution doesn't have a mean or a variance.

> distribution about which we initially don’t know anything and gradually build up evidence

I'd love to know if there are established formal approaches to this. The only parts of statistics that I'm familiar with assume known distributions and work from there. Anyone?
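One established formal approach of exactly this kind is conjugate Bayesian updating: start from a prior over the unknown parameter and fold in each observation as it arrives. A minimal sketch for an unknown coin bias (the flip sequence is made up):

```python
# Conjugate updating for an unknown coin bias: start with Beta(1, 1)
# (a flat prior) and bump the (alpha, beta) counts as flips come in.
alpha, beta = 1, 1  # flat prior over the bias p

for outcome in [1, 1, 0, 1, 0, 1, 1, 1]:  # 1 = heads, 0 = tails
    if outcome:
        alpha += 1
    else:
        beta += 1

# Posterior mean of the bias (Laplace's rule of succession):
posterior_mean = alpha / (alpha + beta)
print(posterior_mean)  # 6 heads, 2 tails -> 7/10 = 0.7
```

After 6 heads and 2 tails the posterior mean is (1+6)/(1+6+1+2) = 0.7, which doubles as a break-even betting price given the evidence so far.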

**rstarkov** on Bayesianism in the face of unknowns · 2011-03-12T22:36:22.736Z · score: 2 (2 votes) · LW · GW

The properties of the pool are unknown to you, so you have to take into account the possibility that I've tuned them somehow. But you do know that the 100 coins I drew from that pool were drawn fairly and randomly.

**rstarkov** on Bayesianism in the face of unknowns · 2011-03-12T21:17:23.430Z · score: 0 (0 votes) · LW · GW

I have clarified my post to specify that for each flip, I pick a coin from this infinite pool at random. Suppose you also magically know with absolute certainty that these givens are true. Still $10?

**rstarkov** on Is Morality a Valid Preference? · 2011-03-09T18:31:39.502Z · score: 0 (0 votes) · LW · GW

This is a good point, and I've pondered on this for a while.

Following your logic: we can observe that I'm not spending all my waking time caring about A (people dying somewhere for some reason). Therefore we can conclude that the death of those people is comparable to mundane things I choose to do instead - i.e. the mundane things are not infinitely less important than someone's death.

But this only holds if my decision to do the mundane things in preference to saving someone's life is rational.

I'm still wondering whether I do the mundane things by rationally deciding that they are more important than my contribution to saving someone's life could be, or by simply being irrational.

I am leaning towards the latter - which means that someone's death could still be infinitely worse *to me* than something mundane, except that this fact is not accounted for in my decision making because I am not fully rational no matter how hard I try.

**rstarkov** on Newcomb's Problem and Regret of Rationality · 2011-03-09T18:07:29.883Z · score: 1 (1 votes) · LW · GW

The original description of the problem doesn't mention if you know of Omega's strategy for deciding what to place in box B, or their success history in predicting this outcome - which is obviously a very important factor.

If you know these things, then the only rational choice, obviously and by a huge margin, is to pick only box B.

If you don't know anything other than that box B may or may not contain a million dollars, and you have no reason to believe that it's unlikely (as it would be in a lottery), then the only rational decision is to take both. This also seems to be completely obvious and unambiguous.

But since this community has spent a while debating this, I conclude that there's a good chance I have missed something important. What is it?

**rstarkov** on Torture vs. Dust Specks · 2011-02-21T23:30:04.370Z · score: -1 (1 votes) · LW · GW

I don't know. I don't suppose you claim to know at which point the number of dust specks is small enough that they are preferable to 50 years of torture?

(which is why I think that Idea 2 is a better way to reason about this)

**rstarkov** on Is Morality a Valid Preference? · 2011-02-21T21:37:30.254Z · score: 2 (2 votes) · LW · GW

Argh, I have accidentally reported your comment instead of replying. I did wonder why it asks me if I'm sure... Sorry.

It does indeed appear that the only rational approach is for them to be treated as comparable. I was merely trying to suggest a possible underlying basis for people consistently picking dust specks, regardless of the hugeness of the numbers involved.

**rstarkov** on Is Morality a Valid Preference? · 2011-02-21T20:48:27.842Z · score: 4 (4 votes) · LW · GW

I think Torture vs Dust Specks makes a hidden assumption that the two things are comparable. It appears that people don't actually think like that: to them, even an *infinite* number of dust specks is less bad than a single person being tortured or dying. People arbitrarily place some bad things into a category that's infinitely worse than another category.

So, I'd say that you aren't preferring morality; you are simply placing 50 years of torture as *infinitely* worse than a dust speck; no number of people getting dust specks can possibly be worse than 50 years of torture.

**rstarkov** on Torture vs. Dust Specks · 2011-02-21T20:34:15.440Z · score: 0 (2 votes) · LW · GW

Idea 1: dust specks, because on a linear scale (which seems to be always assumed in discussions of utility here) I think 50 years of torture is more than 3^^^3 times worse than a dust speck in one's eye.

Idea 2: dust specks, because most people arbitrarily place bad things into incomparable categories. The death of your loved one is deemed *infinitely* worse than being stuck in an airport for an hour: it is incomparable; any number of 1-hour waits is less bad than a single loved one dying.
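The "incomparable categories" idea has a formal name: lexicographic preferences — compare by category first, and by magnitude only within a category. Python tuples compare exactly this way; the category numbering and the stand-in for 3^^^3 below are of course made up for illustration.

```python
# Modelling "incomparable categories" of badness as lexicographic order:
# compare by category first, by magnitude only within a category.
def badness(category, amount):
    return (category, amount)  # tuples compare lexicographically

DUST_SPECK = 0   # minor nuisances
TORTURE = 1      # the "infinitely worse" category

# A huge stand-in for 3^^^3 (the real number is far too large to compute):
many = 3 ** 7 ** 3

# No number of dust specks ever exceeds a single instance of torture:
assert badness(DUST_SPECK, many) < badness(TORTURE, 1)

# Within a category, more is still worse:
assert badness(DUST_SPECK, 5) < badness(DUST_SPECK, 6)
```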