Posts

Index of some decision theory posts 2017-03-08T22:30:05.000Z · score: 4 (4 votes)
Open problem: thin logical priors 2017-01-11T20:00:08.000Z · score: 5 (5 votes)
Training Garrabrant inductors to predict counterfactuals 2016-10-27T02:41:49.000Z · score: 3 (3 votes)
Desiderata for decision theory 2016-10-27T02:10:48.000Z · score: 2 (2 votes)
Failures of throttling logical information 2016-02-24T22:05:51.000Z · score: 2 (2 votes)
Speculations on information under logical uncertainty 2016-02-24T21:58:57.000Z · score: 1 (1 votes)
Existence of distributions that are expectation-reflective and know it 2015-12-10T07:35:57.000Z · score: 5 (5 votes)
A limit-computable, self-reflective distribution 2015-11-15T21:43:59.000Z · score: 10 (9 votes)
Uniqueness of UDT for transparent universes 2014-11-24T05:57:35.000Z · score: 4 (4 votes)

Comments

Comment by tsvibt on Open problem: thin logical priors · 2017-01-14T08:51:22.000Z · score: 0 (0 votes) · LW · GW

I agree that the epistemic formulation is probably more broadly useful, e.g. for informed oversight. The decision theory problem is additionally compelling to me because of the apparent paradox of having a changing caring measure. I naively think of the caring measure as fixed, but this is apparently impossible because, well, you have to learn logical facts. (This leads to thoughts like "maybe EU maximization is just wrong; you don't maximize an approximation to your actual caring function".)

Comment by tsvibt on Concise Open Problem in Logical Uncertainty · 2016-01-02T02:01:45.000Z · score: 1 (1 votes) · LW · GW

In case anyone shared my confusion:

The while loop where we ensure that eps is small enough so that

bound > bad1() + (next - this) * log((1 - p1) / (1 - p1 - eps))

is technically necessary to ensure that bad1() doesn't surpass bound, but it is immaterial in the limit. Solving

bound = bad1() + (next - this) * log((1 - p1) / (1 - p1 - eps))

gives

eps >= (1/3) (1 - e^{ -[bound - bad1()] / [next - this] })

which, using the log(1+x) ≈ x approximation, is about

(1/3) ([bound - bad1()] / [next - this] ).

Then Scott's comment gives the rest. I was worried about the fact that we seem to be taking the exponential of the error in our approximation, or something. But Scott points out that this is not an issue because we can make [next - this] as big as we want, if necessary, without increasing bad1() at all, by guessing p1 for a very long time until [bound - bad1()] / [next - this] is close enough to zero that the error is too small to matter.
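To make the algebra concrete, here is a minimal numeric check; the values of p1, bound, bad1(), and next - this below are made up for illustration, and only the formulas come from the derivation above:

    import math

    p1 = 2 / 3                # hypothetical; makes (1 - p1) the (1/3) factor above
    bound, bad1 = 1.0, 0.9    # hypothetical running quantities
    gap = 1000                # hypothetical value of next - this

    x = (bound - bad1) / gap
    # exact solution of bound = bad1 + gap * log((1 - p1) / (1 - p1 - eps)):
    eps_exact = (1 - p1) * (1 - math.exp(-x))
    # first-order (log(1+x) ~ x) approximation, good when x is near zero:
    eps_approx = (1 - p1) * x
    print(eps_exact, eps_approx)  # nearly equal once next - this is large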

Comment by tsvibt on Concise Open Problem in Logical Uncertainty · 2015-12-30T09:22:23.000Z · score: 0 (0 votes) · LW · GW

Could you spell out the step

every iteration where mean(E[prev:this]) ≥ 2/5 will cause bound - bad1() to grow exponentially (by a factor of 11/10 = 1 + (1/2)(−1 + 2/(5·p1)))

a little more? I don't follow. (I think I follow the overall structure of the proof, and if I believed this step I would believe the proof.)

We have that eps is about (2/3)(1 - exp([bad1() - bound] / (next - this))), or at least half that, but I don't see how to get a lower bound on the decrease of bad1() (as a fraction of bound - bad1()).

Comment by tsvibt on LessWrong 2.0 · 2015-12-04T12:25:56.876Z · score: 1 (1 votes) · LW · GW

(Upvoted, thanks.)

I think I disagree with the statement that "Getting direct work done." isn't a purpose LW can or should serve. The direct work would be "rationality research"---figuring out general effectiveness strategies. The Sequences are the prime example in the realm of epistemic effectiveness, but there are lots of open questions in productivity, epistemology, motivation, etc.

Comment by tsvibt on A Proposal for Defeating Moloch in the Prison Industrial Complex · 2015-06-03T00:17:19.574Z · score: 7 (7 votes) · LW · GW

This still incentivizes prisons to help along the death of prisoners that they predict are more likely than the prison-wide average to repeat-offend, in the same way average utilitarianism recommends killing everyone but the happiest person (so to speak).

Comment by tsvibt on The value of learning mathematical proof · 2015-06-02T22:30:12.358Z · score: 0 (0 votes) · LW · GW

I see. That could be right. I guess I'm thinking about this (this = what to teach/learn and in what order) from the perspective of assuming I get to dictate the whole curriculum. In which case analysis doesn't look that great, to me.

Comment by tsvibt on The value of learning mathematical proof · 2015-06-02T21:33:34.417Z · score: 0 (0 votes) · LW · GW

Ok that makes sense. I'm still curious about any specific benefits that you think studying analysis has, relative to other similarly deep areas of math, or whether you meant hard math in general.

Comment by tsvibt on The value of learning mathematical proof · 2015-06-02T18:17:49.541Z · score: 1 (1 votes) · LW · GW

(See reply there.)

Comment by tsvibt on The value of learning mathematical proof · 2015-06-02T18:17:22.135Z · score: 2 (2 votes) · LW · GW

Seems like it's precisely because of the complicated technical foundation that real analysis was recommended.

What I'm saying is, that's not a good reason. Even the math with simple foundations has surprising results with complicated proofs that require precise understanding. It's hard enough as it is, and I am claiming that analysis is too much of a filter. It would be better to start with the most conceptually minimal mathematics.

Even great mathematicians ran into trouble playing fast and loose with the real numbers. It took them about two hundred years to finally lay rigorous foundations for calculus.

...implying that it is actually pretty confusing. There are good reasons for wanting to learn analysis because it is applied so widely. But from the specific perspective of trying to learn lessons about math and rigorous argument in general, it seems like you want a subject that is legitimate math but otherwise as simple as possible. To some extent, trying to do real analysis as a first real math class is like trying to teach a physics class in a foreign language. On the one hand, you just want to learn the physics, but at the same time you always have to translate into your native tongue, worrying that you made a subtle mistake in translation. If you want to learn how to prove stuff in general, you don't also want the objects that you're proving stuff about to be overcomplicated to the point that it's a whole chore just to understand what you're talking about. Understanding complicated objects is an important skill, but it is distinct from understanding and inventing proofs.

Comment by tsvibt on The value of learning mathematical proof · 2015-06-02T10:43:36.322Z · score: 4 (4 votes) · LW · GW

Could you say more about why you think real analysis specifically is good for this kind of general skill? I have pretty serious doubts that analysis is the right way to go, and I'd (wildly) guess that there would be significant benefits from teaching/learning discrete mathematics in place of calculus. Combinatorics, probability, algorithms; even logic, topology, and algebra.

To my mind all of these things are better suited for learning the power of proof and the mathematical way of analyzing problems. I'm not totally sure why, but I think a big part of it is that analysis has a pretty complicated technical foundation that already implicitly uses topology and/or logic (to define limits and stuff), even though you can sort of squint and usually kind of get away with using your intuitive notion of the continuum. With, say, combinatorics or algorithms, everything is very close to intuitive concepts like finite collections of physical objects; I think this makes it all the more educational when a surprising result is proven, because there is less room for a beginner to wonder whether the result is an artifact of the funny formalish stuff.

Comment by tsvibt on Open Thread, May 11 - May 17, 2015 · 2015-05-12T02:03:29.552Z · score: 1 (3 votes) · LW · GW

PSA: If you wear glasses, you might want to take a look behind the little nosepads. Some... stuff... can build up there. According to this unverified source, it is oxidized copper from the glasses frame + your sweat, and can be cleaned with an old toothbrush + toothpaste.

Comment by tsvibt on Precisely Bound Demons and their Behavior · 2015-03-07T03:34:19.359Z · score: 4 (4 votes) · LW · GW

There are ten thousand wrong solutions and four good solutions. You don't get much info from being told a particular bad solution. The opposite of a bad solution is a bad solution.

Comment by tsvibt on Open thread, Mar. 2 - Mar. 8, 2015 · 2015-03-06T20:49:33.806Z · score: 0 (0 votes) · LW · GW

Lol yeah ok. I was unsure because Alexa says 9% of search traffic to LW is from "demetrius soupolos" and "traute soupolos", so maybe there was some big news story I didn't know about.

Comment by tsvibt on Open thread, Mar. 2 - Mar. 8, 2015 · 2015-03-06T16:16:11.356Z · score: 0 (0 votes) · LW · GW

http://www.alexa.com/siteinfo/lesswrong.com

(The recent uptick is due to HPMOR, I suppose?)

Comment by tsvibt on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 116 · 2015-03-06T15:02:59.777Z · score: 4 (4 votes) · LW · GW

I'd say your first thought was right.

She noticed half an hour later on, when Harry Potter seemed to sway a bit, and then hunch over, his hands going to cover up his forehead; it looked like he was prodding at his forehead scar. The thought made her slightly worried; everyone knew there was something going on with Harry Potter, and if Potter's scar was hurting him then it was possible that a sealed horror was about to burst out of his forehead and eat everyone. She dismissed that thought, though, and continued to explain Quidditch facts to the historically ignorant at the top of her lungs.

She definitely noticed when Harry Potter stood up, hands still on his forehead, and dropped his hands to reveal that his famous lightning-bolt scar was now blazing red and inflamed. It was bleeding, with the blood dripping down Potter's nose.

Comment by tsvibt on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 116 · 2015-03-06T14:57:49.575Z · score: 0 (0 votes) · LW · GW

Hermione the Hi-Fi Heiress of Hufflepuff and Harry the Humanist

Comment by tsvibt on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 114 + chapter 115 · 2015-03-04T07:26:53.938Z · score: 12 (12 votes) · LW · GW

As a simple matter of fact, Voldemort is stronger than Harry in basically every way, other than Harry's (incomplete) training in rationality. If Voldemort were a good enough planner, there would be no way for him to lose; he is smarter, more powerful, and has more ancient lore than any other wizard. If Voldemort were also rational, and didn't fall prey to overconfidence bias / the planning fallacy...

Well, you can be as rational as you like, but if you are human and your opponent is a superintelligent god with a horde of bloodthirsty nanobots, the invincible Elder Lightsaber, and the One Thing to Rule Them All, then the story is going to read less like HPMOR, and more like:

"...HE IS THE END OF THE WORLD." Quirinus Quirrell calmly activated the toe ring he had prepared months ago, causing the capsule of sulfuric acid embedded in the top of Harry's skull (placed there earlier by an Imperiused Madam Pomfrey, in case of emergency) to break open and quickly dissolve the other Tom Riddle. Quirrell shook his head in disappointment as he felt the sense of doom diminish and then disappear, but it had to be done. He turned to walk towards the third floor corridor. The End.

Comment by tsvibt on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-03-01T05:34:38.221Z · score: 3 (3 votes) · LW · GW

A brief and terrible magic lashed out from the Defense Professor's wand, scouring the hole in the wall, scarring the huge chunk of metal that lay in the room's midst; as Harry had requested, saying that the method he'd used might identify him.

Chapter 58

I'm kind of worried about this... all the real attempted solutions I've seen use partial transfiguration. But if we take "the antagonist is smart" seriously, and given the precedent for V remembering and connecting obscure things (e.g. the Resurrection Stone), we should assume V has protections against that tactic. It is not a power the Dark Lord knows not. And come to think of it, V also saw Harry cutting trees down with partial transfiguration.

Comment by tsvibt on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-03-01T01:39:25.609Z · score: 2 (2 votes) · LW · GW

Didn't V see at least the results of a Partial Transfiguration in Azkaban (used to cut through the wall)? Doesn't seem like something V would just ignore or forget.

Comment by tsvibt on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-02-28T23:01:38.100Z · score: 4 (4 votes) · LW · GW

Since they are touching his skin, does he need his wand to cancel the Transfiguration?

Comment by tsvibt on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 112 · 2015-02-27T12:42:48.044Z · score: 1 (1 votes) · LW · GW

Right, this is a stronger interpretation.

Comment by tsvibt on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 112 · 2015-02-27T04:09:14.877Z · score: 0 (0 votes) · LW · GW

This is persuasive, but... why the heck would Voldemort go to the trouble of breaking into Azkaban instead of grabbing Snape or something?

Comment by tsvibt on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 112 · 2015-02-26T00:42:58.224Z · score: 0 (0 votes) · LW · GW

Oh right, good point. Likewise Nott.

Comment by tsvibt on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 112 · 2015-02-26T00:24:13.577Z · score: 0 (0 votes) · LW · GW

So, like, is Snape in that crowd of Death Eaters, or what?

Comment by tsvibt on Request: Sequences book reading group · 2015-02-25T01:16:36.334Z · score: 1 (1 votes) · LW · GW

FYI, each sequence is (very roughly) 20,000 words.

Comment by tsvibt on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapters 105-107 · 2015-02-17T12:41:12.231Z · score: 3 (3 votes) · LW · GW

(Presumably Parseltongue only prevents willful lies.)

Quirrell also claims (not in Parseltongue):

Occlumency cannot fool the Parselmouth curse as it can fool Veritaserum, and you may put that to the trial also.

It seems like what you can say in Parseltongue should only depend on the actual truth and on your mental state. What happens if I Confundus / Memory Charm someone into believing X? Can they say X in Parseltongue? If they can say it just because they believe it, then Parseltongue is not so hard to bypass; I just Confundus myself (or get someone to do it for me), tell the lie, and then cancel the Confundus. If they can't say something because it is actually false, then Parseltongue is an instant win condition. You just use binary search to figure out the truth about anything.
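As a toy sketch of the binary-search point (the can_assert function below is a hypothetical stand-in for "can truthfully assert this in Parseltongue"; the rest is standard binary search):

    def find_secret(can_assert, lo=0, hi=2**20):
        # Each truthful yes/no assertion halves the interval containing the secret.
        while lo < hi:
            mid = (lo + hi) // 2
            if can_assert(f"the secret is at most {mid}"):
                hi = mid
            else:
                lo = mid + 1
        return lo

    secret = 1337  # stands in for any fact encoded as a number
    oracle = lambda claim: secret <= int(claim.rsplit(" ", 1)[1])
    print(find_secret(oracle))  # 1337, pinned down in about 20 assertions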

Or maybe Parseltongue checks the speaker for mind magic, since this is the same principle as the Dark Mark, and Salazar is not too many levels below Voldemort. Is this evidence against the "Harry was Confounded to not realize Quirrell was Voldemort" theory? I don't remember if he talked about that in Parseltongue...

Comment by tsvibt on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapters 105-107 · 2015-02-17T02:27:46.692Z · score: 10 (10 votes) · LW · GW

[EDIT: the Dark Lords of the Matrix have fixed this.]

There's a glitch in the Matrix:

A blank-eyed Professor Sprout had now risen from the ground, had picked up Harry's wand and was wrapping it in a shimmering cloth.

Then Harry does some bargaining, and then...

After that, Professor Sprout picked up Harry's wand, and wrapped it in shimmering cloth; then she placed it on the floor, and pointed her own wand at Harry.

Comment by tsvibt on An alarming fact about the anti-aging community · 2015-02-16T20:11:17.571Z · score: 5 (5 votes) · LW · GW

Seconded. Specifically: citations for the implied claim (1) that it is not exorbitantly expensive to perform the organ regeneration, or to pay for an insurance policy that covers it; and (2) data on how often death is caused by something that can be fixed with organ transplants. Also relevant would be the probability that you would get a successful organ transplant without the cell preservation.

Comment by tsvibt on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 104 · 2015-02-16T10:54:26.477Z · score: 11 (11 votes) · LW · GW

A bunch of unspecified Muggle items he got the Weasleys to obtain for him.

Comment by tsvibt on Stupid Questions February 2015 · 2015-02-08T07:39:04.429Z · score: 0 (0 votes) · LW · GW

π maximally simplifies finding the circumference of a circle from its diameter

More importantly, π is the area of the unit circle. If you're talking about angles you want τ (tau), if you're talking about area you want π. And you always want pie, ha ha.
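For concreteness, the standard relations behind this (not spelled out in the original comment):

\tau = 2\pi, \qquad C = \tau r = \pi d, \qquad A = \pi r^2 = \tfrac{1}{2}\tau r^2

so τ makes angles and circumference clean, while π makes area clean.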

Comment by tsvibt on Signalling with T-Shirt slogans · 2014-12-21T15:38:36.111Z · score: 2 (2 votes) · LW · GW

Here's a shirt I made, stating that PA is consistent in mysterious looking symbols. Not directly rationality related, but could be a conversation starter. http://teespring.com/paconsistent
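(One standard way to write the statement, assuming the usual arithmetized proof predicate; the shirt's actual symbols may differ:

\mathrm{Con}(\mathsf{PA}) \;\equiv\; \neg\exists p\; \mathrm{Proof}_{\mathsf{PA}}(p, \ulcorner 0 = 1 \urcorner)

i.e. no number p encodes a PA-proof of a contradiction.)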

Comment by tsvibt on Open thread, Oct. 27 - Nov. 2, 2014 · 2014-10-27T21:46:38.095Z · score: 1 (1 votes) · LW · GW

Personally, I'd say the nonverbal thing is the proper content of math---drawing (possibly mental) pictures to represent objects and their interactions. If I get stuck, I try doing simpler examples. If I'm still stuck, then I start writing things down verbally, mainly as a way to track down where I'm confused or where exactly I need to figure something out.

Comment by tsvibt on Power and difficulty · 2014-10-26T02:02:08.626Z · score: 0 (0 votes) · LW · GW

Oh, I slightly misread some of the previous paragraphs. I was thinking specifically in terms of skills that you develop by doing something hard, rather than object-level products. What you said now makes perfect sense; and in either case writing a third game directly in machine code would be a waste of time, despite still being pretty hard.

Comment by tsvibt on Power and difficulty · 2014-10-22T06:23:11.978Z · score: 1 (1 votes) · LW · GW

Upvoted.

Similarly, writing a game in machine code or as a set of instructions for a Turing machine is certainly difficult, but also pretty dumb, and has no significant payoff beyond writing the game in a higher-level language.

IAWYC, but this example doesn't seem true. The additional payoff would be that you are forced to invent a memory system, bootstrapping compilers, linear algebra algorithms, etc., depending on how complicated the game is.

Comment by tsvibt on Open thread, Oct. 6 - Oct. 12, 2014 · 2014-10-06T20:32:46.566Z · score: 1 (1 votes) · LW · GW

It seems like FAI requires deeper math than UFAI, for some appropriate value of deeper. But this "trial and error" still requires some math. You could imagine a fictitious Earth where suddenly it becomes easy to learn enough to start messing around with neural nets and decision trees and metaheuristics (or something). In that Earth, AI risk is increased by improving math education in that particular weird way.

I am trying to ask whether, in our Earth, there is a clear direction AI risk goes given more plausible kinds of improvements in math education. Are you basically saying that the math for UFAI is easy enough already that not too many new cognitive resources, freed up by those improvements, would go towards UFAI? That doesn't seem true...

Comment by tsvibt on Open thread, Oct. 6 - Oct. 12, 2014 · 2014-10-06T16:19:17.894Z · score: 1 (1 votes) · LW · GW

Question: Say someone dramatically increased the rate at which humans can learn mathematics (over, say, the Internet). Assume also that an intelligence explosion is likely to occur in the next century, it will be a singleton, and the way it is constructed determines the future for earth-originating life. Does the increase in math learning ability make that intelligence explosion more or less likely to be friendly?

Responses I've heard to questions of the form, "Does solving problem X help or hinder safe AGI vs. unsafe AGI?":

  1. Improvements in rationality help safe AI, because sufficiently rational humans usually become unlikely to create unsafe AI. Most other improvements are a wash, because they help safe AI and unsafe AI equally.

  2. Almost any improvement in productivity will slightly help safe AI, because more productive humans have more unconstrained time (i.e. time not spent paying the bills). Humans tend to do more good things and move towards rationality in their less constrained time, so increasing that time is a net win.

Not sure how I feel about these responses. But neither of them directly answers the question about math.

One answer would be that improving higher math education would be a net win because safe AI will definitely require hard math, whereas improving all math education would be a net loss because, like Moore's Law, it would increase cognitive resources across the board, pulling the timeline closer. Note that if we ignore network effects (researchers talking to researchers, convincing them not to work on unsafe AI), the question becomes: Is the effect of improving X more like shifting the timeline forward by Y years, as in increasing computing power, or is it more like stretching the timeline by some linear factor, as in increasing human productivity? Thoughts?
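(A toy contrast between the two effects, with made-up numbers, reading "stretching" as compressing the remaining time by a constant factor:

    T = 40.0         # hypothetical baseline years until an intelligence explosion
    print(T - 5.0)   # "shift" improvements, like computing power: 35.0 years left
    print(T / 1.25)  # "stretch" improvements, like productivity: 32.0 years left

The shift costs the same number of years whatever T is; the stretch costs more years the longer the baseline timeline.)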

Comment by tsvibt on Open thread, Sept. 29 - Oct.5, 2014 · 2014-10-02T03:12:57.975Z · score: 2 (2 votes) · LW · GW

The literal reading would be A-I-ksi or A-I-zai, said aye-eye-ksee or aye-eye-zai, because AI stands for Artificial Intelligence and XI is the Greek letter. But yeah, I just avoid mentioning it by name :)

Comment by tsvibt on [Link] Forty Days · 2014-09-29T15:11:06.902Z · score: 19 (21 votes) · LW · GW

Interesting post.

But:

HIV-infected people must provide the names of all sexual partners for the past sic months.

You missed a golden opportunity:

...all sexual partners for the past sic [sic] months.

Comment by tsvibt on 2014 iterated prisoner's dilemma tournament results · 2014-09-25T22:43:13.536Z · score: 6 (6 votes) · LW · GW

"Do not attempt long chains of reasoning or complicated plans."

Comment by tsvibt on Open thread, Sept. 1-7, 2014 · 2014-09-06T09:09:09.748Z · score: 2 (2 votes) · LW · GW

All else being equal, if you have the choice, would you pick (a) your son/daughter immediately ceases to exist, or (b) your son/daughter experiences a very long, joyous life, filled with love and challenge and learning, and yes, some dust specks and suffering, but overall something they would describe as "an awesome time"? (The fact that you might be upset if they ceased to exist is not the point here, so let it be specified that (a) is actually everyone disappearing, which includes your child as a special case, and likewise (b) for everyone, again including your child as a special case.)

Comment by tsvibt on Raven paradox settled to my satisfaction · 2014-08-06T07:31:31.290Z · score: 0 (0 votes) · LW · GW

Right, I should have written, "I agree. Also, ...". I just wanted to find the source of the intuition that seeing non-black non-ravens is evidence for "non-black -> non-raven".

Comment by tsvibt on Raven paradox settled to my satisfaction · 2014-08-06T06:00:26.072Z · score: 3 (2 votes) · LW · GW

I think it's just wrong that "H1': If it is not black, it is not a raven" predicts that you will observe non-black non-raven objects, under the assumption/prior that the color distributions within each type of object (chairs, ravens, bananas, etc.) are independent of each other.

The intuition comes from implicitly visualizing the observation of an unknown non-black object O; then, indeed, H1 predicts that O will turn out to not be a raven. The point is, even observing that O is non-black would decrease your credence in H1, which would then increase again when you saw that O was not a raven. Since H1 is only about ravens, by the independence assumption, H1 says nothing about non-ravens and whether you will see non-black ones. (I.e., its likelihood ratio for "observe a non-black non-raven object" is 1.)
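A toy numerical version of the two-stage update; only the independence assumption and the two hypotheses come from the above, the numbers being made up:

    P_raven = 0.1
    P_black_nonraven = 0.5      # fixed across hypotheses by the independence assumption
    P_white_raven_not_H1 = 0.5  # hypothetical: under not-H1, half of ravens are white

    def p_nonblack(h1):
        # probability that a randomly sampled object is non-black
        from_ravens = 0.0 if h1 else P_raven * P_white_raven_not_H1
        return from_ravens + (1 - P_raven) * (1 - P_black_nonraven)

    # Stage 1: "O is non-black" carries a likelihood ratio below 1 for H1:
    print(p_nonblack(True) / p_nonblack(False))  # 0.45 / 0.50 = 0.9
    # Stage 2: "O is a non-black non-raven" is equally likely either way,
    # so the combined likelihood ratio for H1 is exactly 1:
    p_joint = (1 - P_raven) * (1 - P_black_nonraven)
    print(p_joint / p_joint)                     # 1.0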

Comment by tsvibt on Three questions about source code uncertainty · 2014-07-24T17:13:08.817Z · score: 1 (1 votes) · LW · GW

1) Yes, presumably; your brain is a vast store of (evolved) (wetware) (non-serial) (ad-hoc) (etc.) algorithms that has so far been difficult for neuroscientists to document.

2) Just plain empirical? There's nothing stopping you from learning your own source code, in principle; it's just that we don't AFAIK have scanners that can view "many" nearby neurons, in real time, individually (as opposed to an fMRI).

3) Well, that's much more difficult. Not sure why Mark_Friedenbach's comment was downvoted, though, except maybe snarkiness; heuristics and biases is a small step towards understanding some of the algorithms you are (and correcting for their systematic errors in a principled way).

Comment by tsvibt on Rationality Quotes July 2014 · 2014-07-23T23:10:04.216Z · score: 3 (19 votes) · LW · GW

There's a saying that goes "People who live in glass houses shouldn't throw stones." Okay. How about "Nobody should throw stones." That's crappy behavior. My policy is: "No stone throwing regardless of housing situation." Don't do it. There is one exception though. If you're trapped in a glass house, and you have a stone, then throw it. What are you, an idiot? So maybe it's "Only people in glass houses should throw stones, provided they are trapped in the house with a stone." It's a little longer, but yeah.

---Demetri Martin, Person (2007)

Comment by tsvibt on Some alternatives to “Friendly AI” · 2014-07-07T01:48:23.304Z · score: 0 (0 votes) · LW · GW

Stable AI

Comment by tsvibt on Rationalist Sport · 2014-06-25T04:25:05.282Z · score: 1 (1 votes) · LW · GW

Rock-climbing:

  • Individualistic
  • Meditative
  • Works many different muscles

Comment by tsvibt on The Power of Noise · 2014-06-17T17:14:45.261Z · score: 3 (3 votes) · LW · GW

Many important problems in graph theory, Ramsey theory, etc. were solved by considering random combinatorial objects (this was one of the great triumphs of Paul Erdos) and thinking in purely deterministic terms seems very unlikely to have solved these problems.

From a Bayesian perspective, a probability is a cognitive object representing the known evidence about a proposition, flattened into a number. It wouldn't make sense to draw conclusions about e.g. the existence of certain graphs, just because we in particular are uncertain about the structure of some graph.

The "probabilistic method", IMHO, is properly viewed as the "measure-theoretic method", which is what mathematicians usually mean by "probabilistic" anyway. That is, constructions involving random objects can usually (always?) be thought of as putting a measure on the space of all objects, and then arguing about sets of measure 0 and 1 and etc. (I would be interested in seeing examples where this transformation is (a) not relatively straightforward or (b) impossible.) Although the math is the same up to a point, these are different conceptual tools. From Jaynes, Probability Theory:

For example our system of probability could hardly, in style, philosophy, and purpose, be more different from that of Kolmogorov. What we consider to be fully half of probability theory as it is needed in current applications (the principles for assigning probabilities by logical analysis of incomplete information) is not present at all in the Kolmogorov system. Yet when all is said and done we find ourselves, to our own surprise, in agreement with Kolmogorov and in disagreement with his critics, on nearly all technical issues.

Whether thinking in terms of randomness is a useful conceptual tool is a different question; personally, I try to separate the intuitions into Bayesian (for cognition) and measure theory (for everything else, e.g. randomized algorithms, quantum mechanics, etc.). It would be nice if these were one and the same, i.e. if Bayesian probability were just measure-theoretic probability over sets of hypotheses, but I don't know of a good choice for the hypothesis space. The Cox theorems work from basic desiderata of a probabilistic calculus, independent of any measure theory; that is the basis of Bayesian probability theory (see Jaynes, Chapter 2).
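A standard example of this translation, for concreteness (not from the post): the Erdős lower bound for Ramsey numbers. Put the uniform measure on 2-colorings of the edges of K_n; the set of colorings containing a monochromatic K_k has measure at most

\binom{n}{k} \, 2^{1 - \binom{k}{2}}

and whenever this is less than 1, its complement has positive measure and hence is nonempty: some coloring has no monochromatic K_k, i.e. R(k,k) > n. No cognitive uncertainty about any particular graph is involved.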

Comment by tsvibt on Open Thread, May 19 - 25, 2014 · 2014-05-20T15:16:42.665Z · score: 1 (3 votes) · LW · GW

"[...]may be the case[...]"

Sometimes this phrase is harmless, but sometimes it is part of an important enumeration of possible outcomes/counterarguments/whatever. If "the case" does not come with either a solid plan/argument or an explanation of why it is unlikely or unimportant, then it is often there to make the author and/or the audience feel like all the bases have been covered. E.g.,

We should implement plan X. It may be the case that [important weak point of X], but [unrelated benefit of X].

Comment by tsvibt on Open Thread March 31 - April 7 2014 · 2014-04-02T05:19:19.662Z · score: 2 (2 votes) · LW · GW

Proof that there is no sequence of algorithms A1, A2, ..., assigned to each prisoner, giving a winning strategy (assuming a computable warden, given indices for the Ak):

Gur jneqra fvzhyngrf N-bar ba mreb mreb mreb... hagvy vg bhgchgf bar be mreb nsgre ernqvat x ovgf bs gur vachg. Gur jneqra gura cynprf gur bgure pbybe ung ba cevfbare 1, jub jvyy sbyybj N-bar naq thrff vapbeerpgyl; gur jneqra rafherf guvf ol cynpvat mreb ba cevfbaref gjb guebhtu x. Gur jneqra ercrngf guvf jvgu N-x+1 naq cevfbare x+1, fb gung x+1 jvyy thrff vapbeerpgyl. Naq fb ba. Fvapr rnpu Nx unygf nsgre ernqvat svavgryl znal ovgf sebz vgf benpyr, gur jneqra pna sbepr na vapbeerpg nafjre jvgu bayl svavgryl znal ovgf. Guvf jnl, gur jneqra pna sbepr vasvavgryl znal vapbeerpg nafjref. (Guvf eryngvivmrf gb nal benpyrf lbh pner gb tvir gb gur cevfbaref, nf ybat nf gur jneqra unf npprff gb gur pbhagnoyr wbva bs gubfr benpyrf.)

Comment by tsvibt on Open Thread March 31 - April 7 2014 · 2014-04-01T18:43:44.796Z · score: 2 (2 votes) · LW · GW

The problem is correct as stated, and the solutions above by RichardKennaway and Oscar_Cunningham are correct. I think you may have missed that the prisoners are all distinguishable, i.e. they are numbered 1, 2, 3, .... Or you are confused about the win condition; we don't have to guarantee that any particular prisoner guesses correctly, just that only finitely many guess incorrectly.

Sub-puzzle: prove definitively that if the prisoners are not distinguishable, then there is no winning strategy.