LW's front page freezes, hangs and bugs on Chrome 2011-07-19T19:38:44.721Z
LW's image problem: "Rationality" is suspicious 2011-07-19T18:16:07.999Z
Extremely Counterfactual Mugging or: the gist of Transparent Newcomb 2011-02-09T15:20:54.505Z
Punishing future crimes 2011-01-28T21:00:27.198Z
Omega can be replaced by amnesia 2011-01-26T12:31:04.595Z
Pascal's Gift 2010-12-25T19:42:51.483Z
Should LW have a public censorship policy? 2010-12-11T22:45:15.282Z
Does TDT pay in Counterfactual Mugging? 2010-11-29T21:31:36.631Z
Sleeping Beauty as a decision problem (solved) 2010-10-10T03:15:08.755Z


Comment by Bongo on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-23T04:30:16.780Z · LW · GW

Harry didn't hear Hermione's testimony. Therefore, he can go back in time and change it to anything that would produce the audience reaction he saw, without causing paradox.

Comment by Bongo on A singularity scenario · 2012-03-17T18:16:33.548Z · LW · GW

I almost downvoted this because when I clicked on it from my RSS reader, it appeared to have been posted on main LW instead of discussion (known bug). This might be the reason for a lot of mysterious downvoting, actually.

Comment by Bongo on "Nice Guys Finish First" - Youtube Video of selected reading (by Dawkins) from The Selfish Gene · 2012-03-17T02:28:47.683Z · LW · GW

(Bug report: I was sent to this post via this link, and I see MAIN bolded above the title instead of DISCUSSION. The URL is misleading too; shouldn't URLs of discussion posts contain "/r/discussion/" instead of "/lw"?)

(EDIT: Grognor just told me that "every discussion post has a main-style URL that bolds MAIN")

Comment by Bongo on Cult impressions of Less Wrong/Singularity Institute · 2012-03-15T22:00:32.976Z · LW · GW

fraction of revenue that ultimately goes to paying staff wages

About a third in 2009, the last year for which we have handy data.

Comment by Bongo on Harry Potter and the Methods of Rationality discussion thread, part 10 · 2012-03-15T14:15:18.863Z · LW · GW

Snape says this in both MoR and the original book:

"I can teach you how to bottle fame, brew glory, even stopper death"

Isn't this silly? Of course you can stopper death, because duh, poisons exist.

It might be just a slip-up in the original book, but I'm hoping it will somehow make sense in MoR. My first thought was that maybe a magical death potion couldn't be stopped using magical healing, unlike non-magical poisons.

I asked this on IRC and got some interesting ideas. feep thought it might mean that you can make a Potion of Dementor, which would fit, since dementors are avatars of death in MoR, and stoppering death would be genuinely impressive if it meant that. Orionstein suggested it might be a potion made from, e.g., a bullet that has killed someone, which, given what we know of how potions work from chapter 78, might also result in a potion with deathy effects above and beyond just those of a poison.

Comment by Bongo on The Singularity Institute's Arrogance Problem · 2012-02-22T20:15:43.345Z · LW · GW

This usually stops me from using myself as examples, sometimes with the result that the post stays unwritten or unpublished.

You could just tell the story with "me" replaced by "my friend" or "someone I know" or "Bob". I'd hate to miss a W_D post because of a trivial thing like this.

Comment by Bongo on Welcome to Less Wrong! · 2011-12-27T20:35:10.751Z · LW · GW

I ... was shocked at how downright anti-informative the field is


shocked at how incredibly useless statistics is


The opposite happened with the parapsychology literature


Comment by Bongo on Welcome to Less Wrong! · 2011-12-27T20:20:25.955Z · LW · GW

algorithmic probability ... does not say that naturalistic mechanistic universes are a priori more probable!


Comment by Bongo on Welcome to Less Wrong! · 2011-12-27T20:19:45.130Z · LW · GW

confirmation bias ... doesn't actually exist.


Comment by Bongo on Welcome to Less Wrong! · 2011-12-19T18:57:48.169Z · LW · GW

I wonder how this comment got 7 upvotes in 9 minutes.

EDIT: Probably the same way this comment got 7 upvotes in 6 minutes.

Comment by Bongo on More "Personal" Introductions · 2011-12-07T08:26:11.450Z · LW · GW

This could be an option.

Comment by Bongo on Open Thread: December 2011 · 2011-12-04T18:06:24.475Z · LW · GW

(An increasing probability distribution over the natural numbers is impossible. The sequence (P(1), P(2), ...) would have to 1) be increasing, 2) contain a nonzero element, and 3) sum to 1, which is impossible: once some term equals ε > 0, every later term is at least ε, so the sum diverges.)
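The impossibility argument can be checked numerically; the specific values below are made up for illustration.

```python
# Lower bound on the partial sums of a nondecreasing sequence P with P(n0) = eps:
# every term from n0 onward is at least eps, so
# sum_{k=1}^{n} P(k) >= (n - n0 + 1) * eps, which grows without bound.

def partial_sum_lower_bound(eps: float, n0: int, n: int) -> float:
    """Lower bound on the sum of the first n terms, given P(n0) = eps and P nondecreasing."""
    return max(0, n - n0 + 1) * eps

# Made-up example: eps = 1e-6 at position n0 = 10. By n = 2,000,000 the sum
# already exceeds 1, so the sequence cannot be a probability distribution.
assert partial_sum_lower_bound(1e-6, 10, 2_000_000) > 1
```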

Comment by Bongo on Tidbit: “Semantic over-achievers” · 2011-12-03T08:03:35.828Z · LW · GW

There's a related problem; Humans have a tendency to once they have terms for something take for granted that something that looks at a glance to make rough syntactic sense that it actually has semantics behind it.

This sentence is so convoluted that at first I thought it was some kind of meta joke.

Comment by Bongo on Facing the Intelligence Explosion discussion page · 2011-11-29T23:30:28.511Z · LW · GW

It's also another far-mode picture.

Comment by Bongo on Poll results: LW probably doesn't cause akrasia · 2011-11-22T01:59:37.181Z · LW · GW

73 tabs, 4 windows.

Comment by Bongo on Existential Risk · 2011-11-16T12:45:46.355Z · LW · GW

Also, I'd say both of those pictures seem to have the effect of inducing far mode.

Comment by Bongo on On maximising expected value · 2011-10-26T11:38:16.932Z · LW · GW

Given any problem, one should look at it, and pick the course that maximising one's expectation. ... what if my utility is non-linear

You're confusing expected outcome and expected utility. Nobody thinks you should maximize the utility of the expected outcome; rather you should maximize the expected utility of the outcome.

Lets now take another example: I am on Deal or No Deal, and there are three boxes left: $100000, $25000 and $.01. The banker has just given me a deal of $20000 (no doubt to much audience booing). Should I take that? Expected gains maximisation says certainly not!

Yes, and expected gains maximization, which nobody advocates, is stupid, unlike expected utility maximization, which will take into account the fact that your utility function is probably not linear in money.
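To make the distinction concrete, here is a sketch using the Deal or No Deal numbers above. The log utility function and the baseline wealth of $1000 are assumptions for illustration, not part of the original example.

```python
import math

# Remaining boxes, each equally likely, and the banker's offer.
boxes = [100_000, 25_000, 0.01]
offer = 20_000
w0 = 1_000   # assumed existing wealth (made-up number)

expected_payoff = sum(boxes) / len(boxes)                      # about $41,667
eu_gamble = sum(math.log(w0 + x) for x in boxes) / len(boxes)  # expected log-utility of refusing
eu_deal = math.log(w0 + offer)                                 # log-utility of taking the deal

# Expected-payoff maximization rejects the deal...
assert expected_payoff > offer
# ...but expected log-utility maximization takes it.
assert eu_deal > eu_gamble
```

With a larger baseline wealth the gamble can win again; the point is only that the two criteria can disagree.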

Comment by Bongo on Footage of the Michael Fox lecture · 2011-10-24T06:23:37.227Z · LW · GW

Is there a video of the full lecture?

Comment by Bongo on Students asked to defend AGI danger update in favor of AGI riskiness · 2011-10-19T22:34:58.752Z · LW · GW

it seems that an isomorphic argument 'proves' that computer programs will crash - since "almost any" computer program crashes.

More obviously, an isomorphic argument 'proves' that books will be gibberish - since "almost any" string of characters is gibberish. An additional argument is required: that non-gibberish books are very difficult to write, and that naively attempting to write one will almost certainly fail on the first try. The analogous argument exists for AGI, of course, but is not given there.

Comment by Bongo on [link] SMBC on utilitarianism and vegatarianism. · 2011-10-16T21:35:33.000Z · LW · GW

It was probably that, but note that that page is not concerned with minimizing killing, but minimizing the suffering-adjusted days of life that went into your food. (Which I think is a good idea; I've used that page's stats to choose my animal products for a year now.)

Comment by Bongo on Rationality Quotes October 2011 · 2011-10-07T09:14:38.499Z · LW · GW

By doing this you condition them to accept the radical form of dominance where they have the authority to tell you what you are morally entitled to believe.

*where you have the authority to tell them (?)

Comment by Bongo on Pascal's wager re-examined · 2011-10-07T09:11:00.594Z · LW · GW

My impression is that the level went up and then down:

  • OB-era comment threads were bad.
  • During the first year of LW the posts were good.
  • Nowadays the posts are bad again.

Comment by Bongo on 'Newcomblike' Video Game: Frozen Synapse · 2011-10-03T23:09:05.062Z · LW · GW

That play on google video

Comment by Bongo on LessWrong gaming community · 2011-09-26T18:18:54.092Z · LW · GW

LW Minecraft server anyone?

Comment by Bongo on Rationality tip: Predict your comment karma · 2011-09-16T01:48:13.161Z · LW · GW

If you really can predict your karma, you should post encrypted predictions* offsite at the same time as you make your post, or use some similar scheme so your predictions are verifiable.

Seems obviously worth the bragging rights.

* A prediction is made up of a post id, a time, and a karma score, and means that the post will have that karma score at that time.
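One concrete way to make the predictions verifiable is a hash-based commit-reveal scheme, a sketch of the "post encrypted predictions offsite" idea above. The prediction string format is made up, loosely following the footnote.

```python
import hashlib
import secrets

# Commit now: publish the digest somewhere timestamped; keep the salt private.
def commit(prediction: str) -> tuple[str, str]:
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + prediction).encode()).hexdigest()
    return digest, salt

# Reveal later: publish the prediction and salt so anyone can check the digest.
def verify(digest: str, salt: str, prediction: str) -> bool:
    return hashlib.sha256((salt + prediction).encode()).hexdigest() == digest

digest, salt = commit("post=abc123 time=2011-09-17T00:00Z karma=12")
assert verify(digest, salt, "post=abc123 time=2011-09-17T00:00Z karma=12")
assert not verify(digest, salt, "post=abc123 time=2011-09-17T00:00Z karma=99")
```

The salt prevents anyone from brute-forcing the small space of plausible karma scores before the reveal.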

Comment by Bongo on Are Deontological Moral Judgments Rationalizations? · 2011-08-18T21:17:57.926Z · LW · GW

You only have two choices. You can be the kind of person who kills the fat man in order to save four other lives and kills the fat man in order to get a million dollars for yourself. Or you can be the kind of person who refuses to kill the fat man in both situations. Because of human hardware, those are your only choices.

This seems obviously false.

Comment by Bongo on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) · 2011-08-18T18:48:44.685Z · LW · GW

Thus, when aiming to maximize expected positive impact, it is not advisable to make giving decisions based fully on explicit formulas.

I love that you don't seem to argue against maximizing EV, but rather argue that a certain method, EEV, is a bad way to maximize EV. If this had been stated at the beginning of the article, I would have been much less skeptical initially.

Comment by Bongo on Are Deontological Moral Judgments Rationalizations? · 2011-08-17T08:30:49.760Z · LW · GW

So I guess the takeaway is that if you care more about your status as a predictable, cooperative, and non-threatening person than about four innocent lives, don't push the fat man.

Comment by Bongo on Take heed, for it is a trap · 2011-08-14T13:11:42.621Z · LW · GW

I don't think it's that bad. Anything at an inferential distance sounds ridiculous if you just matter-of-factly assert it, but that just means that if you want to tell someone about something at an inferential distance, you shouldn't just matter-of-factly assert it. The framing probably matters at least as much as the content.

Comment by Bongo on Take heed, for it is a trap · 2011-08-14T12:14:28.351Z · LW · GW

science is wrong

No. Something like "Bayesian reasoning is better than science" would work.

Every fraction of a second you split into thousands of copies of yourself.

Not "thousands". "Astronomically many" would work.

Computers will soon become so fast that AI researchers will be able to create an artificial intelligence that's smarter than any human

That's the accelerating change, not the intelligence explosion school of singularity. Only the latter is popular around here.

Also, we sometimes prefer torture to dust-specs.

Add "for sufficiently many dust-specks".

I also agree with lessdazed's first three criticisms.


Other than these, it's not a half-bad summary!

Comment by Bongo on Counting upvotes/downvotes · 2011-08-07T22:17:26.689Z · LW · GW

A little UI idea to avoid number clutter: represent the controversy score by having the green oval be darker (or lighter) green the more controversial the post is.
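A sketch of how that might look in code; the base shade and the linear interpolation are made-up choices, not anything LW actually uses.

```python
# Map a controversy score in [0, 1] to a shade of green, darker for more
# controversial posts (the hypothetical UI idea above).

def controversy_green(score: float) -> str:
    """Hex color: score=0 gives light green (#00e000), score=1 dark green (#004000)."""
    score = min(max(score, 0.0), 1.0)        # clamp to [0, 1]
    g = int(0xE0 - score * (0xE0 - 0x40))    # interpolate the green channel
    return f"#00{g:02x}00"

assert controversy_green(0.0) == "#00e000"
assert controversy_green(1.0) == "#004000"
```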

Comment by Bongo on My Expected Value Approach to Newcomb's Problem · 2011-08-05T13:07:12.228Z · LW · GW

Extremely counterfactual mugging is the simplest such variation IMO. Though it has the same structure as Parfit's Hitchhiker, it's better because issues of trust and keeping promises don't come into it. Here it is:

Omega will either award you $1000 or ask you to pay him $100. He will award you $1000 if he predicts you would pay him if he asked. He will ask you to pay him $100 if he predicts you wouldn't pay him if he asked.

Omega asks you to pay him $100. Do you pay?
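For concreteness, here is a sketch of the expected value of the two possible policies, assuming Omega predicts your policy with accuracy p. The parameter p is my addition; in the original problem Omega is a perfect predictor.

```python
# A payer is awarded $1000 when predicted correctly and pays $100 when mispredicted.
# A refuser gets $0 when predicted correctly and is awarded $1000 when mispredicted.

def ev_payer(p: float) -> float:
    return p * 1000 + (1 - p) * (-100)

def ev_refuser(p: float) -> float:
    return (1 - p) * 1000

# With a perfect predictor, the paying policy strictly dominates:
assert ev_payer(1.0) == 1000 and ev_refuser(1.0) == 0
# It still wins whenever p exceeds 11/21 (about 0.52):
assert ev_payer(0.6) > ev_refuser(0.6)
```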

Comment by Bongo on The Allais Paradox and the Dilemma of Utility vs. Certainty · 2011-08-04T18:33:41.702Z · LW · GW

You mean this?:

1.) 26986000 people die, with certainty.

2.) 0.0001% chance that nobody dies; 99.9999% chance that 27000000 people die.

And of course the answer is obvious. Given a population of 40 billion, you'd have to be a monster to not pick 2. :)
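For the record, the expected-deaths arithmetic for the two options, computed from the numbers quoted above:

```python
# Expected deaths under each option.

deaths_1 = 26_986_000                       # option 1: certain
p_nobody = 0.0001 / 100                     # option 2: 0.0001% chance nobody dies
deaths_2 = (1 - p_nobody) * 27_000_000      # option 2: expected deaths

assert abs(deaths_2 - 26_999_973) < 1       # about 26,999,973
assert deaths_1 < deaths_2                  # option 1 has fewer expected deaths
```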

Comment by Bongo on The Allais Paradox and the Dilemma of Utility vs. Certainty · 2011-08-04T09:43:14.593Z · LW · GW

The expected utility calculations now say choice 1 yields $14000 and choice 2 yields $17000.

The expected payoff calculations say that. Expected utility calculations say nothing, since you haven't specified a utility function. Nor can you say that choice 2 must be better just because U($14k) < U($17k) for any reasonable utility function, because the utility of the expected payoff is not equal to the expected utility.

EDIT: pretty much every occurrence of "expected utility" in this post should be replaced with "expected payoff".
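A minimal numeric illustration of why the two quantities differ; the toy gamble and the sqrt utility function are made up.

```python
import math

# Toy gamble: 50% chance of $0, 50% chance of $100, with an assumed sqrt utility.
outcomes = [0, 100]
probs = [0.5, 0.5]
u = math.sqrt

expected_payoff = sum(p * x for p, x in zip(probs, outcomes))       # 50.0
u_of_expected = u(expected_payoff)                                  # sqrt(50), about 7.07
expected_utility = sum(p * u(x) for p, x in zip(probs, outcomes))   # 5.0

# For a concave utility, the utility of the expectation exceeds the expected
# utility (Jensen's inequality), so the two are not interchangeable.
assert u_of_expected > expected_utility
```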

Comment by Bongo on The Allais Paradox and the Dilemma of Utility vs. Certainty · 2011-08-04T09:36:14.601Z · LW · GW

Reminder: the Allais Paradox is not that people prefer 1A>1B; it's that people prefer 1A>1B and 2B>2A. If you prefer 1A>1B and 2A>2B, it could be because you have non-linear utility for money, which is perfectly reasonable and non-paradoxical. Nor does "Shut up and multiply" have anything to do with linear utility functions for money.
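A sketch of why the consistent pattern is fine, using the standard Allais payoffs (assumed here, since the post doesn't restate them): 2A and 2B are just 1A and 1B scaled by 0.34, so for any utility function the two expected-utility differences have the same sign.

```python
import math

# Standard Allais lotteries (assumed): 1A = $24k for sure; 1B = 33/34 chance of $27k;
# 2A = 34% chance of $24k; 2B = 33% chance of $27k. Utilities are normalized so u(0) = 0.

def eu_diff_1(u):
    """EU(1A) - EU(1B)."""
    return u(24_000) - (33 / 34) * u(27_000)

def eu_diff_2(u):
    """EU(2A) - EU(2B): the same lotteries scaled by 0.34."""
    return 0.34 * u(24_000) - 0.33 * u(27_000)

# The second difference is exactly 0.34 times the first, so they always share a sign:
assert abs(eu_diff_2(math.log1p) - 0.34 * eu_diff_1(math.log1p)) < 1e-9
# A sufficiently risk-averse utility (log here) prefers both 1A and 2A, consistently:
assert eu_diff_1(math.log1p) > 0 and eu_diff_2(math.log1p) > 0
```

Preferring 1A and 2B, by contrast, requires the two differences to have opposite signs, which no single utility function can produce; that is the actual paradox.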

Comment by Bongo on The $125,000 Summer Singularity Challenge · 2011-08-01T11:45:24.230Z · LW · GW

Added some exclamation marks to bring out the sarcasm.

Comment by Bongo on The $125,000 Summer Singularity Challenge · 2011-08-01T10:43:11.175Z · LW · GW

If you already know your decision the value of the research is nil.

No because then if someone challenges your decision you can give them citations! And then you can carry out the decision without the risk of looking weird!

Comment by Bongo on Expecting Short Inferential Distances · 2011-08-01T10:37:14.788Z · LW · GW

Leading people to lesswrong on average makes them scoff then add things to their stereotype cache.

This is probably because of the site design and not necessary.

Comment by Bongo on Really good education podcasts · 2011-07-31T15:29:23.430Z · LW · GW

Downvoted for bad grammar but:

Podcasts only go so far. I recommend downloading lectures etc. from youtube and converting to mp3. The best downloader-converter I've found for Windows is this, and for Linux, this (read the comments for how to get it to work). I assume you know how to find stuff on youtube so I'll skip the recommendations, but I've probably listened to thousands of hours of stuff from there and haven't run out yet.

Comment by Bongo on Secrets of the eliminati · 2011-07-25T11:24:10.655Z · LW · GW

I also (1 2) downvoted only after reading.

Comment by Bongo on Help me transition to human society! · 2011-07-24T21:49:31.988Z · LW · GW

I disagree. I'm entertained.

Comment by Bongo on To Speak Veripoop · 2011-07-24T16:47:33.003Z · LW · GW

![](image url here)

Comment by Bongo on Secrets of the eliminati · 2011-07-24T16:21:32.082Z · LW · GW

I believe Vladimir_Nesov was talking about the obscure language in your comments.

Comment by Bongo on Dungeons and Discourse implementation · 2011-07-24T16:14:10.740Z · LW · GW

I don't know how much sense the real-world tropes of skeptical atheists and fervently faithful theists make in a world where you can literally bargain with God to get your dead friend back from Heaven. In the D&Dis world, it really is atheism that requires faith!

Comment by Bongo on Secrets of the eliminati · 2011-07-24T15:41:08.847Z · LW · GW

This read vaguely like it could possibly be interpreted in a non-crazy way if you really tried... until the stuff about jesus.

I mean, whereas the rest of the religious terminology could plausibly be metaphorical or technical, it actually looks as if you're non-metaphorically saying that jesus died so we could have a positive singularity.

Please tell me that's not really what you're saying. I would hate to see you go crazy for real. You're one of my favorite posters even if I almost always downvote your posts.

Comment by Bongo on Dungeons and Discourse implementation · 2011-07-24T13:19:48.116Z · LW · GW

Looks awesome. Some errata:

  • bottom of page 7 says Cartesian doubt is 3 speed and 1 rationality, while the list on page 13 says it's 3 speed and 0 rationality.
  • second paragraph on page 7 says "cast two squares and then cast the spell".
  • page 59 lists LHP things for RHP, where it says "giving you"
  • page 89 says "PROBABILITY THEORY: THE LANGUAGE OF SCIENCE" whereas it's actually the logic of science.

Comment by Bongo on A funny argument for traditional morality · 2011-07-20T09:56:45.368Z · LW · GW

This wasn't about people but about generic game-theoretic agents (and, all else equal, generic game-theoretic agents prefer to exist, because then someone with their utility function is around to exert influence on the world, making it rate higher in that utility function than it would if no such agent existed).

Comment by Bongo on Experiment: Psychoanalyze Me · 2011-07-19T09:56:51.138Z · LW · GW

You made this thread at least partly to flaunt your status as someone who can get away with making a thread all about themselves (on the main LW no less).

Comment by Bongo on Psychologist making pseudo-claim that recent works "compromise the Bayesian point of view" · 2011-07-19T05:17:05.013Z · LW · GW

Downvoted for "pseudo-claim".

Comment by Bongo on Approving reinforces low-effort behaviors · 2011-07-16T01:33:57.191Z · LW · GW

Consider the action of making a goal. I go to all my friends and say "Today I shall begin learning Swahili." This is easy to do. There is no chance of me intending to do so and failing; my speech is output by the same processes as my intentions, so I can "trust" it. But this is not just an output of my mental processes, but an input. One of the processes potentially reinforcing my behavior of learning Swahili is "If I don't do this, I'll look stupid in front of my friends."

I know it's only an example, but it needs to be pointed out that saying to all your friends that you're going to do something may actually make you less likely to do it.