Harry didn't hear Hermione's testimony. Therefore, he can go back in time and change it to anything that would produce the audience reaction he saw, without causing paradox.
I almost downvoted this because when I clicked on it from my RSS reader, it appeared to have been posted on main LW instead of discussion (known bug). This might be the reason for a lot of mysterious downvoting, actually.
(Bug report: I was sent to this post via this link, and I see MAIN bolded above the title instead of DISCUSSION. The URL is misleading too; shouldn't URLs of discussion posts contain "/r/discussion/" instead of "/lw"?)
(EDIT: Grognor just told me that "every discussion post has a main-style URL that bolds MAIN")
fraction of revenue that ultimately goes to paying staff wages
About a third in 2009, the last year for which we have handy data.
Snape says this in both MoR and the original book:
"I can teach you how to bottle fame, brew glory, even stopper death"
Isn't this silly? Of course you can stopper death, because duh, poisons exist.
It might be just a slip-up in the original book, but I'm hoping it will somehow make sense in MoR. My first thought was that maybe a magical death potion couldn't be stopped using magical healing, unlike non-magical poisons.
I asked this on IRC and got some interesting ideas. feep thought it might mean that you can make a Potion of Dementor, which would fit since dementors are avatars of death in MoR, and stoppering death would be actually impressive if it meant that. Orionstein suggested it might be a potion made from e.g. a bullet that's killed someone, which, given what we know of how potions work from chapter 78, might also result in a potion with deathy effects above and beyond just those of poison.
This usually stops me from using myself as examples, sometimes with the result that the post stays unwritten or unpublished.
You could just tell the story with "me" replaced by "my friend" or "someone I know" or "Bob". I'd hate to miss a W_D post because of a trivial thing like this.
I ... was shocked at how downright anti-informative the field is
Explain?
shocked at how incredibly useless statistics is
Explain?
The opposite happened with the parapsychology literature
Elaborate?
algorithmic probability ... does not say that naturalistic mechanistic universes are a priori more probable!
Explain?
confirmation bias ... doesn't actually exist.
Explain?
I wonder how this comment got 7 upvotes in 9 minutes.
EDIT: Probably the same way this comment got 7 upvotes in 6 minutes.
This could be an option.
(An increasing probability distribution over the natural numbers is impossible. The sequence (P(1), P(2), ...) would have to 1) be increasing, 2) contain a nonzero element, and 3) sum to 1, which is impossible.)
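Spelling out the "which is impossible" step, in my own notation: if P is increasing and some P(n) = ε > 0, then P(k) ≥ ε for every k ≥ n, so

```latex
\[
  \sum_{k=1}^{\infty} P(k) \;\ge\; \sum_{k=n}^{\infty} P(k)
  \;\ge\; \sum_{k=n}^{\infty} \varepsilon \;=\; \infty \;\ne\; 1.
\]
```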
There's a related problem; Humans have a tendency to once they have terms for something take for granted that something that looks at a glance to make rough syntactic sense that it actually has semantics behind it.
This sentence is so convoluted that at first I thought it was some kind of meta joke.
It's also another far-mode picture.
73 tabs, 4 windows.
Also, I'd say both of those pictures seem to have the effect of inducing far mode.
Given any problem, one should look at it, and pick the course that maximising one's expectation. ... what if my utility is non-linear
You're confusing expected outcome and expected utility. Nobody thinks you should maximize the utility of the expected outcome; rather you should maximize the expected utility of the outcome.
Lets now take another example: I am on Deal or No Deal, and there are three boxes left: $100000, $25000 and $.01. The banker has just given me a deal of $20000 (no doubt to much audience booing). Should I take that? Expected gains maximisation says certainly not!
Yes, and expected gains maximization, which nobody advocates, is stupid, unlike expected utility maximization, which will take into account the fact that your utility function is probably not linear on money.
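A minimal sketch of the difference, using the numbers from the parent comment (the log utility function and the $1,000 baseline wealth are my own illustrative assumptions, not anything the parent specified):

```python
import math

boxes = [100_000, 25_000, 0.01]   # remaining boxes, each equally likely
deal = 20_000                     # the banker's offer

def u(x, wealth=1_000):
    """A toy concave (risk-averse) utility: log of total wealth."""
    return math.log(wealth + x)

expected_payoff = sum(boxes) / len(boxes)           # ~ $41,667 > $20,000
expected_utility = sum(u(x) for x in boxes) / len(boxes)

print(expected_payoff > deal)      # True: payoff maximization refuses the deal
print(expected_utility > u(deal))  # False: this utility function takes the deal
```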
Is there a video of the full lecture?
it seems that an isomorphic argument 'proves' that computer programs will crash - since "almost any" computer program crashes.
More obviously, an isomorphic argument 'proves' that books will be gibberish - since "almost any" string of characters is gibberish. An additional argument that non-gibberish books are very difficult to write and that naively attempting to write a non-gibberish book will almost certainly fail on the first try, is required. The analogous argument exists for AGI, of course, but is not given there.
It was probably that, but note that that page is not concerned with minimizing killing, but minimizing the suffering-adjusted days of life that went into your food. (Which I think is a good idea; I've used that page's stats to choose my animal products for a year now.)
By doing this you condition them to accept the radical form of dominance where they have the authority to tell you what you are morally entitled to believe.
*where you have the authority to tell them (?)
My impression is that the level went up and then down:
- OB-era comment threads were bad.
- During the first year of LW the posts were good.
- Nowadays the posts are bad again.
LW Minecraft server anyone?
If you really can predict your karma, you should post encrypted predictions* offsite at the same time as you make your post, or use some similar scheme so your predictions are verifiable.
Seems obviously worth the bragging rights.
* A prediction is made up of a post id, a time, and a karma score, and means that the post will have that karma score at that time.
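For the "similar scheme", a plain hash commitment would suffice. A minimal sketch, assuming SHA-256 and the prediction format from the footnote (the random nonce is my addition, to stop anyone brute-forcing the small space of plausible predictions; the example values are made up):

```python
import hashlib, secrets

def commit(post_id: str, time: str, karma: int) -> tuple[str, str]:
    """Return (digest, nonce): publish the digest now, keep the nonce secret."""
    nonce = secrets.token_hex(16)
    message = f"{post_id}|{time}|{karma}|{nonce}"
    return hashlib.sha256(message.encode()).hexdigest(), nonce

# At posting time: publish only the digest somewhere timestamped.
digest, nonce = commit("q7kz3", "2011-09-10T12:00Z", 15)

# At reveal time: publish post_id, time, karma, and nonce; anyone can
# recompute the SHA-256 digest and check it matches what you published.
```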
You only have two choices. You can be the kind of person who kills the fat man in order to save four other lives and kills the fat man in order to get a million dollars for yourself. Or you can be the kind of person who refuses to kill the fat man in both situations. Because of human hardware, those are your only choices.
This seems obviously false.
Thus, when aiming to maximize expected positive impact, it is not advisable to make giving decisions based fully on explicit formulas.
I love that you don't seem to argue against maximizing EV, but rather to argue that a certain method, EEV, is a bad way to maximize EV. If this had been stated at the beginning of the article, I would have been a lot less skeptical initially.
So I guess the takeaway is that if you care more about your status as a predictable, cooperative, and non-threatening person than about four innocent lives, don't push the fat man.
I don't think it's that bad. Anything at an inferential distance sounds ridiculous if you just matter-of-factly assert it; that just means that if you want to tell someone about something at an inferential distance, you shouldn't just matter-of-factly assert it. The framing probably matters at least as much as the content.
science is wrong
No. Something like "Bayesian reasoning is better than science" would work.
Every fraction of a second you split into thousands of copies of yourself.
Not "thousands". "Astronomically many" would work.
Computers will soon become so fast that AI researchers will be able to create an artificial intelligence that's smarter than any human
That's the accelerating-change school of singularity, not the intelligence-explosion school. Only the latter is popular around here.
Also, we sometimes prefer torture to dust-specs.
Add "for sufficiently many dust-specks".
I also agree with lessdazed's first three criticisms.
--
Other than these, it's not a half-bad summary!
A little UI idea to avoid number clutter: represent the controversy score by having the green oval be darker (or lighter) green the more controversial the post is.
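Something like this sketch, say (the endpoint shades and the 0-to-1 controversy scale are invented for illustration):

```python
def controversy_shade(score: float) -> str:
    """Map a controversy score in [0, 1] to a green hex color,
    from light green (uncontroversial) to dark green (controversial)."""
    light, dark = (0xCC, 0xFF, 0xCC), (0x00, 0x66, 0x00)
    rgb = (round(l + (d - l) * score) for l, d in zip(light, dark))
    return "#{:02X}{:02X}{:02X}".format(*rgb)

print(controversy_shade(0.0))  # "#CCFFCC"
print(controversy_shade(1.0))  # "#006600"
```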
Extremely counterfactual mugging is the simplest such variation IMO. Though it has the same structure as Parfit's Hitchhiker, it's better because issues of trust and keeping promises don't come into it. Here it is:
Omega will either award you $1000 or ask you to pay him $100. He will award you $1000 if he predicts you would pay him if he asked. He will ask you to pay him $100 if he predicts you wouldn't pay him if he asked.
Omega asks you to pay him $100. Do you pay?
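For what it's worth, here's the ex-ante payoff structure, assuming Omega's prediction is perfect (a toy model I'm adding; it isn't part of the problem statement):

```python
def net_winnings(pays_when_asked: bool) -> int:
    """Net dollars for an agent with the given policy, facing a perfect predictor."""
    if pays_when_asked:
        return 1000  # Omega predicts you'd pay, so he awards $1000 and never asks
    return 0         # Omega predicts you wouldn't, asks, you refuse: nothing changes hands

print(net_winnings(True))   # 1000
print(net_winnings(False))  # 0 -- yet at the moment you're actually asked,
                            # paying looks like a pure $100 loss
```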
You mean this?:
1.) 26986000 people die, with certainty.
2.) 0.0001% chance that nobody dies; 99.9999% chance that 27000000 people die.
And of course the answer is obvious. Given a population of 40 billion, you'd have to be a monster to not pick 2. :)
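(Spelling out the arithmetic the sarcasm rests on: option 2's expected deaths are

```latex
\[
  0.999999 \times 27{,}000{,}000 = 26{,}999{,}973 \;>\; 26{,}986{,}000,
\]
```

so picking 2 costs about 14,000 expected lives; the certainty framing is doing all the work.)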
The expected utility calculations now say choice 1 yields $14000 and choice 2 yields $17000.
The expected payoff calculations say that. Expected utility calculations say nothing, since you haven't specified a utility function. Nor can you say that choice 2 must be better just because U($14k) < U($17k) for any reasonable utility function: the utility of the expected payoff is not equal to the expected utility.
EDIT: pretty much every occurrence of "expected utility" in this post should be replaced with "expected payoff".
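For anyone who wants the general fact behind that last sentence spelled out: for a concave (risk-averse) utility function u, Jensen's inequality gives

```latex
\[
  \mathbb{E}[u(X)] \;\le\; u\!\bigl(\mathbb{E}[X]\bigr),
\]
```

so a gamble can have the higher expected payoff while a sure payment has the higher expected utility.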
Reminder: the Allais Paradox is not that people prefer 1A>1B; it's that people prefer 1A>1B and 2B>2A. If you prefer 1A>1B and 2A>2B, it could be because of having non-linear utility for money, which is perfectly reasonable and non-paradoxical. Neither does "Shut up and multiply" have anything to do with linear utility functions for money.
Added some exclamation marks to bring out the sarcasm.
If you already know your decision the value of the research is nil.
No, because then if someone challenges your decision you can give them citations! And then you can carry out the decision without the risk of looking weird!
Leading people to lesswrong on average makes them scoff then add things to their stereotype cache.
This is probably because of the site design and not necessary.
Downvoted for bad grammar but:
Podcasts only go so far. I recommend downloading lectures etc. from youtube and converting to mp3. The best downloader-converter I've found for Windows is this, and for Linux, this (read the comments for how to get it to work). I assume you know how to find stuff on youtube so I'll skip the recommendations, but I've probably listened to thousands of hours of stuff from there and haven't run out yet.
I disagree. I'm entertained.
I believe Vladimir_Nesov was talking about the obscure language in your comments.
I don't know how much sense the real-world tropes of skeptical atheists and fervently faithful theists make in a world where you can literally bargain with God to get your dead friend back from Heaven. In the D&Dis world, it really is atheism that requires faith!
This read vaguely like it could possibly be interpreted in a non-crazy way if you really tried... until the stuff about jesus.
I mean, whereas the rest of the religious terminology could plausibly be metaphorical or technical, it looks as if you're actually non-metaphorically saying that jesus died so we could have a positive singularity.
Please tell me that's not really what you're saying. I would hate to see you go crazy for real. You're one of my favorite posters even if I almost always downvote your posts.
Looks awesome. Some errata:
- bottom of page 7 says Cartesian doubt is 3 speed and 1 rationality, while the list on page 13 says it's 3 speed and 0 rationality.
- second paragraph on page 7 says "cast two squares and then cast the spell".
- page 59 lists LHP things for RHP, where it says "giving you"
- page 89 says "PROBABILITY THEORY: THE LANGUAGE OF SCIENCE" whereas it's actually the logic of science.
This wasn't about people but about generic game-theoretic agents. (And all else equal, generic game-theoretic agents prefer to exist, because then there will be someone in the world with their utility function exerting an influence on the world, making it rate higher in their utility function than it would if there were no one.)
You made this thread at least partly to flaunt your status as someone who can get away with making a thread all about themselves (on the main LW no less).
Downvoted for "pseudo-claim".
Consider the action of making a goal. I go to all my friends and say "Today I shall begin learning Swahili." This is easy to do. There is no chance of me intending to do so and failing; my speech is output by the same processes as my intentions, so I can "trust" it. But this is not just an output of my mental processes, but an input. One of the processes potentially reinforcing my behavior of learning Swahili is "If I don't do this, I'll look stupid in front of my friends."
I know it's only an example, but it needs to be pointed out that maybe saying to all your friends that you're going to do it actually makes you less likely to do it.