Posts

Counterfactual Mugging Alternative 2016-06-06T06:53:41.637Z

Comments

Comment by wafflepudding on No Safe Defense, Not Even Science · 2017-04-04T21:44:13.506Z · LW · GW

That second paragraph was hard for me. Seeing "a)" and "b)" repeated made me parse it as a jigsaw puzzle where the second "a)" was a subpoint of the first "b)", but then "c)" got back to the main sequence only to jump back to the second "b)", the second subpoint of the first "b)". That didn't make any sense, so I tried to read each clause separately, and came up with "1. You are never safe. 2. You must understand. 3. On an emotional basis..." before becoming utterly lost. Only after coming back to it later did I get that the repeated letters were references to the previous letters.

Comment by wafflepudding on Infinite Certainty · 2017-01-10T06:38:22.087Z · LW · GW

There are, apparently, certain Eastern philosophies that permit and even celebrate logical contradiction. To what extent this is metaphorical I couldn't say, but I recently spoke to an adherent who quite firmly believed that a given statement could be both true and false. After some initial bewilderment, I verified that she wasn't talking about statements that contained both true and false claims, or were informal and thus true or false under different interpretations, but actually meant what she'd originally seemed to mean.

I didn't at first know how to argue for such a basic axiom -- it seemed like trying to talk a rock into consciousness -- but on reflection, I became increasingly uncertain what her assertion would even mean. Does she, when she thinks "Hmm, this is both true and false," actually take any action different from the one I would take? Does belief in NNC wrongly constrain some sensory anticipation? As Paul notes, need the law of non-contradiction hold when no actual assertion is being made?

All this is to say that a matter which at first seemed very simple became confusing along a number of axes, and though Paul might call any one of these complaints "splitting hairs" (as would I), he would probably claim this with far less certainty than his original 100% confidence in NNC's falsehood. That is, he might be more open-minded about a community of mathematicians explaining why some particular complaint isn't splitting hairs at all, why it's highly important for non-obvious reasons, and why, with some fundamental assumptions being confused, it would be misleading to call NNC 'false'.

But more simply, I think Paul may have failed to imagine how he would actually feel in the concrete situation of a community of mathematicians telling him that he was wrong. More simply still, I think we can extrapolate a broader mistake: people presented with the argument against infinite certainty reply with some particular thing they're certain about, each claiming to be even more certain than the last person to try a similar argument. Maybe the correct general response is just to restate Eliezer's reasoning: any 100% probability belongs to the reference class of other 100% probabilities, less than 100% of which are correct.

Comment by wafflepudding on Belief in Intelligence · 2016-12-28T18:15:58.239Z · LW · GW

It seems to me that you are predicting the path of the pinball, just quickly enough that you don't realize you're doing it. The axiom that a position will be reached whenever there is a clear downward path to it is so fundamental that it's easy to forget it was originally derived from reasoning about intermediate steps. At most points the pinball can reach, it is expected to move down. At the next point, it's expected to move down again. You would inductively expect it to reach a point where it cannot move down anymore, and this point is the hole (or, sometimes, a fault in the machine).

Contrast this with the hole being upraised, or blocked by some barrier. All of the paths you envision lead to points other than the hole, so you conclude that the ball will instead land on some other array of points. There it's easier to see that gravity still requires path-based reasoning.

Comment by wafflepudding on Is Santa Real? · 2016-10-31T02:50:53.372Z · LW · GW

In case you're still active, I'm curious what your child's reasoning was for placing God in the pretend category. Like, did she know about Occam's Razor, or was she pattern matching God with other fantasies she's heard? I'm mostly curious because I don't think I've ever heard a perspective as undiluted as an Untheist's.

Comment by wafflepudding on Counterfactual Mugging · 2016-10-27T08:43:27.834Z · LW · GW

You forgot about MetaOmega, who gives you $10,000 if and only if No-mega wouldn't have given you anything, and O-mega, who kills your family unless you're an Alphabetic Decision Theorist. This comment doesn't seem specifically anti-UDT -- after all, Omega and No-mega are approximately equally likely to exist; a ratio of 1:1, if not an actual p of .5 -- but it still has the ring of Just Cheating. Admittedly, I don't have any formal way of telling the difference between decision problems that feel more or less legitimate, but I think part of the answer might be that the Counterfactual Mugging isn't really about how to act around superintelligences: it illustrates a more general need to condition our decisions on counterfactuals. And, as EY pointed out, UDT still wins the No-mega problem if you know about No-mega, so whether we should subscribe to a given decision theory isn't all that dependent on which superintelligences we happen to encounter.

I'm necroing pretty hard and might be assuming too much about what Caspian originally meant, so the above is more me working this out for myself than anything else. But if anyone can explain why the No-mega problem feels like cheating to me, that would be appreciated.

Comment by wafflepudding on How Many LHC Failures Is Too Many? · 2016-10-02T09:04:04.333Z · LW · GW

Gotcha. So, assuming the actual Isaac Newton hadn't risen to prominence*, are you thinking that human life would usually end before his equivalent came around and got the ball rolling? Most of our existential risks are manmade, AFAICT. Or do you think we'd tend to die out between him and the point when someone in a position to build the LHC had the idea to build it? Granted, his being "in a position to build the LHC" is conditional on things like a supportive surrounding population, an accepting government, etcetera; but these things are ephemeral on the scale of centuries.

To summarize: yes, some chance factor would definitely prevent us from building the LHC at the exact time we did, but with a lot of time to spare, some other chance factor would prime us to build it somewhen else. Building the LHC just seems to me like the kind of thing we do. (And if we die from some other existential risk before Hadron Colliding (Largely), that's outside the bounds of what I was originally responding to, because no one who died would find himself in a universe at all.)

*Not that I'm endorsing the idea that Newton started science.

Comment by wafflepudding on How Many LHC Failures Is Too Many? · 2016-10-02T01:04:03.440Z · LW · GW

Are you responding to "Unless human psychology is expected to be that different from world to world?"? Because that's not my position, I'd think that most things recognizable as human will be similar enough to us that they'd build an LHC eventually. I guess I'm not exactly sure what you're getting at.

Comment by wafflepudding on How Many LHC Failures Is Too Many? · 2016-09-29T02:28:39.371Z · LW · GW

I'd agree that certain worlds would have the building of the LHC pushed back or moved forward, but I doubt there would be many where the LHC was just never built. Unless human psychology is expected to be that different from world to world?

Comment by wafflepudding on Counterfactual Mugging Alternative · 2016-06-13T22:11:35.089Z · LW · GW

I am extremely satisfied with this description; I hadn't personally thought of it in such specific terms, and this would be a perfect way to say it. I'll admit I'm a bit confused about why you would pay before but not after, considering that either one is done by a person to whom the prophecy is given 50% less often.

Comment by wafflepudding on Counterfactual Mugging Alternative · 2016-06-13T06:38:59.258Z · LW · GW

The kind of person who pays to fight an infallible prophecy is the same kind of person to whom infallible prophecies are given 50% less often. In this case.

Comment by wafflepudding on Counterfactual Mugging Alternative · 2016-06-12T05:21:35.635Z · LW · GW

Hmm, I didn't intend for the prophet to contradict himself. (Based on your comments and others', I seem to have tripped and fallen hard into the illusion of transparency.) Would you mind elaborating on the contradictory statement he makes? And had he not said anything apparently contradictory, would you have paid the $100?

Comment by wafflepudding on Where Recursive Justification Hits Bottom · 2016-06-10T23:17:11.995Z · LW · GW

Though the anti-Laplacian mind, in this case, is inherently more complicated. Maybe it's not a moot point that Laplacian minds are on average simpler than their anti-Laplacian counterparts? There are infinitely many Laplacian and infinitely many anti-Laplacian minds, but of the two infinities, might one be proportionately larger?
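One way to make "proportionately larger" precise -- my own gloss, nothing from the original post -- is to weight each mind by a simplicity prior, the way Solomonoff induction weights programs. A crude sketch, assuming minds are prefix-free programs and k is a hypothetical count of extra bits the shortest anti-Laplacian mind needs:

```latex
% Sketch under invented notation: weight a mind/program p of length
% \ell(p) bits by 2^{-\ell(p)}; L = Laplacian minds, A = anti-Laplacian minds.
\[
  W(L) = \sum_{p \in L} 2^{-\ell(p)}, \qquad
  W(A) = \sum_{p \in A} 2^{-\ell(p)}
\]
% Both sums run over infinite sets, yet both converge for prefix-free
% encodings (Kraft inequality), so W(A)/W(L) is a definite number. If the
% shortest anti-Laplacian mind carries k extra bits to say where it
% deviates from induction, then, very roughly,
\[
  \frac{W(A)}{W(L)} \approx 2^{-k} \;\ll\; 1 .
\]
```

On that weighting, both classes are infinite but one still dominates, which is maybe the sense in which the point isn't moot.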

None of this is to detract from Eliezer's original point, of course. I only find it interesting to think about.

Comment by wafflepudding on Counterfactual Mugging Alternative · 2016-06-10T20:38:18.890Z · LW · GW

And if the prophet is "honest and truly prophetic"?

Comment by wafflepudding on Counterfactual Mugging Alternative · 2016-06-07T16:52:45.147Z · LW · GW

Actually, I still stand by the "not contrived" part. (I think that's what drove me to believe it would be easy to understand.) The idea arose organically when I was thinking about what I would do if presented with a prophecy like this, and whether it would be worth expending effort to fight it. On the other hand, there's no reason for Omega to play his game with you other than specifically to illustrate the point of CM.

Comment by wafflepudding on Counterfactual Mugging Alternative · 2016-06-06T16:52:16.293Z · LW · GW

Well, your confusion means my original goal has failed, and I suppose that's that. I am pretty sure this is equivalent to CM in the sense that only UDT wins -- I'd be happy to explain further if you'd like, but otherwise, thanks for your help!

Comment by wafflepudding on Counterfactual Mugging Alternative · 2016-06-06T16:40:52.011Z · LW · GW

You are on path 3, but the button is not disabled. The purpose of spending the $100 is to decrease the number of possible worlds where the prophet comes up and talks to you in the first place. You wouldn't end up destroying your timeline by making it inconsistent; ideally, this timeline would just never have been created, because if it had been, you would've spent the $100.

Out of curiosity, would you pay Omega in the counterfactual mugging? If you'd pay in CM but not here, that makes me worry that this formulation isn't similar.

Comment by wafflepudding on War and/or Peace (2/8) · 2016-05-24T02:08:20.748Z · LW · GW

Hmm… does some instance of utility get multiplied by the number of people who find it utilitous? Like, if there are twice as many humans, does that mean that one Babyeater baby eaten subtracts twice as much from group utility?

Comment by wafflepudding on Timeless Physics · 2016-04-15T01:40:44.665Z · LW · GW

An omnipotent magicker decides to flip a coin, and the coin lands heads. Afterwards, the magicker changes every particle in the universe to what it would be had the coin landed tails -- including those in his own brain. Is it true that in the past, the coin landed heads, even though this event is epiphenomenal?

I realize that the magicker is violating the laws of entropy, and that in the real world there are no magickers. I also realize that for the purposes of anyone in the universe, the first coin flip doesn't and couldn't possibly matter, because it was epiphenomenal. But I'm still curious what the answer to my question is.

Comment by wafflepudding on Zombies! Zombies? · 2016-03-04T04:16:36.929Z · LW · GW

On (3), if Zombie Chalmers can't be correct or incorrect about consciousness -- as in, he's just making noise when he says "consciousness" -- does the same hold for his beliefs about anything else? Like, Zombie Chalmers also (probably) says "the sun will rise tomorrow," but would you also question whether those words actually mean anything? In both the case of the sun's rising and that of epiphenomenalism's truth, Zombie Chalmers is commenting on an actual way that reality can be. Is there a difference? Or does Zombie Chalmers have no beliefs about anything? I'd think a zombie could be said to have beliefs to the same extent that some advanced AI could.

Comment by wafflepudding on Worse Than Random · 2015-12-31T08:14:42.688Z · LW · GW

This post is my first experience learning about noise in algorithms, so forgive me if I seem underinformed. Two points occurred to me while reading this comment; some clarification would be great:

First, while it was intriguing to read that input just below the perceptual threshold would, half the time, be perceived because noise bumps it above the threshold, it seems to me that input just above the threshold would, half the time, be knocked below it. So wouldn't noise lead to no net gain -- just a loss in acuity?

Second, I'm confused about how input below the perceptual threshold is actually input. If a chair moves in front of a camera so slightly that the camera doesn't register a change in position, the input seems to me like zero, and noise loud enough to move zero past the perceptual threshold would not distinguish between movement and stillness, but would go off half the time and be silent half the time. If that doesn't make sense, assume that the threshold is .1 meters, and the camera doesn't notice any movement less than that. Let's say your noise is a random number between -.01 and .01 meters. The chair moves .09 meters, and your noise lands on .01 meters. I wouldn't think that would cross the threshold, because the camera can't actually detect that .09 meters if its threshold is .1. So wouldn't the input just be 0 motion detected + .01 meters of noise = .01 meters of motion? Maybe I'm misunderstanding.
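To make my confusion concrete, here's a toy simulation of the setup as I understand it (my own invention -- the threshold, noise level, and detection_rate are all made up for illustration). The assumption that would answer my own second question, if it's right, is that the noise is added to the raw displacement before the threshold comparison, not to the already-thresholded output, and that the noise is wide enough to bridge the gap (unlike the ±.01 meters in my example):

```python
import numpy as np

rng = np.random.default_rng(0)

THRESHOLD = 0.10    # meters; the camera fires only above this displacement
N_FRAMES = 100_000  # frames to average over

def detection_rate(true_shift, noise_sd):
    """Fraction of frames on which the detector fires. Noise is added to
    the raw displacement BEFORE thresholding -- the detector never sees
    the clean value."""
    noisy = true_shift + rng.normal(0.0, noise_sd, N_FRAMES)
    return (noisy > THRESHOLD).mean()

for shift in (0.00, 0.05, 0.09, 0.11):
    print(f"shift={shift:.2f} m  "
          f"noiseless rate={detection_rate(shift, 0.0):.3f}  "
          f"noisy rate={detection_rate(shift, 0.02):.3f}")
```

Without noise the firing rate is a step function (0 below .1 m, 1 above), so a .09 m shift is literally invisible; with noise it rises smoothly with the true shift (roughly .006 at .05 m, .31 at .09 m), so averaging many frames recovers sub-threshold information -- while, per my first question, a just-above-threshold shift does indeed lose some single-frame reliability.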

Comment by wafflepudding on LessWrong 2.0 · 2015-12-23T00:46:14.260Z · LW · GW

In reading the Sequences, I feel weird about replying to comments because most of them are from seven years ago. Is it frowned upon to respond to something crazy old and possibly obsolete?

Comment by wafflepudding on Arguing "By Definition" · 2015-11-13T15:52:51.636Z · LW · GW

I love this series. Except, I have very particularly been in an argument where I said the phrase, "Hinduism is, by definition, a religion." Isn't agreement on common usage useful if you want to communicate efficiently? Maybe Wiggin shouldn't be used commonly, but one person defining Wiggin in a manner that contradicts the dictionary definition certainly doesn't do anyone any favors. And I think it's fine for common usage to define humans as mortal, as long as it consistently assumes that Socrates is inhuman when he goes on living forever.

Comment by wafflepudding on Arguing "By Definition" · 2015-09-27T18:05:20.421Z · LW · GW

I disagree. Agreeing on term definitions beforehand would solve all of these problems: The definition of religion is not "something that answers theological questions," therefore the By Definition argument is ineffective for proving that atheism is a religion. (Incidentally, if that were the definition of religion, then atheism would be a religion.) For Hinduism, if someone tried to tell me that it was not a religion, I would necessarily use the definition of religion to prove them wrong. If Hinduism did not fit the definition of religion, it would not be a religion.

Comment by wafflepudding on Hindsight Devalues Science · 2015-09-26T01:58:07.842Z · LW · GW

This hurts my image of Freud. Of course, after I have a dream about skyscrapers, he can explain that it's connected to my love of my phallus, but could he predict my love of my phallus based on a dream about skyscrapers?

Comment by wafflepudding on The "Intuitions" Behind "Utilitarianism" · 2015-09-19T00:22:52.936Z · LW · GW

I believe that the vast majority of people in the dust speck thought experiment would be very willing to endure the collision of the dust speck, if only to play a small role in saving a man from 50 years of torture. I would choose the dust specks on behalf of those hurt by the dust specks, as I can be very close to certain that most of them would consent to it.

A counterargument might be that, since 3^^^3 is such a vast number, the collective pain of the small fraction of people who would not consent to the dust speck still multiplies to be far larger than the pain the tortured man would endure. Thus, I would most likely be making a nonconsensual tradeoff in favor of pain. However, I do not value the comfort of those who would condemn a man to 50 years of torture in order to alleviate a moment's mild discomfort, so 100% of the people whose lack of pain I value would willingly trade it over.
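In symbols (my own notation, invented for illustration): let f be the non-consenting fraction, ε the disutility I assign to one of their specks, and T the torture's disutility. The counterargument observes that the left-hand side below dwarfs T for any fixed ε > 0; my reply amounts to setting ε = 0 for that group:

```latex
% Notation invented for illustration: f = non-consenting fraction,
% \epsilon = weight given to one of their specks, T = the torture's disutility.
\[
  f \cdot (3\uparrow\uparrow\uparrow 3) \cdot \epsilon \;\gg\; T
  \quad \text{for any fixed } \epsilon > 0,
  \qquad \text{but } \epsilon = 0 \;\Rightarrow\; \text{LHS} = 0 .
\]
```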

If someone can sour that argument for my mind, I'll concede that I prefer the torture.