Quantum Russian Roulette
post by Christian_Szegedy · 2009-09-18T08:49:43.865Z · LW · GW · Legacy · 65 comments
Quantum Russian roulette is a game with 16 participants. Each is assigned a unique four-digit binary code and deposits $50,000. They are put into a deep sleep using some drug. The organizer flips a quantum coin four times. Unlike in ordinary Russian roulette, only the participant whose code matches the four flips survives; the others are executed in a completely painless manner. The survivor takes all the money.
Let us assume that none of them has a family or close friends. Then the only result of the game is that the winner enjoys a much better quality of life. The others die in his Everett branch, but they live on in others. So each participant's only subjective experience will be that he went into a room and woke up $750,000 richer.
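A minimal sketch of the game's bookkeeping, with a classical pseudo-random generator standing in for the quantum coin (under many-worlds all 16 outcomes are realized rather than sampled; the helper names and trial count are purely illustrative):

```python
import random

ENTRY = 50_000
PLAYERS = 16          # one 4-bit code per player

def winner_code():
    """Four (quantum) coin flips select one of the 16 codes."""
    return random.getrandbits(4)

# Objectively, each player's expected monetary value is exactly zero:
trials = 10**6
wins = sum(winner_code() == 0b0000 for _ in range(trials))
ev = (wins * ENTRY * (PLAYERS - 1) - (trials - wins) * ENTRY) / trials
print(wins / trials, ev)       # ~0.0625, ~0

# Subjectively, conditioning on the branches where a player wakes up,
# the only experienced outcome is the prize:
print(ENTRY * (PLAYERS - 1))   # 750000
```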
Although the game is extremely spooky to our human intuition, there are hardly any obvious objective reasons to oppose it under the following assumptions:
- Average utilitarianism
- Near 100% confidence in the many-worlds (MWI) nature of our universe
- It is possible to kill someone without invoking any negative experiences.
The natural question arises whether one could somehow verify that the method really works, in particular that the many-worlds hypothesis is correct. At first sight, it looks impossible to convince anybody besides the participant who survived the game.
However, there is a way to convince a lot of people in a few Everett branches: you make a one-time big announcement on the Internet, TV, etc., saying that there is a well-tested quantum coin flipper, examined by a community of the most honest and trusted members of society. You pick some random 20-bit number and announce that you will flip the device 20 times; if the outcome matches the predetermined number, you will take it as million-to-one evidence that the many-worlds theory works as expected. Of course, only the people in the right branch will be convinced. Nevertheless, they could be convinced enough to think seriously about the viability of quantum Russian roulette type games.
My question is: what are the possible moral or logical reasons not to play such games, from both individual and societal standpoints?
[EDIT] A simpler, single-player version of the experiment: the player picks his lottery numbers by flipping quantum coins and sets up equipment that kills him in his sleep if the drawn numbers don't coincide with his. In this way, he can guarantee waking up as a lottery millionaire.
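The same idea as classical rejection sampling, where discarding a run plays the role of dying in a branch. A sketch with a deliberately tiny toy lottery so the loop terminates quickly; a real 6-of-49 lottery would need roughly 14 million runs on average (all names and sizes are illustrative):

```python
import random

def draw(num_balls=20, picks=3):
    """A toy lottery draw: 3 of 20 balls (1140 possible tickets)."""
    return frozenset(random.sample(range(1, num_balls + 1), picks))

def lottery_survivor():
    """Generate quantum-random tickets; 'kill' (discard) every run whose
    ticket misses the draw. The surviving run always holds a winner."""
    winning = draw()
    while True:
        ticket = draw()
        if ticket == winning:   # the branch that wakes up a millionaire
            return ticket
        # every other branch is postselected away in its sleep

print(lottery_survivor())
```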
65 comments
Comments sorted by top scores.
comment by cousin_it · 2009-09-18T14:41:45.091Z · LW(p) · GW(p)
Here's a funny reformulation of your argument: if you live in a quantum world where deaths are swift and painless, it makes sense to bet a lot of money on the assertion that you will stay alive. This incentivizes many other people to bet on your death and try hard to kill you. The market takes this into account, so your payoff for staying alive grows very high. Sounds like a win-win situation all around!
That said, at root it's just a vanilla application of quantum immortality which people may or may not believe in (the MWI doesn't seem to logically imply it). For a really mindblowing quantum trick see the Elitzur-Vaidman bomb tester. For a deeper exploration of the immortality issue, see Quantum suicide reality editing.
Replies from: Vladimir_Nesov, Christian_Szegedy, Christian_Szegedy
↑ comment by Vladimir_Nesov · 2009-09-18T15:12:30.057Z · LW(p) · GW(p)
For a really mindblowing quantum trick see the Elitzur-Vaidman bomb tester.
Interesting.
Consider a collection of bombs, some of which are duds. The bombs are triggered by a single photon. Usable bombs will absorb the photon and detonate. Dud bombs will not absorb the photon. The problem is how to separate the usable bombs from the duds.
...
A solution is for the sorter to use a mode of observation known as counterfactual measurement
...
In 1994, Anton Zeilinger, Paul Kwiat, Harald Weinfurter, and Thomas Herzog actually performed an equivalent of the above experiment, proving interaction-free measurements are indeed possible.
See also:
- Quantum interrogation by Sean Carroll
- Interaction-free measurement (PDF) by Alan DeWeerd
↑ comment by Jonathan_Graehl · 2009-09-18T18:47:01.415Z · LW(p) · GW(p)
As I understand it, the only way to have a known-live undetonated bomb in this branch is to cause it to actually detonate in other branches.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2009-09-18T20:14:22.415Z · LW(p) · GW(p)
Sorta, but not quite, as the probability of it actually detonating can be brought as close to 0 as one likes (if I'm not mistaken).
Replies from: Jonathan_Graehl
↑ comment by Jonathan_Graehl · 2009-09-18T21:36:30.529Z · LW(p) · GW(p)
Yes - I didn't mean all other branches.
↑ comment by Christian_Szegedy · 2009-09-18T18:56:46.620Z · LW(p) · GW(p)
The Elitzur-Vaidman bomb tester is really amazing: more sophisticated than my scenario. However, it is quite different and not directly related.
Quantum suicide is much more similar; in fact, my post derives from an almost identical idea that I also posted on LessWrong. I arrived at the idea independently while reading that thread.
The reason I find the quantum roulette thought experiment interesting is that it is much less speculative: the payoff is clear, and the game can easily be motivated and performed with current technology.
Replies from: cousin_it
↑ comment by cousin_it · 2009-09-18T18:59:39.546Z · LW(p) · GW(p)
Yes, that last point is important. Too bad we won't get volunteers from LW, because we're all kinda friends and would miss each other.
Replies from: Aurini, Christian_Szegedy
↑ comment by Christian_Szegedy · 2009-09-18T20:50:01.637Z · LW(p) · GW(p)
:) We could still make the second experiment and if the numbers match, throw a party... ;)
↑ comment by Christian_Szegedy · 2009-09-19T02:28:36.990Z · LW(p) · GW(p)
I don't think my game is a simple reformulation of quantum immortality.
I don't even believe in quantum immortality. At least it is not implied in any way by MWI.
It is perfectly possible that you have increasing amounts of "quantum luck" in a lot of branches, but you finally die in each of them, because the life-supporting miracles extend your life by less and less; once they hit the resolution of time, you simply run out of them and die for sure.
If you think time is continuous, then think of Zeno's paradox to see why an infinite sum of such pieces can add up to only a finite amount of time gained.
comment by Scott Alexander (Yvain) · 2009-09-18T13:18:44.463Z · LW(p) · GW(p)
However, there is a way to convince a lot of people in a few Everett branches: you make a one-time big announcement on the Internet, TV, etc., saying that there is a well-tested quantum coin flipper, examined by a community of the most honest and trusted members of society. You pick some random 20-bit number and announce that you will flip the device 20 times; if the outcome matches the predetermined number, you will take it as million-to-one evidence that the many-worlds theory works as expected. Of course, only the people in the right branch will be convinced. Nevertheless, they could be convinced enough to think seriously about the viability of quantum Russian roulette type games.
I vaguely remember this being discussed here before, and people deciding it wouldn't work. Before the coin-flipper is run, you have a 1/2^20 chance of seeing your number come up, whether many worlds is true or false. That means that seeing the number come up doesn't tell you anything about whether MW is true or not. It just tells you you're extremely lucky: either lucky enough that the coin-flipper got a very specific number, or lucky enough to have ended up in the very specific universe where the flipper got that number.
Replies from: Christian_Szegedy
↑ comment by Christian_Szegedy · 2009-09-18T20:33:14.112Z · LW(p) · GW(p)
I don't really buy that argument: it would apply to any measurement scenario. You could say of the two-mirror experiment: "These dots on the screen don't mean a thing; we just got extremely lucky." Which is of course always a theoretical possibility.
Of course you can derive that you were extremely lucky, but also that "someone got extremely lucky" [SGEL]. If you start with some arbitrary estimates, e.g. P(SWI) = 0.5 and P(MWI) = 0.5, and update P(MWI) by Bayesian inference with
P(SGEL|SWI) = 1/2^20
P(SGEL|MWI) = 1,
you get:
P(MWI|SGEL) = P(SGEL|MWI) P(MWI) / (P(SGEL|SWI) P(SWI) + P(SGEL|MWI) P(MWI))
= 0.5 / ((1/2^20) · 0.5 + 0.5) = 1 / (1 + 1/2^20) ~ 1 - 1/2^20
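A quick numerical check of the update above, transcribed directly:

```python
p_mwi = p_swi = 0.5
p_sgel_mwi = 1.0        # in many-worlds, someone sees the match for sure
p_sgel_swi = 1 / 2**20  # a single world has to get lucky

posterior = p_sgel_mwi * p_mwi / (p_sgel_swi * p_swi + p_sgel_mwi * p_mwi)
print(posterior)         # 0.999999046...
print(1 - 1 / 2**20)     # 0.999999046..., the same to ~12 digits
```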
Replies from: saturn
↑ comment by saturn · 2009-09-18T21:56:37.538Z · LW(p) · GW(p)
Well, yes, but we can't peek into other Everett branches to check them for lucky people.
Replies from: Christian_Szegedy
↑ comment by Christian_Szegedy · 2009-09-19T00:43:03.704Z · LW(p) · GW(p)
I don't see why you would want to. You could only increase P(MWI) by finding any there.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-11-10T18:39:33.848Z · LW(p) · GW(p)
You pick some random 20-bit number and announce that you will flip the device 20 times; if the outcome matches the predetermined number, you will take it as million-to-one evidence that the many-worlds theory works as expected.
That doesn't convince anyone else; from their perspective, in Bayesian terms, the experiment has the same million-to-one improbability of producing this result, regardless of whether QTI is true, since they're not dying in the other worlds. From your perspective, you've ended up in a world where you experienced what feels like strong evidence of QTI being true, but you can never communicate this evidence to anyone else. If we hook up a doomsday device to the coinflipper, then in worlds where we survive, we can never convince aliens.
comment by Lightwave · 2009-09-18T11:05:49.629Z · LW(p) · GW(p)
Do you really need Many Worlds for quantum suicide/immortality? How's a Single Big World (e.g. Tegmark's Level 1 universe) different? Since a Single Big World is infinite, there will be an infinite number of people performing quantum suicide experiments, and some of them will seem to be immortal.
It seems to me that there is no practical difference between living in Many Worlds vs a Single Big World.
Replies from: randallsquared
↑ comment by randallsquared · 2009-09-18T16:31:36.208Z · LW(p) · GW(p)
Well, Many Worlds doesn't actually require infinity, which is a plus in my book. You can have a Many Worlds scenario with "only" (the number of discrete possible actions since the beginning of time) worlds, whereas a Single Big World that wasn't actually infinite would require further explanation.
comment by Wei Dai (Wei_Dai) · 2009-09-18T10:26:39.550Z · LW(p) · GW(p)
If you assume 3, average utilitarianism says you should kill anyone who has below average utility, since that raises the average. So in the end you kill everyone except the one person who has the highest utility. There is no need for assumption 2 at all.
BTW, are you aware of any of the previous literature and discussion on quantum suicide and immortality?
See also http://www.acceleratingfuture.com/steven/?p=215
Replies from: thomblake, SforSingularity, Christian_Szegedy, Psychohistorian, Christian_Szegedy
↑ comment by thomblake · 2009-09-18T18:40:21.016Z · LW(p) · GW(p)
If you assume 3, average utilitarianism says you should kill anyone who has below average utility, since that raises the average. So in the end you kill everyone except the one person who has the highest utility.
You really need to also assume killing people doesn't decrease utility for those who are left, which usually doesn't work too well for humans...
↑ comment by SforSingularity · 2009-09-18T12:54:08.799Z · LW(p) · GW(p)
Which is why we do not really believe in average utilitarianism...
Replies from: Psychohistorian
↑ comment by Psychohistorian · 2009-09-18T19:59:45.960Z · LW(p) · GW(p)
It is possible to kill someone without invoking any negative experiences.
Average utilitarianism requires more: it requires that it is possible to have a policy of systematically killing most people that does not result in negative experiences. This does not seem meaningfully possible for any agents that are even vaguely human, so this is a straw-man objection to average utilitarianism, and a pretty bad one at that.
↑ comment by Christian_Szegedy · 2009-09-18T20:39:47.538Z · LW(p) · GW(p)
No, not if you assign a large negative utility to total death; that is, my dying unconditionally has a large negative value to me.
If I assume that the probability of my unconditional death does not change significantly after the experiment, then it could make sense to play the roulette.
↑ comment by Psychohistorian · 2009-09-18T19:56:50.314Z · LW(p) · GW(p)
average utilitarianism says you should kill anyone who has below average utility
This assumes that killing people with low utility has absolutely no effect on the utility of anyone else, and that living in a world where you will be killed if you're not happy enough has no negative effect on your happiness. This is, to put it mildly, completely and totally false without very radically altering the human mind.
On a related note, the beatings will continue until morale improves.
↑ comment by Christian_Szegedy · 2009-09-18T18:33:36.161Z · LW(p) · GW(p)
Thanks for the links. They look interesting.
The basic idea seems identical to the quantum suicide scenarios. I did not know about them; my only contribution is to put up a convincing concrete scenario in which suicide offers a benefit to each player.
comment by Vladimir_Nesov · 2009-09-18T12:05:32.341Z · LW(p) · GW(p)
If the worlds in your MWI experiment are considered independent, you might as well do the same in a single deterministic world. Compare the expected utility calculations for one world and many-worlds: they'll look the same, you just exchange "many-worlds" with "possible worlds" and averaging with expectation. MWI is morally uninteresting, unless you do nontrivial quantum computation. Just flip a logical coin from pi and kill the other guys.
More specifically: when you are saying "everyone survives in one of the worlds", this statement gets intuitive approval (as opposed to doing the experiment in a deterministic world where all participants but one "die completely"), but there is no term in the expected utility calculation that corresponds to the sentiment.
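One way to read the "logical coin from pi" above: use digits of pi that nobody has examined in advance. A sketch using the mpmath library (the digit index is an arbitrary convention the participants would agree on beforehand):

```python
from mpmath import mp

def logical_coin(index):
    """A deterministic 'coin': the decimal digit of pi at position
    `index`, reduced mod 2. Verifiable by anyone, biasable by no one."""
    mp.dps = index + 10                   # enough working precision
    digits = str(mp.pi).replace('.', '')  # "31415926535..."
    return int(digits[index]) % 2

print([logical_coin(i) for i in range(1000, 1008)])
```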
Replies from: SforSingularity, Christian_Szegedy
↑ comment by SforSingularity · 2009-09-18T13:06:04.204Z · LW(p) · GW(p)
MWI is morally uninteresting, unless you do nontrivial quantum computation.
I think that the intuition at stake here is something about continuity of conscious experience. The intuition that Christian might have, if I may anticipate him, is that everyone in the experiment will actually experience getting $750,000, because somehow the world-line of their conscious experience will continue only in the worlds where they do not die.
I think that, in some sense, this is a mistake, because it fundamentally rests upon a very very strong intuition that there exists a unique person who I will be in the future. This is an intuition that evolution programmed into us for obvious reasons: we are more likely to act like a good Could-Should-Would agent if we think that the benefits and costs associated with our actions will accrue to us rather than to some vague probability distribution over an infinite set of physically realized future continuations-of-me, with the property that whatever I do, some of them will die, and whatever I do, some of them will be rich and happy.
↑ comment by Christian_Szegedy · 2009-09-18T20:46:50.830Z · LW(p) · GW(p)
You can assign high negative utility to certain death.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2009-09-18T20:50:58.660Z · LW(p) · GW(p)
You can assign high negative utility to certain death.
You can, but then you should also do so in the expected utility calculation, which is never actually done in most discussions of MWI in this context, and isn't done in this post. The problem is using MWI as a rationalization for invalid intuitions.
Replies from: Christian_Szegedy
↑ comment by Christian_Szegedy · 2009-09-18T21:57:29.329Z · LW(p) · GW(p)
I did it implicitly in the OP. Assuming that, you get a better expected value in the quantum scenario.
A logical coin flip would be much scarier (and have negative utility), since it assumes certain death for some of the participants.
(I don't buy quantum immortality arguments. They resemble the Achilles-and-tortoise problem: being rescued at shorter and shorter intervals does not imply being rescued for any fixed length of time.)
comment by PhilGoetz · 2021-10-01T18:40:50.440Z · LW(p) · GW(p)
An easy reason not to play quantum roulette is that, if your theory justifying it is right, you don't gain any expected utility; you just redistribute it, in a manner most people consider unjust, among different future yous. If your theory is wrong, the outcome is much worse. So it's at the very best a break even / lose proposition.
comment by EricSaumur · 2024-08-20T12:06:59.867Z · LW(p) · GW(p)
Maybe this is the Great Filter behind the Fermi paradox. It is in the interest of every member of society to join one of these groups. Sure, in every world that they individually experience, they each end up rich. But they also all end up in a world where their civilization's population has suddenly dropped to 1/16th of what it was. Per capita goods production may drop even more, because all these suddenly rich people will then want to retire. So they end up poorer in real terms. Likely civilization collapses and loses the ability to do radio astronomy, and we don't see any radio signals from other civilizations. Fermi paradox explained (with lots of 'maybe's).
comment by Jack · 2009-09-18T22:08:08.511Z · LW(p) · GW(p)
I think there will probably have to be some set of worlds in which the losers of the game are alive but near death (it really is necessary to specify the means by which they die). So this really is a gamble, since participating means there is a very slight chance you will wake up $50,000 poorer and needing an ambulance. To figure out the overall average utility of the game, one would need to include the possible worlds in which the killing mechanism fails. Average utility over the universal wave function would probably still go up, but there would be a few branches where average utility would go down, dramatically. So the answer would depend on whether you were quantifying over the entire wave function or doing individual worlds.
Or were you ignoring that as part of the thought experiment?
EDIT: I just thought it all out and I think the probability of experiencing surviving the killing mechanism might be 5 in 6. See here. Basically, when calculating your future experience, the worlds in which you survive take over the probability space of the worlds in which you don't, so that, given about a 5 in 6 chance of your death, there is actually about a 5 in 6 chance you experience losing and surviving: in the set of worlds in which you lose, there is a 100% chance of experiencing one of the very few worlds in which you survive. This would make playing quantum Russian roulette considerably stupider than playing regular Russian roulette, unless you have a killing mechanism that can only fail by failing to do any damage at all.
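A sketch contrasting the two accountings described above, with a hypothetical failure rate for the killing mechanism (all numbers are purely illustrative):

```python
P_LOSE = 15 / 16    # chance of losing the 16-player game
EPS = 1e-6          # hypothetical chance the mechanism wounds, not kills

# Ordinary accounting: weight outcomes by probability, conditioned on
# having any future experience at all.
p_any_experience = (1 - P_LOSE) + P_LOSE * EPS
print(P_LOSE * EPS / p_any_experience)   # ~1.5e-05: waking up wounded

# Jack's accounting: conditional on losing, you are certain to experience
# the rare surviving branch, so the failure rate drops out entirely.
print(P_LOSE)                            # 0.9375: waking up wounded
```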
comment by Jordan · 2009-09-18T19:05:21.650Z · LW(p) · GW(p)
With a similar technique you can solve any NP-complete problem. Actually, you can solve much harder problems. For instance, you can minimize any function you have enough computing power to compute. You could apply this, for instance, to genetic algorithms and arrive at the globally fittest solution. You could likewise solve for the "best" AI given some constraints, such as: find the program less than 10000 characters long that performs best on a Turing test.
Replies from: Theist, Christian_Szegedy, cousin_it
↑ comment by Theist · 2009-09-18T23:45:24.474Z · LW(p) · GW(p)
If mangled worlds is correct (and I understand it correctly), then sufficiently improbable events fail to happen at all. What kind of limit would this place on the problems you can solve with "quantum suicide voodoo"?
↑ comment by Christian_Szegedy · 2009-09-18T20:45:11.413Z · LW(p) · GW(p)
This is a very interesting point and somehow shakes my belief in the current version of MWI.
What I could imagine is that, since the total information content of the multiverse must be finite, there is some additional quantization going on that makes highly improbable branches "too fuzzy" to be observable. Or something like that.
Replies from: cousin_it
↑ comment by cousin_it · 2009-09-18T20:47:31.837Z · LW(p) · GW(p)
Not likely. You're already in a highly improbable branch, and it's getting less probable every millisecond.
Replies from: Christian_Szegedy, Christian_Szegedy
↑ comment by Christian_Szegedy · 2009-09-19T02:15:11.296Z · LW(p) · GW(p)
We have seen in the sister topic that mangled-worlds theory can in fact account for such information loss. However, mangled worlds has deficiencies similar to single-world theories: non-local action, nonlinearity, discontinuity. That does not mean it can't be true.
↑ comment by Christian_Szegedy · 2009-09-18T20:57:09.505Z · LW(p) · GW(p)
I would not state this for sure. There could still be quite a difference between astronomically unlikely and superastronomically unlikely.
So for example if the total information content of the multiverse is bounded by 2^1000 bits, you could go down to an insanely small probability of 1/2^(2^1000) but not to the "merely" 1/2^(2^1000) times less probable 1/2^(2^1001) .
Replies from: pengvado
↑ comment by pengvado · 2009-09-19T02:33:55.437Z · LW(p) · GW(p)
Why would the information content of a quantum universe be measured in bits, rather than qubits? 2^1000 qubits is enough to keep track of every possible configuration of the Hubble volume, without discarding any low magnitude ones. (Unless of course QM does discard low magnitude branches, in which case your quantum computer would too... but such a circular definition is consistent with any amount of information content.)
↑ comment by cousin_it · 2009-09-18T20:51:22.282Z · LW(p) · GW(p)
Actually, you can solve much harder problems. For instance, you can minimize any function you have enough computing power to compute.
Why much harder? If computing the function is in P, minimization is in NP.
EDIT: this statement is wrong, see Wei Dai's comments below for explanation. Corrected version: "minimization is at most polynomially harder than an NP problem".
Replies from: Wei_Dai, Vladimir_Nesov, Jordan
↑ comment by Wei Dai (Wei_Dai) · 2009-09-18T21:30:50.106Z · LW(p) · GW(p)
If computing the function is in P, minimization is in NP.
Why is that? In order for function minimization to be in NP, you have to be able to write a polytime-checkable proof of the fact that some input is a minimum. I don't think that's true in general.
I also don't see how function minimization can be accomplished using quantum suicide. You can compute the value of the function on every possible input in parallel, but how do you know that your branch hasn't found the minimum and therefore should commit suicide?
This seems like a relevant article, although it doesn't directly address the above questions.
ETA: I do see a solution now how to do function minimization using quantum suicide: Guess a random x, compute f(x), then flip a quantum coin n*f(x) times, and commit suicide unless all of them come out heads. Now the branch that found the minimum f(x) will have a measure at least 2^n times greater than any other branch.
ETA 2: No, that's not quite right, since flipping a quantum coin n*f(x) times takes time proportional to f(x) which could be exponential. So I still don't know how to do this.
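Classically, the branch-measure bookkeeping of the first ETA can still be mimicked by rejection sampling, setting aside the runtime worry from ETA 2. A sketch (the test function, grid size, and n are arbitrary stand-ins):

```python
import random
from collections import Counter

def f(x):
    """Arbitrary test function on [0, 1); its minimum is at x = 0.3."""
    return 0.01 + 0.8 * (x - 0.3) ** 2

def surviving_guess(n=100, grid=64):
    """Guess x uniformly, then 'survive' with probability 2**(-n*f(x)),
    i.e. all n*f(x) quantum coins come up heads. Surviving branches
    concentrate on the minimizer as n grows."""
    while True:
        x = random.randrange(grid) / grid
        if random.random() < 2.0 ** (-n * f(x)):
            return x

samples = Counter(surviving_guess() for _ in range(300))
print(samples.most_common(3))   # mass piles up on grid points near 0.3
```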
Replies from: cousin_it, Jordan, cousin_it, Vladimir_Nesov
↑ comment by cousin_it · 2009-09-18T23:11:46.519Z · LW(p) · GW(p)
Why is that? In order for function minimization to be in NP, you have to be to write a polytime-checkable proof of the fact that some input is a minimum. I don't think that's true in general.
...That's an interesting way to look at it. For example take the traveling salesman problem: it's traditionally converted to a decision problem by asking if there's a route cheaper than X. Note that only "yes" answers have to have quickly checkable certificates - this is NP, not NP intersected with co-NP. Now if we start asking if X is exactly the minimum instead, would that be equivalent in complexity? And would it be helpful, seeing as it takes away our ability to do binary search?
In short, I'm a little funny in the head right now, but still think my original claim was correct.
Replies from: Wei_Dai, Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2009-09-19T04:07:20.093Z · LW(p) · GW(p)
This paper (which I found via this Ph.D. thesis) says that for TSP, the question "Is the optimal value equal to k" is D^P-complete. It also says D^P contains both NP and coNP, so assuming NP!=coNP, that problem is not in NP.
Your original claim was "If computing the function is in P, minimization is in NP." Technically, this sentence doesn't make sense because minimization is a functional problem, whereas NP is a class of decision problems. But the ability to minimize a function certainly implies the ability to determine whether a value is a minimum, and since that problem is not in NP, I think your claim is wrong in spirit as well.
As Nesov pointed out, TSP is in FP^NP, so perhaps a restatement of your claim that is correct is "If computing the function takes polynomial time, then it can be minimized in polynomial time given an NP oracle." This still seems to imply that function minimization isn't much harder than NP-complete problems.
Are there any problems in PostBQP that can be said to be much harder than NP-complete problems? I guess you asked that already.
Replies from: cousin_it, cousin_it
↑ comment by cousin_it · 2009-09-19T04:35:44.458Z · LW(p) · GW(p)
I can't read those links, but determining whether an input is a minimum seems to be in co-NP because it allows easy verification of a "no" answer by counterexample. So have those people proved that NP=co-NP, or am I just being dense?
This still seems to imply that function minimization isn't much harder than NP-complete problems.
Yep, as a programmer I should've said "at most polynomially harder than a problem in NP". You're right that my wording was bad. I still stand by the spirit, though :-)
↑ comment by Wei Dai (Wei_Dai) · 2009-09-19T00:30:10.928Z · LW(p) · GW(p)
I'm a bit confused too, but I found a Ph.D. thesis that answers a bunch of these questions. I'm still reading it.
On page 5, it says that for TSP, the question "Is the optimal value equal to k" is D^P-complete (which is something I haven't heard of before).
↑ comment by Jordan · 2009-09-18T23:08:19.803Z · LW(p) · GW(p)
You can't get the exact answer, but you can approach it exponentially quickly by doing a binary search. So, if you want N digits of accuracy, you're going to need O(N) time. Someone mentioned this elsewhere but I can't find the comment now.
Your method above would work better, actually (assuming the function is valued from 0 to 1). Just randomly guess x, compute f(x)^n, then place a qubit in a superposition such that it is f(x)^n likely to be 0, and 1 - f(x)^n likely to be 1. Measure the qubit. If it is 0, quantum suicide. You can do the whole thing a few times, taking n = 10, n = 1000, and so on, until you're satisfied you've actually got the minimum. Of course, there's always the chance there's a slightly smaller minimum somewhere, so we're just getting an approximate answer like before, albeit an incredibly good approximate answer.
↑ comment by Vladimir_Nesov · 2009-09-18T21:36:39.177Z · LW(p) · GW(p)
It's a classical point -- you can replace the question of "What is the minimal value of f(x) (what is the witness x that gives the minimal value)?" by "Is there a parameter x that gives the value f(x) less than C (what is an example of such x)?", and then use binary search on C to pinpoint the minimum. Since being able to write down the answer in polynomial time must be a given, you can take the ability to run the search on C in polynomial time for granted.
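A sketch of this reduction, with the NP oracle mocked by brute force (the oracle interface is the point here, not its implementation; names are illustrative):

```python
def exists_below(f, xs, c):
    """One 'oracle' call: is there an x with f(x) < c? (Brute force here;
    this is the query an NP oracle, or suicide voodoo, would answer.)"""
    return any(f(x) < c for x in xs)

def minimize(f, xs, lo=0.0, hi=1.0, bits=20):
    """Binary search on the threshold C: `bits` oracle calls bracket the
    minimum of f to `bits` bits of precision."""
    for _ in range(bits):
        mid = (lo + hi) / 2
        if exists_below(f, xs, mid):
            hi = mid      # a witness exists below mid
        else:
            lo = mid      # the minimum lies at or above mid
    return lo, hi

f = lambda x: (x - 0.3) ** 2
xs = [i / 1000 for i in range(1000)]
print(minimize(f, xs))    # brackets the true minimum 0.0 within ~1e-6
```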
↑ comment by Vladimir_Nesov · 2009-09-18T23:26:46.138Z · LW(p) · GW(p)
It's in FP^NP, not in NP. Only the decision problem for whether a given value C is more than the minimum is in NP, not minimization itself.
Replies from: cousin_it
↑ comment by cousin_it · 2009-09-18T23:45:52.735Z · LW(p) · GW(p)
Yes, that's certainly right, thanks. I was careless in assuming that people would just mentally convert everything to decision problems, like I do :-)
ETA: Vladimir, I couldn't find any proof that NP is strictly contained in PP; is it known? Our thread seems to depend on that question only. It should be true because PP is so freaky powerful, but nothing's turned up yet.
Replies from: pengvado, Wei_Dai, Jordan
↑ comment by Wei Dai (Wei_Dai) · 2009-09-19T07:39:50.036Z · LW(p) · GW(p)
On the computational power of PP and ⊕P says it gives strong evidence that PP is strictly harder than PH, which contains NP.
↑ comment by Jordan · 2009-09-19T00:59:31.484Z · LW(p) · GW(p)
According to this FNP is strictly harder than NP.
No idea about PP. Where did PP come up, or how does it relate to FNP?
ETA: Ah, I see. Aaronson's paper here shows that postselection (which is what our suicide voodoo is) gives you PP. Since postselection also gives you FNP, and since FNP is harder than NP, then we should have that PP is strictly harder than NP.
Replies from: cousin_it
↑ comment by cousin_it · 2009-09-19T04:02:10.019Z · LW(p) · GW(p)
Jordan, I'm really sorry to inject unneeded rigor into the discussion again, but the statement "FNP is strictly harder than NP" doesn't work. It makes sense to talk about P=NP because P and NP are both sets of decision problems, but FNP is a set of function problems, so to compare it with NP you have to provide a mapping of some sort. Thus your argument doesn't prove that PP>NP yet.
For me personally, function problems are exotic beasts and I'd prefer to settle the power of quantum voodoo on decision problems first :-)
Replies from: Jordan
↑ comment by Jordan · 2009-09-19T17:56:02.985Z · LW(p) · GW(p)
There's a natural partial mapping in my mind, but a rigorous mapping is beyond me. Check out this. Apparently it can be shown that FNP is strictly harder than NP, under certain conditions. I didn't understand the condition, or whether it was reasonable.
Regardless of all this complexity jazz, which has definitely exceeded my grasp, my random dictionary example still demonstrates that certain problems can be solved exponentially faster with postselection than without, even if it doesn't prove that FNP > NP.
ETA: Coming from a numerics background, decision problems seem like exotic beasts. I'd prefer to settle the power of quantum voodoo on function problems first =P
↑ comment by Jordan · 2009-09-18T21:47:30.367Z · LW(p) · GW(p)
Consider the following problem, given N:
Create a list with 2^N entries, where each entry is a random number from 0 to 1. What is the smallest entry?
This is a function minimization problem, where the function takes n and returns the n-th element of the list. The cost of computing the function is O(1). There is no way to minimize it, however, without looking at every value, which takes O(2^N). With quantum suicide voodoo, we can minimize it in O(1).
Replies from: cousin_it
↑ comment by cousin_it · 2009-09-18T22:28:39.177Z · LW(p) · GW(p)
1) The problem of finding the smallest entry in a list is linear-time with respect to the size of the input; calling that size 2^N instead of M doesn't change things.
2) Accessing the nth element of a list isn't O(1), because you have to read the bits of n for chrissake.
3) I'm not sure how you're going to solve this with quantum voodoo in O(1), because just setting up the computation will take time (or space if you're parallel) proportional to the length of the input list.
Replies from: Jordan
↑ comment by Jordan · 2009-09-18T22:44:13.990Z · LW(p) · GW(p)
1) The problem of finding the smallest entry in a list is linear-time with respect to the size of the input; calling that size 2^N instead of M doesn't change things.
You can call it M, if you like. Then individual function evaluations cost log(log(M)). The point is the relative cost difference between function evaluations and function minimization.
2) Accessing the nth element of a list isn't O(1), because you have to read the bits of n for chrissake.
Good point. Function calls will be O(log(N)).
3) I'm not sure how you're going to solve this with quantum voodoo in O(1), because just setting up the computation will take time (or space if you're parallel) proportional to the length of the input list.
Right, the setup of the problem is separate. It could have been handed to you on a flash drive. The point is there exist functions such that we can calculate single evaluations quickly, but can't minimize the entire function quickly.
Replies from: cousin_it
↑ comment by cousin_it · 2009-09-18T23:01:39.568Z · LW(p) · GW(p)
The point is the relative cost difference between function evaluations and function minimization.
Well yeah, but the definitions of P and NP involve the size of the input. Your original claim was that we can solve "much harder than NP-complete" problems with quantum voodoo. I don't think you've proved it yet, but if you revise it to somehow talk about "relative cost differences", it does become somewhat more believable.
comment by ThirdContact · 2011-04-02T10:58:17.650Z · LW(p) · GW(p)
Like it.
I'm developing a web game called 'quantum roulette' for the website of my upcoming feature Third Contact, and I happened on this thread.
The game will be at www.thirdcontactmovie.com at some point. You might find the film of interest too. Love the discussion.
comment by Jacob Falkovich (Jacobian) · 2015-08-01T03:54:30.863Z · LW(p) · GW(p)
What if we could use this mechanism to solve international conflicts? The main problem with quantum Russian roulette is that in 15/16 branches your friends and family are very sad. However, most of my friends and family, and their friends and family, are citizens of one nation...
Let's put all the Israelis in one room, all the Palestinians in another, and then flip a quantum coin to release a neurotoxin in one of the rooms. Every single person will experience waking up to a beautiful world where his people live peacefully in the entire territory from the Dead Sea to the Mediterranean, and all his enemies are taken care of!
Replies from: Lumifer
↑ comment by Lumifer · 2015-08-01T23:26:43.672Z · LW(p) · GW(p)
Let's put all the Israelis in one room, all the Palestinians in another and then flip a quantum coin to release a neuro-toxin in one of the rooms.
Putting Jews into a closed room and releasing a lethal gas has been tried. On industrial scale.