[SEQ RERUN] Worse Than Random
post by MinibearRex · 2012-10-22T05:02:36.354Z · LW · GW · Legacy · 8 comments
Today's post, Worse Than Random, was originally published on 11 November 2008. A summary (taken from the LW wiki):
If a system does better when randomness is added into its processing, then it must somehow have been performing worse than random. And if you can recognize that this is the case, you ought to be able to generate a non-randomized system.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Lawful Uncertainty, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
8 comments
comment by JoshuaZ · 2012-10-22T13:52:38.043Z · LW(p) · GW(p)
The post isn't accurate. Randomness is viable when one is dealing with an actual adversary in the environment. In some sense, the standard conjectures that P is equal to BPP but that PP is larger amount to a set of conjectures saying that one can get some weak advantage from randomness. The contexts where one cannot get any advantage are those where there's no active adversary, or where you care about average-case constraints rather than worst-case constraints, and that is only one set of contexts.
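(A standard illustration of that weak advantage is polynomial identity testing, which has fast randomized algorithms but no known polynomial-time deterministic one. A minimal Python sketch of the Schwartz-Zippel idea; the two example polynomials are just illustrative:)

```python
import random

def probably_identical(p, q, degree_bound=10, trials=30):
    """Schwartz-Zippel test: two distinct polynomials of degree <= d agree
    at a uniformly random point of a size-m field with probability <= d/m,
    so repeated random evaluation detects any difference with high probability."""
    m = 2**61 - 1  # a prime modulus, so the arithmetic happens in a field
    for _ in range(trials):
        x = random.randrange(m)
        if p(x) % m != q(x) % m:
            return False  # witness found: the polynomials definitely differ
    return True  # identical, except with probability <= (degree_bound/m)**trials

# (x + 1)^2 versus its expansion: reported identical
print(probably_identical(lambda x: (x + 1) ** 2, lambda x: x * x + 2 * x + 1))
```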
Replies from: Oscar_Cunningham, MinibearRex, blashimov
↑ comment by Oscar_Cunningham · 2012-10-22T16:26:02.879Z · LW(p) · GW(p)
To quote Silas in the comments for Eliezer's next post: "Randomness is like poison. Yes, it can benefit you, but only if you use it on others."
Replies from: army1987
↑ comment by A1987dM (army1987) · 2012-10-22T23:26:07.636Z · LW(p) · GW(p)
Yeah, but if you don't have perfect memory (as in the absent-minded driver problem), your past and future selves count as others.
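(Concretely: with the standard absent-minded driver payoffs (0 for exiting at the first intersection, 4 for exiting at the second, 1 for continuing past both), a memoryless deterministic policy scores at most 1, while continuing with probability 2/3 yields an expected 4/3. A quick Python check:)

```python
def expected_payoff(p):
    """Absent-minded driver: without memory, the same continue-probability p
    applies at both intersections. Exit first = 0, exit second = 4,
    continue past both = 1."""
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

# Deterministic policies are the endpoints: always exit (p=0) scores 0,
# always continue (p=1) scores 1; the randomized optimum beats both.
best_p = max((i / 1000 for i in range(1001)), key=expected_payoff)
print(best_p, expected_payoff(best_p))  # approximately 0.667 and 1.333
```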
Replies from: Oscar_Cunningham
↑ comment by Oscar_Cunningham · 2012-10-23T13:29:59.072Z · LW(p) · GW(p)
Indeed. Wei_Dai hints that this might be true more generally here.
↑ comment by MinibearRex · 2012-10-22T21:46:51.635Z · LW(p) · GW(p)
From the next post in the sequences:
There does exist a rare class of occasions where we want a source of "true" randomness, such as a quantum measurement device. For example, you are playing rock-paper-scissors against an opponent who is smarter than you are, and who knows exactly how you will be making your choices. In this condition it is wise to choose randomly, because any method your opponent can predict will do worse-than-average.
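(A minimal sketch of that advice: because each move beats exactly one other move, uniform play wins, loses, and draws each with probability 1/3 against any opponent strategy, so even a perfect predictor can only drive you to an expected score of zero, not below it.)

```python
import random  # against a genuine adversary, the secrets module is safer,
               # since a predictable PRNG state is itself an exploitable pattern

MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def my_move():
    # Uniform play: whatever the opponent picks, P(win) = P(lose) = P(draw) = 1/3,
    # so the expected score is 0 and no prediction of "my method" helps them.
    return random.choice(MOVES)

def score(mine, theirs):
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1
```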
↑ comment by blashimov · 2012-10-22T15:49:40.552Z · LW(p) · GW(p)
I don't think it was about randomness in general; I followed the links you posted and those show techniques that, while having some randomness, aren't totally chance outputs (0.5 yes / 0.5 no, for example, instead of 2/3 correct / 1/3 incorrect). If you get a system that is wrong 80% of the time between two choices, it must have some way of differentiating the correct answer from the incorrect one, and choosing the incorrect one. If you can find why that is, you can invert it. Can you clarify more about what you felt was inaccurate? Given my software experience, though, I will accept that the inferential distance is too large to be worth your time.
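(The inversion step as code; `bad_predictor` here is a hypothetical stand-in for any binary classifier that is wrong 80% of the time:)

```python
def invert(predictor):
    """A binary predictor that is correct only 20% of the time is strongly
    anti-correlated with the truth; flipping its output is a deterministic
    wrapper that is correct 80% of the time, with no randomness needed."""
    return lambda x: not predictor(x)

# usage, given some hypothetical bad_predictor:
# good_predictor = invert(bad_predictor)
```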
comment by gjm · 2012-10-23T11:34:30.669Z · LW(p) · GW(p)
"Random" means, roughly, "independent of other stuff that might interact with what you're doing". When the environment providing that other stuff is simple, you can often do simple things that achieve that independence very effectively. When the environment is complicated, you may end up basically wanting to do something that has no simple patterns in it, which (depending on the definition of "simple") amounts to passing some randomness tests. When the environment is actively hostile, you want to be unpredictable by your adversary, which means having no patterns it can spot, and using a source of "real" randomness is a nice effective way to achieve that.
In the "easier" cases, indeed using a general-purpose thing-that-avoids-simple-patterns (i.e., something "random" in the usual sense) is an effective technique but something that takes more precise notice of the particular sorts of patterns you want to avoid can do better. In the "harder" cases, and especially those where you have an active, intelligent, adaptive adversary, it's entirely possible that randomness-as-usually-understood is the best practical solution.
comment by Qiaochu_Yuan · 2013-04-09T07:43:33.083Z · LW(p) · GW(p)
As far as I can tell, this post is wrong as a matter of mathematical fact. There exist examples of problems in certain domains where a known randomized algorithm provably performs better than any deterministic algorithm.
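(One such provable separation is AND-OR tree evaluation: in the query model, any deterministic algorithm must read all n leaves on some input, while randomized short-circuit evaluation in random child order (Snir; Saks and Wigderson) reads only about n^0.754 leaves in expectation. A sketch of the randomized evaluator, with trees given as nested lists of booleans and levels alternating between AND and OR:)

```python
import random

def evaluate(node, op="and"):
    """Evaluate an AND-OR tree, visiting children in random order and
    short-circuiting; the random order is what beats every deterministic
    evaluation order on worst-case inputs."""
    if isinstance(node, bool):  # leaf
        return node
    children = list(node)
    random.shuffle(children)
    next_op = "or" if op == "and" else "and"
    for child in children:
        value = evaluate(child, next_op)
        if op == "and" and not value:
            return False  # an AND node with a False child is False
        if op == "or" and value:
            return True   # an OR node with a True child is True
    return op == "and"    # every child survived the short-circuit test

# e.g. an AND of two ORs:
print(evaluate([[True, False], [False, False]]))  # False
```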