Rationality Quotes April 2013
post by Vaniver · 2013-04-08T02:00:38.710Z · LW · GW · Legacy · 285 comments
Another monthly installment of the rationality quotes thread. The usual rules apply:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, Overcoming Bias, or HPMoR.
- No more than 5 quotes per person per monthly thread, please.
285 comments
Comments sorted by top scores.
comment by Qiaochu_Yuan · 2013-04-11T09:13:10.014Z · LW(p) · GW(p)
Replies from: Eliezer_Yudkowsky, Oscar_Cunningham, yli, army1987
In a class I taught at Berkeley, I did an experiment where I wrote a simple little program that would let people type either "f" or "d" and would predict which key they were going to push next. It's actually very easy to write a program that will make the right prediction about 70% of the time. Most people don't really know how to type randomly. They'll have too many alternations and so on. There will be all sorts of patterns, so you just have to build some sort of probabilistic model. Even a very crude one will do well. I couldn't even beat my own program, knowing exactly how it worked. I challenged people to try this and the program was getting between 70% and 80% prediction rates. Then, we found one student that the program predicted exactly 50% of the time. We asked him what his secret was and he responded that he "just used his free will."
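The quote only says "some sort of probabilistic model", so the following is a hypothetical reconstruction, not the actual program: an order-k frequency table that counts which key followed each short context and predicts the majority. Even this crude sketch exploits a player who alternates too much.

```python
from collections import defaultdict

class KeyPredictor:
    """Order-k frequency model (illustrative; the original program is
    not described in detail): count which key followed each length-k
    context so far, and predict the majority continuation."""
    def __init__(self, k=3):
        self.k = k
        self.counts = defaultdict(lambda: defaultdict(int))
        self.history = ""

    def predict(self):
        seen = self.counts[self.history[-self.k:]]
        # Arbitrary default before any data for this context:
        return max(seen, key=seen.get) if seen else "f"

    def update(self, key):
        self.counts[self.history[-self.k:]][key] += 1
        self.history += key

# A typist who alternates too much is exploited almost immediately:
pred, hits = KeyPredictor(k=3), 0
for key in "fd" * 100:
    hits += pred.predict() == key
    pred.update(key)
print(hits / 200)  # 0.99 -- only the first few guesses miss
```

A genuinely random typist would hold this model to 50%, which is exactly what the student in the story achieved.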
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-11T18:22:30.015Z · LW(p) · GW(p)
Holy Belldandy, it sounds like someone located the player character. Everyone get your quests ready!
Replies from: MugaSofer, DaFranker, MarkusRamikin, JoshuaFox
↑ comment by MarkusRamikin · 2013-04-22T08:47:52.784Z · LW(p) · GW(p)
And rewards.
↑ comment by Oscar_Cunningham · 2013-04-11T23:27:21.148Z · LW(p) · GW(p)
My bet is that the student had many digits of pi memorised and just used their parity.
↑ comment by yli · 2013-04-11T20:25:43.142Z · LW(p) · GW(p)
I would have easily won that game (and maybe made a quip about free will when asked how...). All you need is some memorized secret randomness. For example, a randomly generated password that you've memorized, but you'd have to figure out how to convert it to bits on the fly.
Personally I'd recommend going to random.org, generating a few hexadecimal bytes (which are pretty easy to convert to both bits and numbers in any desired range), memorizing them, and keeping them secret. Then you'll always be able to act unpredictably.
Well, unpredictably to a computer program. If you want to be able to be unpredictable to someone who's good at reading your next move from your face, you would need some way to not know your next move before making it. One way would be to run something like an algorithm that generates the binary expansion of pi in your head, delaying the calculation of the next bit until the best moment. Of course, you wouldn't actually choose pi, but something less well-known and preferably easier to calculate. I don't know any such algorithms, and I guess if anyone knows a good one, they're not likely to share. But if it were something like a pseudorandom bitstream generator that takes a seed, it could be shared, as long as you didn't share your seed. If anyone's thought about this in more depth and is willing to share, I'm interested.
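A minimal sketch of the seeded-generator idea: a tiny Lehmer-style recurrence, with constants chosen small enough to iterate mentally. The constants here are purely illustrative, and 97 states is of course far too few for real unpredictability; the point is only the shape of the scheme (shareable algorithm, secret seed).

```python
def mental_bits(seed, n, a=13, m=97):
    """Tiny Lehmer-style generator: x -> a*x mod m, emit the parity
    of each state. Illustrative constants only -- small enough to run
    in your head, far too small to resist an attacker."""
    bits, x = [], seed % m or 1  # seed must be nonzero mod the prime m
    for _ in range(n):
        x = (a * x) % m
        bits.append(x % 2)
    return bits
```

The algorithm can be public; only the seed needs to stay secret, which matches the comment's observation about shareable generators.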
Replies from: gwern, Yvain
↑ comment by gwern · 2013-04-11T20:41:14.767Z · LW(p) · GW(p)
http://blog.yunwilliamyu.net/2011/08/14/mindhack-mental-math-pseudo-random-number-generators/
Replies from: yli, felzix
↑ comment by Scott Alexander (Yvain) · 2013-04-11T22:36:06.933Z · LW(p) · GW(p)
When I need this I just look at the nearest object. If the first letter is between a and m, that's a 0. If it's between n and z, that's a 1. For larger strings of random bits, take a piece of memorized text (like a song you like) and do this with the first letter of each word.
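The trick above, as a sketch (the function name is mine): one bit per word, split at m/n.

```python
def bits_from_text(text):
    """One bit per word: first letter a-m -> 0, n-z -> 1."""
    return [0 if word[0].lower() <= "m" else 1
            for word in text.split() if word[0].isalpha()]
```

Any memorized text works as the source, as long as the observer doesn't know which text you picked.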
Replies from: SaidAchmiz, orthonormal, FiftyTwo
↑ comment by Said Achmiz (SaidAchmiz) · 2013-04-12T00:19:11.711Z · LW(p) · GW(p)
There's an easier way: look at the time.
Seconds are even? Type 'f'. Odd? Type 'd'. (Or vice-versa. Or use minutes, if you don't have to do this very often.)
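As a one-line sketch (the even/odd convention is arbitrary, as the comment says):

```python
import time

def key_from_clock(seconds=None):
    """'f' on even seconds, 'd' on odd (or vice versa)."""
    if seconds is None:
        seconds = time.localtime().tm_sec
    return "f" if seconds % 2 == 0 else "d"
```

This beats a keystroke predictor only because the predictor can't see your clock; against a human opponent standing next to you, it leaks.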
A while ago there was an article (in NYTimes online, I think) about a program that could beat anyone in Rock-Paper-Scissors. That is, it would take a few iterations, and learn your pattern, and do better than chance against you.
It never got any better than chance against me, because I just used the current time as a PRNG.
Edit: Found it. http://www.nytimes.com/interactive/science/rock-paper-scissors.html?_r=0
Edit2: Over 25 rounds, 12-6-7 (win-loss-tie) vs. the "veteran" computer. Try it and post your results! :)
Replies from: Desrtopa, None, Kindly, army1987, EGI, MarkusRamikin, James_Ernest
↑ comment by Desrtopa · 2013-04-12T01:06:17.610Z · LW(p) · GW(p)
Over 12 rounds against the veteran computer, I managed 5-4-3, just trying to play "counterintuitively" and play differently from how I expected the players whose information it aggregated would play.
Not enough repetitions to be highly confident that I could beat the computer in the long term, but I stopped because trying to be that counterintuitive is a pain.
Replies from: Roxolan
↑ comment by Roxolan · 2013-09-26T16:03:59.826Z · LW(p) · GW(p)
Got 7-6-7 with the same tactic. Apparently the computer only looks at the last 4 throws, so as long as you're playing against Veteran (where your own rounds will be lost in the noise), it should be possible for a human to learn "anti-anti-patterns" and do better than chance.
↑ comment by A1987dM (army1987) · 2013-04-12T13:32:45.224Z · LW(p) · GW(p)
19-18-13 over 50 rounds against the veteran, without using any external RNG, by looking away and thinking of something else so that I couldn't remember the results of previous rounds. (My after-lunch drowsiness probably helped.)
↑ comment by MarkusRamikin · 2013-04-23T08:49:20.079Z · LW(p) · GW(p)
9-6-10 here out of 25 rounds, using current time. :(
I remember doing way better than this a few months ago, just by playing naturally. Gonna blame sample size...
↑ comment by James_Ernest · 2013-04-23T07:17:45.284Z · LW(p) · GW(p)
Somehow managed 16-8-5 versus the veteran computer, by using the article's own text as a seed ("Computers mimic human reasoning by building on simple rules...") and applying a-h = rock, i-p = paper, q-z = scissors. I think this is the technique I will use against humans (I know a few people I would love to see flail against pseudo-randomness).
Replies from: Jiro, MarkusRamikin
↑ comment by Jiro · 2013-04-23T19:06:54.375Z · LW(p) · GW(p)
That should fail in the long run because it's unlikely that the frequency of letters in English divides so evenly that those rules make each choice converge to happening exactly 1/3 of the time.
I'd just generate the random numbers in my head. A useful thing to do is to pick a couple of numbers from thin air (which doesn't work by itself, because the human mind isn't good at picking 'random' numbers from thin air), add them together, and take the last digit (or, if you want 3 choices, take them mod 3).
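The add-and-reduce trick, as a sketch (names are mine): the residue of a sum is much closer to uniform than any single mentally-chosen number, because the biases of the individual picks wash out under modular addition.

```python
def combine_mod(picks, n=3):
    """Add several quickly-chosen numbers and keep the residue mod n."""
    return sum(picks) % n
```

With n=3 the result maps directly onto rock/paper/scissors; with n=10 it gives Jiro's "last digit".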
↑ comment by orthonormal · 2013-04-11T22:46:12.205Z · LW(p) · GW(p)
That'll be almost independent but not unbiased: I think that a-m will be more frequent than n-z. However, you could do the von Neumann trick: if you have an unfair coin and want a fair sequence of bits, take the first and second flips. HT is 0, TH is 1, and if you get HH or TT, check the third and fourth flips. Etc.
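The von Neumann trick described above, as a sketch: read the stream in non-overlapping pairs, keep only the mixed pairs, and discard the rest. The output is unbiased provided the input bits are independent, whatever their individual bias.

```python
def von_neumann(bits):
    """Von Neumann debiasing: 01 emits 0, 10 emits 1,
    00 and 11 are discarded."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]
```

The cost is throughput: for a fair source you keep on average only a quarter of the input pairs' bits, and fewer the more biased the source.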
Replies from: Oscar_Cunningham
↑ comment by Oscar_Cunningham · 2013-04-12T18:03:05.123Z · LW(p) · GW(p)
I just looked up the letter frequencies and it's 52% for a-m and 48% for n-z (for the initial letters of English words). Using 'l' instead of 'm' gives a 47/53 split, so 'm' is at least the best letter to use.
↑ comment by FiftyTwo · 2013-04-24T13:22:45.451Z · LW(p) · GW(p)
[Aside] When do you need to generate random numbers in your head? I can think of literally no time when I've needed to.
Replies from: ThrustVectoring
↑ comment by ThrustVectoring · 2013-04-24T14:56:41.042Z · LW(p) · GW(p)
If you have to make a close decision and don't have a coin to flip. Or at a poker tournament if you don't trust your own ability to be unpredictable.
↑ comment by A1987dM (army1987) · 2013-04-12T12:31:13.028Z · LW(p) · GW(p)
There once was some site that let you enter a sequence of “H” and “T” and test it for non-randomness (e.g. the distribution of the length of runs, the number of alternations, etc.), and after a couple attempts I managed to pass all or almost all the tests a few times in a row.
comment by D_Malik · 2013-04-04T07:23:23.289Z · LW(p) · GW(p)
There once was a hare who mocked a passing tortoise for being slow. The erudite tortoise responded by challenging the hare to a race.
Built for speed, and with his pride on the line, the hare easily won - I mean, it wasn't even close - and resumed his mocking anew.
Winston Rowntree, Non-Bullshit Fables
Replies from: xv15, lukeprog, roystgnr, Zubon, philh, BillyOblivion
↑ comment by xv15 · 2013-04-08T03:14:10.293Z · LW(p) · GW(p)
I've always thought there should be a version where the hare gets eaten by a fox halfway through the race, while the tortoise plods along safely inside its armored mobile home.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2013-04-08T18:13:03.822Z · LW(p) · GW(p)
↑ comment by roystgnr · 2013-04-08T18:10:33.942Z · LW(p) · GW(p)
On the meta-level, I'm not sure "quickness beats persistence" is a helpful lesson to teach. At the scale of things many LessWrongers would hope to help accomplish, both qualities are prerequisites, and it would be a mistake to believe that you don't have to worry about the latter just because you're one of the millions of people who are 99.9th percentile at the former.
On the base level, a non-bullshit version of this fable would look more like "There once was a hare being passed by a tortoise. Neither of them could talk. The end."
Replies from: MaoShan
↑ comment by Zubon · 2013-04-08T03:13:52.995Z · LW(p) · GW(p)
"Moral: life is inarguably a depressingly unfair endeavor."
Replies from: orthonormal, army1987
↑ comment by orthonormal · 2013-04-10T19:26:09.978Z · LW(p) · GW(p)
FTFY:
"Moral: life is inarguably a depressingly fair endeavor."
↑ comment by A1987dM (army1987) · 2013-04-08T18:12:05.759Z · LW(p) · GW(p)
What's unfair about that quote? The faster one did win. This would exemplify your moral.
Replies from: xv15, Zubon
↑ comment by xv15 · 2013-04-09T05:55:24.964Z · LW(p) · GW(p)
"Fairness" depends entirely on what you condition on. Conditional on the hare being better at racing, you could say it's fair that the hare wins. But why does the hare get to be better at racing in the first place?
Debates about what is and isn't fair are best framed as debates over what to condition on, because that's where most of the disagreement lies. (As is the case here, I suppose).
↑ comment by Zubon · 2013-04-19T02:04:21.071Z · LW(p) · GW(p)
The quote is the next line from the quote source.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2013-04-19T12:31:21.072Z · LW(p) · GW(p)
Huh, okay.
↑ comment by philh · 2013-04-08T07:29:06.688Z · LW(p) · GW(p)
On a similar note, there's http://www.thisamericanlife.org/radio-archives/episode/463/transcript - search for "Act Two".
↑ comment by BillyOblivion · 2013-04-23T01:16:01.085Z · LW(p) · GW(p)
http://www.quickmeme.com/meme/3t0l49/
Sorry, saw it earlier today and couldn't resist.
comment by etotheipi · 2013-04-08T02:29:36.067Z · LW(p) · GW(p)
"The peril of arguing with you is forgetting to argue with myself. Don’t make me convince you: I don’t want to believe that much."
- Even More Aphorisms and Ten-Second Essays from Vectors 3.0, James Richardson
The others are quite nice too: http://www.theliteraryreview.org/WordPress/tlr-poetry/
Replies from: gwern
↑ comment by gwern · 2018-10-27T23:03:05.009Z · LW(p) · GW(p)
That link is now broken. It turns out it was a highly incomplete excerpt from "Vectors 3.0" so I've put By the Numbers on Libgen and put up a complete version taken from the book. (I like some of the aphorisms, so I've ordered the other 2 books to scan as well.)
comment by Jay_Schweikert · 2013-04-04T14:18:00.683Z · LW(p) · GW(p)
Jack Sparrow: [after Will draws his sword] Put it away, son. It's not worth you getting beat again.
Will Turner: You didn't beat me. You ignored the rules of engagement. In a fair fight, I'd kill you.
Jack Sparrow: Then that's not much incentive for me to fight fair, then, is it? [Jack turns the ship, hitting Will with the boom]
Jack Sparrow: Now as long as you're just hanging there, pay attention. The only rules that really matter are these: what a man can do and what a man can't do. For instance, you can accept that your father was a pirate and a good man or you can't. But pirate is in your blood, boy, so you'll have to square with that some day. And me, for example, I can let you drown, but I can't bring this ship into Tortuga all by me onesies, savvy? So, can you sail under the command of a pirate, or can you not?
The pirate-specific stuff is a bit extraneous, but I've always thought this scene neatly captured the virtue of cold, calculating practicality. Not that "fairness" is never important to worry about, but when you're faced with a problem, do you care more about solving it, or arguing that your situation isn't fair? What can you do, and what can't you do? Reminds me of What do I want? What do I have? How can I best use the latter to get the former?
Replies from: TheOtherDave, radical_negative_one, Eugine_Nier
↑ comment by TheOtherDave · 2013-04-04T15:25:14.490Z · LW(p) · GW(p)
That said, if I recognize that I'm in a group that values "fairness" as an abstract virtue, then arguing that my situation isn't fair is often a useful way of solving my problem by recruiting alliances.
Replies from: Zubon
↑ comment by Zubon · 2013-04-08T03:09:55.640Z · LW(p) · GW(p)
If you're in a group where "that's not fair" is frequently a winning argument, you may already be in trouble.
Replies from: TheOtherDave, scav
↑ comment by TheOtherDave · 2013-04-08T14:31:40.117Z · LW(p) · GW(p)
I am in many groups where, when choosing between two strategies A and B, fairness is one of the things we take into account. I'm not sure that's a problem.
↑ comment by scav · 2013-04-08T09:40:16.885Z · LW(p) · GW(p)
If it's a frequently-occurring observation within the group then yes, there seems to be something wrong. Possibly because things are regularly proposed and acted on without considering fairness until someone has to point it out.
If it hardly ever has to be said, but when pointed out, it is often persuasive, you're probably OK.
↑ comment by radical_negative_one · 2013-04-04T16:11:04.447Z · LW(p) · GW(p)
Replies from: FiftyTwo
The pirate-specific stuff is a bit extraneous
Jack Sparrow: The only rules that really matter are these: what a [person] can do and what a [person] can't do. For instance, you can accept that [different customs from yours are traditional and commonly accepted in the world] or you can't. But [this thing you dislike] is [an inevitable feature of your human existence], boy, so you'll have to square with that some day ... So, can you [ally with somebody you find distasteful], or can you not?
↑ comment by FiftyTwo · 2013-04-24T13:26:20.684Z · LW(p) · GW(p)
Even more generally it can be taken as a paraphrasing of the Litany of Gendlin
Jack Sparrow: The only rules that really matter are these: what a [person] can do and what a [person] can't do. For instance, you can accept [reality] or you can't. But [reality] is [true whether or not you believe it], boy, so you'll have to square with that some day ... So, can you [accept it], or can you not?
↑ comment by Eugine_Nier · 2013-04-05T04:24:37.664Z · LW(p) · GW(p)
Frankly this is precisely the kind of ruthless pragmatism that gives utilitarians such a horrible reputation.
Replies from: Desrtopa, Estarlio
↑ comment by Desrtopa · 2013-04-05T04:29:46.876Z · LW(p) · GW(p)
Well, it certainly didn't stop Jack Sparrow from being a beloved character.
You can be ruthless and popular, if you're sufficiently charismatic about it.
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2013-04-05T04:33:42.519Z · LW(p) · GW(p)
It also helps to be fictional, or at least sufficiently removed from the target audience that they perceive you in far mode.
Replies from: Desrtopa, BillyOblivion
↑ comment by Desrtopa · 2013-04-05T04:43:31.709Z · LW(p) · GW(p)
I'd say that it's possible to be ruthless and popular even among people who're familiar with you, as long as you keep your ruthlessness in far mode for the people you're attempting to cultivate popularity amongst. Business executives come to mind, and the more cutthroat strains of social maneuverers.
↑ comment by BillyOblivion · 2013-04-23T01:10:25.291Z · LW(p) · GW(p)
Dunno mate, I could name a few US Presidents and non-US leaders.
↑ comment by Estarlio · 2013-04-08T15:14:46.035Z · LW(p) · GW(p)
Mmm, that's a good point.
Potentially - If people know you're going to play according to a higher rule or purpose, rather than following feelings, then how much are they going to trust that you're really going to exercise that rule on their behalf?
It'd be like the old argument that people should be allowed to kidnap people off the streets and take their organs - because when you average it out any individual is more likely to need an organ than be the one kidnapped so it's the better gamble for everyone to make. But we don't really imagine it that way, we all see ourselves being the ones dragged off the street and cut up, or that people with unpopular political opinions would be the ones... You can't trust someone who'd come up with that sort of system not to be playing a different game because they've already shown you can't trust their compassionate feelings to work as bounds on their actions. Maybe any friendship they express means as little to them as the poor guy they just butchered.
I wonder how much of it is a trust problem though, and how you'd resolve that. It seems to me that if you knew someone really well, or they didn't seem to be grasping power, they could get away with being ruthless. People seem almost to gloat about how ruthless specops folks and the like are.
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2013-04-23T12:14:10.866Z · LW(p) · GW(p)
My impression is that whistle-blowers tend not to be trusted. It's not as though other businesses line up to hire them.
I think the problem is having moral systems which impose high local costs.
comment by Stabilizer · 2013-04-01T19:19:22.460Z · LW(p) · GW(p)
More specifically, one thing I learned from Terry that I was not taught in school is the importance of bad proofs. I would say "I think this is true", work on it, see that there was no nice proof, and give up. Terry would say "Here's a criterion that eliminates most of the problem. Then in what's left, here's a worse one that handles most of the detritus. One or two more epicycles. At that point it comes down to fourteen cases, and I checked them." Yuck. But we would know it was true, and we would move on. (Usually these would get cleaned up a fair bit before publication.)
-Allen Knutson on collaborating with Terence Tao
Replies from: Eugine_Nier, kpreid, somervta
↑ comment by Eugine_Nier · 2013-04-02T04:21:42.308Z · LW(p) · GW(p)
At that point I'd start wondering why there doesn't appear to be a simple proof. For example, maybe some kind of generalization of the result is false and you need the complexity to "break the correspondence" with the generalization.
↑ comment by kpreid · 2013-04-02T16:00:22.834Z · LW(p) · GW(p)
(meta)
Saith the linked site: “You must sign in to read answers past the first one.”
Well, that's obnoxious.
Replies from: Stabilizer
↑ comment by Stabilizer · 2013-04-02T22:30:14.045Z · LW(p) · GW(p)
If it's any consolation, none of the answers past the first one on this question are very good.
Replies from: somervta
comment by James_Miller · 2013-04-01T16:17:37.017Z · LW(p) · GW(p)
A remarkable aspect of your mental life is that you are rarely stumped. True, you occasionally face a question such as 17 × 24 = ? to which no answer comes immediately to mind, but these dumbfounded moments are rare. The normal state of your mind is that you have intuitive feelings and opinions about almost everything that comes your way. You like or dislike people long before you know much about them; you trust or distrust strangers without knowing why; you feel that an enterprise is bound to succeed without analyzing it. Whether you state them or not, you often have answers to questions that you do not completely understand, relying on evidence that you can neither explain nor defend.
Daniel Kahneman, Thinking, Fast and Slow
Replies from: Will_Newsome, FiftyTwo, Jayson_Virissimo
↑ comment by Will_Newsome · 2013-04-08T06:45:34.962Z · LW(p) · GW(p)
As far as I can tell this doesn't agree with my experience; a good chunk of every day is spent in groping uncertainty and confusion.
Replies from: Kawoomba
↑ comment by Jayson_Virissimo · 2013-04-01T18:55:47.733Z · LW(p) · GW(p)
True, you occasionally face a question such as 17 × 24 = ? to which no answer comes immediately to mind, but these dumbfounded moments are rare.
Unless you took John Leslie's advice and Ankified the multiplication table up to 25.
Replies from: AlanCrowe, ygert, PhilGoetz, tgb
↑ comment by AlanCrowe · 2013-04-04T11:43:53.828Z · LW(p) · GW(p)
I've read your link to John Leslie with both curiosity and bafflement.
17 x 24 is not perhaps the best example of a question for which no answer comes immediately to mind. Seventeen has the curious property that 17 x 6 = 102. (The recurring decimal 1/6 = 0.166666... hints to us that 17 x 6 = 102 is just the first of a series of near misses on a round number, 167 x 6 = 1002, 1667 x 6 = 10002, etc). So multiplying 17 by any small multiple of 6 is no harder than the two times table. In particular 17 x 24 = 17 x (6 x 4) = (17 x 6) x 4 = 102 x 4 = 408.
17 x 23 might have served better, were it not for the curious symmetry around the number 20, with 17 = 20 - 3 while 23 = 20 + 3. One is reminded of the identity (x + y)(x - y) = x^2 - y^2 which is often useful in arithmetic and tells us at once that 17 x 23 = 20 x 20 - 3 x 3 = 400 - 9 = 391.
17 x 25 has a different defect as an example, because one can hardly avoid apprehending 25 as one quarter of 100, which stimulates the observation that 17 = 16 + 1 and 16 is full of yummy fourness. 17 x 25 = (16 + 1) x 25 = (4 x 4 + 1) x 25 = 4 x 4 x 25 + 1 x 25 = 4 x 100 + 25 = 425.
17 x 26 is a better example. Nature has its little jokes. 7 x 3 = 21 therefore 17 x 13 = (1 + 7) x (1 + 3) = (1 + 1) + 7 x 3 = 2 + 21 = 221. We get the correct answer by outrageously bogus reasoning. And we are surely puzzled. Why does 21 show up in 17 x 13? Aren't larger products always messed up and nasty? (This is connected to 7 + 3 = 10). Any-one who is in on the joke will immediately say 17 x 26 = 17 x (13 x 2) = (17 x 13) x 2 = 221 x 2 = 442. But few people are.
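The identities worked through in the preceding paragraphs can be checked mechanically:

```python
# 17 x 24 via 17 x 6 = 102:
assert 17 * 24 == (17 * 6) * 4 == 408
# 17 x 23 via the difference of squares around 20:
assert 17 * 23 == 20 * 20 - 3 * 3 == 391
# 17 x 25 via 25 = 100 / 4 and 17 = 16 + 1:
assert 17 * 25 == 4 * 100 + 25 == 425
# 17 x 26 via doubling 17 x 13 = 221:
assert 17 * 26 == (17 * 13) * 2 == 442
```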
Some people advocate cultivating a friendship with the integers. Learning the multiplication table, up to 25 times 25, by the means exemplified above, is part of what they mean by this.
Others, full of sullen resentment at the practical usefulness of arithmetic, advocate memorizing one's times tables by the grimly efficient deployment of general purpose techniques of rote memorization such as the Anki deck. But who in this second camp sees any need to go beyond ten times ten?
Does John Leslie have a foot in both camps? Does he set the twenty-five times table as the goal and also indicate rote memorization as the means?
Replies from: Jayson_Virissimo
↑ comment by Jayson_Virissimo · 2013-04-04T22:29:34.491Z · LW(p) · GW(p)
I'm not sure exactly what he had in mind, but learning the multiplication tables using Anki isn't exactly rote.
Now, this may not be the case for others, but when I see a new problem like 17 x 24, I don't just keep reading off the answer until I remember it when the note comes back around. Instead, I try to answer it using mental arithmetic, no matter how long it takes. I do this by breaking the problem into easier problems (perhaps by multiplying 17 x 20 and then adding that to 17 x 4). Sooner or later my brain will simply present the answers to the intermediate steps for me to add together and only much later do those steps fade away completely and the final answer is immediately retrievable.
Doing things this way, simply as a matter of course, you develop somewhat of a feel for how certain numbers multiply and develop a kind of "friendship with the integers." Er, at least, that's what it feels like from the inside.
↑ comment by ygert · 2013-04-01T19:27:50.807Z · LW(p) · GW(p)
That's not the important point. Even if you have, you will still face the same problem when facing a question like, say, 34 × 57 = ?. The quote was using that particular problem as an example. If that example does not apply to you because you Ankified the multiplication table up to 25, or for any other reason, it is trivial to find another problem that gives the desired mental response. (As I just did with the 34 × 57 problem.)
Replies from: Jayson_Virissimo, Kindly
↑ comment by Jayson_Virissimo · 2013-04-01T22:59:43.046Z · LW(p) · GW(p)
Agreed. I'm not so much disagreeing with the thrust of the quote as nitpicking in order to engage in propaganda for my favorite SRS.
↑ comment by Kindly · 2013-04-01T21:26:57.353Z · LW(p) · GW(p)
Of course, even if I have no complete answer to 34 × 57, I still have "intuitive feelings and opinions" about it, and so do you. For example, I know it's between 100 and 10000 just by counting the digits, and although I've just now gone and formalized this intuition, it was there before the math: if I claimed that 34 × 57 = 218508 then I'm sure most people here would call me out long before doing the calculation.
Replies from: ygert↑ comment by ygert · 2013-04-01T21:40:03.973Z · LW(p) · GW(p)
What has this got to do with the original quote? The quote was claiming, truthfully or not, that when one is first presented with a certain type of problem, one is dumbfounded for a period of time. And of course the problem is solvable, and of course even without calculating it you can get a rough picture of the range the answer is in, and with a certain amount of practice one can avoid the dumbfoundedness altogether and move on to solving the problem, and that is a fine response to give to the original quote, but it has no relevance to what I was saying.
All I was saying is that it is invalid to object to the quote on the grounds that a certain technique avoids the specific example it gives, since that example could easily have been replaced by a similar one which the technique does not solve. I was talking about that specific objection; I was not saying the quote is perfect, or even that it is entirely right. You may raise these other objections to it. But the specific objection that Jayson_Virissimo raised happens to be entirely invalid.
Replies from: Kindly
↑ comment by tgb · 2013-04-02T11:44:17.746Z · LW(p) · GW(p)
I'm curious - what advantage do you get from this?
Replies from: Jayson_Virissimo
↑ comment by Jayson_Virissimo · 2013-04-02T16:40:21.547Z · LW(p) · GW(p)
So far, mostly the ability to perform entertaining parlor tricks (via mental arithmetic and a large body of facts about the countries of the world). I admit, it is not very impressive, but not useless either. In other words, nothing you couldn't do in a few minutes with a smartphone (although, I imagine, that would tend to ruin the "trick").
comment by Vaniver · 2013-04-03T14:05:53.741Z · LW(p) · GW(p)
Don’t settle. Don’t finish crappy books. If you don’t like the menu, leave the restaurant. If you’re not on the right path, get off it.
--Chris Brogan on the Sunk Cost Fallacy
Replies from: wedrifid
↑ comment by wedrifid · 2013-04-03T14:19:03.889Z · LW(p) · GW(p)
If you don’t like the menu, leave the restaurant.
If there is another one next door, maybe. If it is much farther than that, the menu would have to be fairly bad.
Don’t settle.
... if there is a sufficiently convenient alternative and the difference is significant.
Replies from: TimS
↑ comment by TimS · 2013-04-03T15:51:56.188Z · LW(p) · GW(p)
I think you are using settle in its more precise meaning (i.e. release a legal claim), which is not consistent with the colloquial usage. Colloquially, "settle" is often used as the antonym of "take reasonable risks."
Similarly, I think the difference between "don't like the menu" and "fairly bad" is hairsplitting for someone who would find this level and type of advice useful. In just about any city, the BATNA is "travel to another place to eat, getting no further from your home than you were at the first place." And that's a pretty good alternative. I think the quote correctly asserts that the alternative is underrated.
Replies from: wedrifid
↑ comment by wedrifid · 2013-04-04T02:40:52.976Z · LW(p) · GW(p)
I think the quote correctly asserts that the alternative is underrated.
While I assert that the quote advocates premature optimization. It distracts from actual cases of the sunk cost fallacy by warning against things that often just are not worth fixing.
comment by Jayson_Virissimo · 2013-04-03T07:32:21.220Z · LW(p) · GW(p)
If knowledge can create problems, it is not through ignorance we can solve them.
-- Isaac Asimov
Replies from: simplicio, wiresnips
↑ comment by simplicio · 2013-04-11T06:13:47.226Z · LW(p) · GW(p)
For some interesting exceptions to this quote, see Bostrom on Information Hazards.
↑ comment by wiresnips · 2013-04-09T17:39:21.173Z · LW(p) · GW(p)
This may not be strictly true. Consider the basilisk.
Replies from: Eugine_Nier, Jayson_Virissimo
↑ comment by Eugine_Nier · 2013-04-10T03:39:05.460Z · LW(p) · GW(p)
Consider the basilisk.
I have, and I've come to the conclusion that Eliezer's solution, i.e., suppress all knowledge of it, won't actually work.
↑ comment by Jayson_Virissimo · 2013-04-09T20:27:52.996Z · LW(p) · GW(p)
Agreed, but I think the exceptions are very few.
comment by satt · 2013-04-02T06:18:07.248Z · LW(p) · GW(p)
Within the philosophy of science, the view that new discoveries constitute a break with tradition was challenged by Polanyi, who argued that discoveries may be made by the sheer power of believing more strongly than anyone else in current theories, rather than going beyond the paradigm. For example, the theory of Brownian motion which Einstein produced in 1905, may be seen as a literal articulation of the kinetic theory of gases at the time. As Polanyi said:
Discoveries made by the surprising configuration of existing theories might in fact be likened to the feat of a Columbus whose genius lay in taking literally and as a guide to action that the earth was round, which his contemporaries held vaguely and as a mere matter for speculation.
― David Lamb & Susan M. Easton, Multiple Discovery: The pattern of scientific progress, pp. 100-101
Replies from: DanielLC, summerstay, feanor1600
↑ comment by DanielLC · 2013-04-08T02:15:16.820Z · LW(p) · GW(p)
Columbus's "genius" was using the largest estimate for the size of Eurasia and the smallest estimate for the size of the world to make the numbers say what he wanted them to. As normally happens with that sort of thing, he was dead wrong. But he got lucky and it turned out there was another continent there.
Replies from: army1987, skepsci
↑ comment by A1987dM (army1987) · 2013-04-08T18:09:04.883Z · LW(p) · GW(p)
Wait... he did that on purpose?
Replies from: MugaSofer↑ comment by MugaSofer · 2013-04-08T19:34:12.475Z · LW(p) · GW(p)
Yes, actually. He believed the true dimensions of the Earth would conform to his interpretation of a particular Bible verse (two-thirds of the earth should be land, and one-third water, so the Ocean had to be smaller than believed) and fudged the numbers to fit.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-04-09T15:35:11.901Z · LW(p) · GW(p)
Ah, OK. I had taken DanielLC to be implying that he had fudged the numbers in order to convince the Spanish queen to fund him.
↑ comment by skepsci · 2013-04-19T19:45:43.698Z · LW(p) · GW(p)
Exactly. In fact, it was well known at the time that the Earth is round, and most educated people even knew the approximate size (which was calculated by Eratosthenes in the third century BCE). Columbus, on the other hand, used a much less accurate figure, which was off by a factor of 2.
The popular myth that Columbus was right and his contemporaries were wrong is the exact opposite of the truth.
↑ comment by summerstay · 2013-04-08T14:53:39.841Z · LW(p) · GW(p)
Perhaps Columbus's "genius" was simply to take action. I've noticed this in executives and higher-ranking military officers I've met-- they get a quick view of the possibilities, then they make a decision and execute it. Sometimes it works and sometimes it doesn't, but the success rate is a lot better than for people who never take action at all.
Replies from: wedrifid↑ comment by wedrifid · 2013-04-08T16:16:50.956Z · LW(p) · GW(p)
I've noticed this in executives and higher-ranking military officers I've met-- they get a quick view of the possibilities, then they make a decision and execute it.
Executives and higher ranking military officers also happen to have the power to enforce their decisions. Making decisions and acting on them can be possible without that power but the political skill required is far greater, the rewards lower, the risks of failure greater and the risks of success non-negligible.
↑ comment by feanor1600 · 2013-04-11T13:25:39.104Z · LW(p) · GW(p)
This is how Scott Sumner describes his own work in macroeconomics and NGDP targeting. Others see it as radical and innovative; he thinks he is just taking the standard theories seriously.
comment by Richard_Kennaway · 2013-04-10T19:00:22.020Z · LW(p) · GW(p)
BOSWELL. 'Sir Alexander Dick tells me, that he remembers having a thousand people in a year to dine at his house: that is, reckoning each person as one, each time that he dined there.' JOHNSON. 'That, Sir, is about three a day.' BOSWELL. 'How your statement lessens the idea.' JOHNSON. 'That, Sir, is the good of counting. It brings every thing to a certainty, which before floated in the mind indefinitely.'
From Boswell's Life of Johnson. HT to a commenter on the West Hunter blog.
Replies from: Document↑ comment by Document · 2013-04-11T00:16:34.348Z · LW(p) · GW(p)
If each person counts as one for each time he dines, Alexander can only claim to have personally hosted the guests at his most recent meal; the others were guests of someone else.
Replies from: odlogan
comment by Stabilizer · 2013-04-01T19:36:20.451Z · LW(p) · GW(p)
One test adults use is whether you still have the kid flake reflex. When you're a little kid and you're asked to do something hard, you can cry and say "I can't do it" and the adults will probably let you off. As a kid there's a magic button you can press by saying "I'm just a kid" that will get you out of most difficult situations. Whereas adults, by definition, are not allowed to flake. They still do, of course, but when they do they're ruthlessly pruned.
-Paul Graham
Replies from: NancyLebovitz, FiftyTwo↑ comment by NancyLebovitz · 2013-04-02T11:13:32.443Z · LW(p) · GW(p)
The way to deal with uncertainty is to analyze it into components. Most people who are reluctant to do something have about eight different reasons mixed together in their heads, and don't know themselves which are biggest. Some will be justified and some bogus, but unless you know the relative proportion of each, you don't know whether your overall uncertainty is mostly justified or mostly bogus.
--Paul Graham, same essay
Replies from: BlazeOrangeDeer↑ comment by BlazeOrangeDeer · 2013-04-09T11:27:06.748Z · LW(p) · GW(p)
Mapmakers deliberately put slight mistakes in their maps so they can tell when someone copies them. If another map has the same mistake, that's very convincing evidence.
Same essay.
comment by twanvl · 2013-04-09T11:45:37.622Z · LW(p) · GW(p)
If the climate skeptics want to win me over, then the way for them to do so is straightforward: they should ignore me, and try instead to win over the academic climatology community, majorities of chemists and physicists, Nobel laureates, the IPCC, National Academies of Science, etc. with superior research and arguments.
-- Scott Aaronson on areas of expertise
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-04-10T03:54:22.616Z · LW(p) · GW(p)
If the atheists want to win me over, then the way for them to do so is straightforward: they should ignore me, and try instead to win over the theology community, bishops, the Pope, pastors, denominational and non-denominational bodies, etc., with superior research and arguments.
Replies from: Manfred, Desrtopa, TheOtherDave, Estarlio, khafra, Tenoke↑ comment by Manfred · 2013-04-11T00:53:48.592Z · LW(p) · GW(p)
To this, the skeptics might respond: but of course we can’t win over the mainstream scientific community, since they’re all in the grip of an evil left-wing conspiracy or delusion! Now, that response is precisely where “the buck stops” for me, and further discussion becomes useless. If I’m asked which of the following two groups is more likely to be in the grip of a delusion — (a) Senate Republicans, Freeman Dyson, and a certain excitable string-theory blogger, or (b) virtually every single expert in the relevant fields, and virtually every other chemist and physicist who I’ve ever respected or heard of — well then, it comes down to a judgment call, but I’m 100% comfortable with my judgment.
-- Scott Aaronson in the next paragraph
↑ comment by Desrtopa · 2013-04-10T04:11:51.651Z · LW(p) · GW(p)
Not that I don't think this is a fair counterpoint to make, but in my own experience trying to find the best arguments for religion, I learned a lot more and got better reasoning talking to random laypeople than by asking priests and theologians.
Of course, the fact that I talked to a lot more laypeople than priests and theologians is most likely the determining factor here, but my experiences discussing the nature and details of climate change have not followed a similar pattern at all.
↑ comment by TheOtherDave · 2013-04-10T16:02:30.227Z · LW(p) · GW(p)
Just so I'm clear: do you believe the theology community ("bishops, the Pope, pastors, denominational and non-denominational bodies, etc.") is as reliable an authority on the nature and existence of the thing atheists don't believe in as the academic climatology community is on the nature and existence of the thing climate skeptics don't believe in?
If so, then this makes perfect sense.
That said, my experience with both groups doesn't justify such a belief.
Replies from: fubarobfusco, MugaSofer↑ comment by fubarobfusco · 2013-04-10T16:49:17.137Z · LW(p) · GW(p)
The analogy doesn't cohere. Nobody denies that climate exists; they disagree on what it is doing.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-04-10T17:13:23.126Z · LW(p) · GW(p)
I agree that nobody denies climate exists, but I think that's irrelevant to the question at hand.
To clarify that a bit... Aaronson asserted a relationship between "climate skeptics" and "the academic climatology community" with respect to some concept X which climate skeptics deny exists. We could get into a whole discussion about what exactly X is (it certainly isn't climate), but rather than go down that road I simply referred to it as "the thing climate skeptics don't believe in."
Eugine_Nier asserted a relationship between "atheists" and "the theology community" with respect to some concept Y which atheists deny exists. We could similarly get into a whole discussion about what exactly Y is, but rather than go down that road I simply referred to it as "the thing atheists don't believe in."
If the theology community is in the same relationship to Y as the academic climatology community is to X, then the analogy holds.
I just don't believe that the theology community is in that relationship to Y.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2013-04-10T19:02:41.823Z · LW(p) · GW(p)
I just don't believe that the theology community is in that relationship to Y.
I believe Eugine_Nier is suggesting not that theology community is in the same relationship to Y as the academic climatology community is to X, but the reverse.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-04-10T21:50:33.185Z · LW(p) · GW(p)
(nods) Yup. It's the opposite of what he said, but he could easily have been speaking ironically.
↑ comment by MugaSofer · 2013-04-11T20:17:11.616Z · LW(p) · GW(p)
Just so I'm clear: do you believe the theology community ("bishops, the Pope, pastors, denominational and non-denominational bodies, etc.") is as reliable an authority on the nature and existence of the thing atheists don't believe in as the academic climatology community is on the nature and existence of the thing climate skeptics don't believe in?
If so, then this makes perfect sense.
That said, my experience with both groups doesn't justify such a belief.
Well, no. You're an atheist. I'm sure a Christian climate skeptic would agree with you, with the terms reversed.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-04-12T15:04:38.189Z · LW(p) · GW(p)
I'm sure a Christian climate skeptic would agree with you, with the terms reversed.
That is, a Christian climate skeptic would claim that their experience with both groups doesn't justify the belief that the academic climatology community is as reliable an authority as the theology community?
In a trivial sense I agree with you, in that there's all sorts of tribal signaling effects going on, but not if I assume honest discussion. In my experience, strongly identified Christians believe that most theologians are unreliable authorities on the nature of God.
Indeed, it would be hard for them to believe otherwise, since most theologians don't consider Jesus Christ to have been uniquely divine.
Of course, if we implicitly restrict "the theology community" to "the Christian theology community," as many Americans seem to, then you're probably right for sufficiently narrow definitions of "Christian".
Replies from: MugaSofer↑ comment by MugaSofer · 2013-04-12T20:21:30.148Z · LW(p) · GW(p)
Hmm, interesting point. At a guess, I'd say there probably is more disagreement among theologians than climatologists, so there does seem to be some asymmetry there.
On the other hand, if God is analogous to Global Warming (or whatever) then I suppose the analogy for those disputed details might be predictions of how soon we'll all be flooded or killed by extreme weather or whatever and what, exactly, the solution is (including "there isn't one".) So there's that.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-04-12T20:31:43.923Z · LW(p) · GW(p)
if God is analogous to Global Warming
If "God" refers to what theologians and atheists disagree about, and "Global Warming" refers to what climatologists and climate skeptics disagree about, then sure. I'd be cautious of assuming we agree on what those labels properly refer to more broadly, though.
the analogy for those disputed details might be predictions of how soon we'll all be flooded or killed by extreme weather or whatever and what, exactly, the solution is
Well, OK. Using that analogy, I guess I would say that if climatologists disagreed with each other about Global Warming as widely as theologians disagree with each other about God, I would not consider climatologists any more reliable a source of predictions of how soon we'll all be flooded or killed by extreme weather or whatever and what, exactly, the solution is, than I consider theologists reliable as a source of predictions about God.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-04-12T20:59:28.250Z · LW(p) · GW(p)
I'd be cautious of assuming we agree on what those labels properly refer to more broadly, though.
Yup. Hence the "or whatever".
Well, OK. Using that analogy, I guess I would say that if climatologists disagreed with each other about Global Warming as widely as theologians disagree with each other about God, I would not consider climatologists any more reliable a source of predictions of how soon we'll all be flooded or killed by extreme weather or whatever and what, exactly, the solution is, than I consider theologists reliable as a source of predictions about God.
The point, of course, is that while they may disagree about the details, they all agree on the existence of the thing in question. Although TBH climatologists do seem to have more consensus than theologians.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-04-12T21:18:48.659Z · LW(p) · GW(p)
The point, of course, is that while they may disagree about the details, they all agree on the existence of the thing in question.
It is not clear to me how to distinguish between "Christian, Buddhist, and Wiccan theologians agree on the existence of God but disagree on the details of God" and "Christian, Buddhist, and Wiccan theologians disagree on whether God exists"
This is almost entirely due to a lack of clarity about what "God" refers to.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-04-12T21:32:55.690Z · LW(p) · GW(p)
Well, Buddhist and Wiccan theologians are in a minority compared to Christian, Hindu, Deist and so on. And there is a spectrum of both Wiccan and Buddhist thought ranging from standard atheism + relevant cosmology to pretty clear Theism of various kinds (plus relevant cosmology.) Still, it's probably more common than among climatologists, depending on how strictly we define "theologian". (And "climatologist" for that matter; there are a good few fringe "climatologists" who push climate skepticism.)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-04-12T22:40:23.671Z · LW(p) · GW(p)
Yup, agreed that how we define the sets makes a big difference.
↑ comment by Estarlio · 2013-04-14T02:07:50.415Z · LW(p) · GW(p)
If atheists really thought that theists believed just because the pastors did, then targeting the pastors would seem to be the best way to go about it, yes. Either by attacking their credibility or attempting to convince them otherwise/attack the emotional basis of their faith. Even if the playing field was uneven and the pastors were actually crooked, there just wouldn't be any gain in going after the believers as individuals.
↑ comment by khafra · 2013-04-10T13:42:41.503Z · LW(p) · GW(p)
I can't think of a reply to this that won't start a game of reference class tennis; but I think there's a possibility that Aaronson's list is a more complete set of the relevant experts on the climate than your list is of the relevant experts on the existence of deities. If we grant the existence of deities, and merely wish to learn about their behavior, your list would be analogous to Aaronson's.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-04-10T20:25:37.069Z · LW(p) · GW(p)
Both lists end with “etc.”, so I have trouble calling either of them incomplete.
Replies from: khafra, David_Gerard↑ comment by khafra · 2013-04-11T18:51:00.839Z · LW(p) · GW(p)
I think "etc." is a request to the reader to be a good classifier--simply truncating the list at "etc." is overfitting, and defeats the purpose of the "etc." Contrariwise, construing "etc." to mean "everything else, everywhere" is trying to make do with fewer parameters than you actually need. The proper use of "etc." is to use the training examples to construct a good classifier, and flesh out members of the category by lazy evaluation as needed.
↑ comment by David_Gerard · 2013-04-11T18:35:33.376Z · LW(p) · GW(p)
It's not a reasonable presumption that "etc." will cover "any arbitrary thing that happens to make trouble for your counterargument".
comment by BlueSun · 2013-04-03T16:20:39.667Z · LW(p) · GW(p)
Something a Chess Master told me as a child has stuck with me:
How did you get so good?
I've lost more games than you've ever played.
-- Robert Tanner
Replies from: Qiaochu_Yuan, wedrifid↑ comment by Qiaochu_Yuan · 2013-04-03T20:23:03.019Z · LW(p) · GW(p)
Dude, suckin' at something is the first step to being sorta good at something.
-- Jake the Dog (Adventure Time)
Replies from: arundelo, army1987↑ comment by arundelo · 2014-04-14T23:06:01.609Z · LW(p) · GW(p)
For reference purposes: video clip; episode transcript.
↑ comment by A1987dM (army1987) · 2013-04-04T10:20:14.878Z · LW(p) · GW(p)
WTH... My latest Facebook status is “You got to lose to know how to win” (from “Dream On” by Aerosmith). o.O
Replies from: Will_Newsome↑ comment by Will_Newsome · 2013-04-08T06:49:41.829Z · LW(p) · GW(p)
Checkmate, atheists!
Replies from: bbleeker↑ comment by Sabiola (bbleeker) · 2013-04-08T09:15:51.163Z · LW(p) · GW(p)
I don't get it...
Replies from: D_Malik, Estarlio↑ comment by Estarlio · 2013-04-08T15:21:36.972Z · LW(p) · GW(p)
You've got to crash the car to know how to drive, got to drown to learn how to swim, you've got to believe to disbelieve. Got to !x to x.
Replies from: bbleeker↑ comment by Sabiola (bbleeker) · 2013-04-09T14:01:11.480Z · LW(p) · GW(p)
But that would make it "checkmate, believers". All the other sentences say "you've got to [not-X] to [X]".
Replies from: Estarlio↑ comment by Estarlio · 2013-04-09T23:38:32.990Z · LW(p) · GW(p)
X & !X can be anything, good or bad. You've just got to pick a value for X that fits in with your desires to get a particular outcome if you want to break it down in terms of good and bad. Got to live to die. The point is that the underlying structure of the argument remains the same whatever you pick.
If you're actually interested in propositional logic, then the suitably named Logic by Paul Tomassi is a very approachable intro to this sort of thing. Though I'm afraid I couldn't say what it goes for these days.
↑ comment by wedrifid · 2013-04-04T13:21:25.550Z · LW(p) · GW(p)
How did you get so good?
I've lost more games than you've ever played.
Which is of course a different question to "What should I do to get good at Chess?" which is all about deliberate practice with a small proportion of time devoted to playing actual games.
Replies from: Will_Newsome, NancyLebovitz↑ comment by Will_Newsome · 2013-04-08T06:58:58.337Z · LW(p) · GW(p)
Right, I often play blitz games for an hour a day for weeks on end and don't improve at all. Interestingly, looking at professional games, even if I don't bother to calculate many lines, seems to make me slightly better; so there are ways to improve without deliberate practice, but playing blitz doesn't happen to be one of them. Playing standard time controls does work decently well though, at least once you can recognize all the dozen or so main tactics.
↑ comment by NancyLebovitz · 2013-04-23T12:08:12.244Z · LW(p) · GW(p)
Playing a lot isn't as good as deliberate practice, but it's better than having done neither.
Replies from: wedrifid
comment by Cyan · 2013-04-08T05:27:30.006Z · LW(p) · GW(p)
Replies from: tgb
By three methods we may learn wisdom: First, by reflection, which is noblest; second, by imitation, which is easiest; and third, by experience, which is the bitterest.
-- Confucius
↑ comment by tgb · 2013-04-08T12:05:15.375Z · LW(p) · GW(p)
The 'imitation' part is appropriately meta for a quote page.
Replies from: Cyan↑ comment by Cyan · 2013-04-11T02:16:01.665Z · LW(p) · GW(p)
I'd like to imagine that it's the blurb he put on the back of his own book: "I've done the reflection (noble!); buy now and you can get the benefit -- it's easy! -- or you can go stumbling off without the benefit of my wisdom like a sucker."
comment by khafra · 2013-04-19T16:37:17.140Z · LW(p) · GW(p)
Amazon isn’t a store, not really. Not in any sense that we can regularly think about stores. It’s a strange pulsing network of potential goods, global supply chains, and alien associative algorithms with the skin of a store stretched over it, so we don’t lose our minds.
- Tim Maly, pondering the increasing and poorly understood impact of algorithms on the average person's life.
comment by Vaniver · 2013-04-01T15:16:14.587Z · LW(p) · GW(p)
The mere formulation of a problem is far more essential than its solution, which may be merely a matter of mathematical or experimental skills.
-- Albert Einstein
Replies from: Thomas, FiftyTwo, PhilGoetz, PhilGoetz↑ comment by Thomas · 2013-04-01T15:24:51.607Z · LW(p) · GW(p)
At least sometimes the formulation is far easier than the solution.
Replies from: bentarm, MikeDobbs↑ comment by bentarm · 2013-04-02T17:30:39.418Z · LW(p) · GW(p)
This is definitely true. General class of examples: almost any combinatorial problem ever. Concrete example: the Four Colour Theorem
Replies from: MikeDobbs↑ comment by MikeDobbs · 2013-04-06T22:46:18.175Z · LW(p) · GW(p)
General class of examples: almost any combinatorial problem ever
Yes! Combinatorics problems are a perfect example of this. Trying to work out the probability of being dealt a particular hand in poker can be very difficult (for certain hands) until you correctly formulate the question- at which point the calculations are trivial : )
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2013-04-07T22:25:24.010Z · LW(p) · GW(p)
I think bentarm was offering "Combinatorics problems" as an example of the opposite of the phenomenon you describe. In particular the Four Colour Theorem is easy to formulate but hard to solve, and (as far as I know) the solution doesn't involve a reformulation.
Replies from: MikeDobbs↑ comment by MikeDobbs · 2013-04-24T21:55:00.401Z · LW(p) · GW(p)
Yes, upon re-reading I see that you are correct. I think there may be overlap between activities I consider part of the formulation and activities others may consider part of the solution.
To expand on my poker suggestion. When attempting to determine the probability of a hand in poker it is necessary to determine a way to represent that hand using combinations/permutations. I have found that for certain hands this can be rather difficult as you often miss, exclude, or double count some amount of possible hands. This process of representing the hand using mathematics is, in my mind, part of the formulation of the problem; or more accurately, part of the precise formulation of the problem. In this respect, the solution is reduced to trivial calculations once the problem is properly formulated. However, I can certainly see how one might consider this to be part of the solution rather than the formulation.
Thanks for pointing that out
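MikeDobbs's poker point can be made concrete. As an illustration (the specific hand, a full house, is my own example, not one from the thread): once the hand is formulated as a product of independent choices, the "solution" is just a few binomial coefficients.

```python
from math import comb

# Formulation: a full house is (rank for the triple) x (3 of its 4 suits)
# x (a different rank for the pair) x (2 of its 4 suits).
full_house_hands = 13 * comb(4, 3) * 12 * comb(4, 2)

# Once formulated, the calculation is trivial.
total_hands = comb(52, 5)  # all possible 5-card hands
probability = full_house_hands / total_hands

print(full_house_hands)        # 3744
print(round(probability, 6))   # 0.001441
```

The hard part, as MikeDobbs says, is the formulation: choosing a decomposition that neither misses nor double-counts hands (e.g. swapping the triple's rank and the pair's rank gives a different hand, so the factor of 13 × 12 is correct and not an overcount).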
↑ comment by MikeDobbs · 2013-04-01T23:49:11.729Z · LW(p) · GW(p)
In my experience it can often turn out that the formulation is more difficult than the solution (particularly for an interesting/novel problem). Many times I have found that it takes a good deal of effort to accurately define the problem and clearly identify the parameters, but once that has been accomplished the solution turns out to be comparatively simple.
↑ comment by PhilGoetz · 2013-04-05T23:45:20.151Z · LW(p) · GW(p)
Hmm. Einstein is perhaps most famous for "discovering" special relativity. But he neither formulated the problem, nor found the solution (I think the Lorentz transformation was already known to be the solution), but reinterpreted the solution as being real.
His "greatest error" was introducing the cosmological constant into general relativity--curiously, making a similar error to what everyone else had made when confronted with the constancy of the speed of light, which was refusing to accept that the mathematical result described reality.
↑ comment by PhilGoetz · 2013-04-05T23:48:05.835Z · LW(p) · GW(p)
In writing a story, it's easy to identify problems with the story which you must struggle with for weeks to resolve. But often, you suddenly realize what the entire story is really about, and this makes everything suddenly easy. If by the formulation of the problem we mean that overall understanding, rather than specific obstacles, then yes. For stories.
comment by player_03 · 2013-04-08T06:25:20.379Z · LW(p) · GW(p)
Replies from: simplicio
When I was a Christian, and when I began this intense period of study which eventually led to my atheism, my goal, my one and only goal, was to find the best evidence and argument I could find that would lead people to the truth of Jesus Christ. That was a huge mistake. As a skeptic now, my goal is very similar - it just stops short. My goal is to find the best evidence and argument, period. Not the best evidence and argument that leads to a preconceived conclusion. The best evidence and argument, period, and go wherever the evidence leads.
comment by Jayson_Virissimo · 2013-04-01T23:33:50.618Z · LW(p) · GW(p)
Focusing is about saying no.
-- Steve Jobs
Replies from: AlanCrowe, MixedNuts, Estarlio, PhilGoetz↑ comment by AlanCrowe · 2013-04-09T18:13:12.148Z · LW(p) · GW(p)
Longer version from here
People think focus means saying yes to the thing you’ve got to focus on. But that’s not what it means at all. It means saying no to the hundred other good ideas there are.
—Steve Jobs, interviewed in Fortune, March 7, 2008
↑ comment by MixedNuts · 2013-04-04T08:22:18.356Z · LW(p) · GW(p)
Focusing is about saying no long enough to get into flow, or at least some kind of mental state where your short-term memory doesn't constantly evaporate. If you have to say no all the time, you'll wind up twenty hours later having written six lines and with a head full of jelly.
↑ comment by Estarlio · 2013-04-02T04:32:13.971Z · LW(p) · GW(p)
Without context I'm tempted to say focusing is about a whole bunch of things and that telling people to say no is just another way of saying, 'Use your willpower.' Which is another way of saying 'Focus by focusing!' Which... seems rather recursive at least.
Replies from: TheOtherDave, Stabilizer↑ comment by TheOtherDave · 2013-04-02T06:00:37.936Z · LW(p) · GW(p)
One of the things that focusing is about is giving up pursuing good things.
Which means that if I want to focus, I need to decide which good things I'm going to say "no" to.
This may seem obvious, but after seeing many not-otherwise-stupid management structures create lists of "priorities" that encompass everything good (and consequently aren't priorities at all), I'm inclined to say that it isn't as obvious as it may seem.
↑ comment by Estarlio · 2013-04-03T15:32:40.470Z · LW(p) · GW(p)
Interesting take.
====
Or optimisation is going on at a different point in the company.
Or it is as obvious as it seems and sanity isn't a property of management structures.
Come to think of it, it's not necessarily even a property of any individual who participated in the creation of that structure. An idiot who's read The Effective Executive and How to Win Friends and Influence People should be a darned effective manager - but they're not necessarily very intelligent. Similarly you can gradually converge on sane solutions without thinking anything through very far by applying fairly basic procedures, or even just being subject to selection pressures.
====
You need to decide which good things you're going to assign the most resources to, or in what order you're going to do them, or have a list of very general priorities that you're going to pass off to some other system in the company that will give you a similar sort of output. But however you do it, focusing isn't as simple as saying no - or even as saying no to the right things. You'll exclude some things by default but knowing when to say 'let's see' and how strongly to say yes is also very useful.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-04-04T01:23:32.409Z · LW(p) · GW(p)
But however you do it, focusing isn't as simple as saying no - or even as saying no to the right things. You'll exclude some things by default but knowing when to say 'let's see' and how strongly to say yes is also very useful.
Yes, agreed.
Replies from: MikeDobbs↑ comment by MikeDobbs · 2013-04-05T23:47:18.997Z · LW(p) · GW(p)
This reminds me of Stephen Covey's idea of a coordinate graph with four quadrants, where you graph importance on one axis and urgency on the other. This gives you four types of "activities" to invest your time into.
Urgent and Unimportant (a phone ringing is a good example): this is where many people lose a tremendous amount of time.
Urgent and Important (a broken bone or crime in progress): these immediately demand our "focus".
Not Urgent and Not Important: pure time wasters; not a good place to invest much energy.
Not Urgent BUT Important: this is the area where Stephen made a point of saying that most people fall short. Because these things are not urgent, we tend to put them off and not invest enough energy into them, but since they are important this means we pay a hefty price in the long run. Into this category he puts things like our health, important relationships, personal development and self-improvement, to name a few.
When we choose what to focus our energy on, we would do well to direct as much of it as possible to these types of activities.
↑ comment by Stabilizer · 2013-04-02T05:23:14.267Z · LW(p) · GW(p)
Let us say you have a paper to write but you also want to go to a party. While trying to write the paper, you could keep wondering whether you should stop writing the paper and just go to the party, but keep writing anyway, i.e. try to use willpower. Or you could decide, once and for all that you are not going to go to the party, which is saying no. I think the second approach will be more effective in getting the paper done. So, I think there is actually a difference.
Now, of course the insight isn't profound, and both folk and professional psychology have known it for some time (I can't find a good link off-hand). But when a successful, high-status person who has achieved a lot says it, it lends it a whole lot more credibility.
comment by xv15 · 2013-04-08T03:36:06.394Z · LW(p) · GW(p)
Joe Pyne was a confrontational talk show host and amputee, which I say for reasons that will become clear. For reasons that will never become clear, he actually thought it was a good idea to get into a zing-fight with Frank Zappa, his guest of the day. As soon as Zappa had been seated, the following exchange took place:
Pyne: I guess your long hair makes you a girl.
Zappa: I guess your wooden leg makes you a table.
Of course this would imply that Pyne is not a featherless biped.
Source: Robert Cialdini's Influence: The Psychology of Persuasion
comment by Viliam_Bur · 2013-04-19T07:14:30.641Z · LW(p) · GW(p)
"In the typical Western two men fight desperately for the possession of a gun that has been thrown to the ground: whoever reaches the weapon first shoots and lives; his adversary is shot and dies. In ordinary life, the struggle is not for guns but for words; whoever first defines the situation is the victor; his adversary, the victim. ... [the one] who first seizes the word imposes reality on the other; [the one] who defines thus dominates and lives; and [the one] who is defined is subjugated and may be killed."
"In the animal kingdom, the rule is, eat or be eaten; in the human kingdom, define or be defined."
-- Thomas Szasz
comment by [deleted] · 2013-04-11T04:41:54.018Z · LW(p) · GW(p)
.
Replies from: wedrifid, army1987↑ comment by wedrifid · 2013-04-11T05:16:30.305Z · LW(p) · GW(p)
WAYS TO KILL 2 BIRDS W/ 1 STONE
- Radioactive stone in nest.
- Use stone to seal off the air supply to a cage of birds.
- Economist: Sell a precious stone (diamond? Ruby?). Use the proceeds to purchase several dozen chickens. The purchase produces an expected number of bird deaths equal to approximately the number of chickens purchased through tiny changes at the margins, making chicken farming and slaughter slightly more viable.
- Omega: Use stone to kill the dog that would have killed the cat that will now kill 40 birds over its extended lifespan.
↑ comment by Maniakes · 2013-04-18T01:01:46.952Z · LW(p) · GW(p)
Punster: go on a hunting trip with Mick Jagger.
Replies from: malcolmocean
↑ comment by MalcolmOcean (malcolmocean) · 2013-04-18T18:47:21.745Z · LW(p) · GW(p)
Double punster: it's hunting season for Jimmy Page's former band.
↑ comment by A1987dM (army1987) · 2013-04-18T18:29:25.362Z · LW(p) · GW(p)
Nice, but how is this a rationality quote? Is there some allegory that I'm missing?
Replies from: None, Kawoomba
comment by xv15 · 2013-04-08T05:38:31.754Z · LW(p) · GW(p)
"Alas", said the mouse, "the whole world is growing smaller every day. At the beginning it was so big that I was afraid, I kept running and running, and I was glad when I saw walls far away to the right and left, but these long walls have narrowed so quickly that I am in the last chamber already, and there in the corner stands the trap that I must run into."
"You only need to change your direction," said the cat, and ate it up.
-Kafka, A Little Fable
Replies from: wedrifid, xv15, Document
↑ comment by wedrifid · 2013-04-08T06:14:33.185Z · LW(p) · GW(p)
"You only need to change your direction," said the cat, and ate it up.
Moral: Just because the superior agent knows what is best for you and could give you flawless advice, doesn't mean it will not prefer to consume you for your component atoms!
Replies from: gwern, xv15
↑ comment by gwern · 2013-04-11T03:34:09.502Z · LW(p) · GW(p)
My problem with this is that, like a number of Kafka's parables, the more I think about it, the less I understand it.
There is a mouse, and a mouse-trap, and a cat. The mouse is running towards the trap, he says, and the cat says that to avoid it, all he must do is change his direction, and then eats the mouse. What? Where did this cat come from? Is this cat chasing the mouse down the hallway? Well, if he is, then that's pretty darn awful advice, because if the cat is right behind the mouse, then turning to avoid the trap just means he's eaten by the cat, so either way he is doomed.
Actually, given Kafka's novels, so often characterized by double-binds and false dilemmas, maybe that's the point: that all choices lead to one's doom, and the cat's true observation hides the more important observation that the entire system is rigged.
('"Alas", said the voter, "at first in the primaries the options seemed so wide and so much change possible that I was glad there was an establishment candidate to turn to to moderate the others, but as time passed the Overton Window closed in and now there is the final voting booth into which I must walk and vote for the lesser of two evils." "You need only not vote", the system told the voter, and took his silence for consent.')
On the other hand, it's a perfectly optimistic little fable if you simply replace the one word "trap" with the word "cat".
"Alas", said the mouse, "the whole world is growing smaller every day. At the beginning it was so big that I was afraid, I kept running and running, and I was glad when I saw walls far away to the right and left, but these long walls have narrowed so quickly that I am in the last chamber already, and there in the corner stands the cat that I must run into."
"You only need to change your direction," said the cat, and ate it up.
↑ comment by xv15 · 2013-04-08T05:38:53.462Z · LW(p) · GW(p)
I will run the risk of overanalyzing: Faced with a big wide world and no initial idea of what is true or false, people naturally gravitate toward artificial constraints on what they should be allowed to believe. This reduces the feeling of crippling uncertainty and makes the task of reasoning much simpler, and since an artificial constraint can be anything, they can even paint themselves a nice rosy picture in which to live. But ultimately it restricts their ability to align their beliefs with the truth. However comforting their illusions may be at first, there comes a day of reckoning. When the false model finally collides with reality, reality wins.
The truth is that reality contains many horrors. And they are much harder to escape from a narrow corridor that cuts off most possible avenues for retreat.
↑ comment by Document · 2013-04-11T00:01:43.540Z · LW(p) · GW(p)
I briefly read the moral as something along the lines of "being exposed in the open was the worst thing the mouse could imagine, so it ran blindly away from it without asking what the alternatives were". I'm still not sure I actually get it.
Tangentially, keeping mouse traps in a house with a cat seems hazardous (though I could be underestimating cats). And I assume "day" and "chamber" are used abstractly.
comment by ModusPonies · 2013-04-03T14:17:16.090Z · LW(p) · GW(p)
If you will learn to work with the system, you can go as far as the system will support you ... By realizing you have to use the system and studying how to get the system to do your work, you learn how to adapt the system to your desires. Or you can fight it steadily, as a small undeclared war, for the whole of your life ... Very few of you have the ability to both reform the system and become a first-class scientist.
—Richard Hamming
(I recommend the whole talk, which contains some great examples and many other excellent points.)
Replies from: EHeller, ChristianKl
↑ comment by EHeller · 2013-04-08T17:29:18.030Z · LW(p) · GW(p)
I think the thing that strikes me most about this talk is how different science was then versus now. For one small example, he was asked to comment on the relative effectiveness of giving talks, writing papers, and writing books. In today's world it's not a question anyone would ask, and the answer would be "write at least a few papers a year or you won't keep your job."
↑ comment by ChristianKl · 2013-04-08T11:52:00.221Z · LW(p) · GW(p)
I don't see why it has to be either/or.
Replies from: gwern
↑ comment by gwern · 2013-04-08T13:54:17.660Z · LW(p) · GW(p)
Time and effort are zero-sum.
Replies from: ChristianKl
↑ comment by ChristianKl · 2013-04-08T16:38:30.972Z · LW(p) · GW(p)
I don't think so. The status and resources that you get for being a first-class scientist will help you to fight the system.
Replies from: gwern
↑ comment by gwern · 2013-04-08T16:56:37.297Z · LW(p) · GW(p)
The status and resources that you get for being a first-class scientist will help you to fight the system.
And would even more help you continue being a first-class scientist, won't help you fight for free (no Time-Turners on offer, I'm afraid), and even in this scenario you still need to decide to first become a first-class scientist - since fighting the system is not a great path to getting status & resources.
Replies from: ChristianKl
↑ comment by ChristianKl · 2013-04-08T18:38:19.036Z · LW(p) · GW(p)
Picking fights when you don't have any resources to fight them is in general not a good strategy. Whenever you pick a fight you actually have to think about the price and the possible reward.
Craig Venter did oppose the NIH, and then went and got private funding for himself to pursue the ideas in a way he thought to be superior.
Eliezer Yudkowsky did decide to operate outside academia. Peter Thiel funded him, and the whole LessWrong enterprise increased the amount of resources that he has at his disposal.
There are a lot of sources of resources that can be gained by picking some fights.
Replies from: gwern
↑ comment by gwern · 2013-04-08T18:57:11.390Z · LW(p) · GW(p)
Those aren't the kinds of fights Hamming is talking about. (You have read his talk, right?)
Replies from: ChristianKl
↑ comment by ChristianKl · 2013-04-08T19:15:17.991Z · LW(p) · GW(p)
Those aren't the kinds of fights Hamming is talking about. (You have read his talk, right?)
Sorry, now I have read it and you are right: Hamming does acknowledge that you can fight some fights, but just recommends against wasting your time with fights that don't matter in the grand scheme of things.
comment by Eugine_Nier · 2013-04-26T01:45:34.976Z · LW(p) · GW(p)
One can be extremely confident when giving goal-based advice because it's always right. When you switch to giving instrument-based advice--when you switch from cheerleading to playing the game--you have to warn your audience that Your Mileage May Vary, that there's many a slip 'twixt cup and lip.
-- Garret Jones
Replies from: ChristianKl
↑ comment by ChristianKl · 2013-04-28T17:06:56.827Z · LW(p) · GW(p)
Could you give an example of goal-based advice that's always right?
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2013-04-28T18:15:35.533Z · LW(p) · GW(p)
Sure. From the same post:
Want to know how to win a gold medal in Rio in 2016? Here's a guaranteed plan to reach the top spot on the podium:
Qualify for an Olympic event.
Do better than every other competitor.
That's it! There's your path to victory. If you find an error in my guaranteed foolproof advice, do let me know.
comment by lukeprog · 2013-04-23T23:44:03.042Z · LW(p) · GW(p)
We live during the hinge of history. Given the scientific and technological discoveries of the last two centuries, the world has never changed as fast. We shall soon have even greater powers to transform, not only our surroundings, but ourselves and our successors. If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period. Our descendants could, if necessary, go elsewhere, spreading through the galaxy.
...What now matters most is that we avoid ending human history.
Parfit, On What Matters, Vol. 2 (pp. 616-620).
Replies from: Nisan
↑ comment by Nisan · 2013-04-24T04:54:54.059Z · LW(p) · GW(p)
I am now sixty-seven. To bring my voyage to a happy conclusion . . . I would need to find ways of getting many people to understand what it would be for things to matter, and of getting these people to believe that certain things really do matter. I cannot hope to do these things by myself. But . . . I hope that, with art and industry, some other people will be able to do these things, thereby completing this voyage.
Parfit, quoted in "How To Be Good" by Larissa MacFarquhar (PDF).
comment by Vaniver · 2013-04-14T20:06:13.606Z · LW(p) · GW(p)
The iron rule of nature is: you get what you reward for. If you want ants to come, you put sugar on the floor.
--Charlie Munger
Replies from: army1987, shminux
↑ comment by A1987dM (army1987) · 2013-04-15T16:03:14.888Z · LW(p) · GW(p)
“You can catch more flies with honey than vinegar.” “You can catch even more with manure; what's your point?”
--Sheldon Cooper from The Big Bang Theory
Replies from: wedrifid
↑ comment by Shmi (shminux) · 2013-04-15T16:32:57.733Z · LW(p) · GW(p)
The Stockholm syndrome says otherwise.
Replies from: Estarlio, Vaniver
↑ comment by Vaniver · 2013-04-15T17:00:55.059Z · LW(p) · GW(p)
That link isn't clear to me. Could you please elaborate?
Replies from: shminux
↑ comment by Shmi (shminux) · 2013-04-15T17:16:52.368Z · LW(p) · GW(p)
It's not "the iron rule", just one of many heuristics of limited applicability. Hurting instead of rewarding is often just as effective. And rewarding can also backfire in the worst way.
Replies from: ChristianKl, Vaniver
↑ comment by ChristianKl · 2013-04-28T17:12:44.285Z · LW(p) · GW(p)
The Stockholm syndrome isn't only about hurting the hostage. The captor gains control of the environment in which the hostage lives and can then use that control to reward the hostage for fulfilling his wishes.
↑ comment by Vaniver · 2013-04-15T18:25:40.155Z · LW(p) · GW(p)
It's not "the iron rule", just one of many heuristics of limited applicability. Hurting instead of rewarding is often just as effective. And rewarding can also backfire in the worst way.
Munger's quote seemed to me like a more colorful rendition of "incentives matter," which is an iron rule (as it contrasts with what people often want to be true, which is "intentions matter"). Rewards backfiring is generally a matter of mistakenly applied rewards, like sugar on the floor, and punishments seem like they can be considered anti-rewards: you don't get what you punish (with, again, the note that precision matters).
comment by Eugine_Nier · 2013-04-02T05:48:12.695Z · LW(p) · GW(p)
But I now thought that this end [one's happiness] was only to be attained by not making it the direct end. Those only are happy (I thought) who have their minds fixed on some object other than their own happiness[....] Aiming thus at something else, they find happiness along the way[....] Ask yourself whether you are happy, and you cease to be so.
-- John Stuart Mill, autobiography
Replies from: blacktrance, MixedNuts
↑ comment by blacktrance · 2013-04-09T09:57:40.714Z · LW(p) · GW(p)
For what it's worth, personal experience tells me otherwise.
↑ comment by MixedNuts · 2013-04-14T22:42:47.971Z · LW(p) · GW(p)
I've found that thinking about something outside yourself (and thus not your own happiness) makes lots of people less depressed, and somewhat happy. However, the last sentence is clearly false, as many anecdotal reports of "I'm so happy!" show. Maybe it works that way for some people?
comment by fortyeridania · 2013-04-08T09:44:36.857Z · LW(p) · GW(p)
But regardless of whether we believe our own positions are inviolable, it behooves us to know and understand the arguments of those who disagree. We should do this for two reasons. First, our inviolable position may be anything but. What we assume is true could be false. The only way we’ll discover this is to face up to evidence and arguments against our position. Because, as much as we may not enjoy it, discovering we’ve believed a falsehood means we’re now closer to believing the truth than we were before. And that’s something we should only ever feel gratitude for.
Aaron Ross Powell, Free Thoughts
Replies from: simplicio
↑ comment by simplicio · 2013-04-09T04:10:11.609Z · LW(p) · GW(p)
This is why steelmanning is a really good community norm. Social incentives for understanding the other's position are usually bad, but if people give credit for steelmanning, these incentives are better.
Replies from: Document
↑ comment by Document · 2013-04-11T00:03:56.359Z · LW(p) · GW(p)
"Steelmanning" and "understanding the other's position" aren't really related (to my knowledge).
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2013-04-11T03:24:47.275Z · LW(p) · GW(p)
It's difficult to steelman someone's position if I don't understand it.
comment by Panic_Lobster · 2013-04-28T07:21:34.052Z · LW(p) · GW(p)
Most of the propositions and questions to be found in philosophical works are not false but nonsensical. Consequently we cannot give any answer to questions of this kind, but can only point out that they are nonsensical. Most of the propositions and questions of philosophers arise from our failure to understand the logic of our language. [...] And it is not surprising that the deepest problems are in fact not problems at all.
Ludwig Wittgenstein, Tractatus Logico-Philosophicus, 1921
comment by NancyLebovitz · 2013-04-23T12:35:18.609Z · LW(p) · GW(p)
Charles Darwin used to say that whenever he ran into something that contradicted a conclusion he cherished, he was obliged to write the new finding down within 30 minutes. Otherwise his mind would work to reject the discordant information, much as the body rejects transplants.
-- Warren Buffett
I have no idea whether this is true of Darwin, but it still might be good advice.
Replies from: simplicio
comment by elharo · 2013-04-11T15:28:31.063Z · LW(p) · GW(p)
The lack of a well-delineated hypothesis is not necessarily a barrier to acceptance of new directions in medical practice. The classic example is John Snow's demonstration that the 1854 cholera epidemic in London was attributable to contaminants in the water. When he removed the handle from the Broad Street pump, the number of cases in the area served by that pump promptly began to wane. Exactly what was in the water that caused the cholera would not be demonstrated for more than a quarter of a century. Still the results of Snow's intervention were so dramatic that no one questioned the cause-and-effect relationship even in the absence of an explicit hypothesis. However, when the causal linkage is less obvious, the absence of a plausible hypothesis can be a significant deterrent to action.
To return to the case at hand, it was difficult for several reasons for physicians to accept the idea that the concentration of blood cholesterol could be a major factor in determining the chances of myocardial infarction decades down the road. As discussed in Chapter 3, it was not appreciated that the average blood cholesterol level in the United States, the so-called normal level, was actually abnormal. It was accelerating atherogenesis and putting a large fraction of the so-called normal population at a high risk for coronary heart diseases. Also, very little was known about the structure and metabolism of these recently discovered and still mysterious cholesterol-protein complexes -- the serum lipoproteins -- and almost nothing was known about how they got into the vessel wall and contributed to the development of the lesions. A degree of skepticism was understandable.
--Daniel Steinberg, The Cholesterol Wars, 2007, Elsevier Press, p. 89
comment by Stephanie_Cunnane · 2013-04-03T00:24:48.862Z · LW(p) · GW(p)
Replies from: Qiaochu_Yuan, Larks, wedrifid
If a statement is false, that's the worst thing you can say about it. You don't need to say it's heretical. And if it isn't false, it shouldn't be suppressed.
↑ comment by Qiaochu_Yuan · 2013-04-03T02:38:05.605Z · LW(p) · GW(p)
I like the sentiment, but Paul Graham seems to be claiming that information hazards don't exist, and that doesn't appear to be true.
↑ comment by Larks · 2013-04-03T03:07:25.359Z · LW(p) · GW(p)
Despite agreeing with the rest of the essay (which is very good), this is not true. Tiresomely standard counter-example: "Heil Hitler! No, there are no Jews in my attic."
Replies from: Osuniev, gothgirl420666, dspeyer
↑ comment by gothgirl420666 · 2013-04-06T01:10:59.722Z · LW(p) · GW(p)
Substitute "statement" with "belief".
Replies from: Larks
↑ comment by Larks · 2013-04-06T18:42:19.485Z · LW(p) · GW(p)
Sorry, I don't understand. I believe there are Jews in my attic, but this belief should be suppressed, rather than spread.
Replies from: gothgirl420666, TimS
↑ comment by gothgirl420666 · 2013-04-08T18:21:48.027Z · LW(p) · GW(p)
Fair enough.
↑ comment by wedrifid · 2013-04-03T02:20:26.579Z · LW(p) · GW(p)
If a statement is false, that's the worst thing you can say about it. You don't need to say it's heretical. And if it isn't false, it shouldn't be suppressed.
I like the sentiment. I disagree that it is (always) the worst you can say about it. And there are also true things that are actively constructed to be misleading---I certainly go about suppressing those where possible and plan to continue.
Replies from: skepsci
comment by wilder · 2013-04-02T03:57:43.189Z · LW(p) · GW(p)
Like all great rationalists you believed in things that were twice as incredible as theology.
― Halldór Laxness, Under the Glacier.
Replies from: Eliezer_Yudkowsky, Leonhart, Stabilizer, PhilGoetz, Richard_Kennaway, NancyLebovitz
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-08T02:03:31.458Z · LW(p) · GW(p)
...and then adjusted our senses of the 'incredible' accordingly, so that Special Relativity seemed less incredible, and God more so.
Replies from: Armok_GoB, private_messaging
↑ comment by Armok_GoB · 2013-04-08T23:48:29.181Z · LW(p) · GW(p)
A sense of incredulity is not a belief, so it's not covered by those injunctions. A sense of wonder is both pleasant and good for mental health, and diverging too much from the average in deep emotional reactions carries a real cost in less accurate empathic modelling.
↑ comment by private_messaging · 2013-04-09T04:16:15.277Z · LW(p) · GW(p)
Well, I dunno; if you describe physics as a Turing machine program, à la Solomonoff induction, special relativity may well be more incredible than god(s), chiefly because Turing machines may well be unable to do exact Lorentz invariance, but can do some kind of god(s), i.e. superintelligences. (Approximate relativity is doable, though.)
Replies from: V_V, BlazeOrangeDeer
↑ comment by V_V · 2013-04-09T13:07:50.234Z · LW(p) · GW(p)
Solomonoff induction creates models of the universe from the point of view of a single observer. As such, it probably wouldn't have any particular problem with Einsteinian relativity.
On the other hand, if you want a computational model of the universe that is independent from the choice of any particular observer, relativity will get you into trouble.
Replies from: private_messaging
↑ comment by private_messaging · 2013-04-09T13:14:15.349Z · LW(p) · GW(p)
Solomonoff induction creates models of the universe from the point of view of a single observer. As such, it probably wouldn't have any particular problem with Einsteinian relativity.
On the other hand, if you want a computational model of the universe that is independent from the choice of any particular observer, relativity will get you into trouble.
Relativity doesn't depend on the observer, it depends on the reference frame... (or rather, doesn't depend). I can launch a Michelson-Morley experiment into space and have it send data to me, and it'll need to obey Lorentz invariance and everything else. edit: or just for GPS to work. You have a valid point though: S.I. has a natural preferred frame coinciding with the observer.
Lorentz invariance is a very neat, very elegant property, which, as far as we know, only incredibly complicated computations have, and only approximately. This makes me think that an algorithmic prior is not a very good idea. The universe need not be made of elementary components, in the way in which computations are.
Replies from: V_V
↑ comment by V_V · 2013-04-09T14:06:42.996Z · LW(p) · GW(p)
The universe need not be made of elementary components, in the way in which computations are.
Moreover, all computational models assume some sort of global state and absolute time. These assumptions don't seem to hold in physics, or at least they may hold for a single observer, but may require complex models that don't respect a natural simplicity prior.
If it were possible to realize a Solomonoff inductor in our universe, I would expect it to be able to learn, but it might not necessarily be optimal.
↑ comment by BlazeOrangeDeer · 2013-04-09T11:17:31.953Z · LW(p) · GW(p)
It can't do exact relativity but it can do exact general AI? Not to mention that simulating a God that doesn't include relativity will produce the wrong answer.
Replies from: private_messaging
↑ comment by private_messaging · 2013-04-09T12:00:23.414Z · LW(p) · GW(p)
It being able to do AI is generally accepted as uncontroversial here. We don't know what would be the shortest way to encode a very good approximation to relativity either - could be straightforward, could be through a singleton intelligence that somehow arises in a more convenient universe and then proceeds to build very good approximations of more elegant universes (given some hint it discovers). I'm an atheist too; it's just that given a sufficiently bad choice of the way you represent theories, the shortest hypothesis can involve arbitrarily crazy things just to do something fairly basic (e.g. to make a very very good approximation of real numbers). edit: and relativity is fairly unique in just how elegant it is but how awfully inelegant any simulation of it gets.
Replies from: V_V
↑ comment by V_V · 2013-04-09T14:51:04.306Z · LW(p) · GW(p)
We don't know what would be the shortest way to encode a very good approximation to relativity either
The idea is that if humans can come up with approximations of relativity which are good enough for the purpose of predicting their observations, then in principle SI can do it too.
The issue is prior probability: since humans use a different prior than SI, it's not straightforward that SI will not favor shorter models that in practice may perform worse.
There are universality theorems which essentially prove that, given enough observations, SI will eventually catch up with any semi-computable learner, but the number of observations required for this to happen might be far from practical.
For instance, there is a theorem which proves that, for any algorithm, if you sample problem instances according to a Solomonoff distribution, then average case complexity will asymptotically match worst case complexity.
If the Solomonoff distribution were a reasonable prior for practical purposes, then we should observe that for all algorithms, on realistic instance distributions, average-case complexity is about the same order of magnitude as worst-case complexity. Empirically, we observe that this is not necessarily the case: the Simplex algorithm for linear programming, for instance, has exponential worst-case time complexity but is usually very efficient (polynomial time) on typical inputs.
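The average-case/worst-case gap is easy to see empirically. A faithful Simplex demonstration would need an LP solver and the Klee-Minty construction, so here is a minimal stand-in sketch (a swapped-in example, not the Simplex algorithm) using naive quicksort, which shows the same phenomenon: cheap on random "realistic" inputs, quadratic on its worst case.

```python
import random

def comparisons(xs):
    """Comparison count for naive quicksort (first element as pivot)."""
    if len(xs) <= 1:
        return 0
    pivot, rest = xs[0], xs[1:]
    less = [x for x in rest if x < pivot]
    more = [x for x in rest if x >= pivot]
    # count one comparison per element against the pivot at this level
    return len(rest) + comparisons(less) + comparisons(more)

random.seed(0)
n = 500
typical = comparisons(random.sample(range(n), n))  # a "realistic" random instance
worst = comparisons(list(range(n)))                # already-sorted input: worst case
print(typical, worst)  # worst is exactly n*(n-1)/2 = 124750; typical is ~n log n
```

The worst-case input here is perfectly plausible-looking (already-sorted data), yet random instances cost an order of magnitude less, which is the sense in which an instance distribution, not the worst case, determines practical performance.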
↑ comment by Stabilizer · 2013-04-02T05:25:31.225Z · LW(p) · GW(p)
What does this mean?
Replies from: wilder
↑ comment by wilder · 2013-04-02T12:54:08.426Z · LW(p) · GW(p)
That on probabilistic or rational reflection one can come to believe intuitively implausible things that are as or more extraordinary than their theological counterparts. Or to mutilate Hamlet, that there are more things on earth than are dreamt of in heaven.
Replies from: DaFranker
↑ comment by DaFranker · 2013-04-02T13:17:30.601Z · LW(p) · GW(p)
Most of quantum physics and relativity are certainly intuitively weirder than Jesus turning water into wine, self-replicating bread or a body of water splitting itself to create a passage.
I mean, our physics say it's technically possible to make machines that do all of this. Without magic. Using energy collected in space and sent to Earth using beams of light. Although we probably wouldn't use beams of light because that's inefficient.
↑ comment by PhilGoetz · 2013-04-05T23:37:00.407Z · LW(p) · GW(p)
I am confused--upvoting this comment is a rejection of this website.
Replies from: simplicio
↑ comment by simplicio · 2013-04-09T04:48:30.152Z · LW(p) · GW(p)
I doubt that Laxness means "rationalist" in the LW community sense. In philosophy, a rationalist is defined as distinct from an empiricist, as one who believes knowledge to be arrived at from a priori cogitation, as opposed to experience.
↑ comment by Richard_Kennaway · 2013-04-08T08:25:42.043Z · LW(p) · GW(p)
Even after looking the book up on Google, without context, I can't tell whether the rationalist being spoken of has gone astray through his reason, or has succeeded in finding the truth of something. But I am now interested in reading Laxness.
↑ comment by NancyLebovitz · 2013-04-23T12:04:45.349Z · LW(p) · GW(p)
The mere size of the universe is pretty incredible. I don't think it gets as much emphasis as it used to. I'm not sure whether people have quit thinking about it or gotten used to it.
comment by Shmi (shminux) · 2013-04-23T14:59:01.752Z · LW(p) · GW(p)
Scott Adams on evolution toward... what?
I see the iWatch as the next phase in our evolution to full cyborg status. I want my Google glasses, iWatch, smartphone, and anything else you want to attach to my body. Frankly, I'm tired of being nothing but a skin-bag full of decaying organs. I want to be the machine I was always meant to be. That prospect excites me.
comment by beoShaffer · 2013-04-11T05:02:42.171Z · LW(p) · GW(p)
As she stared at her wall, she understood that she would have to deal with it, accept it all as a new part of her existence. That was the only reasonable thing to do. She didn't have to be happy about it, but the universe wasn't structured around her happiness.
comment by Vaniver · 2013-04-11T01:02:31.128Z · LW(p) · GW(p)
If a man will begin with certainties, he shall end in doubts; but if he will be content to begin with doubts, he shall end in certainties.
--Francis Bacon
Replies from: simplicio
↑ comment by simplicio · 2013-04-18T19:16:24.204Z · LW(p) · GW(p)
Neither is necessarily or even usually true though, is it?
Replies from: Vaniver
↑ comment by Vaniver · 2013-04-19T01:27:54.079Z · LW(p) · GW(p)
Necessarily, of course not. Usually, well, this is Francis Bacon, and so the intended meaning of the quote is more like "We can be more certain in the outputs of empiricism than we can be in the outputs of deductive argument beginning with intuitions or other a priori knowledge."
comment by gwern · 2013-04-15T00:40:59.970Z · LW(p) · GW(p)
'Talking of a Court-martial that was sitting upon a very momentous publick occasion, he expressed much doubt of an enlightened decision; and said, that perhaps there was not a member of it, who in the whole course of his life, had ever spent an hour by himself in balancing probabilities.'
Boswell's Life of Johnson (quoted in "Applied Scientific Inference", Sturrock 1994)
comment by Yossarian · 2013-04-06T17:10:43.517Z · LW(p) · GW(p)
All things be ready if our minds be so.
- William Shakespeare, Henry V
↑ comment by wedrifid · 2013-04-08T08:53:08.535Z · LW(p) · GW(p)
All things be ready if our minds be so.
What does this mean?
Replies from: tgb
↑ comment by tgb · 2013-04-08T11:59:27.245Z · LW(p) · GW(p)
In context, this is said right before the battle of Agincourt and Henry V is reminding his troops that the only thing left for them to do is to prepare their minds for the coming battle (where they are horribly outnumbered). I guess the rationality part is to remember that sometimes we must make sure to be in the right mindset to succeed.
I've always seen that whole speech as a pretty good example of reasoning from the wrong premises: Henry V makes the argument that God will decide the outcome of the battle, and so if given the opportunity to have more Englishmen fighting alongside them, he would choose to fight without them, since then he gets more glory for winning a harder fight, and if they lose then fewer will have died. Of course he doesn't take this to the logical conclusion and go out and fight alone, but I guess Shakespeare couldn't have pushed history quite that far.
A good 'dark arts' quote from that speech might be when he offers to pay anyone's fare back to England if they leave then. After that, anyone thinking of deserting will be trapped by their sunk costs into staying - but maybe that's not what Shakespeare had in mind...
Replies from: Yossarian
↑ comment by Yossarian · 2013-04-08T19:19:07.416Z · LW(p) · GW(p)
The quote struck me as a poetic way of affirming the general importance of metacognition - a reminder that we are at the center of everything we do, and therefore investing in self improvement is an investment with a multiplier effect. I admit though this may be adding my own meaning that doesn't exist in the quote's context.
I've always seen that whole speech as a pretty good example of reasoning from the wrong premises: Henry V makes the argument that God will decide the outcome of the battle, and so if given the opportunity to have more Englishmen fighting alongside them, he would choose to fight without them, since then he gets more glory for winning a harder fight, and if they lose then fewer will have died. Of course he doesn't take this to the logical conclusion and go out and fight alone, but I guess Shakespeare couldn't have pushed history quite that far.
Rewatching Branagh's version recently, I keyed in on a different aspect. In his speech, Henry describes in detail all the glory and status the survivors of the battle will enjoy for the rest of their lives, while (of course) totally downplaying the fact that few of them can expect to collect on that reward. He's making a cost/benefit calculation for them and leaning heavily on the scale in the process.
Contrast with similar inspiring military speeches:
William Wallace says, "Fight and you may die. Run and you may live...for awhile. And dying in your beds, many years from now, would you be willin' to trade ALL the days, from this day to that, for one chance, just one chance, to come back here and tell our enemies that they may take our lives, but they'll never take our freedom!" He's saying essentially the same thing as Henry, but framing it as a loss instead of a gain. Where Henry tells his soldiers what they'll gain from fighting, Wallace tells them what they'll lose if they don't. Perhaps it's telling that, unlike Henry, he doesn't get very specific. It might've been an opportunity for someone in the ranks to run a thought experiment: "What specific aspects of my life will be measurably different if we have 'freedom' versus if we don't? What exactly AM I trading ALL the days for? And if I magically had that thing without the cost of potentially dying, what would my preferences be then?" Or simply to notice their confusion and recognize that they were being loss-averse without being able to define exactly what they were averse to losing.
Meanwhile, Maximus tells his troops, "What you do in life echoes in eternity." He's more honest and direct about the probability that you're going to die, but he also reminds you that the cost/benefit analysis extends beyond your own life: the implication is that your 'honor' (reputation) affects your placement in the afterlife and (probably of more consequence) the well-being of your family after your death. Life is an iterated game, and sometimes you have to defect (or cooperate?) so that your children get to play at all.
And lastly, Patton says, "No bastard ever won a war by dying for his country. He won it by making the other poor, dumb bastard die for his." He explicitly rejects the entire 'die for your country' framing and foists it wholly onto the enemy. It's his version of "The enemy's gate is down." He's not telling you you're not going to die, but at least he's not trying to convince you that your death is somehow a good or necessary thing.
In this company, Henry actually comes across as the closest thing to a villain. Of the four, he alone appeals to his soldiers' rational interests in an irrational way, without being at all upfront about their odds of actually getting what he's promising them.
comment by Armok_GoB · 2013-04-09T01:02:03.552Z · LW(p) · GW(p)
This DOES teach me the lesson that coming up with absurd-sounding situations on the spot to demonstrate something's "self-evident implausibility" is liable to come back to bite me, though. I should do it more often, just to stumble accidentally across out-of-the-box ideas.
comment by Tenoke · 2013-04-24T16:40:59.133Z · LW(p) · GW(p)
Odd as it may seem, I am my remembering self, and the experiencing self, who does my living, is like a stranger to me.
--Daniel Kahneman on the dichotomy between the self that experiences things from moment to moment and the self that remembers and evaluates experiences as a whole (from Thinking, Fast and Slow)
comment by NancyLebovitz · 2013-04-23T12:31:10.093Z · LW(p) · GW(p)
These studies are the record of a failure -- the failure of facts to sustain a preconceived theory. The facts assembled, however, seemed worthy of further examination. If they would not prove what we had hoped to have them prove, it seemed desirable to turn them loose and to follow them to whatever end they might lead.
-- Edgar Lawrence Smith, Common Stocks as Long Term Investments
comment by Woodbun · 2013-04-04T07:16:02.103Z · LW(p) · GW(p)
"Never forget I am not this silver body, Mahrai. I am not an animal brain, I am not even some attempt to produce an Al through software running on a computer. I am a Culture Mind. We are close to gods, and on the far side."
-- Iain M. Banks, Look to Windward
Replies from: Woodbun↑ comment by Woodbun · 2013-04-04T07:17:29.088Z · LW(p) · GW(p)
Incidentally, Mr. Banks has been diagnosed with terminal cancer, and estimated to have a few months to live as of this post. Comments may be made on his website: http://friends.banksophilia.com/
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-08T02:00:47.305Z · LW(p) · GW(p)
Whoops, forgot to promote this.
comment by MikeDobbs · 2013-04-05T23:51:43.636Z · LW(p) · GW(p)
The significant problems we face cannot be solved at the same level of thinking we were at when we created them.
-- Albert Einstein
Replies from: FiftyTwo↑ comment by FiftyTwo · 2013-04-08T02:33:09.215Z · LW(p) · GW(p)
Source? Wikiquote seems to think it's a misquote.
Replies from: shminux, MikeDobbs↑ comment by Shmi (shminux) · 2013-04-24T22:15:01.052Z · LW(p) · GW(p)
Isn't there a law or something stating that Einstein never said 99% of what's attributed to him? Or maybe that the accuracy of a quote's attribution is inversely proportional to the person's fame?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-04-24T23:16:10.468Z · LW(p) · GW(p)
Well, it's unsurprising that misattributed quotes are more often attributed to famous people than to unknown people.
↑ comment by MikeDobbs · 2013-04-24T21:44:53.981Z · LW(p) · GW(p)
Thanks, FiftyTwo. I just looked up the article you refer to, and it indicates that this may be a paraphrase of a longer quote. I heard it from Anthony Robbins; the quote is attributed to Einstein in some of his literature. It seems that the sentiment, if not the exact wording, is attributable to Einstein.