Rationality Quotes April 2012

post by Oscar_Cunningham · 2012-04-03T00:42:04.135Z · LW · GW · Legacy · 859 comments

Here's the new thread for posting quotes, with the usual rules:


comment by Oscar_Cunningham · 2012-04-01T14:08:01.036Z · LW(p) · GW(p)

I understand what an equation means if I have a way of figuring out the characteristics of its solution without actually solving it.

Paul Dirac

Replies from: Manfred
comment by Manfred · 2012-04-01T17:29:53.594Z · LW(p) · GW(p)

Excellent quote.

comment by Andy_McKenzie · 2012-04-01T22:10:38.785Z · LW(p) · GW(p)

A few years into this book, I was diagnosed as diabetic and received a questionnaire in the mail. The insurance carrier stated that diabetics often suffer from depression and it was worried about me. One of the questions was “Do you think about death?” Yes, I do. “How often?” the company wanted to know. “Yearly? Monthly? Weekly? Daily?” And if daily, how many times per day? I dutifully wrote in, “About 70 times per day.” The next time I saw my internist, she told me the insurer had recommended psychotherapy for my severe depression. I explained to her why I thought about death all day—merely an occupational hazard—and she suggested getting therapy nonetheless. I thought, fine, it might help with the research.

The therapist found me tragically undepressed, and I asked her if she could help me design a new life that would maximize the few years that I had left. After all, one should have a different life strategy at sixty than at twenty. She asked why I thought I was going to die and why I had such a great fear of death. I said, I am going to die. It’s not a fear; it’s a reality. There must be some behavior that could be contraindicated for a man my age but other normally dangerous behavior that takes advantage of the fact that I am risking fewer years at sixty or sixty-five years of age than I was at twenty or twenty-five (such as crimes that carry a life sentence, crushing at age twenty but less so at age sixty-five). Surely psychology must have something to say on the topic. Turns out, according to my therapist, it does not. There was therapy for those with terminal illness, for the bereaved, for the about-to-be-bereaved, for professionals who dealt with terminal patients, and so on, but there was nothing for people who were simply aware that their life would come to a natural end. It would seem to me that this is a large, untapped market. The therapist advised me not to think about death.

Dick Teresi, The Undead

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-04-02T06:02:54.909Z · LW(p) · GW(p)

You're going to die.

Or maybe not.

Replies from: Nisan
comment by Nisan · 2012-04-02T17:51:56.009Z · LW(p) · GW(p)

I like the first video, but I wish it ended at 4:20. It reminds me a lot of Ecclesiastes, which is a refreshingly honest essay about the meaning of life, with the moral "and therefore you should do what God wants you to do" tacked on at the end by an anonymous editor.

comment by Scott Alexander (Yvain) · 2012-04-02T12:55:42.748Z · LW(p) · GW(p)

On counter-signaling, and how not to do it:

US police investigated a parked car with a personalized plate reading "SMUGLER". They found the vehicle, packed with 24 lb (11 kg) of narcotics, parked near the Canadian border at a hotel named "The Smugglers' Inn." Police believed the trafficker thought that being so obvious would deter the authorities.

-- The Irish Independent, "News In Brief"

Replies from: Alejandro1, RobertLumley
comment by Alejandro1 · 2012-04-02T19:06:46.347Z · LW(p) · GW(p)

Maybe the guy had been reading too much Edgar Allan Poe? As a child, I loved "The Purloined Letter" and tried to play that trick on my sister - taking something from her and hiding it "in plain sight". Of course, she found it immediately.

ETA: it was a girl, not a guy.

comment by RobertLumley · 2012-04-03T01:32:15.407Z · LW(p) · GW(p)

I find it highly unlikely that this is the whole story. Surely the police are not licensed to investigate a car based solely on its vanity plate and where it was parked...

Replies from: TimS
comment by TimS · 2012-04-03T01:44:35.718Z · LW(p) · GW(p)

You are probably right that more information drew police attention to the car, but "near the border" gets one most of the way to legally justified. In the 1970s, the US Supreme Court explicitly approved a permanent checkpoint approximately 50 miles north of the Mexican border.

Replies from: RobertLumley
comment by RobertLumley · 2012-04-03T01:49:42.828Z · LW(p) · GW(p)

Well that's a rather depressing piece of law...

comment by Maniakes · 2012-04-03T00:51:28.809Z · LW(p) · GW(p)

There are big differences between "a study" and "a good study" and "a published study" and "a study that's been independently confirmed" and "a study that's been independently confirmed a dozen times over." These differences are important; when a scientist says something, it's not the same as the Pope saying it. It's only when dozens and hundreds of scientists start saying the same thing that we should start telling people to guzzle red wine out of a fire hose.

Chris Bucholz

Replies from: soreff, Document
comment by soreff · 2012-04-06T17:53:00.890Z · LW(p) · GW(p)

Mostly agreed. If I were to stand on a soapbox and say "light with a wavelength of 523.4371 nm is visible to the human eye", it would fall into the category of an unsubstantiated claim by a single person. But it is implied by the general knowledge that the human visual range is from roughly 400 nm to roughly 700 nm, and that has been confirmed by anyone who has looked at a spectrum with even crude wavelength calibration.

comment by Document · 2012-05-10T00:42:40.106Z · LW(p) · GW(p)

when a scientist says something, it's not the same as the Pope saying it.

Shouldn't that say that it is the same?

comment by Stephanie_Cunnane · 2012-04-04T03:27:55.600Z · LW(p) · GW(p)

Another learning which cost me much to recognize, can be stated in four words. The facts are friendly.

It has interested me a great deal that most psychotherapists, especially the psychoanalysts, have steadily refused to make any scientific investigation of their therapy, or to permit others to do this. I can understand this reaction because I have felt it. Especially in our early investigations I can well remember the anxiety of waiting to see how the findings came out. Suppose our hypotheses were disproved! Suppose we were mistaken in our views! Suppose our opinions were not justified! At such times, as I look back, it seems to me that I regarded the facts as potential enemies, as possible bearers of disaster. I have perhaps been slow in coming to realize that the facts are always friendly. Every bit of evidence one can acquire, in any area, leads one that much closer to what is true. And being closer to the truth can never be a harmful or dangerous or unsatisfying thing. So while I still hate to readjust my thinking, still hate to give up old ways of perceiving and conceptualizing, yet at some deeper level I have, to a considerable degree, come to realize that these painful reorganizations are what is known as learning, and that though painful they always lead to a more satisfying because somewhat more accurate way of seeing life. Thus at the present time one of the most enticing areas for thought and speculation is an area where several of my pet ideas have not been upheld by the evidence, I feel if I can only puzzle my way through this problem that I will find a much more satisfying approximation to the truth. I feel sure the facts will be my friends.

-- Carl Rogers, On Becoming a Person: A Therapist's View of Psychotherapy (1961)

Replies from: Dorikka, Document, Ezekiel
comment by Dorikka · 2012-04-05T16:54:55.193Z · LW(p) · GW(p)

Facts are friendly on average, that is. Individual pieces of evidence might lead you to update towards a wrong conclusion. /nitpick

Replies from: wedrifid, Stephanie_Cunnane
comment by wedrifid · 2012-04-05T16:56:36.024Z · LW(p) · GW(p)

Facts are friendly on average, that is. Individual pieces of evidence might lead you to update towards a wrong conclusion. /nitpick

Even then we could potentially nitpick even further, depending on what is meant by 'average'.

comment by Stephanie_Cunnane · 2012-04-06T04:54:11.863Z · LW(p) · GW(p)

Excellent point.

comment by Document · 2012-05-10T01:29:03.946Z · LW(p) · GW(p)

Another learning which cost me much to recognize, can be stated in four words. The facts are friendly.

A while ago I saw a good post or quote on LW on the problem of confusing a phrase one uses to encapsulate an insight with the insight itself. Unfortunately I don't remember where.

comment by Ezekiel · 2012-04-04T23:45:46.365Z · LW(p) · GW(p)

And being closer to the truth can never be a harmful or dangerous or unsatisfying thing.

Knowing about evolution is pretty cool, but I'd be a lot more satisfied if I could believe that we were created as the pinnacle of design by a super-awesome Thing that had a specific plan in mind (and that my nation - and, come to that, tribe - was even more pinnacle than everyone else).

Replies from: TheOtherDave, TimS
comment by TheOtherDave · 2012-04-05T00:10:39.883Z · LW(p) · GW(p)

...and if it turned out that believing that particular falsehood didn't have consequences that left you less satisfied.

Replies from: Ezekiel
comment by Ezekiel · 2012-04-05T10:03:08.484Z · LW(p) · GW(p)

Okay, hypothetical: Dying human. They believed in God their entire life and have lived as basically decent according to their own ethics, and therefore think they're going to be blissing out for the rest of infinity. They will believe this for the next couple of minutes, and then stop existing.

Would you, given the opportunity, dispel their illusion?

Replies from: TheOtherDave, JulianMorrison
comment by TheOtherDave · 2012-04-05T13:43:04.157Z · LW(p) · GW(p)

Depends on what I expected the result of doing so to be.

If I expected the result to be that they are more unhappy than they otherwise would be for the rest of their lives with no other compensating benefit (which is certainly the conclusion your hypothetical encourages), then no I wouldn't.

If I expected the result to be either that they are happier than they otherwise would be for the rest of their lives, or that there is some other compensating benefit to them knowing what will actually happen, then yes I would.

Why do you ask?

Replies from: Ezekiel
comment by Ezekiel · 2012-04-05T21:32:10.210Z · LW(p) · GW(p)

Because this is (to my mind) an example of a situation where the facts aren't friendly and the truth is harmful - thus (hopefully) justifying my objection to the original quote.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-05T21:46:59.368Z · LW(p) · GW(p)

OK. Thanks for clarifying.

comment by JulianMorrison · 2012-04-05T16:39:49.073Z · LW(p) · GW(p)

Dispel all their illusions, including the one that assigned negative utility to unavoidable dying. There are better things to do with 2 minutes than expecting fun you won't receive.

Replies from: Ben_Welchner
comment by Ben_Welchner · 2012-04-05T17:10:32.208Z · LW(p) · GW(p)

If you know of any illusions that give inevitably ceasing to exist negative utility to someone leading a positive-utility life, I would love to have them dispelled for me.

Replies from: JulianMorrison
comment by JulianMorrison · 2012-04-19T23:11:52.560Z · LW(p) · GW(p)

Sorry for the slow reply.

Hmm. I may be a bit biased because I don't really have a high valuation on being alive as such (which is to say utility[X] is nearly the same as utility[X and Julian is alive] for me, all other things being equal - it's why I am not signed up for cryonics).

However, I think that any utility calculus that negatively values the fun you're not going to have when inevitably dead is as silly as negatively valuing the fun you didn't get to have because said events preceded your birth, and you inevitably can't extend your life into the past. You get more chance to fulfil your values in the real world by making use of your 2 minutes than by anticipating values that are not going to happen. And I do very much place utility on my values being fulfilled in a real, rather than self-deceptive, way.

comment by TimS · 2012-04-05T00:00:05.936Z · LW(p) · GW(p)

Yes, the whole statement has an implicit "In the real world" premise.

I'd be happy if I had a magic wand that could violate the second law of thermodynamics, but in the real world . . .

Replies from: Ezekiel
comment by Ezekiel · 2012-04-05T10:49:28.245Z · LW(p) · GW(p)

I wasn't clear. Believing that would make me happy even if it wasn't true. There's no reason to assume reality would be nice enough to only hand us facts that we find satisfying.

If you happen to have a brain that finds the process of learning more satisfying than any possible falsehood, then that's great... But I don't think many people have that advantage.

Replies from: TimS, TheOtherDave
comment by TimS · 2012-04-05T14:00:15.814Z · LW(p) · GW(p)

There's a substantial minority in the community that dislikes the Litany of Gendlin, so you have plenty of company here.

But even granting the premise that believing true things conflicts with being happy, believing true things has been useful for achieving every other type of goal. So it seems like you are endorsing trading off achievement of other goals in order to maximize happiness. Without challenging your decision to adopt particular terminal values, I am unsure if your chosen tradeoff is sustainable.

Replies from: Ezekiel
comment by Ezekiel · 2012-04-05T21:39:08.884Z · LW(p) · GW(p)

I'm not endorsing that, for exactly the reason you said: knowing stuff, on average, will let you achieve your goals. The original quote, though, stated that the truth is "never unsatisfying", which seemed to me to be a false statement.

comment by TheOtherDave · 2012-04-05T13:58:12.900Z · LW(p) · GW(p)

You sound pretty confident that, if you believed that we were created as the pinnacle of design by a super-awesome Thing that had a specific plan in mind, and that your nation/tribe was even more pinnacle than everyone else, you would be happier than you are now.

Can you clarify your reasons for believing that? I mean, I grew up with a lot of people who believe that, and as a class they didn't seem noticeably happier than the people who didn't, so I'm inclined to doubt it. But I'm convinceable.

Replies from: Ezekiel
comment by Ezekiel · 2012-04-05T21:46:00.412Z · LW(p) · GW(p)

You got me, since during the time I did believe that I was a lot less happy than I am now, because that falsehood was part of a whole set of falsehoods which led to annoying obligations. But I do distinctly remember being satisfied with knowing the ultimate goal of the universe and my place in it, and how realising the truth made me feel unsatisfied.

The statement "the truth is never an unsatisfying thing" seems to be affect-heuristic reasoning: going from "truth is useful" to "truth is good" to "truth always feels good to know".

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-05T22:23:42.166Z · LW(p) · GW(p)

Sure. To the extent that you're simply arguing that the initial quote overreaches, I'm not disagreeing with you. But you seemed to be making more positive claims about the value of ignorance.

comment by VKS · 2012-04-04T10:23:55.914Z · LW(p) · GW(p)

Just as there are odors that dogs can smell and we cannot, as well as sounds that dogs can hear and we cannot, so too there are wavelengths of light we cannot see and flavors we cannot taste. Why then, given our brains wired the way they are, does the remark, "Perhaps there are thoughts we cannot think," surprise you?

Richard Hamming

Replies from: Eliezer_Yudkowsky, majus, MixedNuts, None
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-04-04T20:10:47.339Z · LW(p) · GW(p)

It surprises people like Greg Egan, and they're not entirely stupid, because brains are Turing complete modulo the finite memory - there's no analogue of that for visible wavelengths.

Replies from: AspiringKnitter, Vaniver, Dmytry
comment by AspiringKnitter · 2012-04-05T06:05:40.693Z · LW(p) · GW(p)

If this weren't Less Wrong, I'd just slink away now and pretend I never saw this, but:

I don't understand this comment, but it sounds important. Where can I go and what can I read that will cause me to understand statements like this in the future?

Replies from: Viliam_Bur, Nornagest, jeremysalwen
comment by Viliam_Bur · 2012-04-05T09:15:23.220Z · LW(p) · GW(p)

When speaking about sensory inputs, it makes sense to say that different species (even different individuals) have different ranges, so one can perceive something that another can't.

With computation, it is known that sufficiently strong programming languages are in some sense equal. For example, you could speak about the relative advantages of Basic, C/C++, Java, Lisp, Pascal, Python, etc., but in each of these languages you can write a simulator for the remaining ones. This means that if an algorithm can be implemented in one of these languages, it can be implemented in all of them -- in the worst case, it would be implemented as a simulation of another language running its native implementation.

There are some technical details, though. Simulating another program is slower and requires more memory than the original program. So it could be argued that on a given hardware you could do a program in language X which uses all the memory and all available time, so it does not necessarily follow that you can do the same program in language Y. But on this level of abstraction we ignore hardware limits. We assume that the computer is fast enough and has enough memory for whatever purpose. (More precisely, we assume that in available time a computer can do any finite number of computation steps; but it cannot do an infinite number of steps. The memory is also unlimited, but in a finite time you can only manage to use a finite amount of memory.)

So on this level of abstraction we only care about whether something can or cannot be implemented by a computer. We ignore time and space (i.e. speed and memory) constraints. Some problems can be solved by algorithms, others can not. (Then, there are other interesting levels of abstraction which care about time and space complexity of algorithms.)

Are all programming languages equal in the above sense? No. For example, although programmers generally want to avoid infinite loops in their programs, if you remove the potential for infinite loops from a programming language (e.g. in Pascal you forbid "while" and "repeat" commands, and the possibility of calling functions recursively), you lose the ability to simulate programming languages which have this potential, and you lose the ability to solve some problems. On the other hand, some universal programming languages seem extremely simple -- a famous example is the Turing machine. This is very useful, because it is easier to do mathematical proofs about a simple language. For example, if you invent a new programming language X, all you have to do to prove its universality is write a Turing machine simulator in it, which is usually very simple.

Now back to the original discussion... Eliezer suggests that brain functionality should be likened to computation, not to sensory input. A human brain is computationally universal, because (given enough time, pen, and paper) we can simulate any computer program, so all brains should be equal when optimally used (differing only in speed and use of resources). In another comment he adds that the ability to compute isn't the same as the ability to understand. Therefore (my conclusion) what one human can understand, another human can at least correctly calculate without understanding, given a correct algorithm.
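
The universality argument above can be made concrete with a short sketch (mine, not from the thread): a Turing machine is nothing but a transition table plus a loop, so any language that can express the loop below can, in principle, run any Turing machine. The machine and tape here are toy examples chosen for illustration.

```python
# A minimal Turing-machine simulator. The machine is just a transition
# table driven by one loop, which is the heart of the universality
# argument: any language that can express this loop can simulate any
# Turing machine (up to memory limits).

def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """transitions: {(state, symbol): (new_state, new_symbol, move)}
    where move is -1 (left), +1 (right), or 0. Returns the final tape
    contents as a string, with surrounding blanks stripped."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Toy machine: flip every bit on the tape, then halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_tm(flip, "10110"))  # -> 01001
```

The simulator itself is a dozen lines; that simplicity is exactly why universality proofs usually target the Turing machine rather than a full programming language.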

Replies from: AspiringKnitter
comment by AspiringKnitter · 2012-04-05T19:51:31.141Z · LW(p) · GW(p)

Wow. That's really cool, thank you. Upvoted you, jeremysalwen and Nornagest. :)

Could you also explain why the HPMoR universe isn't Turing computable? The time-travel involved seems simple enough to me.

Replies from: thomblake
comment by thomblake · 2012-04-05T20:57:48.251Z · LW(p) · GW(p)

Not a complete answer, but here's commentary from a ffdn review of Chapter 14:

Kevin S. Van Horn, 7/24/10, on chapter 14:
Harry is jumping to conclusions when he tells McGonagall that the Time-Turner isn't even Turing computable. Time travel simulation is simply a matter of solving fixed-point equation f(x) = x. Here x is the information sent back in time, and f is a function that maps the information received from the future to the information that gets sent back in time. If a solution exists at all, you can find it to any desired degree of accuracy by simply enumerating all possible rational values of x until you find one that satisfies the equation. And if f is known to be both continuous and have a convex compact range, then the Brouwer fixed-point theorem guarantees that there will be a solution.

So the only way I can see that simulating the Time-Turner wouldn't be Turing computable would be if the physical laws of our universe give rise to fixed-point equations that have no solutions. But the existence of the Time-Turner then proves that the conditions leading to no solution can never arise.
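
The enumeration strategy Van Horn describes can be sketched in a few lines (a toy of my own; the dynamics function is invented, not anything from the story): pick candidate values of x and test each one for self-consistency.

```python
# Toy version of the fixed-point search described above: x is the
# "information sent back in time", f maps what is received from the
# future to what gets sent back, and a consistent timeline is an x
# with f(x) == x. The map f below is made up purely for illustration.

def find_fixed_point(f, candidates):
    """Return the first candidate x satisfying f(x) == x, or None."""
    for x in candidates:
        if f(x) == x:
            return x
    return None

f = lambda x: (3 * x + 4) % 10           # hypothetical time-loop map
print(find_fixed_point(f, range(10)))    # -> 3, since f(3) == 3
```

If f is continuous with a convex compact range, Brouwer guarantees such an x exists, as the review notes; the enumeration above is the brute-force analogue for a discrete toy case, and returns None when the loop has no consistent solution among the candidates.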

Replies from: Nick_Tarleton, johnswentworth, AspiringKnitter
comment by Nick_Tarleton · 2012-04-06T02:04:48.299Z · LW(p) · GW(p)

I got the impression that what "not Turing-computable" meant is that there's no way to only compute what 'actually happens'; you have to somehow iteratively solve the fixed-point equation, maybe necessarily generating experiences (waves hands confusedly) corresponding to the 'false' timelines.

Replies from: tgb
comment by tgb · 2012-04-10T23:29:12.842Z · LW(p) · GW(p)

Sounds rather like our own universe, really.

comment by johnswentworth · 2012-04-09T22:52:42.529Z · LW(p) · GW(p)

There's also the problem of an infinite number of possible solutions.

Replies from: faul_sname
comment by faul_sname · 2012-04-13T05:49:00.400Z · LW(p) · GW(p)

The number of solutions is finite but (very, very, mind-bogglingly) large.

comment by AspiringKnitter · 2012-04-05T23:14:25.900Z · LW(p) · GW(p)

Ah. It's math.

:) Thanks.

comment by Nornagest · 2012-04-05T06:42:10.296Z · LW(p) · GW(p)

A computational system is Turing complete if certain features of its operation can reproduce those of a Turing machine, which is a sort of bare-bones abstracted model of the low-level process of computation. This is important because you can, in principle, simulate the active parts of any Turing complete system in any other Turing complete system (though doing so will be inefficient in a lot of cases); in other words, if you've got enough time and memory, you can calculate anything calculable with any system meeting a fairly minimal set of requirements. Thanks to this result, we know that there's a deep symmetry between different flavors of computation that might not otherwise be obvious. There are some caveats, though: in particular, the idealized version of a Turing machine assumes infinite memory.

Now, to answer your actual question, the branch of mathematics that this comes from is called computability theory, and it's related to the study of mathematical logic and formal languages. The textbook I got most of my understanding of it from is Hopcroft, Motwani, and Ullman's Introduction to Automata Theory, Languages, and Computation, although it might be worth looking through the "Best Textbooks on Every Subject" thread to see if there's a consensus on another.

Replies from: MarkusRamikin
comment by MarkusRamikin · 2012-04-05T07:06:07.758Z · LW(p) · GW(p)

infinite memory space

Curious, does "memory space" mean something more than just "memory"?

Replies from: wedrifid, Nornagest
comment by wedrifid · 2012-04-05T10:05:50.700Z · LW(p) · GW(p)

infinite memory space

Curious, does "memory space" mean something more than just "memory"?

Just a little more specific. Some people may hear "memory" and associate it with, say, the duration of their memory rather than how much can be physically held. For example, when a human is said to have a 'really good memory' we don't tend to be making a claim about the theoretical maximum amount of stuff they could remember.

comment by Nornagest · 2012-04-05T07:11:42.328Z · LW(p) · GW(p)

No, although either or both might be a little misleading depending on what connotations you attach to it: an idealized Turing machine stores all its state on a rewritable tape (or several tapes, but that's equivalent to the one-tape version) of symbols that's infinite in both directions. You could think of that as analogous to both memory and disk, or to whatever the system you're actually working with uses for storage.

Replies from: MarkusRamikin
comment by MarkusRamikin · 2012-04-05T07:13:35.957Z · LW(p) · GW(p)

Right, I know that. Was just curious why the extra verbiage in a post meant to explain something.

Replies from: Nornagest
comment by Nornagest · 2012-04-05T07:15:02.004Z · LW(p) · GW(p)

Because it's late and I'm long-winded. I'll delete it.

comment by Vaniver · 2012-04-04T20:35:43.349Z · LW(p) · GW(p)

brains are Turing complete modulo the finite memory

What does that statement mean in the context of thoughts?

That is, when I think about human thoughts I think about information processing algorithms, which typically rely on hardware set up for that explicit purpose. So even though I might be able to repurpose my "verbal manipulation" module to do formal logic, that doesn't mean I have a formal logic module.

Any defects in my ability to repurpose might be specific to me: I might be able to think the thought "A -> B, ~A, therefore ~B" with the flavor of trueness, and another person can only think that thought with the flavor of falseness. If the truth flavor is as much a part of the thought as the textual content, then the second thinker cannot think the thought that the first thinker can.

Aren't there people who can hear sounds but not music? Are their brains not Turing complete? Are musical thoughts ones they cannot think?

Replies from: Eliezer_Yudkowsky, Will_Newsome
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-04-04T21:33:35.059Z · LW(p) · GW(p)

It means nothing, although Greg Egan is quite impressed by it. Sad but true: Someone with an IQ of, say, 90 can be trained to operate a Turing machine, but will in all probability never understand matrix calculus. The belief that Turing-complete = understanding-complete is false. It just isn't stupid.

Replies from: komponisto, Elithrion, thomblake, Incorrect, JulianMorrison
comment by komponisto · 2012-04-05T21:57:52.029Z · LW(p) · GW(p)

[That human brains are Turing-complete] means nothing, although Greg Egan is quite impressed by it. Sad but true: Someone with an IQ of, say, 90 can be trained to operate a Turing machine, but will in all probability never understand matrix calculus.

It doesn't mean nothing; it means that people (like machines) can be taught to do things without understanding them.

(They can also be taught to understand, provided you reduce understanding to Turing-machine computations, which is harder. "Understanding that 1+1 = 2" is not the same thing as being able to output "2" to the query "1+1=".)

comment by Elithrion · 2012-04-05T21:39:20.276Z · LW(p) · GW(p)

I would imagine that he can be taught matrix calculus, given sufficient desire (on his and the teachers' parts), teaching skill, and time. I'm not sure if in practice it is possible to muster enough desire or time to do it, but I do think that understanding is something that can theoretically be taught to anyone who can perform the mechanical calculations.

Replies from: Eliezer_Yudkowsky, David_Gerard, DanArmak
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-04-05T21:43:25.206Z · LW(p) · GW(p)

Have you ever tried to teach math to anyone who is not good at math? In my youth I once tutored a woman who was poor, but motivated enough to pay $40/session. A major obstacle was teaching her how to calculate (a^b)^c and getting her to reliably notice that minus times minus equals plus. Despite my attempts at creative physical demonstrations of the notion of a balanced scale, I couldn't get her to really understand the notion of doing the same things to both sides of a mathematical equation. I don't think she would ever understand what was going on in matrix calculus, period, barring "teaching methods" that involve neural reprogramming or gain of additional hardware.

Replies from: matt, Elithrion, Incorrect, NancyLebovitz, DanArmak
comment by matt · 2012-04-13T03:50:38.812Z · LW(p) · GW(p)

Your claim is too large for the evidence you present in support of it.

Teaching someone math who is not good at math is hard, but "will in all probability never understand matrix calculus"!? I don't think you're using the Try Harder.

Assume teaching is hard (list of weak evidence: it's a three year undergraduate degree; humanity has hardly allowed itself to run any proper experiments in the field, and those that have been run seem usually to be generally ignored by professional practitioners; it's massively subject to the typical mind fallacy and most practitioners don't know that fallacy exists). That you, "in your youth" (without having studied teaching), "once" tutored a woman who you couldn't teach very well… doesn't support any very strong conclusion.

It seems very likely to me that Omega could teach matrix calculus to someone with IQ 90 given reasonable time and motivation from the student. One of the things I'm willing to devote significant resources to in the coming years is making education into a proper science. Given the tools of that proper science I humbly submit that you could teach your former student a lot. Track the progress of the Khan Academy for some promising developments in the field.

Replies from: wedrifid, DanArmak
comment by wedrifid · 2012-04-13T05:37:18.268Z · LW(p) · GW(p)

humanity has hardly allowed itself to run any proper experiments in the field, and those that have been run seem usually to be generally ignored by professional practitioners

What are the experiments that are generally ignored?

comment by DanArmak · 2012-04-23T13:13:30.181Z · LW(p) · GW(p)

list of weak evidence

Some of it is weak evidence for the hardness claim (3 years degree), some against (all the rest). Does that match what you meant?

Replies from: matt
comment by matt · 2012-04-24T07:28:04.729Z · LW(p) · GW(p)

I'd intended a different meaning of "hard". On reflection your interpretation seems a very reasonable inference from what I wrote.

What I meant: Teaching is hard enough that you shouldn't expect to find it easy without having spent any time studying it. Even as a well educated westerner, the bits of teaching you can reasonably expect to pick up won't take you far down the path to mastery.

(Thank you for you comment - it got me thinking.)

comment by Elithrion · 2012-04-05T23:52:31.315Z · LW(p) · GW(p)

No, I haven't, and reading your explanation I now believe that there is a fair chance you are correct. However, one problem I have with it is that you're describing a few points of frustration, some of which I assume you ended up overcoming. I am not entirely convinced that had she spent, say one hundred hours studying each skill that someone with adequate talent could fully understand in one, she would not eventually fully understand it.

In cases of extreme trouble, I can imagine her spending forty hours working through a thousand examples, until mechanically she can recognise every example reasonably well, and find the solution correctly, then another twenty working through applications, then another forty hours analysing applications in the real world until the process of seeing the application, formulating the correct problem, and solving it becomes internalised. Certainly, just because I can imagine it doesn't make it true, but I'm not sure on what grounds I should prefer the "impossibility" hypothesis to the "very very slow learning" hypothesis.

comment by Incorrect · 2012-04-05T21:54:53.298Z · LW(p) · GW(p)

I can't imagine how hard it would be to learn math without the concept of referential transparency.
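For readers unfamiliar with the term, here is a minimal sketch of referential transparency (in Python, chosen as an assumption; the comment names no language):

```python
# A referentially transparent expression can be replaced by its value
# anywhere, like in algebra -- the substitution habit that makes
# equation-manipulation feel natural.

def square(x):
    return x * x            # pure: output depends only on input

# Substituting the value 9 for square(3) is always safe:
assert square(3) + square(3) == 2 * square(3)

history = []

def square_logged(x):
    history.append(x)       # side effect: NOT referentially transparent
    return x * x

square_logged(3)
square_logged(3)
assert history == [3, 3]    # the two calls are not interchangeable with one value
```

The second function returns the same numbers, but replacing its calls with their values would change the program's behavior, which is exactly the property the first function lacks.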

Replies from: MixedNuts
comment by MixedNuts · 2012-04-09T11:52:01.444Z · LW(p) · GW(p)

Not all that hard if that's the only sticking point. I acquired it quite late myself.

comment by NancyLebovitz · 2012-04-24T07:45:18.625Z · LW(p) · GW(p)

What was your impression of her intelligence otherwise?

Suzette Haden Elgin (a science fiction author and linguist who was quite intelligent with and about words) described herself as intractably bad at math.

comment by DanArmak · 2012-04-23T13:15:54.800Z · LW(p) · GW(p)

This anecdote gives very little information on its own. Can you describe your experience teaching math to other people - the audience, the investment, the methods, the outcome? Do you have any idea whether that one woman eventually succeeded in learning some of what you couldn't teach her, and if so, how?

(ETA: I do agree with the general argument about people who are not good at math. I'm only saying this particular story doesn't tell us much about that particular woman, because we don't know how good you are at teaching, etc.)

comment by David_Gerard · 2012-04-08T09:12:38.280Z · LW(p) · GW(p)

I fear you're committing the typical mind fallacy. The dyscalculic could simulate a Turing machine, but all of mathematics, including basic arithmetic, is whaargarbl to them. They're often highly intelligent (though of course the diagnosis is "intelligent elsewhere, unintelligent at maths"), good at words and social things, but literally unable to calculate 17+17 more accurately than "somewhere in the twenties or thirties" or "I have no idea" without machine assistance. I didn't believe it either until I saw it.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-08T14:38:45.770Z · LW(p) · GW(p)

Do you find this harder to believe than, say, aphasia? I've never seen it, but I have no difficulty believing it.

Replies from: David_Gerard
comment by David_Gerard · 2012-04-08T15:19:46.249Z · LW(p) · GW(p)

Well, I certainly don't disbelieve in it now. I first saw it at eighteen, in first-year psychology, in the bit where they tried to beat basic statistics into our heads.

comment by DanArmak · 2012-04-23T13:38:19.522Z · LW(p) · GW(p)

I can't imagine how hard it is to learn to program if you don't instinctively know how. Yet I know it is that hard for many people. Some succeed in learning, some don't. Those who do still have big differences in ability, and ability at a young age seems to be a pretty good predictor of lifetime ability.

I realize I must have learned the basics at some point, although I don't remember it. And I remember learning many more advanced concepts during the many years since. But for both the basics and the advanced subjects, I never experienced anything I can compare to what I'd call "learning" in other subjects I studied.

When programming, if I see/read something new, I may need some time (seconds or hours) to understand it, then once I do, I can use it. It is cognitively very similar to seeing a new room for the first time. It's novel, but I understand it intuitively and in most cases quickly.

When I studied e.g. biology or math at university, I had to deliberately memorize, to solve exercises before understanding the "real thing", to accept that some things I could describe, I couldn't duplicate by building from scratch, no matter how much time, materials, and tools I had. This never happened to me in programming. I may not fully understand the domain problem that the program is manipulating. But I always understand the program itself.

And yet I've seen people struggle to understand the most elementary concepts of programming, like, say, distinguishing between names and values. I've had to work with some pretty poor programmers, and had the official job of on-the-job mentoring newbies on two occasions. I know it can be very difficult to teach effectively, and very difficult to learn.

Given that I encountered a heavily preselected set of people, who were trying to make programming their main profession, it's easy for me to believe that - at the extreme - for many people elementary programming is impossible to learn, period. And the same should apply to math and any other "abstract" subject for which biologically normal people don't have dedicated thinking modules in their brains.
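The names-versus-values confusion mentioned above can be made concrete with a short example (in Python, as an assumption; the comment names no language):

```python
# Assignment binds a name to a value; it does not copy the value.
a = [1, 2, 3]
b = a            # b is a second name for the SAME list, not a new list
b.append(4)
print(a)         # [1, 2, 3, 4] -- the change is visible through either name

c = list(a)      # an explicit copy creates a distinct value
c.append(5)
print(a)         # still [1, 2, 3, 4] -- a is unaffected
```

Beginners who model `b = a` as "copy the list into b" predict the wrong output for the first print, which is the kind of elementary misunderstanding described above.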

comment by thomblake · 2012-04-05T02:36:48.119Z · LW(p) · GW(p)

The belief that Turing-complete = understanding-complete is false. It just isn't stupid.

I'm not sure what you mean by understanding-complete, but remember that the Turing-complete system is both the operator and any machinery they are manipulating.

comment by Incorrect · 2012-04-05T02:17:17.947Z · LW(p) · GW(p)

So you are considering a man in a Chinese room to lack understanding?

Replies from: J_Taylor
comment by J_Taylor · 2012-04-05T02:37:41.330Z · LW(p) · GW(p)

Obviously the man in the Chinese room lacks understanding, by most common definitions of understanding. It is the room as a system which understands Chinese. (Assuming lookup tables can understand. By functional definitions, they should be able to.)

Replies from: Incorrect
comment by Incorrect · 2012-04-05T12:09:57.153Z · LW(p) · GW(p)

But with a person it becomes a bit more complicated because it depends on what we are referring to when we say their name. I was trying to make an allusion to Blindsight.

comment by JulianMorrison · 2012-04-20T14:41:23.352Z · LW(p) · GW(p)

It means you could, in theory, run an AI on them (slowly).

comment by Will_Newsome · 2012-04-04T21:13:02.407Z · LW(p) · GW(p)

Aren't there people who can hear sounds but not music?

FWIW I've read a study that says about 50% of people can't tell the difference between a major and a minor chord even when you label them happy/sad. [ETA: Happy/sad isn't the relevant dimension, see the replies to this comment.] I have no idea how probable that is, but if true it would imply that half of the American population basically can't hear music.

Replies from: army1987, khafra, Bluehawk, None
comment by A1987dM (army1987) · 2012-04-05T16:05:43.285Z · LW(p) · GW(p)

http://languagelog.ldc.upenn.edu/nll/?p=2074

It shocked the hell out of me, too.

Replies from: Dmytry, arundelo, wedrifid
comment by Dmytry · 2012-04-05T16:55:31.879Z · LW(p) · GW(p)

This is weird. It is hard for me to hear the difference in the cadence, but crystal clear otherwise. In the cadence, the problem for me is that the notes drag on, like when you press the pedal on a piano a bit, which makes it hard to discern the difference.

Maybe they lost something in retelling here? Made up new stimuli for which it doesn't work because of harmonics or something?

Or maybe it's just me and everyone on this thread? I have a lot of trouble hearing speech through noise (like that of flowing water); I always have to tell others, I'm not hearing what you're saying, I'm washing the dishes. Though I've no idea how well other people can hear something when they are washing the dishes; maybe I care too much not to pretend to listen when I don't hear.

This needs proper study.

Replies from: arundelo, Scottbert, arundelo, arundelo, tgb
comment by arundelo · 2012-04-05T23:22:03.113Z · LW(p) · GW(p)

The following recordings are played on an acoustic instrument by a human (me), and they have spaces in between the chords. The chord sequences are randomly generated (which means that the major-to-minor ratio is not necessarily 1:1, but all of them do have a mixture of major and minor chords).

Each of the following two recordings is a sequence of eight C major or C minor chords:

Each of the following two recordings is a sequence of eight "cadences" -- groups of four chords that are either

F B♭ C F

or

F B♭ Cminor F

Edit: Here's a listing of the chords in all four sound files.

Edit 2 (2012-Apr-22): I added another recording that contains these chords:

F   B♭   C   F
F   B♭   Cmi F

repeated over and over, while the balance between the voices is varied, from "all voices roughly equal" to "only the second voice from the top audible". The second voice from the top is the only one that is different on the C minor chord. My idea is that hearing the changing voice foregrounded from its context like this might make it easier to pick it out when it's not foregrounded.
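For anyone wanting to hear how small the difference being tested is: in equal temperament (a simplification; the recordings above are on an acoustic instrument, so this is only an approximation of their tuning), a C major and C minor triad share two of three notes, and the third differs by a single semitone:

```python
# Equal-temperament frequencies of the C major vs C minor triad.
# Only the middle note (E vs E-flat) differs, by one semitone (~6%).

def freq(midi_note):
    """Equal-temperament frequency in Hz, with A4 (MIDI 69) = 440 Hz."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

C4, Eb4, E4, G4 = 60, 63, 64, 67
c_major = [freq(n) for n in (C4, E4, G4)]
c_minor = [freq(n) for n in (C4, Eb4, G4)]

print(round(freq(E4), 2))   # 329.63 -- the major third
print(round(freq(Eb4), 2))  # 311.13 -- the minor third
```

So the whole major/minor distinction in these recordings rides on roughly an 18 Hz shift in one voice, which may help explain why it is easy to miss in a busier texture like the cadence.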

comment by Scottbert · 2012-04-09T15:12:11.724Z · LW(p) · GW(p)

Ditto for me -- The difference between the two chords is crystal clear, but in the cadence I can barely hear it.

I'm not a professional, but I sang in school chorus for 6 years, was one of the more skilled singers there, I've studied a little musical theory, and I apparently have a lot of natural talent. And the first time I heard the version played in cadence I didn't notice the difference at all. Freaky. I know how that post-doc felt when she couldn't hear the difference in the chords.

comment by arundelo · 2012-04-22T16:56:24.740Z · LW(p) · GW(p)

I added another recording. See "Edit 2" in this comment for an explanation.

comment by arundelo · 2012-04-05T21:36:22.953Z · LW(p) · GW(p)

Maybe they lost something in retelling here? Made up new stimuli for which it doesn't work because of harmonics or something?

Nope, the audio examples are all straightforward realizations of the corresponding music notation. (They are easy for me to tell apart.)

Replies from: Dmytry
comment by Dmytry · 2012-04-05T23:20:43.174Z · LW(p) · GW(p)

Still, the notes drag on, the notes have harmonics, etc. It is not pure sine waves that abruptly stop and give time for the ear to 'clear' of afterimage-like sound.

I hear the difference in the cadence, it's just that I totally can't believe it can possibly be clearer than just the one chord then another chord. I can tell apart just the two chords at much lower volume level and/or paying much less attention.

comment by tgb · 2012-04-05T18:25:54.704Z · LW(p) · GW(p)

I am with you on easily telling the two apart in the original chords but being unable to reliably tell the difference in the cadence version.

comment by arundelo · 2012-04-05T21:55:56.553Z · LW(p) · GW(p)

I've had between a dozen and two dozen music students over the years. (Guitar and bass guitar.) Some of them started out having trouble telling the difference between ascending and descending intervals. (In other words, some of them had bad ears.) All of them improved, and all of them, with practice, were able to hear me play something and play it back by ear. I'm sure there are some people who are neurologically unable to do this, but in general, it is a learnable skill.

The cognitive fun! website has a musical interval exercise.

Edit: One disadvantage to that exercise/game for people who aren't already familiar with the intervals is that it doesn't have you differentiate between major and minor intervals. (So if you select e.g. 2 and 8 as your intervals, you'll be hearing three different intervals, because some of the 2nds will be minor rather than major.) Sooner or later I'll write my own interval game!

Replies from: alex_zag_al
comment by alex_zag_al · 2012-04-06T01:26:55.997Z · LW(p) · GW(p)

is this what you're looking for?

http://www.musictheory.net/exercises/ear-interval

Replies from: arundelo
comment by arundelo · 2012-04-06T01:33:25.280Z · LW(p) · GW(p)

That's pretty cool. Are there keybindings?

Replies from: alex_zag_al
comment by alex_zag_al · 2012-04-06T16:02:24.706Z · LW(p) · GW(p)

I don't know, doesn't look like it.

comment by wedrifid · 2012-04-05T16:20:43.415Z · LW(p) · GW(p)

http://languagelog.ldc.upenn.edu/nll/?p=2074

It shocked the hell out of me, too.

Likewise.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-05T16:31:37.891Z · LW(p) · GW(p)

I was going to comment about how the individual chords were clearly different to my ear but the "stereotypical I-IV-V-I cadential sequences" were indistinguishable, precisely the reverse of the experience the Bell Labs post doc reportedly reported. Then I read the comments on the article and realized this is fairly common, so I deleted the comment. Then I decided to comment on it anyway. Now I have.

Replies from: wedrifid
comment by wedrifid · 2012-04-05T16:42:50.814Z · LW(p) · GW(p)

I had to listen to that second part several times before I could pick up the difference too. They sound equivalent unless I concentrate.

Replies from: Dmytry
comment by Dmytry · 2012-04-05T17:02:44.779Z · LW(p) · GW(p)

And me. I guess - as the most probable explanation - they just lost something crucial in retelling. The notes drag on a fair bit in the second part. I can hear the difference if I really concentrate, but it's like a typo in the text, if the text were blurred.

Replies from: orthonormal
comment by orthonormal · 2012-04-07T20:58:33.136Z · LW(p) · GW(p)

The second sequence sounded jarringly wrong to me, FWIW.

comment by khafra · 2012-04-06T13:21:30.172Z · LW(p) · GW(p)

At first, I found it unbelievable. Then I remembered that I have imperfect perfect pitch: I learned both piano and French horn, the latter of which is transposed up a perfect fourth. Especially when I'm practicing regularly, I can usually name a note or simple chord when I hear it; but I'm often off by a perfect fourth.

Introspecting on the difference between being right about a note and wrong about a note makes me believe people can confuse major and minor, but still enjoy music.

comment by Bluehawk · 2012-04-05T23:32:31.112Z · LW(p) · GW(p)

Might have something to do with the fact that happy/sad is neither an accurate nor an encompassing description of the uses of major/minor chords, unless you place a C major and a C or A minor directly next to each other. I for one find that when I try to tell the difference solely on that basis, I might as well flip a coin and my success rate would go down only slightly. When I come at it from other directions and ignore the emotive impact, my success rate is much higher.

In short: Your conclusion doesn't follow from the evidence.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-06T00:15:46.287Z · LW(p) · GW(p)

I stated the evidence incorrectly, look at the uncle/aunt of your comment (if you haven't already) for the actual evidence.

Replies from: Bluehawk
comment by Bluehawk · 2012-04-07T21:54:11.372Z · LW(p) · GW(p)

Yeah, I spotted that after making my comment, but after that I wasn't sure whether you were citing the same source material or not. The actual evidence does say a lot more about how humans (don't?) perceive musical sounds. Thanks for clarifying, though.

comment by [deleted] · 2012-04-04T22:35:32.959Z · LW(p) · GW(p)

I'm curious; 50% of what sample? total human population or USians or what?

comment by Dmytry · 2012-04-05T16:58:54.509Z · LW(p) · GW(p)

There's the halting problem, so here you go. There are also thoughts you'll never arrive at, because your arriver-at-thoughts won't reach them, even though you could think them if told of them.
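The halting-problem reference can be unpacked with the standard diagonal sketch (in Python, as an illustration; the "decider" here is a deliberately fake stand-in):

```python
# Sketch of the diagonal argument: no total function can correctly
# report whether an arbitrary program halts on a given input.

def halts(program, arg):
    """HYPOTHETICAL halting decider. Any concrete implementation
    (here: one that always answers True) is defeated below."""
    return True

def contrarian(program):
    # Do the opposite of whatever halts() predicts about us.
    if halts(program, program):
        while True:
            pass            # loop forever
    return "halted"

# If halts(contrarian, contrarian) says True, then contrarian(contrarian)
# loops forever; if it says False, that call returns. Either way the
# prediction is wrong, so no correct halts() can exist.
prediction = halts(contrarian, contrarian)
print(prediction)  # True (and, by the argument above, wrong)
```

Whatever truth the decider outputs, `contrarian` does the opposite, so "does this program halt?" is an example of a question whose answer exists but cannot, in general, be reached.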

comment by majus · 2012-04-10T23:31:02.111Z · LW(p) · GW(p)

In Pinker's book "How the Mind Works" he asks the same question. His observation (as I recall) was that much of our apparently abstract logical abilities are done by mapping abstractions like math onto evolved subsystems with different survival purposes in our ancestors: pattern recognition, 3D spatial visualization, etc. He suggests that some problems seem intractable because they don't map cleanly to any of those subsystems.

comment by MixedNuts · 2012-04-04T16:06:45.482Z · LW(p) · GW(p)

Because thoughts don't behave much like perceptions at all, so that wouldn't occur to us or convince us much once we hear it. Are there any thoughtlike things we don't get but can indirectly manipulate?

Replies from: VKS, Vaniver, TheOtherDave
comment by VKS · 2012-04-04T16:17:21.293Z · LW(p) · GW(p)

Extremely large numbers.

(among other things)

comment by Vaniver · 2012-04-04T17:20:23.768Z · LW(p) · GW(p)

Parity transforms as rotations in four-dimensional space.
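One way to make Vaniver's point concrete (a sketch, using plain Python lists as 4-vectors): a 180-degree rotation in the x-w plane of 4-space, restricted to the w = 0 hyperplane, acts on 3D points as the mirror reflection x -> -x, which is not achievable by any rotation within 3D itself.

```python
import math

def rotate_xw(p, theta):
    """Rotate the 4-vector p = [x, y, z, w] by theta in the x-w plane."""
    x, y, z, w = p
    c, s = math.cos(theta), math.sin(theta)
    return [c * x - s * w, y, z, s * x + c * w]

p = [2.0, 3.0, 5.0, 0.0]           # a 3D point embedded at w = 0
q = rotate_xw(p, math.pi)          # rotate halfway around through w
print([round(v, 6) for v in q])    # [-2.0, 3.0, 5.0, 0.0] -- a 3D reflection
```

The point leaves the w = 0 hyperplane during the rotation and comes back mirrored, which is the sense in which a parity transform is a rotation in four-dimensional space.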

comment by TheOtherDave · 2012-04-04T16:33:13.342Z · LW(p) · GW(p)

thoughts don't behave much like perceptions at all

Can you expand on what you mean by that? There are many ways in which thoughts behave quite a bit like perceptions, which is unsurprising since they are both examples of operations clusters of neurons can perform, which is a relatively narrow class of operations. Video games behave quite a bit like spreadsheets in a similar way.

Of course, there are also many ways in which video games behave nothing at all like spreadsheets, and thoughts behave nothing like perceptions.

Replies from: MixedNuts
comment by MixedNuts · 2012-04-04T17:38:31.645Z · LW(p) · GW(p)

Naively speaking, if Alice can think a thought, she can just tell Bob, and he will. Dogs can't tell us what ultrasounds sound like, but that's for the same reason they can't tell us what regular sounds sound like.

Replies from: Eugine_Nier, TheOtherDave
comment by Eugine_Nier · 2012-04-05T01:04:13.501Z · LW(p) · GW(p)

That's assuming the thought can be expressed in language.

comment by TheOtherDave · 2012-04-04T17:45:53.157Z · LW(p) · GW(p)

Even if we posit that for every pair of humans X,Y if X thinks thought T then Y is capable of thinking T, it doesn't follow that for all possible Ts, X and Y are capable of thinking T.

That is, whether Alice can think the thought in the first place is not clear.

Replies from: MixedNuts
comment by MixedNuts · 2012-04-04T18:22:22.911Z · LW(p) · GW(p)

If you limit yourself to humans, yes. But at least one mind has to be able to think a thought for that thought to exist.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-04T18:57:28.718Z · LW(p) · GW(p)

Ah, I thought you were limiting yourself to humans, given your example.

If you're asserting that for every pair of cognitive systems X,Y (including animals, aliens, sufficiently sophisticated software, etc.) if X thinks thought T then Y is capable of thinking T, then we just disagree.

Replies from: MixedNuts
comment by MixedNuts · 2012-04-04T19:13:07.725Z · LW(p) · GW(p)

Yes, transmission of thoughts between sufficiently different minds breaks down, so we recover the possibility of thoughts that can be thought but not by us. But that's a sufficiently different reason from why there are sensations we can't perceive to show that the analogy is very shallow.

comment by [deleted] · 2012-04-04T20:19:04.528Z · LW(p) · GW(p)

It would surprise me, since no one could ever give me an example. I'm not sure what kind of evidence could give me good reason to think that there are thoughts that I cannot think.

Replies from: Eugine_Nier, Desrtopa, BillyOblivion
comment by Eugine_Nier · 2012-04-05T01:07:32.514Z · LW(p) · GW(p)

Try visualizing four spatial dimensions.

Replies from: wnoise, Nominull, CronoDAS, Multiheaded, wedrifid, faul_sname, None, Richard_Kennaway
comment by wnoise · 2012-04-05T02:03:08.219Z · LW(p) · GW(p)

Just visualize n dimensions, and then set n = 4.

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2012-04-05T12:40:14.617Z · LW(p) · GW(p)

You might as well tell me to 'just' grow wings and fly away...

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-04-05T15:14:33.848Z · LW(p) · GW(p)

I believe wnoise was making a joke-- one that I thought was moderately funny.

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2012-04-06T07:59:59.149Z · LW(p) · GW(p)

I thought it might be, and if I'd read it elsewhere, I'd have been sure of it - but this is LessWrong, which is chock-full of hyperintelligent people whose abilities to do math, reason and visualize are close to superpowers from where I am. You people seriously intimidate me, you know. (Just because I feel you're so much out of my league, not for any other reason.)

Replies from: wnoise, NancyLebovitz
comment by wnoise · 2012-04-12T19:40:17.905Z · LW(p) · GW(p)

It's a standard joke about mathematicians vs everybody else, and I intended it as such. I can do limited visualization in the 4th dimension (hypercubes and 5-cells (hypertetrahedra), not something as complicated as the 120-cell or even the 24-cell), but it's by extending from a 3-d visualization with math knowledge, rather than specializing n to 4.

comment by NancyLebovitz · 2012-04-06T13:20:54.933Z · LW(p) · GW(p)

For what it's worth, my ability to reason is fairly good in a very specific way-- sometimes I see the relevant thing quickly (and after LWers have been chewing on a problem and haven't seen it (sorry, no examples handy, I just remember the process)), but I'm not good at long chains of reasoning. Math and visualizing aren't my strong points.

comment by Nominull · 2012-04-05T03:03:01.869Z · LW(p) · GW(p)

Been there, done that. Advice to budding spatial-dimension visualizers: the fourth is the hardest, once you manage the fourth the next few are quite easy.

Replies from: tgb
comment by tgb · 2012-04-05T18:29:37.523Z · LW(p) · GW(p)

Is this legit and if so can you elaborate? I bet I'm not the only one here who has tried and failed.

Replies from: Nominull
comment by Nominull · 2012-04-05T19:12:59.644Z · LW(p) · GW(p)

Well, I can elaborate, but I'm not sure how helpful it will be. "No one can be told what the matrix is" and that sort of thing. The basic idea is that it's the equivalent of the line rising out of the paper in two-dimensions, but in three dimensions instead. But that's not telling someone who has tried and failed anything they don't know, I'm sure.

If you really want to be able to visualize higher-order spaces, my advice would be to work with them, do math and computer programming in higher-order spaces, and use that to build up physical intuitions of how things work in higher-order spaces. Once you have the physical intuitions it's easier for your brain to map them to something meaningful. Of course if your reason for wanting to be able to visualize 4D-space is because you want to use the visualization to give you physical intuitions about it that will be useful in math or computer programming, this is an ass-backward way of approaching the problem.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-04-05T23:18:46.284Z · LW(p) · GW(p)

Is it like having a complete n-dimensional construct in your head that you can view in its entirety?

I can visualise 4-dimensional polyhedra, in much the same way I can draw non-planar graphs on a sheet of paper, but it's not what I imagine being able to visualise higher-dimensional objects to be like.

I used to be into Rubik's Cube, and it's quite easy for me to visualise all six faces of a 3D cube at once, but when visualising, say, a 4-octahedron, the graph is easy to visualise, (or draw on a piece of paper, for that matter), but I can only "see" one perspective of the convex hull at a time, with the rest of it abstracted away.

comment by Multiheaded · 2012-04-07T20:26:10.644Z · LW(p) · GW(p)

When I was 13 or so, my brain worked significantly better than it currently does, and I figured out an easy trick for this in a math class one day. Just assign a greyscale color value (from black to white) to each point! This is exactly like taking an ordinary map and coloring the hills a lighter shade and the low places a darker one.

The only problem with that is it's still "3.5D", like the "2.5D" graphics engine of Doom, where there's only one Z-value to any point in the world so things can't be exactly above or below each other.
To overcome this, you could theoretically imagine the 3D structure alternating between "levels" in the 4th dimension every second, so e.g. one second a 3D cube's left half is grey and its right half is white, indicating a surface "rising" in the 4th dimension, but every other second the right half changes to black while the left is still grey, showing a second surface which begins at the same place and "descends" in the 4th dimension. Voila, you have two 3D "surfaces" meeting at a 4D angle!

With RGB color instead of greyscale, one could theoretically visualize 6 dimensions in such a way.
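The greyscale trick above can be written down directly (a sketch; the value range is an arbitrary assumption):

```python
# Represent a 4D graph w = f(x, y, z) by colouring each 3D point with
# a grey level proportional to its 4th coordinate (0 = black, 255 = white).

def grey_level(w, w_min=-1.0, w_max=1.0):
    t = (w - w_min) / (w_max - w_min)   # normalize w into [0, 1]
    t = min(1.0, max(0.0, t))           # clamp out-of-range values
    return round(255 * t)

print(grey_level(-1.0))  # 0    -- lowest point: black
print(grey_level(0.0))   # 128  -- midpoint: mid-grey
print(grey_level(1.0))   # 255  -- highest point: white
```

As the comment notes, this only handles one w-value per 3D point (the "2.5D" limitation), which is why the alternating-levels refinement is needed for genuinely overlapping structures.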

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-04-08T07:08:54.145Z · LW(p) · GW(p)

Now, if only this let you rotate things through the 4th dimension.

Replies from: wnoise
comment by wnoise · 2012-04-12T19:31:23.043Z · LW(p) · GW(p)

Doing specific rotations by breaking them into steps is possible. Rotations by 90 degrees through the higher dimensions are doable with some effort -- they're just coordinate swapping, after all. You can make checks that you got it right. Once you have this mastered, you can compose it with rotations that don't touch the higher dimensions. Then compose again with one of these 90-degree rotations, and you have an effective rotation through the higher dimensions.

(Understanding the commutation relations for rotation helps in this breakdown, of course. If you can then go on to understanding how the infinitesimal rotations work, you've got the whole thing down.)
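The recipe above can be checked mechanically (a sketch in plain Python; the particular angles are arbitrary assumptions): a 90-degree "coordinate swap" through the 4th dimension, composed with an ordinary 3D rotation, is still a genuine rotation of 4-space.

```python
import math

def matmul(A, B):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A):
    return [list(row) for row in zip(*A)]

# 90-degree rotation in the z-w plane: (z, w) -> (w, -z).
# This is "just coordinate swapping", plus one sign flip to keep det = +1.
swap_zw = [[1, 0,  0, 0],
           [0, 1,  0, 0],
           [0, 0,  0, 1],
           [0, 0, -1, 0]]

# An ordinary 3D rotation (45 degrees in the x-y plane), leaving w fixed.
c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
rot_xy = [[c, -s, 0, 0],
          [s,  c, 0, 0],
          [0,  0, 1, 0],
          [0,  0, 0, 1]]

composed = matmul(rot_xy, swap_zw)

# Sanity check: the composition is still orthogonal (M * M^T = I),
# i.e. a genuine rotation of 4-space that mixes all four axes.
identity_check = matmul(composed, transpose(composed))
print(all(abs(identity_check[i][j] - (1 if i == j else 0)) < 1e-9
          for i in range(4) for j in range(4)))  # True
```

Checks like the orthogonality test are exactly the "checks that you got it right" the comment mentions.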

comment by wedrifid · 2012-04-13T07:50:40.280Z · LW(p) · GW(p)

Try visualizing four spatial dimensions.

I knew a guy who credibly claimed to be able to visualize 5 spatial dimensions. He is a genius math professor with 'autistic savant' tendencies.

I certainly couldn't pull it off and I suspect that at my age it is too late for me to be trained without artificial hardware changes.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-04-13T08:00:18.499Z · LW(p) · GW(p)

The way I would do it for dimensions between d=4 and d=6 is to visualize a (d-3)-dimensional array of cubes. Then you remember that similarly positioned points, in the interior of cubes that are neighbors in the array, are near-neighbors in the extra dimensions (which correspond to the directions of the array). It's not a genuinely six-dimensional visualization, but it's a three-dimensional visualization onto which you can map six-dimensional properties. Then if you make an effort, you could learn how rotations, etc, map onto transformations of objects in the visualization. I would think that all claimed visualizations of four or more dimensions really amount to some comparable combinatorial scheme, backed up with some nonvisual rules of transformation and interpretation.

ETA: I see similar ideas in this subthread.
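The array-of-cubes scheme above amounts to a simple coordinate split (a sketch; the cube size is an arbitrary assumption):

```python
# Split a 6D point into a 3D "which cube" index (dimensions 4-6)
# and a 3D position inside that cube (dimensions 1-3).

def to_visualization(p6, cube_size=1.0):
    xyz = p6[:3]                                      # position inside one cube
    idx = tuple(int(v // cube_size) for v in p6[3:])  # which cube in the array
    return idx, xyz

a = to_visualization([0.2, 0.5, 0.9, 3.0, 1.0, 2.0])
b = to_visualization([0.2, 0.5, 0.9, 4.0, 1.0, 2.0])  # near-neighbor in dim 4

print(a[0])  # (3, 1, 2)
print(b[0])  # (4, 1, 2) -- the neighboring cube in the array
```

Two points that differ only in an extra dimension land at the same spot inside neighboring cubes, which is the "similarly positioned points in neighboring cubes are near-neighbors" property.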

comment by faul_sname · 2012-04-13T06:43:01.618Z · LW(p) · GW(p)

Am I allowed to use time/change dimensions? Because if so, the task is trivial (if computationally expensive).

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-04-13T06:56:12.455Z · LW(p) · GW(p)

Ok, now add a temporal dimension.

Replies from: faul_sname
comment by faul_sname · 2012-04-13T08:17:08.877Z · LW(p) · GW(p)

Adding multiple temporal dimensions is effectively how I do it, so one more shouldn't be a problem*. I visualize a 3-dimensional object in a space with a reference point that can move in n perpendicular directions. As the point of reference moves through the space, the object's shape and size change.

Example: to visualize a 5-dimensional sphere, I first visualize a 3-dimensional sphere that can move along a 1-dimensional line. As the point of reference reaches the three-dimensional sphere, a point appears, and this point grows into a full-sized sphere at the middle, then shrinks back down to a point. I then add another degree of freedom perpendicular to the first line, and repeat the procedure.

Rotations are still very hard for me to do, and become increasingly difficult with 5 or more dimensions. I think this is due to a very limited amount of short-term memory. As for my technique, I think it piggybacks on the ability to imagine multiple timelines simultaneously. So, alas, it's a matter of repurposing existing abilities, not constructing entirely new ones.

*up to 7: 3 of space, 3 of observer-space, and 1 of time
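The sweep described above has a closed form (a sketch, for a sphere of radius R centered at the origin): the 3D cross-section of a 5-sphere at observer position (u, v) in the two extra dimensions is a 3D sphere of radius sqrt(R^2 - u^2 - v^2).

```python
import math

# Radius of the 3D slice of a 5D sphere, as the reference point
# moves to (u, v) in the two extra dimensions.

def slice_radius(R, u, v):
    r2 = R * R - u * u - v * v
    return math.sqrt(r2) if r2 > 0 else 0.0  # nothing visible beyond the edge

R = 1.0
print(slice_radius(R, 0.0, 0.0))  # 1.0 -- full-sized sphere at the middle
print(slice_radius(R, 0.6, 0.8))  # 0.0 -- shrunk to a point at the edge
```

This matches the described animation: the slice grows to full size as the reference point passes the center and shrinks back to a point at the boundary.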

comment by [deleted] · 2012-04-05T14:07:35.371Z · LW(p) · GW(p)

Either I can visualize them, and then they're thoughts I can think, or I can't visualize them, in which case the exercise doesn't help me.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-04-06T03:03:53.459Z · LW(p) · GW(p)

If you can, replace 4 with N for sufficiently large N.

If you can't, imagine a creature that evolved in a 4-dimensional universe. I find it unlikely that it would not be able to visualize 4 dimensions.

Replies from: None
comment by [deleted] · 2012-04-06T13:57:25.994Z · LW(p) · GW(p)

There's a pretty serious gap between the idea of a person evolved to visualize four dimensions and it being capable of thoughts I cannot think. This might be defensible, but if so only in the context of certain thoughts, something like qualitative ones. But the original quote was inferring from the fact that not everyone can see all the colors to the idea that there are thoughts we cannot think. If 'colors I can't see' are the only kinds of things we can defend as thoughts that I cannot think, then the original quote is trivial.

So even if you can defend 4d visualizations as thoughts I cannot think, you'd have to extend your argument to something else.

But I have a question in return: how would the belief that there are thoughts you cannot think modify your anticipations? What would that look like?

Replies from: Strange7
comment by Strange7 · 2012-04-12T08:41:26.390Z · LW(p) · GW(p)

By itself? Not much at all. The fun part is encountering another creature which can think those thoughts, then deducing the ability (and, being human, shortly thereafter finding some way to exploit it for personal gain) without being able to replicate the thoughts themselves.

comment by Richard_Kennaway · 2012-04-05T06:31:10.149Z · LW(p) · GW(p)

Hinton cubes. I haven't tried them though.

ETA: Original source, online.

comment by Desrtopa · 2012-04-04T23:31:58.508Z · LW(p) · GW(p)

I'm not sure what kind of evidence could give me good reason to think that there are thoughts that I cannot think.

The existence of other signals your brain simply doesn't process doesn't shift your prior at all?

Replies from: None, Richard_Kennaway
comment by [deleted] · 2012-04-05T14:06:40.107Z · LW(p) · GW(p)

The existence of other signals your brain simply doesn't process doesn't shift your prior at all?

That doesn't seem strictly relevant. Other signals might lead me to believe that there are thoughts I don't think (but I accepted that already), not thoughts I can't think. How could I recognize such a thing as a thought? After all, while every thought is a brain signal, not every brain signal is a thought: animals have lots of brain signals, but no thoughts.

Replies from: LucasSloan, Desrtopa
comment by LucasSloan · 2012-04-05T20:36:07.544Z · LW(p) · GW(p)

What is the difference between a thought you can't think and one you don't think?

Replies from: None
comment by [deleted] · 2012-04-05T21:06:42.636Z · LW(p) · GW(p)

Well, for example I don't think very much about soccer. There are thoughts about who the best soccer team is that I simply don't ever think. But I can think them.

Another case: In two different senses of 'can', I can and can't understand Spanish. I can't understand it at the moment, but nevertheless Spanish sentences are in principle translatable into sentences I can understand. I also can't read Aztec hieroglyphs, and here the problem is more serious: no one knows how to read them. But nevertheless, insofar as we assume they are a form of language, we assume that we could translate them given the proper resources. To see something as translatable just is to see it as a language, and to see something as a language is to see it as translatable. Anything which is in principle untranslatable just isn't recognizable as a language.

I think the point is analogous (and that's no accident) with thoughts. Any thought that I couldn't think by any means is something I cannot by any means recognize as a thought in the first place. All this is just a way of saying that the belief that there are thoughts you cannot think is one of those beliefs that could never modify your anticipations. That should be enough to discount it as a serious consideration.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-05T22:35:30.827Z · LW(p) · GW(p)

And yet, if I see two nonhuman life forms A1 and A2, both of which are performing something I classify as the same task but doing it differently, and A1 and A2 interact, after which they perform the task the same way, I would likely infer that thoughts had been exchanged between them, but I wouldn't be confident that the thoughts which had been exchanged were thoughts that could be translated to a form that I could understand.

Replies from: DanArmak, None
comment by DanArmak · 2012-04-23T12:27:34.387Z · LW(p) · GW(p)

I would likely infer that thoughts had been exchanged between them, but I wouldn't be confident that the thoughts which had been exchanged were thoughts that could be translated to a form that I could understand.

Alternative explanations include:

  • They exchanged genetic material, like bacteria, or outright code, like computer programs; which made them behave more similarly.
  • They are programs; one attacked the other, killed it, and replaced its computational slot with a copy of itself.
  • A1 gave A2 a copy of its black-box decision maker which both now use to determine their behavior in this situation. However, neither of them understands the black box's decision algorithm on the level of their own conscious thoughts; and the black box itself is not sentient or alive and has no thoughts.
  • One of them observed the other was more efficient and is now emulating its behavior, but they didn't talk about it ("exchange thoughts"), just looked at one another.

These are, of course, not exhaustive.

You could call some of these cases a kind of thought. Maybe to self-modifying programs, a black-box executable algorithm counts as a thought; or maybe to beings who use the same information storage for genes and minds, lateral gene transfer counts as a thought.

But this is really just a matter of defining what the word "thought" may refer to. I can define it to include executable undocumented Turing Machines, which I don't think humans like us can "think". Or you could define it as something that, after careful argument, reduces to "whatever humans can think and no more".

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-23T12:56:08.923Z · LW(p) · GW(p)

Sure. Leaving aside what we properly attach the label "thought" to, the thing I'm talking about in this context is roughly speaking the executed computations that motivate behavior. In that sense I would accept many of these options as examples of the thing I was talking about, although option 2 in particular is primarily something else and thus somewhat misleading to talk about that way.

comment by [deleted] · 2012-04-06T13:50:23.522Z · LW(p) · GW(p)

I think you're accepting and then withdrawing a premise here: you've identified them as interacting, and you've identified their interaction as being about the task at hand, the ways of doing it, and the relative advantages of these ways. You've already done a lot of translation right there. So the setup of your problem assumes not only that you can translate their language, but that you in some part already have. All that's left, translation-wise, is a question of precision.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-06T14:08:16.560Z · LW(p) · GW(p)

Sure, to some level of precision, I agree that I can think any thought that any other cognitive system, however alien, can think. There might be a mind so alien that the closest analogue to its thought process while contemplating some event that I can fathom is "Look at that, it's really interesting in some way," but I'll accept that this in some part a translation and "all that's left" is a question of precision.

But if you mean to suggest by that that what's left is somehow negligible, I strenuously disagree. Precision matters. If my dog and I are both contemplating a ball, and I am calculating the ratio of its volume to its surface area, and my dog is wondering whether I'll throw it, we are on some level thinking the same thought ("Oh, look, a ball, it's interesting in some way"), but to say that my dog therefore can understand what I'm thinking is so misleading as to be simply false.

I consider it possible for cognitive systems to exist that have the same relationship to my mind in some event that my mind has to my dog's mind in that example.

Replies from: None
comment by [deleted] · 2012-04-06T14:59:22.266Z · LW(p) · GW(p)

Well, I don't think I even implied that the dog could understand what you're thinking. I don't think dogs can think at all. What I'm claiming is that for anything that can think (and thus entertain the idea of thoughts that cannot be thought), there are no thoughts that cannot be thought. The difference between you and your dog isn't just one of raw processing power. It's easy to imagine a vastly more powerful processor than a human brain that is nevertheless incapable of thought (I think Yud.'s suggestion for an FAI is such a being, given that he's explicit that it would not rise to the level of being a mechanical person).

Once we agree that it's a point about precision, I would just say that this ground can always in principle be covered. Suppose the translation has gotten started, such that there is some set of thoughts at some level of precision that is translatable, call it A, and the terra incognita that remains, call it B. Given that the cognitive system you're trying to translate can itself translate between A and B (the aliens understand themselves perfectly), there should be nothing barring you from doing so as well.

You might need extremely complex formulations of the material in A to capture anything in B, but this is allowed: we need some complex sentence to capture what the Germans mean by 'schadenfreude', but it would be wrong to think that, because we don't have a single term which corresponds exactly, we cannot translate or understand the term to just the same precision the Germans do.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-06T15:32:05.479Z · LW(p) · GW(p)

I accept that you don't consider dogs to have cognitive systems capable of having thoughts. I disagree. I suspect we don't disagree on the cognitive capabilities of dogs, but rather on what the label "thought" properly refers to.

Perhaps we would do better to avoid the word "thought" altogether in this discussion in order to sidestep that communications failure. That said, I'm not exactly sure how to do that without getting really clunky, really fast. I'll give it a shot, though.

I certainly agree with you that if cognitive system B (for example, the mind of a German speaker) has a simple lexical item Lb (for example, the word "schadenfreude"),
...and Lb is related to some cognitive state Slb (for example, the thought /schadenfreude/) such that Slb = M(Lb) (which we ordinarily colloquially express by saying that a word means some specific thought),
...and cognitive system A (for example, the mind of an English speaker) lacks a simple lexical item La such that Slb=M(La) (for example, the state we'd ordinarily express by saying that English doesn't have a word for "schadenfreude")...
that we CANNOT conclude from this that A can't enter Slb, nor that there exists no Sla such that A can enter Sla and the difference between Sla and Slb is < N, where N is the threshold below which we'd be comfortable saying that Sla and Slb are "the same thought" despite incidental differences which may exist.

So far, so good, I think. This is essentially the same claim you made above about the fact that there is no English word analogous to "schadenfreude" not preventing an English speaker from thinking the thought /schadenfreude/.

In those terms, I assert that there can exist a state Sa such that A can enter Sa but B cannot enter Sa. Further, I assert that there can exist a state Sa such that A can enter Sa but B cannot enter any state Sb such that the difference between Sa and Sb is < N.

Do you disagree with that? Or do you simply assert that if so, Sa and Sb aren't thoughts? Or something else?

Replies from: None
comment by [deleted] · 2012-04-06T17:20:10.799Z · LW(p) · GW(p)

I agree that this is an issue of what 'thoughts' are, though I'm not sure it's productive to sidestep the term, since if there's an interesting point to be found in the OP, it's one which involves claims about what a thought is.

In those terms, I assert that there can exist a state Sa such that A can enter Sa but B cannot enter Sa. Further, I assert that there can exist a state Sa such that A can enter Sa but B cannot enter any state Sb such that the difference between Sa and Sb is < N.

I'd like to disagree with that unqualifiedly, but I don't think I have the grounds to do so, so my disagreement is a qualified one. I would say that there is no state Sa such that A can enter Sa, and such that B cannot enter Sa, and such that B can recognise Sa as a cognitive state. So without the last 'and such that', this would be a metaphysical claim that all cognitive systems are capable of entertaining all thoughts, barring uninteresting accidental interference (such as a lack of memory capacity, a lack of sufficient lifespan, etc.). I think this is true, but alas.

With the qualification that 'B would not be able to recognise Sa as a cognitive state', this is a more modest epistemic claim, one which amounts to the claim that recognising something as a cognitive state is nothing other than entering that state to one degree of precision or another. This effectively marks out my opinion on your second assertion: for any Sa and any Sb, such that the difference between Sa and Sb cannot be < N, A (and/or B) cannot by any means recognise the difference as part of that cognitive state.

All this is a way of saying that you could never have reason to think that there are thoughts that you cannot think. Nothing could give you evidence for this, so it's effectively a metaphysical speculation. Not only is evidence for such thoughts impossible, but evidence for the possibility of such thoughts is impossible.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-06T18:01:46.265Z · LW(p) · GW(p)

I'm not exactly sure what it means to recognize something as a cognitive state, but I do assert that there can exist a state Sa such that A can enter Sa, and such that B cannot enter Sa, and such that B can believe that A is entering into a particular cognitive state whenever (and only when) A enters Sa. That ought to be equivalent, yes?

This seems to lead me back to your earlier assertion that if there's some shared "thought" at a very abstract level I and an alien mind can be said to share, then the remaining "terra incognita" between that and sharing the "thought" at a detailed level is necessarily something I can traverse.

I just don't see any reason to expect that to be true. I am as bewildered by that claim as if you had said to me that if there's some shared object that I and an alien can both perceive, then I can necessarily share the alien's perceptions. My response to that claim would be "No, not necessarily; if the alien's perceptions depend on sense organs or cognitive structures that I don't possess, for example, then I may not be able to share those perceptions even if I'm perceiving the same object." Similarly, my response to your claim is "No, not necessarily; if the alien's 'thought' depends on cognitive structures that I don't possess, for example, then I may not be able to share that 'thought'."

You suggest that because the aliens can understand one another's thoughts, it follows that I can understand the alien's thoughts, and I don't see how that's true either.

So, I dunno... I'm pretty stumped here. From my perspective you're simply asserting the impossibility, and I cannot see how you arrive at that assertion.

Replies from: None
comment by [deleted] · 2012-04-08T00:32:26.315Z · LW(p) · GW(p)

Well, if the terra incognita has any relationship at all to the thoughts you do understand, such that it could be recognized as a part of or related to a cognitive state, then it is going to consist in stuff which bears inferential relations to what you do understand. These are relations you can necessarily traverse if the alien can traverse them. Add to that the fact that you've already assumed that the aliens largely share your world, that their beliefs are largely true, and that they are largely rational, and it becomes hard to see how you could justify the assertion at the top of your last post.

And that assertion has, thus far, gone undefended.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-08T01:08:13.134Z · LW(p) · GW(p)

Well, I justify it by virtue of believing that my brain isn't some kind of abstract general-purpose thought-having or inferential-relationship-traversing device; it is a specific bit of machinery that evolved to perform specific functions in a particular environment, just like my digestive system, and I find it no more plausible that I can necessarily traverse an inferential relationship that an alien mind can traverse than that I can necessarily extract nutrients from a food source that an alien digestive system can digest.

How do you justify your assertion that I can necessarily traverse an inferential relationship if an alien mind is capable of traversing it?

Replies from: None
comment by [deleted] · 2012-04-08T01:26:40.808Z · LW(p) · GW(p)

Well, your brain isn't that, but it's only a necessary but insufficient condition on your having thoughts. Understanding a language is both necessary and sufficient, and a language actually is the device you describe. Your competence with your own language ensures the possibility of your traversal in another.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-08T02:27:00.392Z · LW(p) · GW(p)

Sorry, I didn't follow that at all.

Replies from: None
comment by [deleted] · 2012-04-08T03:17:21.997Z · LW(p) · GW(p)

The source of your doubt seemed to be that you didn't think you possessed a general purpose thought having and inferential relationship traversing device. A brain is not such a device; we agree. But you do have such a device. A language is a general purpose thought having and inferential relationship traversing device, and you have that too. So, doubt dispelled?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-08T04:02:56.674Z · LW(p) · GW(p)

Ah! OK, your comment now makes sense to me. Thanks.
Agreed that my not believing that my brain is a general-purpose inferential relationship traversing device (hereafter gpirtd) is at the root of my not believing that all thoughts thinkable by any brain are thinkable by mine.
I'm glad we agree that my brain is not a gpirtd.
But you seem to be asserting that English (for example) is a gpirtd.
Can you expand on your reasons for believing that? I can see no justification for that claim, either.
But I do agree that if English were a gpirtd while my brain was not, it would follow that I could infer in English any thought that an alien mind could infer, at the same level of detail that the alien mind could think it, even if my brain was incapable of performing that inference.

Replies from: None
comment by [deleted] · 2012-04-08T16:20:23.198Z · LW(p) · GW(p)

So the claim is really that language is a gpirtd, excepting very defective cases (like sign-language or something). That language is an inference relation traversing device is, I think, pretty clear on the surface of things: logic is that in virtue of which we traverse inference relations (if anything is). This isn't to say that English, or any language, is a system of logic, but only that logic is one of the things language allows us to do.

I think it actually follows from this that language is also a general purpose thought having device: thoughts are related, and their content is in large part (or perhaps entirely) constituted, by inferential relations. If we're foundationalists about knowledge, then we think that the content of thoughts is not entirely constituted by inferential relations, but this isn't a serious problem. If we can get anywhere in a process of translation, it is by assuming we share a world with whatever speaker we're trying to understand. If we don't assume this, and to whatever extent we don't assume this, just to that extent we can't recognize the gap as conceptual or cognitive. If an alien was reacting in part to facts of the shared world, and in part to facts of an unshared world (whatever that means), then just to the extent that the alien is acting on the latter facts, to that extent would we have to conclude that they are behaving irrationally. The reasons are invisible to us, after all. If we manage to infer from their behavior that they are acting on reasons we don't have immediate access to, then just to the extent that we now view their behavior as rational, we now share that part of the world with them. We can't decide that behavior is rational while knowing nothing of the action or the content of the reason, in the same sense that we can't decide whether or not a belief is rational, or true, while knowing nothing of its meaning or the facts it aims at.

This last claim is most persuasively argued, I think, by showing that any example we might construct is going to fall apart. So it's here that I want to re-ask my question: what would a thought that we cannot think even look like to us? My claim isn't that there aren't any such thoughts, only that we could never be given reason for thinking that there are.

ETA: as to the question of brains, here I think there is a sense in which there could be thoughts we cannot think. For example, thoughts which take more than a lifetime to think. But this isn't an interesting case, and it's fundamentally remediable. Imagine someone said that there were languages that are impossible for me to understand, and when I pressed him on what he meant, he just pointed out that I do not presently understand Chinese, and that he's about to kill me. He isn't making an interesting point, or one anyone would object to. If that is all the original quote intended, then it seems a bit trivial: the quoted person could have just pointed out that 1000 years ago, no one could have had any thoughts about airplanes.

Replies from: TheOtherDave, TheOtherDave
comment by TheOtherDave · 2012-04-08T17:11:06.665Z · LW(p) · GW(p)

Re: your ETA... agreed that there are thoughts I cannot think in the trivial sense you describe here, where the world is such that the events that would trigger that thought never arise before my death. What is at issue here is not that, but the less trivial claim that there are thoughts I cannot think by virtue of the way my mind works. To repeat my earlier proposed formalization: there can exist a state Sa such that mind A can enter Sa but mind B cannot enter Sa.

But you seem to also want to declare as trivial all cases where the reason B cannot enter Sa is because of some physical limitation of B, and I have more trouble with that.

I mean, sure, if A can enter Sa in response to some input and B cannot, I expect there to be some physical difference between A and B that accounts for this, and therefore some physical modification that can be made to B to remedy this. So sure, I agree that all such cases are "fundamentally remediable". Worst-case, I transform B into an exact replica of A, and now B can enter state Sa, QED.

I'm enough of a materialist about minds to consider this possible in principle. But I would not agree that, because of this, the difference between A and B is trivial.

comment by TheOtherDave · 2012-04-08T16:56:55.485Z · LW(p) · GW(p)

Well, at the risk of repeating myself in turn, I'll go back to my original example. As an observer I would have reason to believe there were some thoughts involved in that exchange, even if I couldn't think those thoughts.

I understand you to be saying in response that I can necessarily think those thoughts, since I can understand them at some level L1 by virtue of having an awareness of the same world A1 and A2 are interacting with (I agree so far) and that I can therefore understand them at any desired level L2 as long as the aliens themselves can traverse an inference relation between L1 and L2 because I have a language, and languages* are gpirtds (I disagree).

I've asked you why you believe English (for example) is a gpirtd, and you seem to have responded that English (like any non-defective language) allows us to do logic, and logic allows us to traverse inference relations. Did I understand that correctly?

If so, I don't think your response is responsive. I would certainly agree that English (like any language) allows me to perform certain logical operations and therefore to traverse certain inference relations. I would not agree that for all inference relations R, English (or any other language) allows me to traverse R.

I agree that if I'm wrong about that and English (for example) really does allow me to traverse all inference relations, then the rest of your argument holds.

I see no reason to believe that, though.

===

  • Except, you say, for defective cases like sign-language. I have absolutely no idea on what basis you judge sign language defective and English non-defective here, or whether you're referring to some specific sign language or the whole class of sign languages. However, I agree with you that sign languages are not gpirtds. (I don't believe English is either.)
Replies from: None
comment by [deleted] · 2012-04-08T17:11:08.618Z · LW(p) · GW(p)

Well, at the risk of repeating myself in turn, I'll go back to my original example. As an observer I would have reason to believe there were some thoughts involved in that exchange, even if I couldn't think those thoughts.

Well, I'd like a little more from you: I'd like an example where you are given reason to think that there are thoughts in the air, and reason to think that they are not thoughts you could think. As it stands, I of course have no objection to your example, because the example doesn't go so far as suggesting the latter of the two claims.

So do you think you can come up with such an example? If not, don't you think that counts powerfully against your reasons for thinking that such a situation is possible?

I would not agree that for all inference relations R, English (or any other language) allows me to traverse R.

This is not exactly related to my claim. My claim is that you could never be given a reason for thinking that there are thoughts you cannot think. That is not the same as saying that there are thoughts you cannot think. So likewise, I would claim that you could never, deploying the inference relations available to you, infer that there are inference relations unavailable to you. Because if you can infer that they are inference relations, then they are available to you. (ETA: the point here, again, is that you cannot know that something is an inference relation while not knowing of what kind of relation it is. Recognizing that something is an inference relation just is recognizing that it is truth-preserving (say), and you could only recognize that by having a grip on the relation that it is.)

It's extremely important to my argument that we keep in full view the fact that I am making an epistemic claim, not a metaphysical one.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-08T17:25:19.331Z · LW(p) · GW(p)

From an epistemic position, the proposition P1: "Dave's mind is capable of thinking the thought that A1 and A2 shared" is experimentally unfalsifiable. No matter how many times, or how many different ways, I try to think that thought and fail, that doesn't prove I'm incapable of it, it just means that I haven't yet succeeded.

But each such experiment provides additional evidence against P1. The more times I try and fail, and the more different ways I try and fail, the greater the evidence, and consequently the lower the posterior probability of P1.

If you're simply asserting that that probability can't ever reach zero, I agree completely.

If you're asserting that that probability can't in practice ever reach epsilon, I mostly agree.

If you're asserting that that probability can't in practice get lower than, say, .01, I disagree.

(ETA: In case this isn't clear, I mean here to propose "I repeatedly try to understand in detail the thought underlying A1 and A2's cooperation and I repeatedly fail" as an example of a reason to think that the thought in question is not one I can think.)
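The trajectory sketched above can be made numeric with a toy Bayesian model (my illustration, not anything from the thread; the 0.8 chance of failing even when capable is an assumed likelihood):

```python
# Toy Bayesian update: repeated failed attempts drive down the
# probability of P1 ("Dave can think the thought") without it ever
# reaching zero. The likelihoods are illustrative assumptions.

def update(p_capable, p_fail_if_capable=0.8, p_fail_if_incapable=1.0):
    """Posterior that Dave is capable, after observing one failed attempt."""
    p_fail = (p_fail_if_capable * p_capable
              + p_fail_if_incapable * (1 - p_capable))
    return p_fail_if_capable * p_capable / p_fail

p = 0.5  # starting probability for P1
for _ in range(30):  # thirty good-faith attempts, all failures
    p = update(p)

print(p)  # small (well under 0.01), but still strictly positive
```

The point survives the toy model: the probability decays geometrically toward zero with each failure yet never reaches it, matching "can't ever reach zero" while allowing it to fall below any practical threshold.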

Replies from: None
comment by [deleted] · 2012-04-08T19:18:25.590Z · LW(p) · GW(p)

From an epistemic position, the proposition P1: "Dave's mind is capable of thinking the thought that A1 and A2 shared" is experimentally unfalsifiable.

I think that overestimates my claim: suppose Dave were a propositional logic machine, and the A's were first-order logic machines. If we were observing Dave and the Aliens, and given that we are capable of thinking more expressively than either of them, then we could have reason for thinking that Dave cannot think the thoughts that the Aliens are thinking (let's just assume everyone involved is thinking). So we can prove P1 to be false in virtue of stuff we know about Dave and stuff we know about what the Aliens are saying.
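The expressiveness gap in this example can be made concrete with a small sketch (my illustration, with the simplifying assumption that a propositional sentence's meaning is just the set of truth-table rows it makes true): a purely propositional thinker over two atoms can express only sixteen distinct claims, a finite ceiling that first-order quantification over an unbounded domain escapes.

```python
# Count the claims expressible in propositional logic over n atoms:
# every sentence is semantically just a set of truth-table rows,
# so counting those sets counts the distinct expressible claims.
from itertools import product

n = 2  # atoms p, q
assignments = list(product([False, True], repeat=n))  # 4 rows

# Each choice of which rows come out true is one distinct claim.
expressible = set(product([False, True], repeat=len(assignments)))

print(len(expressible))  # 16 = 2**(2**2), a hard finite ceiling
```

With n atoms the ceiling is 2**(2**n): always finite, whereas no analogous bound exists for first-order sentences over an infinite domain, which is why the outside observers can see distinctions the propositional Dave cannot draw.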

That, again, is not my point. My point is that Dave could never have reasons for thinking that he couldn't think what the Aliens are thinking, because Dave could never have reasons for thinking both A) that the aliens are in a given case doing some thinking, and B) that this thinking is thinking that Dave cannot do. If B is true, A is not something Dave can have reasons for. If Dave can have reason for thinking A, then B is false.

So suppose Dave has understood that the aliens are thinking. By understanding this, Dave has already and necessarily assumed that he and the aliens share a world, that he and the aliens largely share relevant beliefs about the world, and that he and the aliens are largely rational.

If you agree that one cannot have reason to think that an action or belief is rational or true without knowing the content or intention of the belief or action, then I think you ought to agree that whatever reasons Dave has for thinking that the aliens are rational are already reasons for thinking that Dave can understand them.

And to whatever extent we third party observers can see that Dave cannot understand them, just to that extent Dave cannot have reasons for thinking that the aliens are rational. In such a case, Dave may believe that the aliens are thinking and it might be impossible for him to understand them. But in this case Dave's opinion that the aliens are thinking is irrational, even if it is true.

Thus, no one can ever be given any reason (i.e. there can never be any evidence) for thinking that there are thoughts that they cannot think. We can never know that there are no such thoughts either, I suppose.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-08T19:58:14.966Z · LW(p) · GW(p)

suppose Dave were a propositional logic machine, and the A's were first-order logic machines. [..] (let's just assume everyone involved is thinking). So we can prove P1 to be false in virtue of stuff we know about Dave and stuff we know about what the Aliens are saying.

Supposing both that all of those suppositions were true, and that we could somehow determine experimentally that they were true, then, yes, it would follow that the conclusion was provable.

I'm not sure how we would determine experimentally that they were true, though. I wouldn't normally care, but you made such a point a moment ago about the importance of your claim being about what's knowable rather than about what's true that I'm not sure how to take your current willingness to bounce back and forth between that claim about what can be known in practice, and these arguments that depend on unknowable-in-practice presumptions.

That, again, is not my point.

Then I suppose we can safely ignore it for now.

Dave could never have reasons for thinking that he couldn't think what the Aliens are thinking, because Dave could never have reasons for thinking both A) that the aliens are in a given case doing some thinking, and B) that this thinking is thinking that Dave cannot do.

As I've already said, in this example I have reason to believe A1 and A2 are doing some thinking, and if I make a variety of good-faith-but-unsuccessful attempts to recapitulate that thinking I have reason to believe I'm incapable of doing so.

So suppose Dave has understood that the aliens are thinking.

Is it sufficient to suppose that Dave has reasons to believe the aliens are thinking?

By understanding this, Dave has already and necessarily assumed that he and the aliens share a world, that he and the aliens largely share relevant beliefs about the world, and that he and the aliens are largely rational.

I'm willing to posit all of those things, and I can imagine how they might follow from a belief that the aliens are thinking, for sufficiently convenient values of "world", "largely", and "relevant". Before I lean too heavily on any of that I'd want to clarify those words further, but I'm not sure it actually matters.

If you agree that one cannot have reason to think that an action or belief is rational or true without knowing the content or intention of the belief or action

I don't agree with this. Just to pick a trivial example, if you write down a belief B on a slip of paper and hand it to my friend Sam, who I trust to be both a good judge of and an honest reporter of truth, and Sam says to me "B is true," I have reason to think B is true but I don't know the content of B.

then I think you ought to agree that whatever reasons Dave has for thinking that the aliens are rational are already reasons for thinking that Dave can understand them.

The premise is false, but I agree that were it true your conclusion would follow.

Replies from: None
comment by [deleted] · 2012-04-08T20:48:01.257Z · LW(p) · GW(p)

I have reason to think B is true but I don't know the content of B.

This seems to be a crucial disagreement, so we should settle it first. In your example, you said that you trust Sam to be a good judge and an honest reporter of truth. This means, among other things, that you and Sam share a great many beliefs, and that you think Sam makes judgements roughly in the same ways you do.

So, you mostly understand the kinds of inferences Sam draws, and you mostly understand the beliefs that Sam has. If you infer from this that B is true because Sam says that it is, you must be assuming that B isn't so odd a belief that Sam has no competence in assessing it. It must be something Sam is familiar enough with to assess comfortably. All that said, you've got a lot of beliefs about what B is, without knowing the specifics.

Essentially, your inference that B is true because Sam says that it is, is the belief that though you don't know what B says specifically, B is very likely to either be one of your beliefs already or something that follows straightforwardly from some of your beliefs.

In other words, if you have good reason to think B is true, you immediately have good reason to think you know something about the content of B (i.e. that it is or follows from one of your own beliefs). Thinking that B is probably true just is believing you know something about B.

(ETA: I want to add how closely this example resembles your aliens example, both in the set up, and in how (I think) it should be answered. In both cases, we can look at the example more closely and discover that in drawing the conclusion that the aliens are thinking or that B is true, a great deal is assumed. I'm saying that you can either have these assumptions, but then my translation point follows, or you can deny the translation point, but then you can't have the assumptions necessary to set up your examples.)

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-08T21:27:04.407Z · LW(p) · GW(p)

This seems to be a crucial disagreement, so we should settle it first.

All right.

you trust Sam to be a good judge and an honest reporter of truth. This means, among other things, that you and Sam share a great many beliefs

Sure, if Sam and I freely interact and I consider him a good judge and honest reporter of truth, I will over time come to believe many of the things Sam believes.

Also, to the extent that I also consider myself a good judge of truth (which has to be nontrivial for me to trust my judgment of Sam in the first place), many of the beliefs I come to on observing the world will also be beliefs Sam comes to on observing the world, even if we don't interact freely enough for him to convince me of his belief. This is a little trickier, because not all reasons for belief are fungible... I might have reasons for believing myself a good judge of whether Sam is a good judge of truth without having reasons for believing myself a good judge of truth more generally. But I'm willing to go along with it for now.

Agreed so far.

you think Sam makes judgements roughly in the same ways you do.

No, I don't follow this at all. I might think Sam comes to the same conclusions that I would given the same data, but it does not follow in the least that he uses the same process to get there. That said, I'm not sure this matters to your argument.

So, you mostly understand the kinds of inferences Sam draws

Yes, both in the sense that I can mostly predict the inferences Sam will draw from given data, and in the sense that any arbitrarily-selected inference that Sam draws is very likely to be one that I can draw myself.

you mostly understand the beliefs that Sam has

Yes, in the same ways.

If you infer from this that B is true because Sam says that it is, you must be assuming that B isn't so odd belief that Sam has no competence in assessing it.

Something like this, yes. It is implicit in this example that I trust Sam to recognize if B is outside his competence to evaluate, and to report that fact if true; so it follows from his not having reported it that I'm confident B isn't outside his competence.

you've got a lot of beliefs about what B is, without knowing the specifics.

Certainly. In addition to all of that stuff, I also have the belief that B can be written down on a slip of paper, with all that that implies.

Essentially, your inference that B is true because Sam says that it is, is the belief that though you don't know what B says specifically, B is very likely to either be one of your beliefs already or something that follows straightforwardly from some of your beliefs.

Statistically speaking, yes: given an arbitrarily selected B1 for which Sam would report "B1 is true," the prior probability that I already know B1 is high.

But this is of course in no sense guaranteed. For example, B might be "I'm wearing purple socks," in response to which Sam checks the color of your socks, and subsequently reports to me that B is true. In this case I don't in fact know what color socks you are wearing.

In other words, if you have good reason to think B is true, you immediately have good reason to think you know something about the content of B (i.e. that it is or follows from one of your own beliefs).

Again, statistically speaking, sure.

Thinking that B is probably true just is believing you know something about B.

No. You are jumping from "X is reliable evidence of Y" to "X just is Y" without justification.

If X smells good, I have reason to believe that X tastes good, because most things that smell good also taste good. But it is quite possible for me to both smell and taste X and conclude "X smells good and tastes bad." If "thinking that X smells good just is believing that X tastes good" were true, I would at that point also believe "X tastes good and tastes bad," which is not in fact what happens. Therefore I conclude that "thinking that X smells good just is believing that X tastes good" is false.

Similarly, if Sam reports B as true, I have good reason to think B is probably true, and I also have good reason to think I know something important about the content of B (e.g., that it is or follows from one of my own beliefs), because most things that Sam would report as true I also know something important about the contents of (e.g., ibid). But it's quite possible for Sam to report B as true without me knowing anything important about the content of B. I similarly conclude that "thinking that B is probably true just is believing [I] know something [important] about B" is false.

In case it matters, not only is it possible for me to believe B is true when I don't in fact know the content of B (e.g., B is "Abrooks' socks are purple" and Sam checks your socks and tells me "B is true" when I neither know what B says nor know that Abrooks' socks are purple), it's also possible for me to have good reason to believe that I don't know the content of B in this situation (e.g., if Sam further tells me "Dave, you don't know the content of B"... which in fact I don't, and Sam has good reason to believe I don't.)

Replies from: None
comment by [deleted] · 2012-04-08T22:00:44.808Z · LW(p) · GW(p)

No. You are jumping from "X is reliable evidence of Y" to "X just is Y" without justification.

You know that B is likely to be one of your beliefs, or something that follows straightforwardly from your beliefs. It makes no difference if B actually turns out not to be one of your beliefs or something that follows straightforwardly therefrom. Likewise, you would have good reason to guess that the outcome of a die roll is 1-5 as opposed to 6. If it turns out that it comes up 6, this does not impugn the probability involved in your initial estimate. Knowing how dice work is knowing something about this die roll and its outcome. By knowing how dice work, you know that the outcome of this roll is probably 1-5, even if it happens to be 6. Knowing how Sam's judgements work is knowing something about this judgement.

None of this, I grant you, involves knowing the specific content of B. But all of this is knowledge about the content of B. If Sam said to you "Dave, you don't know the content of B", you ought to reply "Sam, I know enough about your beliefs and judgements that I really do know something about the content of B, namely that it's something you would judge to be true on the basis of a shared set of beliefs."

Your setup, I think, draws an arbitrary distinction between knowledge of the specific content of B and knowledge of B as a member of someone's set of beliefs. Even if there's any distinction here (i.e. if we're foundationalists of some kind), it still doesn't follow that knowledge of the second kind is wholly unrelated to knowledge of the first. In fact, that would be astonishing.

Therefore I conclude that "thinking that X smells good just is believing that X tastes good" is false.

So, I'm not saying that because you have reason to believe B to be true, you therefore have reason to believe that you know the content of B. What I'm saying is that because you have reason to believe B to be true, you therefore do know something about the content of B.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-08T22:32:50.040Z · LW(p) · GW(p)

I hope we can agree that in common usage, it's unproblematic for me to say that I don't know what color your socks are. I don't, in fact, know what color your socks are. I don't even know that you're wearing socks.

But, sure, I think it's more probable that your socks (if you're wearing them) are white than that they're purple, and that they probably aren't transparent, and that they probably aren't pink. I agree that I know something about the color of your socks, despite not knowing the color of your socks.

And, sure, if you're thinking "my socks are purple" and I'm thinking "Abrooks' socks probably aren't transparent," these kinds of knowledge aren't wholly unrelated to one another. But that doesn't mean that either my brain or my command of the English language is capable of traversing the relationship from one to the other.

Much as you think I'm drawing arbitrary distinctions, I think you're eliding over real distinctions.

Replies from: None
comment by [deleted] · 2012-04-09T00:56:12.672Z · LW(p) · GW(p)

Okay, so it sounds like we're agreed that your reasons for believing B are at the same time things you take yourself to know about the content of B. Would you accept that this is always going to be true? Or can you think of a counterexample?

If this is always true, then we should at least take this in support of my more general claim that you cannot have reason to think that something is rational or true, i.e. that something is thinking, without taking yourself to know something about the content of that thought.

If we're on the same page so far, then we've agreed that you can't recognise something as thought without assuming you can understand something about its content. Now the question remains, can you understand something to be a thought or part of a thought while at the same time having reason to think it is fundamentally unintelligible to you? Or does the very recognition of something as a thought immediately give you reason to think you can understand it, while evidence against your understanding justifies you only in concluding that something isn't thought after all?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-09T01:08:15.339Z · LW(p) · GW(p)

Yes, my reasons for believing B are, in the very limited sense we're now talking about, things I know about the content of B (e.g., that the value of a die roll is probably between 1 and 5).

Yes, agreed that if I think something is thinking, I know something about the content of its thought.

Further agreed that in the highly extended sense that you're using "understanding" -- the same sense that I can be said to "know" what color socks you're wearing -- I understand everything that can be understood by every thinking system, and my inability to understand a thing is evidence against its being a thought.

So, OK... you've proven your point.

I continue to think that by insisting on that extended sense of the word, you are eliding over some important distinctions. But I appreciate that you consider those distinctions arbitrary, which is why you insist on ignoring them.

Replies from: None
comment by [deleted] · 2012-04-09T14:13:39.551Z · LW(p) · GW(p)

I continue to think that by insisting on that extended sense of the word, you are eliding over some important distinctions. But I appreciate that you consider those distinctions arbitrary, which is why you insist on ignoring them.

Oh, come on, this has been a very interesting discussion. And I don't take myself to have proven any sort of point. Basically, if we've agreed to all of the above, then we still have to address the original point about precision.

Now, I don't have a very good argument here, for thinking that you can go from knowing some limited and contextual things about the content of a thought to knowing the content with as much precision as the thinker. But here goes: suppose you have a cooperative and patient alien, and that you yourself are intent on getting the translation right. Also, let's assume you have a lot of time, and all the resources you could want for pursuing the translation you want. So given an unlimited time, and full use of metaphor, hand gestures, extended and complex explanations in what terms you do manage to get out of the context, corrections of mistakes, etc. etc., I think you could cover any gap so long as you can take the first step. And so long as the thought isn't actually logically alien.

This means that the failure to translate something should be taken not as evidence that it might be impossible, but as evidence that it is in fact possible to translate. After all, if you know enough to have reason to believe that you've failed, you have taken the first few steps already.

As to whether or not logically alien thought, thought which involves inferences of which we are incapable, is possible, I don't know. I think that if we encountered such thought, we would pretty much only have reason to think that it's not thought.

So, forget about proving anything. Have I made this plausible? Does it now seem reasonable to you to be surprised (contra the original quote) to hear that there are thoughts we cannot think? If I've utterly failed to convince you, after all, I would take that as evidence against my point.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-09T15:25:05.137Z · LW(p) · GW(p)

My position on this hasn't changed, really.

I would summarize your argument as "If we can recognize them as thinking, we are necessarily mutually intelligible in some highly constrained fashion, which makes it likely that we are mutually intelligible in the more general case. Conversely, if we aren't mutually intelligible in the general case, we can't recognize them as thinking."

My objection has been and remains with the embedded assumption that if two systems are mutually intelligible in some highly constrained fashion, it's likely that they are mutually intelligible in the more general case. On average this might well be true, but the exceptions are important. (Similar things are true when playing Russian roulette. On average it's perfectly safe, but I wouldn't recommend playing.)

My reason for objecting remains what it was: evolved systems are constrained by the environment in which they evolved, and are satisficers rather than optimizers, and are therefore highly unlikely to be general-purpose systems. This is as true of cognitive systems as it is of digestive systems. I would be as surprised to hear of an alien mind thinking thoughts I can't think as I would be to hear of an alien stomach digesting foods I can't digest -- that is, not surprised at all. There's nothing magic about thought, it's just another thing we've evolved to be able to do.

That said, I would certainly agree that when faced with a system I have reason to believe is thinking, the best strategy for me to adopt is to assume that I can understand its thoughts given enough time and effort, and to make that effort. (Similarly, when faced with a system I have reason to believe needs food, I should assume that I can feed it given enough time and effort, and make that effort.)

But when faced with a system that I have reason to believe is thinking and where all plausible efforts have failed, I am not justified in concluding that it isn't thinking after all, rather than concluding that its thinking is simply alien to me.

Replies from: None
comment by [deleted] · 2012-04-09T15:33:30.381Z · LW(p) · GW(p)

I would be as surprised to hear of an alien mind thinking thoughts I can't think as I would be to hear of an alien stomach digesting foods I can't digest -- that is, not surprised at all. There's nothing magic about thought, it's just another thing we've evolved to be able to do.

I guess my problem with this claim is similar to my problem with the original quote: the analogy between sensations and thoughts is pretty weak, such that the inference from incompatible sensations to incompatible thoughts is dubious. The analogy between thoughts and digestion is even weaker. The objection that we're organisms of a certain kind, with certain biological limits, is one which involves taking an extremely general point, and supposing that it bears on this issue in particular. But how? Why think that the biological character of our brain places any relevant limits on what we can think? This might seem like an obvious connection, but I don't think it can go without support. The connection between neural activity and brain structures on the one hand and thoughts on the other is not so clear that we can just jump from such general observations about the one to specific claims about the other.

So how can we fill out this reasoning?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-09T16:02:10.593Z · LW(p) · GW(p)

Why think that the biological character of our brain places any relevant limits on what we can think? This might seem like an obvious connection, but I don't think it can go without support.

Yes, it does seem like an obvious connection to me. But, all right...

For example, I observe that various alterations of the brain's structure (e.g., those caused by brain lesions or traumatic brain injury) create alterations in the kinds of thoughts the brain can think, and I infer from this that the brain's structure constrains the kinds of thoughts it can think.

And as I said, I consider the common reference class of evolved systems a source of useful information here as well.

Incidentally, didn't you earlier agree that brains weren't general-purpose inference-traversing devices? I thought your position was that language filled this role. (Even more incidentally, does it follow from this that two nonlinguistic thinking systems being mutually nonintelligible is more plausible for you? Actually, do you consider nonlinguistic thinking systems possible in the first place?)

Replies from: None
comment by [deleted] · 2012-04-09T16:29:42.700Z · LW(p) · GW(p)

For example, I observe that various alterations of the brain's structure (e.g., those caused by brain lesions or traumatic brain injury) create alterations in the kinds of thoughts the brain can think, and I infer from this that the brain's structure constrains the kinds of thoughts it can think.

I don't think this is a good inference: it doesn't follow from the fact that defective brains are constrained in some of their cognitive capacities that for healthy brains there are thoughts that they cannot think (and not for reasons of memory storage, etc.). First, this involves an inference from facts about an unhealthy brain to facts about a healthy brain. Second, this involves an inference from certain kinds of limitations on unhealthy brains to other kinds of limitations on healthy brains. After all, we've agreed that we're not talking about limits on thinking caused by a lack of resources like memory. None of the empirical work showing that brain damage causes cognitive limits is strictly relevant to the question of whether or not other languages are translatable into our own.

Incidentally, didn't you earlier agree that brains weren't general-purpose inference-traversing devices? I thought your position was that language filled this role.

This is still my position.

(Even more incidentally, does it follow from this that two nonlinguistic thinking systems being mutually nonintelligable is more plausible for you? Actually, do you consider nonlinguistic thinking systems possible in the first place?)

No, I don't consider that to be possible, though it's a matter of how broadly we construe 'thinking' and 'language'. But where thinking is the sort of thing that's involved in truth values and inference relations (the truth predicate is probably not actually necessary), and where language is what we are using to communicate right now, then I would say "there is nothing that thinks that cannot use language, and everything that can use language can to that extent think."

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-09T16:40:55.519Z · LW(p) · GW(p)

As I said the last time this came up, I don't consider the line you want to draw on "for reasons of memory storage, etc" to be both well-defined and justified.

More precisely, I freely grant that if there are two minds A and B such that A can think thought T and B cannot think T, that there is some physical difference D between A and B that causes that functional difference, and whether D is in the category of "memory storage, etc." is not well defined. If any physical difference counts, then I guess I agree with you: if A can think T and B is physically identical to A, then B can think T as well. But that's one hell of an additional condition.

there is nothing that thinks that cannot use language, and everything that can use language can to that extent think

It follows that as humans evolved from our most recent non-thinking-and-non-linguistic forebear, the ability to think and the ability to use language appeared within the same individual. Yes?

Replies from: None
comment by [deleted] · 2012-04-09T16:58:59.928Z · LW(p) · GW(p)

if A can think T and B is physically identical to A, then B can think T as well. But that's one hell of an additional condition.

Well, I take it for granted that you and I can think the same thought (say "It is sunny in Chicago"), even though (I think) we have no reason to believe the neural activity involved in this thinking is identical in our two brains. So the fact that there are physical differences between two thinkers doesn't immediately mean that they cannot think the same thoughts. I expect you can think all the same thoughts that I think if we were to make a project of it. And yet it is implausible (and as far as I know empirically unsupported) to think that part or all (or even any) of your brain would as a result become structurally identical to mine.

So physical differences can matter, but among healthy brains, they almost always don't. No two english speakers have structurally identical brains, and yet we're all fully mutually intelligible.

So we can't infer from physical differences to cognitive incompatibilities. I asked you in my last post to give me some reasons for the inference from 'our brains are evolved systems' to 'we can have reason to believe that there are thoughts we cannot think' or 'there are thoughts we cannot think'. Is there some inferential meat here, or is this a hunch? Have I misrepresented your view?

It follows that as humans evolved from our most recent non-thinking-and-non-linguistic forebear, the ability to think and the ability to use language appeared within the same individual. Yes?

Yes, I think so, though of course there wasn't a 'first thinker/language user'.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-09T17:30:32.942Z · LW(p) · GW(p)

Well, I take it for granted that you and I can think the same thought (say "It is sunny in Chicago"), even though (I think) we have no reason to believe the neural activity involved in this thinking is identical in our two brains.

This is another place where I want to avoid treating "Y is near enough to X for practical considerations" as equivalent to "Y is X" and then generalizing out from that to areas outside those practical considerations.

I would certainly agree that you and I can think two thoughts Ta and Tb and have them be similar enough to be considered the same thought for practical purposes (the case where both Ta and Tb map to "It is sunny in Chicago" might be an example, depending on just what we mean by that utterance). I would similarly agree that we have no reason to expect, in this case, that either the neural activity involved in this thinking or the biochemical structures that support and constrain that neural activity to be exactly identical.

So physical differences can matter, but among healthy brains, they almost always don't.

Sure, but why are you limiting the domain of discourse in this way?

If Tom has a stroke and suffers from aphasia, he is less mutually intelligible with other English speakers than he was before the stroke, and his brain is less relevantly similar to that of other English speakers. As his brain heals and the relevant similarities between his brain and mine increase, our mutual intelligibility also increases.

I certainly agree that if we ignore Tom altogether, we have less reason to believe that structure constrains function when it comes to cognition than if we pay attention to Tom. But I don't see why ignoring him is justified.

So physical differences can matter, but among healthy brains, they almost always don't. No two english speakers have structurally identical brains, and yet we're all fully mutually intelligible.

I would say rather that the relevant parts of two English speakers' brains are very similar, and their mutual intelligibility is high. This is precisely what I would expect from a relationship between relevant structural similarity and mutual intelligibility.

As above, this is equivalent to what you said for practical considerations.

I asked you in my last post to give me some reasons for the inference from 'our brains are evolved systems' to 'we can have reason to believe that there are thoughts we cannot think' or 'there are thoughts we cannot think'. Is there some inferential meat here, or is this a hunch?

If you don't consider anything I've said on this question over the last twenty comments or so to provide meat for inference, then I think we can classify it as "just a hunch" for purposes of this conversation.

Replies from: None
comment by [deleted] · 2012-04-09T18:48:26.864Z · LW(p) · GW(p)

But I don't see why ignoring him is justified.

The point isn't that we should ignore him. The point is that your assumption that the difference between Tom and a healthy brain is relevant to this question is (at least as yet) undefended.

If you don't consider anything I've said on this question over the last twenty comments or so to provide meat for inference, then I think we can classify it as "just a hunch" for purposes of this conversation.

Maybe you could point me to something specific? In reviewing our conversations, I found statements of this inference, but I didn't find a defense of it. At one point you said you took it to be obvious, but this is the best I could find. Am I just missing something?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-09T19:06:01.162Z · LW(p) · GW(p)

I don't know if you're missing anything.

I accept that you consider the items on which I base the belief that brain structure constrains the set of inferential relations that an evolved brain can traverse to be inadequate evidence to justify that conclusion. I don't expect repeating myself to change that. If you genuinely don't consider them evidence at all, I expect repeating myself to be even less valuable.

Replies from: None
comment by [deleted] · 2012-04-09T19:54:39.311Z · LW(p) · GW(p)

If you genuinely don't consider them evidence at all, I expect repeating myself to be even less valuable.

I consider it evidence, just weak and indirect in relation to (what I take to be) much stronger and more directly related evidence that we can assume that anything we could recognize as thinking is something we can think. Such that, on balance, I would be surprised to hear that there are such thoughts.

It sounds like we've pretty much exhausted ourselves here, so thanks for the discussion.

comment by Desrtopa · 2012-04-06T13:40:33.604Z · LW(p) · GW(p)

Can you rotate four dimensional solids in your head?

Edit: it looks like I'm not the first to suggest this, but I'll add that since computers are capable not just of representing more than three spatial dimensions, but of tracking objects through them, these are probably "possible thoughts" even if no human can represent them mentally.
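To make the edit's point concrete, here is a minimal sketch of how a computer can "track" an object through four spatial dimensions (the helper name `rotate_4d` is mine, not from the thread): a rotation in 4D happens in a plane rather than around an axis, so it is just an ordinary 2x2 rotation applied to two of the four coordinates.

```python
import math

def rotate_4d(point, i, j, theta):
    """Rotate a 4D point by angle theta in the plane spanned by axes i and j.

    A 4D rotation has no single 'axis of rotation'; it fixes the plane
    orthogonal to the (i, j) plane and rotates within (i, j).
    """
    p = list(point)
    c, s = math.cos(theta), math.sin(theta)
    p[i] = c * point[i] - s * point[j]
    p[j] = s * point[i] + c * point[j]
    return tuple(p)

# Rotating the point (1, 0, 0, 0) by 90 degrees in the x-w plane
# carries it from the x axis onto the w axis.
p = rotate_4d((1.0, 0.0, 0.0, 0.0), 0, 3, math.pi / 2)
```

The machine does nothing visual here, which is the comment's point: it does the math without any accompanying mental image, much as a human can compute 4D rotations symbolically without being able to visualize them.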

Replies from: None
comment by [deleted] · 2012-04-06T14:02:07.472Z · LW(p) · GW(p)

Can you rotate four dimensional solids in your head?

Well, suppose I'm colorblind from birth. I can't visualize green. Is this significantly different from the example of 4d rotations?

If so, how? (ETA: after all, we can do all the math associated with 4d rotations, so we're not deficient in conceptualizing them, just in imagining them. Arguably, computers can't visualize them either. They just do the math and move on).

If not, then is this the only kind of thought (i.e. visualizations, etc.) that we can defend as potentially unthinkable by us? If this is the only kind of thought thus defensible, then we've rendered the original quote trivial: it infers from the fact that it's possible to be unable to see a color that it's possible to be unable to think a thought. But if these kinds of visualizations are the only kinds of thoughts we might not be able to think, then the quote isn't saying anything.

Replies from: Desrtopa
comment by Desrtopa · 2012-04-06T14:21:03.694Z · LW(p) · GW(p)

If you discount inaccessible qualia, how about accurately representing the behaviors of subatomic particles in a uranium atom?

I'm not a physicist, but I have been taught that beyond the simplest atoms, the calculations become so difficult that we're unable to determine whether our quantum models actually predict the configurations we observe. In this case, we can't simply do the math and move on, because the math is too difficult. With our own mental hardware, it appears that we can neither visualize nor predict the behavior of particles on that scale, above a certain level of complexity, but that doesn't mean that a jupiter brain wouldn't be able to.

Replies from: None
comment by [deleted] · 2012-04-06T15:19:18.548Z · LW(p) · GW(p)

If you discount inaccessible qualia, how about accurately representing the behaviors of subatomic particles in a uranium atom?

I'm not discounting qualia (that's its own discussion), I'm just saying that if these are the only kinds of thoughts which we can defend as being potentially unthinkable by us, then the original quote is trivial.

So one strategy you might take to defend thoughts we cannot think is this: thinking is or supervenes on a physical process, and thus it necessarily takes time. All human beings have a finite lifespan. Some thought could be formulated such that the act of thinking it with a human brain would take longer than any possible lifespan, or perhaps just an infinite amount of time. Therefore, there are thoughts we cannot think.

I think this suggestion is basically the same as yours: what prevents us from thinking this thought is some limited resources, like memory or lifespan, or something like that. Similarly, I could suggest a language that is in principle untranslatable, just because all well formed sentences and clauses in that language are long enough that we couldn't remember a whole one.

But it would be important to distinguish, in these cases, between two different kinds of unthinkability or untranslatability. Both the infinite (or just super complex) thoughts and the super long sentences are translatable into a language we can understand, in principle. There's nothing about those thoughts or sentences, or our thoughts or sentences, that makes them incompatible. The incompatibility arises from a fact about our biology. So in the same line, we could say that some alien species' language is untranslatable because they speak and write in some medium we don't have the technology to access. The problem there isn't with the language or the act of translation.

In sum, I think that this suggestion (and perhaps the original quote) trades on an equivocation between two different kinds of unthinkability. But if the only defensible kind of unthinkability is one based on some accidental limitation of access or resources, then I can't see what's interesting about the idea. It's then no more interesting than the point that I can't speak Chinese because I haven't learned it.

comment by Richard_Kennaway · 2012-04-05T06:37:17.602Z · LW(p) · GW(p)

For me, it merely brings it to the level of "interesting speculation". What observations would provide strong evidence that there be dragons? Other weak evidence that just leaves it at much the original level is the existence of anosognosia -- people with brain damage who appear to be unable to think certain thoughts about their affliction. But that doesn't prove anything about the healthy brain, any more than blindness proves the existence of invisible light.

Some people seem unable to grok mathematics, but then, some people do. The question is whether, Turing-completeness aside, the best current human thinking is understanding-complete, subject only to resource limitation.

comment by BillyOblivion · 2012-04-17T12:22:57.766Z · LW(p) · GW(p)

So if Majus's post (on Pinker) is correct, and the underlying processing engine(s) (aka "the brain") determine the boundaries of what you can think about, then it is almost tautological that no one can give you an example, since to date almost all folks have a very similar underlying architecture.

Replies from: None
comment by [deleted] · 2012-04-17T13:59:34.126Z · LW(p) · GW(p)

So what I argued was that thoughts are by nature commensurable: it's just in the nature of thoughts that any thinking system can think any thought from any other thinking system. There are exceptions to this, but these exceptions are always on the basis of limited resources, like limited memory.

So, an application of this view is that there are no incommensurable scientific schemes: we can in principle take any claim from any scientific paradigm and understand or test it in any other.

Replies from: BillyOblivion
comment by BillyOblivion · 2012-04-21T00:20:21.952Z · LW(p) · GW(p)

All I argued was that if their thesis is correct, then unless you've had some very odd experiences, no one can give you an example because everyone you meet is similarly bounded.

That is the limit of what my statement was intended to convey.

I don't know enough neurology, psychology and etc. to have a valid opinion, but I will note that we see at most 3 colors. We perceive many more. But any time we want to perceive, for example, the AM radio band we map it into a spectrum our eyes can handle, and as near as I can tell we "think" about it in the colors we perceive.

It is my understanding that there is some work in this area where certain parts of the brain handle certain types of work. Folks with certain types of injuries or anomalous structures are unable to process certain types of input, and unable to do certain kinds of work. This seems to indicate that while our brain, as currently constructed, is a fairly decent tool for working out the problems we have in front of us, there is some evidence that it is not a general purpose thinking machine.

(in one of those synchronicity thingies my 5 year old just came up to me and showed me a picture of sound waves coming into an ear and molecules "traveling" into your nose).

comment by Alicorn · 2012-04-01T18:09:23.468Z · LW(p) · GW(p)

Westerners are fond of the saying ‘Life isn’t fair.’ Then, they end in snide triumphant: ‘So get used to it!’
What a cruel, sadistic notion to revel in! What a terrible, patriarchal response to a child’s budding sense of ethics. Announce to an Iroquois, ‘Life isn’t fair,’ and her response will be: ‘Then make it fair!’

Barbara Alice Mann

Replies from: Viliam_Bur, Jayson_Virissimo, John_Maxwell_IV
comment by Viliam_Bur · 2012-04-01T20:44:32.442Z · LW(p) · GW(p)

I agree with the necessity of making life more fair, and disagree with the connotational noble Pocahontas lecturing a sadistic western patriarch. (Note: the last three words are taken from the quote.)

Replies from: Nornagest, Alicorn, NancyLebovitz, taelor
comment by Nornagest · 2012-04-01T22:59:04.774Z · LW(p) · GW(p)

Agree that that looks an awful lot like an abuse of the noble savage meme. Barbara Alice Mann appears to be an anthropologist and a Seneca, so that's at least two points where she should really know better -- then again, there's a long and more than somewhat suspect history of anthropologists using their research to make didactic points about Western society. (Margaret Mead, for example.)

Not sure I entirely agree re: fairness. "Life's not fair" seems to me to succinctly express the very important point that natural law and the fundamentals of game theory are invariant relative to egalitarian intuitions. This can't be changed, only worked around, and a response of "so make it fair" seems to dilute that point by implying that any failure of egalitarianism might ideally be traced to some corresponding failure of morality or foresight.

Replies from: Multiheaded, ciphergoth
comment by Multiheaded · 2012-04-03T21:21:50.247Z · LW(p) · GW(p)

You are confusing "fairness" and egalitarianism. While everyone has their own definition of "fairness", it feels obvious to me that, even if you're correct about the cost of imposing reasonable egalitarianism being too high in any given situation, this does not absolve us from seeking some palliative measures to protect those left worst off by that situation. Reducing first the suffering of those who suffer most is an ok partial definition of fairness for me.

Despite (or due to, I'm too sleepy to figure it out) considering myself an egalitarian, I would prefer a world where the most achieving 10% get 200 units of income (and the top 10% of them get 1000), the least achieving 10% get 2 units and everyone else gets 5-15 units (1 unit supporting the lifestyle of today's European blue-collar worker) to a world where the bottom 10% get 0.2 units and everyone else gets 25-50. Isn't that more or less the point of charity (aside from signaling)?

Replies from: Nornagest
comment by Nornagest · 2012-04-03T23:11:39.998Z · LW(p) · GW(p)

even if you're correct about the cost of imposing reasonable egalitarianism being too high in any given situation

I didn't say this. Actually, I'd consider it somewhat incoherent in the context of my argument: if imposing reasonable egalitarianism (whatever "reasonable" is) was too costly to be sustainable, it seems unlikely that we'd have developed intuitions calling for it.

On the other hand, I suppose one possible scenario where that'd make sense would be if some of the emotional architecture driving our sense of equity evolved in the context of band-level societies, and if that architecture turned out to scale poorly -- but that's rather speculative, somewhat at odds with my sense of history, and in any case irrelevant to the point I was trying to make in the grandparent.

Anyway, don't read too much into it. My point was about the relationship between the world and its mathematics and our anthropomorphic intuitions; I wasn't trying to make any sweeping generalizations about our behavior towards each other, except in the rather limited context of game theory and its various cultural consequences. I certainly wasn't trying to make any prescriptive statements about how charitable we should be.

Replies from: Multiheaded
comment by Multiheaded · 2012-04-04T11:50:15.616Z · LW(p) · GW(p)

if imposing reasonable egalitarianism (whatever "reasonable" is) was too costly to be sustainable, it seems unlikely that we'd have developed intuitions calling for it.

Some of the local Right are likely to claim that we developed them just for the purpose of signaling, and that they're the worst thing EVAH when applied to reality. ;)

(Please don't take this as a political attack, guys, my debate with you is philosophical. I just need a signifier for you.)

Replies from: None
comment by [deleted] · 2012-04-11T07:11:22.973Z · LW(p) · GW(p)

ominous theme music

Well someone certainly has been digging into the LessWrong equivalent of Sith holocrons. You are getting pretty good at integrating their mental tool kit. It has made your thinking clearer, made your positions stronger than would have been otherwise possible.

Now, far be it from me to question such a search for knowledge. Indeed I commend it. It is a path to great predictive power! You will find that as you continue your studies it can offer many useful heuristics that some would consider ... unthinkable.

Replies from: Multiheaded
comment by Multiheaded · 2012-04-11T12:47:41.091Z · LW(p) · GW(p)

You know, I was not wholly unprepared for this ideological predicament. Since I first became interested in Fascist-like ideas and the history of political conflict surrounding them (during high school), I've always had a hunch that "the enemy" is far wiser, more attractive and more insidious than most people who pretend to "common sense" believe. It is the radical Right themselves and the radical Left who oppose both them and mainstream liberalism (which is "common sense" to our age) that have a more realistic estimate of this conflict's importance. Even in spite of the fact that said Right has been hounded and suppressed since 1940, including, in a gentler way, by moderate conservatives eager to attain a more enlightened image. To quote again from Orwell's review of Mein Kampf:

Suppose that Hitler’s programme could be put into effect. What he envisages, a hundred years hence, is a continuous state of 250 million Germans with plenty of “living room” (i.e. stretching to Afghanistan or thereabouts), a horrible brainless empire in which, essentially, nothing ever happens except the training of young men for war and the endless breeding of fresh cannon-fodder. How was it that he was able to put this monstrous decision across? It is easy to say that at one stage of his career he was financed by the heavy industrialists, who saw in him the man who would smash the Socialists and Communists. They would not have backed him, however, if he had not talked a great movement into existence already. Again, the situation in Germany, with its seven million unemployed, was obviously favourable for demagogues. But Hitler could not have succeeded against his many rivals if it had not been for the attraction of his own personality, which one can feel even in the clumsy writing of Mein Kampf, and which is no doubt overwhelming when one hears his speeches. I should like to put it on record that I have never been able to dislike Hitler. Ever since he came to power — till then, like nearly everyone, I had been deceived into thinking that he did not matter — I have reflected that I would certainly kill him if I could get within reach of him, but that I could feel no personal animosity. The fact is that there is something deeply appealing about him. One feels it again when one sees his photographs — and I recommend especially the photograph at the beginning of Hurst and Blackett’s edition, which shows Hitler in his early Brownshirt days. It is a pathetic, dog-like face, the face of a man suffering under intolerable wrongs. In a rather more manly way it reproduces the expression of innumerable pictures of Christ crucified, and there is little doubt that that is how Hitler sees himself. 
The initial personal cause of his grievance against the universe can only be guessed at; but at any rate the grievance is there. He is the martyr, the victim. Prometheus chained to the rock, the self-sacrificing hero who fights single-handed against impossible odds. If he were killing a mouse he would know how to make it seem like a dragon. One feels, as with Napoleon, that he is fighting against destiny, that he can’t win, and yet that he somehow deserves to. The attraction of such a pose is of course enormous; half the films that one sees turn upon some such theme.

Of course, the above can't be applied to all such right-wing radicals without adjusting for their personal differences - e.g. Mencius criticizing idealism as the root of all evil both on the right and on the left, while himself possessing a less-than-obvious but very distinct sort of idealism [1] - but still. If exposed to today's political blogosphere, Orwell could undoubtedly have constructed similar respectful warnings for all the radical opponents he'd find solid. The people who dreaded and obsessed over "Fascism", and continue to do so to this day - as well as the contrarians who actually walk that path - have clearer vision than the complacent masses. That the idea is in retreat and on the decline does not affect its strict consistency, decent compatibility with human nature and inherent potential.

Still, when all's said and done I view the situation as half a rational investigation and half a holy war (for a down-to-earth definition of "holy"); I don't currently feel any erosion in my values or see myself reneging at the end of it. Yet - and thank you for your compliment - I'm certainly eager to familiarize myself with as much of the other side's intellectual weaponry as it's possible to without getting significantly Sapir-Whorfed.

-[1] (I'm not going to describe in detail here Moldbug's many similarities and differences with classical thought that has been called fascist; I'll only mention that he himself admitted that calling his vision a "fascist technocracy" has "a grain of truth" - and, of course, I'm rather skeptical of his pretensions to exceptional pragmatism and non-mindkilledness)

comment by Paul Crowley (ciphergoth) · 2012-04-02T07:36:02.207Z · LW(p) · GW(p)

I think that Robert Smith has a much wiser take on this: "The world is neither fair nor unfair"

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-04-03T00:42:56.268Z · LW(p) · GW(p)

The world is neither F nor ~F?

Replies from: ciphergoth, RolfAndreassen, army1987
comment by Paul Crowley (ciphergoth) · 2012-04-03T12:05:51.693Z · LW(p) · GW(p)

Unfair is the opposite of fair, not the logical complement. The moon is neither happy nor sad.

comment by RolfAndreassen · 2012-04-03T01:03:27.669Z · LW(p) · GW(p)

That is indeed possible if F is incoherent or has no referent. The assertion seems equivalent to "There's no such thing as fairness".

Replies from: Oscar_Cunningham, ciphergoth
comment by Oscar_Cunningham · 2012-04-03T08:41:39.666Z · LW(p) · GW(p)

I'm confused because it was Eliezer who taught me this.

(P or ~P) is not always a reliable heuristic, if you substitute arbitrary English sentences for P.

EDIT: I'm now resisting the temptation to tell Eliezer to "read the sequences".

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-04-03T23:49:40.563Z · LW(p) · GW(p)

Original parent says, "The world is neither fair nor unfair", meaning, "The world is neither deliberately fair nor deliberately unfair", and my comment was meant to be interpreted as replying, "Of course the world is unfair - if it's not fair, it must be unfair - and it doesn't matter that it's accidental rather than deliberate." Also to counteract the deep wisdom aura that "The world is neither fair nor unfair" gets from counterintuitively violating the (F \/ ~F) axiom schema.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2012-04-04T08:12:54.426Z · LW(p) · GW(p)

It matters hugely that it's not deliberately unfair. People get themselves into really awful psychological holes - in particular the lasting and highly destructive stain of bitterness - by noting that the world is not fair, and going on to adopt a mindset that it is deliberately unfair.

Replies from: wedrifid
comment by wedrifid · 2012-04-04T08:28:54.763Z · LW(p) · GW(p)

It matters hugely that it's not deliberately unfair.

It matters a lot (to those who are vulnerable to the particular kind of irrational bitterness in question) that the universe is not deliberately unfair.

I took Eliezer's "it doesn't matter" to be the more specific claim "it does not matter to the question of whether the universe is unfair whether the unfairness present is deliberate or not-deliberate".

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2012-04-04T13:04:03.170Z · LW(p) · GW(p)

Err, the "question of whether the universe is unfair" sounds a lot to me like the "question of whether the tree makes a sound". What query are we trying to hug here? I think what I call "unfairness" - something due to some agent - is something we can at least sometimes usefully respond by being pissed off, because the agent doesn't want us to be pissed off. But the Universe absolutely cannot care whether we're pissed off, and so putting it under the same category as eg discrimination engenders the wrong response.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-04T13:25:24.262Z · LW(p) · GW(p)

What makes being pissed off at an agent who treats me unfairly useful is not that the agent doesn't want me to be pissed off. In fact, I can sometimes be usefully pissed off at an unfair agent that is entirely indifferent to, or even unaware of, my existence. In much the same way, I can sometimes be usefully pissed off at a non-agent that behaves in ways that I would classify as "unfair" if an agent behaved that way.

Admittedly, asking when it's useful to classify something as "unfair" is different from asking what things are in fact unfair.

On the other hand, in practice the first of those seems most relevant to actual human behavior. The second seems to pretty quickly lead to either the answer "everything" (all processes result in output distributions that are not evenly distributed across some metric) or "nothing" (all processes are equally constrained and specified by physical law) and neither of those answers seems terribly relevant to what anyone means by the question.

comment by Paul Crowley (ciphergoth) · 2012-04-04T08:15:03.773Z · LW(p) · GW(p)

No, that fairness isn't a characteristic you can measure of the world. There's such a thing as fairness when it comes to eg dividing a cake between children.

comment by A1987dM (army1987) · 2012-04-05T22:51:38.824Z · LW(p) · GW(p)

“The world is fair” = world.fairness > 0
“The world is unfair” = world.fairness < 0
“The world is neither fair nor unfair” = world.fairness == 0, or something like this.
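A minimal sketch of that three-valued reading (the `fairness` attribute and the `classify` helper are purely illustrative, not anything from the thread): treating fairness as a signed quantity makes explicit why "not fair" does not entail "unfair".

```python
# Hedged sketch of the comment above: "fair", "unfair", and
# "neither" are three disjoint cases of a signed score, so the
# F/~F dichotomy doesn't apply. All names here are hypothetical.

def classify(fairness: float) -> str:
    """Map a signed fairness score to one of three labels."""
    if fairness > 0:
        return "fair"
    if fairness < 0:
        return "unfair"
    return "neither fair nor unfair"

print(classify(1.0))   # fair
print(classify(-2.0))  # unfair
print(classify(0.0))   # neither fair nor unfair
```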

comment by Alicorn · 2012-04-01T21:54:15.233Z · LW(p) · GW(p)

I didn't think I could remove the quote from that attitude about it very effectively without butchering it. I did lop off a subsequent sentence that made it worse.

comment by NancyLebovitz · 2012-04-02T02:18:02.256Z · LW(p) · GW(p)

Do people typically say "life isn't fair" about situations that people could choose to change?

Replies from: AspiringKnitter, Jayson_Virissimo
comment by AspiringKnitter · 2012-04-02T02:28:30.529Z · LW(p) · GW(p)

Don't they usually say it about situations that they could choose to change, to people who don't have the choice?

Replies from: BlazeOrangeDeer, TimS
comment by BlazeOrangeDeer · 2012-04-02T05:36:19.846Z · LW(p) · GW(p)

Exactly. In my experience the people who say "life isn't fair" are the main reason that it still isn't.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2012-04-03T21:10:25.890Z · LW(p) · GW(p)

In my experience the people who say "life isn't fair" are the main reason that it still isn't.

How did you develop a sufficiently powerful causal model of "life" to establish this claim with such confidence?

Replies from: BlazeOrangeDeer
comment by BlazeOrangeDeer · 2012-04-03T22:51:24.343Z · LW(p) · GW(p)

I mean that in almost all of the situations where I've heard that phrase used, it was used by someone who was being unfair and who couldn't be bothered to make a real excuse.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2012-04-04T14:06:21.401Z · LW(p) · GW(p)

Okay, but that is a very different claim. It could be true even while most sources of unfairness in life are other things, not people who bother to say "life's not fair".

comment by TimS · 2012-04-02T02:52:41.609Z · LW(p) · GW(p)

I agree, it's usually used as an excuse not to try to change things.

comment by Jayson_Virissimo · 2012-04-02T09:48:39.756Z · LW(p) · GW(p)

Do people typically say "life isn't fair" about situations that people could choose to change?

Introspection tells me this statement usually gets trotted out when the cost of achieving fairness is too high to warrant serious consideration.

EDIT: Whoops, I just realised that my imagination only outputted situations involving adults. When imagining situations involving children I get the opposite of my original claim.

Replies from: Multiheaded
comment by Multiheaded · 2012-04-03T21:29:17.602Z · LW(p) · GW(p)

Introspection tells me this statement usually gets trotted out when the cost of achieving fairness is too high to warrant serious consideration.

Could you give an example of such a situation where the cost of achieving "fairness" is indeed too high for you? Because I have a hunch that we differ not so much in our assessment of costs but in our notions of "fairness". Oh, and what is "serious consideration"? Is a young man thinking of what route he should set his life upon and wanting to increase "fairness" doing more or less serious consideration than an adult thinking whether to give $500 to charity?

Replies from: NancyLebovitz, Jayson_Virissimo
comment by NancyLebovitz · 2012-04-04T11:56:39.733Z · LW(p) · GW(p)

Current example: A friend of mine telling her very intelligent son that he has to do boring schoolwork because life isn't fair.

It occurs to me to ask her whether a good gifted and talented program is available.

Replies from: Multiheaded
comment by Multiheaded · 2012-04-04T12:07:23.313Z · LW(p) · GW(p)

Hmm? I know I'm no-one to tell you those things and it might sound odd coming from a stranger, but... please try persuading her to attend to the kid's special needs somehow. Ideally, I believe, he should be learning what he loves plus things useful in any career like logic and social skills, with moderate challenge and in the company of like-minded peers... but really, any improvement over either the boredom of standard "education" or the strain of a Japanese-style cram school would be fine. It pains me to see smart children burning out, because it happened to me too.

Replies from: NancyLebovitz, CronoDAS
comment by NancyLebovitz · 2012-04-13T05:52:55.992Z · LW(p) · GW(p)

I've talked with her. Her son is already in a Gifted and Talented program, but they're still expecting too much busy work from him-- he's good at learning things that he's interested in the first time he hears them, and doesn't need drilling.

He's got two years more of high school to go.

I've convinced her that it's worthwhile to work on convincing the school that they should modify the program into something that's better for him, and also that it's good for him to learn about advocacy as well as (instead of?) accommodation. I think she cares enough that this isn't going to fall off the to do list, but I'll ask again in a couple of months.

Thanks for pushing about this.

Replies from: Multiheaded
comment by Multiheaded · 2012-04-13T06:10:32.790Z · LW(p) · GW(p)

Great. That's going to brighten up a very very shitty day I'm having, BTW. I got my father moderately angry and disappointed in me for an insubstantial reason (he's OK but kind of emotional and has annoying expectations), and then my mom phoned from work in tears to say that her cat electrocuted itself somehow. I have just got very high on coffee to numb emotion and am browsing LW right now until I can take a peek at reality again.

comment by CronoDAS · 2012-04-13T06:15:10.439Z · LW(p) · GW(p)

Me, I've burned out many times in school. Each time it happened, I was sent to psychiatrists as punishment.

comment by Jayson_Virissimo · 2012-04-04T05:43:24.991Z · LW(p) · GW(p)

I don't remember exactly what I imagined, but it was something like this:

Alice: I can't believe it! They chose that other guy for the job even though I have 6 more years of experience than him. It is so unfair... The only reason they picked him was because he went to the same school as the boss.

Bob: Well, life isn't fair sometimes. Just suck it up, work on your resume, and give the next interview your best shot.

Replies from: Multiheaded
comment by Multiheaded · 2012-04-04T06:19:06.181Z · LW(p) · GW(p)

Actually, I'd say that it could be a case where justice can assert itself... the boss is, barring unusual circumstances, going to lose out on a skilled worker and that could impact his business.

(I mean, presumably the overly high cost of achieving fairness in that case would be passing a law telling employers how to make hiring decisions... but that idiot of a boss would benefit from such a law if the heuristics in it were good; now he's free to shoot himself in the foot!)

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-04-04T06:41:01.788Z · LW(p) · GW(p)

Bob is telling Alice that life isn't fair. Bob is Alice's friend; he is not the boss. Bob seems like he has Alice's interests in mind, since it is unlikely that Alice "doing something about it" would be worth it (such as confronting the boss, suing the company, picketing on the street outside the building, etc...). She is probably better off just continuing her job search. This is independent of whether or not Alice's decision is best for society as a whole.

Replies from: Multiheaded
comment by Multiheaded · 2012-04-04T08:34:37.333Z · LW(p) · GW(p)

Oh, that makes sense.

comment by taelor · 2012-04-12T10:48:05.073Z · LW(p) · GW(p)

The problem with saying that we should make life more fair is that life is often unfair with regard to our ability to make it more fair.

comment by Jayson_Virissimo · 2012-04-02T09:45:47.102Z · LW(p) · GW(p)

The automatic pursuit of fairness might lead to perverse incentives. I have in mind some (non-genetically related) family in Mexico who don't bother saving money for the future because their extended family and neighbours would expect them to pay for food and gifts if they happen to acquire "extra" cash. Perhaps this "Western" patriarchal peculiarity has some merit after all.

Replies from: Viliam_Bur, Nornagest
comment by Viliam_Bur · 2012-04-03T08:41:20.419Z · LW(p) · GW(p)

Is this really about fairness? Seems like different people agree that fairness is a good thing, but use different definitions of fairness. Or perhaps the word fairness is often used to mean "applause lights of my group".

For someone fairness means "everyone has food to eat", for another fairness means "everyone pays for their own food". Then proponents of one definition accuse the others of not being fair -- the debate is framed as if the problem is not different definitions of fairness, but rather our group caring about fairness and the other group ignoring fairness; which of course means that we are morally right and they are morally wrong.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-04-03T13:50:52.436Z · LW(p) · GW(p)

Is this really about fairness?

IDK, but I have heard people refer to fairness in similar situations, so I am merely adopting their usage.

Seems like different people agree that fairness is a good thing, but use different definitions of fairness. Or perhaps the word fairness is often used to mean "applause lights of my group".

I agree. To a large degree the near universal preference for "fairness" in humans is illusory, because people mean mutually contradictory things by it.

For someone fairness means "everyone has food to eat", for another fairness means "everyone pays for their own food". Then proponents of one definition accuse the others of not being fair -- the debate is framed as if the problem is not different definitions of fairness, but rather our group caring about fairness and the other group ignoring fairness; which of course means that we are morally right and they are morally wrong.

I believe "fairness" can be given a fairly rigorous definition (I have in mind people like Rawls), but the second you get explicit about it, people stop agreeing that it is such a good thing (and therefore, it loses its moral force as a human universal).

comment by Nornagest · 2012-04-02T10:03:01.926Z · LW(p) · GW(p)

One wonders whether food and gifts translate into status more or less effectively than whatever they might buy to that end in "Western" society would. Scare quotes because most of Mexico isn't much more or less Western than the US, all things considered.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-04-02T10:19:01.566Z · LW(p) · GW(p)

Yeah, the scare quotes are because I dislike the use of "Western" to mean English-speaking cultures rather than the Greek-Latin-Arabic influenced cultures.

comment by John_Maxwell (John_Maxwell_IV) · 2012-04-02T05:51:00.349Z · LW(p) · GW(p)

I'm not convinced fairness is inherently valuable.

  • Envy is an unpleasant emotion that should probably be eliminated.
  • I like being part of egalitarian social groups, but I don't think status inequality has to follow inevitably from material inequality.
Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2012-04-02T07:35:21.146Z · LW(p) · GW(p)

I don't think that fairness is terminally valuable, but I think it has instrumental value.

comment by Mark_Eichenlaub · 2012-04-02T00:03:19.151Z · LW(p) · GW(p)

Gene Hofstadt: You people. You think money is the answer to every problem.

Don Draper: No, just this particular problem.

Mad Men, "My Old Kentucky Home"

Replies from: FiftyTwo
comment by FiftyTwo · 2012-04-03T21:47:12.920Z · LW(p) · GW(p)

Another good one from Don Draper:

I hate to break it to you, but there is no big lie, there is no system, the universe is indifferent.

Replies from: JulianMorrison
comment by JulianMorrison · 2012-04-20T15:04:18.285Z · LW(p) · GW(p)

This is mistaken because systems can and do assemble out of sufficiently similar people pursuing self interest in a way that ends up coordinated because their motivations are alike. Capitalism is the simplest and most obvious example of such a system, but I'd argue things like patriarchy and racism are similar.

Replies from: FiftyTwo
comment by FiftyTwo · 2012-05-11T00:45:18.615Z · LW(p) · GW(p)

The point is that the system doesn't have a particular overriding goal, or central coordination, and isn't interested in you personally. In context, he was speaking to counter-culture people who thought the system was against them, in an ego-satisfying way that made them feel significant. He counters that it is simply indifferent to them.

comment by A4FB53AC · 2012-04-01T15:48:12.770Z · LW(p) · GW(p)

A faith which cannot survive collision with the truth is not worth many regrets.

Arthur C. Clarke

Replies from: Multiheaded, NancyLebovitz
comment by Multiheaded · 2012-04-04T12:59:14.591Z · LW(p) · GW(p)

The trouble is, the most problematic kinds of faith can survive it just fine.

Replies from: gwern
comment by gwern · 2012-04-07T20:48:11.992Z · LW(p) · GW(p)

Which leads us to today's Umeshism: "Why are existing religions so troublesome? Because they're all false, the only ones that exist are so dangerous that they can survive the truth."

Replies from: Multiheaded
comment by Multiheaded · 2012-04-07T21:05:53.442Z · LW(p) · GW(p)

I'm not sure if I can really call myself Gnostic, but if I can, mine's neither troublesome*, nor does it make any claims inconsistent with a sufficiently strong simulation hypothesis.

-* (when e.g. Voegelin was complaining about "Gnostic" ideas of rearranging society, he was 1) obviously excluding any transformation he approved of, perhaps considering it "natural" and not dangerous meddling, and 2) blaming a fairly universal kind of radicalism correlated with all monotheistic or quasi-monotheistic worldviews; he's essentially privileging the hypothesis to vent about personality types he dislikes, and conservatives should really look at these things more objectively for the sake of their own values)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-04-08T07:14:48.470Z · LW(p) · GW(p)

when e.g. Voegelin was complaining about "Gnostic" ideas of rearranging society, he was 1) obviously excluding any transformation he approved of, perhaps considering it "natural" and not dangerous meddling

Um, no. He was complaining about attempts to rearrange society from the top down.

Replies from: Multiheaded
comment by Multiheaded · 2012-04-08T08:24:42.583Z · LW(p) · GW(p)

The problem is, hardly anyone else would describe a person who's actually in a position of power to do the rearranging - like e.g. Lenin - as "Gnostic"; he has certainly been known as a dreamer blind to reality, but as I pointed out that's a very general indictment. The way it's actually used throughout history, "Gnosticism" has the connotations of a monastic life and mystical pursuits, detached from daily life or outright fleeing from society; after all, no leader who actually left a noticeable mark on society has ever been called that. Many parallels have been drawn between Marxism/Fascism/transhumanism/etc and religious fundamentalism, but those parallels did not include a persecuted, non-populist and underground branch of a religion.

The word has always been associated with "heresy", and a tendency that's imposing its own dogma & suppressing opposition is not called a "heresy". Voegelin should've introduced a new term for the category of people he wanted to indict instead of appropriating an unsuitable word.

comment by NancyLebovitz · 2012-04-08T15:00:44.093Z · LW(p) · GW(p)

That's very nice to say, but people are apt to find giving up some faiths very emotionally wrenching and socially costly (even if the faith isn't high status, a believer is likely to have a lot of relationships with people who are also believers). Now what?

comment by Random832 · 2012-04-13T20:41:37.764Z · LW(p) · GW(p)

The other day I was thinking about Discworld, and then I remembered this and figured it would make a good rationality quote...

[Vimes] distrusted the kind of person who'd take one look at another man and say in a lordly voice to his companion, "Ah, my dear sir, I can tell you nothing except that he is a left-handed stonemason who has spent some years in the merchant navy and has recently fallen on hard times," and then unroll a lot of supercilious commentary about calluses and stance and the state of a man's boots, when exactly the same comments could apply to a man who was wearing his old clothes because he'd been doing a spot of home bricklaying for a new barbecue pit, and had been tattooed once when he was drunk and seventeen and in fact got seasick on a wet pavement. What arrogance! What an insult to the rich and chaotic variety of the human experience!

-- Terry Pratchett, Feet of Clay

Replies from: RobinZ, tut
comment by RobinZ · 2012-04-14T04:13:39.874Z · LW(p) · GW(p)

Reminded of a quote I saw on TV Tropes of a MetaFilter comment by ericbop:

Encyclopedia Brown? What a hack! To this day, I occasionally reach into my left pocket for my keys with my right hand, just to prove that little brat wrong.

comment by tut · 2012-04-14T09:27:31.662Z · LW(p) · GW(p)

Sounds like Vimes doesn't like Sherlock Holmes much.

Replies from: Multiheaded
comment by Multiheaded · 2012-04-14T09:45:41.961Z · LW(p) · GW(p)

Gee, you think?

Replies from: tut
comment by tut · 2012-04-14T11:31:51.230Z · LW(p) · GW(p)

Well, the quote made me think of this. Now that I looked up that post I notice that it is downvoted, so perhaps it isn't relevant. But the behavior that Vimes expresses distrust of in the Pratchett quote is pretty much the exact behavior that is used to show off how intelligent/perceptive Holmes is, and which the poster wants to use as an example for rationalists.

Replies from: MixedNuts
comment by MixedNuts · 2012-04-20T17:53:47.516Z · LW(p) · GW(p)

It is relevant and obvious. I suppose it was downvoted for the latter.

comment by Elithrion · 2012-04-03T01:38:31.699Z · LW(p) · GW(p)

"What really is the point of trying to teach anything to anybody?" This question seemed to provoke a murmur of sympathetic approval from up and down the table. Richard continued, "What I mean is that if you really want to understand something, the best way is to try and explain it to someone else. That forces you to sort it out in your mind. And the more slow and dim-witted your pupil, the more you have to break things down into more and more simple ideas. And that's really the essence of programming. By the time you've sorted out a complicated idea into little steps that even a stupid machine can deal with, you've learned something about it yourself."

Douglas Adams, Dirk Gently's Holistic Detective Agency

comment by lsparrish · 2012-04-04T03:19:15.612Z · LW(p) · GW(p)

What really matters is:–

  1. Always try to use the language so as to make quite clear what you mean and make sure your sentence couldn't mean anything else.

  2. Always prefer the plain direct word to the long, vague one. Don't implement promises, but keep them.

  3. Never use abstract nouns when concrete ones will do. If you mean "More people died" don't say "Mortality rose."

  4. In writing. Don't use adjectives which merely tell us how you want us to feel about the thing you are describing. I mean, instead of telling us a thing was "terrible," describe it so that we'll be terrified. Don't say it was "delightful"; make us say "delightful" when we've read the description. You see, all those words (horrifying, wonderful, hideous, exquisite) are only like saying to your readers, "Please will you do my job for me."

  5. Don't use words too big for the subject. Don't say "infinitely" when you mean "very"; otherwise you'll have no word left when you want to talk about something really infinite.

-- C. S. Lewis

comment by Alejandro1 · 2012-04-03T17:01:58.348Z · LW(p) · GW(p)

‘I’m exactly in the position of the man who said, ‘I can believe the impossible, but not the improbable.’’

‘That’s what you call a paradox, isn’t it?’ asked the other.

‘It’s what I call common sense, properly understood,’ replied Father Brown. ’It really is more natural to believe a preternatural story, that deals with things we don’t understand, than a natural story that contradicts things we do understand. Tell me that the great Mr Gladstone, in his last hours, was haunted by the ghost of Parnell, and I will be agnostic about it. But tell me that Mr Gladstone, when first presented to Queen Victoria, wore his hat in her drawing-room and slapped her on the back and offered her a cigar, and I am not agnostic at all. That is not impossible; it’s only incredible.

-G. K. Chesterton, The Curse of the Golden Cross

Replies from: CronoDAS, gwern
comment by CronoDAS · 2012-04-04T03:13:41.773Z · LW(p) · GW(p)

"What was the Sherlock Holmes principle? 'Once you have discounted the impossible, then whatever remains, however improbable, must be the truth.'"

"I reject that entirely," said Dirk sharply. "The impossible often has a kind of integrity to it which the merely improbable lacks. How often have you been presented with an apparently rational explanation of something that works in all respects other than one, which is just that it is hopelessly improbable? Your instinct is to say, 'Yes, but he or she simply wouldn't do that.'"

"Well, it happened to me today, in fact," replied Kate.

"Ah, yes," said Dirk, slapping the table and making the glasses jump. "Your girl in the wheelchair -- a perfect example. The idea that she is somehow receiving yesterday's stock market prices apparently out of thin air is merely impossible, and therefore must be the case, because the idea that she is maintaining an immensely complex and laborious hoax of no benefit to herself is hopelessly improbable. The first idea merely supposes that there is something we don't know about, and God knows there are enough of those. The second, however, runs contrary to something fundamental and human which we do know about. We should therefore be very suspicious of it and all its specious rationality."

-- Douglas Adams, The Long Dark Tea-Time of the Soul (1988), p. 169

Replies from: NancyLebovitz, Alejandro1
comment by NancyLebovitz · 2012-04-04T13:07:00.693Z · LW(p) · GW(p)

I can't find the quote easily (it's somewhere in God, No!), but Penn Jillette has said that one aspect of magic tricks is the magician putting in more work to set them up than anyone sane would expect.

I'm moderately sure that he's overestimating how clearly the vast majority of people think about what's needed to make a magic trick work.

Replies from: arundelo, TheOtherDave
comment by arundelo · 2012-04-04T14:51:32.707Z · LW(p) · GW(p)

His partner Teller says the same thing here:

Make the secret a lot more trouble than the trick seems worth. You will be fooled by a trick if it involves more time, money and practice than you (or any other sane onlooker) would be willing to invest. My partner, Penn, and I once produced 500 live cockroaches from a top hat on the desk of talk-show host David Letterman. To prepare this took weeks. We hired an entomologist who provided slow-moving, camera-friendly cockroaches (the kind from under your stove don't hang around for close-ups) and taught us to pick the bugs up without screaming like preadolescent girls. Then we built a secret compartment out of foam-core (one of the few materials cockroaches can't cling to) and worked out a devious routine for sneaking the compartment into the hat. More trouble than the trick was worth? To you, probably. But not to magicians.

Edit: That trick is 19 minutes and 50 seconds into this video.

comment by TheOtherDave · 2012-04-04T13:18:37.031Z · LW(p) · GW(p)

It's not clear to me that clear thought on the part of the audience is necessary to make that statement true.

comment by Alejandro1 · 2012-04-04T03:27:20.758Z · LW(p) · GW(p)

Yes, exactly the same idea. Partial versions of your quote have been posted twice on LW already, and might have inspired me to post the prior Chesterton version, but I liked seeing the context for the Adams one that you provide.

Replies from: CronoDAS
comment by CronoDAS · 2012-04-04T08:24:45.809Z · LW(p) · GW(p)

Out of context, the quote makes much less sense; the specific example illustrates the point much better than the abstract description does.

Just for fun, which of the following extremely improbable events do you think is more likely to happen first:
1) The winning Mega Millions jackpot combination is 1-2-3-4-5-6 (Note that there are 175,711,536 possible combinations, and drawings are held twice a week.)
2) The Pope makes a public statement announcing his conversion to Islam (and isn't joking).

Replies from: Alejandro1, army1987, sixes_and_sevens, army1987
comment by Alejandro1 · 2012-04-04T08:55:53.629Z · LW(p) · GW(p)

Assuming that the 1-2-3-4-5-6 win must occur by a legitimate random drawing (not a prank or a bug of some kind that is biased towards such a simple result), then I'd go for the Pope story as more likely to happen on any given day in the present. After all, there have been historically many examples of highly ranked members of groups who sincerely defected to opposing groups, starting with St. Paul. But I confess I'm not very sure about this, and I'm too sleepy to think about the problem rigorously.

In the form you posed the question ("which is more likely to happen first") it is much more difficult to answer because I'd have to evaluate how likely are institutions such as the lottery and the Catholic Church to persist in their current form for centuries or millennia.

Replies from: CronoDAS
comment by CronoDAS · 2012-04-04T09:38:02.362Z · LW(p) · GW(p)

In the form you posed the question ("which is more likely to happen first") it is much more difficult to answer because I'd have to evaluate how likely are institutions such as the lottery and the Catholic Church to persist in their current form for centuries or millennia.

Good point.

comment by A1987dM (army1987) · 2012-04-04T12:49:51.134Z · LW(p) · GW(p)

Just for fun

It'd be even more fun if you replaced "1-2-3-4-5-6" with "14-17-26-51-55-36". (Whenever I play lotteries I always choose combinations like 1-2-3-4-5-6, and I love to see the shocked faces of the people I tell, tell them that it's no less likely than any other combination but it's at least easier to remember, and see their perplexed faces for the couple seconds it takes them to realize I'm right. Someone told me that if such a combination ever won they'd immediately think of me. (Now that I think about it, choosing a Schelling point does have the disadvantage that should I win, I'd have to split the jackpot with more people, but I don't think that's ever gonna happen anyway.))

more likely to happen first

Dunno how you would count the (overwhelmingly likely) case where both Mega Millions and the papacy cease to exist without either of those events happening first, but let's pretend you said "more likely to happen in the next 10 years"... Event 1 ought to happen 0.6 times per million years on average; I dunno about the probability per unit time for Event 2, but it's likely about two orders of magnitude larger.
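A quick back-of-the-envelope check of that 0.6-per-million-years figure (this sketch is mine, not part of the comment; it assumes two Mega Millions drawings per week and the 175,711,536 combinations cited upthread):

```python
# Expected occurrences per million years of one specific combination
# (e.g. 1-2-3-4-5-6) coming up, assuming 2 drawings per week and a
# 1-in-175,711,536 chance per drawing.
combinations = 175_711_536
drawings_per_year = 2 * 52  # 104

rate_per_year = drawings_per_year / combinations
print(round(rate_per_year * 1_000_000, 2))  # ≈ 0.59 times per million years
```

which agrees with the ~0.6 figure in the comment.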

Replies from: Cyan, wedrifid
comment by Cyan · 2012-04-05T03:24:58.890Z · LW(p) · GW(p)

choosing a Schelling point

Aren't you choosing an anti-Schelling point? It seems to me that people avoid playing low Kolmogorov-complexity lottery numbers because of a sense that they're not random enough -- exactly the fallacious intuition that prompts the shocked faces you enjoy.

Replies from: orthonormal, army1987
comment by orthonormal · 2012-04-05T04:31:59.524Z · LW(p) · GW(p)

Choosing something that's "too obvious" out of a large search space can work if you're playing against a small number of competitors, but when there are millions of people involved, not only are some of them going to un-ironically choose "1-2-3-4-5-6", but more than one person will choose it for the same reason it appeals to you.

Replies from: Cyan
comment by Cyan · 2012-04-05T05:45:36.850Z · LW(p) · GW(p)

Thank you for that insightful observation.

Just to follow up, army1987's actual choice is:

I always choose combinations like 1-2-3-4-5-6

So whether this choice is Schelling or anti-Schelling depends on reference sets that are quite fuzzy on the specified information, to wit, the set of non-random-seeming selections and (the proportion of players in) the set of people who play them.

comment by A1987dM (army1987) · 2012-04-05T10:16:59.134Z · LW(p) · GW(p)

I still think many more people pick any given low Kolmogorov-complexity combination than any given high Kolmogorov-complexity combination, if anything because there are fewer of the former. If 0.1% of the people picked 01-02-03-04-05 / 06 and 99.9% of the people picked a combination from http://www.random.org/quick-pick/ (and discarded it should it look ‘not random enough’), there'd still be 175 thousand times as many people picking 01-02-03-04-05 / 06 as 33-39-50-54-58 / 23. (Likewise, the fact that the most common password is password doesn't necessarily mean that there are lots of idiots: it could mean that 0.01% of the people pick it and 99.99% pick one of more than 9,999 more complicated passwords. Not that I'm actually that optimistic.)
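army1987's "175 thousand times" figure can be checked directly (a sketch under the comment's hypothetical 0.1% assumption, not data about real players):

```python
# If 0.1% of players pick 01-02-03-04-05 / 06 and the other 99.9% pick
# uniformly at random among all 175,711,536 combinations, compare the
# share of players on the "simple" combination with the share on any one
# given random-looking combination.
combinations = 175_711_536
share_simple = 0.001
share_each_random = 0.999 / combinations

print(round(share_simple / share_each_random))  # ≈ 175887, i.e. ~175 thousand times
```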

comment by wedrifid · 2012-04-05T10:14:12.883Z · LW(p) · GW(p)

(Now that I think about it, choosing a Schelling point does have the disadvantage that should I win, I'd have to split the jackpot with more people, but I don't think that's ever gonna happen anyway.))

With this in mind I think I would choose combinations that match the pattern /[3-9][0-9][3-9][0-9][1-6][0-9]/. Six digit numbers look too much like dates!

comment by sixes_and_sevens · 2012-04-04T10:46:40.682Z · LW(p) · GW(p)

1-2-3-4-5-6 is a Schelling point for overt tampering with a lottery. That makes it considerably more likely to be reported as the outcome to a lottery, even if it's not more likely to be the outcome of a stochastic method of selecting numbers.

After seeing quite a few examples, I've recently become very sensitive to comparisons of an abstract idea of something with an objective something, as if they were on equal footing. Your question explicitly says the Pope conversion is a legitimate non-shenanigans event, while not making the same claim of the lottery result. Was that intentional?

Replies from: CronoDAS, APMason
comment by CronoDAS · 2012-04-04T20:21:29.032Z · LW(p) · GW(p)

After seeing quite a few examples, I've recently become very sensitive to comparisons of an abstract idea of something with an objective something, as if they were on equal footing. Your question explicitly says the Pope conversion is a legitimate non-shenanigans event, while not making the same claim of the lottery result. Was that intentional?

No, I just didn't think of it. (Assume that I meant that, if someone happens to have bought a 1-2-3-4-5-6 ticket, they would indeed be able to claim the top prize.)

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-04-04T22:19:18.698Z · LW(p) · GW(p)

I might not have worded that very clearly.

You said that the Pope was definitely not joking (or replaced by a prankster in a pope suit), but left it open as to whether the lottery result was actually a legitimate sequence of numbers drawn randomly from a lottery machine, or somehow engineered to happen.

In that sense, you're comparing a very definite unlikely event (the Pope actually converting to Islam) to a nominally unlikely event (1-2-3-4-5-6 coming up as the lottery results, for some reason that may or may not be a legitimate random draw). Was that intentional?

Replies from: CronoDAS
comment by CronoDAS · 2012-04-04T23:16:27.114Z · LW(p) · GW(p)

No, but if someone successfully manages to rig the lottery to come up 1-2-3-4-5-6, and doesn't get caught, I'd count that as an instance. Similarly, if the reason the Pope issued the public statement was that his brother was being held hostage or something, and he recants after he's rescued, that's good enough, too; I just wanted to rule out things like April Fools jokes, or off-the-cuff sarcastic remarks.

comment by APMason · 2012-04-04T11:52:35.397Z · LW(p) · GW(p)

1-2-3-4-5-6 is a Schelling point for overt tampering with a lottery.

I don't think that's true. If you were going to tamper with the lottery, isn't your most likely motive that you want to win it? Why, then, set it up in such a way that you have to share the prize with the thousands of other people who play those numbers?

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-04-04T12:00:42.962Z · LW(p) · GW(p)

I specified "overt tampering" rather than "covert tampering". If you wanted to choose a result that would draw suspicion, 1-2-3-4-5-6 strikes me as the most obvious candidate.

Replies from: army1987
comment by A1987dM (army1987) · 2012-04-04T12:51:34.997Z · LW(p) · GW(p)

If you wanted to choose a result that would draw suspicion

Why would anyone want to do that? (I'm sure that any reason for that would be much more likely than 1 in 175 million, but still I can't think of it.)

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-04-04T13:00:44.459Z · LW(p) · GW(p)

The three most obvious answers (to my mind) are:

1) to demonstrate your Big Angelic Powers
2) to discredit the lottery organisers
3) as a prank / because you can

comment by A1987dM (army1987) · 2012-04-04T09:27:26.962Z · LW(p) · GW(p)

The former will happen about every couple million years on average, so I'd say the latter is more likely by at least a factor of 100.

comment by gwern · 2012-04-04T00:57:19.045Z · LW(p) · GW(p)

The ghost of Parnell is Far, the presentation to the Queen is Near?

Replies from: Alejandro1
comment by Alejandro1 · 2012-04-04T03:34:04.873Z · LW(p) · GW(p)

Perhaps. I had thought of the quote in the context of a distinction between epistemic/Bayesian probability and physical possibility or probability. For us (though perhaps not for Father Brown) the ghost story is physically impossible, it contradicts the basic laws of reality, while the presentation story does not. (In terms of the MWI we might say that there is a branch of the wavefunction where Gladstone offered the Queen a cigar, but none where a ghost appeared to him.) However, we might very well be justified in assigning the ghost story a higher epistemic probability, because we have more underlying uncertainty about (to use your words) Far concepts like the possibility of ghosts than about Near ones like how Gladstone would have behaved in front of the Queen.

Replies from: cousin_it
comment by cousin_it · 2012-04-04T09:50:05.138Z · LW(p) · GW(p)

I seem to instinctively assign the ghost story a lower probability. The lesson of the quote might still be valid, can you come up with an example that would work for me?

Replies from: Alejandro1
comment by Alejandro1 · 2012-04-04T15:50:24.782Z · LW(p) · GW(p)

Sure. Take one mathematical fact which the mathematical community accepts as true, but which has a complicated proof only recently published and checked. Surely your epistemic probability that there is a mistake in the proof and the theorem is false should be larger than the epistemic probability of the Gladstone story (if you are not convinced, add more outrageous details to it, like Gladstone telling the Queen "What's up, Vic?"). But according to your current beliefs, in the actual world the theorem is necessarily true and its negation impossible, while the Gladstone story is possible in the MWI sense.

Replies from: cousin_it
comment by cousin_it · 2012-04-05T08:34:57.111Z · LW(p) · GW(p)

But according to your current beliefs, in the actual world the theorem is necessarily true and its negation impossible

Whuh? I have logical uncertainty about the theorem.

comment by Multiheaded · 2012-04-06T20:20:31.658Z · LW(p) · GW(p)

[Hitler] has grasped the falsity of the hedonistic attitude to life. Nearly all western thought since the last war, certainly all "progressive" thought, has assumed tacitly that human beings desire nothing beyond ease, security, and avoidance of pain. In such a view of life there is no room, for instance, for patriotism and the military virtues. The Socialist who finds his children playing with soldiers is usually upset, but he is never able to think of a substitute for the tin soldiers; tin pacifists somehow won’t do. Hitler, because in his own joyless mind he feels it with exceptional strength, knows that human beings don’t only want comfort, safety, short working-hours, hygiene, birth-control and, in general, common sense; they also, at least intermittently, want struggle and self-sacrifice, not to mention drums, flags and loyalty-parades.

However they may be as economic theories, Fascism and Nazism are psychologically far sounder than any hedonistic conception of life. The same is probably true of Stalin’s militarized version of Socialism. All three of the great dictators have enhanced their power by imposing intolerable burdens on their peoples. Whereas Socialism, and even capitalism in a grudging way, have said to people "I offer you a good time," Hitler has said to them "I offer you struggle, danger and death," and as a result a whole nation flings itself at his feet.

(George Orwell's review of Mein Kampf)

(well, we have videogames now, yet... we gotta make them better! more visceral!)

Replies from: Oligopsony, FiftyTwo
comment by Oligopsony · 2012-04-11T05:44:17.928Z · LW(p) · GW(p)

I don't see that that's true. Germany loved Hitler when he was giving them job security and easy victories and became much less popular once the struggle and danger and death arrived on the scene.

Replies from: Multiheaded
comment by Multiheaded · 2012-04-11T12:59:11.564Z · LW(p) · GW(p)

They grumbled, but 95% of them obeyed, worked, killed and died up until the spring of 1945. A huge number of Germans certainly believed that sticking with the Nazis until the conflict's end was a much lesser evil compared to another national humiliation on the scale of Versailles. And look at the impressive use to which he and Goebbels put evaporative cooling of group beliefs to radicalize the faithful after the July plot. Purging a few malcontents led to a significant increase in zeal and loyalty even as things were getting visibly worse and worse.

comment by FiftyTwo · 2012-06-06T10:18:47.659Z · LW(p) · GW(p)

Full review here:

Replies from: Multiheaded
comment by Multiheaded · 2012-06-06T12:52:08.601Z · LW(p) · GW(p)

There's a pretty good and complete archive of all things by St. George at orwell.ru, by the way. As a pleasant exercise, I'm going to go through the Russian translations over there and see if I can correct anything.

comment by Alejandro1 · 2012-04-02T19:08:33.176Z · LW(p) · GW(p)

On politics as the mind-killer:

We’re at the point where people are morally certain about the empirical facts of what happened between Trayvon Martin and George Zimmerman on the basis of their general political worldviews. This isn’t exactly surprising—we are tribal creatures who like master narratives—but it feels as though it’s gotten more pronounced recently, and it’s almost certainly making us all stupider.

-- Julian Sanchez (the whole post is worth reading)

Replies from: RobertLumley, MixedNuts
comment by RobertLumley · 2012-04-03T01:42:13.496Z · LW(p) · GW(p)

It starts to seem, as Albert Camus once put it, that we’ve made the mind into an armed camp—in which not only politicians and legislative proposals, but moral philosophies, artworks, even scientific theories, have to wear the insignia of one or the other army

Does anyone know the exact quote to which he is referring here?

Replies from: NancyLebovitz, RobertLumley
comment by NancyLebovitz · 2012-04-03T21:47:52.711Z · LW(p) · GW(p)

We've reached the point where the weather is political, and so are third person pronouns.

Replies from: Multiheaded
comment by Multiheaded · 2012-04-10T14:26:14.196Z · LW(p) · GW(p)

Well, third-person pronouns were always political - it's just that only the last century's shift in values and ideological attitudes has allowed the spread of gender-neutral pronouns. Before that the issue was taken to be completely one-sided.

Replies from: hairyfigment
comment by hairyfigment · 2012-04-10T14:33:39.640Z · LW(p) · GW(p)

Conversely, evolution does not count as "political" here because we all belong to one camp. (Posted from Louisiana.)

comment by RobertLumley · 2012-04-03T01:55:58.916Z · LW(p) · GW(p)

I think it's this but I'm not sure:

The Greeks never made the human mind into an armed camp, and in this respect we are inferior to them.

Replies from: Eugine_Nier, FiftyTwo
comment by Eugine_Nier · 2012-04-04T03:46:26.110Z · LW(p) · GW(p)

Tell that to Socrates.

comment by FiftyTwo · 2012-04-03T21:55:58.023Z · LW(p) · GW(p)

Given that they supposedly drowned people for discussing irrational numbers, that seems false.

Replies from: ec429, CronoDAS
comment by ec429 · 2012-04-07T22:11:02.526Z · LW(p) · GW(p)

Sorry to have to tell you this, but Pythagoras of Samos probably didn't even exist. More generally, essentially everything you're likely to have read about the Pythagoreans (except for some of their wacky cultish beliefs about chickens) is false, especially the stuff about irrationals. The Pythagoreans were an Orphic cult, who (to the best of our knowledge) had no effect whatsoever on mainstream Greek mathematics or philosophy.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-04-08T01:42:53.599Z · LW(p) · GW(p)

Source?

Replies from: ec429
comment by ec429 · 2012-04-08T02:04:04.945Z · LW(p) · GW(p)

Well, my source is Dr Bursill-Hall's History of Mathematics lectures at Cambridge; I presume his source is 'the literature'. Sorry I can't give you a better source than that.

Replies from: None
comment by [deleted] · 2012-04-08T03:52:05.299Z · LW(p) · GW(p)

Can anyone confirm this? Preferably with citation?

comment by MixedNuts · 2012-04-04T16:30:51.221Z · LW(p) · GW(p)

Wait, is there any actual disagreement about what happened? I'm reading older Julian Sanchez posts, but the only point of disagreement seems to be "Once Zimmerman confronted Martin with a gun, did Martin try to disarm him before getting shot?". None of what I've read considers the question relevant; they base their judgements on already known facts such as "someone shot someone else then was let free rather than have a judge decide whether it counted as self-defense".

Replies from: TimS, None
comment by TimS · 2012-04-10T00:01:06.442Z · LW(p) · GW(p)

There's substantial disagreement about the facts. For example, someone was heard yelling for help, but no one agrees whether that was Zimmerman or Martin.

I can talk about Stand-Your-Ground laws and their apparent effect in this case, but I don't want to drone on.

comment by [deleted] · 2012-04-10T18:59:26.381Z · LW(p) · GW(p)

There is the minor matter of people trying very hard to spin and misrepresent events. At this point I can't help but link to this very relevant Aurini talk on the subject.

Replies from: CaveJohnson
comment by CaveJohnson · 2012-04-23T10:19:17.902Z · LW(p) · GW(p)

Thank you for the link!

Checking out some of his other videos and links, I found this podcast on the topic to be rather interesting commentary.

Especially the summary of facts starting at the 23 minute mark.

Replies from: wallowinmaya, None
comment by David Althaus (wallowinmaya) · 2012-04-25T11:26:58.675Z · LW(p) · GW(p)

Link doesn't work. Here is a new one.

Replies from: CaveJohnson
comment by CaveJohnson · 2012-04-25T12:14:51.895Z · LW(p) · GW(p)

Thank you! Fixed the link to match yours.

comment by [deleted] · 2012-04-23T10:29:00.084Z · LW(p) · GW(p)

Yes I listened to that podcast as well.

I am much more confident that Zimmerman was not the attacker than I was about the innocence of Amanda Knox. His instant demonization and near-lynching (people putting out a dead-or-alive bounty) seem a very troubling development for American society.

Replies from: CharlieSheen
comment by CharlieSheen · 2012-05-03T13:01:16.885Z · LW(p) · GW(p)

His instant demonization and near lynching (people putting out a dead or alive bounty) seems a very troubling development for American society.

More justice for Trayvon I guess.

comment by Spurlock · 2012-04-02T04:45:14.175Z · LW(p) · GW(p)

"Muad’Dib learned rapidly because his first training was in how to learn. And the first lesson of all was the basic trust that he could learn. It is shocking to find how many people do not believe they can learn, and how many more believe learning to be difficult. Muad’Dib knew that every experience carries its lesson."

Frank Herbert, Dune

Replies from: John_Maxwell_IV, gwern
comment by John_Maxwell (John_Maxwell_IV) · 2012-04-02T06:10:24.413Z · LW(p) · GW(p)

It took me years to learn not to feel afraid due to a perceived status threat when I was having a hard time figuring something out.

A good way to make it hard for me to learn something is to tell me that how quickly I understand it is an indicator of my intellectual aptitude.

Replies from: Spurlock
comment by Spurlock · 2012-04-02T15:32:25.954Z · LW(p) · GW(p)

Interesting article about a study on this effect:

Dweck’s researchers then gave all the fifth-graders a final round of tests that were engineered to be as easy as the first round. Those who had been praised for their effort significantly improved on their first score—by about 30 percent. Those who’d been told they were smart did worse than they had at the very beginning—by about 20 percent.

Dweck had suspected that praise could backfire, but even she was surprised by the magnitude of the effect. “Emphasizing effort gives a child a variable that they can control,” she explains. “They come to see themselves as in control of their success. Emphasizing natural intelligence takes it out of the child’s control, and it provides no good recipe for responding to a failure.”

Replies from: TheOtherDave, undermind
comment by TheOtherDave · 2012-04-02T16:23:00.543Z · LW(p) · GW(p)

This seems like a more complicated explanation than the data supports. It seems simpler, and equally justified, to say that praising effort leads to more effort, which is a good thing on tasks where more effort yields greater success.

I would be interested to see a variation on this study where the second-round problems were engineered to require breaking of established first-round mental sets in order to solve them. What effect does praising effort after the first round have in this case?

Perhaps it leads to more effort, which may be counterproductive for those sorts of problems, and thereby lead to less success than emphasizing intelligence. Or, perhaps not. I'm not making a confident prediction here, but I'd consider a praising-effort-yields-greater-success result more surprising (and thus more informative) in that scenario than the original one.

Replies from: Spurlock, matt
comment by Spurlock · 2012-04-02T17:07:46.264Z · LW(p) · GW(p)

I agree that the data doesn't really distinguish this explanation from the effect John Maxwell described; mainly I just linked it because the circumstances seemed reminiscent and I thought he might find it interesting. It's worth noting, though, that these aren't competing explanations: your interpretation focuses on explaining the success of the "effort" group, and the other focuses on the failure of the "intelligence" group.

To help decide which hypothesis accounts for most of the difference, there should really have been a control group that was just told "well done" or something. Whichever group diverged the most from the control, that group would be the one where the choice of praise had the greatest effect.

comment by matt · 2012-04-13T00:25:29.053Z · LW(p) · GW(p)

I think the universe is not usually engineered to perversely punish effort. Extra effort may sometimes be counterproductive… but I think most people I know fail more often from too little effort than from too much. "Use the Try Harder, Luke" is usually good advice.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-13T00:33:07.860Z · LW(p) · GW(p)

I agree, so if you intended this as a counterpoint, it seems to follow that I have inconsistent beliefs. If so, can you expand?

Replies from: matt
comment by matt · 2012-04-13T03:29:37.823Z · LW(p) · GW(p)

I'm inferring more than you said, which isn't making it easy for anyone to understand me. Sorry about that.

If you think your comment discusses an edge case, and that it's a good general practice to praise/reward effort rather than intelligence, then we are in agreement and this conversation should probably end. If you think it's a good general practice to spend the cognitive effort required to scan the world for situations where each type of praise/reward would most help… then I think we're disagreeing.

Long comment following - summary at bottom.

Dweck's work struck a strong chord with me. I was an intelligent kid often praised for my intelligence, and often very scared that I would be discovered not to be as intelligent as everyone seemed to think I was (because the world was full of stuff that I wasn't immediately good at). I therefore avoided many pursuits that I thought would lead others to discover their previous overestimate of my innate, fixed intelligence. I think there are many children and adults who live in that place (I think, for example, that there is a lot of evidence in Eliezer's writing that he has a fixed conception of intelligence, e.g. http://lesswrong.com/lw/bdo/rationality_quotes_april_2012/68n2). I also think that praise of my intelligence in my youth had a strong influence on my forming that model (fixed intelligence, where not being good at something immediately is evidence that you're not as clever as they thought).

After reading Dweck's work I've tried hard to alter my model of the universe. Innate intelligence obviously varies between individuals… but that's not very helpful or important to me, and spending time thinking about it doesn't help me much. As an individual with whatever innate capacity I have I benefit much more by considering the very significant impact my efforts have on what I can understand and what I can achieve. Anyone I meet who praises me for my (innate, fixed) intelligence undermines my efforts to focus on what I can change, so hurts my efforts at self improvement. Anyone who praises me for something I can change (effort, technique, practice, diligence, etc.) helps me to become a better person.

I think this is particularly important with children - watching someone praise a child for a fixed trait now causes me to flinch as if that child had just been slapped.

Summary:
I think it likely that there exist edge cases where praising intelligence will boost performance on some particular following task, but I think that in nearly all cases the person thus praised will suffer over the longer term, given how much more frequent the tasks are that such praise hurts. I think that most people in most cases will benefit more from Dweck-style praise of effort (more precisely, of any trait they can control), and that this is more true over longer timeframes.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-13T05:14:53.827Z · LW(p) · GW(p)

Well, what my comment discusses is a potential direction of research, and makes some predictions about the results of that, and isn't really about application at all.

As far as application goes, I agree that it's a good general practice to praise/reward effort rather than intelligence. Also to reward effort rather than strength, dexterity, attractiveness, and various other attributes.

More generally, I think it's a good practice to reward behaviors rather than attributes. Rewarding behaviors gets me more of those behaviors. Rewarding attributes gets me nothing predictable.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-04-13T07:01:51.431Z · LW(p) · GW(p)

There's something to be said for rewarding results instead of effort to teach people to make sure they are actually trying rather than trying to try.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-13T13:43:01.956Z · LW(p) · GW(p)

Better results than fixed attributes, certainly. No objection to rewarding results as well. My primary concern with rewarding results instead is that it seems to create the incentive to only tackle problems I'm confident I can succeed at.

comment by undermind · 2012-04-03T13:50:27.037Z · LW(p) · GW(p)

I've seen this study cited a lot; it's extremely relevant to smart self- and other-improvement. But there are various possible interpretations of the results, besides what the authors came up with... Also, how much has this study been replicated?

I'd like to see a top-level post about it.

comment by gwern · 2012-04-04T00:53:26.998Z · LW(p) · GW(p)

Dupe

comment by Stephanie_Cunnane · 2012-04-05T04:09:46.906Z · LW(p) · GW(p)

I believe I am accurate in saying that educators too are interested in learnings which make a difference. Simple knowledge of facts has its value. To know who won the battle of Poltava, or when the umpteenth opus of Mozart was first performed, may win $64,000 or some other sum for the possessor of this information, but I believe educators in general are a little embarrassed by the assumption that the acquisition of such knowledge constitutes education. Speaking of this reminds me of a forceful statement made by a professor of agronomy in my freshman year in college. Whatever knowledge I gained in his course has departed completely, but I remember how, with World War I as his background, he was comparing factual knowledge with ammunition. He wound up his little discourse with the exhortation, "Don't be a damned ammunition wagon; be a rifle!"

-Carl Rogers, On Becoming a Person: A Therapist's View of Psychotherapy (1961)

comment by FiftyTwo · 2012-04-03T21:31:53.045Z · LW(p) · GW(p)

I know a lot of scientists as well as laymen are scornful of philosophy - perhaps understandably so. Reading academic philosophy journals often makes my heart sink too. But without exception, we all share philosophical background assumptions and presuppositions. The penalty of not doing philosophy isn't to transcend it, but simply to give bad philosophical arguments a free pass.

David Pearce

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-04-04T09:57:59.660Z · LW(p) · GW(p)

This is analogous to my main worry as someone who considers himself a part of the anti-metaphysical tradition (like Hume, the Logical Positivists, and to an extent Less Wrongers): what if by avoiding metaphysics I am simply doing bad metaphysics?

Replies from: VKS
comment by VKS · 2012-04-04T10:43:59.555Z · LW(p) · GW(p)

As an experiment, replace 'metaphysics' and 'metaphysical' with 'theology' and 'theological' or 'spirituality' and 'spiritual'. Then the confusion is obvious.

Unless I don't understand what you mean by metaphysics, and just have all those terms bunched up in my head for no reason, which is also possible.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-04-04T10:56:51.015Z · LW(p) · GW(p)

Yes. There is a difference between speaking imprecisely because we don't know (yet) how to express it better, and speaking things unrelated to reality. The former is worth doing, because a good approximation can be better than nothing, and it can help us to avoid worse approximations.

Replies from: VKS
comment by VKS · 2012-04-04T12:11:38.101Z · LW(p) · GW(p)

Well, but what is it that is meant by metaphysics? I've heard the word many times, seen its use, and I still don't know what I'm supposed to do with it.

Ok, so now I've read the Wikipedia article, and now I'm unconvinced that when people use the term they mean what it says they mean. I know at least some people who definitely used "metaphysical" in the sense of "spiritual". What do you mean by metaphysics?

Also unconvinced that it has any reason to be thought of as a single subject. I get the impression that the only reason these topics are together is that they feel "big".

But I will grant you that given Wiki's definition of metaphysics, there is no reason to think that it is in principle incapable of providing useful works. I revise my position to state that arguments should not be dismissed because they are metaphysical, but rather because they are bad. Furthermore, I suspect that "metaphysics" is just a bad category, and should, as much as possible, be expunged from one's thinking.

Replies from: Jayson_Virissimo, J_Taylor, Viliam_Bur, thomblake
comment by Jayson_Virissimo · 2012-04-04T14:06:45.350Z · LW(p) · GW(p)

We may be moving too fast when we expunge metaphysics from our web-of-belief. Say you believe that all beliefs should pay rent in anticipated experiences. What experiences do you anticipate only because you hold this belief? If there aren't any, then this seems awfully like a metaphysical belief. In other words, it might not be feasible to avoid metaphysics completely. Even if my specific example fails, the metaphysicians claim to have some that succeed. Studying metaphysics has been on my to-do list for a long time (if only to be secure in my belief that we don't need to bother with it), but for some reason I never actually do it.

Replies from: Will_Newsome, fubarobfusco, Incorrect, Viliam_Bur, VKS
comment by Will_Newsome · 2012-04-04T20:01:48.703Z · LW(p) · GW(p)

(LessWrong implicitly assumes certain metaphysics pretty often, e.g. when they talk about "simulation", "measure", "reality fluid", and so on; it seems to me that "anthropics" is a place where experience meets metaphysics. My preferred metaphysic for anthropics comes from decision theory, and my intuitions about decision theory come to a small extent from theological metaphysics and to a larger extent from theoretical computer science, e.g. algorithmic probability theory, which I figured is a metaphysic for the same reason that monadology is a metaphysic. ISTM that even if metaphysics aren't as fundamental as they pretend to be, they're still useful and perhaps necessary for organizing our experiences and intuitions so as to predict/understand prospective/counterfactual experiences in highly unusual circumstances (e.g. simulations).)

Replies from: J_Taylor, VKS
comment by J_Taylor · 2012-04-04T21:11:32.972Z · LW(p) · GW(p)

When some Lesswrong-users use 'metaphysics', they mean other people's metaphysics. This is much like how some Christians use the term 'religion'.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-04T21:31:33.563Z · LW(p) · GW(p)

Hm... one rationale for such a designation might be: "A 'metaphysic' is a model that is at least one level of abstraction/generalization higher than my most abstract/general model; people who use different models than me seem to have higher-level models than I deem justified given their limited evidence; thus those higher-level models are metaphysical." Or something? I should think about this more.

Replies from: J_Taylor
comment by J_Taylor · 2012-04-04T21:49:03.739Z · LW(p) · GW(p)

Your theory is much nicer than mine. Mine essentially amounts to people believing "I understand reality, your beliefs are scientifically justified, he endorses metaphysical hogwash." Further, at least since the days of the Vienna Circle, some scientifically-minded individuals have used 'metaphysics' as a slur. (I mean, at least some of the Logical Positivists seriously claimed that metaphysical terms were nonsense, that is, having neither truth-value nor meaning.)

I have read Yudkowsky discuss matters of qualia and free will. This site contains metaphysics, straight up. I assume that anyone who dismisses metaphysics is either dismissing folk-usage of the term or is taking too much pride in their models of reality (the latter part does somewhat match your stipulative explanation).

(Oh, I'm not sure if your joke was intentional, but I still think it is funny that some possible humans would reject metaphysics for being 'models' which are too 'abstract', 'of higher-level', and not 'justified' given the current 'evidence'.)

Replies from: TheOtherDave, Will_Newsome
comment by TheOtherDave · 2012-04-04T22:52:10.483Z · LW(p) · GW(p)

Agreed that Will's theory is nicer than yours. That said, with emphasis on "some," I think yours is true. Although the Christians I know are far more likely to use "religion" to refer to Christianity. (Still more so are the Catholics I know inclined to use "religion" to refer to Catholicism.)

Replies from: J_Taylor
comment by J_Taylor · 2012-04-04T23:47:24.033Z · LW(p) · GW(p)

I was just referring to some Protestants who will share such statements as "Christianity isn't a religion, it's a relationship" or "I hate religion too. That's why I believe in Jesus." Of course, most Protestants do not do this.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-04T23:50:36.739Z · LW(p) · GW(p)

Ah, I see. The Christians I know are more prone to statements like "Religion is important, because it teaches people about the importance of Jesus' love."

comment by Will_Newsome · 2012-04-10T21:35:45.720Z · LW(p) · GW(p)

Just came across a comment by Deogolwulf in response to a comment on one of Mencius Moldbug's posts:

“I would say that all things are ultimately reduceable to quarks”

Say it if you like, but if you are to be rationally coherent, you must believe that your proposition also reduces to quarks, and therefore, given that you believe that quarks are intrinsically meaningless, and given that “nothing enters in at a different level”, you must hold your own proposition to be fundamentally meaningless, and therefore not actually a proposition at all, and therefore that your non-propositional emittance is fundamentally without truth. But why then assert it? Do you ever consider that you are just trying irrationally to put yourself at the furthest remove from your former beliefs? Perhaps that is the source of your aversion to metaphysical philosophy which no rational-thinking animal can ever avoid even should he perversely wish it — any rational consideration of your own beliefs might reveal their nonsensical nature to you.

“It may be impractical to think at the quark-level, but that is the actual level reality operates on and nothing enters in at a different level.”

Could you think of any way to test or affirm this strange belief of yours even empirically-scientifically, let alone quarkly? Of course not, nor could there be any such way. Besides, it seems that, according to your own hazy brand of positivism-cum-physicalism, “levels of reality” are not ontologically objective, let alone empirically-scientifically knowable as such, and thus, by your own lights, it is meaningless to speak of them. But perhaps, after all, you do believe that the levels of reality of which you speak are ontologically objective, or that quarks have intrinsic meaning, in which case, slipping from your positivism, perhaps you would have some philosophical defence of these ideas, along with some defence of the bold equation of reality with physicality. But, once again, you would have to enter the metaphysical-philosophical realm which you yourself claim to be rubbish, and why enter it if you believe it to be so — or is all this just pseudo-scientific and scientistic posing?

Oh, snap!

Replies from: TheOtherDave, J_Taylor, Eugine_Nier
comment by TheOtherDave · 2012-04-10T23:09:22.027Z · LW(p) · GW(p)

I couldn't find the original on a quick Google, but:

The Master was speaking in the public square about the illusory nature of reality, when a bull got away from his handler and charged the crowd. The crowd scattered in fear, all but a young child who had been learning at the master's feet, who had absorbed some of his wisdom and was therefore unafraid.

Afterwards, the Master approached the child's trampled body and, saddened, asked "Why did you not run?"
The child replied, with difficulty, "But Master, had you not just been teaching us that the bull was just an illusion? What should I have to fear from an illusion?"
"Yes, child," he replied. "The bull is an illusion. But so are you."

At that moment, the child died.

Which is to say, believing that something can be entirely explained in terms of something else doesn't absolve me from the need to deal with it. Even if I and the bull and my preference to remain alive can all be entirely captured by the sufficiently precise specification of a set of quarks, it doesn't follow that there exists no such person, no such bull, or no such preference.

Replies from: Will_Newsome, Bugmaster
comment by Will_Newsome · 2012-04-10T23:19:31.450Z · LW(p) · GW(p)

The argument was a meta-level undermining argument supporting the necessity of metaphysical reasoning (of the exact sort that you're engaging in in your comment); it wasn't an argument about the merits of reductionism. That would likely have been clearer had I included more context; my apologies.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-11T00:22:49.628Z · LW(p) · GW(p)

(nods) Context is often useful, agreed.

Also, metaphysical reasoning is often necessary, agreed.

Sadly, I often find it necessary in response to metaphysical reasoning introduced to situations without a clear sense of what it's achieving and whether that end can be achieved without it.
In this sense it's rather like lawyers.

Not that I'm advocating eliminating all the lawyers, not even a little.
Lawyers are useful.
They're even useful for things other than defending oneself from other lawyers.

But I've also seen situations made worse because one party brought in a lawyer without a clear understanding of the costs and benefits of involving lawyers in that situation.

I suspect that a clear understanding of the costs and benefits of metaphysical reasoning is equally useful.

comment by Bugmaster · 2012-04-10T23:32:15.661Z · LW(p) · GW(p)

Where is that quote from, out of curiosity ?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-11T00:03:00.184Z · LW(p) · GW(p)

If I could remember that, I probably could have found it on Google in the first place.

Replies from: Bugmaster
comment by Bugmaster · 2012-04-11T00:23:08.081Z · LW(p) · GW(p)

...fair enough. I tried looking on Google, and couldn't find it either. Perhaps your quote is original enough for you to claim authorship :-/

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-11T00:30:37.944Z · LW(p) · GW(p)

Perhaps? I'm fairly sure I read it somewhere, but my memory is unreliable.

comment by J_Taylor · 2012-04-12T03:08:25.152Z · LW(p) · GW(p)

Deogolwulf is the sort of fellow who uses 'proposition' while obviously meaning 'statement'. Also, some of the first paragraph is pure unreflective sophistry. Still, the second half:

Could you think of any way to test or affirm this strange belief of yours even empirically-scientifically, let alone quarkly? Of course not, nor could there be any such way.

Following this epistemic attack, I am imagining Deogolwulf holding up a mirror to TGGP's face and stating "No, TGGP, you are the metaphysics."

comment by Eugine_Nier · 2012-04-11T04:03:42.350Z · LW(p) · GW(p)

I think part of the problem is different senses of the word "reduce". Consider the following statements:

1) All things ultimately reduce to quarks (nitpick: and leptons)

2) Quarks and leptons ultimately reduce to quantum wave functions.

3) Quantum wave functions ultimately reduce to mathematics.

4) All mathematics ultimately reduces to the ZFC axioms.

Notice that all these statements are true (I'm not quite sure about the first one) for slightly different values of "reduces".

comment by VKS · 2012-04-04T20:21:46.961Z · LW(p) · GW(p)

What?

Replies from: J_Taylor
comment by J_Taylor · 2012-04-04T21:35:54.627Z · LW(p) · GW(p)

When someone on Lesswrong uses the term 'simulation', they are probably making some implicit metaphysical claims about what it means for some object(A) to be a simulation of some other object(B). (This particular subject often falls under the part of metaphysics known as ontology.)

The same applies to usage of most terms.

Replies from: VKS
comment by VKS · 2012-04-04T22:23:12.936Z · LW(p) · GW(p)

Correct me if I'm wrong, but "They are probably making some implicit metaphysical claims about what it means for some object(A) to be a simulation of some other object(B)." and "They are probably making some implicit claims about what it means for some object(A) to be a simulation of some other object(B)" mean exactly the same thing.

Replies from: J_Taylor, TheOtherDave
comment by J_Taylor · 2012-04-04T22:57:44.349Z · LW(p) · GW(p)

They do happen to mean the same thing. This is because the question "What does it mean for some y to be an x?" is a metaphysical question.

"They are probably making some aesthetic claim about why object(A) is more beautiful than object(B)" and "They are probably making some claim about why object(A) is more beautiful than object(B)" also mean the same thing.

comment by TheOtherDave · 2012-04-04T22:29:31.300Z · LW(p) · GW(p)

Come to that, they both probably mean the same thing as "They are probably making some implicit claims about how some object(B) differs from some other object (A) it simulates," which eliminates the reference to meaning as well.

comment by fubarobfusco · 2012-04-04T15:56:09.305Z · LW(p) · GW(p)

Say you believe that all beliefs should pay rent in anticipated experiences. What experiences do you anticipate only because you hold this belief?

Well, that's a "should" statement, so we cash it out in terms of desirable outcomes, e.g.:

  • People who spend more time elaborating on their non-anticipatory beliefs will not get as much benefit from doing so as people who spend more time updating anticipatory beliefs.
  • If two people (or groups, or disciplines) ostensibly aim at the same goals, and deploy similar amounts of resources and effort; but one focuses its efforts with anticipation-controlling beliefs while the other relies on non-anticipation-controlling beliefs, then the former will achieve the goals more than the latter. (Examples could be found in charities with the goal of saving lives; or in martial arts schools with the goal of winning fights.)
comment by Incorrect · 2012-04-04T20:07:51.293Z · LW(p) · GW(p)

Where Recursive Justification Hits Bottom - EY

Can you give any examples of modern metaphysics being useful?

Replies from: thomblake
comment by thomblake · 2012-04-04T22:22:30.756Z · LW(p) · GW(p)

Ontology begat early AI, which begat object-oriented programming.

comment by Viliam_Bur · 2012-04-04T14:21:49.096Z · LW(p) · GW(p)

Say you believe that all beliefs should pay rent in anticipated experiences. What experiences do you anticipate only because you hold this belief?

I anticipate experiencing more efficient thinking, because I will have to remember less and think about fewer topics, while achieving the same results.

Studying metaphysics has been on my to-do list for a long time (if only to be secure in my belief that we don't need to bother with it), but for some reason I never actually do it.

What do you anticipate to experience after studying metaphysics (besides being able to signal deep wisdom)?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-04T20:59:01.688Z · LW(p) · GW(p)

What do you anticipate to experience after studying metaphysics (besides being able to signal deep wisdom)?

I anticipate understanding the abstract nature of justification, thus allowing me to devise better-justified institutions. I anticipate understanding cosmology and its role in justification, thus allowing me to understand how to transcend the contingent/universal duality of justification. I anticipate understanding infinities and their actuality/non-actuality and thus what role infinities play in justification. I anticipate graving new values on new tables with the knowledge gleaned from a greater understanding of justification—I anticipate seeing what both epistemology and morality are special cases and approximations of, and I anticipate using my knowledge of that higher-level structure to create new values. And so on.

Replies from: VKS
comment by VKS · 2012-04-04T21:10:42.078Z · LW(p) · GW(p)

You might be better off studying mathematics, then.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-04T21:15:58.147Z · LW(p) · GW(p)

That too, yes. Algorithmic probability is an example of a field that is pretty mathematical and pretty metaphysical. It's the intellectual descendant of Leibniz's monadology. Computationalism is a mathematical metaphysic.

Replies from: VKS
comment by VKS · 2012-04-04T21:19:10.161Z · LW(p) · GW(p)

If you would be so kind as to try and tell me what you mean by "metaphysic", I would be much less confused.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-04T21:42:57.884Z · LW(p) · GW(p)

By "metaphysic" I mean a high-level model for phenomena or concepts that you can't immediately falsify because, though the model explains all of the phenomena you are aware of, the model is also very general. E.g., if you look at a computer processor you can say "ah, it is performing a computation", and this constrains your anticipations quite a bit; but if you look at a desk or a chair and say "ah, it is performing a computation", then you've gotten into metaphysical territory: you can abstract away the concept of computation and apply it to basically everything, but it's unclear whether or not doing so means that computation is very fundamental, or if you're just overapplying a contingent model. Sometimes when theorizing it's necessary to choose a certain metaphysic: e.g., I will say that I am an instance of a computation, and thus that a computer could make an exact simulation of me and I would exist twice as much, thus making me less surprised to find myself as me rather than someone else. Now, such a line of reasoning requires quite a few metaphysical assumptions—assumptions about the generalizability of certain models that we're not sure do or don't break down—but metaphysical speculation is the best we can do because we don't have a way of simulating people or switching conscious experience flows with other people.

That's one possible explanation of "metaphysic"/"metaphysics", but honestly I should look into the relevant metaphilosophy—it's very possible that my explanation is essentially wrong or misleading in some way.

Replies from: VKS
comment by VKS · 2012-04-04T22:20:18.068Z · LW(p) · GW(p)

Why would generality be opposed to falsifiability? Wouldn't having a model be more general lead to easier falsifiability, given that the model should apply more broadly?

In order to tell whether something is performing a computation, you try to find some way to get the object to exhibit the computation it is (allegedly) making. So, if I understand correctly, a model is metaphysical, in the things you write, if applying it to a particular phenomenon requires an interpretation step which may or may not be known to be possible. How does this differ from any other model, except that you're allowing yourself to be sloppy with it?

If you just replace "metaphysic" by "model", "metaphysical assumptions" by "assumptions about our models and their applicability", "metaphysical speculation" by "speculations based on our models", I think the things you're trying to say become clearer. If a bit less fancy-sounding.

If the thing I understood is the thing you tried to say.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-04T22:33:16.222Z · LW(p) · GW(p)

I could replace all my uses of the word "metaphysical" with "sloppily-general", I guess, but I'm not sure it has quite the right connotations, and "metaphysical" is already the standard terminology. "Metaphysical" is vague in a somewhat precise way that "sloppily-general" isn't. I appreciate the general need for down-to-earth language, but I also don't want to consent to the norm of encouraging people to take pains to write in such a way as to be understood by the greatest common factor of readers.

Replies from: VKS
comment by VKS · 2012-04-04T23:16:22.982Z · LW(p) · GW(p)

"X is a metaphysic" becomes "X is somehow a model (of something), but I'm not sure how". "Y is metaphysical" becomes "Y is about or related to a model (somehow)". I assume my understanding is correct, since you didn't correct it. "sloppily-general" is then indeed kind of far from the intended meaning, but that's just because it's a terrible coinage.

Elsewhere, somebody posted a link to the Stanford Encyclopedia of Philosophy's definition of metaphysics. They say right in the intro that they haven't found a good way to define it. The body of the Wikipedia article on metaphysics implies a different definition than its opening paragraph does. In common parlance, it's used for some vague spiritualish thing. And your definition is different from all of these. Do you think that the term could reasonably be expected to be understood the way you intended it to be?

"Metaphysical" isn't vague in a somewhat precise way. It isn't even evocative, as its convoluted etymology prevents even that. It's just vague and used by philosophers.

The greatest common factor of readers isn't even here. The point is more to be understood by readers at all. Don't make your writing more obscure than it needs to be. Hard concepts are hard enough as is, without turning the fricking idea of "somehow a model" into 3 hours' worth of discussion.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-04T23:29:05.313Z · LW(p) · GW(p)

I assume my understanding is correct, since you didn't correct it.

Sorry, I was just too lazy to correct it. Still too lazy.

Replies from: VKS
comment by VKS · 2012-04-04T23:31:45.462Z · LW(p) · GW(p)

I give up. Good night.

comment by VKS · 2012-04-04T15:49:58.837Z · LW(p) · GW(p)

Metaphysics can't even be a thing in a web of belief! It's more a box for a bunch of things, with a tag that says "Ooo". Unless you want to define it otherwise, or I'm more confused than I think I am. So the category only makes sense if you want to use it to describe your feelings for some given subject. Why would that be a good way to frame a field of study?

That's what I suspect is the problem with metaphysics: not the things in the box, which are arbitrary, but rather that the box messes up your filing system.

comment by J_Taylor · 2012-04-04T21:26:20.175Z · LW(p) · GW(p)

Metaphysics, as a category, has its constituents determined by the contingent events of history. The same could be said for the categories of philosophy and art. As such, 'metaphysics' is a convenient bucket whose constituents do not necessarily have similarities in structure. At best, I think one could say that they have a Wittgensteinian family-resemblance. However, I am only defending the academic usage of the term. (More information here.) The folk usage seems to hold that metaphysics is "somewhere between "crystal healing" and "tree hugging" in the Dewey decimal system."

Replies from: VKS
comment by VKS · 2012-04-04T21:49:57.771Z · LW(p) · GW(p)

Well that at least makes some sense. I was noticing that Wiki's definition and the definition implied by its examples were in conflict. I don't particularly see why the metaphysics bucket is convenient, though.

Is there any point in discussing metaphysics as anything other than a cultural phenomenon among philosophers?

Replies from: J_Taylor
comment by J_Taylor · 2012-04-04T22:03:05.476Z · LW(p) · GW(p)

I don't particularly see why the metaphysics bucket is convenient, though.

Unless you are a cladist, 'reptile' is a bucket which contains crocodiles, lizards, and turtles, but does not contain birds and mammals. The word is still sometimes useful for communication.

Is there any point in discussing metaphysics as anything other than a cultural phenomenon among philosophers?

It depends on your goals. I do not generally recommend it, however.

Replies from: VKS
comment by VKS · 2012-04-04T22:31:52.143Z · LW(p) · GW(p)

My claim was not about the general lack of utility of buckets. Briefly, the reptile bucket is useful because reptiles are similar to one another, and thus having a way to refer to them all is handy. There is apparently no such justification for "metaphysics", except in the sense that its contents are related by history. But this clearly isn't the use you want to make of this bucket.

Replies from: J_Taylor
comment by J_Taylor · 2012-04-04T22:50:00.579Z · LW(p) · GW(p)

The word 'similar' is often frustratingly vague. However, crocodiles and birds share a more recent common ancestor than crocodiles and turtles.

But this clearly isn't the use you want to make of this bucket.

The word is nonetheless used. I do agree with you that it is frustrating that the word's usage is historically determined.

Replies from: VKS
comment by VKS · 2012-04-04T23:28:08.489Z · LW(p) · GW(p)

Well, then the term 'reptile' is somewhat deceptive in evolutionary biology, and based more on some consensus about appearance. Fine. Whatever. The point is that the word 'metaphysics' isn't evocative in that way, or in any way except in the context of its historical usage. As such, it cannot inform us about any subject other than the phenomenon of its acceptance as a field, and it is not even a useful subject heading, being a hodgepodge. We can choose whether to continue to use it, and I don't see why we should.

Replies from: J_Taylor
comment by J_Taylor · 2012-04-04T23:42:05.881Z · LW(p) · GW(p)

Within the field of philosophy, 'metaphysics' is a fairly normal term, much like 'reptile' or 'sex' are normal terms for most people. Much of my vocabulary comes from that field and I am most comfortable using its terms. 'Metaphysics' is one of many problematic terms which are evocative to me, because I understand how these terms are used. Asking someone who studies philosophy to stop using 'metaphysics' is like asking someone who studies biology to stop using 'species'.

However, it is your prerogative to use whatever terms you prefer. I am sure that we are both trying to be pragmatic.

comment by Viliam_Bur · 2012-04-04T14:19:10.022Z · LW(p) · GW(p)

Conventional usage seems to be: speaking about deep intangible topics.

Which is a bad category, because it contains: abstract thinking + supernatural claims + complicated nonsense; especially the parts good for signalling wisdom.

comment by thomblake · 2012-04-04T22:26:23.740Z · LW(p) · GW(p)

Well, but what is it that is meant by metaphysics? I've heard the word many times, seen its use, and I still don't know what I'm supposed to do with it.

It's a bit confusing in part because of its strange etymology. Originally, "meta" was used in the sense of "after", since "metaphysics" was the unnamed book that came after "physics" in the standard ordering of Aristotle's works. Later scholars accidentally connected that to something like our current usage of "meta", and a somewhat arbitrary field was born.

comment by VKS · 2012-04-03T07:51:55.161Z · LW(p) · GW(p)

Pedantry and mastery are opposite attitudes toward rules. To apply a rule to the letter, rigidly, unquestioningly, in cases where it fits and in cases where it does not fit, is pedantry. ... To apply a rule with natural ease, with judgment, noticing the cases where it fits, and without ever letting the words of the rule obscure the purpose of the action or the opportunities of the situation, is mastery.

  • George Pólya, How to Solve It
Replies from: MixedNuts
comment by MixedNuts · 2012-04-09T11:59:30.630Z · LW(p) · GW(p)

...and that's why the rule doesn't apply to the reference class of cases I just constructed to only contain my own, Officer.

Replies from: Strange7
comment by Strange7 · 2012-04-12T18:32:42.332Z · LW(p) · GW(p)

At which point the officer will demonstrate in no uncertain terms who is the master in the current situation.

comment by dvasya · 2012-04-01T16:01:25.001Z · LW(p) · GW(p)

Our minds contain processes that enable us to solve problems we consider difficult. "Intelligence" is our name for whichever of those processes we don't yet understand.

Some people dislike this "definition" because its meaning is doomed to keep changing as we learn more about psychology. But in my view that's exactly how it ought to be, because the very concept of intelligence is like a stage magician's trick. Like the concept of "the unexplored regions of Africa," it disappears as soon as we discover it.

-- Marvin Minsky, The Society of Mind

comment by EditedToAdd · 2012-04-02T16:51:17.068Z · LW(p) · GW(p)

But, the hard part comes after you conquer the world. What kind of world are you thinking of creating?

Johan Liebert, Monster

comment by Rhwawn · 2012-04-06T19:54:30.055Z · LW(p) · GW(p)

By relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems, and, in effect, increases the mental power of the race.

Alfred North Whitehead, “An Introduction to Mathematics” (thanks to Terence Tao)

comment by MixedNuts · 2012-04-09T15:24:07.079Z · LW(p) · GW(p)

On specificity and sneaking on connotations; useful for the liberal-minded among us:

I think, with racism and sexism and 'isms' generally, there's a sort of confusion of terminology.

A "Racist1" is someone, who, like a majority of people in this society, has subconsciously internalized some negative attitudes about minority racial groups. If a Racist1 takes the Implicit Association Test, her score shows she's biased against black people, like the majority of people (of all races) who took the test. Chances are, whether you know it or not, you're a Racist1.

A "Racist2" is someone who's kind of an insensitive jerk about race. The kind of guy who calls Obama the "Food Stamp President." Someone you wouldn't want your sister dating.

A "Racist3" is a neo-Nazi. You can never be quite sure that one day he won't snap and kill someone. He's clearly a social deviant.

People use the word "Racist" for all three things, and I think that's the source of a lot of arguments. When people get accused of being racists, they evade responsibility by saying, "Hey, I'm not a Racist3!" when in fact you were only saying they were Racist1 or Racist2. But some of the responsibility is on the accusers too -- if you say "That Republican's a racist" with the implication of "a jerk" and then backtrack and change the meaning to "vulnerable to unconscious bias", then you're arguing in bad faith. Never mind that some laws and rules which were meant to protect people from Racist3's are in fact deployed against Racist2's.

-celandine13

Replies from: Vladimir_M, cousin_it, CaveJohnson, TheOtherDave, BillyOblivion, Oscar_Cunningham, Eugine_Nier, Crouching_Badger, army1987
comment by Vladimir_M · 2012-04-24T19:30:01.248Z · LW(p) · GW(p)

How about:

  1. Someone who, following an honest best effort to evaluate the available evidence, concludes that some of the beliefs that nowadays fall under the standard definition of "racist" nevertheless may be true with probabilities significantly above zero.

  2. Someone who performs Bayesian inference that somehow involves probabilities conditioned on the race of a person or a group of people, and whose conclusion happens to reflect negatively on this person or group in some way. (Or, alternatively, someone who doesn't believe that making such inferences is grossly immoral as a matter of principle.)

Both (1) and (2) fall squarely under the common usage of the term "racist," and yet I don't see how they would fit into the above cited classification.

Of course, some people would presumably argue that all beliefs in category (1) are in fact conclusively proven to be false with p~1, so it can be only a matter of incorrect conclusions motivated by the above listed categories of racism. Presumably they would also claim that, as a well-established general principle, no correct inferences in category (2) are ever possible. But do you really believe this?

Replies from: army1987, army1987
comment by A1987dM (army1987) · 2012-04-25T09:02:49.255Z · LW(p) · GW(p)

That (1) only makes sense if there is a “standard” definition of racist (and it's based on what people believe rather than/as well as what they do). The point of celandine13's comment was indeed that there's no such thing.

comment by A1987dM (army1987) · 2012-04-25T00:37:47.906Z · LW(p) · GW(p)

Someone who performs Bayesian inference that somehow involves probabilities conditioned on the race of a person or a group of people

The evidence someone's race constitutes about that person's qualities is usually very easily screened off, as I mentioned here. And given that we're running on corrupted hardware, I suspect that someone who does try to “perform Bayesian inference that somehow involves probabilities conditioned on the race of a person” ends up subconsciously double-counting evidence and therefore ends up with less accurate results than somebody who doesn't. (As for cases when the evidence from race is not so easy to screen off... well, I've never heard anybody being accused of racism for pointing out that Africans have longer penises than Asians.)

Replies from: Vaniver, Eugine_Nier, Eugine_Nier, JoshuaZ
comment by Vaniver · 2012-04-26T04:18:09.669Z · LW(p) · GW(p)

well, I've never heard anybody being accused of racism for pointing out that Africans have longer penises than Asians.

I have seen accusations of racism in response to people pointing that out.

comment by Eugine_Nier · 2012-04-26T04:09:55.028Z · LW(p) · GW(p)

Also, according to the U.S. Supreme Court, even if race is screened off, your actions can still be racist or something.

comment by Eugine_Nier · 2012-04-25T07:59:51.501Z · LW(p) · GW(p)

The evidence someone's race constitutes about that person's qualities is usually very easily screened off, as I mentioned here.

In real life, you don't have the luxury of gathering forensic evidence on everyone you meet.

Replies from: army1987
comment by A1987dM (army1987) · 2012-04-25T08:55:05.815Z · LW(p) · GW(p)

I'm not talking about forensic evidence. Even if white people are smarter on average than black people, I think just talking with somebody for ten minutes would give me evidence about their intelligence which would nearly completely screen off that from skin colour. Heck, even just knowing what their job is would screen off much of it.

Replies from: Eugine_Nier, Vaniver
comment by Eugine_Nier · 2012-04-26T04:07:13.757Z · LW(p) · GW(p)

Even if white people are smarter on average than black people, I think just talking with somebody for ten minutes would give me evidence about their intelligence which would nearly completely screen off that from skin colour.

Also, as Eric Raymond discusses here, especially in the comments, you sometimes need to make judgements without spending ten minutes talking to everyone you see.

Heck, even just knowing what their job is would screen off much of it.

There's this thing called Affirmative Action, as I mentioned elsewhere in this thread.

Replies from: Multiheaded, army1987
comment by Multiheaded · 2012-05-07T14:23:36.896Z · LW(p) · GW(p)

Also, as Eric Raymond discusses here, especially in the comments, you sometimes need to make judgements without spending ten minutes talking to everyone you see.

...

I do not require any “moral justification” for acting on the truth as it really is; truth is its own warrant. (A comment by him).

I facepalmed. Really, Eric? Sorry, I don't think that a moral realist is perceptive enough about the nuances and ethical knots involved to be a judge on this issue. I don't know, he might be an excellent scientist, but it's extremely stupid to be so rash when you're attempting serious contrarianism.

But you reveal a confusion in your own thinking. It is not “treating other human beings as less-than-equal” to make rational decisions in risk situations; it is only that if you make decisions which are irrationally biased.

Yep, let's all try to overcome bias really really hard; there's only one solution, one desirable state, there's a straight road ahead of us; Kingdom of Rationality, here we come!

(Yvain, thank you a million times for that sobering post!)

comment by A1987dM (army1987) · 2012-05-07T13:54:21.530Z · LW(p) · GW(p)

Also, as Eric Raymond discusses here, especially in the comments, you sometimes need to make judgements without spending ten minutes talking to everyone you see.

You know, there are countries where the intentional homicide rate is smaller than in John Derbyshire's country by nearly an order of magnitude.

Heck, even just knowing what their job is would screen off much of it.

There's this thing called Affirmative Action, as I mentioned elsewhere in this thread.

That thing doesn't exist in all countries. Plus, I think the reason why you don't see that many two-digit-IQ people among (say) physics professors is not that they don't make it, it's that they don't even consider doing that, so even if some governmental policy somehow made it easier for black people with an IQ of 90 to succeed than for Jewish people with the same IQ, I would still expect a black physics professor to be smarter than (say) a Jewish truck driver.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-05-08T07:11:45.892Z · LW(p) · GW(p)

so even if some governmental policy somehow made it easier for black people with an IQ of 90 to succeed than for Jewish people with the same IQ, I would still expect a black physics professor to be smarter than (say) a Jewish truck driver.

That's not the point. The point is that the black physics professor is less smart than the Jewish physics professor.

Replies from: army1987
comment by A1987dM (army1987) · 2012-05-08T07:59:31.294Z · LW(p) · GW(p)

But the difference is smaller than for the median black person and the median Jewish person. (I said "even just knowing what their job is would screen off much of it", not "all of it".)

Replies from: private_messaging
comment by private_messaging · 2012-05-08T08:15:45.819Z · LW(p) · GW(p)

The bell curve has both a mean and a standard deviation: you can have a 'race' with a lower mean and a larger standard deviation. If you then filter by some reliable accomplishment, such as solving a problem that the smartest people in the world attempted and failed, you may end up in a situation where the population with the lower mean and larger standard deviation has fewer people who attain this, but those who do are on average smarter. Set the bar even higher, and the population with the lower mean and larger standard deviation has more people attaining it. Also, the Gaussian distribution can stop being a good approximation very far from the mean.

edit: and to reply to the grand-grandparent: I bet I can divide the world into a category that includes you, and a category that does not include you, in such a way that the category including you has a substantially higher crime rate, or is otherwise bad. Actually, if you are from the US, I have a pretty natural 'cultural' category where your murder rate is about 5-10x of normal for such average income. Another category is the 'racists', i.e. the people who use skin colour as evidence. Those people also behave substantially badly. You of course want to use skin colour as evidence, and don't want me to use your qualities as evidence. See if I care. If you want to use skin colour as evidence, lumping together everyone who's black, I want to use 'use of skin colour as evidence', lumping you together with all the nasty racists.

Replies from: army1987
comment by A1987dM (army1987) · 2012-05-08T09:19:07.209Z · LW(p) · GW(p)

The bell curve has both the mean and the deviation, you can have a 'race' with lower mean and larger standard deviation

IIRC, no substantial difference was found in the standard deviations among races. (Whereas for genders, they have the same mean but males have larger sigma, so there are both more male idiots than female idiots and more male geniuses than female geniuses.)

Also, the Gaussian distribution can stop being good approximation very far away from the mean.

Isn't IQ defined to be a Gaussian (e.g. IQ 160 just means ‘99.99683rd percentile among people your age’), rather than ‘whatever IQ tests measure’? If so, a better statement of that phenomenon would be “IQ tests are inaccurate for extreme values.”
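The percentile in that parenthetical can be checked with the standard library alone, assuming the usual mean-100, sd-15 convention:

```python
import math

def Phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# On a mean-100, sd-15 scale, IQ 160 sits z = (160 - 100) / 15 = 4
# standard deviations above the mean:
percentile = 100 * Phi(4.0)
print(round(percentile, 5))  # → 99.99683
```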

See if I care. If you want to use the skin colour as evidence, lumping together everyone that's black, I want to use 'use of skin colour as evidence', lumping you together with all the nasty racists.

I want to use ‘use of “use of skin colour as evidence” as evidence’ as evidence, but I'm not sure what that's evidence for. :-)

Replies from: private_messaging, private_messaging
comment by private_messaging · 2012-05-08T14:13:02.065Z · LW(p) · GW(p)

IIRC, no substantial difference was found in the standard deviations among races

Even a small difference translates into an enormous ratio between the numbers of people several standard deviations from the mean...

Isn't IQ defined to be a Gaussian

Yes, and it is defined to have a specific standard deviation as well. That definition makes it an unsuitable measure. The Gaussian distribution also arises from a sum of multiple independent variables. The statement was about intelligence, though, which is a different thing from both "what IQ tests measure" and "how IQ is defined".

Another huge failing of IQ tests is that they don't measure the ability to build and use a huge searchable database of methods and facts. Building such a database is a long-term memory task and cannot be tested in a short time span; existing knowledge can't be tested without massive influence from the subject's background. Likewise, IQ tests lack any problems that are actually difficult enough that some people would know solution methods before the test and some wouldn't.

Effectively, IQ tests do not test for heavily parallel processing capability.

For example, I do believe that it would be possible to build 'superhuman AI' that runs on a cellphone and aces IQ tests, and could perhaps deceive a human in brief conversation. The same AI would never be able to invent a stone axe from scratch, let alone anything more complicated; it'd be nothing but a glorified calculator.

I want to use ‘use of “use of skin colour as evidence” as evidence’ as evidence, but I'm not sure what that's evidence for. :-)

Well, the people who use skin colour as evidence, I would guess, are on average less well behaved than the rest of society... so you can use it to guess someone's criminality or other untrustworthiness.

Replies from: army1987, army1987, Vaniver
comment by A1987dM (army1987) · 2012-05-11T09:55:23.948Z · LW(p) · GW(p)

Likewise, the IQ test lacks any problems that are actually difficult enough to have some solution methods that some people would know before the test, and some won't.

Indeed, when I last took a few IQ tests I felt like I was being tested more for familiarity with concepts such as exclusive OR, cyclical permutations, and similar basic discrete maths stuff than for processing power. (Of course, it does take insight to realize that such concepts are relevant to the questions, and processing power to figure out the answer within the time frame of the test, but I think that if I had never heard about XOR or used Sarrus' rule I would have scored much worse.)

ETA: This is also why I suspect that the correlations between race and IQ aren't entirely genetic. If Einstein's twin brother had grown up in a very poor region with no education...

comment by A1987dM (army1987) · 2012-05-09T17:23:12.914Z · LW(p) · GW(p)

Even a small difference translates into enormous ratio between numbers of people, several standard deviations from the mean...

A distribution with mean 100 and st. dev. 14 will exceed one with mean 90 and st. dev. 16 for all x between about 93 and about 170, and there aren't that many people with IQs over 170 anyway.
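A quick stdlib-only check of those crossover points, scanning the two densities on a fine grid (the specific means and standard deviations are the ones hypothesized above, not measured values):

```python
import math

def npdf(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Scan for the points where the N(100, 14) and N(90, 16) densities cross.
crossings = []
xs = [x / 10 for x in range(600, 2200)]  # IQ 60.0 to 219.9 in steps of 0.1
prev = npdf(xs[0], 100, 14) - npdf(xs[0], 90, 16)
for x in xs[1:]:
    cur = npdf(x, 100, 14) - npdf(x, 90, 16)
    if prev * cur < 0:
        crossings.append(x)
    prev = cur

print(crossings)  # two crossings, near 92.5 and 172.9
```

So the narrower, higher-mean distribution dominates between roughly 93 and 173, matching the claim.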

Replies from: private_messaging
comment by private_messaging · 2012-05-09T18:46:35.451Z · LW(p) · GW(p)

But can we detect such a tiny difference as that between a standard deviation of 14 and one of 16, after we control for the really many factors that differ between the groups in question?

Also, that was my point: at the level of very high (one-in-a-million) intelligence, i.e. actual geniuses, the people you'd call geniuses without having to detect them using some test. I have a pet hypothesis about the last biological change which caused our technological progress: a little mixing with Neanderthals, raising the standard deviation somewhat.

IQ tests, I think, get useless past some point: when the IQ-test savants who solve them at such a level (but can't learn very well, for example, or can't do well on problems that require more parallel processing) start to outnumber geniuses.

comment by Vaniver · 2012-05-08T14:33:43.701Z · LW(p) · GW(p)

Well, the people who use skin colour as evidence, I would guess, are on average less well behaved than the rest of society... so you can use it to guess someone's criminality or other untrustworthiness.

What sort of effect size do you expect here? Why?

Replies from: private_messaging
comment by private_messaging · 2012-05-08T15:09:45.713Z · LW(p) · GW(p)

You have the neonazis among those who use skin colour as evidence of criminality, but not among those who don't. I don't know of other differences that were demonstrated; my expectation for other effects is zero. I should expect the overall effect to be on the order of at least the proportion of race-motivated violence to overall violence; my expectation is somewhat higher than this, though, because I would guess that the near-neonazis are likewise more violent, including within-race crime.

comment by private_messaging · 2012-05-08T15:23:43.578Z · LW(p) · GW(p)

I want to use ‘use of “use of skin colour as evidence” as evidence’ as evidence, but I'm not sure what that's evidence for. :-)

Doh, missed the extra nesting. I doubt it'll be evidence for much... both neonazis and liberal types use that as evidence, the former as evidence of ingroup-ness and the latter as evidence of badness, so I don't see what it would discriminate between.

Replies from: army1987
comment by A1987dM (army1987) · 2012-05-08T17:04:06.632Z · LW(p) · GW(p)

I can't remember whether I read this from someone else or came up with it on my own, but when people ask “do you oppose homosexual marriage” in questionnaires to find out political orientations, people answering “yes” will include both those who oppose homosexual marriage but are OK with heterosexual marriage, and those who oppose all marriage, and those groups are very different clusters in political space (paleo-conservatives the former, radical anarchists the latter). (Of course, the latter group is so much smaller than the former that if you're doing statistics with large numbers of people this shouldn't be such an issue.)

comment by Vaniver · 2012-04-26T04:19:45.836Z · LW(p) · GW(p)

Even if white people are smarter on average than black people, I think just talking with somebody for ten minutes would give me evidence about their intelligence which would nearly completely screen off that from skin colour.

What if verbal ability and quantitative ability are often decoupled?

Replies from: army1987
comment by A1987dM (army1987) · 2012-05-07T13:43:31.834Z · LW(p) · GW(p)

I wasn't talking about "verbal ability" (which, to the extent that can be found out in ten minutes, correlates more with where someone grew up than with IQ), but about what they say, e.g. their reaction to finding out that I'm a physics student (though for this particular example there are lots of confounding factors), or what kinds of activities they enjoy.

Replies from: Vaniver
comment by Vaniver · 2012-05-07T17:26:02.687Z · LW(p) · GW(p)

If you're able to drive the conversation like that, you can get information about IQ, and that information may have a larger impact than race. But to "screen off" evidence means making that evidence conditionally independent: once you know their level of interest in physics, race would give you no information about their IQ. That isn't the case.

Imagine that all races have Gaussian IQ distributions with the same standard deviation, but different means, and consider just the population of people whose IQs are above 132 ('geniuses' for this comment). In such a model, the mean IQ of black geniuses will be smaller than the mean IQ of white geniuses which will be smaller than the mean IQ of Jewish geniuses- so even knowing a lower bound for IQ won't screen off the evidence provided by race!
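That truncation effect is easy to check numerically. A stdlib-only sketch with made-up group means (same sd of 15, cutoff 132), using the standard formula for the mean of a lower-truncated Gaussian:

```python
import math

def phi(z):
    """Standard normal pdf."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):
    """Standard normal cdf, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def mean_above(mu, sigma, cutoff):
    """Mean of N(mu, sigma^2) conditioned on exceeding `cutoff`:
    mu + sigma * phi(a) / (1 - Phi(a)), where a is the standardized cutoff."""
    a = (cutoff - mu) / sigma
    return mu + sigma * phi(a) / (1 - Phi(a))

# Hypothetical group means: every group's 'geniuses' average above 132,
# but the ordering of the group means survives the truncation.
means = {mu: mean_above(mu, 15, 132) for mu in (85, 100, 110)}
```

So even conditioning on IQ > 132, the conditional means stay ordered the same way as the unconditional ones, which is the point being made.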

Replies from: army1987
comment by A1987dM (army1987) · 2012-05-07T18:01:10.513Z · LW(p) · GW(p)

Huh, sure, if the likelihood is a reversed Heaviside step. If the likelihood is itself a Gaussian, then the posterior is a Gaussian whose mean is the weighted average of the prior's mean and the likelihood's, weighted by the inverse squared standard deviations. So even if the st. dev. of the likelihood were half that of the prior for each race, the difference in posterior means would shrink by five times.
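Concretely, with the standard conjugate-Gaussian update (the observation y and both standard deviations below are made-up numbers, chosen only so the likelihood sd is half the prior sd):

```python
def posterior_mean(m0, s0, y, s):
    """Posterior mean after one Gaussian observation y with noise sd s,
    starting from prior N(m0, s0^2): a precision-weighted average."""
    w0, w = 1 / s0**2, 1 / s**2
    return (w0 * m0 + w * y) / (w0 + w)

# Two priors whose means differ by 10 points; likelihood sd half the prior sd.
# The likelihood gets 4x the prior's weight, so the prior contributes 1/5.
y, s0, s = 100, 15, 7.5
gap = posterior_mean(105, s0, y, s) - posterior_mean(95, s0, y, s)
print(round(gap, 6))  # → 2.0: the 10-point prior gap shrinks by five times
```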

Replies from: Vaniver
comment by Vaniver · 2012-05-07T18:31:36.885Z · LW(p) · GW(p)

Right- there's lots of information out there that will narrow your IQ estimate of someone else more than their race will, like that they're a professional physicist or member of MENSA, but evidence only becomes worthless when it's independent of the quantity you're interested in given the other things you know.

Replies from: maia
comment by maia · 2012-05-07T18:43:24.070Z · LW(p) · GW(p)

Can you give an example of evidence becoming worthless? (I can't think of any.)

Replies from: alex_zag_al, Vaniver
comment by alex_zag_al · 2012-05-07T19:46:19.211Z · LW(p) · GW(p)

You have a theory that a certain kind of building is highly prone to fire. You see a news report that mentions that a building of that kind has burnt down on Main Street. The news report supports your theory - unless you were a witness to the fire the previous night.

Replies from: Dolores1984, TheOtherDave
comment by Dolores1984 · 2012-05-07T20:02:18.716Z · LW(p) · GW(p)

If you were promoting the theory before that point, the police may still have some pointed questions to ask you.

Replies from: alex_zag_al
comment by alex_zag_al · 2012-05-07T20:05:44.101Z · LW(p) · GW(p)

I'm talking about how valuable the evidence is to you, the theory-promoter. If you were there, then the news report tells you nothing you didn't already know.

Replies from: Dolores1984
comment by Dolores1984 · 2012-05-07T22:16:17.109Z · LW(p) · GW(p)

I understood your point. I was simply making a joke.

comment by TheOtherDave · 2012-05-07T19:56:19.192Z · LW(p) · GW(p)

In this case, if the news report is consistent with my recollections, it seems that is evidence of the reliability of the news, and of the reliability of my memory, and additional evidence that the event actually occurred that way.

No?

Replies from: alex_zag_al
comment by alex_zag_al · 2012-05-07T19:58:26.410Z · LW(p) · GW(p)

Yeah, true. But having been there the previous night, and making good observations the previous night, certainly makes the news report go from pretty strong evidence to almost nothing.

EDIT: Really, the important thing, I think, is that if your observations are good enough then the evidence from the news report is "worthless", in the sense that you shouldn't pay to find out whether there was a news report that backs up your observations. It's not worth the time it takes to hear it.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-07T20:37:12.607Z · LW(p) · GW(p)

Hm.

Maybe I'm missing your point altogether, but it seems this is only true if the only thing I care about is the truth of that one theory of mine. If I also care about, for example, whether news reports are typically reliable, then suddenly the news report is worth a lot more.

But, sure, given that premise, I agree.

comment by Vaniver · 2012-05-07T21:07:13.067Z · LW(p) · GW(p)

Suppose A gives me information about B, and B gives me information about C; they're dependent. (Remember, probabilistic dependence is always mutual.) A gives me information about C (through B) only if I don't know B. If I know B, then A is conditionally independent of C, and so learning A tells me nothing about C.
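A toy numerical version of that A-B-C chain, with made-up probabilities, shows the screening off directly:

```python
from itertools import product

# A -> B -> C Markov chain: P(a, b, c) = P(a) * P(b|a) * P(c|b).
# All numbers are hypothetical, just to illustrate conditional independence.
pA = {0: 0.5, 1: 0.5}
pB_given_A = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}
pC_given_B = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}

joint = {(a, b, c): pA[a] * pB_given_A[a][b] * pC_given_B[b][c]
         for a, b, c in product((0, 1), repeat=3)}

def p_c1(**given):
    """P(C=1 | given), where `given` may fix a and/or b."""
    match = lambda a, b: all({'a': a, 'b': b}[k] == v for k, v in given.items())
    num = sum(p for (a, b, c), p in joint.items() if c == 1 and match(a, b))
    den = sum(p for (a, b, c), p in joint.items() if match(a, b))
    return num / den

# Once B is known, learning A changes nothing about C:
print(p_c1(b=1), p_c1(a=0, b=1), p_c1(a=1, b=1))  # all equal
```

Without knowing B, though, A does carry information about C (through B), which is exactly the dependence described above.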

Replies from: maia
comment by maia · 2012-05-08T03:53:40.596Z · LW(p) · GW(p)

So essentially... a new fact is useless only if it's a subset of knowledge you already have?

Replies from: Vaniver
comment by Vaniver · 2012-05-08T11:59:25.427Z · LW(p) · GW(p)

That seems like a fine way to put it.

comment by JoshuaZ · 2012-04-26T05:13:19.895Z · LW(p) · GW(p)

(As for cases when the evidence from race is not so easy to screen off... well, I've never heard anybody being accused of racism for pointing out that Africans have longer penises than Asians.)

Minor note: this appears to actually not be the case. Most studies have found no correlation between race and penis size. See for example here. The only group for which there may be some substantial difference is Chinese babies, who may have smaller genitalia at birth, but this doesn't appear to hold by the time the children have reached puberty. Relevant study.

Replies from: army1987, Nornagest
comment by A1987dM (army1987) · 2012-05-07T13:40:07.247Z · LW(p) · GW(p)

Huh, according to this map the average Congolese penis is nearly twice as long as the average South Korean penis. (ISTR that stretched flaccid length doesn't perfectly correlate with erect length.)

comment by Nornagest · 2012-04-26T05:31:14.748Z · LW(p) · GW(p)

Oddly salient for such a trivial result. Should a study qualify for an Ig Nobel if you can use it to settle bar bets?

comment by cousin_it · 2012-04-12T09:18:03.445Z · LW(p) · GW(p)

Where would someone like Steve Sailer fit in this classification?

Replies from: GLaDOS
comment by GLaDOS · 2012-04-24T16:16:10.943Z · LW(p) · GW(p)

Indeed, as strange as it might sound (but not to those who know what he usually blogs about), Steve Sailer seems to genuinely like black people more than average, and I wouldn't be surprised at all if a test showed he wasn't biased against them, or was less biased than the average white American.

He also doesn't seem like a Racist2 from the vast majority of his writing; painting him as a Racist3 is plain absurd.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-04-24T16:26:25.680Z · LW(p) · GW(p)

Steve Sailer seems to genuinely like black people more than average

What evidence leads to this conclusion?

Replies from: Vaniver
comment by Vaniver · 2012-04-24T16:46:13.509Z · LW(p) · GW(p)

He published his IAT results and he's proposed policies that play to the strengths of blacks.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-04-24T17:46:10.774Z · LW(p) · GW(p)

Historically, proposing policies that play to the specific strengths of a minority group is not generally indicative of actually positive feelings about that group.

Replies from: Vaniver
comment by Vaniver · 2012-04-24T18:17:56.470Z · LW(p) · GW(p)

The IAT is the best measure of 'genuinely like X people' we have now, though that's not saying much. (I believe the only place he published it is VDare, which is currently down.)

Historically, proposing policies that are set to help the specific strengths of a minority group is not generally indicative of actually positive feelings about those groups.

What are the competing hypotheses and competing observations, here?

Replies from: army1987
comment by A1987dM (army1987) · 2012-04-25T17:17:13.938Z · LW(p) · GW(p)

The IAT is the best measure of 'genuinely like X people' we have now

...for a particular value of genuine. (See this, BTW.)

Replies from: Vaniver
comment by Vaniver · 2012-04-25T19:43:55.142Z · LW(p) · GW(p)

It seems to me the natural interpretation for "genuine" is "unconscious," and if that post is relevant, it seems that it argues for more relative importance for the IAT over stated positions and opinions.

comment by CaveJohnson · 2012-04-24T16:19:21.714Z · LW(p) · GW(p)

This is missing Racist4:

Someone whose preferences result in disparate impact.

comment by TheOtherDave · 2012-04-09T16:22:36.820Z · LW(p) · GW(p)

...and also useful for those among us who don't identify as "liberal-minded."

Replies from: MixedNuts
comment by MixedNuts · 2012-04-09T16:46:14.432Z · LW(p) · GW(p)

Really? It does seem useful to communicate with the liberal-minded without feeling personally insulted or thinking they're going way overboard on political correctness. But only liberals and those who think like them seem prone to thinking "Everyone is full of EVIL PREJUDICE except my tribe".

Replies from: Danfly, FAWS, TheOtherDave
comment by Danfly · 2012-04-09T17:40:21.540Z · LW(p) · GW(p)

But only liberals and those who think like them seem prone to thinking "Everyone is full of EVIL PREJUDICE except my tribe"

When I saw this, I could not help but think what an apt demonstration it was of a green accusing the blues of holding a uniquely prejudiced point of view because they are blues, while he, being a green, is of course immune to any such sentiment.

Replies from: Random832
comment by Random832 · 2012-04-14T10:05:36.472Z · LW(p) · GW(p)

Why is it that wherever I see "greens and blues" mapped to real-world politics, "green" are the liberals and "blue" are the conservatives? example.

EDIT: I misread your comment.

comment by FAWS · 2012-04-09T17:11:06.706Z · LW(p) · GW(p)

Are you saying that the demographic you are talking about is special in using prejudice as the marker of evilness (as opposed to religious affiliation or whatever), or in taking that sort of attitude at all?

Replies from: MixedNuts
comment by MixedNuts · 2012-04-09T17:33:13.109Z · LW(p) · GW(p)

Sort of the latter. Conservatives tend to think people evil for supporting things like gay marriage and abortion - things that all sides agree are supported by one side and opposed by the other. Or to think people fundamentally good, but naive and misguided - everyone agrees poverty is bad, but conservatives think food stamps make it worse, so they oppose liberals who support food stamps.

People who reject both labels seem to regard both conservatives and liberals as cute little bumbling fools who want to do good and thus deserve a pat on the head and a lollipop.

I haven't spent nearly as much time in conservative circles as in liberal ones, but there is a distinctive pattern among liberals that I would not expect to observe anywhere else: "Let's solve sexism by putting kittens in a blender!" "Putting kittens in a blender sounds like a bad idea." "You evil sexist!".

Replies from: TheOtherDave, BillyOblivion, SusanBrennan
comment by TheOtherDave · 2012-04-09T18:03:05.037Z · LW(p) · GW(p)

Leaving as untouched as I possibly can while still participating in this discussion at all the political labeling question here, I am interested in your thoughts as to the structural similarities and differences between the hypothetical conversation you cite about sexism, and a conversation like:

"Let's make God happy by putting kittens in a blender!"
"Putting kittens in a blender sounds like a bad idea."
"You evil atheist!"

or

"Let's improve our capitalist economy by putting kittens in a blender!"
"Putting kittens in a blender sounds like a bad idea."
"You evil communist!"

comment by BillyOblivion · 2012-04-17T12:09:15.459Z · LW(p) · GW(p)

I've spent a lot of time on the conservative side (between the guns, being in the Military and working in/around the Defense Industry, and in general being a tradition oriented more-or-less libertarian) and many of them aren't any different.

"Gay Marriage will ruin the institution" "Uh. How many times have you been divorced?" "COMMUNIST!" (no, not literally, but YKWIM)

Heck, even the Implicit Association Test assumes that if you're "liberal" on Gun Control (whatever that means) you're also liberal on Gay Marriage and Abortion. Anyone wanna make some assumptions on the Implicit Associations of the writers of that test?

Replies from: wedrifid
comment by wedrifid · 2012-04-17T13:46:54.787Z · LW(p) · GW(p)

"Gay Marriage will ruin the institution"

It certainly ruins some aspects. How will the state know which partner to favor in the divorce proceedings if both are the same sex?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-17T14:03:20.762Z · LW(p) · GW(p)

The shorter one.

Replies from: None, cousin_it, wedrifid
comment by [deleted] · 2012-04-17T14:20:54.403Z · LW(p) · GW(p)

Being 1.6m, I support this decision.

EDIT: Take that, veil of ignorance!

comment by cousin_it · 2012-04-17T14:25:31.006Z · LW(p) · GW(p)

Why not the cuter one?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-17T14:54:18.732Z · LW(p) · GW(p)

That works too. A more serious answer here.

comment by wedrifid · 2012-04-17T14:06:55.908Z · LW(p) · GW(p)

The shorter one.

Good answer. Does it work that way in practice? I wouldn't be able to predict whether the halo effect would overcome the sympathy influence and win out in effective total favoritism.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-17T14:53:00.769Z · LW(p) · GW(p)

Beats me. I expect there's a lot of noise here; I was more making a nod towards the standard trope than actually proposing an answer. "The one with less earning power" is also an answer that comes to mind.

If I had to guess, I'd guess that in most jurisdictions where same-sex divorce is no longer so novel as to be singular, the tendency would be to approximate splitting assets down the middle. But I'm no more than .35 confident of that, and even that much depends on a very ad-hoc definition of "no longer so novel."

comment by SusanBrennan · 2012-04-09T23:36:31.645Z · LW(p) · GW(p)

It is probably a very bad idea for me to make my first post in reply to something that is blatantly political, on a site which quite actively discourages it, but I'm not very rational. You see, I would probably consider myself more of a liberal than a conservative. I have even attended meetings of feminist organizations, which means that I am a very irrational type of bumbling fool. Nevertheless, I assure you that I would indeed question the ethics of putting kittens in blenders. I would also question the effectiveness of putting kittens in blenders as a means to solve sexism. However, I have never seen such a position proposed before and would be rather shocked to be called an "evil sexist", even by radical feminists who I do not tend to agree with, for opposing the practice.

Perhaps everything you say is true. Perhaps there is something in liberals that makes us more tribal than the average human being. I would freely admit to being more irrational than rational most of the time. When someone not of my tribe says something I find horrific, my emotions tend to make me go "damn their entire tribe for only they would think such things", rather than "I disagree with the point this individual is making, though I am sure it is not held by everyone else in his tribe and I am sure there are converse examples of people who have reached the same conclusion in my tribe".

I see that the inferences you have drawn from your experience at a large number of liberal events and a large number of conservative events have led you to the conclusion that "ONLY liberals and those that think like them seem prone to thinking 'everyone is full of evil prejudice except my tribe'". I would have thought that a statement of such strength, particularly since it uses the word ONLY, would require much more than the anecdotal experiences of one individual in order to justifiably reject the null hypothesis. Perhaps you have done many statistical studies on this that I am unaware of. Perhaps you have assumed knowledge of your studies is common among Less Wrong contributors (and I would admit that the average LW contributor is smarter than me, so it's not too much of a stretch). Indeed, you may have constructed your priors in a completely impartial manner and may indeed be completely justified in assuming the truth of your alternative hypothesis. Nevertheless, I am a little skeptical of the reliability of the methods you used for arriving at the conclusion of attributing this quality to "ONLY liberals and those who think like them", as opposed to "MOSTLY liberals and those who think like them".

Unsurprisingly, I have a number of issues with that sentence which are not just political. The set which includes "liberals and those who think like them" is not very well defined. I imagine a liberal thinks more like a conservative than a dog thinks like a liberal or a conservative. Consequently, your set could be defined to include everything within the set "conscious human beings", as conscious human beings are certainly things which tend to think like other human beings. However, it is very clear from context that this is not what you mean. Do libertarians think like liberals? I imagine many libertarians would say "yes, on a lot of things, but not on many other things. On other things, I tend to think like a conservative". But, clearly, your additional qualifier of "those who think like them" was included to specify that you were not talking about only liberals. Do socialists think like liberals? I imagine a conservative would often say "yes, they do. They both tend to want more government intervention". Conversely, I think a socialist might say "no, liberals believe in private ownership of the means of production. I believe that system is inherently unjust". The vast majority of anarchists pose a similar problem, as the forms of anarchism which have their origins in the labour movement, i.e. those advocating social anarchism, are still the most common form of anarchism from a worldwide perspective. These anarchists would in fact see themselves as thinking more like orthodox Marxists than US conservatives. They would differ very strongly over the "statist" notion of the dictatorship of the proletariat, but would have similar long term ends. This puts the conservative who defines his conservatism as an ideology of "less government" in contrast to liberals and socialists in an odd position. 
You see, if he is not of a very extreme persuasion and is a believer in western democracy in its current form, it would probably be safe to say that he thinks more like liberals and democratic socialists than he thinks like a revolutionary social anarchist. So, defining who exactly thinks like a liberal, but is not actually a liberal, is not an easy task. I believe there is a great deal of literature in linguistics and the philosophy of language dealing with the concept of "like" and how difficult it actually is to categorize one thing as being like another thing. Trying to define an agent which thinks like another agent seems, if anything, even more difficult.

Did you perhaps come up with a technical definition for the set of people defined as "liberal or thinks like a liberal"? Did you create questionnaires with a number of propositions associated with the ideology "liberalism" and give them to people in the circles you mentioned, so that you could, to some extent, identify those who were of the set "think like liberals" in non-liberal groups? Perhaps you used a ratio of 13 positive answers to 20 negative answers as a minimum benchmark for those who "think like [liberals]". Were there questions on these sheets which were similar in form to "if you could stop sexism by putting kittens in a blender, would you put kittens in a blender?" and "in such circumstances, would you treat anyone stopping you from putting kittens in a blender as the enemy?". If people in the "liberals and those who think like them" group did answer positively to both of those questions, I would be fairly surprised.

But maybe you have just let political hyperbole get in the way of presenting a potentially more persuasive argument. There is probably a good case to be made for comparatively stronger tribal sentiments in liberals. After all, individualism is a fundamental part of modern day conservatism, but is no longer considered a key component of liberalism. Now liberals are associated with more collectivist values. Consequently, it would not be surprising if studies showed that liberals had emotionally stronger collectivist tendencies than conservatives. Indeed, I think one could be justified in assuming a prior probability of greater than .5 that more collectivist tendencies would be found in liberals than in conservatives if we use the US definition of those terms.

In conclusion, you could have just said something along the lines of "In my own experience, individuals of a liberal political persuasion tend to have stronger views concerning moral judgment of their opponents. Has anyone else noticed this or am I the only one? If not, are there probable cognitive causes behind this?" At least that would have seemed more rational. It would have seemed more like something that belongs on Less Wrong. Presenting your argument in that form might have spared you some of that negative karma. If emotions were not getting in your way, maybe you would have noticed that your argument would seem out of place on this website, particularly when you decided to capitalize EVIL PREJUDICE. You might also have realized that when your accusation levied at a political group was questioned, you merely resorted to stronger hyperbole involving kittens in blenders. Your argument had become a soldier and you decided that you should try to save it by resorting to an argument that was even more absurd and hyperbolic.

I've looked at some of your previous contributions and you are clearly intelligent, so I don't doubt that you probably had a valid point to make. You just could have made it better. You must have noticed that some of your statements just don't fit the accepted rules of discourse on this site.

Replies from: Desrtopa, TheOtherDave
comment by Desrtopa · 2012-04-12T04:16:29.274Z · LW(p) · GW(p)

I never interpreted MixedNuts' statement as entailing that liberals have stronger tribal sentiments. Rather, I interpreted it as being that accusing others of prejudice, and jumping on people who oppose proposed solutions to combat prejudice even if the solutions aren't very good, are distinctly liberal tribal phenomena. A comparable tribal behavior that you would be likely to see among conservatives, but unlikely to see among liberals, would be accusing people of being "unpatriotic."

Replies from: SusanBrennan
comment by SusanBrennan · 2012-04-12T12:29:34.424Z · LW(p) · GW(p)

Point taken. In hindsight I also seem to have gotten a bit carried away with the above post. I would, however, hold that there are many social/political/religious groups that have a remarkable tendency to see everyone except themselves as remarkably prejudiced because their worldview is not shared. Nevertheless, continuing down this road is not likely to be very productive.

I vote that we abandon ship and shift our attentions back to topics like rationality techniques, game theory, friendly AI and meta-ethics, where we can think more clearly.

comment by TheOtherDave · 2012-04-09T23:42:38.817Z · LW(p) · GW(p)

Yeah, it was probably a bad idea, but damn I enjoyed reading it.

comment by TheOtherDave · 2012-04-09T17:05:51.582Z · LW(p) · GW(p)

Attending to specificity and the sneaking in of connotations has benefits that are not limited to dealing with accusations of "EVIL PREJUDICE".

comment by BillyOblivion · 2012-04-17T11:32:29.547Z · LW(p) · GW(p)

So if a minority takes the Implicit Association Test and finds out they're biased against the dominant "race" in their area, are they a Racist1, or not?

I would also really question the validity of the Implicit Association Test. It says "Your data suggest a slight implicit preference for White People compared to Black People", which, given that blacks have been severely under-represented in my social sub-culture for the last 27 years (Punk/Goth), the school I graduated from (Art School), and my professional environments (IT) for the last 20 years, is probably not inaccurate.

However, it also says "Your data suggest a slight implicit preference for Herman Cain compared to Barack Obama." Which is nonsense. I have a STRONG preference for Herman Cain over Barack Obama.

Replies from: Manfred
comment by Manfred · 2012-04-17T13:10:19.057Z · LW(p) · GW(p)

So if a minority takes the Implicit Association Test and finds out they're biased against the dominant "race" in their area, are they a Racist1, or not?

Looks like we need more "racism"s :D A common definition of racism that reflects the intuitions you bring up is "racism is prejudice plus power" (e.g., here), which isn't very useful from a decision-making point of view but which is very useful when looking at this racism as a functional thing experienced by some group.

comment by Oscar_Cunningham · 2012-04-09T18:31:28.970Z · LW(p) · GW(p)

Surely one of the definitions of "racist" should contain something about thinking that some races are better than others. Or is that covered under "neo-Nazi"?

Replies from: thomblake, Eugine_Nier
comment by thomblake · 2012-04-10T19:34:13.084Z · LW(p) · GW(p)

I'm pretty sure that's covered under Racist1. Note the word "negative".

Though it's odd that Racist1 specifically refers to "minorities". The entire suite seems to miss folks that favor a "minority" race.

Replies from: CaveJohnson, gjm
comment by CaveJohnson · 2012-04-24T16:25:03.605Z · LW(p) · GW(p)

Not really; it is perfectly possible to be explicitly aware of one's racial preferences and not really be bothered by having such preferences, at least no more than one is bothered by liking salty food or green parks, yet not be a Nazi or prone to violence.

Indeed, I think a good argument can be made not only that a large number of such people lived in the 19th and 20th centuries, but that we probably have millions of them living today in, say, a place like Japan.

And that they are mostly pretty decent and ok people.

Edit: Sorry! I didn't see the later comments already covering this. :)

comment by gjm · 2012-04-12T21:43:10.684Z · LW(p) · GW(p)

Negative subconscious attitudes aren't the same thing as (though they might cause or be caused by) conscious opinions that such-and-such people are inferior in some way.

Replies from: thomblake, army1987
comment by thomblake · 2012-04-12T21:44:36.918Z · LW(p) · GW(p)

Ah yes - it's extra-weird that someone isn't allowed in that framework to have conscious racist opinions but not be a jerk about it.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2012-04-12T22:53:14.187Z · LW(p) · GW(p)

If one has conscious racist opinions, or is conscious that one has unconscious racist opinions (has taken the IAT but doesn't explicitly believe negative things about blacks) but doesn't act on them, it's probably because one doesn't endorse them. I'd class such a person as a Racist1.

Replies from: thomblake
comment by thomblake · 2012-04-12T22:56:53.997Z · LW(p) · GW(p)

I don't think not being an "insensitive jerk" is the same as not acting on one's opinions.

For example, if I think that people who can't do math shouldn't be programmers, and I make sure to screen applicants for math skills, that's acting on my opinions. If I make fun of people with poor math skills for not being able to get high-paying programmer jobs, that's being an insensitive jerk.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2012-04-14T17:07:57.390Z · LW(p) · GW(p)

That's true. I was taking "racist opinions" to mean "incorrect race-related beliefs that favor one group over another". If people who couldn't do math were just as good at programming as people who could, and you still screened applicants for math skills, that would be a jerk move. If your race- or gender- or whatever-group-related beliefs are true, and you act on them rationally (e.g. not discriminating with a hard filter when there's only a small difference), then you aren't being any kind of racist by my definition.

ETA: did anyone downvote for a reason other than LocustBeamGun's?

Replies from: wedrifid, None
comment by wedrifid · 2012-04-14T19:16:05.438Z · LW(p) · GW(p)

If people who couldn't do math were just as good at programming as people who could, and you still screened applicants for math skills, that would be a jerk move.

Not to mention a bad business decision.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2012-04-14T23:17:51.525Z · LW(p) · GW(p)

That too, thanks for pointing it out.

comment by [deleted] · 2012-04-14T20:31:33.283Z · LW(p) · GW(p)

(ETA: I didn't downvote, but) I wouldn't call gender differences in math "small" - the genders have similar average skills but their variances are VERY different. As in, Emmy Noether versus ~everyone else.

And if there is a great difference between groups it would be more rational to apply strong filters (except that, for example, people who are bad at math conveniently aren't likely to become programmers anyway). Perhaps the downvoter(s) thought you only presented the anti-discrimination side of the issue.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2012-04-14T23:15:38.213Z · LW(p) · GW(p)

I think in most cases the average is more important in deciding how much to discriminate. But I deleted the relevant phrase because I'm not sure about that specific case and my argument holds about the same amount of water without it as with it.

EDIT:

Perhaps the downvoter(s) thought you only presented the anti-discrimination side of the issue.

Huh, I was intending to say that it's acceptable to discriminate on real existing differences, to the extent that those differences exist. Not sure how to fix my comment to make that less ambiguous, so just saying it straight out here.

comment by A1987dM (army1987) · 2012-04-25T11:44:57.143Z · LW(p) · GW(p)

Indeed. For some reason I'm not sure of, I instinctively dislike Chinese people, but I don't endorse this dislike and try to act upon it as little as possible (except when seeking romantic partners -- I think I do get to decide what criteria to use for that).

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-25T12:46:45.299Z · LW(p) · GW(p)

Can you expand on the difference you see between acting on your (non-endorsed) preferences in romantic partners, and acting on those preferences in, for example, friends?

Replies from: army1987
comment by A1987dM (army1987) · 2012-04-25T13:57:20.460Z · LW(p) · GW(p)

As for this specific case, I don't happen to have any Chinese friend at the moment, so I can't.

More generally, see some of the comments on this Robin Hanson post: not many of them seem to agree with him.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-25T14:15:46.004Z · LW(p) · GW(p)

I don't understand how not having any Chinese friends at the moment precludes you from expanding on the differences between acting on your dislike of Chinese people when seeking romantic partners and acting on it in other areas of your life, such as maintaining friendships.

Yes, the commenters on that post mostly don't agree with him.

That said, I would summarize most of the exchange as:
"Why are we OK with A, but we have a problem with B?"
"Because A is OK and B is wrong!"

Which isn't quite as illuminating as I might have liked.

Replies from: army1987
comment by A1987dM (army1987) · 2012-04-25T14:37:54.813Z · LW(p) · GW(p)

I don't understand how not having any Chinese friends at the moment precludes you from expanding on the differences between acting on your dislike of Chinese people when seeking romantic partners and acting on it in other areas of your life, such as maintaining friendships.

Since I'm not maintaining any friendships with Chinese people, I can't see what it would even mean for me to act on my dislike of Chinese people in maintaining friendships. As for ‘other areas of my life', this means that I attempt to interact with a Chinese-looking beggar the same way I'd interact with a European-looking beggar, to read a paper by an author with a Chinese-sounding name the same way I'd read one by an author with (say) a Polish-sounding name, and so on. (I suspect I might have misunderstood your question, though.)

comment by Eugine_Nier · 2012-04-09T19:41:11.105Z · LW(p) · GW(p)

Depends on what you mean by "better". There's a difference between taking the data on race and IQ seriously, and wanting to commit genocide.

Replies from: TheOtherDave, Oscar_Cunningham
comment by TheOtherDave · 2012-04-09T20:17:07.757Z · LW(p) · GW(p)

(blink)

Can you unpack the relationship here between some available meaning of "better" and wanting to commit genocide?

Replies from: Eugine_Nier, wedrifid
comment by Eugine_Nier · 2012-04-09T20:40:02.016Z · LW(p) · GW(p)

Can you unpack the relationship here between some available meaning of "better" and wanting to commit genocide?

That's the question I was implicitly asking Oscar.

comment by wedrifid · 2012-04-09T21:02:41.632Z · LW(p) · GW(p)

Can you unpack the relationship here between some available meaning of "better" and wanting to commit genocide?

Most obvious plausible available meaning for 'better' that fits: "Most satisfies my average utilitarian values".

(Yes, most brands of simple utilitarianism reduce to psychopathy - but since people still advocate them we can consider the meaning at least 'available'.)

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-09T21:52:10.414Z · LW(p) · GW(p)

Fair enough.

comment by Oscar_Cunningham · 2012-04-09T19:53:48.586Z · LW(p) · GW(p)

Sure, I just thought it was weird that the definitions given barely even mentioned race.

comment by Eugine_Nier · 2012-04-09T17:58:20.585Z · LW(p) · GW(p)

You left out one common definition.

A "Racist0" is someone who has accurate priors about the behavior of people of different races.

Also, I don't see why calling Obama the "Food Stamp President" or otherwise criticizing his economic policy makes one a jerk, much less a "Racist2", unless one already believes that all criticism of Obama is racist by definition.

Replies from: TimS, CronoDAS, grendelkhan, Oligopsony
comment by TimS · 2012-04-09T18:10:45.331Z · LW(p) · GW(p)

I'm honestly confused. You don't see why calling Obama a "Food Stamp President" is different from criticizing his economic policy?

I guess I would not predict that particular phrase being leveled against Hillary or Bill Clinton - even from people who disagreed with their economic policies for the same reasons they disagree with Obama's economic policies.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-04-09T18:59:16.696Z · LW(p) · GW(p)

I guess I would not predict that particular phrase being leveled against Hillary or Bill Clinton - even from people who disagreed with their economic policies for the same reasons they disagree with Obama's economic policies.

Well, Bill Clinton had saner economic policies, but otherwise I would expect that phrase, or something similar, to be used against a white politician.

Replies from: TimS
comment by TimS · 2012-04-09T20:08:40.006Z · LW(p) · GW(p)

You haven't answered my question:

Given the way that public welfare codes for both "lazy" and "black" in the United States, do you think that "Food Stamp President" has the same implications as some other critique of Obama's economic policies (in terms of whether the speaker intended to invoke Obama's race and whether the speaker judges Obama differently than some other politician with substantially identical positions)?

Replies from: Random832, Eugine_Nier
comment by Random832 · 2012-04-10T20:18:35.819Z · LW(p) · GW(p)

"public welfare codes for both "lazy" and "black" in the United States"

Taking your word on that, what "other critique of Obama's economic policies" are you imagining that would not have the same implications, unless you mean one that ignores public welfare entirely in favor of focusing on some other economic issue instead?

Replies from: TimS
comment by TimS · 2012-04-11T00:53:16.174Z · LW(p) · GW(p)

A political opponent of Obama might say:

Basic economics says that what you pay for, you get more of. Therefore, when you extend long-term unemployment benefits, you get more long-term unemployment.

or

The current tax rate is too far to the right on the Laffer curve

or

The health insurance purchase mandate is unprecedented, UnAmerican, and unConstitutional

edit: or

People who pay no net income tax (because of low income and earned income tax credits) are drains on American society

(end edit)

without me thinking that the political opponent was intending to invoke Obama's race in some way. None of these are actual quotes, but I think they are coherent assertions that disagree with Obama's economic or legal philosophy. Edit: I feel confident I could find actual quote of equivalent content.

Replies from: Random832, Eugine_Nier, Eugine_Nier
comment by Random832 · 2012-04-11T12:54:44.487Z · LW(p) · GW(p)

Of course, none of the ones you suggested are actually about public welfare, in the sense of the government providing supplemental income for people who are unable to get jobs to provide themselves adequate income. So what we have is not a code word, but rather a code issue.

Except the first one, but with how you framed it as "public welfare codes for..." I don't see how that one wouldn't have the same connotations.

Replies from: TimS
comment by TimS · 2012-04-11T13:14:41.005Z · LW(p) · GW(p)

Tl;dr: You have a good point, but we seem to be stuck with the historical context.


Unemployment benefits might qualify as public welfare. More tenuously, the various health insurance subsidies and expansions of Medicaid (government health insurance for the very poor) contained in "Obamacare."

But your point is well taken. The well has been poisoned by political talking points from the 1980s (e.g. welfare queen and the response from the left). I'll agree that there's no good reason for us to be trapped in the context from the past, but politicians have not tried very hard to escape that trap.

comment by Eugine_Nier · 2012-04-11T03:33:12.737Z · LW(p) · GW(p)

The term "welfare president" has the advantage of not having a huge inferential distance (how many people know what a Laffer curve is?) and working as a soundbite.

comment by Eugine_Nier · 2012-04-21T04:31:57.481Z · LW(p) · GW(p)

Here is another example of my point that one can claim any criticism of Obama is racist if one is sufficiently motivated.

comment by Eugine_Nier · 2012-04-10T00:14:44.540Z · LW(p) · GW(p)

Well, yes, by finding enough "code words" you can make any criticism of Obama racist.

Replies from: TheOtherDave, RobinZ
comment by TheOtherDave · 2012-04-10T01:03:18.985Z · LW(p) · GW(p)

Yes, that's certainly true.

I'm really curious now, though. What's your opinion about the intended connotations of the phrase "food stamp President"? Do you think it's intended primarily as a way of describing Obama's economic policies? His commitment to preventing hunger? His fondness for individual welfare programs? Something else?

Or, if you think the intention varies depending on the user, what connotations do you think Gingrich intended to evoke with it?

Or, if you're unwilling to speculate as to Gingrich's motives, what connotations do you think it evokes in a typical resident of, say, Utah or North Dakota?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-04-10T03:34:39.506Z · LW(p) · GW(p)

The direct meaning is a reference to the fact that food stamp use has soared during his presidency. More generally, it's a reference to his governing style, which includes anti-business policies and expanding entitlements.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-10T12:49:47.771Z · LW(p) · GW(p)

I'm going to be charitable and assume that by "direct meaning" you mean to refer to the intended connotations that I asked about. Thanks for the answer.

comment by RobinZ · 2012-04-10T16:17:49.224Z · LW(p) · GW(p)

That seems improbable. To pick the first example I Googled off of the Atlantic website: Chart of the Day: Obama's Epic Failure on Judicial Nominees contains some substantive criticism of Obama - can you show me where it contains "code words" of this kind?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-10T17:37:20.036Z · LW(p) · GW(p)

It's not an improbable claim so much as a nigh-unfalsifiable claim.

I mean, imagine the following conversation between two hypothetical people, arbitrarily labelled RZ and EN here:
EN: By finding enough "code words" you can make any criticism of Obama racist.
RZ: What about this criticism?
EN: By declaring "epic", "confirmation mess", and "death blow" to be racist "code words", you can make that criticism racist.
RZ: But "epic", "confirmation mess", and "death blow" aren't racist code words!
EN: Right. Neither is "food stamps".

Of course, one way forward from this point is to taboo "code word" -- for example, to predict that an IAT would find stronger associations between "food stamps" and black people than between "epic" and black people, but would not find stronger associations between "food stamps" and white people than between "epic" and white people.

Replies from: RobinZ
comment by RobinZ · 2012-04-10T19:12:26.191Z · LW(p) · GW(p)

I think "nigh-unfalsifiable" is unfair in general when it comes to the use of code words, but I'm not familiar with the facts of the particular case under discussion.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-10T19:22:11.953Z · LW(p) · GW(p)

I agree in the general case.

In fact, I fully expect that (for example) an IAT would find stronger associations between "food stamps" and black people than between "epic" and black people, but would not find stronger associations between "food stamps" and white people than between "epic" and white people, and if I did not find that result I would have to seriously rethink my belief that "food stamps" is a dog-whistle in the particular case under discussion; it's not unfalsifiable at all.

But I can't figure out any way to falsify the claim that "by finding enough 'code words' you can make any criticism of Obama racist," nor even the implied related claim that it's equally easy to do so for all texts. Especially in the context of this discussion, where the experimental test isn't actually available. All Eugine_Nier has to do is claim that arbitrarily selected words in the article you cite are equally racially charged, and claim -- perhaps even sincerely -- to detect no difference between the connotations of different words.

Replies from: RobinZ
comment by RobinZ · 2012-04-10T20:11:20.443Z · LW(p) · GW(p)

I wouldn't actually use IAT to find these kind of connections - I would look at the use of phrases in other contexts by other people, and I would look at the reactions to the phrases in those contexts.

To take a historical example from Battle Cry of Freedom: The Civil War Era by James M. McPherson: in the 1862 riots against the draft, one of the banners that rioters carried read, "The Constitution As It Is, The Union As It Was". That this allusion to the Constitution is an allusion to the legality of slavery under said Constitution is supported by one of the other banners carried by the same groups of rioters: "We won't fight to free the nigger". If, in 1862, a candidate for state office out in the Midwest were to repeat (or even, depending on the exact words, paraphrase) that phrase about the Constitution, I think the charge of "code word" would be well-placed.

Replies from: TimS, TheOtherDave
comment by TimS · 2012-04-11T13:28:39.599Z · LW(p) · GW(p)

I agree that looking at deployment of phrases is a useful way of finding code words, but it is always vulnerable to "cherry-picking." The second banner you mentioned might or might not have been representative of the movement.

Consider the hypothetical protest filled with "Defend the Constitution, Strike Down Obamacare" posters, which should not be tainted by other posters saying "Keep government out of Medicare"(1) but it is hard to describe an ex ante principle explaining how distinctions should be made.

(1) For non-Americans: Medicare is a widely popular government health insurance program for the elderly.

Replies from: RobinZ
comment by RobinZ · 2012-04-11T16:05:08.621Z · LW(p) · GW(p)

Agreed - it's not a mechanical judgment.

comment by TheOtherDave · 2012-04-10T20:27:16.626Z · LW(p) · GW(p)

Yup, looking at venues in which a phrase gets used is another way to establish likely connections between phrases and ideologies.

comment by CronoDAS · 2012-04-13T08:27:41.253Z · LW(p) · GW(p)

Unfortunately, it seems to me that most of the information that "race" provides is screened off by various things that are only weakly correlated with race, and it also seems to me that our badly-designed hardware doesn't update very well upon learning these things. For example, "X is a college graduate, and is black" doesn't tell you all that much more than "X is a college graduate"; it's probably easier to deal with this by having inaccurate priors than by updating properly.

Replies from: steven0461, Eugine_Nier
comment by steven0461 · 2012-04-16T00:07:16.212Z · LW(p) · GW(p)

For example, "X is a college graduate, and is black" doesn't tell you all that much more than "X is a college graduate"

I'm not sure that what you have in mind here is screening, at least in the causal diagrams sense. If I'm not mistaken, learning that someone is a college graduate screens off race for the purpose of predicting the causal effects of college graduation, but it doesn't screen off race for the purpose of predicting causes of college graduation (such as intelligence) and their effects. You're right, though, that even in the latter case learning that someone is a college graduate decreases the size of the update from learning their race. (At least given realistic assumptions. If 99% of cyan people have IQ 80 and 1% have IQ 140, and 99% of magenta people have IQ 79 and 1% have IQ 240, learning that someone is a college graduate suddenly makes it much more informative to learn their race. But that's not the world we live in; it's just to illustrate the statistics.)
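The toy numbers in that parenthetical can be checked directly. Here is a minimal sketch; the graduation cutoff (IQ >= 100) is my own assumption, introduced only to turn "college graduate" into a concrete condition on the toy distributions:

```python
# Toy distributions from the comment above: (probability, IQ) pairs.
dists = {
    "cyan":    [(0.99, 80), (0.01, 140)],
    "magenta": [(0.99, 79), (0.01, 240)],
}

def expected_iq(dist, condition=lambda iq: True):
    # Expected IQ over the part of the distribution satisfying `condition`.
    mass = sum(p for p, iq in dist if condition(iq))
    return sum(p * iq for p, iq in dist if condition(iq)) / mass

# Before learning graduation status, race is nearly uninformative.
unconditional = {c: expected_iq(d) for c, d in dists.items()}

# After conditioning on graduation (assumed here to require IQ >= 100),
# race becomes highly informative.
graduates = {c: expected_iq(d, lambda iq: iq >= 100) for c, d in dists.items()}

print(unconditional)  # both roughly 80.6
print(graduates)      # 140 vs 240
```

Under these (deliberately unrealistic) numbers, the unconditional expectations differ by about 0.01 IQ points, while the conditional ones differ by 100, which is the point of the illustration.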

comment by Eugine_Nier · 2012-04-14T04:23:56.982Z · LW(p) · GW(p)

Unfortunately, it seems to me that most of the information that "race" provides is screened off by various things that are only weakly correlated with race,

Which are generally much harder to observe.

For example, "X is a college graduate, and is black" doesn't tell you all that much more than "X is a college graduate"

Um, Affirmative Action. Also tail ends of distributions.

Replies from: grendelkhan
comment by grendelkhan · 2012-04-15T15:04:24.320Z · LW(p) · GW(p)

Um, Affirmative Action. Also tail ends of distributions.

I was under the impression that AA applied to college admissions, and that college graduation is still entirely contingent on one's performance. (Though I've heard tell that legacy students both get an AA-sized bump to admissions and tend to be graded on a much less harsh scale.)

Additionally, it seems that there's a lot of 'different justification, same conclusion' with regards to claims about black people. For instance, "black people are inherently stupid and lazy" becomes "black people don't have to meet the same standards for education". The actual example I saw was that people subconsciously don't like to hire black people (the Chicago resume study) because they present a risk of an EEOC lawsuit. (The annual risk of being involved in an EEOC lawsuit is on the order of one in a million.)

Replies from: Desrtopa, Eugine_Nier
comment by Desrtopa · 2012-04-15T15:29:06.406Z · LW(p) · GW(p)

I was under the impression that AA applied to college admissions, and that college graduation is still entirely contingent on one's performance. (Though I've heard tell that legacy students both get an AA-sized bump to admissions and tend to be graded on a much less harsh scale.)

A quick google search isn't giving me an actual percentage, but I believe that students who're admitted to and attend college, but do not graduate, are still significantly in the minority. Even those who barely made it in mostly graduate, if not necessarily with good GPAs.

Replies from: BillyOblivion, grendelkhan
comment by BillyOblivion · 2012-04-17T11:52:31.682Z · LW(p) · GW(p)

One of the criticisms of colleges engaging in "AA" type policies is that they often will place someone in a slightly higher-level school (say, Berkeley rather than Davis) than they really should be in, one which, because of their background, they are unprepared for. Not necessarily intellectually--they could be very bright--but in terms of things like study skills and the like.

There is sufficient data to suggest this should be looked at more thoroughly. In general it is better for someone to graduate from a "lesser" school than to drop out of a better one.

Replies from: wedrifid
comment by wedrifid · 2012-04-17T13:48:41.403Z · LW(p) · GW(p)

One of the criticisms of colleges engaging in "AA" type policies

Which policies were those again? Teetotalism, something to do with faith in a greater power, apologising to folks and, let's see... 1,2,3... at least 9 others.

(ie. I put it that "AA" doesn't work as a credible acronym. There are at least two far more obvious meanings for "AA policies" that must be ruled out before something to do with smart children gets considered as a hypothesis.)

Replies from: BillyOblivion
comment by BillyOblivion · 2012-04-21T00:26:53.736Z · LW(p) · GW(p)

I apologize. I was being lazy and assumed that since it was used multiple times above that folks following the conversation would get it from context. I didn't realize that this conversation would so disquiet some people that they would get hung up on that, rather than addressing what many people think is a moderately serious problem, if not for society, then for the students who are basically being set up to fail.

But by all means let's first have this silly little pissing match about not being able to track abbreviations through a conversation. It's far more important.

Replies from: wedrifid
comment by wedrifid · 2012-04-21T00:35:28.861Z · LW(p) · GW(p)

No slight intended and I hope you'll pardon my tangential reply. I know you weren't the first to introduce the acronym.

comment by grendelkhan · 2012-04-15T21:30:42.372Z · LW(p) · GW(p)

Okay, but if not everyone graduates from college, and the point of admissions is to weed out people who'll succeed in school rather than wasting everyone's time, then how does a college degree mean anything different for a standard graduate, a legacy graduate, and an affirmative-action graduate? (Note that the bar is lowered for legacy graduates to the same degree as affirmative-action graduates, so if you don't hear "my father also went here" the same way as "I got in partly because of my race", then there's a different factor at work here.)

Replies from: steven0461, Desrtopa
comment by steven0461 · 2012-04-15T23:24:59.779Z · LW(p) · GW(p)

Okay, but if not everyone graduates from college, and the point of admissions is to weed out people who'll succeed in school rather than wasting everyone's time, then how does a college degree mean anything different for a standard graduate, a legacy graduate, and an affirmative-action graduate?

In the extreme case where being above a given level of competence deterministically causes graduation, you're correct and AA makes no difference; the likelihood (but not necessarily the prior or posterior probability) of different competence levels for a college graduate is independent of race. In the extreme case where graduation is completely random, you're wrong and AA affects the evidence provided by graduation in the same way as it affects the evidence provided by admission. Reality is likely to be somewhere in between (I'm not saying it's in the middle).

(Note that the bar is lowered for legacy graduates to the same degree as affirmative-action graduates, so if you don't hear "my father also went here" the same way as "I got in partly because of my race", then there's a different factor at work here.)

It depends on the actual distribution of legacy and AA graduates.

comment by Desrtopa · 2012-04-15T21:47:22.892Z · LW(p) · GW(p)

and the point of admissions is to weed out people who'll succeed in school rather than wasting everyone's time

I'd say that the point of admissions is less to weed out people who'll succeed from people who'll waste the school's time than to weed out people who'll reflect poorly on the status of the school. Colleges raise their status by taking better students, so their interests are served not by taking students down to the lower limit of those who can meet academic requirements, but by being as selective as they can afford to be. Schools will even lie about the test scores of students they actually accept, among other things, to be seen as more selective.

comment by Eugine_Nier · 2012-04-15T22:32:30.874Z · LW(p) · GW(p)

Additionally, it seems that there's a lot of 'different justification, same conclusion' with regards to claims about black people.

I think it's more a case same observations, different proposed mechanisms.

comment by grendelkhan · 2012-04-15T21:25:03.979Z · LW(p) · GW(p)

Has anyone ever claimed that any criticism of Obama is racist by definition? I only ever see this claim from people who want to raise the bar for racism above what they've been accused of. It's not like targeting welfare to play on racism is a completely outlandish claim--I hope you're familiar with Lee Atwater's very famous description of the Southern Strategy:

You start out in 1954 by saying, "Nigger, nigger, nigger." By 1968 you can't say "nigger" — that hurts you. Backfires. So you say stuff like forced busing, states' rights and all that stuff. You're getting so abstract now [that] you're talking about cutting taxes, and all these things you're talking about are totally economic things and a byproduct of them is [that] blacks get hurt worse than whites. And subconsciously maybe that is part of it. I'm not saying that. But I'm saying that if it is getting that abstract, and that coded, that we are doing away with the racial problem one way or the other. You follow me — because obviously sitting around saying, "We want to cut this," is much more abstract than even the busing thing, and a hell of a lot more abstract than "Nigger, nigger."

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-04-15T22:38:50.969Z · LW(p) · GW(p)

Has anyone ever claimed that any criticism of Obama is racist by definition?

No, they just declare each individual instance 'racist' no matter how tenuous the argument. The rather ludicrous attempts to dismiss the Tea Party as 'racist' being the most prominent example.

comment by Oligopsony · 2012-04-11T05:06:11.241Z · LW(p) · GW(p)

A "Racist0" is someone who has accurate priors about the behavior of people of different races.

That's the R2 way of phrasing R{1,2}, like "race traitor" is the R3 way of phrasing R1 or celandine's phrasings are from an R1 perspective. (Not saying you are a jerk; just trying to separate out precisely such connotative differences from these useful clusters/concentric rings in peoplespace.)

(N.B. that if this definition wasn't question-begging and/or indexical it would imply that iff accurate priors are equal over races then the genuinely colorblind are racists.)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-04-11T06:46:08.561Z · LW(p) · GW(p)

That's the R2 way of phrasing R{1,2}, like "race traitor" is the R3 way of phrasing R1 or celandine's phrasings are from an R1 perspective. (Not saying you are a jerk; just trying to separate out precisely such connotative differences from these useful clusters/concentric rings in peoplespace.)

Possibly, I couldn't quite figure out MixedNuts' definitions because he seemed to be implicitly assuming that accurate priors were equal over races.

(N.B. that if this definition wasn't question-begging and/or indexical it would imply that iff accurate priors are equal over races then the genuinely colorblind are racists.)

Well they aren't. Nevertheless, I should probably have said something more like:

A "Racist0" is someone who rationally believes that priors aren't equal over races.

comment by Crouching_Badger · 2012-04-24T17:45:51.764Z · LW(p) · GW(p)

Apart from race, isn't this a problem with English, or language in general? We use the same words for varying degrees of a certain notion, and people cherry-pick the definition they want to respond to. If I call someone a conservative, is it a compliment or an insult? That depends on both of our perceptions of the word conservative as well as our outlook on ourselves as political beings; however, beyond that, I could mean to say that the person is fiscally conservative, but as the current conservative candidates are showing conservatism to be far-right extremism, the person may think, "Hey! I'm not one of those guys."

I think if someone wants to argue with you, you'd be hard-pressed to speak eloquently enough to provide an impenetrable phrase that does not open itself to a spectrum of interpretation.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-04-24T22:49:16.640Z · LW(p) · GW(p)

Sure. "Conservative" isn't a fixed political position. Quite often, it's a claim about one's political position: that it stands for some historical good or tradition. A "conservative" in Russia might look back to the good old days of Stalin whereas a "conservative" in the U.S. would not appreciate the comparison. It's also a flag color; your "fiscal conservative" may merely not want to wave a flag of the same color as Rick Santorum's.

comment by A1987dM (army1987) · 2012-04-09T23:42:14.473Z · LW(p) · GW(p)

What about a "Racist4", someone who assign different moral values to people of different races all other things being equal?

Replies from: Desrtopa, Strange7, CaveJohnson
comment by Desrtopa · 2012-04-12T04:05:36.443Z · LW(p) · GW(p)

Based on a couple interviews I've seen with unabashed Racist3s, I think that they would tend to fulfill that criterion.

Edit: Requesting clarification for downvote?

comment by Strange7 · 2012-04-12T08:29:26.313Z · LW(p) · GW(p)

That would be a paleo-nazi. Not many of them around, anymore, and those that are don't get away with much.

Replies from: CaveJohnson
comment by CaveJohnson · 2012-04-24T16:34:54.673Z · LW(p) · GW(p)

Why make up a new word? Paleoconservatives and smarter white nationalists (think Jared Taylor ) seem to often fit the bill.

comment by CaveJohnson · 2012-04-24T16:33:39.839Z · LW(p) · GW(p)

It depends: if the differences in assigned moral values are large enough, they can approach Nazi territory pretty quickly. As a thought experiment, consider how many dolphins you would kill to save a single person.

comment by Oscar_Cunningham · 2012-04-01T14:07:43.615Z · LW(p) · GW(p)

You don't understand anything until you learn it more than one way.

Marvin Minsky

comment by [deleted] · 2012-04-12T07:39:12.329Z · LW(p) · GW(p)

The most fundamental form of human stupidity is forgetting what we were trying to do in the first place

--Nietzsche

comment by Spurlock · 2012-04-02T04:44:28.707Z · LW(p) · GW(p)

“The mind commands the body and it obeys. The mind orders itself and meets resistance. ”

-St Augustine of Hippo

Replies from: NancyLebovitz, None
comment by NancyLebovitz · 2012-04-03T21:45:19.679Z · LW(p) · GW(p)

The mind commands the body and it obeys.

Augustine has obviously never tried to learn something which requires complicated movement, or at least he didn't try it as an adult.

Replies from: JulianMorrison, DSimon
comment by JulianMorrison · 2012-04-20T15:50:19.916Z · LW(p) · GW(p)

The general principle is: cached is fast, cache-populating is slow. This goes for mind and "body" both, because the body does as it's told, but it needs telling in a lot of detail, and the control signals need to be discovered. Most people, for both mind and body, learn enough control signals for day-to-day use, and stop.

I do somewhat wonder what it would be like to know the control signals for all my muscles, Bene Gesserit style.

Replies from: khafra, NancyLebovitz
comment by khafra · 2012-06-21T20:19:04.858Z · LW(p) · GW(p)

Vladimir Vasiliev is a Bene Gesserit, at least for skeletal muscle. Unfortunately, I can't locate any of the videos that really demonstrate this on YouTube; but it makes him able to do some strange-looking things very effectively.

comment by NancyLebovitz · 2012-04-20T18:56:57.899Z · LW(p) · GW(p)

I'm reasonably sure that the important thing is awareness of muscles in systems appropriate for movement [1] rather than as individuals. Herbert had a good intuition there, but Feldenkrais is a real-world method of improving movement. Also take a look at Eric Franklin's books on practical anatomy.

[1] That's approximate phrasing for an approximate idea.

comment by DSimon · 2012-04-05T09:00:48.995Z · LW(p) · GW(p)

It may be a matter of the mind having to first order itself to give the body the correct commands.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-04-05T13:56:22.854Z · LW(p) · GW(p)

That seems fair, but on the other hand, it seems that a primary way of the mind acquiring the order it needs is to start by giving the body commands that the body doesn't follow.

comment by [deleted] · 2012-04-02T20:45:38.979Z · LW(p) · GW(p)

-

comment by Vaniver · 2012-04-01T23:20:25.578Z · LW(p) · GW(p)

For those who feel deeply about contemporary politics, certain topics have become so infected by considerations of prestige that a genuinely rational approach to them is almost impossible.

-George Orwell

Replies from: Multiheaded
comment by Multiheaded · 2012-04-10T14:29:29.316Z · LW(p) · GW(p)

Sadly, there's no need of any adjective before "Politics" here. It's a fully general statement.

Replies from: RobinZ
comment by RobinZ · 2012-04-23T20:43:06.646Z · LW(p) · GW(p)

You may be able to delete the words on either side of the adjective as well.

comment by Eugine_Nier · 2012-04-01T19:40:33.750Z · LW(p) · GW(p)

Truth must necessarily be stranger than fiction, for fiction is the creation of the human mind and therefore congenial to it.

G. K. Chesterton

Replies from: Ezekiel, CronoDAS, Kaj_Sotala, BlazeOrangeDeer
comment by Ezekiel · 2012-04-01T23:27:00.242Z · LW(p) · GW(p)

Zach Wiener's elegant disproof:

Think of the strangest thing that's true. Okay. Now add a monkey dressed as Hitler.

(Although to be fair, it's possible that the disproof fails because "think of the strangest thing that's true" is impossible for a human brain.)

Replies from: Blueberry, Eugine_Nier, army1987
comment by Blueberry · 2012-04-02T07:44:37.720Z · LW(p) · GW(p)

It also fails in the case where the strangest thing that's true is an infinite number of monkeys dressed as Hitler. Then adding one doesn't change it.

More to the point, the comparison is more about typical fiction, rather than ad hoc fictional scenarios. There are very few fictional works with monkeys dressed as Hitler.

Replies from: Eugine_Nier, Ezekiel, TraderJoe
comment by Eugine_Nier · 2012-04-03T02:19:38.149Z · LW(p) · GW(p)

Indeed, I posted this quote partially out of annoyance at a certain type of analysis I kept seeing in the MoR threads. Namely, person X benefited from the way event Y turned out; therefore, person X was behind event Y. After all, thinking like this about real life will quickly turn one into a tin-foil-hat-wearing conspiracy theorist.

Replies from: FiftyTwo, gwern
comment by FiftyTwo · 2012-04-03T21:52:54.356Z · LW(p) · GW(p)

Yes, but in real life the major players don't have the ability to time travel, read minds, become invisible, manipulate probability, etc.; these abilities make complex plans far more plausible than they would be in the real world. (That and conservation of detail.)

Replies from: Pavitra
comment by Pavitra · 2012-04-05T13:20:41.301Z · LW(p) · GW(p)

In real life the major players are immune to mindreading, can communicate securely and instantaneously worldwide, and have tens of thousands of people working under them. You are, ironically, overlooking the strangeness of reality.

Conservation of detail may be a valid argument though.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-04-06T03:46:42.580Z · LW(p) · GW(p)

Conservation of detail may be a valid argument though.

Conservation of detail is one of the memetic hazards of reading too much fiction.

comment by gwern · 2012-04-04T00:51:09.631Z · LW(p) · GW(p)

Namely, person X benefited from the way event Y turned out; therefore, person X was behind event Y.

Which is exactly what MoR tells us to do to analyze it, is it not?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-04-04T03:53:01.529Z · LW(p) · GW(p)

That's still not a reason for assuming everyone is running perfect gambit roulettes.

Replies from: gwern
comment by gwern · 2012-04-04T14:04:50.973Z · LW(p) · GW(p)

You can say that with a straight face after the last few chapters of plotting?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-04-06T03:48:52.329Z · LW(p) · GW(p)

Yes, I was referring to the theories that Dumbledore sabotaged Snape's relationship with Lily so that the boy-who-lived (who hadn't even been born then) would have the experience of being bullied by his potions master.

comment by Ezekiel · 2012-04-02T11:05:42.631Z · LW(p) · GW(p)

Depends on the infinity. Ordinal infinities change when you add one to them.

If we're restricting ourselves to actual published fiction, I present Cory Doctorow's Someone Comes to Town, Someone Leaves Town. The protagonist's parents are a mountain and a washing machine, it gets weirder from there, and the whole thing is played completely straight.

Replies from: gjm
comment by gjm · 2012-04-03T10:58:12.002Z · LW(p) · GW(p)

Ordinal infinities change when you add one to them.

Depends on which end you add one at. :-)

(I mention this not because I think there's any danger Ezekiel doesn't know it, but just because it might pique someone's curiosity.)
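For anyone whose curiosity was piqued: the standard fact behind the joke is that ordinal addition is not commutative. Adding one "on the right" gives a strictly larger ordinal, while adding one "on the left" gets absorbed:

ω + 1 > ω, but 1 + ω = sup{1 + n : n < ω} = ω.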

comment by TraderJoe · 2012-04-05T13:33:06.195Z · LW(p) · GW(p)

[comment deleted]

comment by Eugine_Nier · 2012-04-02T00:24:19.694Z · LW(p) · GW(p)

This quote seems relevant:

They must be true because, if they were not true, no one would have the imagination to invent them.

G. H. Hardy, upon receiving a letter containing mathematical formulae from Ramanujan

comment by A1987dM (army1987) · 2012-04-04T14:29:02.104Z · LW(p) · GW(p)

Doesn't work if (n + 1) monkeys dressed as Hitler are no stranger than n monkeys dressed as Hitler, and n monkeys dressed as Hitler are true.

comment by CronoDAS · 2012-04-13T06:41:39.840Z · LW(p) · GW(p)

Since the beginning not one unusual thing has ever happened.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-04-13T07:03:04.923Z · LW(p) · GW(p)

Eliezer's unconventional definition of "strange" is occasionally annoying.

Replies from: wedrifid
comment by wedrifid · 2012-04-13T07:45:25.858Z · LW(p) · GW(p)

Eliezer's unconventional definition of "strange" is occasionally annoying.

Strange I would almost accept. But in this case the quote is 'unusual'... that's even worse! Unusual fits squarely into the realm of 'actually happens'.

comment by Kaj_Sotala · 2012-04-04T05:57:08.860Z · LW(p) · GW(p)

Also:

Truth is stranger than fiction, because fiction has to make sense.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-04-04T06:32:15.829Z · LW(p) · GW(p)

I was originally going to post that one, but decided to go with Chesterton's version since it better explains what is meant. (At the expense of losing some of the snappiness.)

comment by BlazeOrangeDeer · 2012-04-02T05:59:01.545Z · LW(p) · GW(p)

"Reality is the thing that surprises me." - Paraphrase of EY

comment by VKS · 2012-04-03T20:52:49.224Z · LW(p) · GW(p)

Don't just read it; fight it! Ask your own questions, look for your own examples, discover your own proofs. Is the hypothesis necessary? Is the converse true? What happens in the classical special case? What about the degenerate case? Where does the proof use the hypothesis?

Paul Halmos

comment by Stephanie_Cunnane · 2012-04-09T02:15:40.086Z · LW(p) · GW(p)

From this moment forward, remember this: What you do is infinitely more important than how you do it. Efficiency is still important, but it is useless unless applied to the right things.

-Tim Ferriss, The 4-Hour Workweek

Replies from: CronoDAS
comment by CronoDAS · 2012-04-13T08:18:15.595Z · LW(p) · GW(p)

There is nothing so useless as doing efficiently what should not be done at all.

-- Peter Drucker

(I've quoted this line several times before.)

Replies from: wedrifid
comment by wedrifid · 2012-04-13T09:12:00.905Z · LW(p) · GW(p)

There is nothing so useless as doing efficiently what should not be done at all.

Sure there is. Doing inefficiently what should not be done at all is even more useless. At least if you do it efficiently you can go ahead and do something else sooner.

It seems to me that efficiency is just as useful doing things that should not be done as it is other times, for a fixed amount of doing stuff that shouldn't be done.

Replies from: thomblake
comment by thomblake · 2012-04-13T15:05:44.674Z · LW(p) · GW(p)

Depends on the kind of efficiency, I guess.

If someone is systematically murdering people for an hour, I'd prefer they not get as much murdering done as they could.

Replies from: wedrifid
comment by wedrifid · 2012-04-13T17:50:27.505Z · LW(p) · GW(p)

If someone is systematically murdering people for an hour, I'd prefer they not get as much murdering done as they could.

I did specify "for a fixed amount of doing stuff that shouldn't be done". If they are getting more murdering done, that is probably bad.

comment by Stephanie_Cunnane · 2012-04-03T07:51:19.971Z · LW(p) · GW(p)

In short, and I can't emphasize this strongly enough, a fundamental issue that any theory of psychology ultimately has to face is that brains are useful. They guide behavior. Any brain that didn't cause its owner to do useful--in the evolutionary sense--things, didn't cause reproduction.

-Robert Kurzban, Why Everyone (Else) is a Hypocrite: Evolution and the Modular Mind

comment by jsbennett86 · 2012-04-03T02:17:07.967Z · LW(p) · GW(p)

But when we have these irrational beliefs, these culturally coded assumptions, running so deep within our community and movement, how do we actually change that? How do we get people to further question themselves when they’ve already become convinced that they’re a rational person, a skeptic, and have moved on from irrationality, cognitive distortion and bias?

Well I think what we need to do is to change the fundamental structure and values of skepticism. We need to build our community and movement around slightly different premises.

As it has stood in the past, skepticism has been predicated on a belief in the power of the empirical and rational. It has been based on the premise that there is an empirical truth, and that it is knowable, and that certain tools and strategies like science and logic will allow us to reach that truth. In short, the “old guard” skepticism was based on a veneration of the rational. But the veneration of certain techniques or certain philosophies creates the problematic possibility of choosing to consider certain conclusions or beliefs to BE empirical and rational and above criticism, particularly beliefs derived from the “right” tools, and even more dangerously, to consider oneself “rational”.

...

I believe that in order to be able to question our own beliefs as well as we question those of others, we need to restructure skepticism around awareness of human limitation, irrationality and flaws. Rather than venerating the rational, and aspiring to become some kind of superhuman fully rational vulcan minds, we need to instead create a more human skepticism, built around understanding how belief operates, how we draw conclusions, and how we can cope with the human limitations. I believe we need to remove the focus from aspiring towards ridding ourselves of the irrational, and instead move the focus towards understanding how this irrationality operates and why we believe all the crazy things we believe. We need to position as our primary aspiration not the achievement of a perfect comprehending mind, but instead an ability to maintain constant hesitation and doubt, to always always ALWAYS second-guess our positions and understand that they’re being created through a flawed mind, from flawed perceptions.

Science and reason are excellent tools to allow us to cope with being crazy, irrational human beings, but they CANNOT allow us to transcend that. The instant we begin to believe that we have become A Skeptic, A Rational Person, that is when we’ve fucked up, that is when we stop practicing skepticism, stop keeping an eye out for our mistakes, and begin to imagine our irrational perceptions as perfect rational conclusions. It’s only by building a skepticism based on the practice of doubt, rather than the state of Skeptic, that we’ll truly be able to move on from our assumptions.

Replies from: MixedNuts
comment by MixedNuts · 2012-04-04T16:18:15.093Z · LW(p) · GW(p)

Upvoted because I like Natalie Reed, but this is way too long. The key sentence seems to be

We need to position as our primary aspiration not the achievement of a perfect comprehending mind, but instead an ability to maintain constant hesitation and doubt, to always always ALWAYS second-guess our positions and understand that they’re being created through a flawed mind, from flawed perceptions.

Replies from: jsbennett86
comment by jsbennett86 · 2012-04-05T01:25:21.138Z · LW(p) · GW(p)

Thanks. I didn't wanna post this much, but I was rather too attached to the passage to cut anything else out. Helps to have other eyes.

comment by Klevador · 2012-04-14T04:48:48.730Z · LW(p) · GW(p)

Any collocation of persons, no matter how numerous, how scant, how even their homogeneity, how firmly they profess common doctrine, will presently reveal themselves to consist of smaller groups espousing variant versions of the common creed; and these sub-groups will manifest sub-sub-groups, and so to the final limit of the single individual, and even in this single person conflicting tendencies will express themselves.

— Jack Vance, The Languages of Pao

Replies from: army1987
comment by A1987dM (army1987) · 2012-04-17T10:52:27.367Z · LW(p) · GW(p)

Shorter version:

Quot homines, tot sententiae (as many people, so many opinions)

-- Terence, Phormio

Replies from: MixedNuts
comment by MixedNuts · 2012-04-20T17:52:14.135Z · LW(p) · GW(p)

My favorite:

Two {people, rabbis, economists}, three opinions.

comment by maia · 2012-04-12T17:22:24.995Z · LW(p) · GW(p)

Suppose you know a golfer's score on day 1 and are asked to predict his score on day 2. You expect the golfer to retain the same level of talent on the second day, so your best guesses will be "above average" for the [better-scoring] player and "below average" for the [worse-scoring] player. Luck, of course, is a different matter. Since you have no way of predicting the golfers' luck on the second (or any) day, your best guess must be that it will be average, neither good nor bad. This means that in the absence of any other information, your best guess about the players' score on day 2 should not be a repeat of their performance on day 1. ...

The best predicted performance on day 2 is more moderate, closer to the average than the evidence on which it is based (the score on day 1). This is why the pattern is called regression to the mean. The more extreme the original score, the more regression we expect, because an extremely good score suggests a very lucky day. The regressive prediction is reasonable, but its accuracy is not guaranteed. A few of the golfers who scored 66 on day 1 will do even better on the second day, if their luck improves. Most will do worse, because their luck will no longer be above average.

Now let us go against the time arrow. Arrange the players by their performance on day 2 and look at their performance on day 1. You will find precisely the same pattern of regression to the mean. ... The fact that you observe regression when you predict an early event from a later event should help convince you that regression does not have a causal explanation.

  • Daniel Kahneman, Thinking, Fast and Slow
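Kahneman's setup is easy to check numerically: if each day's score is stable talent plus independent daily luck, selecting on an extreme day-1 score automatically selects for lucky days, so the same golfers score closer to the mean on day 2 — and the effect is symmetric under time reversal. A minimal sketch (the score distribution, sample size, and 10% cutoff are illustrative assumptions, not from the book):

```python
import random
import statistics

random.seed(42)

N = 10_000
# Each golfer's score = stable talent + independent daily luck (lower is better).
talent = [random.gauss(72, 3) for _ in range(N)]
day1 = [t + random.gauss(0, 3) for t in talent]
day2 = [t + random.gauss(0, 3) for t in talent]

overall = statistics.mean(day1)

# The 10% best (lowest) day-1 scores...
best = sorted(range(N), key=lambda i: day1[i])[: N // 10]
best_day1 = statistics.mean(day1[i] for i in best)
best_day2 = statistics.mean(day2[i] for i in best)
# ...average closer to the overall mean on day 2.
print(best_day1, best_day2, overall)

# Against the time arrow: condition on day 2, look back at day 1.
best2 = sorted(range(N), key=lambda i: day2[i])[: N // 10]
back_day2 = statistics.mean(day2[i] for i in best2)
back_day1 = statistics.mean(day1[i] for i in best2)
# The same regression pattern appears, with no causation involved.
print(back_day2, back_day1, overall)
```

Both printouts show the selected group's mean in the *other* day sitting partway back toward the overall average — pure selection on luck, no causal story needed.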
Replies from: CronoDAS, army1987
comment by CronoDAS · 2012-04-13T08:34:52.018Z · LW(p) · GW(p)

If you know the scores of two different golfers on day 1, then you know more than if you know the score of only one golfer on day 1. You can't predict the direction in which regression to the mean will occur if your data set is a single point.

The following all have different answers:

I play a certain video game a lot. The last time I played it, my score was 39700. What's your best guess for my score the next time I play it?

(The answer is 39700; I'm probably not going to improve with practice, and you have no way to know if 39700 is unusually good or unusually bad.)

My friend and I both play a certain video game a lot. The last time I played it, my score was 39700. The last time my friend played it, his score was 32100. What's your best guess for my score the next time I play it?

(The answer is some number less than 39700; knowing that my friend got a lower score gives you a reason to believe that 39700 might be higher than normal.)

I played a video game for the first time yesterday. My score was 39700, and higher scores are better than lower ones. What's your best guess for my score the next time I play it?

(The answer is some number higher than 39700, because I'm no longer an absolute beginner.)

Replies from: maia
comment by maia · 2012-04-13T16:18:20.992Z · LW(p) · GW(p)

True, a single data point can't give you knowledge of regression effects. In the context of the original problem, Kahneman assumed that you had access to the average score of all the golfers on the first day.

I played a video game for the first time yesterday. My score was 39700, and higher scores are better than lower ones. What's your best guess for my score the next time I play it? (The answer is some number higher than 39700, because I'm no longer an absolute beginner.)

I'm not sure it's true that the answer is higher than 39700 in this case. It depends on whether you have knowledge of how people generally improve, and whether your score is higher than average for an absolute beginner. Since unknown factors could adjust the score either up or down, I would probably just guess that it will be the same the next day.

Replies from: RobinZ
comment by RobinZ · 2012-04-16T17:16:56.014Z · LW(p) · GW(p)

The existence of factors which could adjust the score either up or down does not indicate which factors dominate. In this case, you have no information which suggests that 39700 is either above or below the median, and therefore these two cases must be assigned equal probability - canceling out any "regression to the mean" effects you could have predicted. Similar arguments apply to other effects which change the score.

Replies from: maia, Eugine_Nier
comment by maia · 2012-04-16T18:55:52.535Z · LW(p) · GW(p)

So you estimate "regression to the mean" effects as zero, and base your estimate on any other effects you know about and how strong you think they are. That makes sense. Thanks for the correction!

comment by Eugine_Nier · 2012-04-17T05:19:36.970Z · LW(p) · GW(p)

In this case, you have no information which suggests that 39700 is either above or below the median, and therefore these two cases must be assigned equal probability

Not quite; you have some background information about the range of scores video games usually employ.

Replies from: RobinZ
comment by RobinZ · 2012-04-17T05:54:10.781Z · LW(p) · GW(p)

And, I suppose, information about the probability of people mentioning average scores. I concede that either factor could justify arguing that the score should decrease.

comment by A1987dM (army1987) · 2012-04-12T18:34:06.300Z · LW(p) · GW(p)

It reminds me of E.T. Jaynes' explanation of why time-reversible dynamic laws for (say) sugar molecules in water lead to a time-irreversible diffusion equation.

comment by Bugmaster · 2012-04-05T05:48:37.007Z · LW(p) · GW(p)

-- So... if they've got armor on, it's a battle !
-- And who told you that ?
-- A knight...
-- How'd you know he was a knight ?
-- Well... that's 'cause... he'd got armor on ?
-- You don't have to be a knight to buy armor. Any idiot can buy armor.
-- How do you know ?
-- 'Cause I sold armor.

-Game of Thrones (TV show)

comment by Kutta · 2012-04-01T13:00:30.729Z · LW(p) · GW(p)

He who knows how to do something is the servant of he who knows why that thing must be done.

-- Isuna Hasekura, Spice and Wolf vol. 5 ("servant" is justified by the medieval setting).

Replies from: John_Maxwell_IV, Blueberry, Bugmaster
comment by John_Maxwell (John_Maxwell_IV) · 2012-04-02T06:12:57.170Z · LW(p) · GW(p)

I don't get it.

Replies from: Vaniver
comment by Vaniver · 2012-04-02T17:21:34.160Z · LW(p) · GW(p)

Short explanation: the person that knows why a thing must be done is generally the person who decides what must be done. Application to rationality: instrumental rationality is a method that serves goals. The part that values and the part that implements are distinct. (Also, you can see the separation of terminal and instrumental values.)

Replies from: gwern
comment by gwern · 2012-04-04T00:49:55.789Z · LW(p) · GW(p)

And explains why businessmen keep more of the money than the random techies they hire.

comment by Blueberry · 2012-04-02T07:48:02.386Z · LW(p) · GW(p)

Would "servant" not otherwise be justified?

Replies from: Nornagest
comment by Nornagest · 2012-04-02T08:04:09.127Z · LW(p) · GW(p)

It's fairly benign, but looks a little archaic -- not so archaic that it'd have to be medieval, though. The rest of the phrasing is fairly modern, or I'd probably have assumed it was a quote from anywhere from the Enlightenment up to the Edwardian period. It has the ring of something a Victorian aphorist might say.

comment by Bugmaster · 2012-04-01T17:49:54.705Z · LW(p) · GW(p)

I think the quote should start with, "he WHO knows...".

comment by [deleted] · 2012-04-16T05:49:36.348Z · LW(p) · GW(p)

The fundamental rule of political analysis from the point of psychology is, follow the sacredness, and around it is a ring of motivated ignorance.

--Jonathan Haidt, source

Replies from: Multiheaded
comment by Multiheaded · 2012-04-16T12:07:56.393Z · LW(p) · GW(p)

He also talks about how sacredness is one of the fundamental values for human communities, and how liberal/left-leaning theorists don't pay enough attention to it (and refuse to acknowledge their own sacred/profane areas).

I have more to say about his values theory, I'll post some thoughts later.

UPD: I wrote a little something, now I'm just gonna ask Konkvistador whether he thinks it's neutral enough or too political for LW.

Replies from: None
comment by [deleted] · 2012-04-16T15:03:56.452Z · LW(p) · GW(p)

Please make sure you do. I suspect it will be interesting. :)

comment by A1987dM (army1987) · 2012-04-02T17:13:52.552Z · LW(p) · GW(p)

I first encountered this in a physics newsgroup, after some crank was taking some toy model way too seriously:

Analogies are like ropes; they tie things together pretty well, but you won't get very far if you try to push them.

Tom Davidson

(I remembered something like "if you pull them too much, they break down", actually...)

comment by John_Maxwell (John_Maxwell_IV) · 2012-04-02T05:39:48.041Z · LW(p) · GW(p)

Don't kid yourself: just because you got the correct numerical answer to a problem is not justification that you understand the physics of the problem. You must understand all the logical steps in arriving at that solution or you have gained nothing, right answer or not.

My old physics professor David Newton (yes, apparently that's the name he was born with) on how to study physics.

Replies from: gwern, BlazeOrangeDeer
comment by gwern · 2012-04-07T19:30:53.952Z · LW(p) · GW(p)

A novice was trying to fix a broken Lisp machine by turning the power off and on.

Knight, seeing what the student was doing, spoke sternly: “You cannot fix a machine by just power-cycling it with no understanding of what is going wrong.”

Knight turned the machine off and on.

The machine worked.

--Some AI Koans, collected by ESR

comment by BlazeOrangeDeer · 2012-04-02T05:57:41.774Z · LW(p) · GW(p)

My physics teacher is always sure to clarify which parts of a problem are physics and which are math. Physics is usually the part that allows you to set up the math.

comment by [deleted] · 2012-04-18T12:44:24.504Z · LW(p) · GW(p)

A weak man is not as happy as that same man would be if he were strong. This reality is offensive to some people who would like the intellectual or spiritual to take precedence. It is instructive to see what happens to these very people as their squat strength goes up.

-- Mark Rippetoe, Starting Strength

Replies from: Manfred, Nornagest, Incorrect
comment by Manfred · 2012-04-22T05:21:14.804Z · LW(p) · GW(p)

Sample: men who come to this guy to get stronger, I assume?

comment by Nornagest · 2012-04-22T06:37:02.955Z · LW(p) · GW(p)

Hmm. This sort of thing seems plausible, but I wonder how much of it is strength-specific? I've heard of eudaimonic effects for exercise in general (not necessarily strength training) and for mastering any new skill, and I doubt he's filtering those out properly.

Replies from: army1987
comment by A1987dM (army1987) · 2012-04-22T13:17:16.514Z · LW(p) · GW(p)

Why was this downvoted?

comment by Incorrect · 2012-04-22T06:49:00.895Z · LW(p) · GW(p)

He's ignoring that people might not like how larger muscles look.

And personally (though I don't care much) I would only care about practical athletic ability, not weight lifting.

Replies from: realitygrill, army1987
comment by realitygrill · 2012-04-29T04:04:25.550Z · LW(p) · GW(p)

I understand this line of thought, but... strength doesn't have to be developed through weights, strength increase doesn't necessarily mean much hypertrophy, and most importantly strength is a prerequisite/accelerator for increasing pretty much all athletic abilities (power, flexibility, endurance...)

comment by A1987dM (army1987) · 2012-04-22T10:59:38.125Z · LW(p) · GW(p)

He's ignoring that people might not like how larger muscles look.

I guess the relation between muscle mass and physical attractiveness is non-monotonic, so a marginal increase in muscle mass would make some people look marginally better and other people look marginally worse. (I suspect the median Internet user is in the former group, though.)

ETA: Judging from the picture on Wikipedia, Rippetoe himself looks like someone who would look better if he lost some weight (but I'm a heterosexual male, so my judgement might be inaccurate).

Replies from: None
comment by [deleted] · 2012-04-22T17:09:29.729Z · LW(p) · GW(p)

Judging from the picture on Wikipedia, Rippetoe himself looks like someone who would look better if he lost some weight (but I'm a heterosexual male, so my judgement might be inaccurate).

I'm somewhat annoyed that the comments on this thread are vapid, but this might be worth responding to. It doesn't particularly matter whether or not Rippetoe is himself currently ripped -- see this Wikipedia article of yours for his domain expert credentials:

He is one of the few strength athletics authorities to publish both peer-reviewed articles as well as books for the lay population. Unlike most strength and conditioning academics, he has several decades of practical application as an elite-level strength coach, former competitive powerlifter, and a current gym owner.

Secondly, notice that he was a competitive powerlifter thirty years ago. Senescence is a bitch.

Replies from: army1987
comment by A1987dM (army1987) · 2012-04-22T18:23:54.520Z · LW(p) · GW(p)

see this Wikipedia article of yours:

Why “of yours”? I've never edited it.

for his domain expert credentials

I didn't dispute them. The grandparent and great-grandparent are about “how larger muscles look”. I can't see how the passage you quote is relevant to the fact that I think he's ugly.

comment by Pavitra · 2012-04-05T12:59:38.481Z · LW(p) · GW(p)

In the real world things are very different. You just need to look around you. Nobody wants to die that way. People die of disease and accident. Death comes suddenly and there is no notion of good or bad. It leaves, not a dramatic feeling but great emptiness. When you lose someone you loved very much you feel this big empty space and think, 'If I had known this was coming I would have done things differently.'

Yoshinori Kitase

Replies from: gwern
comment by gwern · 2012-04-07T20:39:20.514Z · LW(p) · GW(p)

Context: Aeris dies. (Spoilers!)

Replies from: gRR
comment by gRR · 2012-04-07T21:34:32.861Z · LW(p) · GW(p)

It would be interesting to calculate the total utility of an author wantonly murdering a universally beloved character. May turn out to be quite a crime...

Replies from: Nornagest, Desrtopa
comment by Nornagest · 2012-04-12T04:47:45.238Z · LW(p) · GW(p)

Well, it's certainly not limited to killing off characters, but people have been writing about emotional release as a response to tragedy in drama for quite a long time. Generally it's thought of as a good thing, if not necessarily a pleasant one, and I'm inclined to agree with this analysis; people go into fiction looking for an emotional response, and the enduring popularity of tragic storytelling suggests that they aren't exclusively looking for emotions generally regarded as positive.

Content warnings pointing to what a work's going for might not be a bad idea from a utilitarian standpoint, though. I personally handle tragedy well, for example, but I have a lot of trouble with cringe comedy.

Replies from: CronoDAS
comment by CronoDAS · 2012-04-13T08:15:48.135Z · LW(p) · GW(p)

I personally handle tragedy well, for example, but I have a lot of trouble with cringe comedy.

I've had to leave the room because I get embarrassed just watching characters in that kind of show...

comment by Desrtopa · 2012-04-12T04:23:33.891Z · LW(p) · GW(p)

Well, one of my favorite authors is infamous for doing this, and I for one think his works are the better for it. It certainly hasn't prevented them from becoming very popular.

Replies from: taelor
comment by taelor · 2012-04-12T05:29:40.985Z · LW(p) · GW(p)

Upvoted, for having the exact same thought as I did when reading the parent post.

Replies from: Document
comment by Document · 2012-05-10T01:35:19.533Z · LW(p) · GW(p)

Maybe you were both primed by gRR's username.

comment by tgb · 2012-04-01T13:30:37.257Z · LW(p) · GW(p)

Who has seen the wind?
Neither I nor you:
But when the leaves hang trembling,
The wind is passing through.

Who has seen the wind?
Neither you nor I:
But when the trees bow down their heads,
The wind is passing by.

-- Christina Rossetti, Who has seen the Wind?

Replies from: BlazeOrangeDeer
comment by BlazeOrangeDeer · 2012-04-02T05:44:59.714Z · LW(p) · GW(p)

Interestingly enough, this is my friend's parents' response when asked why they believe in an invisible god. I suppose they haven't considered that the leaves and trees may be messed up enough to shake of their own accord.

Replies from: tgb
comment by tgb · 2012-04-02T11:12:25.866Z · LW(p) · GW(p)

Interesting.

It is rather unlikely that Christina Rossetti intended this to be a rationalist quote in a sense we would identify with. I do read it as an argument for scientific realism and belief in the implied invisible, but it seems likely that she was merely being poetic or that she was making a pro-religion argument, given her background. Of course the beauty of this system is that if someone quotes this to you as an argument for God (or anything), you can ask them what the leaves and trees are for their wind and thus get at their true argument.

Furthermore, the context in which I first read it is the video game Braid, juvpu cerfragrq vg va gur pbagrkg bs gur chefhvg bs fpvrapr. I would highly recommend this game, by the way.

Replies from: wedrifid, wirov, BlazeOrangeDeer
comment by wedrifid · 2012-04-07T12:37:37.208Z · LW(p) · GW(p)

Furthermore, the context in which I first read it is the video game Braid, juvpu cerfragrq vg va gur pbagrkg bs gur chefhvg bs fpvrapr. I would highly recommend this game, by the way.

Hey! It's Super Mario with built in cheat modes!

comment by wirov · 2012-04-05T12:00:25.044Z · LW(p) · GW(p)

Could you rot13 the word fpvrapr in the last paragraph? For me, finally getting the meaning of the princess at the end was such a beautiful realization that I wouldn't like to spoil it for others…

(I highly recommend the game too. In fact, I've already bought it several times – once for me, and as a gift for others.)

Replies from: tgb
comment by tgb · 2012-04-07T11:45:59.263Z · LW(p) · GW(p)

Done and agreed. I am ashamed to admit that I first played it from a pirated copy - I later bought it, and I intend to buy Jonathan Blow's next game The Witness when it comes out. But I still feel bad about pirating it...

comment by BlazeOrangeDeer · 2012-04-02T19:00:26.052Z · LW(p) · GW(p)

I love that game, it's been a while since I played it though.

Replies from: MixedNuts
comment by MixedNuts · 2012-04-09T12:03:43.880Z · LW(p) · GW(p)

I third the recommendation.

comment by maia · 2012-04-12T17:51:31.707Z · LW(p) · GW(p)

A shortcut for making less-biased predictions, taking base averages into account.

Regarding this problem: "Julie is currently a senior in a state university. She read fluently when she was four years old. What is her grade point average (GPA)?"

Recall that the correlation between two measures - in the present case, reading age and GPA - is equal to the proportion of shared factors among their determinants. What is your best guess about that proportion? My most optimistic guess is about 30%. Assuming this estimate, we have all we need to produce an unbiased prediction. Here are the directions for how to get there in four simple steps:

  1. Start with an estimate of average GPA.
  2. Determine the GPA that matches your impression of the evidence.
  3. Estimate the correlation between your evidence and GPA.
  4. If the correlation is .30, move 30% of the distance from the average to the matching GPA.
  • Daniel Kahneman, Thinking, Fast and Slow
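Kahneman's four steps amount to linear shrinkage: anchor on the base rate and move toward the intuitive estimate only as far as the correlation justifies. A minimal sketch of the recipe (the specific GPA numbers are my own illustrative assumptions, not from the book):

```python
def regressive_prediction(base_rate, intuitive_match, correlation):
    """Kahneman's four-step recipe: start at the average, then move
    a fraction `correlation` of the distance toward the GPA that
    matches your impression of the evidence."""
    return base_rate + correlation * (intuitive_match - base_rate)

# Illustrative numbers (assumptions): average GPA 3.0, an impression of
# "read fluently at four" matching a 3.8 GPA, correlation estimate 0.30.
# The prediction moves 30% of the way from 3.0 toward 3.8.
print(regressive_prediction(3.0, 3.8, 0.30))
```

With a correlation of 0 the prediction collapses to the base rate; with a correlation of 1 it matches the intuitive estimate exactly — the unbiased prediction always lies between the two.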
comment by iwdw · 2012-04-24T15:48:57.305Z · LW(p) · GW(p)

The fact that I can knock 12 points off a Hamilton Depression scale with an Ambien and a Krispy Kreme should serve as a warning about the validity and generalizability of the term "antidepressant."

comment by [deleted] · 2012-04-27T07:39:39.129Z · LW(p) · GW(p)

Generally when I see write-ups of statistical results, I immediately go to the original source. The fact is that the media is liable to simply shade and color the results to suit their own pat narrative. That’s just human nature.

--Razib Khan, source

comment by chaosmosis · 2012-04-18T17:29:45.375Z · LW(p) · GW(p)

"When I was young I shoved my ignorance in people's faces. They beat me with sticks. By the time I was forty my blunt instrument had been honed to a fine cutting point for me. If you hide your ignorance, no one will hit you and you'll never learn."

-- Ray Bradbury, Fahrenheit 451

I'll be sticking around a while, although I'm not doing too well right now (check the HPMOR discussion thread for those of you interested in viewing the carnage, it's beautiful). It's not really a rationality problem, but I need to learn how to deal with other people who have big egos, because apparently only two or three people received my comments the way I meant them to come across. Plus, I like the idea of losing so much karma in one day and then eventually earning it all back and being recognized as a super rationalist. Gaining the legitimate approval of a group who now have a lot against me will be a decent challenge.

Also I doubt that I would be able to resist commenting even if I wanted to. That's probably mostly it.

Replies from: MixedNuts, thomblake, None, wedrifid, roland, chaosmosis
comment by MixedNuts · 2012-04-20T17:48:27.133Z · LW(p) · GW(p)

Tips for dealing with people with big egos:

  • Don't insult anyone, ever. If Wagner posts, either say "Hmm, why do you believe Mendelssohn's music to be derivative?" or silently downvote, but don't call him an antisemitic piece of shit.
  • Attributing negative motivations (disliking you, wanting to win a debate, being prejudiced) counts as an insult.
  • Attributing any kind of motivation at all is pretty likely to count as an insult. You can ask about motivation, but only list positive or neutral ones or make it an open question.
  • Likewise, you can ask why you were downvoted. This very often gets people to upvote you again if they were wrong to downvote you (and if not, you get the information you want). Any further implication that they were wrong is an insult.
  • Stick closely to the question and do not involve the personalities of debaters.
  • Exception to the above: it's okay to pass judgement on a personality trait if it's a compliment. If you can't always avoid insulting people, occasionally complimenting them can help.
  • A lot of things are insults. You will slip up. This won't make people dislike you.
  • If you know what a polite and friendly tone is, have one.
  • If someone isn't polite and friendly, it means you need to be more polite and friendly.
  • If they're being very rude and mean and it's getting annoying, you can gently mention it. Still make the rest of your post polite and friendly and about the question.
  • If the "polite and about the question" part is empty, don't post.
  • If you have insulted someone in a thread - either more than once, or once and people are still hostile despite you being extra nice afterwards - people will keep being hostile in the thread and you should probably walk away from it.
  • If hostility in a thread is leaking into your mood, walk away from the whole site for a little while.
  • When you post in another thread, people will not hold any grudges against you from previous threads. Sorry for your epic quest, but we don't have much against you right now.
  • Apologies (rather than silence) are a good idea if you were clearly in the wrong and not overly tempted to add "but".

On politeness:

  • Some politeness norms are stupid and harmful and wrong, like "You must not criticize even if explicitly asked to" or "Disagreement is impolite". Fortunately, we don't have these here.
  • Some are good, like not insulting people. Insulting messages get across poorly. This happens even when people ignore the insult to answer the substance, because the message is overloaded.
  • Some are mostly local communication protocols that help but can be costly to constrain your message around. It's okay to drop them if you can't bear the cost.
  • Some are about fostering personal liking between people. They're worthwhile to people who want that and noise to people who don't.
  • Taking pains to be polite is training wheels. People who are good with words can say precisely and concisely what they mean in a completely neutral tone. People who aren't are injecting lots of accidental interpersonal content, so we need to make it harmless explicitly.

People who are exempted:

  • The aforementioned people, who will never accidentally insult anyone;
  • People whose contribution is so incredibly awesome that it compensates for being insufferable; I know of a few but none on LessWrong;
  • wedrifid, who is somehow capable of pleasant interaction while being a complete jerk.
Replies from: TheOtherDave, komponisto
comment by TheOtherDave · 2012-04-20T19:12:29.235Z · LW(p) · GW(p)

I'll add to this that actually paying attention to wedrifid is instructive here.

My own interpretation of wedrifid's behavior is that mostly s/he ignores all of these ad-hoc rules in favor of:
1) paying attention to the status implications of what's going on,
2) correctly recognizing that attempts to lower someone's status are attacks
3) honoring the obligations of implicit social alliances when an ally is attacked

I endorse this and have been trying to get better about #3 myself.

Replies from: MixedNuts, Wei_Dai
comment by MixedNuts · 2012-04-20T19:29:28.370Z · LW(p) · GW(p)

Might be too advanced for someone who just learned that saying "Please stop being stupid." is a bad idea.

Replies from: TheOtherDave, wedrifid
comment by TheOtherDave · 2012-04-20T19:42:42.481Z · LW(p) · GW(p)

Sure. Then again, if you'd only intended that for chaosmosis' benefit, I assume you'd have PMed it.

comment by wedrifid · 2012-04-21T00:24:39.664Z · LW(p) · GW(p)

Might be too advanced for someone who just learned that saying "Please stop being stupid." is a bad idea.

Well... I've seen people use nearly that exact phrase to great effect at times... But that's not the sort of thing you'd want to include in a 'basics' list either.

Just as with fashion, it is best to follow the rules until you understand the rules well enough to know exactly how they work and why a particular exception applies!

comment by Wei Dai (Wei_Dai) · 2012-04-20T20:53:15.941Z · LW(p) · GW(p)

The phrase "social alliances" makes me uneasy with the fear that if everyone did #3, LW would degenerate into typical green vs blue debates. Can you explain a bit more why you endorse it?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-20T23:10:33.364Z · LW(p) · GW(p)

If Sam and I are engaged in some activity A, and Pat comes along and punishes Sam for A or otherwise interferes with Sam's ability to engage in A...
...if on reflection I endorse A, then I endorse interfering with Pat and aiding Sam, for several reasons: it results in more A, it keeps me from feeling like a coward and a hypocrite, and I establish myself as a reliable ally. I consider that one of the obligations of social alliance.
...if on reflection I reject A, then I endorse discussing the matter with Sam in private. Ideally we come to agreement on the matter, and either it changes to case 1, or I step up alongside Sam and we take the resulting social status hit of acknowledging our error together. This, too, I consider one of the obligations of social alliance.
...if on reflection I reject A and I can't come to agreement with Sam, I endorse acknowledging that I've unilaterally dissolved the aspect of our social alliance that was mediated by A. (Also, I take that status hit all by myself, but that's beside the point here.)

I agree with you that if I instead skip the reflective step and reflexively endorse A, that quickly degenerates into pure tribal warfare. But the failure in this case is not in respecting the alliance, it's failing to reflect on whether I endorse A. If I do neither, then the community doesn't degenerate into tribal warfare, it degenerates into chaos.

Admittedly, chaos can be more fun, but I don't really endorse it.

All of that said, I do recognize that explicitly talking about "social alliances" (and, indeed, explicitly talking about social status at all) is a somewhat distracting thing to do, and doesn't help me make myself understood especially well to most audiences. It was kind of a self-indulgent comment, in retrospect, although an accurate one (IMO).

(I feel vaguely like Will_Newsome, now. I wonder if that's a good thing.)

Replies from: wedrifid, Wei_Dai, wedrifid
comment by wedrifid · 2012-04-21T06:05:17.605Z · LW(p) · GW(p)

I feel vaguely like Will_Newsome, now. I wonder if that's a good thing.

Start to worry if you begin to feel morally obliged to engage in activity 'Z' that neither you, Sam or Pat endorse but which you must support due to acausal social allegiance with Bink mediated by the demon X(A/N)th, who is responsible for UFOs, for the illusion of stars that we see in the sky and also divinely inspired the Bhagavad-Gita.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-21T15:20:55.958Z · LW(p) · GW(p)

Been there, done that. (Not specifically. It would be creepy if you'd gotten the specifics right.)
I blame the stroke, though.

Replies from: wedrifid
comment by wedrifid · 2012-04-21T17:54:06.968Z · LW(p) · GW(p)

Been there, done that. (Not specifically. It would be creepy if you'd gotten the specifics right.) I blame the stroke, though.

Battling your way to sanity against corrupted hardware has the potential makings of a fascinating story.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-21T18:56:08.500Z · LW(p) · GW(p)

It wasn't quite as dramatic as you make it sound, but it was certainly fascinating to live through.
The general case is here.
The specifics... hm.
I remain uncomfortable discussing the specifics in public.

comment by Wei Dai (Wei_Dai) · 2012-04-21T00:43:34.244Z · LW(p) · GW(p)

if on reflection I endorse A, then I endorse interfering with Pat and aiding Sam, for several reasons: it results in more A, it keeps me from feeling like a coward and a hypocrite, and I establish myself as a reliable ally. I consider that one of the obligations of social alliance.

Is establishing yourself as a reliable ally an instrumental or terminal goal for you? If the former, what advantages does it bring in a group blog / discussion forum like this one? The kinds of alliances you've mentioned so far are temporary ones formed implicitly by engaging someone in discussion, but people will discuss things with you if they think your comments are interesting, with virtually no consideration for how reliable you are as an ally. Are you hoping to establish other kinds of alliances here?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-21T01:06:07.672Z · LW(p) · GW(p)

Is establishing yourself as a reliable ally an instrumental or terminal goal for you?

Instrumental.

If the former, what advantages does it bring in a group blog / discussion forum like this one?

Trust, mostly. Which is itself an instrumental goal, of course, but the set of advantages that being trusted provides in a discussion is so ramified I don't know how I could begin to itemize it.
To pick one that came up recently, though, here's a discussion of one of the advantages of trust in a forum like this one, related to trolley problems and similar hypotheticals.
Another one that comes up far more often is other people's willingness to assume, when I say things that have both a sensible and a nonsensical interpretation, that I mean the former.

The kinds of alliances you've mentioned so far are temporary ones formed implicitly by engaging someone in discussion, but people will discuss things with you if they think your comments are interesting, with virtually no consideration for how reliable you are as an ally.

Yes, I agree that when people form implicit alliances by (for example) engaging someone in discussion, they typically give virtually no explicit consideration for how reliable I am as an ally.

If you mean to say further that it doesn't affect them at all, I mostly disagree, but I suspect that at this point it might be useful to Taboo "ally."

People's estimation of how reliable I am as a person to engage in discussion with, for example, certainly does influence their willingness to engage me in discussion. And vice-versa. There are plenty of people I mostly don't engage in discussion, because I no longer trust that they will engage reliably.

Are you hoping to establish other kinds of alliances here?

Not that I can think of, but honestly this question bewilders me, so it's possible that you're asking about something I'm not even considering. What kind of alliances do you have in mind?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-04-22T02:19:03.708Z · LW(p) · GW(p)

To pick one that came up recently, though, here's a discussion of one of the advantages of trust in a forum like this one, related to trolley problems and similar hypotheticals. Another one that comes up far more often is other people's willingness to assume, when I say things that have both a sensible and a nonsensical interpretation, that I mean the former.

It's not clear to me that these attributes are strongly (or even positively) correlated with willingness to "stick up" for a conversation partner, since typically this behavioral tendency has more to do with whether a person is socially aggressive or timid. So by doing that, you're mostly signaling that you're not timid, with "being a good discussion partner" a much weaker inference, if people think in that direction at all. (This is the impression I have of wedrifid, for example.)

What kind of alliances do you have in mind?

I didn't have any specific kind of alliances in mind, but just thought the question might be worth asking. Now that I think about it, it might be, for example, that you're looking to make real-life friends, or contacts for advancing your career, or hoping to be recruited by SIAI.

Replies from: wedrifid, TheOtherDave
comment by wedrifid · 2012-04-22T14:22:44.834Z · LW(p) · GW(p)

It's not clear to me that these attributes are strongly (or even positively) correlated with willingness to "stick up" for a conversation partner, since typically this behavioral tendency has more to do with whether a person is socially aggressive or timid. So by doing that, you're mostly signaling that you're not timid

This model of the world does an injustice to a class of people I hold in high esteem (those who are willing to defend others against certain types of social aggression even at cost to themselves) and doesn't seem to be a very accurate description of reality. A lot of information - and information I consider important at that - can be gained about a person simply by seeing who they choose to defend in which circumstances. Sure, excessive 'timidity' can serve to suppress this kind of behavior and so information can be gleaned about social confidence and assertiveness by seeing how freely they intervene. But to take this to the extreme of saying you are mostly signalling that you're not timid seems to be a mistake.

In my own experience - from back when I was timid in the extreme - the sort of "sticking up for" others, jumping to their defense against unfair or undesirable aggression, was one of the things that could break me out of my shell. To say that my defiance of my nature at that time was really just me being not timid after all would be to make a lie of the battle of rather significant opposing forces within the mind of that former self.

(This is the impression I have of wedrifid, for example.)

Merely that I am bold and that my behavioral tendencies and strategies in this kind of area are just signals of that boldness? Dave's model seems far more accurate and useful in this case.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-04-22T19:46:27.641Z · LW(p) · GW(p)

Merely that I am bold and that my behavioral tendencies and strategies in this kind of area are just signals of that boldness? Dave's model seems far more accurate and useful in this case.

I find that my brain doesn't automatically build detailed models of LW participants, even the most prominent ones like yourself, and I haven't found a strong reason to do so consciously, using explicit reasoning, except when I engage in discussion with someone, and even then I only try to model the part of their mind most relevant to the discussion at hand.

I realize that I may be engaging in typical mind fallacy in thinking that most other people are probably like me in this regard. If I am, I'd be curious to find out.

comment by TheOtherDave · 2012-04-22T05:38:46.931Z · LW(p) · GW(p)

Fair enough; it may be that I overestimate the value of what I'm calling trust here.

Just for my own clarity, when you say that what I'm doing is signaling my lack of timidity, are you referring to my actual behavior on this site, or are you referring to the behavior we've been discussing on this thread (or are they equivalent)?

I'm not especially looking to make real-life friends, though there are folks here who I wouldn't mind getting to know in real life. Ditto work contacts. I have no interest in working for SI.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-04-22T09:38:25.337Z · LW(p) · GW(p)

I was talking about the abstract behavior that we were discussing.

comment by wedrifid · 2012-04-21T05:46:48.164Z · LW(p) · GW(p)

If Sam and I are engaged in some activity A, and Pat comes along and punishes Sam for A or otherwise interferes with Sam's ability to engage in A...
...if on reflection I endorse A, then I endorse interfering with Pat and aiding Sam, for several reasons: it results in more A, it keeps me from feeling like a coward and a hypocrite, and I establish myself as a reliable ally. I consider that one of the obligations of social alliance.
...if on reflection I reject A, then I endorse discussing the matter with Sam in private. Ideally we come to agreement on the matter, and either it changes to case 1, or I step up alongside Sam and we take the resulting social status hit of acknowledging our error together. This, too, I consider one of the obligations of social alliance.
...if on reflection I reject A and I can't come to agreement with Sam, I endorse acknowledging that I've unilaterally dissolved the aspect of our social alliance that was mediated by A. (Also, I take that status hit all by myself, but that's beside the point here.)

I really like your illustration here. To the extent that this is what you were trying to convey by "3)" in your analysis of wedrifid's style then I endorse it. I wouldn't have used the "alliances" description since that could be interpreted in a far more specific and less desirable way (like how Wei is framing it). But now that you have unpacked your thinking here I'm happy with it as a simple model.

Note that depending on the context there are times where I would approve of various combinations of support or opposition to each of "Sam", "Pat" and "A". In particular, there are many behaviors "A" whose very execution will immediately place the victim of said behavior into the role of "ally that I am obliged to support".

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-21T15:03:47.310Z · LW(p) · GW(p)

Yeah, agreed about the distracting phrasing. I find it's a useful way for me to think about it, as it brings into sharp relief the associated obligations for mutual support, which I otherwise tend to obfuscate, but talking about it that way tends to evoke social resistance.

Agreed that there are many other scenarios in addition to the three I cite, and the specifics vary; transient alliances in a multi-agent system can get complicated.

Also, if you have an articulable model of how you make those judgments I'd be interested, especially if it uses more socially acceptable language than mine does.

Edit: Also, I'm really curious as to the reasoning of whoever downvoted that. I commit to preserving that person's anonymity if they PM me about their reasoning.

Replies from: wedrifid
comment by wedrifid · 2012-04-21T17:29:58.199Z · LW(p) · GW(p)

I'm really curious as to the reasoning of whoever downvoted that.

For what it is worth, sampling over time suggests multiple people - at one point there were multiple upvotes.

I'm somewhat less curious. I just assumed it was people from the 'green' social alliance acting to oppose the suggestion that people acting out the obligations of social allegiance is a desirable and necessary mechanism by which a community preserves that which is desired and prevents chaos.

comment by komponisto · 2012-04-22T20:36:58.618Z · LW(p) · GW(p)

wedrifid, who is somehow capable of pleasant interaction while being a complete jerk

Regardless of whether or not this is compatible with being a "complete jerk" in your sense, I wish to point out that wedrifid is in many respects an exemplary Less Wrong commenter. There are few others I can think of who are simultaneously as (1) informative, including about their own brain state, (2) rational, especially in the sense of being willing and able to disagree within factions/alliances and agree across them, and (3) socially clueful, in the sense of being aware of the unspoken interpersonal implications of all discourse and putting in the necessary work to manage these implications in a way compatible with one's other goals (naturally the methods used are community-specific but that is more than good enough).

In saying this, I don't know whether I'm expanding on your point or disagreeing with it.

Replies from: Wei_Dai, wedrifid
comment by Wei Dai (Wei_Dai) · 2012-04-24T05:50:04.643Z · LW(p) · GW(p)

I would be interested in having wedrifid write a post systematically explaining his philosophy of how to participate on LW, because the bits and pieces of it that I've seen so far (your comment, TheOtherDave's, this comment by wedrifid) are not really forming into a coherent whole for me.

Replies from: wedrifid
comment by wedrifid · 2012-04-24T06:45:45.712Z · LW(p) · GW(p)

I would be interested in having wedrifid write a post systematically explaining his philosophy of how to participate on LW, because the bits and pieces of it that I've seen so far (your comment, TheOtherDave's, this comment by wedrifid) are not really forming into a coherent whole for me.

That would be an interesting thing to do, too. It is on the list of posts that I may or may not get around to writing!

comment by wedrifid · 2012-04-22T20:51:56.043Z · LW(p) · GW(p)

Regardless of whether or not this is compatible with being a "complete jerk" in your sense, I wish to point out that wedrifid is in many respects an exemplary Less Wrong commenter. There are few others I can think of who are simultaneously as (1) informative, including about their own brain state, (2) rational, especially in the sense of being willing and able to disagree within factions/alliances and agree across them, and (3) socially clueful, in the sense of being aware of the unspoken interpersonal implications of all discourse and putting in the necessary work to manage these implications in a way compatible with one's other goals (naturally the methods used are community-specific but that is more than good enough).

I appreciate your kind words komponisto! You inspire me to live up to them.

comment by thomblake · 2012-04-18T18:06:15.304Z · LW(p) · GW(p)

Plus, I like the idea of losing so much karma in one day and then eventually earning it all back

This discussion is off-topic for the "Rationality Quotes" thread, but...

If you're interested in an easy way to gain karma, you might want to try an experimental method I've been kicking around:

1. Take an article from Wikipedia on a bias that we don't have an article about yet. (Wikipedia has a list of cognitive biases.)
2. Write a top-level post about that bias, with appropriate use of references.
3. Write it in a similar style to Eliezer's more straightforward posts on a bias, examples first.

My prediction is that such an article, if well-written, should gain about +40 votes; about +80 if it contains useful actionable material.

Replies from: chaosmosis
comment by chaosmosis · 2012-04-18T18:18:30.668Z · LW(p) · GW(p)

No, I want this to be harder than that. It needs to be a drawn out and painful and embarrassing process.

Maybe I'll eventually write something like that. Not yet.

Replies from: DSimon, David_Gerard
comment by DSimon · 2012-04-18T22:52:19.466Z · LW(p) · GW(p)

It needs to be a drawn out and painful and embarrassing process.

Oh, you want a Quest, not a goal. :-)

In that case, try writing an article that says exactly the opposite of something that somebody with very high (>10,000) karma says, even linking to their statement to make the contrast clear. Bonus points if you end up getting into a civil conversation directly with that person in the comments of your article.

Note: I believe that it is not only possible, but even easy, for you to do this and get a net karma gain. All you need is (a) a fairly good argument, and (b) a friendly tone.

Replies from: orthonormal, gRR, Bugmaster, wedrifid
comment by orthonormal · 2012-04-22T18:48:41.750Z · LW(p) · GW(p)

Try writing an article that says exactly the opposite of something that somebody with very high (>10,000) karma says, even linking to their statement to make the contrast clear. Bonus points if you end up getting into a civil conversation directly with that person in the comments of your article.

I nominate this as the Less Wrong Summer Challenge, for everybody.

(One modification I'd make: it shouldn't necessarily be the exact opposite: precisely reversed intelligence usually is stupidity. But your thesis should be mutually incompatible with any charitable interpretation of the original claim.)

comment by gRR · 2012-04-22T19:16:39.060Z · LW(p) · GW(p)

And now I realize I just did exactly that, and your prediction is absolutely correct. No bonus points for me, though.

comment by Bugmaster · 2012-04-18T22:54:46.024Z · LW(p) · GW(p)

You just need a reasonably friendly tone. I have a bunch of karma, and I haven't posted any articles yet (though I'm working on it).

Replies from: DSimon
comment by DSimon · 2012-04-18T22:56:15.472Z · LW(p) · GW(p)

Indeed, that would work if karma were merely the goal. But chaosmosis expressed a desire for a "painful and embarrassing process", meaning that the ante and risk must be higher.

comment by wedrifid · 2012-04-18T23:33:41.014Z · LW(p) · GW(p)

In that case, try writing an article that says exactly the opposite of something that somebody with very high (>10,000) karma says, even linking to their statement to make the contrast clear. Bonus points if you end up getting into a civil conversation directly with that person in the comments of your article.

That actually sounds fun now that you put it like that!

comment by David_Gerard · 2012-04-18T23:23:28.085Z · LW(p) · GW(p)

One day I will write "How to karmawhore with LessWrong comments" if I can work out how to do it in such a way that it won't get -5000 within an hour.

Replies from: DSimon
comment by DSimon · 2012-04-18T23:38:44.335Z · LW(p) · GW(p)

I know how you could do it. You need to come up with a detailed written strategy for maximizing karma with minimal actual contribution. Have some third party (or several) that LW would trust hold on to it in secret.

Then, for a week or two, apply that strategy as directly and blatantly as you think you can get away with, racking up as many points as possible.

Once that's done, compile a list of those comments and post it into an article, along with your original strategy document and the verification from the third party that you wrote the strategy before you wrote the comments, rather than ad-hocing a "strategy" onto a run of comments that happened to succeed.

Voila: you have now pulled a karma hack and then afterwards gone white-hat with the exploit data. LW will have no choice but to give you more karma for kindly revealing the vulnerability in their system! Excellent. >:-)

Replies from: Dias, David_Gerard, army1987
comment by Dias · 2012-04-19T07:58:43.825Z · LW(p) · GW(p)

Have some third party (or several) that LW would trust hold on to it in secrect.

Nitpick: cryptography solves this much more neatly.

Of course, people could accuse you of having an efficient way of factorising numbers, but if you do karma is going to be the least of anyone's concerns.

Replies from: ciphergoth, wedrifid
comment by Paul Crowley (ciphergoth) · 2012-04-19T12:31:03.905Z · LW(p) · GW(p)

Factorization doesn't enter into it - to precommit to a message that you will later reveal publicly, publish a hash of the (salted) message.
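(A minimal sketch of such a hash commitment, using only Python's standard library; the function names and the strategy text are invented for illustration:)

```python
import hashlib
import secrets

def commit(message: bytes) -> tuple[bytes, str]:
    # Publish the digest now; keep the message and salt private.
    # The random salt prevents guessing attacks on short/predictable messages.
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + message).hexdigest()
    return salt, digest

def verify(message: bytes, salt: bytes, digest: str) -> bool:
    # Later, reveal the message and salt; anyone can recompute and check.
    return hashlib.sha256(salt + message).hexdigest() == digest

strategy = b"Post lots in the HP:MoR threads."  # hypothetical strategy document
salt, digest = commit(strategy)
assert verify(strategy, salt, digest)
assert not verify(b"a different strategy", salt, digest)
```

Publishing only the digest commits you to the strategy without disclosing it; the collision resistance of SHA-256 is what makes it infeasible to later "reveal" a different message with the same digest.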

comment by wedrifid · 2012-04-19T08:29:12.394Z · LW(p) · GW(p)

Nitpick: cryptography solves this much more neatly.

But somewhat less transparently. The cryptographic solution still requires that an encrypted message be made public prior to the actions being taken, and declaring an encrypted prediction has side effects. The neat solution is to still use trusted parties but give the trusted parties only the encrypted strategy (or a hash thereof).

Replies from: Bugmaster
comment by Bugmaster · 2012-04-19T09:25:50.472Z · LW(p) · GW(p)

The cryptographic solution still requires that an encrypted message is made public prior to the actions being taken and declaring an encrypted prediction has side effects.

What kind of side effects ? I have no formal training in cryptography, so please forgive me if this is a naive question.

Replies from: wedrifid
comment by wedrifid · 2012-04-19T09:32:11.542Z · LW(p) · GW(p)

What kind of side effects ? I have no formal training in cryptography, so please forgive me if this is a naive question.

I mean you still have to give the encrypted data to someone. They can't tell what it is, but they can see you are up to something. So you still have to use some additional sort of trust mechanism if you don't want the act of giving encrypted advance notice to influence behavior.

Replies from: Bugmaster
comment by Bugmaster · 2012-04-19T17:27:59.055Z · LW(p) · GW(p)

Ah ok, that makes sense. In this case, you can employ steganography. For example, you could publish an unrelated article using a pretty image as a header. When the time comes, you reveal the algorithm and password required in order to extract your secret message from the image.
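(The scheme above can be sketched without any image library by treating the carrier as raw bytes and hiding one payload bit per carrier byte - a toy least-significant-bit approach; function names are illustrative:)

```python
def embed(carrier: bytearray, payload: bytes) -> bytearray:
    # Hide the payload, bit by bit, in the least-significant bits of the carrier.
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for payload")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return out

def extract(carrier: bytes, n_bytes: int) -> bytes:
    # Recover n_bytes of payload from the carrier's least-significant bits.
    out = bytearray()
    for j in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (carrier[j * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

secret = b"prediction"
carrier = bytearray(range(256))  # stand-in for image pixel data
stego = embed(carrier, secret)
assert extract(stego, len(secret)) == secret
```

Flipping only the low bit of each byte leaves image pixel values visually unchanged, which is what makes the carrier look like an innocent header image.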

Replies from: wedrifid
comment by wedrifid · 2012-04-19T17:42:44.276Z · LW(p) · GW(p)

Ah ok, that makes sense. In this case, you can employ steganography. For example, you could publish an unrelated article using a pretty image as a header. When the time comes, you reveal the algorithm and password required in order to extract your secret message from the image.

Better yet... embed five different predictions in that header. When the time comes, reveal just the one that turned out most correct!

Replies from: Bugmaster
comment by Bugmaster · 2012-04-19T17:44:33.052Z · LW(p) · GW(p)

Hmm yes, there might be a hidden weakness in my master plan as far as accountability is concerned :-)

Replies from: khafra
comment by khafra · 2012-04-20T14:17:26.394Z · LW(p) · GW(p)

None that were not extant in the original scheme, assuming there are at least five people on LW who'd be considered trusted parties.

Replies from: RobinZ
comment by RobinZ · 2012-04-20T15:07:54.944Z · LW(p) · GW(p)

But of four people on LW who would be considered trusted parties, what's the probability that all four would be quiet after the fifth is called upon to post the prediction or prediction hash?

Replies from: khafra
comment by khafra · 2012-04-20T17:47:43.637Z · LW(p) · GW(p)

You're right, of course. I didn't think that through. There haven't been any good "gain the habit of really thinking things through" exercises for a Skill-of-the-Week post, have there?

Replies from: RobinZ
comment by RobinZ · 2012-04-20T17:55:56.974Z · LW(p) · GW(p)

Bear in mind that it's often not worth the effort. I think the skill to train would be recognizing when it might be.

Besides, in the prediction-hash case, they may well not post right away.

Replies from: khafra
comment by khafra · 2012-04-20T18:05:09.732Z · LW(p) · GW(p)

"Recognizing when you've actually thought thoroughly" is the specific failure mode I'm thinking of; but that's probably highly correlated with recognizing when to start thinking thoroughly.

I feel like such a skill may be difficult to consciously train without a tutor:

Rice's theorem will tell you that you cannot, without already knowing unknown unknowns, determine which knowledge is safe to ignore.

-- @afoolswisdom

Besides, in the prediction-hash case, they may well not post right away.

Yes, the first thing I thought of was Quirrell's hashed prediction; but it doesn't seem that everyone's forgotten yet, as of last month.

comment by David_Gerard · 2012-04-18T23:41:45.266Z · LW(p) · GW(p)

My actual strategy was just to post lots. Going through the sequences provided a target-rich environment ;-)

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-19T00:18:18.866Z · LW(p) · GW(p)

IME, per-comment EV is way higher in the HP:MoR discussion threads.

Replies from: David_Gerard
comment by David_Gerard · 2012-04-19T07:03:12.075Z · LW(p) · GW(p)

It so is. Karmawhoring in those is easy.

This suggests measuring posts for comment EV.

Replies from: Hul-Gil
comment by Hul-Gil · 2012-04-19T07:20:26.650Z · LW(p) · GW(p)

This suggests measuring posts for comment EV.

Now that is an interesting concept. I like where this subthread is going.

Interesting comparisons to other systems involving currency come to mind.

EV-analysis is the more intellectually interesting proposition, but it has me thinking. Next up: black-market karma services. I will facilitate karma-parties... for a nominal (karma) fee, of course. If you want to maintain the pretense of legitimacy, we will need to do some karma-laundering, ensuring that your posts appear that they could be worth the amount of karma they have received. Sock-puppet accounts to provide awful arguments that you can quickly demolish? Karma mines. And then, we begin to sell LW karma for Bitcoins, and--

...okay, perhaps some sleep is in order first.

Replies from: David_Gerard
comment by David_Gerard · 2012-04-19T14:25:03.010Z · LW(p) · GW(p)

And then, we begin to sell LW karma for Bitcoins, and--

It is clear we need to start work on a distributed, decentralised, cryptographically-secure Internet karma mechanism.

comment by A1987dM (army1987) · 2012-04-19T17:07:55.499Z · LW(p) · GW(p)

You need to come up with a detailed written strategy for maximizing karma with minimal actual contribution.

Create a dozen sockpuppet accounts and use them to upvote every single one of your posts. Duh.

Replies from: Richard_Kennaway, Strange7, Bugmaster
comment by Richard_Kennaway · 2012-04-22T19:15:27.768Z · LW(p) · GW(p)

That's like getting a black belt in karate by buying one from the martial arts shop. It isn't karmawhoring unless you're getting karma from real people who really thought your comments worth upvoting.

Replies from: army1987
comment by A1987dM (army1987) · 2012-04-23T18:55:47.143Z · LW(p) · GW(p)

“Getting karma from real people who really thought your comments worth upvoting” sounds like a good thing, so why the (apparently) derogatory term karmawhoring?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-04-23T19:14:54.496Z · LW(p) · GW(p)

It is good to have one's comments favourably appreciated by real people. Chasing after that appreciation, not so much. Especially, per an ancestor comment, trying to achieve that proxy measure of value while minimizing the actual value of what you are posting. The analogy with prostitution is close, although one difference is that the prostitute's reward -- money -- is of some actual use.

comment by Strange7 · 2012-04-21T07:25:11.664Z · LW(p) · GW(p)

Not as straightforward as it sounds. Irrelevant one-sentence comments upvoted to +10 will attract more downvotes than they would otherwise.

comment by Bugmaster · 2012-04-19T17:29:21.883Z · LW(p) · GW(p)

This would indeed count as "minimal contribution", but still sounds like a lot of work...

comment by [deleted] · 2012-04-18T17:33:06.624Z · LW(p) · GW(p)

It's not really a rationality problem, but I need to learn how to deal with other people who have big egos.

This is actually a really worthwhile skill to learn, independently of any LW-related foolishness. And it is actually a rationality problem.

Replies from: army1987, chaosmosis
comment by A1987dM (army1987) · 2012-04-18T19:54:07.475Z · LW(p) · GW(p)

And it is actually a rationality problem.

You mean to the extent that any problem at all is a rationality problem, or something else?

Replies from: None, David_Gerard
comment by [deleted] · 2012-04-18T22:28:32.056Z · LW(p) · GW(p)

It's a bias, as far as I'm concerned, and something that needs to be overcome. People with egos can be right, but if one can't deal with the fact that they're either right or wrong regardless of their egotism, then one is that much slower to update.

comment by David_Gerard · 2012-04-18T23:24:18.763Z · LW(p) · GW(p)

Dealing with others' irrationality is very much a rationality problem.

comment by chaosmosis · 2012-04-18T18:16:12.134Z · LW(p) · GW(p)

Ignore this.

comment by wedrifid · 2012-04-18T19:10:23.162Z · LW(p) · GW(p)

It's not really a rationality problem, but I need to learn how to deal with other people who have big egos, because apparently only two or three people received my comments the way I meant them to come across.

It is what we would call an "instrumental rationality" problem. And one of the most important ones at that. Right up there with learning how to deal with our own big egos... which you seem to be taking steps towards now!

comment by roland · 2012-04-24T05:47:48.583Z · LW(p) · GW(p)

And I thought I was the only one getting pummeled here...

comment by chaosmosis · 2012-04-22T18:38:20.152Z · LW(p) · GW(p)

UPDATE: Lame quest was lame. I'm already back up to positive karma although I hit -100 a couple days ago.

Maybe I should try for -1000 next time, instead.

Replies from: Nisan
comment by Nisan · 2012-04-22T19:04:11.405Z · LW(p) · GW(p)

Some users don't read the HP:MoR threads, and some users only read the HP:MoR threads. You don't have to feel like you have a reputation here yet. Also, welcome to Less Wrong.

Replies from: army1987
comment by A1987dM (army1987) · 2012-04-23T19:01:10.037Z · LW(p) · GW(p)

Some users don't read the HP:MoR threads, and some users only read the HP:MoR threads.

Has anybody ever considered moving the HP:MoR threads to another site?

Replies from: Vaniver
comment by Vaniver · 2012-04-23T19:49:29.466Z · LW(p) · GW(p)

There are threads on other sites (the TVTropes one is the biggest, I think, but I know the xkcd forums have a thread, and I'm sure others do as well). Part of the value of having HP:MoR threads here is that it makes it more likely that people who come here for the MoR threads will stay for the rest of the site - but I agree that the karma on them is atypical for karma on the site, and decoupling it would have some value (though I suspect the costs would outweigh that value).

Replies from: army1987
comment by A1987dM (army1987) · 2012-04-24T13:01:44.294Z · LW(p) · GW(p)

As I mentioned elsewhere, it would have the effect of making http://lesswrong.com/r/discussion/topcomments/ more useful (for people who don't read HP:MoR, such as me).

comment by HonoreDB · 2012-04-16T15:26:14.738Z · LW(p) · GW(p)

That's right, Emotion. Go ahead, put Reason out of the way! That's great! Fine! ...for Hitler.

--1943 Disney cartoon

comment by lukeprog · 2012-04-15T13:30:09.940Z · LW(p) · GW(p)

Every intelligent ghost must contain a machine.

Aaron Sloman

comment by Eugine_Nier · 2012-04-06T21:55:07.000Z · LW(p) · GW(p)

So the interesting and substantive question is not whether one thinks the fit will survive and thrive better than the unfit. They will. The interesting question is what the rules are that determine what is "fit."

-- David Henderson on Social Darwinism

comment by scav · 2012-04-03T07:53:33.475Z · LW(p) · GW(p)

Clearly, Bem’s psychic could bankrupt all casinos on the planet before anybody realized what was going on. This analysis leaves us with two possibilities. The first possibility is that, for whatever reason, the psi effects are not operative in casinos, but they are operative in psychological experiments on erotic pictures. The second possibility is that the psi effects are either nonexistent, or else so small that they cannot overcome the house advantage. Note that in the latter case, all of Bem’s experiments overestimate the effect.

Returning to Laplace’s Principle, we feel that the above reasons motivate us to assign our prior belief in precognition a number very close to zero.

Why Psychologists Must Change the Way They Analyze Their Data: The Case of Psi

Eric–Jan Wagenmakers, Ruud Wetzels, Denny Borsboom, & Han van der Maas

Replies from: FiftyTwo
comment by FiftyTwo · 2012-04-03T21:40:35.872Z · LW(p) · GW(p)

I don't see why the first hypothesis should necessarily be rejected out of hand. If the supposed mechanism is unconscious then having it react to erotic pictures and not particular casino objects seems perfectly plausible. Obviously the real explanation might be that the data wasn't strong enough to prove the claim, but we shouldn't allow the low status of "psi theories" to distort our judgement.

Replies from: scav
comment by scav · 2012-04-04T08:12:28.006Z · LW(p) · GW(p)

One good thing about Bayesian reasoning is that assigning a prior belief very close to zero isn't rejecting the hypothesis out of hand. The posterior belief will be updated by evidence (if any can be found). And even if you start with a high prior probability and update it with Bem's evidence for precognition, you would soon have a posterior probability much closer to zero than your prior :)
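
As a toy illustration of the update scav describes, here is Bayes' rule in odds form (the prior and likelihood ratio below are made-up numbers for illustration, not Bem's actual statistics):

```python
def update_odds(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical numbers: a prior very close to zero, and a single
# experiment whose result is 19x likelier under psi than under chance.
prior = 1e-20
posterior = update_odds(prior, likelihood_ratio=19)
# The posterior moves up by roughly the likelihood ratio, but remains
# vanishingly small -- one experiment can't rescue a tiny prior.
```

So a near-zero prior isn't a rejection out of hand; it just means it takes a mountain of evidence, not one study, to move the posterior appreciably.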

BTW there is no supposed mechanism for precognition. Just calling it "unconscious" doesn't render it any more plausible that we have a sense that would be super useful if only it even worked well enough to be measured, and yet unlike all our other senses, it hasn't been acted on by natural selection to improve. Sounds like special pleading to me.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-04-05T00:30:13.318Z · LW(p) · GW(p)

Just calling it "unconscious" doesn't render it any more plausible that we have a sense

FiftyTwo wasn't arguing that the sense was plausible. He was conditioning on the assumption that the sense exists.

Replies from: scav
comment by scav · 2012-04-05T14:33:11.341Z · LW(p) · GW(p)

OK, point taken. However, there being no proposed mechanism for precognition, it can hardly be called "plausible" that it operates inconsistently and that the experiment just happened to pick one of the things it can do out of all possibilities.

After all, if nobody knows how it's supposed to work, how does the experimenter justify claiming his data as evidence for precognition rather than quantum pornotanglement? You could say I just made that up on the spot. It doesn't matter: precognition isn't necessarily a thing either.

Replies from: Eugine_Nier, FiftyTwo
comment by Eugine_Nier · 2012-04-06T03:41:40.447Z · LW(p) · GW(p)

How exactly does "quantum pornotanglement" work, and why doesn't it count as a type/mechanism of precognition?

Replies from: Vaniver
comment by Vaniver · 2012-04-06T04:45:33.168Z · LW(p) · GW(p)

Now I'm thinking of pin-up Feynman diagrams.

Replies from: army1987
comment by A1987dM (army1987) · 2012-04-06T09:44:34.464Z · LW(p) · GW(p)

(Does Rule 34 apply?)

comment by FiftyTwo · 2012-05-11T00:41:09.330Z · LW(p) · GW(p)

Analogously, if someone told me they had a magic rock that could pick up certain pieces of metal and not others, and couldn't explain why, it might be that they are simply wrong about what it can pick up, or there may be an underlying effect causing these observations that we don't understand. In the analogy, magnetism can be observed long before it is understood, and why some metals are and aren't magnetic isn't a trivial problem.

Similarly, it may be that some psychic phenomenon exists which works for some things and not for others, for reasons we're not aware of. The fact that we can't fully explain why it works in some cases but not others doesn't mean we should outlaw evidence of the cases where it does.

Replies from: scav
comment by scav · 2012-05-11T08:11:19.473Z · LW(p) · GW(p)

I would at least expect them to be able to demonstrate their magic rock and let me try it out on various materials.

If they had a rock that they claimed could pick up copper but not brass, based on only one experiment, but the rock now doesn't work if any scientists are watching, I'd be disinclined to privilege their hypothesis of the rock's magic properties.

Nobody is outlawing the evidence. I'm saying the evidence is unconvincing, and far short of what is needed to support an extraordinary claim such as precognition. It is for example much less rigorous than the evidence there was for another causality-violating hypothesis: FTL neutrinos. That turned out to be due to an equipment defect. Many were disappointed but nobody was surprised. Same reference class if you ask me.

comment by Wei Dai (Wei_Dai) · 2012-04-10T17:41:47.806Z · LW(p) · GW(p)

当局者迷,旁观者清

Chinese proverb, meaning "the onlooker sees things more clearly"; literally, "the one in the game is lost, the onlooker is clear"

Replies from: None, Richard_Kennaway
comment by [deleted] · 2012-04-10T17:48:38.484Z · LW(p) · GW(p)

三人成虎

Chinese proverb, "three men make a tiger", referring to a semi-mythological event during the Warring States period:

According to the Warring States Records, or Zhan Guo Ce, before he left on a trip to the state of Zhao, Pang Cong asked the King of Wei whether he would hypothetically believe in one civilian's report that a tiger was roaming the markets in the capital city, to which the King replied no. Pang Cong asked what the King thought if two people reported the same thing, and the King said he would begin to wonder. Pang Cong then asked, "what if three people all claimed to have seen a tiger?" The King replied that he would believe in it. Pang Cong reminded the King that the notion of a live tiger in a crowded market was absurd, yet when repeated by numerous people, it seemed real. As a high-ranking official, Pang Cong had more than three opponents and critics; naturally, he urged the King to pay no attention to those who would spread rumors about him while he was away. "I understand," the King replied, and Pang Cong left for Zhao. Yet, slanderous talk took place. When Pang Cong returned to Wei, the King indeed stopped seeing him.

-- Wikipedia

comment by Richard_Kennaway · 2012-04-11T08:42:53.162Z · LW(p) · GW(p)

In personal development workshops, the saying is, "the one with the mike in their hand is the last to see it." Of doctors and lawyers it is said that one who treats himself, or acts in court for himself, has a fool for a client.

comment by CronoDAS · 2012-04-04T01:17:03.081Z · LW(p) · GW(p)

Any “technology” which claims miraculous benefits on a timescale longer than it takes to achieve tenure and retire is vaporware, and should not be taken seriously.

-- Scott Locklin

Replies from: DSimon, Multiheaded
comment by DSimon · 2012-04-05T08:51:04.556Z · LW(p) · GW(p)

Cryonics?

Replies from: Ezekiel, BillyOblivion
comment by Ezekiel · 2012-04-05T21:47:50.844Z · LW(p) · GW(p)

I'm curious. Were you agreeing with the quote (and thus dissing cryonics), disagreeing with the quote (and bringing cryonics as a counterexample), or genuinely without agenda?

Replies from: DSimon
comment by DSimon · 2012-04-06T00:39:26.528Z · LW(p) · GW(p)

Partly that second one, partly just curious if it was an intended subject.

Replies from: CronoDAS
comment by CronoDAS · 2012-04-06T02:18:32.405Z · LW(p) · GW(p)

The original context is that Scott Locklin is a nanotechnology skeptic.

comment by BillyOblivion · 2012-04-17T12:39:54.529Z · LW(p) · GW(p)

Follow the link, he explains it there.

comment by Multiheaded · 2012-04-10T14:34:20.295Z · LW(p) · GW(p)

Manifestly stupid.

comment by Stabilizer · 2012-04-02T20:39:55.603Z · LW(p) · GW(p)

Uxbal: I don't want to die, Bea. I'm afraid to leave the children on their own... I can't.
Bea: You think you take care of the children Uxbal. Don't be naive. The universe takes care of them.
Uxbal: Yes... but the universe doesn't pay the rent.

-Biutiful

comment by [deleted] · 2012-04-02T18:37:29.728Z · LW(p) · GW(p)

All fiction needs to be taken both seriously and not seriously.

Seriously because even the silliest of art can change minds.

Not seriously because no matter the delusions of the author, or the tone of the work, it's still fiction; entertainment, simulated on a human brain.

-- Rasmus Eide, a.k.a. Armok_GoB

PS. This is not taken from an LW/OB post.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-02T20:45:17.609Z · LW(p) · GW(p)

Everything needs to be taken both seriously and not-seriously. Tepid unreflective semi-seriousness is always a mistake.

comment by spqr0a1 · 2012-04-05T23:44:50.091Z · LW(p) · GW(p)

To prize every thing according to its real use ought to be the aim of a rational being. There are few things which can much conduce to happiness, and, therefore, few things to be ardently desired. He that looks upon the business and bustle of the world, with the philosophy with which Socrates surveyed the fair at Athens, will turn away at last with his exclamation, 'How many things are here which I do not want'.

--Samuel Johnson, The Adventurer, #119, December 25, 1753.

comment by Jayson_Virissimo · 2012-04-02T05:54:48.841Z · LW(p) · GW(p)

Seek knowledge, even as far as China.

-A Weak Hadith of the Prophet Muhammad

comment by asparisi · 2012-04-20T19:23:30.016Z · LW(p) · GW(p)

"If you had a choice between the ability to detect falsehood and the ability to discover truth, which would you take? There was a time when I thought they were different ways of saying the same thing, but I no longer believe that. Most of my relatives, for example, are almost as good at seeing through subterfuge as they are at perpetrating it. I'm not at all sure, though, that they care much about truth. On the other hand, I'd always felt there was something noble, special, and honorable about seeking truth..."

-- Merlin, Sign of Chaos

comment by Will_Newsome · 2012-04-02T20:21:11.012Z · LW(p) · GW(p)

The majority of mankind is lazy-minded, incurious, absorbed in vanities, and tepid in emotion, and is therefore incapable of either much doubt or much faith; and when the ordinary man calls himself a sceptic or an unbeliever, that is ordinarily a simple pose, cloaking a disinclination to think anything out to a conclusion.

T. S. Eliot

Replies from: DSimon
comment by DSimon · 2012-04-05T08:58:37.344Z · LW(p) · GW(p)

I've read this a few times, but I'm still not seeing anything except "Non-believers are dummies, ha!", and I wonder if that's all there is to it or if I'm just getting blocked by my "oh-crap-what-did-he-say-about-my-tribe?" alarms going off.

Replies from: None, Document
comment by [deleted] · 2012-04-05T22:33:30.116Z · LW(p) · GW(p)

I may very well be reading what I want to read out of this quote, but I feel like if the quote is to be taken as a jab at non-believers, it's also a jab at believers. The "ordinary man claiming to be a skeptic" part is explicit, but note that before that he claims most are incapable of both much doubt and much faith, which I think implies that the same issue applies to believers and non-skeptics.

The basic idea I'm pulling from the quote seems to be that most people won't critically think about their ideas, so you can't always trust another's self-labeling to decide if their beliefs have been well thought out.

comment by Document · 2012-05-10T00:41:00.676Z · LW(p) · GW(p)

Consider "The majority of this liquid is not water".

comment by gwern · 2012-04-07T17:47:58.176Z · LW(p) · GW(p)

"The human understanding when it has once adopted an opinion draws all things else to support and agree with it.

And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside and rejects, in order that by this great and pernicious predetermination the authority of its former conclusion may remain inviolate."

--Francis Bacon, Novum Organum (1620)

comment by aribrill (Particleman) · 2012-04-03T19:40:16.570Z · LW(p) · GW(p)

Dear, my soul is grey
With poring over the long sum of ill;
So much for vice, so much for discontent...
Coherent in statistical despairs
With such a total of distracted life,
To see it down in figures on a page,
Plain, silent, clear, as God sees through the earth
The sense of all the graves, - that's terrible
For one who is not God, and cannot right
The wrong he looks on. May I choose indeed
But vow away my years, my means, my aims,
Among the helpers, if there's any help
In such a social strait? The common blood
That swings along my veins, is strong enough
To draw me to this duty.

-Elizabeth Barrett Browning, Aurora Leigh, 1856

comment by VKS · 2012-04-03T07:32:16.912Z · LW(p) · GW(p)

The last level of metaphor in the Alice books is this: that life, viewed rationally and without illusion, appears to be a nonsense tale told by an idiot mathematician. At the heart of things science finds only a mad, never-ending quadrille of Mock Turtle Waves and Gryphon Particles. For a moment the waves and particles dance in grotesque, inconceivably complex patterns capable of reflecting on their own absurdity.

-- Martin Gardner, The Annotated Alice

Replies from: gjm, Eliezer_Yudkowsky, MixedNuts, dbaupp
comment by gjm · 2012-04-12T22:14:54.757Z · LW(p) · GW(p)

Leaving aside the dubiousness of calling the way the universe actually works "nonsense" and "mad": It seems very, very, very unlikely that anything in Lewis Carroll's writings was a metaphor for quantum mechanics. He died in 1898.

(I suppose something can be used as a metaphor for quantum mechanics without having been intended as one, though.)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-04-05T22:38:05.209Z · LW(p) · GW(p)

The heck? Quantum fields are completely lawful and sane. Only the higher levels of organization, i.e. human beings, are bugfuck crazy.

Behold, the Copenhagen Interpretation causes BRAIN DAMAGE.

Replies from: VKS, shminux
comment by VKS · 2012-04-06T01:04:23.522Z · LW(p) · GW(p)

As natural as QFT seems today, my understanding is that in 1960, before many of the classic texts in the domain were published, the ideas still seemed quite strange. We would do well to remember that when we set out to search for other truths which we do not yet grasp.

:p

comment by shminux · 2012-04-08T18:42:10.782Z · LW(p) · GW(p)

Maybe, but the Big World idea causes much more severe damage, judging by the recent discussions here and elsewhere.

comment by MixedNuts · 2012-04-05T16:38:56.300Z · LW(p) · GW(p)

What's Martin complaining about, exactly? That goodness is nowhere in physical law, so things can be unfair and horrible for no reason? That goodness is reducible in the first place? That physics is hard and therefore deserves nasty words like "absurd"?

comment by dbaupp · 2012-04-05T00:24:24.738Z · LW(p) · GW(p)

viewed rationally and without illusion

Lewis Carroll was religious, and to add to that, he was human.

Replies from: Pavitra, Tyrrell_McAllister
comment by Pavitra · 2012-04-05T13:07:36.545Z · LW(p) · GW(p)

These threads would be very sparsely populated if we avoided quoting humans.

Replies from: dbaupp, TheOtherDave, wedrifid
comment by dbaupp · 2012-04-09T11:40:44.706Z · LW(p) · GW(p)

You have misrepresented me. I was refuting the bit where a human was said to be doing something "rationally and without illusion": chances are that doesn't happen (especially regarding a topic as broad as "life").

comment by TheOtherDave · 2012-04-05T13:44:55.147Z · LW(p) · GW(p)

Upvoted for dry wit.

comment by wedrifid · 2012-04-05T15:28:58.371Z · LW(p) · GW(p)

Is fiction permitted? Most of my favorite quotes are not from 'humans'.

comment by Tyrrell_McAllister · 2012-04-05T14:11:07.865Z · LW(p) · GW(p)

Lewis Carroll was religious, and to add to that, he was human.

For that matter, so was Martin Gardner.

comment by Thomas · 2012-04-03T06:24:39.779Z · LW(p) · GW(p)

Memory locations are just wires turned sideways in time.

-- Danny Hillis

Replies from: Mass_Driver
comment by Mass_Driver · 2012-04-03T22:03:04.835Z · LW(p) · GW(p)

Can you please explain this, slowly and carefully? It sounds plausible, and I'm trying to improve my understanding of space-time / 4-D thinking.

Replies from: Oscar_Cunningham, Thomas
comment by Oscar_Cunningham · 2012-04-06T18:14:05.942Z · LW(p) · GW(p)

When analysing a circuit we normally consider a wire to have the same voltage along its entire length. (There are two problems with this: voltage changes only propagate at c, and the wire has a resistance. Normally these are both negligible.) Thus we can view wires as taking a voltage and spreading it out along a line in space.

On the other hand, memory locations take a voltage and spread it out through time. So they are in some sense a wire pointing in the time direction.

Sadly, the analogy doesn't quite hold up. Wires have one spatial dimension but also have a temporal dimension (i.e. wires exist for more than an instant). So if you rotated a wire so that its spatial dimension pointed along the temporal dimension, its temporal dimension would rotate down into one of the spatial dimensions. It would still look like a wire! A memory location, by contrast, has no spatial extent: it's a very small bit of metal (you could make one in the shape of a wire, but people don't). Thus it has a temporal extent but no spatial extent. So if you rotated one you would get something that had a spatial extent but no temporal extent. This would look like a piece of wire that appeared for an instant and then disappeared again.
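
Here's a toy rendering of the wire/memory analogy (the grid, names, and pulse signal are all invented for illustration):

```python
# A wire spreads one voltage along space at each instant; a memory cell
# spreads one voltage along time at a single location.

POSITIONS = range(5)  # points along an ideal wire
TICKS = range(5)      # clock ticks

def wire(v_source):
    """At any instant, every point on an ideal wire sees the source voltage."""
    return {(x, t): v_source(t) for x in POSITIONS for t in TICKS}

def memory_cell(v_source, write_tick=0):
    """A memory cell samples the source once and holds that value through time."""
    stored = v_source(write_tick)
    return {(0, t): stored for t in TICKS}  # one location, all ticks

v = lambda t: 1.0 if t == 0 else 0.0  # a brief pulse at t=0
w, m = wire(v), memory_cell(v)
# w is constant along x at each t (spread through space);
# m is constant along t at x=0 (spread through time).
```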

Replies from: Mass_Driver
comment by Mass_Driver · 2012-04-06T19:40:05.587Z · LW(p) · GW(p)

Amazing! So a stricter analogy might be a memory location and a lightning bolt -- the memory location occupies only a tiny amount of space, and the static discharge of lightning takes only a tiny amount of time.

comment by Thomas · 2012-04-04T07:28:28.390Z · LW(p) · GW(p)

Ponder only one-dimensional time for now. At every point of time, you have only this moment and nothing more. But with memories, you have some previous moments cached, stored somewhere "orthogonal" to the timeline.

I've heard it here: http://edge.org/conversation/a-universe-of-self-replicating-code

It's a site even better than this one, and also quite unpopular on this site. Read or watch Dyson there, as well as many others.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-04-04T13:00:45.410Z · LW(p) · GW(p)

Is Edge the more unpopular site, or are you thinking of someplace else?

For what it's worth, I don't have anything against Edge, I just get bored reading it, even when the question is something I'm interested in.

comment by Alicorn · 2012-04-01T18:08:56.793Z · LW(p) · GW(p)

I was once a skeptic but was converted by the two missionaries on either side of my nose.

Robert Brault

Replies from: Desrtopa, Ezekiel, Blueberry
comment by Desrtopa · 2012-04-02T04:20:55.502Z · LW(p) · GW(p)

Am I the only one who didn't realize before reading other comments that he was not claiming to have been converted by his nostrils?

comment by Ezekiel · 2012-04-01T18:35:10.424Z · LW(p) · GW(p)

Particularly interesting since I (and, I suspect, others on LW) usually attach positive affect to the word "skeptic", since it seems to us that naivete is the more common error. But of course a Creationist is sceptical of evolution.

(Apparently both spellings are correct. I've learned something today.)

Replies from: BlazeOrangeDeer
comment by BlazeOrangeDeer · 2012-04-02T05:39:47.190Z · LW(p) · GW(p)

I'd call creationists "evolution deniers" before I'd call them "evolution skeptics", but I suppose they'd do the same to me with God...

comment by Blueberry · 2012-04-01T19:41:53.259Z · LW(p) · GW(p)

I must be misinterpreting this, because it appears to say "religion is obvious if you just open your eyes." How is that a rationality quote?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-01T19:47:25.567Z · LW(p) · GW(p)

LW's standards for rationality quotes vary, but in any case this does allow a reading that endorses letting perceived evidence override pre-existing beliefs, if one ignores the standard connotations of "skeptic" and "missionary".

Replies from: Blueberry, Blueberry
comment by Blueberry · 2012-04-01T20:01:04.288Z · LW(p) · GW(p)

I guess, but that seems like a strange interpretation seeing as the speaker says he's no longer "a skeptic" in general.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-04-01T20:08:57.286Z · LW(p) · GW(p)

The point of rationality isn't to better argue against beliefs you consider wrong but to change your existing beliefs to be more correct.

Replies from: Blueberry
comment by Blueberry · 2012-04-02T07:46:11.084Z · LW(p) · GW(p)

That's a good reminder but I'm not sure how it applies here.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-04-03T02:13:02.608Z · LW(p) · GW(p)

A quote that calls the holder of a potentially wrong belief a "skeptic" rather than a "believer" is more useful since it makes you more likely to identify with him.

comment by Blueberry · 2012-04-01T20:07:04.899Z · LW(p) · GW(p)

Also judging from his other quotes I'm pretty sure that's not what he meant...

comment by Bill_McGrath · 2012-04-16T09:58:03.886Z · LW(p) · GW(p)

Using an elementary accounting text and with the help of an accountant friend, I began. For me, a composer, accounting had always been the symbol of ultimate boredom. But a surprise awaited me: Accounting is just a simple, practical tool for measuring resources, so as to better allocate and use them. In fact, I quickly realized that basic accounting concepts had a utility far beyond finance. Resources are almost always limited; one must constantly weigh costs and benefits to make enlightened decisions.

--Alan Belkin, From the Stock Market to Music, via the Theory of Evolution

This was just the first bit that stood out as LW-relevant; he also briefly mentions cognitive bias and touches on the possible benefits of cognitive science to the arts.

comment by ChristianKl · 2012-04-13T13:39:22.925Z · LW(p) · GW(p)

If it can fool ten thousand users all at once (which ought to be dead simple, just add more servers), does that make it ten thousand times more human than Alan Turing?

Bruce Sterling

comment by atorm · 2012-04-09T02:18:17.649Z · LW(p) · GW(p)

There are two worlds: the world that is, and the world that should be. We live in one, and must create the other, if it is ever to be.

-- paraphrased from Jim Butcher's Turn Coat

comment by Vulture · 2012-04-28T03:04:31.070Z · LW(p) · GW(p)

Human beings have been designed by evolution to be good pattern matchers, and to trust the patterns they find; as a corollary their intuition about probability is abysmal. Lotteries and Las Vegas wouldn't function if it weren't so.

-Mark Rosenfelder (http://zompist.com/chance.htm)

comment by bojangles · 2012-04-27T18:49:41.103Z · LW(p) · GW(p)

I stopped being afraid because I read the truth. And that's the scientifical truth which is much better. You shouldn't let poets lie to you.

-- Bjork

comment by [deleted] · 2012-04-10T19:05:17.155Z · LW(p) · GW(p)

One day the last portrait of Rembrandt and the last bar of Mozart will have ceased to be — though possibly a colored canvas and a sheet of notes will remain — because the last eye and the last ear accessible to their message will have gone.

--Oswald Spengler, The Decline of the West

Replies from: None
comment by [deleted] · 2012-04-13T21:02:00.554Z · LW(p) · GW(p)

That sounds deep, but it has nothing to do with rationality.

Replies from: None
comment by [deleted] · 2012-04-14T06:27:08.082Z · LW(p) · GW(p)

Not really; for example, it is actually pretty clearly connected to fun theory.

comment by pleeppleep · 2012-04-04T03:41:52.268Z · LW(p) · GW(p)

"An organized mind is a disciplined mind. And a disciplined mind is a powerful mind."

-- Batman (Batman the Brave and the Bold)

Replies from: wedrifid, Arran_Stirton
comment by wedrifid · 2012-04-05T16:48:53.121Z · LW(p) · GW(p)

"An organized mind is a disciplined mind. And a disciplined mind is a powerful mind."

That doesn't seem to follow. An organized mind may not be disciplined. It may even be obsessively organized at the expense of being disciplined.

Replies from: pleeppleep
comment by pleeppleep · 2012-04-05T20:34:44.727Z · LW(p) · GW(p)

Assuming the mind is human, then I suppose you might have to modify it to ever make it truly organized, but identifying and organizing one's thoughts is an important part of rationality. You cannot make any effort to organize your thoughts without a certain degree of discipline. Think of the martial arts metaphor people here keep using in regards to rationality.

Replies from: wedrifid
comment by wedrifid · 2012-04-05T20:59:37.355Z · LW(p) · GW(p)

I expect there is a correlation between degree of organisation, degree of discipline, and measures of a mind's 'power'. But this relationship is definitely not a series of "is a"s.

Think of the martial arts metaphor people here keep using in regards to rationality.

To be honest I try not to. That kind of thinking seems to lead to "koans", which seem to be a name for saying things that are blatantly false but feeling deep while doing so because there is some loosely related not-false lesson that someone could conceivably deconstruct from the koan.

comment by Arran_Stirton · 2012-04-04T07:07:46.941Z · LW(p) · GW(p)

So says a man-dressed-like-a-bat.

(That's not a jibe aimed at the quote but rather a reference to this.)

Replies from: Pavitra
comment by Pavitra · 2012-04-05T13:01:11.223Z · LW(p) · GW(p)

Downvoted because this comment serves only to propagate a mildly-entertaining meme, rather than contributing to the discussion in some way.

comment by NancyLebovitz · 2012-04-29T13:31:51.810Z · LW(p) · GW(p)

In recent years, I've come to think of myself as something of a magician, and my specialty is pulling the wool over my own eyes.

--Kip W

comment by NancyLebovitz · 2012-04-07T15:25:35.469Z · LW(p) · GW(p)

Civil wars are bitter because

People make their recollections fit with their suffering.

---Thucydides

Found here.

comment by [deleted] · 2012-04-01T14:57:38.144Z · LW(p) · GW(p)

AG: You know very well the channels of possi8ility at that exact juncture resulted from her decision paths as well as yours.

AG: 8ut even so, when it comes to your key decisions, the possi8ilities are pro8a8ly fewer and more discrete than you have presumed.

AG: Otherwise you would not see results consolidated into those vortices, would you? Possi8ility would resem8le an enormous hazy field of infinitely su8tle variations and micro-choices.

AG: Imagine if at that moment you truly were capa8le of anything, no matter how outlandish, a8surd, or patently fruitless. How would this vast amount of information present itself to you through your senses? What difference would it make in your final decision if all other tri8utaries of whim spilled into the same decaying future? And what would this make of your agency as a hero meant to learn and grow?

AG: Look at it this way. Imagine that over the course of someone's life, they are truly capa8le of every conceiva8le action at any moment, and did indeed take each of those actions in different 8ranching realities. Doesn't a scenario like that deaden a person's agency just as much as one where their fate is decidedly etched in stone as a single path of unavoida8le decisions? Who exactly is that person who can and does take all conceiva8le actions, other than someone perfectly generic, who only appears to have unique predilections and motives when you examine the ar8itrary path they happen to occupy?

Andrew Hussie

Replies from: Randaly, Bugmaster
comment by Randaly · 2012-04-01T15:53:13.774Z · LW(p) · GW(p)

Is there a reason all the b's have been replaced by 8's?

Replies from: David_Gerard, Particleman
comment by David_Gerard · 2012-04-01T16:56:21.720Z · LW(p) · GW(p)

Character typing quirk in the original.

comment by aribrill (Particleman) · 2012-04-03T19:59:05.234Z · LW(p) · GW(p)

The typing quirks actually serve a purpose in the comic. Almost all communication among the characters takes place through chat logs, so the system provides a handy way to visually distinguish who's speaking. They also reinforce each character's personality and thematic associations - for example, the character quoted above (Aranea) is associated with spiders, arachnids in general, and the zodiac sign of Scorpio.

Unfortunately, all that is irrelevant in the context of a Rationality Quote.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2012-04-03T23:47:54.485Z · LW(p) · GW(p)

The character in question is named Vriska. You're thinking of Aradia.

Replies from: Nornagest
comment by Nornagest · 2012-04-03T23:52:45.234Z · LW(p) · GW(p)

Actually, he's not -- the quote comes from Vriska's recently introduced pre-Scratch ancestor, who's got a similar but not identical typing style.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2012-04-04T10:22:27.694Z · LW(p) · GW(p)

You're right, never mind. Still internalizing the new set of ancestors.

comment by Bugmaster · 2012-04-01T17:55:17.907Z · LW(p) · GW(p)

I hate to downvote Homestuck, but there I go, downvoting it. The typing quirks and chatlog-style layout are too specific to the comic.

Replies from: arundelo
comment by arundelo · 2012-04-02T03:38:56.078Z · LW(p) · GW(p)

Every time someone mentions Homestuck I resist (until now) posting this image macro.

I spent a few minutes reading Homestuck from the beginning, but it did not grab me at all. Is there a better place to start, or is it probably just not my cup of tea?

(Speaking of webcomics, I have a similar question about Dresden Codak.)

Replies from: Nornagest, Bugmaster
comment by Nornagest · 2012-04-02T04:14:53.506Z · LW(p) · GW(p)

I spent a few minutes reading Homestuck from the beginning, but it did not grab me at all. Is there a better place to start, or is it probably just not my cup of tea?

It starts pretty slow. Most of the really impressive bits, to my taste, don't start happening until well into act 4, but that's a few thousand (mostly single-panel, but still) pages of story to go through; unless you have a great deal of free time, I wouldn't hold it against you if you decided it's not for you by the end of act 2. Alternately, you might consider reading act 5.1 and going back if you like it; that's a largely independent and much more compressed storyline, although you'll lose some of the impact if you don't have the referents in the earlier parts of the story to compare against. You'll need to front-load a lot of tolerance for idiosyncratic typing that way, though.

Which brings me to quotes like MHD's: for quotation out of context, I would definitely have edited out the typing quirks (or ed8ed, if we're being cute). The quirks are more about characterization than content, and some of the characters are almost unreadable without a lot of practice.

Dresden Codak, incidentally, doesn't have this excuse. If you've read a couple dozen pages of that and didn't like it, you're probably not going to like the rest.

Replies from: khafra
comment by khafra · 2012-04-02T13:18:31.254Z · LW(p) · GW(p)

Dresden Codak, incidentally, doesn't have this excuse. If you've read a couple dozen pages of that and didn't like it, you're probably not going to like the rest.

I've never been sure exactly where and how to get into the Dresden Codak storyline; but the one-offs like Caveman Science and the epistemological RPG are some of my favorite things on the internet.

Replies from: katydee, VKS
comment by katydee · 2012-04-02T15:15:45.873Z · LW(p) · GW(p)

The first real "storyline" Dresden Codak comic can be found here. That said, a lot of people I've spoken with simply don't like the Dresden Codak storyline in any form, and prefer the funny one-offs to any of the continuity-oriented comics.

comment by VKS · 2012-04-04T16:39:17.192Z · LW(p) · GW(p)

If you've read a couple dozen pages of that and didn't like it, you're probably not going to like the rest.

A couple dozen pages of Dresden Codak is almost a third of the entire thing...

Perhaps it's just me, but I think it's sufficiently short that the naïve strategy (start at the beginning, click next until you get to the end) would work in this case.

(Incidentally, when you get to Hob #9, remember to read the description at the bottom of the page.)

comment by Bugmaster · 2012-04-02T08:11:53.927Z · LW(p) · GW(p)

I disagree with Nornagest: I think the best place to start is at the beginning. They pretty much had me at "fetch modus"; I was hooked from then on. A lot of really inspirational things start to happen later on, e.g. the Flash animation "[S] WV: Ascend", but it might be difficult to comprehend without reading the earlier parts.

I would also advise starting at the beginning because I'm starting to grow dissatisfied with the double-meta-reacharound tack that the comic is taking now... The earlier chapters had a much more coherent story, IMO.

comment by Stabilizer · 2012-04-06T01:33:18.869Z · LW(p) · GW(p)

Men, it has been well said, think in herds; it will be seen that they go mad in herds, while they only recover their senses slowly, and one by one.

-C. Mackay, Extraordinary Popular Delusions and the Madness of Crowds, 1852.

comment by arundelo · 2012-04-06T01:05:02.698Z · LW(p) · GW(p)

Billings: [...] What do you think, Peters? What are the chances that this "jewpacabra" is real?

Peters: I'm estimating somewhere around point zero zero zero zero zero zero zero zero one percent.

Billings: (Sighs) We can't afford to take that chance. [...]

-- Trey Parker, Jewpacabra

(This is at about five minutes fifty seconds into the episode.)

Edit: Related Sequence post.

comment by Grognor · 2012-04-03T11:50:17.995Z · LW(p) · GW(p)

To a large degree, our values "just happen"—like our brains. When our values conflict—the value of preventing suffering versus the value of preserving the human species—we are tempted to choose the latter because it feels axiomatic to us. But that is a reason to treat it with extra suspicion, not to treat it as axiomatic.

-Sister Y

Replies from: Grognor
comment by Grognor · 2012-04-05T02:36:10.727Z · LW(p) · GW(p)

This quote argues for a position, which is why I think it currently sits ugly at 0 karma after having sat ugly at 1 for a while, but I think, inseparable from the position being argued for, it espouses an important general principle which one should not simply ignore because it can apply to one's preconception; indeed (applying its lesson) that is precisely when we need the principle most.

So while I would have just taken the general principle out from Sister Y's post if it were possible for me to do so (and taken the mediocre three to four karma I would have gotten for it), I'm glad that it was intertwined now, as it shows that yes, you're supposed to apply the principle to even this (substitute anything for "this", of course).

I do sincerely wonder what the world would look like if people could even-handedly apply lessons from quotes. There are many lessons here.

Edit: Actually, looking closely at what the words actually say, I realize it doesn't, by itself, argue for the position that the former value is better than the latter value, but its context is an argument for said thing.

Edit2: If you look at the sort of quote in the original Rationality Quotes posts that were entirely Eliezer's collection, they were mostly of the sort that were likely to make you think about something rather than something that is easy to agree with. A desire to return to that model could be what's motivating the comment you're reading.

Replies from: TimS
comment by TimS · 2012-04-05T02:48:06.076Z · LW(p) · GW(p)

In brief, you presented a quote (1) with a controversial position, (2) little LessWrong consensus, (3) no obvious relationship to generalized improvement at achieving goals, and (4) no relationship to the ideal scientific method. You are surprised (or disappointed) that it got negligible karma attention.

I notice I am confused.

Replies from: Grognor
comment by Grognor · 2012-04-05T02:56:29.776Z · LW(p) · GW(p)

Definitely not surprised. (Edit: okay, now I'm a little surprised. The quote has now been voted up to +4. My little discussion was convincing? I don't know!) Maybe moderately disappointed. I think there's a lot to be said for the meta level of "continue to search, and not just put on a show of searching, for where you're wrong, even if you've already done this many times." I'm a little more disappointed that the highest-voted quotes tend to be applause lights. (Though not always) (also, applause lights are not inherently bad things, but I wish they didn't get the most karma).

Replies from: TimS
comment by TimS · 2012-04-05T13:53:01.628Z · LW(p) · GW(p)

(1) Visibility - people who missed the quote the first time saw our exchange on the side bar.

(2) I am also confused by the purpose of the rationality quotes page. It's not surprising to me that lack of consensus limits upvote potential (i.e. local applause lights get voted up). That said, applause lights are grounded in particular communities: "I like human rights" is an applause light in the United States, but a provocative position in North Korea. Some of the upvoting is based on the wish that the quote were more widely accepted in general society (i.e. we wish society was more like us).

(3) Notwithstanding what I just said, Rationality Quotes seems to function as an ideological purity tester. If it gets upvoted here, that shows it is part of the local consensus. In other words, I could post quotes that I thought were both post-modern and rationalist, and I expect they would be downvoted as outside the mainstream. To the extent that you think LessWrong has dysfunctional groupthink, I'm not sure the fight can be won in Rationality Quotes as opposed to Open Thread or Discussion. (I aspire to aspire to post into Main, so I seldom think about the social norms of that type of posting.)

(4) In a substantive response to your quote, LessWrong is surprisingly child-free in its attitudes. Even controlling for age, socioeconomic status, and gender, we are not even vaguely representative of how frequently people desire to have children.

Replies from: Bluehawk
comment by Bluehawk · 2012-04-06T00:04:09.149Z · LW(p) · GW(p)

I'm curious. Did you say "aspire to aspire to post into Main" deliberately?

comment by Will_Newsome · 2012-04-02T21:40:56.010Z · LW(p) · GW(p)

Oh my soul, be prepared for the coming of the Stranger.
Be prepared for him who knows how to ask questions.

T. S. Eliot, The Rock

comment by Vulture · 2012-04-28T03:26:23.835Z · LW(p) · GW(p)

Leonid: Without a purpose, a man is nothing.

Newton: Yes. But we wonder...do you share our gift? Do you have the necessary vision? Do you know the final fate of man?

Leonid: How could anyone know things like that?

Council: The Greater Science. The Quiet Math. The Silent Truth. The Hidden Arts. The Secret Alchemy.

Newton: Every question has an answer. Every equation has a solution.

-- S.H.I.E.L.D. #1 (Jonathan Hickman)
Replies from: David_Gerard, SusanBrennan
comment by David_Gerard · 2012-04-29T14:19:52.982Z · LW(p) · GW(p)

The point of this one isn't clear.

Replies from: Vulture
comment by Vulture · 2012-04-30T05:18:07.856Z · LW(p) · GW(p)

I guess it probably should have been broken up into a couple of shorter ones, but it was a single, short exchange and I just couldn't resist. That the question of the final fate of man can, like any question, be answered with a greater science, with the hidden arts... this is essentially the message of transhumanist rationality, and it was beautifully phrased here. "Without a purpose, a man is nothing"... this really should have been off on its own, in retrospect, but its meaning is a little bit less obscure, I think.

comment by SusanBrennan · 2012-04-29T14:46:20.471Z · LW(p) · GW(p)

Isn't one of the implications of Gödel's incompleteness theorem that there will always be unanswerable questions?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-29T15:08:03.788Z · LW(p) · GW(p)

Only if the questioner is consistent.

Replies from: Vulture
comment by Vulture · 2012-04-30T05:12:24.684Z · LW(p) · GW(p)

And there's no way to tell whether the questioner is inconsistent, or there exist unanswerable questions, right? [In any case, I would be greatly astonished if "What is the final fate of man?" was found to be isomorphic to a human Godel sentence ;-) ]

comment by FiftyTwo · 2012-04-03T21:27:08.649Z · LW(p) · GW(p)

> "The penalty of not doing philosophy isn't to transcend it, but simply to give bad philosophical arguments a free pass."

-- David Pearce, Reddit AMA (www.reddit.com/r/Transhuman/comments/r7dui/david_pearce_ama/c43jfmk)

comment by aribrill (Particleman) · 2012-04-03T19:35:01.957Z · LW(p) · GW(p)

Dear, my soul is grey
With poring over the long sum of ill;
So much for vice, so much for discontent...
Coherent in statistical despairs
With such a total of distracted life,
To see it down in figures on a page,
Plain, silent, clear, as God sees through the earth
The sense of all the graves, - that's terrible
For one who is not God, and cannot right
The wrong he looks on. May I choose indeed
But vow away my years, my means, my aims,
Among the helpers, if there's any help
In such a social strait? The common blood
That swings along my veins, is strong enough
To draw me to this duty.

Elizabeth Barrett Browning, Aurora Leigh, 1856

comment by J_Taylor · 2012-04-02T23:59:04.268Z · LW(p) · GW(p)

The chess board is the world, the pieces are the phenomena of the universe, the rules of the game are what we call the laws of Nature. The player on the other side is hidden from us. We know that his play is always fair, just and patient. We also know, to our cost, that he never overlooks a mistake, or makes the smallest allowance for ignorance.

-Thomas Huxley

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-03T00:56:06.560Z · LW(p) · GW(p)

I've traditionally gone with: the board is the space of/for potentially-live hypotheses/arguments/considerations, pieces are facts/observations/common-knowledge-arguments, moves are new arguments, the rules are the rules of epistemology. This lets you bring in other metaphors: ideally your pieces (facts/common-knowledge-arguments) should be overprotected (supported by other facts/common-knowledge-arguments); you should watch out for zwischenzugs (arguments that redeem other arguments that it would otherwise be justified to ignore); tactics/combinations (good arguments or combinations of arguments) flow from strategy/positioning (taking care in advance to marshal your arguments); controlling the center (the key factual issues/hypotheses at stake) is important; tactics (good arguments) often require the coordination of functionally diverse pieces (facts/common-knowledge-arguments); and so on.

The subskills that I use to play chess overlap a lot with the subskills I use to discover truth. E.g., the subskill of thinking "if I move here, then he moves there, then I move there, then he moves there, ..." and thinking through the best possible arguments at each point rather than just giving up or assuming he'll do something I'd find useful, i.e. avoiding motivated stopping and motivated continuation, is a subskill I use constantly and find very important. I constantly see people only thinking one or two moves (arguments) ahead, and in the absence of objective feedback this leads to them repeatedly being overconfident in bad moves (bad arguments) that only seem good if you're not very experienced at chess (argumentation in the epistemic sense).

Oh, a rationality quote: Bill Hartston: "Chess doesn't make sane people crazy; it keeps crazy people sane."

And Bobby Fischer: "My opponents make good moves too. Sometimes I don't take these things into consideration."

comment by EditedToAdd · 2012-04-02T02:01:55.228Z · LW(p) · GW(p)

But, the hard part comes after you conquer the world. What kind of world are you thinking of creating?

Johan Liebert, Monster

comment by Andy_McKenzie · 2012-04-01T22:09:05.376Z · LW(p) · GW(p)

I adore Western medicine. I trust my doctor with my life. I’m just not sure I trust her with my death. Keep in mind that when it comes to your body and those of your family and who’s dead and who’s alive, who’s conscious and who’s not, your own judgment may be better than anyone else’s.

Dick Teresi, The Undead

comment by gyokuro · 2012-04-10T00:49:18.831Z · LW(p) · GW(p)

They were conquerors, and for that you want only brute force -- nothing to boast of, when you have it, since your strength is just an accident arising from the weakness of others.

--Joseph Conrad, Heart of Darkness

comment by Will_Newsome · 2012-04-02T01:49:13.564Z · LW(p) · GW(p)

In the small circle of pain within the skull
You still shall tramp and tread one endless round
Of thought, to justify your action to yourselves,
Weaving a fiction which unravels as you weave,
Pacing forever in the hell of make-believe
Which never is belief: this is your fate on earth
And we must think no further of you.

T. S. Eliot, Murder in the Cathedral

comment by Klevador · 2012-04-14T08:20:45.233Z · LW(p) · GW(p)

"The material world," continued Dupin, "abounds with very strict analogies to the immaterial; and thus some color of truth has been given to the rhetorical dogma, that metaphor, or simile, may be made to strengthen an argument, as well as to embellish a description. The principle of the vis inertiae, for example, seems to be identical in physics and metaphysics. It is not more true in the former, that a large body is with more difficulty set in motion than a smaller one, and that its subsequent momentum is commensurate with this difficulty, than it is, in the latter, that intellects of the vaster capacity, while more forcible, more constant, and more eventful in their movements than those of inferior grade, are yet the less readily moved, and more embarrassed and full of hesitation in the first few steps of their progress.

— Poe, The Purloined Letter

comment by taelor · 2012-04-02T09:02:20.496Z · LW(p) · GW(p)

He who refuses to do arithmetic is doomed to talk nonsense.

-- John McCarthy

Replies from: Nominull
comment by Nominull · 2012-04-02T16:05:24.686Z · LW(p) · GW(p)

Repeat

Replies from: MarkusRamikin
comment by MarkusRamikin · 2012-04-04T11:48:21.802Z · LW(p) · GW(p)

I'm starting to feel it was a mistake to have so many of those threads instead of a single one.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-04-04T11:58:12.378Z · LW(p) · GW(p)

A single thread would have been of unmanageable size.

Replies from: MarkusRamikin
comment by MarkusRamikin · 2012-04-04T12:06:56.803Z · LW(p) · GW(p)

In what sense unmanageable? What would it make harder to do that is easy to do now?

It seems to me the current setup makes it harder to know if you're posting a repeat, or to display a list of all top quotes.

Also, I think it leads to more barrel-scraping this way; it seems to me that for the most part we ran out of the really great quotes and now often things get posted that have no special rationality lesson, but instead appeal to the tastes and specific beliefs common in our particular community.

Replies from: NancyLebovitz, NancyLebovitz
comment by NancyLebovitz · 2012-04-04T12:44:39.755Z · LW(p) · GW(p)

Unmanageable because the site software doesn't show more than 500 (top-level?) comments, and because large numbers of comments load more slowly.

There's a way to find top-voted quotes-- Best of Rationality Quotes 2009/2010 (Warning: 750kB page, 774 quotes). This could be considered a hint about the quantity problem.

There is another one for 2011.

As for dupes, the search on the site is adequate for finding them-- what's needed is a recommendation on the quotes page for people to check before posting.

I think the quotes continue to be somewhat interesting, but it's not so much that there are no great ones left (though I was surprised to discover recently that "Nature to be commanded must be obeyed" hadn't been listed) as that they tend to keep hitting the same points.

Replies from: MarkusRamikin, NancyLebovitz
comment by MarkusRamikin · 2012-04-04T12:54:45.971Z · LW(p) · GW(p)

I see. Thank you.

It seems to me that there's room for improvement to the software, then. However, I'll shut up at this point.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-04-04T12:58:09.269Z · LW(p) · GW(p)

You're welcome.

There's always room for improvement in the software. Once in a while, there's a request for suggestions, so you might want to think about the changes you'd like to see.

comment by NancyLebovitz · 2012-04-08T15:05:47.807Z · LW(p) · GW(p)

To my mind, the redundancy problem with the quotes pages isn't so much repeated quotes as different quotes which mean pretty much the same thing.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-04-11T08:44:51.948Z · LW(p) · GW(p)

How many different things are there to say about rationality?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-04-11T09:12:15.553Z · LW(p) · GW(p)

Well, the right question is "How many different brief things are there to say about rationality?"

If you're allowed to go on at length, the sequences imply that there's quite a bit to say.

I don't think the question about brief statements has an a priori answer.

comment by NancyLebovitz · 2012-04-04T14:19:49.046Z · LW(p) · GW(p)

Thanks for asking about unmanageability.

That fits neatly with the importance of being specific.

I had enough experience with the site to know that very long threads don't work well and to have a feeling for the quote threads adding up to a huge lump, but I had it in my mind as one chunk and didn't realize that if you suggested a single quote thread, it was worth considering that you didn't have my background knowledge.

comment by Klevador · 2012-04-14T06:20:47.961Z · LW(p) · GW(p)

Tom: "Diana, have you ever confronted a moral dilemma?"

Diana: "I have spent my life confronting real dilemmas. I have always found moral dilemmas to be the indulgence of the well-fed middle class."

— Waiting for God (TV Series)

Replies from: tut
comment by tut · 2012-04-16T12:22:01.857Z · LW(p) · GW(p)

Is there a point to this quote, besides that this Diana character doesn't understand the term 'moral dilemma'?

Replies from: Eugine_Nier, Normal_Anomaly
comment by Eugine_Nier · 2012-04-29T21:21:42.219Z · LW(p) · GW(p)

That the kind of "moral dilemmas" philosophers tend to contemplate are very different from the kind of dilemmas people encounter in practice.

comment by Normal_Anomaly · 2012-04-28T22:12:13.506Z · LW(p) · GW(p)

Perhaps that it requires significant time and cognitive energy to make difficult decisions in general or reflectively modify one's moral system in particular?

ETA: can someone explain the downvote?

comment by EditedToAdd · 2012-04-02T15:35:10.568Z · LW(p) · GW(p)

What is a man? A miserable little pile of secrets. (0:43 – 0:48)

--- Dracula

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-02T20:47:06.590Z · LW(p) · GW(p)

"What is a man? A miserable little pile of replicators!" "What is a man? A miserable little pile of thermostats!"

Replies from: HonoreDB
comment by CronoDAS · 2012-04-13T17:50:27.587Z · LW(p) · GW(p)

Maybe this song won't get downvoted? It's a little more on-topic for LessWrong, even if it does get political at the end. ;)

It was back in 1941.
I was a member of a good platoon.
We were on maneuvers in Lou'siana one night
By the light of the moon.
The Captain told us to ford a river.
That's how it all begun.
We were knee deep in the Big Muddy,
And the big fool said to push on.

The Sergeant said, "Sir, are you sure
This is the best way back to the base?"
"Sergeant, go on, I've forded this river
About a mile above this place.
It'll be a little soggy, but just keep sloggin'.
We'll soon be on dry ground."
We were waist deep in the Big Muddy,
And the big fool said to push on.

The Sergeant said, "Sir, with all this equipment,
No man will be able to swim."
"Sergeant, don't be a Nervous Nelly,"
The Captain said to him.
"All we need is a little determination.
Men, follow me. I'll lead on."
We were neck deep in the Big Muddy,
And the big fool said to push on.

All at once the moon clouded over.
We heard a gurglin' cry.
A few seconds later the Captain's helmet
Was all that floated by.
The Sergeant said, "Turn around, men.
I'm in charge from now on."
And we just made it out of the Big Muddy
With the Captain dead and gone.

We stripped and dived and found his body
Stuck in the old quicksand.
I guess he didn't know that the water was deeper
Than the place he'd once before been.
Another stream had joined the Big Muddy
About a half mile from where we'd gone.
We were lucky to escape from the Big Muddy
When the big fool said to push on.

Now I'm not going to point any moral —
I'll leave that for yourself.
Maybe you're still walking, you're still talking,
You'd like to keep your health.
But every time I read the papers, that old feeling comes on,
We're waist deep in the Big Muddy
And the big fool says to push on.

Waist deep in the Big Muddy,
The big fool says to push on.
Waist deep in the Big Muddy,
The big fool says to push on.
Waist deep, neck deep,
Soon even a tall man will be over his head.
We're waist deep in the Big Muddy,
And the big fool says to push on.

-- Pete Seeger, "Waist Deep in the Big Muddy"

Replies from: CronoDAS, wedrifid
comment by CronoDAS · 2012-04-13T20:51:55.079Z · LW(p) · GW(p)

Quick question: Is this getting downvoted because of the quote or because I talked about downvoting?

(The song itself is a rather amusing lesson in escalation of commitment and sunk cost fallacy, among other things...)

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-04-14T10:36:28.212Z · LW(p) · GW(p)

It's too long. This thread is about quotes, not about making others read a whole piece of work you like. Perhaps use the monthly media thread for that purpose?

For this thread you could have perhaps reduced the quotable to this:

But every time I read the papers, that old feeling comes on,
We're waist deep in the Big Muddy
And the big fool says to push on.

or perhaps even two verses would be acceptable, like this:

The Sergeant said, "Sir, with all this equipment,
No man will be able to swim."
"Sergeant, don't be a Nervous Nelly,"
The Captain said to him.
"All we need is a little determination.
Men, follow me. I'll lead on."
We were neck deep in the Big Muddy,
And the big fool said to push on.

All at once the moon clouded over.
We heard a gurglin' cry.
A few seconds later the Captain's helmet
Was all that floated by.
The Sergeant said, "Turn around, men.
I'm in charge from now on."
And we just made it out of the Big Muddy
With the Captain dead and gone.

and just linked to some other page where one could see the whole song.

But not the whole damn thing.

Replies from: CronoDAS
comment by CronoDAS · 2012-04-14T21:54:31.546Z · LW(p) · GW(p)

It's too long. This thread is about quotes, not about making others read a whole piece of work you like. Perhaps use the monthly media thread for that purpose?

Thanks.

comment by wedrifid · 2012-04-13T17:51:44.969Z · LW(p) · GW(p)

Maybe this song won't get downvoted? ;)

If I downvoted this comment but not the song would that count or not?

Replies from: CronoDAS
comment by CronoDAS · 2012-04-13T20:48:28.613Z · LW(p) · GW(p)

How can I tell the difference? (I assume that you mean downvoting the song on Youtube?)

Replies from: chaosmosis
comment by chaosmosis · 2012-04-24T18:02:36.406Z · LW(p) · GW(p)

Lolz. I think he meant "downvoted this comment" where "this" means "the comment he was quoting" as opposed to the other comment which contained the song.

comment by Will_Newsome · 2012-04-02T01:47:30.422Z · LW(p) · GW(p)

You think me reckless, desperate and mad.
You argue by results, as this world does,
To settle if an act be good or bad.
You defer to the fact. For every life and every act
Consequence of good and evil can be shown.
And as in time results of many deeds are blended
So good and evil in the end become confounded.
It is not in time that my death shall be known;
It is out of time that my decision is taken
If you call that decision
To which my whole being gives entire consent.
I give my life
To the Law of God above the Law of Man.
Those who do not the same
How should they know what I do?

T. S. Eliot, Murder in the Cathedral

Replies from: BlazeOrangeDeer
comment by BlazeOrangeDeer · 2012-04-02T05:54:29.856Z · LW(p) · GW(p)

Correct me if I'm wrong, but does this seem like an affirmation of religious morality and denouncement of consequentialism? I'm failing to see the rationality here.

Replies from: hairyfigment, Will_Newsome
comment by hairyfigment · 2012-04-09T05:47:05.791Z · LW(p) · GW(p)

He probably means that in a Big World of certain kinds, he thinks, TDT/UDT leads to unpopular conclusions. E.g. that we should believe in all deities who punish disbelief if they exist in some possible world.

This seems close to the reason I rejected the mathematical macrocosm hypothesis, even before Someone Who's Probably Not Will Newsome explained part of his position. If Tegmark IV either does not constrain anticipation, or calls you a Boltzmann Brain equivalent, then it fails as an explanation. And upon close inspection, I don't think I reject Boltzmann just by applying decision theory. It seems logically absurd to say, "I am likely a Boltzmann Brain, but acting like a real boy has more expected value." The first clause means I shouldn't trust my reasoning and should likely just think happy thoughts. I think the best theory of reality will say a random (real) mind would likely benefit from rationality, to at least the extent that we appear to benefit.

comment by Will_Newsome · 2012-04-02T07:29:28.346Z · LW(p) · GW(p)

Rationality in: Recognition of timeless/timeful distinction (Law of God, Law of Man), Emphasizing timeless effects even when they're heavily discountable, Pointing out that history tends to make fools of the temporally good, Touching on the touchy theme of consent, Proposing arguments about when it is or is not justified to take-into-account or ignore the arguments of others who seem to be acting in good faith.

Also, even a simple counter-affirmation to local ideology is itself useful if it's sufficiently eloquently-stated.

(Pretty drunk, apologies for any errors.)

Replies from: wedrifid, Incorrect
comment by wedrifid · 2012-04-02T08:07:58.965Z · LW(p) · GW(p)

(Pretty drunk, apologies for any errors.)

You mean the part where you equate 'timeless' considerations with the Law of God?

Replies from: gjm, Will_Newsome, Will_Newsome
comment by gjm · 2012-04-03T10:53:50.306Z · LW(p) · GW(p)

Conditional on the existence of a Law of God (and the sort of god in whom Eliot believed) that's not so very unreasonable. It's worth distinguishing between "irrational" and "rational but based on prior assumptions I find very improbable".

(None the less, I don't think there's much rationality in the lines Will_Newsome quoted, though it does gesticulate in the general direction of an important difficulty with consequentialism: a given action has a lot of consequences and sorting out the net effect is difficult-to-impossible; so we have to make do with a bunch of heuristic approximations to consequentialism. I'll still take that over a bunch of heuristic approximations to the law of a probably-nonexistent god, any day.)

comment by Will_Newsome · 2012-04-02T19:46:32.239Z · LW(p) · GW(p)

Wait, it explicitly says that his decision (if you call that "decision" to which his whole being gives entire consent) to give his life to the Law of God should (and is to) be taken timelessly ("out of time"). ...I don't see how that's not clear. Most of the time when people complain about equivocation/syncretism it's because the (alleged) meaning is implicit or hidden one layer down, but that's not the case here.

comment by Will_Newsome · 2012-04-02T19:42:52.284Z · LW(p) · GW(p)

That's definitely not an error. Have you read much T. S. Eliot? He was obsessed with the timeful/timeless local/global distinction. Read Four Quartets.

Replies from: wedrifid
comment by wedrifid · 2012-04-02T21:55:03.600Z · LW(p) · GW(p)

That's definitely not an error. Have you read much T. S. Eliot? He was obsessed with the timeful/timeless local/global distinction. Read Four Quartets.

I wasn't trying to imply you misrepresented T. S. Eliot's obsession. Just that you make an error in advocating it as an example of a "Rationality Quote". Because it's drivel.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-02T22:07:49.623Z · LW(p) · GW(p)

0_o

/sigh...

comment by Incorrect · 2012-04-02T22:16:27.062Z · LW(p) · GW(p)

What is the empirical difference between a person who is temporally vs timelessly good?

comment by CronoDAS · 2012-04-01T22:45:47.931Z · LW(p) · GW(p)

I believe the children are our are future
Teach them well and let them lead the way
Show them all the beauty they possess inside
Give them a sense of pride to make it easier
Let the children's laughter remind us how we used to be
Everybody searching for a hero
People need someone to look up to
I never found anyone who fulfilled my needs
A lonely place to be
So I learned to depend on me

I decided long ago, never to walk in anyone's shadows
If I fail, if I succeed, at least I'll live as I believe
No matter what they take from me, they can't take away my dignity
Because the greatest love of all is happening to me
I found the greatest love of all inside of me
The greatest love of all is easy to achieve
Learning to love yourself - it is the greatest love of all

-- from "The Greatest Love of All", Music by Michael Masser, Lyrics by Linda Creed
First performed by George Benson

Edit: truncated at Alicorn's suggestion.

Replies from: arundelo
comment by arundelo · 2012-04-02T03:14:45.316Z · LW(p) · GW(p)

I think this is a rare pop song that successfully treats a somewhat abstract idea.

Performed by George Benson

Among others, of course! Benson's is good, though, and I had not heard it before. Moar George Benson (with the McCoy Tyner trio).

Replies from: CronoDAS
comment by CronoDAS · 2012-04-01T22:05:00.607Z · LW(p) · GW(p)

(http://www.youtube.com/watch?v=U2qLu1CYBf4)

Listen boy, don't want to see you let a good thing slip away
You know I don't like watching anybody make the same mistakes I made
She's a real nice girl And she's always there for you
But a nice girl wouldn't tell you what you should do

Listen boy, I'm sure that you think you got it all under control
You don't want somebody telling you the way to stay in someone's soul
You're a big boy now, you'll never let her go
But that's just the kind of thing she ought to know

Tell her about it! Tell her everything you feel
Give her every reason to accept that you're for real

Tell her about it!
Tell her all your crazy dreams
Let her know you need her, let her know how much she means

Listen boy, it's not automatically a certain guarantee
To insure yourself you've got to provide communication constantly
When you love someone, you're always insecure
And there's only one good way to reassure

Tell her about it! Let her know how much you care
When she can't be with you tell her you wish you were there
Tell her about it
Every day before you leave
Pay her some attention, give her something to believe

Cause now and then she'll get to worrying
Just because you haven't spoken for so long
Though you may not have done anything
Will that be a consolation when she's gone?

Listen boy, it's good information from a man who's made mistakes
Just a word or two that she gets from you could be the difference that it makes
She's a trusting soul, she's put her trust in you
But a girl like that won't tell you what you should do

Tell her about it
Tell her everything you feel
Give her every reason to accept that you're for real
Tell her about it
Tell her all your crazy dreams
Let her know you need her, let her know how much she means

Tell her about it
Tell her how you feel right now
Tell her about it
The girl don't want to wait too long
You got to tell her about it
Tell her now and you won't go wrong
You got to tell her about it
Before it gets too late
You got to tell her about it
You know the girl don't want
To wait--you got to
Tell her about it

-- Billy Joel

Replies from: Alicorn
comment by Alicorn · 2012-04-02T00:47:18.112Z · LW(p) · GW(p)

Your song lyrics might be better received if you truncated them somewhere and possibly included an explanation.

Replies from: CronoDAS
comment by CronoDAS · 2012-04-02T07:23:57.155Z · LW(p) · GW(p)

Well, if people want to downvote good advice because it's in song form, that's not my problem. ;)

It's not like I have anything to worry about. I've got over seven thousand karma right now, so it's not like my posting privileges are in danger or anything. Downvote away!

(I'll cut down the other one, though.)

Replies from: nshepperd, FiftyTwo
comment by nshepperd · 2012-04-02T10:08:44.708Z · LW(p) · GW(p)

Karma is information, not money.

Replies from: Incorrect
comment by Incorrect · 2012-04-03T01:57:35.730Z · LW(p) · GW(p)

Actually, some people on Less Wrong try to use it as a currency for bets.

comment by FiftyTwo · 2012-04-03T21:34:45.174Z · LW(p) · GW(p)

You shouldn't let the fact that you have given clear and useful comments in the past excuse you from giving equally good ones now.

Replies from: CronoDAS
comment by CronoDAS · 2012-04-03T23:11:28.591Z · LW(p) · GW(p)

What I meant to say was that, well, I think it's a good Rationality Quote, even if a lot of other people don't.

Replies from: Alicorn
comment by Alicorn · 2012-04-03T23:36:10.761Z · LW(p) · GW(p)

Why do you think it's a good rationality quote?