Rationality Quotes August 2014

post by RolfAndreassen · 2014-08-04T03:12:33.667Z · LW · GW · Legacy · 236 comments

Comments sorted by top scores.

comment by Ben Pace (Benito) · 2014-08-08T21:35:08.745Z · LW(p) · GW(p)

Hollywood is filled with feel-good messages about how robotic logic is no match for fuzzy, warm, human irrationality, and how the power of love will overcome pesky obstacles such as a malevolent superintelligent computer. Unfortunately there isn’t a great deal of cause to think this is the case, any more than there is that noble gorillas can defeat evil human poachers with the power of chest-beating and the ability to use rudimentary tools.

From the British newspaper 'The Telegraph', in their article on Nick Bostrom's awesome new book 'Superintelligence'.

I just thought it was a great analogy. Nice to see AI as an X-Risk in the mainstream media too.

Replies from: elharo
comment by elharo · 2014-08-10T10:28:05.586Z · LW(p) · GW(p)

Probably true. It's not like Hollywood is an accurate source of information about anything. (Climate change, asteroid impacts, the legal system, the military, romance, sex, business, anything.) But I fail to see how this is a rationality quote. I'm sure there are many more quotes of the form "Group X is wrong about Topic Y."

I would prefer to limit quotes to those that teach us how to tell whether or how much Group X is right or wrong about Topic Y, and skip quotes that merely turn on the applause lights for a particular topic.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-08-12T15:20:21.650Z · LW(p) · GW(p)

The quote isn't just about Hollywood being wrong, it's about a specific way that it's wrong.

comment by Pablo (Pablo_Stafforini) · 2014-08-04T17:59:45.303Z · LW(p) · GW(p)

A good rule of thumb to ask yourself in all situations is, “If not now, then when?” Many people delay important habits, work and goals for some hypothetical future. But the future quickly becomes the present and nothing will have changed.

Scott Young

Replies from: alanwil2
comment by alanwil2 · 2014-08-06T19:28:02.087Z · LW(p) · GW(p)

I just bought a book on procrastination. I am going to start reading it tomorrow.

comment by jaime2000 · 2014-08-05T04:24:47.963Z · LW(p) · GW(p)

"I want information. I want to understand you. To understand what exactly I'm fighting. You can help me."
"I obviously won't."
"I will kill you if you don't help me. I'm not bluffing, Broadwings. I will kill you and you will die alone and unseen, and frankly you are far too intelligent to simply believe that the stories of ancestral halls are true. You will die and that will probably be it, and nobody will ever know if you talked or not—not that conversing with an enemy in a war you don't support is dishonorable in the first place."
"You'll let me leave if I stonewall, because you don't want to set a precedent of murdering surrendered officers."
"We'll see. Would you like another cup?"
"No."
Derpy smiled deviously. "You know, in that last battle? We didn't fly our cannon up there to the cliffs. Nope. We had Earth ponies drag them. Earth ponies are capable of astounding physical feats, you know. We're probably going to be using more mobility in our artillery deployment going forward, now that they've demonstrated how effective the concept is."
"...why did you tell me that? What would drive you to tell me that?"
"I'll ask again before I continue. Would you like to assist me, Broadwings?"
"I am a gryphon. Telling me your plans will do nothing to change that. I will not barter secrets."
She leaned back, gesturing with a hoof as she talked. "My biggest strengths are that I understand the way crowds think and that I am good at thinking up unexpected ways to solve simple problems. My army's biggest weakness is that my soldiers are inexperienced, and that unexpected developments have an inordinate effect on their morale. Also, my infantry will never be able to stand against a sustained lion charge, so I have to keep finding ways to nullify that disadvantage, and frankly I won't be able to forever."
"I don't understand. What are you doing, Mare? Why are you--"
"--my personal biggest weaknesses," she continued, her smile now malicious, "are my struggles with morality, identity, and my desire to be loved. There's also my relationship with the stallion Macintosh Apple, who is usually called Big Macintosh, with whom I spend upwards of ten hours a day, and on whom I am completely emotionally dependent. If he were to be killed, I'd probably fall apart emotionally. I also have a daughter named Dinky—not by him, mind you—who is in the Southmarch, and who I am very, very guilty about abandoning. If anything were to happen to her I might kill myself. Do you understand yet, Broadwings?"
"Mare, this is insanity. I cannot--"
"--All right then, we'll continue. I also have in this camp Sweetie Belle, Apple Bloom, and Scootaloo, three little fillies, though they're growing quite quickly now. Sweetie Belle is the writer of many propaganda songs, Apple Bloom is Big Mac's sister, who he protects like a daughter, and I believe Scootaloo has no special importance but the other two would defend her to the death. They would be quite easy to kill as well. Do you understand yet?"
"Mare! Are you mad?! Do you have any idea how dangerous it is to tell me these things? Aren't you afraid I would tell--"
"--Good," she nodded. "You're beginning to understand. Let's see. My logistics framework right now is nonexistent. I'm entirely reliant on local villages bringing me food and materiel, and on capturing food and materiel meant for your armies. My army is nowhere near as mobile as it appears, since it can only operate in areas where I have established relationships with each particular village. A bit of simple recon work would let you figure out where I can and cannot go. Do you understand yet?"
Broadwings' eyes opened and his pupils shrank with dawning recognition. "...If I came back to my army, I would use this to defeat you. If I told any other gryphon, they would use it to defeat you. You...you have..."
"Yes. I have sealed your fate; you will not see your home. I can't let you leave now. I absolutely can't. I can now either kill you or keep you prisoner until this war is over—and I don't keep useless prisoners. It's now out of my hooves. One or the other. You pick."

~emkajii, Equestria: Total War

Replies from: devas
comment by devas · 2014-08-05T10:37:41.589Z · LW(p) · GW(p)

This sounds like something from Schelling's The Strategy of Conflict, although I haven't read it.

Replies from: jaime2000, satt
comment by jaime2000 · 2014-08-05T16:51:38.046Z · LW(p) · GW(p)

Yes, that's exactly what I was thinking. General Broadwings thinks General Derpy is bluffing, so Derpy credibly precommits herself to not releasing him by telling him information that would surely doom her army if she did. She gives up the choice of freeing Broadwings, and comes out ahead for it.
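
To make the commitment mechanism concrete, here is a minimal backward-induction sketch in Python. All payoffs are invented for illustration (only their orderings matter), and nothing comes from the story beyond the release/kill/cooperate structure:

```python
# Hypothetical payoffs; only the orderings matter.
# Derpy's options once Broadwings has stonewalled, before and after
# she reveals her army's secrets to him.
derpy_payoffs = {
    "before_reveal": {"release": -1, "kill": -2},   # releasing sets no bad precedent
    "after_reveal":  {"release": -10, "kill": -2},  # releasing him dooms her army
}

# Broadwings' payoffs, given his move and Derpy's response to stonewalling.
broadwings_payoffs = {
    ("stonewall", "release"): 5,    # goes home, honor intact
    ("stonewall", "kill"):    -10,  # dies alone and unseen
    ("cooperate", None):      1,    # lives on as a useful prisoner
}

def best_reply(state):
    """Broadwings chooses, anticipating Derpy's best response (backward induction)."""
    derpy_move = max(derpy_payoffs[state], key=derpy_payoffs[state].get)
    if broadwings_payoffs[("stonewall", derpy_move)] > broadwings_payoffs[("cooperate", None)]:
        return "stonewall", derpy_move
    return "cooperate", None

print(best_reply("before_reveal"))  # ('stonewall', 'release') -- the threat reads as a bluff
print(best_reply("after_reveal"))   # ('cooperate', None) -- the threat is now credible
```

Before the reveal, Derpy's best response to a stonewaller is to release him, so stonewalling is safe; after the reveal, releasing him becomes her worst outcome, killing becomes her best response, and Broadwings' best reply flips to cooperating. Destroying her own option is what makes the threat work.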

comment by satt · 2014-08-07T02:53:38.545Z · LW(p) · GW(p)

It's kind of reminiscent of this, from pages 43-44 of the 1980 edition:

It is not always easy to make a convincing, self-binding, promise. Both the kidnapper who would like to release his prisoner, and the prisoner, may search desperately for a way to commit the latter against informing on his captor, without finding one. If the victim has committed an act whose disclosure could lead to blackmail, he may confess it; if not, he might commit one in the presence of his captor, to create the bond that will ensure his silence. But these extreme possibilities illustrate how difficult, as well as important, it may be to assume a promise.

Compare also Daniel Ellsberg's Kidnap game.

comment by RolfAndreassen · 2014-08-04T05:40:54.896Z · LW(p) · GW(p)

A man is walking on the moon with his eyes turned up toward space
And the bright blue world that watches him reflected on his face.
The whole world sees the hero there and the module crew also.
But few can see the guiding team that guards him from below.

Here's a health to the man who walked the moon, and the module crew above,
And the team that watches from the sky with worry, joy, and love.
To all who blazed the sky-trail come raise your glasses 'round;
And a health to the unknown heroes, too, who never left the ground.

Here's a health to the ship's designers, and the welders of her seams,
And all who man the radar-scan to watch our dawning dreams.
For all the unknown heroes, sing out to every shore:
"What makes one step a giant leap is all the steps before".

Leslie Fish, musically praising the Hufflepuff virtues.

Replies from: Lumifer
comment by Lumifer · 2014-08-04T14:53:37.158Z · LW(p) · GW(p)

And all who man the radar-scan to watch our dawning dreams.

So, the NSA technicians are included in the praising..? X-D

comment by Stabilizer · 2014-08-04T04:01:20.208Z · LW(p) · GW(p)

Surgeons finally did upgrade their antiseptic standards at the end of the nineteenth century. But, as is often the case with new ideas, the effort required deeper changes than anyone had anticipated. In their blood-slick, viscera-encrusted black coats, surgeons had seen themselves as warriors doing hemorrhagic battle with little more than their bare hands. A few pioneering Germans, however, seized on the idea of the surgeon as scientist. They traded in their black coats for pristine laboratory whites, refashioned their operating rooms to achieve the exacting sterility of a bacteriological lab, and embraced anatomic precision over speed.

The key message to teach surgeons, it turned out, was not how to stop germs but how to think like a laboratory scientist. Young physicians from America and elsewhere who went to Germany to study with its surgical luminaries became fervent converts to their thinking and their standards. They returned as apostles not only for the use of antiseptic practice (to kill germs) but also for the much more exacting demands of aseptic practice (to prevent germs), such as wearing sterile gloves, gowns, hats, and masks. Proselytizing through their own students and colleagues, they finally spread the ideas worldwide.

-Atul Gawande

comment by arundelo · 2014-08-05T05:19:21.806Z · LW(p) · GW(p)

That's why I'm skeptical of people who look at some catastrophic failure of a complex system and say, "Wow, the odds of this happening are astronomical. Five different safety systems had to fail simultaneously!" What they don't realize is that one or two of those systems are failing all the time, and it's up to the other three systems to prevent the failure from turning into a disaster.

-- Raymond Chen

Replies from: satt, dspeyer, Gunnar_Zarncke
comment by satt · 2014-08-07T01:16:20.202Z · LW(p) · GW(p)

In other words, some of the slices in one's Swiss cheese model are actually missing entirely.

comment by dspeyer · 2014-08-05T21:24:34.969Z · LW(p) · GW(p)

Corollary: if you're running a system for which five simultaneous failures is a disaster, monitor each safety system separately and treat any three simultaneous failures as a disaster.
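
A rough Monte Carlo sketch of the arithmetic behind this corollary, in Python (the 5% per-layer failure rate and the independence assumption are invented for illustration):

```python
import random

random.seed(0)
LAYERS, P_FAIL, TRIALS = 5, 0.05, 1_000_000  # hypothetical numbers

any_down = alarm = disaster = 0
for _ in range(TRIALS):
    failures = sum(random.random() < P_FAIL for _ in range(LAYERS))
    any_down += failures >= 1       # "one or two ... are failing all the time"
    alarm += failures >= 3          # the treat-as-disaster threshold above
    disaster += failures == LAYERS  # all five fail at once

print(f"some layer down:   {any_down / TRIALS:.2%}")   # analytically 1 - 0.95^5, about 22.6%
print(f"3+ layers down:    {alarm / TRIALS:.4%}")      # analytically about 0.12%
print(f"all 5 layers down: {disaster / TRIALS:.6%}")   # analytically 0.05^5, about 0.00003%
```

The point is the gap in frequency: "some layer is down" is routine, "every layer is down" is astronomical, and the three-failure alarm sits usefully in between, firing rarely enough to be actionable but thousands of times more often than the catastrophe it warns of.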

comment by Gunnar_Zarncke · 2014-08-07T22:02:13.543Z · LW(p) · GW(p)

Also known as Fundamental Failure Mode. From Systemantics:

System failure

The Fundamental Failure-Mode Theorem (F.F.T.): Complex systems usually operate in failure mode.

A complex system can fail in an infinite number of ways. (If anything can go wrong, it will.) (See Murphy's law.)

The mode of failure of a complex system cannot ordinarily be predicted from its structure.

The crucial variables are discovered by accident.

The larger the system, the greater the probability of unexpected failure.

"Success" or "Function" in any system may be failure in the larger or smaller systems to which the system is connected.

The Fail-Safe Theorem: When a Fail-Safe system fails, it fails by failing to fail safe.

comment by dspeyer · 2014-08-05T21:34:00.411Z · LW(p) · GW(p)

It was a gamble: would people really take time out of their busy lives to answer other people’s questions, for nothing more than fake internet points and bragging rights?

It turns out that people will do anything for fake internet points.

Just kidding. At best, the points, and the gamification, and the focused structure of the site did little more than encourage people to keep doing what they were already doing. People came because they wanted to help other people, because they needed to learn something new, or because they wanted to show off the clever way they’d solved a problem.

...

An incredible number of people jumped at the chance to help a stranger.

-- Jay Hanlon, Five year retrospective on StackOverflow

Replies from: satt, cody-bryce, ike
comment by satt · 2014-08-07T01:38:38.647Z · LW(p) · GW(p)

On the other hand, a Slashdot comment that's stuck in my mind (and on my hard disks) since I read it years ago:

In one respect the computer industry is exactly like the construction industry: nobody has two minutes to tell you how to do something...but they all have forty-five minutes to tell you why you did it wrong.

When I started working at a tech company, as a lowly new-guy know-nothing, I found that any question starting with "How do I..." or "What's the best way to..." would be ignored; so I had to adopt another strategy. Say I wanted to do X. Research showed me there were (say) about six or seven ways to do X. Which is the best in my situation? I don't know. So I pick an approach at random, though I don't actually use it. Then I wander down to the coffee machine and casually remark, "So, I needed to do X, and I used approach Y." I would then, inevitably, get a half-hour discussion of why that was stupid, and what I should have done was use approach Z, because of this, this, and this. Then I would go off and use approach Z.

In ten years in the tech industry, that strategy has never failed once. I think the key difference is the subtext. In the first strategy, the subtext is, "Hey, can you spend your valuable time helping me do something trivial?" while in the second strategy, the subtext is, "Hey, here's a chance to show off how smart you are." People being what they are, the first subtext will usually fail -- but the second will always succeed.

— fumblebruschi

Replies from: NancyLebovitz, Sarunas, TheMajor
comment by NancyLebovitz · 2014-08-12T15:15:08.084Z · LW(p) · GW(p)

In addition to the specific advice, this is an excellent example of rationality because it's about getting the best from people as they are rather than being resentful because they aren't behaving as they would if they were ideally rational.

Replies from: satt
comment by satt · 2014-08-16T16:50:29.689Z · LW(p) · GW(p)

I can't be sure, because I first read that comment so long ago, but I think I took it as an inspiration to be better than the co-workers at the coffee machine. It's repellent to imagine myself as a person who'd spend 45 minutes on a Yer Doin It Rong lecture but wouldn't spend 2 minutes to explain how to do something properly in the first place.

comment by Sarunas · 2014-08-20T22:36:37.253Z · LW(p) · GW(p)

This is known as Cunningham's Law. Another example. The explanation (non-competitive vs. competitive mindsets, the latter of which is more motivating to act) seems quite convincing. In addition, could there also be an analogy to loss aversion (a tendency to prefer avoiding losses to acquiring gains)? Would people feel more urgency to correct what they see as wrong (and thus defend what they see as correct) than to explain what is right ("less wrong" vs. "more right", if we are not trying to avoid puns)?

comment by TheMajor · 2014-08-07T06:30:05.066Z · LW(p) · GW(p)

A reply because an upvote doesn't begin to cover it. I might start using this!

comment by cody-bryce · 2014-09-07T04:36:43.259Z · LW(p) · GW(p)

Convincing people to offer others programming help on the internet isn't a special accomplishment of SO. From Usenet to modern mailing lists to forums to IRC, there are tons and tons of thriving venues for it. The gamification might have helped SO's popularity some, but people taking time out of their busy lives to answer others' questions were alive and well long before.

SO is a dangerous trash heap. It doesn't encourage helping people make good programs; it answers extremely literal questions. Speed of posting is important. Style of post is important. Blatantly wrong early answers are upvoted by people who don't know what they're looking at, which means the vote count isn't ever a reliable signal. Doing anything but answering a question completely literally is treated with extreme hostility. These sorts of things have gotten worse with time.

The community relations are bizarre. Active members of the community buy into cheap salesman lines by the owners that are meant to favor the owners. The idea that the community can direct itself is thrown around as if it wasn't blatantly untrue.

Yes, an incredible number of people jump at the chance to help strangers. SO didn't invent that; it's just one of the more popular current hosts to these people. It's distasteful to frame the site's origin as wondering whether such people exist.

Replies from: Lumifer
comment by Lumifer · 2014-09-08T01:49:06.439Z · LW(p) · GW(p)

It doesn't encourage helping people make good programs; it answers extremely literal questions.

So? That's fine. "Helping people make good programs" is awfully fuzzy and is likely to start by major holy wars breaking out. SO is useful, at least for me, because it offers fast concise answers to very specific and literal questions I have on a regular basis.

I can't say anything about the internal politics of SO since I don't play there.

comment by ike · 2014-08-06T02:05:47.477Z · LW(p) · GW(p)

Well, did they test the popularity of sites without fake internet points vs. the popularity of sites with them, controlling for relevant factors? I skimmed through the post, and there wasn't much actual data on what people do and why, just assertions.

Replies from: Azathoth123
comment by Azathoth123 · 2014-08-06T02:57:57.648Z · LW(p) · GW(p)

I thought the point of the points was to weed out the people whose "help" you don't want.

Replies from: ike
comment by ike · 2014-08-06T20:07:31.839Z · LW(p) · GW(p)

That would account for reputation, not badges. (No one says "Hey, I got two answers from people with the same rep, but one has twice as many badges, so I'll go with that one.")

On the actual question, I've seen meta-posts on Stack Exchange complaining that they qualified for a badge and didn't get it, so the stuff does matter somewhat.

comment by CronoDAS · 2014-08-04T08:24:52.642Z · LW(p) · GW(p)

The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it.

-- Alberto Brandolini (via David Brin)

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-08-04T09:11:07.445Z · LW(p) · GW(p)

Refuting frequently appearing bullshit could be made more efficient by having a web page with standard explanations which could be linked from the debate. Posting a link (perhaps with a short summary, which could also be provided on the top of that web page) does not require too much energy.

Which would create another problem, of protecting that web page from bullshit created by reversing stupidity, undiscriminating skepticism, or simply affective death spirals about that web page. (Yes, I'm thinking about RationalWiki.) Maybe we could have multiple anti-bullshit websites, which would sometimes explain using their own words, and sometimes merely by linking to another website's explanation they agree with.

Replies from: CronoDAS, ChristianKl, Jiro, AndHisHorse, Gunnar_Zarncke
comment by CronoDAS · 2014-08-04T09:18:56.219Z · LW(p) · GW(p)

http://www.talkorigins.org/indexcc/ is considered a good one on the single issue of creationism vs. evolution.

Replies from: fubarobfusco
comment by fubarobfusco · 2014-08-04T21:07:20.582Z · LW(p) · GW(p)

Yes, it is, and The Counter-Creationism Handbook sits next to Darwin, Dawkins, and Diamond on my shelf. It would be a Good Thing if folks in other bullshit-fighting arenas had the level of scholarship exhibited by Mark Isaak and his collaborators.

(Hell, every time I see a "bingo card" ridiculing an Other Side's arguments, I wish its creators had the time and scholarly dedication of the talk.origins folk.)

comment by ChristianKl · 2014-08-11T10:47:13.593Z · LW(p) · GW(p)

undiscriminating skepticism

I think that's a bad description. The kind of people on RationalWiki are very discriminating. When something is said by an authority they trust, they aren't skeptical, and when something is said by someone they don't trust, they are very "skeptical".

Maybe we could have multiple anti-bullshit websites

I don't think that framing yourself as "anti-bullshit" is helpful. It makes more sense to frame yourself as being pro-evidence. We already have multiple websites that explain issues.

I personally like Skeptics Stackexchange. If I come about a new claim I often simply go and open a question over there.

When it comes to an issue such as vaccination I think Vox has a decent primer: http://www.vox.com/cards/vaccines/what-is-vaccine

comment by Jiro · 2014-08-08T19:01:22.838Z · LW(p) · GW(p)

How does this differ from religious groups refusing to answer questions that dispute things said by their religion, and instead referring you to scripture passages or Christian apologetics?

Of course it's different in that you will link to refutations that are good arguments, and the religious person will link to apologetics that are bad arguments, but aside from that, how is it different? After all, you can't very well say that certain tactics are acceptable or unacceptable based on whether the associated arguments are good or bad.

Replies from: Viliam_Bur, Richard_Kennaway
comment by Viliam_Bur · 2014-08-09T18:20:49.128Z · LW(p) · GW(p)

Depends on the audience and topic. Also, sometimes the goal is not to convince your opponent, but to convince the bystanders.

Imagine that you are on a web forum where someone comes and writes a long comment about "Isn't it horrible that vaccination causes autism, and yet the government wants us to vaccinate our children? I would do anything to protect my child from autism!" and some information probably copied from some other webpage. It's not just you and them; there are also other readers who don't have a clue and may be frightened by the message. (And they will not use google, because... well, humans are stupid.)

If nobody opposes the message, it seems like there is a clear consensus among the people who care about the topic. If you oppose them yourself, you are wasting your time. -- But if you post a link to a good explanation, then the people frightened by the message can read the explanation and hear a dissenting voice, while you wouldn't have to spend a lot of time... assuming there is a good anti-bullshit page where you just enter "vaccination, autism" in the search box, and it shows you a well-written page about the topic. Where well-written means a short layman-accessible summary at the top, and then detailed arguments and references below.

Replies from: Jiro
comment by Jiro · 2014-08-10T16:01:33.378Z · LW(p) · GW(p)

But by that same reasoning, a fundamentalist Christian could come here, see that someone has written a long comment about, say, evolution, and reply with a link to a prewritten web page listing 100 arguments against evolution. He reasons that if he posts a good explanation, people who are frightened by the idea of fundamentalists being a menace can read the explanation and hear a "dissenting voice"...

As far as he is concerned, he has followed your recommendations exactly. Is there something you could say which explains why his behavior is unacceptable, but the behavior you describe is acceptable, that does not involve "our anti-anti-vaccination page is well-written and your anti-evolution page is not"?

(Alternatively, would you find his behavior acceptable? This seems odd.)

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-08-11T07:50:42.509Z · LW(p) · GW(p)

A fundamentalist Christian who posted here a link to a page listing arguments against evolution would be more effective than one who tried to debate, because they would achieve the same (in this situation: zero) effect while spending far fewer resources. Each of the people who tried to debate them would waste more of their own time reading the linked page and composing a reply. So, I believe this is a good strategy.

Specifically on LW we have an (unwritten?) norm that if you post a link, you should also provide a summary using your own words. Which probably was designed to counter this strategy. But there are websites which don't have this norm, e.g. Facebook.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-08-11T09:00:46.419Z · LW(p) · GW(p)

Specifically on LW we have an (unwritten?) norm that if you post a link, you should also provide a summary using your own words. Which probably was designed to counter this strategy.

It is not specific to LW, but a custom of good practice that personally, I have followed ever since there has been such a thing as a link (and before then, when the equivalent was posting to an email list a cut-and-paste of someone else's words without any words from the person posting). I also practice the custom of ignoring links that come to me without context.

I recommend both parts of this practice to everyone.

comment by Richard_Kennaway · 2014-08-11T10:19:15.284Z · LW(p) · GW(p)

How does this differ from religious groups refusing to answer questions that dispute things said by their religion, and instead referring you to scripture passages or Christian apologetics?

How does answering questions at length in one's own words differ from religious groups answering questions at length in their own words?

Replies from: Jiro
comment by Jiro · 2014-08-11T15:52:10.758Z · LW(p) · GW(p)

It doesn't differ. But it doesn't have to, since we consider it acceptable behavior for religious people to come here and answer questions in their own words.

We generally don't consider it acceptable behavior for religious people to come here and respond to posts by giving links to apologetic sites. It should not, then, be acceptable behavior for us except maybe in a few specialized cases (such as where the dispute is purely over facts, like for vaccines).

comment by AndHisHorse · 2014-08-08T17:52:15.836Z · LW(p) · GW(p)

Refuting frequently appearing bullshit is more than a matter of making the facts available. After all, anti-vaccination folks appear with enough frequency to be a curious news item (which I admit is a horrendous metric, but let's pretend it means something), and I'm sure that a quick Google search would yield enough facts to disabuse them of their notions. The trick is building up enough credibility and charisma - if such a property could be applied to an argument - to make such a site not just correct, but convincing. That's where the order of magnitude comes in.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-08-09T18:26:02.311Z · LW(p) · GW(p)

a quick Google search would yield enough facts to disabuse them of their notions

Most people are not strategic enough to use Google. Or if they get many contradicting results from a Google search, they are not smart enough to decide between them.

Also, there is this bias that if information is brought to you by a person you know, it has a much stronger impact. (For most people, good relationships matter more than truth. If someone brings you information, disbelieving it is a potential conflict with that person.) You only get an equal opposing force if another person you know opposes the original information. For example, by saying "it's bullshit" and posting a link to a refutation.

comment by Gunnar_Zarncke · 2014-08-07T22:13:13.933Z · LW(p) · GW(p)

But you still have to find the proper entry. This just shifts the burden around, and in total, refutation is probably still much more expensive than creation (especially as the same BS can be copied, but the refutation can't).

comment by shminux · 2014-08-26T21:09:12.591Z · LW(p) · GW(p)

is consciousness more like the weather, or is it more like multiplication?

Scott Aaronson

More context:

a perfect simulation of the weather doesn’t make it rain—at least, not in our world. On the other hand, a perfect simulation of multiplying two numbers does multiply the numbers: there’s no difference at all between multiplication and a “simulation of multiplication.” Likewise, a perfect simulation of a good argument is a good argument, a perfect simulation of a sidesplitting joke is a sidesplitting joke, etc.

Maybe the hardware substrate is relevant after all. But [...] I think the burden is firmly on those of us who suspect so, to explain what about the hardware matters and why. Post-Turing, no one gets to treat consciousness’s dependence on particular hardware as “obvious”—especially if they never even explain what it is about that hardware that makes a difference.

comment by Ben Pace (Benito) · 2014-08-05T12:50:36.336Z · LW(p) · GW(p)

But if that were the case, then moral philosophers - who reason about ethical principles all day long - should be more virtuous than other people. Are they? The philosopher Eric Schwitzgebel tried to find out. He used surveys and more surreptitious methods to measure how often moral philosophers give to charity, vote, call their mothers, donate blood, donate organs, clean up after themselves at philosophy conferences, and respond to emails purportedly from students. And in none of these ways are moral philosophers better than other philosophers or professors in other fields.

Schwitzgebel even scrounged up the missing-book lists from dozens of libraries and found that academic books on ethics, which are presumably mostly borrowed by ethicists, are more likely to be stolen or just never returned than books in other areas of philosophy. In other words, expertise in moral reasoning does not seem to improve moral behavior, and it might even make it worse (perhaps by making the rider more skilled at post hoc justification). Schwitzgebel still has yet to find a single measure on which moral philosophers behave better than other philosophers.

  • Jonathan Haidt, discussing the idea that ethical reasoning causes good behaviour, in his book 'The Righteous Mind'.

I found the book-stealing thing quite funny, although I imagine that some of the results described could be explained by popularity; if more people get into / like ethics, then there are more people who might steal library books, more antisocial people who don't respond to emails, etc. This hasn't been demonstrated to my knowledge, though, and I'm otherwise inclined to believe that people who spend their days thinking about ethics in the abstract are simply better at coming up with rationales for their instinctive feelings. Joshua Greene says rights are an example of this: we invoke them as a dictum against whatever our emotions are telling us is despicable, even when we can't find any utilitarian justification for it.

Replies from: dspeyer, Torello, Richard_Kennaway, NancyLebovitz
comment by dspeyer · 2014-08-06T22:56:19.085Z · LW(p) · GW(p)

There's probably a selection effect at work. Would a highly moral person with a capable and flexible mind become a full-time moral philosopher? Take their sustenance from society's philanthropy budget?

Or would they take the talmudists' advice and learn a trade so they can support themselves, and study moral philosophy in their free time? Or perhaps Givewell's advice and learn the most lucrative art they can and give most of it to charity? Or study whichever field allows them to make the biggest difference in peoples' lives? (Probably medicine, engineering or diplomacy.)

Granted, such a person might think they could make such a large contribution to the field of moral philosophy that it would be comparable in impact to other research fields. This seems unlikely.

The same reasoning would keep highly moral people out of other sorts of philosophy, but people who don't have an interest in moral philosophy per se might not notice the point. It's hard to avoid if you specifically study it.

Replies from: None, Viliam_Bur, CCC
comment by [deleted] · 2014-08-31T14:31:19.957Z · LW(p) · GW(p)

This could happen, but I think it's mostly dwarfed by the far larger selection effect that people who are not financially privileged mostly don't attempt to become humanities academics these days -- and for good reason.

Replies from: dspeyer
comment by dspeyer · 2014-09-01T16:05:13.925Z · LW(p) · GW(p)

Are you saying that financially privileged people tend to be less moral?

Replies from: None
comment by [deleted] · 2014-09-03T10:09:41.697Z · LW(p) · GW(p)

While that case has been made in a few isolated studies, I was more generally referring to the fact that people who don't come from money will usually choose careers that make them money, and humanities academia doesn't.

Replies from: Nornagest, dspeyer
comment by Nornagest · 2014-09-03T17:49:08.440Z · LW(p) · GW(p)

Wasn't sure about that, so I tracked down some research (Goyette & Mullen 2006). Turns out you're right: conditioned on getting into college in the first place, higher socioeconomic status (as proxied by parents' educational achievement) is correlated with going into arts and sciences over vocational fields (engineering, education, business). The paper also finds a nonsignificant trend toward choosing arts and humanities over math and science, within the arts and science category.

(Within the vocational majors, though, engineering is the highest-SES category. Business and education are both significantly lower. I don't know which of those would be most lucrative on average but I suspect it'd be engineering.)

Replies from: None
comment by [deleted] · 2014-09-03T22:17:26.067Z · LW(p) · GW(p)

(Within the vocational majors, though, engineering is the highest-SES category. Business and education are both significantly lower. I don't know which of those would be most lucrative on average but I suspect it'd be engineering.)

I think there are several trade-offs there: engineering looks like the highest expected value to us, because we (on LessWrong, mostly) had pre-university educations focused on math, science, and technology. People from lower SES... did not, so fewer of them will survive the weed-out courses taught in "we damn well hope you learned this in AP class" style. And then there's the acclimation to discipline and acclimation to obsessive work-habits (necessary for engineering school) that come from professional parentage... and so on. And then of course, many low-SES people probably want to go into teaching as a helping profession, but that's not a very quantitative explanation and I'm probably just making it up.

On the other hand, engineering colleges tend to have abnormally large quantities of international students and immigrants blatantly focused on careerism. So yeah.

comment by dspeyer · 2014-09-03T17:19:55.055Z · LW(p) · GW(p)

How does that fact impact the morality of moral philosophers as measured?

comment by Viliam_Bur · 2014-08-09T21:59:57.081Z · LW(p) · GW(p)

Granted, such a person might think they could make such a large contribution to the field of moral philosophy that it would be comparable in impact to other research fields. This seems unlikely.

Unlikely that they would make such contribution? Yes. Unlikely that they think they would make such contribution? Maybe no.

But I guess they probably don't even think this way, i.e. don't try to maximize their impact. More likely it is something like: "My contribution to society exceeds my salary, so I am a net benefit to the society." Which is actually possible. Yeah, some people, especially the effective altruists, would consider such thinking evidence against their competence as a moral philosopher.

comment by CCC · 2014-08-07T07:40:53.068Z · LW(p) · GW(p)

Or would they take the talmudists' advice and learn a trade so they can support themselves, and study moral philosophy in their free time?

If someone's studying moral philosophy in their free time, then wouldn't they be taking academic books on ethics out of the library?

comment by Torello · 2014-08-05T13:43:56.239Z · LW(p) · GW(p)

"In 1971, John Rawls coined the term "reflective equilibrium" to denote "a state of balance or coherence among a set of beliefs arrived at by a process of deliberative mutual adjustment among general principles and particular judgments". In practical terms, reflective equilibrium is about how we identify and resolve logical inconsistencies in our prevailing moral compass. Examples such as the rejection of slavery and of innumerable "isms" (sexism, ageism, etc.) are quite clear: the arguments that worked best were those highlighting the hypocrisy of maintaining acceptance of existing attitudes in the face of already-established contrasting attitudes in matters that were indisputably analogous."

-Aubrey de Grey, The Overdue Demise Of Monogamy

This passage argues that reasoning does impact ethical behavior. Steven Pinker and Peter Singer make similar arguments, which I find convincing.

Replies from: None, Benito
comment by [deleted] · 2014-08-31T14:32:33.052Z · LW(p) · GW(p)

I find it quite arguable whether or not "reflective equilibrium" is a real thing that actually happens in our cognition, or a little game played by philosophy academics. Actual cognitive dissonance caused by holding mutually contradicting ideas in simultaneous salience is well-evidenced, but that's not exactly an equilibrium across all ideas we hold, merely across the ones we're holding in short-term verbal memory at the time.

comment by Ben Pace (Benito) · 2014-08-05T18:48:21.423Z · LW(p) · GW(p)

I actually put up another quote arguing for it, by Joshua Greene, making an analogy between successful moral argument and the invention of new technology; even though a person rarely invents a whole new piece of technology, our world is defined by technological advance. Similarly, even though it is rare for a moral norm to change as a result of abstract argument, our social norms have changed dramatically since times gone by.

Nonetheless, the quote works with empirical evidence, the ultimate arbiter of reality. It looks like, whilst moral argument can change our thoughts (and behaviour) on ethical issues, a lot of the time it doesn't. Like technology, the big changes transform our world, but for the most part we're just playing Angry Birds.

comment by Richard_Kennaway · 2014-08-13T08:51:09.449Z · LW(p) · GW(p)

This hasn't been demonstrated to my knowledge though, and I'm otherwise inclined to believe that people who spend their days thinking about ethics in the abstract, are simply better at coming up with rationales for their instinctive feelings.

I think it more likely they're better at coming up with rationales to ignore their instinctive feelings.

Replies from: Jiro, VAuroch
comment by Jiro · 2014-08-13T20:30:43.053Z · LW(p) · GW(p)

I think that someone can believe that their instinctive feelings are an approximation to what is ethical, then try to formalize it, then conclude that they have identified areas where the approximation is in error. So their ethics code could be highly based on their instinctive feelings without following them 100% of the time.

comment by VAuroch · 2014-08-13T09:25:02.386Z · LW(p) · GW(p)

That seems unlikely. People's instinctive feelings are generally pretty selfish. (Small sample size, obviously: I think there are 2 other people with whom I've spoken enough about this kind of thing to judge.)

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-08-13T10:03:49.622Z · LW(p) · GW(p)

None of your sample were people with children, then?

And there's also the question of what is "instinctive" versus whatever the opposite is. What is this distinction and how do you tell?

Replies from: VAuroch
comment by VAuroch · 2014-08-13T21:08:51.850Z · LW(p) · GW(p)

No, but I don't see why children should have an effect; favoring your children over strangers is no less selfish than favoring yourself over strangers, and both are strong instincts.

By instinctive I just mean system 1; the judgments made before you take time to think through what you should do.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-08-14T04:01:27.187Z · LW(p) · GW(p)

No, but I don't see why children should have an effect; favoring your children over strangers is no less selfish than favoring yourself over strangers, and both are strong instincts.

I had intended to draw attention to the phenomenon of favouring one's children over oneself. It appears I was right about the test demographic.

And "no less selfish"? At what point would you consider the widening circle to be "less selfish"? To favour your village over others, your country over others, humanity over animals; are these are all no less selfish? Is nothing unselfish but a life of exaninition and unceasing service to everyone and everything but oneself?

By instinctive I just mean system 1; the judgments made before you take time to think through what you should do.

System 1 is susceptible to training -- that is what training is. We may be born with the neurological mechanism, but not its entire content. "Instinct" more usually means (quoting Wikipedia) "performed without being based upon prior experience". A human without prior experience is a baby.

Replies from: VAuroch
comment by VAuroch · 2014-08-14T06:55:48.538Z · LW(p) · GW(p)

Standard definitions of system 1 describe it as 'instinctive', but if you need a separate definition of instinctive responses, 'untrained system 1 responses' works.

At what point would you consider the widening circle to be "less selfish"? To favour your village over others, your country over others, humanity over animals; are these are all no less selfish? Is nothing unselfish but a life of exaninition and unceasing service to everyone and everything but oneself?

That depends. Any of those things can be unselfish, if you're doing it because you think it's a good thing to do independent of whether it's an outcome/action you like, and the wider the circle the more likely that's the motivation. If it's based on 'I like these people and want them to be happy, therefore I will take this action' that's still selfish.

Lest this sound like I'm saying anything that isn't done for abstract reasons is selfish, I'd contrast it with things done for reasons of compassion. The lines there can get blurry when the people you're feeling compassion for are in your ingroup, but things like the place-quarters-here-for-adorable-sad-children variety of charity are clearly trying to induce compassionate motivation (and it works).

From conversations I have had with my own parents (not as comprehensive or in-depth, but heartfelt), it seemed pretty clear that the parenting instinct is much more 'these kids are mine and I will take care of them come hell or high water' than a compassionate reflex.

comment by NancyLebovitz · 2014-08-12T15:11:47.704Z · LW(p) · GW(p)

Hypothesis: At least some of the people who are interested in ethics are concerned because they have a problem behaving ethically.

comment by Iydak · 2014-08-13T11:57:54.386Z · LW(p) · GW(p)

We try things. Occasionally they even work.

Parson Gotti

Replies from: Bugmaster
comment by Bugmaster · 2014-08-14T04:47:42.052Z · LW(p) · GW(p)

Also this entire comic:

http://www.erfworld.com/book-1-archive/?px=%2F124.jpg

Replies from: lalaithion, lmm
comment by lalaithion · 2014-08-15T06:13:56.936Z · LW(p) · GW(p)

" 'striving for the impossible' doesn't mean 'toiling in vain'. It means growth, it means improvement in the directions of your ideas, not futility."

comment by lmm · 2014-08-25T21:31:24.452Z · LW(p) · GW(p)

Link is broken

Replies from: Iydak
comment by Iydak · 2014-08-26T22:50:48.709Z · LW(p) · GW(p)

Looks like they decided to swap over to a new site not two weeks after I posted it. Should be fixed now.

Replies from: lmm
comment by lmm · 2014-08-27T07:01:25.580Z · LW(p) · GW(p)

Nope, still broken

Replies from: Iydak
comment by Iydak · 2014-08-28T14:21:12.137Z · LW(p) · GW(p)

My link, or Bugmaster's?

Replies from: lmm
comment by lmm · 2014-08-28T23:28:56.139Z · LW(p) · GW(p)

Bugmaster's

comment by Ben Pace (Benito) · 2014-08-04T09:58:03.037Z · LW(p) · GW(p)

A good argument is like a piece of technology. Few of us will ever invent a new piece of technology, and on any given day it’s unlikely that we’ll adopt one. Nevertheless, the world we inhabit is defined by technological change. Likewise, I believe that the world we inhabit is a product of good moral arguments. It’s hard to catch someone in the midst of reasoned moral persuasion, and harder still to observe the genesis of a good argument. But I believe that without our capacity for moral reasoning, the world would be a very different place.

-Joshua Greene, “Moral Tribes”, Endnotes

comment by James_Miller · 2014-08-04T03:35:52.416Z · LW(p) · GW(p)

Come back with your shield - or on it.

Our kind might not be able to cooperate, but the Spartans certainly could. The Spartans were masters of hoplite phalanx warfare, where often every individual would have been better off running away, but collectively everyone was better off if none ran away than if all did. The above quote is what Plutarch says Spartan mothers would tell their sons before battle. (Because shields were heavy, if you were going to run away you would drop yours, and coming back on your shield meant you were dead.) Spreading memes that overcome collective action problems is rationality at the civilization level.
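
The structure described here is a many-player collective action problem. A toy payoff sketch in Python, with invented numbers (only their ordering matters): running dominates for each individual hoplite, yet everyone standing beats everyone running:

```python
# Hypothetical payoffs for one hoplite among 100, given how many of the
# other 99 stand their ground. The phalanx only holds with enough mass.

def payoff(my_move, n_others_standing, n_others=99):
    line_holds = n_others_standing > n_others * 0.8
    if my_move == "stand":
        return 10 if line_holds else -10  # shared victory vs. dying to hold a broken line
    return 12 if line_holds else -5       # free-riding on victory vs. surviving a rout

# Whatever the others do, running pays more for the individual...
for standing in (99, 50, 0):
    print(standing, payoff("stand", standing), payoff("run", standing))
# ...yet all-stand (10 each) beats all-run (-5 each).
```

The Spartan meme changes the payoffs rather than the structure: if coming back without your shield means disgrace worse than death, "run" no longer dominates, and the good equilibrium becomes reachable.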

Replies from: RolfAndreassen, KnaveOfAllTrades, None, Torello
comment by RolfAndreassen · 2014-08-04T03:45:21.792Z · LW(p) · GW(p)

Well... most of what we "know" about the Spartans was written down by their enemies, and may be inaccurate. It is not at all clear that any actual Spartan ever said the words you attribute to them; it may be Plutarch making things up to illustrate how he thought a city ought to work. Which doesn't necessarily make it bad rationality, but does mean it is fictional evidence, not historical.

Replies from: VAuroch, Gunnar_Zarncke
comment by VAuroch · 2014-08-04T21:06:39.134Z · LW(p) · GW(p)

We have significant amounts written by the Ancient Greek equivalent of a Sparta otaku, Thucydides. He lived there for a significant period (IIRC, he was in exile from Athens at the time) and was firsthand familiar.

comment by Gunnar_Zarncke · 2014-08-07T22:37:08.523Z · LW(p) · GW(p)

An example of the orderly battle of the Hellenes, from Xenophon's Anabasis, where the enemy has tenfold numerical superiority:

Clearchus, though he could see the compact body at the centre, and had been told by Cyrus that the king lay outside the Hellenic left (for, owing to numerical superiority, the king, while holding his own centre, could well overlap Cyrus's extreme left), still hesitated to draw off his right wing from the river, for fear of being turned on both flanks; and he simply replied, assuring Cyrus that he would take care all went well.

...

At this time the barbarian army was evenly advancing, and the Hellenic division was still riveted to the spot, completing its formation as the various contingents came up.

...

And now the two battle lines were no more than three or four furlongs apart, when the Hellenes began chanting the paean, and at the same time advanced against the enemy. But with the forward movement a certain portion of the line curved onwards in advance, with wave-like sinuosity, and the portion left behind quickened to a run; and simultaneously a thrilling cry burst from all lips, like that in honour of the war-god—eleleu! eleleu! and the running became general. Some say they clashed their shields and spears, thereby causing terror to the horses (4); and before they had got within arrowshot the barbarians swerved and took to flight. And now the Hellenes gave chase with might and main, checked only by shouts to one another not to race, but to keep their ranks. The enemy's chariots, reft of their charioteers, swept onwards, some through the enemy themselves, others past the Hellenes. They, as they saw them coming, opened a gap and let them pass. One fellow, like some dumbfoundered mortal on a racecourse, was caught by the heels, but even he, they said, received no hurt, nor indeed, with the single exception of some one on the left wing who was said to have been wounded by an arrow, did any Hellene in this battle suffer a single hurt.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2014-08-08T10:00:33.038Z · LW(p) · GW(p)

...according to Xenophon, at any rate. I don't see what that has to do with the alleged Spartan quote.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-08-08T12:09:13.149Z · LW(p) · GW(p)

The Hellenes mentioned in the quote were likely Spartans.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2014-08-09T01:49:03.391Z · LW(p) · GW(p)

Some of the commanders were Spartans, yes; and it does seem likely that the mercenaries segregated themselves at least somewhat by city of origin, so the Spartan commanders probably had Spartan troops. But the tactics described are standard Hellenic ones; there is nothing about them that is special to Sparta, as far as I can see.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-08-09T08:27:12.036Z · LW(p) · GW(p)

I'm neither a historian nor an expert on ancient warfare. My quote was intended to substantiate the claim:

Our kind might not be able to cooperate, but the Spartans certainly could. The Spartans were masters of hoplite phalanx warfare where often every individual would have been better off running away but collectively everyone was better off if none ran away than if all did.

My quote indeed doesn't distinguish between Spartans and Athenians... but that isn't needed, as it appears that all Hellenes were able to cooperate much better than their enemies, and my reading of the Anabasis substantiates this. It is no bad quote at that.

Replies from: lmm
comment by lmm · 2014-08-09T22:09:34.796Z · LW(p) · GW(p)

The reason it's relevant is that some of us consider the Athenians to be "our kind", or at least the closest thing at the time.

comment by KnaveOfAllTrades · 2014-08-04T05:49:07.774Z · LW(p) · GW(p)

Plaudits for actually explaining and justifying your rationality quote. May others follow your example!

comment by [deleted] · 2014-08-31T14:46:53.676Z · LW(p) · GW(p)

"Our kind cannot cooperate" is a common meme for which I've seen comparatively little evidence. Mailing lists are not the real world, and while most people might start flame wars over the tiniest bullshit on mailing lists, their real-world behavior is largely cooperative and prosocial.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-08-31T18:17:51.464Z · LW(p) · GW(p)

Would those be the same people you characterised by these words?

(ChristianKl) Normal civilized humans don't really want to kill other humans.

(eli_sennesh) Well, certainly not nearby humans who have similar skin coloration and evince membership in the same tribe. Those people, on the other hand, are disgusting, and the lot of them simply have to go.

Replies from: None
comment by [deleted] · 2014-08-31T19:44:19.338Z · LW(p) · GW(p)

The comment you're quoting is sarcastic.

comment by Torello · 2014-08-04T17:26:28.078Z · LW(p) · GW(p)

I find it ironic that you use a military example to illustrate how we can achieve collective action at the civilization level.

Isn't the fact the Spartans were willing to "come back with their shields - or on it" the epitome of our kind not being able to cooperate?

I always interpreted "our kind" as the whole of humanity, so for me one sub-set of humanity banding together to destroy another subset (or die trying) isn't a good example of civilization-level cooperation, or the kind of meme that would be useful to spread.

Replies from: Azathoth123, James_Miller
comment by Azathoth123 · 2014-08-05T03:13:44.966Z · LW(p) · GW(p)

I always interpreted "our kind" as the whole of humanity,

Did you read the linked article? In it Eliezer is contrasting rationalist and religious institutions. You may also want to read this to get an idea of the problem James Miller is trying to address. Here is a relevant quote:

Suppose that a country of rationalists is attacked by a country of Evil Barbarians who know nothing of probability theory or decision theory.

Now there's a certain viewpoint on "rationality" or "rationalism" which would say something like this:

"Obviously, the rationalists will lose. The Barbarians believe in an afterlife where they'll be rewarded for courage; so they'll throw themselves into battle without hesitation or remorse. Thanks to their affective death spirals around their Cause and Great Leader Bob, their warriors will obey orders, and their citizens at home will produce enthusiastically and at full capacity for the war; anyone caught skimming or holding back will be burned at the stake in accordance with Barbarian tradition. They'll believe in each other's goodness and hate the enemy more strongly than any sane person would, binding themselves into a tight group. Meanwhile, the rationalists will realize that there's no conceivable reward to be had from dying in battle; they'll wish that others would fight, but not want to fight themselves. Even if they can find soldiers, their civilians won't be as cooperative: So long as any one sausage almost certainly doesn't lead to the collapse of the war effort, they'll want to keep that sausage for themselves, and so not contribute as much as they could. No matter how refined, elegant, civilized, productive, and nonviolent their culture was to start with, they won't be able to resist the Barbarian invasion; sane discussion is no match for a frothing lunatic armed with a gun. In the end, the Barbarians will win because they want to fight, they want to hurt the rationalists, they want to conquer and their whole society is united around conquest; they care about that more than any sane person would."

And that's assuming the rationalists don't simply surrender without a fight on the grounds that "war is a zero sum game".

Replies from: Torello, Jiro
comment by Torello · 2014-08-05T14:04:11.858Z · LW(p) · GW(p)

I didn't read the linked article--it certainly seems to frame the issue as rationalists vs. barbarians, not humanity vs. the environment (and the flaws of humanity), so thanks for pointing that out.

I do think fundamentalists/extremists/terrorists have an asymmetrical advantage in the short term in that it's always easier to cause damage/disorder than improvement/order. This quote above seems to be a particular example of this phenomenon.

However, I have to agree with Jiro's comment. Extremists may be able to destroy things and kill people, but I wouldn't say they've been able to conquer anything. To me, "conquer" implies taking control of a country, making its economy work for you, dominating the native population, building a palace, etc. Modern extremists commit suicide and then their mastermind hides silently for a decade until helicopters fly in and soldiers kill him.

comment by Jiro · 2014-08-05T04:38:40.917Z · LW(p) · GW(p)

When referring to actual barbarians, the description of the barbarians seems to lie by omission--even if all the things described above are mostly true, the barbarians have wrecked their economy because central planning doesn't work no matter how many orders they give, burning people at the stake is bad for investment, their belief in an afterlife is associated with other beliefs that prevent them from making or even efficiently using scientific advances, and their inability to have sane discussion means they can't make tactical decisions or really plan anything well at all. (Etc.) That sort of thing is pretty much the reason that the West hasn't been conquered by Muslim fundamentalists yet.

Also, barbarism doesn't arise at random. Some social structures are more conducive to barbarism than others and they may have inherent flaws which reduce the efficiency of conquest even as their encouragement of barbarism increases it.

Replies from: Azathoth123, Viliam_Bur, Lumifer, James_Miller
comment by Azathoth123 · 2014-08-06T02:43:07.759Z · LW(p) · GW(p)

the barbarians have wrecked their economy because central planning doesn't work no matter how many orders they give

Not all barbarians do that. The communists did that, but they also considered themselves rationalists and were considered such by many people at the time. Muslim fundamentalists generally don't.

burning people at the stake is bad for investment

Depends on who's being burned and why. Having the highest per capita rate of capital punishment doesn't seem to have hurt Singapore's ability to get investment.

their belief in an afterlife is associated with other beliefs that prevent them from making or even efficiently using scientific advances

I don't think so. It might hurt their ability to make scientific advancements, but they're perfectly capable of using them once someone else makes them.

Also, barbarism doesn't arise at random. Some social structures are more conducive to barbarism than others and they may have inherent flaws which reduce the efficiency of conquest even as their encouragement of barbarism increases it.

'Rationalist' societies can also have inherent flaws, like, say, trouble solving the collective action problems associated with wars.

comment by Viliam_Bur · 2014-08-09T18:50:13.851Z · LW(p) · GW(p)

This feels to me like a just-world fallacy, or perhaps choosing the most convenient world. Yes, if the barbarians are completely stupid, they are probably not so much of a danger these days. If they are completely anti-science, we probably have better guns.

Now imagine somewhat smarter barbarians, who by themselves are unable to do sophisticated science, but have no problem kidnapping a few scientists and telling them to produce a lot of weapons for them, otherwise their families will be burned at the stake. (Even if their religion prevents them from doing science, they may compartmentalize and believe it is okay to use the devil's tools against the devil himself.) Suddenly, the barbarians have good guns, too.

Maybe the reason why the West hasn't been conquered by Muslim fundamentalists yet is that Muslims don't have an equivalent of Genghis Khan. Someone who would have the courage to conquer the nearest territory, horribly kill everyone who opposed them, spare those who didn't (and make this fact publicly known), take some men and weapons from the conquered territory and use them to attack the next territory immediately, et cetera, spreading like wildfire. First attacking some smaller but civilized countries to get better weapons for attacking the next ones. With multiple leaders, so that dropping a bomb won't stop the war. (Maybe one Osama hiding in secret, giving commands to a dozen wannabe Genghis Khans who don't mind getting to paradise too soon.)

Replies from: SilentCal, None, Jiro
comment by SilentCal · 2014-08-11T18:08:45.377Z · LW(p) · GW(p)

Jiro's fallacy is not in saying that the world is or has been just in this respect, but rather in implicitly saying it must be. I don't think it's a coincidence that liberal/secular/enlightenment nations are the most powerful today, but that fact doesn't negate the point of the barbarian hypothetical.

I seriously doubt the viability of your Genghis Khan plan for modern fundamentalist Islam, seeing as that same M.O. was tried recently except starting with one of the world's top industrial and scientific powers. But that's a fact about our world, and the point of the barbarian example is more universal than that.

Replies from: Jiro, Azathoth123
comment by Jiro · 2014-08-11T19:27:53.329Z · LW(p) · GW(p)

If the country of rationalists is attacked by a country of barbarians who are perfectly optimized for conquest, the rationalists will get conquered.

But there's no way to get from here to there except by Omega coming down and constructing exactly the race of barbarians necessary for the hypothetical to work. And if you're going to say that, there's no point in referring to them as barbarians and describing their actions in terms like "believes in an afterlife" and "obeys orders" that bring to mind real-life human cultures; you may as well say that Omega is just manipulating each individual barbarian like a player micromanaging a video game and causing him to act in exactly the way necessary for the conquest to work best.

Except of course that if you say "rationalists could be conquered by a set of drones micromanaged by Omega", without pretending that you're discussing a real-world situation, most people (assuming they know what you're talking about) would reply "so what?"

Replies from: Wes_W, SilentCal
comment by Wes_W · 2014-08-11T19:57:09.017Z · LW(p) · GW(p)

If the country of rationalists is attacked by a country of barbarians who are perfectly optimized for conquest, the rationalists will get conquered.

This is not inconsistent with the claim that, if the country of rationalists is attacked by a country of barbarians who are imperfectly optimized for conquest, the rationalists might get conquered, with the risk depending on how optimized the barbarians are. And, for that matter, the rationalist nation probably isn't theoretically optimal either...

On balance, believing true things is an advantage, but there are other kinds of advantages which don't automatically favor the rationalist side. Sheer numbers, for example.

Replies from: Jiro
comment by Jiro · 2014-08-11T20:06:24.028Z · LW(p) · GW(p)

This is not inconsistent with the claim that, if the country of rationalists is attacked by a country of barbarians who are imperfectly optimized for conquest, the rationalists might get conquered, with the risk depending on how optimized the barbarians are.

How imperfectly optimized, though? Imperfectly optimized like Omega controlling each barbarian but occasionally rolling the barbarian's morale check, which fails on a 1 on a D100? Or imperfectly optimized like real life barbarians?

Replies from: V_V
comment by V_V · 2014-08-11T20:44:40.672Z · LW(p) · GW(p)

What about the Bolsheviks? Or the WW2-era Japanese?

comment by SilentCal · 2014-08-11T20:39:58.140Z · LW(p) · GW(p)

Try the following obviously-unrealistic yet not-obviously-uninteresting hypothetical: There are two approximately equal-strength warring tribes of barbarians, Tribe A and Tribe B. One day Omega sprinkles magic rationality dust on Tribe A, turning all of its members into rationalists. Tribe B is on the move towards their camp and will arrive in a few days. This is not enough time for Tribe A to achieve any useful scientific or economic advances, nor to accumulate a significant population advantage from non-stake-burning.

Can you see, in that hypothetical, how Eliezer's points in the linked posts are important?

Or another approach: the quote about the rationalists losing says "Barbarians have advantages A, B, and C over rationalists." Your response is "But rationalists have larger advantages X, Y, and Z over barbarians, so who cares?" Eliezer's response is "screw that, if barbarians have any advantages over rationalists, the 'rationalists' aren't rational enough." My hypothetical's purpose is to try to control for X, Y, and Z so we have to think about A, B, and C.

Replies from: Jiro
comment by Jiro · 2014-08-11T22:17:05.785Z · LW(p) · GW(p)

My hypothetical's purpose is to try to control for X, Y, and Z so we have to think about A, B, and C.

Advantages are usually advantages under a specific set of circumstances. If you "control" for X, Y, and Z by postulating a set of circumstances where they have no effect, then of course A, B, and C are better. The rationalists have a set of ideals that works better across a large range of realistic circumstances. Those ideals will not, of course, work better in a situation contrived so that the rationalists lose, and that's fairly uninteresting--it's impossible to have ideals that work under absolutely all circumstances.

Think of being rationalist like wearing a seatbelt. Asking what if the rationalists' advantages over the barbarians just happen not to apply is like asking what if not having a seatbelt would let you be thrown out of the car onto something soft, but wearing a seatbelt gets you killed. I would not conclude that there is something wrong with seatbelts just because there are specific unlikely situations where wearing one might get you killed and not wearing one lets you survive.

Replies from: SilentCal
comment by SilentCal · 2014-08-11T22:37:57.011Z · LW(p) · GW(p)

"It's impossible to have ideals that work under absolutely all circumstances."

This is essentially the proposition Eliezer wrote those posts to refute.

A seat belt is a dumb tool which is extremely beneficial in real-world situations, but we can easily contrive unrealistic cases where it causes harm. The point of those posts is that rationality is 'smart'; it's like a seat belt that can analyze your trajectory and disengage itself iff you would come to less harm that way, so that even in the contrived case it doesn't hurt to wear one.

Replies from: Jiro
comment by Jiro · 2014-08-12T00:01:36.742Z · LW(p) · GW(p)

I don't read it that way (at least not the long quote, taken from the second article). The way I read it, he is trying to discredit what he thinks is fake rationalism, and is giving barbarians as an example of a major failure which proves that this rationalism is fake. (Pay attention to the use of scare quotes.) I believe my response--that everything fails in unrealistic situations and what he is describing is an unrealistic situation--is on point.

Replies from: SilentCal
comment by SilentCal · 2014-08-12T16:28:13.594Z · LW(p) · GW(p)

The intention is to discredit a 'fake rationalism' and illustrate 'real rationalism' in its place.

I think you're right that if we're talking about the USA fighting a conventional war against the Visigoths, the barbarians' advantages aren't even a rounding error. But there are other types of conflicts where that may not be the case. Some possible examples include warfare in other eras or settings, economic competition, democratic politics, and social movements.

Or even if it's not a conflict--maybe we'd like to be able to have traditional rationalist virtues like empirically testing beliefs, and also be a group that cooperates on prisoners' dilemmas? Barbarians are a dramatization to elevate these problems to an existential level, but the ultimate point is that rationalists shouldn't have this kind of disadvantage even if they do have offsetting advantages that make them stronger than any group they're likely to come into direct conflict with.

Replies from: Jiro
comment by Jiro · 2014-08-12T17:46:42.708Z · LW(p) · GW(p)

but the ultimate point is that rationalists shouldn't have this kind of disadvantage

Why shouldn't they? If the "should" means "I would be happier if", maybe, but it's not a law of the universe that rationalists must always have an advantage in all situations.

(If nothing else, what if the enemy just hates rationalists? I can even think of real-life enemies who do.)

Replies from: SilentCal, Lumifer
comment by SilentCal · 2014-08-12T18:34:59.361Z · LW(p) · GW(p)

Rationalists shouldn't have those disadvantages, because there are a bunch of ways to mitigate them, which the post goes on to enumerate.

Part of Eliezer's project is to enshrine a definition of 'rational' such that a decision that predictably puts you at a disadvantage is not rational.

Are you arguing that Eliezer-rationality is a poor fit for the word's historic usage and you'd rather cultivate a different kind of rationality that doesn't allow the kind of unpleasant anti-barbarian measures described in the post? One that forbids them, not conditionally on barbarians not being a major threat, but absolutely?

Replies from: Jiro, Jiro, Jiro
comment by Jiro · 2014-08-12T23:35:53.368Z · LW(p) · GW(p)

It's pretty well established here that a phrase like 'predictably puts you at a disadvantage' is a probabilistic term; essentially it means 'has a negative impact on your expected utility.'

If the definition was actually what you describe, then whether barbarians demonstrate that that kind of rationality "predictably puts you at a disadvantage" would partly depend on how likely you were to be attacked by barbarians of those types. (Because low probability events contribute less to your expected utility.)

In other words, if that's what he meant, then optimized barbarians don't count as an example. He'd have to either use realistic barbarians, or argue that his optimized barbarians are realistic enough that they make substantial contributions to the expected utility.

(Does "I assume he meant something different because if he meant that, his example is useless" count as steelmanning?)

Replies from: SilentCal
comment by SilentCal · 2014-08-13T18:10:21.949Z · LW(p) · GW(p)

He's writing about what the rationalists should do conditional on facing that kind of barbarian. The point of the post is not that rationalist communities should all implement military drafts, but rather that they should be capable of such measures if circumstances require them.

Replies from: Jiro
comment by Jiro · 2014-08-13T18:34:39.547Z · LW(p) · GW(p)

He's writing about what the rationalists should do conditional on facing that kind of barbarian.

That makes a bit more sense, but it still has flaws. The first flaw that comes to mind is that the rationalists may have precommitted to support human rights. The harm that this precommitment causes in the optimized-barbarian scenario may be more than balanced by the benefit it provides in the other scenarios, where the rationalists merely think they are being attacked by sufficiently optimized barbarians but are not, and the precommitment keeps them from violating human rights. Whether this precommitment is rational depends on the probability of the optimized-barbarian scenario, and the fact that it is undeniably harmful conditional on that scenario doesn't mean it's not rational.

(Edit: It is possible to phrase this in a non-precommitment way: "I know that I am running on corrupted hardware, and a threat that seems to be optimized barbarians probably isn't." The expected utility is P(false belief in optimized barbarians) × (benefit from not drafting everyone) − P(true belief in optimized barbarians) × (loss from being killed by barbarians). The conclusion is the same: just because the decision is bad for you in the case where the barbarians really are optimized doesn't make the choice irrational.)
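A minimal formalization of that comparison (the symbols B and L are hypothetical, standing for the benefit of not drafting everyone and the loss from a real barbarian attack):

$$\mathrm{EU}(\text{refuse draft}) = P(\text{false alarm}) \cdot B \;-\; P(\text{real optimized barbarians}) \cdot L$$

If the false-alarm term dominates, refusing the measure is rational as a policy, even though it is disastrous in the rare branch where the barbarians really are optimized.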

Replies from: SilentCal
comment by SilentCal · 2014-08-13T20:25:35.217Z · LW(p) · GW(p)

Right, but it's a consequentialist argument, not a deontological one. "This measure is more likely to harm than to help" is a good argument; "On the outside view this measure is more likely to harm than to help, and we should go with the outside view even though the inside view seems really really convincing" can be a good argument; but you should never say "This measure would help, but we can't take it because it's 'irrational'".

Replies from: Jiro
comment by Jiro · 2014-08-13T20:50:59.876Z · LW(p) · GW(p)

In a sense, just saying "you shouldn't be the kind of 'rational' that leads you to be killed by optimized barbarians" is already a consequentialist argument.

you should never say "This measure would help, but we can't take it because it's 'irrational'".

Yes, you should, if you have precommitted to not take that measure and you are then unlucky enough to be in the situation where the measure would help. Precommitment means that you can't change your mind at the last minute and say "now that I know the measure would help, I'll do it after all".

The example I give above is not the best illustration, because there you are precommitting because you can't distinguish between the two scenarios, but imagine a variation: you can always identify optimized barbarians, but if you precommit to not drafting people when the optimized barbarians come, your government will be trusted more in the branches of the world where the optimized barbarians don't come. Again, the measure of worlds with optimized barbarians is small. In that case, you should precommit, and then if the optimized barbarians come, you say "drafting people would help, but I can't break a precommitment". If you have made the precommitment by self-modifying so as to be unwilling to take the measure, the unwillingness looks like a failure of rationality ("If only you weren't unwilling, you'd survive, but since you are, you'll die!"), when it really isn't.

Replies from: SilentCal
comment by SilentCal · 2014-08-13T22:31:45.143Z · LW(p) · GW(p)

Precommitment is also a potentially good reason. I'm not sure what we disagree about anymore.

Is your objection to the Barbarians post that you fear it will be used to justify actually implementing unsavory measures like those it describes for use against barbarians?

Replies from: Jiro
comment by Jiro · 2014-08-14T01:25:12.168Z · LW(p) · GW(p)

I'm not sure what we disagree about anymore.

If there is precommitment, it may be true that doing X would benefit you, you refuse to do X, and your refusal to do X is rational.

Eliezer said that if doing X would benefit you and you refuse to do X, that is not really rational.

Furthermore, whether X is rational depends on what the probability of the scenario was before it happened--even if the scenario is happening now. Eliezer, as interpreted by you, believes that if the scenario is happening now, the past probability of the scenario doesn't affect whether X is rational. (That's why he could use optimized barbarians as an example in the first place, despite the low probability of that scenario.)

Also, I happen to think that many cases of people "irrationally" acting on principle can be modelled as a type of precommitment. Precommitment is just how we formalize "I'm going to shop at the store with the lower price, even if I have to spend more on gas to get there" or "we should allow free speech/free press/etc. and I don't care how many terrorists that helps".

Replies from: SilentCal
comment by SilentCal · 2014-08-14T14:49:36.874Z · LW(p) · GW(p)

TDT/UDT and the outside view are how we formalize precommitment.

comment by Jiro · 2014-08-12T22:23:59.048Z · LW(p) · GW(p)

It seems like the fastest way to get modded down is to disagree with Eliezer.

comment by Jiro · 2014-08-12T19:07:57.259Z · LW(p) · GW(p)

Part of Eliezer's project is to enshrine a definition of 'rational' such that a decision that predictably puts you at a disadvantage is not rational.

Assuming that "predictably puts you at a disadvantage" means "there is at least one situation X such that I can predict that if X occurs, I would be at a disadvantage", then I don't agree with this definition. (For instance, if Biblical literalism is true, pretty much every rationalist would be at a disadvantage. Does that mean that every definition of "rational" is bad?)

Replies from: SilentCal
comment by SilentCal · 2014-08-12T22:29:56.985Z · LW(p) · GW(p)

It's pretty well established here that a phrase like 'predictably puts you at a disadvantage' is a probabilistic term; essentially it means 'has a negative impact on your expected utility.'

By the definition you assumed I was using, it would be true to say that buying a lottery ticket predictably increases your wealth. That is not a reasonable way to use words.
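To spell that out with hypothetical numbers: a $1 ticket with a one-in-a-hundred-million chance at a $10,000,000 jackpot gives

$$E[\Delta\text{wealth}] = 10^{-8} \cdot (10^7 - 1) - (1 - 10^{-8}) \cdot 1 \approx -\$0.90,$$

so although the ticket possibly increases your wealth, it predictably (in expectation) decreases it.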

(Also, you've been disagreeing with Eliezer this whole thread, and only that last post has downvotes)

comment by Lumifer · 2014-08-12T18:23:16.800Z · LW(p) · GW(p)

Why shouldn't they?

If you accept the local definition of rationality as winning (and not, say, as "thinking logically and deeply") then, well, losing means you weren't sufficiently rational :-/

comment by Azathoth123 · 2014-08-12T05:42:39.621Z · LW(p) · GW(p)

I seriously doubt the viability of your Genghis Khan plan for modern fundamentalist Islam, seeing as that same M.O. was tried recently except starting with one of the world's top industrial and scientific powers.

I'm not sure. The liberal world seems to have gotten "softer" since then. Compare the general reaction in the US to the death toll in Iraq (maybe one or two US soldiers a day) with the general reaction to the death toll in WWII.

comment by [deleted] · 2014-08-31T15:03:32.141Z · LW(p) · GW(p)

Maybe the reason why the West hasn't been conquered by Muslim fundamentalists yet is that Muslims don't have an equivalent of Genghis Khan. Someone who would have the courage to conquer the nearest territory, horribly kill everyone who opposed them, spare those who didn't (and make this fact publicly known), take some men and weapons from the conquered territory and use them to attack the next territory immediately, et cetera, spreading like wildfire.

"Caliph" Abu Bakr al-Baghdadi and his group ISIS have been behaving exactly like this. They are quite young, but don't appear quite able to take on a Western military yet.

This feels to me like a just-world fallacy, or perhaps choosing the most convenient world.

And yet, by definition, a group who are better at rationality win more often. We ought to expect that rational civilizations can beat irrational ones, because rationality is systematized cross-domain winning.

Replies from: Viliam_Bur, Jiro
comment by Viliam_Bur · 2014-08-31T16:07:30.804Z · LW(p) · GW(p)

by definition, a group who are better at rationality win more often

Well, there is this "valley of bad rationality", where becoming more rational about one part of a problem, but not yet about another part, can make people win less.

Sometimes I feel we are there at a societal level. We have smart individuals, we have science, we fly to the moon, etc. However, superstition and blind hate can be efficient tools for coordinating a group to fight against another group. We don't use those tools much (because they don't fit well with rationality and science), but we don't have an equally strong replacement. Also, only a few people in our civilization do the rationality and the science. So even if there is a rationality-based defense, most of our society is too stupid to use it efficiently. On the scale from "barbarians" to "bayesians", most of our society is somewhere in the middle: not barbaric enough, but still far from rational.

comment by Jiro · 2014-08-31T15:30:57.382Z · LW(p) · GW(p)

A group that is better at rationality will win more often, but winning more often is not the same thing as "winning in a superset of the situations in which the irrational win".

comment by Jiro · 2014-08-10T03:03:04.584Z · LW(p) · GW(p)

By the same reasoning which says that fundamentalists could do better with more efficient methods of conquest, they could do better with more efficient methods of making peace, too. They won't do as well as with conquest, but they'll do better than they are doing now. Yet they don't.

Barbarism is not optimized for conquest. It's optimized for supporting a set of social structures. Those social structures make them more dangerous as conquerors than the average society, but they're still not optimized for conquest; there are things which would make conquest more efficient which they would not do.

(To use just one example, for a country to embark on conquest and use the men from the conquered country to continue conquering more countries, they'd have to grant equal rights to conquered people who agreed to work with them. Rome did that except in a few rare but famous cases. So did the Mongols. But Muslim fundamentalists can't give non-Muslims or rival Muslims equal rights without no longer being Muslim fundamentalists.)

Replies from: Azathoth123
comment by Azathoth123 · 2014-08-10T19:07:32.524Z · LW(p) · GW(p)

To use just one example, for a country to embark on conquest and use the men from the conquered country to continue conquering more countries, they'd have to grant equal rights to conquered people who agreed to work with them.

Not necessarily. Muslims, in particular, have a history of using slave soldiers to good effect.

But Muslim fundamentalists can't give non-Muslims or rival Muslims equal rights without no longer being Muslim fundamentalists.

You do realize it's possible to convert to fundamentalist Islam?

Replies from: Nornagest, Jiro
comment by Nornagest · 2014-08-11T16:59:34.930Z · LW(p) · GW(p)

Muslims, in particular, have a history of using slave soldiers to good effect.

I seem to recall, and a glance over the Wikipedia articles suggests, that the Mamluk and Janissary systems involved raising (enslaved) boys into a military environment from a fairly young age. These boys might come from subjugated territories, but they'd in effect have been part of the dominant culture for much of their lives: it's not a system that could be used to quickly convert conquered territories into additional manpower.

That said, it hasn't been unusual for empires, modern and otherwise, to make substantial use of auxiliary forces drawn from client states. The Roman military probably relied on them as much as they did on the legions, or more in the late empire.

Replies from: Azathoth123
comment by Azathoth123 · 2014-08-12T05:39:06.109Z · LW(p) · GW(p)

The Roman military probably relied on them as much as they did on the legions, or more in the late empire.

The late Roman Empire wasn't exactly successful at conquering anything, or even at keeping the Empire from falling apart.

comment by Jiro · 2014-08-10T19:53:34.468Z · LW(p) · GW(p)

You do realize it's possible to convert to fundamentalist Islam?

Yes, but requiring that soldiers do so makes the process of conquest less optimized, since it's easier for obvious reasons to get soldiers without this requirement than with it. (The same goes for using slaves.)

Replies from: Vaniver
comment by Vaniver · 2014-08-10T22:22:58.318Z · LW(p) · GW(p)

Yes, but requiring that soldiers do so makes the process of conquest less optimized, since it's easier for obvious reasons to get soldiers without this requirement than with it.

You seem to be focusing solely on cost; the difference between benefit and cost is what matters, and the benefits of a fighting force with shared values (particularly shared religious ones) are many and obvious.

Replies from: Jiro
comment by Jiro · 2014-08-11T16:08:45.006Z · LW(p) · GW(p)

By that reasoning, it's the Romans and the Mongols who are un-optimized for conquest.

Replies from: Azathoth123
comment by Azathoth123 · 2014-08-12T05:37:28.662Z · LW(p) · GW(p)

The Mongols had the advantage of recruiting from a pool of steppe nomads with similar values.

The Roman Republic conquered the Mediterranean basin with an army consisting of Italians who were required to adopt Roman values before joining. Later, the Roman legions adopted the looser system you described. Subsequently, Roman legions would spend nearly as much effort fighting other Roman legions in civil wars as fighting Rome's enemies.

comment by Lumifer · 2014-08-05T16:31:31.861Z · LW(p) · GW(p)

That sort of thing is pretty much the reason that the West hasn't been conquered by Muslim fundamentalists yet

For a counterpoint, look at the speed and magnitude of the original spread of Islam in the 7th and 8th centuries.

Also there is Iran.

Replies from: Jiro
comment by Jiro · 2014-08-05T16:54:26.472Z · LW(p) · GW(p)

I don't think the spread of Islam many centuries ago counts. Fanaticism isn't as much of a disadvantage when fighting medieval societies as it is when fighting modern ones.

Replies from: Lumifer
comment by Lumifer · 2014-08-05T17:06:52.040Z · LW(p) · GW(p)

Fanaticism isn't as much of a disadvantage when fighting medieval societies as it is when fighting modern ones.

Why is that so?

Replies from: Jiro
comment by Jiro · 2014-08-05T19:00:43.381Z · LW(p) · GW(p)

To summarize: Fanaticism keeps the culture from escaping the dark ages. If everyone is in the dark ages anyway, not being able to escape the dark ages isn't much of a disadvantage.

Replies from: Lumifer
comment by Lumifer · 2014-08-05T19:43:45.838Z · LW(p) · GW(p)

That looks to me like one of those sentences which sound pretty but don't actually mean much.

In your comment upthread you listed things which make a barbarian society "uncompetitive". They apply to medieval societies as well. Essentially, you would expect the non-fanatic society to be richer, have better technology, and be governed more effectively. That holds in any epoch (as long as we don't get too far into the stone age :-/).

When Islam erupted out of the Arabian Peninsula, the "fanatics" easily took over huge -- amazingly huge -- territories. And it wasn't just pillage-and-burn, they conquered the lands and established their own rule.

Replies from: Jiro
comment by Jiro · 2014-08-05T22:07:17.139Z · LW(p) · GW(p)

Essentially, you would expect the non-fanatic society to be richer, have better technology, and be governed more effectively.

Why would I expect this when the society exists hundreds of years ago? The point is that back then, everyone lacked many of the things that fanaticism would cause a society to lack. The fanatics are not at such a disadvantage under such circumstances. The loss in efficiency from it taking weeks to communicate between distant parts of your empire is going to make the loss in efficiency from having a theocracy look like noise. The disadvantage of not getting investors in your country won't matter when there's no international investment anyway. The disadvantage of having little in the way of science and engineering won't matter if there's hardly any science yet and engineering is at the state of building bridges instead of launching satellites.

Replies from: Lumifer
comment by Lumifer · 2014-08-06T01:22:52.055Z · LW(p) · GW(p)

The point is that back then, everyone lacked many of the things that fanaticism would cause a society to lack.

Really? Consider trade -- a major factor in the society's wealth and survival for the last several thousands of years. The fanatic barbarians wouldn't trade, would they?

You don't think technology mattered before the Industrial Revolution? Oh, but it did. From bronze weapons to early firearms, an army with a technological edge had a big advantage.

Governance didn't matter in ancient and medieval societies? Do you actually believe that?

Replies from: Jiro
comment by Jiro · 2014-08-06T03:29:51.239Z · LW(p) · GW(p)

Technology mattered before the Industrial Revolution. The kind of technology that fanatics are bad at did not matter before the Industrial Revolution, however, because nobody had it, fanatic or not.

comment by James_Miller · 2014-08-05T16:55:49.520Z · LW(p) · GW(p)

That sort of thing is pretty much the reason that the West hasn't been conquered by Muslim fundamentalists yet.

Another reason: many members of our military do have the courage of the Spartans. U.S. soldiers don't put on suicide vests to kill children, but they do fall on grenades and hold hopeless positions under fire so their friends can escape death.

comment by James_Miller · 2014-08-04T18:12:00.838Z · LW(p) · GW(p)

I see competition among different groups of people, with those able to overcome their collective action problems gaining power and resources.

Replies from: Torello
comment by Torello · 2014-08-04T21:18:53.373Z · LW(p) · GW(p)

I see what you mean, but in a military conflict it seems that any gain in power or resources is the result of another group losing power or resources (a zero-sum game). I guess that trade/commerce might be a positive-sum example where competition is still involved but on the whole there is societal benefit.

comment by Torello · 2014-08-04T17:37:45.379Z · LW(p) · GW(p)

This seems like an elegant and funny take on Ben Franklin's wisdom.

Walter Sobchak: "Am I wrong?"

The Dude: "No you're not wrong."

Walter Sobchak: "Am I wrong?"

The Dude: "You're not wrong Walter. You're just an asshole."

-The Big Lebowski, Directed by Joel Coen and Ethan Coen, 1998

comment by Stabilizer · 2014-08-04T23:24:33.966Z · LW(p) · GW(p)

Most of the time what we do is what we do most of the time.

-Daniel Willingham, Why Don't Students Like School? The point is that, quite often, the reason we're doing something is that it's what we're used to doing in that situation.

Note: He attributes the quote to some other psychologists.

comment by grendelkhan · 2014-08-18T18:45:56.127Z · LW(p) · GW(p)

Sometimes the biggest disasters aren't noticed at all -- no one's around to write horror stories.

Vernor Vinge, A Fire Upon the Deep

comment by Ixiel · 2014-08-19T00:55:58.555Z · LW(p) · GW(p)

Most of the time he asked questions. His questions were very good, and if you tried to answer them intelligently, you found yourself saying excellent things that you did not know you knew, and that you had not, in fact, known before. He had "educed" them from you by his question. His classes were literally "education" - they brought things out of you, they made your mind produce its own explicit ideas.

Thomas Merton, about professor Mark Van Doren

comment by rule_and_line · 2014-08-23T01:47:44.886Z · LW(p) · GW(p)

After describing

blind certainty, a close-mindedness that amounts to an imprisonment so total that the prisoner doesn't even know he's locked up.

David Foster Wallace continues

The point here is that I think this is one part of what teaching me how to think is really supposed to mean. To be just a little less arrogant. To have just a little critical awareness about myself and my certainties. Because a huge percentage of the stuff that I tend to be automatically certain of is, it turns out, totally wrong and deluded. I have learned this the hard way, as I predict you will, too.

Replies from: soreff
comment by soreff · 2014-08-23T17:47:42.053Z · LW(p) · GW(p)

Because a huge percentage of the stuff that I tend to be automatically certain of is, it turns out, totally wrong and deluded.

There is a very large amount of stuff that one is automatically certain of that is correct, though trivial: data like "liquid water is wet". I'm not sure how one would even practically quantify what fraction of the statements one is certain of are or are not true. Even if one could efficiently test them, how would one list them? In the current state of science, tracing a full human neural network (and then converting its beliefs into a list of testable statements) is beyond our capabilities.

Replies from: rule_and_line
comment by rule_and_line · 2014-08-23T19:14:17.105Z · LW(p) · GW(p)

I'm curious about this "liquid water is wet" statement. Obviously I agree, but for the sake of argument, could you taboo "is" and tell me the statement again? I'm trying to understand how your algorithm feels from the inside.

If you're curious how to quantify fractions of statements, you might enjoy this puzzle I heard once. Suppose you're an ecological researcher and you need to know the number of fish in a large lake. How would you get a handle on that number?

Replies from: soreff
comment by soreff · 2014-08-23T19:34:19.631Z · LW(p) · GW(p)

One of the parts of "liquid water is wet" is that a droplet of it will spread out on many common surfaces - salt, paper, cotton, etc. Yes, it is a bit tricky to unpack what is meant by "wet" - perhaps some other properties, like not withstanding shear, are also folded in - but I don't think that it is just a tautology, with "wet" being defined as the set of properties that liquid water has.

Re the catch/count/mark/release/recapture/count puzzle - the degree to which that is feasible depends on how well one can do (reasonably) unbiased sampling. I'm skeptical that that will work well with the set of testable statements that one is automatically certain of.
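For readers who haven't seen it, the puzzle rule_and_line mentions is standard mark-recapture estimation (the Lincoln-Petersen estimator): catch and mark M fish, release them, later catch C fish and count the R marked ones among them. Assuming the marked fish mix uniformly back into the population, the population estimate is

$$\hat{N} \approx \frac{M \cdot C}{R}.$$

For example, marking 100 fish and finding 5 marked in a later catch of 200 suggests about 100 × 200 / 5 = 4000 fish. soreff's point is precisely that the uniform-mixing (unbiased sampling) assumption is hard to satisfy for the set of one's automatic beliefs.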

comment by EGarrett · 2014-08-04T16:23:40.862Z · LW(p) · GW(p)

"Science alone of all the subjects contains within itself the lesson of the danger of belief in the infallibility of the greatest teachers of the preceding generation." -Richard Feynman

comment by Qwake · 2014-08-17T03:32:17.457Z · LW(p) · GW(p)

Few people are capable of expressing with equanimity opinions which differ from the prejudices of their social environment. Most people are even incapable of forming such opinions.

Albert Einstein

Replies from: somervta
comment by somervta · 2014-08-17T10:25:28.013Z · LW(p) · GW(p)

I don't suppose you have a source for the quote? (at this point, my default is to disbelieve any attribution of a quote unknown to me to Einstein)

Replies from: jazmt
comment by Yaakov T (jazmt) · 2014-08-17T19:47:16.698Z · LW(p) · GW(p)

According to this website (http://ravallirepublic.com/news/opinion/viewpoint/article_876e97ba-1aff-11e2-9a10-0019bb2963f4.html) it is part of "Aphorisms for Leo Baeck" (which I think is printed in Ideas and Opinions, but I don't have access to the book right now to check).

Replies from: Qwake, somervta, arundelo
comment by Qwake · 2014-08-18T05:31:57.085Z · LW(p) · GW(p)

Thank you for finding the source (I read it in a book and was too lazy to fact-check it).

comment by somervta · 2014-08-17T22:40:07.963Z · LW(p) · GW(p)

Thanks! I didn't find it with my minute of googling; good to know it's legit.

comment by rule_and_line · 2014-08-23T01:27:12.491Z · LW(p) · GW(p)

There is a real joy in doing mathematics, in learning ways of thinking that explain and organize and simplify. One can feel this joy discovering new mathematics, rediscovering old mathematics, learning a way of thinking from a person or text, or finding a new way to explain or to view an old mathematical structure.

This inner motivation might lead us to think that we do mathematics solely for its own sake. That’s not true: the social setting is extremely important. We are inspired by other people, we seek appreciation by other people, and we like to help other people solve their mathematical problems.

-- William Thurston

The entire essay is a beautiful discussion of success and failure in practicing the art of mathematics. Changing the things that need to be changed, much of it applies to practicing the art of rationality.

comment by arundelo · 2014-08-04T22:33:02.464Z · LW(p) · GW(p)

The power is not in the choice of metaphor, it is in the ability to shift among metaphors. Teaching people this other metaphor [...] but not leaving them with the flexibility to move freely in and out is not having enabled them at all.

-- Kent Pitman

Replies from: arundelo
comment by arundelo · 2014-08-05T12:43:40.764Z · LW(p) · GW(p)

Elsewhere in the thread he says the following. I have corrected some typos and added emphasis.

  • I expect a firestorm of complaining over the use of the word `stack'. Maybe I'll be pleasantly surprised. I prefer to use such metaphors because I think such abstractions give people a useful handhold when they are coming from other backgrounds. I get jumped on a lot for using a stack metaphor when talking about Scheme because people apparently think I've forgotten that it's not a strict stack; personally, I think the people who are so quick to jump on me have forgotten that even a metaphor that has a flaw can be a powerful way to reason and express even when not speaking rigorously. The remark here is intended to allow someone who is just barely reading along to confirm that something he may have strong knowledge of in another domain is in fact what is being discussed here. To not offer that handhold seems to me to be impolite.
comment by StephenR · 2014-08-04T04:20:31.642Z · LW(p) · GW(p)

"We must not criticize an idiom [...] because it is not yet well known and is, therefore, less strongly connected with our sensory reactions and less plausible than is another, more 'common' idiom. Superficial criticisms of this kind, which have been elevated into an entire 'philosophy', abound in discussions of the mind-body problem. Philosophers who want to introduce and to test new views thus find themselves faced not with arguments, which they could most likely answer, but with an impenetrable stone wall of well-entrenched reactions. This is not at all different from the attitude of people ignorant of foreign languages, who feel that a certain colour is much better described by 'red' than by 'rosso'.

Paul Feyerabend, Against Method, 4th Edition, p. 59.

comment by Qwake · 2014-08-22T05:05:02.427Z · LW(p) · GW(p)

Language exists only on the surface of our consciousness. The great human struggles are played out in silence and in the inability to express oneself.

Franz Xaver Kroetz

Replies from: rule_and_line
comment by rule_and_line · 2014-08-22T16:45:29.615Z · LW(p) · GW(p)

Could you give this some more context? My reaction was to downvote.

The word "only" gives me vibes like "language exerts a trivial or insignificant influence on our consciousness". I don't know any of Kroetz's plays, but given that he is a playwright I feel like I'm getting the wrong vibe.

Replies from: Qwake
comment by Qwake · 2014-08-24T04:15:07.792Z · LW(p) · GW(p)

My interpretation of the quote was not that language exerts a trivial influence on our consciousness but that language is an imperfect form of communication.

comment by Salemicus · 2014-08-21T16:49:48.063Z · LW(p) · GW(p)

In the fields of observation, chance favours only the prepared mind.

Louis Pasteur.

comment by NancyLebovitz · 2014-08-15T17:05:01.876Z · LW(p) · GW(p)

Challenge my assumption, not my conclusion, and do it with new evidence, instead of trying to twist the old stuff.

"The Originist", by Orson Scott Card

I believe the first part is frequently good advice. The second half is good, but not quite as good--there may still be good new angles on old evidence.

comment by Vaniver · 2014-08-31T15:17:08.041Z · LW(p) · GW(p)

Two mares, each convinced she was standing firmly on The Shores Of Rationality, stared helplessly into The Sea Of Confusion and despaired over their inability to ever rescue the friend helplessly floundering within.

A vivid description of inferential distance from Twilight's Escort Service.

Edit: It's from a comedy that relies on misunderstandings; Twilight chooses the word "escort" to advertise her teleportation abilities. If you don't enjoy awkwardness-based comedies, I recommend you stay away. The actual quote is about explaining a value difference.

Replies from: None
comment by [deleted] · 2014-08-31T15:19:26.437Z · LW(p) · GW(p)

Explain, as I am not clicking on anything associating "Twilight Sparkle" and "Prostitution".

Replies from: Leonhart
comment by Leonhart · 2014-08-31T19:55:13.104Z · LW(p) · GW(p)

Haven't had time to read it; but from the story description, it seems to be a comic affair where Twilight decides to monetise her teleportation skillz, and picks the wrong word to advertise with. Hilarity presumably prevails?

Replies from: Richard_Kennaway, Vaniver
comment by Richard_Kennaway · 2014-09-01T11:17:52.804Z · LW(p) · GW(p)

Pretty much. I stopped reading at the point where her first "client" showed up, with supposed "hilarity" about to begin, as I can't stand comedy based on misunderstanding and embarrassment.

comment by Vaniver · 2014-09-01T15:51:06.502Z · LW(p) · GW(p)

Yep.

comment by Ben Pace (Benito) · 2014-08-04T10:01:06.216Z · LW(p) · GW(p)

'Deep pragmatism' is Joshua Greene's name for 'utilitarianism'.

Today we, some of us, defend the rights of gays and women with great conviction. But before we could do it with feeling, before our feelings felt like “rights,” someone had to do it with thinking. I’m a deep pragmatist, and a liberal, because I believe in this kind of progress and that our work is not yet done.

Joshua Greene, “Moral Tribes"

Replies from: Azathoth123
comment by Azathoth123 · 2014-08-05T03:27:30.481Z · LW(p) · GW(p)

'Deep pragmatism' is Joshua Greene's name for 'utilitarianism'.

And yet he's talking about 'rights', which are a deontological, not a utilitarian, concept.

Replies from: blacktrance, Benito
comment by blacktrance · 2014-08-05T08:56:49.556Z · LW(p) · GW(p)

Consequentialists can believe in something that can reasonably be called rights.

comment by Ben Pace (Benito) · 2014-08-05T05:33:55.047Z · LW(p) · GW(p)

I'm aware that may be jarring in the quote, but he has argued his case for being able to use the word very well. In fact, he's argued against the concept of rights, calling it a rationalisation of our moral intuitions; his point is that, for our moral intuitions to change, someone needs to do some good ethical reasoning first.

comment by EGarrett · 2014-08-05T23:53:20.717Z · LW(p) · GW(p)

"Just as eating against one’s will is injurious to health, so studying without a liking for it spoils the memory, and it retains nothing it takes in." -Da Vinci

Replies from: Stabilizer
comment by Stabilizer · 2014-08-06T00:30:15.169Z · LW(p) · GW(p)

Well...

Just as eating only what one likes is injurious to health, so studying only what one likes spoils the memory, and what is retained isn't very useful.

-Not Da Vinci

Replies from: EGarrett
comment by EGarrett · 2014-08-06T09:15:24.179Z · LW(p) · GW(p)

Compare Da Vinci's quote to Kubrick's...

"Interest can produce learning on a scale compared to fear as a nuclear explosion to a firecracker.”

They both seem quite clearly to be saying that the knowledge they gained studying what they were forced to study was essentially nothing in comparison to what they gained studying what they themselves found interesting.

From personal experience, I agree totally with both statements.

comment by fubarobfusco · 2014-08-04T21:02:35.804Z · LW(p) · GW(p)

I'm starting a new 30 day challenge: the month of no "should." Instead of tediously working down a list of all the little chores and errands that I "should" be doing, I'll work to listen to what that little voice inside me wants to do. I think it will be interesting.

Matt Cutts

Replies from: Azathoth123, Dorikka
comment by Azathoth123 · 2014-08-05T03:33:05.111Z · LW(p) · GW(p)

I don't really want to pay the electric bill, or the rent.

Oh dear, now I'm sitting in the dark and the landlord is evicting me onto the street.

Replies from: fubarobfusco, ChristianKl
comment by fubarobfusco · 2014-08-05T16:53:55.379Z · LW(p) · GW(p)

I'm pretty sure you've construed the quote entirely backwards — and that Matt's point is that any "I should do X" statement can be rephrased as "part of me wants to do X."

Replies from: Azathoth123
comment by Azathoth123 · 2014-08-06T02:51:29.274Z · LW(p) · GW(p)

I really don't like that guy and want him dead, and hey we're in the middle of nowhere and nobody knows he's here.

Replies from: ChristianKl, lmm
comment by ChristianKl · 2014-08-25T22:14:13.418Z · LW(p) · GW(p)

I really don't like that guy and want him dead

If you are a psychopath then simply doing what you want to do is bad. Normal civilized humans don't really want to kill other humans.

Replies from: Lumifer, None
comment by Lumifer · 2014-08-26T00:26:47.567Z · LW(p) · GW(p)

Normal civilized humans don't really want to kill other humans.

That REALLY depends on the circumstances.

Replies from: army1987
comment by A1987dM (army1987) · 2014-08-26T12:55:07.539Z · LW(p) · GW(p)

Isn't that covered by the first two words (especially the second) of the sentence you quoted?

Replies from: Lumifer
comment by Lumifer · 2014-08-26T15:10:48.817Z · LW(p) · GW(p)

No. Normal civilized humans find themselves in different circumstances. In some of these circumstances they DO want to kill other humans.

comment by [deleted] · 2014-08-31T15:14:40.250Z · LW(p) · GW(p)

Normal civilized humans don't really want to kill other humans.

Well, certainly not nearby humans who have similar skin coloration and evince membership in the same tribe. Those people, on the other hand, are disgusting, and the lot of them simply have to go.

comment by lmm · 2014-08-25T21:34:48.158Z · LW(p) · GW(p)

Maybe you should kill him then? I mean, do you actually want to?

comment by ChristianKl · 2014-08-25T22:14:18.766Z · LW(p) · GW(p)

Given that you can predict the results of your choices, is there really no part of you that wants to choose the road that includes paying the electric bill?

It's about where you put your attention. If you focus on the fact that you want to have electricity in your house and therefore pay the electric bill, you feel agenty and good. If you focus on the fact that you have an obligation to pay a bill, you will feel bad.

comment by Dorikka · 2014-08-26T01:25:45.003Z · LW(p) · GW(p)

Textbook case of YMMV due to inferential distance/loss of resolution in verbal/textual communication.

comment by Qwake · 2014-08-06T19:02:47.882Z · LW(p) · GW(p)

Never let your sense of morals get in the way of doing what's right.

-Isaac Asimov

Replies from: hairyfigment
comment by shminux · 2014-08-18T19:08:30.148Z · LW(p) · GW(p)

I've realized that I started noticing and mitigating trivial inconveniences some time after reading Yvain's post. Something as simple as leaving the door open, or taking cookies from the wrapper and placing them in a bowl (or, if you are a developer, supporting form auto-fill, or placing the button (physical or virtual) you want the user to press right there in front), makes a difference in the "feature" being used (e.g. cookies being eaten).

Up next: figure out a way to use fewer parentheses (including nested ones (yes, I've heard of commas)).

comment by Qwake · 2014-08-06T18:59:22.052Z · LW(p) · GW(p)

Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and common sense.

-Buddha

Replies from: wedrifid, Lumifer, Stabilizer, TheMajor
comment by wedrifid · 2014-08-10T06:26:07.536Z · LW(p) · GW(p)

Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and common sense.

-Buddha

This is the first time I've been prompted to advocate the merit of this related quote.

comment by Lumifer · 2014-08-06T20:17:35.677Z · LW(p) · GW(p)

Isn't that, pretty much, a classic description of confirmation bias?

comment by Stabilizer · 2014-08-06T20:12:27.103Z · LW(p) · GW(p)

That one's a misquote. The original is:

Now, Kalamas, don’t go by reports, by legends, by traditions, by scripture, by logical conjecture, by inference, by analogies, by agreement through pondering views, by probability, or by the thought, ‘This contemplative is our teacher.’ When you know for yourselves that, ‘These qualities are skillful; these qualities are blameless; these qualities are praised by the wise; these qualities, when adopted & carried out, lead to welfare & to happiness’ — then you should enter & remain in them.

Not exactly a rationality quote, is it? Here is another famous misquote of the same passage.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-08-06T20:22:39.206Z · LW(p) · GW(p)

Not exactly a rationality quote, is it?

I think it is, and it has been so regarded on LessWrong several times already, first here.

comment by TheMajor · 2014-08-06T20:06:44.819Z · LW(p) · GW(p)

You mean never really change your mind? Sounds kinda dumb...

If the last half had said "own reason or common sense" all would be fine, I think.

Replies from: Qwake
comment by Qwake · 2014-08-10T06:46:52.069Z · LW(p) · GW(p)

I interpreted it to mean not to believe information simply because you hold the source of the information in high regard. It is very possible to change your mind and keep within your own reason and common sense.

comment by hairyfigment · 2014-08-06T06:13:41.246Z · LW(p) · GW(p)

But the more central point is that trying to explain or predict [institutional] behavior idealistically, in terms of things called "values" or moral fortitude, is foolish. It's magical thinking. "One party believes in..." Institutions don't have beliefs. They have incentives.

  • Internet commenter
Replies from: Lumifer, None
comment by Lumifer · 2014-08-06T14:56:39.173Z · LW(p) · GW(p)

I don't know about moral fortitude, but institutions certainly have values. It is precisely their values, combined with the environment around them, that create the incentives.

Don't forget that e.g. money and power are values, too.

Replies from: hairyfigment
comment by hairyfigment · 2014-08-06T16:15:59.073Z · LW(p) · GW(p)

As a general statement that seems flatly untrue, unless you mean that people in them have (often conflicting) values. Even thinking that for-profit corporations seek to make money for the corporation, rather than for decision-makers, seems like a dangerous mistake.

Replies from: Lumifer
comment by Lumifer · 2014-08-06T16:25:50.736Z · LW(p) · GW(p)

As a general statement that seems flatly untrue

I am sorry, I'm not going to read an extra-long rumination on a TV series I neither watch nor have any interest in.

Can you provide the argument in a condensed form? Preferably without relying on fictional evidence.

Replies from: hairyfigment
comment by hairyfigment · 2014-08-06T17:26:29.340Z · LW(p) · GW(p)

You should read it, at least as far as the second image - but the argument says you're talking about legal fictions as if they were people. Here's a random piece of real evidence.

Replies from: Lumifer
comment by Lumifer · 2014-08-06T17:31:34.767Z · LW(p) · GW(p)

the argument says you're talking about legal fictions as if they were people

And what's wrong with that?

Organizations and institutions share some characteristics with physical people and do not share others. For example, both organizations and people make decisions. Or, as was mentioned in this thread, respond to incentives.

I think that "having values" is one of those things which can be meaningfully said about both organizations and people. If you don't believe so, can you offer your reasoning?

Replies from: hairyfigment
comment by hairyfigment · 2014-08-06T18:20:06.949Z · LW(p) · GW(p)

What could it mean for organizations to have values in a world of mergers and (profitable) bankruptcies? Unless you're reducing it to the values of the people making decisions, or an internal game that constrains them, I don't have the first clue what you believe.

Replies from: Lumifer
comment by Lumifer · 2014-08-06T19:15:19.490Z · LW(p) · GW(p)

What could it mean for organizations to have values in a world of mergers and (profitable) bankruptcies?

If you want to point out that organizations are not eternal: they come into being, change, and then disappear -- so do people.

comment by [deleted] · 2014-08-31T15:18:30.991Z · LW(p) · GW(p)

Institutions certainly have optimization targets, which are what we normally call values. Just because you don't share them doesn't mean they're not there.

Replies from: hairyfigment
comment by hairyfigment · 2014-09-13T04:19:18.619Z · LW(p) · GW(p)

I don't know what you'd call an "optimization target", but if you treat the official written goals as the values of the organization, that will mark you as a useful idiot. You will lose your job whenever it suits the personal interests of the people making decisions.

Let's consider political parties in the US sense of the term, since corporate examples might be too easy. A party theoretically 'wants' to get votes. They could achieve this by (for example) swaying new voters or people who previously voted against them to their side. But this might weaken the power of party leaders within the party. And perhaps those leaders hold office in gerrymandered districts, expecting to retire before the scheme falls apart. Or maybe they don't hold elected office at all, and get paid regardless of the long-term demographic forecast for their party. Or maybe they simply fear the consequences to themselves if they happen to offend their local base while swaying voters in future national elections.

comment by hairyfigment · 2014-08-06T17:36:58.937Z · LW(p) · GW(p)

There is, to the [Slytherin adept], only one reality governing everything from quarks to galaxies. Humans have no special place within it. Any idea predicated on the special status of the human — such as justice, fairness, equality, talent — is raw material for a theater of mediated realities that can be created via subtraction of conflicting evidence, polishing and masking.

Replies from: Stabilizer
comment by Stabilizer · 2014-08-06T18:55:12.724Z · LW(p) · GW(p)

While I find Venkatesh Rao to be insightful, his writing can be quite frustrating. He seems to be allergic to speaking plainly. Here is a possible rewrite of the above quote:

Slytherin-adepts use human ideals -- like justice, fairness, equality, talent -- to deceive people. They employ these ideals in rhetoric, often to turn attention away from conflicting evidence.

Replies from: Qwake, hairyfigment
comment by Qwake · 2014-08-06T20:00:15.917Z · LW(p) · GW(p)

The impression I got is more that Slytherin adepts believe that human ideals such as justice, fairness, equality, and talent distort reality, because those ideals rest on the assumption that humans hold a special place in the universe, an assumption Slytherin adepts believe to be false.

Replies from: hairyfigment
comment by hairyfigment · 2014-08-06T20:41:08.820Z · LW(p) · GW(p)

Yes to both this and the grandparent - though in principle, a Slytherin might try to produce an environment where those ideals make sense, out of personal preference.

comment by hairyfigment · 2014-08-06T20:51:34.584Z · LW(p) · GW(p)

Actually, in addition to the sibling comment, I should point out that "rhetoric" implies people claiming all the time that they're serving justice or what have you. Mostly (as I understand the quote) they just need to hide contrary evidence from view. Provide a distraction, and people will continue to believe their existing ideals determine reality.

comment by shminux · 2014-08-12T18:13:40.732Z · LW(p) · GW(p)

society should not be looking for ways to maintain privacy. It should be looking for ways to make privacy unnecessary. We will never be free until we lose our unnecessary secrets and discover we are better off without them.

Scott Adams

(Please read the link for context before commenting on the quote alone)

Replies from: Lumifer
comment by Lumifer · 2014-08-12T18:24:29.001Z · LW(p) · GW(p)

I disagree with the premise that there are only two reasons to want privacy.

Replies from: soreff, Richard_Kennaway
comment by soreff · 2014-08-17T00:25:06.026Z · LW(p) · GW(p)

Agreed. If nothing else, in a bargaining process, keeping private the maximum or minimum price one would accept during the negotiation doesn't fit into either category.

Replies from: army1987
comment by A1987dM (army1987) · 2014-08-21T10:08:51.755Z · LW(p) · GW(p)

But if both parties were forbidden from keeping their reservation price secret, the problem would be less bad, so it does kind of fit the spirit of the second category, though not its letter.

comment by Richard_Kennaway · 2014-08-13T07:50:52.742Z · LW(p) · GW(p)

I agree with your disagreement. For context, here are those two reasons, with which Adams begins his essay. It's only a click away, but I think it deserves to be dragged into the light:

There are only two reasons to have privacy and both of them involve dysfunction. You might want privacy because...

1. you plan to do something illegal or unethical.

or

2. to protect you from a dysfunctional world.

That pretty much condemns the rest of the article. If he can't think of protecting oneself from other people's criminal activities, protecting oneself from other people's judgements, protecting one's creative activities from dissipation, protecting one's investigations from being scooped, protecting business secrets, and the basic feeling of GODDAMMIT THIS IS NONE OF YOUR BUSINESS, then what planet is he... oh, forget it. He's writing this tosh just to get responses like that.

Scott Adams is a humorist, not a philosopher. Dilbert was worth reading. Since mining out that seam it's been a downhill journey into clickbait. He even admits to the game at the end:

I know this sort of topic gets massive down votes because you don't want to risk losing privacy. But please do me a favor and rate this post on the entertainment value alone. I'm trying to gauge how interesting this topic is to you. Thank you!

Replies from: CCC
comment by CCC · 2014-08-13T10:42:21.091Z · LW(p) · GW(p)

If he can't think of protecting oneself from other people's criminal activities, protecting oneself from other people's judgements, protecting one's creative activities from dissipation, protecting one's investigations from being scooped, protecting business secrets, and the basic feeling of GODDAMMIT THIS IS NONE OF YOUR BUSINESS, then what planet is he... oh, forget it.

I think most of these (all except "protecting one's investigations from being scooped" and possibly "protecting business secrets" or "THIS IS NONE OF YOUR BUSINESS") could fall under "protect you from a dysfunctional world", depending on the definition of "dysfunctional". That is a very broad reason, after all; almost as broad as "to protect you from negative consequences".

Of course, that implies that a non-"dysfunctional" world would be some variant of utopia - presumably one where everyone more or less accepts Adams' basic viewpoints.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-08-13T11:02:37.551Z · LW(p) · GW(p)

Yes, if you label every reason to keep the world and his dog out of your business "dysfunctional", then the whole thing reduces to a tautology.

Of course, that implies that a non-"dysfunctional" world would be some variant of utopia - presumably one where everyone more or less accepts Adams' basic viewpoints.

As I say, Adams is not a deep thinker, he just plays one on the net.

Replies from: Lumifer, CCC
comment by Lumifer · 2014-08-13T14:35:25.299Z · LW(p) · GW(p)

Adams is not a deep thinker, he just plays one on the net

Well, first it's much better to play a deep thinker on the 'net than do the usual thing and play an idiot on the 'net...

Second, it doesn't look like he necessarily commits to everything he throws out in his blog. He plays with ideas, tries them on for size, puts them on a stick and waves them at people, etc. I think that's fine and useful as long as you don't take everything he writes very very seriously.

Replies from: Azathoth123
comment by Azathoth123 · 2014-08-14T02:23:21.133Z · LW(p) · GW(p)

Well, first it's much better to play a deep thinker on the 'net than do the usual thing and play an idiot on the 'net...

I'm not sure about that, given what happens when someone who's not a deep thinker tries to play one.

Replies from: Lumifer
comment by Lumifer · 2014-08-14T02:24:12.374Z · LW(p) · GW(p)

So, what happens?

comment by CCC · 2014-08-14T04:17:05.460Z · LW(p) · GW(p)

Yes, if you label every reason to keep the world and his dog out of your business "dysfunctional", then the whole thing reduces to a tautology.

Well, yes. I read his argument less as an argument in favour of openness and more as a sort of whinge about how people make too much of a big deal about certain things (like homosexuality), which then leads to people keeping those things secret.

I'm not sure if that's what he intended with his argument, but that's what I got from it.

comment by skeptical_lurker · 2014-08-11T16:24:34.296Z · LW(p) · GW(p)

Any adequate spiritual system has to account for all of reality. Drinking is part of reality, therefore drinking is spiritual.

- a Vedic (Hindu) philosophy society in a pub.

This strikes me as rationality of the "taking joy in the merely real" sort from the unlikeliest of places. Either that, or a good way to rationalise getting drunk.

Replies from: Lumifer
comment by Lumifer · 2014-08-11T16:39:57.008Z · LW(p) · GW(p)

therefore drinking is spiritual.

Nope -- therefore any adequate spiritual system has to account for drinking. I don't think it's a problem...

Replies from: Stabilizer
comment by Stabilizer · 2014-08-11T19:16:38.623Z · LW(p) · GW(p)

You nailed it.

therefore drinking is spiritual.

This is the kind of bullshit logic many religions adopt to get from A to B, where A is something innocuous-sounding and B is something that sounds profound. It works because thinking is contaminative. In the above example, there was a simple conflation of the concepts behind the words "spiritual system" and "spiritual." Most people won't pick up on that because the two words sound very similar.

Thus, in getting from A to B via a sequence C, D, E, ..., all you have to do is slightly change the meanings of the words (or use similar-sounding words) at each step of the argument. By the time you reach B, you could have proved whatever you wanted.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2014-08-11T20:03:02.946Z · LW(p) · GW(p)

there was a simple conflation of the concepts behind the words "spiritual system" and "spiritual."

I dunno, thinking about it in terms of "spiritual system" applying in general, and "spiritual" applying to a specific case does not seem like a conflation, in the same way that "set" and "element of set" are distinct.

By the time you reach B, you could have proved whatever you wanted.

In this case this certainly is true:

Drinking is part of reality, therefore drinking is spiritual.

generalises to:

X is part of reality, therefore X is spiritual.

Of course, this might sound more profound when you've been drinking.

Replies from: Stabilizer
comment by Stabilizer · 2014-08-11T20:52:21.760Z · LW(p) · GW(p)

I dunno, thinking about it in terms of "spiritual system" applying in general, and "spiritual" applying to a specific case does not seem like a conflation, in the same way that "set" and "element of set" are distinct.

Not all things referred to in a spiritual system need be spiritual. For example, a spiritual system could say that drinking is not spiritual -- which is what Islam explicitly says. Indeed, attaching the tag "spiritual" or "not spiritual" to different activities is one of the main goals of religions.

comment by Torello · 2014-08-04T17:30:43.984Z · LW(p) · GW(p)

Material, adj. Having an actual existence, as distinguished from an imaginary one. Important.

  • Ambrose Bierce, The Enlarged Devil's Dictionary, Compiled and Edited by Ernest J. Hopkins, p. 194

Replies from: VAuroch
comment by VAuroch · 2014-08-04T21:02:43.089Z · LW(p) · GW(p)

Fail to see the relevance.

Replies from: Torello
comment by Torello · 2014-08-04T21:19:53.439Z · LW(p) · GW(p)

I've always considered materialism to be intertwined with rationality.

Replies from: Benito, VAuroch
comment by Ben Pace (Benito) · 2014-08-04T22:00:27.080Z · LW(p) · GW(p)

Er, I always thought we were celebrating good cognitive algorithms here, not individual belief tokens.

comment by VAuroch · 2014-08-05T01:46:17.975Z · LW(p) · GW(p)

It does tend to be a belief that follows from increased rationality, but snarky remarks about it (while amusing) aren't particularly productive towards improving rationality.