Rationality Quotes February 2013

post by arundelo · 2013-02-05T22:20:50.370Z · LW · GW · Legacy · 566 comments

Another monthly installment of the rationality quotes thread. The usual rules apply:

  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote comments or posts from Less Wrong itself or from Overcoming Bias.
  • No more than 5 quotes per person per monthly thread, please.

566 comments

Comments sorted by top scores.

comment by Mestroyer · 2013-02-06T05:52:02.330Z · LW(p) · GW(p)

"If all your friends jumped off a bridge, would you jump too?"

"Oh jeez. Probably."

"What!? Why!?"

"Because all my friends did. Think about it -- which scenario is more likely: every single person I know, many of them levelheaded and afraid of heights, abruptly went crazy at exactly the same time... ...or the bridge is on fire?"

Randall Munroe, on updating on other people's beliefs.
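
The punchline is just a likelihood-ratio update. A minimal sketch in Python, with invented probabilities purely for illustration (none of these numbers are from the comic):

```python
# Toy Bayesian reading of the joke. All numbers are invented.
# H1: everyone I know abruptly went crazy at the same time.
# H2: the bridge is on fire.
# E:  all of my friends jumped off the bridge.

p_h1, p_h2 = 1e-9, 1e-4      # prior: simultaneous mass insanity is far rarer than a fire
p_e_h1, p_e_h2 = 1.0, 0.9    # either hypothesis makes the jumping very likely

posterior_ratio = (p_h2 * p_e_h2) / (p_h1 * p_e_h1)
print(f"P(fire) / P(mass insanity), given the jump: {posterior_ratio:.0f} : 1")
# ~90000 : 1 in favor of the fire -- so yes, probably jump too.
```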

Replies from: satt, TobyBartels, army1987, olibain
comment by satt · 2013-02-09T17:05:01.985Z · LW(p) · GW(p)

Dilbert dunnit first!

(Seeing that strip again reminds me of an explanation for why teenagers in the US tend to take more risks than adults. It's not because the teenagers irrationally underestimate risks but because they see bigger benefits to taking risks.)

comment by TobyBartels · 2013-02-07T16:10:50.001Z · LW(p) · GW(p)

Let me just put the text string ‘xkcd’ in here, because I was going to add this if nobody else had, and it's lucky that I found it first.

Oh, and there's more text in the comic than what's quoted, and it's good too, so read the comic, everybody!

comment by A1987dM (army1987) · 2013-02-09T13:59:57.642Z · LW(p) · GW(p)

See also this Will_Newsome comment. (I incorrectly remembered that it said something like “If all your friends jumped off a bridge, would you jump too?” “If all of them survived, I probably would.”)

comment by olibain · 2013-02-20T20:58:23.493Z · LW(p) · GW(p)

The " every single person I know, many of them levelheaded and afraid of heights, abruptly went crazy at exactly the same time" scenario should be given some credence in human society; there is such a thing as puberty. The definition of puberty being " every single person I know abruptly went crazy at exactly the same time, including me".

comment by Eugine_Nier · 2013-02-02T06:06:48.986Z · LW(p) · GW(p)

It’s nice to elect the right people, but that’s not the way you solve things. The way you solve things is by making it politically profitable for the wrong people to do the right things.

-- Milton Friedman

Replies from: Multiheaded, AlexSchell, Estarlio, ChristianKl
comment by Multiheaded · 2013-02-04T18:32:54.191Z · LW(p) · GW(p)

No one can be good for long if goodness is not in demand.

-- Bertolt Brecht

(I'm always amused when people of opposite political views express similar thoughts on society.)

Also:

The aim of science is not to open the door to infinite wisdom, but to set some limit on infinite error.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-02-05T00:28:40.877Z · LW(p) · GW(p)

I think the Brecht quote is somewhat misleading. The problem is not that not enough people want/demand goodness; the problem is that it is too easy to profit by cheating without getting caught.

comment by AlexSchell · 2013-02-06T20:44:00.742Z · LW(p) · GW(p)

This solution only works if you are in the special position of being able to make institutional design changes that can't be undone by potential future enemies. Otherwise, whose "right things" will happen depends on who is currently in charge of institutional design (think gerrymandering).

Replies from: Sengachi
comment by Sengachi · 2013-02-08T00:32:17.259Z · LW(p) · GW(p)

Then try to make it politically profitable to help sustain those changes you make. Make it so painfully obvious that the only reason to remove those changes would be for one's unethical gain that no politician would ever do so. The problem then, though, is that people end up just not caring enough.

Replies from: AlexSchell
comment by AlexSchell · 2013-02-08T04:53:39.660Z · LW(p) · GW(p)

What you're describing is exactly the position of being able to make institutional design changes that can't be undone by potential future enemies. This position is "special" not only because the task is very difficult, but also because you have to be the first to think of it.

comment by Estarlio · 2013-02-12T02:29:20.172Z · LW(p) · GW(p)

Couldn't I also set up the system to try to exclude the wrong people from ever getting power?

It seems to me that computers are getting better at detecting liars, that we can now fact-check claims with an ease we never used to have, that conflicts of interest are generally fairly easy to see, and that we have all this research about how influence functions... In short, we've made a lot more progress on the judging-people front than on designing procedures and regulations that suit us and also serve as one-way functions.

Replies from: fubarobfusco, Richard_Kennaway
comment by fubarobfusco · 2013-02-12T02:46:02.889Z · LW(p) · GW(p)

Couldn't I also set up the system to try to exclude the wrong people from ever getting power?

Not if having power over others turns the right people into the wrong people.

comment by Richard_Kennaway · 2013-02-12T09:36:07.845Z · LW(p) · GW(p)

Couldn't I also set up the system

No. No-one can set up the system. The most that anyone can do is introduce a new piece into the game, pieces like Google, or Wikipedia, or Wikileaks.

comment by ChristianKl · 2013-02-02T16:35:19.951Z · LW(p) · GW(p)

That mentality is probably why US politics is as corrupt as it is at the moment. Electing people who aren't corrupt to replace corrupt people is very valuable if your goal is to have a well-governed country.

If you have the political goals of Milton Friedman, it might not be. If you want politicians to be corporate-friendly, then you make it politically profitable for them to do so by making it easy for companies to bribe them.

Replies from: Kingoftheinternet, Nornagest, Viliam_Bur, Luke_A_Somers
comment by Kingoftheinternet · 2013-02-02T19:55:50.520Z · LW(p) · GW(p)

I think the spirit of the quote is that instead of counting on anyone to be both a benevolent and an effective ruler, or counting on voters to recognize such things, we should design the political environment so that that will happen naturally, even when an office is occupied by a corrupt or ineffective person.

Replies from: woodside, ChristianKl
comment by woodside · 2013-02-03T06:42:52.991Z · LW(p) · GW(p)

This idea is primarily why I'm skeptical of the effectiveness of institutions like the Federal Reserve (despite not being a subject matter expert). It seems pretty clear that in order to be effective, its leadership has to be composed of people who are not only exceptionally brilliant, but exceptionally benevolent as well.

comment by ChristianKl · 2013-02-02T20:36:55.342Z · LW(p) · GW(p)

What do you think that "design the political environment so that that will happen naturally" means concretely?

The policies that Milton advocated got a huge boost because companies put lobbyists who distribute campaign money in the "right places" to switch political incentives.

There are political environments in which the actors try to do what is right instead of just maximizing their personal interests. Milton says in the quoted video that the US Congress isn't such an environment, and that it's not a problem anyone should be attempting to fix by electing different politicians.

Replies from: Vaniver, michaelkeenan, Eugine_Nier, HalMorris, Kingoftheinternet
comment by Vaniver · 2013-02-02T21:30:03.105Z · LW(p) · GW(p)

There are political environments in which the actors try to do what is right instead of just maximizing their personal interests.

The only one I've heard of is "fiction." Did you have an example in mind?

comment by michaelkeenan · 2013-02-03T04:24:33.109Z · LW(p) · GW(p)

An example is in Federalist No. 10. Madison is trying to design a political environment resilient to the corrupt effects of factions:

No man is allowed to be a judge in his own cause; because his interest would certainly bias his judgment, and, not improbably, corrupt his integrity. With equal, nay with greater reason, a body of men are unfit to be both judges and parties at the same time; yet what are many of the most important acts of legislation, but so many judicial determinations, not indeed concerning the rights of single persons, but concerning the rights of large bodies of citizens? and what are the different classes of Legislators, but advocates and parties to the causes which they determine? Is a law proposed concerning private debts? It is a question to which the creditors are parties on one side and the debtors on the other. Justice ought to hold the balance between them. Yet the parties are, and must be, themselves the judges; and the most numerous party, or, in other words, the most powerful faction, must be expected to prevail. Shall domestic manufactures be encouraged, and in what degree, by restrictions on foreign manufactures? are questions which would be differently decided by the landed and the manufacturing classes; and probably by neither, with a sole regard to justice and the public good. The apportionment of taxes on the various descriptions of property is an act which seems to require the most exact impartiality; yet there is, perhaps, no legislative act in which greater opportunity and temptation are given to a predominant party, to trample on the rules of justice. Every shilling, with which they overburden the inferior number, is a shilling saved to their own pockets.

It is in vain to say, that enlightened statesmen will be able to adjust these clashing interests, and render them all subservient to the public good. Enlightened statesmen will not always be at the helm: Nor, in many cases, can such an adjustment be made at all, without taking into view indirect and remote considerations, which will rarely prevail over the immediate interest which one party may find in disregarding the rights of another, or the good of the whole.

His concrete solutions are to choose representative democracy over direct democracy, and to have a large republic rather than a small one.

A more recent example would be last year's ban on members of Congress trading stocks based on the inside information they have as lawmakers. I think Milton Friedman's point is that one should direct efforts toward supporting policies like that, rather than trying to elect politicians who are too ethical to insider-trade.

Replies from: Eugine_Nier, Dahlen, Eugine_Nier, ChristianKl
comment by Eugine_Nier · 2013-02-03T04:49:21.550Z · LW(p) · GW(p)

Why is this comment at -1 yet 100% positive?

It then goes to 0 and 0% positive when I up-vote it.

comment by Dahlen · 2013-02-05T23:13:05.196Z · LW(p) · GW(p)

His concrete solutions are to choose representative democracy over direct democracy, and to have a large republic rather than a small one.

Why? How does this fix things? Without quite knowing what problem this solution is meant to address, the first consequence of this policy (representative democracy + large republic) that comes to my mind, judging it independently, is that it looks optimized for the smallest number of rulers and the greatest number of people limited in their political power by comparison -- in other words, it seems to concentrate power. (If there are other implications, they're not as obvious to me as this one.) How or why does that help overall impartiality?

comment by Eugine_Nier · 2013-02-03T04:47:59.757Z · LW(p) · GW(p)

Why is this comment at -1 yet 100% positive?

comment by ChristianKl · 2013-02-04T03:06:42.494Z · LW(p) · GW(p)

An example is in Federalist No. 10. Madison is trying to design a political environment resilient to the corrupt effects of factions:

Designing for resilience is not the same thing as designing a system to get politicians to do certain things. If, like Milton Friedman, you think that "the right thing" is free-market policies, then designing the political system to give political advantages to people who push free-market policies is likely to reduce resilience.

A more recent example would be last year's ban on members of Congress trading stocks based on the inside information they have as lawmakers. I think Milton Friedman's point is that one should direct efforts toward supporting policies like that, rather than trying to elect politicians who are too ethical to insider-trade.

Given Friedman's politics, I doubt that he had actions such as restricting members' ability to trade stocks in mind. That's not the kind of political agenda that Friedman pushed.

Then I don't think you understand what that policy does. Lawmakers get their information regardless of how they vote or what policies they pursue. That kind of insider trading allows lawmakers to personally enrich themselves instead of making bargains with people who want to hand them money.

What the policy does do is provide a new tool for the people who have information about a congressman's trades to blackmail him.

You might get some positive effects through the policy, so I'm not clear that it's a bad law.

rather than trying to elect politicians who are too ethical to insider-trade.

What's the problem that you are trying to solve in the first place? Insider trading? Let Eliot Spitzer run the SEC and double SEC funding. Insider trading doesn't exist because there's a lack of laws against the practice.

Replies from: michaelkeenan
comment by michaelkeenan · 2013-02-04T04:51:14.766Z · LW(p) · GW(p)

I didn't downvote you, but I'm not continuing the argument because it seems really political in a partisan way. I suspect that's what's motivating the downvotes.

comment by Eugine_Nier · 2013-02-05T00:44:19.249Z · LW(p) · GW(p)

The policies that Milton advocated got a huge boost because companies put lobbyists who distribute campaign money in the "right places" to switch political incentives.

You seem to be confusing support for a free market with rent-seeking. Milton Friedman supported free markets; in this and your follow-up comment you seem to equate that support with rent-seeking.

comment by HalMorris · 2013-02-02T23:00:07.735Z · LW(p) · GW(p)

A large part of the Federalist Papers is about designing structures and incentives to make government robust against overwhelming ambition and corruption - to make ambition in one branch check ambition in another branch, similarly between state and federal and between state and state.

That said, I think Friedman (I was never on a first-name basis with him) is overly dismissive of electing the right people. But again we need to set up structures and incentives differently, so elections are less of an entertaining spectacle and more like a hiring search or job interview. The structures or institutions that might improve the situation don't have to come from legislation (though some of them could - I'm not against that on principle); e.g. parties weren't legislated into being, and if we want something better, we should not look exclusively to legislation.

It seems at least conceivable to have some agreement across party lines that our electoral processes should, as I said before, look less like a circus we passively attend, and more like a hiring search / job interview type process.

I've often thought of ironically proposing that we should legislate that job interviews have to be more like elections: i.e. stop limiting candidates' ability to express themselves. If somebody wants to bring a brass band to an interview, it's their free speech right to do so. If they want to spread nasty rumours about the other candidates, why not?

Not so ironically, maybe our best hope is to persuade people that our current approach with its sound bites, catch phrases, push polls, gerrymandered "safe seats" and so on is a source of dangerous blindness that affects all of us, with all of our different interpretations of "the good" (of the country, etc.); to persuade people to find all of that current process repulsive, and to insist that all that airtime, column inches, etc., be devoted to information about the candidates, analysis of the current crises and challenges and possibilities, and to debates of all sorts: dozens of debates, discussions, and joint press conferences among the candidates.

One problem: "Information" (as in "information" about the candidate, etc.) is a word that gets bandied about too casually. What might possibly be done to increase the sanity with which people evaluate what is and isn't truly "information". I think that is the big problem that people concerned with rationality might be able to make some progress in solving.

Replies from: Desrtopa
comment by Desrtopa · 2013-02-04T17:17:24.747Z · LW(p) · GW(p)

I've often thought of ironically proposing that we should legislate that job interviews have to be more like elections: i.e. stop limiting candidates' ability to express themselves. If somebody wants to bring a brass band to an interview, it's their free speech right to do so. If they want to spread nasty rumours about the other candidates, why not?

I think that the only rules we have against spreading nasty rumors about other candidates are our laws against defamation; spreading malicious falsehoods about people in order to deprive them of business opportunities in favor of yourself is exactly the sort of thing that those laws prohibit, because otherwise the people who did that sort of thing would be the most likely to get the jobs. Job candidates would have to become willing to undercut their rivals in order to stay competitive, and we'd be at risk of devolving to a state where having to filter through webs of malicious falsehood in any hiring situation where the candidates are known to each other was the norm rather than the exception.

As for bringing a brass band to a job interview, candidates are entitled to do such things, with the caveat that they wouldn't get the jobs. It would be an awfully rare position where bringing a brass band to the interview would be positive evidence of the candidate's ability to perform the job well. Giving candidates too much leeway for self expression runs the risk of turning interviews into contests of showmanship.

Replies from: HalMorris
comment by HalMorris · 2013-02-04T17:58:07.392Z · LW(p) · GW(p)

and we'd be at risk of devolving to a state where having to filter through webs of malicious falsehood in any hiring situation where the candidates are known to each other was the norm rather than the exception.

which sounds to me like the state we are in w.r.t. election to political offices.

My point is that nobody hires people for ordinary jobs the way we collectively hire a president. We are extremely passive, and don't manage the process. There is a field I am very interested in called Social Epistemology (it's a divided field, with one part being excessively postmodern and relativistic; the other side, which interests me, holds that there really are such things as truth and falsehood, and the biggest name in that area is Alvin Goldman). This field is very interested in institutions, such as the law court in its different forms, that have tried to come up with procedures and standards (like selectivity in the sort of evidence you will listen to) that try to improve the chances of coming to the right conclusion. There is quite a lot of emphasis on law courts, but it occurred to me that hiring committees do something similar; they require things like resumes, and have a systematic way of questioning candidates rather than saying to candidates, "Come and put on a show and we'll see what we think of you".

Replies from: Desrtopa
comment by Desrtopa · 2013-02-04T20:02:25.899Z · LW(p) · GW(p)

which sounds to me like the state we are in w.r.t. election to political offices.

I don't understand why you're arguing that job interviews should be more like elections in that case. If the process leads to bad outcomes for elections, and is likely to lead to bad outcomes for job interviews as well, why use it?

Replies from: HalMorris
comment by HalMorris · 2013-02-04T20:19:50.094Z · LW(p) · GW(p)

To quote myself:

I've often thought of ironically proposing that we should legislate that job interviews have to be more like elections ...

It's irony. I.e., it's such a bad idea that I'd like to suggest it's also a bad way to elect presidents.

Replies from: Desrtopa
comment by Desrtopa · 2013-02-04T22:31:17.308Z · LW(p) · GW(p)

Ah, see, I thought you meant that ironically, while it's not a good way to elect presidents, it would be an improvement on how we conduct interviews.

comment by Kingoftheinternet · 2013-02-02T21:20:45.368Z · LW(p) · GW(p)

Concretely, Milton Friedman probably didn't have a workable plan for bringing about such an environment, though he may have thought he did; I'm not familiar enough with his thinking. One next-best option would be to try to convince other people that that's what part of a solution to bad government would look like, which under a charitable interpretation of his motives, is what he was doing with that statement he made.

comment by Nornagest · 2013-02-02T20:26:39.798Z · LW(p) · GW(p)

The nice thing about working with incentives is that they're pretty stable relative to political leanings. I'd expect a given person's perceptions of politicians' level of corruption or incompetence or any other negative adjective you can think of to depend almost entirely on party affiliation, but you can actually leverage that to get changes in incentive structures passed: just frame it as necessary to curb the excesses of those guys over there, you know, the ones you hate.

And in any case the quote works just as well for the governed. As anyone who's ever moderated a large forum can tell you, playing with incentives works almost embarrassingly well and quickly compared to working on sympathy or respect for authority. Of course, it's also harder to do.

Replies from: HalMorris, ChristianKl
comment by HalMorris · 2013-02-03T15:47:54.329Z · LW(p) · GW(p)

As anyone who's ever moderated a large forum can tell you, playing with incentives works almost embarrassingly well and quickly compared to working on sympathy or respect for authority. Of course, it's also harder to do.

That sounds very intriguing. Can you give some example of how you've used "playing with incentives" successfully to (I assume - correct me if I'm wrong) maintain a productive forum? That might be very enlightening - seriously, no irony here.

Replies from: Nornagest, Baruta07
comment by Nornagest · 2013-02-03T19:58:41.517Z · LW(p) · GW(p)

Simplest positive example I can think of offhand: if there's lots of content-free posting going on and you want it to go away, changing the board parameters so that user titles are no longer based on postcount goes a surprisingly long way.

Simplest negative example I can think of: if you think there's too much complaining going on (I didn't, but the board owner at the time did), allocating a subforum for complaints will only make things worse. Even if you call it something like "Constructive Criticism".

Replies from: HalMorris
comment by HalMorris · 2013-02-03T23:09:19.221Z · LW(p) · GW(p)

Sorry, I've never run a forum. Is there any easy place to learn enough to make "user titles are no longer based on postcount" make sense to me (unless you want to take the time to explain it)? I really am very interested.

Replies from: Nornagest
comment by Nornagest · 2013-02-03T23:17:27.499Z · LW(p) · GW(p)

Sure. One feature in phpBB and several other popular bulletin board packages (but not in reddit or Slashdot or any of their descendants) is the ability to set user titles: little snippets of descriptive text that get displayed after a user's handle and which are usually intended to give some information about their status in the forum.

The most common arrangement is to have a couple of special titles for administrative positions (say, "mod" and "admin"), then several others for normal users that're tiered based on the number of posts the user's written, i.e. postcount: a user might start with the title "newbie" or "lurker", then progress through five or six cutely themed titles as they post more stuff. It's common for admins to change the exact titles and the progression pattern to suit the needs of the forum (a roleplaying forum for example might name them after monsters of increasing power), but uncommon to change the basic scheme.

You may notice that this doesn't differentiate on post quality.
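
A minimal sketch of the postcount-title scheme described above, with hypothetical tier names and thresholds (real phpBB ranks are configured through the admin panel rather than written as code):

```python
# Hypothetical postcount-to-title tiers, mimicking the common phpBB arrangement.
# Note what is absent: nothing here measures post *quality*, only volume --
# which is why postcount-based titles reward content-free posting.
TIERS = [(0, "lurker"), (50, "newbie"), (250, "regular"),
         (1000, "veteran"), (5000, "old guard")]

def user_title(postcount: int) -> str:
    """Return the highest tier title whose threshold the postcount meets."""
    title = TIERS[0][1]
    for threshold, name in TIERS:
        if postcount >= threshold:
            title = name
    return title

assert user_title(3) == "lurker"
assert user_title(1200) == "veteran"
```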

comment by Baruta07 · 2013-02-04T21:23:34.127Z · LW(p) · GW(p)

Look up some of the karma discussions on this very site.

comment by ChristianKl · 2013-02-02T21:08:37.103Z · LW(p) · GW(p)

I'd expect a given person's perceptions of politicians' level of corruption or incompetence or any other negative adjective you can think of to depend almost entirely on party affiliation

Very few people argued that Cato was corrupt. Even those who disagreed with him mostly didn't.

As anyone who's ever moderated a large forum can tell you, playing with incentives works almost embarrassingly well and quickly compared to working on sympathy or respect for authority.

I do have experience with moderating a large forum and I still believe in not trying to corrupt people. You want people who are open to rational discourse and who change their position when you bring them arguments, even in the absence of incentives to switch their position.

Replies from: Nornagest, taelor
comment by Nornagest · 2013-02-02T21:15:16.874Z · LW(p) · GW(p)

I do have experience with moderating a large forum and I still believe in not trying to corrupt people.

I'd say that setting up incentives so that people within a system do culturally useful things out of their own self-interest is about as close to an opposite of corrupting people as we're likely to find.

Replies from: ChristianKl
comment by ChristianKl · 2013-02-03T05:52:53.774Z · LW(p) · GW(p)

Doing X for a specially crafted incentive Y rather than for the intrinsic value of X is a form of corruption. It's not always possible for every decision to be made for intrinsic value, but it's a problem if you have a political environment where there's a lot of pressure to act for Y's sake.

Especially if you can't get any political power without Y, you won't have many people in your political system who pursue political goals for their intrinsic value.

Things work much better when politicians do what they consider to be right instead of having to be coerced into taking whatever position is politically advantageous.

Replies from: Nornagest, HalMorris
comment by Nornagest · 2013-02-03T06:32:26.707Z · LW(p) · GW(p)

That's a... remarkably loose definition of corruption you've got going on there.

I'm not sure it's practical to make a political system completely free of incentives, as long as you're working with humans governing humans: the closest approximations I can think of would have to involve a leadership caste socially and economically isolated from the people they govern and without any means of improving their own welfare, and that's so far removed from anything historical I know about that I don't even want to try working out all its long-term implications. Imperial Japan's about the closest, and that degenerated at first into proxy governance by provincial warlords and later into a military-aristocratic dictatorship ruling in the imperial family's name but not in practice controlled by it.

Now, given anything resembling our existing politics, it seems naive to behave as if the default incentives surrounding political power are nonexistent or weak enough that they're drowned out by altruistic impulses among those inclined to seek power -- or even among random members of the populace, if you prefer direct democracy. This being the case, it makes far more sense to me to design systems to reward competent government -- however defined -- rather than to high-handedly dismiss any such attempts as unethical and rely wholly on the better angels of politicians' natures.

I'm not quite prepared to say that there can't exist any candidate systems where this wouldn't be necessary, but if you've got a proposal like that, we should really be talking about that proposal rather than speaking in generalities.

Replies from: HalMorris
comment by HalMorris · 2013-02-03T15:42:53.057Z · LW(p) · GW(p)

I'm not sure it's practical to make a political system completely free of incentives.

I'm not sure such a thing has been proposed (after reading most, if not all of this thread); in fact it sounds so absurd that I can't imagine what such a proposal would look like.

Maybe ChristianKl will correct me if he/she really is proposing a "political system completely free of incentives".

Replies from: CCC
comment by CCC · 2013-02-03T18:50:46.609Z · LW(p) · GW(p)

In The Tamuli, by David Eddings, one country's political system is described as an attempt to limit corruption. (The usual caveats regarding fictional evidence apply here, of course.) In short, when a person is elected onto the ruling council of the Isle of Tega, all that he owns is sold and the money is deposited into the country's treasury. He is then simply not permitted to own anything until his term is up, some four years later (presumably food and housing are provided at the expense of the state); when that time comes, the money in the treasury is divided among the ministers in proportion to how much they put in (and the former ministers presumably start re-purchasing stuff). Note that the one thing the ministers are not allowed to do is change the tax rates.

This is described as having two consequences. First of all, the Isle of Tega is the only country that always shows a profit. Secondly, the minute a man is nominated to become a minister, he is put under immediate armed guard to prevent him from running away (and remains under armed guard until his term is over). A government position is viewed with the same trepidation as a prison sentence.
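
On one reading of the payout rule (the novels give no numbers; the names and figures below are invented), ministers collectively gain only if the treasury as a whole grows, which is presumably the point:

```python
# Each minister's assets are escrowed on election; at the end of the term the
# treasury is divided in proportion to each minister's deposit. All names and
# numbers are hypothetical.
deposits = {"minister_a": 100.0, "minister_b": 300.0, "minister_c": 600.0}

def payout(deposits: dict, treasury_at_end: float) -> dict:
    """Split the final treasury in proportion to each minister's deposit."""
    total_in = sum(deposits.values())
    return {name: treasury_at_end * d / total_in for name, d in deposits.items()}

# If the state ran a 20% profit on the escrowed 1000, everyone gains 20%:
print(payout(deposits, 1200.0))
# {'minister_a': 120.0, 'minister_b': 360.0, 'minister_c': 720.0}
```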

Replies from: BerryPick6
comment by BerryPick6 · 2013-02-03T19:53:09.714Z · LW(p) · GW(p)

Wouldn't they still have incentives to aid parties who promise to repay them once their term is up? Similar to how some legislators, once they've retired from politics, conveniently acquire lucrative positions requiring little to no effort from companies they have helped out through the years?

Replies from: Nornagest, CCC
comment by Nornagest · 2013-02-03T20:14:15.759Z · LW(p) · GW(p)

Or to aid their families and friends, or to adopt policies that benefit their industry or hometown or social class -- I considered similar systems when I was writing the ancestor (probably unconsciously influenced by Eddings; I haven't read him in years, though), but decided that they were transparently unworkable.

Replies from: HalMorris
comment by HalMorris · 2013-02-03T23:20:34.805Z · LW(p) · GW(p)

Yes, it seems both too drastic, and not really able to accomplish the desired result.

Funny, I've wondered about a similarly drastic measure to improve the quality of voting, namely: for each election, select a random 1% (or some such -- small enough to not crash the economy) of the population and lock them up with nothing to do but learn about what's going on in the country and in the world and debate whom they should vote for. In the end, unlike in the jury system, it should still be a secret ballot. Of course, if as many people were exempted as in jury duty, then it would be biased. One would have to see how much exemption was unavoidable, and see whether the bias could be sufficiently minimized.

Replies from: CCC
comment by CCC · 2013-02-04T07:25:43.119Z · LW(p) · GW(p)

random 1% (or some such -- small enough to not crash the economy)

If it's small enough not to crash the economy, then is it big enough to reliably alter the election results? And who provides the information for them to read through?

comment by CCC · 2013-02-04T07:24:09.278Z · LW(p) · GW(p)

Wouldn't they still have incentives to aid parties who promise to repay them once their term is up?

Only if they can trust the promise; once their term is up, the parties have little real incentive to stick to their promise, after all.

There will be an incentive to aid people who immediately donate a great big chunk of money to the State, as that money will be shared out among the ministers at the end of their term in any case; but the incentive only works there if the great big chunk of money is more than the state would obtain by other means.

Replies from: BerryPick6
comment by BerryPick6 · 2013-02-04T08:20:54.595Z · LW(p) · GW(p)

Only if they can trust the promise; once their term is up, the parties have little real incentive to stick to their promise, after all.

In iterated games, defection has its price.

Replies from: CCC
comment by CCC · 2013-02-04T10:54:18.760Z · LW(p) · GW(p)

I see your point and, on further thought, acknowledge it as correct.

comment by HalMorris · 2013-02-03T15:37:09.914Z · LW(p) · GW(p)

Doing X for a specially crafted incentive Y rather than for the intrinsic value of X is a form of corruption. It's not always possible for every decision to be made for intrinsic value, but it's a problem if you have a political environment where there's a lot of pressure to act for Y's sake.

Especially if you can't get any political power without Y, you won't have many people in your political system who pursue political goals for their intrinsic value.

This seems like possibly quite a useful bit of abstraction, offering the potential of arguing the merits of a single principle that appears in many manifestations: in politics, corporations, volunteer organizations, etc. But I'm just having trouble getting it clearly in my head. Two things might help.

1) One or two concrete examples where you flesh out "X" and "Y". I spent two years in a math Ph.D. program, which is long enough to know that to move forward with an abstraction, it is best to start with at least a couple of examples.

2) Consider the "agency problem" (or "principal-agent problem"), which to me seems the most promising abstraction for reasoning about corruption. See http://en.wikipedia.org/wiki/Agency_problem; it may be very close to what you're aiming at.

comment by taelor · 2013-02-03T01:22:49.429Z · LW(p) · GW(p)

You want people who are open to rational discourse and who change their position when you bring them arguments, even in the absence of incentives to switch their position.

Be sure to let us know when you find such people. One of the main conceits of this site is that rationalists should win. If it's possible to get ahead by not being a rationalist (even temporarily), people are going to do that. Ultimately, I think what the original quote from Friedman boils down to is the old adage that you should try to fix the system rather than blame the people in it.

comment by Viliam_Bur · 2013-02-02T21:16:53.405Z · LW(p) · GW(p)

If you have corrupt politicians, blame the voters. The politicians did not vote themselves into office. (Unless they own the vote-counting machine factory.) I guess the quote suggests that "making it politically profitable for the wrong people to do the right things", whatever precisely that means, could still be easier than replacing the whole population of voters, or at least the majority of them.

Replies from: CronoDAS, CronoDAS
comment by Luke_A_Somers · 2013-02-06T14:52:46.308Z · LW(p) · GW(p)

Agreed. It's too easy to pander to a base that doesn't expect you to be good, just deliver a few things... things that matter a great deal less than the cumulative effect of having the right people in charge.

comment by VincentYu · 2013-02-01T21:36:33.151Z · LW(p) · GW(p)

In Munich in the days of the great theoretical physicist Arnold Sommerfeld (1868–1954), trolley cars were cooled in summer by two small fans set into their ceilings. When the trolley was in motion, air flowing over its top would spin the fans, pulling warm air out of the cars. One student noticed that although the motion of any given fan was fairly random—fans could turn either clockwise or counterclockwise—the two fans in a single car nearly always rotated in opposite directions. Why was this? Finally he brought the problem to Sommerfeld.

“That is easy to explain,” said Sommerfeld. “Air hits the fan at the front of the car first, giving it a random motion in one direction. But once the trolley begins to move, a vortex created by the first fan travels down the top of the car and sets the second fan moving in precisely the same direction.”

“But, Professor Sommerfeld,” the student protested, “what happens is in fact the opposite! The two fans nearly always rotate in different directions.”

“Ahhhh!” said Sommerfeld. “But of course that is even easier to explain.”

Devine and Cohen, Absolute Zero Gravity, p. 96.

Replies from: Luke_A_Somers, John_Maxwell_IV
comment by Luke_A_Somers · 2013-02-06T14:41:49.231Z · LW(p) · GW(p)

So, uh, what's the explanation?

Replies from: shminux, TrE
comment by Shmi (shminux) · 2013-02-07T23:03:43.411Z · LW(p) · GW(p)

The story appears to be apocryphal. I've heard many versions of it associated with various famous scientists. The source quoted is a collection of jokes, with very low veracity. Additionally, there are no independent versions of the story anywhere on Google. By the way, the quoted date of Sommerfeld's death is also incorrect (he died in 1951, not 1954). I wonder if there even were (unpowered) ceiling fans in Munich's trolleys during that time.

Replies from: Luke_A_Somers, Desrtopa
comment by Luke_A_Somers · 2013-02-08T14:42:17.254Z · LW(p) · GW(p)

Good point. Effects that don't exist don't need to be explained.

comment by Desrtopa · 2013-02-08T15:08:19.854Z · LW(p) · GW(p)

I wonder if there even were (unpowered) ceiling fans in Munich's trolleys during that time.

I'm not much of an engineer, but based on my understanding of their design from the description given, I can't see how they would even contribute to their alleged purpose.

comment by TrE · 2013-02-07T22:14:28.618Z · LW(p) · GW(p)

Perhaps because pressure is (approximately) constant, for every molecule going into the car, one must leave it (on average)?

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-02-08T15:20:19.418Z · LW(p) · GW(p)

Trolleys have open windows in summer.

comment by John_Maxwell (John_Maxwell_IV) · 2013-02-06T09:14:22.856Z · LW(p) · GW(p)

It's an interesting story, but it might not be as silly as it sounds if one considers "ease of explanation" as a metric for how much credence one's model assigns to a given scenario. (Yes, I agree this is a hackneyed way of modeling stuff.)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-02-07T03:02:54.072Z · LW(p) · GW(p)

Unfortunately, this seems to be the default way humans do things.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2013-02-07T03:32:34.363Z · LW(p) · GW(p)

Well, the world is a complicated place and we have limited working memory, so our models can only be so good without the use of external tools. In practice, I think looking for reasons why something is true, then looking for reasons why it isn't true, has been a useful rationality technique for me. Maybe because I'm more motivated to think of creative, sometimes-valid arguments when I'm rationalizing one way or the other.

comment by philh · 2013-02-02T11:22:32.350Z · LW(p) · GW(p)

Men in Black on guessing the teacher's password:

Zed: You're all here because you are the best of the best. Marines, air force, navy SEALs, army rangers, NYPD. And we're looking for one of you. Just one.
[...]
Edwards: Maybe you already answered this, but, why exactly are we here?
Zed: [noticing a recruit raising his hand] Son?
Jenson: Second Lieutenant, Jake Jenson. West Point. Graduate with honors. We're here because you are looking for the best of the best of the best, sir! [throws Edwards a contemptuous glance]
[Edwards laughs]
Zed: What's so funny, Edwards?
Edwards: Boy, Captain America over here! "The best of the best of the best, sir!" "With honors." Yeah, he's just really excited and he has no clue why we're here. That's just, that's very funny to me.

Replies from: juped
comment by juped · 2013-02-06T15:41:42.593Z · LW(p) · GW(p)

The scene in question.

Replies from: DSimon
comment by DSimon · 2013-02-13T00:23:05.045Z · LW(p) · GW(p)

That whole testing sequence is one of the best examples in film of how to distinguish what's expected of you from what's actually a good idea.

(Or in that specific case, what seems to be expected of you.)

comment by Grognor · 2013-02-03T21:59:37.060Z · LW(p) · GW(p)

It is because a mirror has no commitment to any image that it can clearly and accurately reflect any image before it. The mind of a warrior is like a mirror in that it has no commitment to any outcome and is free to let form and purpose result on the spot, according to the situation.

—Yagyū Munenori, The Life-Giving Sword

comment by Stabilizer · 2013-02-05T01:20:51.152Z · LW(p) · GW(p)

Shipping is a feature. A really important feature. Your product must have it.

-- Joel Spolsky

Replies from: CronoDAS, fubarobfusco, army1987, Richard_Kennaway
comment by CronoDAS · 2013-02-06T20:15:45.131Z · LW(p) · GW(p)

Real artists ship.

-- Steve Jobs

(The Organization Formerly Known as SIAI had this problem until relatively recently. Eliezer worked, but he never published anything.)

Replies from: cody-bryce
comment by cody-bryce · 2013-02-20T19:44:40.303Z · LW(p) · GW(p)

And they ship the characters the fans want.

comment by fubarobfusco · 2013-02-05T05:27:05.393Z · LW(p) · GW(p)

If your service is down, it has no features.

Replies from: DanArmak
comment by DanArmak · 2013-02-05T18:10:34.287Z · LW(p) · GW(p)

And no bugs.

Replies from: ygert
comment by ygert · 2013-02-05T18:57:55.413Z · LW(p) · GW(p)

Well, there is one pretty major bug: that your service is not doing anything at all!

Replies from: fubarobfusco, shminux
comment by fubarobfusco · 2013-02-05T19:41:35.613Z · LW(p) · GW(p)

It has all the bugs. All of them.

(Well, not really. For instance, it doesn't have any security holes.)

Replies from: Strange7
comment by Strange7 · 2013-02-07T02:23:54.099Z · LW(p) · GW(p)

If it bears any resemblance to a product at all, your own admin-level access constitutes a potential security hole.

comment by Shmi (shminux) · 2013-02-05T19:13:27.768Z · LW(p) · GW(p)

It's a feature.

comment by A1987dM (army1987) · 2013-02-05T17:05:54.256Z · LW(p) · GW(p)

I would have quoted more, because on reading that out of context I was like “YOU DON'T SAY?”

Replies from: Qiaochu_Yuan, Stabilizer
comment by Qiaochu_Yuan · 2013-02-06T22:17:47.632Z · LW(p) · GW(p)

Most people, when giving advice, don't optimize for maximal usefulness. They optimize for something like maximal apparent-insight or maximal signaling-wisdom or maximal mind-blowing, which are a priori all very different goals. So you shouldn't expect that incredibly useful advice sounds like incredibly insightful, wise, or mind-blowing advice in general. There's probably a lot of incredibly useful advice that no one gives because it sounds too obvious and you don't get to look cool by giving it. One such piece of advice I received recently was "plan things."

Replies from: Nornagest
comment by Nornagest · 2013-02-06T23:08:50.163Z · LW(p) · GW(p)

There's probably also a lot of useful advice that our minds filter out because it scans as obvious or trivial. Even when I'm trying to give maximally effective advice, I usually spend a lot of effort optimizing it for style; the better something sounds, the more people dwell on its implications and the likelier it is to stick. Fortunately, most messages leave plenty of latitude for presentation.

Alternately, you could try dressing simple advice up in enough cultural tinsel that it looks profound, as suggested here.

comment by Stabilizer · 2013-02-06T21:51:28.897Z · LW(p) · GW(p)

Well, a lot of basic rationality seems to be about doing what is almost obvious but is hard to do because of bugs in your cognitive architecture. This reminds me of the following quote by Elon Musk in an interview where he was asked what he would say to new start-up founders:

Try to get together a group of people to do something useful. This may seem like an obvious thing, but often people will organize into a company that doesn't produce anything useful.

comment by Richard_Kennaway · 2013-02-07T12:18:32.007Z · LW(p) · GW(p)

And by the same author:

Always Be Shipping

and

Shipping Isn't Enough

(because what counts after getting it out the door is how many people actually use it.)

Replies from: pjeby
comment by pjeby · 2013-02-09T15:40:10.882Z · LW(p) · GW(p)

And by the same author:

That's Jeff Atwood. The quote is from Joel Spolsky. While the two work together on Stack Exchange, they're different individuals.

comment by Mestroyer · 2013-02-07T09:13:19.466Z · LW(p) · GW(p)

I do not love the bright sword for its sharpness, nor the arrow for its swiftness, nor the warrior for his glory. I love only that which they defend.

Faramir, from The Lord of the Rings, on lost purposes and the thing that he protects.

Replies from: Dorikka, hankx7787
comment by Dorikka · 2013-02-14T04:54:30.021Z · LW(p) · GW(p)

Except that a non-overwhelming love of a useful art may help you become better in the art, even though you would switch to another if it helped you optimize more.

comment by hankx7787 · 2013-02-13T15:20:34.135Z · LW(p) · GW(p)

another great quote for 2013

comment by Qiaochu_Yuan · 2013-02-01T18:08:33.059Z · LW(p) · GW(p)

Things that are your fault are good because they can be fixed. If they're someone else's fault, you have to fix them, and that's much harder.

-- Geoff Anders (paraphrased)

Replies from: Giles
comment by Giles · 2013-02-09T04:34:03.795Z · LW(p) · GW(p)

Did he mean if they're someone else's fault then you have to fix the person?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-09T05:39:44.748Z · LW(p) · GW(p)

Yep.

comment by James_Miller · 2013-02-01T19:41:37.852Z · LW(p) · GW(p)

You want accurate beliefs and useful emotions.

From a participant at the January CFAR workshop. I don't remember who. This struck me as an excellent description of what rationalists seek.

Replies from: Dorikka, sark, Sniffnoy
comment by Dorikka · 2013-02-01T22:49:12.290Z · LW(p) · GW(p)

People often seem to get these mixed up, resulting in "You want useful beliefs and accurate emotions."

Replies from: FiftyTwo, James_Miller
comment by FiftyTwo · 2013-02-02T18:36:01.533Z · LW(p) · GW(p)

Not sure what an "accurate emotion" would mean; it feels like some sort of domain error (e.g. a blue sound).

Replies from: James_Miller
comment by James_Miller · 2013-02-02T19:38:10.183Z · LW(p) · GW(p)

An accurate emotion = "I'm angry because I should be angry because she is being really, really mean to me."

A useful emotion = "Showing empathy towards someone being mean to me will minimize the cost to me of others' hostility."

Replies from: AdeleneDawner
comment by AdeleneDawner · 2013-02-02T19:40:30.954Z · LW(p) · GW(p)

Where's that 'should' coming from? (Or are you just explaining the concept rather than endorsing it?)

Replies from: James_Miller
comment by James_Miller · 2013-02-02T20:34:56.565Z · LW(p) · GW(p)

I meant it in the way most (non-LW) people would interpret it, so explaining, not endorsing.

comment by James_Miller · 2013-02-02T17:34:27.146Z · LW(p) · GW(p)

Contrasting "accurate beliefs and useful emotions" with "useful beliefs and accurate emotions" would probably make a good exercise for a novice rationalist.

comment by sark · 2013-02-02T18:47:23.618Z · LW(p) · GW(p)

Why not both useful beliefs and useful emotions?

Why privilege beliefs?

Replies from: Qiaochu_Yuan, James_Miller, Sengachi, Luke_A_Somers
comment by Qiaochu_Yuan · 2013-02-02T20:37:47.522Z · LW(p) · GW(p)

This is addressed by several Sequence posts, e.g. Why truth? And..., Dark Side Epistemology, and Focus Your Uncertainty.

Beliefs shoulder the burden of having to reflect the territory, while emotions don't. (Although many people seem to have beliefs that could be secretly encoding heuristics that, if they thought about it, they could just be executing anyway, e.g. believing that people are nice could be secretly encoding a heuristic to be nice to people, which you could just do anyway. This is one kind of not-really-anticipation-controlling belief that doesn't seem to be addressed by the Sequences.)

Replies from: sark, sark
comment by sark · 2013-02-03T12:00:40.089Z · LW(p) · GW(p)

"Beliefs shoulder the burden of having to reflect the territory, while emotions don't."

This is how I have come to think of beliefs. It's like refactoring code. You should do it when you spot regularities you can eke efficiency out of. But you should do this only if it does not make the code unwieldy or unnatural, and only if it does not make the code fragile. Beliefs should be the same thing. When your rules of thumb seem to respect some regularity in reality, I'm perfectly happy to call that "truth". So long as that does not break my tools.

comment by sark · 2013-02-03T11:56:19.643Z · LW(p) · GW(p)

"Beliefs shoulder the burden of having to reflect the territory, while emotions don't." Superb point that. And thanks for the links.

comment by James_Miller · 2013-02-02T19:16:09.666Z · LW(p) · GW(p)

If useful doesn't equal accurate then you have biased your map.

The most useful beliefs to have are almost always accurate ones so in almost all situations useful=accurate. But most people have an innate desire to bias their map in a way that harms them over the long-run. Restated, most people have harmful emotional urges that do their damage by causing them to have inaccurate maps that "feel" useful but really are not. Drilling into yourself the value of having an accurate map in part by changing your emotions to make accuracy a short-term emotional urge will cause you to ultimately have more useful beliefs than if you have the short-term emotional urge of having useful beliefs.

A Bayesian super-intelligence could go for both useful beliefs and emotions. But given the limitations of the human brain I'm better off programming the emotional part of mine to look for accuracy in beliefs rather than usefulness.

Replies from: NevilleSandiego, sark
comment by NevilleSandiego · 2013-02-23T09:53:21.079Z · LW(p) · GW(p)

"Useful" may not be accurate, depending on one's motives. A "useful" belief may be one that allows you to do what you really want to, unburdened by ethical/logistic/moral considerations. E.g., the belief that non-Europeans aren't really human permits one to colonise their land without qualms.

I suppose that's why, as a rationalist, one would prefer accurate beliefs -- they don't give you the liberty of lying to yourself like that. And as a rationalist, accurate beliefs will be far more useful than inaccurate ones.

comment by sark · 2013-02-03T11:55:17.433Z · LW(p) · GW(p)

Good point about beliefs possibly only "feeling" useful. But that applies to accuracy as well. Privileging accuracy can also lead you to overstate its usefulness. In fact, I find it's often better to not even have beliefs at all. Rather than trying to contort my beliefs to be useful, a bunch of non map-based heuristics gets the job done handily. Remember, the map-territory distinction is itself but a useful meta-heuristic.

comment by Sengachi · 2013-02-08T00:35:44.481Z · LW(p) · GW(p)

A useful belief is an accurate one. It is, however, easy to believe a belief is useful without testing its veracity. Therefore it is optimal to test for accuracy in beliefs, as opposed to querying one's belief in its usefulness.

comment by Luke_A_Somers · 2013-02-06T14:55:20.672Z · LW(p) · GW(p)

Conversely, why not both accurate beliefs and emotions?

Let useful come into play when choosing your actions. This can include framing your emotions - but if you just go around changing your emotions to whatever's useful, you're not being yourself.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-02-07T03:09:55.829Z · LW(p) · GW(p)

you're not being yourself.

Taboo "being yourself".

Replies from: epigeios, Luke_A_Somers
comment by epigeios · 2013-02-07T04:16:12.439Z · LW(p) · GW(p)

"being yourself": A metaphor for a feeling which is so far removed from modern language's ability to describe, that it's a local impossibility for all but a tiny portion of the people in the world to taboo it. It's purpose is to illicit the associated feeling in the listener, and not to be used as a descriptive reference. It is a feeling that is so deeply ingrained in 50% of people, that those people don't realize the other 50% of people don't know what it is; and so had never thought to even begin to try to explain it, much less taboo it.

Tabooing the word as if it describes an action is an inadequate representation of the true meaning of the word. The same is true of tabooing the word as if it describes an emotion, a thought, a belief, or an identity.

"being yourself" is a conglomeration of two concepts. The first, "being", requires the assumption that there is such a thing as a "state of being", as an all-encompassing description of something that describes it's non-physical properties as a snapshot of a single moment; and that said description is unlikely to change over time. The second, "oneself", requires the assumption that there is such a thing as a spark of consciousness at the source of any mental processes, or related, of any living creature. This concept is reminiscent of the concept of a "soul".

I personally find the concept of "being oneself" to have its origin in the fallacious assumption that the spark of consciousness is separate from the current state of being, and that said state and spark do not flux and change continuously.

However, the context of the phrase "being yourself", in this instance, requires not that this phrase be tabooed, but instead that "changing your emotions" be tabooed, along with "useful". The question regarding "changing your emotions" is whether the author meant that truly changing one's emotions would be "not being oneself", or whether the author meant something else, such as that putting on a facade of an emotion one is not experiencing is "not being oneself".

"Useful" is a word that has different definitions for many people, and often changes based on context. The comment in question is likely a misunderstanding of what is meant by the word "useful". This implies the possibility that many people have misunderstood what is meant by the word "useful", perhaps even including the original poster of the quote.

So, the useful thing to do would not be to taboo "being yourself", but to instead taboo "useful".

In my case, I am using "useful" to mean an action which produces a generalized and averaged value for all involved and all observers. In this case, I consider the "value" in question to be an increase in communication ability for all posters, and a general increase in all readers' ability to progress their own mental abilities. I could taboo further, but I don't see any proportionally significant value in doing so.

comment by Luke_A_Somers · 2013-02-07T04:39:11.087Z · LW(p) · GW(p)

Attempting to override your utility function. Effectively, a stab at wetware wireheading.

comment by Sniffnoy · 2013-02-03T02:35:04.233Z · LW(p) · GW(p)

It's perhaps worth noting that EY seems to have taken instead the "accurate beliefs and accurate emotions" tack in e.g. The Twelve Virtues of Rationality. Or at least that seems to be what's implied.

I mean, I suspect "accurate beliefs and useful emotions" really is the way to go; but this is something that -- if it really is a sort of consensus here -- we need to be much more explicit about, IMO. At the moment there seems to be little about it in the sequences / core articles, or at least little that's explicit (I'm going from memory in making that statement).

Replies from: Qiaochu_Yuan, Zaine
comment by Qiaochu_Yuan · 2013-02-03T21:33:31.351Z · LW(p) · GW(p)

Agreed. The idea that I should be paying attention to and then hacking my emotions is not something I learned from the Sequences but from the CFAR workshop. In general, though, the Sequences are more concerned with epistemic than instrumental rationality, and emotion-hacking is mostly an instrumental technique (although it is also epistemically valuable to notice and then stop your brain from flinching away from certain thoughts).

Replies from: non-expert
comment by non-expert · 2013-02-04T16:52:27.932Z · LW(p) · GW(p)

emotion-hacking seems far more important to epistemic rationality: your understanding of the world is the setting in which you use instrumental rationality, and your "lens" (which presumably encompasses your emotions) is the key hurdle (assuming you are otherwise rational) preventing you from achieving the objectivity necessary to form true beliefs about the world.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-04T16:58:15.081Z · LW(p) · GW(p)

I suppose I should distinguish between two kinds of emotion-hacking: hacking your emotional responses to thoughts, and hacking your emotional responses to behaviors. The former is an epistemic technique and the latter is an instrumental technique. Both are quite useful.

Replies from: non-expert
comment by non-expert · 2013-02-05T16:45:18.420Z · LW(p) · GW(p)

whose thoughts and whose behaviors? not disagreeing, just asking.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-05T17:31:52.073Z · LW(p) · GW(p)

My thoughts and my behaviors. I suppose there is a third kind of emotion-hacking, namely hacking your emotional responses to external stimuli. But it's not as if I can respond to other people's thoughts, even in principle: all I have access to are sounds or images which purport to be correlated to those thoughts in some mysterious way.

Replies from: non-expert
comment by non-expert · 2013-02-05T19:26:34.676Z · LW(p) · GW(p)

All emotions are responses to external stimuli, unless your emotions relate only to what is going on in your head, without reference to the outside (i.e. outside your body) world.

I agree you can't respond to others' thoughts, unless they express them such that they are "behaviors." Interestingly, the "problem" you have with the sounds or images (or words?) which purport to be correlated to others' thoughts is the same exact issue everyone is having with you (or me).

if we're confident in our own ability to express our thoughts (i.e. the correlation problem is not an issue for us), then how much can we dismiss others' expressions because of that very same issue?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-05T20:15:27.918Z · LW(p) · GW(p)

I don't understand what point you're trying to make.

Replies from: non-expert
comment by non-expert · 2013-02-05T20:38:31.442Z · LW(p) · GW(p)

I suppose there is a third kind of emotion-hacking, namely hacking your emotional responses to external stimuli.

isn't this the ONLY kind of emotion-hacking out there? what emotions are expressed irrespective of external stimuli? seems like a small or insignificant subset.

But it's not as if I can respond to other people's thoughts, even in principle: all I have access to are sounds or images which purport to be correlated to those thoughts in some mysterious way.

the second two paragraphs above are responding to this. sorry to throw it back at you, but perhaps i'm misunderstanding the point you were trying to make here? I thought you were questioning the value of considering/responding to others' thoughts, because you are arguing that even if you could, you would need to rely on their words and expressions, which may not be correlated with their "true" state of mind.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-05T20:51:58.085Z · LW(p) · GW(p)

isn't this the ONLY kind of emotion-hacking out there? what emotions are expressed irrespective of external stimuli? seems like a small or insignificant subset.

Let me make some more precise definitions: by "emotional responses to my thoughts" I mean "what I feel when I think a given thought," e.g. I feel a mild negative emotion when I think about calling people. By "emotional responses to my behavior" I mean "what I feel when I perform a given action," e.g. I feel a mild negative emotion when I call people. By "emotional responses to external stimuli" I mean "what I feel when a given thing happens in the world around me," e.g. I feel a mild negative emotion when people call me. The distinction I'm trying to make between my behavior and external stimuli is analogous to the distinction between operant and classical conditioning.

I thought you were questioning the value of considering/responding to others' thoughts, because you are arguing that even if you could, you would need to rely on their words and expressions, which may not be correlated with their "true" state of mind.

No, I'm just making the point that for the purposes of classifying different kinds of emotion-hacking I don't find it useful to have a category for other people's thoughts separate from other people's behaviors (in contrast to how I find it useful to have a category for my thoughts separate from my behaviors), and the reason is that I don't have direct access to other people's thoughts.

Interestingly, the "problem" you have

What problem?

Replies from: non-expert
comment by non-expert · 2013-02-05T21:15:50.561Z · LW(p) · GW(p)

Thanks for the clarification, now i understand.

Going back to the original comment i commented on:

emotion-hacking is mostly an instrumental technique (although it is also epistemically valuable to notice and then stop your brain from flinching away from certain thoughts).

Particularly with your third type of emotion-hacking ("hacking your emotional responses to external stimuli"), it seems emotion-hacking is vital for epistemic rationality -- i guess that relates to my original point, that hacking emotions is at least as important for epistemic rationality as for instrumental rationality.

I raised the issue originally because I worry that rationality, to the extent it must value subjective considerations, tends to minimize the importance of those considerations to yield a clearer inquiry.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-05T21:29:49.154Z · LW(p) · GW(p)

I worry that rationality, to the extent it must value subjective considerations, tends to minimize the importance of those considerations to yield a clearer inquiry.

Can you clarify what you mean by this?

Replies from: non-expert
comment by non-expert · 2013-02-05T22:46:08.101Z · LW(p) · GW(p)

sure. note that i don't offer this as conclusive or correct, but just as something i'm thinking about. also, let's assume rational choice theory is universally applicable for decision making.

rational choice theory gives you an equation to use, and all we have to do is fill that equation with the proper inputs, value them correctly, and we get an answer. Obviously this is more difficult in practice, particularly where inputs (as is to be expected) are not easily convertible to probabilities/numbers -- I'm worried this is actually more problematic than we think. Once we have an objective equation as a tool, we may be biased to assume objectivity and truth regarding our answers, even though that belief is often based on the strength of the starting equation and not on our ability to accurately value and include the appropriate subjective factors. To the extent answering a question becomes difficult, we manufacture "certainty" by ignoring subjectivity or assuming it is not as relevant as it is.

Simply put, the belief we have a good and objective starting point biases us to believe we also can/will/actually derive an objectively correct answer, affecting the accuracy with which we fill in the equation.
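
a minimal sketch of the worry, with invented numbers (taking the "equation" to be plain expected utility):

```python
# A minimal sketch with invented numbers: the "equation" of rational
# choice theory is expected utility, and the answer is only as objective
# as the subjective inputs fed into it.

def expected_utility(outcomes):
    """outcomes: iterable of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Two people apply the same objective equation to the same decision,
# differing only in one subjective probability estimate:
optimist = expected_utility([(0.60, 100), (0.40, -80)])   # about +28
pessimist = expected_utility([(0.40, 100), (0.60, -80)])  # about -8
print(optimist, pessimist)
```

the equation is identical in both cases; a modest disagreement over one subjective input flips the answer, while the formalism makes both answers feel equally objective.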

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-05T23:07:51.430Z · LW(p) · GW(p)

I agree that this is problematic but don't see what it has to do with what I've been saying.

Replies from: non-expert
comment by non-expert · 2013-02-05T23:19:45.321Z · LW(p) · GW(p)

you suggested that emotion hacking is more of an issue for instrumental rationality and not so much for epistemic rationality. to the extent that is wrong, you're omitting emotion hacking (a subjective factor) from your application of epistemic rationality.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-05T23:21:45.881Z · LW(p) · GW(p)

I'm happy to agree that emotion hacking is important to epistemic rationality.

Replies from: non-expert
comment by non-expert · 2013-02-05T23:30:17.718Z · LW(p) · GW(p)

ok, wasn't trying to play "gotcha," just answering your question. good chat, thanks for engaging with me.

comment by Zaine · 2013-02-03T04:12:43.687Z · LW(p) · GW(p)

Indeed, accurate emotions appear a better description. Consider: killing someone might free up many opportunities, and would only have the consequence of bettering many lives; the useful emotion would be happiness at the opportunity to forever end that person's continued generation and spread of negative utility. Regardless of whether the accurate emotion might yield the same result, I'd trust the decisions of those who emote accurately, for though I know not whither hacking for emotional usefulness leads, a change of values to the disutility of others I strongly suspect.

comment by [deleted] · 2013-02-03T01:13:32.644Z · LW(p) · GW(p)

.

comment by andreas · 2013-02-02T05:42:44.940Z · LW(p) · GW(p)

"I design a cell to not fail and then assume it will and then ask the next 'what-if' questions," Sinnett said. "And then I design the batteries that if there is a failure of one cell it won't propagate to another. And then I assume that I am wrong and that it will propagate to another and then I design the enclosure and the redundancy of the equipment to assume that all the cells are involved and the airplane needs to be able to play through that."

Mike Sinnett, Boeing's 787 chief project engineer

Replies from: Nic_Smith
comment by Nic_Smith · 2013-02-05T02:47:59.205Z · LW(p) · GW(p)

Isn't the point of the article that Boeing may not have actually done at least the first two steps (design cell not to fail, prevent failure of a cell from causing battery problems)?

I am confused.

Replies from: Baughn
comment by Baughn · 2013-02-07T14:53:57.012Z · LW(p) · GW(p)

It's the point of the problem, anyway.

Sinnett is probably a very good designer, but the battery design was outsourced.

comment by jooyous · 2013-02-06T21:57:17.397Z · LW(p) · GW(p)

I wept because I had no shoes until I met a man who had no feet, then I continued weeping because his foot problem did not actually solve my shoe problem.

-- Noah Brand

I'd prefer if this quote ended with " ... and then I got done weeping and started working on my shoe budget," but oh wells.

Replies from: B_For_Bandana, Dahlen, army1987, pjeby
comment by B_For_Bandana · 2013-02-07T00:38:32.265Z · LW(p) · GW(p)

"...And then I remembered status is positional, felt superior to the footless man, and stopped weeping."

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-09T13:20:45.595Z · LW(p) · GW(p)

Shoes aren't just about positional social status, are they? (I mean, the difference between a $20 pair of shoes and a $300 pair of shoes mostly is, but the difference between a $20 pair of shoes and no shoes at all isn't, is it?)

comment by Dahlen · 2013-02-08T01:24:34.807Z · LW(p) · GW(p)

This. If only people realized that unpleasant facts do not cancel each other out, and pointing out one unpleasant fact in addition to another should never ever make us feel better, because it only leaves us in a worse world than we started out in. Compute the actual utilities. It's such a common and avoidable error.

Replies from: jooyous, Eugine_Nier, Kaj_Sotala, Oligopsony
comment by jooyous · 2013-02-08T07:05:54.804Z · LW(p) · GW(p)

I think people just accidentally conflate keeping problems in perspective with the idea that the existence of bigger problems makes the small problems negligible and therefore equivalent to non-problems.

I've seen this happen with positive things too; sometimes you won't mind repeatedly doing small favors for someone and they start acting like you not minding means the favor is equivalent to doing nothing from your perspective, which is frustrating when your small but non-zero effort goes unacknowledged.

It's sort of like approximating sinθ as 0 for small angles. ^_^

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-09T13:51:53.124Z · LW(p) · GW(p)

Yep. Most people seem to behave as though the choice between spending $5 and spending $10 is a much bigger deal than the choice between spending $120 and spending $125, but if anything it's the other way round, because in the latter case you'll be left with less money. (That heuristic does have a point for acausal reasons analogous to these insofar as you'll have to make the first kind of choice much more often than the second, but people will still behave the same way in one-off situations.)

Replies from: satt
comment by satt · 2013-02-10T09:41:37.202Z · LW(p) · GW(p)

Another possible motivation for that heuristic: something that's a good buy for $5 might well be a bad buy for $10, but something that's a good buy for $120 is probably still a good buy for $125. If I find that a cheap item's twice the cost I thought it was, that's more likely to force me to re-do a utilitarian calculation than if I find an expensive item is 4% pricier than I thought it was.

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-10T10:49:53.707Z · LW(p) · GW(p)

Yes, but OTOH if I'm about to buy something for $125 it isn't that unlikely that if I looked more carefully I could find someone else selling the same thing for $120, whereas if I'm about to buy something for $10 it's somewhat unlikely that anyone else would sell the same thing for $5 (so looking around would most likely be a waste of time), and I'd guess these two effects would more-or-less cancel out.

Replies from: satt
comment by satt · 2013-02-11T02:25:58.321Z · LW(p) · GW(p)

I can often get a $10 good/service for $5 or less if I'm willing to delay consumption or find another seller (e.g. buying used books, not seeing films as soon as they come out, getting food at a canteen or fast food place instead of a pub or restaurant, using buses instead of trains). I might be atypical.

comment by Eugine_Nier · 2013-02-08T22:55:41.106Z · LW(p) · GW(p)

I think both your comment and the quote are forgetting the instrumental purpose of crying and/or feeling bad.

Replies from: Dahlen, jooyous
comment by Dahlen · 2013-02-09T09:52:03.497Z · LW(p) · GW(p)

I can't say I see your point. Mind explaining?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-02-17T11:35:09.064Z · LW(p) · GW(p)

My guess: The purpose of crying is to make people around you more likely to help you.

So if you don't have shoes, there is a chance that crying in public will make someone give you money to buy the shoes. But if there is a person without feet nearby, your chances become smaller, because people will redirect their limited altruist budgets to that other person. Your crying becomes less profitable.

Replies from: Dahlen
comment by Dahlen · 2013-02-17T16:55:23.980Z · LW(p) · GW(p)

... Alright, but... that's a separate point to make altogether. It's not a quote about making yourself as likely as possible to get others to help you, and, I would say, it doesn't have to be; it's a quote about how other people's negative experiences influence the way you feel about yours.

comment by jooyous · 2013-02-09T20:30:00.099Z · LW(p) · GW(p)

Unfortunately, I've met a lot of people who forget the instrumental purposes of crying and/or feeling bad. =[

comment by Kaj_Sotala · 2013-02-08T09:09:05.393Z · LW(p) · GW(p)

But if you look at it the other way, then pointing out unpleasant facts about other people's condition (that don't apply to us) is equivalent to pointing out good facts about our condition, which should make us feel better, as it leaves us in a better world than we started out in.

Replies from: Dahlen
comment by Dahlen · 2013-02-09T09:42:05.828Z · LW(p) · GW(p)

That's exactly the kind of thinking the world needs less of, and the kind that I was trying to warn readers against in the parent comment. Why? Just why would a worse world for someone else make for a better world for you, if that someone is not your mortal enemy? It just makes for a worse world, period.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-02-09T16:13:07.426Z · LW(p) · GW(p)

The point isn't that you're taking pleasure in their misfortune, it's that you're taking pleasure in your own fortune. "I'm so lucky for having X." If you don't do that, then any improvements in your standard of living or situation in general will end up having no impact on your happiness, since you just get used to them and take them for granted and don't even realize that you would have a million reasons to be happy. And then (in the most extreme case) you'll end up feeling miserable because of something completely trivial, because you're ignoring everything that could make you happy and the only things that can have any impact on your state of mind are negative ones.

Replies from: jooyous, Dahlen
comment by jooyous · 2013-02-09T20:26:30.787Z · LW(p) · GW(p)

And then (in the most extreme case) you'll end up feeling miserable because of something completely trivial, because you're ignoring everything that could make you happy and the only things that can have any impact on your state of mind are negative ones.

Someone commented above about the instrumental value of crying and feeling bad, and you're actually pointing out the case where crying and feeling bad fail at being instrumental. Basically, I'm for whatever attitude that gets you to stop crying and start fixing some problem, and if resetting your baseline helps, it's fair game! It definitely works for me in some cases.

I think this quote is trying to argue against the attitude that problems that are minor compared to other problems don't deserve any attention at all. That everyone without shoes should just wrench themselves into happiness and go around being grateful, rather than acknowledging that they keep stepping on snails and pointy things, which sucks, and making productive steps toward acquiring shoes.

I remember reading something about plastic surgeons getting kind of looked down upon because they're not proper heroic doctors that handle real medical problems.

comment by Dahlen · 2013-02-09T17:26:18.131Z · LW(p) · GW(p)

... I think I see where you're coming from -- by realizing we're not at the far end of the unhappiness scale (since we have a counterexample to that), we should calibrate our feelings about our situation accordingly, yes?

It's still not the way I view things; I'd like to say I prefer judging these things according to an absolute standard, but it's likely that that would be less true for me than I want it to be. To the extent that it doesn't hold true for me, I think it's better to take into consideration better states as well as worse ones. Saying, "at least I don't have it as bad as X" just doesn't feel enough; everybody who doesn't have it as bad as X could say it, and people in this category can vary widely in their levels of satisfaction, the more so the worse X has it. It's more complete to say "Yes, but I don't have it as good as Y either" or, better yet, "I have it better/worse than my need requires".

Replies from: Kaj_Sotala, ygert
comment by Kaj_Sotala · 2013-02-10T07:16:14.440Z · LW(p) · GW(p)

by realizing we're not at the far end of the unhappiness scale (since we have a counterexample to that), we should calibrate our feelings about our situation accordingly, yes?

Yes, pretty much.

comment by ygert · 2013-02-09T17:52:58.901Z · LW(p) · GW(p)

Yes, yes, but now you are going into far more depth than the original quote. The idea behind the quote seems to have been (at least as I read it): "Be happy that you have feet, having feet is not something you should take for granted." The quote says nothing more than that. (Well, not quite. The point it makes is not only meant to be reserved for feet specifically, but rather seems to be meant as a comment on anything people take for granted.)

comment by Oligopsony · 2013-02-17T14:47:08.175Z · LW(p) · GW(p)

What's an actual utility?

Replies from: Dahlen
comment by Dahlen · 2013-02-17T16:38:14.379Z · LW(p) · GW(p)

In the example above: the fact that you have no shoes equates to negative utility for you. If you're a normal human being who is generally well-intentioned and wants people to have both feet and shoes for those feet, you would feel upset if you saw someone without feet, hence more negative utility. Your negative utility from having no shoes + negative utility from seeing someone have no feet can only amount to a more negative total score than the one obtained by considering your own lack of shoes alone. Even in the case where you're a complete egoist for whom others' misfortunes have absolutely no impact on your own personal happiness, if you sum them up again you still end up with the same negative utility from having no shoes. Only if you're the kind of monster that rejoices in other people's suffering is it possible for your utility score to rise after seeing someone with no feet. Yet even people who aren't complete monsters seem to take comfort in the fact that someone else has it worse than them, and this seems intuitive for most people, and counter-intuitive for others, i.e. me and the person who made the quote.

(Disclaimer: I haven't studied utilitarianism formally; probably I'm using more of an everyday definition of the word "utility", akin to "feel-good-ness" in a broad sense. The way I've thought about this problem stems purely from my intuitions.)

comment by A1987dM (army1987) · 2013-02-09T13:31:55.544Z · LW(p) · GW(p)

Generally speaking, bigger problems tend to be cheaper to solve (i.e. solving them will yield more utilons per dollar); so if there is a painting in a museum that risks being sold, and there are people who risk dying from malaria, the existence of the latter is a good indication that worrying about the former isn't the most effective use of a given amount of resources. (“Concentrate on the high-order bits” -- Umesh Vazirani.) But in this particular case, that heuristic doesn't seem to work (unless I'm overestimating the cost of prosthetics).

comment by pjeby · 2013-02-09T15:22:25.956Z · LW(p) · GW(p)

I'd prefer if this quote ended with " ... and then I got done weeping and started working on my shoe budget,"

That's really the entire point of the original quote that this quote is making fun of. The difference between the original and this one is that the author of the second has not updated his baseline expectation that he should have shoes, and that something is wrong if he doesn't.

Our baseline expectations determine what we consider a "loss", in the prospect theory sense, so if seeing someone else's problem helps you reset your baseline, it actually is a way to help you stop weeping and start working on the budget, as it were. What we call "getting perspective" on a situation is basically a name for updating your baseline expectation for how reality "ought to be" at the present moment.

(That isn't a perfect phrasing, because English doesn't really have distinct-enough words for different sorts of "oughts" or "shoulds". The kind I mean is the kind where reality feels awful or crushingly disappointing if it's not the way it "ought" to be, not the kind where you say that ideally, in a perfect world, things ought to be in thus and such a way, but you don't experience a bad feeling about it right now. It's a "near" sort of ought, not a "far" one. Believing the future should be a certain way doesn't cause this sort of problem, until the future actually arrives.)

Replies from: jooyous, army1987
comment by jooyous · 2013-02-09T20:10:23.847Z · LW(p) · GW(p)

What we call "getting perspective" on a situation is basically a name for updating your baseline expectation for how reality "ought to be" at the present moment.

I agree that resetting your baseline is often important if you think that your lack of shoes is a soul-crushing awfulness. This quote is mainly arguing against the attitude that says "you have feet therefore your shoe problem is a non-problem, don't even bother feeling bad or working on it". It's comparatively very minor, but it should be fixed just like any other problem. This quote is arguing against resetting your baseline to the point where minor problems get no attention at all.

Replies from: pjeby
comment by pjeby · 2013-02-10T02:57:05.978Z · LW(p) · GW(p)

This quote is mainly arguing against the attitude that says "you have feet therefore your shoe problem is a non-problem, don't even bother feeling bad or working on it".

That may be, but the actual context of the quote it's arguing with is quite different, on a couple of fronts.

Harold Abbott, the author of the original 1934 couplet ("I had the blues because I had no shoes / Until upon the street, I met a man who had no feet"), wrote it to memorialize an encounter with a happy legless man, at a time when Abbott was dead broke and depressed. (Abbott was not actually lacking in shoes, nor the man only lacking in feet, but apparently in those days people took their couplet-writing seriously. ;-) )

Thing is, at the time he encountered the legless man (who smiled and said good morning), Abbott was actually walking to the bank in order to borrow money to go to Kansas City to look for a job. And not only did he not stop walking to the bank after the encounter, he decided to ask for twice as much money as he had originally intended to borrow. He had in fact raised his sights, rather than lowering them.

That is, the full story is not anything like, "other people have worse problems so STFU", but rather that your attitude is a choice, and there are probably people who have much worse circumstances than you, who nonetheless have a better attitude. Abbott wrote the couplet to put on his bathroom mirror, as an ongoing reminder to have a positive outlook and persist in the face of adversity.

Which is quite a different message than what Noah Brand's snarky quip would imply.

Replies from: JGWeissman, jooyous
comment by JGWeissman · 2013-02-11T02:58:01.512Z · LW(p) · GW(p)

the full story is not anything like, "other people have worse problems so STFU"

I think the problem people are having with the quote is that it doesn't actually contain the full story; when it is repeated outside that context, the meaning they get from parsing the words is "other people have worse problems so STFU", and it's not a good idea to go around repeating it if people are going to predictably lack the context and misinterpret it.

comment by jooyous · 2013-02-10T03:27:50.552Z · LW(p) · GW(p)

I guess I didn't quote the original article, and he was saying "I am pointing out this problem that is probably not as big or painful as this other problem, but can we please acknowledge its existence also?" And, as often happens with social issues, he was trying to preempt the inevitable "why would we care? we have it worse!" response.

I definitely agree that attitude is a choice! I wasn't quite aware of the original quote, but I would put it down as an instrumental rationality quote as well. 8) But it sounds like his shoelessness was a symptom of bigger/different problems?

I consider Noah Brand's quote a rationality quote because it's a reminder that problems require real solutions. Changing your attitude to be positive is useful, but changing your attitude to accept that something that sucks will continue to suck indefinitely is not the answer.

Replies from: pjeby
comment by pjeby · 2013-02-10T03:45:22.263Z · LW(p) · GW(p)

it sounds like his shoelessness was a symptom of bigger/different problems?

Yes, his business (a grocery store) had just failed, taking his entire life savings with it. (And the story doesn't actually say he was shoeless, anyway, just that the rhyme was something he posted on his mirror as a reminder of the encounter.)

comment by A1987dM (army1987) · 2013-02-10T15:56:08.191Z · LW(p) · GW(p)

The kind I mean is the kind where reality feels awful or crushingly disappointing if it's not the way it "ought" to be

“need”

Replies from: pjeby
comment by pjeby · 2013-02-11T02:05:45.339Z · LW(p) · GW(p)

“need”

Nope, the thing I'm talking about is closer to what the Buddhists would call an "attachment", and some Buddhist-influenced writers call an "addiction". (Others would call it a "desire", but IMO this is inaccurate: one can desire something without being attached to actually getting it.)

comment by jsbennett86 · 2013-02-02T03:45:22.350Z · LW(p) · GW(p)

On scientists trying to photograph an atom's shadow:

...the idea sounds stupid. But scientists don't care about sounding stupid, which is what makes them not stupid, and they did it anyway.

Luke McKinney - 6 Microscopic Images That Will Blow Your Mind

comment by jsbennett86 · 2013-02-02T03:36:42.501Z · LW(p) · GW(p)

It seems that 32 Bostonians have simultaneously dropped dead in a ten-block radius for no apparent reason, and General Purcell wants to know if it was caused by a covert weapon. Of course, the military has been put in charge of the investigation and everything is hush-hush.

Without examining anything, Keyes takes about five seconds to surmise that the victims all died from malfunctioning pacemakers and the malfunction was definitely not due to a secret weapon. We're supposed to be impressed, but our experience with real scientists and engineers indicates that when they're on-the-record, top-notch scientists and engineers won't even speculate about the color of their socks without looking at their ankles. They have top-notch reputations because they're almost always right. They're almost always right because they keep their mouths shut until they've fully analyzed the data.

Insultingly Stupid Movie Physics' review of The Core

Replies from: jsbennett86, Desrtopa, army1987
comment by jsbennett86 · 2013-02-02T03:37:42.621Z · LW(p) · GW(p)

The remark included the following as a footnote:

Even top-notch engineers and scientists will speculate wildly when they're off-the-record. We define on-the-record as those times when their written or oral communications are likely to be taken seriously and directly attributed to the scientist or engineer making them. Surely answering a direct question posed by a general would fall into this category.

comment by Desrtopa · 2013-02-02T14:26:07.762Z · LW(p) · GW(p)

32 people in the same ten block radius simultaneously dying of malfunctioning pacemakers seems so tremendously unlikely, I can't imagine how one could even locate that as an explanation in a matter of seconds.

Replies from: jsbennett86, FiftyTwo
comment by jsbennett86 · 2013-02-02T22:57:46.945Z · LW(p) · GW(p)

Also from the review:

A pacemaker malfunction isn't automatically fatal. In most cases the patient's heart will still beat, although with an abnormal rhythm. The severity of a pacemaker problem depends on the type of malfunction as well as the severity of the patient's condition. EM interference can cause problems, but major problems are rare considering the amount of EM interference pacemaker patients are exposed to. Pacemakers are designed to minimize these problems. It's hard to believe that dozens of pacemaker patients with various heart conditions and different makes and models of pacemakers would simultaneously die from microwave exposure.

Replies from: HalMorris
comment by HalMorris · 2013-02-03T16:53:06.416Z · LW(p) · GW(p)

Unless the 32 people used the same, or very similar, pacemakers, and somebody forgot to say that.

Replies from: Desrtopa
comment by Desrtopa · 2013-02-04T16:56:41.060Z · LW(p) · GW(p)

Still sounds extremely unlikely. If a model of car has a particular design flaw, you'll expect to hear a lot of reports of that model suffering the same malfunction, but you wouldn't expect to hear that dozens of units within a certain radius suffered the same malfunction simultaneously. You'd need to subject them all to some sort of outside interference at the same time for that sort of occurrence to be plausible, and an event of that scale ought to leave evidence beyond its effect on all the pacemakers in the vicinity.

comment by FiftyTwo · 2013-02-11T14:14:40.095Z · LW(p) · GW(p)

If I recall correctly, he also pointed out that the fact they had invited two experts on magnetic fields was also a strong clue.

comment by A1987dM (army1987) · 2013-02-05T12:35:50.260Z · LW(p) · GW(p)

See also the extra panel (hover over the red button) in yesterday's SMBC comic.

Replies from: Luke_A_Somers, Daniel_Molloy
comment by Luke_A_Somers · 2013-02-06T15:04:14.862Z · LW(p) · GW(p)

... I had not known about red buttons on SMBC.

roll d20... success on 'resist re-binge' check.

comment by Daniel_Molloy · 2013-02-07T03:25:11.235Z · LW(p) · GW(p)

Umm... how do I use the red button on a mobile device? (I also have this problem with xkcd.)

Replies from: TobyBartels, army1987
comment by TobyBartels · 2013-02-07T16:04:02.269Z · LW(p) · GW(p)

I know that you crossed this out, but the answer to the parenthetical implied question is this: Use the xkcd viewer app.

  • Android (used by me regularly on my Android phone)
  • Apple (never used by me because I don't have an Apple product)
Replies from: army1987
comment by A1987dM (army1987) · 2013-02-07T16:46:40.328Z · LW(p) · GW(p)

Thank you!

comment by A1987dM (army1987) · 2013-02-07T16:46:29.682Z · LW(p) · GW(p)

You just press it. It also works with karma scores on LW to see the percentage of positive votes (at least on Android). I didn't know how to read title texts on xkcd until reading TobyBartels's comment, though.

comment by arundelo · 2013-02-01T17:00:17.959Z · LW(p) · GW(p)

Eventually you just have to admit that if it looks like the absence of a duck, walks like the absence of a duck, and quacks like the absence of a duck, the duck is probably absent.

--Tom Chivers

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-01T23:13:04.633Z · LW(p) · GW(p)

I agree subject to the specification that each such observation must look substantially more like the absence of a duck than a duck. There are many things we see which are not ducks in particular locations. My shoe doesn't look like a duck in my closet, but it also doesn't look like the absence of a duck in my closet. Or to put it another way, my sock looks exactly like it should look if there's no duck in my closet, but it also looks exactly like it should look if there is a duck in my closet.

Replies from: fubarobfusco, pinyaka
comment by fubarobfusco · 2013-02-02T04:18:29.802Z · LW(p) · GW(p)

If your sock does not have feathers or duck-shit on it, then it is somewhat more likely that it has not been sat on by a duck.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-02T05:26:10.647Z · LW(p) · GW(p)

Insufficiently more likely. I've been around ducks many times without that happening to my socks. Log of the likelihood ratio would be close to zero.
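
To put rough, made-up numbers on that:

```python
import math

# Made-up numbers: how much evidence does a clean sock carry?
p_clean_given_duck = 0.95     # ducks usually leave socks alone anyway
p_clean_given_no_duck = 0.99

lr = p_clean_given_duck / p_clean_given_no_duck
print(math.log(lr))           # about -0.04 nats: nearly zero evidence

# posterior odds = prior odds * likelihood ratio, so a 1:1000 prior
# against a closet duck barely moves:
print((1 / 1000) * lr)        # about 0.00096
```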

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-02-03T16:26:59.839Z · LW(p) · GW(p)

You originally were talking about a duck in your closet, which isn't the same thing as being around ducks.

The discussion reminds me of this, which makes the point that, while correlation is not causation, if there's no correlation, there almost certainly isn't causation.

Replies from: Richard_Kennaway, IlyaShpitser, simplicio
comment by Richard_Kennaway · 2013-02-05T08:37:54.065Z · LW(p) · GW(p)

if there's no correlation, there almost certainly isn't causation.

This is completely wrong, though not many people seem to understand that yet.

For example, the voltage across a capacitor is uncorrelated with the current through it; and another poster has pointed out the example of the thermostat, a topic I've also written about on occasion.

It's a fundamental principle of causal inference that you cannot get causal conclusions from wholly acausal premises and data. (See Judea Pearl, passim.) This applies just as much to negative conclusions as positive. Absence of correlation cannot on its own be taken as evidence of absence of causation.

Replies from: shminux, army1987
comment by Shmi (shminux) · 2013-02-05T20:09:30.659Z · LW(p) · GW(p)

the voltage across a capacitor is uncorrelated with the current through it

It depends. While true when the signal is periodic, it is not so in general. A spike of current through the capacitor results in a voltage change. Trivially, if the voltage is an exponential, V = V0 exp(-at), then so is the current, I = C dV/dt = -aC V0 exp(-at), with 100% correlation between the two on a given interval.

As for Milton Friedman's thermostat, only the perfect one is uncorrelated (the better the control system, the less the correlation), and no control system without complete future knowledge of inputs is perfect. Of course, if the control system is good enough, in practice the correlation will drown in the noise. That's why there is so little good evidence that fiscal (or monetary) policy works.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-02-05T20:52:30.645Z · LW(p) · GW(p)

It depends. While true when the signal is periodic, it is not so in general.

I skipped some details. A crucial condition is that the voltage be bounded in the long term, which excludes the exponential example. Or for finite intervals, if the voltage is the same at the beginning and the end, then over that interval there will be zero correlation with its first derivative. This is true regardless of periodicity. It can be completely random (but differentiable, and well-behaved enough for the correlation coefficient to exist), and the zero correlation will still hold.

Of course, if the control system is good enough, in practice the correlation will drown in the noise.

For every control system that works well enough to be considered a control system at all, the correlation will totally drown in the noise. It will be unmeasurably small, and no investigation of the system using statistical techniques can succeed if it is based on the assumption that causation must produce correlation.

For example, take the simple domestic room thermostat, which turns the heating full on when the temperature is some small delta below the set point, and off when it reaches delta above. To a first approximation, when on, the temperature ramps up linearly, and when off it ramps down linearly. A graph of power output against room temperature will consist of two parallel lines, each traversed at constant velocity. As the ambient temperature outside the room varies, the proportion of time spent in the on state will correspondingly vary. This is the only substantial correlation present in the system, and it is between two variables with no direct causal connection. Neither variable will correlate with the temperature inside. The temperature inside, averaged over many cycles, will be exactly at the set point.

It's only when this control system is close to the limits of its operation -- too high or too low an ambient outside temperature -- that any measurable correlation develops (due to the approximation of the temperature ramp as linear breaking down). The correlation is a symptom of its incipient lack of control.
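
A rough simulation of such a thermostat (all constants invented) illustrates the point:

```python
import numpy as np

# A rough bang-bang thermostat, all constants invented. The heater
# switches on at set_point - delta and off at set_point + delta; heat
# leaks out in proportion to the indoor/outdoor temperature difference.
n = 200_000
set_point, delta = 20.0, 0.5
t_out = 5 + 5 * np.sin(np.linspace(0, 20 * np.pi, n))  # slow outdoor swing
T_in, P = np.empty(n), np.empty(n)
t_in, heater = set_point, 0.0
for i in range(n):
    if t_in < set_point - delta:
        heater = 1.0
    elif t_in > set_point + delta:
        heater = 0.0
    t_in += 0.05 * heater - 0.0005 * (t_in - t_out[i])  # heat in - heat out
    T_in[i], P[i] = t_in, heater

print(np.corrcoef(P, T_in)[0, 1])      # heater power vs indoor temp: ~0
print(np.corrcoef(t_out, T_in)[0, 1])  # outdoor vs indoor temp: ~0
duty = P.reshape(200, -1).mean(axis=1)     # fraction of each block spent on
out = t_out.reshape(200, -1).mean(axis=1)
print(np.corrcoef(out, duty)[0, 1])    # duty cycle vs outdoor temp: ~ -1
```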

no control system without complete future knowledge of inputs is perfect.

Knowledge of future inputs does not necessarily allow improved control. The room thermostat (assuming the sensing element and the heat sources have been sensibly located) keeps the temperature within delta of the set point, and could not do any better given any information beyond what it has, i.e. the actual temperature in the room. It is quite non-trivial to improve on a well-designed controller that senses nothing but the variable it controls.

Replies from: Luke_A_Somers, shminux
comment by Luke_A_Somers · 2013-02-06T15:11:13.666Z · LW(p) · GW(p)

Exponential decay is a very very ordinary process to find a capacitor in. Most capacitors are not in feedback control systems.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-02-06T15:27:48.747Z · LW(p) · GW(p)

The capacitor is just a didactic example. Connect it across a laboratory power supply and twiddle the voltage up and down, and you get uncorrelated voltage and current signals.

Somewhere at home I have a gadget for using a computer as a signal generator and oscilloscope. I must try this.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-02-06T17:23:06.872Z · LW(p) · GW(p)

On the other hand, I'd guess that 99% of actual capacitors are the gates of digital FETs (simply due to the mindbogglingly large number of FETs). Given just a moment's glimpse of the current through such a capacitor, you can deduce quite a bit about its voltage.

comment by Shmi (shminux) · 2013-02-05T21:22:04.060Z · LW(p) · GW(p)

For every control system that works well enough to be considered a control system at all, the correlation will totally drown in the noise.

False. Here (second graph) is an example of a real-life thermostat. The correlation between inside and outside temperatures is evident when the outside temperature varies.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-02-05T22:46:48.783Z · LW(p) · GW(p)

The thermostat isn't actually doing anything in those graphs from about 7am to 4pm. There's just a brief burst of heat to pump the temperature up in the early morning and a brief burst of cooling in the late afternoon. Of course the indoor temperature will be heavily influenced by the outdoor temperature. It's being allowed to vary by more than 4 degrees C.

Replies from: shminux
comment by Shmi (shminux) · 2013-02-05T22:48:01.263Z · LW(p) · GW(p)

OK, maybe I misunderstood your original point.

comment by A1987dM (army1987) · 2013-02-05T16:54:48.755Z · LW(p) · GW(p)

I wonder why EY didn't make an example of that in Stuff That Makes Stuff Happen.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-02-05T17:14:28.183Z · LW(p) · GW(p)

Examples like the ones I gave are not to be found in Pearl, and hardly at all in the causal analysis literature.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-02-05T19:05:05.853Z · LW(p) · GW(p)

Sorry, can you clarify what you mean by "like the ones"? What is the distinguishing feature?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-02-05T19:43:52.395Z · LW(p) · GW(p)

Dynamical dependencies -- one variable depending on the derivative or integral of another. (Dealing with these by discretising time and replacing every variable X by an infinite series X0,X1,X2... does not, I believe, yield any useful analysis.) The result is that correlations associated with direct causal links can be exactly zero, yet not in a way that can be described as cancellation of multiple dependencies. The problem is exacerbated when there are also cyclic dependencies.

There has been some work on causal analysis of dynamical systems with feedback, but there are serious obstacles to existing approaches, which I discuss in a paper I'm currently trying to get published.

Replies from: IlyaShpitser, army1987
comment by IlyaShpitser · 2013-02-05T22:22:48.236Z · LW(p) · GW(p)

Sorry, confused. A function is not always uncorrelated with its derivative. Correlation is a measure of co-linearity, not co-dependence. Do you have any examples where statistical dependence does not imply causality without a faithfulness violation? Would you mind maybe sending me a preprint?


edit to express what I meant better: "Do you have any examples where lack of statistical dependence coexists with causality, and this happens without path cancellations?"

Replies from: Richard_Kennaway, Richard_Kennaway
comment by Richard_Kennaway · 2013-02-05T23:10:49.481Z · LW(p) · GW(p)

A function is not always uncorrelated with its derivative.

I omitted some details, crucially that the function be bounded. If it is, then the long-term correlation with its derivative tends to zero, provided only that it's well-behaved enough for the correlation to be defined. Alternatively, for a finite interval, the correlation is zero if the function has the same value at the beginning and the end. This is pretty much immediate from the fact that the integral of x(dx/dt) is (x^2)/2. A similar result holds for time series, the proof proceeding from the discrete analogue of that formula, (x+y)(x-y) = x^2 - y^2.

To put that more concretely, if in the long term you're getting neither richer nor poorer, then there will be no correlation between monthly average bank balance and net monthly income.
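
A quick numerical sketch of both cases (setup invented):

```python
import numpy as np

# Invented setup: a bounded, recurrent signal versus its derivative,
# and the exponential from upthread for contrast.
rng = np.random.default_rng(1)
t = np.linspace(0, 200, 100_000)
# bounded signal: a few incommensurate sinusoids with random phases
x = sum(np.sin(w * t + rng.uniform(0, 2 * np.pi)) for w in (1.0, 1.37, 2.71))
dx = np.gradient(x, t)
print(np.corrcoef(x, dx)[0, 1])  # close to 0

v = np.exp(-0.05 * t)            # the exponential V = V0 exp(-at)
dv = np.gradient(v, t)           # I/C = dV/dt = -a V
print(np.corrcoef(v, dv)[0, 1])  # -1 (up to discretization error)
```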

Do you have any examples where statistical dependence does not imply causality without a faithfulness violation?

Don't you mean causality not implying statistical dependence, which is what these examples have been showing? That pretty much is the faithfulness assumption, so of course faithfulness is violated by the systems I've mentioned, where causal links are associated with zero correlation. In some cases, if the system is sampled on a timescale longer than its settling time, causal links are associated not only with zero product-moment correlation, but zero mutual information of any sort.

Statistical dependence does imply that somewhere there is causality (considering identity a degenerate case of causality -- when X, Y, and Z are independent, X+Y correlates with X+Z). The causality, however, need not be in the same place as the dependence.

Would you mind maybe sending me a preprint?

Certainly. Is this web page current for your email address?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-02-06T00:04:36.590Z · LW(p) · GW(p)

Don't you mean causality not implying statistical dependence, which is what these examples have been showing?

That's right, sorry.


I had gotten the impression that you thought causal systems where things are related to derivatives/integrals introduce a case where this happens and it's not due to "cancellations" but something else. From my point of view, correlation is not a very interesting measure -- it's a holdover from simple parametric statistical models that gets applied far beyond its actual capability.

People misuse simple regression models in the same way. For example, if you use linear causal regressions, direct effects are just regression coefficients. But as soon as you start using interaction terms, this stops being true (but people still try to use coefficients in these cases...)


Yes, the Harvard address still works.

comment by Richard_Kennaway · 2013-02-14T15:36:56.361Z · LW(p) · GW(p)

I just noticed your edit:

edit to express what I meant better: "Do you have any examples where lack of statistical dependence coexists with causality, and this happens without path cancellations?"

The capacitor example is one: there is one causal arrow, so no multiple paths that could cancel, and no loops. The arrow could run in either direction, depending on whether the power supply is set up to generate a voltage or a current.

Of course, I is by definition proportional to dV/dt, and this is discoverable by looking at the short-term transient behaviour. But sampled on a long timescale you just get a sequence of i.i.d. pairs.

For cyclic graphs, I'm not sure how "path cancellation" is defined, if it is at all. The generic causal graph of the archetypal control system has arrows D --> P --> O and R --> O --> P, there being a cycle between P and O. The four variables are the Disturbance, the Perception, the Output, and the Reference.

If P = O+D, O is proportional to the integral of R-P, R = zero, and D is a signal varying generally on a time scale slower than the settling time of the loop, then O has a correlation with D close to -1, and O and D have correlations with P close to zero.

There are only two parameters, the settling time of the loop and the timescale of variations in D. So long as the former is substantially less than the latter, these correlations are unchanged.

Would you consider this an example of path cancellation? If so, what are the paths, and what excludes this system from the scope of theorems about faithfulness violations having measure zero? Not being a DAG is one reason, of course, but have any such theorems been extended to at least some class of cyclic graphs?

Addendum:

When D is a source with a long-term Gaussian distribution, the statistics of the system are multivariate Gaussian, so correlation coefficients capture the entire statistical dependence. Following your suggestion about non-parametric dependence tests I've run simulations in which D instead makes random transitions between +/- 1, and calculated statistics such as Kendall's tau, but the general pattern is much the same. The controller takes time to respond to the sudden transitions, which allows the zero correlations to turn into weak ones, but that only happens because the controller is failing to control at those moments. The better the controller works, the smaller the correlation of P with O or D.
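
A sketch of such a simulation (constants invented) reproduces the pattern:

```python
import numpy as np

# The loop described above, constants invented: P = O + D, O integrating
# R - P with R = 0, and D a random telegraph signal jumping between +1
# and -1. The loop settles in ~20 steps; D switches every ~10,000 steps
# on average, so control is much faster than the disturbance.
rng = np.random.default_rng(2)
n, gain = 200_000, 0.05
D = np.cumprod(np.where(rng.random(n) < 1e-4, -1.0, 1.0))
O, P = np.empty(n), np.empty(n)
o = 0.0
for i in range(n):
    p = o + D[i]            # perception = output + disturbance
    o += gain * (0.0 - p)   # output integrates reference - perception
    O[i], P[i] = o, p

print(np.corrcoef(O, D)[0, 1])  # close to -1: output mirrors disturbance
print(np.corrcoef(P, O)[0, 1])  # close to 0
print(np.corrcoef(P, D)[0, 1])  # small; nonzero only from the transients
```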

I've also realised that "non-parametric statistics" is a subject like the biology of non-elephants, or the physics of non-linear systems. Shannon mutual information sounds in theory like the best possible measure, but for continuous quantities I can get anything from zero to perfect prediction of one variable from the other just by choosing a suitable bin size for the data. No statistical conclusions without statistical assumptions.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-02-14T18:33:27.552Z · LW(p) · GW(p)

Dear Richard,

I have not forgotten about your paper, I am just extremely busy until early March. Three quick comments though:

(a) People have viewed cyclic models as defining a stable distribution in an appropriate Markov chain. There are some complications, and it seems with cyclic models (unlike the DAG case) the graph which predicts what happens after an intervention, and the graph which represents the independence structure of the equilibrium distribution are not the same graph (this is another reason to treat the statistical and causal graphical models separately). See Richardson and Lauritzen's chain graph paper for a simple 4 node example of this.

So when we say there is a faithfulness violation, we have to make sure we are talking about the right graph representing the right distribution.

(b) In general I view a derivative not as a node, but as an effect. So e.g. in a linear model:

y = f(x) = ax + e

dy/dx = a = E[y|do(x=1)] - E[y|do(x=0)], which is just the causal effect of x on y on the mean difference scale.

In general, the partial derivative of the outcome wrt some treatment holding the other treatments constant is a kind of direct causal effect. So viewed through that lens it is not perhaps so surprising that x and dy/dx are independent. After all, the direct effect/derivative is a function of p(y|do(x),do(other parents of y)), and we know do(.) cuts incoming arcs to y, so the distribution p(y|do(x),do(other parents of y)) is independent of p(x) by construction.

But this is more an explanation of why derivatives sensibly represent interventional effects, not whether there is something more to this observation (I think there might be). I do feel that Newton's intuition for doing derivatives was trying to formalize a limit of "wiggle the independent variable and see what happens to the dependent variable", which is precisely the causal effect. He was worried about physical systems, also, where causality is fairly clear.

In general, p(y) and any function of p(y | do(x)) are not independent of course.
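
A toy numerical illustration of (b), with an invented coefficient and a confounded observational contrast added for comparison:

```python
import numpy as np

# Toy linear model y = a*x + e with an invented coefficient a. Setting x
# by intervention recovers the derivative dy/dx = a; naive conditioning
# does not when e also drives x (confounding).
rng = np.random.default_rng(3)
a, n = 2.5, 100_000
e = rng.normal(size=n)

# do(x = 1) vs do(x = 0): x is set by fiat, cutting its incoming arrows
print((a * 1.0 + e).mean() - (a * 0.0 + e).mean())  # exactly a = 2.5

# observational contrast when x is itself partly caused by e:
x = (e + rng.normal(size=n) > 0).astype(float)
y = a * x + e
print(y[x == 1].mean() - y[x == 0].mean())          # ~3.6: biased above a
```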

(c) I think you define a causal model in terms of the Markov factorization, which I disagree with. The Markov factorization defines a statistical model. To define a causal model you essentially need to formally state that parents of every node are that node's direct causes. Usually people use the truncated factorization (g-formula) to do this. See, e.g. chapter 1 in Pearl's book.

comment by A1987dM (army1987) · 2013-02-05T20:50:05.576Z · LW(p) · GW(p)

I think that also works with acyclic graphs: suppose you have an arrow from “eXercising” to “Eating a lot”, one from “Eating a lot” to “gaining Weight”, and one from “eXercising” to “gaining Weight”, and P(X) = 0.5, P(E|X) = 0.99, P(E|~X) = 0.01, P(W|X E) = 0.5, P(W|X ~E) = 0.01, P(W|~X E) = 0.99, P(W|~X ~E) = 0.5. Then W would be nearly uncorrelated with X (P(W|X) = 0.4951, P(W|~X) = 0.5049) and nearly uncorrelated with E (P(W|E) = 0.5049, P(W|~E) = 0.4951), but it doesn't mean it isn't caused by either.
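
A quick enumeration to check the arithmetic (a sketch; the probability tables are just transcribed from above):

```python
from itertools import product

# Enumerate the joint distribution of (X, E) and read off P(W | ...).
pX = 0.5
pE = {True: 0.99, False: 0.01}                   # P(E=1 | X)
pW = {(True, True): 0.5, (True, False): 0.01,
      (False, True): 0.99, (False, False): 0.5}  # P(W=1 | X, E)

joint = {(x, e): (pX if x else 1 - pX) * (pE[x] if e else 1 - pE[x])
         for x, e in product([True, False], repeat=2)}

def p_w_given(**cond):
    keep = [(xe, p) for xe, p in joint.items()
            if all(dict(zip("XE", xe))[k] == v for k, v in cond.items())]
    return sum(p * pW[xe] for xe, p in keep) / sum(p for _, p in keep)

print(p_w_given(X=True), p_w_given(X=False))  # 0.4951 0.5049
print(p_w_given(E=True), p_w_given(E=False))  # 0.5049 0.4951
```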

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-02-05T21:06:55.003Z · LW(p) · GW(p)

Yes, this is the mechanism of cancellation of multiple causal paths. In theory one can prove, with assumptions akin to the ideal point masses and inextensible strings of physics exercises, that the probability of exact cancellation is zero; in practice, finite sample sizes mean that cancellation cannot necessarily be excluded.

And then to complicate that example, consider a professional boxer who is trying to maintain his weight just below the top of a given competition band. You then have additional causal arrows back from Weight to both eXercise and Eating. As long as he succeeds in controlling his weight, it won't correlate with exercise or eating.

comment by IlyaShpitser · 2013-02-05T09:41:24.789Z · LW(p) · GW(p)

Yes, this is completely wrong. There is frequently no correlation but strong causation due to effect cancellation (homeostasis, etc.)

Here's a recent paper making this point in the context of mediation analysis in social science (I could post many more):

http://www.quantpsy.org/pubs/rucker_preacher_tormala_petty_2011.pdf

Nancy, I don't mean to jump on you specifically here, but this does seem to me to be a special instance of a general online forum disease, where people {prefer to use | view as authoritative} online sources of information (blogs, wikipedia, even tvtropes, etc.) vs mainstream sources (books, academic papers, professionals). Vinge calls it "the net of a million lies" for a reason!

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-02-15T15:28:34.727Z · LW(p) · GW(p)

I didn't feel jumped on, though I still don't have a feeling for how common causation without correlation is.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-02-15T17:14:43.475Z · LW(p) · GW(p)

The common example I go on about is any situation where a system generally succeeds at achieving a goal. This is a very large class. In such situations there will tend to be an absence of correlation between the effort made and the success at achieving it. The effort will correlate instead with the difficulties in the way. Effort and difficulty together cause the result; result and goal together cause effort.

A few concrete examples. If my central heating system works properly and I am willing to spend what it takes to keep warm, the indoor temperature of my house will be independent of both fuel consumption and external temperature, although it is caused by them.

If a government's actions in support of some policy target are actually effective, there may appear to be little correlation between actions and outcome, creating the appearance that their actions are irrelevant.

An automatic pilot will keep an aircraft at a constant heading, speed, and altitude. Movements of the flight controls will closely respond to external air currents, even if those currents are not being sensed. Neither need correlate with such variations as remain in the trajectory of the plane, although these are caused by the flight controls and the external conditions.

"The carpets are so clean, we don't need janitors!"

"When you do things right, people won't be sure you've done anything at all."

comment by simplicio · 2013-02-04T23:44:31.817Z · LW(p) · GW(p)

Not disagreeing, but just wanted to mention the useful lesson that there are some cases of causation without correlation. For example, the fuel burned by a furnace is uncorrelated with the temperature inside a home. (See: Milton Friedman's thermostat.)

comment by pinyaka · 2013-02-13T16:19:52.554Z · LW(p) · GW(p)

My shoe doesn't look like a duck in my closet, but it also doesn't look like the absence of a duck in my closet.

I'm not sure I understand this. Do you mean that the way your shoe looks is not evidence for the presence or absence of a duck somewhere in your closet?

I think the original quote was meant to imply that as long as your shoe doesn't have the properties that differentiate ducks from non-ducks then your shoe possesses the absence of duck properties and should be assumed to be a non-duck. In other words, for a given object each property must have a binary value for duckness and when all properties have non-duckness values, you should conclude that the object as a whole has a non-duckness property.

I get confused by too many negatives and ducks.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-06T23:33:28.041Z · LW(p) · GW(p)

I've just come across a fascinatingly compact observation by I. J. Good:

Public and private utilities do not always coincide. This leads to ethical problems. Example - an invention is submitted to a scientific adviser of a firm...

The probability that the invention will work is p. The value to the firm if the invention is adopted and works is V, and the loss if the invention is adopted and fails is L. The value to the adviser personally if he advises the adoption of the invention and it works is v, and the loss if it fails to work is l. The losses to the firm and the adviser if he recommends the rejection of the invention are both negligible...

Then the firm's expected gain if the invention is adopted is pV - (1-p)L and the adviser's expected gain in the same circumstances is pv - (1-p)l. The firm has positive expected gain if p/(1-p) > L/V, and the adviser has positive expected gain if p/(1-p) > l/v.

If l/v > p/(1-p) > L/V, the adviser will be faced with an ethical problem, i.e. he will be tempted to act against the interests of the firm.

This is a beautifully simple recipe for a conflict of interest:

Considering absolute losses assuming failure and absolute gains conditioned on success, an adviser is incentivized to give the wrong advice, precisely when:

  • The ratio of agent loss to agent gain,
  • exceeds the odds of success versus failure
  • which in turn exceeds the ratio of principal loss to principal gain.

You can see this reflected in a lot of cases because the gains to an advisor often don't scale anywhere near as fast as the gains to society or a firm. It's the Fearful Committee Formula.
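
A toy calculation may make the recipe concrete (a sketch only: the variable names follow Good's quote, but the specific numbers are invented for illustration):

```python
def expected_gains(p, V, L, v, l):
    """Expected gains if the invention is adopted; per Good's setup,
    rejecting it costs both the firm and the adviser nothing."""
    firm = p * V - (1 - p) * L       # firm's expected gain
    adviser = p * v - (1 - p) * l    # adviser's personal expected gain
    return firm, adviser

# Invented numbers: the invention probably works (p = 0.6, so odds
# p/(1-p) = 1.5), the firm's upside dwarfs its downside (L/V = 0.3),
# but the adviser's personal downside outweighs his upside (l/v = 3.0).
# Since l/v > p/(1-p) > L/V, this lands exactly in the conflict region.
firm, adviser = expected_gains(p=0.6, V=1000.0, L=300.0, v=10.0, l=30.0)
print(firm)     # 480.0: adoption is good for the firm...
print(adviser)  # -6.0: ...but the adviser is tempted to advise rejection
```

Flipping the inequalities (negligible personal downside, large personal upside for the adviser) gives the reverse case discussed in the replies.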

Replies from: shminux, Vaniver, Qiaochu_Yuan
comment by Shmi (shminux) · 2013-02-07T00:04:51.783Z · LW(p) · GW(p)

the Fearful Committee Formula.

Which is not nearly as common as the reverse, the Reckless Adviser Formula, where the personal loss to the adviser is so low and the potential personal gain so high that they recommend adoption even when the expected gain for the company is negative.

comment by Vaniver · 2013-02-06T23:39:09.981Z · LW(p) · GW(p)

In general, this is referred to as the principal-agent problem.

Note that the adviser's ethical problem also exists if L/V > p/(1-p) > l/v.

The adviser to the value

Is the order also inverted in the original?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-06T23:48:02.983Z · LW(p) · GW(p)

Fixed.

I. J. Good's original, which I've somewhat abridged, explicitly specifies that there are no competitors who cause visible losses/gains after the invention is rejected.

Replies from: Vaniver
comment by Vaniver · 2013-02-06T23:53:20.459Z · LW(p) · GW(p)

I. J. Good's original, which I've somewhat abridged, explicitly specifies that there are no competitors who cause visible losses/gains after the invention is rejected.

To clarify, this is a summary of what you've excluded in your quote, not a response to the other case where the ethical problem exists, correct?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-07T00:07:30.004Z · LW(p) · GW(p)

It's a summary of what I excluded - I had actually misinterpreted, hence my quote indeed was not a valid reply! The other case is indeed real, sorry.

comment by Qiaochu_Yuan · 2013-02-06T23:36:16.725Z · LW(p) · GW(p)

You can see this reflected in a lot of cases because the gains to an advisor often don't scale anywhere near as fast as the gains to society or a firm.

Name three?

Replies from: Vaniver
comment by Vaniver · 2013-02-06T23:44:29.119Z · LW(p) · GW(p)

The success of Market-Based Management / Koch Industries appears to be due at least in part to their focus on NPV at the managerial level. You get stories like (from memory, and thus subject to fuzz) the manager of a refining plant selling the land the plant sat on to a casino that was moving to the area, which he was rewarded for doing because the land was more valuable to the casino than to the company, even after factoring in the time lost while the plant was shut down and relocated. The corporate culture (and pay incentive structure) rewarded that sort of lateral use of resources, whereas a culture which compartmentalized people and departments would have balked at the lost time and disruption.

comment by Eugine_Nier · 2013-02-02T06:51:31.042Z · LW(p) · GW(p)

[S]econd thoughts tend to be tentative, and people tend not to believe that they are being lied to. Their own fairmindedness makes them gullible. Upon hearing two versions of any story, the natural reaction of any casual listener is to assume both versions are slanted to favor their side, and that the truth is perhaps somewhere in the middle. So if I falsely accuse an innocent group of ten people of wrongdoing, the average bystander, if he later hears my false accusation disputed, will assume that five or six of the people are guilty, rather than assume I lied and admit that he was deceived.

-- John C Wright

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-02T18:28:04.427Z · LW(p) · GW(p)

That reminds me of http://xkcd.com/690/.

Also:

If one group of editors were to say the Earth is flat and another group were to say it is round, it would not benefit Wikipedia for the groups to compromise and say the Earth is shaped like a calzone.

-- Raymond Arritt

(Quoting this before dinner is making me hungry.)

Replies from: HalMorris
comment by HalMorris · 2013-02-03T16:23:44.443Z · LW(p) · GW(p)

Wikipedia may ultimately have to do one of two things, or both:

1) Provide better structure for alternate versions of contested ideas

2) Construct a practically effective demarcation between strictly factual domains, and anything more interpretive.

Such a demarcation will always be challenged; I don't see any way around that, but I'd also insist that it's necessary for our sanity. Suppose it were possible, maybe using a browser with links to a database, to "brand" (or give the underwriter's seal of approval to) those pages that provide straightforward factual assertions, unretouched photographs, and scans of original source texts (such as all newspapers of which a copy still exists), and to promote the idea that the respectability of any interpretive or ethical claim consists very largely in its groundedness, shown by links to the "smells like a fact" zone.

Replies from: RomeoStevens
comment by RomeoStevens · 2013-02-06T08:48:04.740Z · LW(p) · GW(p)

Several versions with explicit labeling of which viewpoint each represents would be a huge step in improving general information retrieval. Hypertext in general was obviously a huge leap, but the problem of presenting the evolution of a school of thought on a particular subject has not been solved satisfactorily IMO. Path dependence is still among the information we regularly fail to record or simply throw away. We should not be reliant upon brilliant synthesists taking interest in each subject and writing a well organized history.

comment by Rubix · 2013-02-02T01:17:50.604Z · LW(p) · GW(p)

"In any man who dies, there dies with him his first snow and kiss and fight. Not people die, but worlds die in them."

-Yevgeny Yevtushenko

Replies from: Mitchell_Porter, jooyous
comment by Mitchell_Porter · 2017-04-02T12:34:23.227Z · LW(p) · GW(p)

Ironically, the man Yevtushenko is now dead too; but the world Yevtushenko, asteroid number 4234, lives on.

comment by jooyous · 2013-02-05T23:50:33.017Z · LW(p) · GW(p)

I wonder if we'll ever learn to reconstruct people-shadows from other people's memories of them. Also, whether this is a worthwhile thing to be doing.

It's a little creepy the way Facebook keeps dead people's accounts around now.

Replies from: TheOtherDave, grendelkhan
comment by TheOtherDave · 2013-02-06T02:07:11.698Z · LW(p) · GW(p)

I imagine that depends on what we're willing to consider a "person-shadow".

Any thoughts on what your minimum standard for such a thing would be?

For example, I suspect that if we're ever able to construct artificial minds in a parameterized way at all (as opposed to merely replicating an existing mind as a black box), it won't prove too difficult thereafter, given access to all my writings and history and whatnot, to create a mind that identifies itself as "Dave" and acts in many of the same ways I would have acted in similar situations.

I don't know if that would be a worthwhile thing to do. If so, it would presumably only be worthwhile for what amount to entertainment purposes... people who enjoy interacting with me might enjoy interacting with such a mind in my absence.

Replies from: jooyous
comment by jooyous · 2013-02-06T03:04:39.757Z · LW(p) · GW(p)

I occasionally have dreams about people who have died in which they seem really real, where they're not saying stuff they've said when they were alive but stuff that sounds like something they would say. But it's not profound original thoughts or anything? So I think what I'm thinking is pretty close to what you're describing.

I guess if we can make one of these, then we could see how different people's mental models of that person were? Probably there is stuff in my mental model that I can't articulate! Stuff that's still useful information!

But maybe people will start using these instead of faking their deaths if they want to run away.

Replies from: Nornagest
comment by Nornagest · 2013-02-06T04:59:54.675Z · LW(p) · GW(p)

I've suspected -- though we're talking maybe p = 0.2 here -- for a while that our internal representations of people we know well might have some of the characteristics of sapience. Not enough to be fully realized persons, but enough that there's a sense in which they can be said to have their own thoughts or preferences, not fully dependent either on our default personae or on their prototypes. Accounts like your dreams seem like they might be weak evidence for that line of thought.

Replies from: Kaj_Sotala, Kawoomba, jooyous
comment by Kaj_Sotala · 2013-02-08T09:22:35.268Z · LW(p) · GW(p)

Authors commonly feel like the characters they write about are real, to various extents. On the mildest end of the spectrum, the characters will just surprise their creators, doing something completely contrary to the author's expectations when they're put in a specific scene and forcing a complete rewrite of the plot. ("These two characters were supposed to have a huge fight and hate each other for the rest of their lives, but then they actually ended up confessing their love for each other and now it looks like they'll be happily married. This book was supposed to be about their mutual feud, so what the heck do I do now?") Or they might just "refuse" to do something that the author wants them to do, and she'll feel miserable afterwards if she forces the characters to act in the wrong way nevertheless. On the other end of the spectrum, the author can actually have real conversations with them going on in her head.

Replies from: Baughn
comment by Baughn · 2013-02-08T15:55:05.996Z · LW(p) · GW(p)

I'm not much of an author, but I've had this happen.

My mental character-models generally have no fourth wall, which has on several occasions led to them fighting each other for my attention so as to not fade away. I'm reasonably sure I'm not insane.

comment by Kawoomba · 2013-02-07T13:43:22.092Z · LW(p) · GW(p)

(...) but enough that there's a sense in which they can be said to have their own thoughts or preferences, not fully dependent either on our default personae or on their prototypes.

That sounds mystical.

Replies from: Nornagest
comment by Nornagest · 2013-02-07T17:25:57.839Z · LW(p) · GW(p)

Nah, this doesn't require any magic; just code reuse or the equivalent. If the cognitive mechanisms that we use to simulate other people are similar enough to those we use to run our own minds, it seems logical that those simulations, once rich and coherent enough, could acquire some characteristics of our minds that we normally think of as privileged. It follows that they could then diverge from their prototypes if there's not some fairly sophisticated error correction built in.

This seems plausible to me because evolution's usually a pretty parsimonious process; I wouldn't expect it to develop an independent mechanism for representing other minds when it's got a perfectly good mechanism for representing the self. Or vice versa; with the mirror test in mind it's plausible that self-image is a consequence of sufficiently good other-modeling, not the other way around.

Of course, I don't have anything I'd consider strong evidence for this -- hence the lowish probability.

Replies from: Kawoomba
comment by Kawoomba · 2013-02-07T18:16:13.094Z · LW(p) · GW(p)

Relevant smbc.

So, in a way Batman exists when you imagine yourself to be Batman? Do you still coexist then (since it is your cognitive architecture after all)?

I'd say that of course any high level process running on your mind has characteristics of your mind, after all, it is running on your mind. Those, however, would still be characteristics inherent to you, not to Batman.

If you were thinking of a nuclear detonation, running through the equations, would that bomb exist inside your mind?

Having a good mental model of someone and "consulting" it (apart from that model not matching the original anyway) seems to me more like your brain playing "what if", with the accompanying consciousness and assorted properties still belonging to the you doing the pretending, not to the what-if itself.

Replies from: shminux, Nornagest
comment by Shmi (shminux) · 2013-02-07T18:30:16.627Z · LW(p) · GW(p)

So, in a way Batman exists when you imagine yourself to be Batman?

If you were thinking of a nuclear detonation, running through the equations, would that bomb exist inside your mind?

My cached reply: "taboo exist".

Replies from: Kawoomba
comment by Kawoomba · 2013-02-07T18:35:57.089Z · LW(p) · GW(p)

This whole train of discussion started with

for a while that our internal representations of people we know well might have some of the characteristics of sapience

I'd argue that those characteristics of sapience still belong to the system that's playing "what-if", not to the what-if itself. There, no exist :-)

Replies from: DaFranker
comment by DaFranker · 2013-02-07T19:26:51.365Z · LW(p) · GW(p)

I was wondering whether things might be slightly different if you simulated batman-sapience by running the internal representation through simulations of self-awareness and decision-making, using one's own blackboxes as substitutes, attempting to mentally simulate in as much detail as possible every conscious mental process while sharing braintime on the subconscious ones.

Then I got really interested in this crazy idea and decided to do science and try it.

Shouldn't have done that.

comment by Nornagest · 2013-02-07T18:36:38.741Z · LW(p) · GW(p)

So, in a way Batman exists when you imagine yourself to be Batman? Do you still coexist then (since it is your cognitive architecture after all)?

It might not be entirely off base to say that a Batman or at least part of a Batman exists under those circumstances, if your representation of Batman is sophisticated enough and if this line of thought about modeling is accurate. It might be quite different from someone else's Batman, though; fictional characters kind of muddy the waters here. Especially ones who've been interpreted that many different ways.

The line between playing what-if and harboring a divergent cognitive object -- I'm not sure I want to call it a mind -- seems pretty blurry to me; I wouldn't think there'd be a specific point at which your representation of a friend stops being a mere what-if scenario, just a gradually increasing independence and fidelity as your model gets better and thinking in that mode becomes more natural.

Replies from: ygert
comment by ygert · 2013-02-07T20:28:44.641Z · LW(p) · GW(p)

I think the best way to say it is to say that Batman-as-Batman does not exist, but Batman-as-your-internal-representation-of-Batman does exist. I most certainly agree though that the distinction can be extremely blurry.

comment by jooyous · 2013-02-06T05:33:13.898Z · LW(p) · GW(p)

Has there been any work on how our internal representations of other people get built? I've only heard about the thin-slicing phenomenon but not much beyond that. I feel like sometimes people extrapolate pretty accurately -- like, "[person] would never do that" or "[person] will probably just say this" -- but I don't know how we know. I just kinda feel that a certain thing is something a certain person would do, but I can't always tell what they did that makes me think so, or whether I'm simulating a state machine or anything.

Replies from: tgb, Nornagest
comment by tgb · 2013-02-07T17:07:42.007Z · LW(p) · GW(p)

Exercise: pick a sentence to tell someone you know well, perhaps asking a question. Write down ahead of time exactly what you think they might say. Make a few different variations if you feel like it. Then ask them and record exactly what they do say. Repeat. Let us know if you see anything interesting.

comment by Nornagest · 2013-02-06T05:43:49.689Z · LW(p) · GW(p)

There's been some, yeah. I haven't been able to find anything that looks terribly deep or low-level yet, and very little taking a cognitive science rather than traditional psychology approach, but Google and Wikipedia have turned up a few papers.

This isn't my field, though; perhaps some passing psychologist or cognitive scientist would have a better idea of the current state of theory.

comment by grendelkhan · 2014-04-05T16:45:18.156Z · LW(p) · GW(p)

Relevant: Greg Egan, "Steve Fever".

comment by Grif · 2013-02-02T01:12:40.130Z · LW(p) · GW(p)

If someone doesn’t value evidence, what evidence are you going to provide that proves they should value evidence? If someone doesn’t value logic, what logical argument would you invoke to prove they should value logic?

--Sam Harris

Replies from: ChristianKl, jooyous, Turgurth, Andreas_Giger, Qiaochu_Yuan, BerryPick6, Nisan
comment by ChristianKl · 2013-02-02T17:07:14.942Z · LW(p) · GW(p)

You put them into a social environment where the high-status people value logic and evidence. You give them the plausible promise that they can increase their status in that environment by increasing the amount that they value logic and evidence.

Replies from: aleksiL
comment by aleksiL · 2013-02-03T14:16:21.742Z · LW(p) · GW(p)

How would this encourage them to actually value logic and evidence instead of just appearing to do so?

Replies from: Strange7, Omegaile, ChristianKl, magfrump, HalMorris, HalMorris
comment by Strange7 · 2013-02-07T04:06:43.867Z · LW(p) · GW(p)

The subject's capacity for deception is finite, and will be needed elsewhere. Sooner or later it becomes more cost-effective for the sincere belief to change.

Replies from: scav, Eugine_Nier
comment by scav · 2013-02-07T16:29:09.288Z · LW(p) · GW(p)

That is breathtakingly both the most cynical and beautiful thing I have read all day :)

comment by Eugine_Nier · 2013-02-08T03:56:38.986Z · LW(p) · GW(p)

I generally agree with your point. The problem with the specific application is that the subject's capacity for thinking logically (especially if you want the logic to be correct) is even more limited.

Replies from: Strange7
comment by Strange7 · 2013-02-11T20:16:26.065Z · LW(p) · GW(p)

If the subject is marginally capable of logical thought, the straightforward response is to try stupid random things until it becomes obvious that going along with what you want is the least exhausting option. Even fruit flies are capable of learning from personal experience.

In the event of total incapacity at logical thought... why are you going to all this trouble? What do you actually want?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-02-12T05:11:04.602Z · LW(p) · GW(p)

If the subject is marginally capable of logical thought, the straightforward response is to try stupid random things until it becomes obvious that going along with what you want is the least exhausting option.

That depends on how much effort you're willing to spend on each subject verifying that they're not faking.

comment by Omegaile · 2013-02-04T14:14:24.703Z · LW(p) · GW(p)

People tend to conform to their peers' values.

comment by ChristianKl · 2013-02-03T22:00:31.587Z · LW(p) · GW(p)

It's not a question of encouragement. Humans tend to want to be like the high-status folk that they look up to.

Replies from: aleksiL
comment by aleksiL · 2013-02-04T10:51:44.780Z · LW(p) · GW(p)

Want to be like or appear to be like? I'm not convinced people can be relied on to make the distinction, much less choose the "correct" one.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-02-04T13:43:54.010Z · LW(p) · GW(p)

Want to be like or appear to be like?

Or do they want to be like those folks appear to be like?

comment by magfrump · 2013-02-13T19:01:42.373Z · LW(p) · GW(p)

I think the most common human tactic for appearing to care is to lie to themselves about caring until they actually believe they care; once this is in place they keep up appearances by actually caring if anyone is looking, and if people look often enough this just becomes actually caring.

comment by HalMorris · 2013-02-03T16:52:27.424Z · LW(p) · GW(p)

Maybe the idea could gain popularity from a survival-island type reality program in which contestants have to measure the height of trees without climbing them, calculate the diameter of the earth, or demonstrate the existence of electrons (in order of increasing difficulty).

comment by HalMorris · 2013-02-03T16:46:59.823Z · LW(p) · GW(p)

Couple of attempts:

  • The hard sciences

  • Professions with a professional code of ethics, and consequences for violating it.

comment by jooyous · 2013-02-02T21:51:31.963Z · LW(p) · GW(p)

This reminds me of

You can't reason someone out of a position they didn't reason themselves into.

which I believe is a paraphrasing of something Jonathan Swift said, but I'm not sure. Anyone have the original?

Replies from: simplicio
comment by simplicio · 2013-02-04T23:35:34.560Z · LW(p) · GW(p)

You can't reason someone out of a position they didn't reason themselves into.

I don't think this is empirically true, though. Suppose I believe strongly that violent crime rates are soaring in my country (Canada), largely because I hear people talking about "crime being on the rise" all the time, and because I hear about murders on the news. I did not reason myself into this position, in other words.

Then you show me some statistics, and I change my mind.

In general, I think a supermajority of our starting opinions (priors, essentially) are held for reasons that would not pass muster as 'rational,' even if we were being generous with that word. This is partly because we have to internalize a lot of things in our youth and we can't afford to vet everything our parents/friends/culture say to us. But the epistemic justification for the starting opinions may be terrible, and yet that doesn't mean we're incapable of having our minds changed.

Replies from: Nornagest, Martin-2
comment by Nornagest · 2013-02-04T23:59:20.722Z · LW(p) · GW(p)

Suppose I believe strongly that violent crime rates are soaring in my country (Canada), largely because I hear people talking about "crime being on the rise" all the time, and because I hear about murders on the news. I did not reason myself into this position, in other words. Then you show me some statistics, and I change my mind.

The chance of this working depends greatly on how significant the contested fact is to your identity. You may be willing to believe abstractly that crime rates are down and public safety is up after being shown statistics to that effect -- but I predict that (for example) a parent who'd previously been worried about child abductions after hearing several highly publicized news stories, and who'd already adopted and vigorously defended childrearing policies consistent with this fear, would be much less likely to update their policies after seeing an analogous set of statistics.

Replies from: jooyous
comment by jooyous · 2013-02-05T00:23:38.175Z · LW(p) · GW(p)

This is partly because we have to internalize a lot of things in our youth and we can't afford to vet everything our parents/friends/culture say to us. But the epistemic justification for the starting opinions may be terrible, and yet that doesn't mean we're incapable of having our minds changed.

I agree, but I think part of the process of having your mind changed is the understanding that you came to believe those internalized things in a haphazard way. And you might be resisting that understanding because of the reasons @Nornagest mentions -- you've invested in them or incorporated them into your identity, for example. I think I'm more inclined to change the quote to

You can't expect to reason someone out of a position they didn't reason themselves into.

to make it slightly more useful in practice, because often changing the person's mind will require not only knowing the more accurate facts or proper reasoning, but also knowing why the person is attached to his old position -- and people generally don't reveal that until they're ready to change their mind on their own.

Oops, I guess I wasn't sure where to put this comment.

comment by Martin-2 · 2013-02-14T00:58:04.121Z · LW(p) · GW(p)

Suppose I believe strongly that violent crime rates are soaring in my country (Canada), largely because I hear people talking about "crime being on the rise" all the time, and because I hear about murders on the news. I did not reason myself into this position, in other words.

It looks to me like you arrived at this position via weighing the available evidence. In other words, you reasoned yourself into it. Upon second reading I see you don't have a base rate for the amount of violent crime on the news in peaceful countries, and you derived a high absolute level from a high[er than you'd like] rate of change. But you've shown a willingness to reason, even if you reasoned poorly (as poorly as me when I'm not careful. Scary!) So I think jooyous' quote survives.

comment by Turgurth · 2013-02-03T01:12:28.425Z · LW(p) · GW(p)

If you can't appeal to reason to make reason appealing, you appeal to emotion and authority to make reason appealing.

comment by Andreas_Giger · 2013-02-02T04:29:35.081Z · LW(p) · GW(p)

Put them in a situation where they need to use logic and evidence to understand their environment and where understanding their environment is crucial for their survival, and they'll figure it out by themselves. No one really believes God will protect them from harm...

Replies from: Swimmer963, DanArmak, ChristianKl
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-02-02T13:03:42.297Z · LW(p) · GW(p)

No one really believes God will protect them from harm...

I have some friends who do... At least insofar as things like "I don't have to worry about finances because God is watching over me, so I won't bother trying to keep a balanced budget." Then again, being financially irresponsible (a behaviour I find extremely hard to understand and sympathize with) seems to be common-ish, and not just among people who think God will take care of their problems.

Replies from: MixedNuts, army1987, Andreas_Giger
comment by MixedNuts · 2013-02-02T16:44:53.480Z · LW(p) · GW(p)

Why not? Thinking about money is work. It involves numbers.

Replies from: Kindly
comment by Kindly · 2013-02-02T16:51:06.798Z · LW(p) · GW(p)

Moreover, it often involves a great deal of stress. Small wonder that many people try to avoid that stress by just not thinking about how they spend money.

comment by A1987dM (army1987) · 2013-02-02T16:55:58.851Z · LW(p) · GW(p)

Well... as something completely and obviously deterministic (the amount of money you have at the end of the month is the amount you had at the beginning of the month, plus the amount you've earned, minus the amount you've spent, for a sufficiently broad definition of “earn” and “spend”), that's about the last situation in which I'd expect people to rely on God. With stuff which is largely affected by factors you cannot control directly (e.g. your health) I would be much less surprised.

Replies from: CCC, bentarm, Swimmer963
comment by CCC · 2013-02-02T18:57:47.345Z · LW(p) · GW(p)

Once you have those figures, it is deterministic; however, at the start of the month, those figures are not yet determined. One might win a small prize in a lottery; the price of some staple might unexpectedly increase or decrease; an aunt may or may not send an expensive gift; a minor traffic accident may or may not happen, requiring immediate expensive repairs.

So there are factors that you cannot control that affect your finances.

comment by bentarm · 2013-02-03T20:30:45.779Z · LW(p) · GW(p)

...that's about the last situation in which I'd expect people to rely on God

Does this cause you to doubt the veracity of the claim in the parent, or to update towards your model of what people rely on God for being wrong? I guess it should probably be both, to some extent. It's just not really clear from your post which you're doing.

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-03T23:41:23.351Z · LW(p) · GW(p)

Mostly the latter, as per Hanlon's razor.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-02-03T01:24:18.941Z · LW(p) · GW(p)

With stuff which is largely affected by factors you cannot control directly (e.g. your health) I would be much less surprised.

"Praying for healing" was quite a common occurrence at my friend's church. I didn't pick that as an example because's it's a lot less straightforward. Praying for healing probably does appear to help sometimes (placebo effect), and it's hard enough for people who don't believe in God to be rational about health–there aren't just factor you cannot control, there are plenty of factors we don't understand.

Replies from: woodside
comment by woodside · 2013-02-03T07:59:05.729Z · LW(p) · GW(p)

There hasn't been a lot of money spent researching it, but meta-analyses of the studies that have been conducted show that on average there is no placebo effect.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-02-03T13:36:12.684Z · LW(p) · GW(p)

That's really interesting...I had not heard that. Thanks for the info!

comment by Andreas_Giger · 2013-02-02T15:45:27.260Z · LW(p) · GW(p)

I think that's mostly because money is too abstract, and as long as you get by you don't even realize what you've lost. Survival is much more real.

comment by DanArmak · 2013-02-02T11:11:45.785Z · LW(p) · GW(p)

Sadly, that only works on a natural-selection basis, so the ethics boards forbid us from doing this. If they never see anyone actually failing to survive, they won't change their behavior.

Replies from: Andreas_Giger
comment by Andreas_Giger · 2013-02-02T15:47:46.032Z · LW(p) · GW(p)

Can't make an omelette without breaking some eggs. Videotape the whole thing so the next one has even more evidence.

comment by ChristianKl · 2013-02-02T16:47:34.275Z · LW(p) · GW(p)

If you threaten someone's survival, they are likely to get emotional. That's not the best mental state for applying logic.

Suicide bombers don't suddenly start believing in reason just before they are sent out to kill themselves.

Soldiers in trenches who fear for their lives, on the other hand, do often start to pray. Maybe there are a few atheists in foxholes, but that state seems to promote religiousness.

Replies from: AspiringRationalist
comment by NoSignalNoNoise (AspiringRationalist) · 2013-02-04T02:17:13.351Z · LW(p) · GW(p)

Soldiers in trenches who fear for their lives, on the other hand, do often start to pray. Maybe there are a few atheists in foxholes, but that state seems to promote religiousness.

Does it promote religiousness or attract the religious?

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2013-02-06T11:36:07.272Z · LW(p) · GW(p)

I think it just promotes grasping at straws.

comment by Qiaochu_Yuan · 2013-02-02T03:39:43.310Z · LW(p) · GW(p)

Take all their stuff. Tell them that they have no evidence that it's theirs and no logical arguments that they should be allowed to keep it.

Replies from: fubarobfusco
comment by fubarobfusco · 2013-02-02T04:03:26.391Z · LW(p) · GW(p)

They beat you up. People who haven't specialized in logic and evidence have not therefore been idle.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-02T04:18:25.628Z · LW(p) · GW(p)

Shoot them?

Replies from: gryffinp
comment by gryffinp · 2013-02-02T10:32:43.211Z · LW(p) · GW(p)

I think you just independently invented the holy war.

comment by BerryPick6 · 2013-02-02T17:28:41.498Z · LW(p) · GW(p)

This is from the Sam Harris vs. William Lane Craig debate, starting around the 44 minute mark. IIRC, Luke's old website has a review of this particular debate.

comment by Nisan · 2013-02-02T04:15:58.316Z · LW(p) · GW(p)

You can find out what persuades them and give them that.

Replies from: James_Miller
comment by James_Miller · 2013-02-02T06:25:46.356Z · LW(p) · GW(p)

And in some instances that would likely be what we call logic or evidence.

Replies from: ChristianKl
comment by ChristianKl · 2013-02-02T16:47:25.055Z · LW(p) · GW(p)

You usually can't get someone with a spider phobia to drop their phobia by trying to convince them with logic or evidence. On the other hand, there are psychological strategies to help them get rid of the phobia.

Replies from: Emily
comment by Emily · 2013-02-02T18:47:35.862Z · LW(p) · GW(p)

I think cognitive behavioural therapy for phobias, which seems to work pretty well in a large number of cases, actually relies on helping people see that their fear is irrational.

Replies from: jooyous, NancyLebovitz
comment by jooyous · 2013-02-02T18:58:04.367Z · LW(p) · GW(p)

As someone with a phobia, I can tell you from experience that realizing your fear is irrational doesn't actually make the fear go away. Sometimes it even makes you feel more guilty for having it in the first place. Realizing it's irrational just helps you develop coping strategies for acting normal when you're freaking out in public.

Replies from: Emily
comment by Emily · 2013-02-02T20:20:33.729Z · LW(p) · GW(p)

Oh sure, I can definitely believe that. Maybe a better choice of wording above would have been "internalise" rather than "see", which would rather negate my point, I guess. Or maybe it works differently for some people. I don't have any experience with phobias or CBT myself.

comment by NancyLebovitz · 2013-02-03T16:56:20.101Z · LW(p) · GW(p)

It's alief vs. belief. It's one thing to see that, in theory, almost all spiders are harmless. It's another to remain calm in the presence of a spider if you've had a history of being terrified of them.

Desensitization is a process of teaching a person how to calm themselves, and then exposing them to things which are just a little like spiders (a picture of a cartoon spider, perhaps, or the word spider). When they can calm themselves around that, they're exposed to something a little more like a spider, and learn to be calm around that.

The alief system can learn, but it's not necessarily a verbal process.

Even when it is verbal, as when someone learns to identify various sorts of irrational thoughts, it's much slower than understanding an argument.

Replies from: Emily
comment by Emily · 2013-02-03T17:31:24.598Z · LW(p) · GW(p)

Right; that's the "behavioural" part of cognitive behavioural therapy, right? But the "cognitive" part is an explicit, verbal process.

comment by curiousepic · 2013-02-06T02:25:23.689Z · LW(p) · GW(p)

Q: I was wondering what the dumbest or funniest argument you've heard against the defeat of aging?

Aubrey de Grey: Um, it's been a very very long time since I've heard a question or concern I haven't heard before, so nothing's dumb or funny anymore, it's just... tedium.

From this recent talk

Replies from: Eliezer_Yudkowsky, army1987, EphemeralNight, Qiaochu_Yuan
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-06T23:21:44.971Z · LW(p) · GW(p)

I cannot express how true this is, at least not without a lot of swear words.

comment by A1987dM (army1987) · 2013-02-07T16:41:21.603Z · LW(p) · GW(p)

Aubrey de Grey being an immortalist himself, I'm assuming the irony to be unintentional?

Replies from: ESRogs
comment by ESRogs · 2013-02-09T09:03:07.191Z · LW(p) · GW(p)

Haha, didn't occur to me until I read your comment, so there's one data point for you.

comment by EphemeralNight · 2013-02-07T20:26:48.134Z · LW(p) · GW(p)

/clicks link, watches

... I can barely understand a single word this guy is saying. Is it just me or is the audio in that video really bad? I don't suppose it was transcribed anywhere?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-02-15T19:00:19.629Z · LW(p) · GW(p)

It's not just you. It was comprehensible but annoying for approximately the first 10 minutes, and then it became completely muddy. I hope there's a transcript somewhere.

comment by Qiaochu_Yuan · 2013-02-07T01:33:43.041Z · LW(p) · GW(p)

I'm confused. I thought that deathpigeon's quote was downvoted because it was anti-deathism and not rationality, but this quote is similar in that way and it has lots of upvotes. Was deathpigeon's quote actually downvoted because it incorrectly attributed a line to ASoIaF instead of Game of Thrones? Seriously?

Replies from: Nornagest, ArisKatsaris
comment by Nornagest · 2013-02-07T02:09:00.017Z · LW(p) · GW(p)

I wouldn't think so, but I wasn't expecting five upvotes on my comment saying so, either. Maybe we really are that pedantic.

This is only incidentally anti-deathist, though; its substance has more to do with popular reactions to controversial ideas. Which doesn't seem all that shiningly rational to me either, but perhaps I'm missing something.

Replies from: Mestroyer
comment by Mestroyer · 2013-02-15T10:32:10.060Z · LW(p) · GW(p)

Or we all secretly love anti-deathist quotes, and only downvote them when they have no rationality content because we feel it's our duty, but when we see one that can be interpreted as slightly rationalist, we seize the excuse to upvote it. Or our liking for a quote based on its anti-deathism enhances our appreciation for its insight into rationality, via the affect heuristic.

comment by ArisKatsaris · 2013-02-07T23:22:12.620Z · LW(p) · GW(p)

Or perhaps there are more criteria (aesthetic, informational, other) by which these quotes may be judged than whether they are anti-death or not.

And that other quote is neither ASoIaF nor TV series, it's a misquotation.

comment by [deleted] · 2013-02-02T01:19:39.171Z · LW(p) · GW(p)

.

Replies from: Desrtopa, woodside
comment by Desrtopa · 2013-02-06T21:33:16.063Z · LW(p) · GW(p)

The first response that comes to my mind is "because if the butterfly were trying that hard to escape the kid, it would fly above the kid's reach, and the kid would give up." When I look at the scene, I see a kid chasing a butterfly, and a butterfly too stupid to realize it should flee instead of simply dodging.

Animals on the intelligence levels of butterflies (which, keep in mind, have specific mating flight patterns they use to tell other members of their species apart from things like ribbons and stray flower petals,) don't seem to even have retreat instincts, just avoidance instincts. They can't recognize persistent pursuit. A fly won't hesitate to land on a person who has been trying to swat it for minutes on end.

comment by woodside · 2013-02-03T07:53:04.057Z · LW(p) · GW(p)

Because you're a human, not a butterfly. It seems like an animal that used a cognitive filter that defaulted to the latter case would take a pretty severe fitness hit.

Replies from: army1987, alex_zag_al
comment by A1987dM (army1987) · 2013-02-05T17:18:12.918Z · LW(p) · GW(p)

Three things, in no particular order:

  • I seem to recall that, in some obscure language, each noun has an agency level and in a sentence the most agenty noun is the subject by default, unless the verb is specially inflected to show otherwise: for example, “[dog] [bite] [man]” would mean ‘a man bit a dog’, regardless of word order, because the noun “[man]” has higher agency than “[dog]”.

  • Would you sooner see a tiger chasing a man, or a man running away from a tiger? If the former, it's not just the fact that butterflies are not human, it's the fact that the butterflies are small.

  • I think that, at least in the case of the lion, it would also depend on whether the two of them are moving towards the left side or the right side of my visual field. I heard that in _The Great Wave off Kanagawa_ the boats are intended to look more agenty than the wave, but for Western people it will typically look the other way round (due to Western languages being written from left to right), and for a Westerner to get the right effect they'd have to look at the picture in a mirror. (It works for me, at least.)

Replies from: Luke_A_Somers, bbleeker
comment by Luke_A_Somers · 2013-02-06T15:50:40.223Z · LW(p) · GW(p)

Is this visual field orientation issue really Western vs Eastern? If so, has it evaporated lately?

One of the media that most lends itself to testing this notion is video games, since there is almost always an agent, and often a preferred direction to gameplay. In some cases, there is a lot of free movement but when you enter a new zone/approach a boss, it generally goes one way rather than the other.

Eastern games favoring left-to-right over right-to-left: Super Mario Brothers, Ninja Gaiden, Megaman, Ghosts and Goblins, Double Dragon, TMNT, River City Ransom, Sonic the Hedgehog, Gradius/Lifeforce, UN Squadron, Rygar, Contra, Codename: Viper, Faxanadu (at least, the beginning, which is all I saw), Excitebike, Zelda 2, Act Raiser, Wizards and Warriors, and Cave Story.

On the other side, Final Fantasy combat generally puts the party on the right side, facing left. That's pretty leftward-oriented for sure. And very slightly - more slightly than any of the above - Metroid. Whenever you find a major powerup, you approach it from the right. You enter Tourian (the last area) from the right, and approach all 3 full bosses from the right. Those two are all I can think of with any sort of leftward bias at all.

In the west, the only games I can think of that favor right-to-left over left-to-right are Choplifter and Solaris; also, we get slightly-leftward readings on the Atari game of The Empire Strikes Back (you go left to meet the attack, but the primary agents are the attacking walkers, which are going right, and you need to keep up with them) and Pitfall (it seems mainly designed for players going right... which meant it was easier to turn around and go left; however, I'm sure the designer did this intentionally).

In absolute terms and even more at a fractional level, that's more than the eastern games.

... Now my head hurts. And man, going to a boarding school at a young age really exposed me to a lot of games.

comment by Sabiola (bbleeker) · 2013-02-06T10:35:15.075Z · LW(p) · GW(p)

I heard that in The Great Wave off Kanagawa the boats are intended to look more agenty than the wave, but for Western people it will typically look like the other way round (due to Western languages being written from left to right), and for a Westerner to get the right effect they'd have to look at the picture in a mirror. (It works for me, at least.)

Huh, I just tried that, and it works for me too. When you mirror it, it looks like they're going into the wave instead of fleeing from it. The effect is really strong; I wondered if it would still work when I knew about it, but it does.

Replies from: army1987, NancyLebovitz, None
comment by A1987dM (army1987) · 2013-02-10T02:05:17.548Z · LW(p) · GW(p)

BTW, does anyone get different effects from the emoticons :-/ and :-\ or it's just me?

V erpragyl qvfpbirerq gung, juvyr gurl fhccbfrq gb or flabalzbhf (ba Snprobbx gurl eraqre gb gur fnzr cvp), gb zr gur sbezre srryf zber yvxr “crecyrkvgl, pbashfvba” (naq gung'f ubj V trarenyyl hfr vg), jurernf gur ynggre srryf zber yvxr “qvfnccebiny” (naq V bayl fnj gung orpnhfr zl cubar unf :-\ ohg abg :-/ nzbat gur cer-pbzcbfrq rzbgvpbaf, fb V cvpxrq gur sbezre ohg vg qvqa'g ybbx evtug gb zr).

[Edited to move the question to the front and rot-13 the rest as per Nesov's suggestion.]

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2013-02-10T02:15:27.552Z · LW(p) · GW(p)

Does anyone else get the same effect?

You shouldn't prime the audience before asking a question like that.

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-10T10:24:27.499Z · LW(p) · GW(p)

Good point. Fixed.

comment by NancyLebovitz · 2013-02-15T18:22:03.936Z · LW(p) · GW(p)

Interesting. In the normal version, it looks to me like the waves are lifting the boats, and mirror-reversed it looks like the boats are driving against it.

Actually, my normal way to look at it is to focus on the wave, then the mountain, and scarcely notice the boats.

On my first look at the mirror version, the wave looked like a giant claw attacking the mountain.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-02-15T18:29:08.842Z · LW(p) · GW(p)

Yeah, I spent a while looking for the boats in the image... I thought one of them was a beach. I think the question of which is more "agenty" was contaminated for me, though, since I read the comments before following the link to look at the image. I can make myself see either the wave as 'chasing' the boats, or the boats as fleeing the wave, or the boats sailing into the wave...

comment by [deleted] · 2013-02-07T14:50:03.567Z · LW(p) · GW(p)

For me, the default orientation of the picture makes it seem like the boats are moving into it, while the flipped version makes it seem like the wave is agent-ly 'attacking' the boats. The difference in agentiness is more pronounced in the flipped version, though. (I'm Asian-American.)

Replies from: Baughn
comment by Baughn · 2013-02-08T15:52:29.340Z · LW(p) · GW(p)

Did you grow up in America? Would this be consistent with a genetic basis, or have you been exposed to RTL language previously?

Replies from: None
comment by [deleted] · 2013-02-08T17:27:37.459Z · LW(p) · GW(p)

Born and raised in the US, so English is my primary language. I had some long-term exposure to Chinese growing up as a kid (generally written up-to-down then right-to-left in our workbooks). Speaking and understanding (rudimentary) Chinese has stuck with me; the writing and reading of, has not.

comment by alex_zag_al · 2013-02-05T01:54:15.001Z · LW(p) · GW(p)

Don't good hunters have good mental models of their prey? I mean I get that you're thinking that it wouldn't help to feel sympathy for animals of other species. But it would help in many cases to have empathy, and to see things from the other animal's perspective.

Replies from: Strange7
comment by Strange7 · 2013-02-07T03:01:11.101Z · LW(p) · GW(p)

Butterflies are not, and to my knowledge have never been, a major prey item for H. sapiens.

comment by jsbennett86 · 2013-02-13T23:34:58.119Z · LW(p) · GW(p)

Every time you read something that mentions brain chemicals or brain scans, rewrite the sentence without the sciencey portions. “Hate makes people happy.” “Women feel closer to people after sex.” “Music makes people happy.” If the argument suddenly seems way less persuasive, or the news story way less ground-breaking… well. Someone’s doing something shady.

Ozy Frantz - Brain Chemicals are not Fucking Magic

comment by Kawoomba · 2013-02-06T10:26:25.240Z · LW(p) · GW(p)

A sharp knife is nothing without a sharp eye.

Klingon proverb.

comment by Vaniver · 2013-02-01T21:27:35.734Z · LW(p) · GW(p)

If you're not making quantitative predictions, you're probably doing it wrong.

--Gabe Newell during a talk. The whole talk is worthwhile if you're interested in institutional design or Valve.

Replies from: Mass_Driver
comment by Mass_Driver · 2013-02-02T08:20:12.310Z · LW(p) · GW(p)

What's the percent chance that I'm doing it wrong?

Replies from: Vaniver, DanArmak
comment by Vaniver · 2013-02-02T15:54:43.210Z · LW(p) · GW(p)

The whole quote:

If you're not making quantitative predictions, you're probably doing it wrong, or you're probably not doing it as well as you can. That's sort of become kind of critical to how we operate. You have to predict in advance. Anybody can explain anything after the fact, and it has to be quantitative or you're not being serious about how you're approaching the problem.

The problems you face might not require a serious approach; without more information, I can't say.

comment by DanArmak · 2013-02-02T11:14:55.878Z · LW(p) · GW(p)

78.544%.

comment by xv15 · 2013-02-11T16:34:07.463Z · LW(p) · GW(p)

Closeness in the experiment was reasonably literal but may also be interpreted in terms of identification with the torturer. If the church is doing the torturing then the especially religious may be more likely to think the tortured are guilty. If the state is doing the torturing then the especially patriotic (close to their country) may be more likely to think that the tortured/killed/jailed/abused are guilty. That part is fairly obvious but note the second less obvious implication–the worse the victim is treated the more the religious/patriotic will believe the victim is guilty. ... Research in moral reasoning is important because understanding why good people do evil things is more important than understanding why evil people do evil things.

-Alex Tabarrok

Replies from: Eugine_Nier, CCC
comment by Eugine_Nier · 2013-02-12T05:06:16.057Z · LW(p) · GW(p)

the worse the victim is treated the more the religious/patriotic will believe the victim is guilty.

One amusing aspect is that assuming the person is justified in their belief that their church/country is ethical, the above is a valid inference.

Replies from: ChristianKl
comment by ChristianKl · 2013-02-27T17:30:29.975Z · LW(p) · GW(p)

Not necessarily. You don't punish people based on their likelihood of being guilty but based on the severity of their crime.

If torture is used as a tool to gain information instead of being used to punish, it's even more questionable whether the likelihood of being guilty correlates with the severity of the torture. The fact that someone decides to torture to get more information suggests that they have an insufficient amount of information.

If there's a 50% chance that a person has information that can prevent a nuclear explosion, you can argue that it's ethical to torture to get that information.

After the bomb has exploded and you know for certain who did the crime, there's not much need to torture anyone.

An interrogator who tortures is more likely to get false confessions that implicate innocents. If he then goes and tortures those innocents, you see that people who torture are more likely to punish innocents than people who don't.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-02-27T22:43:59.085Z · LW(p) · GW(p)

An interrogator who tortures is more likely to get false confessions that implicate innocents. If he then goes and tortures those innocents, you see that people who torture are more likely to punish innocents than people who don't.

Even the first person who was tortured might be innocent or ignorant.

Replies from: ChristianKl
comment by ChristianKl · 2013-02-28T14:44:49.221Z · LW(p) · GW(p)

Yes, but that's beside the point I tried to make. Torturing in general produces a dynamic that makes you punish more innocent people.

comment by CCC · 2013-02-12T07:15:01.032Z · LW(p) · GW(p)

It seems to me that the same would apply to any in-group. The reasoning runs more-or-less as follows:

It is us (not me personally, but a group with which I strongly identify) that is treating this person badly; since we are doing it, he must deserve it. Since he deserves it, he must be guilty. This is because if he did not deserve it, then I would be horrified at the actions of people I have always tried to emulate; and that, in turn, would mean that I had already given some support to an evil group, and had indeed put some significant effort into being a part of that group, taking up the group norms.

If the group is evil, or does evil actions, then I am evil by association.

And a good person does not want to reach that conclusion; therefore, the person being punished must be guilty. And thus, good people do evil things by not acknowledging evil being done in their name as what it is.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-02T05:25:27.239Z · LW(p) · GW(p)

Good things come to those who steal them.

-- Magnificent Sasquatch

Replies from: hankx7787
comment by hankx7787 · 2013-02-04T23:34:13.566Z · LW(p) · GW(p)

Quote of 2013!

comment by Eugine_Nier · 2013-02-05T01:22:59.859Z · LW(p) · GW(p)

Of a proposed course of action He wants men, so far as I can see, to ask very simple questions; is it righteous? is it prudent? is it possible? Now if we can keep men asking "Is it in accordance with the general movement of our time? Is it progressive or reactionary? Is this the way that History is going?" they will neglect the relevant questions. And the questions they do ask are, of course, unanswerable; for they do not know the future, and what the future will be depends very largely on just those choices which they now invoke the future to help them to make.

-- Screwtape, The Screwtape Letters by C.S. Lewis

Replies from: NevilleSandiego
comment by NevilleSandiego · 2013-02-23T10:42:50.167Z · LW(p) · GW(p)

I kind of wish people did use the future more, sometimes. For example, in Australia at the moment, neither major political party supports gay marriage. And beyond all the direct arguments for/against the concept, I can't help but wonder if they really expect, in 50 years' time, that we will live in a world of strictly heterosexual marriages. What are they possibly hoping to achieve? Maybe that reasoning isn't the best way to decide to actively do a thing, but it surely counts towards the cessation of resistance to a thing.

Replies from: Eugine_Nier, wedrifid, TheOtherDave
comment by Eugine_Nier · 2013-02-23T23:07:17.986Z · LW(p) · GW(p)

I can't help but wonder if they really expect, in 50 years' time, that we will live in a world of strictly heterosexual marriages.

Here are a few things that have at one time or another been considered "obviously inevitable":

  • The spread of enlightened dictatorship on the Prussian model.

  • The spread of eugenics.

  • The control of the world economy by "rational" central planners.

My point is that you appear to be overestimating how well you can predict the future.

What are they possibly hoping to achieve?

I don't think you really believe this argument. In particular if the success of something you opposed seemed inevitable, you'd still oppose it.

Maybe that reasoning isn't the best way to decide to actively do a thing, but it surely counts towards the cessation of resistance to a thing.

What I think is happening is that you support the "inevitable" outcome but are getting frustrated that the opposition just won't go away like they're "supposed" to.

Replies from: soreff
comment by soreff · 2013-02-23T23:48:49.320Z · LW(p) · GW(p)

In particular if the success of something you opposed seemed inevitable, you'd still oppose it.

Oppose in the sense of "actively work to stop it" or oppose in the sense of, "if asked about it, note that one dislikes it"? I dislike the increase of surveillance over the decades but look: Sensors get cheaper year by year. Computation gets cheaper year by year. I'm not happy to see more surveillance, but I see it as so close to inevitable, due to the dropping costs of the enabling technologies, that actively opposing it is a waste of time and effort.

To put it another way: In the original C.S.Lewis quote, Lewis includes in his own list of questions that he wants asked: "Is it possible?" I view most of the questions that Lewis disapproves of as just being ways of asking whether recent historical evidence make something look possible or impossible in the near future. In my view, usually, claims of historical inevitability are overstated, but, occasionally (as in the cheaper sensors example), I think there are situations where a fairly solid case for at least likely trends can be made.

comment by wedrifid · 2013-02-24T07:27:47.283Z · LW(p) · GW(p)

I can't help but wonder if they really expect, in 50 years' time, that we will live in a world of strictly heterosexual marriages. What are they possibly hoping to achieve?

Being elected at some point in the next 3 years. They aren't trying to achieve anything related to homosexual marriages. They don't care.

Replies from: simplicio
comment by simplicio · 2013-02-27T22:26:14.636Z · LW(p) · GW(p)

Um, I know this is classic Hansonian "X is not about X" cynicism, but I doubt it's actually true of most politicians. Sure, the need to get elected skews their priorities, but they do have policy preferences, which they are willing to pursue at cost if necessary.

comment by TheOtherDave · 2013-02-23T23:15:48.925Z · LW(p) · GW(p)

FWIW, 20 years ago (when my now-husband and I first got together) I expected that I would live in a world of strictly heterosexual marriages all my life.
That didn't incline me to cease my opposition to that world.
So I can empathize with someone who expects to live in a world of increasing marriage equality but doesn't allow that expectation to alter their opposition to that world.

comment by [deleted] · 2013-02-04T01:55:07.484Z · LW(p) · GW(p)

Been making a game of looking for rationality quotes in the Super Bowl.

"It's only weird if it doesn't work" --Bud Light Commercial

Only a rationality quote out of context, though, since the ad is about superstitious rituals among sports fans. My automatic mental reply is "well, that doesn't work."

Replies from: Jay_Schweikert
comment by Jay_Schweikert · 2013-02-05T19:46:24.952Z · LW(p) · GW(p)

Well, but in the universe of the commercials, it clearly did, so long as you went to the appropriate expert.

Replies from: None
comment by [deleted] · 2013-02-05T20:12:16.751Z · LW(p) · GW(p)

Good observation. I will accept your correction: it's only weird if it doesn't work, and it doesn't work unless you're in Stevie Wonder's presence.

comment by Kindly · 2013-02-01T19:44:07.000Z · LW(p) · GW(p)

Were all stars to disappear or die,
I should learn to look at an empty sky
And feel its total darkness sublime,
Though this might take me a little time.

W. H. Auden, "The More Loving One"

Replies from: NevilleSandiego, Toddling
comment by NevilleSandiego · 2013-02-23T10:30:03.941Z · LW(p) · GW(p)

I had a thought recently: what if the existence of a benevolent, omnipotent creator were proven? And my first thought was that I would learn to love the world as the creation of a higher power. And that disturbed me. It's too new a thought for me to have plumbed it properly. But this reminded me. In the absence of the stars, what becomes of their beauty?

When the world is bereft of tigers, glaciers, the Amazon, will we feel it to be sublime? (Imma go read the poem now.)

comment by Toddling · 2013-02-02T20:45:56.978Z · LW(p) · GW(p)

The only interpretation I've been able to read into this is that the speaker wants to become more emotionally accepting of death. Am I missing something?

Replies from: Kindly
comment by Kindly · 2013-02-02T21:13:26.442Z · LW(p) · GW(p)

That interpretation didn't even occur to me, possibly because I read the whole poem instead of the bit I quoted (and maybe I quoted the wrong bit). Here is the whole thing (it's short). I always feel a bit awkward arguing about how I interpreted a poem, so maybe this will resolve the issue?

(Incidentally, am I the only one mildly annoyed by how people seem to think of "rationality quotes" as "anti-deathism quotes"? The position may be rational, but it is not remotely related to rationality.)

Replies from: Qiaochu_Yuan, Toddling
comment by Qiaochu_Yuan · 2013-02-03T00:35:38.445Z · LW(p) · GW(p)

(Incidentally, am I the only one mildly annoyed by how people seem to think of "rationality quotes" as "anti-deathism quotes"? The position may be rational, but it is not remotely related to rationality.)

You're not the only one. We should be doing more firewalling the optimal from the rational in general.

comment by Toddling · 2013-02-02T23:10:26.473Z · LW(p) · GW(p)

Thank you, that was helpful. I don't see the deathist tones anymore. Now it reads a bit more like 'If I happened to find myself in a world without stars I think I'd adapt,' which reminds me a bit of the Litany of Gendlin and the importance of facing reality. It makes more sense to have it here now.

This is true, and now I have to go back and look at all the anti-deathist quotes I upvoted and examine them more closely for content directly related to rationality. Damn.

comment by JQuinton · 2013-02-18T20:10:42.327Z · LW(p) · GW(p)

I find for myself that my first thought is never my best thought. My first thought is always someone else’s; it’s always what I’ve already heard about the subject, always the conventional wisdom. It’s only by concentrating, sticking to the question, being patient, letting all the parts of my mind come into play, that I arrive at an original idea. By giving my brain a chance to make associations, draw connections, take me by surprise. And often even that idea doesn’t turn out to be very good. I need time to think about it, too, to make mistakes and recognize them, to make false starts and correct them, to outlast my impulses, to defeat my desire to declare the job done and move on to the next thing.

William Deresiewicz

The whole speech is worth reading as one giant rationality quote

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-02-19T02:33:04.660Z · LW(p) · GW(p)

Not bad, although it seems to equate originality with goodness a little too much.

comment by [deleted] · 2013-02-02T01:20:29.017Z · LW(p) · GW(p)

.

Replies from: MixedNuts
comment by MixedNuts · 2013-02-02T17:03:00.331Z · LW(p) · GW(p)

Do we know anything about executive function failures other than AD(H)D?

Replies from: Emily, None
comment by [deleted] · 2013-02-02T17:13:49.822Z · LW(p) · GW(p)

In most cases 'executive dysfunction' covers the same territory as 'adult ADHD', but it can also be the outcome of some kinds of brain damage.

comment by simplicio · 2013-02-07T03:15:03.441Z · LW(p) · GW(p)

It is important, therefore, to always maintain a balanced view of markets. There is something extremely elegant about the way they allocate goods and resources, and the way the price system automatically adjusts the system of production in response to changes in demand. There is a clear sense in which markets achieve a level of coordination and efficiency that no other form of social organization is able to provide. However, markets are not magical, and they will not solve all our problems. They work properly only under very specific institutional conditions.

(Joseph Heath, The Efficient Society)

Heath is an excellent writer on economics/philosophy.

comment by Kingoftheinternet · 2013-02-01T19:47:05.058Z · LW(p) · GW(p)

If you are reading this book and flipping out at every third sentence because you feel I'm insulting your intelligence, then I have three points of advice for you:

  • Stop reading my book. I didn't write it for you. I wrote it for people who don't already know everything.

  • Empty before you fill. You will have a hard time learning from someone with more knowledge if you already know everything.

  • Go learn Lisp. I hear people who know everything really like Lisp.

For everyone else who's here to learn, just read everything as if I'm smiling and I have a mischievous little twinkle in my eye.

Introduction to Learn Python The Hard Way, by Zed A. Shaw

Replies from: pewpewlasergun, Estarlio, wedrifid
comment by pewpewlasergun · 2013-02-02T04:15:27.931Z · LW(p) · GW(p)

If anyone feels even remotely inspired to click through and actually learn Python, do it. It's been the most productive thing I've done on the internet.

Replies from: RomeoStevens
comment by RomeoStevens · 2013-02-06T09:06:18.574Z · LW(p) · GW(p)

This makes me wonder how much my writing skills would improve if I retyped excellently written essays for a while.

Replies from: Vaniver, Eliezer_Yudkowsky, BlueSun, Qiaochu_Yuan
comment by Vaniver · 2013-02-06T23:50:07.428Z · LW(p) · GW(p)

Benjamin Franklin's method of learning to write well is summarized here. His version:

A question was once, somehow or other, started between Collins and me, of the propriety of educating the female sex in learning, and their abilities for study. He was of opinion that it was improper, and that they were naturally unequal to it. I took the contrary side, perhaps a little for dispute's sake. He was naturally more eloquent, had a ready plenty of words; and sometimes, as I thought, bore me down more by his fluency than by the strength of his reasons. As we parted without settling the point, and were not to see one another again for some time, I sat down to put my arguments in writing, which I copied fair and sent to him. He answered, and I replied. Three or four letters of a side had passed, when my father happened to find my papers and read them. Without entering into the discussion, he took occasion to talk to me about the manner of my writing; observed that, though I had the advantage of my antagonist in correct spelling and pointing (which I ow'd to the printing-house), I fell far short in elegance of expression, in method and in perspicuity, of which he convinced me by several instances. I saw the justice of his remark, and thence grew more attentive to the manner in writing, and determined to endeavor at improvement.

About this time I met with an odd volume of the Spectator. It was the third. I had never before seen any of them. I bought it, read it over and over, and was much delighted with it. I thought the writing excellent, and wished, if possible, to imitate it. With this view I took some of the papers, and, making short hints of the sentiment in each sentence, laid them by a few days, and then, without looking at the book, try'd to compleat the papers again, by expressing each hinted sentiment at length, and as fully as it had been expressed before, in any suitable words that should come to hand. Then I compared my Spectator with the original, discovered some of my faults, and corrected them. But I found I wanted a stock of words, or a readiness in recollecting and using them, which I thought I should have acquired before that time if I had gone on making verses; since the continual occasion for words of the same import, but of different length, to suit the measure, or of different sound for the rhyme, would have laid me under a constant necessity of searching for variety, and also have tended to fix that variety in my mind, and make me master of it. Therefore I took some of the tales and turned them into verse; and, after a time, when I had pretty well forgotten the prose, turned them back again. I also sometimes jumbled my collections of hints into confusion, and after some weeks endeavored to reduce them into the best order, before I began to form the full sentences and compleat the paper. This was to teach me method in the arrangement of thoughts. By comparing my work afterwards with the original, I discovered many faults and amended them; but I sometimes had the pleasure of fancying that, in certain particulars of small import, I had been lucky enough to improve the method or the language, and this encouraged me to think I might possibly in time come to be a tolerable English writer, of which I was extremely ambitious. My time for these exercises and for reading was at night, after work or before it began in the morning, or on Sundays, when I contrived to be in the printing-house alone, evading as much as I could the common attendance on public worship which my father used to exact on me when I was under his care, and which indeed I still thought a duty, though I could not, as it seemed to me, afford time to practise it.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-06T23:23:50.406Z · LW(p) · GW(p)

I would expect the answer to be "not much, compared to writing and publishing horrible, horrible fanfiction".

comment by BlueSun · 2013-02-06T14:53:26.230Z · LW(p) · GW(p)

I'd like to see a study result on that.

In Art History class I learned that a common way for great artists to learn to paint was by copying the work of the masters. I then asked the art teacher why it was a rule that we couldn't copy other famous historical paintings. I can't remember her exact answer, but the times I haven't followed her advice and have gone and copied a great painting, I seem to have learned more. But again, I'd like to see a study result.

Replies from: DaFranker
comment by DaFranker · 2013-02-06T15:13:04.402Z · LW(p) · GW(p)

But again, I'd like to see a study result.

I'd like that too.

It makes sense intuitively, but if I can't find any evidence either way this'll probably seep into my subconscious now and at some point in the future I'll just assume it as true and adopt strategies based on that assumption, which might be suboptimal.

comment by Qiaochu_Yuan · 2013-02-06T23:28:43.352Z · LW(p) · GW(p)

Your grammar and spelling might improve. I think you've matched the wrong things in your analogy.

comment by Estarlio · 2013-02-12T03:16:18.391Z · LW(p) · GW(p)

I'm not sure what this has to do with rationality quotes, but the extract basically convinces me to avoid the guy like the plague. The underlying premises seem to be something like:

  • The only remaining explanation, when someone knows enough to feel a book is too simple for them, is that they know everything.

  • They should discard all that they know - empty before you fill - so they can learn from someone with more knowledge than them.

  • Go learn lisp... -shrug-

It seems like incredibly bad advice, when someone thinks a lot of what's in a book is too simple for them, to essentially yell at them to shut up and knuckle down, as compared to, say, pointing them to a few things that are generally not covered that well in self-learning and directing them to a more advanced book.

Replies from: Kindly, fubarobfusco, Kingoftheinternet
comment by Kindly · 2013-02-12T03:29:40.580Z · LW(p) · GW(p)

Agreed. I'm actually not sure if what I should take away from that introduction is "This material seems easy but isn't, so go through everything carefully even if you think you understand it" or the opposite: "If this book seems easy, it's not advanced enough for you and you already know everything; so read something else instead."

Replies from: CCC
comment by CCC · 2013-02-12T07:15:57.877Z · LW(p) · GW(p)

I took it as meaning the second. There's even a recommendation as to what else to read: a book on Lisp.

Replies from: Nebu, Estarlio
comment by Nebu · 2013-02-15T17:20:42.037Z · LW(p) · GW(p)

Of course, if your goal is to learn Python but you find Zed's book too easy, "Read a book on Lisp" is probably not suitable advice.

comment by Estarlio · 2013-02-16T23:28:58.228Z · LW(p) · GW(p)

I strongly suspect that's just him being an ass. If you're finding the concepts in his book too simple, there are plenty of other concepts you could be learning about in computer science that would expand your ability as a programmer more quickly than just picking up another language.

If you want to become a better programmer after learning the basics of a language, I recommend you go and pick up some books on the puzzles and problems in computer science and look at how to solve them using a computer. Go and read up on different search functions and pathfinding routines; on relational databases; on types as an expressive system, rather than just as something that someone's making you do; on using a computer to solve tic-tac-toe... Things like that. You'll get better a lot faster and become a much better programmer than you will just from picking up another language, which, let's face it, you're still not going to have a deep understanding of the uses of.

Which isn't to say that there's no learning in picking up another language. There is; I don't know any good programmers who only know one language. But it's not the fastest way to get the most gain in the beginning.

Once you have that extra knowledge about how to actually use the language you just learned, then by all means go and learn another language.

If you just know Python, then you know what we'd call a high-level imperative language. Imperative just means you're giving the computer a list of commands; high-level means that you're not really telling the computer how to execute them (i.e. the further you get from telling it to do things with specific memory locations, and which commands to use from the processor's instruction set, the higher-level the language is).

C will give you the rest of the procedural/imperative side of things that you didn't really get in Python; you'll learn about memory allocation and talking to the operating system. It's a lower-level language but still works in more or less the same style of programming. Haskell and Lisp are both fairly high-level languages, like Python, but will give you functional abstraction, which is a different way of looking at things than procedural programming.
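
To make the imperative/functional contrast concrete, here is a toy sketch in Python (my own example, not taken from any of the books under discussion): the same computation written first as a list of commands that mutate state, then as a composition of expressions.

```python
# Imperative style: build the result step by step by mutating state.
def even_squares_imperative(numbers):
    result = []
    for n in numbers:
        if n % 2 == 0:
            result.append(n * n)
    return result

# Functional style: describe the result as a composition of
# expressions (filter, then map), with no mutation.
def even_squares_functional(numbers):
    return list(map(lambda n: n * n, filter(lambda n: n % 2 == 0, numbers)))

assert even_squares_imperative(range(10)) == even_squares_functional(range(10))
```

Lisp and Haskell push you towards the second style throughout, which is the sense in which they teach a different way of thinking about problems.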

But... even if you were going to recommend a language to learn after Python, and you knew the person already knew about stuff like relational databases and search functions and could use their skill to solve problems so that you weren't just playing a cruel joke on them, and even if you were going to recommend a functional language: deep breath ... it wouldn't be Lisp, I think.

Lisp has a horrible written style for a beginner. It does functional abstraction, it's true enough - and that is a different way of thinking about problems than the procedural programming that's used in Python - but so does Haskell, and Haskell programs don't look like someone threw up a load of brackets all over the screen; they're actually readable (which may explain why Haskell actually gets used in real life, whereas I've never seen Lisp used for much outside of a university). Haskell also has the awesomeness of monadic parser combinators, which are really nice and don't show up in Lisp.

Lisp's big thing is its macros. I can't think of much other reason to learn the language, and frankly I try to use macros as little as possible anyway, because they're so much easier to misuse than functions.

So, yeah. I can see where you're coming from but I don't think he's really on the level there.

Replies from: Estarlio
comment by Estarlio · 2013-02-17T21:01:43.365Z · LW(p) · GW(p)

Would you care to share your reason for the downvote? I promise not to dispute criticism so you don't have to worry about it escalating into a time-sink.

Replies from: CCC
comment by CCC · 2013-02-19T08:09:00.849Z · LW(p) · GW(p)

I can't, because I wasn't the one who downvoted it. (I can see why one might think so, since the comment was in response to my comment).

Your comment thoroughly explores possible routes to improving the ability of a novice programmer who knows Python, probably to a far more detailed level than the author of the original "go read a book on Lisp" comment did. I saw nothing in it that requires a downvote, but no particular benefit in continuing the original debate, either (debating a comment more thoroughly than the person who originally made it, in that person's absence, is only of particular benefit if at least one person firmly agrees with the original statement; while I think I can see where it came from, it's a matter of indifference to me).

comment by fubarobfusco · 2013-02-12T04:01:54.766Z · LW(p) · GW(p)

To me, it seems like a horribly hostile approach to teaching people, which comes across as saying, "In order to learn anything from me, you must abase yourself before me." Which is to say, "I am incapable of conveying useful information to anyone who does not present abject submission to me."

But then, it's possible that I'm just hearing Severus Snape (or the class of lousy teachers he is an imitation of) in the "so you think you know everything?" bullshit.

comment by Kingoftheinternet · 2013-02-12T15:06:13.253Z · LW(p) · GW(p)

I think the quote's main function is to warn those who don't know anything about programming of a kind of person they're likely to encounter on their journey (people who know everything and think their preferences are very right), and to give them some confidence to resist these people. It also drives home the point that people who know how to program already won't get much out of the book. I quoted it because it addresses a common failure mode of very intelligent and skilled people.

comment by wedrifid · 2013-02-12T07:01:26.936Z · LW(p) · GW(p)

This quote was enough for me to take Learn Python The Hard Way off my reading list. I had previously heard good reports about it but this gives me the impression that the book is likely to be far too opinionated and dogmatic for my taste. Mind you I have reason to suspect the same of Python itself.

Replies from: Nornagest, sketerpot, shokwave
comment by Nornagest · 2013-02-12T08:28:59.793Z · LW(p) · GW(p)

In case you'd be interested in a second opinion: I made it through twenty-one exercises of Learn Ruby the Hard Way a couple months ago, got bored, and have retained almost none of it. I'm probably not the target audience, but that doesn't bother me so much; on the other hand, if I'm not retaining stuff after faithfully going through Hard Way's copybook approach to language acquisition, that doesn't speak well for its efficacy among people who are. Unless for some reason programming experience makes me less likely to retain new languages? But that's (a) counterintuitive, and (b) contrary to data I've seen for natural languages, at least.

In any case, I don't think I'll be returning to the series.

comment by sketerpot · 2013-02-19T04:05:29.744Z · LW(p) · GW(p)

Python is just a programming language. Insofar as it can be said to have a personality, that personality is an accommodating and inoffensive one. The community is pretty good, too; the concentration of assholes is unremarkable, and places like /r/learnpython are quick to help out beginners with questions.

comment by shokwave · 2013-02-12T07:09:06.110Z · LW(p) · GW(p)

My understanding having completed parts of it is that it's aimed at someone who doesn't know what a programming language is. If you do know, you're probably better off with another book (and you're also probably better off with something other than Python, but that's my personal opinion clashing with Python's opinions).

comment by scav · 2013-02-07T16:13:25.405Z · LW(p) · GW(p)

But I've never seen the Icarus story as a lesson about the limitations of humans. I see it as a lesson about the limitations of wax as an adhesive.

-- Randall Munroe

Replies from: BerryPick6
comment by BerryPick6 · 2013-02-07T16:25:22.948Z · LW(p) · GW(p)

Definitely a double, but I can't link the others right now.

Replies from: scav, tgb
comment by scav · 2013-02-07T16:43:35.400Z · LW(p) · GW(p)

I thought that unlikely, because it's from last week's XKCD What If?

Maybe Randall has said it before (or borrowed it from someone else).

Replies from: JGWeissman
comment by JGWeissman · 2013-02-07T16:48:16.002Z · LW(p) · GW(p)

Earlier posting

Replies from: scav
comment by scav · 2013-02-08T16:34:50.739Z · LW(p) · GW(p)

OK thanks.

I don't know why I didn't see it - I tried searching the page for Icarus before posting :(

Replies from: JGWeissman, gryffinp
comment by JGWeissman · 2013-02-08T16:47:32.983Z · LW(p) · GW(p)

I tried searching the page for Icarus before posting

I searched on the entire quote. That's probably easier and more reliable than trying to pick out a keyword.

comment by gryffinp · 2013-02-15T20:55:05.856Z · LW(p) · GW(p)

Well, that post was from the January thread. If you only Control-F'd this page, then it wouldn't have come up.

comment by tgb · 2013-02-07T16:42:13.039Z · LW(p) · GW(p)

That seems unlikely; the quote above was only posted about three weeks ago and nothing about Icarus turns up in a search. Can anyone find a duplicate?

Replies from: BerryPick6
comment by BerryPick6 · 2013-02-07T17:08:31.224Z · LW(p) · GW(p)

Two, in fact.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-07T18:14:03.060Z · LW(p) · GW(p)

It was three, but I deleted mine.

comment by [deleted] · 2013-02-02T01:18:28.715Z · LW(p) · GW(p)

.

Replies from: sketerpot, Andreas_Giger, None
comment by sketerpot · 2013-02-02T06:13:42.042Z · LW(p) · GW(p)

The publisher selected that design. The author's involvement almost always ends with the manuscript.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-02-03T22:20:44.173Z · LW(p) · GW(p)

Authors are deliberately excluded from all this, on the grounds that they're so in love with what's inside the book that they don't understand what the cover stuff is for. Which is advertising.

The purpose of cover art is not to show the reader what's inside the book.

It's to get his attention from across the bookstore and get him to pick the book up in the first place.

Half-naked women and muscular barbarians are very good for getting teenaged readers to at least take a look. Black and red are good, too. And spiffy hardware, like spaceships. Cut-out covers, foil, blood, all that stuff--it gets attention, and the art and marketing people really don't give a damn whether it agrees with what's inside the book.

The cover gets you to pick up the book and read the blurbs; the blurbs are supposed to convince you to actually buy it. The blurb writer doesn't care any more about accuracy than the art director did; his job is to sell the book, period. One way to do that is to skim through the book and pick out all the most lurid details.

So all this is done without the author's interference. The author might put up a fuss about the half-naked women, since everyone in the story is ninety years old and wearing dirty bathrobes the whole time. The author might object to having his sentimental tale of old age cover-blurbed, "Shocking Love Secrets of the Ancients!" Who wants to waste time arguing with him? Better to shut him out and deliver the package as a fait accompli.

-- Lawrence Watt-Evans

comment by Andreas_Giger · 2013-02-02T15:40:32.987Z · LW(p) · GW(p)

You don't "judge" a book by its cover; you use the cover as additional evidence to more accurately predict what's in the book. Knowing what the publisher wants you to assume about the book is preferable to not knowing.

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-02T17:01:02.642Z · LW(p) · GW(p)

(Except when it's a novel and the text on the back cover spoils events from the middle of the book or later, which I would have preferred not to read until the right time.)

Replies from: aleksiL
comment by aleksiL · 2013-02-03T14:23:57.256Z · LW(p) · GW(p)

Spoilers matter less than you think.

Replies from: Kaj_Sotala, army1987, roystgnr
comment by Kaj_Sotala · 2013-02-03T22:16:24.097Z · LW(p) · GW(p)

According to a single counter-intuitive (and therefore more likely to make headlines), unreplicated study.

Replies from: BerryPick6
comment by BerryPick6 · 2013-02-03T22:17:36.031Z · LW(p) · GW(p)

Gah! Spoiler!

comment by A1987dM (army1987) · 2013-02-03T23:47:51.271Z · LW(p) · GW(p)

Those error bars look large enough that I could still be right about myself even without being a total freak.

Replies from: satt
comment by satt · 2013-02-04T01:11:06.909Z · LW(p) · GW(p)

Really? 11 of the 12 stories got rated higher when spoiled, which is decent evidence against the nil hypothesis (spoilers have zero effect on hedonic ratings) regardless of the error bars' size. Under the nil hypothesis, each story has a 50/50 chance of being rated higher when spoiled, giving a probability of (¹²C₁₁ × 0.5¹¹ × 0.5¹) + (¹²C₁₂ × 0.5¹² × 0.5⁰) = 0.0032 that ≥11 stories get a higher rating when spoiled. So the nil hypothesis gets rejected with a p-value of 0.0063 (the probability's doubled to make the test two-tailed), and presumably the results are still stronger evidence against a spoilers-are-bad hypothesis.
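
(For anyone who wants to check the arithmetic, here's a minimal sketch using only Python's standard library; the function name is mine:)

```python
from math import comb  # Python 3.8+

def sign_test_two_tailed(k, n):
    # Two-tailed sign test: twice the probability, under the 50/50
    # nil hypothesis, of at least k of n comparisons going one way.
    one_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * one_tail)

print(sign_test_two_tailed(11, 12))  # ~0.0063, matching the figure above
```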

This, of course, doesn't account for unseen confounders, inter-individual variation in hedonic spoiler effects, publication bias, or the sample (79% female and taken from "the psychology subject pool at the University of California, San Diego") being unrepresentative of people in general. So you're still not necessarily a total freak!

Replies from: army1987, Kindly
comment by A1987dM (army1987) · 2013-02-04T20:10:54.000Z · LW(p) · GW(p)

Yeah, given that study it doesn't seem likely that works are on average liked less when spoiled; but what I meant is that probably there are certain individuals who like works less when spoiled. (Imagine Alice said something to the effect that she prefers chocolate ice cream to vanilla ice cream, and Bob said that it's not actually the case that vanilla tastes worse than chocolate, citing a study in which for 11 out of 12 ice cream brands their vanilla ice cream is liked on average more than their chocolate ice cream -- though in most cases the difference between the averages is not much bigger than each standard deviation; even if the study was conducted among a demographic that does include Alice, that still wouldn't necessarily mean Alice is mistaken, lying, or particularly unusual, would it?)

Replies from: satt
comment by satt · 2013-02-04T21:29:19.339Z · LW(p) · GW(p)

Just so. These are the sort of "inter-individual variation in hedonic spoiler effects" I had in mind earlier.

Edit: to elaborate a bit, it was the "error bars look large enough" bit of your earlier comment that triggered my sceptical "Really?" reaction. Apart from that bit I agree(d) with you!

Edit 2: aha, I probably did misunderstand you earlier. I originally interpreted your error bars comment as a comment on the statistical significance of the pairwise differences in bar length, but I guess you were actually ballparking the population standard deviation of spoiler effect from the sample size and the standard errors of the means.

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-05T04:56:34.629Z · LW(p) · GW(p)

These are the sort of "inter-individual variation in hedonic spoiler effects" I had in mind earlier.

Huh. For some reason I had read that as "intra-individual". Whatever happened to the "assume people are saying something reasonable" module in my brain?

I guess you were actually ballparking the population standard deviation of spoiler effect from the sample size and the standard errors of the means.

Yep.

comment by Kindly · 2013-02-04T14:56:54.217Z · LW(p) · GW(p)

You can't just ignore the error bars like that. In 8 of the 12 cases, the error bars overlap, which means there's a decent chance that those comparisons could have gone either way, even assuming the sample mean is exactly correct. A spoilers-are-good hypothesis still has to bear the weight of this element of chance.

As a rough estimate: I'd say we can be sure that 4 stories are definitely better spoilered (>2 sd's apart); out of the ones 1..2 sd's apart, maybe 3 are actually better spoilered; and out of the remainder, they could've gone either way. So we have maybe 9 out of 12 stories that are better with spoilers, which gives a probability of 14.5% if we do the same two-tailed test on the same null hypothesis.
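
(For what it's worth, plugging these numbers into the `sign_test_two_tailed` sketch in a comment above gives `sign_test_two_tailed(9, 12)` ≈ 0.146, which matches the 14.5% figure.)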

I don't necessarily want you to trust the numbers above, because I basically eyeballed everything; however, it gives an idea of why error bars matter.

Replies from: satt
comment by satt · 2013-02-04T22:56:40.231Z · LW(p) · GW(p)

You can't just ignore the error bars like that.

Ignoring the error bars does throw away potentially useful information, and this does break the rules of Bayes Club. But this makes the test a conservative one (Wikipedia: "it has very general applicability but may lack the statistical power of other tests"), which just makes the rejection of the nil hypothesis all the more convincing.

In 8 of the 12 cases, the error bars overlap, which means there's a decent chance that those comparisons could have gone either way, even assuming the sample mean is exactly correct. A spoilers-are-good hypothesis still has to bear the weight of this element of chance.

If I'm interpreting this correctly, "the error bars overlap" means that the heights of two adjacent bars are within ≈2 standard errors of each other. In that case, overlapping error bars doesn't necessarily indicate a decent chance that the comparisons could go either way; a 2 std. error difference is quite a big one.

As a rough estimate: I'd say we can be sure that 4 stories are definitely better spoilered (>2 sd's apart); out of the ones 1..2 sd's apart, maybe 3 are actually better spoilered; and out of the remainder, they could've gone either way. So we have maybe 9 out of 12 stories that are better with spoilers, which gives a probability of 14.5% if we do the same two-tailed test on the same null hypothesis.

But this is an invalid application of the test. The sign test already allows for the possibility that each pairwise comparison can have the wrong sign. Making your own adjustments to the numbers before feeding them into the test is an overcorrection. (Indeed, if "we can be sure that 4 stories are definitely better spoilered", there's no need to statistically test the nil hypothesis because we already have definite evidence that it is false!)

I don't necessarily want you to trust the numbers above, because I basically eyeballed everything; however, it gives an idea of why error bars matter.

This reminds me of a nice advantage of the sign test. One needn't worry about squinting at error bars; it suffices to be able to see which of each pair of solid bars is longer!

Replies from: Kindly
comment by Kindly · 2013-02-04T23:04:13.797Z · LW(p) · GW(p)

Indeed, if "we can be sure that 4 stories are definitely better spoilered", there's no need to statistically test the nil hypothesis because we already have definite evidence that it is false!

Okay, if all you're testing is that "there exist stories for which spoilers make reading more fun" then yes, you're done at that point. As far as I'm concerned, it's obvious that such stories exist for either direction; the conclusion "spoilers are good" or "spoilers are bad" follows if one type of story dominates.

comment by roystgnr · 2013-02-05T22:43:28.279Z · LW(p) · GW(p)

I don't like the study setup there. One readthrough of spoiled vs one readthrough of unspoiled material lets you compare the participants' hedonic ratings of dramatic irony vs mystery, and it's quite reasonable that the former would be equally or more enjoyable... but unlike in the study, in real life unspoiled material can be read twice: the first time for the mystery, then the second time for the dramatic irony; with spoiled material you only get the latter.

comment by [deleted] · 2013-02-02T01:25:32.625Z · LW(p) · GW(p)

No, they selected them to sell more copies by hijacking the easier-to-press buttons of your nervous system.

Replies from: Nic_Smith, HalMorris
comment by Nic_Smith · 2013-02-02T02:38:51.326Z · LW(p) · GW(p)

There's something to that, but it's not as if Varian's Microeconomic Analysis is going to have the cover of Spice and Wolf 1.

Replies from: Desrtopa
comment by Desrtopa · 2013-02-02T14:31:22.443Z · LW(p) · GW(p)

On the other hand, the method of judging a book's contents by its cover clearly has holes in it considering Spice and Wolf 1 has the cover of Spice and Wolf 1.

Replies from: HalMorris
comment by HalMorris · 2013-02-03T16:37:53.174Z · LW(p) · GW(p)

Deliberate non sequitur alert: I'm often attracted to a cover that has holes in it. E.g. The Curious Incident of the Dog in the Night-Time.

comment by HalMorris · 2013-02-02T02:49:20.113Z · LW(p) · GW(p)

Probably purely true for some books, but as someone who buys thousands of books a year, my impression is they are very likely to reveal who they think their readers will be (hence a lot of covers say "stay away" to me), and just occasionally they can show a startling streak of originality. E.g. the board designs (there may be no dustjacket) on Dave Eggers' books are uniquely artistic in my opinion, and in this case since he has been seriously into graphics, I don't think it's any accident. You might think "Maybe this book is written by a bold and original person" and IMHO you'd be right. Also, the cover design of The Curious Incident of the Dog in the Night-Time by Mark Haddon kind of sent a message on my wavelength and it was not misleading (for me).

comment by simplicio · 2013-02-23T01:10:18.370Z · LW(p) · GW(p)

Whenever you feel that society is forcing you to conform or treating you like a number, not a person, just ask yourself the following question: "Does my individuality create more work for other people?" If the answer is yes, then you should be prepared to pay more.

(Joseph Heath & Andrew Potter, The Rebel Sell)

comment by Oscar_Cunningham · 2013-02-01T21:04:17.588Z · LW(p) · GW(p)

Evolutionary psychology, economics, and behavior studies in general often fail to account for what may be an innate, or strongly socialized, motivating variable. "Rational people will seek to maximize their gain." Sure. Now define gain. In many discussions about behavior and economics, we do not account for obedience and social pressure. This is a mistake, as it is evident that it is a highly significant, though invisible, determinant.

The Last Psychiatrist (http://thelastpsychiatrist.com/2009/06/delaying_gratification.html)

comment by Shmi (shminux) · 2013-02-12T07:45:18.609Z · LW(p) · GW(p)

Instead of assuming that people are dumb, ignorant, and making mistakes, assume they are smart, doing their best, and that you lack context.

@slicknet

Replies from: Vladimir_Nesov, Eugine_Nier, Document, Jakeness, ygert, Document, MugaSofer
comment by Vladimir_Nesov · 2013-02-13T13:56:44.349Z · LW(p) · GW(p)

If we are in the business of making assumptions, there is no dichotomy, you can as well consider both hypotheticals. (Actually believing that either of these holds in general, or in any given case where you don't have sufficient information, would probably be dumb, ignorant, a mistake.)

Replies from: Creutzer, shminux
comment by Creutzer · 2013-02-17T21:57:31.694Z · LW(p) · GW(p)

This misses the point a bit due to an equivocation on "assume". In ordinary discourse, it usually means "assume for the purpose of action until you encounter contrary evidence". That's very different from the scientist's hypothetical assumptions that are made in order to figure out what follows from a hypothesis.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2013-02-18T00:51:55.063Z · LW(p) · GW(p)

In ordinary discourse, it usually means "assume for the purpose of action until you encounter contrary evidence"

It's epistemically incorrect to adopt a belief "for the purpose of action", and permitting "contrary evidence" to correct the error doesn't make it a non-error.

Replies from: shaih
comment by shaih · 2013-02-18T02:51:51.075Z · LW(p) · GW(p)

I think what Creutzer means is that in ordinary discourse (everyday problems where you aren't always able to give a thought the time it deserves, when you don't even have five minutes by the clock to think about the problem rationally) it is better to rely on the heuristic "assume people are smart and some unknown context is causing problems" than on the heuristic "people who make mistakes are dumb". That said, heuristics are only good most of the time and may lead you to errors such as

It's epistemically incorrect to adopt a belief "for the purpose of action"

In this case it is still technically an error, but you are merely attempting to be "less wrong" about a case where you don't have time to be correct. Assuming the heuristic until you encounter contrary evidence (or until you have the time to think of better answers) follows closely the point of this website.

Replies from: Vladimir_Nesov, Creutzer
comment by Vladimir_Nesov · 2013-02-19T16:01:56.679Z · LW(p) · GW(p)

Using a heuristic doesn't require believing that it's flawless. You are in fact performing some action, but that is also possible in the absence of a careful understanding of its effect. There is no point in doing the additional damage of accepting a belief for reasons other than evidence of its correctness.

comment by Creutzer · 2013-02-18T10:27:12.639Z · LW(p) · GW(p)

Exactly, thanks for the clarification.

comment by Shmi (shminux) · 2013-02-14T17:23:24.252Z · LW(p) · GW(p)

I believe that this statement, while correct, misses the point of preemptive debiasing. Yvain said it better.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2013-02-14T19:32:26.147Z · LW(p) · GW(p)

The original quote draws attention to the mistake of not giving enough attention to the hypothetical where something appears to be wrong/stupid, but upon further investigation turns out to be correct/interesting. However, it confuses the importance of the hypothetical with its probability, and endorses increasing its level of certainty. I pointed out this error in the formulation, but didn't restate the lesson of the quote (i.e. my point didn't include the lesson, only the flaw in its presentation, so naturally it "misses" the point of the lesson by not containing it).

comment by Eugine_Nier · 2013-02-13T07:45:51.622Z · LW(p) · GW(p)

Also, consider the possibility that it is you who is dumb, ignorant, and making mistakes.

Replies from: BillyOblivion
comment by BillyOblivion · 2013-02-23T05:32:21.854Z · LW(p) · GW(p)

I don't consider it, I assume it.

But "dumb" and "ignorant" are not points on a line, they are relative positions.

To quote this bloke at a climbing gym I used to frequent "We all suck at our own level".

comment by Document · 2013-02-23T22:37:33.126Z · LW(p) · GW(p)

With apologies for double-commenting: "Don't assume others are ignorant" is likely to be read by a lot of people (including myself at first) as "Aim high and don't be easily convinced of an inferential gap". Posts on underconfidence may also be relevant.

comment by Jakeness · 2013-02-23T19:36:27.549Z · LW(p) · GW(p)

I would somewhat agree with this if the phrase "making mistakes" was removed. People generally have poor reasoning skills and make non-optimal choices >99% of the time. (Yes, I am including myself and you, the reader, in this generalization.)

comment by ygert · 2013-02-12T08:13:49.842Z · LW(p) · GW(p)

Or better yet, assume nothing, and reserve judgement until you have more information.

Replies from: shminux
comment by Shmi (shminux) · 2013-02-22T21:09:12.404Z · LW(p) · GW(p)

Or better yet, assume nothing

You always assume things, whether you are aware of it or not. At least by making your assumptions explicit and conscious, you have a better chance of noticing when they are wrong. And assuming "that people are dumb, ignorant, and making mistakes" is a common default subconscious failure mode.

comment by Document · 2013-02-22T20:58:40.956Z · LW(p) · GW(p)

In most situations there are multiple people other than yourself who each think the others are dumb, ignorant and making mistakes. Don't assume that the one you happen to be interacting with at the moment is right by default.

comment by MugaSofer · 2013-03-05T21:51:44.495Z · LW(p) · GW(p)

You may or may not have noticed, but most people are biased. Whether bias counts as "dumb", "ignorant" or "making mistakes" is left as an exercise for the reader.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-02T13:03:41.262Z · LW(p) · GW(p)

Heaven? They tried to recruit me, but I turned them down. My place is here in shadows, with the blood and the fear and the screams of the dying, standing back to back with my loves against the world.

-- Time Braid

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-28T03:13:18.721Z · LW(p) · GW(p)

Responsibility without power breeds cynicism.

-- Scott Sumner (talking about Italian politicians when the EU controls their monetary policy, but it generalizes)

Replies from: wedrifid
comment by wedrifid · 2013-02-28T04:10:10.912Z · LW(p) · GW(p)

Responsibility without power breeds cynicism.

-- Scott Sumner (talking about Italian politicians when the EU controls their monetary policy, but it generalizes)

This just prompted me to (hypothetically, for the sake of amusement) reinterpret many of Eliezer's actions as a psychological experiment wherein he has contrived exaggerated scenarios in order to test this empirically.

comment by Qiaochu_Yuan · 2013-02-25T22:44:25.563Z · LW(p) · GW(p)

I am, in most of my endeavors, a solidly successful person. I decide I want things to be a certain way, and I make it happen. I've done it with my career, my learning of music, understanding of foreign languages, and basically everything I've tried to do. For a long time, I've known that the key to getting started down the path of being remarkable in anything is to simply act with the intention of being remarkable.

If I want a better-than-average career, I can't simply 'go with the flow' and get it. Most people do just that: they wish for an outcome but make no intention-driven actions toward that outcome. If they would just do something most people would find that they get some version of the outcome they're looking for. That's been my secret. Stop wishing and start doing.

Yet here I was, talking about arguably the most important part of my life - my health [emphasis added] - as if it was something I had no control over. I had been going with the flow for years. Wishing for an outcome and waiting to see if it would come. I was the limp, powerless ego I detest in other people.

But somehow, as the school nerd who always got picked last for everything, I had allowed 'not being good at sports' or 'not being fit' to enter what I considered to be inherent attributes of myself [emphasis added]. The net result is that I was left with an understanding of myself as an incomplete person. And though I had (perhaps) overcompensated for that incompleteness by kicking ass in every other way I could, I was still carrying this powerlessness around with me and it was very slowly and subtly gnawing away at me from the inside.

-- Chad Fowler (from The 4-Hour Body)

comment by jsbennett86 · 2013-02-18T10:40:35.515Z · LW(p) · GW(p)

The best way to have a good idea is to have lots of ideas.

Linus Pauling

Replies from: Qiaochu_Yuan, army1987, jsbennett86, DanArmak
comment by Qiaochu_Yuan · 2013-02-20T07:17:17.849Z · LW(p) · GW(p)

The example in the comic is not a good one. Of the choices on the board, E being proportional to mc^2 is the only option where the units match. You only need to have that one idea to save yourself the trouble of having lots of other ideas.
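
Spelling out the unit check in SI units (a sketch of the dimensional-analysis step; I haven't reproduced the comic's other candidates):

```latex
[E] = \mathrm{J} = \mathrm{kg \, m^2 \, s^{-2}}, \qquad
[mc^2] = \mathrm{kg} \cdot (\mathrm{m \, s^{-1}})^2 = \mathrm{kg \, m^2 \, s^{-2}}.
```

Any candidate formula whose right-hand side doesn't reduce to kg m²/s² can be discarded without further thought.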

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-20T11:03:26.674Z · LW(p) · GW(p)

It's a joke, which I assume is intended for a mostly non-physicist audience.

Replies from: simplicio
comment by simplicio · 2013-02-23T01:42:08.514Z · LW(p) · GW(p)

We demand complete rigour from all forms of levity! The unexamined joke is not worth joking!

Replies from: BillyOblivion
comment by BillyOblivion · 2013-02-23T05:17:25.525Z · LW(p) · GW(p)

Mickey Mouse is dead Got kicked in the head Cause people got too serious They planned out what they said They couldn't take the fantasy They tried to accept reality Analyzed the laughs Cause pleasure comes in halves The purity of comedy They had to take it seriously Changed the words around Tried to make it look profound ...

--Subhumans, "Mickey Mouse is Dead"

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-23T09:52:17.996Z · LW(p) · GW(p)

To prevent lines from being merged together, add two spaces at the end of each one.
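
For instance, writing

```
Mickey Mouse is dead··
Got kicked in the head··
```

(with "··" standing in for the two trailing spaces, which are otherwise invisible) makes the lyrics render as separate lines instead of being merged into one.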

Replies from: BillyOblivion
comment by BillyOblivion · 2013-02-24T21:17:55.597Z · LW(p) · GW(p)

That's so...typewriter.

Thanks.

comment by A1987dM (army1987) · 2013-02-18T13:08:06.704Z · LW(p) · GW(p)

Yes, but also being able to tell which of those ideas are good is even better.

comment by jsbennett86 · 2013-02-18T10:41:00.044Z · LW(p) · GW(p)

From the alt-text in the above-linked comic:

Corollary: The most prolific people in the world suck 99% of the time.

comment by DanArmak · 2013-02-20T20:29:38.978Z · LW(p) · GW(p)

It's necessary, but not sufficient.

comment by Stabilizer · 2013-02-05T01:17:36.317Z · LW(p) · GW(p)

Clarity is the counterbalance of profound thoughts.

-Luc de Clapiers

comment by cody-bryce · 2013-02-20T19:39:48.943Z · LW(p) · GW(p)

"We're even wrong about which mistakes we're making."

-Carl Winfeld

Replies from: ygert
comment by ygert · 2013-02-20T20:02:20.744Z · LW(p) · GW(p)

That's a pretty great thing to be wrong about!

Replies from: DanArmak
comment by DanArmak · 2013-02-20T20:27:48.304Z · LW(p) · GW(p)

Not at all. It means you don't know about the real mistakes you make (so you can't fix them), and you spend resources trying to fix something that's not really broken.

comment by James_Miller · 2013-02-01T19:35:27.622Z · LW(p) · GW(p)

No scientific conclusions can ever be good or bad, desirable or undesirable, sexist, racist, offensive, reactionary or dangerous; they can only be true or false. No other adjectives apply.

Satoshi Kanazawa

Replies from: fubarobfusco, Nornagest, shminux, Qiaochu_Yuan, alex_zag_al, ChristianKl
comment by fubarobfusco · 2013-02-02T04:16:55.598Z · LW(p) · GW(p)

This seems to imply that science is somehow free from motivated cognition — people looking for evidence to support their biases. Since other fields of human reason are not, it would be astonishing if science were.

(Bear in mind, I use "science" mostly as the name of a social institution — the scientific community, replete with journals, grants and funding sources, tenure, and all — and not as a name for an idealized form of pure knowledge-seeking.)

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-02T18:22:10.031Z · LW(p) · GW(p)

I take the quote to be normative rather than descriptive. Science is not free from motivated cognition, but that's a bug, not a feature.

Replies from: fubarobfusco
comment by fubarobfusco · 2013-02-02T21:03:38.633Z · LW(p) · GW(p)

Sure, but I often see this sort of argument used against concerns about bias in (claimed) scientific conclusions. I'd rather people didn't treat science as privileged against bias, and the quote above seems to encourage that.

comment by Nornagest · 2013-02-01T20:35:08.441Z · LW(p) · GW(p)

While I pretty much agree with the quote, it doesn't provide anyone that isn't already convinced with many good reasons to believe it. Less of an unusually rational statement and more of an empiricist applause light, in other words.

In any case, a scientific conclusion needn't be inherently offensive for closer examination to be recommended: if most researchers' backgrounds are likely to introduce implicit biases toward certain conclusions on certain topics, then taking a close look at the experimental structure to rule out such bias isn't merely a good political sop but is actually good science in its own right. Of course, dealing with this properly would involve hard work and numbers and wouldn't involve decrying all but the worst studies as bad science when you've read no more than the abstract.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-02-02T05:56:24.193Z · LW(p) · GW(p)

if most researchers' backgrounds are likely to introduce implicit biases toward certain conclusions on certain topics, then taking a close look at the experimental structure to rule out such bias isn't merely a good political sop but is actually good science in its own right.

Unfortunately, since the people deciding which papers to take a closer look at tend to have the same biases as most scientists, the papers that actually get examined closely are the ones going against common biases.

Replies from: Nornagest
comment by Nornagest · 2013-02-02T07:19:00.467Z · LW(p) · GW(p)

I hate to find myself in the position of playing apologist for this mentality, but I believe the party line is that most of the relevant biases are instilled by mass culture and present at some level even in most people trying to combat them, never mind scientists who oppose them in a kind of vague way but mostly have better things to do with their lives.

In light of the Implicit Association Test this doesn't even seem all that far-fetched to me. The question is to what extent it warrants being paranoid about experimental design, and that's where I find myself begging to differ.

comment by Shmi (shminux) · 2013-02-01T19:41:48.236Z · LW(p) · GW(p)

I'd take issue with "undesirable", the way I understand it. For example, the conclusion that traveling FTL is impossible without major scientific breakthroughs was quite undesirable to those who want to reach for the stars. Similarly with "dangerous": the discovery of nuclear energy was quite dangerous.

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-02T18:20:39.913Z · LW(p) · GW(p)

If travelling faster than light is possible,
I desire to believe that travelling faster than light is possible;
If travelling faster than light is impossible,
I desire to believe that travelling faster than light is impossible;
Let me not become attached to beliefs I may not want.

Replies from: shminux
comment by Shmi (shminux) · 2013-02-02T19:15:05.609Z · LW(p) · GW(p)

Something not (currently) possible can still be desirable.

Replies from: Larks
comment by Larks · 2013-02-02T19:59:34.342Z · LW(p) · GW(p)

FTL being impossible is undesirable if you want to go to the stars.

The conclusion that "FTL is impossible" is undesirable if and only iff FTL is possible.

The two conditions are very different.

Replies from: shminux, Baruta07
comment by Shmi (shminux) · 2013-02-05T02:34:05.070Z · LW(p) · GW(p)

They are indeed. You seem to have added a level of indirection not present in the original statement. One statement is about this world, the other is about possible worlds.

comment by Baruta07 · 2013-02-04T21:49:05.779Z · LW(p) · GW(p)

Shouldn't it read

"FTL is impossible" is undesirable if and only if FTL is possible."

as it stands it reads "FTL is impossible" is undesirable if and only if and only if (iff) FTL is possible.

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2013-02-06T12:38:54.932Z · LW(p) · GW(p)

Actually, it should be "FTL is impossible" is undesirable if and only if FTL is possible."

Replies from: Baruta07
comment by Baruta07 · 2013-02-06T17:34:53.566Z · LW(p) · GW(p)

Facepalms. Okay, this is why I need to proofread everything I write.

Thanks

Replies from: Baughn
comment by Baughn · 2013-02-08T17:05:00.786Z · LW(p) · GW(p)

Shouldn't it really be "Believing that FTL is impossible is undesirable iff FTL is possible"?

You seemed to be doing something clever with quotes, but mostly that made it hard to read. :P

Replies from: Baruta07
comment by Baruta07 · 2013-02-08T17:30:11.903Z · LW(p) · GW(p)

The author originally added an extra f to the last if in the original post rendering it as "if and only if and only if" instead of "if and only if"

comment by Qiaochu_Yuan · 2013-02-01T19:55:58.661Z · LW(p) · GW(p)

I think it's pretty clear that scientific conclusions can be dangerous in the sense that telling everybody about them is dangerous. For example, the possibility of nuclear weapons. On the other hand, there should probably be an ethical injunction against deciding what kind of science other people get to do. (But in return maybe scientists themselves should think more carefully about whether what they're doing is going to kill the human race or not.)

Replies from: Sengachi, NancyLebovitz
comment by Sengachi · 2013-02-16T21:40:12.188Z · LW(p) · GW(p)

That's the thing: the science wasn't good or bad; it was the decision to give the results to certain people that held that quality of good/bad. And it was very, very bad. But the process of looking at the world, wondering how it works, then figuring out how it works, and then making it work the way you desire, that process carries with it no intrinsic moral qualities.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-16T22:43:59.635Z · LW(p) · GW(p)

But the process of looking at the world, wondering how it works, then figuring out how it works, and then making it work the way you desire, that process carries with it no intrinsic moral qualities.

I don't know what you mean by "intrinsic" moral qualities (is this to be contrasted with "extrinsic" moral qualities, and should I care less about the latter or what?). What I'm saying is just that the decision to pursue some scientific research has bad consequences (whether or not you intend to publicize it: doing it increases the probability that it will get publicized one way or another).

Replies from: shaih
comment by shaih · 2013-02-18T22:47:50.754Z · LW(p) · GW(p)

The majority of scientific discoveries (I'm tempted to say all, but I'm 90% certain that there exists at least one counterexample) have very good consequences as well as bad. I think the good and bad usually go hand in hand.

To take the obvious example, nuclear research led to both the creation of nuclear weapons and the creation of nuclear energy.

At what point could you label research into any scientific field as having too many negative consequences to pursue?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-18T22:53:19.006Z · LW(p) · GW(p)

I agree that this is a hard question.

General complaint: sometimes when I say that people should be doing a certain thing, someone responds that doing that thing requires answering hard questions. I don't know what bringing this point up is supposed to accomplish. Yes, many things worth doing require answering hard questions. That is not a compelling reason not to do them.

Replies from: shaih
comment by shaih · 2013-02-18T23:24:03.534Z · LW(p) · GW(p)

I did not ask it because I wanted to stop the discussion by asking a hard question. I asked it because I aspire to do research in physics and will someday need an answer to it. As such I have been very curious about different arguments on this question. By no means did I mean to imply, by asking this question, that there are things that should not be researched; I simply wanted to know how to go about identifying them.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-18T23:40:07.682Z · LW(p) · GW(p)

Remove any confusions you might have about metaethics, figure out what it is you value, estimate what kind of impact the research you want to do will have with respect to what you value, estimate what kind of impact the other things you could do will have with respect to what you value, pick the thing that is more valuable.

Trying to retroactively judge previous research this way is difficult because the relevant quantity you want to estimate is not the observed net value of a given piece of research (which is hard enough to estimate) but the expected net value at the time the decision was being made to do the research. I think the expected value of research into nuclear physics in the past was highly negative because of how much it increased the probability of nuclear war, but I'm not a domain expert and can't give hard numbers to back up this assertion.

Replies from: shaih
comment by shaih · 2013-02-18T23:53:10.112Z · LW(p) · GW(p)

I'm reading through all of the sequences (slowly, it takes a while to truly understand, and I started in 2012) and by coincidence I happen to be at the beginning of metaethics currently. Until I finish I won't argue any further on this subject due to being confused. Thanks for the help.

comment by NancyLebovitz · 2013-02-03T12:09:22.010Z · LW(p) · GW(p)

I think nuclear weapons have a chance of killing a large number of people but are very unlikely to kill the human race.

Replies from: Baughn
comment by Baughn · 2013-02-08T17:01:21.661Z · LW(p) · GW(p)

At one point, physicists thought detonating even one nuclear bomb might set fire to the atmosphere.

This was taken seriously, and disproven before one in fact was detonated, but it's not clear that the tests wouldn't have gone ahead even if the verdict had come back with merely "unlikely".

In the current day biologists, computer scientists and physicists are all working on devices which could be far more dangerous than nuclear weapons. In this case the danger is well known, but no-one high-status enough to succeed is seriously proposing a moratorium on research. To be fair, we've still got some time to go.

comment by alex_zag_al · 2013-02-05T02:08:41.918Z · LW(p) · GW(p)

A scientist can have an inclination towards--for example--racist ideas. You can't just call this a kind of being wrong, because depending on the truth of what they're studying, this can make them right more often or less often.

So racist scientists are possible, and racist scientific practice is possible. I think 'racist' is an appropriate label for the conclusions drawn with that practice, correct or incorrect.

Though, I think being racist is a property of a whole group of conclusions drawn by scientists with a particular bias. It's not an inherent property of any of the conclusions; another researcher with completely different biases wouldn't be racist for independently rediscovering one of them.

It's a useful descriptor because a body of conclusions drawn by racist scientists, right or wrong, is going to be different in important ways from one drawn by non-racist scientists. It doesn't reduce to "larger fraction correct" or "larger fraction incorrect" because it depends on whether they're working on a problem where racists are more or less likely to be correct.

comment by ChristianKl · 2013-02-02T16:27:52.167Z · LW(p) · GW(p)

Is Newton's theory of gravity true or false? It's neither. For some problems the theory provides a good model that allows us to make good predictions about the world around us. For other problems the theory produces bad predictions.

The same is true for nearly every scientific model. There are problems where it's useful to use the model. There are problems where it isn't.

There are also factual statements in science. Claiming that true and false are the only possible adjectives to describe them is also highly problematic. Instead of true and false, likely and unlikely are much better words. In hard science most scientific conclusions come with p values. The author doesn't try to declare them true or false but declares them to be very likely.

It's also interesting that the person who made this claim isn't working in the hard sciences. He seems to be an evolutionary psychologist based at the London School of Economics. In the Wikipedia article that describes him he's quoted as suggesting that the US should have retaliated for 9/11 with nuclear bombs. That's a non-scientific racist position. He published some material in Psychology Today that's widely considered racist. I don't see why "racist" is not a valid word to describe his conclusions.

Replies from: NancyLebovitz, Eugine_Nier
comment by NancyLebovitz · 2013-02-03T12:33:08.333Z · LW(p) · GW(p)

What happens if you apply the same epistemological standards to claims that someone is racist that you apply to claims from science?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2013-02-13T06:50:41.834Z · LW(p) · GW(p)

On the other hand, Kanazawa seems really good at saying controversial things that get attention... which suggests evidence for his views will overspread relative to those of his detractors. So it may make sense to hold people who say controversial stuff to high epistemological standards, or perhaps to scrutinize memes that seem unusually virulent especially carefully.

comment by Eugine_Nier · 2013-02-02T19:31:16.592Z · LW(p) · GW(p)

In the Wikipedia article that describes him he's quoted as suggesting that the US should have retaliated for 9/11 with nuclear bombs. That's a non-scientific racist position.

Huh, what definition of "racist" are you using here? Would you describe von Neumann's proposal for a pre-emptive nuclear strike on the USSR as "racist"?

He published some material in Psychology Today that's widely considered racist. I don't see why "racist" is not a valid word to describe his conclusions.

I'm not sure what you mean by "racist"; however, is your claim supposed to be that this somehow implies that the conclusion is false/less likely? You may want to practice repeating the Litany of Tarski.

Replies from: ChristianKl
comment by ChristianKl · 2013-02-02T20:58:08.593Z · LW(p) · GW(p)

Huh, what definition of "racist" are you using here?

It's basically about putting a low value on the lives of non-white civilians. In addition, "I would do to foreigners what Ann Coulter would do to them" is also a pretty straightforward way to signal racism.

I'm not sure what you mean by "racist", however is your claim supposed to be that this somehow implies that the conclusion is false/less likely?

I haven't argued that fact. I'm advocating for having a broad number of words with multidimensional meanings.

I see no reason to treat someone who makes wrong claims about race and whose personal beliefs cluster with racist beliefs in his nonscientific statements the same way as someone who just makes wrong statements about the boiling point of some new synthetic chemical.

Replies from: fubarobfusco, Eugine_Nier
comment by fubarobfusco · 2013-02-02T21:08:23.670Z · LW(p) · GW(p)

Rather than using the ambiguous word "racist", one could say specifically that Kanazawa is an advocate of genocide.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-02-13T07:56:04.235Z · LW(p) · GW(p)

As I said above, did the bombings of civilians during WWII constitute "genocide"?

comment by Eugine_Nier · 2013-02-02T22:01:23.198Z · LW(p) · GW(p)

It's basically about putting a low value on the lives of non-white civilians.

So would you call the bombings of civilians during WWII "racist"?

I haven't argued that fact. I'm advocating for having a broad number of words with multidimensional meanings.

So you would agree that there are some statements that are both "racist" and true.

I see no reason to treat someone who makes wrong claims about race

What do you mean by "wrong"? If you mean "wrong" in the sense of "false", you've yet to present any evidence that any of Satoshi Kanazawa's claims are wrong.

comment by Alicorn · 2013-02-13T05:23:11.112Z · LW(p) · GW(p)

"It does not matter what we have believed," Caleb said. "What matters is the truth."

--Jovah's Angel by Sharon Shinn

Replies from: NevilleSandiego
comment by NevilleSandiego · 2013-02-23T10:09:45.762Z · LW(p) · GW(p)

Maybe it's just my most recent physchem lecture talking, but my instant response to that was 'truth is a state function'. Or perhaps 'perceived truth', and 'should be' (i.e., it shouldn't depend on the history preceding the current perceived truth).

comment by Yahooey · 2013-02-10T21:02:57.857Z · LW(p) · GW(p)

Coincidences … are the worst enemies of the truth. (Les coïncidences … sont les pires ennemies de la vérité.)

Gaston Leroux

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-10T21:22:02.728Z · LW(p) · GW(p)

Only with very low probability.

Replies from: Yahooey
comment by Yahooey · 2013-02-11T07:00:22.046Z · LW(p) · GW(p)

and the human mind loves to find patterns even when the probabilities of the pattern being a rule are low. Coincidences are correlation.

comment by HalMorris · 2013-02-03T16:01:55.467Z · LW(p) · GW(p)

Joke: a tourist was driving around lost in the Irish countryside among the one-lane roads and hill farms divided by ancient stone fences, and asked a sheep farmer how to get to Dublin, to which the farmer replied:

"Well ... if I was going to Dublin, I wouldn't start from here."

Moral, as I see it anyway: While the heuristic "to get to Y, start from X instead of where you are" has some value (often cutting a hard problem into two simpler ones), ultimately we all must start from where we are.

comment by taelor · 2013-02-01T21:50:23.443Z · LW(p) · GW(p)

It has been said that the historian is the avenger, and that standing as a judge between the parties and rivalries and causes of bygone generations he can lift up the fallen and beat down the proud, and by his exposures and his verdicts, his satire and his moral indignation, can punish unrighteousness, avenge the injured or reward the innocent. One may be forgiven for not being too happy about any division of mankind into good and evil, progressive and reactionary, black and white; and it is not clear that moral indignation is not a dispersion of one’s energies to the great confusion of one’s judgement. There can be no complaint against the historian who personally and privately has his preferences and antipathies, and who as a human being merely has a fancy to take part in the game that he is describing; it is pleasant to see him give way to his prejudices and take them emotionally, so that they splash into colour as he writes; provided that when he steps in this way into the arena he recognizes that he is stepping into a world of partial judgements and purely personal appreciations and does not imagine that he is speaking ex cathedra.

But if the historian can rear himself up like a god and judge, or stand as the official avenger of the crimes of the past, then one can require that he shall be still more godlike and regard himself rather as the reconciler than as the avenger; taking it that his aim is to achieve the understanding of the men and parties and causes of the past, and that in this understanding, if it can be complete, all things will ultimately be reconciled. It seems to be assumed that in history we can have something more than the private points of view of particular historians; that there are “verdicts of history” and that history itself, considered impersonally, has something to say to men. It seems to be accepted that each historian does something more than make a confession of his private mind and his whimsicalities, and that all of them are trying to elicit a truth, and perhaps combining through their various imperfections to express a truth, which, if we could perfectly attain it, would be the voice of History itself.

But if history is in this way something like the memory of mankind and represents the spirit of man brooding over man’s past, we must imagine it as working not to accentuate antagonisms or to ratify old party-cries but to find the unities that underlie the differences and to see all lives as part of the one web of life. The historian trying to feel his way towards this may be striving to be like a god but perhaps he is less foolish than the one who poses as god the avenger. Studying the quarrels of an ancient day he can at least seek to understand both parties to the struggle and he must want to understand them better than they understood themselves; watching them entangled in the net of time and circumstance he can take pity on them – these men who perhaps had no pity for one another; and, though he can never be perfect, it is difficult to see why he should aspire to anything less than taking these men and their quarrels into a world where everything is understood and all sins are forgiven.

— Herbert Butterfield, The Whig Interpretation of History

comment by Roze_Function · 2013-02-07T01:57:34.887Z · LW(p) · GW(p)

True, reason was a difficult tool. You laboured with it to see a little more, and at best you got glimpses, partial truths; but the glimpses were always worth having.

Francis Spufford, Red Plenty

Replies from: simplicio
comment by simplicio · 2013-02-07T03:03:23.935Z · LW(p) · GW(p)

Is it a good book? I was thinking of buying it, but I am very risk-averse when it comes to buying fiction.

Replies from: gwern, Roze_Function
comment by gwern · 2013-03-02T18:38:24.459Z · LW(p) · GW(p)

I thought it was pretty good in its own way, although I expected (coming at it from Shalizi) much more math & science than it actually had.

comment by Roze_Function · 2013-02-07T22:48:56.250Z · LW(p) · GW(p)

I am only about one-third of the way through, but it is definitely a good book thus far.

I would not personally buy it, since I only purchase fiction that I am certain I will read more than once, but it is definitely worth reading.

comment by CronoDAS · 2013-02-06T23:33:05.290Z · LW(p) · GW(p)

Man who run in front of car get tired.
Man who run in back of car get exhausted.

(Sorry, I couldn't resist.)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-06T23:44:25.802Z · LW(p) · GW(p)

Studies show that people who try to run behind a car frequently fail to keep up, while nobody who runs in front of a car fails more than once.

Replies from: CronoDAS
comment by CronoDAS · 2013-02-13T05:53:28.003Z · LW(p) · GW(p)

Give a man a fire, and he'll be warm for a day. Set a man on fire, and he'll be warm for the rest of his life.

comment by Apprentice · 2013-02-18T23:42:57.561Z · LW(p) · GW(p)

He gazed about him, and the very intensity of his desire to take in the new world at a glance defeated itself. He saw nothing but colours - colours that refused to form themselves into things. Moreover, he knew nothing yet well enough to see it: you cannot see things till you know roughly what they are.

-- C. S. Lewis, Out of the Silent Planet

Replies from: Nisan
comment by Nisan · 2013-02-19T00:26:11.403Z · LW(p) · GW(p)

Reminds me of this:

Barron the Green stared incomprehendingly at the chaos of colors for long seconds. Understanding, when it came, drove a pile-driver punch into the pit of his stomach.

comment by shaih · 2013-02-18T04:43:25.303Z · LW(p) · GW(p)

No rational argument will have a rational effect on a man who does not want to adopt a rational attitude.

Karl Popper

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-20T07:19:06.779Z · LW(p) · GW(p)

There's a failure mode associated to this attitude worth watching out for, which is assuming that people who disagree with you are being irrational and so not bothering to check if you have arguments against what they say.

comment by CronoDAS · 2013-02-13T22:27:00.369Z · LW(p) · GW(p)

You can change your organization, or change your organization.

-- Martin Fowler

comment by tgb · 2013-02-09T19:22:54.124Z · LW(p) · GW(p)

It is interesting to note that Bohr was an outspoken critic of Einstein's light quantum (prior to 1924), that he mercilessly denounced Schrodinger's equation, discouraged Dirac's work on the relativistic electron theory (telling him, incorrectly, that Klein and Gordon had already succeeded), opposed Pauli's introduction of the neutrino, ridiculed Yukawa's theory of the meson, and disparaged Feynman's approach to quantum electrodynamics.

[Footnote to: "This was a most disturbing result. Niels Bohr (not for the first time) was ready to abandon the law of conservation of energy". The disturbing result refers to the observations of electron energies in beta-decay prior to hypothesizing the existence of neutrinos.]

-David Griffiths, Introduction to Elementary Particles, 2008 page 24

comment by Apprentice · 2013-02-18T23:35:58.597Z · LW(p) · GW(p)

Those who stand against the dark mirror of evil are trapped in an eternal conflict. Because, for the cultists, they only have to succeed once. But for the defenders of humanity, we have to prevail every single time.

-- From the final screen of Call of Cthulhu: The Wasted Land

Replies from: Document
comment by Document · 2013-02-22T20:52:10.633Z · LW(p) · GW(p)

...Hooray for the phygists?

Replies from: Apprentice
comment by Apprentice · 2013-02-23T12:47:17.009Z · LW(p) · GW(p)

Well, there are lots of cultists running around trying to summon an Elder God. This will almost certainly end in disaster. The options we have to fight this are: a) We can try to stop all Elder-God-summoning related program activities or b) We can try to get there first and summon a Friendly Elder God.

Both a) and b) are almost impossibly difficult and I find it hard to decide which is less impossible.

comment by insufferablejake · 2013-02-18T08:43:48.109Z · LW(p) · GW(p)

Selection is the key to social harmony. Surround yourself with true friends who love you just as you are. If you don't see any around, quest for them.

Bryan Caplan

Replies from: gwern, Qiaochu_Yuan
comment by gwern · 2013-03-02T18:28:54.263Z · LW(p) · GW(p)

This sounds almost horrifically dystopian, in a sort of Friendship is Optimal way.

Replies from: insufferablejake
comment by insufferablejake · 2013-03-04T14:11:19.481Z · LW(p) · GW(p)

I suppose it does, in as objective a measure as something like 'harmony' is.

comment by Qiaochu_Yuan · 2013-03-02T18:37:14.173Z · LW(p) · GW(p)

This sounds like a recipe for stagnation. A true friend is willing to encourage you to grow.

Replies from: insufferablejake, fubarobfusco
comment by insufferablejake · 2013-03-04T14:19:12.834Z · LW(p) · GW(p)

I think I parsed that quote less along the lines of 'dude, you hardly know any math and so I won't love you' and more along the lines of 'dude, you seem to have the same taste for movies and music and we can have a conversation -- I love (hanging out with) you'.

The former has an objective measure and thus one can speak of definite growth while the latter is subjective.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-03-04T16:50:12.424Z · LW(p) · GW(p)

That's not what I mean. Suppose you have various negative personality traits that are negatively influencing your life (e.g. perhaps you are selfish or short-tempered). If you don't carefully cull the people around you, you might start noticing that many people react negatively to you, and you might start wondering why. If you determine that the problem is with you and not them, that's an opportunity for growth. If you only surround yourself with people who are willing, for whatever reason, to ignore your negative personality traits, then you've lost an opportunity to notice them.

Similarly, and this should be scary to anyone who cares about epistemic rationality, suppose you have various false beliefs and you decide that those beliefs are so important to your identity that people who don't also believe them can't possibly love you the way you are, so you only surround yourself with people who agree with them...

Replies from: insufferablejake
comment by insufferablejake · 2013-03-04T18:47:43.117Z · LW(p) · GW(p)

Similarly, and this should be scary to anyone who cares about epistemic rationality, suppose you have various false beliefs and you decide that those beliefs are so important to your identity that people who don't also believe them can't possibly love you the way you are, so you only surround yourself with people who agree with them...

Sure, in such a case, I've optimized for my own 'social harmony'. We all do this to varying degrees anyway. Signalling, sub-cultures and all that blah. Note that the quote simply speaks of a process (selection) to maximize an end (social harmony, however that is defined). It doesn't say anything about whether such selection should be for false or true values (however these are defined).

comment by fubarobfusco · 2013-03-02T20:43:17.216Z · LW(p) · GW(p)

"Love you just as you are" doesn't imply "hate for you to change".

After all, you are changing.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-03-02T20:49:08.640Z · LW(p) · GW(p)

Okay, but P(doesn't want you to change | loves you just the way you are) is higher than P(doesn't want you to change | doesn't love you just the way you are), and in addition P(you won't change | you surround yourself with people who love you just the way you are) is higher than P(you won't change | you don't surround yourself with people who love you just the way you are).

comment by A1987dM (army1987) · 2013-02-02T16:46:25.067Z · LW(p) · GW(p)

Anything that's ever said is really just a signpost leading towards a certain state of being.

Eckhart Tolle, as quoted by Owen Cook in The Blueprint Decoded

comment by wedrifid · 2013-02-23T18:44:08.625Z · LW(p) · GW(p)

Ultimately, if some AI scientist is very concerned that an AI is going to kill us all, their opinion is more informative of the approaches to AI which they find viable than of AIs in general. If someone is convinced that any nuclear power plant can explode like a multi-megaton nuclear bomb, well, it's probably better to let someone else design a nuclear power plant.

I think you have the lesson entirely backward.

Replies from: private_messaging
comment by private_messaging · 2013-02-23T19:21:03.682Z · LW(p) · GW(p)

How so? A person convinced that any nuclear power plant is a risk of multi-megaton explosion would have some very weird ideas of how nuclear power plants should be built; they would deem moderated reactors impractical, negative thermal coefficients of reactivity infeasible, etc. (or be simply unaware of the mechanisms that allow stability to be achieved), and would build some fast-neutron reactor that relies on very rapid control rod movement for its stability. Meanwhile normal engineering produced nuclear power plants that, imperfect as they might be, do not make a crater when they blow up.

Replies from: wedrifid, Creutzer
comment by wedrifid · 2013-02-24T03:55:10.952Z · LW(p) · GW(p)

To the extent that you already know that nuclear power plants are basically safe, they clearly do not apply as an analogy here. Reasoning from them like this is an error.

comment by Creutzer · 2013-02-23T21:24:44.685Z · LW(p) · GW(p)

Yes, but you can say that because you have the independent evidence that nuclear power plants are workable, beyond the mere say-so of a couple of scientists. You don't have that kind of evidence for AI safety.

Also, this:

Non-Friendly AI is no Elder God. It kills you, at worst.

... is not a given. What makes you think that the worst it would do is kill you, when killing is not the worst thing humans do to each other?

comment by Mestroyer · 2013-02-22T17:49:22.562Z · LW(p) · GW(p)

Now this foreknowledge cannot be elicited from spirits; it cannot be obtained inductively from experience, nor by any deductive calculation. Knowledge of the enemy's dispositions can only be obtained from other men. Hence the use of spies...

Sun Tzu on establishing a causal chain from reality to your beliefs.

Replies from: Vaniver
comment by Vaniver · 2013-02-28T03:31:47.613Z · LW(p) · GW(p)

Dupe.

comment by Shmi (shminux) · 2013-02-10T20:43:13.995Z · LW(p) · GW(p)

Even the most rational among us believe we have something called a "mind" that is capable of something called "free will" which all feels a bit like magic. We have a sense that our minds can cook up thoughts and ideas on their own, without the benefit of external stimulation. The belief is that we can think ourselves into whatever frame of mind we need. We think we can use our "willpower" to overcome sadness, or focus on what is important, whatever. My view is the opposite. I believe our internal sensation of "mind" is nothing but the end result of external stimulation interacting with our DNA. By my view, we are moist robots and we have five senses that act as our operator interface. To me, it makes no sense to try and think my way to happiness when I can just take my dog for a walk and come back feeling great.

We'll be a lot happier when we stop believing in magic and start figuring out which types of stimulations create which reactions.

Scott Adams

Replies from: metatroll
comment by metatroll · 2013-02-10T21:32:49.116Z · LW(p) · GW(p)

This paper argues that at least one of the following propositions must be true: (1) a happy person is very likely to start believing in magic before reaching an "unhappy" stage; (2) any unhappy person is extremely unlikely to take their dog walking a significant number of times; (3) we are almost certainly living in a stimulation. It follows that the belief that I will one day become an unhappy person who doesn't walk their dog is false, unless I start believing in magic.

comment by novalis · 2013-02-09T18:32:20.588Z · LW(p) · GW(p)

"For belief did not end with a public renunciation, a moment when one's brethren called one a heretic, and damned. Belief ended in solitude, and silence, the same way it began." -Robert V. S. Redick, The Night Of The Swarm

(I'm mid-way through the book, but perhaps I should instead say that I am mid-way through gur sryybjfuvc bs gur evat, juvpu unf sbe fbzr ernfba orra vafregrq vagb gur zvqqyr bs vg, pbzcyrgr jvgu eviraqryy, zvfgl zbhagnvaf, naq gur jvmneq qvfnccrnevat gb svtug n zbafgre).

comment by blogospheroid · 2013-02-08T14:00:22.950Z · LW(p) · GW(p)

Romance is for the evening, when the day's work of contributing to civilization is done. When all the drudgery of adult endeavors -- cooperation and competition and accountability and all of that -- can be put aside. The stars come out, a chill breeze blows, and the snapping of a twig out there can suddenly send chills up your spine!

Romance renounces accountability and so-called "objective reality!" It sees no need for them. And when that mind-set ruled our daylight hours, warping politics and business and the way we perceived our real-life neighbors... horror ensued. In almost every other culture and society, the romantic tendency to view our own worldview as perfect and the enemy as subhuman reigned. Until the Enlightenment came to show us - oh so painfully and gradually - how to utter the great words of science and decency: "I suppose I might be wrong. Let's find out."

-- David Brin

comment by Vaniver · 2013-02-17T18:39:39.489Z · LW(p) · GW(p)

If you want to get the plain truth,

Be not concerned with right and wrong.

The conflict between right and wrong

Is the sickness of the mind.

-- Seng-Ts'an

Replies from: TimS
comment by TimS · 2013-02-17T21:53:53.041Z · LW(p) · GW(p)

Does this mean something different than "Truth doesn't have a moral valence"?

Cause it seems like it is trying harder to sound deep than to sound insightful. Sigh - maybe I'm just jaded by various other trying-to-sound-deep-for-its-own-sake sayings. Aka seem deep vs. is deep issues.

Replies from: Vaniver, Estarlio, shaih
comment by Vaniver · 2013-02-18T01:59:20.236Z · LW(p) · GW(p)

Does this mean something different than "Truth doesn't have a moral valence"?

My primary interpretation was "attaching yourself to arguments obstructs your ability to seek the truth." If you are interested in the truth, it does not matter if you or your interlocutor is wrong or right; it matters what the truth is.

Another interpretation is "is-thinking leads to accuracy, should-thinking leads to delusion."

A third interpretation is "moralistic thinking degrades morals." I don't consider that interpretation interesting enough to agree or disagree with it.

Replies from: nshepperd, TimS
comment by nshepperd · 2013-02-18T04:58:14.882Z · LW(p) · GW(p)

It doesn't seem to be clear whether Seng-Ts'an is talking about moral right and wrong, or the kind of "wrong" that is involved in "proving your opponent wrong" in debates. The first interpretation is just silly according to any philosophy that cares about ethics, but the second one does make a lot of sense.

comment by TimS · 2013-02-18T03:06:19.724Z · LW(p) · GW(p)

"attaching yourself to arguments obstructs your ability to seek the truth"

This is probably a more plausible reading of the quote, but I think it is false. If I don't believe I am right, or at least making an important point (such as playing devil's advocate), I'm doubtful that my comments are relevant or helpful in figuring out what is true.

By contrast, your interpretation of the quote suggests that Professor Armstrong should be indifferent to whether particular x-risks that he has highlighted as "most dangerous" are actually the most dangerous x-risks.

Anyway, your second suggested reading is essentially my suggested reading, and I agree that your third suggested reading is not a very interesting assertion.

Replies from: Vaniver
comment by Vaniver · 2013-02-18T04:03:17.569Z · LW(p) · GW(p)

If I don't believe I am right, or at least making an important point (such as playing devil's advocate), I'm doubtful that my comments are relevant or helpful in figuring out what is true.

It may be worthwhile to consider the role of curiosity and questions.

By contrast, your interpretation of the quote suggests that Professor Armstrong should be indifferent to whether particular x-risks that he has highlighted as "most dangerous" are actually the most dangerous x-risks.

The first interpretation sees 'right' and 'wrong' as the property of people, not ideas. Doing so is less helpful than seeing rightness as a property of ideas- the plain truth.

Thus, it suggests that the Professor should be indifferent to which x-risks he highlights as most dangerous, except for the criterion of danger. It would risk sorting his list incorrectly to confine himself by his opinion, his past statements on the issue, or those which avoid giving support to an enemy.

I agree that your third suggested reading is not a very interesting assertion.

I was introduced to the poem by someone who was arguing against moralistic thinking, who knows much more about this sort of poetry than I do; I mention it for completeness, as it may have been the author's preferred interpretation.

comment by Estarlio · 2013-02-17T23:21:21.140Z · LW(p) · GW(p)

Maybe it's a reference to the idea that you need something more important than The Truth, so that you keep testing/refining your answer when you think you've got to the truth.

comment by shaih · 2013-02-17T22:50:48.262Z · LW(p) · GW(p)

I'm going to reply to the quote as if it means "Truth doesn't have a moral valence" and rebut that truth should be held more sacred than morals, rather than simply outside of them. For example, if there are two cases and case 1 leads to a morally "better" (in quotes because the word better is really a black box) outcome than case 2, but case 1 leads to hiding the truth (including hiding it from yourself), then I would have to think very carefully about it. In short, I abide by the rule "That which can be destroyed by the Truth should be" but am wary that this breaks down practically in many situations. So when presented with a scenario where I would be tempted to break this principle for the "greater good" or the "morally better case", I would think long and hard about whether it is a rationalization or whether I did not expend the mental effort to come up with a better third alternative.

comment by TobyBartels · 2013-02-11T02:05:57.872Z · LW(p) · GW(p)

I wouldn't be surprised if this has come up before:

Ideas on Earth were badges of friendship or enmity. Their content did not matter. Friends agreed with friends, in order to express friendliness. Enemies disagreed with enemies, in order to express enmity.

The ideas Earthlings held didn't matter for hundreds of thousands of years, since they couldn't do much about them anyway. Ideas might as well be badges as anything.

They even had a saying about the futility of ideas: ‘If wishes were horses, beggars would ride.’

And then Earthlings discovered tools. Suddenly agreeing with friends could be a form of suicide or worse. But agreements went on, not for the sake of common sense or self-preservation, but for friendliness.

Earthlings went on being friendly, when they should have been thinking instead. And even when they built computers to do some thinking for them, they designed them not so much for wisdom as for friendliness. So they were doomed. Homicidal beggars could ride.

―Kurt Vonnegut (attributed to Kilgore Trout), in Breakfast of Champions

Replies from: Qiaochu_Yuan
comment by Kawoomba · 2013-02-08T06:29:18.258Z · LW(p) · GW(p)

A sharp knife can kill even in the hands of a blind.

Klingon proverb.

Replies from: Qiaochu_Yuan, wedrifid
comment by Qiaochu_Yuan · 2013-02-08T06:56:11.656Z · LW(p) · GW(p)

So it's true what they say! The opposite of a Klingon proverb is also a Klingon proverb...

comment by wedrifid · 2013-02-08T06:56:44.042Z · LW(p) · GW(p)

A sharp knife can kill even in the hands of a blind.

Where is this from? I looked it up to see if the weird grammar was intended and couldn't find anything.

Replies from: Kawoomba
comment by Kawoomba · 2013-02-08T06:59:43.307Z · LW(p) · GW(p)

It's ... ahem ... non-canon. A different faction.

I thought it interesting that the near-inverse of a useful rationality quote can still be a useful rationality quote.

Replies from: jooyous
comment by jooyous · 2013-02-08T07:09:56.050Z · LW(p) · GW(p)

I don't think it's an inverse! The first one is saying you might not succeed in killing the person you're trying to kill and the second one is saying you might instead kill someone else that you don't want to kill! They're two properties of the same worst-case scenario. =]

Replies from: CCC
comment by CCC · 2013-02-08T07:51:26.228Z · LW(p) · GW(p)

I understood the second one as saying that that blind idiot with the knife might end up killing you, not necessarily intentionally, so be careful.

Replies from: jooyous
comment by jooyous · 2013-02-08T07:53:40.807Z · LW(p) · GW(p)

But also, if you're being a blind idiot waving your knife around, you could kill someone! So stop that. =]

comment by arundelo · 2013-02-19T03:20:23.536Z · LW(p) · GW(p)

"It seems to me that your first and third reasons contradict each other. Destroying the mirror cannot both multiply and exterminate the little pests."

"There's a contradiction, yes. That's because I don't know which is true. Destroying the mirror might kill them, or it might multiply them infinitely. I don't know. And neither do you."

--Lawrence Watt-Evans, The Spriggan Mirror

comment by deathpigeon · 2013-02-05T23:33:59.783Z · LW(p) · GW(p)

Gods? There are no 'gods', young bravo. There is only one God, and his name is Death - Him of Many Faces. And there is only one prayer that one says to him - 'Not Today'.

Syrio Forel, Game of Thrones based on A Song of Ice and Fire by George R R Martin

Replies from: Nornagest
comment by Nornagest · 2013-02-05T23:45:03.038Z · LW(p) · GW(p)

It doesn't matter that much, but I'm pretty sure that line is original to the HBO series, not to the books.

(Not my downvotes, incidentally, but I'd speculate they come from a desire to separate rationality from anti-deathism.)

Replies from: ArisKatsaris, deathpigeon
comment by ArisKatsaris · 2013-02-07T23:26:43.710Z · LW(p) · GW(p)

It's not from the TV series either.

The TV series quote would be this: "There is only one God. And his name is Death. And there is only one thing we say to Death: 'Not today'."

Basically the grandparent post seems to be just a quote from memory, combining bits and pieces from both places, accurate to neither.

comment by deathpigeon · 2013-02-05T23:47:23.638Z · LW(p) · GW(p)

I could've sworn it was from both of them, and, thus, from the books originally...

Replies from: Sniffnoy, Nornagest
comment by Sniffnoy · 2013-02-06T00:21:16.899Z · LW(p) · GW(p)

It's not from the books; more generally, there isn't anything in the books directly suggesting a connection between Syrio and the Faceless Men.

Replies from: deathpigeon
comment by deathpigeon · 2013-02-06T02:10:50.902Z · LW(p) · GW(p)

Thanks. Fixed it.

comment by Nornagest · 2013-02-05T23:53:21.935Z · LW(p) · GW(p)

Couldn't find it in the Arya chapters of my copy. Wasn't looking terribly hard, though.

Replies from: deathpigeon
comment by deathpigeon · 2013-02-05T23:58:30.618Z · LW(p) · GW(p)

I remembered it vaguely, and found the more exact quote on the ASOIAF Quotes page on TvTropes since I didn't want to search through the Arya chapters to find the exact quote, though I was prepared to.

comment by taelor · 2013-02-06T04:57:48.058Z · LW(p) · GW(p)

This is why I don't care much for gambling. While a sucker is born with each tick of the clock, a cheater is born with each tock betwixt.

-- Doc Scratch, Homestuck

Replies from: Nornagest, Benedict
comment by Nornagest · 2013-02-06T05:25:03.299Z · LW(p) · GW(p)

I'm not certain what lesson on rationality I'm expected to glean from this, unless it's "model your opponents as agents, not as executors of cached scripts" -- and that seems both strongly dependent on the opponents you're facing and a little on the trivial side.

comment by Benedict · 2013-02-07T21:00:18.808Z · LW(p) · GW(p)

Doc Scratch isn't exactly the best source for rationality quotes -- a guy who already knows the truth has little need to overcome flawed cognitive processes for arriving at it. Which isn't to say the guy doesn't say some relevant stuff:

Lies of omission do not exist. The concept is a very human one. It is the product of your story writing again. You have written a story about the truth, making emotional demands of it, and in particular, of those in possession of it. Your demands are based on a feeling of entitlement to the facts, which is very childish. ... If I do not volunteer information you deem critical to your fate, it possibly means that I am a scoundrel, but it does not mean that I am a liar. And it certainly means you did not ask the right questions. One can make either true statements or false statements about reality.

Replies from: Desrtopa, fubarobfusco
comment by Desrtopa · 2013-02-08T04:09:27.833Z · LW(p) · GW(p)

One can make either true statements or false statements about reality.

One can do these two things, but not to the exclusion of alternatives. One can make statements which are confused or nonsensical, that are not even false.

In any case, a statement doesn't have inherent truth value outside the way it's interpreted by the people who hear it. The statement that "If a tree falls in the forest, it does not make a sound" is true or false depending on the meanings understood by the audience and the person uttering it. It's entirely possible to convey false understandings by making statements which omit relevant information. To refuse to call a statement which is deliberately tailored to make its audience believe falsehoods a lie is using a distinction in an unhelpful way.

Replies from: Baughn
comment by Baughn · 2013-02-08T15:47:58.177Z · LW(p) · GW(p)

This.

It borders on arguing about the meaning of words, so I find it useful to describe what I mean by "lying", i.e. "conveying information that adjusts someone else's worldview away from reality". Funnily enough, that excludes most lies-to-children...

At that point whoever I'm talking to will either point out that his definition differs, or even decide to go with mine henceforth, and either way we can start getting some real work done.

comment by fubarobfusco · 2013-02-08T03:44:37.912Z · LW(p) · GW(p)

Of course, he was lying (arguably by omission); Doc Scratch was not merely reticent or uncooperative, but intentionally deceptive.

(Must resist urge to watch Cascade again ...)

comment by untothebreach · 2013-02-07T00:41:05.431Z · LW(p) · GW(p)

Hast thou reason? I have.- Why then dost not thou use it? For if this does its own work, what else dost thou wish?

Meditations - Marcus Aurelius

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-07T18:14:40.999Z · LW(p) · GW(p)

I don't get it.

Replies from: Baruta07
comment by Baruta07 · 2013-02-07T22:01:23.091Z · LW(p) · GW(p)

It's pretty much another injunction to use reason if you possess it.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-07T23:37:31.444Z · LW(p) · GW(p)

I don't see how to extract that meaning from the words I see. In particular, I don't understand what the last sentence is trying to say. The dash is also confusing. I thought initially that this was a dialogue but now I'm less sure.