Rationality Quotes Thread February 2015

post by Vaniver · 2015-02-01T15:53:28.049Z · LW · GW · Legacy · 129 comments

Another month, another rationality quotes thread. The rules are:

Comments sorted by top scores.

comment by Torello · 2015-02-03T15:17:47.359Z · LW(p) · GW(p)

[Charles] Darwin wrote in his autobiography of a habit he called a "golden rule": to immediately write down any observation that seemed inconsistent with his theories--"for I had found by experience that such facts and thoughts were far more apt to escape from the memory than favorable ones."

-Robert Wright, The Moral Animal, p.280

Replies from: TheMajor
comment by Pablo (Pablo_Stafforini) · 2015-02-01T20:17:12.011Z · LW(p) · GW(p)

A passion to make the world a better place is a fine reason to study social psychology. Sometimes, however, researchers let their ideals or their political beliefs cloud their judgment, such as in how they interpret their research findings. Social psychology can only be a science if it puts the pursuit of truth above all other goals. When researchers focus on a topic that is politically charged, such as race relations or whether divorce is bad for children, it is important to be extra careful in making sure that all views (perhaps especially disagreeable ones, or ones that go against established prejudices) are considered and that the conclusions from research are truly warranted.

Roy Baumeister & Brad Bushman, Social Psychology and Human Nature, Belmont, 2008, p. 13

Replies from: Lumifer
comment by Lumifer · 2015-02-02T18:27:08.211Z · LW(p) · GW(p)

Well, yeah, that's all fine in theory, but in practice...

comment by Epictetus · 2015-02-01T19:19:01.675Z · LW(p) · GW(p)

A special technique has been developed in mathematics. This technique, when applied to the real world, is sometimes useful, but can sometimes also lead to self-deception. This technique is called modelling. When constructing a model, the following idealization is made: certain facts which are only known with a certain degree of probability or with a certain degree of accuracy, are considered to be "absolutely" correct and are accepted as "axioms". The sense of this "absoluteness" lies precisely in the fact that we allow ourselves to use these "facts" according to the rules of formal logic, in the process declaring as "theorems" all that we can derive from them.

It is obvious that in any real-life activity it is impossible to wholly rely on such deductions. The reason is at least that the parameters of the studied phenomena are never known absolutely exactly and a small change in parameters (for example, the initial conditions of a process) can totally change the result...

In exactly the same way a small change in axioms (of which we cannot be completely sure) is capable, generally speaking, of leading to completely different conclusions than those that are obtained from theorems which have been deduced from the accepted axioms. The longer and fancier is the chain of deductions ("proofs"), the less reliable is the final result.

Complex models are rarely useful (unless for those writing their dissertations).

The mathematical technique of modelling consists of ignoring this trouble and speaking about your deductive model in such a way as if it coincided with reality. The fact that this path, which is obviously incorrect from the point of view of natural science, often leads to useful results in physics is called "the inconceivable effectiveness of mathematics in natural sciences" (or "the Wigner principle").

-Vladimir Arnold, On Teaching Mathematics

Replies from: emr
comment by emr · 2015-02-03T04:21:33.099Z · LW(p) · GW(p)

You can model uncertain parameters within a model as random variables, and then run a large number of simulations to get a distribution of outcomes.

Modeling uncertainty between models (of which guessing the distribution of an uncertain parameter is an example) is harder to handle formally. But overall, it's not difficult to improve on the naive guess-the-exact-values-and-predict method.
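As a minimal sketch of the within-model approach (the inverse-square model, parameter values, and error size below are illustrative assumptions, not anything from the thread):

```python
import random
import statistics

def force(r, k=1.0):
    """Illustrative inverse-square model (hypothetical units)."""
    return k / r ** 2

random.seed(0)

# Treat the uncertain parameter as a random variable: a separation
# measured as 2.0 with Gaussian measurement error sigma = 0.05.
samples = [force(random.gauss(2.0, 0.05)) for _ in range(100_000)]

# A distribution of outcomes rather than a single point prediction.
mean, stdev = statistics.mean(samples), statistics.stdev(samples)
print(f"mean {mean:.4f} +/- {stdev:.4f} vs point prediction {force(2.0):.4f}")
```

Note that the simulated mean comes out slightly above the naive point prediction: the model is convex in r, so by Jensen's inequality the guess-the-exact-value method is biased as well as overconfident here.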

Replies from: Epictetus
comment by Epictetus · 2015-02-03T04:59:33.251Z · LW(p) · GW(p)

You can model uncertain parameters within a model as random variables, and then run a large number of simulations to get a distribution of outcomes.

The usual error analysis provides an estimate of the error in the result in terms of the error in the parameters. Any experiment used to test a model will rely on this kind of error analysis to determine whether the result of the experiment lies within the estimated error of the prediction, given the uncertainty in the measured parameters.

For example, for an inverse-square law (like Newtonian gravity) you could perturb the separation distance by a quantity epsilon, expand in a Taylor series with respect to epsilon, and apply Taylor's theorem to get an upper bound on the error in the prediction given an error in the measurement of the separation distance. Statistical methods could be used should such analysis prove intractable.
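A minimal numerical version of that check (the values of r, k, and epsilon below are illustrative assumptions, not from the comment):

```python
def f(r, k=1.0):
    """Inverse-square law, e.g. Newtonian gravity up to constants."""
    return k / r ** 2

r, eps, k = 2.0, 0.01, 1.0  # measured separation and its error (illustrative)

# Exact change in the prediction under the perturbation.
exact_error = abs(f(r + eps, k) - f(r, k))

# First-order Taylor estimate: |f'(r)| * eps = (2k / r**3) * eps.
linear_estimate = (2 * k / r ** 3) * eps

# Taylor's theorem: the remainder is at most max|f''| / 2 * eps**2 over
# [r, r + eps]; f''(r) = 6k / r**4 is largest at the left endpoint r.
upper_bound = linear_estimate + 0.5 * (6 * k / r ** 4) * eps ** 2

print(f"exact {exact_error:.6f}, linear {linear_estimate:.6f}, bound {upper_bound:.6f}")
```

The exact error stays within the Taylor bound, and the first-order estimate is off only at second order in eps, which is the well-behaved regime the author contrasts with the chaotic case below.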

The issue that the author refers to is where the model exhibits chaotic behavior and a very small error in the measurement could cause a huge error in the result. This kind of behavior renders any long-term predictions completely unreliable. In the words of Edward Lorenz:

Chaos: When the present determines the future, but the approximate present does not approximately determine the future.
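A standard toy illustration of this sensitivity (my example, not the author's) is the logistic map at r = 4, where two initial conditions differing by one part in ten billion diverge completely within a few dozen steps:

```python
def logistic(x, r=4.0):
    """One step of the logistic map; chaotic for r = 4."""
    return r * x * (1 - x)

# Two "approximately equal" presents, differing by 1e-10.
x, y = 0.2, 0.2 + 1e-10

max_gap = 0.0
for step in range(60):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

# The tiny initial difference is amplified by roughly a factor of two
# per step, so the two trajectories become completely unrelated.
print(f"final gap {abs(x - y):.3f}, max gap {max_gap:.3f}")
```

No improvement in the precision of the initial measurement fixes this; it only postpones the step at which the prediction becomes worthless.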

comment by Gram_Stone · 2015-02-04T23:59:25.386Z · LW(p) · GW(p)

Focus on what something does, rather than what it is.

-- Joshua Engel, Quora

comment by Salemicus · 2015-02-03T18:02:54.164Z · LW(p) · GW(p)

I consider as lovers of books not those who keep their books hidden in their store-chests and never handle them, but those who, by nightly as well as daily use thumb them, batter them, wear them out, who fill out all the margins with annotations of many kinds, and who prefer the marks of a fault they have erased to a neat copy full of faults.

Erasmus, Letter to an unidentified friend (1489)

Replies from: fortyeridania
comment by fortyeridania · 2015-02-04T06:42:05.730Z · LW(p) · GW(p)

What is the relationship to rationality? This seems simply to be a cheer for reading and a jeer for pretentious book-collecting.

Replies from: Salemicus
comment by Salemicus · 2015-02-04T09:12:49.098Z · LW(p) · GW(p)

Reread the quote. Erasmus isn't just talking about reading. There are multiple relations:

  • Knowledge is useless sitting there untouched; you have to actively make use of it.
  • To truly understand something, you have to make that knowledge your own: write things down, make annotations, work through examples, read and reread the book. You can't absorb the knowledge just by skimming.
  • There are lost purposes here. How many people own books just to look cultured, or read books just to say they have read them? Think of all the famous unread books, or Gatsby's famous library.
Replies from: fortyeridania
comment by fortyeridania · 2015-02-04T23:23:13.156Z · LW(p) · GW(p)

I agree with each of your bullet points, and they do help clarify the Erasmus quotation's relationship to rationality. Thanks.

comment by Shmi (shminux) · 2015-02-07T16:43:54.750Z · LW(p) · GW(p)

marketing is only legal because it doesn't work most of the time

Dilbert

comment by Kindly · 2015-02-02T14:29:08.875Z · LW(p) · GW(p)

Hypocrisy doesn't bother me. Everyone's got his ideal, and then the reality of what he can actually deliver. Scratch hypocrisy, and you're more likely to lose the ideal than the reality.

Milo Behr, Beowulf: A Bloody Calculus.

Replies from: Richard_Kennaway, HalMorris, torekp, maxikov, undermind
comment by Richard_Kennaway · 2015-02-03T00:07:26.620Z · LW(p) · GW(p)

Hypocrisy does not mean falling short of one's ideals. It means only pretending to hold them.

Replies from: Kindly
comment by Kindly · 2015-02-03T02:22:12.432Z · LW(p) · GW(p)

I think the same point still applies. I might pretend to hold grandiose ideals by accomplishing some small things in their support. Call me out on it, and I have no further reason to continue doing those small things.

comment by HalMorris · 2015-02-02T16:08:39.312Z · LW(p) · GW(p)

To criticize hypocrisy in debate you don't even have to understand the other's argument -- you only have to find a logical contradiction, and you can always find a contradiction, or something you can plausibly claim is one.

For the debater, this tactic may be very hard to give up. Many of us can find (or generate plausible arguments for) contradictions with 10% of our brain power, keeping the other party on the defensive while using the rest of our minds to search for a deeper argument. But for this very reason it makes for tedious, unilluminating debate, and ought to be given less encouragement than it gets; that is, if we want more insightful argument.

comment by torekp · 2015-02-02T19:51:46.101Z · LW(p) · GW(p)

Daniel Dennett has a very similar remark in Elbow Room, to the effect that hypocrisy is the clutch of moral motivation. Paraphrasing: Use too much, and you go nowhere, but too little, and you're lugging the engine.

comment by maxikov · 2015-02-02T22:50:42.134Z · LW(p) · GW(p)

Hypocrisy isn't actually fundamentally wrong, even if deliberate. The idea that it's bad is a final yet arbitrary value that has to be taught to humans. Many religions contain the Golden Rule, which boils down to "don't be a hypocrite", and this is exactly an indicator that it was highly non-obvious before it permeated our culture.

Replies from: CCC
comment by CCC · 2015-02-03T09:24:04.257Z · LW(p) · GW(p)

Hypocrisy means that what you are signalling is not reality. It doesn't directly harm you, but it does, potentially and in general, harm anyone who relies on your signalling.

Therefore, in a sufficiently large and inter-connected society, the society will be more successful if hypocrisy is given some significant negatives, like social ostracisation for known hypocrites (that also cuts down on the potential damage radius).

Therefore, societies which punish hypocrisy will, on average, be more successful than societies which do not.

So I don't think that the idea that hypocrisy is bad is arbitrary. It might not be obvious, but it's not arbitrary.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-02-03T12:21:02.357Z · LW(p) · GW(p)

So I don't think that the idea that hypocrisy is bad is arbitrary. It might not be obvious, but it's not arbitrary.

It is totally obvious. No-one wants to be lied to, and no-one wants to be found out lying.

comment by undermind · 2015-02-05T12:59:04.762Z · LW(p) · GW(p)

Love it -- mainly because it invokes one of my favourite paradoxes.

If you preach hypocrisy, and you are in fact hypocritical, then you're not a hypocrite. And if you aren't a hypocrite, then you are.

Replies from: DanielLC
comment by DanielLC · 2015-02-06T07:58:38.962Z · LW(p) · GW(p)

The paradox arises only if you aren't hypocritical about anything else.

Replies from: dxu
comment by dxu · 2015-02-07T01:49:41.304Z · LW(p) · GW(p)

This is why Russell's paradox makes reference to the set of all sets that do not contain themselves, rather than just some sets that don't contain themselves.

(Now, of course, mathematicians don't use naive set theory anymore, but ZFC, which solves the problem.)

comment by Ben Pace (Benito) · 2015-02-05T19:02:49.670Z · LW(p) · GW(p)

On the dispersing of memes (note: politicised).

The reason that millions of people read books about Islam (even if they don't believe in it) instead of studying Cosmology is that cosmologist don't go around blowing up train stations, turning school girls into sex slaves, executing cartoonists, and crashing airplanes into skyscrapers. The reason why we are having a worldwide discussion about one of the most ridiculous superstitions in the history of humankind (to depict or not to depict a man who lived in the 7th century) is that people get killed because of that superstition.

-Piero Scaruffi

Replies from: Username
comment by Username · 2015-02-06T11:29:33.503Z · LW(p) · GW(p)

cosmologist don't go around blowing up train stations

First thing I thought at this point was "Oh! Maybe we should," then I remembered that I'm not aware of anybody who's studied mathematics thanks to Ted Kaczynski.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2015-02-06T14:46:35.984Z · LW(p) · GW(p)

Also, you know, blowing up train stations.

comment by Davidmanheim · 2015-02-02T20:46:20.630Z · LW(p) · GW(p)

"Suppose you ask your friend Naomi to roll a die without letting you see the result... Having rolled the die Naomi must write down the result on a piece of paper (without showing you) and place it in an envelope...

So some people are happy to accept that there is genuine uncertainty about the number before it is thrown (because its existence is ‘not a fact’), but not after it is thrown. This is despite the fact that our knowledge of the number after it is thrown is as incomplete as it was before."

-- Risk Assessment and Decision Analysis with Bayesian Networks, Norman Fenton and Martin Neil, Chapter 1
Replies from: anandjeyahar
comment by anandjeyahar · 2015-02-03T06:59:27.065Z · LW(p) · GW(p)

Ah, "genuine uncertainty": the term reminds me of the "no true Scotsman" argument. My point is that there is a reduction in uncertainty between before and after the die is rolled; that is not to say I should update my belief about the die's rolled/winning value.

Simply put, my friend Naomi's beliefs have been updated and the uncertainty in her mind has been eliminated. I think the author was trying to point out that most people conflate the two. It is definitely well worded for rhetoric, but not for pedagogy (in the Feynman sense).

Replies from: Davidmanheim
comment by Davidmanheim · 2015-02-10T07:02:31.981Z · LW(p) · GW(p)

What does it mean to have uncertainty reduction taking place outside of the frame of reference of the person being asked for a decision?

In other terms, the discussion would be the same if we replaced Naomi with a camera that automatically takes a picture of the result.

Replies from: anandjeyahar
comment by anandjeyahar · 2015-02-11T03:18:06.316Z · LW(p) · GW(p)

What does it mean to have uncertainty reduction taking place outside of the frame of reference of the person being asked for a decision?

You're assuming humans are rational (in the AI sense of a rational agent). We're not. So the knowledge that another person knows something for certain, which we don't, colours/biases one's judgement.

I am not saying one should update one's beliefs based on another person knowing or not knowing, but that we do anyway, as part of perception. I would argue that we should learn to notice the conflict between the rational side of us and the perceptive side, which registers the other agent's confidence or lack thereof. I know this is a hand-wavy explanation, but my point stands nevertheless. I agree with the OP that one shouldn't update one's beliefs on the basis of whether Naomi or the camera has certainty about the outcome of the roll. I simply say that if it is Naomi, there could be cases where it is rational to update, though these updates are hard to actually observe or be aware of, and it is therefore safer not to update.

Replies from: Davidmanheim
comment by Davidmanheim · 2015-02-13T03:58:19.307Z · LW(p) · GW(p)

OK, so "there could be cases where it is rational to update." How would you do so?

(I can't understand what an update could reasonably change. You aren't going to make the probability of any particular side more than 1/6, so what is the new probability?)
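A quick simulation (a sketch of my own, not from the book) makes the symmetry concrete: the guesser's distribution over the sealed number is the same uniform 1/6 as the distribution over the roll before it happened:

```python
import random
from collections import Counter

random.seed(0)
N = 60_000

# Naomi rolls the die and seals the result in an envelope. From her
# perspective the outcome is now certain; from ours, nothing observable
# has changed, so our distribution over the envelope's contents is the
# same as our distribution over the roll beforehand.
envelopes = [random.randint(1, 6) for _ in range(N)]
freq = Counter(envelopes)

for side in range(1, 7):
    print(side, round(freq[side] / N, 3))  # each close to 1/6
```

Any update away from 1/6 would have to be driven by some observable correlated with the roll, which is exactly what the envelope setup removes.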

Replies from: anandjeyahar
comment by anandjeyahar · 2015-02-13T04:50:22.466Z · LW(p) · GW(p)

OK, so "there could be cases where it is rational to update." How would you do so?

(I can't understand what an update could reasonably change. You aren't going to make the probability of any particular side more than 1/6, so what is the new probability?)

I don't know either. I can make up a scenario based on a series of die throws, a history of wins and losses, and guesses based on that, but it would simply be conjecture and still might not produce a reasonable process. However, this discussion reminded me of a scene in HPMOR (the scene where Harry's critic part judges that Miss Camblebunker was not a doctor but an actor, after Bellatrix is broken out of prison).

Replies from: Davidmanheim
comment by Davidmanheim · 2015-02-16T23:38:08.210Z · LW(p) · GW(p)

My claim is that you can't come up with such a conjecture where it makes sense to change the probability away from 1/6. That is why you should not update.

Replies from: anandjeyahar
comment by anandjeyahar · 2015-02-17T15:09:24.231Z · LW(p) · GW(p)

I disagree. I'm not sure it's provable (maybe in professional poker players?), but if you've played the bet many times, you could have come up with cues* about whether your friend has got the same roll (or number on the die) as last time or not.

* Not sure how verbalizable these cues are (which implies they are harder to teach to someone else).
Replies from: Davidmanheim
comment by Davidmanheim · 2015-02-19T01:25:17.495Z · LW(p) · GW(p)

So you should update after you see that she rolled some number and saw her reaction; but this says nothing about updating again because she wrote the number down.

comment by Shmi (shminux) · 2015-02-13T23:53:48.777Z · LW(p) · GW(p)

it helps to be on board with the quasi-positivist idea that the sets of things that can and can’t be done, by any experiment, are not just random collections of engineering details; rather, they’re the scaffolding on which we need to construct our understanding of what’s real.

Scott Aaronson on memcomputing

comment by [deleted] · 2015-02-11T14:23:03.098Z · LW(p) · GW(p)

On the subject of whether or not you can always be correct about the contents of your consciousness:

"Let us therefore look at the bizarre mirror image to blindsight: Anton’s syndrome. Patients who suddenly become completely blind due to a lesion in the visual cortex in some cases keep insisting on still being visually aware. While claiming to be seeing persons, they bump into furniture and show all the other signs of functional blindness. Still, they act as if the phenomenal disappearance of all visually given aspects of reality is not phenomenally available to them. For instance, when pressed by questions concerning their environment, they produce false, but consistent confabulations. They tell stories about nonexisting phenomenal worlds, which they seem to believe themselves, while denying any functional deficit with regard to their ability to see.

-- I still vividly remember one heated debate at an interdisciplinary conference in Germany a number of years ago, at which a philosopher insisted, in the presence of eminent neuropsychologists, that Anton’s syndrome does not exist because a priori it cannot exist. Anton’s syndrome shows how truthful reports about the current contents about one’s own self-consciousness can be dramatically wrong. It would also be an ideal starting point for philosophical debates concerning the incorrigibility -- On a functional construal of judgment, can there be rational, sentient beings that suffer from such a strong dissociation between consciousness and cognition that they are so systematically out of touch with their own conscious experience? Well, in a domain-specific way restricted to internal self-representation and metacognition, there are. Anton’s syndrome gives us good empirical reasons to believe that self-consciousness actually is “such an ill-behaved phenomenon”. Patients suffering from Anton’s syndrome certainly possess the appropriate conceptual sophistication to form rational judgments about their current visual experience. The empirical material interestingly shows us that they simply do not, and this, in all its domain-specificity, is an important and valuable constraint for philosophical theories of consciousness."

Thomas Metzinger, Being No One, p. 235

comment by LyleN · 2015-02-03T23:28:10.835Z · LW(p) · GW(p)

Science meant looking -- a special kind of looking. Looking especially hard at the things you didn't understand. Looking at the stars, say, and not fearing them, not worshiping them, just asking questions, finding the question that would unlock the door to the next question and the question beyond that.

Robert Charles Wilson, Darwinia

comment by WalterL · 2015-02-02T20:11:11.666Z · LW(p) · GW(p)

“I’ve said I understand. Stop fighting after you have won.”

Patrick Rothfuss, The Wise Man's Fear

Replies from: alienist
comment by alienist · 2015-02-03T03:09:20.750Z · LW(p) · GW(p)

Just because you say you understand (or even think you understand) doesn't mean you do.

comment by Salemicus · 2015-02-03T18:05:00.117Z · LW(p) · GW(p)

This type of man who is devoted to the study of wisdom is always most unlucky in everything, and particularly when it comes to procreating children; I imagine this is because Nature wants to ensure that the evils of wisdom shall not spread further throughout mankind.

Erasmus, The Praise of Folly

Replies from: Weedlayer
comment by Weedlayer · 2015-02-04T20:28:14.519Z · LW(p) · GW(p)

While the quote is anti-rationality, it IS satirical, so I suppose it's fine.

comment by aarongertler · 2015-02-09T01:10:28.133Z · LW(p) · GW(p)

"Applause, n. The echo of a platitude."

--Ambrose Bierce, The Cynic's Word Book

comment by Alex_Miller · 2015-02-03T23:55:08.396Z · LW(p) · GW(p)

I must respect you before your insults matter to me.

Brandon_Nish, Concerning Cyberbullying

Replies from: Salemicus, Desrtopa
comment by Salemicus · 2015-02-04T09:28:14.359Z · LW(p) · GW(p)

Sadly, the insults of those we do not respect often matter, because of what they imply about that person's future conduct, and because of their effects on third parties.

So for example if a bully starts insulting you, this may matter, both because this might indicate he is about to attack you, and because it may cause other people to turn against you. To give a non-cyber-bullying example, the insults of Idi Amin against Indians residing in Uganda surely mattered to them, even though they did not respect him.

comment by Desrtopa · 2015-02-06T17:04:27.565Z · LW(p) · GW(p)

This seems inapt as a generalization about human psychology.

In one psychology experiment which a professor of mine told me about, test subjects were made to play a virtual game of catch with two other players, where every player was represented to each other player only as a nondescript computer avatar, the only input any player could give was which of the other two players to toss the "ball" to, and nobody had any identifying information about anyone else involved. Unbeknownst to the test subjects, the other two players were confederates of the experimenter, and their role was to gradually start excluding the test subject, eventually starting to toss the ball almost exclusively to each other, and almost never to the test subject.

Most test subjects found this highly emotionally taxing, to the point that such experiments will no longer be approved by the Institutional Review Board.

In addition to offering a hint of just how much ethical testing standards can hamstring psych research, it also suggests that our instinctive reactions to ostracization do not really demand identifying information on the perpetrators in order to come into play.

comment by Fluttershy · 2015-02-02T19:26:38.489Z · LW(p) · GW(p)

Another important factor that influences attraction is similarity… elementary school students prefer other children who perform about as well as they do in academics, sports, and music, and best friends in high school tend to resemble each other in age, race, year in school, and grades. Attributes such as race, ethnic origin, social and educational level, family background, income, and religion do affect attraction in general and marital choice in particular. Also relevant are behavioral patterns such as the degree of gregariousness and drinking and smoking habits. Like pairs with like, and heiresses rarely marry the butler except in the movies.

-pp. 536-537, Psychology, 8th ed., by Gleitman, Gross, and Reisberg.

Replies from: Fluttershy
comment by Fluttershy · 2015-02-02T19:32:36.183Z · LW(p) · GW(p)

My interpretation: in order to have more awesome social interactions, try to seek out both friends and mates who are similar to you along most axes.

Replies from: Lumifer
comment by Lumifer · 2015-02-02T19:45:05.596Z · LW(p) · GW(p)

I don't think that when searching for friends and mates you're looking for a mirror.

In an ideal partner there is a balance between the degree to which s/he is similar to you and the degree to which s/he is different (in particular, complementary: strong where you are weak and vice versa).

comment by James_Miller · 2015-02-01T18:07:05.969Z · LW(p) · GW(p)

I knew a guy with passion to be a pro golfer and the brain to be a great accountant. He followed his passion. He's homeless now.

I have a 7-second rule. If I need to write down an idea I have about seven seconds before a distraction replaces it. Notepad in all rooms.

Note to terrorists: We cartoonists aren't all unarmed.

Memo to everyone: Unhealthy food is not a gift item.

I need to stop being surprised at how many problems can be solved with clarity alone.

From the Scott Adams (Dilbert creator) Twitter account.

Replies from: Good_Burning_Plastic, FeepingCreature
comment by Good_Burning_Plastic · 2015-02-02T08:11:29.593Z · LW(p) · GW(p)

Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)

comment by FeepingCreature · 2015-02-01T20:22:02.364Z · LW(p) · GW(p)

I need to stop being surprised at how many problems can be solved with clarity alone.

Note to Scott: a problem only counts as solved when it's actually gone.

Replies from: pjeby
comment by pjeby · 2015-02-01T21:59:11.855Z · LW(p) · GW(p)

a problem only counts as solved when it's actually gone.

And there are a surprising number of problems that disappear once you have clarity, i.e., they are no longer a problem, even if you haven't done anything yet. They become, at most, minor goals or subgoals, or cease to be cognitively relevant because the actual action needed -- if indeed there is any -- can be done on autopilot.

IOW, a huge number of "problems" are merely situations mistakenly labeled as problems, or where the entire substance of the problem is actually internal to the person experiencing a problem. For example, the "problem" of "I don't know where to go for lunch around here" ceases to be a problem once you've achieved "clarity".

Or to put it another way, "problems" tend to exist in the map more than the territory, and Adams' quote is commenting on how it's always surprising how many of one's problems reside in one's map, rather than the territory. (Because we are biased towards assuming our problems come from the territory; evolutionarily speaking, that's where they used to mostly come from.)

Replies from: FeepingCreature
comment by FeepingCreature · 2015-02-06T13:25:13.236Z · LW(p) · GW(p)

Yeah but it's also easy to falsely label a genuine problem as "practically already solved". The proof is in the pudding.

The next day, the novice approached Ougi and related the events, and said, "Master, I am constantly consumed by worry that this is all really a cult, and that your teachings are only dogma." Ougi replied, "If you find a hammer lying in the road and sell it, you may ask a low price or a high one. But if you keep the hammer and use it to drive nails, who can doubt its worth?"

--Two Cult Koans

Conversely, to show the worth of clarity you actually have to go drive some nails with it.

Replies from: pjeby
comment by pjeby · 2015-02-07T18:01:36.039Z · LW(p) · GW(p)

Yeah but it's also easy to falsely label a genuine problem as "practically already solved".

Yeah... but then that's your second problem. ;-)

And that problem exists only in the map, and can be resolved by getting clarity. ;-)

Replies from: FeepingCreature
comment by FeepingCreature · 2015-02-07T20:00:20.120Z · LW(p) · GW(p)

Haha. True!

comment by Ben Pace (Benito) · 2015-02-17T20:06:48.016Z · LW(p) · GW(p)

Quote from the comedian Frankie Boyle in his new blog post, about people deciding whether they're offended by his jokes:

Also, a lot of people would form an opinion about the joke without having heard it. It's a feature of late capitalism that we get a lot of information thrown at us, and we have to make snap decisions and form strong opinions without really knowing anything. Sure, if our football club buys a new centre half we might do a bit of research. But often we're just being asked if we should bomb Syria or not, and we're busy, and we just have to say fuck it, yeah, my mate Gavin's in the army, so yeah.

comment by Kenny · 2015-02-08T18:51:31.484Z · LW(p) · GW(p)

I don’t think you can reason people out of positions they didn’t reason themselves into.

Ben Goldacre

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-02-08T21:33:53.266Z · LW(p) · GW(p)

This is much older than Ben Goldacre. The earliest appearance known to wikiquote is in 1851, in Scientific American. The origin is unknown.

It also sounds like Deep Wisdom. Let's try a few variations:

Once a man's reasoned himself into something, the Devil himself won't reason him out of it.
-- this might be plausibly attributed to Thomas Edison.

No faith can long survive the light of thought,
Which as the salt sea eats away the iron:
However stoutly it may seem to stand
It yields to sleepless rust and falls apart.
When Epicurus "Atoms!" spoke, that light
Once kindled ever burned, nor all the priests
That spark could e'er extinguish from men's souls
Wherein it worked to split the true from false
And though it take a thousand years, yet still
As iron yields to rust, and stone to frost
Inexorable reason knows no sleep.
That Church most catholic that holds itself
To be the true custodian of the faith:
'Tis but fire-hardened wood, not ageless steel,
Which seems at first to gain access of strength
From that which would destroy it, but i' the end
Must split and crumble into smoking ash.
Like to the basilisk gaze is reason's light
That scours away the shadows of false night.
-- some suitable Victorian figure might be found to ascribe this doggerel to.

Replies from: Epictetus
comment by Epictetus · 2015-02-09T05:48:35.868Z · LW(p) · GW(p)

Found the following in Jonathan Swift's Letter to a Young Clergyman:

reasoning will never make a man correct an ill opinion, which by reasoning he never acquired

Replies from: gjm
comment by gjm · 2015-02-09T11:24:04.616Z · LW(p) · GW(p)

So now Richard_Kennaway's comment says that the earliest appearance known to Wikiquote is from 1851 ... but it links to Wikiquote's page about Jonathan Swift, with that same quotation. Richard, did you half-edit your comment?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-02-09T13:05:37.815Z · LW(p) · GW(p)

No, Google led me to a version of the quote in the section of unverified Swift quotes. I didn't notice that the same thought with slightly different wording appears earlier with a verifiable attribution.

Replies from: gjm
comment by gjm · 2015-02-09T14:01:23.475Z · LW(p) · GW(p)

Oh yes. How strange. I've tweaked the WQ page to make this less likely to recur.

comment by g_pepper · 2015-02-01T17:43:05.055Z · LW(p) · GW(p)

Removed due to formatting issues.

comment by Epictetus · 2015-02-09T06:20:55.403Z · LW(p) · GW(p)

He who wants to know too much knows in the end nothing, and he who conversely believes that some things do not concern him, often victimizes himself, when, for example, the philosopher believes that history is dispensable to him.

-Immanuel Kant, Logic

Replies from: Epictetus
comment by Epictetus · 2015-02-09T06:21:54.613Z · LW(p) · GW(p)

While we're on the subject, here's another:

"But it's a nice thing, surely, to be familiar with a lot of subjects." Well, in that case let us retain just as much of them as we need. Would you consider a person open to criticism for putting superfluous objects on the same level as really useful ones by arranging on display in his house a whole array of costly articles, but not for cluttering himself up with a lot of superfluous furniture in the way of learning? To want to know more than is sufficient is a form of intemperance. Apart from which this kind of obsession with the liberal arts turns people into pedantic, irritating, tactless, self-satisfied bores, not learning what they need simply because they spend their time learning things they will never need.

-Seneca, Letter 88

comment by hairyfigment · 2015-02-08T19:11:29.748Z · LW(p) · GW(p)

Bean came up with his plan (the Third Invasion) because he thought there was no ansible, so the invasion fleet would hit the formics just when they found out they had lost the Second Invasion. When he found out there was an ansible, if he'd had the slightest sense in his head, he should have realised that the formics therefore knew they lost the moment it happened, seventy years ago, and thus could have launched their own fleet just as long ago.

-- Will Wildman
comment by Torello · 2015-02-03T15:15:31.312Z · LW(p) · GW(p)

And even when "truth" can be clearly defined, it is a concept to which natural selection is indifferent.

-Robert Wright, The Moral Animal, p. 272

Replies from: Lumifer
comment by Lumifer · 2015-02-03T17:54:07.691Z · LW(p) · GW(p)

This seems to be just wrong. If your map significantly doesn't match the territory, natural selection is likely to be brutal to you.

Replies from: Torello, hairyfigment, Torello
comment by Torello · 2015-02-04T05:22:41.311Z · LW(p) · GW(p)

Maybe I should have included the whole paragraph:

"And even when "truth" can be clearly defined, it is a concept to which natural selection is indifferent. To be sure, if an accurate portrayal of reality, to oneself or to others can help spread one's genes, then accuracy of perception or communication may evolve. And often this will be the case (when, say, you remember where food is stored, and share the data with offspring or siblings). But when accurate reporting and genetic interest do thus intersect, that's just a happy coincidence. Truth and honesty are never favored by natural selection in and of themselves. Natural selection neither "prefers" honesty nor "prefers" dishonesty. It just doesn't care."

He's talking about the "maps" that humans/animals may carry in their brains. These maps don't need to match the territory to be adaptive (I think your criticism of the quote hinges on how you would define "significantly"). But there's quite a bit of space where a "bad map" does not prevent adaptive behavior.

For example, some non-venomous snakes "copied" the color patterns of venomous snakes. It's still adaptive for animals to avoid all snakes with this coloring (just to be safe) without needing to know the truth about which snake is dangerous and which isn't. And natural selection is "rewarding" the non-venomous snake for lying about how dangerous it is.

Replies from: dspeyer, Lumifer, 27chaos, fortyeridania
comment by dspeyer · 2015-02-19T08:22:37.996Z · LW(p) · GW(p)

This seems to be conflating possessing truth and sharing truth. The former is almost always valuable. The latter is an interesting bit of game theory, that can go either way.

As it has been said, truth may be spoken as events dictate, but should be heard on every occasion.

comment by Lumifer · 2015-02-04T15:39:08.620Z · LW(p) · GW(p)

These maps don't need to match the territory to be adaptive (I think your criticism of the quote hinges on how you would define "significantly")

Partially that, but also partially about the direction of the gradient. First, maps never match the territory perfectly; they are always simplified models. In that sense, of course, a map not "matching" the territory is not an obstacle to surviving and prospering.

However, I would claim that the greater the mismatch between the map and the territory, the greater the disadvantage the creature accrues in the natural selection game. If, magically, you get a choice between getting a more accurate map or a less accurate map, you should always choose the more accurate map.

It's still adaptive for animals to avoid all snakes with this coloring (just to be safe) without needing to know the truth about which snake is dangerous and which isn't.

That is not true -- you set up the question wrong. There are three maps involved: map 1 does not recognize venomous snakes at all; map 2 confuses venomous and mimicry-using snakes; and map 3 successfully distinguishes between venomous snakes and mimicry-using ones.

Map 3 matches the territory better than map 2 which matches the territory better than map 1. The natural selection would give advantage to an animal with map 3 over the one with map 2, and the one with map 2 over the one with map 1.
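The three-maps argument can be sketched as a toy expected-payoff calculation (the payoffs and encounter probabilities below are my own illustrative numbers, not anything from the thread):

```python
# Toy expected-payoff comparison for the three maps above.
# All numbers are made up purely for illustration.

P_VENOMOUS, P_MIMIC, P_CLEAR = 0.05, 0.05, 0.90  # encounter probabilities
BITE, FORAGE, AVOID = -100.0, 1.0, 0.0           # payoff per encounter type

def expected_payoff(approaches_venomous, approaches_mimic):
    """Expected payoff per encounter for an animal carrying a given map."""
    venomous = BITE if approaches_venomous else AVOID
    mimic = FORAGE if approaches_mimic else AVOID
    return P_VENOMOUS * venomous + P_MIMIC * mimic + P_CLEAR * FORAGE

map1 = expected_payoff(True, True)    # recognizes no snakes: approaches everything
map2 = expected_payoff(False, False)  # confuses mimics with venomous: avoids both
map3 = expected_payoff(False, True)   # distinguishes: avoids only the venomous

assert map3 > map2 > map1  # the more accurate map never does worse
```

With any numbers in this shape (a bite costs more than a missed foraging opportunity), the ordering map 3 > map 2 > map 1 falls out, which is the gradient being claimed.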

Replies from: Torello
comment by Torello · 2015-02-05T00:23:53.329Z · LW(p) · GW(p)

If, magically, you get a choice between getting a more accurate map or a less accurate map, you should always choose the more accurate map.

I think the point he's trying to make is that natural selection doesn't magically get a choice between maps. In general, a more accurate map will only become available to the mind of some creature if it happens to be adaptive for genes in a particular population in a particular environment.

Think of all the creatures with really bad maps. In terms of reproduction, they are doing just fine. For some species, their relative reproductive success can be improved with more accurate maps, but that's a means to the end of reproductive success.

There are many ways to "make a living" in evolutionary terms, and having a mind with accurate maps is only one of them.

Think of all the creatures who don't have "maps" as humans do. They are still being acted upon by natural selection.

comment by 27chaos · 2015-02-09T02:36:37.513Z · LW(p) · GW(p)

The idea that it is just "a happy coincidence" makes me think Lumifer's criticism still applies.

comment by fortyeridania · 2015-02-04T06:51:54.607Z · LW(p) · GW(p)

This additional context does help; thanks.

It's still adaptive for animals to avoid all snakes with this coloring (just to be safe)

Yes, this could be adaptive, but not costless. An animal that avoids all snakes that look venomous misses out on some opportunities (e.g., foraging for food in a tree occupied by a harmless but dangerous-seeming snake). The opportunity cost, in reproductive terms, might be negligible, or it might matter, depending on the specifics. (Here I'm agreeing with you when you point to the importance of the term "significantly.")

Because the truth, even in small matters like snake coloration, can make a difference, the original quotation is an overstatement.

Replies from: Torello
comment by Torello · 2015-02-05T00:43:00.407Z · LW(p) · GW(p)

Because the truth, even in small matters like snake coloration, can make a difference, the original quotation is an overstatement.

All natural selection "cares about" is genes copied. Claws, peacock tail feathers, and "maps" can all "make a difference," but natural selection only acts on the results (genes copied); natural selection itself doesn't favor any particular kind of adaptation, which is why I think the original quote is not an overstatement.

comment by hairyfigment · 2015-02-03T19:03:22.398Z · LW(p) · GW(p)

As a general statement the quote seems absurd (except in the sense that natural selection has no mind and is thus indifferent to everything). But note that our ancestors' beliefs interacted with evolution by way of predictions (conscious or unconscious) and not by truth directly. Our normative 'beliefs' may have had reproductive value in other ways, but this need have nothing to do with moral truth as defined by those same norms.

Replies from: Lumifer, ravenlewis, Torello
comment by Lumifer · 2015-02-03T19:10:18.119Z · LW(p) · GW(p)

our ancestors' beliefs interacted with evolution by way of predictions (conscious or unconscious) and not by truth directly

I don't understand what this means.

this need have nothing to do with moral truth

Is this quote talking about morality and moral "truth"? That's not how I read it -- if it does, it needs more context.

Replies from: hairyfigment
comment by hairyfigment · 2015-02-03T19:15:49.921Z · LW(p) · GW(p)

I don't understand

Evolution determines gene survival by (to use a hypothetical example) whether or not you can throw a spear, rather than whether or not you have an accurate picture of the underlying physics.

Is this quote talking about morality and moral "truth"?

Beats me, I just read the title underneath.

comment by ravenlewis · 2015-02-04T06:09:47.994Z · LW(p) · GW(p)

yes, i would like to agree with you sir :)

comment by Torello · 2015-02-04T06:00:12.283Z · LW(p) · GW(p)

He's using "indifferent" metaphorically. He would completely agree that natural selection has no mind.

What he means is that natural selection operates on differential rates of reproduction of genes, not on the accuracy/truth of the beliefs that the mind of an individual holds.

comment by Torello · 2015-02-05T00:35:31.151Z · LW(p) · GW(p)

If your map significantly doesn't match the territory, natural selection is likely to be brutal to you.

Another way to think about his idea:

Natural selection is equally brutal to all life. Moss and ants have horrible maps, but they are still successful in terms of natural selection.

Replies from: Lumifer
comment by Lumifer · 2015-02-05T07:38:22.992Z · LW(p) · GW(p)

Moss and ants have horrible maps

That's not self-evident to me. They certainly have very limited maps, but I don't know if these limited maps are bad.

Replies from: Torello
comment by Torello · 2015-02-06T00:36:03.975Z · LW(p) · GW(p)

I agree that "limited" is a better word than "horrible".

What I meant by "horrible" is that, relative to human maps, ant maps are extremely limited; they do not represent "truth" or reality as well to the same scope or accuracy of human maps.

I think the point is that even though ant maps are limited, they can still be adaptive. Natural selection is indifferent to the scope/accuracy of a map in and of itself.

Replies from: Wes_W
comment by Wes_W · 2015-02-06T05:53:37.592Z · LW(p) · GW(p)

Critically, the areas in which ant maps are limited are the areas in which natural selection doesn't kill them for it. Colonies that think food is in places where food isn't starve to death.

Replies from: Torello
comment by Torello · 2015-02-06T23:55:55.545Z · LW(p) · GW(p)

Yes, you've hit on the main point. Survival (and later on, reproductive value) is what matters. The existence of the map or its accuracy matters only insofar as it contributes to reproductive success.

Natural selection doesn't "reward" them for having an accurate map, only a map that helps them live and reproduce.

Replies from: Wes_W
comment by Wes_W · 2015-02-08T05:29:17.731Z · LW(p) · GW(p)

Yes...? And one of the key traits of a useful map is accuracy.

I mean, yes, clearly natural selection doesn't value truth for its own sake. It certainly does favor truth for instrumental reasons. I'm uncomfortable phrasing this as "indifferent to truth" as the original quote did. But perhaps we're talking past each other, here.

comment by elharo · 2015-02-13T11:54:48.548Z · LW(p) · GW(p)

If you want to use google instead of science to "prove me wrong" then I am happy to call you an imbecile as well as misinformed.

-- Jennifer Hibben-White, "My 15-Day-Old Son May Have Measles", 02/11/2015

Replies from: ChristianKl, alienist
comment by ChristianKl · 2015-02-13T13:38:24.938Z · LW(p) · GW(p)

That's likely not an effective strategy for convincing people to change their opinion. The article likely would be more persuasive to anti-vaxxers if it didn't contain that line.

comment by alienist · 2015-02-14T05:46:23.949Z · LW(p) · GW(p)

The amusing thing is that Jennifer Hibben-White is no more using science than her opponents, and is probably using Google just as much.

comment by ike · 2015-02-02T19:03:40.410Z · LW(p) · GW(p)

If a person doesn’t believe climate change is real, despite all the evidence to the contrary, is that a case of a dumb human or a science that has not earned credibility? We humans operate on pattern recognition. The pattern science serves up, thanks to its winged monkeys in the media, is something like this:

Step One: We are totally sure the answer is X.

Step Two: Oops. X is wrong. But Y is totally right. Trust us this time.

Science isn’t about being right every time, or even most of the time. It is about being more right over time and fixing what it got wrong. So how is a common citizen supposed to know when science is “done” and when it is halfway to done which is the same as being wrong?

You can’t tell. And if any scientist says you should be able to tell when science is “done” on a topic, please show me the data indicating that people have psychic powers.

So maybe we should stop scoffing at people who don’t trust science and ask ourselves why. Ignorance might be part of the problem. But I think the bigger issue is that science is a “mostly wrong” situation by design that is intended to become more right over time. How do you make people trust a system that is designed to get wrong answers more often than right answers? And should we?

Scott Adams

(I think he is wrong about what most climate skeptics are thinking. It seems to me more of a selective reading thing; if the media you see tells you that it's fiercely debated, you're going to think it's fiercely debated by default, rather than know enough to look up the actual state of the field.)

Replies from: Vaniver, ChristianKl
comment by Vaniver · 2015-02-02T20:53:34.417Z · LW(p) · GW(p)

I think he is wrong about what most climate skeptics are thinking.

My personal experience is that I've mostly seen two main camps of climate skepticism, both of which seem to map to contrarian sophistication levels. I don't see many people who are operating at level 2 and are also climate skeptics.

The first is the 'uneducated' critique, that nature is simply too big and variable for man to impact the climate, be sure he's impacting the climate, or have a desired climate reference level. This seems to mostly be out of touch with the data / scientific reasoning in general, but does fit into Adams's claim; one of the reasons why someone might disbelieve claims that we're sure about the causality and predictions is an overall poor track record of science. In this particular field, for example, some predictions of global cooling were made before, and people who value consistency more than correctness are upset by that. (It's worth pointing out that many people do give the argument "you're able to predict what the climate will look like in 80 years but you aren't able to predict what the weather will look like in 8 days?" despite the inherent difference between climate and weather, but that's mostly unrelated to Adams's point.)

The second is the 'meta-contrarian' critique, that pokes at the incentives of climate science, the difficulties of modeling, and the desirability of change. As an exercise in scientific number-crunching, climate predictions are very difficult and in a class of models where many tunable parameters can be adjusted to get highly variable results. Most of our understanding of how the climate will behave depends on the underlying feedback loops, and it seems that positive feedback loops (i.e. the temperature increases, which changes things so that temperatures continue to increase) are more publicized than negative feedback loops (i.e. the temperature increases, which changes things so that temperatures stop increasing). There's also evidence that the net effect of climate change will be positive until it's negative, suggesting that stopping change down the road would actually be better than stopping change now (if it were equally costly to stop change now and then).

(Note that both of those camps basically disagree with climatology as a field, for different reasons, and neither of them buys into the central premises of climatology while merely interpreting the data differently.)

Replies from: Lumifer
comment by Lumifer · 2015-02-02T20:58:58.967Z · LW(p) · GW(p)

/waves

Hello from the second camp :-)

comment by ChristianKl · 2015-02-02T23:08:34.816Z · LW(p) · GW(p)

There's the xkcd comic that asks, regarding the moon landing: "If NASA were willing to fake great accomplishments, they'd have a second one by now."

It's mean, but given the fake NASA discovery that "expands the definition of life" it's funny. At a time when jokes like that can be made, there's a real question of where the trust is supposed to come from.

Replies from: JoshuaZ, Kenny
comment by JoshuaZ · 2015-02-03T00:05:53.714Z · LW(p) · GW(p)

It's mean, but given the fake NASA discovery that "expands the definition of life" it's funny.

Do you mean the reports of life able to survive in high arsenic environments? In that context it may be important to note that that was poorly done science not deliberate fakery. It is pretty difficult to fake landing on the Moon out of sheer incompetence.

Replies from: ChristianKl
comment by ChristianKl · 2015-02-03T00:28:56.140Z · LW(p) · GW(p)

In that context it may be important to note that that was poorly done science not deliberate fakery.

Yes, but incompetence still doesn't encourage general trust in science, and NASA should have known better than to announce the bacteria that supposedly live in high-arsenic environments as a discovery that "expands the definition of life" at a big press conference.

comment by Kenny · 2015-02-09T04:03:54.640Z · LW(p) · GW(p)

I can easily understand how someone could consider everything NASA (or technologists generally) claims to do as being faked. Everything they claim to do is really hard to verify for almost anyone. And, a lot of it might actually be easier to pull off by faking it – CGI is pretty impressive nowadays and it's not that hard to believe that a lot of images and even video are manipulated or even generated from whole cloth.

If you had to verify, personally, that the ESA actually controlled a spacecraft that orbited a comet, etc., how would you do it? Myself, I accept that I'm really trusting a network of people and that I can't practically verify almost anything I'm told.

Replies from: Nornagest, ChristianKl
comment by Nornagest · 2015-02-09T18:31:39.421Z · LW(p) · GW(p)

If you had to verify, personally, that the ESA actually controlled a spacecraft that orbited a comet, etc., how would you do it?

Good question. Intercepting the data stream sent back from the spacecraft would probably be possible (direct imaging at that range isn't in the cards), but it would take some rather sensitive equipment. It might be possible to find amateur astronomers who tracked it during its launch or during its flybys of Earth in 2005, 2007, and 2009, though, and derive a trajectory from that; it's not "personal", but if you don't trust that kind of data, you'd be getting far into conspiracy-theory territory.

That'd only get you so far, though. Rosetta's flight plan was pretty complicated and included both several gravity-assist flybys and maneuvers under its own power, so if you doubt ESA's ability to do anything other than get mass near the comet, that'd be tricky to verify.

ETA: Googled "amateur spacecraft tracking" and found a response to almost precisely this question. Turns out there are a few amateur groups with the resources to find the carrier signals from deep-space probes. They even have a Yahoo group.

Replies from: Kenny
comment by Kenny · 2015-02-10T00:59:33.473Z · LW(p) · GW(p)

Great response. You're not fully resolving the potential skepticism I identified, but that's impossible anyways. What should be ultimately convincing is that good theories generate good predictions, and you should expect good theories to be connected to other good theories.

Unfortunately, I think a lot of people are firmly in "conspiracy-theory territory" already and aren't consistently testing their beliefs. I can sympathize because I know I spend a lot of time generating and trusting weak theories about, e.g. other people's motivations, my likely performance on a particular project.

comment by ChristianKl · 2015-02-09T10:25:51.794Z · LW(p) · GW(p)

In 2010 NASA held a press conference announcing a discovery that supposedly expands the definition of life. Today the consensus among scientists seems to be that the finding is bullshit.

While incompetence is likely the better explanation than malice, it's still a fake.

Myself, I accept that I'm really trusting a network of people and that I can't practically verify almost anything I'm told.

The point is that the network you are trusting was likely wrong about a big discovery that NASA claimed to have made in this decade. Maybe even NASA's biggest claimed discovery of the decade.

Replies from: gjm, Kenny
comment by gjm · 2015-02-09T11:20:32.839Z · LW(p) · GW(p)

the network you are trusting was likely wrong about a big discovery that NASA claimed to have made

I have no idea exactly what network Kenny trusts how much, but just about everything I read about NASA's alleged discovery was really skeptical about it and said "yeah, this would be amazingly cool if it were true, but don't hold your breath until it's been confirmed by more careful investigation". And, lo, it was not confirmed by more careful investigation, and now everyone thinks it was probably bullshit.

Much the same story for superluminal neutrinos (more so than the arsenic-using life) and CMB polarization due to inflation (less so than the arsenic-using life).

Replies from: ChristianKl, elharo
comment by ChristianKl · 2015-02-09T11:57:34.977Z · LW(p) · GW(p)

Much the same story for superluminal neutrinos (more so than the arsenic-using life) and CMB polarization due to inflation (less so than the arsenic-using life).

In the case of the neutrinos, there was much more skepticism in the announcement on the part of the people who made the discovery.

Replies from: gjm
comment by gjm · 2015-02-09T14:03:08.645Z · LW(p) · GW(p)

Yup, but I don't think that's relevant to how reliable the people Kenny trusts to tell him about scientific research are.

comment by elharo · 2015-02-13T12:01:08.707Z · LW(p) · GW(p)

In the case of superluminal neutrinos, pretty much nobody including the people who made the announcement believed it; and the real announcement was more along the lines of "we've got some problematic data here; and we can't find our mistake. Does anyone see what we've done wrong?"

comment by Kenny · 2015-02-10T01:03:09.246Z · LW(p) · GW(p)

Good point. But my trusting a network of people, or really many (overlapping) networks of people, doesn't mean that I trust every specific claim or theory or piece of information. It just means that I've learned that they're overall trustworthy, or trustworthy to a specific (perhaps even quantifiable) extent, or maybe only trustworthy for certain kinds of claims or theories or information.

comment by Glen · 2015-02-02T18:31:50.758Z · LW(p) · GW(p)

We can't go back, Mat. The Wheel has turned, for better or worse. And it will keep on turning, as lights die and forests dim, storms call and skies break. Turn it will. The wheel is not hope, and the Wheel does not care, the Wheel simply is. But so long as it turns, folk may hope, folk may care. For with light that fades, another will eventually grow, and each storm that rages must eventually die.

-- Thom Merrilin, in The Gathering Storm by Robert Jordan and Brandon Sanderson

(For those unfamiliar with the series, the Wheel is basically reality/the universe)

comment by Salemicus · 2015-02-02T18:23:42.710Z · LW(p) · GW(p)

Indecision and delays are the parents of failure.

George Canning, source.

Replies from: Lumifer
comment by Lumifer · 2015-02-02T18:29:34.420Z · LW(p) · GW(p)

Failure is the child of a... very extended family, including impulsiveness and hot-headedness.

Replies from: Salemicus
comment by Salemicus · 2015-02-03T09:12:19.188Z · LW(p) · GW(p)

Agreed. Very often the opposite of these quotations could also be a rationality quote. No such quote is ever going to cover the full spectrum, and each will always come from the speaker's experience and reflect his personality.

In context, then, it is worth pointing out that Canning was a politician who achieved perhaps his greatest success in wartime, and who is otherwise best known for running a highly active and aggressive (and successful) foreign policy, largely independently of the rest of the government. He's also notorious for having a duel against another Cabinet minister, which hugely damaged his career, so "impulsiveness and hot-headedness" is a yes. As Muir wrote:

[Canning had] immense confidence in his own ability, [and] often inspired either great friendship or deep dislike and distrust...he was a passionate, active, committed man who poured his energy into whatever he undertook. This was his strength and also his weakness.

Emphasis mine.

comment by Vaniver · 2015-02-01T15:55:25.168Z · LW(p) · GW(p)

For example: most men have inner conflicts of values; these conflicts, in most lives, take the form of small irrationalities, petty inconsistencies, mean little evasions, shabby little acts of cowardice, with no crucial moments of choice, no vital issues or great, decisive battles--and they add up to the stagnant, wasted life of a man who has betrayed all his values by the method of a leaking faucet.

--Ayn Rand, The Romantic Manifesto

Replies from: emr, elharo, None
comment by emr · 2015-02-03T04:32:38.238Z · LW(p) · GW(p)

A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines.

-- Emerson, Self-Reliance

Perhaps these are two prescriptions for two different patients: The fox and the hedgehog!

comment by elharo · 2015-02-01T18:08:19.283Z · LW(p) · GW(p)

Absent context, I notice I'm confused about which sense of the word "values" she's using here. Perhaps someone can elucidate? In particular is she talking about moral/ethical type values or is she using it in a broader sense that we might think of as goals?

Replies from: hwold, Kenny
comment by hwold · 2015-02-02T14:11:12.913Z · LW(p) · GW(p)

Can’t tell for the Romantic Manifesto, but in Atlas Shrugged, Ayn Rand uses the word “value” as a synonym of “rule of conduct”. For example, she argues that “rational evaluation” is a correct value for man in the same way that “flying” is a correct value for birds.

She calls her philosophy Objectivism because she thinks that correct values, meaning rules of conduct that lead to environmental fitness (in her words: “survival”), are objective.

comment by Kenny · 2015-02-09T00:41:26.315Z · LW(p) · GW(p)

Funny enough, I'm confused by your distinction between moral or ethical values and goals – aren't those really the same?

Ayn Rand held that some preferences were rational or more rational than others.

comment by [deleted] · 2015-02-01T16:59:01.807Z · LW(p) · GW(p)

The first sentence read like sound logic to me (personally, not speaking for other guys), but I couldn't really figure what the hell she wants from my life from all the rest.

I could explain, but that would be A. a spoiler and B. a potential (and equally idiotic, aka "reach a conclusion already, idiots" type of) flamewar that shouldn't be here.

Replies from: Vaniver
comment by Vaniver · 2015-02-01T17:13:17.636Z · LW(p) · GW(p)

The first sentence read like sound logic to me (personally, not speaking for other guys), but I couldn't really figure what the hell she wants from my life from all the rest.

As far as I can tell, that quote is one sentence, so I'm not sure how you're dividing it up.

Replies from: None
comment by [deleted] · 2015-02-01T17:29:51.265Z · LW(p) · GW(p)

"For example: most men have inner conflicts of values;"

This part. It's an issue I can solve quite easily, but at the same time it feels like the bar is being set lower, rather than higher, and that's what irks me.

Replies from: Vaniver
comment by Vaniver · 2015-02-01T18:44:58.680Z · LW(p) · GW(p)

it feels like the bar is being set lower, rather than higher, and that's what irks me.

The quote is exhibiting a thing not to do, so that doesn't seem that surprising to me. I'd restate the quote as something like "resolve your inner conflicts, even if that requires dramatic effort, rather than letting them fester."

comment by WalterL · 2015-02-02T19:08:07.704Z · LW(p) · GW(p)

“What about honor and ethics?” “We’ve got honor in us, but it’s our own code...not the make-believe rules some frightened little man wrote for the rest of the frightened little men. Every man’s got his own honor and ethics, and so long as he sticks to ’em, who’s anybody else to point the finger? You may not like his ethics, but you've no right to call him unethical.”

-Alfred Bester, The Demolished Man, ch. 6, p. 84 (per Wikiquote)

Replies from: alienist, wadavis
comment by alienist · 2015-02-03T03:04:00.084Z · LW(p) · GW(p)

You may not like his ethics, but you've no right to call him unethical.

In that case who is Alfred Bester to tell me who I can or can't call "unethical"?

comment by wadavis · 2015-02-03T17:21:56.992Z · LW(p) · GW(p)

I like how this touches on respecting the agency of others. The idea that: I do not like how you operate, but I will respect that you are a thinking person, and for that reason alone, deserving of respect.

comment by Shmi (shminux) · 2015-02-05T03:03:42.713Z · LW(p) · GW(p)

An open mind is like to an open wound. Vulnerable to poison. Liable to fester. Apt to give its owner only pain.

Joe Abercrombie, Before They Are Hanged

Replies from: Richard_Kennaway, MarkusRamikin
comment by Richard_Kennaway · 2015-02-05T17:26:01.521Z · LW(p) · GW(p)

A closed mind is like to a cyst. Vulnerable to rot. Liable to fester. Apt to give its owner only pain.

Me, as an exercise in improvising a contrary Deep Wisdom.

Replies from: Kindly, shminux
comment by Kindly · 2015-02-06T00:19:09.067Z · LW(p) · GW(p)

It's not as snappy, because "open mind" and "open wound" have a parallel structure that your example lacks.

A closed mind is like to closed-angle glaucoma. Can happen suddenly. Is often painful. Can lead to loss of vision.

comment by Shmi (shminux) · 2015-02-05T20:27:38.266Z · LW(p) · GW(p)

Striking the right balance is a tricky thing.

comment by MarkusRamikin · 2015-02-05T06:46:38.114Z · LW(p) · GW(p)

An open mind is like a fortress with its gates unbarred and unguarded.

Isador Akios

(Even if I doubt Isador had "harmful memes" and "infohazards" in mind the way I do when I think of this quote...)

comment by advancedatheist · 2015-02-02T02:10:54.196Z · LW(p) · GW(p)

Just something I've observed:

When "denial of death" can actually keep people from dying, we call it "effective health care," or words to that effect.

Replies from: somnicule
comment by somnicule · 2015-02-02T02:47:38.982Z · LW(p) · GW(p)

Wouldn't this be more ore appropriate for an open thread than the quotes thread?