Rationality Quotes March 2012
post by Thomas · 2012-03-03T08:04:55.112Z · LW · GW · Legacy · 534 comments
Here's the new thread for posting quotes, with the usual rules:
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself
- Do not quote comments/posts on LW/OB
- No more than 5 quotes per person per monthly thread, please.
534 comments
Comments sorted by top scores.
comment by baiter · 2012-03-02T12:52:37.500Z · LW(p) · GW(p)
"...I always rejoice to hear of your being still employ'd in experimental Researches into Nature, and of the Success you meet with. The rapid Progress true Science now makes, occasions my regretting sometimes that I was born so soon. It is impossible to imagine the Height to which may be carried, in a thousand years, the Power of Man over Matter. We may perhaps learn to deprive large Masses of their Gravity, and give them absolute Levity, for the sake of easy Transport. Agriculture may diminish its Labor and double its Produce; all Diseases may by sure means be prevented or cured, not excepting even that of Old Age, and our Lives lengthened at pleasure even beyond the antediluvian Standard. O that moral Science were in as fair a way of Improvement, that Men would cease to be Wolves to one another, and that human Beings would at length learn what they now improperly call Humanity!"
-- Benjamin Franklin, Letter to Joseph Priestley, 8 Feb 1780
Replies from: Stabilizer, None, peter_hurford↑ comment by Stabilizer · 2012-03-04T05:54:09.854Z · LW(p) · GW(p)
One of the first transhumanists?
Replies from: Jayson_Virissimo, Will_Newsome↑ comment by Jayson_Virissimo · 2012-03-05T08:29:56.839Z · LW(p) · GW(p)
The hard core of transhumanism goes back to at least the Middle Ages, possibly earlier.
Replies from: Stabilizer, Will_Newsome↑ comment by Stabilizer · 2012-03-05T08:35:52.553Z · LW(p) · GW(p)
Interesting. Which particular philosophers do you have in mind?
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2012-03-05T09:44:52.247Z · LW(p) · GW(p)
Primarily, I had the Arabic-speaking philosophical alchemists in mind, but there are others. If there is significant interest, then I will elaborate further.
Replies from: Jayson_Virissimo, None, Bugmaster, Ezekiel↑ comment by Jayson_Virissimo · 2012-03-06T12:52:01.437Z · LW(p) · GW(p)
Okay, 2 comments and 3 upvotes is good enough for a quick comment but not a discussion post.
By the "hard core of transhumanism" I mean the belief that humans can use reason to obtain knowledge of the natural world, that we can use that knowledge to develop technologies that will allow us to cure sickness, eliminate the need to labor, and extend our lifespans beyond natural human limits, and that we should do these things.
During the Islamic Golden Age, many thinkers combined Aristotelianism and Neoplatonism with knowledge from indigenous craft traditions into a form of alchemy that was refined using logic and laboratory experimentation (Jābir ibn Hayyān is probably the most famous of these thinkers). These philosophers and technologists believed that their theoretical system would allow them to perform transmutation of matter (turn one element into another), unlocking the ability to create almost any "machine" or medicine imaginable. This was thought to allow them to create the al ixir (elixir) of Al Khidr fame which, in principle, could extend human life indefinitely and cure any kind of disease. Also of great interest was the attainment of takwin: artificial, laboratory-created "life" (even including the intelligent kind). It was hoped (by some) that these artificial creations (called a homunculus by Latin speakers, and analogous to the Jewish golem) could do the work of humans the way angels do Allah's work. Not only could these AIs do our work for us, they could continue our scientific enterprise. According to William Newman, such an AI or robot "...of the pseudo-Plato and Jabir traditions could not only talk - it could reveal the secrets of nature." Sound familiar?
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-03-06T13:36:27.345Z · LW(p) · GW(p)
Was there any speculation about the Friendly takwin problem?
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2012-03-06T13:44:48.818Z · LW(p) · GW(p)
Not that I know of, but you would think they would have, since they were familiar with how badly you could end up screwing yourself dealing with Jinn, even though the Jinn would do exactly what you told them to (literally). There are a great many Arabic texts that historians of science have yet to examine. Who knows, maybe we'll luck out and find the solution to the FAI problem in some library in Turkey.
Replies from: Strange7↑ comment by Strange7 · 2012-03-07T08:05:40.418Z · LW(p) · GW(p)
Might also have been an attitude like a lot of people have today, along the lines of:
Let's build something that works repeatably for trivial stuff under laboratory conditions first, to see what that tells us about the fundamental capabilities and limitations. We can spec out a control mechanism resilient enough to keep working in the field once someone's actually planning a full-scale prototype.
↑ comment by Will_Newsome · 2012-03-05T09:10:21.593Z · LW(p) · GW(p)
Does Imitation of Christ count as transhumanism, or is it too ideologically distinct?
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2012-03-05T09:41:13.797Z · LW(p) · GW(p)
I would say no, because there isn't enough emphasis on technology as the means of achieving post-humanity.
↑ comment by Will_Newsome · 2012-03-04T12:14:24.144Z · LW(p) · GW(p)
"Be perfect, like an FAI is perfect." -- Jesus
↑ comment by Peter Wildeford (peter_hurford) · 2012-03-06T03:38:09.395Z · LW(p) · GW(p)
Benjamin Franklin sure knew how to use the caps. I miss the old days.
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2012-03-06T15:47:45.089Z · LW(p) · GW(p)
The Germans of his day put him to shame.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-03-07T05:13:52.848Z · LW(p) · GW(p)
In fact they still do.
comment by [deleted] · 2012-03-01T21:09:23.142Z · LW(p) · GW(p)
False opinions are like false money, struck first of all by guilty men and thereafter circulated by honest people who perpetuate the crime without knowing what they are doing.
--Joseph de Maistre, Les soirées de Saint-Pétersbourg, Ch. I
Replies from: Ezekiel, Thomas, Multiheaded↑ comment by Ezekiel · 2012-03-05T22:32:25.603Z · LW(p) · GW(p)
I think this quote implies that most false opinions were deliberately invented to further someone's agenda, and I don't think that's true. People's brains just aren't optimised for forming true opinions.
(This is something of a sore point with me, as I've met too many religious people who challenge atheism with "What? You think [famously good guy X] was lying?")
And if you say that "guilty" here means not bothering to properly investigate before forming an opinion, then those who continue circulating it are equally guilty for not bothering to investigate before accepting an opinion.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2012-03-06T02:03:59.385Z · LW(p) · GW(p)
(This is something of a sore point with me, as I've met too many religious people who challenge atheism with "What? You think [famously good guy X] was lying?")
Which exemplifies why "faith" isn't about belief in propositions so much as it is about trust in individuals (including imagined or possible individuals). Many religionists will even tell you so out front: that while the creed is important, having a trust relationship with God (or Jesus, or the Church, or a guru, etc.) is what their faith is all about.
↑ comment by Thomas · 2012-03-02T14:15:44.328Z · LW(p) · GW(p)
Some guilt also falls on those who are not eager enough to verify the opinions, or the money, they circulate.
The man at the top (at the beginning) is NOT guilty of everything.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-03-02T17:38:16.128Z · LW(p) · GW(p)
To my way of thinking, it's quite possible for me to be fully responsible for a chain of events (for example, if they would not have occurred if not for my action, and I was aware of the likelihood of them occurring given my action, and no external forces constrained my choice so as to preclude acting differently) and for other people upstream and downstream of me to also be fully responsible for that chain of events. This is no more contradictory than my belief that object A is to the left of object B from one perspective and simultaneously to the right of object B from another. Responsibility is not some mysterious fluid out there in the world that gets portioned out to individuals; it's an attribute that we assign to entities in a mental and/or social model.
You seem to be claiming that models wherein total responsibility for an event is conserved across the entire known causal chain are superior to mental models where it isn't, but I don't quite see why I ought to believe that.
Replies from: DaFranker↑ comment by DaFranker · 2012-08-06T15:15:40.686Z · LW(p) · GW(p)
You seem to be claiming that models wherein total responsibility for an event is conserved across the entire known causal chain are superior to mental models where it isn't, but I don't quite see why I ought to believe that.
My instinct tells me that dividing one unit of responsibility per outcome among the responsible actors is doomed to reduce to "The full responsibility is equally divided across all the states of the Universe leading up to this point, since any small difference could have led to a different outcome." This would make it awfully similar to the argument that no human can be responsible for any crime in a deterministic universe, since they did not have control over their actions.
To me, it feels anti-bayesian, but I lack the expertise to verify this.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-08-07T02:09:38.994Z · LW(p) · GW(p)
I don't endorse the model of "1 responsibility per outcome" that can be divided.
Neither do I endorse the idea that responsibility is incompatible with a deterministic universe.
Also, I have no idea what you mean by "anti-bayesian" here.
↑ comment by [deleted] · 2012-08-07T02:19:49.550Z · LW(p) · GW(p)
It took me a while, but his post made much more sense to me once I realized he was agreeing with you.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-08-07T02:26:50.592Z · LW(p) · GW(p)
Oh!
Huh.
Yeah, I see what you mean.
↑ comment by DaFranker · 2012-08-07T03:08:19.730Z · LW(p) · GW(p)
Heh, sorry, kind of skipped the preamble there.
Yes, the post was in agreement with you, and attempting to visualize / illustrate / imagine a potential way the model could be shown to be flawed.
As for feeling "anti-bayesian", the idea that a set amount of responsibility exists to be distributed over actors for any event seems completely uncorrelated with reality and independent of any evidence. It feels just like an arbitrary system of categorization, like using "golborf" as a new term for "LessWrong users that own a house, don't brush their teeth daily, drink milk daily, enjoy classical music and don't work in IT-related fields".
That little feeling somewhere that says, "This thing doesn't belong here in my model," that there are freeloading nodes that need to be purged.
↑ comment by Multiheaded · 2012-08-06T11:19:37.137Z · LW(p) · GW(p)
I'm very surprised that this is so upvoted, unless it's just that some of the LW crowd really loves 19th-century right-wing writers. The statement is patently untrue.
Even in regard to hard-line reactionaries themselves and their political circumstances; did de Maistre think that Voltaire or Rousseau or even Robespierre ever consciously produced "false opinions" to befuddle the masses?
No way; even later conservatives, like Burke and Chesterton, have admitted that if the French Revolution went wrong somewhere (and Chesterton thought it was off to a good start), it must have been a mistake, not a crime.
Replies from: Kaj_Sotala, ArisKatsaris↑ comment by Kaj_Sotala · 2012-08-06T11:52:00.930Z · LW(p) · GW(p)
I know ~nothing about the historical events which you allude to, but I upvoted the quote because experience tells me it's very true in real life. E.g. a journalist writes a news article that contains lies about its subject matter, and the link to the article gets widely shared by honest people who presume that it's telling the truth. Or a dishonest scientist makes up his data, and then gets cited by honest scientists.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-08-06T12:34:58.268Z · LW(p) · GW(p)
Oh. In that case, well, it's true about local "opinions" but false about views on global things. Like the so-called free market (which is mostly not free) or the so-called democracy (which is mostly not ruled by the People): I believe that most nominally educated people today have a pretty reasonable assessment of their value: they kinda work, and even bring some standard of living, but do so very ineffectively. So the only "false opinions" on this scale are just ritual statements semi-consciously produced out of fear of empowering the enemies of the present structure. I might make a great and benevolent dictator, but I can't trust my heir; so I'd rather endorse "democracy" steered by experts. Both the "democracy" and the "free market" are part of what we are, therefore we must defend them vigilantly.
Fortunately, we're leaving such close-mindedness behind. Unfortunately, we might have the illusion of not needing any other abstract concepts to use for our social identity. Humans always do! If we don't believe in Democracy, then we must believe in the Catholic Church, or Fascism, or Moldbuggery, or Communism, or Direct Theocracy (like in Banks' Culture). But believe we will.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2012-08-06T17:11:21.302Z · LW(p) · GW(p)
Unfortunately, we might have the illusion of not needing any other abstract concepts to use for our social identity. Humans always do! If we don't believe in Democracy, then we must believe in the Catholic Church, or Fascism, or Moldbuggery, or Communism, or Direct Theocracy (like in Banks' Culture). But believe we will.
This sounds somewhat like the assertion, usually made by religious critics of science, that "everyone believes in something; your faith is in Science" (or Darwin, or the like). Would you care to distinguish these assertions?
↑ comment by ArisKatsaris · 2012-08-06T13:20:08.558Z · LW(p) · GW(p)
I'm very surprised as to why is this so upvoted, other than the fact that some of the LW crowd really loves 19th century right-wing writers.
I don't think it's a very good quote, but I'd guess that the majority of readers didn't know/notice/remember he was a 19th-century right-wing writer. As such, few people would associate this quote with opposition to the French Revolution, or even with politics -- people would first think of such things as religions.
And I'd put money on Mohammed, Joseph Smith and Apostle Paul to have been deliberate conmen. (I'm leaving out Jesus, because I'd put odds on him being just delusional)
comment by Grognor · 2012-03-03T08:57:49.446Z · LW(p) · GW(p)
“Stupider” for a time might not have been a real word, but it certainly points where it’s supposed to. The other day my sister used the word “deoffensify”. It’s not a real word, but that didn’t make it any less effective. Communication doesn’t care about the “realness” of language, nor does it often care about the exact dictionary definitions. Words change through every possible variable, even time. One of the great challenges of communication has always been making sure words mean the same thing to you and your audience.
Replies from: Bugmaster, DSimon
↑ comment by Bugmaster · 2012-03-03T09:54:54.197Z · LW(p) · GW(p)
Or, as the Language Log puts it:
The first thing to say is that the only possible way to settle a question of grammar or style is to look at relevant evidence. I suppose there really are people who believe the rules of grammar come down from some authority on high, an authority that has no connection with the people who speak and write English; but those people have got to be deranged.
Replies from: army1987, Nominull
↑ comment by A1987dM (army1987) · 2012-03-03T11:03:54.168Z · LW(p) · GW(p)
the Language Log
It's Language Log, without the, goddammit!
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2012-03-06T15:50:43.179Z · LW(p) · GW(p)
Without the what? That isn't grammatical.
Replies from: wnoise, RobertLumley, army1987↑ comment by wnoise · 2012-03-11T06:21:46.319Z · LW(p) · GW(p)
Without the fnord, of course.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2012-03-24T11:56:38.694Z · LW(p) · GW(p)
What "of course"?
↑ comment by RobertLumley · 2012-03-06T17:59:21.975Z · LW(p) · GW(p)
Upvoted under the presumption that you're being ironic.
↑ comment by A1987dM (army1987) · 2012-03-11T01:15:46.171Z · LW(p) · GW(p)
Why, do you say “Less Wrong”, or “the Less Wrong”?
↑ comment by Nominull · 2012-03-03T09:58:49.826Z · LW(p) · GW(p)
Swap out "grammar" and "style" for "morality" and "ethics"?
Replies from: Ezekiel↑ comment by Ezekiel · 2012-03-06T14:58:57.333Z · LW(p) · GW(p)
Disagree strongly. What the heck is "evidence" for morality? Unless "emulate X" is one of your values, your ethical system needn't aspire to approximate anything.
Replies from: simplyeric↑ comment by simplyeric · 2012-03-06T18:13:34.654Z · LW(p) · GW(p)
But if you are settling a question of morality, I take it as being a question between multiple people (that's not explicit, but seems to be implicitly part of the above). One's personal ethical system needn't aspire, but when settling a question of group ethics or morality, how do you proceed?
Or for that matter, how do I analyze my own ethics? How do I know if I'm achieving ataraxia without looking at the evidence: do my actions reduce displeasure, etc? The result of my (or other people's) actions are relevant evidence, providing necessary feedback to my personal system of ethics, no?
↑ comment by Ezekiel · 2012-03-06T23:50:18.646Z · LW(p) · GW(p)
Just so we're clear, I'm using "ethics" and "morality" as synonyms for each other and for "terminal values".
If you're settling a dispute, there's no objectively true meta-morality to appeal to, in the way that people's actual speech is the objectively existing state of a language. One party wants some things, the other party wants other things, and depending on what the arbitrator wants, and how much power everyone involved has, the dispute will be settled in a certain way.
As for how you analyze your own ethics: You can't, as far as I know. The question of e.g. "do my actions reduce displeasure?" is only relevant once you've decided you want to reduce displeasure. We make decisions by measuring our actions' impact on reality and then measuring that against our values, but we've got nothing to measure our values against.
↑ comment by DSimon · 2012-03-04T05:55:45.085Z · LW(p) · GW(p)
One of my favorite things about many constructed languages is that they get rid of this distinction entirely. You don't have to worry about whether or not "Xify" is a so-called real word for any given value of X; you only have to check whether X's type fits the pattern. This happens merely because, when you're working from scratch anyway, it's a lot easier to design the language that way than to come up with a big artificial list of -ify words.
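For concreteness, here is a minimal sketch of that kind of type-driven derivation; the rule table and lexicon are hypothetical, not drawn from any actual constructed language:

```python
# Hypothetical derivational morphology: each affix maps an input word type
# to an output word type, so well-formedness is a type check, not a lookup
# in a list of "real words".
AFFIXES = {"ify": ("adjective", "verb")}  # "-ify" turns adjectives into verbs
LEXICON = {"red": "adjective", "offensive": "adjective", "run": "verb"}

def derive(stem: str, affix: str) -> tuple[str, str]:
    in_type, out_type = AFFIXES[affix]
    if LEXICON[stem] != in_type:
        raise ValueError(f"-{affix} wants a(n) {in_type}, got a(n) {LEXICON[stem]}")
    return stem + affix, out_type

print(derive("red", "ify"))  # ('redify', 'verb') -- legal by rule, no word list needed
```

Any stem of the right type composes with the affix, which is exactly why no big artificial "-ify word list" has to exist.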
comment by Peter Wildeford (peter_hurford) · 2012-03-01T17:19:45.407Z · LW(p) · GW(p)
The problem, often not discovered until late in life, is that when you look for things in life like love, meaning, motivation, it implies they are sitting behind a tree or under a rock. The most successful people in life recognize, that in life they create their own love, they manufacture their own meaning, they generate their own motivation. For me, I am driven by two main philosophies, know more today about the world than I knew yesterday. And lessen the suffering of others. You'd be surprised how far that gets you.
-- Neil DeGrasse Tyson
Replies from: fortyeridania, DanielLC↑ comment by fortyeridania · 2012-03-02T02:01:50.810Z · LW(p) · GW(p)
Fits this one, two out of three.
↑ comment by DanielLC · 2012-03-01T19:09:07.757Z · LW(p) · GW(p)
For me, I am driven by two main philosophies
I think he'd do better if he just made up his mind. I'd go with the second one.
Replies from: pedanterrific, None↑ comment by pedanterrific · 2012-03-01T19:41:37.041Z · LW(p) · GW(p)
watch out folks, we got a badass over here
comment by DSimon · 2012-03-02T05:50:57.772Z · LW(p) · GW(p)
T-Rex: Our bodies are amazing things! Check it, everyone!
We use our mouths to talk. We invent, remember and teach entire languages with which to do the talking! And if that fails, we can talk with our hands. We build planes and boats and cars and spaceships, all by either using our bodies directly, or by using instruments invented by our bodies. We compose beautiful music and tell amazing stories, all with our bodies, these fleshy bags with spooky skeletons inside.
And yet, if we have a severe enough peanut allergy, we can be killed in seconds by a friggin' legume. And hey, 70% of our planet is water, but what happens if we spend too much time in it? We drown. Game over, man!
I used to make fun of Green Lantern for being vulnerable to the color yellow. Then I choked on my orange juice one morning and nearly suffocated.
comment by gwern · 2012-03-01T17:57:07.773Z · LW(p) · GW(p)
"All logic texts are divided into two parts. In the first part, on deductive logic, the fallacies are explained; in the second part, on inductive logic, they are committed."
--Morris Raphael Cohen, quoted by Jacob Cohen in "The Earth Is Round (p < .05)"
comment by HonoreDB · 2012-03-01T16:58:22.888Z · LW(p) · GW(p)
"Are you trying to tell me that there are sixteen million practicing wizards on Earth?" "Sixteen million four hundred and--" Dairine paused to consider the condition the world was in. "Well it's not anywhere near enough! Make them all wizards."
--Diane Duane, High Wizardry
Replies from: RobertLumley↑ comment by RobertLumley · 2012-03-06T18:05:00.285Z · LW(p) · GW(p)
I can't remember anything about those books, other than that I liked them...
comment by gwern · 2012-03-01T17:58:13.190Z · LW(p) · GW(p)
"Hope always feels like it's made up of a set of reasons: when it's just sufficient sleep and a few auspicious hormones."
Replies from: Will_Newsome
↑ comment by Will_Newsome · 2012-03-03T01:51:57.592Z · LW(p) · GW(p)
(Perhaps this individual quote is insightful (I can't tell), but this sort of causal analysis leads to basic confusions of levels of organization more often than it leads to insight.)
Replies from: None
comment by Stabilizer · 2012-03-04T05:50:49.947Z · LW(p) · GW(p)
Society changes when we change what we're embarrassed about.
In just fifty years, we've made it shameful to be publicly racist.
In just ten years, someone who professes to not know how to use the internet is seen as a fool.
The question, then, is how long before we will be ashamed at being uninformed, at spouting pseudoscience, at believing thin propaganda? How long before it's unacceptable to take something at face value? How long before you can't do your job without understanding the state of the art?
Does access to information change the expectation that if you can know, you will know?
We can argue that this will never happen, that it's human nature to be easily led in the wrong direction and to be willfully ignorant. The thing is, there are lots of things that used to be human nature, but due to culture and technology, no longer are.
-Seth Godin
Replies from: simplyeric↑ comment by simplyeric · 2012-03-06T18:05:52.469Z · LW(p) · GW(p)
A. I'm not entirely sure that things that used to be human nature no longer are. We deal with them, suppress them, sublimate them, etc.: anger responses, fear, lust, possessiveness, nesting; the animal instincts of the human animal. How those manifest does indeed change, but not the "nature" of them.
B. We live (in the USA) in a long-term culture of anti-intellectualism. Obviously this doesn't mean it can't change... Sometimes it seems like it will (remember the days before nerd-chic?), but in a nominally democratic society there will always be a minority of people who are relatively "intellectual" by definition. We should recognize that you don't have to overcome anti-intellectualism, you just have to raise the bar. While still anti-intellectual, in many ways even the intentionally uninformed know more than the average person did back in the day. (Just as there will always be a minority of people who are "relatively tall", even as average height has tended to increase over the generations.)
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-03-07T05:06:09.118Z · LW(p) · GW(p)
We live (in the USA) in a long-term culture of anti-intellectualism.
Which type of anti-intellectualism are you referring to?
Replies from: Jayson_Virissimo, simplyeric↑ comment by Jayson_Virissimo · 2012-03-07T05:35:51.955Z · LW(p) · GW(p)
Interesting. If my experience is representative, then a sizable subset of Less Wrongers are what the author calls epistemic-skeptical anti-intellectuals.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-03-07T08:31:21.669Z · LW(p) · GW(p)
It seems slightly odd that there are many on LessWrong whose justification for not looking deeply into the philosophy literature is that philosophers "are too prone to overestimate their own cleverness" and end up shooting their own philosophical feet off, but that subset of LessWrong doesn't seem to overlap much with those who are epistemic-skeptical anti-intellectuals in the more political sense. Admittedly my own view is that the former subset is basically wrong whereas the latter is basically right, but naively viewed the two positions would seem to go together much as they do with neoconservatives. ...I feel like I'm not carving up reality correctly.
↑ comment by simplyeric · 2012-03-07T14:20:27.234Z · LW(p) · GW(p)
I'm probably referring to all of the above. That's an interesting speciation of anti-intellectualism, but I am meaning it in the broad sense, because I've seen all of them.
If someone calls me a "liberal elitist", is it version 1, 3, or 5? Does the class issue also result in a gut reaction? Is the traditionalism directly related to the totalizing? I understand the differences as described in the article, but I'm not sure they are easily separable. Sometimes yes, but not always.
So: A. I think the differences are interesting, and useful, but not always clearly delineated, and B. when generalizing about a group, I'm not sure it's necessary.
If I say "New Yorkers really like dogs", it's probably not critical which breed I mean. If I say "that person really likes his/her dog", then it matters more.
(and we all know that when you generalize about things it's like when you assume things: it makes a general out of I and, um, ze)
As relates to the original quote: which type was Godin referring to? He talks about being ashamed at being uninformed, which touches on 1 and 5, possibly 2, and interacts with 3. (Poor number four.) One of the things we've slowly seen is the other side: being unashamed at being informed... or politically unpunished, for that matter. Politicians want to be "regular people" because they are berated for using subclauses in sentences (John Kerry), for being a know-it-all (Gore), for elitism (everyone, per Palin), for destroying the fabric (Obama), for utopianism (the '90s Clintons), etc...
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-03-07T15:11:04.735Z · LW(p) · GW(p)
(and we all know that when you generalize about things it's like when you assume things: it makes a general out of I and, um, ze)
What really entertained me about this clause is that I spent a noticeable period of time trying to remember which of the many competing novel pronoun schemes "ze" was in, before realizing from context that it had to be a second-person pronoun and wondering why we would create a new second-person pronoun, given that the English "you" is already ambiguous about gender and number and basically everything else; only then did my parsing of the rest of the sentence catch up and make me realize it was a joke.
comment by [deleted] · 2012-03-03T15:25:48.588Z · LW(p) · GW(p)
•••
comment by [deleted] · 2012-03-01T16:56:33.573Z · LW(p) · GW(p)
.
Replies from: roystgnr↑ comment by roystgnr · 2012-03-05T21:55:29.055Z · LW(p) · GW(p)
Wait, Google says nobody's posted this joke on LessWrong before?
...
A philosopher, a scientist, and a mathematician are travelling through Scotland, gazing out the window of the train, when they see a sheep.
"Ah," says the philosopher, "I see that Scottish sheep are black."
"Well," says the scientist, "at least we see that some Scottish sheep are black."
"No," says the mathematician, "we merely know that there exists at least one sheep in Scotland which is black on at least one side."
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-03-05T22:19:19.389Z · LW(p) · GW(p)
"Actually," says the stage magician, "we merely know that there exists something in Scotland which appears to be a sheep which is black on at least one side when viewed from this spot."
comment by Cthulhoo · 2012-03-01T10:18:27.839Z · LW(p) · GW(p)
When I disagree with a rational man, I let reality be our final arbiter; if I am right, he will learn; if I am wrong, I will; one of us will win, but both will profit.
Ayn Rand
Replies from: florian, Giles↑ comment by florian · 2012-03-01T12:11:53.204Z · LW(p) · GW(p)
Making the (flawed) assumption that in a disagreement, they cannot both be wrong.
Replies from: peter_hurford, shokwave, Endovior↑ comment by Peter Wildeford (peter_hurford) · 2012-03-01T17:22:16.485Z · LW(p) · GW(p)
Also, they could be wrong about whether they actually disagree.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-03-02T21:08:48.779Z · LW(p) · GW(p)
IME that's the case in a sizeable fraction of disagreements between humans; but if they “let reality be [their] final arbiter” they ought to realize that in the process.
↑ comment by Endovior · 2012-03-05T22:31:24.078Z · LW(p) · GW(p)
Perhaps, but it is rather unlikely that they are equally wrong. It is far more likely that one will be less wrong than the other. Indeed, improving on our knowledge by the comparison between such fractions of correctness would seem to be the whole point of Bayesian rationality.
comment by ArisKatsaris · 2012-03-07T23:08:18.447Z · LW(p) · GW(p)
-So what do you think happens after we die?
-The acids and lifeforms living inside your body eat their way out, while local detritivores eat their way in. Why?
-No, no, no, what happens to you?
-Oh, you guys mean the soul.
-Exactly.
-Is that in the body?
-Yes!
-The acids and lifeforms eat their way out, while local detritivores eat their way in.
comment by ShardPhoenix · 2012-03-06T09:55:47.594Z · LW(p) · GW(p)
Past me is always so terrible, even when I literally just finished being him.
- Karkat from Homestuck by Andrew Hussie
↑ comment by CuSithBell · 2012-04-22T19:24:07.679Z · LW(p) · GW(p)
I'm also fond of:
The only guy more irritating and stupid than future me is past me.
Karkat's just full of these gems of almost-wisdom.
↑ comment by arundelo · 2012-04-23T00:40:48.754Z · LW(p) · GW(p)
5-Second Films looks at past self and present self (NSFW written language).
↑ comment by David_Gerard · 2012-03-30T11:12:50.279Z · LW(p) · GW(p)
Me: "The BOFH stories are just stories and certainly not role models. Ha! Ha! Baseball bat, please."
Boss: "The DNS stuff is driving me batty, but I'm not sure who needs taking into a small room and battering."
Me: "Your past self."
Boss: "Yeah, he was a right twat."
(I was thinking of Karkat, too.)
comment by Will_Newsome · 2012-03-02T09:55:32.968Z · LW(p) · GW(p)
If you want to know how decent people can support evil, find a mirror.
Mencius Moldbug, A gentle introduction to Unqualified Reservations (part 2) (yay reflection!)
Replies from: satt↑ comment by satt · 2012-03-03T14:53:15.873Z · LW(p) · GW(p)
If only it were all so simple! If only there were evil people somewhere committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being. And who is willing to destroy a piece of his own heart?
— Aleksandr Solzhenitsyn, The Gulag Archipelago
comment by Stabilizer · 2012-03-01T09:29:18.133Z · LW(p) · GW(p)
To be a good diagnostician, a physician needs to acquire a large set of labels for diseases, each of which binds an idea of the illness and its symptoms, possible antecedents and causes, possible developments and consequences, and possible interventions to cure or mitigate the illness. Learning medicine consists in part of learning the language of medicine. A deeper understanding of judgments and choices also requires a richer vocabulary than is available in everyday language. The availability of a diagnostic label for [the] bias... makes it easier to anticipate, recognize and understand.
-Daniel Kahneman, Thinking, Fast and Slow
Replies from: khafra↑ comment by khafra · 2012-03-01T13:10:25.820Z · LW(p) · GW(p)
Yeah, a good compression algorithm--a dictionary that has short words for the important stuff--is vital to learning just about anything. I've noticed that in the martial arts; there's no way to learn a parry, entry, and takedown without a somatic vocabulary for the subparts of that, and the definitions of your "words" affect both the ease of learning and the effectiveness of its execution.
Replies from: Stabilizer, Stabilizer↑ comment by Stabilizer · 2012-03-02T03:03:24.133Z · LW(p) · GW(p)
Also, wouldn't it be better to call it a hash table or a lookup table rather than a compression algorithm? The key is swift and appropriate recall. Example: compare a long-time practicing theoretical physicist with a physics grad student. Both know most of basic quantum mechanics. But the experienced physicist knows when to whip out which equation in which situation. So the knowledge content is not necessarily compressed (I'm sure there is some compression) so much as the usability of the knowledge is much greater.
↑ comment by Stabilizer · 2012-03-01T19:06:15.974Z · LW(p) · GW(p)
Interesting. So by somatic vocabulary, you basically mean composing long complicated moves from short, repeatable sub-moves?
Replies from: khafra↑ comment by khafra · 2012-03-01T20:04:24.542Z · LW(p) · GW(p)
Basically, yes. Much of the vocabulary has very long descriptions in English, but shorter ones in different arts' parlance; some of it doesn't really have short descriptions anywhere but in the movements of people who've mastered it. The Epistemic Viciousness problem makes it difficult, in general, to find and cleave at the joints.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-24T02:15:31.996Z · LW(p) · GW(p)
"I don't know if we've sufficiently analyzed the situation if we're thinking storming Azkaban is a solution."
- thatguythere47, enunciating an important general principle.
↑ comment by wedrifid · 2012-03-25T07:22:41.215Z · LW(p) · GW(p)
"I don't know if we've sufficiently analyzed the situation if we're thinking storming Azkaban is a solution."
Naturally not. Harry would only do something that reckless if it was to save a general of the Dark Lord on the whim of his mentor. ;)
I of course agree with thatguy, with substitution of 'the most viable immediate' in there somewhere. It is a solution to all sorts of things.
↑ comment by AspiringKnitter · 2012-03-25T03:56:59.179Z · LW(p) · GW(p)
If Eliezer Yudkowsky, the author, is lauding this statement, I think we can rule this out as Harry's solution.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-25T07:49:32.336Z · LW(p) · GW(p)
As previously stated, Harry is not a perfect rationalist.
Replies from: Nominull↑ comment by Nominull · 2012-03-25T08:27:54.484Z · LW(p) · GW(p)
Neither is Eliezer Yudkowsky.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-25T17:45:14.411Z · LW(p) · GW(p)
My philosophy is that it's okay to be imperfect, but not so imperfect that other people notice.
Replies from: Alex_Altair, Pavitra↑ comment by Alex_Altair · 2012-03-30T13:39:38.585Z · LW(p) · GW(p)
I propose that it's okay to be imperfect, but not so imperfect that reality notices.
Replies from: thomblake↑ comment by Pavitra · 2012-03-28T04:29:42.216Z · LW(p) · GW(p)
This is a cool-sounding slogan that doesn't actually say anything beyond "Winning is good."
Replies from: David_Gerard↑ comment by David_Gerard · 2012-03-30T10:55:14.896Z · LW(p) · GW(p)
No, it says that practical degrees of excellence are just fine and you don't actually have to achieve philosophically perfect excellence to be sufficiently effective.
It's the difference between not being able to solve an NP-complete problem perfectly, and being able to come up with pretty darn close numerical approximations that do the practical job just fine. (I think evolution achieves a lot of the latter, for example.)
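A classic concrete instance of that difference (example mine, not David_Gerard's): minimum vertex cover is NP-complete, yet a simple greedy pass is provably never more than twice the size of the optimal cover, which is often all the "excellence" a practical job requires.

```python
# Greedy 2-approximation for minimum vertex cover: take both endpoints of
# any edge not yet covered. The edges picked this way form a matching, and
# any optimal cover needs at least one endpoint of each matched edge, so
# this cover is at most 2x the optimum.
def approx_vertex_cover(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

print(approx_vertex_cover([(1, 2), (2, 3), (3, 4)]))  # {1, 2, 3, 4}; optimum is {2, 3}
```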
Replies from: Pavitra↑ comment by Anubhav · 2012-03-25T03:22:00.523Z · LW(p) · GW(p)
enunciating an important general principle
This variant of "when all you have is a hammer..." is seen often enough to merit a name.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2012-03-26T03:18:16.338Z · LW(p) · GW(p)
"When all you have is a powered-up Patronus, every problem looks like storming Azkaban is the answer"?
Replies from: Anubhav↑ comment by Anubhav · 2012-03-26T13:40:30.847Z · LW(p) · GW(p)
I meant something along the lines of "When your hammer is too darn impressive, everything begins to look like a nail."
comment by Stephanie_Cunnane · 2012-03-10T22:32:20.992Z · LW(p) · GW(p)
Some environments are worse than irregular. Robin Hogarth described "wicked" environments, in which professionals are likely to learn the wrong lessons from experience. He borrows from Lewis Thomas the example of a physician in the early twentieth century who often had intuitions about patients who were about to develop typhoid. Unfortunately, he tested his hunch by palpating the patient's tongue, without washing his hands between patients. When patient after patient became ill, the physician developed a sense of clinical infallibility. His predictions were accurate--but not because he was exercising professional intuition!
--Daniel Kahneman in Thinking, Fast and Slow
comment by Alejandro1 · 2012-03-02T01:36:55.802Z · LW(p) · GW(p)
The reason you can't rigidly separate positive from normative economics is that you can't rigidly separate claims of fact from claims of value in general. Human language is too laden with thick concepts that mix the two. The claim that someone is a "slut" or a "bitch", for example, melds together factual claims about a woman's behavior with a lot of deeply embedded normative concepts about what constitutes appropriate behavior for a woman. The claim that financial markets are "efficient" is both an effort to describe their operation and a way of valorizing them. The idea of a "recession" or "full employment" or "potential output" all embed certain ideas about what would constitute a normal arrangement of human economic activity (...) You could try to rigorously purge your descriptions of the economy of anything that vaguely smells of a thick moral concept, but you'd find yourself operating with an impoverished vocabulary unable to describe human affairs in any kind of reasonable way.
Replies from: Nominull
↑ comment by Nominull · 2012-03-02T09:53:45.969Z · LW(p) · GW(p)
I found that very poignant, but I'm not sure I agree with his final claim. I think he's committing the usual mistake of claiming impossible what seems hard.
Replies from: Richard_Kennaway, magfrump↑ comment by Richard_Kennaway · 2012-03-02T12:20:09.454Z · LW(p) · GW(p)
Is it even hard? JFDI, or as we might say here, shut up and do the impossible. Is "efficient" a tendentious word? Taboo it. Is discussion being confused by mixing normative and positive concepts? DDTT.
The quote smells like rationalising to me.
Replies from: TheOtherDave, James_K↑ comment by TheOtherDave · 2012-03-02T17:42:07.536Z · LW(p) · GW(p)
Yeah, agreed. It's entirely possible to describe a system of economic agents without using such value-laden terms (though in some cases we may have to make up new terms). We don't do it, mostly because we don't want to. Which IMHO is fine; there's no particular reason why we should.
↑ comment by James_K · 2012-03-06T07:02:13.967Z · LW(p) · GW(p)
Yglesias seems to be committing an error here by confusing technical jargon with common English. Efficient has a very specific meaning in economics (well, two specific meanings, depending on what kind of market you're talking about). The word efficient is not meant to refer to universal goodness, and it's a mistake to treat it as if it were.
Replies from: wedrifid↑ comment by wedrifid · 2012-03-06T11:02:25.627Z · LW(p) · GW(p)
Efficient has a very specific meaning in economics (well, two specific meanings, depending on what kind market you're talking about).
I know of three, although it is a matter of parametrization (weak, strong, semi-strong). What two meanings do you have in mind?
Replies from: James_K↑ comment by James_K · 2012-03-06T17:51:15.645Z · LW(p) · GW(p)
The three you mention are all subtypes of the same efficiency - informational efficiency. Informational efficiency is used in finance and refers to how well a financial market incorporates information into prices. Basically, a market is informationally efficient if you can't out-predict it without using information it doesn't have. The weak / semi-strong / strong distinction merely indicates how much information it is incorporating into prices: weak means it's incorporating its own past prices, semi-strong includes all public information, and strong includes all information held in private as well.
The other type of efficiency is allocative efficiency, a concept used in microeconomics. An allocatively efficient market is one that assigns goods to the people who place the highest value on them (subject to the constraints of each person's endowments). It is effectively a utility-maximising condition. The whole concept of market failure in economics is built around situations where markets are failing to be allocatively efficient.
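For concreteness, one standard textbook formalization of informational efficiency (supplied here, not taken from James_K's comment) is that prices already impound everything in the relevant information set, so expected price changes reduce to the required return alone:

$$\mathbb{E}\left[p_{t+1} \mid I_t\right] = (1 + r)\, p_t,$$

where $I_t$ is past prices (weak form), all public information (semi-strong form), or all information including private (strong form), and $r$ is the equilibrium required return.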
↑ comment by magfrump · 2012-03-05T08:38:27.154Z · LW(p) · GW(p)
The first thought that I have when considering how to describe the economy without using normative language is that all of the values that are commonly measured (i.e. GDP, unemployment, etc.) are chosen to be measured because they are proxies for things that people value.
In fact, the whole study of economics seems to me like the study of things people value and how they are distributed. If you choose proxies for value you're having a profound effect on what gets measured (consider the recent discussions of statistical significance as a proxy for evidence) and if you try to list everything that everyone values you end up butting up against unsolved problems.
comment by taelor · 2012-03-07T09:00:12.381Z · LW(p) · GW(p)
But even as light is opposed by darkness, science and reason have their enemies. Superstition and belief in magic are as old as man himself; for the intransigence of facts and our limitations in controlling them can be powerfully hard to take. Add to this the reflection that we are in an age when it is popular to distrust whatever is seen as the established view or the Establishment, and it is no wonder that anti-rational attitudes and doctrines are mustering so much support. Still, we can understand what encourages the anti-rationalist turn without losing our zeal for opposing it. A current Continuing Education catalogue offers a course description, under the heading "Philosophy", that typifies the dark view at its darkest: "Children of science that we are, we have based our cultural patterns on logic, on the cognitive, on the verifiable. But more and more there has crept into current research and study the haunting suggestion that there are other kinds of knowledge unfathomable by our cognition, other ways of knowing beyond the limits of our logic, which are deserving of our serious attention." Now "knowledge unfathomable by our cognition" is simply incoherent, as attention to the words makes clear. Moreover, all that creeps is not gold. One wonders how many students enrolled.
-- W. V. O. Quine
Replies from: JoachimSchipper, David_Gerard↑ comment by JoachimSchipper · 2012-03-07T11:52:04.879Z · LW(p) · GW(p)
(1978). I expected this to be older.
↑ comment by David_Gerard · 2012-03-30T11:03:13.599Z · LW(p) · GW(p)
Did anyone ever track down the catalogue in question?
(Did the university in question later offer degrees in alternative medicine?)
comment by NexH · 2012-03-06T14:19:52.995Z · LW(p) · GW(p)
When it comes to rare probabilities, our mind is not designed to get things quite right. For the residents of a planet that may be exposed to events no one has yet experienced, that is not good news.
--Daniel Kahneman, Thinking, Fast and Slow
comment by Woodbun · 2012-03-04T12:02:38.818Z · LW(p) · GW(p)
"One of the great commandments of science is, 'Mistrust arguments from authority'. (Scientists, being primates, and thus given to dominance hierarchies, of course do not always follow this commandment.)"
-Carl Sagan, The Demon Haunted World
comment by NancyLebovitz · 2012-03-06T12:15:36.396Z · LW(p) · GW(p)
Carefully observe those good qualities wherein our enemies excel us
Plutarch
comment by Will_Newsome · 2012-03-04T12:10:37.942Z · LW(p) · GW(p)
The world is paved with good intentions; the road to Hell has bad epistemology mixed in.
Replies from: Eugine_Nier, Ezekiel
↑ comment by Eugine_Nier · 2012-03-07T05:00:51.949Z · LW(p) · GW(p)
I think the original is instrumentally more useful. On hearing "the road to hell is paved with good intentions", one of my reactions is "I have good intentions, I'd better make sure I'm not on the road to hell". On hearing your version my first reaction is "whew, this doesn't apply to me, only to those people with bad epistemology".
Replies from: Will_Newsome, David_Gerard↑ comment by Will_Newsome · 2012-03-07T08:38:36.535Z · LW(p) · GW(p)
On hearing your version my first reaction is "whew, this doesn't apply to me, only to those people with bad epistemology".
Interesting, my immediate reaction is "oh, I guess I need to seriously work on my epistemology rather than work on having better intentions as such".
↑ comment by David_Gerard · 2012-03-30T11:17:06.001Z · LW(p) · GW(p)
I suspect that's a standard reaction to hearing of any cognitive bias.
“Hah, this article nails those assholes perfectly!”
-- some asshole
comment by gyokuro · 2012-03-02T04:39:18.652Z · LW(p) · GW(p)
"I've never ever felt wise," Derk said frankly. "But I suppose it is a tempation, to stare into distance and make people think you are."
"It's humbug," said the dragon. "It's also stupid. It stops you learning more."
Diana Wynne Jones, Dark Lord of Derkholm
comment by Alejandro1 · 2012-03-01T18:23:41.747Z · LW(p) · GW(p)
The demons told me that there is a hell for the sentimental and the pedantic. They are abandoned in an endless palace, more empty than full, and windowless. The condemned walk about as if searching for something, and, as we might expect, they soon begin to say that the greatest torment consists in not participating in the vision of God, that moral suffering is worse than physical suffering, and so on. Then the demons hurl them into the sea of fire, from where no one will ever take them out.
Adolfo Bioy Casares (my translation)
Replies from: cousin_it, Incorrect↑ comment by cousin_it · 2012-03-07T09:28:34.420Z · LW(p) · GW(p)
'Your God person puts an apple tree in the middle of a garden and says, do what you like, guys, oh, but don't eat the apple. Surprise surprise, they eat it and he leaps out from behind a bush shouting "Gotcha". It wouldn't have made any difference if they hadn't eaten it.'
'Why not?'
'Because if you're dealing with somebody who has the sort of mentality which likes leaving hats on the pavement with bricks under them you know perfectly well they won't give up. They'll get you in the end.'
-- Douglas Adams
Replies from: Micaiah_Chang↑ comment by Micaiah_Chang · 2012-03-11T21:16:05.296Z · LW(p) · GW(p)
They say that God created the world in a week, didn't they? That's way too irresponsible. If he did create the world, surely he made some comment like 'This isn't something I would've made.' or perhaps 'Hold on! It's not done yet!'
~ Sca-自 Subarashiki Hibi ~Discontinuous Existence~ (personal translation)
↑ comment by Incorrect · 2012-03-01T18:32:51.945Z · LW(p) · GW(p)
The condemned walk about as if searching for something, and, as we might expect, they soon begin to say that the greatest torment consists in not participating in the vision of God, that moral suffering is worse than physical suffering, and so on
Why don't they just play tag with each other? Sounds like it would be fun.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2012-03-01T22:22:26.021Z · LW(p) · GW(p)
Because they're jerks.
Replies from: Alejandro1↑ comment by Alejandro1 · 2012-03-02T21:06:29.512Z · LW(p) · GW(p)
Indeed. The kind of people who would go "Whee! Let's play tag!" in this situation do not find themselves in Hell (at least in this particular one) in the first place.
comment by Nisan · 2012-03-13T22:54:11.138Z · LW(p) · GW(p)
Related to Schelling fences on slippery slopes:
If once a man indulges himself in murder, very soon he comes to think little of robbing; and from robbing he comes next to drinking and Sabbath-breaking, and from that to incivility and procrastination. Once begun upon this downward path, you never know where you are to stop. Many a man has dated his ruin from some murder or other that perhaps he thought little of at the time.
— Thomas De Quincey
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-03-14T23:59:23.219Z · LW(p) · GW(p)
I don't get this quote; it strikes me as wit with no substance.
Replies from: Will_Newsome, kdorian, Nisan↑ comment by Will_Newsome · 2012-03-15T01:01:27.547Z · LW(p) · GW(p)
Presumably the quote is from De Quincey's essay "On Murder Considered as one of the Fine Arts", and with that context & perspective in mind it has a tad more substance.
↑ comment by kdorian · 2012-03-24T21:54:49.759Z · LW(p) · GW(p)
I have always read it as intentionally ironic commentary on the 'slippery slope' more than anything else.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2012-03-24T22:36:49.227Z · LW(p) · GW(p)
I read it more specifically as a parody of moral slipperyslopism, in which slight moral infractions lead to the worst sort of behavior.
Arguably, we live in an era strongly shaped by revulsion at moral slipperyslopism.
comment by NancyLebovitz · 2012-03-24T12:12:54.856Z · LW(p) · GW(p)
THE WAY WE BREAK THINGS DOWN AND DESCRIBE THEM ARE NOT NECESSARILY HELPFUL TO UNDERSTANDING HOW TO CONSTRUCT THEM.
HULK EXPLAINS WHY WE SHOULD STOP IT WITH THE HERO JOURNEY SHIT
comment by EllisD · 2012-03-02T14:24:29.107Z · LW(p) · GW(p)
Whether a mathematical proposition is true or not is indeed independent of physics. But the proof of such a proposition is a matter of physics only. There is no such thing as abstractly proving something, just as there is no such thing as abstractly knowing something. Mathematical truth is absolutely necessary and transcendent, but all knowledge is generated by physical processes, and its scope and limitations are conditioned by the laws of nature.
-David Deutsch, The Beginning of Infinity.
Replies from: Richard_Kennaway, TimS↑ comment by Richard_Kennaway · 2012-03-03T17:25:23.655Z · LW(p) · GW(p)
There is no such thing as abstractly proving something
Of course there is. A proof of a mathematical proposition is just as much itself a mathematical object as the proposition being proved; it exists just as independently of physics. The proof as written down is a physical object standing in the same relation to the real proof as the digit 2 before your eyes here bears to the real number 2.
But perhaps in the context Deutsch isn't making that confusion. What scope and limitations on mathematical knowledge, conditioned by the laws of nature, does he draw out from these considerations?
↑ comment by TimS · 2012-03-02T14:56:07.499Z · LW(p) · GW(p)
The Pythagorean theorem isn't proved or even checked by measuring right triangles and noticing that a^2 + b^2 = c^2. Is the Pythagorean theorem not knowledge?
Replies from: khafra, RolfAndreassen, Vaniver, Bugmaster↑ comment by khafra · 2012-03-02T14:59:56.202Z · LW(p) · GW(p)
I don't think Deutsch means that mathematical proofs are all inductive. I think he means that proofs are constructed and checked on physical computing devices like brains or GPGPUs, and that, because of that, mathematical knowledge is not in a different ontological category from empirical knowledge.
Replies from: TimS↑ comment by TimS · 2012-03-02T15:58:59.640Z · LW(p) · GW(p)
I feel quite confident saying that mathematics will never undergo paradigm shifts, to use the terminology of Kuhn.
The same is not true for empirical sciences. Paradigm shifts have happened, and I expect them to happen in the future.
Replies from: None, None, benelliott, Morendil, ChristianKl↑ comment by [deleted] · 2012-03-02T17:16:15.251Z · LW(p) · GW(p)
I feel quite confident saying that mathematics will never undergo paradigm shifts, to use the terminology of Kuhn.
I believe it already has. Consider the Weierstrass revolution. Before Weierstrass, it was commonly accepted that while a continuous function might lack a derivative at a set of discrete points, it still had to have a derivative somewhere. Then Weierstrass developed a counterexample, which I think satisfies the Kuhnian "anomaly that cannot be explained within the current paradigm."
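For readers who haven't seen it, the counterexample in question is the Weierstrass function, continuous everywhere yet differentiable nowhere; the standard statement (supplied here for reference, not part of the original comment) is

$$W(x) = \sum_{n=0}^{\infty} a^n \cos\left(b^n \pi x\right), \qquad 0 < a < 1,\ b \text{ an odd integer},\ ab > 1 + \tfrac{3\pi}{2}.$$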
Another quick example: during the pre-War period, most differential geometry was concerned with embedded submanifolds in Euclidean space. However, this formulation made it difficult to describe or classify surfaces -- I seem to believe but don't have time to verify that even deciding whether two sets of algebraic equations determine isomorphic varieties is NP-hard. Hence, in the post-War period, intrinsic properties and descriptions.
EDIT: I was wrong, or at least imprecise. Isomorphism of varieties can be decided with Gröbner bases, the reduction of which is still doubly-exponential in time, as far as I can tell. Complexity classes aren't in my domain; I shouldn't have said anything about them without looking it up. :(
Replies from: TimS↑ comment by TimS · 2012-03-02T17:32:05.962Z · LW(p) · GW(p)
Reading the wiki page, it looks like Weierstrass corrected an error in the definition or understanding of limits. But mathematicians did not abandon the concept of limit the way physicists abandoned the concept of epicycle, so I'm not sure that qualifies as a paradigm shift. But I'm no mathematician, so my understanding may be seriously incomplete.
I can't even address your other example due to my failure of mathematical understanding.
Replies from: None↑ comment by [deleted] · 2012-03-02T19:35:06.217Z · LW(p) · GW(p)
Reading the wiki page, it looks like Weierstrass corrected an error in the definition or understanding of limits.
Hindsight bias. The old limit definition was not widely considered either incorrect or incomplete.
But mathematicians did not abandon the concept of limit the way physicists abandoned the concept of epicycle, so I'm not sure that qualifies as a paradigm shift.
They abandoned reasoning about limits informally, which was de rigueur beforehand. For examples of this, see Weierstrass' counterexample to the Dirichlet principle. Prior to Weierstrass, some people believed that the Dirichlet principle was true because approximate solutions exist in all natural examples, and therefore the limit of the approximate solutions would be a true solution.
Replies from: Eugine_Nier, TimS↑ comment by Eugine_Nier · 2012-03-03T04:03:50.526Z · LW(p) · GW(p)
Hindsight bias. The old limit definition was not widely considered either incorrect or incomplete.
Not true. The "old limit definition" was non-existent beyond the intuitive notion of limit, and people were fully aware that this was not a satisfactory situation.
Replies from: None↑ comment by [deleted] · 2012-03-03T21:20:41.815Z · LW(p) · GW(p)
We need to clarify what time period we're talking about. I'm not aware of anyone in the generation of Newton/Leibniz and the second generation (e.g., Daniel Bernoulli and Euler) who felt that way, but it's not as if I've read everything these people ever wrote.
The earliest criticism I'm aware of is Berkeley in 1734, but he wasn't a mathematician. As for mathematicians, the earliest I'm aware of is Lagrange in 1797.
Replies from: TimS↑ comment by TimS · 2012-03-02T19:54:56.321Z · LW(p) · GW(p)
That's pretty clear, thanks. Obviously, experts aren't likely to think there is a basic error before it has been identified, but I'm not in a position to have a reliable opinion on whether I'm suffering from hindsight bias.
Still, what fundamental object did mathematics abandon after Weierstrass' counter-example? How is this different from the changes to the definition of set provoked by Russell's paradox?
Replies from: None↑ comment by [deleted] · 2012-03-02T20:28:28.575Z · LW(p) · GW(p)
I don't recall where it is said that such an object is necessary for a Kuhnian revolution to have occurred. There was a crisis, in the Kuhnian sense, when the old understanding of limit (perhaps labeling it limit1 will be clearer) could not explain the existence of, e.g., continuous functions without derivatives anywhere, or counterexamples to the Dirichlet principle. Then Weierstrass developed limit2 with deltas and epsilons. Limit1 was then abandoned in favor of limit2.
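For concreteness, limit2 is the now-standard epsilon-delta formulation:

```latex
% The epsilon-delta definition of a limit (limit2):
\lim_{x \to a} f(x) = L
\;\iff\;
\forall \varepsilon > 0 \ \exists \delta > 0 \ \forall x :\;
0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon
```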
↑ comment by [deleted] · 2012-03-02T18:03:42.204Z · LW(p) · GW(p)
Wikipedia gives the acceptance of non-Euclidean geometry as a "classical case" of a paradigm shift. I suspect that there were several other paradigm shifts on the way from Euclid's math to our math: for instance, coordinate geometry, or the application of number theory to abstract quantities rather than to lengths of line segments.
↑ comment by benelliott · 2012-03-02T16:42:16.500Z · LW(p) · GW(p)
Would the whole Russell's paradox incident count as a mathematical paradigm shift?
Replies from: TimS↑ comment by TimS · 2012-03-02T17:28:22.616Z · LW(p) · GW(p)
Reading Wikipedia, it looks like a naive definition of a set turns out to be internally inconsistent. Does that mean the concept of set was abandoned by mathematicians the way epicycles have been abandoned by physicists? That's not my sense, so I hesitate to say redefining set in a more coherent way is a paradigm shift. But I'm no mathematician.
Replies from: benelliott↑ comment by benelliott · 2012-03-02T17:40:25.832Z · LW(p) · GW(p)
It's a matter of degree rather than an absolute line. However, I would say a time when even the very highest experts in a field believed something of great importance to their field with quite high confidence, and then turned out to be wrong, probably counts.
Replies from: TimS↑ comment by TimS · 2012-03-02T17:42:54.662Z · LW(p) · GW(p)
I don't think "everyone in field X made an error" is that same thing as saying "Field X underwent a paradigm shift."
Replies from: Bugmaster↑ comment by Bugmaster · 2012-03-02T18:56:07.108Z · LW(p) · GW(p)
Why not? That sounds like a massive shift in the core beliefs of the field in question. If that's not a paradigm shift, then what is?
Replies from: TimS↑ comment by TimS · 2012-03-02T19:01:15.770Z · LW(p) · GW(p)
The "non-expressible in the new concept-space" thing that you think never actually happens.
Replies from: David_Gerard, roystgnr, Bugmaster↑ comment by David_Gerard · 2012-03-03T10:38:35.825Z · LW(p) · GW(p)
This looks very like trying to define away something that sure felt like a paradigm shift to the people in the field. Remember that "paradigm" is a belief held by people, not a property inherent in the universe.
Replies from: TimS↑ comment by TimS · 2012-03-03T18:11:48.315Z · LW(p) · GW(p)
Perhaps this is a limitation of my understanding of Kuhn, in that I'm misusing his terminology. I am unaware of mathematics abandoning fundamental objects as inherently misguided the way physics abandoned epicycles or impetus. I expect physics will have similar abandonments in the future, but I expect mathematics never will. The difference is a property of the difference between mathematics and empirical facts. This comment makes the argument I'm trying to assert in slightly different form.
↑ comment by roystgnr · 2012-03-06T16:27:47.737Z · LW(p) · GW(p)
Isn't that exactly what happened? The phrase "set of all sets that do not contain themselves" isn't really expressible in Zermelo-Fraenkel set theory, since that has a more limited selection of ways to construct new sets and "the set of everything that satisfies property X" is not one of them.
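To spell out the contrast roystgnr is pointing at (a standard formulation, not anyone's wording in this thread): naive comprehension is replaced in ZF by the Separation schema, which only carves subsets out of already-existing sets:

```latex
% Naive comprehension (inconsistent; take \varphi(x) := x \notin x
% to get Russell's paradox):
\exists y \, \forall x \, \bigl( x \in y \iff \varphi(x) \bigr)

% ZF's Separation schema (new sets are carved only out of an
% existing set z, so Russell's "set" is never formed):
\forall z \, \exists y \, \forall x \, \bigl( x \in y \iff x \in z \wedge \varphi(x) \bigr)
```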
↑ comment by Morendil · 2012-03-02T16:58:00.373Z · LW(p) · GW(p)
mathematics will never undergo paradigm shifts,
What would count as one?
Replies from: TimS↑ comment by TimS · 2012-03-02T17:38:24.878Z · LW(p) · GW(p)
As I understand it, a paradigm shift would include the abandonment of a concept. That is, the concept cannot be coherently expressed using the new terminology. For example, there's no way to coherently express concepts like Ptolemy's epicycles or Aristotle's impetus in modern terms. I think Kuhn would say that these examples are evidence that empirical science is socially mediated.
I'm not aware of any formerly prominent mathematical concepts that can't even be articulated with modern concepts. Because mathematics is non-empirical and therefore non-social, I would be surprised if they existed.
Replies from: None, Manfred, Morendil, Bugmaster↑ comment by [deleted] · 2012-03-02T22:57:24.995Z · LW(p) · GW(p)
Aristotle's impetus
A totally trivial nitpick, I admit, but there's no such thing as the Aristotelian theory of impetus. The theory of impetus was an anti-Aristotelian theory developed in the Middle Ages. Aristotle has no real dynamical theory.
Replies from: TimS, Bugmaster↑ comment by Bugmaster · 2012-03-02T22:59:38.434Z · LW(p) · GW(p)
Thanks, I did not actually know that. But I should have known.
↑ comment by Manfred · 2012-03-02T18:11:02.096Z · LW(p) · GW(p)
there's no way to coherently express concepts like Ptolemy's epicycles or Aristotle's impetus in modern terms
There are perfectly fine ways to express those things. Epicycles might even be useful in some cases, since they can be used as a simple approximation of what's going on.
The reason people don't use epicycles any more isn't because they're unthinkable, in the really strong "science is totally culture-dependent" sense. It's because using them was dependent on whether we thought they reflected the structure of the universe, and now we don't. Ptolemy's claim behind using epicycles was that circles were awesome, so it was likely that the universe ran on circles. This is a fact that could be tested by looking at the complexity of describing the universe with circles vs. ellipses.
So this paradigm shift stuff doesn't look very unique to me. It just looks like the refutation of an idea that happened to be central to using a model. Then you might say that math can have no paradigm shifts because it constructs no models of the world. But this isn't quite true - there are models of the mathematical world that mathematicians construct that occasionally get shaken up.
Replies from: TimS↑ comment by TimS · 2012-03-02T18:24:19.110Z · LW(p) · GW(p)
My point was that trying to express epicycles in the new terminology is not possible. That is, modern physicists say, "Epicycles don't exist."
Obviously, it is possible to use sociological terminology to describe epicycles. You yourself said that they were useful at times. But that's not the language of physics.
Since you mentioned it, I would endorse "Science is substantially culturally dependent," NOT "Science is totally culturally dependent": so culturally dependent that there is no reason to expect correspondence between any model and reality. Better science makes better predictions, but it's not clear what a "better" model would be if there's no correspondence with reality.
I brought all this up not to advocate for the cultural dependence of science. Rather, I think it would be surprising for a discipline independent of empirical facts to have paradigm shifts. Thus, the absence of paradigm shifts is a reason to think that mathematics is independent of empirical facts.
If you don't think science is substantially culturally dependent, then there's no reason my argument should persuade you that mathematics is independent of empirical facts.
Replies from: komponisto, Manfred↑ comment by komponisto · 2012-03-02T22:50:10.711Z · LW(p) · GW(p)
My point was that trying to express epicycles in the new terminology is not possible.
This is false in an amusing way: expressing motion in terms of epicycles is mathematically equivalent to decomposing functions into Fourier series -- a central concept in both physics and mathematics since the nineteenth century.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-03-02T22:57:48.733Z · LW(p) · GW(p)
To be perfectly fair, AFAIK Ptolemy thought in terms of a finite (and small) number of epicycles, not an infinite series.
Replies from: komponisto↑ comment by komponisto · 2012-03-02T23:05:10.285Z · LW(p) · GW(p)
And so for the curves in question, the Fourier expansion would have only a finite number of terms.
The point being that, in contrast to what was being asserted, Ptolemy's concept is subsumed within the modern one; the modern language is more general, capable of expressing not only Ptolemy's thoughts, but also a heck of a lot more. In effect, modern mathematical physics uses epicycles even more than Ptolemy ever dreamed.
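To make the equivalence concrete, here is a minimal numerical sketch (assuming NumPy; the sample orbit and all names are my own illustration): a finite system of epicycles is exactly a truncated complex Fourier series, so the DFT of a sampled closed path hands you its epicycles directly.

```python
import numpy as np

# Sample a closed orbit (an off-center ellipse, a toy example)
# at N points, represented as a path in the complex plane.
N = 256
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
z = np.cos(t) + 0.6j * np.sin(t) + 0.2

# The DFT coefficients are exactly epicycles: c[k] is a circle of
# radius abs(c[k]) traversed k times per revolution of the deferent.
c = np.fft.fft(z) / N
k = np.fft.fftfreq(N, d=1.0 / N)  # signed integer frequencies

# Rebuild the path from the five largest epicycles.
largest = np.argsort(-np.abs(c))[:5]
z_approx = sum(c[i] * np.exp(1j * k[i] * t) for i in largest)

# Three epicycles already reproduce this ellipse to machine precision.
print("max error:", np.max(np.abs(z - z_approx)))
```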
Replies from: Bugmaster↑ comment by Manfred · 2012-03-02T19:14:42.076Z · LW(p) · GW(p)
My point was that trying to express epicycles in the new terminology is not possible.
But it is! You simply specify the position as a function of time and you've done it! The reason why that seems so strange isn't because modern physics has erased our ability to add circles together, it's because we no longer have epicycles as a fundamental object in our model of the world.
So if you want the Copernican revolution to be a paradigm shift, the idea needs to be extended a bit. I think the best way is to redefine paradigm shift as a change in the language that we describe the world in. If we used to model planets in terms of epicycles, and now we model them in terms of ellipses, that's a change of language, even though ellipses can be expressed as sums of epicycles, and vice versa.
In fact, in every case of inexpressibility that we know of, it's been because one of the ways of thinking about the world didn't give correct predictions. We have yet to find two ways of thinking about the world that let you get different experimental results if you plan the experiment two different ways. In these cases, the paradigm shift included the falsification of a key claim.
Rather, I think it would be surprising for a discipline independent of empirical facts to have paradigm shifts
I don't think it's necessarily true (for example, you can imagine an abstract game having a revolution in how people thought about what it was doing), but it seems reasonable for math, depending on how you define "math." I think people are just giving you a hard time because you're trying to make this general definitional argument (generally not worth the effort) on pretty shaky ground.
Replies from: TimS↑ comment by TimS · 2012-03-02T19:25:27.508Z · LW(p) · GW(p)
Thanks, that's quite clear. Should I reference abandonment of fundamental objects as the major feature of a paradigm shift?
In fact, in every case of inexpressibility that we know of, it's been because one of the ways of thinking about the world didn't give correct predictions.
Yes, every successful paradigm shift. Proponents of failed paradigm shifts are usually called cranks. :)
My position is that the repeated pattern of false fundamental objects suggests that we should give up on the idea of fundamental objects, and simply try to make more accurate predictions without asserting anything else about the "accuracy" of our models.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-03-02T21:32:37.407Z · LW(p) · GW(p)
and simply try to make more accurate predictions without asserting anything else about the "accuracy" of our models.
How can you make accurate predictions while at the same time discarding the notion of accuracy?
Replies from: TimS↑ comment by TimS · 2012-03-02T21:45:30.480Z · LW(p) · GW(p)
I have no reason to expect that our models correspond to reality in any meaningful way, but I still think that useful predictions are possible.
Replies from: Vladimir_Nesov, Bugmaster↑ comment by Vladimir_Nesov · 2012-03-02T21:51:19.340Z · LW(p) · GW(p)
Predictions about the world are only possible to the extent the world controls the predictions, to the extent considerations you use to come up with the predictions correspond to the state of the world. So it's not possible to make useful predictions based on considerations that don't correspond to reality, or conversely if you manage to make useful predictions, there must be something in your considerations that corresponds to the world. See Searching for Bayes-Structure.
↑ comment by Bugmaster · 2012-03-02T21:48:47.635Z · LW(p) · GW(p)
Isn't "makes accurate predictions" synonymous with "corresponds to reality in some way" ? If there was absolutely no correspondence between your model and reality, you wouldn't be able to judge how accurate your predictions were. In order to make such a judgement, you need to compare your predictions to the actual outcome. By doing so, you are establishing a correspondence between your model and reality.
↑ comment by Morendil · 2012-03-02T17:53:39.239Z · LW(p) · GW(p)
That is, the concept cannot be coherently expressed using the new terminology. For example, there's no way to coherently express concepts like Ptolemy's epicycles or Aristotle's impetus in modern terms.
I'm not seeing how the second sentence is an example of the criterion in your first sentence. That criterion seems too strict, too: in general the new paradigm subsumes the old (as in the canonical example of Newtonian vs relativistic physics).
I'm also not seeing what the attributes "empirical" and "non-social" have to do (causally) with the ability to form coherent concepts.
Maybe you should also unpack what you mean by "coherent"?
I'm not a mathematician, but from my outside perspective I would cheerfully qualify something like Wilf-Zeilberger theory as the math equivalent to a paradigm shift in the empirical sciences.
WP lists "non-euclidean geometry" as a paradigm shift, BTW.
Replies from: TimS↑ comment by TimS · 2012-03-02T18:15:55.379Z · LW(p) · GW(p)
That is, the concept cannot be coherently expressed using the new terminology. For example, there's no way to coherently express concepts like Ptolemy's epicycles or Aristotle's impetus in modern terms. I'm not seeing how the second sentence is an example of the criterion in your first sentence.
Using modern physics, there is no way to express the concept that Ptolemy intended when he said epicycles. More casually, modern physicists would say "Epicycles don't exist." By contrast, the concept of set is still used in Cantor's sense, even though his formulation contained a paradox. So I think the move from geocentric theory to heliocentric theory is a paradigm shift, but adjusting the definition of set is not.
I'm also not seeing what the attributes "empirical" and "non-social" have to do (causally) with the ability to form coherent concepts.
I'm using the word science as synonymous with "empirical studies" (as opposed to making stuff up without looking). That's not intended to be controversial in this community. What is controversial is the assertion that studying the history of science shows examples of paradigm shifts.
One possible explanation of this phenomenon is that science is socially mediated (i.e. affected by social factors when the effect is not justified by empirical facts).
I'm asserting that mathematics is not based on empirical facts. Therefore, one would expect that it could avoid being socially mediated by avoiding interaction with reality (that is, I think a sufficiently intelligent Cartesian skeptic could generate all of mathematics). IF I am correct that paradigm shifts are caused by the socially mediated aspects of the scientific discipline, and IF mathematics can avoid being socially mediated by virtue of its non-empirical nature, then I would expect that no paradigm shifts would occur in mathematics.
This whole reference to paradigm shifts is an attempt to show a justification for my belief that mathematics is non-empirical, contrary to the original quote. If you don't believe in paradigm shifts (as Kuhn meant them, not as used by management gurus), then this is not a particularly persuasive argument.
WP lists "non-euclidean geometry" as a paradigm shift, BTW.
If Wikipedia says that, I don't think it is using the word the way Kuhn did.
Replies from: komponisto, Bugmaster, Morendil↑ comment by komponisto · 2012-03-02T22:45:31.920Z · LW(p) · GW(p)
WP lists "non-euclidean geometry" as a paradigm shift, BTW.
If Wikipedia says that, I don't think it is using the word the way Kuhn did.
For Kuhn, the word was, if anything, a sociological term -- not something referring to the structure of reality itself. (Kuhn was not himself a postmodernist; he still believed in physical reality, as distinct from human constructs.) So it seems to me that it would be entirely consistent with his usage to talk about paradigm shifts in mathematics, since the same kind of sociological phenomena occur in the latter discipline (even if you believe that the nature of mathematical reality itself is different from that of physical reality).
↑ comment by Bugmaster · 2012-03-02T18:36:43.674Z · LW(p) · GW(p)
Using modern physics, there is no way to express the concept that Ptolemy intended when he said epicycles.
As I'd mentioned elsewhere, there's actually a pretty easy way to express that, IMO: "Ptolemy thought that planets move in epicycles, and he was wrong for the following reasons, but if we had poor instruments like he did, we might have made the same mistake".
IF I am correct that they are caused by the socially mediated aspects of the scientific discipline and IF mathematics can avoid being socially mediated by virtue of its non-empirical nature, then I would expect that no paradigm shifts would occur.
The above-mentioned non-Euclidean geometry is one such shift, as far as I understand (though I'm not a mathematician). I'm not sure what the difference is between the history of this concept and what Kuhn meant.
But there were other, more powerful paradigm shifts in math, IMO. For example, the invention of (or discovery of, depending on your philosophy) zero (or, more specifically, a positional system for representing numbers). Irrational numbers. Imaginary numbers. Infinite sets. Calculus (contrast with Zeno's Paradox). The list goes on.
I should also point out that many, if not all, of these discoveries (or "inventions") either arose as a solution to a scientific problem (f.ex. Calculus), or were found to have a useful scientific application after the fact (f.ex. imaginary numbers). How can this be, if mathematics is entirely "non-empirical"?
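On the irrational-numbers example in the list above: the classic anomaly, traditionally attributed to the Pythagoreans, was the discovery that the square root of 2 is no ratio of integers, and the proof fits in a few lines:

```latex
% Suppose \sqrt{2} = p/q with p/q in lowest terms.
% Then p^2 = 2q^2, so p is even; write p = 2r.
% Then 4r^2 = 2q^2, so q^2 = 2r^2 and q is even too,
% contradicting "lowest terms". Hence \sqrt{2} is irrational.
\sqrt{2} = \tfrac{p}{q}
\implies p^2 = 2q^2
\implies 2 \mid p
\implies 2 \mid q
```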
Replies from: TimS↑ comment by TimS · 2012-03-02T18:43:02.977Z · LW(p) · GW(p)
Hmm, I'll have to think about the derivation of zero, the irrational numbers, etc.
I should also point out that many, if not all, of these discoveries (or "inventions") either arose as a solution to a scientific problem (f.ex. Calculus), or were found to have a useful scientific application after the fact (f.ex. imaginary numbers). How can this be, if mathematics is entirely "non-empirical"?
The motivation for deriving mathematical facts is different from the ability to derive them. I don't know why the Cartesian skeptic would want to invent calculus. I'm only saying it would be possible. It wouldn't be possible if mathematics were not independent of empirical facts (because the Cartesian skeptic is isolated from all empirical facts except the skeptic's own existence).
Replies from: Bugmaster↑ comment by Bugmaster · 2012-03-02T18:50:24.178Z · LW(p) · GW(p)
I don't know why the Cartesian skeptic would want to invent calculus. I'm only saying it would be possible.
My point is that we humans are not ideal Cartesian skeptics. We live in a universe which, at the very least, appears to be largely independent of our minds (though of course our minds are parts of it). And in this universe, a vast majority of mathematical concepts have practical applications. Some were invented with applications in mind, while others were found to have such applications after their discovery. How could this be, if math is entirely non-empirical? That is, how do you explain the fact that math is so useful to science and engineering?
↑ comment by Morendil · 2012-03-02T20:24:59.323Z · LW(p) · GW(p)
socially mediated (i.e. affected by social factors when the effect is not justified by empirical facts).
Hmm, "justified" generally has a social component, so I doubt that this definition is useful.
there is no way to express the concept that Ptolomy intended when he said epicycles
So this WP page doesn't exist? ;)
My position, FWIW, is that all of science is socially mediated (as a consequence of being a human activity), mathematics no less than any other science. Whether a mathematical proposition will be assessed as true by mathematicians is a property ultimately based on physics - currently the physics of our brains.
↑ comment by Bugmaster · 2012-03-02T18:26:05.032Z · LW(p) · GW(p)
For example, there's no way to coherently express concepts like Ptolemy's epicycles or Aristotle's impetus in modern terms.
I disagree, as, I suspect, you already know :-)
But I have a further disagreement with your last sentence:
Because mathematics is non-empirical and therefore non-social...
What do you mean, "and therefore" ? As I see it, "empirical" is the opposite of "social". Gravity exists regardless of whether I like it or not, and regardless of how many passionate essays I write about Man's inherent freedom to fly by will alone.
Replies from: TimS↑ comment by TimS · 2012-03-02T18:30:32.389Z · LW(p) · GW(p)
Yes, non-empirical is the wrong word. I mean to assert that mathematics is independent of empirical fact (and therefore non-social: a sufficiently intelligent Cartesian skeptic could derive all of mathematics in solitude).
Replies from: ChristianKl, Bugmaster↑ comment by ChristianKl · 2012-03-03T18:35:23.092Z · LW(p) · GW(p)
Didn't Gödel show that nobody can derive all of mathematics in solitude, because you can't have a complete and consistent mathematical framework?
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2012-03-03T18:42:54.121Z · LW(p) · GW(p)
Goedel showed that no one can derive all of mathematics at all, whether in solitude or in a group, because any consistent system of axioms (strong enough to encode arithmetic) can't lead to all the true statements of its domain.
Anyone know whether it's proven that there are guaranteed to be non-self-referential truths which can't be derived from a given axiom system? (I'm not sure whether "self-referential" can be well-defined.)
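For precision, the standard modern form of the theorem being invoked here (a textbook statement, not anyone's wording in this thread; the restrictions are essential, since complete consistent theories such as Presburger arithmetic do exist):

```latex
% First incompleteness theorem (Goedel-Rosser form): if T is a
% consistent, recursively axiomatizable theory interpreting enough
% arithmetic, then there is a sentence G_T that T neither proves
% nor refutes:
T \nvdash G_T \qquad \text{and} \qquad T \nvdash \neg G_T
```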
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-03-04T00:44:23.364Z · LW(p) · GW(p)
Anyone know whether it's proven that there are guaranteed to be non-self-referential truths which can't be derived from a given axiom system?
It is. At least, it's possible to express Goedel statements in the form "this polynomial equation has no integer solutions" (this follows from the Matiyasevich/MRDP theorem).
(I'm not sure whether "self-referential" can be well-defined.)
It can't.
↑ comment by Bugmaster · 2012-03-02T18:43:45.010Z · LW(p) · GW(p)
A sufficiently intelligent Cartesian skeptic could derive all of mathematics in solitude...
I don't know whether this is true or not; arguments could be (and have been) made that such a skeptic could not exist in a non-empirical void. But that's a bit off-topic, as I still have a problem with your previous sentence:
I mean to assert that mathematics is independent of empirical fact ... and therefore non-social.
Are you asserting that all things which are "dependent on empirical fact" are "social"? In this case, you must be using the word "social" in a different way than I am.
If we lived in a culture where belief in will-powered flight was the norm, and where everyone agreed that willing yourself to fly was really awesome and practically a moral imperative... then people would still plunge to their deaths upon stepping off of skyscraper roofs.
Replies from: TimS↑ comment by TimS · 2012-03-02T18:55:21.186Z · LW(p) · GW(p)
I don't know whether this is true or not; arguments could (and have) been made that such a skeptic could not exist in a non-empirical void.
:) It is the case that the coherence of the idea of the Cartesian skeptic is basically what we are debating.
I'm specifically asserting that things that are independent of empirical facts are non-social.
I think that things that are subject to empirical fact are actually subject to social mediation, but that isn't a consequence of my previous statement.
What does rejection of the assertion "If you think you can fly, then you can" have to do with the definition of socially mediated? I don't think post-modern thinking is committed to anti-realism about the physical world, even if it probably should endorse anti-realism about physical models. The ability to make accurate predictions doesn't require a model that corresponds with reality.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-03-02T21:07:29.685Z · LW(p) · GW(p)
It is the case that the coherence of the idea of the Cartesian skeptic is basically what we are debating.
That might be a bit orthogonal to the discussion; I'm certainly willing to grant you the Cartesian skeptic for the duration of this thread :-)
I'm specifically asserting that things that are independent of empirical facts are non-social.
If you are talking about pure reason, don't the conclusions depend on your axioms? If so, the results may not be social, per se, but they're certainly arbitrary. If you pick different axioms, you get different conclusions.
What does rejection of the assertion "If you think you can fly, then you can" have to do with the definition of socially mediated? ... The ability to make accurate predictions doesn't require a model that corresponds with reality.
To me, these two sentences sound diametrically opposed to each other. If your model does not correspond to reality, how is it different from any other arbitrary social construct (such as the color of Harry Potter's favorite scarf or whatever)? On the other hand, if your model makes specific predictions about reality, which are found to be true time and time again (f.ex., "if you step off this ledge, you'll plummet to your splattery doom"), then how can you say that your model does not correspond to reality in any meaningful way?
↑ comment by ChristianKl · 2012-03-03T18:22:04.353Z · LW(p) · GW(p)
The frequentist vs. Bayesian debate is a debate between competing mathematical paradigms. True mathematicians, however, shun statistics. They don't like the statistical paradigm ;)
Gödel's discovery ended a certain mathematical paradigm of wanting to construct a complete mathematics from the ground up.
I could imagine a future paradigm shift away from the ideal of mathematical proofs to more experimental math. Neural nets or quantum computers could give you answers to the mathematical questions you ask that might be better than the answers that axiom-and-proof-based math provides.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-03-04T00:46:16.994Z · LW(p) · GW(p)
Gödel's discovery ended a certain mathematical paradigm of wanting to construct a complete mathematics from the ground up.
Except, in practice mathematics still works this way.
↑ comment by RolfAndreassen · 2012-03-05T20:00:47.217Z · LW(p) · GW(p)
It had damn well better be checked that way, because it rests on the assumption of flat space, which may or may not be true. The derivation from the axioms is not checked by empirical data; the axioms themselves are. If you don't check the axioms, you don't have knowledge, you have pretty equations on paper, unconnected to any fact. Pythagoras is just as much empirical knowledge as Einstein; it's just that the axioms are closer to being built-in to the human brain, so you get an illusion of Eternal Obviousness. Try explaining the flat-space axioms to squid beings from the planet Rigel, which as it happens has a gravity field twenty times that of Earth, and see how far you get. "There's only one parallel line through a given point", you say, and the squid explodes in scorn. "Of course there's more than one! Here, I'll draw them for you and you can see for yourself!"
Replies from: TimS↑ comment by TimS · 2012-03-05T20:31:12.655Z · LW(p) · GW(p)
The derivation from the axioms is not checked by empirical data
I agree. Isn't deriving propositions from axioms what mathematics is?
Replies from: RolfAndreassen↑ comment by RolfAndreassen · 2012-03-06T05:23:03.281Z · LW(p) · GW(p)
A mathematician might say so, yes. I'm a physicist; I'm not really interested in what can be derived from axioms unconnected to reality.
↑ comment by Vaniver · 2012-03-03T01:05:54.135Z · LW(p) · GW(p)
The Pythagorean theorem isn't proved or even checked by measuring right triangles and noticing that a^2 + b^2 = c^2.
I am having trouble with this as a statement of historical fact. Isn't that how they did it?
Replies from: ChristianKl, TimS↑ comment by ChristianKl · 2012-03-03T18:35:14.765Z · LW(p) · GW(p)
You could call it a paradigm shift that we today don't like how they did it ;)
↑ comment by TimS · 2012-03-05T20:41:50.978Z · LW(p) · GW(p)
I'm not sure that's how it was motivated historically. Note that Euclid's proof (Edit: not Euler) doesn't require measuring anything at all.
To use a different example, how would one go about measuring whether there are more real numbers than integers? The proof is pretty easy, but it doesn't require any empirical facts as far as I can tell.
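Since Cantor's diagonal argument is the proof alluded to, here is its skeleton as code (purely illustrative; `enumerate_reals` is a hypothetical stand-in for any alleged enumeration of the reals in (0, 1)):

```python
def diagonal(enumerate_reals, digits=20):
    """Build (a prefix of) a real in (0, 1) that differs from the
    n-th enumerated real in its n-th decimal digit, so it cannot
    appear anywhere in the alleged enumeration."""
    return "0." + "".join(
        "5" if enumerate_reals(n)[n] != "5" else "6"
        for n in range(digits)
    )

# Against the toy enumeration 0.000..., 0.111..., 0.222..., ...:
print(diagonal(lambda n: str(n % 10) * 30))  # 0.55555655555555565555
```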
Replies from: Vaniver, None↑ comment by Vaniver · 2012-03-05T21:18:45.076Z · LW(p) · GW(p)
I think you mean Euclid's proof, and he was working centuries after Pythagoras, who was himself working over a thousand years after the Babylonians, who discovered Pythagorean Triples (the ones you notice by measuring).
To restate, I'm fine with saying that a proof of the Pythagorean Theorem exists that does not require measuring physical triangles, but I'm not comfortable with the statement that it cannot be proved by measuring physical triangles, which is what your original comment implied to me.
As discussed in the other subthread, I think that Deutsch's intention was to argue that any instance of a proof, as an object, has to exist in reality somewhere, which is a very different claim.
Replies from: Bugmaster, TimS, TimS↑ comment by Bugmaster · 2012-03-05T21:38:55.801Z · LW(p) · GW(p)
...but I'm not comfortable with the statement that it cannot be proved by measuring physical triangles...
It depends on what you mean by "proved". The Pythagorean Theorem applies to all possible triangles (on a flat Euclidean plane), and the answer it gives you is infinitely precise. If you are measuring real triangles on Earth, however, the best you could do is get close to the answer, due to the uncertainty inherent in your instruments (among other factors). Still, you could very easily disprove a theorem that way, and you could also use your experimental results to zero in on the analytical solution much faster than if you were operating from pure reason alone.
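A minimal simulation of that kind of empirical check (assuming NumPy; the noise level and the rival law are my own assumptions): even crude noisy measurements separate the true law from a wrong one at a glance.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 random right triangles, each side measured with Gaussian
# instrument noise of 0.01 units (an assumed error level).
a = rng.uniform(1, 10, size=1000)
b = rng.uniform(1, 10, size=1000)
c = np.hypot(a, b)
measure = lambda x: x + rng.normal(0, 0.01, size=x.shape)
am, bm, cm = measure(a), measure(b), measure(c)

# Residuals under the true law and under a hypothetical rival law.
print(np.mean(np.abs(am**2 + bm**2 - cm**2)))  # small, noise-limited
print(np.mean(np.abs(am**2 + bm**2 - cm**3)))  # enormous: rejected at a glance
```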
↑ comment by TimS · 2012-03-05T21:26:14.975Z · LW(p) · GW(p)
Other subthread? I don't see where anyone made that point. Moreover, I don't think it is a good reading of the original quote.
But the proof of [a mathematical] proposition is a matter of physics only. There is no such thing as abstractly proving something, just as there is no such thing as abstractly knowing something.
That's not fairly represented by saying "All actual proofs are on physical paper (or equivalent)."
Replies from: Vaniver↑ comment by Vaniver · 2012-03-06T20:08:08.223Z · LW(p) · GW(p)
I was thinking of this comment. If by "knowledge" he means "a piece of memory in reality," then by definition there is no abstract knowledge, and no abstract proofs, because he limited himself to concrete knowledge.
That knowledge can describe concepts that we don't think of as concrete: the Pythagorean Theorem doesn't have a physical manifestation somewhere, but my knowledge of it does have a physical manifestation.
↑ comment by [deleted] · 2012-03-05T21:28:10.211Z · LW(p) · GW(p)
To use a different example, how would one go about measuring whether there are more real numbers than integers? The proof is pretty easy, but it doesn't require any empirical facts as far as I can tell.
There are all kinds of quantitative ways in which there are more real numbers than integers. On the other hand, a tiny minority of us regard Cantor's argument (which I think you're alluding to) as misleading and maybe false.
↑ comment by Bugmaster · 2012-03-02T18:55:00.782Z · LW(p) · GW(p)
No, that's not how you prove it, but you can check it pretty easily with right triangles. Similarly, if you believe that Pi == 3, you only need a large wheel and a piece of string to discover that you're wrong. This won't tell you the actual value of Pi, nor would it constitute a mathematical proof, but at least the experience would point you in the right direction.
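A toy version of the wheel-and-string experiment, as a sketch (standard library only; the wheel size and the 1% error figure are my own assumptions):

```python
import math
import random

# One string-and-wheel trial: a wheel of diameter 1, each reading
# subject to 1% multiplicative measurement error.
random.seed(0)
circumference = math.pi * random.gauss(1, 0.01)
diameter = 1.0 * random.gauss(1, 0.01)
print(circumference / diameter)  # ~3.14: easily distinguished from 3
```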
Replies from: TimS↑ comment by TimS · 2012-03-02T18:59:28.628Z · LW(p) · GW(p)
If you find a right triangle with sides (2.9, 4, 5.15) rather than (3, 4, 5), are you ever entitled to reject the Pythagorean theorem? Doesn't measurement error and the non-Euclidean nature of the actual universe completely explain your experience?
In short, it seems like you can't empirically check the Pythagorean theorem.
Replies from: Bugmaster, MaoShan↑ comment by Bugmaster · 2012-03-02T21:17:42.965Z · LW(p) · GW(p)
If you find a right triangle with sides (2.9, 4, 5.15) rather than (3, 4, 5), are you ever entitled to reject the Pythagorean theorem?
That is not what I said. I said, regarding Pi == 3, "this won't tell you the actual value of Pi, nor would it constitute a mathematical proof, but at least the experience would point you in the right direction". If you believe that a^2 + b^2 = c^5, instead of c^2; and if your instruments are accurate down to 0.2 units, then you can discover very quickly that your formula is most probably wrong. You won't know which answer is right (though you could make a very good guess, by taking more measurements), but you will have enough evidence to doubt your theorem.
The words "most probably" in the above sentence are very important. No amount of empirical measurements will constitute a 100% logically consistent mathematical proof. But if your goal is to figure out how the length of the hypotenuse relates to the lengths of the two sides, then you are not limited to total ignorance or total knowledge, with nothing in between. You can make educated guesses. Yes, you could also get there by pure reason alone, and sometimes that approach works best; but that doesn't mean that you cannot, in principle, use empirical evidence to find the right path.
↑ comment by MaoShan · 2012-03-04T06:57:53.044Z · LW(p) · GW(p)
Peer review. If the next two hundred scientists who measure your triangle get the same measurements from other rulers by different manufacturers, you'd be completely justified in rejecting the Pythagorean theorem.
My challenge to you: go out and see if you can find a right triangle with those measurements.
Replies from: Eugine_Nier, Luke_A_Somers↑ comment by Eugine_Nier · 2012-03-04T20:37:15.121Z · LW(p) · GW(p)
Sure, how about a triangle just outside a black hole.
Replies from: MaoShan↑ comment by Luke_A_Somers · 2012-03-05T17:02:42.809Z · LW(p) · GW(p)
You're completely justified in rejecting Euclid's axioms. You're not at all justified in rejecting the Pythagorean theorem.
Replies from: MaoShan
comment by James_Miller · 2012-03-01T15:22:58.742Z · LW(p) · GW(p)
Had no idea so much strategy was possible in Rock, Paper, Scissors? The rules of the game itself may be simple, but the human mind is not.
Replies from: Giles
↑ comment by Giles · 2012-03-04T14:35:08.519Z · LW(p) · GW(p)
I saw on TV some kid lose convincingly against an RPS champion when the kid had been given a prepared (random) list of moves to make ahead of time. That can't be explained by strategy - it was either coincidence, or it's possible to cheat by seeing which way your opponent's hand is unfolding and changing your move at the last moment.
Replies from: Desrtopa, James_Miller↑ comment by Desrtopa · 2012-03-04T14:45:36.442Z · LW(p) · GW(p)
The latter is definitely possible. Back when I was still playing RPS as a kid, I was fairly good at it; enough for somewhere upwards of 70% of my plays to be wins.
You don't want to change your move at the last moment though so much as you want to keep your hand in a plausibly formless configuration you can turn into a move at the last moment. Less likely to be called out for cheating.
↑ comment by James_Miller · 2012-03-04T20:50:42.709Z · LW(p) · GW(p)
Or the losers were unintentionally signaling their moves beforehand.
comment by bungula · 2012-03-01T13:30:50.284Z · LW(p) · GW(p)
It's the Face of Boe. I'm absolutely certain about this, absolutely positive. Of course I'll probably turn out to be incorrect.
Sam Hughes, talking about the first season finale of Doctor Who, differentiating between the subjective feeling of certainty and the actual probability estimate.
comment by Stabilizer · 2012-03-06T04:51:18.916Z · LW(p) · GW(p)
We have not succeeded in answering all our problems.
The answers we have found only serve
to raise a whole set of new questions.
In some ways we feel we are as confused as ever,
but we believe we are confused on a higher level
and about more important things.
-Posted outside the mathematics reading room, Tromsø University
From the homepage of Kim C. Border
comment by Stephanie_Cunnane · 2012-03-08T22:44:15.046Z · LW(p) · GW(p)
Now let's talk about efficient market theory, a wonderful economic doctrine that had a long vogue in spite of the experience of Berkshire Hathaway. In fact, one of the economists who won--he shared a Nobel Prize--and as he looked at Berkshire Hathaway year after year, which people would throw in his face as saying maybe the market isn't quite as efficient as you think, he said, "Well, it's a two-sigma event." And then he said we were a three-sigma event. And then he said we were a four-sigma event. And he finally got up to six sigmas--better to add a sigma than change a theory, just because the evidence comes in differently. [Laughter] And, of course, when this share of a Nobel Prize went into money management himself, he sank like a stone.
Replies from: Daniel_Burfoot
↑ comment by Daniel_Burfoot · 2012-03-11T18:41:31.839Z · LW(p) · GW(p)
I'm surprised by how consistently misinterpreted the EMH is, even by people with the widest possible perspective on markets and economics. The EMH practically requires that some people make money by trading, because that's the mechanism which causes the market to become efficient. The EMH should really be understood to mean that as more and more money is leached out of the market by speculators, prices become better and better approximations to real net present values.
Replies from: roystgnr↑ comment by roystgnr · 2012-03-24T22:49:20.918Z · LW(p) · GW(p)
I've always thought of the Efficient Market Hypothesis as the anti-Tinkerbell: if everybody all starts clapping and believing in it, it dies.
See, for example, every bubble ever. "We don't need to worry about buying that thing for more than it seems to be worth, because prices are going up so we can always resell it for even more than that later!"
Replies from: wedrifid, David_Gerard↑ comment by wedrifid · 2012-03-25T07:49:50.255Z · LW(p) · GW(p)
See, for example, every bubble ever. "We don't need to worry about buying that thing for more than it seems to be worth, because prices are going up so we can always resell it for even more than that later!"
If they actually believed the market they were trading in was efficient, they wouldn't believe that prices would continue to go up. They would expect them to follow the value of capital invested at that level of risk. Further - as applicable to any bubble that doesn't represent overinvestment in the entire stock market over all industries - they wouldn't jump on a given stock or group of stocks more than any other. They would buy random stocks from the market, probably distributed as widely as possible.
No, belief in an efficient market can only be used as a scapegoat here, not as a credible cause.
↑ comment by David_Gerard · 2012-03-30T11:01:38.832Z · LW(p) · GW(p)
That's pretty much the thesis of Markets are Anti-Inductive by EY.
comment by [deleted] · 2012-03-06T12:19:59.293Z · LW(p) · GW(p)
The reality is actually scarier than it would be if there was a big conspiracy run by an Inner Party of evil but brilliant know-it-alls, like O’Brien in “1984” or Mustapha Mond in “Brave New World.” The reality is that nobody in charge knows much about what is going on.
--Steve Sailer, here
Replies from: Ezekiel, NihilCredo↑ comment by Ezekiel · 2012-03-06T14:48:06.899Z · LW(p) · GW(p)
For all that it's fun to signal our horror at the ignorance/irrationality/stupidity of those in charge, I still think real-world 2012 Britain, USA, Canada and Australia are all better than Oceania circa 1984. For one thing, people are not very often written out of existence.
Replies from: None↑ comment by [deleted] · 2012-03-06T14:57:00.090Z · LW(p) · GW(p)
For one thing, people are not very often written out of existence.
Or ... are they?
Replies from: RobinZ, Aryn↑ comment by RobinZ · 2012-03-06T18:35:59.735Z · LW(p) · GW(p)
At a certain point, conspiracy theories become indistinguishable from skeptical hypotheses.
↑ comment by NihilCredo · 2012-03-08T09:14:34.224Z · LW(p) · GW(p)
Mustapha Mond evil?
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2012-03-08T16:03:53.040Z · LW(p) · GW(p)
Of course. He keeps the brave new world running. I don't think there are many takers here for the idea that Brave New World depicts a society we should desire and work for.
comment by XFrequentist · 2012-03-02T21:01:52.478Z · LW(p) · GW(p)
May the best of your todays, be the worst of your tomorrows
- Jay-Z, Forever Young
[Taking the lyrics literally, the whole thing is a pretty sweet transhumanist anthem.]
comment by CasioTheSane · 2012-03-09T07:51:03.609Z · LW(p) · GW(p)
"Sir Isaac Newton, renowned inventor of the milled-edge coin and the catflap!"
"The what?" said Richard.
"The catflap! A device of the utmost cunning, perspicuity and invention. It is a door within a door, you see, a ..."
"Yes," said Richard, "there was also the small matter of gravity."
"Gravity," said Dirk with a slightly dismissed shrug, "yes, there was that as well, I suppose. Though that, of course, was merely a discovery. It was there to be discovered." ...
"You see?" he said dropping his cigarette butt, "They even keep it on at weekends. Someone was bound to notice sooner or later. But the catflap ... ah, there is a very different matter. Invention, pure creative invention. It is a door within a door, you see."
-Douglas Adams
comment by Ezekiel · 2012-03-05T22:13:09.488Z · LW(p) · GW(p)
Because throughout history, every mystery ever solved has turned out to be... Not Magic
-- Tim Minchin, Storm
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-03-06T02:47:55.298Z · LW(p) · GW(p)
That could just mean we're no good at solving mysteries that involve magic.
Also, I think there is a selection effect insofar as there are solved mysteries where the solution was magic; however, you'd probably argue that they were not solved correctly, using no other evidence than that the solutions involved magic.
Replies from: Ezekiel↑ comment by Ezekiel · 2012-03-06T13:01:50.217Z · LW(p) · GW(p)
It depends what you mean by magic. Nowadays we communicate by bouncing invisible light off the sky, which would sure as hell qualify as "magic" to someone six hundred years ago.
The issue is that "magic", in the sense that I take Minchin to be using it, isn't a solution at all. No matter what the explanation is, once you've actually got it, it's not "magic" any more; it's "electrons" or "distortion of spacetime" or "computers" or whatever, the distinction being that we have equations for all of those things.
Take the witch trials, for example - to the best of my extremely limited knowledge, most witch trials involved very poorly-defined ideas about what a witch was capable of or what the signs of a witch were. If they had known how the accused were supposed to be screwing with reality, they wouldn't have called them "witches", but "scientists" or "politicians" or "guys with swords".
Admittedly all of those can have the same blank curiosity-stopping power as "magic" to some people, but "magic" almost always does. Which is why, once you've solved the mystery, it turns out to be Not Magic.
Replies from: Eugine_Nier, Eneasz↑ comment by Eugine_Nier · 2012-03-12T01:25:10.227Z · LW(p) · GW(p)
Take the witch trials, for example - to the best of my extremely limited knowledge, most witch trials involved very poorly-defined ideas about what a witch was capable of or what the signs of a witch were.
Consider something like this and notice that our modern "explanations" aren't much better.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-03-12T01:48:54.004Z · LW(p) · GW(p)
And because of those damned atheists we can't even start a witch hunt to figure out who's responsible!
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-03-12T03:03:17.914Z · LW(p) · GW(p)
We just need to rephrase "witch" in scientific terms.
(Also sorry about the political link, but with a topic like this that's inevitable).
UPDATE: This post goes into more details.
↑ comment by Eneasz · 2012-03-08T20:04:23.897Z · LW(p) · GW(p)
I think Tim Minchin was using "magic" the same way most people use "magic" - meaning ontologically basic mental things.
Replies from: Ezekiel↑ comment by Ezekiel · 2012-03-09T00:27:21.068Z · LW(p) · GW(p)
To be fair, I've never asked him. But he included homoeopathy, which its practitioners claim isn't mental.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-03-09T03:41:37.549Z · LW(p) · GW(p)
So he was using magic in the sense of "disagrees with current scientific theory"; in that case, the initial quote is circular.
Replies from: Ezekiel, AspiringKnitter↑ comment by AspiringKnitter · 2012-03-09T06:56:06.966Z · LW(p) · GW(p)
And wrong. E.g., the perihelion precession of Mercury turned out to be caused by all matter being able to warp space and time by its very existence. We like to call that Not Magic, but it's magic in the sense of disagreeing with established scientific theory, and in the sense of being something that, if explained to someone who believed in Newtonian physics, would sound like magic.
Replies from: Desrtopa↑ comment by Desrtopa · 2012-03-24T03:58:51.226Z · LW(p) · GW(p)
I wouldn't say it would sound like magic. It would sound weird and inexplicable, but magic doesn't just sound inexplicable, it sounds like reality working in a mentalist, top-down sort of way. It sounds like associative thinking, believing that words or thoughts can act on reality directly, or things behaving in agentlike ways without any apparent mechanism for agency.
Relativity doesn't sound magical; in fact, I'd even say that it sounds antimagical, because it runs so counter to our basic intuitions. Quantum entanglement does sound somewhat magical, but it's still well evidenced.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-03-24T05:21:18.233Z · LW(p) · GW(p)
Interesting. I hadn't thought about that. Now that I think about it, you're right; most fictional magic does act on things that are fundamental concepts in people's minds, rather than on things that are actually fundamental.
That said, I still say it all sounds like magic. I couldn't tell you exactly what algorithm my brain uses to come up with "sounds like magic", though.
Replies from: Desrtopa↑ comment by Desrtopa · 2012-03-24T05:43:27.242Z · LW(p) · GW(p)
Now that I think about it, you're right; most fictional magic does act on things that are fundamental concepts in people's minds, rather than on things that are actually fundamental.
I didn't just have fictional magic in mind; concepts like sympathetic magic are widespread, maybe even universal in human culture. Humans seem to have strong innate intuitions about the working of magic.
comment by [deleted] · 2012-03-01T20:02:07.735Z · LW(p) · GW(p)
.
Replies from: FiftyTwo↑ comment by FiftyTwo · 2012-03-04T19:22:34.495Z · LW(p) · GW(p)
Sounds like a counter to "Never interrupt your enemy when he is making a mistake." (Attributed, but seemingly falsely, to Napoleon Bonaparte)
comment by GLaDOS · 2012-03-01T19:07:58.916Z · LW(p) · GW(p)
I have sometimes seen people try to list what a real intellectual should know. I think it might be more illuminating to list what he shouldn’t.
--Gregory Cochran, in a comment here
Replies from: None, NancyLebovitz↑ comment by [deleted] · 2012-03-01T21:11:09.653Z · LW(p) · GW(p)
.
Replies from: GLaDOS↑ comment by GLaDOS · 2012-03-01T22:50:43.436Z · LW(p) · GW(p)
Yes but I didn't at first want to post that because it is slightly political. Though I guess the rationality core does outweigh any mind-killing.
Replies from: None↑ comment by NancyLebovitz · 2012-03-03T13:01:43.193Z · LW(p) · GW(p)
This has 6 karma points, so I'm left curious about whether people have anything in mind about what real intellectuals shouldn't know.
Replies from: player_03, None, cousin_it, Eugine_Nier, FiftyTwo↑ comment by [deleted] · 2012-03-07T15:46:50.558Z · LW(p) · GW(p)
Real intellectuals shouldn't know the details of fictional worlds. They shouldn't know the private business of their neighbors. They shouldn't know more about sports than is necessary for casual conversation on the matter (though no less either). They shouldn't know how to lie, how to manipulate people, they shouldn't know much about how to make money, they shouldn't know much about concrete political affairs unless that is their business. They shouldn't know too much about food or the maintenance of their health.
Real intellectuals should be able to play an instrument, but not very well. They shouldn't know too much about crimes, mental disorders, disasters, diseases, or wars. They should know the broad strokes of history, but not the details unless that is their primary business.
Real intellectuals should enjoy music, but never study it, unless that is their primary business. Most essentially, real intellectuals shouldn't know what they don't have the time or inclination to know well.
Replies from: thomblake↑ comment by thomblake · 2012-03-07T16:12:29.317Z · LW(p) · GW(p)
Is this meant to be funny?
Replies from: Will_Newsome, None↑ comment by Will_Newsome · 2012-03-07T16:24:39.995Z · LW(p) · GW(p)
Seemed serious and somewhat reasonable to me.
↑ comment by cousin_it · 2012-03-07T09:31:57.230Z · LW(p) · GW(p)
Real intellectuals shouldn't know things that science doesn't know.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2012-03-07T12:57:56.899Z · LW(p) · GW(p)
Then science would have nothing to learn from them.
Replies from: cousin_it, Will_Newsome↑ comment by cousin_it · 2012-03-07T13:33:14.028Z · LW(p) · GW(p)
Why? They could submit their tentative results to science, wait for verification, and only then become confident. In fact I think that's the right way.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-03-07T14:30:35.280Z · LW(p) · GW(p)
What about philosophy? Science doesn't know about philosophy of science, yet a real intellectual should know about philosophy of science. Do you mean "science" in a really broad sense or "intellectual" in a really narrow sense?
Replies from: cousin_it↑ comment by cousin_it · 2012-03-07T15:25:09.029Z · LW(p) · GW(p)
I don't understand your question yet. Can you give an example statement that philosophy of science knows but science doesn't?
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-03-07T15:32:09.227Z · LW(p) · GW(p)
"A mature science, according to Kuhn, experiences alternating phases of normal science and revolutions. In normal science the key theories, instruments, values and metaphysical assumptions that comprise the disciplinary matrix are kept fixed, permitting the cumulative generation of puzzle-solutions, whereas in a scientific revolution the disciplinary matrix undergoes revision, in order to permit the solution of the more serious anomalous puzzles that disturbed the preceding period of normal science." -- SEP on Kuhn
?
Replies from: cousin_it↑ comment by cousin_it · 2012-03-07T16:25:19.631Z · LW(p) · GW(p)
This is an instance of "X said Y". Science isn't forbidden from knowing that X said Y, but such knowledge is mostly useless and I'm not sure why people should bother learning it. The only interesting question is which bits of Y stay true without the "X said".
Replies from: TheOtherDave, None↑ comment by TheOtherDave · 2012-03-07T17:20:03.567Z · LW(p) · GW(p)
I suspect that Will meant that "A mature science experiences alternating phases of normal science and revolutions. In normal science the key theories, instruments, values and metaphysical assumptions that comprise the disciplinary matrix are kept fixed, permitting the cumulative generation of puzzle-solutions, whereas in a scientific revolution the disciplinary matrix undergoes revision, in order to permit the solution of the more serious anomalous puzzles that disturbed the preceding period of normal science." is a statement of philosophy of science, and consequently (according to Will) something that science doesn't know, and that the "according to Kuhn" part is irrelevant.
I suspect that your response is that, insofar as that statement is true and meaningful, science does know it.
If I'm wrong about either of those suspicions I'll be very surprised and inclined to update strongly accordingly, but I'm not yet sure in what directions beyond sharply reduced confidence that I understand either of you.
Replies from: cousin_it↑ comment by cousin_it · 2012-03-07T17:30:00.981Z · LW(p) · GW(p)
I suspect that your response is that, insofar as that statement is true and meaningful, science does know it.
Science doesn't know everything that's true. Make it "insofar as that statement is scientifically proven" :-)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-03-07T19:59:34.622Z · LW(p) · GW(p)
Mm, yes.
And much as I ought to distrust myself for saying this after having previously said I'd be very surprised and significantly update if I was wrong: "well, yes, that's what I meant."
I am chagrined.
↑ comment by [deleted] · 2012-03-07T16:28:49.898Z · LW(p) · GW(p)
I think the trouble here is that 'science' is a somewhat loosely held together institution of journals, technical practices, university departments, labs, etc. It doesn't 'know' anything, any more than it speculates, opines, believes, doubts, or worries. People know things, often (perhaps entirely) by engaging with other people.
↑ comment by Will_Newsome · 2012-03-07T13:10:42.728Z · LW(p) · GW(p)
(I thought User:cousin_it was making a descriptive statement about what academia thinks intellectuals should know, 'cuz as a normative statement it's obviously wrong.)
↑ comment by Eugine_Nier · 2012-03-04T00:39:51.393Z · LW(p) · GW(p)
I interpret the quote as saying that to be a "good intellectual" one needs to not know the problems with the positions "good intellectuals" are expected to defend.
comment by NancyLebovitz · 2012-03-05T13:26:22.800Z · LW(p) · GW(p)
My father was a psychologist and a lifelong student of human behavior, and when I brought him my report card he often used to say: “This tells me something about you, something about your teacher, and something about myself.”
Replies from: gwern
↑ comment by gwern · 2012-03-06T16:28:54.201Z · LW(p) · GW(p)
Reminds me of a Bateson quote.
comment by cousin_it · 2012-03-14T12:01:03.904Z · LW(p) · GW(p)
Running any enterprise the size of Google or Goldman Sachs requires trading off many competing factors. To make the tradeoff, someone has to keep all that information in their head at once. There's no other way to balance competing demands; if you keep only part of the information in your head, your decision will be biased towards the part that you've loaded into your brain. If you try to spread decision making across multiple people, the decisions will be biased towards the part that the person who screams the loudest can hold in his head (which is usually a smaller subset than optimal; it takes mental effort to scream loudly).
-- nostrademons on Hacker News
Replies from: Hul-Gil↑ comment by Hul-Gil · 2012-03-30T18:14:05.802Z · LW(p) · GW(p)
That's a good quote! +1.
Unfortunately, for every rational action, there appears to be an equal and opposite irrational one: did you see bhousel's response?
Rationality is emotionless and mechanical. It's about making a reasonable decision based on whatever information is available to you. However, rational decisions do not involve morals, culture, or feelings. This is exactly what companies like Google and Goldman Sachs are being criticized for. [...] If I look down into my wallet and see no money there, and I'm hungry for lunch, and I decide to steal some money from a little old lady, that may be a perfectly rational decision to make. An outside observer may say I'm being evil, but they don't have a complete information picture about how hungry I am, or how long the line at the ATM is, or that everyone else is eating lunch so I have a duty to my shareholders to do the same.
Sigh.
comment by philh · 2012-03-01T22:32:08.078Z · LW(p) · GW(p)
The Princess Bride:
Man in Black: Inhale this, but do not touch.
Vizzini: [sniffs] I smell nothing.
Man in Black: What you do not smell is called iocane powder. It is odorless, tasteless, dissolves instantly in liquid, and is among the more deadlier poisons known to man.
[He puts the goblets behind his back and puts the poison into one of the goblets, then sets them down in front of him]
Man in Black: All right. Where is the poison? The battle of wits has begun. It ends when you decide and we both drink, and find out who is right... and who is dead.
[Vizzini stalls, then eventually chooses the glass in front of the man in black. They both drink, and Vizzini dies.]
Buttercup: And to think, all that time it was your cup that was poisoned.
Man in Black: They were both poisoned. I spent the last few years building up an immunity to iocane powder.
↑ comment by shokwave · 2012-03-02T00:18:19.693Z · LW(p) · GW(p)
Man in Black: All right. Where is the poison? The battle of wits has begun.
Vizzini: But it's so simple. All I have to do is divine from what I know of you: are you the sort of man who would put the poison into his own goblet or his enemy's? Now, a clever man would put the poison into his own goblet, because he would know that only a great fool would reach for what he was given. I am not a great fool, so I can clearly not choose the wine in front of you. But you must have known I was not a great fool. You would have counted on it, so I can clearly not choose the wine in front of me.
Man in Black: You've made your decision then?
Vizzini: Not remotely! Because iocane comes from Australia, as everyone knows! And Australia is entirely peopled with criminals. And criminals are used to having people not trust them, as you are not trusted by me, so I can clearly not choose the wine in front of you.
Man in Black: Truly, you have a dizzying intellect.
Vizzini: And you must have suspected I would have known the powder's origin, so I can clearly not choose the wine in front of me.
Man in Black: You're just stalling now.
Vizzini: You'd like to think that, wouldn't you?! You've beaten my giant, which means you're exceptionally strong, so you could've put the poison in your own goblet, trusting on your strength to save you, so I can clearly not choose the wine in front of you! But, you've also bested my Spaniard, which means you must have studied, and in studying you must have learned that man is mortal, so you would have put the poison as far from yourself as possible, so I can clearly not choose the wine in front of me!
...
Man in Black: Then make your choice.
Vizzini: I will, and I choose- ...
Vizzini of the Princess Bride, on the dangers of reasoning in absolutes - both logically ("this is proof it's not in my goblet") and propositionally (the implicit assumption Vizzini has that one and only one wine goblet is poisoned - P or ~P, as it were)
Replies from: philh↑ comment by philh · 2012-03-02T01:49:57.634Z · LW(p) · GW(p)
I don't agree that Vizzini is trying to reason in logical absolutes. He talks like he is, but he doesn't necessarily believe the things he's saying.
Man in Black: You're trying to trick me into giving away something. It won't work.
Vizzini: It has worked! You've given everything away! I know where the poison is!
My interpretation is that he really is trying to trick the man.
Later he distracts the man and swaps the glasses around; then he pretends to choose his own glass. He makes sure the man drinks first. I think he's reasoning/hoping that the man would not deliberately drink from the poisoned cup. So when the man does drink he believes his chosen cup is safe. If the man had been unwilling to drink, Vizzini would have assumed that he now held the poisoned glass, and perhaps resorted to treachery.
He's overconfident, but he's not a complete fool.
(I don't have strong confidence in this analysis, because he's a minor character in a movie.)
Replies from: shokwave, shokwave↑ comment by shokwave · 2012-03-02T04:40:56.939Z · LW(p) · GW(p)
Well, yes, he only pretends to reason in logical absolutes...
... which was why I wrote "and propositionally" - because he does actually reason in propositional absolutes. I agree with your analysis but note that it is only a good strategy if it's true that one and only one cup contains poison (or the equivalent, that one and only one cup will kill the Man in Black).
On re-reading I may have lost that subtlety in the clumsy (parenthetical-filled) expression of the final line.
comment by Richard_Kennaway · 2012-03-12T11:14:24.084Z · LW(p) · GW(p)
Said by a pub manager I know to someone who came into his pub selling lucky white heather:
"I'm running a business turning over half a million pounds a year, and you're selling lucky heather door to door. Doesn't seem to work, does it?"
comment by Will_Newsome · 2012-03-11T17:57:39.730Z · LW(p) · GW(p)
To a Frenchman like M. Renan, intelligence does not mean a quickness of wit, a ready dexterity in handling ideas, or even a ready accessibility to ideas. It implies those, of course, but it does not mean them; and one should perhaps say in passing that it does not mean the pert and ignorant cleverness that current vulgar usage has associated with the word. Again it is our common day-to-day experience that gives us the best possible assistance in establishing the necessary differentiations. We have all seen men who were quick witted, accessible to ideas and handy with their management of them, whom we should yet hesitate to call intelligent; we are conscious that the term does not quite fit. The word sends us back to a phrase of Plato. The person of intelligence is the one who always tends to "see things as they are," the one who never permits his view of them to be directed by convention, by the hope of advantage, or by an irrational and arbitrary authoritarianism. He allows the current of his consciousness to flow in perfect freedom over any object that may be presented to it, uncontrolled by prejudice, prepossession or formula; and thus we may say that there are certain integrities at the root of intelligence which give it somewhat the aspect of a moral as well as an intellectual attribute.
Albert Jay Nock, The Theory of Education in the United States
comment by MinibearRex · 2012-03-08T23:50:38.205Z · LW(p) · GW(p)
On the mind projection fallacy:
Mankind are (sic) always predisposed to believe that any subjective feeling, not otherwise accounted for, is a revelation of some objective reality.
-John Stuart Mill
Replies from: Voltairina↑ comment by Voltairina · 2012-03-10T18:57:21.286Z · LW(p) · GW(p)
Every subjective feeling IS at least one thing - a bunch of neurons firing. Whether the stored representational content activated in that firing has any connection to the events it represents outside the brain is another question.
comment by NancyLebovitz · 2012-03-03T13:05:31.797Z · LW(p) · GW(p)
It is more important to know what is true today, than to have been right yesterday
Found here.
comment by [deleted] · 2012-03-01T16:04:23.224Z · LW(p) · GW(p)
.
comment by Grognor · 2012-03-06T14:39:37.244Z · LW(p) · GW(p)
So when somebody else asks for your help, in the form of charity or taxes, or because they need you to help them move a refrigerator, you can cite all sorts of reasons for not helping ("I think you're lying about needing help" or "I don't care" or "I'm too tied up with my own problems"), but the one thing you can't say is, "Why should you need help? I've never gotten help!" Not unless you're either shamefully oblivious, or a lying asshole.
Replies from: Grognor, Vaniver
↑ comment by Grognor · 2012-03-08T06:05:58.552Z · LW(p) · GW(p)
Why did this quote get down-voted by at least two people? I thought it was much, much better than the other quote I posted this month, which is currently sitting pretty at 32 karma despite not adding anything we didn't already know from the Human's Guide to Words sequence.
Replies from: saturn, wedrifid, Jayson_Virissimo↑ comment by saturn · 2012-03-08T20:05:37.668Z · LW(p) · GW(p)
Although not directly contradictory, the idea expressed in the quote is somewhat at odds with libertarianism, which is popular on LW.
Replies from: None↑ comment by [deleted] · 2012-03-10T08:57:32.164Z · LW(p) · GW(p)
libertarianism, which is popular on LW.
Is this true? I mean, isn't that universally recognized as a mind-killer, just like most other political philosophies?
Are there any demographical studies of LW's composition in personspace?
Replies from: satt, TheOtherDave↑ comment by satt · 2012-03-10T13:46:26.078Z · LW(p) · GW(p)
Is this true? [...] Are there any demographical studies of LW's composition in personspace?
The closest things we have to those are probably the mid-2009 and late 2011 surveys. People could fill in their age, gender, race, profession, a few other things, and...politics!
The politics question had some default categories people could choose: libertarian, liberal, socialist, conservative & Communist. In 2009, 45% ticked the libertarian box, and in 2011, 32% (among the people who gave easy-to-categorize answers). Although those obviously aren't majorities, libertarianism is relatively popular here.
I mean, isn't that universally recognized as a mind killer?, just like most other political philosophies?
Political philosophies are like philosophies in general, I think. However mind-killy they are, a person can't really avoid having one; if they believe they don't have one, they usually have one they just don't know about.
↑ comment by TheOtherDave · 2012-03-10T15:12:51.784Z · LW(p) · GW(p)
Well, it's true and it's false.
It's popular "on" LW in the sense that many of the people here identify as libertarians.
It's not popular "on" LW, in the sense that discussions of libertarianism are mostly unwelcome.
And, yes, the same is true of many other political philosophies.
↑ comment by Jayson_Virissimo · 2012-03-10T09:23:59.645Z · LW(p) · GW(p)
IDK, but I suspect it has to do with including taxes as a way to "ask for help", which is dangerously close to doublespeak. To some ears, this sounds like saying rape is a form of asking for sex.
comment by [deleted] · 2012-03-01T16:00:15.269Z · LW(p) · GW(p)
.
comment by NancyLebovitz · 2012-03-28T15:07:47.426Z · LW(p) · GW(p)
In truth we know that the wind is its blowing. Similarly the stream is the running of water. And so, too, I am what I am doing. I am not an agent but a hive of activity. If you were to lift off the lid, you would find something more like a compost heap than the kind of architectural structure that anatomists and psychologists like to imagine.
---Tim Ingold, “Clearing the Ground"
comment by Grognor · 2012-03-10T12:21:46.326Z · LW(p) · GW(p)
The origin of all science is in the desire to know causes; and the origin of all false science and imposture is in the desire to accept false causes rather than none; or, which is the same thing, in the unwillingness to acknowledge our own ignorance.
-William Hazlitt, attacking phrenology.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-03-10T20:33:46.968Z · LW(p) · GW(p)
This quote is itself an example of the phenomenon it describes since it stems from a desire to be able to separate true from false science without the hard and messy process of looking at the territory.
Also hindsight bias.
Replies from: RobinZ↑ comment by RobinZ · 2012-03-10T20:59:00.595Z · LW(p) · GW(p)
I don't see that in the quote - it seems to be an attempted explanation for the existence of pseudoscience, not a heuristic for identifying such.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-03-11T03:03:21.281Z · LW(p) · GW(p)
The problem is that it's still false. A lot of false science was developed by people honestly trying to find true causes. I also suspect that a good deal of actual science was developed by people who accepted a cause without enough evidence out of a desire to have a cause for everything and got lucky.
comment by scav · 2012-03-05T14:02:46.838Z · LW(p) · GW(p)
Most of world history is a clash of mental illnesses.
-- Evan V Symon, Cracked.com http://www.cracked.com/article_19669_the-5-saddest-attempts-to-take-over-country.html
Not completely serious, but think of it in relation to the sanity waterline...
comment by Voltairina · 2012-03-04T22:51:52.244Z · LW(p) · GW(p)
Courage is what it takes to stand up and speak; courage is also what it takes to sit down and listen.
Winston Churchill
Replies from: Ezekiel↑ comment by Ezekiel · 2012-03-06T14:50:30.509Z · LW(p) · GW(p)
Incidentally, you need a double-newline to break the quote bar.
Replies from: Voltairina↑ comment by Voltairina · 2012-03-06T18:16:01.355Z · LW(p) · GW(p)
Thank you, I've rewritten it now.
comment by David Althaus (wallowinmaya) · 2012-03-02T20:16:07.704Z · LW(p) · GW(p)
Faith: not wanting to know what is true.
Friedrich Nietzsche
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-03-03T07:12:25.866Z · LW(p) · GW(p)
I don't think that is a good description of what people mean by "faith".
For a better idea of the concept of faith start here.
Summary: Theory is to faith as our concept of physical necessitation is to that of social obligation.
Replies from: DSimon, Elethiomel, patternist
↑ comment by Elethiomel · 2012-05-25T08:27:30.304Z · LW(p) · GW(p)
Except that faith has little to nothing to do with social obligations. Faith is believing something without proof or even reason to believe it.
Unless you mean "faith" as in being "faithful" to your spouse, in which case, that's not even the same thing as what Nietzsche is talking about.
Replies from: wedrifid, Eugine_Nier↑ comment by wedrifid · 2012-05-26T02:32:49.694Z · LW(p) · GW(p)
Except that faith has little to nothing to do with social obligations.
Except for, well, being one in most social circumstances and for certain beliefs.
Replies from: Elethiomel↑ comment by Elethiomel · 2012-05-26T06:10:41.219Z · LW(p) · GW(p)
Let me restate: social obligations are not at the core of what faith is. One could believe something without proof if she were alone in the universe. Faith certainly can be a social obligation, and depending upon what it is faith in, could easily necessitate social obligations, but the general idea of "believing in something without evidence" can be done by one person alone, and social obligations are by no means part of that definition.
Replies from: wedrifid↑ comment by Eugine_Nier · 2012-05-26T03:23:24.721Z · LW(p) · GW(p)
Unless you mean "faith" as in being "faithful" to your spouse, in which case, that's not even the same thing as what Nietzsche is talking about.
The problem is that Nietzsche was confused about what religious people mean by "faith", as a result his argument is essentially a straw-man.
Replies from: Elethiomel↑ comment by Elethiomel · 2012-05-26T06:08:41.496Z · LW(p) · GW(p)
What religious people mean by "faith" and what faith actually is do not have to be the same thing.
Also, Nietzsche was definitely not confused about what religious people mean by faith. You're just confused because that quote isn't a statement about what faith is, but rather, a statement about the psychology of the faithful.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-05-27T03:53:59.020Z · LW(p) · GW(p)
As for the psychology of faith, to use your example of being faithful to your spouse: you want your spouse not to cheat on you. Thus this is a game of prisoner's dilemma, or at least a stag hunt; faith amounts to the Timeless Decision Theory solution, which requires the belief that your spouse won't cheat on you if you don't cheat on her. Because there is no direct causal relationship between these two events, it sounds a lot like believing without proof, especially if one doesn't know enough game theory to understand acausal relationships.
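A minimal sketch of this game-theoretic reading (my code, with invented payoff numbers, not anything from the thread): in a stag hunt, mutual fidelity is the best joint outcome, but it is a best response only when each player expects the other to cooperate.

```python
# Toy stag-hunt payoff table (illustrative numbers only).
payoffs = {  # (row_action, col_action): (row_payoff, col_payoff)
    ('faithful', 'faithful'): (3, 3),
    ('faithful', 'cheat'):    (0, 2),
    ('cheat',    'faithful'): (2, 0),
    ('cheat',    'cheat'):    (1, 1),
}

def best_response(opponent_action: str) -> str:
    """Row player's best response to the column player's expected action."""
    return max(('faithful', 'cheat'),
               key=lambda a: payoffs[(a, opponent_action)][0])

assert best_response('faithful') == 'faithful'  # trust sustains fidelity
assert best_response('cheat') == 'cheat'        # distrust undermines it
```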
Replies from: Elethiomel↑ comment by Elethiomel · 2012-05-28T09:39:21.680Z · LW(p) · GW(p)
You seem to be missing the point. "Faith" in terms of religious belief is not the same thing as being "faithful" to your spouse.
You're equivocating. Also, that's not a Prisoner's Dilemma. A Prisoner's Dilemma allows no precommitments (you don't expect to get arrested; neither does your partner) and no communication with your partner once the game starts. It's clear that neither of those requirements holds when considering fidelity to one's partner. Relationships are not Prisoner's Dilemma situations; it takes an extreme stretch of the situation, and a skewed placement of values for BOTH players, for them to resemble one. If both players can gain more utility from being unfaithful, why not implement an open relationship? If the utility from being unfaithful is high enough (higher than the utility of the relationship itself), why continue the relationship?
Loyalty to one's partner differs in many many many ways from religious faith.
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2012-05-28T10:05:50.044Z · LW(p) · GW(p)
You seem to be missing the point. "Faith" in terms of religious belief is not the same thing as being "faithful" to your spouse...You're equivocating.
No, this has been standard usage since at least as far back as the High Middle Ages.
Replies from: Elethiomel, wedrifid↑ comment by Elethiomel · 2012-05-31T07:20:51.671Z · LW(p) · GW(p)
That has to be the worst citation in support of an argument I've ever seen. "Standard usage"...is number 6 on a list of different models of faith in philosophical terms? Right. That's clearly what most people mean when they talk about faith.
Also, trusting someone else is the opposite of fidelity to that person, not the same thing.
Regardless, the definition Nietzsche is using is obviously not referring to a trust-based model.
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2012-05-31T09:55:00.040Z · LW(p) · GW(p)
That has to be the worst citation in support of an argument I've ever seen.
Let me be the first to welcome you, since it appears this is your first day on the Internet.
"Standard usage"...is number 6 on a list of different models of faith in philosophical terms? Right. That's clearly what most people mean when they talk about faith.
I wasn't aware of the context in which your back-and-forth with Eugine_Nier was taking place, since I only started reading at this comment when it was in the recent comments feed. My bad. I assumed you thought he was using "faith" in an idiosyncratic way, rather than in a way that has been part of theology for almost a millennium. After reading a few comments up I can see that you were referring to a particular quote by Nietzsche (one in which he probably did not mean to refer to the concept of faith as trust).
Also, trusting someone else is the opposite of fidelity to that person, not the same thing.
Obviously, "trusting someone" is not the same as "fidelity to that person". I never claimed otherwise. On the other hand, opposite is way too strong a word for this. Moreover, Eugine_Nier's comment never made such an equivalence claim. He said that "faith amounts" to the "belief that your spouse won't cheat on you". This sounds very much like the concept of faith as trust (and not its opposite).
Regardless, the definition Nietzsche is using is obviously not referring to a trust-based model.
We are in full agreement on this point.
↑ comment by wedrifid · 2012-05-28T14:07:11.243Z · LW(p) · GW(p)
No, this has been standard usage since at least as far back as the High Middle Ages.
It is a usage of the same original word that has clearly diverged such that to substitute the intended meaning across contexts is most decidedly equivocation. "Faith" as in a kind of belief is not the same meaning as "faithful" as in not fucking other people. This should be obvious. The origin of the (nearly euphemistic) usage of the term is beside the point.
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2012-05-28T14:17:35.879Z · LW(p) · GW(p)
It is a usage of the same original word that has clearly diverged such that to substitute the intended meaning across contexts is most decidedly equivocation. "Faith" as in a kind of belief is not the same meaning as "faithful" as in not fucking other people. This should be obvious. The origin of the (nearly euphemistic) usage of the term is beside the point.
What evidence, if it existed, would cause you to change your mind?
↑ comment by patternist · 2012-03-06T16:09:16.848Z · LW(p) · GW(p)
Summary: Theory is to faith as our concept of physical necessitation is to that of social obligation.
"Could be" is to "is" as "ought" (or faith) is to "must"? Strikes me as a very nuanced term with diverse associations across brains, interesting analogy.
comment by komponisto · 2012-03-02T02:17:24.225Z · LW(p) · GW(p)
By studying the masters, not their pupils.
-- Niels Henrik Abel, on how he developed his mathematical ability.
comment by GLaDOS · 2012-03-30T08:56:02.449Z · LW(p) · GW(p)
One has to belong to the intelligentsia to believe things like that: no ordinary man could be such a fool.
--George Orwell, here
Replies from: Hul-Gil↑ comment by Hul-Gil · 2012-03-30T18:01:36.980Z · LW(p) · GW(p)
Since I have just read that "the intelligentsia" is usually now used to refer to artists etc. and doesn't often include scientists, this isn't as bad as I first thought; but still, it seems pretty silly to me - trying to appear deep by turning our expectations on their head. A common trick, and sometimes it can be used to make a good point... but what's the point being made here? Ordinary people are more rational than those engaged in intellectual pursuits? I doubt that, though rationality is in short supply in either category; but in any case, we know the "ordinary man" is extremely foolish in his beliefs.
Folk wisdom and common sense are a favored refuge of those who like to mock those foolish, Godless int'lectual types, and that's what this reminds me of; you know, the entirely too-common trope of the supposedly intelligent scientist or other educated person being shown up by the homespun wisdom and plain sense of Joe Ordinary. (Not to accuse Orwell of being anti-intellectual in general - I just don't like this particular quote.)
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-03-31T02:59:58.296Z · LW(p) · GW(p)
but still, it seems pretty silly to me - trying to appear deep by turning our expectations on their head.
This quote isn't just about seeming deep, it refers to a frequently observed phenomenon. I think two main reasons for it are that intellectuals are better at rationalizing beliefs they arrived at for non-smart reasons (there is even a theory that some intellectuals signal their intelligence by rationalizing absurd beliefs) and the fact that they're frequently in ivory towers where day to day reality is less available.
Not to accuse Orwell of being anti-intellectual in general
Depends on which type of anti-intellectualism you're referring to.
Replies from: RobinZ↑ comment by RobinZ · 2012-03-31T03:12:37.307Z · LW(p) · GW(p)
I remember Tetlock's Expert Political Judgment suggested a different mechanism for intelligence to be self-defeating: clever arguing. In a forecaster's field of expertise, they have more material with which to justify unreasonable positions and refute reasonable ones, and therefore they are more able to resist the force of reality.
comment by CasioTheSane · 2012-03-09T07:50:27.630Z · LW(p) · GW(p)
I'd take the awe of understanding over the awe of ignorance any day.
-Douglas Adams
comment by gwern · 2012-03-03T07:53:42.837Z · LW(p) · GW(p)
"A full tour through the modern critics of the competitive organization of society would be a truly exhausting trip. It would include the drama, the novel, the churches, the academies, the lesser intellectual establishments, the socialists and communists and Fabians and a swarm of other dissenters. One is reminded of Schumpeter’s remark that the Japanese earthquake of 1924 had a remarkable aspect: it was not blamed on capitalism. Suddenly one realizes how impoverished our society would be in its indignation, as well as in its food, without capitalism."
--George F. Stigler, "Economics or Ethics?"
comment by NancyLebovitz · 2012-03-01T12:34:26.103Z · LW(p) · GW(p)
I wouldn’t say that we defy the limit, I’d say that we reexamine it, by very carefully considering the set of cases that actually matter.
--- pseudonym
comment by Thomas · 2012-03-01T08:36:37.916Z · LW(p) · GW(p)
How extremely stupid not to have thought of that!
Thomas Henry Huxley - about Darwin's theory of evolution
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-03-01T10:47:51.028Z · LW(p) · GW(p)
Meh. That's just hindsight bias.
All truths are easy to understand when they are revealed; what's hard is to find them out.
Galileo Galilei (translated by me)
Replies from: Eliezer_Yudkowsky, Giles, ciphergoth, Thomas↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-03T08:10:32.642Z · LW(p) · GW(p)
With the great historical exception of quantum mechanics.
Replies from: CasioTheSane, Thomas, army1987, Will_Newsome↑ comment by CasioTheSane · 2012-03-08T03:52:45.225Z · LW(p) · GW(p)
I suspect this is because we're still missing major parts of quantum mechanics.
Richard Feynman's famous quote is accurate. Before I studied physics in college I was pretty sure that I still had a lot to learn about quantum mechanics. After studying it for several years, I now have a high level of confidence that I know almost nothing about quantum mechanics.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-08T05:36:28.767Z · LW(p) · GW(p)
Try reading this.
↑ comment by Thomas · 2012-03-03T08:48:22.361Z · LW(p) · GW(p)
In fact, most people don't understand relativity. Most still reject evolution. It wasn't easy to understand the Copernican system in Galileo's time.
When a new major breakthrough is made, it is easy to understand only for a handful, and it seems obvious only to a few. Galileo was wrong: a new truth may be easier to understand once revealed, but not simply "easy to understand once revealed".
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-03-03T10:27:24.818Z · LW(p) · GW(p)
It wasn't easy to understand the Copernican system in Galileo's time.
I suppose people didn't understand it because they didn't want to, not because they couldn't manage to. (Same with evolution -- what the OP was about. I might agree about relativity, though I guess for some people at least the absolute denial macro does play some part.)
Galileo was wrong.
More like stuff that was true back then is no longer true now.
Replies from: Thomas↑ comment by Thomas · 2012-03-03T10:55:37.123Z · LW(p) · GW(p)
I suppose people didn't understand it because they didn't want to
I suppose not. Why? People either have an inborn concept of an absolute up-down direction, or they develop it early in life. Updating to a round (let alone moving and rotating) Earth is not that easy and trivial for the naive mind of a child or of a Medieval man.
A new truth is usually hard for everybody to understand. Had it not been so, science would progress faster.
Replies from: army1987, army1987↑ comment by A1987dM (army1987) · 2012-03-03T11:25:15.517Z · LW(p) · GW(p)
I don't see how that contradicts my claim that it's not that people couldn't understand the meaning of the statement “the Earth revolves around the Sun”, but rather they disagreed with it because it was at odds with what they thought of the world. iħ∂|Ψ⟩/∂t = Ĥ|Ψ⟩, now that's a statement most people won't even understand enough to tell whether they think it's true or false.
↑ comment by A1987dM (army1987) · 2012-03-03T10:33:44.532Z · LW(p) · GW(p)
Historical? I know you count many worlds as “understanding”, but I wouldn't until this puzzle is figured out. (Or maybe it's that I like Feynman's (in)famous quote so much I want to keep on using it, even if this means using a narrower meaning for understand.)
Replies from: shminux↑ comment by Shmi (shminux) · 2012-03-06T00:57:19.818Z · LW(p) · GW(p)
I certainly hope that EY means that the problem of the origins of the Born rule is still open, not that the MWI has somehow solved it.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-03-11T01:25:50.742Z · LW(p) · GW(p)
IIRC he said something to the effect that it is no longer true that nobody understands QM since we have the MWI; my point is that I wouldn't count MWI as ‘understanding’ if the very rule connecting it to (probabilities of) experimental results is still not understood.
↑ comment by Will_Newsome · 2012-03-03T08:42:27.314Z · LW(p) · GW(p)
Not sure which part of QM you're referring to, but arguably QM hasn't really been "found out" yet, so we shouldn't be surprised that it's not easy to understand. I mean seriously, what the hell are complex numbers doing in the Dirac equation?
Replies from: Luke_A_Somers, Zack_M_Davis↑ comment by Zack_M_Davis · 2012-03-07T18:13:24.125Z · LW(p) · GW(p)
what the hell are complex numbers doing in the Dirac equation?
As I've pointed out to you before, if you have a problem with physical applications of complex numbers, you should be equally offended by physical applications of matrices, because matrices of the form [[a,-b],[b,a]] are isomorphic to complex numbers. In fact, your problem isn't just with quantum mechanics; if you can't stand complex numbers, you should also have a problem with (for just one example) simple harmonic motion.
In detail: we model a mass attached to a spring with the equation F=-kx: the force F on the mass is proportional to a constant -k times the displacement from the equilibrium position x. But because force is mass times acceleration, and acceleration is the second time derivative of position, this is actually the differential equation x''(t) + (k/m)x(t) = 0, which has the solution x(t) = ae^(i*sqrt(k/m)t) + be^(-i*sqrt(k/m)t) where a and b are arbitrary constants.
It's true that people tend to write this as c*cos(sqrt(k/m)t) + d*sin(sqrt(k/m)t), but the fact that we use a notation that makes the complex numbers less visible doesn't change the underlying math. Trig functions are sums of complex exponentials.
Complex numbers are perfectly well-behaved, non-mysterious mathematical entities (consider also MagnetoHydroDynamics's point about algebraic closure); why shouldn't they appear in the Dirac equation?
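Both points are easy to check concretely; here is a minimal sketch (my code, using NumPy and SymPy; the variable names are mine):

```python
import numpy as np
import sympy as sp

# Check 1: matrices [[a, -b], [b, a]] multiply exactly like complex numbers a+bi.
def as_matrix(z: complex) -> np.ndarray:
    """Represent z = a + bi as the real 2x2 matrix [[a, -b], [b, a]]."""
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

z, w = 2 + 3j, -1 + 0.5j
assert np.allclose(as_matrix(z) @ as_matrix(w), as_matrix(z * w))

# Check 2: x(t) = e^(i*sqrt(k/m)t) really does solve x''(t) + (k/m)x(t) = 0.
t, k, m = sp.symbols('t k m', positive=True)
x = sp.exp(sp.I * sp.sqrt(k / m) * t)
assert sp.simplify(sp.diff(x, t, 2) + (k / m) * x) == 0
```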
↑ comment by Paul Crowley (ciphergoth) · 2012-03-01T15:56:06.893Z · LW(p) · GW(p)
So that I can google for it - what's the original text? Thanks!
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-03-01T16:36:56.653Z · LW(p) · GW(p)
The version I've read is "Tutte le verità sono facili da capire quando sono rivelate, il difficile è scoprirle!" But that sounds like suspiciously modern Italian to me, so I wouldn't be surprised to find out that it's itself a paraphrase.
ETA: Apparently it was quoted in Criminal Minds, season 6, episode 11, and I suspect the Italian dubbing backtranslated the English version of the show rather than looking for the original wording by Galileo. (Which would make my version above a third-level translation.)
ETA2: In the original version of Criminal Minds, it's "All truths are easy to understand once they are discovered; the point is to discover them" according to Wikiquote. (How the hell did point become difficile? And why were the two instances of discover translated with different verbs? That's why I always watch shows and films in the original language!)
ETA3: And Wikiquote attributes that as “As quoted in Angels in the workplace : stories and inspirations for creating a new world of work (1999) by Melissa Giovagnoli”.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2012-03-10T20:08:07.490Z · LW(p) · GW(p)
Edited Wikiquote - thanks!
↑ comment by Thomas · 2012-03-01T12:04:50.967Z · LW(p) · GW(p)
Generally, yes. But in this particular case we can trust that the man later known as Darwin's bulldog really felt that way, and that this was a justified statement. He obviously understood the matter well.
All those English animal breeders had good insight, but for them it was more or less a wild generalization. Not so wild for Huxley.
comment by CasioTheSane · 2012-03-09T07:50:03.321Z · LW(p) · GW(p)
If it looks like a duck, and quacks like a duck, we have at least to consider the possibility that we have a small aquatic bird of the family Anatidae on our hands.
-Douglas Adams
comment by Voltairina · 2012-03-06T19:46:43.455Z · LW(p) · GW(p)
“It's the stupid questions that have some of the most surprising and interesting answers. Most people never think to ask the stupid questions.”
― Cory Doctorow, For The Win
I interpret this to mean that often times questions are overlooked because the possibility of them being true seems absurd. Similar to the Sherlock Holmes saying, “When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.”
Replies from: Nominull, wedrifid↑ comment by Nominull · 2012-03-07T15:43:24.156Z · LW(p) · GW(p)
When you've eliminated the impossible, if whatever's left is sufficiently improbable, you probably haven't considered a wide enough space of candidate possibilities.
Replies from: Voltairina↑ comment by Voltairina · 2012-03-07T19:21:19.751Z · LW(p) · GW(p)
Seems fair. The Holmes saying seems a bit funny to me now that I think about it, because the probability of an unlikely event becomes larger once you've shown that reality is constrained away from the alternatives. I guess that's what he's trying to convey in his own way: by the definition of conditional probability, the likelihood of the improbable event increases as constraints rule out the other possibilities. You're going from P(A) to P(A|B) to P(A|B&C), etc. You shouldn't simultaneously judge an event improbable and see that no alternative is true, unless someone else is reporting the probability given the constraints, which means they are considering more candidate possibilities (or their estimate was incorrect, or something I haven't thought of).
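A toy numeric version of that updating process (my numbers, purely illustrative): an initially improbable hypothesis becomes certain once evidence eliminates every alternative.

```python
# Holmes, numerically: A starts out very improbable; conditioning on evidence
# that rules out B and then C drives P(A) to 1. Priors are made up.
priors = {'A': 0.01, 'B': 0.54, 'C': 0.45}

def eliminate(dist, ruled_out):
    """Condition on evidence that rules the given hypotheses out entirely."""
    kept = {h: p for h, p in dist.items() if h not in ruled_out}
    total = sum(kept.values())
    return {h: p / total for h, p in kept.items()}

after_b = eliminate(priors, {'B'})    # P(A | B impossible) ~ 0.022
after_bc = eliminate(after_b, {'C'})  # P(A | B and C impossible) = 1.0
print(after_b['A'], after_bc['A'])
```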
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-03-09T07:00:35.877Z · LW(p) · GW(p)
Maybe he meant how a priori improbable it is?
Replies from: Voltairina↑ comment by Voltairina · 2012-03-09T07:05:56.113Z · LW(p) · GW(p)
That sounds right.
↑ comment by wedrifid · 2012-03-09T06:50:51.761Z · LW(p) · GW(p)
I interpret this to mean that often times questions are overlooked because the possibility of them being true seems absurd.
I interpret it to mean that Cory Doctorow doesn't fully consider the implications of hindsight bias when it comes to predicting the merits of asking questions from a given class.
Usually asking stupid questions really is just stupid.
Replies from: Voltairina, Eugine_Nier↑ comment by Voltairina · 2012-03-09T07:01:12.940Z · LW(p) · GW(p)
Hrm. Okay, I see your point, I think. There's some benefit in devoting a small portion of your efforts to pursuing outlying hypotheses, probably proportional to their chance of being true, depending on how divisible the resources are. If by "stupid" Doctorow means "basic", he might be talking about overlooked issues everyone assumed had already been addressed. But probabilistically that's the same thing - it's unlikely after a certain amount of effort that basic issues haven't been addressed, so it's an outlying hypothesis, and should again get approximately as much attention as its likelihood of being true. And maybe let the unlikely things bubble up in importance if the previously-thought-more-likely things shrink due to apparently conflicting evidence... A glaring example to me is the Abrahamic god's non-explanatory abilities going unquestioned for as long as they did: treating God as a box to throw unexplained things into, and then hiding God behind "mysteriousness", raises the question of why there's a god clouded in mysteriousness hanging around.
↑ comment by Eugine_Nier · 2012-03-10T00:51:15.415Z · LW(p) · GW(p)
Usually asking stupid questions really is just stupid.
But the expected return on asking a stupid question is still positive.
Replies from: wedrifid, Desrtopa↑ comment by wedrifid · 2012-03-10T02:09:39.180Z · LW(p) · GW(p)
But the expected return on asking a stupid question is still positive.
No, not with even the slightest semblance of opportunity cost being taken into account.
Replies from: None↑ comment by [deleted] · 2012-03-10T08:52:36.846Z · LW(p) · GW(p)
I'd say there are probably cases where people have gotten hurt by not asking "stupid" questions.
Also, I think we need to dissolve what exactly a stupid question is?
Replies from: wedrifid↑ comment by wedrifid · 2012-03-10T09:53:06.288Z · LW(p) · GW(p)
I'd say there are probably cases where people have gotten hurt by not asking "stupid" questions.
Almost certainly. I am also fairly confident that there is someone who has been hurt because he did look before crossing the road.
Replies from: None↑ comment by [deleted] · 2012-03-10T11:21:49.631Z · LW(p) · GW(p)
But does the negative utility from the situation "find out, get hurt from it" outweigh that from "don't find out, get hurt from it"?
Isn't the heuristic More Knowledge => Better Decisions quite powerful?
Replies from: wedrifid↑ comment by wedrifid · 2012-03-10T19:58:37.643Z · LW(p) · GW(p)
Get to the stupid questions after all the sensible questions have been exhausted if, for some reason, the expected utility of the next least stupid question is still positive.
Replies from: None↑ comment by [deleted] · 2012-03-11T23:08:13.446Z · LW(p) · GW(p)
I think we need to find out what we mean by stupid and sensible questions.
Of course one should, in any given situation, perform the experiments (ask the questions) that give the highest expected information yield (largest number of bits), i.e. ask if it is a vertebrate before you ask if it is a dog. What I think we disagree upon is the nature of a stupid question.
And now, it seems I cannot come up with a good definition of a stupid question, as anything I previously would have referred to as a "stupid question" can equally be reduced to humility.
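For what it's worth, the "largest number of bits" criterion can be made concrete (a sketch with made-up frequencies: 16 equally likely animals, of which 8 are vertebrates and 1 is a dog):

```python
from math import log2

def expected_bits(p_yes: float) -> float:
    """Expected information (entropy, in bits) of a yes/no question."""
    if p_yes in (0.0, 1.0):
        return 0.0
    return -(p_yes * log2(p_yes) + (1 - p_yes) * log2(1 - p_yes))

print(expected_bits(8 / 16))  # 1.000 bit    -- "is it a vertebrate?"
print(expected_bits(1 / 16))  # ~0.337 bits  -- "is it a dog?"
```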
↑ comment by Desrtopa · 2012-03-10T16:32:43.975Z · LW(p) · GW(p)
Asking stupid questions costs status.
Replies from: wedrifid, thomblake↑ comment by wedrifid · 2012-03-10T20:03:43.042Z · LW(p) · GW(p)
Asking stupid questions costs status.
From a slightly different perspective we could say that asking 'silly' questions (even good silly questions) costs status while asking stupid questions can potentially gain status in those cases where the people who hear you ask are themselves stupid (or otherwise incentivised to appreciate a given stupid gesture).
↑ comment by thomblake · 2012-03-10T17:50:09.222Z · LW(p) · GW(p)
And this sort of thing is why some of us think all this 'status' talk is harmful.
Replies from: Desrtopa↑ comment by Desrtopa · 2012-03-10T17:57:28.245Z · LW(p) · GW(p)
It doesn't go away if you stop talking about it.
Personally, I think Robin Hanson tends to treat status as a hammer that turns all issues into nails; it's certainly possible to overuse a perspective for analyzing social interaction. But that doesn't mean that there aren't cases where you can only get a meaningful picture of social actions by taking it into consideration.
Replies from: Ezekiel, thomblake↑ comment by thomblake · 2012-03-11T04:52:38.039Z · LW(p) · GW(p)
It doesn't go away if you stop talking about it.
No, but worrying about status can keep you from getting answers to your 'stupid' questions.
This is partly why nerds have largely internalized the "there are no stupid questions" rule. See Obvious Answers to Simple Questions by isaacs of npm fame.
comment by djcb · 2012-03-04T09:56:58.448Z · LW(p) · GW(p)
There is a spookier possibility. Suppose it is easy to send messages to the past, but that forward causality also holds (i.e. past events determine the future). In one way of reasoning about it, a message sent to the past will "alter" the entire history following its receipt, including the event that sent it, and thus the message itself. Thus altered, the message will change the past in a different way, and so on, until some "equilibrium" is reached--the simplest being the situation where no message at all is sent. Time travel may thus act to erase itself (an idea Larry Niven fans will recognize as "Niven's Law").
-- Hans Moravec, "Time Travel and Computing"
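The "equilibrium" idea can be read as a fixed-point computation; here is a toy rendering (my code; the decaying update rule is invented, not Moravec's):

```python
# Toy self-consistency loop: the message received determines the message sent;
# history is stable only at a fixed point of the update rule. With this
# (invented) decaying rule, the loop settles on "no message at all", i.e. 0.
def update(received: int) -> int:
    return received // 2

msg, seen = 1000, set()
while msg not in seen:  # iterate until the timeline stops changing
    seen.add(msg)
    msg = update(msg)
print(msg)  # 0 -- the self-consistent "no message" equilibrium
```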
Replies from: Ezekiel, Eugine_Nier↑ comment by Ezekiel · 2012-03-06T14:53:00.192Z · LW(p) · GW(p)
Extremely cool in an armchair-physicist sort of way, but what's the rationality?
Replies from: djcb, gwern↑ comment by djcb · 2012-03-06T19:39:42.412Z · LW(p) · GW(p)
Fair point -- I actually wasn't 100% convinced myself that it fits here... My reason for posting it anyway was that (a) it somehow reminded me of the Omega/two-boxes problem (i.e., the paradoxical way in which present and past seem to influence each other), (b) Hans Moravec's work touches on so many of the AI/transhumanist themes common on LW, and (c) I found it such a clever observation that I thought people here would appreciate it.
Not sure if that's enough reason, but that's how it went.
↑ comment by gwern · 2012-03-06T16:17:27.283Z · LW(p) · GW(p)
I guess 'it all adds up to normality', but that's a stretch.
Replies from: Voltairina↑ comment by Voltairina · 2012-03-06T19:30:13.475Z · LW(p) · GW(p)
There might be other equilibria in which the past and future adjust to form a new symmetry. So you kill your grandfather, say, but you're no longer related to him. Oh yeah... rationality. Haven't a clue :/ It's a nice quote.
↑ comment by Eugine_Nier · 2012-03-07T05:02:08.104Z · LW(p) · GW(p)
Not quite, since you need time travel to establish the final timeline.
comment by David Althaus (wallowinmaya) · 2012-03-02T20:25:53.018Z · LW(p) · GW(p)
All sciences are now under the obligation to prepare the ground for the future task of the philosopher, which is to solve the problem of value, to determine the true hierarchy of values.
Friedrich Nietzsche, foreseeing the CEV-problem? (Just kidding, of course)
comment by benit0 · 2012-03-18T03:09:23.020Z · LW(p) · GW(p)
If a sufficient number of people who wanted to stop war really did gather together, they would first of all begin by making war upon those who disagreed with them. And it is still more certain that they would make war on people who also want to stop wars but in another way. -G.I. Gurdjieff
Replies from: wedrifid↑ comment by wedrifid · 2012-03-18T03:13:36.959Z · LW(p) · GW(p)
If a sufficient number of people who wanted to stop war really did gather together, they would first of all begin by making war upon those who disagreed with them.
Great quote, but I think I would just go ahead and make trade embargoes on anyone who started a war... and anyone who didn't also embargo anyone who, etc.
Not saying it would work (getting enough people to agree just wouldn't happen) but not everyone who wants to stop war is stupid.
Replies from: benit0↑ comment by benit0 · 2012-03-18T04:25:09.812Z · LW(p) · GW(p)
I don't think the idea is that anyone who wants to stop war is stupid... it's that anyone who thinks war is necessary clearly does not see that a diversity of viewpoints exists, and that others' viewpoints are just as valid as theirs (as hard as that may be to understand) and deserve respect.
In most cases where unnecessary violence has occurred, the suppression of individual freedom and the loss of or harm to human life has been justified as an effort to end the conflict between one viewpoint and its antithesis.
The blind spot of the oppressor will always be that their "oppressing" of others is justified by their subjective view of the "greater" good, and not by the good of all people as all would objectively see it.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2012-03-19T10:31:49.369Z · LW(p) · GW(p)
I dont think the idea is that anyone who wants to stop war is stupid ... its that anyone who thinks war is necessary clearly does not see that the diversity of viewpoints exists and that others viewpoints are just as valid as theirs (as hard as it may be to understand) and deserves respect.
I do not think that is what Gurdjieff meant. The idea that all viewpoints are valid could hardly be more alien to his system. From my reading of Gurdjieff, I take him to be speaking here of the mechanical nature of the ordinary man, who imagines himself to be thinking and acting, an idea contradicted as soon as one observes him in his life.
comment by Shmi (shminux) · 2012-03-03T08:00:39.570Z · LW(p) · GW(p)
Here's my advice: If you meet an economist, ask him to adjust your spine so you no longer get the common cold. Then ask him for some specific investment tips and do exactly what he recommends. Let me know which one works out best.
Replies from: BillyOblivion
↑ comment by BillyOblivion · 2012-03-06T08:50:15.846Z · LW(p) · GW(p)
Do the same with a Chiropractor and let me know if you get different results.
Replies from: shminux↑ comment by Shmi (shminux) · 2012-03-06T16:37:15.228Z · LW(p) · GW(p)
If you read the link, that's exactly the author's point
Replies from: BillyOblivion↑ comment by BillyOblivion · 2012-03-06T23:44:11.572Z · LW(p) · GW(p)
I was reading www.sciencebasedmedicine.org at the same time and my natural smart ass went for a walk. There's probably a creme for that somewhere.
comment by gRR · 2012-03-01T16:21:08.961Z · LW(p) · GW(p)
The winner is the one who makes the next-to-last mistake.
Ksawery Tartakower
Replies from: steven0461, Jonathan_Graehl↑ comment by steven0461 · 2012-03-02T21:05:19.173Z · LW(p) · GW(p)
Suppose White gives away a pawn, and then on the next move White accidentally lets Black put him in checkmate. White made the next-to-last mistake, but lost, so the saying must be false in a mundane sense. Is there an esoteric sense in which the saying is true?
Replies from: None, Will_Newsome, gRR, None↑ comment by [deleted] · 2012-03-03T15:01:58.698Z · LW(p) · GW(p)
The winner is the one who makes the next-to-last mistake.
I read this as implying that the loser is the one who makes the last mistake — the mistake that allows his opponent to win.
But yeah, I think the quote is kinda sloppy — it assumes that the opponents take turns in making mistakes.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-03-03T15:39:51.247Z · LW(p) · GW(p)
the opponents take turns in making mistakes
This is true if you only count as mistakes moves which turn a winning position into a losing position, as gRR said elsethread. (I think I picked up this meaning from Chessmaster 10's automatic analyses, and was implicitly assuming it when reading the Tartakower quote.)
↑ comment by Will_Newsome · 2012-03-03T01:58:36.713Z · LW(p) · GW(p)
On a purely empirical level, most amateur games, once they reach critical positions, are blunderfests punctuated by a few objectively strong moves that decide the game; many complex positions near the end of games are similar blunderfests even among masters. If you assume that the majority of moves are blunders, then Tartakower's point is generally true. But I don't think that's what he meant.
↑ comment by gRR · 2012-03-03T01:56:01.750Z · LW(p) · GW(p)
Hmm, I suppose, a "mistake" in a technical sense is defined in terms of mini-max position evaluation, assuming infinite computing power:
eval(position) = -1 (loss), 0 (tie), or +1(win)
IsFatalMistake(move) = (eval(position before the move) > eval(position after the move) AND eval(position after the move) == -1)
With this definition, either giving away the pawn or missing the checkmate (or both) wasn't a fatal mistake, since the game was already lost before the move :)
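Rendered as a runnable sketch (my code; `evaluate` stands in for the infinitely powerful minimax oracle, which of course can't actually be computed for chess):

```python
def is_fatal_mistake(evaluate, before, after, player) -> bool:
    """gRR's definition: a move is a fatal mistake iff it turns a position
    that was not yet lost into a lost one for the player who made it."""
    return (evaluate(before, player) > evaluate(after, player)
            and evaluate(after, player) == -1)

# Stub oracle for steven0461's scenario: the position was already lost (-1)
# both before and after White's pawn blunder, so the blunder is not "fatal".
oracle = lambda position, player: -1
assert not is_fatal_mistake(oracle, 'before pawn move', 'after pawn move', 'White')
```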
↑ comment by Jonathan_Graehl · 2012-03-01T19:40:34.131Z · LW(p) · GW(p)
I like this even though it violates the correct standard of "mistake": was the choice expected-optimal, before the roll of the die?
I like that it suggests continuing to focus on the rest of the game rather than beating yourself up over a past mistake.
Replies from: bentarm, army1987↑ comment by bentarm · 2012-03-02T13:43:23.046Z · LW(p) · GW(p)
I like this even though it violates the correct standard of "mistake": was the choice expected-optimal, before the roll of the die?
Tartakower was a chess player.
Replies from: Jonathan_Graehl↑ comment by Jonathan_Graehl · 2012-03-02T18:44:56.424Z · LW(p) · GW(p)
Somehow I'd imagined chess without really knowing.
The roll of the die is still in effect: unanticipated consequences of only-boundedly-optimal moves by each player can't make the original move more or less of a true mistake.
↑ comment by A1987dM (army1987) · 2012-03-02T20:45:28.491Z · LW(p) · GW(p)
I like that it suggests continuing to focus on the rest of the game rather than beating yourself up over a past mistake.
Tartakower also said "No one ever won a game by resigning" indeed.
comment by faul_sname · 2012-03-23T22:44:11.345Z · LW(p) · GW(p)
"The greatest lesson in life is to know that even fools are sometimes right."
-Winston Churchill
comment by [deleted] · 2012-03-20T19:40:05.129Z · LW(p) · GW(p)
God was a dream of good government.
-Morpheus, Deus Ex
Yes, I know, generalization from fictional evidence and the dangers thereof, etc. . . I think it a genuine insight, though. Just remember that humans are (almost) never motivated by just one thing.
Replies from: wedrifid↑ comment by wedrifid · 2012-03-20T21:54:42.234Z · LW(p) · GW(p)
Explain for me?
Replies from: None↑ comment by [deleted] · 2012-03-20T22:25:35.157Z · LW(p) · GW(p)
Certainly. The idea is that God was invented not just to explain the world (the standard answer to that question) but also as a sort of model of how a particular group of people wanted to be governed. One of the theses of the game is that governments constitute a system for (attempting to) compensate for the inability of people to rationally govern themselves, and that God is the ultimate realization of that attempt. A perfect government with a perfect understanding of human nature and access to everyone's opinions and desires (but without any actual humans involved). Over time, of course, views of what 'God' should be like shift with the ambient culture.
I agree, with the caveat that humans usually (and probably in this case) do things for multiple complicated reasons rather than just one. Also the caveat that Deus Ex is a video game.
Replies from: Nornagest↑ comment by Nornagest · 2012-03-20T23:05:16.921Z · LW(p) · GW(p)
Interesting theory, and perhaps one that's got legs, but there's some self-reinforcement going on in the religious sphere that keeps it from being unicausal -- if we've got a religion whose vision of God (or of a god of rulership like Odin or Jupiter, or of a divine hierarchy) is initially a simple reflection of how its members want to be governed, I'd nonetheless expect that to drift over time to variants which are more memorable or more flattering to adherents or more conducive to ingroup cohesion, not just to those which reflect changing mores of rulership. Then group identity effects will push those changes into adherents' models of proper rulership, and a nice little feedback loop takes shape.
This probably helps explain some of the more blatantly maladaptive aspects of religious law we know about, although I imagine costly signaling plays an important role too.
Replies from: Hul-Gil
comment by NancyLebovitz · 2012-03-09T13:28:58.437Z · LW(p) · GW(p)
No man demands what he desires; each man demands what he fancies he can get.
Chesterton, found here
comment by gwern · 2012-03-01T17:55:59.875Z · LW(p) · GW(p)
"In practice replacing digital computers with an alternative computing paradigm is a risky proposition. Alternative computing architectures, such as parallel digital computers have not tended to be commercially viable, because Moore's Law has consistently enabled conventional von Neumann architectures to render alternatives unnecessary. Besides Moore's Law, digital computing also benefits from mature tools and expertise for optimizing performance at all levels of the system: process technology, fundamental circuits, layout and algorithms. Many engineers are simultaneously working to improve every aspect of digital technology, while alternative technologies like analog computing do not have the same kind of industry juggernaut pushing them forward."
--Benjamin Vigoda, "Analog Logic: Continuous-Time Analog Circuits for Statistical Signal Processing" (2003 PhD thesis)
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2012-03-01T20:26:52.175Z · LW(p) · GW(p)
Alternative computing architectures, such as parallel digital computers have not tended to be commercially viable, because Moore's Law has consistently enabled conventional von Neumann architectures to render alternatives unnecessary.
And the very next year, Intel abandoned its plans to make 4 GHz processors, and we've been stuck at around 3 GHz ever since.
Since when, parallel computing has indeed had the industry juggernaut behind it.
Replies from: gwern↑ comment by gwern · 2012-03-01T20:57:16.910Z · LW(p) · GW(p)
Yep, and that's why we all have dual-core or more now rather than long ago. Parallel computers of various architectures have been around since at least the '50s (mainframes had secondary processors for IO operations, IIRC), but were confined to niches until the frequency wall was hit and the juggernaut had to do something else with the transistors Moore's law was producing.
(I also read this quote as an indictment of the Lisp machine and other language-optimized processor architectures, and more generally, as a Hansonesque warning against 'not invented here' thinking; almost all innovation and good ideas are 'not invented here' and those who forget that will be roadkill under the juggernaut.)
comment by antigonus · 2012-03-05T07:33:53.858Z · LW(p) · GW(p)
I tell you that as long as I can conceive something better than myself I cannot be easy unless I am striving to bring it into existence or clearing the way for it.
-- G.B. Shaw, "Man and Superman"
Shaw evinces a really weird, teleological view of evolution in that play, but in doing so expresses some remarkable and remarkably early (1903) transhumanist sentiments.
Replies from: MarkusRamikin↑ comment by MarkusRamikin · 2012-03-05T08:09:44.104Z · LW(p) · GW(p)
I love that quote, but if it carries a rationality lesson, I fail to see it. Seems more like an appeal to the tastes of the audience here.
Replies from: antigonus, DSimon↑ comment by DSimon · 2012-03-05T10:27:39.594Z · LW(p) · GW(p)
I have to disagree; the lesson in the quote is "Win as hard as you can", which is very important if not very complicated.
Replies from: MarkusRamikin↑ comment by MarkusRamikin · 2012-03-05T11:10:48.128Z · LW(p) · GW(p)
I don't see the connection. It's not obvious that bringing a being superior to myself into existence is maximum win for me. Not everyone, like Shaw's Don Juan, values the Superman.
Replies from: DSimoncomment by Voltairina · 2012-03-04T22:35:25.522Z · LW(p) · GW(p)
“I don’t know what you mean by ‘glory,’ ” Alice said.
Humpty Dumpty smiled contemptuously. “Of course you don’t—till I tell you. I meant ‘there’s a nice knock-down argument for you!’ ”
“But ‘glory’ doesn’t mean ‘a nice knock-down argument’,” Alice objected.
“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”
“The question is,” said Alice, “whether you can make words mean so many different things.”
“The question is,” said Humpty Dumpty, “which is to be master -- that’s all.”
-Charles Dodgson (Lewis Carroll), Through the Looking Glass
Replies from: TimS↑ comment by TimS · 2012-03-04T22:51:35.964Z · LW(p) · GW(p)
Isn't Humpty Dumpty wrong, if the goal is intelligible conversation?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-03-04T23:29:06.653Z · LW(p) · GW(p)
Absolutely. But if the goal is to establish dominance, as Humpty Dumpty appears to suggest, his technique often works.
Replies from: Voltairina↑ comment by Voltairina · 2012-03-06T19:24:35.206Z · LW(p) · GW(p)
At first when I posted it, I think I was thinking of it as kind of endorsing a pragmatic approach to language usage. I mean, it hurts communication to change the meanings of words without telling anyone, but occasionally it might be useful to update meanings when old ones are no longer useful. It used to be that a "computer" was a professional employed to do calculations; then it became a device to do calculations with; now it's a device to do all sorts of things with.
Replies from: Voltairina↑ comment by Voltairina · 2012-03-06T19:27:03.108Z · LW(p) · GW(p)
But I feel like that's kind of a dodge - you're absolutely right when you say changing the meanings arbitrarily (or possibly to achieve a weird sense of anthropomorphic dominance over it) harms communication, and should be avoided, unless the value of updating the sense of the word outweighs this.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-03-06T23:35:50.265Z · LW(p) · GW(p)
It's also a useful way to establish a nonweird sense of dominance over my conversational partner.
comment by Voltairina · 2012-03-26T05:12:03.600Z · LW(p) · GW(p)
"Let us have faith that right makes might, and in that faith, let us, to the end, dare to do our duty as we understand it" - Abraham Lincoln's words in his February 26, 1860, Cooper Union Address
Replies from: sketerpot↑ comment by sketerpot · 2012-03-26T05:26:07.114Z · LW(p) · GW(p)
If right makes might, is the might you see right? Since blight and spite can also make might, is it safe to sight might and think it right?
Now, an application for Bayes' Theorem that rhymes!! Sweet Jesus!
Replies from: Voltairina↑ comment by Voltairina · 2012-03-26T05:39:47.541Z · LW(p) · GW(p)
I love it! How about in response: Since blight and spite can make might, it's just not polite by citing might to assume that there's right, the probabilities fight between spite, blight and right so might given blight and might given spite must be subtracted from causes for might if the order's not right!
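Unpacked, the verse is gesturing at Bayes' theorem; a minimal sketch, treating "right", "blight", and "spite" as the competing causes of "might":

\[
P(\mathrm{right} \mid \mathrm{might})
= \frac{P(\mathrm{might} \mid \mathrm{right})\,P(\mathrm{right})}
{P(\mathrm{might} \mid \mathrm{right})\,P(\mathrm{right})
+ P(\mathrm{might} \mid \mathrm{blight})\,P(\mathrm{blight})
+ P(\mathrm{might} \mid \mathrm{spite})\,P(\mathrm{spite})}
\]

The "subtracted from causes for might" line is the denominator at work: the more might that blight and spite can produce, the less right you may infer from sighting might alone.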
Replies from: sketerpot↑ comment by sketerpot · 2012-03-26T06:07:09.719Z · LW(p) · GW(p)
You have no idea how hard I'm giggling right now. Or maybe you do, because I'm telling you about it. Well met, mathpoet!
(I hope that mathpoets become enough of a real thing to warrant an unhyphenated word.)
Replies from: Nisan↑ comment by Nisan · 2012-03-26T07:16:08.989Z · LW(p) · GW(p)
Check out this alliteration: "When you see an infinite regress, consider a clever quining."
comment by Xece · 2012-03-08T00:22:36.384Z · LW(p) · GW(p)
Knowing is always better than not knowing
--Gregory House, M.D. - S02E11 "Need to Know"
Replies from: Grognor↑ comment by Grognor · 2012-03-08T06:09:14.321Z · LW(p) · GW(p)
Thought it was a duplicate of this superior quote, but it wasn't.
comment by shokwave · 2012-03-06T13:19:53.319Z · LW(p) · GW(p)
[it's] Strange that tradition should not show more interest in the past.
-- the character Sherkaner Underhill, from A Deepness in the Sky, by Vernor Vinge.
If people believe traditions are valuable, they should anticipate that searching the past for more traditions is valuable. But we don't see that; we see most past traditions (paradoxically!) rejected with "things are different now".
Replies from: Jayson_Virissimo, roystgnr↑ comment by Jayson_Virissimo · 2012-03-06T13:31:34.008Z · LW(p) · GW(p)
Hmm... my subjective impression is that people who talk a lot about tradition actually are more interested in history than people who don't.
Replies from: simplyeric↑ comment by simplyeric · 2012-03-06T15:21:26.443Z · LW(p) · GW(p)
My subjective impression is that people who talk a lot about tradition are more interested in "the past" than they are in "history". E.g., the history of our nation does not bear out the traditional idea that everyone is equal. Or, for that matter, the tradition of social mobility in our country, or the tradition of a wedding veil, or the tradition of Christmas caroling v. wassailing, etc.
↑ comment by roystgnr · 2012-03-06T16:09:38.240Z · LW(p) · GW(p)
If people believe traditions are valuable, they should anticipate that searching the past for more traditions is valuable.
This implication is true, but the premise typically is not. The conservative defense of tradition-for-tradition's-sake isn't really a defense of all traditions, it's a defense of long-term-stable, surviving traditions. Don't think, "It's old; revere it." Think, "It's working; don't break it." For traditions which weren't working well enough to be culturally preserved with no searching necessary, this heuristic doesn't apply. To the contrary, if it turned out that there was no correlation between how long a tradition survives and how worthwhile it is, then there would be no point in giving a priori respect to any traditions.
comment by Panic_Lobster · 2013-08-14T06:26:48.360Z · LW(p) · GW(p)
Karl Popper used to begin his lecture course on the philosophy of science by asking the students simply to 'observe'. Then he would wait in silence for one of them to ask what they were supposed to observe. [...] So he would explain to them that scientific observation is impossible without pre-existing knowledge about what to look at, what to look for, how to look, and how to interpret what one sees. And he would explain that, therefore, theory has to come first. It has to be conjectured, not derived.
David Deutsch, The Beginning of Infinity
comment by Michael Wiebe (Macaulay) · 2012-03-27T00:58:53.758Z · LW(p) · GW(p)
"Asking people to give up all forms of sacralized belonging and live in a world of purely "rational" beliefs might be like asking people to give the Earth and live in colonies orbiting the moon."
-- Jonathan Haidt, The Righteous Mind, quoted here
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2012-03-27T06:19:22.565Z · LW(p) · GW(p)
I'm still trying to decide whether going off to live in the metaphorical colonies orbiting the moon is to be considered a bad thing or a really awesome idea.
Replies from: wedrifid, Risto_Saarelma↑ comment by Risto_Saarelma · 2012-03-27T06:29:35.159Z · LW(p) · GW(p)
I mean, realistic orbiting colonies done using present-day space technology would be horrifying death traps, but metaphorical orbiting colonies are the future of humanity. I'm really confused here.
comment by NancyLebovitz · 2012-03-09T13:30:29.593Z · LW(p) · GW(p)
Politics is the art of the possible. Sometimes I’m tempted to say that political philosophy is the science of the impossible.
comment by MinibearRex · 2012-03-07T22:07:38.674Z · LW(p) · GW(p)
There’s no such thing as the unknown– only things temporarily hidden, temporarily not understood.
-Captain Kirk
Replies from: wedrifid, soreff↑ comment by wedrifid · 2012-03-09T06:48:18.423Z · LW(p) · GW(p)
There’s no such thing as the unknown– only things temporarily hidden, temporarily not understood.
Nonsense. I just threw Schrödinger's cat outside the future light cone. In your Everett branch, is the cat alive or dead?
-Captain Kirk
Ok, sure, having a physics where faster-than-light travel and even (direct) time travel are possible makes things easier.
Replies from: nshepperd, MinibearRex↑ comment by nshepperd · 2012-03-10T07:42:25.223Z · LW(p) · GW(p)
I just threw Schrödinger's cat outside the future light cone. In your Everett branch, is the cat alive or dead?
Both?
Replies from: wedrifid↑ comment by wedrifid · 2012-03-10T09:32:33.852Z · LW(p) · GW(p)
Both?
No.
Replies from: nshepperd↑ comment by nshepperd · 2012-03-11T03:18:55.716Z · LW(p) · GW(p)
Well, in this case the universal wavefunction does factorise into a product of two functions 𝛙(light cone)𝛙(cat), where 𝛙(cat) has an "alive" branch and "dead" branch, but 𝛙(light cone) does not. I'd rather identify with 𝛙(light cone) than 𝛙(light cone × cat) [i.e. 𝛙(universe)], but whatever.
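In symbols, a minimal sketch of the claim (assuming, for illustration, equal amplitudes for the two outcomes):

\[
\psi_{\mathrm{universe}} = \psi_{\mathrm{light\,cone}} \otimes \psi_{\mathrm{cat}},
\qquad
\psi_{\mathrm{cat}} = \frac{1}{\sqrt{2}} \left( |\mathrm{alive}\rangle + |\mathrm{dead}\rangle \right)
\]

The cat factor branches, but no observable confined to the light-cone factor is correlated with which branch, so there is no fact in "your" factor about the cat's state.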
The point you were trying to make stands either way.
↑ comment by MinibearRex · 2012-03-10T06:29:59.763Z · LW(p) · GW(p)
I just threw Schrödinger's cat outside the future light cone. In your Everett branch, is the cat alive or dead?
It seems to me that asking about the state of something in "your" Everett branch while it's outside your light cone is rather meaningless. The question doesn't really make sense. Someone with a detailed knowledge of physics in this situation can predict what an observer anywhere will observe.
But in general, your point is correct. We do have a very hard time trying to learn about events outside our light cone, etc. But the message in the quote is simply the idea that an uncertain map != an uncertain territory.
Replies from: Incorrect, wedrifid↑ comment by Incorrect · 2012-03-10T06:35:28.872Z · LW(p) · GW(p)
It seems to me that asking about the state of something in "your" Everett branch while it's outside your light cone is rather meaningless. The question doesn't really make sense. Someone with a detailed knowledge of physics in this situation can predict what an observer anywhere will observe.
So, if it was someone you care about instead of a cat, would you prefer that this happened or that they disappeared entirely?
Replies from: None↑ comment by [deleted] · 2012-03-10T09:01:17.563Z · LW(p) · GW(p)
It is still not meaningful from a physical standpoint. If you were to throw something I valued outside my future lightcone, then I would treat that the same as you destroying said thing.
And may I remind you that Schrödinger's cat was proposed as a thought-experiment counterargument to the Copenhagen interpretation, so asking if it is alive or dead before I have had particle interaction with it is equally meaningless, because it has yet to decohere.
Replies from: wedrifid↑ comment by wedrifid · 2012-03-10T09:48:05.129Z · LW(p) · GW(p)
It is still not meaningful from a physical standpoint.
Yes it is. Physics doesn't revolve around you. The fact that you can't influence or observe something is a limitation in you, not in physics. Stuff keeps existing when you can't see it.
If you were to throw something I valued outside my future lightcone, then I would take the same as you destroying said thing.
I don't believe you. I would bet that, if actually given the choice between someone you loved being sent outside your future lightcone and then destroyed, or just sent outside the future lightcone and given delicious cookies, you would prefer them to be given the far-away cookies rather than the far-away destruction.
Replies from: None↑ comment by [deleted] · 2012-03-10T11:27:33.312Z · LW(p) · GW(p)
Yes, of course I believe in the implied invisible. But from a personal standpoint it does not matter, because the repercussions are the same either way, unless you can use your magical "throw stuff outside my future lightcone" powers to bring them back. Outside the future lightcone = I can never interact with it.
And if I have to be really nitpicky, current macroscopic physics does revolve around the observer, but certain things can be agreed upon, such as the Hamiltonian and timelike, spacelike, and lightlike distances. Saying physics does not revolve around me implies that there is a common reference point, which there isn't.
Also, I think we are straying from meaningful discussion.
↑ comment by wedrifid · 2012-03-10T09:41:41.357Z · LW(p) · GW(p)
The question doesn't really make sense. Someone with a detailed knowledge of physics in this situation can predict what an observer anywhere will observe.
No they can't. They most certainly can't predict what the observer standing right next to the damn box with the cat in it will observe upon opening it. In fact, they can't even predict what all observers anywhere in my future light cone will observe (just those observations that could ever be sent back to me).
comment by [deleted] · 2012-03-03T15:51:49.602Z · LW(p) · GW(p)
Quetzalcoatl frowned more deeply. Finally he said, "Miguel, tell me how this fight started." "Fernandez wishes to kill me and enslave my family." "Why should he want to do that?" "Because he is evil," Miguel said. "How do you know he is evil?" "Because," Miguel pointed out logically, "he wishes to kill me and enslave my family."
— Henry Kuttner, Or Else
comment by bramflakes · 2012-03-24T14:35:34.809Z · LW(p) · GW(p)
When understanding is forgotten, education remains.
Though I don't remember who said it.
comment by [deleted] · 2012-03-02T04:01:53.374Z · LW(p) · GW(p)
When the perishable puts on the imperishable, and the mortal puts on immortality, then shall come to pass the saying that is written: “Death is swallowed up in victory.” “O death, where is your victory? O death, where is your sting?” The sting of death is sin, and the power of sin is the law.
1 Corinthians 15:54-57
(I like this quote, as long as it's shamelessly presented without context of the last line: "But thanks be to God, who gives us the victory through our Lord Jesus Christ." )
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2012-03-03T01:04:42.420Z · LW(p) · GW(p)
The sting of death is sin, and the power of sin is the law.
How do you interpret that line?
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2012-03-12T11:23:09.303Z · LW(p) · GW(p)
The sting of death is ignorance, and the power of ignorance is the indifference of the universe. We do not yet know how to stop death, and until we do, we will go on dying.
comment by Multiheaded · 2012-03-20T12:04:44.201Z · LW(p) · GW(p)
K for your information, asshole, I have seen a lion. And not one of your crap ass queen of the jungle homoerotic pussy-cat lions. A real lion, with fangs and horns and wings and shit. Don't pull your fucking wierd ass african voodoo hypnosis crap on me when you don't even know wtf you're talking about.
-Anonymous
comment by Viliam_Bur · 2012-03-03T12:57:48.646Z · LW(p) · GW(p)
"Do you believe in revolution
Do you believe that everything will change
Policemen to people
And rats to pretty women
Do you think they will remake
Barracks to bar-rooms
Yperit (mustard gas) to Coca-Cola
And truncheons to guitars?
Oh-oh, my naive
It will never be like that
Oh-oh, my naive
Life is like it is
Do you think that ever
Inferiority complexes will change to smiles
Petržalka to Manhattan
And dirty factories to hotels
Do you think they will elevate
Your idols to gods
That you will never have to
Bathe your sorrow with alcohol?
Oh-oh, my naive...
Do you think that suddenly
Everyone will reconcile with everyone
That no one will write you off
If you will have holes in your jeans
Do you think that in everything
Everyone will help you
That you will never have to be
Afraid of a higher power?
Oh-oh, my naive..."
My translation of a 1990s Slovak punk-rock song, "Nikdy to tak nebude" by Slobodná Európa. Is it an example of an outside view, or just trying to reverse stupidity?
comment by Ezekiel · 2012-03-05T23:08:16.552Z · LW(p) · GW(p)
Science knows it doesn't know everything; otherwise, it'd stop.
Replies from: Oscar_Cunningham
↑ comment by Oscar_Cunningham · 2012-03-06T00:50:19.096Z · LW(p) · GW(p)
Duplicate: http://lesswrong.com/r/all/lw/9pk/rationality_quotes_february_2012/5tm0
comment by CasioTheSane · 2012-03-09T06:52:48.350Z · LW(p) · GW(p)
Health in the modern era, health in the 21st century is a learned skill.
-Jeff Olson
comment by Spectral_Dragon · 2012-03-04T02:13:50.988Z · LW(p) · GW(p)
"The only sovereign I can allow to rule me is reason. The first law of reason is this: what exists exists; what is is. From this irreducible, bedrock principle, all knowledge is built. This is the foundation from which life is embraced. Reason is a choice. Wishes and whims are not facts, nor are they a means to discovering them. Reason is our only way of grasping reality--it is our basic tool of survival. We are free to evade the effort of thinking, to reject reason, but we are not free to avoid the penalty of the abyss we refuse to see."
-- Terry Goodkind, Faith of the Fallen. I know quite a few here dislike the author, but there's still a lot of good material, like this one, or the Wizard's Rules.
Replies from: Ezekiel↑ comment by Ezekiel · 2012-03-06T14:54:54.593Z · LW(p) · GW(p)
Wrong. Ockham's Razor is, at best, deducible from the axioms of probability theory, which are logically independent of "what is, is". Without the Razor, most of human knowledge is not justifiable.
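One standard way to cash out "deducible from the axioms of probability theory" (a sketch under the usual assumptions: a countable hypothesis space, and a Solomonoff-style description-length prior as one consistent choice among many): priors over mutually exclusive hypotheses must sum to one, so they cannot all be large, and any coherent assignment must discount all but a handful of hypotheses; weighting by description length makes that discounting track simplicity.

\[
\sum_{h \in \mathcal{H}} P(h) = 1, \qquad P(h) \propto 2^{-K(h)}
\]

where \mathcal{H} is the hypothesis space and K(h) is the length of the shortest program that outputs h. The Razor then falls out as a fact about priors rather than a separate axiom, though which universal machine defines K is itself a choice the axioms don't make for you.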
Replies from: Spectral_Dragon↑ comment by Spectral_Dragon · 2012-03-06T15:39:34.387Z · LW(p) · GW(p)
I have yet to study probability theory in depth, but how can it be wrong? It simply means relying on reason instead of wishes, facts instead of faith. Probability might be interesting, but since it's subjective, it only serves as an estimate in figuring out what might be true. The above quote tells us that facts are facts, and that we can choose not to believe them, but they are still there. Using logic, for example Occam's Razor, helps in discerning fact from belief.
The quote can be boiled down to "what is is, regardless of our knowledge".
Replies from: Ezekiel↑ comment by Ezekiel · 2012-03-06T17:21:52.609Z · LW(p) · GW(p)
I'm sorry, I wasn't clear. Specifically:
what exists exists; what is is. From this irreducible, bedrock principle, all knowledge is built.
is wrong. Our knowledge is, as you say, subjective; it's based on our calculations, which are fallible, and on our axioms (or "priors"), which are even more fallible.
Replies from: Spectral_Dragon↑ comment by Spectral_Dragon · 2012-03-06T18:04:56.946Z · LW(p) · GW(p)
Thanks for clearing that up! As far as I can tell, however, all subjective knowledge is based on interpretations of the objective. We can all be wrong, but what is, is. We experiment to figure out what is in the first place, before we can try to form calculations, no? It would be more like "look at the territory first, else you might fall and break your neck if your map's wrong". I feel like I'm missing something painfully obvious here, though. Where am I going wrong?
Replies from: Ezekiel↑ comment by Ezekiel · 2012-03-06T23:33:40.139Z · LW(p) · GW(p)
"What is, is" is a true statement, and one would do well to bear it in mind. My objection was to the assertion (as perceived by me) that we - as rationalists - can claim to deduce everything we know from that simple fact. We can't, and it's a flaw I don't think we pay enough attention to.
Replies from: Spectral_Dragon↑ comment by Spectral_Dragon · 2012-03-07T20:20:47.859Z · LW(p) · GW(p)
Maybe we should, then. I've always perceived it as: we can potentially deduce everything from... well, not just that fact, but the assumption that what is is, and we can only do our best to interpret it. We'll most likely never be completely right, I know damn well I'm not, but I understand your reasoning, anyway. What would in your view be impossible to deduce, then?
comment by Luke_A_Somers · 2012-03-11T16:11:49.077Z · LW(p) · GW(p)
The important thing, I take it, is to decide the level of our contribution on your own, without doing any detailed gathering of data or modeling. -- LeoChopper, at sluggy.net, summarizing an argument against AGW.
(Okay, I understand it sitting at 0. Downvoted for what? Putting modeling on the same footing as detailed gathering of data?)
Replies from: roystgnr↑ comment by roystgnr · 2012-03-24T22:44:23.685Z · LW(p) · GW(p)
I'm not the one who downvoted it, but I'm about to add another, because the quote makes little to no sense without context. Who are "our" and "your"? Does "contribution" refer to CO2 emissions, or to political activism, or to planning work, or to research?
People also tend to downvote pro- and anti-AGW arguments here as "mindkilling", but this one hasn't even reached that point yet. From just the quoted text I can't even be certain whether this is an anti-AGW statement (climate modelling is insufficiently detailed and data is too sparse to justify economic contributions to mitigating global warming!) or a pro-AGW sarcastic summary of an anti-AGW argument (you're ignoring our detailed data and modeling and just deciding how much CO2 we should contribute to the air!) or something I've missed entirely.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2012-03-25T23:33:04.494Z · LW(p) · GW(p)
I see. It was the latter - someone had just pooh-poohed basically all climate science, explicitly citing gut feeling. The above was a very straightforward summary of the 'argument', not really sarcastic.
comment by [deleted] · 2012-03-06T17:49:04.661Z · LW(p) · GW(p)
All this dog and pony, still monkeys the whole time. We could not keep from flinging shit in our modern suits and ties.
Modest Mouse, lyrics by Isaac Brock
Replies from: David_Gerard↑ comment by David_Gerard · 2012-03-30T11:05:02.385Z · LW(p) · GW(p)
I don't see why this got downvoted. It's making "humans are still apes, despite their pretensions" into a memorable image.
Replies from: AlanCrowe, ArisKatsaris↑ comment by AlanCrowe · 2012-03-30T12:18:43.094Z · LW(p) · GW(p)
"Humans are still apes" is un-Darwinian.
Darwin is saying that all animals are linked by genealogical ties. The mouse and the elephant share a common ancestor, a small, shrew-like creature, 200 million years ago. So is he saying elephants are still mice, just big mice with a funny nose? No: the theory, as the book title suggests, is a theory of origins. Given 10 million years, descent with modification can come up with something genuinely new. By spreading the necessary changes across millions of generations, descent with modification can even produce genuine novelty without needing a mouse to give birth to an elephant.
Some people look at modern technological civilization and see it as evidence that humans are not apes, but are their own kind of thing, genuinely new. Darwinians accept that sufficient such evidence can prove the point that humans (or maybe post-humans) are not apes, because it is central to Darwin's theory that some kinds of genuine novelty arise despite (and indeed through) long chains of descent.
Replies from: David_Gerard, army1987↑ comment by David_Gerard · 2012-03-31T09:16:17.930Z · LW(p) · GW(p)
Uh, the reason to say "humans are apes" is that doing so turns out to have useful predictive power. That being the actual point of the original quote.
↑ comment by A1987dM (army1987) · 2012-03-30T17:16:34.532Z · LW(p) · GW(p)
Humans are still apes according to any monophyletic definition of ape, given that bonobos are more closely related to us than to orangutans. (Also, birds are dinosaurs and dogs are wolves.)
Replies from: David_Gerard, AlanCrowe, Eugine_Nier↑ comment by David_Gerard · 2012-03-30T17:55:08.831Z · LW(p) · GW(p)
"Birds are dinosaurs" is becoming commonplace. Even the Wikipedia article on dinosaurs has given up and gone present tense.
↑ comment by AlanCrowe · 2012-03-30T18:09:25.401Z · LW(p) · GW(p)
Human civilizations are extremely complicated, and defy current attempts to understand them. One indirect approach is to leave humans to one side for the moment and to study bonobos, chimpanzees, and gorillas first. Where does that get us? There are two competing ideas.
ONE: The huge differences between modern human civilizations and the social behaviour of bonobos, chimpanzees, and gorillas are a reflection of recent evolution. In the past few million years, since the last common ancestor, human evolution has taken some strange turns, leading to the advanced technological society we see around us. When we study bonobos, chimpanzees, and gorillas, we are looking at creatures without key adaptations, and when we try to transfer insights to help us understand human social behaviour, we end up misled.
TWO: Once we understand bonobo, chimpanzee, and gorilla behaviour, we have the key to understanding all apes, including humans. Human civilisation may be incomprehensible when we come at it cold, but having warmed up by puzzling out the basis of the simpler social behaviours of other apes, we can expect to start making progress.
Which of these two views is correct? That strikes me as a very hard question. I'm uncomfortable with the words "humans are still apes" because that phrase seems to be used to beg the question. The more conservative formulation "humans and apes had a common ancestor a few million years ago." dodges giving a premature opinion on a hard question.
Here is a thought experiment to dramatize the issue: a deadly virus escapes from a weapons lab and kills all humans. Now the talking-animal niche on earth is vacant again. Will chimpanzees or gorillas evolve to fill it, building their own technologically advanced civilizations in a few million years' time? If you believe view number two, this seems reasonably likely. If you believe view number one, it seems very unlikely. View one is much more consistent with the idea that the strange turns in human evolution in the past million years are a one-in-a-million freak and a candidate for the great filter.
Replies from: army1987, Desrtopa, Richard_Kennaway↑ comment by A1987dM (army1987) · 2012-03-30T19:06:08.357Z · LW(p) · GW(p)
"humans and apes had a common ancestor a few million years ago."
More like “any common ancestor of all apes is also an ancestor of all humans”.
(Humans are not apes if you define apes paraphyletically, e.g. as ‘the descendants of the most recent common ancestor of bonobos and gibbons, excluding humans’, but then “humans are not apes” becomes a tautology.)
↑ comment by Desrtopa · 2012-03-30T18:23:26.973Z · LW(p) · GW(p)
Which of these two views is correct? That strikes me as a very hard question. I'm uncomfortable with the words "humans are still apes" because that phrase seems to be used to beg the question. The more conservative formulation "humans and apes had a common ancestor a few million years ago." dodges giving a premature opinion on a hard question.
Humans might have adaptations which set us apart from all the other apes behavior-wise, but we share a common ancestor with chimps and bonobos more recently than they share a common ancestor with orangutans. It doesn't make a lot of sense to say we split off from the apes millions of years ago, when we're still more closely related to some of the apes than those apes are to other species of ape.
Edit: already pointed out in the grandparent; I guess this is what I get for only looking at the local context.
↑ comment by Richard_Kennaway · 2012-03-31T21:40:18.397Z · LW(p) · GW(p)
Which of these two views is correct? That strikes me as a very hard question. I'm uncomfortable with the words "humans are still apes" because that phrase seems to be used to beg the question. The more conservative formulation "humans and apes had a common ancestor a few million years ago." dodges giving a premature opinion on a hard question.
How you define the word "ape" makes no difference to the facts about our relationships with our ancestors and their other descendants.
↑ comment by Eugine_Nier · 2012-03-31T02:46:11.684Z · LW(p) · GW(p)
Humans are still apes according to any monophyletic definition of ape
I don't see why being monophyletic is the most relevant property of definitions.
Also, are you going to attempt to argue that humans are fish?
Replies from: alex_zag_al, army1987↑ comment by alex_zag_al · 2012-03-31T02:58:28.484Z · LW(p) · GW(p)
The fish thing is irrelevant. If what makes bonobos and orangutans apes is that they share a common ancestor, then that also makes us an ape, since that's our ancestor too. Can't adapt that argument to fish, because descendants of the ancestor we share with fish are not generally called fish, the way descendants of the ancestor we share with orangutans are generally called apes.
Replies from: Nornagest↑ comment by Nornagest · 2012-03-31T03:30:25.109Z · LW(p) · GW(p)
Can't adapt that argument to fish, because descendants of the ancestor we share with fish are not generally called fish, the way descendants of the ancestor we share with orangutans are generally called apes.
I'm not sure this holds water: a common-ancestry approach would have to take in lobe-finned fishes like the lungfish, which are more closely related to tetrapods but are called fish on the basis of a morphological similarity derived from a common ancestor. Essentially the same process as for apes. They're in good company, though: there are plenty of traditional taxonomical groups which turn out to be paraphyletic when you take a cladistic approach, including reptiles.
↑ comment by A1987dM (army1987) · 2012-03-31T09:01:58.487Z · LW(p) · GW(p)
There would be no point in defining fish monophyletically anyway, as it would then be just a synonym of craniates. (Also note that “apes, i.e. non-human hominoids, do not include humans” is a tautology but “fish, i.e. non-tetrapod craniates, do not include humans” is not.)
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-03-31T18:27:09.320Z · LW(p) · GW(p)
(Of course, you could then say “There would be no point in defining apes monophyletically anyway, as it would then be just a synonym of hominoids.” But hominoids is a much uglier word, and hominoids/hominids/hominines/etc. are much harder to remember than apes/great apes/African apes/etc. (plus, my spell checker baulks at some of the former, FWIW). See this proposal to rename the scientific names of the clades.)
↑ comment by ArisKatsaris · 2012-03-30T11:30:12.096Z · LW(p) · GW(p)
I don't see why this got downvoted.
Bad spelling and bad punctuation would suffice.
comment by Will_Newsome · 2012-03-02T09:39:18.035Z · LW(p) · GW(p)
I swear to interpret every phenomenon as a particular dealing of God with my soul.
Aleister Crowley, Magick, Liber ABA, Book 4
Replies from: JoachimSchipper, Will_Newsome↑ comment by JoachimSchipper · 2012-03-02T14:25:44.393Z · LW(p) · GW(p)
Could you explain what you mean by that?
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-03-02T14:41:14.838Z · LW(p) · GW(p)
If you search the text of the book, e.g. with Google Books, you can see the four places where it appears and get a sense of the context and meaning. His talk of pyramids is similar to Eliezer's Void or Musashi's nameless virtue or God; seeing that connection should, I think, be enough to figure the rest out? Maybe? It's a pretty deep piece of wisdom, though, so a lot of the meaning might not be immediately obvious. Hence my trepidation about explaining it; it'd take too long.
Replies from: DSimon↑ comment by DSimon · 2012-03-02T22:54:10.434Z · LW(p) · GW(p)
If it's too deep to be understandable without explanation, and you don't think it's feasible to explain it here, then why did you put the quote up in the first place?
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-03-03T00:52:39.813Z · LW(p) · GW(p)
Heterogeneous audience and asymmetric costs/benefits to reading it: people who don't get it aren't harmed much by its presence; the few people who do get it should benefit quite a bit.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-03-03T00:56:18.250Z · LW(p) · GW(p)
Shouldn't a good pithy saying work in the opposite way? The people who don't get it walk away enlightened (or, at least, filled with curiosity regarding the topic), while the ones in the know are unharmed.
What's the point of telling the chosen few something which they already know?
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-03-03T01:22:20.168Z · LW(p) · GW(p)
It's something that you could have derived if you'd thought to but didn't, like Bayes' rule. Once it's pointed out you immediately see why it's true and gain a fair bit of insight, but first you have to understand basic algebra. It's basically like clichés such as "be the change you want to see in the world", but on a higher level; most normal people don't have enough knowledge to correctly interpret "be the change you want to see in the world", and most smart people don't have enough knowledge to correctly interpret "interpret every phenomenon as a particular dealing of God with your soul", but the few who do should benefit a lot.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-03-03T02:11:29.592Z · LW(p) · GW(p)
In that case I'm voting down your quote, because, not being one of the Elect, I see no particular meaning in it. But if you wrote some sort of a Sequence on the topic, I might vote it up.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-03-03T02:19:58.235Z · LW(p) · GW(p)
I think that's the correct choice; the quote and quotes like it should be voted down to minus ten or so, because most people will get no benefit from it.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-03-03T03:20:28.976Z · LW(p) · GW(p)
Do you consider it more than negligibly likely that the benefit-receiving subset will read a comment voted down to -10 or so?
Replies from: None, Will_Newsome↑ comment by [deleted] · 2012-03-03T10:20:00.236Z · LW(p) · GW(p)
I am more likely to read heavily downvoted quotes, simply for the sake of novelty, than quotes voted at -2 to 4 karma. I don't think I'm in the benefit-receiving subset, though.
Replies from: radical_negative_one↑ comment by radical_negative_one · 2012-03-04T07:21:43.920Z · LW(p) · GW(p)
I read strongly downvoted posts as well, but perhaps they have more than just novelty value. For a post that is merely bad, people usually stop downvoting it once it's negative. But something voted to -10 or below is often bad in a way that serves as an example of what not to do. Heavily downvoted comments can be educational.
↑ comment by Will_Newsome · 2012-03-03T03:27:32.099Z · LW(p) · GW(p)
Yes, especially if I point them to it. Having it already sitting there with links is useful. There's also a non-negligible subset of people that read my comments from my user page.
↑ comment by Will_Newsome · 2012-03-02T13:50:25.066Z · LW(p) · GW(p)
This might actually be the highest wisdom-to-length ratio I've ever seen in an English sentence. "Take heed therefore how ye hear: for whosoever hath, to him shall be given; and whosoever hath not, from him shall be taken even that which he seemeth to have" from Jesus is also pretty high up there.
Replies from: shokwave, Dmytry, Desrtopa↑ comment by shokwave · 2012-03-03T23:35:23.457Z · LW(p) · GW(p)
This might actually be the highest wisdom-to-length ratio I've ever seen in an English sentence.
Well let me impress you:
So heed this: whoever has, will be given to; and whoever has not, more will be taken from.
Replies from: Incorrect, army1987, Will_Newsome, DSimon↑ comment by Incorrect · 2012-03-03T23:43:20.391Z · LW(p) · GW(p)
Exchanges like this make me wish we had a signalling-analysis novelty account, akin to reddit's joke-explainer.
↑ comment by A1987dM (army1987) · 2012-03-24T12:57:06.415Z · LW(p) · GW(p)
Italian is even more awesome: the proverb Piove sempre sul bagnato (lit. ‘it always rains on the wet’) says the same thing in eight syllables. :-)
(There was once a discussion in Italy about whether to stop teaching Latin in a certain type of high schools. Someone said that Latin should be taught because it's the intellectual equivalent of high-nutrient food, giving the example of the proverb Homini fingunt et credunt and pointing out that a literal translation (‘People feign and believe’) would be nearly meaningless, and an actually meaningful translation (‘People make up things and then they end up believing them themselves’) wouldn't be as terse and catchy. But ISTM that all natural languages have proverbs whose point is not immediately obvious from the literal meaning, so that's hardly an argument as to why one particular language should be taught.)
↑ comment by Will_Newsome · 2012-03-04T01:51:24.694Z · LW(p) · GW(p)
Lolz, but the "how ye hear" part is actually an important nuance. (And sadly it doesn't appear in a few of the other gospels I think.) ETA: Also the "seemeth to have" part is actually an important nuance. (And sadly it doesn't appear in a few of the other gospels I think.)
Replies from: shokwave↑ comment by DSimon · 2012-03-04T06:01:22.609Z · LW(p) · GW(p)
The rich get rich, but the poor stay poor.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-03-04T06:16:17.133Z · LW(p) · GW(p)
But that's not as abstract and makes it seem like it's literally only about money, rather than a general principle of credit assignment that has important implications for people who want to have better epistemic habits. That's why the "take heed therefore how ye hear" part is important.
Take heed therefore how ye hear: for whosoever hath good inductive biases, to him more evidence shall be given, and he shall have an abundance: but whosoever hath not good inductive biases, from him shall be taken away even what little evidence that he hath.
ETA: I feel like some pedantic snobbish artist going on about this sort of thing, it's kinda funny.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2012-03-24T11:49:05.615Z · LW(p) · GW(p)
It's conceivable that "take heed" is also a clue that this process will just happen-- it's not your job to be taking advantage of those who have little.
↑ comment by Dmytry · 2012-03-24T11:44:03.201Z · LW(p) · GW(p)
What is Jesus even talking about? Arguing that capitalism leads to monopolistic capitalism? Arguing against economic inequality? Discussing utility monsters? Ordering followers to strengthen economic inequality by giving to the rich?
Imagine LW, after the fall of civilization, becoming a cult of Eliezer, misquoting and taking out of context anything said on any topic... after the destruction of the internet, relying on memories.
↑ comment by Desrtopa · 2012-03-24T05:01:18.643Z · LW(p) · GW(p)
You can improve the wisdom-to-length ratio just by taking the "so" out of the whosoevers.
Edit: already done, and right below me too.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-03-24T11:34:02.084Z · LW(p) · GW(p)
Length isn't measured in number of letters; it's measured in ease of memorization, the encoding scheme of the brain. "Whosoever" flows better.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-03-24T13:00:22.195Z · LW(p) · GW(p)
Probably because you've already heard that quotation with the whosoever. In the encoding scheme where 0 encodes the lyrics to “Bohemian Rhapsody” and the encodings of all other messages start with 1, the lyrics to “Bohemian Rhapsody” have the shortest “length” in your sense of the word.
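The rigged code, spelled out (a sketch; B stands for the “Bohemian Rhapsody” lyrics and U for whatever baseline encoding you started with):

\[
C(B) = 0, \qquad C(x) = 1\,U(x) \ \mathrm{for}\ x \neq B
\]
\[
L_C(B) = 1, \qquad L_C(x) = L_U(x) + 1 \ \mathrm{for}\ x \neq B
\]

"Length" under a code chosen after you've seen the message is arbitrary; description length only means something once the encoding is fixed in advance, and even then only up to a constant that depends on the choice of machine.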
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-03-24T13:23:15.865Z · LW(p) · GW(p)
I'm talking about writing to memory, not reading from it. I don't think it's just because I've heard it with "whosoever"; I think it's because "whosoever" is more poetic and distinct in context.