Rationality Quotes September 2011
post by dvasya · 2011-09-02T07:38:10.556Z · LW · GW · Legacy · 492 comments
Here's the new thread for posting quotes, with the usual rules:
Comments sorted by top scores.
comment by James_Miller · 2011-09-01T17:13:46.028Z · LW(p) · GW(p)
It is a vast, and pervasive, cognitive mistake to assume that people who agree with you (or disagree) do so on the same criteria that you care about.
↑ comment by MarkusRamikin · 2011-09-13T12:30:41.272Z · LW(p) · GW(p)
Oh, how very true.
comment by gwern · 2011-09-11T14:53:32.534Z · LW(p) · GW(p)
Again and again, I’ve undergone the humbling experience of first lamenting how badly something sucks, then only much later having the crucial insight that its not sucking wouldn’t have been a Nash equilibrium.
↑ comment by PhilGoetz · 2011-09-11T19:07:13.909Z · LW(p) · GW(p)
Interesting! Examples?
↑ comment by gwern · 2011-09-11T19:28:40.827Z · LW(p) · GW(p)
The whole link is basically a tissue of suggested examples by Aaronson and commenters.
comment by Maniakes · 2011-09-02T20:52:25.814Z · LW(p) · GW(p)
The church is near, but the road is icy. The bar is far away, but I will walk carefully.
-- Russian proverb
↑ comment by Bugmaster · 2011-09-03T05:20:28.148Z · LW(p) · GW(p)
I'm Russian, and I don't think I've heard this proverb before. What does it sound like in Russian? Just curious.
↑ comment by Ms_Use · 2011-09-03T07:22:25.069Z · LW(p) · GW(p)
It's a rather lousy translation; a closer variant of the proverb is mentioned in Vladimir Dahl's famous collection of Russian proverbs: Церковь близко, да ходить склизко, а кабак далеконько, да хожу потихоньку.
↑ comment by Normal_Anomaly · 2011-09-03T16:28:41.662Z · LW(p) · GW(p)
Can you provide a better translation?
↑ comment by Risto_Saarelma · 2011-09-03T06:57:30.796Z · LW(p) · GW(p)
comment by [deleted] · 2011-09-01T14:43:49.816Z · LW(p) · GW(p)
.
↑ comment by Nic_Smith · 2011-09-01T17:06:34.514Z · LW(p) · GW(p)
There is actually a pre-split thread about this essay on Overcoming Bias, and the notion of "Keep Your Identity Small" has come up repeatedly since then.
↑ comment by Will_Newsome · 2011-09-03T09:50:10.742Z · LW(p) · GW(p)
And of course "Cached Selves", and especially this comment on that post.
comment by AlexSchell · 2011-09-08T20:13:09.717Z · LW(p) · GW(p)
It's one thing to make lemonade out of lemons, another to proclaim that lemons are what you'd hope for in the first place.
-- Gary Marcus, Kluge
Relevant to deathism and many other things.
comment by Eugine_Nier · 2011-09-03T05:36:08.422Z · LW(p) · GW(p)
One day, I was playing with an "express wagon," a little wagon with a railing around it. It had a ball in it, and I noticed something about the way the ball moved. I went to my father and said, "Say, Pop, I noticed something. When I pull the wagon, the ball rolls to the back of the wagon. And when I'm pulling it along and I suddenly stop, the ball rolls to the front of the wagon. Why is that?"
"That, nobody knows," he said. "The general principle is that things which are moving tend to keep on moving, and things which are standing still tend to stand still, unless you push them hard. This tendency is called 'inertia,' but nobody knows why it's true." Now, that's a deep understanding. He didn't just give me the name.
-- Richard Feynman
comment by Jayson_Virissimo · 2011-09-01T16:40:38.911Z · LW(p) · GW(p)
The typical citizen drops down to a lower level of mental performance as soon as he enters the political field. He argues and analyzes in a way which he would readily recognize as infantile within the sphere of his real interests. He becomes primitive again.
-- Joseph A. Schumpeter, Capitalism, Socialism, and Democracy
In other words, politics is the mind killer.
↑ comment by Will_Newsome · 2011-09-10T13:33:54.503Z · LW(p) · GW(p)
I think it may be wiser to say "policy is the mind killer"; it emphasizes the cross-institutional cross-scale pervasive nature of political thinking.
comment by [deleted] · 2011-09-03T21:07:47.419Z · LW(p) · GW(p)
"The ordinary modes of human thinking are magical, religious, and social. We want our wishes to come true; we want the universe to care about us; we want the esteem of our peers. For most people, wanting to know the truth about the world is way, way down the list. Scientific objectivity is a freakish, unnatural, and unpopular mode of thought, restricted to small cliques whom the generality of citizens regard with dislike and mistrust."
-- John Derbyshire
comment by Tesseract · 2011-09-01T20:48:19.495Z · LW(p) · GW(p)
If you want to live in a nicer world, you need good, unbiased science to tell you about the actual wellsprings of human behavior. You do not need a viewpoint that sounds comforting but is wrong, because that could lead you to create ineffective interventions. The question is not what sounds good to us but what actually causes humans to do the things they do.
-- Douglas Kenrick
↑ comment by Vladimir_M · 2011-09-02T07:51:44.943Z · LW(p) · GW(p)
(Retracted because I don't find the point significant enough to argue.)
comment by MichaelGR · 2011-09-11T04:37:05.240Z · LW(p) · GW(p)
“When you’re young, you look at television and think, There’s a conspiracy. The networks have conspired to dumb us down. But when you get a little older, you realize that’s not true. The networks are in business to give people exactly what they want. That’s a far more depressing thought. Conspiracy is optimistic! You can shoot the bastards! We can have a revolution! But the networks are really in business to give people what they want. It’s the truth.”
-- Steve Jobs, Wired, February 1996
↑ comment by PhilGoetz · 2011-09-11T19:11:51.061Z · LW(p) · GW(p)
It's still an open question how well the networks succeed at giving people what they want. We still see, for instance, Hollywood routinely spending $100 million on a science fiction film written and directed by people who know nothing about science or science fiction, over 40 years after the success of Star Trek proved that the key to a successful science fiction show is hiring professional science fiction writers to write the scripts.
↑ comment by NancyLebovitz · 2011-09-13T13:08:28.370Z · LW(p) · GW(p)
I don't think knowing about science had much to do with the success of Star Trek. You're probably right about the professional science fiction writers, though. Did they stop using professional sf writers for the third season?
In general, does having professional science fiction writers reliably contribute to the success of movies?
A data point which may not point in any particular direction: I was delighted by Gattaca and The Truman Show, even if I had specific nitpicks with them [1], because they seemed like Golden Age [2] science fiction. When composing this reply, I found that they were both written by Andrew Niccol, and I don't think a professional science fiction writer could have done better. Gattaca did badly (though it got critical acclaim); The Truman Show did well.
[1] It was actually at least as irresponsible as it was heroic for the main character in Gattaca to sneak into a space project he was medically unfit for.
I don't think Truman's fans would have dropped him so easily. And I would rather have seen a movie with Truman's story compressed into the first 15 minutes, and the main part of the movie being about his learning to live in the larger world.
[2] I think the specific Golden Age quality I was seeing was using stories to explore single clear ideas.
↑ comment by Bugmaster · 2011-09-20T22:11:31.272Z · LW(p) · GW(p)
And I would rather have seen a movie with Truman's story compressed into the first 15 minutes, and the main part of the movie being about his learning to live in the larger world.
I disagree. As I see it, The Truman Show is, at its core, a Gnostic parable similar to The Matrix, but better executed. It follows the protagonist's journey of discovery, as he begins to get hints about the true nature of reality; namely, that the world he thought of as "real" is, in fact, a prison of illusion. In the end, he is able to break through the illusion, confront its creator, and reject his offer of a comfortable life inside the illusory world, in favor of the much less comfortable yet fully real world outside.
In this parable, the Truman Show dome stands for our current world (which, according to Gnostics, is a corrupt illusion); Christof stands for the Demiurge; and the real world outside stands for the true world of perfect forms / pure Gnosis / whatever, which can only be reached by attaining enlightenment (for lack of a better term). Thus, it makes perfect sense that we don't get to see Truman's adventures in the real world -- they remain hidden from the viewer, just as the true Gnostic world is hidden from us. In order to overcome the illusion, Truman must let go of some of his most cherished beliefs, and with them discard his limitations.
IMO, the interesting thing about The Truman Show is not Truman's adventures, but his journey of discovery and self-discovery. Sure, we know that his world is a TV set, but he doesn't (at first, that is). I think the movie does a very good job of presenting the intellectual and emotional challenges involved in that kind of discovery. Truman isn't some sort of a cliched uber-hero like Neo; instead, he's just an ordinary guy. Letting go of his biases, and his attachments to people who were close to him (or so he thought) involves a great personal cost for Truman -- which, surprisingly, Jim Carrey is actually able to portray quite well.
Sure, it might be fun to watch Truman run around in the real world, blundering into things and having adventures, but IMO it wouldn't be as interesting or thought-provoking -- even accounting for the fact that Gnosticism is, in fact, not very likely to be true.
↑ comment by wedrifid · 2011-09-20T22:52:10.235Z · LW(p) · GW(p)
As I see it, The Truman Show is, at its core, a Gnostic parable similar to The Matrix, but better executed.
Your essay fails to account for the deep philosophical metaphors of guns, leather, gratuitous exaggerated action and nerds doing kung fu because of their non-conformist magic.
↑ comment by Bugmaster · 2011-09-20T23:16:40.556Z · LW(p) · GW(p)
Your essay fails to account for the deep philosophical metaphors of guns, leather, gratuitous exaggerated action and nerds doing kung fu because of their non-conformist magic.
With apologies to Freud, sometimes a leather-clad femme fatale doing kung fu is just a leather-clad femme fatale doing kung fu :-)
↑ comment by wedrifid · 2011-09-21T04:52:56.066Z · LW(p) · GW(p)
With apologies to Freud, sometimes a leather-clad femme fatale doing kung fu is just a leather-clad femme fatale doing kung fu :-)
That's kind of the point. A leather-clad femme fatale doing kung fu probably isn't a costar in an 'inferior execution of a Gnostic parable'. She's probably a costar in an entertaining nerd-targeted action flick.
In general it is a mistake to ascribe motives or purpose (Gnostic parable) to something and judge it according to how well it achieves that purpose (inferior execution) when it could be considered more successful by other plausible purposes.
Another thing the Matrix wouldn't be a good execution of, if that is what it were, is a vaguely internally coherent counterfactual reality even at the scene level. FFS Trinity, if you pointed a gun at my head and said 'Dodge This!' then I'd be able to dodge it without any Agent powers. Yes, this paragraph is a rather loosely related tangent but damn. The 'batteries' thing gets a bad rap but I can suspend my disbelief on that if I try. Two second head start on your 'surprise attack' to people who can already dodge bullets is inexcusable.
↑ comment by Bugmaster · 2011-09-21T05:07:52.497Z · LW(p) · GW(p)
In general it is a mistake to ascribe motives or purpose (Gnostic parable) to something and judge it according to how well it achieves that purpose (inferior execution) when it could be considered more successful by other plausible purposes.
I did not mean to give the impression that I judged The Truman Show or The Matrix solely based on how well they managed to convey the key principles of Gnosticism. I don't even know if their respective creators intended to convey anything about Gnosticism at all (not that it matters, really).
Still, Gnostic themes (as well as Christian ones, obviously) do feature strongly in these movies; more so in The Truman Show than The Matrix. What I find interesting about The Truman Show is not merely the fact that it has some religious theme or other, but the fact that it portrays a person's intellectual and emotional journey of discovery and self-discovery, and does so (IMO) well. Sure, you could achieve this using some other setting, but the whole Gnostic setup works well because it maximizes Truman's cognitive dissonance. There's almost nothing that he can rely on -- not his senses, not his friends, and not even his own mind in some cases -- and he doesn't even have any convenient superpowers to fall back on. He isn't some Chosen One foretold in prophecy, he's just an ordinary guy. This creates a very real struggle which The Matrix lacks, especially toward the end.
The 'batteries' thing gets a bad rap but I can suspend my disbelief on that if I try.
AFAIK, in the original script the AIs were exploiting humans not for energy, but for the computing capacity in their brains. This was changed by the producers because viewers are morons.
↑ comment by wedrifid · 2011-09-21T05:13:11.405Z · LW(p) · GW(p)
This creates a very real struggle which The Matrix lacks, especially toward the end.
This is why I'm so glad the creators realized they had pushed their premise as far as they were capable and quit while they were ahead, never making a sequel.
↑ comment by Desrtopa · 2011-09-21T06:47:01.251Z · LW(p) · GW(p)
I don't even know if their respective creators intended to convey anything about Gnosticism at all (not that it matters, really).
I'm pretty sure that one of the Wachowski brothers talked about the deliberate Gnostic themes of The Matrix in an interview, but as for The Truman Show I have no idea.
↑ comment by wnoise · 2011-09-21T06:05:30.490Z · LW(p) · GW(p)
AFAIK, in the original script the AIs were exploiting humans not for energy, but for the computing capacity in their brains.
I have many times heard fans say this. Not once have any produced any evidence. Can you do so?
↑ comment by Richard_Kennaway · 2011-09-21T07:53:32.761Z · LW(p) · GW(p)
The only evidence I have is that it's so obviously the way the story should be. That's good enough for me. It does not matter precisely what fallen demiurge corrupted the parable away from its original perfection.
ETA: Just to clarify, I mean that as far as I'm concerned, brains used as computing substrate is the real story, even if it never crossed the Wachowskis' minds. Just like some people say there was never a sequel (although personally I didn't have a problem with it).
↑ comment by Tripitaka · 2011-09-21T16:39:24.956Z · LW(p) · GW(p)
Isn't the alternative plot as flawed as the original, insofar as, if the brains-as-computing-substrate are used for something other than running their original software (humans), there is no need to actually simulate a matrix?
↑ comment by Nornagest · 2011-09-21T17:04:49.214Z · LW(p) · GW(p)
Not only that, but I'm pretty sure building an interface that'd let you run arbitrary software on a human brain would be at least as hard and resource-intensive as building an artificial brain. We reach the useful limits of this kind of speculation pretty quickly, though; the films aren't supposed to be hard sci-fi.
↑ comment by Richard_Kennaway · 2011-09-21T19:41:17.050Z · LW(p) · GW(p)
You just need to stipulate that the brain can't stay healthy enough to do that without running a person.
But I'm not much interested in retconning a parable into hard science.
↑ comment by Bugmaster · 2011-09-21T06:46:10.371Z · LW(p) · GW(p)
According to IMDB,
An alternative is provided in the novelization and the spin-off short story "Goliath": the machines use human brains as computer components, to run "sentient programs" (the Agents and various characters in the sequels) and to solve scientific problems. Fans continue to debate the discrepancy, but there is no official explanation.
So, I guess the answer is "probably not". Sorry.
↑ comment by FeepingCreature · 2013-08-30T17:40:23.064Z · LW(p) · GW(p)
Two second head start on your 'surprise attack' to people who can already dodge bullets is inexcusable.
Inexcusable? :cracks knuckles:
Try to see it from the perspective of the agent. With how close that gun was to his head, and assuming that Trinity was not in fact completely stupid and had the training and hacker-enhanced reflexes to fire as soon as she saw the merest twitch of movement, there was really no realistic scenario where that agent could survive. A human might try to dodge anyway, and die, but for an agent, two seconds spent taunting him was two seconds delay. A minuscule difference in outcome, but still - U(let trinity taunt) > U(try to dodge and die immediately).
↑ comment by wedrifid · 2013-09-01T05:38:17.488Z · LW(p) · GW(p)
Inexcusable? :cracks knuckles:
Yes, where the meaning of 'inexcusable' is not 'someone can say words attempting to get out of it' but instead 'no excuse can be presented that the speaker or, by insinuation, any informed and reasonable person would accept'.
With how close that gun was to his head, and assuming that Trinity was not in fact completely stupid and had the training and hacker-enhanced reflexes to fire as soon as she saw the merest twitch of movement, there was really no realistic scenario where that agent could survive.
No, no realistic scenario. But in the scenario that assumes the particular science-fiction premises that define 'agent' in this context, all reasonable scenarios result in Trinity dead if she attempts that showmanship. The speed and reaction time demonstrated by the agents is such that they dodge, easily. Trinity still operates on human hardware.
↑ comment by hairyfigment · 2013-09-01T06:11:32.716Z · LW(p) · GW(p)
I remind you that these agents were designed to let the One win, else they should have gone gnome-with-a-wand-of-death on all these people.
↑ comment by private_messaging · 2013-08-28T18:54:59.039Z · LW(p) · GW(p)
He was the guy who thought that people were too dumb to operate a two-button mouse. It's not that the networks conspired to dumb us down, and it's not that people want something exactly this dumb; it's that the folks in control at the networks, much like Jobs himself, tend to make systematic errors, such as believing themselves to be further above the masses than they actually are. Sometimes that helps to counter the invalid belief that people will really want to waste a lot of effort on your creation.
↑ comment by gwern · 2013-08-30T02:13:39.314Z · LW(p) · GW(p)
He was the guy who thought that people were too dumb to operate a two-button mouse.
And many of his other simplifications were complete successes, and they are why he died a universally-beloved & beatified billionaire.
↑ comment by Shmi (shminux) · 2013-08-30T05:28:44.124Z · LW(p) · GW(p)
universally-beloved
Seems like a bit of an exaggeration. Almost universally respected, sure.
↑ comment by Lumifer · 2013-08-30T14:42:18.731Z · LW(p) · GW(p)
Yep. Respected and admired at a distance, certainly. But a lot of people who knew him personally tend to describe him as a manipulative jerk.
↑ comment by gwern · 2013-08-30T15:55:58.351Z · LW(p) · GW(p)
Which has little to do with how he & his simplifications were remembered by scores of millions of Americans. Don't you remember when he died, all the news coverage and blog posts and comments? It made me sick.
↑ comment by Shmi (shminux) · 2013-08-30T17:58:47.086Z · LW(p) · GW(p)
Meh, I thought of him as a brilliant but heavy-handed and condescending jerk long before I heard of his health problems. I refused to help my family and friends with iTunes (bad for my blood pressure) and anything Mac. My line was: if it "just works" for you, great, if not, you are SOL. Your iPod does not sync? Sorry, I don't want to hear about any device that does not allow straight file copying.
↑ comment by Lumifer · 2013-08-30T16:14:54.354Z · LW(p) · GW(p)
Actually, no, I don't remember because I didn't read them. I'm particular about the kind of pollution I allow to contaminate my mind :-)
Anyway, we seem to agree. One of the interesting things about Jobs was the distance between his private self and his public mask and public image.
↑ comment by gwern · 2013-08-30T18:22:03.592Z · LW(p) · GW(p)
Actually, no, I don't remember because I didn't read them. I'm particular about the kind of pollution I allow to contaminate my mind :-)
I am too, but I pay attention to media coverage to understand what the general population thinks so I don't get too trapped in my high-tech high-IQ bubble and wind up saying deeply wrong things like private_messaging's claim that "Jobs's one-button mice failed so ordinary people really are smart!"
↑ comment by private_messaging · 2013-08-30T18:46:10.879Z · LW(p) · GW(p)
Yeah, that's so totally what I claimed. Not. My point is that a lot of people overestimate how much smarter they are than ordinary people, and so they think ordinary people a lot dumber than ordinary people really are.
Also, the networks operate under the assumption that less intelligent people are more influenced by advertising, and therefore, the content is not even geared at the average joe, but at the below-average joe.
↑ comment by gwern · 2013-08-30T19:49:39.600Z · LW(p) · GW(p)
Yeah, that's so totally what I claimed. Not. My point is that a lot of people overestimate how much smarter they are than ordinary people, and so they think ordinary people a lot dumber than ordinary people really are.
Feel free to elaborate how your one-button mouse example and all Jobs's other successes match what you are claiming here about Jobs being a person who underestimated ordinary people's intelligence. (If Jobs went broke underestimating ordinary people's intelligence, then may heaven send me a comparable bankruptcy as soon as possible.)
↑ comment by private_messaging · 2013-08-30T20:31:04.275Z · LW(p) · GW(p)
The original quote itself is a fairly good example - he assumes that the networks produce something which is exactly what people want, whereas the networks should, ideally, produce something which the people most influenced by the advertising want; a different, less intelligent demographic. If he was speaking the truth in the quote, he must have underestimated the intelligence of average people.
Secondarily, if you want to instead argue from the success, you need to outline how and why underestimation of intelligence would be inconsistent with the success. Clearly, all around more complicated user interfaces also enjoyed huge success. I even give an explanation in my comment - people also tend to massively over-estimate the willingness of users to waste cognitive effort on their creations.
As for what lessons we can learn from it, it is perhaps that underestimating the intelligence is relatively safe for a business, albeit many failed startups began from a failure to properly explore the reasons why an apparent opportunity exists, instead explaining it with the general stupidity of others.
edit: also, you could likewise wish for a comparable bankruptcy to some highly successful but rather overcomplicated operating system.
↑ comment by gwern · 2013-08-31T00:56:34.684Z · LW(p) · GW(p)
The original quote itself is a fairly good example - he assumes that the networks produce something which is exactly what people want, whereas the networks should, ideally, produce something which the people most influenced by the advertising want; a different, less intelligent demographic.
Why's that? Why aren't the networks making most profit by appealing to as many people as possible because that increase in revenue outweighs the additional advertising price increase made possible by narrowly appealing to the stupidest demographic? And why might the stupidest demographic be the most profitable, as opposed to advertising to the smartest and richest demographics? 1% of a million loaves is a lot better than 100% of one hundred loaves.
So you're making at least two highly questionable economics arguments here, neither of which I accept.
Secondarily, if you want to instead argue from the success, you need to outline how and why underestimation of intelligence would be inconsistent with the success. Clearly, all around more complicated user interfaces also enjoyed huge success.
Apple's success is, from the original Mac on, frequently attributed to simplification and improving UIs. How is this not consistent with correctly estimating the intelligence of people to be low?
I even give an explanation in my comment - people also tend to massively over-estimate the willingness of users to waste cognitive effort on their creations.
You're absolutely right about this part. And this pervasive overestimation is one of the reasons that 'worse is better' and Engelbart died not a billionaire, and Engelbart's beloved tiling window managers & chording keyboards are unfamiliar even to uber-geeks like us, and why so many brilliant techies watch other people make fortunes off their work. Because, among their other faults, they vastly overestimate how capable ordinary people and users are of using their products.
As for what lessons we can learn from it, it is perhaps that underestimating the intelligence is relatively safe for a business
If one deliberately attempts to underestimate the intelligence of users, one may make less of a mistake than usual.
↑ comment by private_messaging · 2013-08-31T10:23:58.108Z · LW(p) · GW(p)
Why's that? Why aren't the networks making most profit by appealing to as many people as possible because that increase in revenue outweighs the additional advertising price increase made possible by narrowly appealing to the stupidest demographic? And why might the stupidest demographic be the most profitable, as opposed to advertising to the smartest and richest demographics? 1% of a million loaves is a lot better than 100% of one hundred loaves.
Seen any TV ads lately? I'm kind of wondering if you're intending to win here by making an example.
Since you're on to the markers of real-world success, how does your income compare to the median for people of your age, race, sex, and parents' economic status, anyway?
and why so many brilliant techies watch other people make fortunes off their work
I don't think making fortune is that much about not overestimating other people. Here's the typical profile of a completely failed start-up founder: someone with a high narcissism score - massive over-estimate of their own intelligence, massive under-estimating of other people's intelligence all across the board. Plus when they fail, it typically culminates in a conclusion that everyone's stupider.
edit: also with regards to techies watching others walk away with their money, there's things like this Atari story
There's a lot of cases of businesspeople getting more money, when the products are not user interfaces at all, but messy internals. Tesla and Edison are another story - Edison blew so much money on thinking that other people are stupid enough to be swayed by the electrocution of the elephant. He still made more money, of course, because he had the relevant money-making talents. And Tesla's poor business ability (still well above average) can hardly be blamed on people being too stupid to deal with complex things that happen in enclosed boxes.
↑ comment by gwern · 2013-09-01T01:20:31.646Z · LW(p) · GW(p)
Seen any TV ads lately? I'm kind of wondering if you're intending to win here by making an example.
Yes. Ads vary widely in the target audience, ranging from the utter lowest common denominator to subtle parodies and references, across all sorts of channels. The ads you see on Disney are different from the ads you see on Fox News, which are different from the ads you see on Cartoon Network's Adult Swim block, which are different from the ones on the Discovery Channel. Exactly the opposite of your crude 'ads exist only to exploit stupid people' model.
Since you're on to the markers of real-world success, how does your income compare to the median for people of your age, race, sex, and parents' economic status, anyway?
Below-average, and my own website is routinely criticized by readers for being too abstract, having a bad UI, and making no compromises or helping out readers.
Oh, I'm sorry - was I supposed to not prove the point about geeks like me usually overestimating the intelligence of ordinary people? It appears I commit the same sins. Steve Jobs would not approve of my design choices, and he would be entirely correct.
I don't think making fortune is that much about not overestimating other people. Here's the typical profile of a completely failed start-up founder: someone with a high narcissism score - massive over-estimate of their own intelligence, massive under-estimating of other people's intelligence all across the board. Plus when they fail, it typically culminates in a conclusion that everyone's stupider.
And what does this have to do with Steve Jobs? Please try to stay on topic. I'm defending a simple point here: Steve Jobs correctly estimated the intelligence of people as low, designed UIs to be as simple, intuitive, and easy to use, and this is a factor in why he died a billionaire. What does his narcissism have to do with this?
Tesla and Edison are another story - Edison blew so much money on thinking that other people are stupid enough to be swayed by the electrocution of the elephant. He still made more money, of course, because he had the relevant money-making talents.
As I recall the history, this had nothing to do with UIs or people's intelligence, but with Edison being in a losing position, having failed to invent or patent the superior alternating current technologies that Tesla did, and desperately trying anything he could to beat AC. Since this had nothing to do with UIs, all it shows is that one PR stunt was insufficient to dig Edison out of his deep hole. Which is not surprising; PR can be a powerful force, but it is far from omnipotent.
Thinking that average people's intelligence is low != thinking every PR stunt ever, no matter how crackbrained, must instantly succeed and dig someone out of any hole no matter how deep.
↑ comment by Nornagest · 2013-08-31T05:57:29.486Z · LW(p) · GW(p)
he assumes that the networks produce something which is exactly what people want, whereas the networks should, ideally, produce something which the people most influenced by the advertising want; a different, less intelligent demographic
I'd be astonished if resistance to advertising increases linearly or better with IQ once you control for viewing time. Marketing's basically applied cognitive science, and one of the major lessons of the heuristics-and-biases field is that it's really hard to outsmart our biases.
Replies from: private_messaging↑ comment by private_messaging · 2013-08-31T09:49:56.903Z · LW(p) · GW(p)
I'd be astonished if resistance to advertising increases linearly or better with IQ once you control for viewing time.
Why do you think you should control for viewing time? As a marketer, it makes no difference to you why the higher IQs are less influenced. Furthermore, a lot of advertising relies on outright lying.
Replies from: Nornagest↑ comment by Nornagest · 2013-08-31T20:30:41.456Z · LW(p) · GW(p)
Why do you think you should control for the viewing time?
Because I'd expect high-IQ populations to consume less media than the mean not thanks to anything intrinsic to IQ but because there's less media out there targeting them, and that's already factored into producers' and advertisers' expectations of audience size.
Similar considerations should come into play on the low end of the distribution: the IQ 80 cohort is roughly the same size as the IQ 120 cohort and has less disposable income, both of which should make it less attractive for marketing. Free time might have an impact, but aside from stereotype I don't know whether the lifestyles of the low-IQ lend themselves to more or less free time than those of the high-IQ; I can think of arguments for both.
Exposure to marketing tactics might also build resistance to them, and I'd expect that to be proportional in part to media exposure.
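The cohort-size claim above follows from the symmetry of the IQ distribution; a quick check, assuming the conventional normal model with mean 100 and SD 15 (the bands of +/-5 points are an arbitrary choice for illustration):

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

# Fraction of the population within +/-5 IQ points of each level.
low = iq.cdf(85) - iq.cdf(75)     # cohort around IQ 80
high = iq.cdf(125) - iq.cdf(115)  # cohort around IQ 120
print(round(low, 4), round(high, 4))  # equal, since 80 and 120 are equidistant from the mean
```

Each band works out to roughly 11% of the population, and the two are identical by symmetry, which is the point being made.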
↑ comment by Lumifer · 2013-08-30T19:54:22.937Z · LW(p) · GW(p)
"No one in this world, so far as I know-and I have searched the record for years, and employed agents to help me-has ever lost money by underestimating the intelligence of the great masses of the plain people." -- H.L.Mencken
Replies from: Eliezer_Yudkowsky, private_messaging↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-30T21:57:18.915Z · LW(p) · GW(p)
I think this is happening with Hollywood, but that would be a longer story.
Replies from: somervta↑ comment by private_messaging · 2013-08-30T20:41:51.748Z · LW(p) · GW(p)
I think there are a great many apes that under-estimated the intelligence of a tiger or a bear, and haven't contributed to our gene pool. There are also all those wars where underestimating the intelligence of the enemy masses cost someone a great deal of money and, at times, their own life.
↑ comment by FeepingCreature · 2013-08-30T17:42:04.459Z · LW(p) · GW(p)
My parents are incapable of using the context menu in any way.
Jobs may have been on to something.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-08-30T17:44:15.295Z · LW(p) · GW(p)
Forcing everyone to the lowest common denominator hardly counts as "onto something".
Replies from: gwern↑ comment by mare-of-night · 2013-08-30T21:08:36.787Z · LW(p) · GW(p)
He was the guy who thought that people were too dumb to operate a two-button mouse.
Did he say this, or are you inferring it from his having designed a one-button mouse?
Having two incorrect beliefs that counter each other (thinking that people want to spend time on your creation but are less intelligent than they actually are) could result in good designs, but so could making neither mistake. I'd expect any decent UI designer to understand that the user shouldn't need to pay attention to the design, and/or that users will sometimes be tired, impatient or distracted even if they're not stupid.
Replies from: private_messaging↑ comment by private_messaging · 2013-08-30T22:08:23.259Z · LW(p) · GW(p)
Did he say this, or are you inferring it from his having designed a one-button mouse?
I recall reading that he tried a three-button mouse, didn't like it, said it was too complicated, and went for a one-button one. Further down the road they needed the difficult-to-teach alternate-click functionality and implemented it with option-click rather than an extra button. Apple stuck with the one-button mouse until 2005 or so, when it jumped to four programmable buttons and a scrollball.
The inventor of the mouse and of many aspects of the user interface, Douglas Engelbart, went with three buttons and is reported on Wikipedia as stating he'd have put five if he'd had enough space for the switches.
Replies from: arundelo↑ comment by arundelo · 2013-08-30T22:20:53.162Z · LW(p) · GW(p)
I can't find a citation, but the rationale I've heard is to make it easier to learn how to use a Macintosh (or a Lisa) by watching someone else use one.
Replies from: David_Gerard↑ comment by David_Gerard · 2013-08-31T10:13:38.972Z · LW(p) · GW(p)
I did dial-up tech support in 1999-2000. Lots of general consumers who'd just got on this "internet" thing and had no idea what they were doing. It was SO HARD to explain right-clicking to them. Steve Jobs was right: more than one mouse button confuses people.
What happened, however, is that Mosaic and Netscape were written for X11 and then for Windows. So the Web pretty much required a second mouse button. Eventually Apple gave up and went with it.
(The important thing about computers is that they are still stupid, too hard to use and don't work. I speak as a professional here.)
Replies from: wedrifid, private_messaging↑ comment by wedrifid · 2013-09-02T01:22:26.945Z · LW(p) · GW(p)
What happened, however, is that Mosaic and Netscape were written for X11 and then for Windows. So the Web pretty much required a second mouse button. Eventually Apple gave up and went with it.
And for this we can be eternally grateful. While one button may be simple, two buttons is a whole heap more efficient. Or five buttons and some wheels.
I don't object to Steve Jobs (or rather those like him) making feature sparse products targeted to a lowest common denominator audience. I'm just glad there are alternatives to go with that are less rigidly condescending.
↑ comment by private_messaging · 2013-08-31T16:47:24.013Z · LW(p) · GW(p)
But did you deal with explaining option-clicking? The problem is that you only see the customers who didn't get "press the right button on the mouse rather than the left". It's sort of like dealing with customer responses: you have, say, a 1% failure rate, but from the feedback it looks like you have a 50-90% failure rate.
Then, of course, Apple also came up with such miracles of design as double click (launch) vs. slow double click (rename). And while the right-click is a matter of explanation - put your hand there so and so, press with your middle finger - the double-clicking behaviour is a matter of learning a fine motor skill, so older people have a lot of trouble with it.
edit: what percentage of people do you think could not get right clicking? And did you have to deal with one-button users who must option-click?
Replies from: David_Gerard↑ comment by David_Gerard · 2013-08-31T20:41:09.627Z · LW(p) · GW(p)
This was 1999, Mac OS9 as it was didn't really have option-clicking then.
I wouldn't estimate a percentage, but basically we had 10% Mac users and 2% of our calls came from said Mac users.
It is possible that in 2013 people have been beaten into understanding right-clicking ... but it strikes me as more likely those people are using phones and iPads instead. The kids may get taught right-clicking at school.
Replies from: private_messaging↑ comment by private_messaging · 2013-08-31T22:41:13.529Z · LW(p) · GW(p)
I remember classic Mac OS. One application could make everything fail due to the lack of real process boundaries. It effectively relied on how amazingly people are able to adapt to things like this and avoid doing whatever causes a crash (something I notice a lot when I start using a new application), albeit not by deliberate design.
edit: ahh, it had ctrl-click back then: http://www.macwrite.com/beyond-basics/contextual-menus-mac-os-x (describes how ones in OS X differ from ones they had since OS 8)
Key quote:
Most people have never even heard of these menus, and unless you have a two-button mouse (as opposed to the standard single-button mouse), you probably wouldn't figure it out otherwise.
What I like about 2 buttons is that it is discoverable. I.e. you go like, ohh, there's two buttons here, what will happen if I press the other one?
Replies from: David_Gerard↑ comment by David_Gerard · 2013-08-31T23:58:55.127Z · LW(p) · GW(p)
Now that you mention it, I remember discovering command-click menus in OS 9 and being surprised. (In some apps, particularly web browsers, they would also appear if you held the mouse button down.)
comment by MinibearRex · 2011-09-01T22:01:10.002Z · LW(p) · GW(p)
The proposition here is that the human brain is, in large part, a machine for winning arguments, a machine for convincing others that its owner is in the right - and thus a machine for convincing its owner of the same thing. The brain is like a good lawyer: given any set of interests to defend, it sets about convincing the world of their moral and logical worth, regardless of whether they in fact have any of either. Like a lawyer, the human brain wants victory, not truth; and, like a lawyer, it is sometimes more admirable for skill than for virtue.
Robert Wright, The Moral Animal
comment by CronoDAS · 2011-09-24T22:56:02.482Z · LW(p) · GW(p)
If we don't change our direction, we're likely to end up where we're headed.
-- Chinese proverb
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2011-09-25T00:27:29.493Z · LW(p) · GW(p)
Ian Stewart invented the game of tautoverbs. Take a proverb and manipulate it so that it's tautological, e.g. "Look after the pennies and the pennies will be looked after" or "No news is no news". There's a kind of Zen joy in forming them.
This proverb, however, is already there.
comment by crazy88 · 2011-09-04T07:29:46.780Z · LW(p) · GW(p)
Ralph Hull made a reasonable living as a magician milking a card trick he called "The Tuned Deck"...Hull enjoyed subjecting himself to the scrutiny of colleagues who attempted to eliminate, one by one, various explanations by depriving him of the ability to perform a particular sleight of hand. But the real trick was over before it had even begun, for the magic was not in clever fingers but in a clever name. The blatantly singular referent cried out for a blatantly singular explanation, when in reality The Tuned Deck was not one trick but many. The search for a single explanation is what kept this multiply determined illusion so long a mystery.
--Nicholas Epley, "Blackwell Handbook of Judgment and Decision Making"
Replies from: PhilGoetz, Normal_Anomaly, MinibearRex↑ comment by PhilGoetz · 2011-09-10T15:48:01.977Z · LW(p) · GW(p)
Google tells me Dennett referred to this, in arguing that there is nothing mysterious about consciousness, because it is just a set of many tricks.
It’s a shame that the niceness of the story of the tuned deck makes Dennett’s bad argument about consciousness more appealing.
Dennett’s argument that there is no hard problem of consciousness can be summarized thus:
Take the hard problem of consciousness.
Add in all the other things anybody has ever called “consciousness”.
Solve all those other issues one by one.
Conveniently forget about the hard problem of consciousness.
↑ comment by Normal_Anomaly · 2011-09-08T00:17:59.382Z · LW(p) · GW(p)
Would this count as doing something deliberately complicated to throw off anyone with an Occam prior?
↑ comment by MinibearRex · 2011-09-06T20:54:04.601Z · LW(p) · GW(p)
You don't have to put the little '>' signs in on every line, just the beginning of a paragraph.
Replies from: crazy88comment by [deleted] · 2011-09-01T14:38:27.013Z · LW(p) · GW(p)
.
Replies from: Vladimir_Nesov, juliawise↑ comment by Vladimir_Nesov · 2011-09-05T12:28:09.072Z · LW(p) · GW(p)
(Only in the sense of constructing some plan of action (or inaction) that currently seems no worse than others, not in the sense of deciding to believe things you have no grounds for believing. "Make up your mind" is a bad phrase because of this equivocation.)
comment by lukeprog · 2011-09-01T12:04:59.053Z · LW(p) · GW(p)
The rule that human beings seem to follow is to engage the brain only when all else fails - and usually not even then.
David Hull, Science and Selection: Essays on Biological Evolution and the Philosophy of Science
Replies from: James_Miller↑ comment by James_Miller · 2011-09-01T17:23:38.621Z · LW(p) · GW(p)
This is the idea behind dual n-back: that the only strategy your lazy brain can implement to do better at the game is to increase its working memory.
comment by Grognor · 2011-09-28T03:51:15.483Z · LW(p) · GW(p)
Kant was proud of having discovered in man the faculty for synthetic judgements a priori. But "How are synthetic judgements a priori possible?" How did Kant answer? By saying "By virtue of a faculty" (though unfortunately not in five words). But is that an answer? Or rather merely a repetition of the question? How does opium induce sleep? "by virtue of a faculty, namely the virtus dormitiva", replies the doctor in Molière. Such replies belong in comedy. It is high time to replace the Kantian question by another question, "Why is belief in such judgements necessary?"
Nietzsche, Beyond Good and Evil
comment by Scott Alexander (Yvain) · 2011-09-19T18:22:15.910Z · LW(p) · GW(p)
I think there's a few posts by Yudkowsky that I think deserve the highest praise one can give to a philosopher's writing: That, on rereading them, I have no idea what I found so mindblowing about them the first time. Everything they say seems patently obvious now!
-- Ari Rahikkala
Replies from: MinibearRex↑ comment by MinibearRex · 2011-09-20T21:04:41.366Z · LW(p) · GW(p)
Is this really a rationality quote, or is it just pro-Yudkowsky?
It does set a standard for the clarity of any writing you do, but I've seen substantially better quotes on that topic before.
Replies from: wedrifid, Yvain↑ comment by wedrifid · 2011-09-20T23:08:36.740Z · LW(p) · GW(p)
Is this really a rationality quote
I say yes. This is the difference between learning 'Philosophy' - how to quote deep stuff with names like Wittgenstein and Nietzsche attached - and just learning stuff about reality that is simply obvious. Once the knowledge is there it shouldn't seem remarkable at all.
For me at least this is one of the most important factors when evaluating a learning source. Is the information I'm learning simple in retrospect, or is it a bunch of complicated rote learning? If the latter, is there a good reason, related to complexity in the actual world, that requires me to be learning complex arbitrary things?
↑ comment by Scott Alexander (Yvain) · 2011-09-21T10:02:15.370Z · LW(p) · GW(p)
Related to hindsight bias and inferential distances. I'd sort of noticed this happening before, but if I hadn't realized other people had the same experience I probably would have underestimated the degree to which rationality had changed my worldview and so underestimated the positive effect of spreading it to others.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-09-21T10:30:36.438Z · LW(p) · GW(p)
(Your "\" key is adjacent to "Shift".)
comment by [deleted] · 2011-09-24T15:35:50.530Z · LW(p) · GW(p)
The key is that it's adaptive. It's not that it succeeds despite the bad results of its good intentions. It succeeds because of the bad results of its good intentions.
--Mencius Moldbug
comment by lukeprog · 2011-09-16T00:53:43.535Z · LW(p) · GW(p)
It is remarkable that [probability theory], which originated in the consideration of games of chance, should have become the most important object of human knowledge... The most important questions of life are, for the most part, really only problems of probability.
Laplace
comment by Thomas · 2011-09-05T13:25:02.547Z · LW(p) · GW(p)
The inventor who finds a way to make soap from peanuts has more genuine imagination than the revolutionary with a bayonet, because he has cultivated the faculty of imagining the hidden potentiality of the real. This is much harder than imagining the unreal, which may be why there are so many more utopians than inventors.
- Joe Sobran
↑ comment by Vladimir_Nesov · 2011-09-05T13:31:19.471Z · LW(p) · GW(p)
which may be why there are so many more utopians than inventors
Is that the case?
Replies from: Thomas↑ comment by Thomas · 2011-09-05T13:48:46.436Z · LW(p) · GW(p)
The majority dreams about a "just society"; the minority dreams about a better one through technological advances. Never mind that there was the 20th century, in which "socialism" brought us nothing and technology brought us everything.
Replies from: Vladimir_Nesov, Raw_Power, MixedNuts↑ comment by Vladimir_Nesov · 2011-09-05T14:35:21.276Z · LW(p) · GW(p)
Echoing a utopian meme is analogous to stamping an instance of an invention, not to inventing something anew. It is inventors of utopian dreams that I doubt to be more numerous than inventors of technology.
Replies from: gwern, Thomas, PhilGoetz↑ comment by Thomas · 2011-09-05T15:19:44.594Z · LW(p) · GW(p)
You may be right here. Utopias are usually also quite uninnovative. "All people will be brothers and sisters with enough to eat and Bible (or something else stupid) reading in a community house every night".
Variations are not that great.
↑ comment by PhilGoetz · 2011-09-10T15:31:26.386Z · LW(p) · GW(p)
Can you invent a utopia? A utopia is an incoherent concept about a society that contains too many internal contradictions or impracticalities to ever exist. Thus, it cannot be invented any more than a perpetual motion machine can be.
If you do consider utopias inventable, what's the difference between "inventing a new utopia" and "having a new preference"? You want X; you dream of a world where you get X, inventing Utopia X.
↑ comment by Raw_Power · 2011-09-06T00:31:59.698Z · LW(p) · GW(p)
I feel obliged to point out that social democracy is working quite well in Europe and elsewhere, and we owe it, among other stuff, free universal health care and paid vacations. Those count as "hidden potentiality of the real." Which brings us to the following point: what's, a priori, the difference between "hidden potentiality of the real" and "unreal"? Because if it's "stuff that's actually been made", then I could tell you, as an engineer, of the absolutely staggering amount of bullshit patents we get to prove are bullshit every day. You'd be amazed how many idiots are still trying to build perpetual motion machines. But you've got one thing right: we do owe technology everything, the same way everyone owes their parents everything. That doesn't mean they get all the merit.
Replies from: CG_Morton, None↑ comment by CG_Morton · 2011-09-13T14:49:31.862Z · LW(p) · GW(p)
I feel obliged to point out that social democracy is working quite well in Europe and elsewhere and we owe it, among other stuff, free universal health care and paid vacations.
It's not fair to say we 'owe' social democracy for free universal health care and paid vacations, because they aren't so much effects of the system as fundamental tenets of it. It's much like saying we owe free-market capitalism for free markets - without these things we wouldn't recognize it as socialism. Rather, the question is whether the marginal gain in things like quality of living is worth the marginal losses in things like autonomy. Universal health care is not an end in itself.
Replies from: Raw_Power↑ comment by Raw_Power · 2011-09-16T13:23:41.630Z · LW(p) · GW(p)
I dunno, man, maybe it's a confusion on my part, but universal health coverage seems like a good enough goal in and of itself. Not specifically in the form of a State-sponsored organization, but the function of everyone having the right to health treatments, of no one being left to die just because they happen not to have a given amount of money at a given time. I think that, from a humanistic point of view, it's sort of obvious that we should have it if we can pay for it.
Replies from: Normal_Anomaly, Jack, lessdazed↑ comment by Normal_Anomaly · 2011-09-16T21:59:29.164Z · LW(p) · GW(p)
Free universal health care is a good thing in itself; the question is whether or not that's worth the costs of higher taxes and any bureaucratic inefficiencies that may exist.
Replies from: Eugine_Nier, Raw_Power↑ comment by Eugine_Nier · 2011-09-18T04:26:49.904Z · LW(p) · GW(p)
Free universal health care is a good thing in itself
The healthcare isn't actually "free". It's either paid for individually, collectively on a national level, or some intermediate level, e.g., insurance companies. The question is what the most efficient way to deliver it is?
↑ comment by Raw_Power · 2011-09-18T01:01:29.327Z · LW(p) · GW(p)
Well, at least the bureaucratic inefficiencies are entirely incidental to the problem, and there's no decisive evidence that corporate bureaucracies are any better than public ones (I suspect partisanship gets in the way of finding such evidence, as do a slew of other variables), so that factor... doesn't factor. As for the higher taxes: how much are you ready to pay so that, the day you catch some horrible disease, the public entity will be able to afford diverting enough of its resources to save you? What are you more afraid of: cancer and other potentially fatal diseases that will eventually kill you; terrorism, invading armies, criminals, and other people trying to kill you; boredom? What would be your priorities in assigning which proportion of the taxes you pay goes to funding which projects?
... Actually that might be a neat reform. Budget decision by a combination of individual budget assignments by every citizen...
Replies from: Normal_Anomaly, Eugine_Nier, NancyLebovitz, Eugine_Nier↑ comment by Normal_Anomaly · 2011-09-18T01:28:37.874Z · LW(p) · GW(p)
Well, at least the bureaucratic inefficiencies are entirely incidental to the problem, and there's no evidence for corporate bureaucracies to be any better than public ones, so that factor... doesn't factor.
This claim is disputed, but I have negligible information either way.
As for the higher taxes... how much are you ready to pay so that, the day you catch some horrible disease, the public entity will be able to afford diverting enough of its resources to save you? What are you more afraid of, cancer and other potentially-fatal diseases that will eventually kill you, terrorism/invading armies/criminals/people trying to kill you, boredom...?
Personally, I say that universal health care would be worth the higher taxes. For any given person the answer depends on their utility function: the relative values assigned to freedom, avoidance of harm, happiness, life, fairness, etc.
... Actually that might be a neat reform. Budget decision by a combination of individual budget assignments by every citizen...
This sets off my Really Bad Idea alarm. I don't trust the aggregate decisions of individual citizens to add up to any kind of sane budget relative to their CEV. (Note: the following sentences are American-centric.) Probably research would get massively under-funded. Defense would probably be funded less than it is now, but that might well put it closer to the optimal value if it forced some cost-effectiveness increases.
Basically, each person would assign all their taxes to whatever they thought was most important, thus prioritizing programs according to how many people pick them as first choice, regardless of how many dollars it takes to make a given one work. The same kind of math used to discuss different voting/electoral college variants would inform this, I think, but I'm too lazy to look it up. And of course, if too much freedom was allowed in deciding, all companies and most people would decide to allocate their money to themselves.
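A minimal sketch of the allocation mechanism being debated, assuming each citizen splits their own tax bill across programs by fraction (the program names, amounts, and the `allocate` helper are all invented for illustration):

```python
from collections import defaultdict

def allocate(assignments):
    """Pool each citizen's tax bill according to their chosen split.

    assignments: list of (tax_paid, {program: fraction}) pairs,
    where each citizen's fractions sum to 1.
    """
    budget = defaultdict(float)
    for tax, split in assignments:
        for program, fraction in split.items():
            budget[program] += tax * fraction
    return dict(budget)

citizens = [
    (1000, {"defense": 1.0}),                 # single-issue allocator
    (1000, {"research": 0.5, "health": 0.5}), # split allocator
    (500,  {"health": 1.0}),
]
print(allocate(citizens))  # {'defense': 1000.0, 'research': 500.0, 'health': 1000.0}
```

The failure mode described above appears when every `split` puts weight 1.0 on a single program; the "only so much of your money per endeavor" patch proposed later in the thread would correspond to capping any one fraction.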
Replies from: Raw_Power, lessdazed↑ comment by Raw_Power · 2011-09-18T01:52:49.336Z · LW(p) · GW(p)
Hm. That'd be some very near-sighted companies and people, don't you think? The Defending Your Doorstep fallacy, etc. Still, with some education of the public ("Dear viewers, THIS is what would happen if everyone decided all the money should go to the Army right after a terrorist attack") and some patches (I can't imagine why people would put all their money into whatever they think is most important, rather than distributing it according to an order of priorities: usually people's interests aren't so clear-cut that they put one cause at such priority that the others become negligible... but if they did do that, just add a rule that there's only so much of your money you can dedicate to a specific type of endeavor and all related endeavors).
This reminds me of Kino's Journey and the very neat, simplistic solutions people applied to their problems. The main reason those solutions failed was that the people involved were incredibly dumb in using them. The Democracy episode almost broke my willing suspension of disbelief, as did the Telepathy one. Are you familiar with that story?
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2011-09-18T02:26:19.398Z · LW(p) · GW(p)
Re your 1st paragraph: you have a much higher opinion of human rationality than myself. I hope you're right, but I doubt it.
Re your second paragraph: I am currently watching Kino's Journey, and will respond later. Thanks for the reference, it sounds interesting.
Replies from: Raw_Power↑ comment by Raw_Power · 2011-09-18T16:02:16.048Z · LW(p) · GW(p)
Human rationality can be trained and improved, it's not an innate feature. To do that is part of the entire point of this site.
I hope you enjoy it. It is very interesting. Beware of generalizing from fictional evidence... but fiction is sometimes all we have to explore certain hypotheticals...
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2011-09-18T20:03:10.799Z · LW(p) · GW(p)
Human rationality can be trained and improved, it's not an innate feature. To do that is part of the entire point of this site.
True. Individual budget allocation would be a bad idea in present day America, but it wouldn't be a bad idea everywhere and for all time.
↑ comment by lessdazed · 2011-09-18T06:40:37.149Z · LW(p) · GW(p)
universal health care
What does this mean? In particular, what does "universal" mean?
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2011-09-18T15:28:39.556Z · LW(p) · GW(p)
What does this mean? In particular, what does "universal" mean?
It means that each person in the country would, if ey got sick, be able to receive affordable treatment. This is true in, for example, Great Britain, where the NHS pays for people's medical care regardless of their wealth. It is not true in the United States, where people who cannot afford health insurance and do not have it provided by their employer go without needed treatments because they can't afford them.
ETA: does someone think this definition is wrong? What's another definition I'm missing?
Replies from: lessdazed↑ comment by lessdazed · 2011-09-18T18:47:07.021Z · LW(p) · GW(p)
How different are the ways a society would treat citizens and various other people not covered by a system, such as Americans? What about tourists?
Isn't it true that Great Britain could provide better medical care if it diverted resources currently spent elsewhere? How are any other government expenditures and fungible things (like autonomy) ever justified if health could be improved with more of a focus on it?
Do you primarily value a right to medical care, or instead optimal health outcomes?
An intuition pump: What if a genie offered to, for free, provide medical care to all people in a society equivalent to that the American President gets, and a second genie, much better at medical care, offered even better average health outcomes for all people, with the caveat that he would randomly deny patients care (every patient would still have a better chance under the second genie, until the patient was rejected, of course). Both conditional on no other health care in the society, especially not for those denied care by the second genie. Which genie would you choose for the society? Under the first, health outcomes would be good and everyone would have a right to health care, under the second, health outcomes would be even better for every type of patient, but there would be no right to care and some people with curable diseases would be left to die.
If you would choose the first genie, your choice increases the net suffering every person can expect.
If you would choose the second genie, then you're making a prosaic claim about the efficiency of systems rather than a novel moral point about rights for disadvantaged people - a claim that must be vulnerable to evidence and can't rightly be part of your utility function.
What if there were a third genie much like the one you chose, except the third genie could provide even better care to rich people. Would you prefer the third genie and the resulting inequality?
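The two-genie comparison can be made concrete with assumed utilities (the numbers below are purely illustrative; nothing in the comment fixes them):

```python
# Assumed utilities: 0.0 for an untreated patient, 1.0 under the first
# genie's presidential-level care, 1.5 under the second genie's better care.
p_reject = 0.1  # fraction of patients the second genie randomly denies

ev_genie1 = 1.0                                    # everyone is treated
ev_genie2 = (1 - p_reject) * 1.5 + p_reject * 0.0  # expected utility per patient
print(ev_genie2 > ev_genie1)  # second genie wins in expectation despite denials
```

With these numbers every patient is better off ex ante under the second genie (1.35 vs. 1.0 expected utility), even though some end up with nothing; raise `p_reject` enough and the comparison flips, which is exactly the empirical question about system efficiency rather than rights.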
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2011-09-18T20:11:33.407Z · LW(p) · GW(p)
I prefer the genie which provides the maximum average utility* to the citizens, with the important note that utility is probably non-linear in health. The way I read your comment, that would appear to be the third. Also note that the cost of providing health care is an important factor in real life, because that money could also go to education. Basically, I do my best to vote like a rational consequentialist interested in everyone's welfare.
*I am aware that both average and total utilitarianism have mathematical issues (repugnant conclusion etc), but they aren't relevant here.
Replies from: lessdazed↑ comment by lessdazed · 2011-09-18T22:42:16.308Z · LW(p) · GW(p)
cost of providing health care is an important factor in real life, because that money could also go to...
OK, so when you say "Personally, I say that universal health care would be worth the higher taxes," you are referring to internal resource distribution along state, national, voluntary, or other lines to achieve efficient aggregate outcomes by taking advantage of the principle of diminishing returns and taking from the rich and giving to the poor. You don't believe in a right to care, or equal treatment for outgroup non-citizens elsewhere, or that it's very important for treatment to be equal between elites and the poor. Not an unusual position, it's potentially coherent, consistent, altruistic, and other good things.
I asked for clarification from your original "Personally, I say that universal health care would be worth the higher taxes," because I think that phrasing is compatible with several other positions.
↑ comment by Eugine_Nier · 2011-09-18T04:16:48.958Z · LW(p) · GW(p)
Well, at least the bureaucratic inefficiencies are entirely incidental to the problem, and there's no evidence for corporate bureaucracies to be any better than public ones,
Corporations that develop excessive inefficiencies tend to go bankrupt. (OK, sometimes they get government bailouts or are otherwise propped up by the government, but that is another argument against government intervention.)
Replies from: Desrtopa↑ comment by NancyLebovitz · 2011-09-18T04:50:53.415Z · LW(p) · GW(p)
The advertising would get very tiresome, but probably not bad enough to oppose the idea for that reason.
↑ comment by Eugine_Nier · 2011-09-18T04:31:23.457Z · LW(p) · GW(p)
As for the higher taxes... how much are you ready to pay so that, the day you catch some horrible disease, the public entity will be able to afford diverting enough of its resources to save you?
Well, maybe if it wasn't for the taxes I would be able to afford to pay for treatment myself. (Taxes are a zero-sum process, actually negative sum because of the inefficiencies.) If the idea is risk mitigation, then why not use a private insurance company?
Replies from: Raw_Power↑ comment by Raw_Power · 2011-09-18T15:58:54.608Z · LW(p) · GW(p)
Well, given that the government's alleged goal is to provide the service while the private organization's alleged goal is to make a profit, one would expect the State (I like to call the organization the State or the Administration; the Government should simply mean whichever team of politically appointed president/ministers/cabinet is current, rather than the entire bureaucracy) to be less likely to "weasel out of" paying for your treatment - a risk I (in complete and utter subjectivity, and in the here and now) deem more frightening (and frustrating) than the disease itself.
And yes, risk mitigation is always negative sum, that's kind of a thermodynamic requisite.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2011-09-18T16:58:04.124Z · LW(p) · GW(p)
one would expect the State (...) to be less likely to "weasel out of" paying for your treatment,
Well, since the ministry of health's budget is finite, whereas the potential amount of money that could be spent on everyone's treatment isn't, the state very quickly discovers that it too needs to find ways to weasel out of paying for treatment.
And yes, risk mitigation is always negative sum, that's kind of a thermodynamic requisite.
And the more layers of bureaucracy involved, the more negative sum it is.
Replies from: Raw_Power↑ comment by Raw_Power · 2011-09-18T17:23:30.652Z · LW(p) · GW(p)
1. You mean they engage in the exact same kind of legal practices as private groups, with the same frequency? Given the difference in position, methodology and resources, I doubt it, but I don't have any evidence pointing to either side about the behavior of Universal Health Coverage systems. I'd need time to ask a few people and find a few sources.
2. I don't think it's a matter of "layers" so much as one of how those layers are organized. The exact same number of people can have productivity outputs that are radically different as a function of the algorithms used to organize their work. Your post seems to imply that State services have more bureaucratic layers than private ones. I'd think that'd be something to decide case by case, but I wouldn't say it's a foregone conclusion: private insurers are infamous for being bureaucratic hells too. Ones deliberately designed to mislead and confuse unhappy clients, at that.
↑ comment by Jack · 2011-09-19T00:11:10.255Z · LW(p) · GW(p)
This conversation appears not to have incorporated the very strong evidence that higher health care spending doesn't lead to improved health outcomes.
Personally I'd reform the American system in one of two ways- either privatize health care completely so that the cost of using a health care provider is directly connected to the decision to use health care OR turn the whole thing over to the state and ration care (alternatively you could do the latter for basic health care and then let individuals purchase anything above that). What we have now leaves health care consumption decisions up to individuals but collectivizes costs-- which is obviously a recipe for inflating an industry well above its utility.
Replies from: gwern↑ comment by gwern · 2011-09-19T13:53:12.807Z · LW(p) · GW(p)
This conversation appears not to have incorporated the very strong evidence that higher health care spending doesn't lead to improved health outcomes.
At what margin? Using randomized procedures?
- http://www.overcomingbias.com/2009/01/free-medicine-no-help-for-ghanaian-kids.html
- http://www.overcomingbias.com/2007/05/rand_health_ins.html
- http://www.overcomingbias.com/2011/07/the-oregon-health-insurance-experiment.html
↑ comment by lessdazed · 2011-09-16T23:00:29.103Z · LW(p) · GW(p)
universal
What does this mean?
of no-one being left to die just because they happen not to have a given amount of money at a given time
What does this mean?
we should have it if we can pay for it
What does this mean?
Replies from: Raw_Power↑ comment by Raw_Power · 2011-09-18T00:54:02.334Z · LW(p) · GW(p)
I have left it ambiguous on purpose. What this means specifically depends on the means available at any given time.
IDEALLY: Universal means everyone should have a right to as much health service as is necessary for their bodies and minds to function as well as they can, if they ask for it. That would include education, coaching, and sports, among many others. And nobody should ever be allowed to die if they don't want to and there's any way of preventing it.
Between "leaving anyone to die because they don't have the money or assets to pay for their treatment" [your question puzzles me; what part of this scenario don't you understand?] and "spending all our country's budget on progressively changing the organs of seventy-year-olds", there's a lot of intermediate points. The touchy problem is deciding how much we want to pay for, and how, and who pays it for whom. No matter how you cut the cake, given our current state of development, at some point you have to say that X person dies in spite of their will, either because they can't afford to live or because society can't afford it. So, are you going to deny that seventy-year-old their new organs?
Replies from: Eugine_Nier, wedrifid, lessdazed↑ comment by Eugine_Nier · 2011-09-18T04:35:33.561Z · LW(p) · GW(p)
So, are you going to deny that seventy-year-old their new organs?
Yes, it's amazing how many bad decisions are made because it's heartbreaking to just say no.
Replies from: Raw_Power↑ comment by Raw_Power · 2011-09-18T15:50:53.790Z · LW(p) · GW(p)
More like it's potentially corrupting, but yeah, that too.
↑ comment by wedrifid · 2011-09-18T04:39:47.437Z · LW(p) · GW(p)
So, are you going to deny that seventy-year-old their new organs?
Yes, unless there is nobody else that can use them. If my watching of House tells me anything, it's that standard practice is to prioritize by this kind of criterion.
Replies from: Raw_Power↑ comment by lessdazed · 2011-09-18T06:31:01.523Z · LW(p) · GW(p)
what part of this scenario don't you understand
Resources are limited and medical demand is not. The medical response time if the President of the United States gets shot is shorter than if anyone else gets shot. It's not possible to give everyone as much health protection as the president. So it's not a scenario. I can imagine each person as being the only person on earth with such care, and I can imagine imagining a single hypothetical world in which each person has that level of care, but I can't actually imagine it.
there's a lot of intermediate points
That indicates that no argument about the type of thing to be done will be based on a difference in kind. It won't resemble saying that we should switch from what happens at present to "no-one being left to die just because they happen not to have a given amount of money". We currently allow some people to die based on rationing, and you are literally proposing the impossible to connote that you would prefer a different rationing system, but then you get tripped up when sometimes speaking as if the proposal is literally possible.
deciding how much we want to pay for
Declaring that someone has a right is declaring one's willingness to help that person get something from others over their protests. We currently allow multimillionaires, and we allow them to spend all their money trying to discover a cure for their child's rare or unique disease, and we allow people to drive in populated areas.
We allow people to spend money in sub-optimal ways. Resources being limited means that not every disease gets the same attention. Allowing people to drive in populated areas is implicitly valuing the fun and convenience of some people driving over the actuarially inevitable death and carnage to un-consenting pedestrians.
What this means specifically depends on the means available at any given time.
I don't understand how you want to ration or limit people, in an ideal world, because you have proposed the literally impossible as a way of gesturing towards a different rationing system (infinitely) short of that ideal and (as far as I can see) not different in kind than any other system.
By analogy, you don't describe what you mean when you declare "infinity" a number preferable to 1206. Do you mean that any number higher than 1206 is equally good? Do you mean that every number is better than its predecessor, no matter what? Since you probably don't, then...what number do you mean? Approximately?
I can perhaps get an idea of the function if you tell me some points of x (resources) and y (what you are proposing).
Replies from: wedrifid, Raw_Power↑ comment by Raw_Power · 2011-09-18T15:29:48.239Z · LW(p) · GW(p)
Your post confuses me a lot: I am being entirely honest about this, there seem to be illusions of transparency and (un)common priors. The only part I feel capable of responding to is the first: I can perfectly imagine every human being having as much medical care as the chief of the wealthiest, most powerful organization in the world, in an FAI-regimented society. For a given value of "imagining", of course: I have a vague idea of nanomachines in the bloodstream, implants, etc. I basically expect human bodies to be self-sufficient in taking care of themselves, and able to acquire and use the necessary raw materials with ease, including being able to medically operate on themselves. The rare cases will be left to the rare specialist, and I expect everyone to be able to take care of the more common problems their bodies and minds may encounter.
As for the rest of your post:
What are people's rationing optimization functions? Is it possible to get an entire society to agree to a single one, for a given value of "agree"? Or is it that people don't have a consistent optimization function, and that it's not so much a matter of some things being valued over others as a matter of tradition and sheer thoughtless inertia? Yes, I know I am answering questions with questions, but that's all I got right now.
Replies from: lessdazed↑ comment by lessdazed · 2011-09-18T17:22:18.477Z · LW(p) · GW(p)
Your post confuses me a lot: I am being entirely honest about this, there seem to be illusions of transparency
Thank you for leading with that.
In an FAI-regimented society
This seems to sidestep the limited resources issue, making your argument not clearly apply outside of that context.
Let me give an example outside of health to discuss the resources issue. I have read that when a guy tried to make a nuclear power source in his garage from clock parts, government agents swooped in very soon after it started emitting radiation - presumably there are people monitoring for that, with field agents ever-ready to pursue leads. This means that, for some 911 calls where the nuclear team would be the first to the scene, we allow the normal police to handle it, even at the risk of people's lives. If that isn't the case, imagine a world in which it were so, and in which it would be easy to tell that the police would be slower than the nuke guys (who don't even leave their stations most days). I think having such an institution would be worthwhile, even at the cost of crimes in progress being responded to slower.
Similarly, I think many things would be worth diverting resources from better policing, such as health - and from health to other things, such as better policing, and from both to fun, privacy, autonomy, and so forth. I'm only referring to a world in which resources are limited.
It is possible that there is a society wealthy enough to ensure very good health care for those it can influence by eliminating all choice about what to eat, mandating exercise, eliminating privacy to enforce those things, etc. It's not obvious to me that it's always the right choice to optimize health or that that would be best for the hypothetical society.
Considering the principle of diminishing returns, there's no plausible way of describing people's preferences such that all effort should be put towards better health. We don't have to be able to describe them perfectly to say that being forced to eat only the healthiest foods does not comport with them - ask any child told to eat vegetables before dessert.
↑ comment by [deleted] · 2011-09-12T21:02:40.044Z · LW(p) · GW(p)
I feel obliged to point out that Social democracy is working quite well in Europe and elsewhere and we owe it, among other things, free universal health care and paid vacations.
Comfortable, well maintained social democracies were the result of a very peculiar set of circumstances and forces which seem very unlikely to return to Europe in the foreseeable future.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2011-09-13T12:53:22.600Z · LW(p) · GW(p)
Would you care to expand on that?
Replies from: None↑ comment by [deleted] · 2011-09-13T17:27:48.443Z · LW(p) · GW(p)
Sure, though I hope you don't mind me giving the cliff note version.
Demographic dividend is spent. (The dependency ratio falls after the introduction of modernity (together with legalised contraception) because of lower birth rates; it rises again a few decades after the drop in birth rates, as the population ages.)
Related, precisely because the society on average is old, it seems incapable of embracing any kind of new ideas or a change in what its stated ideals and values are. Not only are young people few, but they are extremely conformist outside of a few designated symbolic kinds of "rebelling", compared to young people in other parts of the world. Oversocialized indeed.
Free higher education and healthcare produced a sort of "social uplift dividend": suddenly the cycle of poverty was broken for a whole bunch of people who were capable of doing all kinds of work, but simply didn't have the opportunity to get the necessary education to do so. After two generations of great results, not only has this obviously hit diminishing returns, there are also some indications that we are actually getting less bang for the buck on these policies as time goes on. Though it's hard to say, since European society has also shifted away from meritocracy.
Massive destruction of infrastructure and means of production that enabled high demand for rebuilding much of the infrastructure (left half of the bell curve had more stuff to do than otherwise, since the price of the kinds of labour they are capable of was high).
The burden of technological unemployment was not as great as it is today. (gwern's arguments regarding its existence were part of what changed my opinion away from the default view most economists seem to take. After some additional independent research I found myself not only considering it very likely but looking at 20th century history from an entirely fresh perspective.)
Even though there are some indications that youth in several European countries are more trusting, the general trend still seems to be a strong move away from high-trust societies.
↑ comment by NancyLebovitz · 2011-09-13T17:56:56.114Z · LW(p) · GW(p)
Thank you. Cliff notes is fine. What do you expect social democracies to turn into?
Replies from: None↑ comment by [deleted] · 2011-09-13T18:44:44.304Z · LW(p) · GW(p)
I put significantly lower confidence in these predictions than those of the previous post.
Generally speaking I expect comfortable, well maintained social democracies to first become uncomfortable, run down social democracies. Stagnation and sclerosis. Lower trust will mean lower investment which together with the rigidity and unadaptability will strengthen the oligarchic aspect of the central European technocratic way of doing things. Nepotism will become more prevalent in such an environment.
Overall violent crime will still drop, because of better surveillance and other crime fighting technology, but surprising outbursts of semi-organized coordinated violence will be seen for a decade or two more (think London). These may become targeted at prosperous urban minorities. Perhaps some politically motivated terrorist attacks, which however won't spiral out into civil wars, but will produce a very damaging backlash (don't just think radical Islam here, think Red Army Faction spiced with a nationalist group or two).
Replies from: Raw_Power↑ comment by Raw_Power · 2011-09-16T14:42:44.742Z · LW(p) · GW(p)
What, you mean like in Gangs of New York?
Could you please give more links to the stuff that helped you form these opinions? I'm very interested in this, especially in explaining the peculiar behaviour of this generation's youth as opposed to that of the Baby Boomers when they were the same age. After all, it's irrational to apply the same tactics to a sociopolitical landscape that's wildly different from the one in which those tactics got their most spectacular successes. Exiting the mind-killing narratives developed in two-party systems and finding a way to rethink the problems of this age from scratch is a worthy goal for the rationalist project, especially in a "hold off on proposing solutions", analyze-the-full-problem-and-introduce-it-from-a-novel-angle sense. Publications such as, say, Le Monde Diplomatique are pretty good at presenting well-researched, competently presented alternative opinions, but they still suffer a lot from "political leanings".
I know we avoid talking politics here because of precisely its mind-killing properties, able to turn the most thoughtful of agents into a stubborn blind fool, but I think it's also a good way of putting our skills to the test, and refine them.
↑ comment by MixedNuts · 2011-09-05T15:27:16.124Z · LW(p) · GW(p)
Be fair. We tried socialism once (in several places, but with minor variations). We tried a lot of technology, including long before the 20th century.
Replies from: None↑ comment by [deleted] · 2011-09-12T21:13:03.233Z · LW(p) · GW(p)
I think socialism must fail because humans once freed from material want will compete for status. Status inequality will activate much the same sentiments as material inequality did. To level status one needs to embark on a massive value engineering campaign. These have so far always created alternative status inequalities, thus creating internal contradictions which combined with increasing material costs eventually bring the dissolution of the system and a partial undoing of the engineering efforts.
If technology advances to the point where such massive social engineering becomes practical and is indeed used for such a purpose on the whim of experts in academia/a democratic consensus/revolutionary vanguard... the implications are simply horrifying.
comment by Normal_Anomaly · 2011-09-03T01:08:18.962Z · LW(p) · GW(p)
From the day we arrive on the planet
and blinking, step into the sun
there's more to see than can ever be seen
more to do than can ever be done
--The Lion King opening song
Replies from: Alex_Altair↑ comment by Alex_Altair · 2011-09-03T01:34:26.418Z · LW(p) · GW(p)
Do you consider this a promotion of fun theory? Or a justification for living forever?
Replies from: Normal_Anomaly, Teal_Thanatos, Tiiba↑ comment by Normal_Anomaly · 2011-09-03T02:27:06.292Z · LW(p) · GW(p)
Both.
↑ comment by Teal_Thanatos · 2011-09-04T23:37:48.628Z · LW(p) · GW(p)
Can also be an indication that everything is more than one person/mind can handle. By stepping into the sun, we enjoy the warmth and may be overwhelmed by the world as we see it. The song's lyrics seem cautionary, indicating that despite the warmth of being in the world we should not attempt to see everything, not attempt to do everything? This is rational: there are things we may not enjoy as much as others, and to reduce our overall enjoyment by not placing parameters on our activities would be irrational, in my opinion.
comment by lukeprog · 2011-09-01T12:12:13.090Z · LW(p) · GW(p)
Imagine that everyone in North America took [a cognitive enhancement pill] before retiring and then woke up the next morning with more memory capacity and processing speed... I believe that there is little likelihood that much would change the next day in terms of human happiness. It is very unlikely that people would be better able to fulfill their wishes and desires the day after taking the pill. In fact, it is quite likely that people would simply go about their usual business - only more efficiently. If given more memory capacity and processing speed, people would, I believe: carry on using the same ineffective medical treatments because of failure to think of alternative causes; keep making the same poor financial decisions because of overconfidence; keep misjudging environmental risks because of vividness; play host to the [tempting bad ideas] of Ponzi and pyramid schemes; [and] be wrongly influenced in their jury decisions by incorrect testimony about probabilities... The only difference would be that they would be able to do all of these things much more quickly!
Keith Stanovich, What Intelligence Tests Miss
Replies from: Davorak, Eliezer_Yudkowsky, None, NancyLebovitz, soreff, BillyOblivion↑ comment by Davorak · 2011-09-01T16:28:13.603Z · LW(p) · GW(p)
Better memory and processing power would mean that probabilistically more businessmen would realize there are good business opportunities where they saw none before, creating more jobs and a more efficient economy, not the same economy running more quickly.
ER doctors can now spend more processing power on each patient that comes in. Out of their existing repertoire they would choose better treatments for the problem at hand than they would have otherwise. A better memory means that they would be more likely to remember every step on their checklist when prepping for surgery.
It is not uncommon for people to make stupid decisions with mild to dire consequences because they are pressed for time. Everyone now thinks faster and has more time to think. Few people are pressed for time. Fewer accidents happen. Better decisions are made on average.
There are problems which are not human vs human but are human vs reality. With increased memory and processing power humanity gains an advantage over reality.
By no means is increasing memory and processing power a silver bullet, but it seems considerably more than everything only moving "much more quickly!"
Edit: spelling
Replies from: loup-vaillant↑ comment by loup-vaillant · 2011-09-02T08:18:33.447Z · LW(p) · GW(p)
The potential problem with your speculation is that the relative reduction of the mandatory-work / cognitive-power ratio may be a strong incentive to increase individual workloads (and maybe trigger massive lay-offs). If we're reasonable, and use our cognitive power wisely, then you're right. But if we go the Hansonian Global Competition route, the Uber Doctor won't spend more time on each patient, but just as much time on more patients. There will be too many doctors, and the worst third will do something else.
Replies from: Strange7↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-09-02T07:42:39.320Z · LW(p) · GW(p)
It's a nice list, but the core point strikes me as liable to be simply false. I forget who it was presenting this evidence - it might even have been James Miller; it was someone at the Winter Intelligence conference at FHI - but they looked at (1) the economic gains to countries with higher average IQ and (2) the average gains to individuals with higher IQ, and concluded that (3) people with high IQ create vast amounts of positive externality, much more than they capture as individuals, probably mostly in the form of countries with less stupid economic policies.
Maybe if we're literally talking about a pure speed and LTM pill that doesn't affect at all, say, capacity to keep things in short-term memory or the ability to maintain complex abstractions in working memory, i.e., a literal speed and disk space pill rather than an IQ pill.
Replies from: jimmy, juliawise, lukeprog, AlexMennen, erniebornheimer, DanielLC↑ comment by jimmy · 2011-09-02T18:14:21.398Z · LW(p) · GW(p)
Absolutely - IQ is very important, especially in aggregate. And yet, I'd still bet that the next day people will just be moving faster.
I think it's worth making the distinction between having hardware which can support complex abstractions and actually having good decision-making software in there. Although it'd be foolish to ignore the former because it tends to lead to the latter, it seems to be the latter that is more directly important.
That, and the fact that people can generally support better software than they pick up on their own is what makes our goal here doable.
↑ comment by juliawise · 2011-09-05T12:26:27.143Z · LW(p) · GW(p)
If this is true, it would affect my decisions about whether and how to have children. So I'd really like to see the source if you can figure out what it was.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-09-06T02:40:32.567Z · LW(p) · GW(p)
James Miller says:
Replies from: juliawise
Hi,
It wasn't me. Garett Jones, an economist at George Mason University, has been making these points. See
↑ comment by lukeprog · 2011-09-02T17:01:50.512Z · LW(p) · GW(p)
Sounds plausible. If anybody finds the citation for this, please post it.
Replies from: gwern, gwern, gwern, gwern, gwern↑ comment by gwern · 2011-09-03T02:34:35.921Z · LW(p) · GW(p)
They found that intelligence made a difference in gross domestic product. For each one-point increase in a country’s average IQ, the per capita GDP was $229 higher. It made an even bigger difference if the smartest 5 percent of the population got smarter; for every additional IQ point in that group, a country’s per capita GDP was $468 higher.
Citing "Cognitive Capitalism: The impact of ability, mediated through science and economic freedom, on wealth". (PDF not immediately available in Google.)
EDIT: efm found the PDF: http://www.tu-chemnitz.de/hsw/psychologie/professuren/entwpsy/team/rindermann/publikationen/11PsychScience.pdf
Or http://www.nickbostrom.com/papers/converging.pdf :
Economic models of the loss caused by small intelligence decrements due to lead in drinking water predict significant effects of even a few points decrease (Salkever 1995; Muir and Zegarac 2001). Because the models are roughly linear for small changes, they can be inverted to estimate societal effects of improved cognition. The Salkever model estimates the increase in income due to one more IQ point to be 2.1% for men and 3.6% for women. (Herrnstein and Murray 1994) estimate that a 3% increase in overall IQ would reduce the poverty rate by 25%, males in jail by 25%, high-school dropouts by 28%, parentless children by 20%, welfare recipients by 18%, and out-of-wedlock births by 25%.
EDITEDIT: high IQ predicts superior stock market investing even after the obvious controls. High IQ types are also more likely to trust the stock market enough to participate more in it
Replies from: gwern, gwern, gwern, gwern, gwern, gwern, gwern, gwern, gwern, gwern, gwern, gwern, tut↑ comment by gwern · 2012-07-18T15:15:38.710Z · LW(p) · GW(p)
"Do you have to be smart to be rich? The impact of IQ on wealth, income and financial distress", Zagorsky 2007:
How important is intelligence to financial success? Using the NLSY79, which tracks a large group of young U.S. baby boomers, this research shows that each point increase in IQ test scores raises income by between $234 and $616 per year after holding a variety of factors constant. Regression results suggest no statistically distinguishable relationship between IQ scores and wealth. Financial distress, such as problems paying bills, going bankrupt or reaching credit card limits, is related to IQ scores not linearly but instead in a quadratic relationship. This means higher IQ scores sometimes increase the probability of being in financial difficulty.
One could also phrase this as: "if we control for factors which we know to be caused by intelligence, such as highest level of education, then mirabile dictu! intelligence no longer increases income or wealth very much!"; or, "regressions are hard, let's go shopping."
Apropos of http://lemire.me/blog/archives/2012/07/18/why-we-make-up-jobs-out-of-thin-air/
In the XXIst century within wealthy countries, people work hard primarily to gain social status. We often make the mistake of tying up wealth with social status, but most of the wealthy people we admire are also consumed by their great jobs. Celine Dion is very wealthy, yet she would still give one show every single day, including week-ends. I think most professors would feel exploited if they had to lecture every single day. Bill Gates is very wealthy and universally admired, however, as we may expect, he worked nights and week-ends as chairman of Microsoft. Every year he would read 100 papers from Microsoft employees about the state of the company.
...For many, wealth is merely a stepping stone to intense work. This may explain why people with higher IQs are not wealthier (Zagorsky, 2008): high IQ people may have an easier time getting rewarding work so they need less wealth....I used to openly worry that robots would steal our jobs and leave most of us in poverty. I have now concluded that I was underestimating the pull of prestige among human beings. We will make up jobs out of thin air if we need to.
↑ comment by gwern · 2012-08-13T22:48:31.759Z · LW(p) · GW(p)
Intelligence: A Unifying Construct for the Social Sciences, Lynn & Vanhanen 2012 (excerpts)
↑ comment by gwern · 2012-05-29T19:18:10.066Z · LW(p) · GW(p)
"IQ in the Ramsey Model: A Naïve Calibration", Jones 2006:
Replies from: thomblake
I show that in a conventional Ramsey model, between one-fourth and one-half of the global income distribution can be explained by a single factor: The effect of large, persistent differences in national average IQ on the private marginal product of labor. Thus, differences in national average IQ may be a driving force behind global income inequality. These persistent differences in cognitive ability - which are well-supported in the psychology literature - are likely to be somewhat malleable through better health care, better education, and especially better nutrition in the world’s poorest countries. A simple calibration exercise in the spirit of Bils and Klenow (2000) and Castro (2005) is conducted. I show that an IQ-augmented Ramsey model can explain more than half of the empirical relationship between national average IQ and GDP per worker. I provide evidence that little of the IQ-productivity relationship is likely to be due to reverse causality.
One question of interest is whether the IQ-productivity relationship has strengthened or weakened over the past few decades. Shocks such as the Great Depression and the Second World War were likely to move nations away from their steady-state paths. Further, many countries have embraced market economies in recent decades, a policy change which is likely to have removed non-IQ-related barriers to riches.11 Accordingly, one would expect the IQ-productivity relationship to have strengthened over the decades.
As Table 2 shows, I indeed found this to be the case. I used LV’s IQ data along with Penn World Table data for each decade from 1960 through 1990 (1950 only had 38 relevant observations, and so is omitted). As before, equation (3) was used to estimate the IQ-productivity relationship, while the IQ-elasticity of wages is assumed to equal 1 for simplicity. Both the unconditional R2 and the fraction of the variance explained by the IQ-wage relationship increase steadily across the decades. This is true regardless of the capital share parameter in question. Further, the log-slope of the IQ-productivity relationship has also increased.
- 11: Lynn and Vanhanen (2002) hypothesize that national average IQ and market institutions are the two crucial determinants of GDP per capita. They provide some bivariate regressions supporting this hypothesis; they show that both variables together explain much more—about 75% of the variance in the level of GDP per capita - than either variable alone, each of which can explain roughly 50%.
The Ramsey-style model of Manuelli and Seshadri (2005) would be a natural extension: In their model, ex-ante differences in total factor productivity of at most 27% interact with education decisions and fertility choices to completely replicate the span of the current global income distribution. In their calibration—less naïve and more complex than the one I present—a 1% rise in TFP (e.g., 1 IQ point) causes a 9% rise in steady-state productivity. Manuelli and Seshadri leave unanswered the question of what those ex-ante differences in TFP might be, but persistent differences in national average IQ are a natural candidate.
↑ comment by thomblake · 2012-05-29T19:34:27.702Z · LW(p) · GW(p)
That quote does not appear to come from the linked paper, and I'm confused as to how a paper from 2006 was supposed to have a citation from 2009.
Replies from: gwern↑ comment by gwern · 2012-05-29T19:46:00.862Z · LW(p) · GW(p)
Only the first paragraph is wrong (mixed it up with a paper on the Swiss iodization experience I'm using in a big writeup on iodide self-experimentation). Fixed.
↑ comment by gwern · 2012-12-15T20:02:33.496Z · LW(p) · GW(p)
"Economic gains resulting from the reduction in children's exposure to lead in the United States", Grosse et al 2002 (fulltext)
We assumed the change in cognitive ability resulting from declines in BLLs, on the basis of published meta-analyses, to be between 0.185 and 0.323 IQ points for each 1 µg/dL blood lead concentration. These calculations imply that, because of falling BLLs, U.S. preschool-aged children in the late 1990s had IQs that were, on average, 2.2-4.7 points higher than they would have been if they had the blood lead distribution observed among U.S. preschool-aged children in the late 1970s. We estimated that each IQ point raises worker productivity 1.76-2.38%. With discounted lifetime earnings of $723,300 for each 2-year-old in 2000 dollars, the estimated economic benefit for each year's cohort of 3.8 million 2-year-old children ranges from $110 billion to $319 billion.
...We calculated the economic benefit realized by reduced lead exposure in the United States since the late 1970s through a series of steps, each associated with a component of the model in Figure 1. First, we estimated the amount by which BLLs have fallen over time through secondary analysis of data from the National Health and Nutrition Examination Surveys (NHANES). Second, we applied estimates from published studies of the strength, shape, and magnitude of the association between BLLs and cognitive ability test scores. In particular, we examined two published meta-analyses to arrive at estimates of the ratio of change in BLL to change in IQ. Third, on the basis of a brief review of literature on the association between cognitive ability and earning potential, we estimated the percentage change in earnings associated with absolute differences in IQ levels. Fourth, we calculated the present value (2000 dollars) of the percentage change in earnings.
...Schwartz (6) calculated that the total effect of a 1-point difference in cognitive ability is a 1.76% difference in earnings. Of this amount, 0.5% is the direct effect of ability on earnings. Schwartz (6) took this estimate from an econometric study by Griliches (19) that was representative of other econometric studies from the 1970s. Schwartz (6) assumed that a given difference in IQ scores observed in school-aged children can be expected to lead to a comparable difference in achieved cognitive ability in young adults.
The indirect effect of ability on earnings, which accounts for the remaining 1.26% difference, is modeled through two pathways. One is the effect of ability on years of schooling multiplied by the effect of years of schooling on hourly earnings. Needleman et al. (20) reported that a 4.5-point difference in IQ between groups with high tooth lead and with low tooth lead was associated with a 0.59 difference in grade level attained. The ratio of the two numbers implies a difference of 0.131 years of schooling for 1 IQ point. If each additional year of schooling results in a 6% increase in hourly wages, 1 IQ point would lead to a 0.79% increase in expected earnings through years of education. Second, Schwartz (6) modeled ability as influencing employment participation through influence on high school graduation. On the basis of the analysis of Needleman et al. (20) and 1978 survey data reported by Krupnick and Cropper (21), Schwartz (6) calculated that 1 point in IQ is associated with a 4.5% difference in probability of graduating from high school and that high school graduation is associated with a 10.5% difference in labor force participation. On the assumption of an equivalent percentage change in annual earnings, this leads to a 0.47% difference in expected earnings. Salkever (22) published an alternate estimation of the effect of cognitive ability on earnings. Salkever directly estimated the effect of ability on annual earnings, among those with earnings. The estimated association of ability with annual earnings incorporates both the effect of ability on hourly earnings and its effect on annual hours of work. He also added a direct pathway from ability to work participation independent of education.
According to Salkever (22), a 1-point difference in ability is associated with a 1.931% difference in earnings for males and a 3.225% difference for females. The direct effect on earnings is 1.24% for males and 1.40% for females. Salkever (22) analyzed income and educational attainment data from the 1990 wave of the National Longitudinal Study of Youth (NLSY) in combination with AFQT scores collected during 1979–1980, when the respondents were 14–23 years of age.
For the indirect effect of ability on schooling attainment, Salkever (22) reported that a 1-point difference was associated with 0.1007 years of schooling attained for both males and females in the NLSY data. Also, 1 year of schooling attainment raised hourly earnings by 4.88% for males and 10.08% for females in the 1990 NLSY data. According to these results, a 1-point difference in ability is associated, through an indirect effect on schooling, with a 0.49% difference in earnings for males and a 1.10% difference in earnings for females.
Salkever (22) reported that the direct effect of a 1-point difference in ability was a 0.1602 percentage point difference in probability of labor force participation for males and a 0.3679 percentage point difference for females. In addition, he calculated that 1 year of schooling raised labor force participation rates by 0.3536 percentage points for males and 2.8247 percentage points for females. Subtracting the other components from the totals, a 1-point change in cognitive ability is associated with a difference in earnings of 0.20% for males and 0.72% for females through effects on labor force participation. Finally, in an analysis of the 1990 NLSY earnings data, Neal and Johnson (23) reported smaller estimates of the effect of cognitive ability on earnings. They included workers who took the AFQT test when they were 14–18 years of age and excluded those who took the AFQT test at 19–23 years of age to make the test scores more comparable. They also estimated the total effect of ability on hourly earnings by excluding schooling variables. Their estimates indicate that a 1-point difference in AFQT scores is associated with a 1.15% difference in earnings for men and a 1.52% difference for women. Their estimate of the direct effect of ability on hourly earnings, controlling for schooling, is 0.83% for men; they reported no estimate for women.
The analysis of Neal and Johnson (23) has no link from ability to labor force participation. According to Salkever (22), a 1-point difference in ability leads to a 0.20% difference for males and 0.72% for females. If we add Salkever’s figures (22) to the estimates from Neal and Johnson (23), the total effect of a 1-point difference in ability on earnings is 1.35% for males and 2.24% for females.
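Salkever's totals can be sanity-checked against the three pathways quoted above; a quick Python back-of-the-envelope, using only the percentages in the excerpt (the small residuals are rounding in the published figures):

```python
# Salkever's per-IQ-point earnings effects should decompose as:
#   total = direct effect + schooling pathway + labor-force-participation pathway
male_total   = 1.24 + 0.49 + 0.20   # vs. reported 1.931%
female_total = 1.40 + 1.10 + 0.72   # vs. reported 3.225%

print(round(male_total, 2), round(female_total, 2))   # ~1.93 and ~3.22
```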
Their summary estimate (pg5/567) gives lower, middle, and upper bounds for the value of each IQ point, in net present value 2000 dollars: $12,700, $14,500, and $17,200.
(Note that these figures, as usual, are net estimates of the value to an individual: so they are including zero-sum games and positional benefits. They aren't giving estimates of the positive externalities or marginal benefits.)
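The headline $110-319 billion range follows almost directly from the quoted inputs; a rough Python reproduction (all figures from the excerpt above; the modest mismatch at the top end presumably reflects rounding or discounting details not shown in the abstract):

```python
# Back-of-envelope reproduction of Grosse et al 2002's cohort benefit range.
iq_gain_low, iq_gain_high = 2.2, 4.7        # IQ points gained from lower BLLs
prod_low, prod_high       = 0.0176, 0.0238  # earnings gain per IQ point
lifetime_earnings         = 723_300          # discounted 2000 dollars per 2-year-old
cohort                    = 3_800_000        # 2-year-olds per annual cohort

benefit_low  = iq_gain_low  * prod_low  * lifetime_earnings * cohort
benefit_high = iq_gain_high * prod_high * lifetime_earnings * cohort
print(round(benefit_low / 1e9), round(benefit_high / 1e9))   # roughly 106 and 307 billion
```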
↑ comment by gwern · 2012-05-30T16:08:57.750Z · LW(p) · GW(p)
"Quality of Institutions : Does Intelligence Matter?", Kalonda-Kanyama & Kodila-Tedika 2012:
We analyze the effect of the average level of intelligence on different measures of the quality of institutions, using a 2006 cross-sectional sample of 113 countries. The results show that average IQ positively affects all the measures of institutional quality considered in our study, namely government efficiency, regulatory quality, rule of law, political stability and voice and accountability. The positive effect of intelligence is robust to controlling for other determinants of institutional quality.
↑ comment by gwern · 2016-02-02T00:39:13.140Z · LW(p) · GW(p)
"IQ and Permanent Income: Sizing Up the “IQ Paradox”":
I used data from the NLSY79 which is an ongoing longitudinal study that follows the lives of a large sample of Americans born in 1957-64. Specifically, I used the nationally representative subsample comprising more than 6000 individuals...The unstandardized slope coefficient is 0.025 (95% CI: 0.023-0.027). Because the dependent variable is logarithmic, this coefficient, when multiplied by 100, can be (approximately) interpreted as the percent change in income in (unlogged) dollars associated with a 1 IQ point change.[Note] Therefore, one additional IQ point predicts a 2.5% boost in income. The standardized effect size, or correlation, is 0.36 and the R squared is 13%.
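Reading the log-income slope of 0.025 as "2.5% per IQ point" is the standard small-coefficient approximation; a quick Python check of how close it is at this scale:

```python
import math

# Percent change in (unlogged) income per 1 IQ point, given log-income slope b.
b = 0.025
approx = b * 100              # the approximation used in the quote: 2.5%
exact  = math.expm1(b) * 100  # exact transformation: (e^b - 1) * 100
print(approx, round(exact, 3))  # 2.5 vs. ~2.532 -- nearly identical here
```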
↑ comment by gwern · 2012-10-01T19:59:02.235Z · LW(p) · GW(p)
"Are Smarter Groups More Cooperative? Evidence from Prisoner's Dilemma Experiments, 1959-2003", Jones 2008:
A meta-study of repeated prisoner's dilemma experiments run at numerous universities suggests that students cooperate 5% to 8% more often for every 100 point increase in the school's average SAT score.
Later: http://econlog.econlib.org/archives/2012/10/group_iq_one_so.html
Replies from: John_Maxwell_IV, MixedNuts
This finding was the first of its kind: In prisoner's dilemmas, smarter groups really were more cooperative. Since then other researchers have found similar results, some of which I discuss in Section III of this article for the Asian Development Review. It looks like intelligence is a form of social intelligence...Does that happen in the real world? If it does, does it mean that there are negative political externalities to low-skill immigration? That's a topic for a later time. Another worthy question: Why would high IQ groups be more cooperative anyway? Isn't cynicism intelligent? Sure, sometimes, but the political entrepreneur who can find a way to sustain a truce can probably skim quite a lot of the resulting prosperity off for herself. And people who are better at solving the puzzles in an IQ test are probably better at solving the puzzles of human interaction.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-11-20T01:56:54.566Z · LW(p) · GW(p)
What if higher SAT schools tend to be more prestigious and have stronger student identification?
Replies from: gwern↑ comment by gwern · 2012-11-20T02:05:30.616Z · LW(p) · GW(p)
Dunno. It's consistent with all the other results about IQ and not school spirit...
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-11-20T04:30:33.677Z · LW(p) · GW(p)
Hm. Looks like going to a public/private school didn't seem to mediate student cooperation all that much, which probably works against my theory.
↑ comment by gwern · 2012-09-10T23:28:09.631Z · LW(p) · GW(p)
"IQ in the Production Function: Evidence from Immigrant Earnings", Jones & Schneider 2008:
We show that a country’s average IQ score is a useful predictor of the wages that immigrants from that country earn in the U.S., whether or not one adjusts for immigrant education. Just as in numerous microeconomic studies, 1 IQ point predicts 1% higher wages, suggesting that IQ tests capture an important difference in cross-country worker productivity. In a cross-country development accounting exercise, about one-sixth of the global inequality in log income can be explained by the effect of large, persistent differences in national average IQ on the private marginal product of labor. Taken together with the results of Jones and Schneider (2006) and Hanushek and Kimko (2000), this suggests that cognitive skills matter more for groups than for individuals.
↑ comment by gwern · 2015-08-13T18:26:09.172Z · LW(p) · GW(p)
"Costs and benefits of iodine supplementation for pregnant women in a mildly to moderately iodine-deficient population: a modelling analysis" (mirror; appendices), Monahan et al 2015
Background: Results from previous studies show that the cognitive ability of offspring might be irreversibly damaged as a result of their mother's mild iodine deficiency during pregnancy. A reduced intelligence quotient (IQ) score has broad economic and societal cost implications because intelligence affects wellbeing, income, and education outcomes. Although pregnancy and lactation lead to increased iodine needs, no UK recommendations for iodine supplementation have been issued to pregnant women. We aimed to investigate the cost-effectiveness of iodine supplementation versus no supplementation for pregnant women in a mildly to moderately iodine-deficient population for which a population-based iodine supplementation programme (for example, universal salt iodisation) did not exist.
Methods: We systematically searched MEDLINE, Embase, EconLit, and NHS EED for economic studies that linked IQ and income published in all languages until Aug 21, 2014. We took clinical data relating to iodine deficiency in pregnant women and the effect on IQ in their children aged 8-9 years from primary research. A decision tree was developed to compare the treatment strategies of iodine supplementation in tablet form with no iodine supplementation for pregnant women in the UK. Analyses were done from a health service perspective (analysis 1; taking direct health service costs into account) and societal perspective (analysis 2; taking education costs and the value of an IQ point itself into account), and presented in terms of cost (in sterling, relevant to 2013) per IQ point gained in the off spring. We made data-supported assumptions to complete these analyses, but used a conservative approach that limited the benefits of iodine supplementation and overestimated its potential harms.
Findings: Our systematic search identified 1361 published articles, of which eight were assessed to calculate the monetary value of an IQ point. A discounted lifetime value of an additional IQ point based on earnings was estimated to be £3297 (study estimates range from £1319 to £11,967) for the offspring cohort. Iodine supplementation was cost saving from both a health service perspective (saving £199 per pregnant woman [sensitivity analysis range -£42 to £229]) and societal perspective (saving £4476 per pregnant woman [sensitivity analysis range £540 to £4495]), with a net gain of 1·22 IQ points in each analysis. Base case results were robust to sensitivity analyses.
Interpretation: Iodine supplementation for pregnant women in the UK is potentially cost saving. This finding also has implications for the 1·88 billion people in the 32 countries with iodine deficiency worldwide. Valuation of IQ points should consider non-earnings benefits, e.g., health benefits associated with a higher IQ not germane to earnings.
IQ estimates:
Our systematic search identified 1361 published articles, of which eight studies 47-54 passed quality criteria and were assessed to calculate the monetary value of an IQ point (appendix p 4). The quality criteria were as follows: an individual's IQ is used and is not a proxy; variables are clearly specified; IQ measure follows a conventional normal distribution with a mean of 100 and standard deviation of 15 or sufficient information is included in the study to allow the IQ measure's distribution to be converted into one (for cross study comparability); and the results reported in currency form have the applicable year stated. Most of the studies valued an IQ point on the basis of its effect on an individual's income (appendix p 3). The issue of differences in scaling of IQ tests hindered the comparability across studies. The value of an IQ point, derived from the systematic search and applied to the unborn cohort, comes from the lifetime earnings premium of an additional IQ point. This is calculated to be £3297 (study estimates range from £1319 to £11967; after adjustment with life tables).
All the details are in the Monahan et al 2015 appendices
One study looked at people's willingness to pay (WTP) for an additional IQ point.4 Five studies used econometric regressions to determine the individual's IQ's effect on their subsequent income,5-9 whereas two studies were cost-benefit analyses on reducing lead exposure.10,11 Only one of the studies included in the systematic literature search was not set in the USA.5 ...In keeping with the conservative nature of the model, the relatively high earnings premiums from IQ points from Schwartz10 and Salkever11 are excluded on the basis that the effect may be overstated.
The 8 studies are listed on pg8 of the appendix, Table 1:
- Fletcher J. "Friends or Family? Revisiting the Effects of High School Popularity on Adult Earnings". 2013. National Bureau of Economic Research Working Papers: 19232
- Lutter RW. "Valuing children's health: A reassessment of the benefits of lower lead levels". AEI-Brookings Joint Center Working Paper No. 00-02. 2000.
- Mueller G, Plug E. "Estimating the Effect of Personality on Male and Female Earnings". Ind Lab Relat Rev. 2006;60(1):3-22.
- Salkever DS. "Updated estimates of earnings benefits from reduced exposure of children to environmental lead". Environ Res. 1995;70(1):1-6.
- Schwartz J. "Societal benefits of reducing lead exposure". Environ Res. 1994;66(1):105-24.
- de Wolff P, van Slijpe ARD. "The Relation Between Income, Intelligence, Education and Social Background". Europ Econ Rev. 1973;4(3):235-64.
- Zax JS, Rees DI. IQ, "Academic Performance, Environment, and Earnings". Rev Econ Stat. 2002;84(4):600-16
- Zagorsky JL. "Do you have to be smart to be rich? The impact of IQ on wealth, income and financial distress". Intelligence. 2007;35(5):489-501.
(Note that by including covariates that are obviously caused by IQ rather than independent, and excluding any attempt at measuring the many positive externalities of greater intelligence, these numbers can usually be considered substantial underestimates of country-wide benefits.)
↑ comment by gwern · 2013-12-07T22:59:44.947Z · LW(p) · GW(p)
"The High Cost of Low Educational Performance: the long-run economic impact of improving PISA outcomes", Hanushek & Woessmann 2010:
This report uses recent economic modelling to relate cognitive skills – as measured by PISA and other international instruments – to economic growth. The relationship indicates that relatively small improvements in the skills of a nation’s labour force can have very large impacts on future well-being.
...A modest goal of having all OECD countries boost their average PISA scores by 25 points over the next 20 years – which is less than the most rapidly improving education system in the OECD, Poland, achieved between 2000 and 2006 alone – implies an aggregate gain of OECD GDP of USD 115 trillion over the lifetime of the generation born in 2010 (as evaluated at the start of reform in terms of real present value of future improvements in GDP) (Figure 1). Bringing all countries up to the average performance of Finland, OECD’s best performing education system in PISA, would result in gains in the order of USD 260 trillion (Figure 4). The report also shows that it is the quality of learning outcomes, not the length of schooling, which makes the difference. Other aggressive goals, such as bringing all students to a level of minimal proficiency for the OECD (i.e. reaching a PISA score of 400), would imply aggregate GDP increases of close to USD 200 trillion according to historical growth relationships (Figure 2).
...Using data from international student achievement tests, Hanushek and Kimko (2000) demonstrate a statistically and economically significant positive effect of cognitive skills on economic growth in 1960-90. Their estimates suggest that one country-level standard deviation higher test performance would yield around one percentage point higher annual growth rates. The country-level standard deviation is equivalent to 47 test-score points in the PISA 2000 mathematics assessment. Again, in terms of the PISA 2000 mathematics scores, 47 points would be roughly the average difference between Sweden and Japan (the best performer among OECD countries in 2000) or between the average Greek student and the OECD average score. One percentage point difference in growth is itself a very large value, because the average annual growth of OECD countries has been roughly 1.5%.
Their estimate stems from a statistical model that relates annual growth rates of real GDP per capita to the measure of cognitive skills, years of schooling, the initial level of income and a wide variety of other variables that might affect growth including in different specifications the population growth rates, political measures, or openness of the economies.
...The relationship between cognitive skills and economic growth has now been demonstrated in a range of studies. As reviewed in Hanushek and Woessmann (2008), these studies employ measures of cognitive skills that draw upon the international testing of PISA and of TIMSS (Trends in International Mathematics and Science Study) (along with earlier versions of these).7 The uniform result is that the international achievement measures provide an accurate measure of the skills of the labour force in different countries and that these skills are closely tied to economic outcomes.8
...While the PISA tests are now well-known throughout the OECD, the history of testing is less understood. Between 1964 and 2003, 12 different international tests of mathematics, science, or reading were administered to a voluntarily participating group of countries (see Annex Tables A1 and A2). These include 36 different possible scores for year-age-test combinations (e.g. science for students of grade 8 in 1972 as part of the First International Science Study or mathematics of 15-year-olds in 2000 as a part of the Programme on International Student Assessment). Only the United States participated in all possible tests. The assessments are designed to identify a common set of expected skills, which were then tested in the local language. It is easier to do this in mathematics and science than in reading, and a majority of the international testing has focused on mathematics and science. Each test is newly constructed, until recently with no effort to link to any of the other tests. While the analysis here focuses on mathematics and science, these scores are highly correlated with reading test scores and employing just mathematics and science performance does not distort the growth relationship that is estimated; see Hanushek and Woessmann (2009). The goal here is construction of consistent measures at the national level that will allow comparing performance across countries, even when they did not each participate in a common assessment.
...The simplest overview of the relationship is found in Figure 6 that plots regional growth in real per capita GDP between 1960 and 2000 against average test scores after allowing for differences in initial GDP per capita in 1960.14 Regional annual growth rates, which vary from 1.4% in Sub-Saharan Africa to 4.5% in East Asia, fall on a straight line.15 But school attainment, when added to this regression, is unrelated to growth- rate differences. Figure 6 suggests that, conditional on initial income levels, regional growth over the last four decades is completely described by differences in cognitive skills.
Second, to tackle the most obvious reverse-causality issues, Hanushek and Woessmann (2009) separate the timing of the analysis by estimating the effect of scores on tests conducted until the early 1980s on economic growth in 1980-2000. In this analysis, available for a smaller sample of countries only, test scores pre-date the growth period. The estimate shows a significant positive effect that is about twice as large as the coefficient used in the simulations here.
Needless to say, "cognitive skills" here is essentially a euphemism for intelligence/IQ.
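To see why Hanushek & Kimko's quoted 1-percentage-point growth effect is "a very large value", compound it over a working life; a minimal Python sketch (the 1.5% OECD baseline is from the excerpt above; the 40-year horizon is my illustrative assumption):

```python
# Compounding a 1-percentage-point growth difference: 2.5% vs. 1.5% annual
# growth in GDP per capita over an assumed 40-year horizon.
years = 40
baseline = 1.015 ** years
boosted  = 1.025 ** years
print(round(boosted / baseline, 2))   # incomes end up roughly 48% higher
```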
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-12-08T09:22:50.136Z · LW(p) · GW(p)
A modest goal of having all OECD countries boost their average PISA scores by 25 points over the next 20 years – which is less than the most rapidly improving education system in the OECD, Poland, achieved between 2000 and 2006 alone – implies an aggregate gain of OECD GDP of USD 115 trillion over the lifetime of the generation born in 2010 (as evaluated at the start of reform in terms of real present value of future improvements in GDP) (Figure 1). Bringing all countries up to the average performance of Finland, OECD’s best performing education system in PISA, would result in gains in the order of USD 260 trillion (Figure 4). The report also shows that it is the quality of learning outcomes, not the length of schooling, which makes the difference. Other aggressive goals, such as bringing all students to a level of minimal proficiency for the OECD (i.e. reaching a PISA score of 400), would imply aggregate GDP increases of close to USD 200 trillion according to historical growth relationships (Figure 2).
But but Goodhart's law!
Replies from: gwern↑ comment by gwern · 2012-12-21T17:19:18.523Z · LW(p) · GW(p)
Education and general intelligence both serve to inform opinions, but do they lead to greater attitude extremity? We use questions on economic policy, social issues, and environmental issues from the General Social Survey to test the impact of education and intelligence on attitude extremity, as measured by deviation from centrist or neutral positions. Using quantile regression modeling, we find that intelligence is a moderating force across the entire distribution in economic, social, and environmental policy beliefs. Completing high school strongly correlates to reduced extremity, particularly in the upper quantiles. College education increases attitude extremity in the lower tail of environmental beliefs. The relevance of the low extremity tail (lower quantiles) to potential swing-voters and the high extremity tail (upper quantiles) to a political party’s core are discussed.
"Education, Intelligence, and Attitude Extremity", Makowsky & Miller 2012
↑ comment by gwern · 2012-10-02T21:58:13.752Z · LW(p) · GW(p)
The authors analysed data from the 2007 Adult Psychiatric Morbidity Survey in England. The participants were adults aged 16 years or over, living in private households in 2007. Data from 6870 participants were included in the study...Happiness is significantly associated with IQ. Those in the lowest IQ range (70–99) reported the lowest levels of happiness compared with the highest IQ group (120–129). Mediation analysis using the continuous IQ variable found dependency in activities of daily living, income, health and neurotic symptoms were strong mediators of the relationship, as they reduced the association between happiness and IQ by 50%
"The relationship between happiness and intelligent quotient: the contribution of socio-economic and clinical factors", Ali et al 2012; effect is weakened once you take into account all the relevant variables but does sort of still exist.
↑ comment by tut · 2011-09-10T00:11:32.875Z · LW(p) · GW(p)
I think that you might be confusing causation and correlation here. Countries that started to industrialize earlier have higher average IQ and higher GDP per capita. That would produce the effect you refer to. Whether or not the increased intelligence then contributes to further economic growth is a different matter.
Replies from: gwern, gwern↑ comment by gwern · 2012-02-28T03:45:12.384Z · LW(p) · GW(p)
What third factor producing both higher IQ and then industrialization are you suggesting?
Obviously you're not suggesting anything as silly as the industrialization causes all observed IQ changes, because that simply doesn't explain all examples, like East Asian countries:
Replies from: Richard_Kennaway
A crucial question is whether IQ differences across countries are a simple case of reverse causation: Do high-income countries simply develop higher IQ’s? We address this question in a number of ways, but the most important is likely to be this simple fact: East Asian countries had high average IQ’s—at or above the European and U.S. averages—well before they entered the ranks of the high income countries. This is precisely the opposite of what one would expect if the IQ-productivity relationship were merely epiphenomenal.
↑ comment by Richard_Kennaway · 2012-07-18T16:07:54.718Z · LW(p) · GW(p)
East Asian countries had high average IQ’s—at or above the European and U.S. averages—well before they entered the ranks of the high income countries.
That suggests that the correlation would have been less at that earlier time, which suggests the idea that the correlation of average IQ and average income has varied over history. Perhaps it has become stronger with increasing technological level -- that is, more opportunities to apply smarts?
Replies from: gwern↑ comment by gwern · 2012-07-18T16:37:51.558Z · LW(p) · GW(p)
That certainly seems possible. Imagine a would-be programming genius who is born now, versus born in the Stone Age - he could become the wealthiest human to ever live (Bill Gates) or just the best hunter in the tribe (to be optimistic...).
↑ comment by gwern · 2012-05-13T01:22:56.706Z · LW(p) · GW(p)
Rindermann 2011: "Intellectual classes, technological progress and economic development: The rise of cognitive capitalism"; from abstract:
Based on their pioneering research two research questions were developed: does intelligence lead to wealth or does wealth lead to intelligence or are other determinants involved? If a nation’s intelligence increases wealth, how does intelligence achieve this? To answer them we need longitudinal studies and theoretical attempts, investigating cognitive ability effects at the levels of individuals, institutions and societies and examining factors which lie between intelligence and growth. Two studies, using a cross-lagged panel design or latent variables and measuring economic liberty, shares of intellectual classes and indicators of scientific-technological accomplishment, show that cognitive ability leads to higher wealth and that for this process the achievement of high ability groups is important, stimulating growth through scientific-technological progress and by influencing the quality of economic institutions.
↑ comment by gwern · 2012-02-28T23:09:29.937Z · LW(p) · GW(p)
Here's another one: "National IQ and National Productivity: The Hive Mind Across Asia", Jones 2011
Replies from: Dr_Manhattan, gwern, gwern, gwern
...cognitive skills—intelligence quotient scores, math skills, and the like—have only a modest influence on individual wages, but are strongly correlated with national outcomes. Is this largely due to human capital spillovers? This paper argues that the answer is yes. It presents four different channels through which intelligence may matter more for nations than for individuals: (i) intelligence is associated with patience and hence higher savings rates; (ii) intelligence causes cooperation; (iii) higher group intelligence opens the door to using fragile, high-value production technologies; and (iv) intelligence is associated with supporting market-oriented policies.
↑ comment by Dr_Manhattan · 2012-04-26T00:23:18.776Z · LW(p) · GW(p)
Above link is dead. Here is a new one
↑ comment by gwern · 2013-02-03T01:41:12.169Z · LW(p) · GW(p)
"Exponential correlation of IQ and the wealth of nations", Dickerson 2006:
Replies from: army1987, VaniverPlots of mean IQ and per capita real Gross Domestic Product for groups of 81 and 185 nations, as collected by Lynn and Vanhanen, are best fitted by an exponential function of the form: GDP = a 10^b\(IQ), where a and b are empirical constants. Exponential fitting yields markedly higher correlation coefficients than either linear or quadratic. The implication of exponential fitting is that a given increment in IQ, anywhere along the IQ scale, results in a given percentage in GDP, rather than a given dollar increase as linear fitting would predict. As a rough rule of thumb, an increase of 10 points in mean IQ results in a doubling of the per capita GDP.
....In their book, IQ and the Wealth of Nations, Lynn and Vanhanen (2002) present a table listing for 81 nations the measured mean IQ and the per capita real Gross Domestic Product as of 1998 (their Table 7.7). They subsequently extend this to all 185 nations, using estimated IQs for the 104 new entries based chiefly on IQ values for immediate neighbors (their Table 8.9). In both cases they observe a significant correlation between IQ and GDP, with linear correlation factors R^2 = 0.537 for the 81-nation group and 0.389 for 185 nations. McDaniel and Whetzel have extended the examination of correlations to quadratic fitting in a paper that demonstrates the robustness of these correlations to minor variations in individual IQ values (McDaniel & Whetzel, in press). But an even stronger correlation is found if the fitting is exponential rather than linear or quadratic.
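Dickerson's 10-point doubling rule follows directly from the exponential form; a quick sanity check (my own arithmetic, not from the paper):

```r
# If GDP = a * 10^(b*IQ) and +10 IQ points doubles per-capita GDP,
# then 10^(10*b) = 2, i.e. b = log10(2)/10 ~ 0.0301.
b <- log10(2) / 10
stopifnot(isTRUE(all.equal(10^(10 * b), 2)))
```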
↑ comment by A1987dM (army1987) · 2013-02-03T08:42:30.896Z · LW(p) · GW(p)
It peeves me when scatterplots of GDP per capita versus something else use a linear scale -- do they actually think the difference between $30k and $20k is anywhere near as important as that between $11k and $1k? And yet hardly anybody uses logarithmic scales.
Likewise, the fit looks a lot less scary if you write it as ln(GDP) = A + B*IQ.
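The two forms are the same fit, just reparameterized; a minimal identity check (constants made up purely for illustration):

```r
# GDP = a * 10^(b*IQ)  <=>  ln(GDP) = A + B*IQ, with A = ln(a), B = b*ln(10).
a <- 0.15; b <- 0.03; iq <- 95   # made-up constants, for the identity only
lhs <- log(a * 10^(b * iq))
rhs <- log(a) + b * log(10) * iq
stopifnot(isTRUE(all.equal(lhs, rhs)))
```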
Replies from: gwern
↑ comment by gwern · 2013-02-09T20:43:05.499Z · LW(p) · GW(p)
Yes, Dickerson does point out that his exponential fit is a linear relationship on a log scale. For example, he does show a log-scale in figure 3 (pg3), fitting the most reliable 83 nation-points on a plot of log(GDP) against mean IQ in which the exponential fit looks exactly like you would expect. (Is it per capita? As far as I can tell, he always means per capita GDP even if he writes just 'GDP'.) Figure 4 does the same thing but expands the dataset to 185 nations. The latter plot should probably be ignored given that the expansion comes from basically guessing:
In their book, IQ and the Wealth of Nations, Lynn and Vanhanen (2002) present a table listing for 81 nations the measured mean IQ and the per capita real Gross Domestic Product as of 1998 (their Table 7.7). They subsequently extend this to all 185 nations, using estimated IQs for the 104 new entries based chiefly on IQ values for immediate neighbors (their Table 8.9).
↑ comment by Vaniver · 2013-02-09T22:19:33.835Z · LW(p) · GW(p)
Is it easy to compare the fit of their theory to the smart fraction theory?
Replies from: gwern
↑ comment by gwern · 2013-02-09T23:32:07.430Z · LW(p) · GW(p)
I dunno. I've given it a try, and while it's easy enough to reproduce the exponential fit (and the generated regression line does fit the 81 nations very nicely), I think I screwed up somehow in reproducing the smart fraction equation: the regression looks weird, and when I try the smart-fraction function (using his specified constants) on specific IQs, I don't get the same results as in La Griffe's table. And I can't figure out what I'm doing wrong; my function looks like it's doing the same thing as his. So I give up. Here is my code if you want to try to fix it:
lynn <- read.table(stdin(),header=TRUE,sep="")
Country IQ rGDPpc
Argentina 96 12013
Australia 98 22452
Austria 102 23166
Barbados 78 12001
Belgium 100 23223
Brazil 87 6625
Bulgaria 93 4809
Canada 97 23582
China 100 3105
Colombia 89 6006
Congo 65 822
Congo 73 995
Croatia 90 6749
Cuba 85 3967
CzechRepublic 97 12362
Denmark 98 24218
Ecuador 80 3003
Egypt 83 3041
EquatorialGuinea 59 1817
Ethiopia 63 574
Fiji 84 4231
Finland 97 20847
France 98 21175
Germany 102 22169
Ghana 71 1735
Greece 92 13943
Guatemala 79 3505
Guinea 66 1782
HongKong 107 20763
Hungary 99 10232
India 81 2077
Indonesia 89 2651
Iran 84 5121
Iraq 87 3197
Ireland 93 21482
Israel 94 17301
Italy 102 20585
Jamaica 72 3389
Japan 105 23257
Kenya 72 980
Lebanon 86 4326
Malaysia 92 8137
MarshallIslands 84 3000
Mexico 87 7704
Morocco 85 3305
Nepal 78 1157
Netherlands 102 22176
NewZealand 100 17288
Nigeria 67 795
Norway 98 26342
Peru 90 4282
Philippines 86 3555
Poland 99 7619
Portugal 95 14701
PuertoRico 84 8000
Qatar 78 20987
Romania 94 5648
Russia 96 6460
SierraLeone 64 458
Singapore 103 24210
Slovakia 96 9699
Slovenia 95 14293
SouthAfrica 72 8488
SouthKorea 106 13478
Spain 97 16212
Sudan 72 1394
Suriname 89 5161
Sweden 101 20659
Switzerland 101 25512
Taiwan 104 13000
Tanzania 72 480
Thailand 91 5456
Tonga 87 3000
Turkey 90 6422
UKingdom 100 20336
Uganda 73 1074
UnitedStates 98 29605
Uruguay 96 8623
WesternSamoa 87 3832
Zambia 77 719
Zimbabwe 66 2669
em <- lm(log(lynn$rGDPpc) ~ lynn$IQ); summary(em)
Call:
lm(formula = log(lynn$rGDPpc) ~ lynn$IQ)
Residuals:
Min 1Q Median 3Q Max
-1.6124 -0.3866 -0.0429 0.3363 2.0311
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.77760 0.51848 3.43 0.00097
lynn$IQ 0.07876 0.00583 13.52 < 2e-16
Residual standard error: 0.624 on 79 degrees of freedom
Multiple R-squared: 0.698, Adjusted R-squared: 0.694
F-statistic: 183 on 1 and 79 DF, p-value: <2e-16
# plot
plot (log(lynn$rGDPpc) ~ lynn$IQ)
abline(em)
# an attempt at La Griffe
erf <- function(x) 2 * pnorm(x * sqrt(2)) - 1
sf <- function(iq) ((69321/2) * (1 + erf(((iq - 108)/15) / sqrt(2))))
# check for sigmoid
# plot(c(85:130), sf(c(85:130)))
lg <- lm(log(lynn$rGDPpc) ~ sf(lynn$IQ)); summary(lg)
Call:
lm(formula = log(lynn$rGDPpc) ~ sf(lynn$IQ))
Residuals:
Min 1Q Median 3Q Max
-2.5788 -0.6857 0.0678 1.0521 1.5901
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 8.705620 0.126152 69.01 <2e-16
sf(lynn$IQ) 0.000121 0.000102 1.19 0.24
Residual standard error: 1.13 on 79 degrees of freedom
Multiple R-squared: 0.0175, Adjusted R-squared: 0.0051
F-statistic: 1.41 on 1 and 79 DF, p-value: 0.239
# same plotting code
(In retrospect, I'm not sure it's even meaningful to try to fit the sf function with the constants already baked in, but since I apparently didn't write it right, it doesn't matter.)
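One readable consequence of the exponential fit above, using the lynn$IQ coefficient from the summary (my own back-of-the-envelope, not from the thread):

```r
# The lynn$IQ slope above is 0.07876 in natural-log units, so the fitted
# GDP doubles every ln(2)/0.07876 IQ points -- close to Dickerson's
# 10-point rule of thumb.
B <- 0.07876                 # slope from summary(em) above
doubling <- log(2) / B       # ~8.8 IQ points per doubling of GDP
stopifnot(abs(doubling - 8.8) < 0.05)
```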
↑ comment by Vaniver · 2013-02-10T01:27:29.844Z · LW(p) · GW(p)
Hm, one thing I notice is that you look like you're fitting sf against log(gdp). I managed to replicate his results in octave, and got a meaningful result plotting smart fraction against gdp.
My guess at how to change your code (noting that I don't know R):
sf <- function(iq,c) ((69321/2) * (1 + erf(((iq - c)/15) / sqrt(2))))
lg <- lm(lynn$rGDPpc ~ sf(lynn$IQ,108)); summary(lg)
That should give you some measure of how good it fits, and you might be able to loop it to see how well the smart fraction does with various thresholds.
(I also probably should have linked to the refinement.)
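The threshold-looping idea can be sketched like this (a hedged sketch, not code from the thread: it uses pnorm, which equals the (1+erf(x/sqrt(2)))/2 construction above, and only a small subset of the quoted Lynn table for brevity — swap in the full lynn data frame from upthread for a real scan):

```r
# Scan candidate smart-fraction thresholds iq0 and compare the linear-fit
# R^2 of rGDPpc ~ smart fraction. pnorm((IQ-iq0)/15) is the same CDF as
# the erf-based sf defined earlier.
lynn_sub <- data.frame(       # small subset of the quoted Lynn table
  IQ     = c(96, 98, 102, 100, 107, 105, 98, 64, 72, 103),
  rGDPpc = c(12013, 22452, 23166, 23223, 20763, 23257, 29605, 458, 980, 24210)
)
rsq_at <- function(iq0, d) {
  sf <- pnorm((d$IQ - iq0) / 15)      # smart fraction as a proportion
  summary(lm(d$rGDPpc ~ sf))$r.squared
}
iq0_grid <- 90:120
rsq <- sapply(iq0_grid, rsq_at, d = lynn_sub)
iq0_grid[which.max(rsq)]              # best-fitting threshold on this subset
```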
Replies from: gwern
↑ comment by gwern · 2013-02-10T03:33:50.112Z · LW(p) · GW(p)
I can't tell whether that works since you're just using the same broken smart-fraction sf predictor; e.g. sf(107,108) ~> 32818, while the first smart fraction page's table gives a Hong Kong regression line of 19817, which is very different from 33k.
The refinement doesn't help with my problem, no.
Replies from: Vaniver
↑ comment by Vaniver · 2013-02-10T20:08:25.531Z · LW(p) · GW(p)
Hmmm. I agree that it doesn't match. What if by 'regression line' he means the regression line put through the sf-gdp data?
That is, you should be able to calculate sf as a fraction with
sf <- function(iq,c) ((1/2) * (1 + erf((iq-c)/(15*sqrt(2)))))
And then regress that against gdp, which will give you the various coefficients, and a much more sensible graph. (You can compare those to the SFs he calculates in the refinement, but those are with verbal IQ, which might require finding that dataset / trusting his, and have a separate IQ0.)
Comparing the two graphs, I find it interesting that the eight outliers Griffe mentions (Qatar, South Africa, Barbados, China, and then the NE Asian countries) are much more noticeable on the SF graph than the log(GDP) graph, and that the log(GDP) graph compresses the variation of the high-income countries, and gets most of its variation from the low-income countries; the situation is reversed in the SF graph. Since both our IQ and GDP estimates are better in high-income countries, that seems like a desirable property to have.
With outliers included, I'm getting R=.79 for SF and R=.74 for log(gdp). (I think, I'm not sure I'm calculating those correctly.)
Replies from: gwern
↑ comment by gwern · 2013-02-10T22:03:09.890Z · LW(p) · GW(p)
Trying to rederive the constants doesn't help me, which is starting to make me wonder if he's really using the table he provided or misstated an equation or something:
R> sf <- function(iq,f,c) ((c/2) * (1 + erf((iq-f)/(15*sqrt(2)))))
R> summary(nls(rGDPpc ~ sf(IQ,f,c), lynn, start=list(f=110,c=40000)))
Formula: rGDPpc ~ sf(IQ, f, c)
Parameters:
Estimate Std. Error t value Pr(>|t|)
f 99.64 3.07 32.44 < 2e-16
c 34779.17 6263.90 5.55 3.7e-07
Residual standard error: 5310 on 79 degrees of freedom
Number of iterations to convergence: 4
Achieved convergence tolerance: 8.22e-06
If you double 34779 you get very close to his $69,321, so there might be something going wrong due to the 1/2 that appears in uses of the erf to make a cumulative distribution function, but I don't see how a threshold of 99.64 IQ is even close to his 108!
(The weird start values were found via trial-and-error in trying to avoid R's 'singular gradient error'; it doesn't appear to make a difference if you start with, say, f=90.)
↑ comment by Vaniver · 2013-02-11T02:12:14.081Z · LW(p) · GW(p)
Most importantly, we appear to have figured out the answer to my original question: no, it is not easy. :P
So, I started off by deleting the eight outliers to make lynn2. I got an adjusted R^2 of 0.8127 for the exponential fit, and 0.7777 for the fit with iq0=108.2.
My nls came back with an optimal iq0 of 110, which is closer to the 108 I was expecting; the adjusted R^2 only increases to 0.7783, which is a minimal improvement, and still slightly worse than the exponential fit.
The value of the smart fraction cutoff appears to have a huge impact on the mapping from smart fraction to gdp, but doesn't appear to have a significant effect on the goodness of fit, which troubles me somewhat. I'm also surprised that deleting the outliers seems to have improved the performance of the exponential fit more than the smart fraction fit, which is not what I would have expected from the graphs. (Though, I haven't calculated this with the outliers included in R, and I also excluded the Asian data, and there's more fiddling I can do, but I'm happy with this for now.)
> sf <- function(iq,iq0) ((1+erf((iq-iq0)/(15*sqrt(2))))/2)
> egdp <- function(iq,iq0,m,b) (m*sf(iq,iq0)+b)
> summary(nls(rGDPpc ~ egdp(IQ,iq0,m,b), lynn2, start=list(iq0=110,m=40000,b=0)))
Formula: rGDPpc ~ egdp(IQ, iq0, m, b)
Parameters:
Estimate Std. Error t value Pr(>|t|)
iq0 110.019 4.305 25.556 < 2e-16 ***
m 77694.174 26708.502 2.909 0.00486 **
b 679.688 1039.144 0.654 0.51520
Residual standard error: 4054 on 70 degrees of freedom
> gwe <- lm(lynn2$rGDPpc ~ sf(lynn2$IQ,99.64)); summary(gwe)
Call:
lm(formula = lynn2$rGDPpc ~ sf(lynn2$IQ, 99.64))
Residuals:
Min 1Q Median 3Q Max
-10621.6 -2463.1 442.6 1743.4 12439.7
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1345.7 886.6 -1.518 0.133
sf(lynn2$IQ, 99.64) 40552.7 2724.1 14.887 <2e-16 ***
Residual standard error: 4241 on 71 degrees of freedom
Multiple R-squared: 0.7574, Adjusted R-squared: 0.7539
F-statistic: 221.6 on 1 and 71 DF, p-value: < 2.2e-16
> opt <- lm(lynn2$rGDPpc ~ sf(lynn2$IQ,110)); summary(opt)
Call:
lm(formula = lynn2$rGDPpc ~ sf(lynn2$IQ, 110))
Residuals:
Min 1Q Median 3Q Max
-11030.3 -1540.1 -416.8 1308.6 12493.5
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 676.4 731.5 0.925 0.358
sf(lynn2$IQ, 110) 77577.0 4869.6 15.931 <2e-16 ***
Residual standard error: 4025 on 71 degrees of freedom
Multiple R-squared: 0.7814, Adjusted R-squared: 0.7783
F-statistic: 253.8 on 1 and 71 DF, p-value: < 2.2e-16
> his <- lm(lynn2$rGDPpc ~ sf(lynn2$IQ,108.2)); summary(his)
Call:
lm(formula = lynn2$rGDPpc ~ sf(lynn2$IQ, 108.2))
Residuals:
Min 1Q Median 3Q Max
-11014.0 -1710.0 -196.9 1396.3 12432.9
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 362.6 748.0 0.485 0.629
sf(lynn2$IQ, 108.2) 67711.5 4258.5 15.900 <2e-16 ***
Residual standard error: 4031 on 71 degrees of freedom
Multiple R-squared: 0.7807, Adjusted R-squared: 0.7777
F-statistic: 252.8 on 1 and 71 DF, p-value: < 2.2e-16
> em <- lm(log(lynn2$rGDPpc) ~ lynn2$IQ); summary(em)
Call:
lm(formula = log(lynn2$rGDPpc) ~ lynn2$IQ)
Residuals:
Min 1Q Median 3Q Max
-1.12157 -0.34268 -0.00503 0.29596 1.41540
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.812650 0.446308 1.821 0.0728 .
lynn2$IQ 0.089439 0.005051 17.706 <2e-16 ***
Residual standard error: 0.4961 on 71 degrees of freedom
Multiple R-squared: 0.8153, Adjusted R-squared: 0.8127
F-statistic: 313.5 on 1 and 71 DF, p-value: < 2.2e-16
Replies from: gwern
↑ comment by gwern · 2013-02-11T02:35:06.836Z · LW(p) · GW(p)
Most importantly, we appear to have figured out the answer to my original question: no, it is not easy. :P
And inadvertently provided an object lesson for anyone watching about the value of researchers providing code...
The value of the smart fraction cutoff appears to have a huge impact on the mapping from smart fraction to gdp, but doesn't appear to have a significant effect on the goodness of fit, which troubles me somewhat. I'm also surprised that deleting the outliers seems to have improved the performance of the exponential fit more than the smart fraction fit, which is not what I would have expected from the graphs.
My intuition so far is that La Griffe found a convoluted way of regressing on a sigmoid, and the gain is coming from the part which looks like an exponential. I'm a little troubled that his stuff is so hard to reproduce sanely and that he doesn't compare against the exponential fit: the exponent is obvious, has a reasonable empirical justification. Granting that Dickerson published in 2006 and he wrote the smart fraction essay in 2002 he could at least have updated.
[edit] Sorry, it looks like the formatting for my code is totally ugly.
You need to delete any trailing whitespace in your indented R terminal output. (Little known feature of LW/Reddit Markdown code blocks: one or more trailing spaces causes the newline to be ignored and the next line glommed on. I filed an R bug to fix some cases of it, but I guess it doesn't cover nls, or you don't have an updated version.)
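For anyone pasting R console output, a hypothetical one-liner (my own, not from the thread) that strips the offending trailing whitespace before posting:

```r
# Remove trailing spaces/tabs that make LW/Reddit Markdown glom lines together
clean_lines <- function(lines) sub("[ \t]+$", "", lines)
clean_lines(c("x <- 1   ", "y <- 2"))   # -> c("x <- 1", "y <- 2")
```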
I don't understand your definition:
egdp <- function(iq,iq0,m,b) (m*sf(iq,iq0)+b)
sf(iq,iq0) makes sense, of course, and m presumably is the multiplicative scale constant LG found to be 69k, but what is this b here and why is it being added? I don't see how this tunes how big a smart fraction is necessary since shouldn't it then be on the inside of sf somehow?
But using that formula and running your code (using the full dataset I posted originally, with outliers):
R> erf <- function(x) 2 * pnorm(x * sqrt(2)) - 1
R> sf <- function(iq,iq0) ((1+erf((iq-iq0)/(15*sqrt(2))))/2)
R> egdp <- function(iq,iq0,m,b) (m*sf(iq,iq0)+b)
R> summary(nls(rGDPpc ~ egdp(IQ,iq0,m,b), lynn, start=list(iq0=110,m=40000,b=0)))
Formula: rGDPpc ~ egdp(IQ, iq0, m, b)
Parameters:
Estimate Std. Error t value Pr(>|t|)
iq0 102.08 4.89 20.88 < 2e-16
m 37108.87 9107.73 4.07 0.00011
b 1140.94 1445.76 0.79 0.43241
Residual standard error: 5320 on 78 degrees of freedom
Number of iterations to convergence: 7
Achieved convergence tolerance: 5.09e-06
Replies from: gwern, Vaniver
↑ comment by gwern · 2013-12-07T23:02:42.681Z · LW(p) · GW(p)
My intuition so far is that La Griffe found a convoluted way of regressing on a sigmoid, and the gain is coming from the part which looks like an exponential. I'm a little troubled that his stuff is so hard to reproduce sanely and that he doesn't compare against the exponential fit: the exponent is obvious, has a reasonable empirical justification. Granting that Dickerson published in 2006 and he wrote the smart fraction essay in 2002 he could at least have updated.
I emailed La Griffe via Steve Sailer in February 2013 with a link to this thread and a question about how his smart-fraction model works with the fresher IQ/nations data and compares to Dickerson's work. Sailer forwarded my email, but neither of us has had a reply since; he speculated that La Griffe may be having health issues.
In the absence of any defense by La Griffe, I think Dickerson's exponential works better than La Griffe's fraction/sigmoid.
↑ comment by Vaniver · 2013-02-11T03:25:34.744Z · LW(p) · GW(p)
he doesn't compare against the exponential fit: the exponent is obvious, has a reasonable empirical justification.
The theoretical justifications are entirely different, though. It seems reasonable to me to suppose there's some minimal intelligence to be wealth-producing in an industrial society, and the smart fraction estimates that well and it predicts gdp well. But, it also seems reasonable to treat log(gdp) as a more meaningful object than gdp.
It's also bothersome that the primary empirical prediction of the smart fraction model (that there is some stable gdp level that you hit when everyone is higher than the smart fraction) is entirely from the extrapolated part of the dataset, and this doesn't seem noticeably better than the exponential model, whose extrapolations are radically different.
Granting that Dickerson published in 2006 and he wrote the smart fraction essay in 2002 he could at least have updated.
Yeah; I'm curious what they'd have to say about the relative merits of the two models. I'll see if I can get this question to them.
You need to delete any trailing whitespace in your indented R terminal output.
Fixed, thanks!
but what is this b here and why is it being added?
It's an offset, so that it's an affine fit rather than a linear fit: the gdp level for a population with no people above 108 IQ doesn't have to be 0. Turns out, it's not significantly different from zero, but I'd rather discover that than enforce it (and enforcing it can degrade the value for m).
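The intercept point can be seen on toy data (illustrative constants, not the lynn data): forcing a fit through the origin when the true intercept is positive inflates the slope.

```r
# Affine fit (y ~ x) vs fit forced through the origin (y ~ x + 0):
# suppressing a positive intercept biases the slope m upward.
set.seed(2)
x <- runif(50)
y <- 5 + 3 * x + rnorm(50, sd = 0.1)   # true intercept 5, true slope 3
m_affine <- coef(lm(y ~ x))["x"]       # recovers ~3
m_origin <- coef(lm(y ~ x + 0))["x"]   # inflated by the suppressed intercept
stopifnot(m_origin > m_affine)
```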
Replies from: gwern
↑ comment by gwern · 2013-02-11T03:46:31.331Z · LW(p) · GW(p)
But, it also seems reasonable to treat log(gdp) as a more meaningful object than gdp.
I'm not entirely sure... For individuals, log-transforms make sense on their own merits as giving a better estimate of the utility of that money, but does that logic really apply to a whole country? More money means more can be spent on charity, shooting down asteroids, etc.
It's also bothersome that the primary empirical prediction of the smart fraction model (that there is some stable gdp level that you hit when everyone is higher than the smart fraction) is entirely from the extrapolated part of the dataset, and this doesn't seem noticeably better than the exponential model, whose extrapolations are radically different.
The next logical step would be to bring in the second 2006 edition of the Lynn dataset, which increased the set from 81 to 113, and use the latest available per-capita GDP (probably 2011). If the exponential fit gets better compared to the smart-fraction sigmoid, then that's definitely evidence towards the conclusion that the smart-fraction is just a bad fit.
Yeah; I'm curious what they'd have to say about the relative merits of the two models. I'll see if I can get this question to them.
I'd guess that he'd consider SF a fairly arbitrary model and not be surprised if an exponential fits better.
It's an offset, so that it's an affine fit rather than a linear fit: the gdp level for a population with no people above 108 IQ doesn't have to be 0. Turns out, it's not significantly different from zero, but I'd rather discover that than enforce it (and enforcing it can degrade the value for m).
Why can't the GDP be 0 or negative? Afghanistan and North Korea are right now exhibiting what such a country looks like: they can barely feed themselves and export so much violence or fundamentalism or other dysfunctionality that rich nations are sinking substantial sums of money into supporting them and fixing problems.
Replies from: Vaniver
↑ comment by Vaniver · 2013-02-11T16:28:59.182Z · LW(p) · GW(p)
For individuals, log-transforms make sense on their own merits as giving a better estimate of the utility of that money, but does that logic really apply to a whole country?
The argument would be that additional intelligence multiplies the per-capita wealth-producing apparatus that exists, rather than adding to it (or, in the smart fraction model, not doing anything once you clear a threshold).
Why can't the GDP be 0 or negative?
There's no restriction that b be positive, and so those are both options. I wouldn't expect it to be negative because pre-industrial societies managed to survive, but that presumes that aid spending by the developed world is not subtracted from the GDP measurement of those countries. Once you take aid into account, then it does seem reasonable that places could become money pits.
Replies from: gwern
↑ comment by gwern · 2013-02-11T18:13:57.691Z · LW(p) · GW(p)
The argument would be that additional intelligence multiplies the per-capita wealth-producing apparatus that exists, rather than adding to it (or, in the smart fraction model, not doing anything once you clear a threshold).
That's the intuitive justification for an exponential model (each additional increment of intelligence adds a percentage of the previous GDP), but I don't see how this justifies looking at log transforms.
There's no restriction that b be positive, and so those are both options. I wouldn't expect it to be negative because pre-industrial societies managed to survive
The difference would be a combination of negative externalities and changing Malthusian equilibriums: it has never been easier for an impoverished country like North Korea or Afghanistan to export violence and cause massive costs they don't bear (9/11 directly cost the US something like a decade of Afghanistan GDP once you remove all the aid given to Afghanistan), and public health programs like vaccinations enable much larger populations than 'should' be there.
Replies from: Vaniver
↑ comment by Vaniver · 2013-02-11T18:31:44.174Z · LW(p) · GW(p)
That's the intuitive justification for an exponential model (each additional increment of intelligence adds a percentage of the previous GDP), but I don't see how this justifies looking at log transforms.
GDP ~ exp(IQ) is isomorphic to ln(GDP) ~ IQ, and I think log(dollars per year) is an easier unit to think about than something to the power of IQ.
[edit] The graph might look different, though. It might be instructive to compare the two, but I think the relationships should be mostly the same.
Replies from: Kindly
↑ comment by Kindly · 2013-02-11T22:16:36.481Z · LW(p) · GW(p)
It's worth pointing out that IQ numbers are inherently non-parametric: we simply have a ranking of performance on IQ tests, which are then scaled to fit a normal distribution.
If GDP ~ exp(IQ), that means that the correlation is better if we scale the rankings to fit a log-normal distribution instead (this is not entirely true because exp(mean(IQ)) is not the same as mean(exp(IQ)), but the geometric mean and arithmetic mean should be highly correlated with each other as well). I suspect that this simply means that GDP approximately follows a log-normal distribution.
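The caveat that exp(mean(IQ)) differs from mean(exp(IQ)) is Jensen's inequality for a convex transform; a quick simulation (my own illustration, with an arbitrary slope of 0.03):

```r
# For a convex transform like exp, the mean of exp(b*IQ) exceeds
# exp(b*mean(IQ)); and log of the transformed variable is normal again,
# i.e. exp(b*IQ) is lognormal when IQ is normal.
set.seed(1)
iq <- rnorm(1e5, mean = 100, sd = 15)
stopifnot(mean(exp(0.03 * iq)) > exp(0.03 * mean(iq)))
```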
Replies from: Vaniver
↑ comment by Vaniver · 2013-02-11T23:32:38.603Z · LW(p) · GW(p)
I suspect that this simply means that GDP approximately follows a log-normal distribution.
This doesn't quite follow, since both per capita GDP and mean national IQ aren't drawn from the same sort of distribution as individual production and individual IQ are, but I agree with the broader comment that it is natural to think of the economic component of intelligence measured in dollars per year as lognormally distributed.
↑ comment by gwern · 2013-08-28T18:13:21.012Z · LW(p) · GW(p)
"Salt Iodization and the Enfranchisement of the American Worker", Adhvaryu et al 2013:
...We find substantial impacts of salt iodization. High school completion rose by 6 percentage points, and labor force participation went up by 1 point. Analysis of income transitions by quantile shows that the new labor force joiners entered at the bottom of the wage distribution and took up blue collar labor, pulling down average wage income conditional on employment. Our results inform the ongoing debate on salt iodization in many low-income countries. We show that large-scale iodized salt distribution had a targeted impact, benefiting the worker on the margin of employment, and generating sizeable economic returns at low cost...The recent study by Feyrer et al. (2013) estimates that Morton Salt Co.’s decision to iodize may have increased IQ by 15 points, accounting for a significant part of the Flynn Effect, the steady rise IQ in the US over the twentieth century. Our estimates, paired with this number, suggest that each IQ point accounts for nearly one tenth of a point increase in labor force participation.
If, in the 1920s, 10 IQ points could increase your labor participation rate by 1%, then what on earth does the multiplier look like now? The 1920s weren't really known for their demands on intelligence, after all.
And note the relevance to discussions of technological unemployment: since the gains are concentrated in the low end (think 80s, 90s) due to the threshold nature of iodine & IQ, this employment increase means that already, a century ago, people in the low-end range were having trouble being employed.
↑ comment by gwern · 2013-01-17T20:53:07.542Z · LW(p) · GW(p)
A 2012 Jones followup: "Will the intelligent inherit the earth? IQ and time preference in the global economy"
Social science research has shown that intelligence is positively correlated with patience and frugality, while growth theory predicts that more patient countries will save more. This implies that if nations differ in national average IQ, countries with higher average cognitive skills will tend to hold a greater share of the world’s tradable assets. I provide empirical evidence that in today’s world, countries whose residents currently have the highest average IQs have higher savings rates, higher ratios of net foreign assets to GDP, and higher ratios of U.S. Treasuries to GDP. These nations tend to be in East Asia and its offshoots. The relationship between national average IQ and net foreign assets has strengthened since the end of Bretton Woods.
...And time preference differs across countries in part because psychometric intelligence, a key predictor of patient behavior, differs persistently across countries (Wicherts et al., 2010a,b; Jones and Schneider, 2010)....John Rae (1834) provides a precursor of the approach presented here: Chapter Six of his treatise (cited in Becker and Mulligan, 1997, and Frederick et al., 2002) focuses on individual determinants of savings, including differences in rates of time preference, while his Chapter Seven draws out the cross-country implications...A recent meta-analysis of 24 studies by Shamosh and Gray concluded: “[A]cross studies, higher intelligence was associated with lower D[elay] D[iscounting]...” Their meta-study drew on experiments with preschool children and college students, drug addicts and relatively healthy populations: With few exceptions, they found a reliable relationship between measured intelligence and patience. And recent work by economists (Frederick, 2005; Benjamin et al. 2006; Burks et al, 2009; Chabris et al., 2007) has demonstrated that low-IQ individuals tend to act in a more “behavioral,” more impulsive fashion when facing decisions between smaller rewards sooner versus larger rewards later.
↑ comment by gwern · 2011-09-03T18:46:29.320Z · LW(p) · GW(p)
This is related, but not the research talked about. The Terman Project apparently found that the very highest IQ cohort had many more patents than the lower cohorts, but this did not show up as massively increased lifetime income.
Compare the bottom right IQ graph with SMPY results which show the impact of ability (SAT-M measured before age 13) on publication and patent rates. Ability in the SMPY graph varies between 99th and 99.99th percentile in quintiles Q1-Q5. The variation in IQ between the bottom and top deciles of the Terman study covers a similar range. The Terman super-smarties (i.e., +4 SD) only earned slightly more (say, 15-20% over a lifetime) than the ordinary smarties (i.e., +2.5 SD), but the probability of earning a patent (SMPY) went up by about 4x over the corresponding ability range.
http://infoproc.blogspot.com/2011/04/earnings-effects-of-personality.html
Unless we want to assume those 4x extra patents were extremely worthless, or that the less smart groups were generating positive externalities in some other mechanism, this would seem to imply that the smartest were not capturing anywhere near the value they were creating - and hence were generating significant positive externalities.
EDIT: Jones 2011 argues much the same thing - economic returns to IQ are so low because so much of it is being lost to positive externalities.
Replies from: MichaelBishop, NancyLebovitz
↑ comment by Mike Bishop (MichaelBishop) · 2012-03-28T15:44:11.265Z · LW(p) · GW(p)
On its own, I don't consider this strong evidence for the greater productivity of the IQ elite. If they were contributions to open-source projects, that would be one thing. But people doing work that generates patents which don't lead to higher income - that raises some questions for me. Is it possible that extremely high IQ is associated with a tendency to become "addicted" to a game like patenting? Added: I think Gwern and I agree more than many people might think reading this comment.
Replies from: gwern
↑ comment by gwern · 2012-03-28T16:11:27.199Z · LW(p) · GW(p)
If they were contributions to open-source projects, that would be one thing.
Open-source contribution is even more gameable than patents: at least with patents there's a human involved, checking to some degree that there is at least a little new stuff in the patent, while no one and nothing stops you from putting a worthless repo up on Github reinventing wheels poorly.
But people doing work that generates patents which don't lead to higher income - that raises some questions for me.
The usual arrangement with, say, industrial researchers is that their employers receive the unpredictable dividends from the patents in exchange for forking over regular salaries in fallow periods...
Is it possible that extremely high IQ is associated with a tendency to become "addicted" to a game like patenting?
I don't see why you would privilege this hypothesis.
Replies from: MichaelBishop
↑ comment by Mike Bishop (MichaelBishop) · 2012-03-28T17:00:28.729Z · LW(p) · GW(p)
Let me put it this way. Before considering the Terman data on patents you presented, I already thought IQ would be positively correlated with producing positive externalities and that there was a mostly one-way causal link from the former to the latter. I expected the correlation between patents and IQ. What was new to me was the lack of correlation between IQ and income, and the lack of correlation between patents and income. (Correction added: there was actually a fairly strong correlation between IQ and income, just not between income and patents, conditional on IQ I think.) Surely more productive industrial researchers are generally paid more. Many firms even give explicit bonuses on a per-patent basis. So for me, given my priors, the Terman data you presented shifts me slightly against (correction: does not shift me for or against) the hypothesis that at the highest IQ levels, higher IQ continues to be associated with producing more positive externalities. Still, I think increasing people's IQ, even the already gifted, probably has strong positive externalities unless the method for increasing it also has surprising (to me) side-effects.
I agree that measuring open-source contributions requires more than merely counting lines of code written. But I did want to highlight the fact that the patent system is explicitly designed to increase the private returns for a given innovation. I don't think that there is a strong correlation between the companies/industries which are patenting the most, and the companies/industries, which are benefiting the world the most.
Replies from: gwern↑ comment by gwern · 2012-03-28T17:10:56.622Z · LW(p) · GW(p)
Surely more productive industrial researchers are generally paid more. Many firms even give explicit bonuses on a per patent basis.
Yes, but the bonuses I've heard of are in the hundreds to thousands of dollars range, at companies committed to patenting like IBM. This isn't going to make a big difference to lifetime incomes, where the range is 1-3 million dollars, although the data may be rich enough to spot these effects (and how many patents is even '4x'? 4 patents on average per person?), and I suspect these bonuses come at the expense of salaries & benefits. (I know that's how I'd regard it as a manager: shifting risk from the company to the employee.)
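A rough back-of-the-envelope calculation makes the point concrete. All three numbers below are illustrative assumptions (a mid-range bonus, a plausible excess patent count, and the midpoint of the 1-3 million dollar lifetime-income range), not figures from the Terman data:

```python
# How much could per-patent bonuses plausibly move lifetime income?
# All numbers are illustrative assumptions, not data from the Terman study.
bonus_per_patent = 2_000      # assumed mid-range of "hundreds to thousands of dollars"
extra_patents = 4             # assumed excess patents for the highest-IQ cohort
lifetime_income = 2_000_000   # assumed midpoint of the 1-3 million dollar range

bonus_total = bonus_per_patent * extra_patents
share = bonus_total / lifetime_income
print(f"total bonus: ${bonus_total}, {share:.2%} of lifetime income")
```

Even under generous assumptions the bonuses amount to well under one percent of lifetime income, which is why they can't explain much of any IQ-income relationship.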
And I think you're forgetting that income did increase with each standard deviation by an amount somewhat comparable to my suggested numbers for patents, so we're not explaining why IQ did not increase income whatsoever, but why it increased it relatively little, why the patenters apparently captured relatively little of the value.
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2012-03-28T17:38:57.646Z · LW(p) · GW(p)
Whoa, I did allow myself to misread/misremember your initial comment a bit so I'll dial it back slightly. The fact that even at the highest levels IQ is still positively correlated with income is important, and it's what I would have expected, so the overall story does not undermine my support for the hypothesis that at the highest IQ levels, higher IQ individuals produce more positive externalities. I apologize for getting a bit sloppy there.
I would guess that if you had data from people with the same job description at the same company the correlation between IQ, patents, and income would be even higher.
↑ comment by NancyLebovitz · 2012-03-28T16:23:45.277Z · LW(p) · GW(p)
Perhaps economic returns to IQ are so low because there are other skills which are good for getting economic returns, and those skills don't correlate strongly with IQ.
Replies from: gwern↑ comment by gwern · 2012-03-28T16:44:40.042Z · LW(p) · GW(p)
Yes, this is consistent with the large income changes seen with some of the personality traits. If you have time, you could check the paper to see if that explains it: perhaps the highest cohort disproportionately went into academia or was low on Extraversion or something, or those subsets were entirely responsible for the excess patents.
↑ comment by gwern · 2016-02-17T23:41:34.905Z · LW(p) · GW(p)
If anyone is curious, I am moving my bibliography here to http://www.gwern.net/Embryo%20selection#value-of-iq and I will be keeping that updated in the future rather than continue this thread further.
↑ comment by AlexMennen · 2011-09-02T23:07:05.997Z · LW(p) · GW(p)
they looked at (1) the economic gains to countries with higher average IQ, (2) the average gains to individuals with higher IQ, and concluded that (3) people with high IQ create vast amounts of positive externality, much more than they capture as individuals
How did they establish that economic gains are influenced by average IQ, rather than both being influenced by some other factor?
↑ comment by erniebornheimer · 2011-09-02T22:28:01.553Z · LW(p) · GW(p)
Sounds implausible to me, so I'm very interested in a citation (or pointers to similar material). If true, I'm going to have to do a lot of re-thinking.
↑ comment by DanielLC · 2011-09-02T23:07:58.443Z · LW(p) · GW(p)
Perhaps IQ correlates weakly with intelligence. If there are lots of people with high IQ, there are probably lots of intelligent people, but they're not necessarily the same people. Hence, the countries with high IQ do well, but not the people.
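This is a standard statistical effect worth seeing in miniature: if IQ is a noisy proxy for intelligence, averaging over a country washes the noise out, so country-level correlations can be near-perfect even while individual-level correlations stay modest. A toy simulation, with entirely made-up numbers chosen only to illustrate the mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_people = 50, 2000

# Latent intelligence varies both between and within countries.
country_effect = rng.normal(0, 5, n_countries)
intelligence = country_effect[:, None] + rng.normal(0, 15, (n_countries, n_people))

# Measured IQ is a noisy proxy; life outcomes track latent intelligence.
iq = intelligence + rng.normal(0, 15, intelligence.shape)
outcome = intelligence + rng.normal(0, 15, intelligence.shape)

# Individual-level correlation is diluted by measurement noise...
indiv_corr = np.corrcoef(iq.ravel(), outcome.ravel())[0, 1]
# ...but averaging within countries cancels the noise almost entirely.
country_corr = np.corrcoef(iq.mean(axis=1), outcome.mean(axis=1))[0, 1]
print(f"individual r = {indiv_corr:.2f}, country-level r = {country_corr:.2f}")
```

The country-level correlation comes out far stronger than the individual one, even though the same underlying relationship generated both.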
Replies from: None↑ comment by [deleted] · 2011-09-03T21:41:28.785Z · LW(p) · GW(p)
I think you really need to see this google tech talk by Steven Hsu.
↑ comment by [deleted] · 2011-09-03T21:34:19.804Z · LW(p) · GW(p)
But naturally doing everything faster would be pretty freaking awesome in itself.
- increased yearly economic growth (consequently higher average living standards since babies still take 9 months to make)
- it would help everyone cram much more living into their lifespan.
- it would help experts deal much better with events that aren't sped up. Say, an oil leak in the Gulf of Mexico.
- medical advances would arrive earlier meaning that lots of people who would otherwise have died might live for a few more productive (sped up!) years.
But I'm having way too much fun nitpicking so I'll just stop here. :)
Replies from: fortyeridania, army1987↑ comment by fortyeridania · 2012-09-22T15:17:16.557Z · LW(p) · GW(p)
it would help everyone cram much more living into their lifespan
Yes, especially this one.
Put differently, imagine a pill that made North Americans cognitively slower. Wouldn't that be an obvious step down (for reasons symmetric to the ones you've highlighted)?
↑ comment by A1987dM (army1987) · 2012-09-23T11:50:53.794Z · LW(p) · GW(p)
it would help everyone cram much more living into their lifespan
ISTM that there are lots of people who don't want to cram more living into their lifespan, given the time they spend watching TV and stuff like that.
↑ comment by NancyLebovitz · 2011-09-01T14:31:36.146Z · LW(p) · GW(p)
I think it would take more than a day for people to get possible good effects of the change.
A better memory might enable people to realize that they have made the same mistake several times. More processing power might enable them to realize that they have better strategies in some parts of their lives than others, and explore bringing the better strategies into more areas.
↑ comment by soreff · 2011-09-01T14:24:27.794Z · LW(p) · GW(p)
I'm not convinced. One very simple gain from
more memory capacity and processing speed
is the ability to consider more alternatives. These may be alternative explanations, designs, or courses of action. If I consider three alternatives where before I could only consider two, if the third one happens to be better than the other two, it is a real gain. This applies directly to the case of
carry on using the same ineffective medical treatments because of failure to think of alternative causes
↑ comment by BillyOblivion · 2011-09-05T12:52:51.400Z · LW(p) · GW(p)
Don't confuse time-to-solution with correctness. Speed and the amount of facts at hand will not give you a good result if your fundamental assumptions (aka your algorithm) are wrong.
You cannot make up in quantity what you lose on each transaction, as the dot-com folks proved repeatedly.
comment by CaveJohnson · 2011-09-23T09:50:22.976Z · LW(p) · GW(p)
One of my favorite genres in the prestige press is the Self-Refuting Article. These are articles that contain all the facts necessary to undermine the premise of the piece, but reporters, editors, and readers all conspire together in an act of collective stupidity to Not Get the Joke
--Steve Sailer
Replies from: lessdazed, RobinZ↑ comment by RobinZ · 2011-09-23T20:12:42.230Z · LW(p) · GW(p)
I don't quite see how this is a Rationality Quote.
Replies from: CharlieSheen↑ comment by CharlieSheen · 2011-09-24T15:40:08.880Z · LW(p) · GW(p)
Tribal attire by another name.
comment by lukeprog · 2011-09-08T01:58:27.047Z · LW(p) · GW(p)
If you cannot calculate you cannot speculate on future pleasure and your life will not be that of a human, but that of an oyster or a jellyfish.
Plato, Philebus
Replies from: None↑ comment by [deleted] · 2011-09-08T02:07:44.147Z · LW(p) · GW(p)
I wish I were a jelly fish
That cannot fall downstairs:
Of all the things I wish to wish
I wish I were a jelly fish
That hasn't any cares,
And doesn't even have to wish
'I wish I were a jelly fish
That cannot fall downstairs.'
G.K. Chesterton
Replies from: lessdazed↑ comment by lessdazed · 2011-09-08T02:23:18.469Z · LW(p) · GW(p)
If I were a jelly fish,
Ya ha deedle deedle, bubba bubba deedle deedle dum.
All day long I'd biddy biddy bum.
If I were a jelly fish.
I wouldn't have to work hard.
Ya ha deedle deedle, bubba bubba deedle deedle dum.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-09-08T02:29:59.511Z · LW(p) · GW(p)
I prefer if I were a deep one.
(If you aren't familiar with this song, I strongly recommend looking at all of Shoggoth on the Roof.)
Replies from: lessdazed↑ comment by lessdazed · 2011-09-08T02:31:15.718Z · LW(p) · GW(p)
A gentle introduction to the mythos.
comment by Richard_Kennaway · 2011-09-10T22:12:14.053Z · LW(p) · GW(p)
To say that life evolves because of an élan vital is on a par with saying that a locomotive runs because of an élan locomotif.
Julian Huxley, Darwinism To-Day
Replies from: gwern↑ comment by gwern · 2011-09-10T23:06:34.043Z · LW(p) · GW(p)
A nod to Molière's satirical line which coined the 'dormitive fallacy':
Why Opium produces sleep: ... Because there is in it a dormitive power.
(Le Malade imaginaire (1673), Act III, sc. iii)
comment by [deleted] · 2011-09-13T22:21:34.780Z · LW(p) · GW(p)
Ars longa, vita brevis, occasio praeceps, experimentum periculosum, iudicium difficile.
-Hippocrates
Replies from: None, NihilCredo, ArisKatsaris↑ comment by [deleted] · 2011-09-13T22:27:34.350Z · LW(p) · GW(p)
[The] art is long,
life is short,
opportunity fleeting,
experiment dangerous,
judgment difficult.
Considering the beast that some hope to kill by sharpening people's mind-sticks on LW, this sounds applicable, wouldn't you agree?
Replies from: Nisan↑ comment by Nisan · 2011-09-17T06:49:37.391Z · LW(p) · GW(p)
Upvote for "mind-sticks".
Replies from: pedanterrific↑ comment by pedanterrific · 2011-09-24T02:55:12.336Z · LW(p) · GW(p)
Agreed. Best analogy ever.
↑ comment by NihilCredo · 2011-09-17T03:25:30.774Z · LW(p) · GW(p)
Why is a quote by a Greek, about whom our main sources are also Greek, being posted in Latin?
Replies from: None, lessdazed↑ comment by [deleted] · 2011-09-17T11:32:19.040Z · LW(p) · GW(p)
The saying "Ars longa, vita brevis" is a well-known saying in my language in its Latin form. It seems to be the most common rendering in English as well.
↑ comment by ArisKatsaris · 2011-09-23T20:39:26.939Z · LW(p) · GW(p)
Here's the ancient greek version, to appease NihilCredo:
Ὁ μὲν βίος βραχύς, ἡ δὲ τέχνη μακρή, ὁ δὲ καιρὸς ὀξύς, ἡ δὲ πεῖρα σφαλερή, ἡ δὲ κρίσις χαλεπή
Replies from: lessdazed
comment by Risto_Saarelma · 2011-09-06T05:40:32.584Z · LW(p) · GW(p)
But I had hardly entered the room where the masters were playing when I was seized with what may justly be described as a mystical experience. I seemed to be looking on at the tournament from outside myself. I saw the masters—one, shabby, snuffy and blear-eyed; another, in badly fitting would-be respectable shoddy; a third, a mere parody of humanity, and so on for the rest. These were the people to whose ranks I was seeking admission. "There, but for the grace of God, goes Aleister Crowley," I exclaimed to myself with disgust, and there and then I registered a vow never to play another serious game of chess. I perceived with praeternatural lucidity that I had not alighted on this planet with the object of playing chess.
-- Aleister Crowley
Replies from: Raemon, cousin_it↑ comment by Raemon · 2011-09-06T17:49:15.344Z · LW(p) · GW(p)
I recently contemplated learning to play chess better (not to make an attempt at mastery, but to improve enough so I wasn't so embarrassed about how bad I was).
Most of my motivation for this was an odd signalling mechanism: People think of me as a smart person, and they think of smart people as people who are good at chess, and they are thus disappointed with me when it turns out I am not.
But in the process of learning, I realized something else: I dislike chess, as compared to say, Magic: The Gathering, because chess is PURE strategy, whereas Magic or StarCraft have splashy images and/or luck that provides periodic dopamine rushes. Chess is only mentally rewarding for me at two moments: when I capture an enemy piece, or when I win. I'm not good enough to win against anyone who plays chess remotely seriously, so when I get frustrated, I just go capturing enemy pieces even though it's a bad play, so I can at least feel good about knocking over an enemy bishop.
What I found most significant, though, was the realization that this fundamental not enjoying the process of thinking out chess strategies gave me some level of empathy for people who, in general, don't like to think. (This is most non-nerds, as far as I can tell). Thinking about chess is physically stressful for me, whereas thinking about other kinds of abstract problems is fun and rewarding purely for its own sake.
Replies from: FiftyTwo, NancyLebovitz, PhilGoetz, lessdazed↑ comment by FiftyTwo · 2011-09-07T23:03:50.718Z · LW(p) · GW(p)
My issue with chess is that the skills are non-transferable. As far as I can tell the main difference between good and bad players is memorisation of moves and strategies, which I don't find very interesting and can't be transferred to other more important areas of life. Whereas other games where tactics and reaction to the situation are more important can have benefits in other areas.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-09-10T13:39:33.914Z · LW(p) · GW(p)
I think the literature disagrees. E.g. good players are less prone to confirmation bias and I think that this is transferable. (Google Scholar would know better.) Introspectively I feel like playing chess makes me a better thinker. Chess is memorization of moves and strategies only in the sense that guitar is memorization of scales and chords. You need them to play well but they're not sufficient.
Replies from: gwern↑ comment by gwern · 2011-09-10T20:49:39.329Z · LW(p) · GW(p)
E.g. good players are less prone to confirmation bias
True; see Cowley & Byrne's 2004 "Chess Masters' Hypothesis Testing":
But experimental evidence from studies of reasoning shows that people often find falsification difficult. We suggest that domain expertise may facilitate falsification. We consider new experimental data about chess experts’ hypothesis testing. The results show that chess masters were readily able to falsify their plans. They generated move sequences that falsified their plans more readily than novice players, who tended to confirm their plans. The finding that experts in a domain are more likely to falsify their hypotheses has important implications for the debate about human rationality.
I think that this is transferable
Well... The chess literature and general literature on learning rarely finds transfer. From the Nature coverage of that study:
Byrne and Cowley now hope to study developing chess players to find out how and when they develop falsification strategies. They also want to test chess masters in other activities that involve testing hypotheses - such as logic problems - to discover if their falsification skill is transferable. On this point Orr is more sceptical: "I've never felt that chess skills cross over like that, it's a very specific skill."
Checking Google Scholar, I see only one apparent followup, the 2005 paper by the same authors, "When falsification is the only path to truth":
Can people consistently attempt to falsify, that is, search for refuting evidence, when testing the truth of hypotheses? Experimental evidence indicates that people tend to search for confirming evidence. We report two novel experiments that show that people can consistently falsify when it is the only helpful strategy. Experiment 1 showed that participants readily falsified somebody else’s hypothesis. Their task was to test a hypothesis belonging to an ‘imaginary participant’ and they knew it was a low quality hypothesis. Experiment 2 showed that participants were able to falsify a low quality hypothesis belonging to an imaginary participant more readily than their own low quality hypothesis. The results have important implications for theories of hypothesis testing and human rationality.
While interesting and very relevant to some things (like programmers' practice of 'rubber ducking' - explaining their problem to an imaginary creature), it doesn't directly address chess transfer.
↑ comment by NancyLebovitz · 2011-09-10T21:05:18.997Z · LW(p) · GW(p)
What I found most significant, though, was the realization that this fundamental not enjoying the process of thinking out chess strategies gave me some level of empathy for people who, in general, don't like to think.
LW has put a lot of thought into the problem of akrasia, but nothing I can think of on how to induce more pleasure from thinking.
Replies from: lessdazed↑ comment by lessdazed · 2011-09-10T21:39:50.008Z · LW(p) · GW(p)
I think rationality helps to avoid making mistakes, and avoiding feeling unnecessarily bad, but not too much to the positive side of things.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2011-09-10T22:08:13.447Z · LW(p) · GW(p)
I agree -- pleasure in thinking might not be part of the study of rationality, but it could very much be part of raising the sanity waterline.
↑ comment by PhilGoetz · 2011-09-10T15:25:45.047Z · LW(p) · GW(p)
What I found most significant, though, was the realization that this fundamental not enjoying the process of thinking out chess strategies gave me some level of empathy for people who, in general, don't like to think.
Wow - I have a similar response to chess, but never drew that analogy. Thanks.
↑ comment by lessdazed · 2011-09-10T21:37:48.405Z · LW(p) · GW(p)
Learn to play Go, then even if your chess ability is lower, people won't be able to judge your Go ability.
Go is roughly a game based on encircling the other's army before his or her army encircles yours. A bit of thought about the meaning of the word 'encircle' should hint at how awesome that can be.
If your gaming heart has been more oriented towards WWII operational and strategic-level games, Go is the game for you. If chess incorporates the essence of WWI, Go incorporates the essence of mobile warfare in WWII, if the part of the essence represented by Poker is removed.
Go=an abstraction of mobile warfare - Poker
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-09-12T01:31:56.491Z · LW(p) · GW(p)
Chess is battle, Go is war. I don't see how it's very much about mobility rather than scale.
Replies from: lessdazed↑ comment by lessdazed · 2011-09-12T06:14:25.084Z · LW(p) · GW(p)
What real scale and era, if any, is even roughly modeled?
Replies from: gwern, Will_Newsome↑ comment by gwern · 2011-09-12T06:40:25.220Z · LW(p) · GW(p)
Scott Boorman in The Protracted Game tried to model Mao with Go, and in particular, the anti-Japanese campaign in Manchuria. It was an interesting book. I'm not convinced that Go is a real analogy beyond beginner-level tactics, but he did convince me that Go modeled insurgencies much better than, say, Chess.
↑ comment by Will_Newsome · 2011-09-12T06:22:34.750Z · LW(p) · GW(p)
Chess: Battle of Chi Bi is exemplary. (I am not sure if that is at all informative to people who don't already know a ridiculous amount about three kingdoms era China.) I don't feel qualified to say anything about Go.
Replies from: lessdazed↑ comment by lessdazed · 2011-09-12T07:16:33.522Z · LW(p) · GW(p)
Why did you choose that battle? Subterfuge was prominent in it.
Chess may resemble some other pitched battles from before the twentieth century, but it doesn't resemble modern war at all.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-09-12T07:43:51.045Z · LW(p) · GW(p)
By subterfuge do you mean Huang Gai's fire ships? I think of it more as a subtle pawn sacrifice which gets greedily accepted which allows for the invasion of Zhou Yu's forces which starts a king hunt that forces Cao Cao to give up lots of material in the form of ships and would have resulted in his getting mated if he hadn't a land to retreat to (and if he hadn't gotten kinda lucky). I thought I remembered Pang Tong doing something interesting and symbolic somewhere in there (a counterattack on the opposite wing to draw away some of Cao Cao's defending pieces) but I don't remember if that was fictional or not.
↑ comment by cousin_it · 2011-09-06T20:34:05.827Z · LW(p) · GW(p)
This is an awesome quote that captures an important truth, the opposite of which is also an important truth :-) If I were choosing a vocation by the way its practitioners look and dress, I would never take up math or programming! And given how many people on LW are non-neurotypical, I probably wouldn't join LW either. The desire to look cool is a legitimate desire that can help you a lot in life, so by all means go join clubs whose members look cool so it rubs off on you, but also don't neglect clubs that can help you in other ways.
comment by gwern · 2011-09-05T19:44:47.624Z · LW(p) · GW(p)
"Lessing, the most honest of theoretical men, dared to say that he took greater delight in the quest for truth than in the truth itself."
--Friedrich Nietzsche, The Birth of Tragedy (1872); cf. "Intellectual Hipsters and Meta-Contrarianism"
comment by Maniakes · 2011-09-02T20:49:38.853Z · LW(p) · GW(p)
I beseech you, in the bowels of Christ, think it possible that you may be mistaken.
-- Oliver Cromwell
Replies from: JoshuaZ, None, Will_Newsome↑ comment by [deleted] · 2011-11-17T23:16:06.847Z · LW(p) · GW(p)
Cromwell's rule is neatly tied to that phrase.
↑ comment by Will_Newsome · 2011-09-10T13:46:55.289Z · LW(p) · GW(p)
(Rephrasing: "For the love of Cthulhu, take a second to notice that you might be confused.")
comment by listic · 2011-09-02T13:42:17.699Z · LW(p) · GW(p)
True courage is loving life while knowing all the truth about it.
-- Sergey Dovlatov
(translation is mine; can you propose a better translation from Russian?)
comment by lionhearted (Sebastian Marshall) (lionhearted) · 2011-09-01T23:34:16.342Z · LW(p) · GW(p)
I moved out of the hood for good, you blame me?
Niggas aim mainly at niggas they can't be.
But niggas can't hit niggas they can't see.
I'm out of sight, now I'm out of they dang reach.
-- Dr. Dre, "The Watcher"
comment by cwillu · 2011-09-05T01:43:36.693Z · LW(p) · GW(p)
[...] Often I find that the best way to come up with new results is to find someone who's saying something that seems clearly, manifestly wrong to me, and then try to think of counterarguments. Wrong people provide a fertile source of research ideas.
-- Scott Aaronson, Quantum Computing Since Democritus (http://www.scottaaronson.com/democritus/lec14.html)
Replies from: PhilGoetz, Raw_Power↑ comment by Raw_Power · 2011-09-05T02:07:13.861Z · LW(p) · GW(p)
Reversed Stupidity?
Replies from: Kaj_Sotala, JoshuaZ, Desrtopa, lessdazed, shokwave↑ comment by Kaj_Sotala · 2011-09-10T08:06:04.865Z · LW(p) · GW(p)
In writing, I often notice that it's easier to let someone else come up with a bad draft and then improve it - even if "improving" means "rewrite entirely". Seeing a bad draft provides a basic starting point for your thoughts - "what's wrong here, and how could it be done better". Contrast this to the feeling of "there's an infinite number of ways by which I could try to communicate this, which one of them should be promoted to attention" that a blank paper easily causes if you don't already have a starting point in mind.
You could explain the phenomenon either as a constraining of the search space to a more tractable one, or by one of the ev-psych theories that says we have specialized modules for finding flaws in the arguments of others. Or both.
Over in the other thread, Morendil mentioned that a lot of folks who have difficulty with math problems don't have any good model of what to do and end up essentially just trying stuff out at random. I wonder if such folks could be helped by presenting them with an incorrect attempt to answer a problem, and then asking them to figure out what's wrong with it.
Replies from: Raw_Power↑ comment by Raw_Power · 2011-09-16T15:23:25.521Z · LW(p) · GW(p)
Here are two excellent examples of what you just explained, as per the Fiction Identity Postulate:
*Doom, Consequences of Evil as the "bad draft", and this as the done-right version.
*Same for this infuriating Chick Tract and this revisiting of it (it's a Tear Jerker)
*And everyone is familiar with the original My Little Pony works VS the Friendship Is Magic continuity.
↑ comment by JoshuaZ · 2011-09-05T02:32:26.794Z · LW(p) · GW(p)
I don't think so. In this context, it seems that Scott is talking about in this context making his mathematical intuitions more precise by trying to state explicitly what is wrong with the idea. He seems to generally be doing this in response to comments by other people sort of in his field (comp sci) or connected to his field (physics and math ) so he isn't really trying to reverse stupidity.
↑ comment by Desrtopa · 2011-09-05T21:52:40.985Z · LW(p) · GW(p)
People come up with ideas that are clearly and manifestly wrong when they're confused about the reality. In some cases, this is just personal ignorance, and if you ask the right people they will be able to give you a solid, complete explanation that isn't confused at all (evolution being a highly available example.)
On the other hand, they may be confused because nobody's map reflects that part of the territory clearly enough to set them straight, so their confusion points out a place where we have more to learn.
Replies from: Raw_Power↑ comment by shokwave · 2011-09-05T12:33:16.382Z · LW(p) · GW(p)
Reversed stupidity isn't intelligence, but it's not a bad place to start.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-09-05T12:56:41.342Z · LW(p) · GW(p)
It is a bad place to start. The intended sense of "reversed" in "reversed stupidity" is that you pick the opposite, as opposed to retracting the decisions that led to privileging the stupid choice. The opposite of what is stupid is as arbitrary as the stupid thing itself, if you have considerably more than two options.
Replies from: PhilGoetz, Tomthefolksinger↑ comment by Tomthefolksinger · 2011-09-09T00:07:20.791Z · LW(p) · GW(p)
Not so: I can get very inventive trying to counter what I perceive as wrong or offensive. From disproving sources to offering countering and contradictory postulations, all are better when flung back. One of my great joys is when my snotty, off-hand comment makes someone go after real data to prove me wrong. If this is applied to some theoretical position, who knows where it could lead you. I'm pretty sure there is at least one Edison joke about this.
comment by AdeleneDawner · 2011-09-01T21:54:09.523Z · LW(p) · GW(p)
I know of no society in human history that ever suffered because its people became too desirous of evidence in support of their core beliefs.
-Sam Harris
Replies from: Nominullcomment by Will_Newsome · 2011-09-03T10:01:41.104Z · LW(p) · GW(p)
Replies from: Will_NewsomeThere are a thousand hacking at the branches of evil to one who is striking at the root.
↑ comment by Will_Newsome · 2011-09-03T10:10:38.228Z · LW(p) · GW(p)
(Though if a thousand people tried striking at the root at once they'd undoubtedly end up striking each other. (I wish there was something I could read that non-syncretically worked out analogies between algorithmic information theory and game-theory/microeconomics.))
Replies from: Teal_Thanatos↑ comment by Teal_Thanatos · 2011-09-04T23:24:39.894Z · LW(p) · GW(p)
That sounds awfully negative and I can't see any basis for it apart from negativity. I.e., on what basis do you declare that people striking the root are any more likely to strike each other than people striking the branches?
While you might use the analogy to declare that the root of the problem is smaller, please note that there are trees (like Giant sequoias ) which have root systems that far outdistance the branch width.
Replies from: BillyOblivion↑ comment by BillyOblivion · 2011-09-05T12:41:46.220Z · LW(p) · GW(p)
If you picture the metaphorical great oak of malignancy with branches tens of yards in radius, and a trunk with roots (at the top of the trunk) only about 10 feet in diameter, you face one of those square-of-the-distance problems in terms of axe-swinging space.
This is what happens when you take the comments of romantic goofballs and slam them up against ontological rationalists who just might be borderline aspies or shadow autists.
I guess I should point out for the sake of clarity that the romantic goofball has not yet posted on this thread, and given the advanced interaction with entropy is unlikely to do so. Unless the Hindus, Buddhists and a few others are more accurate than the Catholics and Atheists.
comment by sabre51 · 2011-09-02T13:36:08.531Z · LW(p) · GW(p)
I believe no discovery of fact, no matter how trivial, can be wholly useless to the race, and no trumpeting of falsehood, no matter how virtuous in intent, can be anything but vicious... I believe in the complete freedom of thought and speech- alike for the humblest man and the mightiest, and in the utmost freedom of conduct that is consistent in living in an organized society... But the whole thing can be put very simply. I believe it is better to tell the truth than to lie. I believe it is better to be free than to be a slave. And I believe it is better to know than be ignorant.
-HL Menken
Replies from: brazil84, Vladimir_Nesov↑ comment by brazil84 · 2011-09-03T00:03:08.234Z · LW(p) · GW(p)
From an evolutionary perspective, I would have to disagree. Believing that one's children are supremely cute; that one's spouse is one's soulmate; or even that an Almighty Being wants you to be fruitful and multiply -- these are all beliefs which are a bit shaky on rationalist grounds but which arguably increase the reproductive fitness in the individuals and groups who hold them.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-09-03T16:09:27.870Z · LW(p) · GW(p)
ERROR: POSTULATION OF GROUP SELECTION DETECTED
Replies from: wedrifid, Incorrect, Will_Newsome, brazil84↑ comment by wedrifid · 2011-09-03T16:20:21.379Z · LW(p) · GW(p)
Barely, as an afterthought.
If you want to worry about hints of superstition look to the anthropomorphizing of TDT that is starting to crop up. This one was really scraping the bottom the barrel as far as dire yet predictable errors go.
↑ comment by Incorrect · 2011-09-29T23:34:20.366Z · LW(p) · GW(p)
I understand why group selection is problematic: Individual selection trumps it.
However, when group and individual selective pressure coincide, the mutation could survive to the point where it exists in a group at which point the group will have better fitness because of the group selective pressure.
Is this incorrect?
↑ comment by Will_Newsome · 2011-09-10T13:43:24.341Z · LW(p) · GW(p)
Don't reverse stupidity too much: http://necsi.edu/research/evoeco/spatialpatterns.html (actual quantitative papers can be found by those who are interested; NECSI has some pretty cool stuff).
Replies from: Jack↑ comment by Jack · 2011-09-10T13:48:16.637Z · LW(p) · GW(p)
What is new here? It reads like the same old, wrong, group selection argument.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-09-10T13:52:09.437Z · LW(p) · GW(p)
Huh? But, like, spatial patterns and shit. Okay, I'll find something prestigious or something. Here's a nice short position piece: http://www.necsi.edu/research/evoeco/nature08809_proof1.pdf Bam, Greek symbols and Nature, can't argue with that.
ETA: Here's a lot of fancy words and mathy shit: http://www.necsi.edu/research/multiscale/ . I don't know how to read it but I do know that it agrees with my preconceptions, and whenever my intuition and Greek symbols align I know I'm right. It's like astrology but better.
ETA2: Delicious pretty graphs and more Greek shit: http://www.necsi.edu/research/multiscale/PhysRevE_70_066115.pdf . Nothing to do with evolution but it's so impressive looking that it doesn't matter, right?
Replies from: Richard_Kennaway, Jack, Jack↑ comment by Richard_Kennaway · 2011-09-10T19:51:56.921Z · LW(p) · GW(p)
Whatever you're trying to say, you aren't helping it by your presentation. I mean:
Bam, Greek symbols and Nature, can't argue with that.
Ordinarily that would be a rhetorical way of saying that you can and do argue with it (as do the authors of the paper that that was a response to), but you seem to be citing it in support of your previous comment. So, what is your actual point?
Replies from: lessdazed↑ comment by lessdazed · 2011-09-10T21:44:43.380Z · LW(p) · GW(p)
Whatever you're trying to say, you aren't helping it by your presentation.
He knows, he's Bruceing with his presentation.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-09-10T23:15:23.488Z · LW(p) · GW(p)
Eh, sorta. (Voted up.) But I think the psychology is somewhat different. It's like, "I'm going to be explicit about what signalling games I am participating in so that when you have contempt for me when I explicitly engage in them I get to feel self-righteous for a few seconds because I know that you are being hypocritical". On the virtuous side, making things explicit is important for good accounting. Ideally I'd like to make it easier for me to be damned when I am truly unjustified. (I just wish there were wiser judges, better institutions than the ones I currently have access to.)
This comment exemplifies itself.
Replies from: lessdazed↑ comment by lessdazed · 2011-09-10T23:17:53.284Z · LW(p) · GW(p)
I see what you did there.
ETA: you didn't need to edit to add "This comment exemplifies itself."
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-09-11T00:25:01.575Z · LW(p) · GW(p)
Wow, it's been a long time since someone chided me for pointing out the obvious! Heh. Point taken. (Sorry about editing after the fact, this almost never causes problems and is pretty useful but it does blow up once every 100 comments or so.)
Replies from: lessdazed↑ comment by Jack · 2011-09-10T23:52:09.156Z · LW(p) · GW(p)
After your edits: Do you have a problem with my question? It was clear and straightforward- I wanted to know what was new in the paper you linked. I was not trying to start some kind of status battle with you. I was not signaling anything. You indicated you had reason to believe previous findings on group selection were wrong- I asked you to explain the argument and you responded with what looks like rudeness and sarcasm. I don't know if you were intending to direct that rudeness and sarcasm at me or if you're just on a 48 hour Adderall binge. Either way, I suggest you take a nap.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-09-10T23:58:40.012Z · LW(p) · GW(p)
It wasn't directed at you at all; my sincere apologies for not making that clear. I don't have a problem with your question. It was more like "ahhhh, despair, it would take me at least two minutes to think about how to paraphrase the relevant arguments, but I don't have energy to do that, but I do want to somehow signal that it's not just tired old group selection arguments because I don't want NECSI to have been done injustice by my unwillingness to explain their ideas, but if I do that kind of signalling then I'm participating in a game that is plausibly in the reference class of propping up decision policies that are suboptimal, so I'll just do it in a really weird way that is really discreditable so that I can get out of this double bind while still being able to say in retrospect that on some twisted level I at least tried to do the right thing." ETA: Well, the double negative version of that which involves lots of fear of bad things, not desire for good things. I am not virtuous and have nothing to be humble about.
This is what Eliezer's talking about in HP:MoR with:
And he told me then that by the time good and moral people were done tying themselves up in knots, what they usually did was nothing; or, if they did act, you could hardly tell them apart from the people called bad.
I wish Dumbledore were made a steel man so he could give good counterarguments here rather than letting Harry win outright.
↑ comment by brazil84 · 2011-09-03T17:30:56.992Z · LW(p) · GW(p)
I'm not sure I understand your point. By way of example, do you agree that generally speaking, ultra-Orthodox Jews believe that it's a good idea to have a lot of children and to pass this idea to their children?
And do you agree that the numbers of ultra-Orthodox Jews have increased dramatically over the last 100 years and are likely to continue increasing dramatically?
Replies from: MinibearRex↑ comment by MinibearRex · 2011-09-03T17:38:59.823Z · LW(p) · GW(p)
His complaint is from here:
and groups who hold them.
Group selection doesn't work. If you were to delete those two words, it would be fine, but if you start talking about increasing the reproductive fitness of a group as a whole, evolutionary biologists and other scientists will tend to dismiss what you say.
Replies from: brazil84↑ comment by brazil84 · 2011-09-03T17:47:45.759Z · LW(p) · GW(p)
Group selection doesn't work.
Well what exactly is "group selection"? If a group of people has a particular belief; and as a result of that belief, the group increases dramatically in numbers, would it qualify as "group selection"?
Conversely, if a group of people has a particular belief; and as a result of that belief, the group decreases dramatically in numbers, would it qualify as "group selection"?
Replies from: Vaniver↑ comment by Vaniver · 2011-09-03T18:00:25.737Z · LW(p) · GW(p)
It would not qualify. The ultra-Orthodox Jews example you give is of a set of individuals each pursuing their own fitness, and the set does well because each individual in the set does well. Group selection specifically refers to practices which make the group better off at individual cost. For example, if you had more daughters than sons, your group could grow faster, but any person in the group who defects and has more sons than daughters will reap massive benefits from doing so.
The moral of the story is, some people are oversensitive to "group" in the same sentence as "reproductive fitness." Try to avoid it.
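Vaniver's sex-ratio point can be made concrete with a little arithmetic. The sketch below is my own toy illustration (the numbers and the `expected_grandchildren` helper are invented for the example, not taken from any source): since every child has exactly one father and one mother, sons in a daughter-biased group are the scarce sex, so a defector who has mostly sons out-reproduces the conformists even though the conformist strategy grows the group faster.

```python
# Toy model of why a daughter-biased "group strategy" is invadable.
# Assumption (mine, for illustration): deterministic expected-grandchildren
# accounting, no mortality, random mating within the group.

def expected_grandchildren(sons, daughters, pop_sons, pop_daughters, next_gen_size):
    """Each child in the next generation has exactly one father and one mother,
    so each son expects next_gen_size / pop_sons offspring and each daughter
    expects next_gen_size / pop_daughters offspring."""
    return sons * next_gen_size / pop_sons + daughters * next_gen_size / pop_daughters

# A group following the "grow fast" strategy: 100 mothers, each with 1 son and 9 daughters.
pop_sons, pop_daughters = 100, 900
next_gen = 1000  # say, ten offspring per mother in the following generation

conformist = expected_grandchildren(1, 9, pop_sons, pop_daughters, next_gen)
defector = expected_grandchildren(9, 1, pop_sons, pop_daughters, next_gen)

print(conformist)  # prints 20.0
print(defector)    # ≈ 91.1 -- the son-heavy defector wins big
```

The conformist expects 20 grandchildren while the defector expects about 91, which is why the daughter-biased strategy, however good for the group, is not stable against individual selection.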
Replies from: brazil84↑ comment by brazil84 · 2011-09-03T18:17:05.573Z · LW(p) · GW(p)
It would not qualify.
Well in that case, I was not talking about group selection. I was referring to a set of individuals each of whose reproductive fitness would be enhanced by the beliefs shared by him and the other members of the set of individuals.
I think that in normal discussions, it's reasonable to refer to a set of individuals with shared beliefs as a "group." And if those beliefs generally enhance the reproduction of the individuals in that group, it's reasonable to state that the reproductive fitness in the group has been enhanced.
Try to avoid it.
I suppose, but I think it was pretty clear from the context what I meant when I said that certain beliefs "arguably increase the reproductive fitness in the individuals and groups who hold them." At a minimum, I think I deserve the benefit of the doubt.
Replies from: Vaniver↑ comment by Vladimir_Nesov · 2011-09-02T13:39:31.991Z · LW(p) · GW(p)
Could you remove the "quoted text" part?
comment by ata · 2011-09-28T03:07:16.879Z · LW(p) · GW(p)
"No. You have just fallen prey to the meta-Dunning Kruger effect, where you talk about how awesome you are for recognizing how bad you are."
— Horatio__Caine on reddit
Replies from: JoshuaZ
comment by Will_Newsome · 2011-09-10T13:06:25.255Z · LW(p) · GW(p)
For whosoever hath good inductive biases, to him more evidence shall be given, and he shall have an abundance: but whosoever hath not good inductive biases, from him shall be taken away even what little evidence that he hath.
Matthew (slightly paraphrased...)
Replies from: Oscar_Cunningham, wedrifid↑ comment by Oscar_Cunningham · 2011-09-10T14:10:21.457Z · LW(p) · GW(p)
What does this mean?
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2011-09-11T19:10:24.996Z · LW(p) · GW(p)
If you have good judgement about what things imply, you'll be good at gathering evidence.
If you have poor judgement about what things imply, you'll lose track of the meaning of the evidence you've got.
Replies from: simplicio↑ comment by simplicio · 2011-09-13T13:16:46.815Z · LW(p) · GW(p)
Let me see if I've cottoned on by coming up with an example.
Say you work with someone for years, and often on Mondays they come in late & with a headache. Other days, their hands are shaking, or they say socially inappropriate things in meetings.
"Good inductive bias" appears to mean you update in the correct direction (alcoholism/drug addiction) on each of these separate occasions, whereas "bad inductive bias" means you shrug each occurrence off and then get presented with each new occurrence, as it were, de novo. So this could be glossed as basically "update incrementally." Have I got the gist?
I think what's mildly confusing is the normatively positive use of the word "bias," which typically suggests deviation from ideal reasoning. But I suppose it is a bias in the sense that one could go too far and update on every little piece of random noise...
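The gloss above can be put in explicitly Bayesian terms. A toy sketch (the 5% prior and the 3:1 likelihood ratio are made-up numbers, purely illustrative): an observer whose model treats each Monday headache as modest evidence for the drinking hypothesis compounds that evidence across incidents, while an observer whose model rates every incident as equally likely either way stays at the prior forever, meeting each occurrence de novo.

```python
# Illustration (mine, not from the thread) of "good inductive bias" as
# odds-form Bayesian updating over repeated, independent observations.

def posterior_odds(prior_odds, likelihood_ratio, n_observations):
    """Each independent observation multiplies the odds by its likelihood ratio."""
    return prior_odds * likelihood_ratio ** n_observations

prior = 0.05 / 0.95  # start at ~5% credence in the alcoholism hypothesis

# Good inductive bias: each incident judged 3x likelier if the hypothesis is true.
good = posterior_odds(prior, 3.0, 10)
# Bad inductive bias: each incident shrugged off as uninformative (ratio of 1).
bad = posterior_odds(prior, 1.0, 10)

print(good / (1 + good))  # ≈ 0.9997: ten incidents push credence near certainty
print(bad / (1 + bad))    # ≈ 0.05: still at the prior after ten incidents
```

The same stream of evidence lands very differently depending on the likelihood model you bring to it, which seems to be the sense in which "to him more evidence shall be given."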
Replies from: Oscar_Cunningham, NancyLebovitz↑ comment by Oscar_Cunningham · 2011-09-13T13:48:37.736Z · LW(p) · GW(p)
I think what's mildly confusing is the normatively positive use of the word "bias," which typically suggests deviation from ideal reasoning. But I suppose it is a bias in the sense that one could go too far and update on every little piece of random noise...
"Inductive bias" is a technical term, where the word bias isn't meant negatively.
↑ comment by NancyLebovitz · 2011-09-13T14:24:13.120Z · LW(p) · GW(p)
I think that's it, though there are at least two sorts of bad bias. The one you describe (nothing is important enough to notice or remember) is one, but there's also having a bad theory ("that annoying person is aiming it all at me", for example, which would lead to not noticing evidence of things going wrong which have nothing to do with malice).
This is reminding me of one of my favorite bits from Illuminatus!. There's a man with filing cabinets [1] full of information about the first Kennedy assassination. He's convinced that someday, he'll find the one fact which will make it all make sense. He doesn't realize that half of what he's got is lies people made up to cover their asses.
In the novel, there were five conspiracies to kill JFK-- but that character isn't going to find out about them.
[1] The story was written before the internet.
comment by CSalmon · 2011-09-09T06:28:01.425Z · LW(p) · GW(p)
My desire and wish is that the things I start with should be so obvious that you wonder why I spend my time stating them. This is what I aim at because the point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it.
-- Bertrand Russell, The Philosophy of Logical Atomism
comment by Xom · 2011-09-02T17:35:51.285Z · LW(p) · GW(p)
A certain amount of knowledge you can indeed with average faculties acquire so as to retain; nor need you regret the hours you spend on much that is forgotten, for the shadow of lost knowledge at least protects you from many illusions.
~ William Johnson Cory
comment by RobinZ · 2011-09-27T20:44:52.897Z · LW(p) · GW(p)
It is certain, it seems, that we can judge some matters correctly and wisely and yet, as soon as we are required to specify our reasons, can specify only those which any beginner in that sort of fencing can refute. Often the wisest and best men know as little how to do this as they know the muscles with which they grip or play the piano.
Georg Christoph Lichtenberg, via The Lichtenberg Reader: selected writings, trans. and ed. Franz H. Mautner and Henry Hatfield.
comment by AlexSchell · 2011-09-27T02:38:34.211Z · LW(p) · GW(p)
At this point one must expect to meet with an objection. ‘Well then, if even obdurate sceptics admit that the assertions of religion cannot be refuted by reason, why should I not believe in them, since they have so much on their side tradition, the agreement of mankind, and all the consolations they offer?’ Why not, indeed? Just as no one can be forced to believe, so no one can be forced to disbelieve. But do not let us be satisfied with deceiving ourselves that arguments like these take us along the road of correct thinking. If ever there was a case of a lame excuse we have it here. Ignorance is ignorance; no right to believe anything can be derived from it. In other matters no sensible person will behave so irresponsibly or rest content with such feeble grounds for his opinions and for the line he takes. It is only in the highest and most sacred things that he allows himself to do so.
Sigmund Freud, The Future of an Illusion, part VI
comment by engineeredaway · 2011-09-15T02:11:55.675Z · LW(p) · GW(p)
Captain Tagon: Lt. Commander Shodan, years ago when you enlisted you asked for a job as a martial arts trainer.
Captain Tagon: And here you are, trying to solve our current problem with martial arts training.
Captain Tagon: How's that saying go? "When you're armed with a hammer, all your enemies become nails?"
Shodan: Sir,.. you're right. I'm being narrow-minded.
Captain Tagon: No, no. Please continue. I bet martial arts training is a really, really useful hammer.
comment by engineeredaway · 2011-09-27T18:06:11.514Z · LW(p) · GW(p)
"What I cannot create, I do not understand."
-Richard Feynman
Taken from Wikiquote, which took it from Stephen Hawking's book The Universe in a Nutshell, which took it from Feynman's blackboard at the time of his death (1988).
It's simple, but it gets right at the heart of why the mountains of philosophy are the foothills of AI (as Eliezer put it).
comment by lukeprog · 2011-09-01T12:02:14.833Z · LW(p) · GW(p)
The mind of man is far from the nature of a clear and equal glass, wherein the beams of things should reflect according to their true incidence; nay, it is rather like an enchanted glass, full of superstition and imposture…
Francis Bacon, The Advancement of Learning and New Atlantis
comment by lukeprog · 2011-09-26T09:10:35.491Z · LW(p) · GW(p)
Let us then take in our hands the staff of experience, paying no heed to the accounts of all the idle theories of the philosophers. To be blind and to think one can do without this staff is the worst kind of blindness.
- Julien Offray de La Mettrie, Man a Machine, 1748
comment by Dr_Manhattan · 2011-09-06T12:33:43.084Z · LW(p) · GW(p)
Michael: I don't know anyone who could get through the day without two or three juicy rationalizations. They're more important than sex.
Sam Weber: Ah, come on. Nothing's more important than sex.
Michael: Oh yeah? Ever gone a week without a rationalization?
- The Big Chill
comment by Will_Newsome · 2011-09-10T17:07:10.343Z · LW(p) · GW(p)
This is the use of metaness: for liberation - not less of love but expanding of love beyond local optima.
-- Nick Tarleton
The original goes:
This is the use of memory:
For liberation—not less of love but expanding
Of love beyond desire, and so liberation
From the future as well as the past.
-- T. S. Eliot
Replies from: Nisan
comment by Will_Newsome · 2011-09-10T10:49:04.638Z · LW(p) · GW(p)
Why should the government get to decide how to destroy our money? We should let the free market find more efficient ways to destroy money.
The Onion (it's sort of a rationality and anti-rationality quote at multiple levels)
comment by JonathanLivengood · 2011-09-03T04:01:46.862Z · LW(p) · GW(p)
The elements of every concept enter into logical thought at the gate of perception and make their exit at the gate of purposive action; and whatever cannot show its passports at both those two gates is to be arrested as unauthorized by reason.
-- C.S. Peirce
comment by Tesseract · 2011-09-01T20:49:27.909Z · LW(p) · GW(p)
To love truth for truth's sake is the principal part of human perfection in this world, and the seed-plot of all other virtues.
Locke
Replies from: Multiheaded↑ comment by Multiheaded · 2011-09-03T09:21:11.401Z · LW(p) · GW(p)
I disagree. A lot of human conduct that I find virtuous, such as compassion or tolerance, has no immediate connection with the truth, and is sometimes best served by white lies.
For example, all the LGBTQ propaganda spoken at doubting conservatives, about how people are either born gay or they aren't, and how modern culture totally doesn't make young people bisexual, no sir. We're quite innocent, human sexuality is set in stone, you see. Do you really wish to hurt your child for what they always were? What is this "queer agenda" you're speaking about?
Tee-hee :D
Replies from: Jack, Eugine_Nier, Raw_Power, None↑ comment by Jack · 2011-09-17T01:38:39.641Z · LW(p) · GW(p)
Um, this is both a strawman of what LGBTQ activists say and appears to seriously overestimate the degree to which a person has control over their sexual orientation.
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2011-09-17T02:19:56.755Z · LW(p) · GW(p)
appears to seriously overestimate the degree to which a person has control over their sexual orientation.
I don't think control as such is the issue, though; at least, that's not how I read Multiheaded's comment. It seems at least plausible that human sexuality is at least somewhat malleable to cultural inputs: even if no one consciously and explicitly says, "I hereby choose to be gay," it could very well be that a gay-friendly culture results in more people developing non-straight orientations.
If nothing else, there are incentive effects: even if sexual orientation is fixed from birth, people's behavior is regulated by cultural norms. Thus, we should expect that greater tolerance of homosexuality will lead to more homosexual behavior, as gays and people who are only marginally non-straight feel more free to act on their desires. For example, an innately bisexual person might engage entirely in heterosexual behavior in a society where homosexuality was heavily stigmatized, but engage in more homosexual behavior once the stigma is lifted.
Thus, conservatives who fear that greater tolerance of homosexuality will lead to more homosexual behavior are probably correct on this one strictly factual point, although I would expect the magnitude of the effect to be rather modest.
Replies from: Jack, Multiheaded↑ comment by Multiheaded · 2011-09-18T15:16:52.643Z · LW(p) · GW(p)
Yeah, I meant something like that.
↑ comment by Eugine_Nier · 2011-09-17T01:27:49.613Z · LW(p) · GW(p)
You may want to carefully consider this comment.
↑ comment by Raw_Power · 2011-09-05T01:01:42.929Z · LW(p) · GW(p)
I can't tell if you're joking...
Replies from: Multiheaded↑ comment by Multiheaded · 2011-09-06T12:03:47.432Z · LW(p) · GW(p)
Dead serious actually. Well, what I mean is that a heteronormative approach where everyone must be either a 0 or a 6 on the Kinsey scale is hard to maintain in the modern world, and that when some extremely irrational older folks hate to see how young people can, for the first time in history, 1) discover their sexuality with some precision by using media and freely experimenting and 2) get a lot of happiness that way, it's fine to spin a clean and simple tale of the subject matter to those sorry individuals.
Replies from: Raw_Power↑ comment by Raw_Power · 2011-09-06T13:44:41.939Z · LW(p) · GW(p)
... I like the way you talk. This goes a long way toward explaining the same person saying "homosexuality is not a choice" and "I have been with quite a few straight guys", as well as the treatment bi people get as "fence-sitters" and the resentment they generate by having an easier time in the closet.
↑ comment by [deleted] · 2011-09-17T01:32:13.219Z · LW(p) · GW(p)
I'm profoundly disappointed that this has been upvoted.
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2011-09-17T01:36:44.366Z · LW(p) · GW(p)
Could you elaborate on what you found objectionable?
Replies from: Nonecomment by Will_Newsome · 2011-09-10T13:31:18.353Z · LW(p) · GW(p)
One must give value to their existence by behaving as if ones very existence were a work of art.
Friedrich Nietzsche
comment by Patrick · 2011-09-05T00:37:46.802Z · LW(p) · GW(p)
On some other subjects people do wish to be deceived. They dislike the operation of correcting the hypothetical data which they have taken as basis. Therefore, when they begin to see looming ahead some such ridiculous result as 2 + 3 = 7, they shrink into themselves and try to find some process of twisting the logic, and tinkering the equation, which will make the answer come out a truism instead of an absurdity; and then they say, “Our hypothetical premiss is most likely true because the conclusion to which it brings us is obviously and indisputably true.” If anyone points out that there seems to be a flaw in the argument, they say, “You cannot expect to get mathematical certainty in this world,” or “You must not push logic too far,” or “Everything is more or less compromise,” and so on.
-- Mary Everest Boole
comment by XFrequentist · 2011-09-01T20:26:54.247Z · LW(p) · GW(p)
Rationality gives us greater knowledge and greater control over our own actions and emotions and over the world. Although our rationality is, initially, an evolved quality - the nature of rationality includes the Nature in it - it enables us to transform ourselves and hence transcend our status as mere animals, actually and also symbolically. Rationality comes to shape and control its own function.
Our principles fix what our life stands for, our aims create the light our life is bathed in, and our rationality, both individual and coordinate, defines and symbolizes the distance we have come from mere animality. It is by these means that our lives can come to mean more than what they instrumentally yield. And by meaning more, our lives yield more.
-- Robert Nozick (The Nature of Rationality)
comment by Patrick · 2011-09-04T13:37:22.992Z · LW(p) · GW(p)
I believe that no discovery of fact, however trivial, can be wholly useless to the race, and that no trumpeting of falsehood, however virtuous in intent, can be anything but vicious.
-- HL Mencken
Replies from: Oscar_Cunningham, Normal_Anomaly↑ comment by Oscar_Cunningham · 2011-09-04T17:28:20.524Z · LW(p) · GW(p)
This is already quoted on this page, albeit with "no matter" substituted for "however".
↑ comment by Normal_Anomaly · 2011-09-04T15:26:12.068Z · LW(p) · GW(p)
I disagree, especially with the second part. For a trivial example, take the traditional refutation of Kantianism: You are hiding Jews in your house during WWII. A Nazi shows up and asks if you are hiding any Jews.
Replies from: Teal_Thanatos↑ comment by Teal_Thanatos · 2011-09-04T23:17:02.671Z · LW(p) · GW(p)
I'm going to have to call you on this one: in your trivial example you are intending harm/chaos/diversion to/to/of the Nazi plan. Causing disruption to another is vicious, even if you are being virtuous in your choice to disrupt.
Replies from: Atelos, PhilGoetz↑ comment by Atelos · 2011-09-04T23:34:46.373Z · LW(p) · GW(p)
Causing disruption is certainly vicious in the sense of aggressive or violent, yes. I, and apparently Normal_Anomaly, read the quote from Mencken as meaning that lying is vicious in the sense of immoral, 'vice-ious', and hence unjustifiable.
↑ comment by PhilGoetz · 2011-09-10T15:45:59.557Z · LW(p) · GW(p)
No, it is not.
vicious [vish-uhs]:
addicted to or characterized by vice; grossly immoral; depraved; profligate: a vicious life.
given or readily disposed to evil: a vicious criminal.
reprehensible; blameworthy; wrong: a vicious deception.
spiteful; malicious: vicious gossip; a vicious attack.
unpleasantly severe: a vicious headache.
comment by NancyLebovitz · 2011-09-03T08:10:20.314Z · LW(p) · GW(p)
Replies from: MinibearRex
In other words, they're looking to someone's life as an example of perfection, rather than what the person was saying, to see if it is true or false. They should know full well that everybody has that measure of hypocrisy in their lives; everybody has a measure of being flawed. My parents were no better or no worse. Thus, if someone who looked to my dad as a kind of a guru or someone who walked on water is disillusioned, they probably should be. But they shouldn't only be disillusioned about him, they should be disillusioned about any idea of perfection in any human being because no one is like that.
↑ comment by MinibearRex · 2011-09-03T17:45:56.084Z · LW(p) · GW(p)
My parents were no better or no worse.
Beware the fallacy of grey.
comment by [deleted] · 2011-09-18T23:21:13.057Z · LW(p) · GW(p)
"Our present study is not, like other studies, purely theoretical in intention; for the object of our inquiry is not to know what virtue is but how to become good, and that is the sole benefit of it." —Aristotle's Nicomachean Ethics (translated by James E. C. Weldon; emphasis added)
comment by DSimon · 2011-09-02T18:41:16.496Z · LW(p) · GW(p)
(Sheen is attempting to perform brain surgery on an unknown alien)
Sheen: That's weird. This brain has no labels.
Doppy: Labels?
Sheen: Yeah! Usually brains come with labels, like "this is the section for tasting chicken", "this is the section for running around in circles", "this is the section for saying AAAAARGHLBLAHH." But, this brain doesn't have any labels at all. So, I'm going to have to do what all the best doctors do.
Doppy: What's that?
Sheen: Poke around and see what happens!
-- Planet Sheen
comment by PhilGoetz · 2011-09-11T19:06:33.434Z · LW(p) · GW(p)
“When anyone asks me how I can describe my experience of nearly forty years at sea, I merely say uneventful. Of course there have been winter gales and storms and fog and the like, but in all my experience, I have never been in an accident of any sort worth speaking about. I have seen but one vessel in distress in all my years at sea… I never saw a wreck and have never been wrecked, nor was I ever in any predicament that threatened to end in disaster of any sort.”
E.J. Smith, 1907, later captain of the RMS Titanic
Note: This is one of those comments that has been repeated, without citation, on the internet so many times that I can no longer find a citation.
comment by JonathanLivengood · 2011-09-03T03:55:30.665Z · LW(p) · GW(p)
I will submit (separately) three quotations from my favorite philosopher, C.S. Peirce:
Upon this first, and in one sense this sole, rule of reason, that in order to learn you must desire to learn, and in so desiring not be satisfied with what you already incline to think, there follows one corollary which itself deserves to be inscribed upon every wall of the city of philosophy: Do not block the way of inquiry.
-- C.S. Peirce
Replies from: JonathanLivengood↑ comment by JonathanLivengood · 2011-09-03T03:56:26.031Z · LW(p) · GW(p)
Crap ... I appear to have screwed up something in the markdown syntax ...
comment by dvasya · 2011-09-01T07:34:20.624Z · LW(p) · GW(p)
Any useful idea about the future should appear to be ridiculous.
-- Jim Dator ("Dator's Law")
Replies from: JoshuaZ, lessdazed↑ comment by JoshuaZ · 2011-09-01T17:36:48.301Z · LW(p) · GW(p)
Any useful idea about the future should appear to be ridiculous.
Strongly disagree with this quote. Some useful ideas about the future might seem ridiculous. But a lot won't. Lots of new technologies and improvements are due to steady, fairly predictable improvement of existing technologies. It might be true that a lot of useful ideas, or the most useful ideas, have a high chance of appearing ridiculous. But even that would mean we're poorly calibrated about what is and is not reasonably doable. There's also a secondary issue: many if not most of the ideas which seem ridiculous turn out to be about as ridiculous as they seemed, if not more so (e.g. nuclear-powered aircraft, which might be doable but will remain ridiculous for the foreseeable future), and even plausible-seeming technologies often turn out not to work (such as the flying car). Paleo Future is a really neat website which catalogs predictions about the future, especially in the form of technologies that never quite made it or failed miserably. The number of ideas which failed is striking.
Replies from: gwern↑ comment by gwern · 2011-09-01T19:02:18.014Z · LW(p) · GW(p)
If there is a useful idea about the future which triggers no ridiculous or improbable filters, doesn't that imply many people will have already accepted that idea, using it and removing the profit from knowing it? To make money, you need an edge; being able to find ignored gems in the 'possible ridiculous futures' sounds like a good strategy.
Replies from: JoshuaZ, None↑ comment by JoshuaZ · 2011-09-01T19:07:26.523Z · LW(p) · GW(p)
If there is a useful idea about the future which triggers no ridiculous or improbable filters, doesn't that imply many people will have already accepted that idea, using it and removing the profit from knowing it?
Not necessarily. For example, it could be that no one had thought of the idea in question but once someone thought of the idea the usefulness is immediately obvious.
Replies from: gwern
comment by [deleted] · 2011-09-25T02:29:24.575Z · LW(p) · GW(p)
"Although nature commences with reason and ends in experience it is necessary for us to do the opposite, that is to commence with experience and from this to proceed to investigate the reason."
-Leonardo da Vinci
comment by brilee · 2011-09-08T22:39:33.192Z · LW(p) · GW(p)
"Communication usually fails, except by accident" - Osmo Wiio
"Communication" here has a different definition from the usual one. I interpreted it as referring to the richness of your internal experiences and the intricate web of associations that are conjured in your mind when you say even a single word.
comment by Patrick · 2011-09-07T10:07:26.804Z · LW(p) · GW(p)
Leonard, if you were about to burn or drown or starve I would panic. It would be the least I could do. That's what's happening to people now, and I don't think my duty to panic disappears just because they're not in the room!
-- Raymond Terrific
Replies from: MixedNuts↑ comment by MixedNuts · 2013-02-08T17:22:26.306Z · LW(p) · GW(p)
I think it comes down to this:
If you live in a small community, and your friend or neighbor or family member contacts you and says “someone just committed a horrible act of violence here!” you have to drop everything and listen. Your discomfort is so insignificant compared to the magnitude of the event, you can’t ignore something like that.
You certainly can’t answer “sorry, I need you to stop right there, I’m trying to do some self-care right now and I’m avoiding triggers until I feel ready to engage with difficult subjects.” They’d crown you King Butthead.
But on the Internet, the “community” is 2.4 billion people. Something horrible will be happening to thousands of them every day. You can’t apply the same ethics. It’s emotionally impossible, and not terribly helpful to the world, to even try.
So hand me my Butt Crown.
comment by JonathanLivengood · 2011-09-03T04:00:54.030Z · LW(p) · GW(p)
It is the man of science, eager to have his every opinion regenerated, his every idea rationalized, by drinking at the fountain of fact, and devoting all the energies of his life to the cult of truth, not as he understands it, but as he does not yet understand it, that ought properly to be called a philosopher.
-- C.S. Peirce
comment by curiousepic · 2011-09-02T15:15:27.410Z · LW(p) · GW(p)
Emotions in the brain, they'll always be the same / it's just chemicals and glop and what you've got is what you've got / and we just apply it to whatever's passing by it
-- Jeffrey Lewis, If Life Exists, which is really about set point happiness
comment by anonym · 2011-09-04T18:11:38.451Z · LW(p) · GW(p)
Every truth is a path traced through reality: but among these paths there are some to which we could have given an entirely different turn if our attention had been orientated in a different direction or if we had aimed at another kind of utility; there are some, on the contrary, whose direction is marked out by reality itself: there are some, one might say, which correspond to currents of reality. Doubtless these also depend upon us to a certain extent, for we are free to go against the current or to follow it, and even if we follow it, we can variously divert it, being at the same time associated with and submitted to the force manifest within it. Nevertheless these currents are not created by us; they are part and parcel of reality.
Henri L. Bergson -- The Creative Mind: An Introduction to Metaphysics, p. 218
ETA: retracted. I posted this on the basis of my interpretation of the first sentence, but the rest of the quote makes clear that my interpretation of the first sentence was incorrect, and I don't believe it belongs in a rationality quotes page anymore.
Replies from: PhilGoetz↑ comment by PhilGoetz · 2011-09-10T15:42:36.634Z · LW(p) · GW(p)
What?
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2011-09-10T18:55:15.877Z · LW(p) · GW(p)
Quite. Bergson might not reach the same level of awfulness as the examples David Stove pillories, but I couldn't penetrate the fog of this paragraph, not even with the context. I think Wikipedia nails the jelly to the wall, though: Bergson argued that "immediate experience and intuition are more significant than rationalism and science for understanding reality". In which case, -1 to Bergson. I learn from the article that Bergson also coined the expression élan vital.
comment by anonym · 2011-09-04T17:57:54.103Z · LW(p) · GW(p)
The only laws of matter are those that our minds must fabricate and the only laws of mind are fabricated for it by matter.
James Clerk Maxwell
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2011-09-04T19:06:44.241Z · LW(p) · GW(p)
I am having difficulty parsing this. The easiest interpretation to make of the first part seems to be "There are no laws of matter except the ones we make up," and the second part is saying either "minds are subject to physics" or something I don't follow at all.
Replies from: anonym↑ comment by anonym · 2011-09-04T19:34:13.855Z · LW(p) · GW(p)
I interpret the first part as saying that there are no laws of matter other than ones our minds are forced to posit (forced over many generations of constantly improving our models). And the second part is something like "minds are subject [only] to physics", as you said. The second part explains how and why the first part works.
Together, I interpret them as suggesting a reductive physicalist interpretation of mind (in the 19th century!) according to which our law-making is not only about the universe but is itself the universe (or a small piece thereof) operating according to those same laws (or other, deeper laws we have yet to discover).
comment by Tripitaka · 2011-09-02T19:05:21.309Z · LW(p) · GW(p)
If other Mediators come to a different conclusion from mine, that is their affair. It may be that their facts are incomplete, or their aims different. I judge on the evidence.
-Whitbread's Fyunch(click), by Larry Niven & Jerry Pournelle in "The Mote in God's Eye".
Replies from: Teal_Thanatos↑ comment by Teal_Thanatos · 2011-09-04T23:42:37.231Z · LW(p) · GW(p)
This doesn't really address the possibility that Whitbread's Fyunch(click) may itself have incomplete evidence, facts, bias, or aims of its own.
Replies from: Tripitaka↑ comment by Tripitaka · 2011-09-07T19:45:50.865Z · LW(p) · GW(p)
For me it runs more along the lines of Aumann's agreement theorem.
comment by Jayson_Virissimo · 2011-09-02T16:04:18.573Z · LW(p) · GW(p)
Legends are usually bad news. There's not a lot of difference between heroes and madmen.
-Solid Snake, Metal Gear Solid 2: Sons of Liberty
In other words, have no heroes, and no villains.
comment by dvasya · 2011-09-01T07:33:29.102Z · LW(p) · GW(p)
If superior creatures from space ever visit earth, the first question they will ask, in order to assess the level of our civilization, is ‘Have they discovered evolution yet?’
-- Richard Dawkins, The Selfish Gene
(I know it's old and famous and classic, but this doesn't make it any less precious, does it?)
Replies from: Kingreaper, Bugmaster, wedrifid, rwallace, NancyLebovitz↑ comment by Kingreaper · 2011-09-01T07:46:46.408Z · LW(p) · GW(p)
Sometimes I suspect that wouldn't even occur to them as a question. That evolution might turn out to be one of those things that it's just assumed any race that had mastered agriculture MUST understand.
Because, well, how could a race use selective breeding, and NOT realise that evolution by natural selection occurs?
Replies from: MarkusRamikin, AlanCrowe↑ comment by MarkusRamikin · 2011-09-01T10:31:19.469Z · LW(p) · GW(p)
Easily.
Realizing far-reaching consequences of an idea is only easy in hindsight, otherwise I think it's a matter of exceptional intelligence and/or luck. There's an enormous difference between, on the one hand, noticing some limited selection and utilising it for practical benefits - despite only having a limited, if any, understanding of what you're doing - and on the other hand realizing how life evolved into complexity from its simple beginnings, in the course of a difficult to grasp period of time. Especially if the idea has to go up against well-entrenched, hostile memes.
I don't know if this has a name, but there seems to exist a trope where (speaking broadly) superior beings are unable to understand the thinking and errors of less advanced beings. I first noticed it when reading H. Fast's The First Men, where this exchange between a "Man Plus" child and a normal human occurs:
"Can you do something you disapprove of?" "I am afraid I can. And do." "I don't understand. Then why do you do it?"
It's supposed to be about how the child is so advanced and undivided in her thinking, but to me it just means "well then you don't understand how the human mind works".
In short, I find this trope to be a fallacy. I'd expect an advanced civilisation to have a greater, not lesser, understanding of how intelligence works, its limitations, and failure modes in general.
Replies from: erniebornheimer, Kingreaper, Logos01, Nornagest↑ comment by erniebornheimer · 2011-09-02T22:37:43.234Z · LW(p) · GW(p)
Yeah. This was put very well by Fyodor Urnov, in an MCB140 lecture:
"What is blindingly obvious to us was not obvious to geniuses of ages past."
I think the lecture series is available on iTunes.
↑ comment by Kingreaper · 2011-09-01T11:11:26.046Z · LW(p) · GW(p)
In short, I find this trope to be a fallacy. I'd expect an advanced civilisation to have a greater, not lesser, understanding of how intelligence works, its limitations, and failure modes in general.
But what reason do we have to expect them to pick evolution, as opposed to the concept of money, or of extensive governments (governments governing more than 10,000 people at once), or of written language, or of the internet, or of radio communication, or of fillangerisation, as their obvious sign of advancement?
Just because humans picked up on evolution far later than we should have, doesn't mean that evolution is what they'll expect to be the late discovery. They might equally expect that the internet wouldn't be invented until the equivalent tech level of 2150. Or they might consider moveable type to be the symbol of a masterful race.
Just because they'll likely be able to understand why we were late to it, doesn't mean it would occur to them before looking at us. It's easy to explain why we came to it when we did, once you know that that's what happened, but if you were from a society that realised evolution [not necessarily common descent] existed as they were domesticating animals; would you really think of understanding evolution as a sign of advancement?
EDIT: IOW: I've upvoted your disagreement with the "advanced people can't understand the simpler ways" trope; but I stand by my original point: they wouldn't EXPECT evolution to be undiscovered.
Replies from: Slackson↑ comment by Slackson · 2011-09-01T11:47:22.277Z · LW(p) · GW(p)
I suspect that the intent of the original quote is that they'll assess us by our curiosity towards, and effectiveness in discovering, our origins. As Dawkins is a biologist, he is implying that evolution by natural selection is an important part of it, which of course is true. An astronomer or cosmologist might consider a theory on the origins of the universe itself to be more important, a biochemist might consider abiogenesis to be the key, and so on.
Personally, I can see where he's coming from, though I can't say I feel like I know enough about the evolution of intelligence to come up with a valid argument as to whether an alien species would consider this to be a good metric to evaluate us with. One could argue that interest in oneself is an important aspect of intelligence, and scientific enquiry important to the development of space travel, and so a species capable of travelling to us would have those qualities and look for them in the creatures they found.
This is my first time posting here, so I'm probably not quite up to the standards of the rest of you just yet. Sorry if I said something stupid.
Replies from: Kingreaper↑ comment by Kingreaper · 2011-09-01T11:55:47.736Z · LW(p) · GW(p)
Welcome to lesswrong.
I wouldn't consider anything you've said here stupid, in fact I would agree with it.
I, personally, see it as a failure of imagination on the part of Dawkins that he considers the issue he personally finds most important to be the one alien intelligences will find most important, but you are right to point out what his likely reasoning is.
Replies from: dvasya, Normal_Anomaly↑ comment by Normal_Anomaly · 2011-09-02T23:37:41.414Z · LW(p) · GW(p)
Another chain of reasoning I have seen people use to reach similar conclusions is that the aliens are looking for species that have outgrown their sense of their own special importance to the universe. Aliens checking for that would be likely to ask about evolution, or possibly about cosmologies that don't have the home planet at the center of the universe. However, I don't think a sense of specialness is one of the main things aliens will care about.
↑ comment by Logos01 · 2011-09-02T20:14:37.391Z · LW(p) · GW(p)
In short, I find this trope to be a fallacy. I'd expect an advanced civilisation to have a greater, not lesser, understanding of how intelligence works, its limitations, and failure modes in general.
Have you never looked at something someone does and asked yourself, "How can they be so stupid?"
It's not as though you literally cannot conceive of such limitations; just that you cannot empathize with them.
Replies from: ata↑ comment by ata · 2011-09-02T20:53:01.327Z · LW(p) · GW(p)
It's anthropomorphism to assume that it would occur to advanced aliens to try to understand us empathetically rather than causally/technically in the first place, though.
Replies from: Logos01↑ comment by Logos01 · 2011-09-03T00:06:18.499Z · LW(p) · GW(p)
Anthropomorphism? I think not. All known organisms that think have emotions. Advanced animals demonstrate empathy.
Now, certainly an advanced civilization might arise that is non-sentient, and thus incapable of modeling others' psyches empathetically. I will admit to the possibility of anthropocentrism in my statements here; that is, in my inability to conceive of a mechanism whereby technological intelligence could arise without passing through a route that produces intelligences sufficiently like our own as to possess the characteristic of 'empathy'.
It's one thing to postulate counter-factuals; it's another altogether to actually attempt to legitimize them with sound reasoning.
Replies from: bogdanb↑ comment by bogdanb · 2013-08-28T20:03:42.035Z · LW(p) · GW(p)
All known organisms that think have emotions.
Do you have any good evidence that this assertion applies to Cephalopods? I.e., either that they don’t think or that they have emotions. (Not a rhetorical question; I know about them only enough to realize that I don’t know.)
Replies from: Logos01↑ comment by Logos01 · 2013-10-26T05:22:09.521Z · LW(p) · GW(p)
Do you have any good evidence that this assertion applies to Cephalopods?
Cephalopods in general have actually been shown to be rather intelligent. Some species of squid even engage in courtship rituals. There's no good reason to assume that given the fact that they engage in courtship, predator/prey response, and have been shown to respond to simple irritants with aggressive responses that they do not experience at the very least the emotions of lust, fear, and anger.
(Note: I model "animal intelligence" in terms of emotional responses; while these can often be very sophisticated, they lack abstract reasoning. Many animals are intelligent beyond 'simple' animal intelligence, but those are the exception rather than the norm.)
Replies from: bogdanb↑ comment by bogdanb · 2013-10-26T20:14:10.449Z · LW(p) · GW(p)
There's no good reason to assume
I agree, but I’m not sure the examples you gave are good reasons to assume the opposite. They’re certainly evidence of intelligence, and there are even signs of something close to self-awareness (some species apparently can recognize themselves in mirrors).
But emotions are a rather different thing, and I’m rather more reluctant to assume them. (Particularly because I’m even less sure about the word than I am about “intelligence”. But it also just occurred to me that between people emotions seem much easier to fake than intelligence, which stated the other way around means we’re much worse at detecting them.)
Also, the reason I specifically asked about Cephalopods is that they’re pretty close to as far away from humans as they can be and still be animals; they’re so far away we can’t even find fossil evidence of the closest common ancestor. It still had a nervous system, but it was very simple as far as I can tell (flatworm-level), so I think it’s pretty safe to assume that any high level neuronal structures have evolved completely separately between us and cephalopods.
Which is why I’m reluctant to just assume things like emotions, which in my opinion are harder to prove.
On the other hand, this means any similarity we do find between the two kinds of nervous systems (including, if demonstrated, having emotions) would be pretty good evidence that the common feature is likely universal for any brain based on neurons. (Which can be interesting for things like uploading, artificial neuronal networks, and uplifting.)
↑ comment by Nornagest · 2011-09-21T00:49:59.216Z · LW(p) · GW(p)
While I think you're right to point out that the uncomprehending-superior-beings trope is unrealistic, I don't think Dawkins was generalizing from fictional evidence; his quote reads more to me like plain old anthropomorphism, along with a good slice of self-serving bias relating to the importance of his own work.
A point similar to your first one shows up occasionally in fiction too, incidentally; there's a semi-common sci-fi trope that has alien species achieving interstellar travel or some other advanced technology by way of a very simple and obvious-in-retrospect process that just happened never to occur to any human scientist. So culture's not completely blind to the idea. Both tropes basically exist to serve narrative purposes, though, and usually obviously polemic ones; Dawkins isn't any kind of extra-rational superhuman, but I wouldn't expect him to unwittingly parrot a device that transparent out of its original context.
↑ comment by AlanCrowe · 2011-09-01T18:43:35.162Z · LW(p) · GW(p)
The British agricultural revolution involved animal breeding starting in about 1750. Darwin didn't publish Origin of Species until 1859, so in reality it took about 100 years for the other shoe to drop.
Replies from: machrider, MarkusRamikin↑ comment by MarkusRamikin · 2011-09-02T07:23:15.257Z · LW(p) · GW(p)
Selective breeding had been around much longer than that.
Replies from: Logos01↑ comment by Logos01 · 2011-09-02T20:17:21.652Z · LW(p) · GW(p)
Selective breeding isn't necessarily the same as artificial selection, however. The taming of dogs and cats is generally considered to have been accidental: the more neotenous, human-friendly animals could access more of the human food supply until eventually they could interact with people directly, whereupon (at least in dogs) "usefulness" became a valued trait.
There wasn't purposefulness in this; people just fed the better dogs more and disliked the 'worse' dogs. It wasn't until the mid-1700s that dog 'breeds' became a concept.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-09-02T20:34:11.272Z · LW(p) · GW(p)
There wasn't purposefulness in this; people just fed the better dogs more and disliked the 'worse' dogs. It wasn't until the mid-1700s that dog 'breeds' became a concept.
There were certainly attempts to breed specific traits earlier than that, but they were hindered by a poor understanding of inheritance. For example, in the Bible, Jacob tried to breed speckled cattle by putting speckled rods in front of the cattle while they mated. Problems with understanding how genetics works at a basic level were an issue even much later, and some of them still affect what are officially considered purebreds now.
I think that deliberate breeding of stronger horses dates back prior to the 1700s, at least to the early Middle Ages, but I don't have a source for that.
Replies from: Logos01↑ comment by Logos01 · 2011-09-02T20:43:01.310Z · LW(p) · GW(p)
Absolutely. Even the dog-breeding practitioners were unaware of how inheritance operates; that understanding didn't come about until Gregor Mendel. We really do take for granted the vast sum of understanding of the topic we absorb simply through cultural osmosis.
↑ comment by rwallace · 2011-09-04T13:18:34.673Z · LW(p) · GW(p)
I would actually think evolution a particularly poor choice.
If you want to pick one question to ask (and if we leave aside the obvious criterion of easy detectability from space) then you would want to pick one strongly connected in the dependency graph. Heavier than air flight, digital computers, nuclear energy, the expansion of the universe, the genetic code, are all good candidates. You can't discover those without discovering a lot of other things first.
But Aristotle could in principle have figured out evolution. The prior probability of doing so at that early stage may be small, but I'll still bet evolution has a much larger variance in its discovery time than a lot of other things.
Replies from: lessdazed, JoshuaZ↑ comment by lessdazed · 2011-09-04T13:36:17.711Z · LW(p) · GW(p)
digital computers
This is a good one. I like it.
nuclear energy
Seems dependent on substitute energy availability and military technology.
the expansion of the universe
There seems to be significant variance in how much humans care about such things, and achievement depends significantly on interest. Would aliens care at all about this?
If you want to pick one question to ask
I think we would do quite poorly with any one such question and exponentially better if permitted a handful.
Replies from: JoshuaZ, soreff↑ comment by JoshuaZ · 2011-09-04T13:47:56.169Z · LW(p) · GW(p)
I think we would do quite poorly with any one such question and exponentially better if permitted a handful
cringe. Please don't use "exponentially" to mean a lot when you have only two data points.
Replies from: lessdazed, Normal_Anomaly↑ comment by lessdazed · 2011-09-04T13:51:06.755Z · LW(p) · GW(p)
I mean we'd do more than twice as well with two questions than with one, and more than twice as well with three than with two. Usually, diminishing returns leads us to learn less from each additional question, but not here. How do I express that?
when you have only two data points.
I have zero data points, I'm comparing hypothetical situations in which I ask aliens one or more questions about their technology. (It seems Dawkins' scenario got inverted somewhere along the way, but I don't think that makes any difference.)
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-09-04T13:58:50.251Z · LW(p) · GW(p)
I mean we'd do more than twice as well with two questions than with one, and more than twice as well with three than with two. Usually, diminishing returns leads us to learn less from each additional question, but not here. How do I express that?
That's actually a claim of superexponential growth, but how you said it sounds ok. I'm actually not sure that you can get superexponential growth in a meaningful sense. If you have n bits of data you can't do better than having all n bits be completely independent. So if one is measuring information content in a Shannon sense one can't do better than exponential.
Edit: If this is what you want to say I'd say something like "As the number of questions asked goes up the information level increases exponentially" or use "superexponentially" if you mean that.
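[Editor's illustration] JoshuaZ's point, that n yes/no answers can carry at most n bits and reach that bound only when the answers are independent, can be sketched with a toy entropy calculation. The four "traits" and the probabilities below are invented purely for illustration:

```python
from math import log2
from itertools import product

# Toy model: a civilization is a tuple of 4 binary traits
# (say: has_computers, has_nuclear, has_flight, has_radio).
# Four independent, uniform yes/no questions carry 4 bits in total,
# distinguishing 2**4 = 16 equally likely possibilities.

def entropy(dist):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Independent case: 16 equally likely civilizations.
independent = {traits: 1 / 16 for traits in product([0, 1], repeat=4)}
print(entropy(independent))  # 4.0 bits: 4 questions, 1 bit each

# Correlated case: traits mostly come as a package (a "tech level"),
# so the later questions are largely redundant.
correlated = {(0, 0, 0, 0): 0.45, (1, 1, 1, 1): 0.45,
              (1, 0, 0, 0): 0.05, (1, 1, 1, 0): 0.05}
print(entropy(correlated))  # ~1.47 bits: far less than 4
```

This matches the thread's point both ways: correlation between achievements is what makes a "level of civilization" summary meaningful at all, and it is also why each extra question about a correlated civilization yields less than a full bit.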
Replies from: lessdazed↑ comment by lessdazed · 2011-09-04T14:35:52.809Z · LW(p) · GW(p)
My best guess about each individual achievement gets better with each other achievement I learn about, as they are not independent.
So if one is measuring information content in a Shannon sense one can't do better than exponential.
I was trying to get at the legitimacy of summarizing the aggregate of somewhat correlated achievements as a "level of civilization". Describing a civilization as having a "low/medium/high/etc. level of civilization" in relation to others depends on either its technological advances being correlated similarly or establishing some subset of them as especially important. I don't think the latter can be done much, which leaves inquiring about the former.
If the aliens are sending interstellar ships to colonize nearby systems, have no biology or medicine, have no nuclear energy or chemical propulsion (they built a tower on their low gravity planet and launched a solar sail based craft from it with the equivalent of a slingshot for their space program), and have quantum computers, they don't have a level of technology.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-09-04T14:53:59.225Z · LW(p) · GW(p)
If the aliens are sending interstellar ships to colonize nearby systems, have no biology or medicine, have no nuclear energy or chemical propulsion (they built a tower on their low gravity planet and launched a solar sail based craft from it with the equivalent of a slingshot for their space program), and have quantum computers, they don't have a level of technology.
Well, what does no medicine mean? A lot of medicine would work fine without understanding genetics in detail; blood transfusion and antibiotics are both examples. Also, do normal computers not count as technology? Why not? Assume that we somehow interacted with an alien group that fit your description. Is there nothing we could learn from them? I think not. For one, they might have math that we don't have. They might have other technologies that we lack (for example, better superconductors). You may be buying into a narrative of technological levels that isn't necessarily justified. There are a lot of examples of technologies that arose fairly late compared to when they necessarily made sense. For example, one-time pads arose in the late 19th century, but would have made sense as a useful system on telegraphs 20 or 30 years before. Another example is high-temperature superconductors (substances that superconduct at liquid-nitrogen temperatures): they were discovered in the mid-1980s, but the basic constructions could have been made twenty years before.
Replies from: lessdazed↑ comment by lessdazed · 2011-09-04T15:14:42.381Z · LW(p) · GW(p)
Well what does no medicine mean?
No blood donors (if they have blood), no antibiotics (if they have bacteria), etc.
Also do normal computers not count as technology?
Of course they do.
Assume that we somehow interacted with an alien group that fit your description. Is there nothing we could learn from them?
We could learn a lot from them, but it would be wrong to say "The aliens have a technological level less than ours", "The aliens have a technological level roughly equal to ours", "The aliens have a technological level greater than ours", or "The aliens have a technological level, for by technological levels we can most helpfully and meaningfully divide possible-civilizationspace".
You may be buying into a narrative of technological levels that isn't necessarily justified. There are a lot of examples of technologies that arose fairly late compared to when they necessarily made sense.
My point is that there are a lot of examples of technologies that arose fairly late compared to when they necessarily made sense, so asking about what technologies have arisen isn't as informative as one might intuitively suspect. It's so uninformative that the idea of levels of technology is in danger of losing coherence as a concept absent confirmation from the alien society that we can analogize from our society to theirs, confirmation that requires multiple data points.
Replies from: JoshuaZ↑ comment by Normal_Anomaly · 2011-09-04T15:31:29.197Z · LW(p) · GW(p)
cringe. Please don't use "exponentially" to mean a lot when you have only two data points.
I heard a calculus teacher do this with even less justification a few days ago.
EDIT: was this downvoted for irrelevancy, or some other reason?
Replies from: lessdazed↑ comment by lessdazed · 2011-09-04T23:11:25.566Z · LW(p) · GW(p)
I didn't downvote it, but if you notice, JoshuaZ concluded my use of "exponential" was "ok", as what I actually meant was not "a lot" but rather what is technically known as "superexponential growth".
"Even less justification" has some harsh connotations.
↑ comment by soreff · 2011-09-04T14:08:27.815Z · LW(p) · GW(p)
I think we would do quite poorly with any one such question and exponentially better if permitted a handful.
Very much agreed.
I also agree with:
I, personally, see it as a failure of imagination on the part of Dawkins that he considers the issue he personally finds most important to be the one alien intelligences will find most important,
I agree with the general idea of:
If you want to pick one question to ask (and if we leave aside the obvious criterion of easy detectability from space) then you would want to pick one strongly connected in the dependency graph.
though I think it is hard to correctly choose according to this criterion. I'm skeptical that digital computers would really pass this test. Considering the medium that we are all using to discuss this, we might be a bit biased in our views of their significance. (as a former chemist, I'm biased towards picking the periodic table - but I know I'm not making a neutral assessment here.)
Nuclear energy seems like a decent choice, from the dependency-graph point of view. A civilization which is able to use either fission or fusion has to pass a couple of fairly stringent tests. To detect the relevant nuclear reactions in the first place, they need to detect MeV particles, which aren't things that everyday chemical or biological processes produce. To get either reaction to happen on a large scale, they must recognize and successfully separate isotopes, which is a significant technical accomplishment.
Replies from: lessdazed↑ comment by lessdazed · 2011-09-04T14:39:23.212Z · LW(p) · GW(p)
To get either reaction to happen on a large scale, they must recognize and successfully separate isotopes, which is a significant technical accomplishment.
Is it possible the right isotopes might be lying around? Like here, but more concentrated and dispersed?
Replies from: soreff↑ comment by soreff · 2011-09-04T15:00:18.775Z · LW(p) · GW(p)
Is it possible the right isotopes might be lying around?
Yes, good point, if intelligent life evolved faster on their planet. The relevant timing is how long it took, after the supernova that generated the uranium, for the alien civilization to arise, since that sets the 238U/235U ratio.
Replies from: lessdazed↑ comment by lessdazed · 2011-09-04T15:27:13.324Z · LW(p) · GW(p)
I'm confused. I thought a reaction needed a quantity of 235U in an area, and that smaller areas needed more 235U to sustain a chain reaction. Wouldn't very small pieces of relatively 235U rich uranium be fairly stable? One could then put them together with no technological requirements at all.
Replies from: soreff↑ comment by soreff · 2011-09-04T16:20:31.049Z · LW(p) · GW(p)
You are quite correct, small pieces of 235U are stable. The difference is that the low concentration of 235U in natural uranium (because of its faster decay than 238U) makes it harder to get to critical mass, even with chemically pure (but not isotopically pure) uranium. IIRC, reactor grade is around 5% 235U, while natural uranium is 0.7%. I had thought that pure natural uranium metal, at least by itself, doesn't have enough 235U to sustain a chain reaction even in a large mass, but I vaguely recall that the original reactor experiment, with just the right spacing of uranium metal lumps and graphite moderator, may have used natural uranium. I'm still not quite sure: Chicago Pile-1 is documented here, but the web page describes the fuel only as "uranium pellets". I think they mean natural uranium, in which case I withdraw my statement that isotope separation is a prerequisite for nuclear power.
Replies from: JoshuaZ, private_messaging, lessdazed↑ comment by JoshuaZ · 2011-09-04T16:46:29.342Z · LW(p) · GW(p)
I vaguely recall that the original reactor experiment with just the right spacing of uranium metal lumps and graphite moderator may have been natural uranium
I think this is correct but finding a source which says that seems to be tough. However, Wikipedia does explicitly confirm that the successor to CP1 did initially use unenriched uranium.
Edit: This article (pdf) seems to confirm it. They couldn't even use pure uranium but had to use uranium oxide. No mention of any sort of enrichment is made.
↑ comment by private_messaging · 2013-08-28T19:12:19.405Z · LW(p) · GW(p)
Yes, CP-1 used natural uranium (~0.7% U-235) and ultra-high-purity graphite. Criticality would become impossible to attain without isotope separation in just a few hundred million years more, on top of the billions since the uranium formed in a star. Conversely, 1.7 billion years ago it occurred naturally, with ordinary water slowing down the neutrons.
Fusion is more interesting.
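[Editor's illustration] The half-life arithmetic behind this comment can be sketched in a few lines. The half-lives are standard reference values; the present-day 0.72% U-235 abundance and the example dates are assumptions for illustration:

```python
# Natural U-235 abundance as a function of time, from the two isotopes'
# half-lives. U-235 decays faster than U-238, so the natural 235/238
# ratio falls steadily over geological time.
HALF_U235 = 7.038e8   # years
HALF_U238 = 4.468e9   # years

def u235_fraction(years_ago, frac_now=0.0072):
    """Atom fraction of U-235 in natural uranium at a time in the past
    (negative argument = in the future)."""
    n235 = frac_now * 2 ** (years_ago / HALF_U235)
    n238 = (1 - frac_now) * 2 ** (years_ago / HALF_U238)
    return n235 / (n235 + n238)

print(u235_fraction(0))       # 0.0072  (today: enough for CP-1, barely)
print(u235_fraction(1.7e9))   # ~0.029  (Oklo era: water-moderated natural reactors)
print(u235_fraction(-5e8))    # ~0.0048 (500 My hence: leaner still)
```

This reproduces the comment's claims: 1.7 billion years ago natural uranium was roughly 3% U-235, rich enough for a water-moderated reactor to run spontaneously, while a few hundred million years from now the natural abundance drops well below today's 0.72%.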
↑ comment by lessdazed · 2011-09-04T22:41:31.696Z · LW(p) · GW(p)
IIRC, pure natural uranium metal, at least by itself, doesn't have enough 235U to sustain a chain reaction, even in a large mass.
What counts as "natural" is something that I, with no background other than a history-of-nuclear-weapons class for my history degree, was and am not confident wouldn't vary from solar system to solar system.
The natural reactor ended up with less U235 than normal, decayed uranium because some of the fuel had been spent. I assume that it began with either an unusual concentration of regular uranium (or other configuration of elements that slowed neutrons or otherwise facilitated a reaction) or that the uranium there was unusually rich in 235U. If it was the latter, I don't know the limits for how rich in 235U uranium could be at time of seeding into a planet, but no matter the richness, having small enough pieces would preserve it for future beings. Richness alone wouldn't cause a natural reaction, so to the extent richness can vary, it can make nuclear technology easy.
If the natural reactor had average uranium, and uranium on planets wouldn't be particularly more 235U rich than ours, then nuclear technology's ease would be dependent on life arising quickly relative to ours, but not fantastically so, as you say.
↑ comment by JoshuaZ · 2011-09-04T13:27:52.297Z · LW(p) · GW(p)
Heavier than air flight, digital computers, nuclear energy, the expansion of the universe, the genetic code, are all good candidates. You can't discover those without discovering a lot of other things first.
Genetic code might likely vary. While it isn't implausible that other life would use DNA for its genetic storage it doesn't seem to be that likely. It seems extremely unlikely that DNA would be organized in the same triplet codon system that life on Earth uses.
Heavier-than-air flight is also a function of what sort of planet you are on. If Earth had slightly weaker or stronger gravity, the difficulty of this achievement would change a lot. Also, if intelligent life had arisen from winged species, one could see this as affecting how much they study aerodynamics and the like. One could see that going either way (say, having a very intuitive understanding of how to fly but considering it incredibly difficult to make an Artificial Flyer, or the opposite, using that intuition to easily understand what would need to be done in some form).
Other than that, your argument seems to be a good one.
↑ comment by NancyLebovitz · 2011-09-03T08:05:45.089Z · LW(p) · GW(p)
I wonder if there's any way to estimate how hard it is for an intelligent species to think of evolution. It's a very abstract theory, and I think it's plausible that intelligent species could be significantly better or worse than we are at abstract thought. I have no idea where the middle of the bell curve (if it's a bell curve at all) would be.
comment by roland · 2011-09-02T19:40:32.602Z · LW(p) · GW(p)
Replies from: Oscar_Cunningham
the outside view does not inform judgments of particular cases. --Kahneman, Lovallo
↑ comment by Oscar_Cunningham · 2011-09-02T20:28:15.874Z · LW(p) · GW(p)
But it does, no?
Replies from: roland
↑ comment by roland · 2011-09-12T21:45:58.091Z · LW(p) · GW(p)
Sorry, but I don't understand you.
Replies from: Oscar_Cunningham
↑ comment by Oscar_Cunningham · 2011-09-12T22:54:24.965Z · LW(p) · GW(p)
Using the outside view tells you something about your particular case. If you don't know how long your project is going to take, and then someone tells you that such projects normally take ten months, then you've learned something about how long your project will take: it will take about ten months. Your quote is the kind of thing people say because they think their project is special in some way; they're trying to fight the outside view. But most of the time their project isn't special, and it will take just as long as everyone else's.
The outside view informs judgments of particular cases.
Replies from: roland
↑ comment by roland · 2011-09-23T21:19:52.686Z · LW(p) · GW(p)
Alright, I will add a bit more context:
Academics are familiar with a related example: finishing our papers almost always takes us longer than we expected. We all know this and often say so. Why then do we continue to make the same error? Here again, the outside view does not inform judgments of particular cases.
From "Timid Choices and Bold Forecasts: A Cognitive Perspective on Risk Taking" by Daniel Kahneman and Dan Lovallo
Replies from: Oscar_Cunningham
↑ comment by Oscar_Cunningham · 2011-09-23T22:10:03.696Z · LW(p) · GW(p)
Ah, that makes much more sense. Thanks.
comment by [deleted] · 2011-09-01T16:06:30.952Z · LW(p) · GW(p)
.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2011-09-01T17:26:50.524Z · LW(p) · GW(p)
Sorry, I don't understand what this quote is trying to say. I've attempted to parse it and can only sort of extract something about not caring what the truth is. If that's the meaning, then it seems pretty anti-rationalist. What am I missing?
Replies from: None
↑ comment by [deleted] · 2011-09-01T17:36:22.108Z · LW(p) · GW(p)
.
Replies from: djcb
↑ comment by djcb · 2011-09-03T10:46:43.011Z · LW(p) · GW(p)
I found Campbell's The Hero with a Thousand Faces not very convincing. The similarities he sees between folk stories are often rather trivial, I think, and the rubbery nature of human language makes it easy to find such parallels -- not even mentioning selection bias.
Is The Power of Myth better?
Replies from: None
↑ comment by [deleted] · 2011-09-03T15:16:06.350Z · LW(p) · GW(p)
.
Replies from: Normal_Anomaly, djcb
↑ comment by Normal_Anomaly · 2011-09-03T16:55:31.396Z · LW(p) · GW(p)
I wonder what he would think of the possibility of "editing" human nature via technology, and how those changes might negate the usefulness of mythology as a set of teaching memes.
Greg Egan's short story "The Planck Dive" has an interesting take on that subject. It's about a mythologist trying to force a description of a post-Singularity scientific expedition into one of the classic mythical narratives.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2011-09-03T17:28:39.111Z · LW(p) · GW(p)
It's not "post-Singularity", it's normal human technology, just more advanced.
Replies from: Normal_Anomaly
↑ comment by Normal_Anomaly · 2011-09-04T01:29:03.543Z · LW(p) · GW(p)
I guess you could say that. I said "post-Singularity" because all the characters are uploads, but there aren't any AGIs and human nature isn't unrecognizably different.
↑ comment by djcb · 2011-09-04T23:12:16.774Z · LW(p) · GW(p)
An example of a well-known non-trivial similarity would be the flood myths that many cultures have -- it seems that at least some of those myths are related somehow, but not in the inherited psycho-analytical way (!) that Campbell suspects; more likely simply through copying of the stories (e.g. Noah, Gilgamesh).
comment by Sblast · 2011-09-25T07:23:19.647Z · LW(p) · GW(p)
"LANGUAGE IS MORE THAN BLOOD'
-- Franz Rosenzweig, quoted in the book "The Language of the Third Reich: A Philologist's Notebook" by Holocaust survivor Victor Klemperer
Replies from: wedrifid↑ comment by wedrifid · 2011-09-25T07:48:34.601Z · LW(p) · GW(p)
Huh? Unless you are quoting from a fantasy story with an unusual magic system then I have no idea what you are talking about.
Replies from: shokwave
↑ comment by shokwave · 2011-09-25T08:16:25.222Z · LW(p) · GW(p)
Then -> than?
Language is more than blood... more powerful than blood? I recognise "Language of the Third Reich"; it was a study of how language (most notably alien and eternal) was used to alter perceptions during the Third Reich's reign. Maybe this quote means language can turn blood relatives against each other? Or that language can dehumanise a person to the point that seeing them die (their blood spilled?) doesn't bother someone?
Yeah, I got nothing either.
Replies from: Sblast