Survey Results
post by Scott Alexander (Yvain) · 2009-05-12T22:09:39.463Z
Followup to: Excuse Me, Would You Like to Take a Survey?, Return of the Survey
Thank you to everyone who took the Less Wrong survey. I've calculated some results in SPSS, and I've uploaded the data for anyone who wants it. I removed twelve people who wanted to remain private, removed a few people's karma upon request, and re-sorted the results so you can't figure out that the first person on the spreadsheet was the first person to post "I took it" on the comments thread, and so on. Warning: you will probably not get exactly the same results as me, because a lot of people gave poor, barely comprehensible write-in answers, which I tried to round off to the nearest bin.
Download the spreadsheet (right now it's in .xls format)
I am not a statistician, although I occasionally have to use statistics for various things, and I will gladly accept corrections for anything I've done wrong. Any Bayesian purists may wish to avert their eyes, as the whole analysis is frequentist. What can I say? I get SPSS software and training free and I don't like rejecting free stuff. The write-up below is missing answers to a few questions that I couldn't figure out how to analyze properly; anyone who cares about them enough can look at the raw data and try it themselves. Results under the cut.
Out of 166 respondents:
160 (96.4%) were male, 5 (3%) were female, and one chose not to reveal their gender.
The mean age was 27.16, the median was 25, and the SD was 7.68. The youngest person was 16, and the oldest was 60. Quartiles were <22, 22-25, 25-30, and >30.
Of the 158 of us who disclosed our race, 148 were white (93.6%), 6 were Asian, 1 was Black, 2 were Hispanic, and one cast a write-in vote for Middle Eastern. Judging by the number who put "Hinduism" as their family religion, most of those Asians seem to be Indians.
Of the 165 of us who gave readable relationship information, 55 (33.3%) are single and looking, 40 (24.2%) are single but not looking, 40 (24.2%) are in a relationship, 29 (17.6%) are married, and 1 is divorced.
Only 138 gave readable political information (those of you who refused to identify with any party and instead sent me manifestos, thank you for enlightening me, but I was unfortunately unable to do statistics on them). We have 62 (45%) libertarians, 53 (38.4%) liberals, 17 (12.3%) socialists, 6 (4.3%) conservatives, and not one person willing to own up to being a commie.
Of the 164 people who gave readable religious information, 134 (81.7%) were atheists and not spiritual; 5 other atheists described themselves as "spiritual". Counting deists and pantheists, we had 11 believers in a supreme being (6.7%), of whom 2 were deist/pantheist, 2 were lukewarm theists, and 6 were committed theists. 14 of us (8.5%) were agnostic.
53 of us were raised in families of "about average religiosity" (31.9%). 24 (14.5%) were from extremely religious families, 45 (27.1%) from nonreligious families, and 9 (5.4%) from explicitly atheist families. 30 (18.1%) were from families less religious than average. The remainder wrote in some hard-to-categorize responses, like an atheist father and religious mother, or vice versa.
Of the 106 of us who listed our family's religious background, 92 (87%) were Christian. Of the Christians, 29 (31.5% of Christians) described their backgrounds as Catholic, 30 (32.6% of Christians) described it as Protestant, and the rest gave various hard-to-classify denominations or simply described themselves as "Christian". There were also 9 Jews, 3 Hindus, 1 Muslim, and one New Ager.
I didn't word the "how much of Overcoming Bias have you read" question so well, and people ended up responding with things like "Oh, most of it", which are again hard to average. After interpreting things extremely liberally and unscientifically ("most" was estimated at 75%, "a bit" was estimated at 25%, et cetera), I got that the average LWer has read about half of OB, with a slight tendency to read more of Eliezer's posts than Robin's.
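The rounding-to-bins heuristic can be sketched as a lookup table. The mapping below is my guess at the scheme described; only "most" ≈ 75% and "a bit" ≈ 25% are named in the post, and the other bins are illustrative:

```python
# Hypothetical bins for free-text "how much have you read" answers; only
# "most" and "a bit" come from the post, the rest are assumptions.
fraction_read = {
    "all": 1.00,
    "most": 0.75,
    "about half": 0.50,
    "a bit": 0.25,
    "none": 0.00,
}

def bin_answer(answer: str) -> float:
    """Round a write-in answer off to the nearest named bin (default: half)."""
    return fraction_read.get(answer.strip().lower(), 0.50)
```

Anything not matching a named bin falls back to "about half", which is roughly what the post's average came out to anyway.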
Average time in the OB/LW community was 13.6 ± 9.2 months. Average time spent on the site per day was 30.7 ± 30.4 minutes.
IQs (warning: self-reported numbers for notoriously hard-to-measure statistic) ranged from 120 to 180. The mean was 145.88, median was 141.50, and SD was 14.02. Quartiles were <133, 133-141.5, 141.5-155, and >155.
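For anyone re-running the numbers from the spreadsheet, statistics like these are easy to reproduce with Python's standard library. A minimal sketch on made-up IQ values (not the actual survey column):

```python
import statistics

# Ten invented self-reported IQ values, NOT the actual survey data.
iqs = [120, 131, 135, 138, 141, 142, 150, 156, 160, 180]

mean = statistics.mean(iqs)
median = statistics.median(iqs)
sd = statistics.stdev(iqs)  # sample standard deviation (n - 1 denominator)
# Three cut points splitting the data into quartiles; note that different
# packages (SPSS included) use slightly different interpolation rules.
q1, q2, q3 = statistics.quantiles(iqs, n=4)
```

The quartile boundaries reported in the post correspond to cut points like `q1`, `q2` (the median), and `q3` here.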
77 people were willing to go out on a limb and guess whether their IQ would be above the median or not. The mean confidence level was 54.4, and the median confidence level was 55 - which shows a remarkable lack of self-promoting bias. The quartiles were <40, 40-55, 55-70, >70. There was a .453 correlation between this number and actual IQ. This number was significant at the <.001 level.
Probability of Many Worlds being more or less correct (given as mean, median, SD; all probabilities in percentage format): 55.65, 65, 32.9.
Probability of aliens in the observable Universe: 70.3, 90, 35.7.
Probability of aliens in our galaxy: 40.9, 35, 38.5. Notice the huge standard deviations here; the alien questions were remarkable both for the high number of people who put answers above 99.9, and the high number of people who put answers below 0.1. My guess: people who read about The Great Filter versus those who didn't.
Probability of some revealed religion being true: 3.8, 0, 12.6.
Probability of some Creator God: 4.2, 0, 14.6.
Probability of something supernatural existing: 4.1, 0, 12.8.
Probability of an average person cryonically frozen today being successfully revived: 22.3, 10, 26.2.
Probability of anti-agathic drugs allowing the current generation to live beyond 1000: 29.2, 20, 30.8.
Probability that we live in a simulation: 16.9, 5, 23.7.
Probability of anthropogenic global warming: 69.4, 80, 27.8.
Probability that we make it to 2100 without a catastrophe killing >90% of us: 73.1, 80, 24.6.
When asked to determine a year in which the Singularity might take place, the mean guess was 9,899 AD, but this is only because one person insisted on putting 100,000 AD. The median might be a better measure in this case; it was mid-2067.
Thomas Edison patented the lightbulb in 1880. I've never before been a firm believer in the wisdom of crowds, but it really came through in this case. Even though this was clearly not an easy question and many people got really far-off answers, the mean was 1879.3 and the median was 1880. The standard deviation was 36.1. Person who put "2172", you probably thought you were screwing up the results, but in fact you managed to counterbalance the other person who put "1700", allowing the mean to revert back to within one year of the correct value :P
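The offsetting-outliers point generalizes: the mean only survives extreme guesses when they happen to balance, while the median shrugs them off either way. A toy sketch with invented guesses (not the survey's actual answers):

```python
import statistics

# Six invented guesses clustered on 1880, plus two outliers placed
# symmetrically around it (1880 - 180 and 1880 + 180).
guesses = [1875, 1878, 1880, 1880, 1882, 1885] + [1700, 2060]

mean = statistics.mean(guesses)      # the symmetric outliers cancel exactly
median = statistics.median(guesses)  # robust to the outliers regardless
```

Shift either outlier and the mean drifts while the median stays put, which is why the median is usually the safer summary for crowd guesses.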
The average person was 26.77% sure they got within 5 years of the correct answer on the lightbulb question. 30% of people did get within 5 years. I'm not sure how much to trust the result, because several people put the exact correct year down and gave it 100% confidence. Either they were really paying attention in history class, or they checked Wikipedia. There was a high correlation between high levels of confidence on the question and actually getting the question right, significant at the <.001 level.
I ran some correlations between different things, but they're nothing very interesting. I'm listing the ones that are significant at the <.05 level, but keep in mind that since I just tried correlating everything with everything else, there are a couple hundred correlations and it's absolutely plausible that many things would achieve that significance level by pure chance.
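The multiple-comparisons caveat is worth quantifying: at a .05 threshold, a couple hundred tests are expected to throw up roughly ten spurious "significant" results even if nothing is really correlated. A quick sketch (the 200 is my stand-in for "a couple hundred"):

```python
# Expected false positives among many independent tests at alpha = 0.05.
n_tests = 200  # stand-in for the post's "couple hundred correlations"
alpha = 0.05

expected_false_positives = n_tests * alpha  # about ten by chance alone
# Probability that at least one test comes up "significant" by chance:
p_at_least_one = 1 - (1 - alpha) ** n_tests
```

This is why the <.001 results below are far more trustworthy than the ones that just scrape under .05.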
How long you've been in the community obviously correlates very closely with how much of Robin and Eliezer's posts you've read (and both correlate with each other).
People who have read more of Robin and Eliezer's posts have higher karma. People who spend more time per day on Less Wrong have higher karma (with very strong significance, at the <.001 level.)
People who have been in the community a long time and read many of EY and RH's posts are more likely to believe in Many Worlds and Cryonics, two unusual topics that were addressed particularly well on Overcoming Bias. That suggests if you're a new person who doesn't currently believe in those two ideas, and they're important to you, you might want to go back and find the OB sequences about them (here's Many Worlds, and here's some cryonics). There were no similar effects on things like belief in God or belief in aliens.
Older people were less likely to spend a lot of time on the site, less likely to believe in Many Worlds, less likely to believe in global warming, and more likely to believe in aliens.
Everything in the God/revealed religion/supernatural cluster correlated pretty well with each other. Belief in cryonics correlated pretty well with belief in anti-agathics.
Here is an anomalous finding I didn't expect: the higher a probability you assign to the truth of revealed religion, the less confident you are that your IQ is above average (even though no correlation between this religious belief and IQ was actually found). Significance is at the .025 level. I have two theories on this: first, that we've been telling religious people they're stupid for so long that it's finally starting to sink in :) Second, that most people here are not religious, and so the people who put a "high" probability for revealed religion may be assigning it 5% or 10%, not because they believe it but because they're just underconfident people who maybe overadjust for their biases a little too much. This same underconfidence leads them to underestimate the possibility that their IQ is above average.
The higher probability you assign to the existence of aliens in the universe, the more likely you are to think we'll survive until 2100 (p=.002). There is no similar correlation for aliens in the galaxy. I credit the Great Filter article for this one too - if no other species exist, it could mean something killed them off.
And, uh, the higher probability you assign to the existence of aliens in the galaxy (but not in the universe) the more likely you are (at a .05 sig) to think global warming is man-made. I have no explanation for this one. Probably one of those coincidences.
Moving on - of the 102 people who cared about the ending to 3 Worlds Collide, 68 (66.7%) preferred to see the humans blow up Huygens, while 34 (33.3%) thought we'd be better off cooperating with the aliens and eating delicious babies.
Of the 114 people who had opinions about the Singularity, 85 (74.6%) go with Eliezer's version, and 29 (25.4%) go with Robin's.
If you're playing another Less Wronger in the Prisoner's Dilemma, you should know that of the 133 who provided valid information for this question, 96 (72.2%) would cooperate and 37 (27.8%) would defect. The numbers switch when one player becomes an evil paper-clip loving robot; out of 126 willing to play the "true" Prisoner's Dilemma, only 42% cooperate and 58% defect.
Of the 124 of us willing to play the Counterfactual Mugging, 53 (42.7%) would give Omega the money, and 71 (57.3%) would laugh in his face.
Of the 146 of us who had an opinion on aid to Africa, 24 (16.4%) thought it was almost always a good thing, 42 (28.8%) thought it was almost always a bad thing, and 80 (54.8%) took a middle-of-the-road approach and said it could be good, but only in a few cases where it was done right.
Of the 128 of us who wanted to talk about our moral theories, 94 (73.4%) were consequentialists, about evenly split between garden-variety and Eliezer-variety (many complained they didn't know what Eliezer's interpretation was, or what the generic interpretation was, or that all they knew was that they were consequentialists). 15 (11.7%) said, with more or fewer disclaimers, that they were basically deontologists; 5 (3.9%) wrote in virtue ethics and objected to their beliefs being left out (sorry!); and 14 people (10.9%) didn't believe in morality.
Despite the seemingly overwhelming support for cryonics any time someone mentions it, only three of us are actually signed up! Of the 161 of us who admitted we weren't, 11 (6.8%) just never thought about it, 99 (61.5%) are still considering it, and 51 (31.7%) have decided against it.
212 comments
comment by aluchko · 2009-05-13T04:34:40.199Z · LW(p) · GW(p)
Awesome work.
One thing that disappointed, but didn't really surprise me, was the lack of diversity in the community:
"160 (96.4%) were male, 5 (3%) were female, and one chose not to reveal their gender.
The mean age was 27.16, the median was 25, and the SD was 7.68. The youngest person was 16, and the oldest was 60. Quartiles were <22, 22-25, 25-30, and >30.
Of the 158 of us who disclosed our race, 148 were white (93.6%), 6 were Asian, 1 was Black, 2 were Hispanic, and one cast a write-in vote for Middle Eastern. Judging by the number who put "Hinduism" as their family religion, most of those Asians seem to be Indians."
The thing that particularly worries me is our low age. Now, it's to be expected, as internet communities are a young person's game, but I'd be more comfortable with an average age closer to 30.
Combine that with the fact that most of us seem to be in Computers or Engineering (I'd really like to know what those "Other Hard Sciences" were), and I do worry about our rationality as a group. One thing I've noticed with junk science is that engineers, and to a lesser extent computer scientists, seem to be overrepresented. I'm not sure of all the reasons for this; I suspect that part of the problem is that we regularly work with designed systems that have a master plan that can be derived from a small amount of evidence. The problem is that if you take that tendency to problem spaces that aren't designed, you have a tendency to go flying off in the wrong direction.
I'm worried that we could start turning into an echo chamber where a localized consensus masks a growing dissonance with the outside world. The Shangri-La diet sounds interesting (I'm even giving it a try), but it also sounds a bit like pseudo-science. There could be a completely different mechanism at work; it could even be the good old-fashioned placebo effect. I worry that we'll develop a tendency to believe our rationality is strong enough to wade outside of our fields of expertise; the halls of kookdom are filled with brilliant scientists who wandered into a neighbouring discipline, and I worry we could risk the same fate.
I'm not saying Less Wrong is a doomed cause or anything; on the topics we explore (oh, that crazy old Omega!) we seem to do fairly well, and I've picked up many useful lessons and insights. I just worry, since we all want to apply our rationality and find answers, but regardless of how rational you are, you can't unravel the secrets of the universe just from analysing a piece of cake.
P.S. Oh yeah, how many of us 83.4% libertarians/liberals were very torn because, while we really liked the free-market and social-liberty ideals of the libertarians, there were just too many crackpots over there, so we considered giving up some economic freedom for the mainstream Democrats?
↑ comment by JamesAndrix · 2009-05-13T19:00:42.467Z · LW(p) · GW(p)
I suspect that part of the problem is that we regularly work with designed systems that have a master plan that can be derived from a small amount of evidence.
I've been playing a lot of Portal and Half-Life 2 lately (first-person shooters with heavy puzzle elements), and I wonder how the level design is affecting my thought process.
I'm often in a room with a prominent exit, and it is clear that that is the exit I'm supposed to take. When the way I came in is blocked, I know that there is some other way to get out. When my computer-controlled squad mates parrot "Which way do we go?", I think to myself, "What do you mean? It's obvious the level designer wants us to go this way."
I wonder if this will affect how I deal with real world puzzles where there are many paths that don't lead to defined goals, but also don't lead to a clear dead end.
↑ comment by Emile · 2009-05-15T15:52:52.032Z · LW(p) · GW(p)
You could play procedural games like NetHack or Dwarf Fortress, which have no railroading, and where some things you encounter just can't be solved.
Those kinds of games aren't as popular as "mainstream" ones like Portal, but they may better reflect the real world.
↑ comment by dclayh · 2009-05-14T22:56:53.795Z · LW(p) · GW(p)
I've noticed the same thing with Valve games particularly (esp. after playing through with the developer commentary): they just seem so perfectly designed to guide the player that it becomes a bit boring. I want a few moments of running around in frustration before realizing "Aha! That's what you want me to do! How non-obvious." (A bit more like the old text-based adventures, King's Quest series, etc., in other words.)
↑ comment by JamesAndrix · 2009-05-15T15:15:14.312Z · LW(p) · GW(p)
Yes! I start interpreting things I see in game as communication from the developers, rather than a universe to figure out. Which is fine in game, but I worry it's training me for magical thinking.
King's Quest-style games solve some of this problem, probably because there's more 'noise': pointless things you can do, more places to wander.
Grand Theft Auto is even more open ended. Though I haven't played the recent incarnations much.
↑ comment by MichaelHoward · 2009-05-13T21:05:38.409Z · LW(p) · GW(p)
This Wired article may interest you.
↑ comment by Jonnan · 2009-05-15T00:22:55.977Z · LW(p) · GW(p)
Amusing article - I can't quite get my mind around feeling that way about Quake, but I'll cop to dreaming about Tetris when I was younger.
Jonnan
↑ comment by mattnewport · 2009-05-15T00:57:39.316Z · LW(p) · GW(p)
When I used to play a lot of Quake III, I had dreams where I'd have the sensation of moving around using jump-pads. I've also caught myself walking along the street and half-consciously scanning for potential cover and ambush points. My most disturbing video game carryover was a brief impulse after a long GTA session to gun my car at a pedestrian crossing a zebra crossing.
↑ comment by MichaelVassar · 2009-05-13T14:44:22.183Z · LW(p) · GW(p)
It's a cliche that kookdom is filled with brilliant scientists outside of their expertise, but it's definitely not what I observe when I look at scientific history.
Lots of kook inventors, Faraday, and lots of chemical, life, and social scientists who start out correct but ignored or rejected, and who gradually embrace more extreme, attention-getting, but exaggerated and false versions of their initial thesis as a result of years spent avoiding their peers and interacting primarily with those members of the public who will act as an echo chamber.
Then there are the free energy and anti-gravity crowds. They seem to be born that way.
↑ comment by aluchko · 2009-05-13T17:03:20.823Z · LW(p) · GW(p)
I should clarify.
I'm specifically thinking of Linus Pauling with his theories about Vitamin C curing cancer, and a former Nobel-winning physicist (can't remember who) doing a debunking of global warming based on some flaky arguments. Of course, Wikipedia claims that Pauling may not have been completely out to lunch (though I don't really trust Wikipedia when it comes to junk science). And I don't really have any hard numbers, just knowledge of a couple of cases and some anecdotes from scientists complaining about the tendency of Nobel winners to turn crackpot.
I suppose this could underline the danger I was mentioning about working with limited evidence as I fell victim in my very own example of it!
comment by Emily · 2009-05-13T00:36:39.443Z · LW(p) · GW(p)
I'm surprised to see how close I was to the mean in so many cases. I expected on several questions that I would be, if not an outlier, then outside the middle quartiles. I was wrong in most cases. Clearly the OB/LW brainwashing process has been more successful than I realised... :P
Seriously, very interesting results. I'm a bit dismayed by the 3% female figure -- I knew I was in a minority, but I didn't realise it was that tiny. I wish I could articulate some suggestions for getting hold of more female readers/commenters. I can sort of see intuitively how this place could seem like not the most attractive one to some women, but I don't have any ideas for sorting that out. Largely I guess it may just be a self-perpetuating thing. Perhaps the first step ought to be just getting some of the current female readers/commenters to make (more/some) top-level posts too. I wish I felt brave and knowledgeable and intelligent enough to attempt one touching on some aspect or other of feminism.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2009-05-13T05:09:04.525Z · LW(p) · GW(p)
Go for it!
Hopefully that will give you +1 bravery.
↑ comment by aluchko · 2009-05-14T03:53:50.834Z · LW(p) · GW(p)
Why not a top-level post noting the lack of women on LW?
It doesn't have to be anything fancy; just note the survey results. You don't even have to offer any analysis; just noting the problem in a top-level post should be enough to draw some women out of the woodwork in the comments section. It might even inspire a few of them to write their own top-level posts.
↑ comment by Alicorn · 2009-05-14T04:16:27.520Z · LW(p) · GW(p)
There has already been a top level post noting the rarity of women on LW.
comment by dfranke · 2009-05-12T23:57:24.460Z · LW(p) · GW(p)
I'm perplexed by the person who believes that the most severe existential risk is an asteroid strike, yet gave only 70% odds of surviving the century.
↑ comment by SoullessAutomaton · 2009-05-13T02:46:43.006Z · LW(p) · GW(p)
Hopefully it's not because they're an astronomer...
↑ comment by Peter_de_Blanc · 2009-05-13T05:16:41.059Z · LW(p) · GW(p)
Maybe a human-caused asteroid strike?
Or maybe this person can think of many different extinction scenarios which are individually very improbable, but add up to a 30% chance?
↑ comment by John_Maxwell (John_Maxwell_IV) · 2009-05-13T05:30:43.211Z · LW(p) · GW(p)
Most astronomers seem to put the odds of an asteroid strike at below 1 in 1000. I'd be interested to hear the person's other 299 ideas for race-ending catastrophes, each worthy of its own category (!).
↑ comment by Craig_Morgan · 2009-05-13T08:48:43.162Z · LW(p) · GW(p)
I agree with your point, but just because someone can't enumerate 299 possibilities does not mean they should not reserve probability space for unknown unknowns. Put another way: in calculating these odds, you must leave room for race-ending catastrophes that you didn't even imagine. I believe this point is important, that we succumb to multiple biases in this area, and that these biases have affected the decision-making of many rationalists. I am preparing a Less Wrong post on this and related topics.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2009-05-13T17:34:27.669Z · LW(p) · GW(p)
Hmmm... I think "something I can't think of" should qualify as a category, myself.
comment by steven0461 · 2009-05-13T20:30:34.067Z · LW(p) · GW(p)
The main thing I take away from this survey is that many of us still think we can assign 99%+ probabilities based on no good info (the aliens and existential risk questions stand out in particular). Maybe the LW community needs to focus more on the basics?
↑ comment by ShardPhoenix · 2009-05-15T07:13:35.368Z · LW(p) · GW(p)
I voted for a high probability of life occurring at least once elsewhere in the universe due to a combination of the very large size of the universe and the generalized Copernican principle.
↑ comment by Peter_de_Blanc · 2009-05-15T01:01:55.086Z · LW(p) · GW(p)
Maybe other people think they have good info.
↑ comment by steven0461 · 2009-05-15T05:12:37.540Z · LW(p) · GW(p)
But they don't in fact have good info, so there must (with very high probability) have been some sort of rationality blunder involved.
↑ comment by AnnaSalamon · 2009-05-15T07:20:07.646Z · LW(p) · GW(p)
I reported a 1% chance of extra-Earth sentient aliens in our galaxy on Yvain's survey. Is that overconfident? I was reasoning that, if the rest of the universe is real (which it may not be under some simulation hypotheses), the odds of there being extra-Earth intelligence in this particular time-window (intelligence that has already acquired sentience and has not yet become lightcone-tiling or made itself extinct) were rather small. (The galaxy is about 100,000 light-years across.)
Now that I write this, I'm inclined to think I didn't pay enough heed to uncertainty in whether I set up the analysis correctly or was missing a major consideration.
↑ comment by steven0461 · 2009-05-15T07:26:25.244Z · LW(p) · GW(p)
I meant the people who reported 99% chances, mostly. The Fermi paradox is probably good info.
One thing I do worry about is whether some anthropic idea (the self-indication principle) may favor densely populated universes over sparsely populated universes. Something like that is true in this model, but I'm not sure it could work (the probability of intelligent life is a matter of logical necessity; what other implications do you get if you apply this model of anthropic reasoning to uncertain logical necessities?), and even if it does, it should only imply a serious probability for aliens in the observable universe, not the galaxy.
comment by John_Maxwell (John_Maxwell_IV) · 2009-05-13T04:57:13.740Z · LW(p) · GW(p)
Why the heck is the average stated probability of a creator god greater than the average stated probability of something supernatural existing? Or did a team of scientists in a parallel universe count as a creator god?
↑ comment by Scott Alexander (Yvain) · 2009-05-13T06:39:46.375Z · LW(p) · GW(p)
I think the supernatural question was phrased as something supernatural happening within the universe. So in a deist perspective, if God created the universe and then went away, that would qualify for creator god but not supernatural.
↑ comment by Jack · 2009-05-17T03:01:38.355Z · LW(p) · GW(p)
This was really interesting, and I actually have a confession to make in this regard. I remember changing my estimates after noting that I had given a lower probability to supernatural entities than I gave to the existence of God. I can't figure out why I did that.
comment by taw · 2009-05-13T08:36:28.420Z · LW(p) · GW(p)
Two things surprise me.
First, while 73.4% of responders are consequentialists and only 9% deontologists, at the same time 45% of responders are libertarians. While labels like that are vague, libertarianism is in most versions highly deontologist ideology, and cares about processes and not results as such.
The other thing was the 33.3% "single and looking" (plus the 24.2% "single and not looking", which presumably consists of some mix of "single and not interested" and "single, tried but given up"). There are some well-known seduction techniques based on adjusting for biases people have because we're evolutionarily adapted to a different environment. I'd guess most responders compartmentalize their rationality and do not apply rational thinking to their personal lives. More compartment-busting posts, please?
↑ comment by Richard_Kennaway · 2009-05-13T12:05:26.493Z · LW(p) · GW(p)
While labels like that are vague, libertarianism is in most versions highly deontologist ideology, and cares about processes and not results as such.
The libertarian writing I've seen is primarily consequentialist: arguments, mostly by economists, that governments produce worse results than people directing their own efforts for their own reasons. So I see no contradiction in the survey responses.
If, in the places I don't read, which are surely more numerous than those I do, libertarianism is mainly promoted by arguments that government is morally wrong, then I can see why the Libertarian party has never been more than a splinter movement.
↑ comment by taw · 2009-05-13T14:54:21.474Z · LW(p) · GW(p)
The argument I've seen is mostly something like:
- People have rights to absolute personal liberty in economic matters (pure deontology).
- In the fairytale case of perfectly competitive markets with no externalities, no transaction costs, no information asymmetry, no cost of entry, and so on and so on, government intervention is inefficient.
- Somehow based on the fairytale case, following libertarian process is bound to produce the best possible results, without any need for serious empirical evidence that this link is true.
- In cases where following the rules doesn't seem to produce best results, we're supposed to follow the rules anyway, as breaking them would most likely result in something even worse, even if we don't immediately see it.
This sort of post-hoc consequentialization of essentially deontological ethics is extremely common. I don't know of many deontologies that have the balls to avoid this trick and assert that they don't care about consequences.
Ask a typical Christian, or other theist, and just like a libertarian, they will tell you something like this:
- sinning is wrong (deontological basis)
- sinning results in bad consequences (unsupported assertion)
- even when sinning seems to result in right consequences, they're bad anyway, we just don't see it (post-hoc consequentialization)
On the other hand, modern European politics (there's not terribly much difference between "left" and "right") mixes market-based, government-based, and other solutions based on what is estimated to work best, not on any big ideology, which it lost a long time ago, even though it clings to all kinds of labels like "social democratic" or "Christian democratic".
↑ comment by mattnewport · 2009-05-14T18:14:39.331Z · LW(p) · GW(p)
I don't know where you've been finding this argument, but it's hardly representative of a good argument for libertarianism. I grew up in Europe (well, the UK, which is kind of Europe) with Labour-voting parents and grandparents with fairly socialist views, and considered myself a socialist into my early 20s. Weak arguments like these wouldn't have been enough to convert me to a generally libertarian worldview.
I had a similar caricature of the views of supporters of the free market (back when I didn't even know the term libertarian), but learning more about economics, being confronted with evidence of better outcomes in freer economies, learning that few serious economists (or libertarians) believe in perfectly efficient markets, and learning about Public Choice Theory were key in changing my political views.
Key to the economic arguments for libertarianism is the idea that incentives matter and that the incentives facing actors in a free market tend to be far less perverse than those facing politicians or employees of state run monopolies.
The moral arguments stem largely from a view that personal freedom is a high moral value and that the evidentiary bar should be set very high for any demonstration of harm to justify restriction of individual freedoms. That tendency seems to be correlated with certain personality types according to some research and the crossover between libertarians and progressives/liberals on social issues seems to be as much a factor of personal values as of consequentialist reasoning.
And being fairly familiar with UK politics (less so with European politics in other countries) the idea that European politics pick policies based on 'what is estimated to work best' strikes me as pretty laughable.
Replies from: Yvain, conchis, MrHen↑ comment by Scott Alexander (Yvain) · 2009-05-14T18:49:31.542Z · LW(p) · GW(p)
Thanks, Matt. You're providing some interesting points in a direction I hadn't heard much about before.
Do you think most libertarians believe that regulation by a responsible, intelligent, benevolent government would improve society, but that we simply don't have a government we can trust that much? Or do you think they believe that any government intervention is likely to have adverse effects no matter how well-planned it is?
Replies from: mattnewport↑ comment by mattnewport · 2009-05-14T19:58:42.703Z · LW(p) · GW(p)
I think most libertarians would tend to agree with Hayek's presentation of the Economic Calculation Problem as a fairly fundamental obstacle to successful government planning. There are a couple of problems with government attempts to improve society: one is their practical ability to do so (given a clear goal, are they able to achieve it?), and the other is how they decide what constitutes 'improvement'. The fact that they generally fail at the former tends to mask the fact that they don't really have a good way of doing the latter. Given all the relevant inputs, perfect rationality, and unlimited computational capacity, I concede the theoretical possibility of a central planner producing better outcomes than a market. However, such a planner would be so far from any government that actually exists, or could exist given current technology, that I don't consider it particularly relevant whether it is theoretically possible. That could perhaps change if Eliezer is successful.
The more immediate problem is that governments are not structured in a way that provides incentives to improve society. The reality of politics is all about special interests, rent seeking, regulatory capture and political maneuvering. The system as it actually exists is certainly not capable of making rational policy choices to improve society, though it remains possible that by some happy accident some policies may not be terribly harmful.
↑ comment by conchis · 2009-05-16T13:09:22.995Z · LW(p) · GW(p)
Matt, I'd be interested to know how your broader views on the nature of morality (i.e. that it's essentially enlightened self-interest) feed in to your support for libertarianism.
More specifically, it seems as though this view would set a lower empirical bar than more altruistic views, and I guess I'm wondering to what extent you view the empirical arguments for libertarianism as sufficiently strong that you would still endorse something like it if you were a utilitarian or a prioritarian or an egalitarian instead.
Replies from: mattnewport↑ comment by mattnewport · 2009-05-17T20:47:42.756Z · LW(p) · GW(p)
My views on morality are certainly interconnected with my support for libertarianism. In the case of healthcare, for example, my idea of what would constitute a good system may well differ from that of someone who takes a more utilitarian view of morality. I think there may well be a place for some kind of government involvement in the control and treatment of infectious disease: there are externalities to consider if someone forgoes treatment for cost reasons, and a treatment service for infectious diseases that is free at the point of delivery is arguably a public good that would be undersupplied without government involvement. I don't, however, think that anyone has a fundamental right to healthcare, and utilitarian arguments for healthcare reform that advocate a system based on a more 'equitable' allocation of healthcare resources are not going to carry much weight with me.
This does mean that I will tend to judge empirical evidence by somewhat different standards than someone who takes a different view of morality. If someone is arguing for universal healthcare from a particular set of moral premises, I am likely to point out evidence suggesting the reforms won't work even to achieve their stated goals, rather than argue with their premises. It's entirely possible that the evidence would suggest the proposed reforms would achieve their goals and that I would still not support them, since I might not share those goals. There's an obvious risk that I will view evidence selectively because of this, but once you're aware of confirmation bias and make an effort to allow for it, I'm not sure how much more you can do to protect yourself.
Many of the economic arguments for libertarianism stem from the fact that people don't act like pure altruists/utilitarians and instead act largely in their own self-interest. I'd argue that if you start from utilitarian premises and try to devise policies to further those goals, you will often find that the evidence indicates the policies won't work, because people respond to incentives according to their own self-interest. Healthcare is full of examples of such problems: once people are insulated from the costs of their own treatment, they will tend to over-consume healthcare resources. In order to control costs, rationing must then be implemented by some kind of bureaucracy rather than by individual choice, and the results are seldom optimal by any reasonable measure.
↑ comment by MrHen · 2009-05-14T18:52:33.296Z · LW(p) · GW(p)
I don't know where you've been finding this argument but it's hardly representative of a good argument for libertarianism. I grew up in Europe (well, the UK, which is kind of Europe) [...]
FYI, "libertarianism" apparently means something different in the United States than it does elsewhere. This comes from a friend who is currently majoring in political science; he claims that "true libertarians would just laugh at American libertarians." I don't know exactly what that means and can't give any more information, but it sounded relevant to the discussion.
Replies from: Emile, mattnewport↑ comment by Emile · 2009-05-14T19:26:03.392Z · LW(p) · GW(p)
In France at least, "Libertarians" ("Libertaires") are traditionally left-wing anarchists; US-style Libertarians would be what we call "Liberals" ("Liberaux"), though it seems some have recently started calling themselves "liberaux-libertaires".
↑ comment by mattnewport · 2009-05-14T23:08:07.753Z · LW(p) · GW(p)
I hadn't heard the term in the UK before encountering it in discussions with American libertarians online. I believe Classical Liberalism would be the closest term commonly (though not very commonly any more) used in the UK.
↑ comment by Mike Bishop (MichaelBishop) · 2009-05-13T15:31:47.263Z · LW(p) · GW(p)
On the other hand, modern European politics (where there's not terribly much difference between "left" and "right") mixes market-based, government-based, and other solutions based on what is estimated to work best, not on any big ideology; it lost its ideology long ago, even though it clings to all kinds of labels like "social democratic" or "christian democratic".
I'm not sure what you meant by "based on what is estimated to work best," but I would say that modern European politics is not that different from modern American politics, or from politics fifty years ago, in that politics can be described as the result of pre-existing political institutions; irrational, ignorant, and unenlightened voters; corruption; and special interest groups. Well, things could be a lot worse. We could live in Myanmar or Sudan.
If European politics has gotten less ideological, is that (to a first approximation) because political institutions changed or because voters became less ideological?
Replies from: taw↑ comment by taw · 2009-05-13T20:52:14.657Z · LW(p) · GW(p)
As far as I can tell, European politics (as far as that's even a valid label) is different from American politics. I didn't do any proper research, so this might be just impressions. From what I can see, many Americans would say they "are a Democrat/Republican/etc.", while Europeans would only say they "vote Labour/Conservative/etc."; Europeans are much more likely to switch votes between elections; and American political parties talk a lot about ideologies (freedom, fairness, the Constitution, the Founding Fathers, a Christian nation, this or that being socialism, and so on), which is extremely unusual in Europe.
By the way, your description of what politics is like, while not invalid, seems extremely biased. As far as I can tell, politics is mostly about day-to-day dealing with the mundane problems of managing the state and balancing the interests of different groups in it. Yes, the things you're talking about are there, but if someone described modern capitalism as consisting of exploitation of third-world workers, destruction of the environment, corruption, union busting, focus on quarterly profits over sustainability, gender discrimination, races to the bottom, oligopolies, brainwashing consumers, etc., it would also be true, but about as biased.
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2009-05-13T23:31:58.564Z · LW(p) · GW(p)
When speaking about politics in general, and current governments in particular, my rhetoric tends to be negative and focus on problems. This is because I hope that talking about the problems will get people to help fix or work around them.
It is my impression that the public, though perhaps not people on LW, has too much faith that a) they know what good public policy is, and b) current policy is good. You would probably respond, and would be correct to respond, that government, and the political process, can do good. This should be recognized... I am not a libertarian extremist.
I know a fair bit about American politics and the disciplines of political science, economics, and sociology. But I know little about Europe, and I should have admitted that straight up. I didn't, and still don't, fully understand the differences in how ideological politics is across the Atlantic. Don't people trumpet rights-based claims a lot? Or draw on what are considered admirable nationalistic characteristics in framing debates? Or talk about the dangers of neo-liberalism or capitalism? I'll have to think and read about that more. Thanks for the suggestion.
Replies from: taw↑ comment by taw · 2009-05-14T00:21:39.774Z · LW(p) · GW(p)
By focusing on problems of government and ignoring problems of modern capitalism, which arguably has far more influence (both positive and negative) on our daily lives, and over which we have a lot less control, you're highly biasing the debate. It's not just you; I would say people in general are a lot more critical of government policies than of the consequences of the current form of capitalism (which has nothing to do with the libertarian/Econ-101 fairytale free market).
As for European politics (I'm basing this mostly on Poland, the UK, and Germany, as opposed to the States, but my understanding is that the situation is very similar in most European countries):
Admirable nationalistic characteristics - never; that's a purely American thing. European politicians tend to be extremely shy about national issues; there's no flag waving, etc.
Rights-based claims - not really. You can often hear that some policies are unjust toward some group, or cause some group suffering, or would benefit some group, but it's very rarely about an abstract "right to X" the way American debates are framed.
Talking about the dangers of neo-liberalism - this happens, usually in terms of a specific problem (like mistreatment of employees, or job losses, or environmental issues), more often in a realistic "companies only care about profit, so we need to regulate the things about them that we care about" form, and rarely as a generic "neo-liberal capitalism is bad". But why do you include it as ideological? Should neo-liberalism be a taboo subject?
Replies from: MichaelBishop, MichaelBishop, Torben↑ comment by Mike Bishop (MichaelBishop) · 2009-05-14T03:54:45.825Z · LW(p) · GW(p)
Admirable nationalistic characteristics - never; that's a purely American thing.

Really? Nationalism is a purely American thing?
Companies care about profits which makes them care about their consumers, their suppliers, their workers, and their congressmen (for better or for worse). But regulations are obviously necessary, and I like public goods.
Again, I think your argument about U.S. and European politics differ is interesting, I should look into that.
Replies from: taw↑ comment by taw · 2009-05-14T13:01:07.741Z · LW(p) · GW(p)
Really? nationalism is a purely American thing?
Right now, yeah, pretty much. In Europe the most you can find is politicians of country X talking about protecting "X jobs", but on a "we look after our interests, others look after theirs" basis, not out of the kind of sense of superiority and uniqueness that is so prevalent in American political propaganda.
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2009-05-14T16:35:38.161Z · LW(p) · GW(p)
Nationalism may be less potent in Europe than in the U.S., but there are other countries in the world. And my impression is that, thankfully, nationalism is less potent in the U.S. than in many of them.
Replies from: taw↑ comment by Mike Bishop (MichaelBishop) · 2009-05-14T16:30:48.168Z · LW(p) · GW(p)
I think that both promoting and criticizing neo-liberalism are fairly ideological projects. I wouldn't taboo either of them, but I would like to see politicians, journalists, and voters focus more on discussing the costs and benefits of specific policies, which I think would lead people to be more consequentialist.
Replies from: taw↑ comment by Torben · 2009-05-14T09:35:00.970Z · LW(p) · GW(p)
I agree on everything but the dangers of neo-liberalism. This theme seems to me to be ever-present, even in relatively successful countries like Germany and France: boo neo-liberalism, a bit like boo inequality.
Ideology in the American sense is pretty much relegated to fringe movements.
I live in Denmark, but follow politics in major European countries.
↑ comment by Paul Crowley (ciphergoth) · 2009-05-13T13:25:14.715Z · LW(p) · GW(p)
FWIW, these have mostly been the arguments I've seen for libertarianism; that, and arguments which hinge on the importance of wealth going to the "deserving" over the "undeserving". If anyone can point me to any online writings on the subject which tackle the standard challenges to libertarian capitalism in a way that doesn't hinge on deontological ideas or ideas of deserving, I'd be interested to read them.
Replies from: steven0461, thomblake, MichaelBishop↑ comment by steven0461 · 2009-05-13T19:29:16.528Z · LW(p) · GW(p)
If anyone can point me to any online writings on the subject which tackle the standard challenges to libertarian capitalism in a way that doesn't hinge on deontological ideas or ideas of deserving, I'd be interested to read them.
No strong opinion on whether they're correct, but from what I've seen libertarians argue from consequences rather than deontology most of the time, so I have to wonder where you've been looking. As for pointers, there's a libertarian-leaning econ encyclopedia here.
↑ comment by thomblake · 2009-05-13T19:39:00.246Z · LW(p) · GW(p)
I recommend J.S. Mill's On Liberty - it's not necessarily argued entirely from consequentialist grounds, but that's basically where he's coming from. Online version
Replies from: taw↑ comment by taw · 2009-05-13T21:06:11.339Z · LW(p) · GW(p)
You cannot be honestly consequentialist without seeking the best empirical evidence you can get, and I find it extremely unlikely that much useful evidence about the best organization of government in 2009 was available back in 1869, so I'm going to completely disregard this recommendation.
Replies from: Cyan, thomblake↑ comment by Cyan · 2009-05-14T19:15:45.697Z · LW(p) · GW(p)
I'm not at all sympathetic to the libertarian point of view, but I have to say that this does not sound like your true rejection. I find thomblake's Boyle's Law analogy quite apt: if you are really interested in thermodynamics, you have to start with material at the Boyle's Law level. Likewise, if you are truly interested in understanding libertarian thought, it behooves you to start with a basic text.
Replies from: taw↑ comment by taw · 2009-05-14T19:32:33.078Z · LW(p) · GW(p)
If someone wants to argue for libertarianism versus the status quo on consequentialist and empirical grounds, it stands to reason they should have some idea about the status quo, which a person writing in 1869 couldn't possibly have had without breaking causality.
I'm not saying Mill doesn't make good deontological arguments, as those can be timeless; I'm simply not interested in deontology here.
Replies from: Cyan↑ comment by Cyan · 2009-05-14T19:49:46.041Z · LW(p) · GW(p)
You seem to have missed the part where thomblake claims J. S. Mill more or less originated consequentialism.
Seriously, asking for a reference on LW, getting one, and dismissing it without even flipping through it? Lame.
ETA: My bad -- you did not ask for the reference. I am lame.
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-05-14T22:40:15.734Z · LW(p) · GW(p)
Seriously, asking for a reference on LW, getting one, and dismissing it without even flipping through it? Lame.
Wasn't it ciphergoth who asked, not taw?
↑ comment by thomblake · 2009-05-14T14:24:22.119Z · LW(p) · GW(p)
Your celebration of ignorance angers me. You asked for a recommendation and got one from probably one of the best-qualified here to answer that question.
Really, it's a very short book. And it's one of the basic works on classical liberalism, one of the foundations (along with Locke's Second Treatise on Government) of all current discourse on liberalism.
Mill is arguably the fellow who invented consequentialism (with a hat tip to Bentham and to his own father, James Mill). It's as if someone referred you to Boyle's Law and you insisted someone from the 17th century couldn't possibly have anything useful to say about physics.
EDIT: correction - as noted above, it was not taw who asked for a recommendation in the first place. Mea culpa.
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-05-14T22:32:58.821Z · LW(p) · GW(p)
It's like if someone referred you to Boyle's Law and you insisted someone from the 17th century couldn't possibly have anything useful to say about physics.
By this logic, one could also argue in favor of Newton's theories on alchemy because he essentially invented classical mechanics.
Consequentialism is a kind of formalization of ideas about ethics, which are inherently arbitrary. Theories of political structure deal with empirical matters of actual results. taw asserts that someone writing in the 19th century would have had no empirical data relevant to modern government, an assertion that is, if not obviously correct, at least defensible to the extent that society has changed since then.
↑ comment by Mike Bishop (MichaelBishop) · 2009-05-14T04:03:38.344Z · LW(p) · GW(p)
Most economists are more libertarian than most people, which means something to me.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2009-05-14T07:45:46.924Z · LW(p) · GW(p)
That's enough to interest me but obviously not nearly enough to convince.
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2009-05-14T15:56:49.619Z · LW(p) · GW(p)
Fair enough, I'd like to believe that my libertarian sympathies are based on a lot more than that as well.
I'm sure you've read a lot of Robin Hanson; do you feel he focuses much on deontological justifications for libertarian ideas? I also recommend http://www.marginalrevolution.com for learning to see the world through the eyes of thoughtful libertarian economists. Both of these sources are more libertarian than I am, but I find reading them very worthwhile and often convincing. In important respects, even Paul Krugman is more libertarian than most Americans.
I think we'd probably do well to discuss individual policies, which can be done more precisely than overarching political philosophies.
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-05-14T22:39:21.919Z · LW(p) · GW(p)
I think we'd probably do well to discuss individual policies, which can be done more precisely than overarching political philosophies.
This is probably a good point, as for all the sound and fury of this thread I would be slightly surprised if there were more than a handful of actual, significant policy disagreements between participants.
↑ comment by SoullessAutomaton · 2009-05-13T23:31:24.102Z · LW(p) · GW(p)
First, while 73.4% of responders are consequentialists and only 9% deontologists, at the same time 45% of responders are libertarians. While labels like that are vague, libertarianism is in most versions a highly deontological ideology, and cares about processes rather than results as such.
The most straightforward versions of both libertarianism and utilitarianism take the form of systems that can be logically built up from a base of a few powerful, elegant axiomatic principles. This type of system appeals deeply to mathematicians and engineers, hence both the large intersection and the high representation here.
↑ comment by badger · 2009-05-13T13:55:09.513Z · LW(p) · GW(p)
Echoing RichardKennaway, IMO most of the strong arguments for libertarianism (as a set of policies) are consequentialist ones by economists.
The other issue is how to classify someone if they defend some mix of consequentialism and deontology. For example, Robert Nozick argued for rights as side constraints in an otherwise utilitarian moral theory, and Roderick Long argues for deontology based on consequentialist grounds. I'll raise my hand as someone who could probably truthfully describe myself as either, but settled on consequentialist in part for social reasons.
Replies from: SoullessAutomaton, thomblake↑ comment by SoullessAutomaton · 2009-05-13T21:59:27.353Z · LW(p) · GW(p)
IMO most of the strong arguments for libertarianism (as a set of policies) are consequentialist ones by economists.
Do any of these economists have a consistently successful track record of prediction? Remember, this is a field where opinions of serious economists on the recent stimulus package ranged from "it won't have any effect" to "it will make things worse" to "it doesn't go far enough".
Economists talking about large-scale political structures should be assumed to lack credibility until proven otherwise via actual, consistent predictive results.
EDIT: Requesting clarification on why this comment was voted down to -2. Robin has posted repeatedly on many experts' allergies to predictions. Have I made a mistake in my conclusions here?
Replies from: taw, ciphergoth↑ comment by taw · 2009-05-14T02:58:52.645Z · LW(p) · GW(p)
Less Wrong is not completely there yet, but it's steadily heading toward Reddit's "downvote to disagree". It's a natural consequence of a Reddit-style comment up/down-voting system; don't think about it too much.
Replies from: Z_M_Davis↑ comment by Z_M_Davis · 2009-05-14T04:03:25.451Z · LW(p) · GW(p)
Strongly disagree about "don't think about it too much," but upvoted for pointing out this important problem. Everyone: upvote for useful discourse, not agreement!
Replies from: taw, SoullessAutomaton↑ comment by taw · 2009-05-14T12:37:13.195Z · LW(p) · GW(p)
"Don't think about it too much" as in "don't think about things you cannot affect" - unless you want to go and convince Eliezer to remove downvoting and leave only upvote and report links, as on Hacker News.
That would leave more garbage in the comments, of course; I think that's a smaller problem than "downvote to disagree", but I have no strong evidence about it.
↑ comment by SoullessAutomaton · 2009-05-14T10:38:57.249Z · LW(p) · GW(p)
Unless or until we get separate voting for "agreement" vs. "quality", as people have mentioned a few times.
↑ comment by Paul Crowley (ciphergoth) · 2009-05-14T07:57:02.499Z · LW(p) · GW(p)
Listen to Robin Hanson discuss this phenomenon on EconTalk. It starts with a half-hour monologue by the presenter, but I find the presenter quite interesting too.
↑ comment by thomblake · 2009-05-13T14:37:58.718Z · LW(p) · GW(p)
Consequentialism and deontology don't really 'mix' well. Either the consequences ultimately matter, or the rules ultimately matter. So it's either 'consequentialism' that collapses into deontology, or 'deontology' that collapses into consequentialism, or some inconsistent mix, or a distinct theory altogether.
Replies from: conchis, Torben↑ comment by conchis · 2009-05-13T14:59:44.756Z · LW(p) · GW(p)
Consequentialism and deontology don't really 'mix' well.
What's wrong with maximize [insert consequentialist objective function here] subject to the constraints [insert deontological prohibitions here]?
Replies from: CarlShulman, thomblake↑ comment by CarlShulman · 2009-05-13T15:19:57.883Z · LW(p) · GW(p)
Act A will certainly generate X units of good, and has a Y% chance of violating some constraint (killing someone, say). For what values of X and Y will you perform A? It's very tough for deontology to be dynamically consistent.
Replies from: conchis↑ comment by conchis · 2009-05-13T18:01:13.382Z · LW(p) · GW(p)
This is a problem for deontology in general, not a specific problem that arises when trying to combine it with consequentialism.
Whatever probability Y a deontologist would accept can simply be built into the constraint. If the constraint is satisfied, then you do A iff it maximizes X. Otherwise you don't.
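The hybrid rule being described can be sketched in a few lines of code; this is only an illustration of the structure of the argument, and all the names here (`choose`, `utility`, `constraints`) are made up for the example, not drawn from any real library:

```python
# A minimal sketch of the hybrid rule: filter out actions that violate
# any deontological constraint, then maximize the consequentialist
# objective over whatever remains.

def choose(actions, utility, constraints):
    """Return the highest-utility permissible action, or None if every
    action is prohibited (the hybrid rule is then silent)."""
    permissible = [a for a in actions if all(c(a) for c in constraints)]
    if not permissible:
        return None
    return max(permissible, key=utility)

# Example: lying scores highest on raw utility, but a "no lying"
# constraint removes it from consideration before maximization.
options = {"lie": 10, "stay silent": 3, "tell truth": 7}
best = choose(options, options.get, [lambda a: a != "lie"])  # "tell truth"
```

On this picture the constraints act as a filter, not as a term in the objective, which is why the two components don't "mix" so much as compose; the probabilistic-violation worry above would correspond to each predicate being a threshold test on the chance of violating the underlying rule.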
↑ comment by thomblake · 2009-05-13T15:13:03.171Z · LW(p) · GW(p)
Then there are further questions:
why maximize that? , and
why use those constraints?
Note that both of these are ethical questions. The way you answer one might have implications for the answer to the other.
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2009-05-13T19:33:04.829Z · LW(p) · GW(p)
Can't both of these questions be asked of pure consequentialists?
Replies from: thomblake↑ comment by thomblake · 2009-05-13T20:30:23.495Z · LW(p) · GW(p)
Sure, but the point is that one concern will probably collapse into the other. For a pure consequentialist, question 2 is either irrelevant or answered by question 1, and for question 1 you will end up in a bit of a circle where "because it maximizes overall net utility" is the only possible answer, with maybe an "obviously" down the line.
Replies from: conchis↑ comment by conchis · 2009-05-14T23:36:06.861Z · LW(p) · GW(p)
For a pure consequentialist...
Well, yes. But we're not talking about pure consequentialists. It's obvious that hybrid deontology-consequentialism is inconsistent with pure consequentialism; it's also beside the point.
Deontological constraints are seldom sufficient to determine right action. When they're not it seems perfectly natural to try to fill the neither-prohibited-nor-obligatory middle ground with something that looks pretty much like consequentialism.
↑ comment by kodos96 · 2010-05-21T22:26:19.454Z · LW(p) · GW(p)
There are both deontological and consequentialist arguments for libertarianism, and I think they're both equally convincing (to their respective audiences). My perception is that libertarians who used to be liberals tend to favor the consequentialist arguments, while libertarians who used to be conservatives tend towards the deontological.
↑ comment by Douglas_Knight · 2009-05-13T17:34:49.475Z · LW(p) · GW(p)
Certainly most libertarians care about processes, or at most about results very similar to the processes, but this is a biased sample.
Most ideologies are about process and uninterested in evidence about consequences, but that doesn't mean that people who use the term "libertarian" are ideologues. One cost of using the term is appearing to be an ideologue; for this reason, I refuse to relinquish the term "liberal" to the modern liberals. But I think that taw is poisoning the discourse, making it worse than it already is. It's a pretty common tactic to paint anyone outside the mainstream as an ideologue.
Replies from: Yvain, SoullessAutomaton, thomblake↑ comment by Scott Alexander (Yvain) · 2009-05-13T23:00:58.550Z · LW(p) · GW(p)
Oh, hey, we have data!
According to crosstabs, of our fifteen deontologists, four were libertarian, four were liberal, four were socialist, two were conservative, and one didn't list political views. That means deontologists were slightly less likely to be libertarian than the average person.
(deontologists were much more likely to be conservative than the average person, but I can't draw too many conclusions from that because there was such a small sample size of deontologists and conservatives.)
I admit I didn't expect that result. I think it's because the really, really loud, obnoxious libertarians like Objectivists are all deontologists, but I don't think this site has a lot of those. I would be curious what would happen if we polled the reader base of lewrockwell.com.
[EDIT: Also, an overwhelming majority of those who said they didn't believe in morality were libertarians. Wonder what that means.].
Replies from: Cosmos↑ comment by SoullessAutomaton · 2009-05-13T22:21:49.883Z · LW(p) · GW(p)
But I think that taw is poisoning the discourse, making it worse than it already is. It's a pretty common tactic to paint anyone outside the mainstream as an ideologue.
In what way is he "poisoning" the discourse? He didn't even use the term ideologue, and he explained in a later post why he thinks libertarianism is essentially deontological in nature. Accusing him of "making the discourse worse" only serves to itself worsen the discussion.
Quite frankly, in my experience with people arguing for libertarianism, it tends to be precisely what he describes--a lot of bottom-line faux-consequentialist arguments about why free market principles necessarily produce better results, combined with question-begging arguments that assume individual economic freedom as the value to be maximized.
As a concrete example, by almost any metric, European-style socialized healthcare systems work empirically, objectively better. Given the high cost of trying untested systems and the general lack of predictive power demonstrated by current macroeconomics, I can't conceive of any coherent, consequentialist argument against the immediate utility of adopting such a system in the USA, yet most libertarians will argue until blue in the face that socialized healthcare is a terrible idea, in apparent defiance of reality.
EDIT: This comment was pretty promptly voted down to -2 for reasons not apparent to me. Any reasons other than disagreement?
Replies from: steven0461, newerspeak, Douglas_Knight↑ comment by steven0461 · 2009-05-14T00:02:32.085Z · LW(p) · GW(p)
Deontological principles often help maximize utility indirectly, as I'm sure most utilitarians agree in contexts like war and criminal justice. Still, I agree deontology can bias people in the direction of libertarian politics. On the other hand, folk economics can bias people away from libertarian politics.
Since utilitarianism values the sum of all future generations far more than it values the current generation, it seems like (if we ignore that existential risks are even more important) utilitarianism recommends whatever policies grow the economy the fastest in the long run. That might be an argument for libertarianism but it might also be an argument for governments spending lots of money subsidizing research and development.
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-05-14T00:39:54.245Z · LW(p) · GW(p)
Deontological principles often help maximize utility indirectly, as I'm sure most utilitarians agree in contexts like war and criminal justice. Still, I agree deontology can bias people in the direction of libertarian politics.
It seems more the other way to me--die-hard libertarians tend toward deontological positions, typically by gradual reification of consequentialist instrumental values into deontological terminal values ("free markets usually produce the best results" becomes "free markets are Good", &c.).
On the other hand, folk economics can bias people away from libertarian politics.
This is true, of course, and it's worth noting that I agree with a substantial majority of libertarian positions, which is part of why I find some aspects of libertarianism so irritating--it helps marginalize a political outlook that could be doing some good.
Since utilitarianism values the sum of all future generations far more than it values the current generation, it seems like (if we ignore that existential risks are even more important) utilitarianism recommends whatever policies grow the economy the fastest in the long run. That might be an argument for libertarianism but it might also be an argument for governments spending lots of money subsidizing research and development.
I'd think more likely it'd be an argument for both--subsidized research combined with lowered barriers to entry for innovative businesses--tile the country with alternating universities and silicon valley-type startup hotbeds, essentially (see also: Paul Graham's wet dream).
Anyway, I don't think it's the case that all forms of utilitarianism assign value to future generations that may or may not ever exist. Assigning value to potential entities seems fraught with peril.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2009-05-14T00:55:55.768Z · LW(p) · GW(p)
Assigning value to potential entities seems fraught with peril.
Such as?
Replies from: mattnewport, SoullessAutomaton↑ comment by mattnewport · 2009-05-14T01:00:19.987Z · LW(p) · GW(p)
It would seem to support the biblical condemnation of onanism.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2009-05-19T20:04:50.901Z · LW(p) · GW(p)
"Potential entities" here doesn't mean "currently existing non-morally-significant entities that might give rise to morally significant entities", just "entities that don't exist yet". A much clearer phrasing would be something like "Does my utility function aggregate over all entities existing in spacetime, or only those existing now?" IMO, the latter is obviously wrong, either being dynamically inconsistent if "now" is defined indexically, or, if "now" is some specific time, implying that we should bind ourselves not to care about people born after that time even once they do exist.
↑ comment by SoullessAutomaton · 2009-05-14T01:05:29.453Z · LW(p) · GW(p)
Combinatorial explosion, for starters. There's a very large set of potential entities that may or may not exist, and most won't. Assigning value to these entities seems likely to lead to absurdity. If nothing else, it seems to quickly lead to some manner of obligation to see as many entities created as possible.
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2009-05-14T03:35:28.823Z · LW(p) · GW(p)
But not assigning value to potential entities implies that we should make a lot of changes. Ignoring global warming for one. Perhaps enslaving future generations?
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-05-14T10:37:24.660Z · LW(p) · GW(p)
I think it's arguable that global warming could impact plenty of people already alive today, and I'm not sure what you mean by enslaving future generations.
But yes, assigning no value at all to potential entities may also be problematic, but I'm not sure what a reasonable balance is.
↑ comment by newerspeak · 2009-05-14T07:37:11.187Z · LW(p) · GW(p)
In what way is he "poisoning" the discourse?
Taken together, bullet points 2, 3, and 4 are a textbook strawman.
Quite frankly, in my experience with people arguing for libertarianism, it tends to be precisely what he describes-- a lot of bottom-line faux-consequentialist arguments...
To me, this speaks more to the extent of your motivation to find merit-worthy libertarian writing than to the merit of libertarian ideas. It so happens that an entire school of libertarian thought ("policy libertarianism") is dedicated to studying the specific consequences of government action. One interesting claim: "State actors are (made up of) people who are subject to the same irrational biases and collective stupidity as market actors, and often have perverse incentive structures as well."
If you're interested in reading some reasonable libertarians, you might try The Cato Institute, Reason Magazine, or EconLog as starting points.
As a concrete example, by almost any metric European-style socialized health care systems work empirically, objectively better... I can't conceive of any coherent, consequentialist argument against the immediate utility of adopting such a system in the USA.
Really? Respectfully, it seems much more plausible, based on the tone of your post, that you're couching an appeal for your own preferred policy in hypothetical terms than that you're actually suffering from a failure of imagination.
Replies from: conchis, Douglas_Knight, SoullessAutomaton↑ comment by conchis · 2009-05-19T20:31:59.333Z · LW(p) · GW(p)
If you're interested in reading some reasonable libertarians, you might try The Cato Institute, Reason Magazine, or EconLog as starting points.
FWIW, I generally find Will Wilkinson and Tyler Cowen more reasonable than those listed above. (Yes, I realize Will works for the Cato Institute; I find him more reasonable than his employers.) YMMV.
Replies from: newerspeak↑ comment by newerspeak · 2009-05-20T08:21:46.145Z · LW(p) · GW(p)
Will Wilkinson and Tyler Cowen are more reasonable...
Along with a few others, I mentioned them both by name in an earlier version of that post. I didn't want to get bogged down presenting all the relationships needed to establish that all these people were in fact libertarians:
Arnold Kling, who writes at EconLog, has done a fair amount of thinking about the unique worldview in the Economics department at George Mason University (see here and here). I claim the position he lays out is essentially the same as mine above, with the explicit partisan identification removed. Kling is an adjunct professor of economics at GMU, along with Robin Hanson, Alex Tabarrok, and Tyler Cowen (whose blog frequently cites his). Kling's co-blogger Bryan Caplan has written a popular book about public choice theory, which presents a thorough critique of government intervention and is supported by a lot of important research and some cool math. Caplan and Kling are both adjunct scholars at the Cato Institute, which also sponsors Will Wilkinson, whose wife is an editor at Reason.
Instead of beating that glob of stuff into something readable, I got lazy and went for the low-hanging fruit instead, specifically the over-the-top claim that there's no such thing as a coherent, consequentialist, libertarian argument against (e.g.) European-style socialized health care.
↑ comment by Douglas_Knight · 2009-05-14T16:36:12.459Z · LW(p) · GW(p)
Taken together, bullet points 2, 3, and 4 are a textbook strawman.
That's certainly not what I meant by "poisoning the discourse," or I would have made my comment on it. It isn't a strawman (in the sense of purely made up). That is how most libertarians argue. I liked that post much better, but it still doesn't say why these actions by the majority of libertarians matter. Maybe they've poisoned the word already. Saying "these guys are nuts, avoid their brand name" is just pointing out a bad situation, not making it worse. There are other reasons it might matter: a consequentialist libertarian should ask himself how he reached that state, if it was from fakely consequentialist libertarian arguments.
It reminds me of Robin Hanson's advice to pull the rope sideways; while that seems like good advice on how to choose policies to focus on, his advice not to choose sides seems exactly backwards. Instead, choose a party, prove your loyalty, and pull that party sideways.
I am not afraid of fakely consequentialist libertarians, because I think I can tell the difference. Except that I am afraid of Cato, which argues from the conclusions and might be clueful enough to invest in rhetoric. Why would you ever look to lobbyists?
Replies from: newerspeak↑ comment by newerspeak · 2009-05-14T19:18:19.933Z · LW(p) · GW(p)
It isn't a strawman.
Let's not argue semantics. I had intended to express the following simile:
(3-bullet-points : rigorous libertarian thinking) :: (straw-facsimile-of-human : actual-human)
That is how most libertarians argue. I liked that post much better, but it still doesn't say why these actions by the majority of libertarians matter.
I'm afraid I'm having trouble understanding what you mean here. Can you clarify? I recognize it may not speak to the question you're actually asking, but my immediate reaction to this is: "Arguments employed by most libertarians are completely irrelevant. It's the arguments employed by the strongest and most sophisticated libertarians that demand our attention."
I am not afraid of fakely consequentialist libertarians, because I think I can tell the difference. Except that I am afraid of Cato, which argues from the conclusions and might be clueful enough to invest in rhetoric. Why would you ever look to lobbyists?
I'm confused here, too. You mention fakely consequentialist libertarians and seem dismissive of them. You mention the Cato Institute, and suggest they are arguing in bad faith and therefore very likely to be wrong. Your reference to "tell[ing] the difference" suggests you might entertain the idea of a consequentialist libertarian who argues in good faith. Is it possible that an earnest consequentialist libertarian could be right? If so, about what?
↑ comment by SoullessAutomaton · 2009-05-14T10:29:01.959Z · LW(p) · GW(p)
Taken together, bullet points 2, 3, and 4 are a textbook strawman.
Uhm, point 2 at least is a straight up fact, as real markets typically diverge to varying degrees from perfect markets. I also note you don't actually dispute point 1, which is the strong statement of a deontological ethical position, and pure deontology remains incompatible with consequentialism, hence the apparent contradiction in ethical systems.
To me, this speaks more to the extent of your motivation to find merit-worthy libertarian writing than to the merit of libertarian ideas.
If I've been unimpressed with the arguments of rank-and-file members of a political position, why would I be motivated to look for better writing that may or may not exist? Do you go looking for merit-worthy religious apologetics?
That said, do you know of any libertarian arguments that do not either 1) assume economic freedom as the primary terminal value or 2) assume the efficiency of real-world markets? Both are unwarranted assumptions that seem to underlie many libertarian arguments I've seen.
Really? Respectfully, it seems much more plausible, based on the tone of your post, that you're couching an appeal for your own preferred policy in hypothetical terms than that you're actually suffering from a failure of imagination.
Well, yes, I prefer policies that are empirically demonstrated to actually work, especially when the cost of trying a system that fails is very high. Why don't you?
Replies from: newerspeak↑ comment by newerspeak · 2009-05-14T11:15:35.143Z · LW(p) · GW(p)
Do you go looking for merit-worthy religious apologetics?
Yes. Diagnosing the faults in Alvin Plantinga's reasoning is important. Am I to understand you'd prefer a frank exchange of views with Jerry Falwell?
That said, do you know of any libertarian arguments that do not assume either 1) economic freedom as the primary terminal value or 2) assume the efficiency of real-world markets? Both are unwarranted assumptions that seem to underlie many libertarian arguments I've seen.
Yes. I included one such argument in the post you just replied to. I quote myself:
One interesting claim [of policy libertarianism]: "State actors are (made up of) people who are subject to the same irrational biases and collective stupidity as market actors, and often have perverse incentive structures as well."
In other words, government decision-makers (i.e. bureaucrats) have just as much trouble integrating new information, violating social norms, and admitting error as consumers or decision-makers for firms, but bureaucrats are also subject to perverse incentives, regulatory capture, etc.
The implied primary terminal value here is welfare-maximization, according to some material standard that I'm assuming we could agree on, given that we're both here. No specific claim about the efficiency of markets is made. A fortiori, the argument derives some of its strength from the acknowledgment of certain deviations from rational behavior that (once again) we both presumably know about, because we're both here.
Replies from: Yvain, SoullessAutomaton↑ comment by Scott Alexander (Yvain) · 2009-05-14T12:28:24.381Z · LW(p) · GW(p)
One interesting claim [of policy libertarianism]: "State actors are (made up of) people who are subject to the same irrational biases and collective stupidity as market actors, and often have perverse incentive structures as well."
My main complaint with this argument is that it should be empirically testable. You can implement regulatory scheme X in Area A, and no regulatory scheme in Area B, and see which produces better results. For example, ban all cancer treatments that top doctors agree are useless and dangerous in Area A, keep all treatments legal in Area B, and see which area has higher mortality among cancer patients.
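The Area A / Area B comparison described above is, at bottom, a two-sample hypothesis test on mortality rates. A minimal sketch of how such a comparison might be evaluated, using a standard two-proportion z-test — all the numbers below are invented purely for illustration, not real data:

```python
import math

def two_proportion_ztest(deaths_a, n_a, deaths_b, n_b):
    """Two-proportion z-test for a difference in mortality rates.

    Returns the z statistic; |z| > 1.96 suggests a difference at the
    5% significance level (two-sided).
    """
    p_a = deaths_a / n_a
    p_b = deaths_b / n_b
    # Pooled rate under the null hypothesis of no difference.
    p_pool = (deaths_a + deaths_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Made-up illustrative counts: cancer deaths out of patients tracked
# in each area over the study period.
z = two_proportion_ztest(deaths_a=180, n_a=1000, deaths_b=220, n_b=1000)
print(round(z, 2))  # → -2.24, i.e. Area A's lower rate clears the 5% bar
```

With these invented counts the test would favor the regulated area; real studies would of course also need randomized or matched area assignment to rule out confounders.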
Many libertarians I know have absolutely no interest in doing this, and don't even like talking about the term "regulatory scheme X" because they prefer to lump all possible regulatory schemes together and judge them on the merit of the first one that comes to mind (this is also a problem with many socialists, for the opposite reason).
I don't know much about economics, but I do know a bit about public health policy, and the people in charge of that are sometimes very good about using studies to determine whether their government interventions are an overall improvement over the no-intervention case (obvious exception: the FDA, which is very good at running studies, but very bad at running the right studies and doing sane cost-benefit analysis). When these studies show positive results at relatively low cost, a truly consequentialist libertarian ought to admit government regulation has been effective in that case. Instead, they tend to dismiss it as a fluke or start talking about some case where government regulation isn't effective.
I think the great error in this whole debate is framing it as a conflict between socialists (who supposedly ought to think all government interventions are great) and libertarians (who supposedly ought to think all government interventions are terrible). In reality, some of these will work and some of these won't. I'd rather people started paying more attention to which were which than become crusaders for bigger or smaller government. I think "Government regulation is bad" (or "is good") is approximately the same kind of sentence as "Islam is a religion of peace".
Replies from: newerspeak, steven0461↑ comment by newerspeak · 2009-05-14T17:33:17.820Z · LW(p) · GW(p)
I'm reluctant to jump into a long discussion of the specifics of libertarian public policy -- mind killer and all that -- but in light of the terrible account of itself libertarianism has given you and SoullessAutomaton, maybe a few nonspecific comments are in order.
There's such a thing as libertarian public policy research. It happens in think tanks. It gets done by academics (mostly economists), it incorporates peer review, and it usually doesn't hold with the kind of boorish behavior you're describing. Many of its hypotheticals are imports from the most inconvenient possible world. Specifically, it acknowledges that market failures exist and that government intervention is sometimes the most effective way to deal with them; that regulation has legitimate uses in service of the public good; and above all that pragmatism and compromise are the only virtues that can survive in the political arena.
Like most public policy it is essentially utilitarian, and its specific claims center around the idea that society is too complex for any central authority to administer efficiently. That's to say, while there are many good ends the government might achieve through intervention in the economy or the private lives of its citizens, the costs of such intervention -- money spent, conventions altered, expectations shifted, power grabbed, responsibility abdicated, and goals co-opted by the political process -- are rarely less than the benefits.
You may take issue with any of these claims, but hopefully you can agree that the framework I'm developing here supports more sophisticated answers to the question "As a society, what should we do?" than just chanting "Private GOOD! Public BAD!"
In the specific case of your test of Regulatory Scheme X, the thoughtful libertarian position might go something like this:
Accountability is great. Empirical validation is great. But in this case, your test is a non-starter. No one is going to want this to happen. Drug companies will resist the removal of their products from the marketplace. Doctors will see the legislative call to dispense with certain treatments as a threat to their professional autonomy. Crossover between the AMA and FDA will favor an equilibrium where most experts already support the status quo. Insurance companies will use the opportunity to demand changes elsewhere in their payment structure. Any one of these groups can scuttle the whole project and throw your whole party out of power in the next election by letting it slip to the AARP that you're planning on taking away something previously covered by Medicare. And even if you manage a legislative or executive miracle, anyone in Area A who wants the banned treatments can just migrate to Area B and get them there.
None of this should be interpreted to rule out the possibility that the test itself could yield invaluable information, saving lives or huge amounts of taxpayer money. But no regulator or legislator has any direct incentive to risk his career and all his political capital for nobody in particular. When talking about cost-benefit analysis, it's important to remember that government officials implicitly measure costs and benefits to themselves, and that many of the responsibilities government arrogates to itself go unmet as a result.
Replies from: SoullessAutomaton, ciphergoth↑ comment by SoullessAutomaton · 2009-05-14T22:25:58.229Z · LW(p) · GW(p)
I appreciate your presentation of the ideas here; it's more enlightening than most material I've seen. That said, I still take issue with some points:
It happens in think tanks. It gets done by academics (mostly economists), it incorporates peer review, and it usually doesn't hold with the kind of boorish behavior you're describing.
Peer review by academics is only meaningful if based on a foundation of empirical observation and testable predictions; I think it remains to be demonstrated that macroeconomics has any predictive power whatsoever. Otherwise you end up with something like literary criticism--sophisticated, elaborate arguments unrelated to reality in nearly every way.
and its specific claims center around the idea that society is too complex for any central authority to administer efficiently
This does not preclude the possibility that certain subsections of society may benefit from central administration. This is easily demonstrated by the existence of large, niche-dominating corporations, which tend to be every bit as inefficient and bureaucratic as government. Some problems are such that the benefits of centralization outweigh the costs of bureaucracy.
hopefully you can agree that the framework I'm developing here supports more sophisticated answers to the question "As a society, what should we do?" than just chanting "Private GOOD! Public BAD!"
Agreed completely, but sophisticated answers are still useless without empirical foundations.
But in this case, your test is a non-starter. No one is going to want this to happen.
...which, ironically, leads to the other great flaw of libertarian politics--it proposes that government voluntarily reduce its own power dramatically and promotes increased personal responsibility. This is not a popular idea. People like having an authority, and people in government like being authorities. Large-scale libertarian government strikes me as, unfortunately, every bit as unlikely as Yvain's government of controlled experimentation.
Replies from: mattnewport↑ comment by mattnewport · 2009-05-14T22:50:25.085Z · LW(p) · GW(p)
I think it remains to be demonstrated that macroeconomics has any predictive power whatsoever.
While I am pretty skeptical of the predictive power of a lot of macroeconomics as well, it seems odd to demand empirical research but simultaneously deny that the field in question is amenable to empirical research. A lot of the economics research that is used as support for libertarian positions is based on comparative studies between countries or between jurisdictions within countries. One common thread of research is to attempt to rate countries according to some defined measure of economic freedom and then see if the rating correlates with positive outcomes (GDP per capita being a common choice). There are all sorts of ways that research can be criticized, but to completely rule out such research as admissible evidence would seem to render questions about how to organize society completely beyond the realms of scientific investigation. If studies of this kind are not a valid basis for making decisions then how do you propose ideal enlightened governments should determine policy?
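The cross-country approach described here boils down to computing a correlation between a freedom rating and an outcome measure. A minimal sketch — the variable names and all data points below are invented for illustration, not taken from any published index:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented numbers purely for illustration -- real studies would use a
# published economic-freedom score per country and actual GDP figures.
freedom_score = [5.0, 6.0, 7.0, 8.0, 9.0]
gdp_per_capita = [10_000, 18_000, 25_000, 30_000, 44_000]
print(round(pearson_r(freedom_score, gdp_per_capita), 3))  # → 0.985
```

A high r on real data would still leave open the usual objections — reverse causation, confounding with institutions or geography — which is exactly the "weakly controlled" problem raised later in this thread.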
Large-scale libertarian government strikes me as, unfortunately, every bit as unlikely as Yvain's government of controlled experimentation.
Hence the existence of things like the free state project and seasteading. Libertarians are quite aware of the difficulties of achieving their vision of society through conventional democratic politics. In fact, there's a recently established blog on that very topic run in part by a less wrong commenter (who is also behind the seasteading idea).
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-05-14T23:36:46.699Z · LW(p) · GW(p)
While I am pretty skeptical of the predictive power of a lot of macroeconomics as well, it seems odd to demand empirical research but simultaneously deny that the field in question is amenable to empirical research.
I'm not saying it's not amenable to empirical research, I just don't get the impression that any extant research has been fruitful. As I said earlier, I saw serious economists discussing the USA stimulus bill predicting everything from "harmful" to "no effect" to "doesn't do enough", and as Robin has observed the chances that any of these predictions will be remembered are close to nil. This is strong evidence that the field as a whole lacks rigor, and that anyone who does know their stuff is being drowned out by the rest.
If studies of this kind are not a valid basis for making decisions then how do you propose ideal enlightened governments should determine policy?
Relying first and foremost on things that are already demonstrated to have worked well is a good start--hence my argument for adopting European-style socialized health care. Also, drop things that have been demonstrated ineffective, like the "war on drugs".
Beyond that, take action only when necessary, and test new ideas in small areas first when possible. Modern governments are too large and powerful to be making large, expensive mistakes.
Hence the existence of things like the free state project and seasteading.
I've read about the seasteading project before, actually, and I think it's generally a wonderful idea.
Replies from: mattnewport↑ comment by mattnewport · 2009-05-15T00:03:44.608Z · LW(p) · GW(p)
This is strong evidence that the field as a whole lacks rigor, and that anyone who does know their stuff is being drowned out by the rest.
I'm not sure what kind of economics you're thinking of when you say macroeconomics. I have very little confidence in the kind of macroeconomics that tries to relate things like interest rates and savings to money supply using simple formulas, or that tries to give accurate values to 'multipliers' for stimulus spending, or that constructs mathematical models of the economy and uses them to make predictions about future growth from a few aggregated inputs by curve fitting to historical data. I'd agree that most of that is junk.
The kind of macroeconomics I think has some value is that which attempts to gather empirical support for particular policies by comparing outcomes across different countries or jurisdictions, or across different time periods. This kind of research is obviously far from ideal since it is weakly controlled and is often making hard to justify comparisons between different measures. Maybe macroeconomics is not the right term for that kind of research but I'm not sure what else to call it. In terms of gathering empirical evidence for the results of particular policy decisions, that seems about the best we can do at the moment.
Relying first and foremost on things that are already demonstrated to have worked well is a good start--hence my argument for adopting European-style socialized health care.
You do realize how far from an uncontroversial claim that is? I grew up with the NHS in the UK (and I now live in Canada which is also a universal health care system). They are far from perfect systems. Every government for as long as I can remember in the UK has come into power partly on a promise to 'fix' the NHS. None have succeeded. I don't think I've heard anyone argue that the healthcare system in the US is fine as it is - there is fairly universal agreement that it is broken. I'm far from persuaded that the best solution is to try and adopt a European model though. There's plenty of research from libertarian think tanks that provides empirical evidence in favour of health care reforms that reduce government involvement in healthcare rather than increase it. I'm sure there are grounds for questioning some of that research but it is disingenuous to pretend it doesn't exist.
Beyond that, take action only when necessary, and test new ideas in small areas first when possible. Modern governments are too large and powerful to be making large, expensive mistakes.
And healthcare is a good candidate for the largest and most expensive of them all. Why is healthcare exempt from the principle of testing new ideas on a small scale first?
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-05-15T13:18:26.983Z · LW(p) · GW(p)
I'm not sure what kind of economics you're thinking of when you say macroeconomics.
Primarily the first kind you describe.
In terms of gathering empirical evidence for the results of particular policy decisions, that seems about the best we can do at the moment.
It is, but it is unfortunately limited to examining the results of policies already implemented; anyone justifying novel solutions based on such evidence is likely veering more into the territory of what we agree is junk, and probably making very dubious assumptions about ability to extrapolate trends.
You do realize how far from an uncontroversial claim that is? I grew up with the NHS in the UK (and I now live in Canada which is also a universal health care system). They are far from perfect systems.
Of course, but for all their imperfections they are widely recognized as better than what the USA currently does and win out on almost every objective metric. I also seem to recall that the UK and Canada are often considered some of the worst other than the USA, though at least they spend less than half as much as the US does.
Socialized health care, like democracy, can generally be categorized under "the worst system, except for all the others that have been tried".
There's plenty of research from libertarian think tanks that provides empirical evidence in favour of health care reforms that reduce government involvement in healthcare rather than increase it.
On basis of what observations? To my knowledge all other developed nations employ some form of socialized health care.
As it stands now, the USA is the biggest outlier and getting the worst results; an obvious case for applying a little majoritarianism.
And healthcare is a good candidate for the largest and most expensive of them all. Why is healthcare exempt from the principle of testing new ideas on a small scale first?
Because it's not a new idea. Everyone else has helpfully tried it out for us already and found that it basically works.
On the other hand, aggressively de-regulating and getting government out of health care is, as far as I know, completely untried and untested.
Replies from: mattnewport↑ comment by mattnewport · 2009-05-15T19:18:37.296Z · LW(p) · GW(p)
Of course, but for all their imperfections they are widely recognized as better than what the USA currently does and win out on almost every objective metric.
The superiority of the Canadian and British models is not uncontroversial. This policy analysis rebuts many of the arguments for example. It includes numerous objective metrics on which the US does better than Canada or the UK. Debate the claims if you want but don't pretend the issue is settled.
On basis of what observations? To my knowledge all other developed nations employ some form of socialized health care.
There is a lot of variation between nations. Many nations that have some form of 'socialized' health care also have significant amounts of private health care. Many countries have introduced market based reforms within their socialized health care systems in an effort to improve efficiency. Health tourism is increasingly common in Europe.
Because it's not a new idea. Everyone else has helpfully tried it out for us already and found that it basically works.
'It' is not a single idea or system. Ultimately the biggest problem with healthcare reform in the US is that there is very little chance that it will adopt the best practices found in other nations. There are too many powerful special interests for the political process to select policies based on effectiveness. The more government involvement there is in healthcare, the more healthcare will be subject to problems of regulatory capture, special interest lobbying, rent seeking and bureaucracy. De-regulation and reduced government involvement creates incentives for serving patient interests as a primary route to success. Increased regulation and government involvement means that it becomes more and more profitable to focus effort on lobbying, gaming the system and political maneuvering rather than on patient care.
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-05-15T19:48:50.615Z · LW(p) · GW(p)
Debate the claims if you want but don't pretend the issue is settled.
I don't have the time to respond properly to the linked PDF, but skimming it quickly it doesn't seem particularly persuasive. Obviously, the argument isn't "settled" because people still argue about it, but that doesn't mean both sides have arguments of equal strength.
At any rate, I'm really going to have to drop the discussion at this point because I don't have the time to go digging up supporting references. If you seriously think that the US system is of comparable quality to European systems our difference of perspective is far too vast to bridge by simple off-the-cuff arguments here.
Thanks for your time, though, this has been enjoyable.
Replies from: mattnewport↑ comment by mattnewport · 2009-05-15T20:28:31.731Z · LW(p) · GW(p)
they are widely recognized as better than what the USA currently does and win out on almost every objective metric
The linked document contains a number of objective metrics on which the US does better - waiting times, use of high tech surgical procedures, access to high tech diagnostic equipment, breast and prostate cancer mortality ratios, specialist to patient ratios and patient satisfaction measures. I linked it as evidence to rebut the specific claim that the US is worse on 'almost every objective metric'.
I don't have the time to respond properly to the linked PDF, but skimming it quickly it doesn't seem particularly persuasive.
I'm not asking you to make a detailed rebuttal, I'm just providing evidence of objective measures on which the US does better. I don't have time for a detailed debate either. You seem to assign an extremely high probability to the belief that healthcare in the US could be significantly improved by adopting a European system, though, and I'm questioning whether the evidence justifies such high confidence.
Ultimately the only reason this is even a political issue is because of the high levels of government involvement. With less government involvement people could spend their healthcare dollars in the way they thought best without having to persuade anyone else. That's another reason I oppose high levels of government involvement.
↑ comment by Paul Crowley (ciphergoth) · 2009-05-14T21:14:52.198Z · LW(p) · GW(p)
This is by far the most useful discussion of the subject I have ever had. I'm starting to think this rationality stuff might actually work out.
↑ comment by steven0461 · 2009-05-14T13:23:25.134Z · LW(p) · GW(p)
Now the question is, does advocating one regulatory scheme make other regulatory schemes more likely, through some habit-forming mechanism or other? If so, then your version of a libertarian (who thinks the average scheme is bad) should sometimes oppose even good schemes, and your version of a socialist (who thinks the average scheme is good) should sometimes support even bad schemes.
In building bridges between left and right, it's always a good idea to offer analogies between money and sex, so consider this: utilitarians generally oppose governments telling people whom to mate with and whom not to mate with, even in cases where these people will predictably make decisions that make them and others unhappy, because utilitarians think the good this would do is outweighed by the value of having a bright-line taboo against the government meddling in that sphere. It's less obvious to me than it is to you that the economy as a whole, or some particular circumscribed aspect of it, isn't also such a sphere, at least in part.
Since there's no good that could possibly come from us talking about this other than low-quality thinking and writing practice, I'm putting this sentence here to make myself look like an idiot in case I fail to resist the temptation to post about politics again anytime soon.
↑ comment by SoullessAutomaton · 2009-05-14T22:14:27.146Z · LW(p) · GW(p)
Yes. Diagnosing the faults in Alvin Plantinga's reasoning is important. Am I to understand you'd prefer a frank exchange of views with Jerry Falwell?
I see little value to discussion with either. Given the fundamental problems with theism (primarily a lack of empiricism), I can reasonably expect that "better" theist arguments will only be more elaborate and rigorous argumentation from the same broken axioms. Unless I were preparing for a formal, public debate with a theist it wouldn't be worth my time.
Yes. I included one such argument in the post you just replied to. I quote myself:
The phrase "policy libertarianism" gets only a couple thousand google hits, many of which are false positives. Eliminating the most common false positive (the phrase "foreign policy, libertarianism"), your own LW comment comes up on the first page of results. The remaining results seem to mostly concern matters of evolutionary vs. revolutionary change as a means of implementing libertarian goals, not arguments for goals, and bear no obvious relation to what you've mentioned. If you're referring to a major school of libertarian thought, I'm assuming there's another, more popular term for it, but I don't know how to figure out what it would be, sorry.
In other words, government decision-makers (i.e. bureaucrats) have just as much trouble integrating new information, violating social norms, and admitting error as consumers or decision-makers for firms, but bureaucrats are also subject to perverse incentives, regulatory capture, etc.
This point is not under dispute, but it also does not suffice to prove that governmental action is therefore less effective, especially given imperfect markets, other problematic incentives for smaller agents (e.g., problems of collective action), and empirical evidence showing that governmental programs can sometimes lead to better results than non-governmental programs (e.g., European vs. USA health care).
The implied primary terminal value here is welfare-maximization, according to some material standard that I'm assuming we could agree on, given that we're both here. No specific claim about the efficiency of markets is made. A fortiori, the argument derives some of its strength from the acknowledgment of certain deviations from rational behavior that (once again) we both presumably know about, because we're both here.
Some form of welfare-maximization, yes. Various quality-of-life metrics are a (rough) approximation. I'm not sure what else you're getting at here.
I should say again, it is likely that we agree on the vast majority of actual conclusions. My complaint with libertarianism as a political philosophy is that (like most other political philosophies) it has no apparent, consistent empirical basis and resorts frequently to bottom-line arguments, even though many of their conclusions can be justified rigorously.
I am convinced of this in large part because most libertarians I have encountered have been completely impervious to real-world examples of government programs being more efficient and effective than equivalent non-governmental systems, leading me to conclude that the mainstream of libertarian thought is essentially anti-empirical.
Replies from: mattnewport↑ comment by mattnewport · 2009-05-14T22:54:34.967Z · LW(p) · GW(p)
The phrase "policy libertarianism" gets only a couple thousand google hits, many of which are false positives.
It's a fairly recently coined term. The first use I'm aware of is here. The distinction between policy and structural libertarianism has been picked up quite quickly as many have found it useful.
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-05-14T23:02:38.751Z · LW(p) · GW(p)
In that case I remain confused in that it seems to mostly refer to a framework for how to achieve libertarian goals, not for justifying that said goals will successfully confer the advertised benefits (e.g., some type of welfare maximization or general utility).
Replies from: mattnewport↑ comment by mattnewport · 2009-05-14T23:15:48.824Z · LW(p) · GW(p)
The post is on a libertarian blog and as such is aimed at an audience who already accept the case for libertarian goals being desirable. It's making a distinction between libertarians who believe that the best way to achieve their goals is to work within existing democratic systems to promote libertarian policies and those who believe that existing democratic systems are fundamentally inhospitable to libertarian policies and that achieving libertarianism requires addressing structural factors that tend to produce unlibertarian societies.
The 'policy libertarians' tend to be the ones focusing on demonstrating empirical support for improved outcomes under libertarian policies. The idea being that it may be possible to get more libertarian policies implemented by appealing to empirical evidence for their efficacy on a case by case basis rather than by trying to convince people that libertarianism is the 'one true way'. That would seem to be precisely the kind of approach you would seem to prefer.
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-05-14T23:41:26.319Z · LW(p) · GW(p)
That clarifies the relevance, thank you.
↑ comment by Douglas_Knight · 2009-05-14T04:58:10.094Z · LW(p) · GW(p)
He didn't even use the term ideologue
He used the term "ideology."
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-05-14T10:31:27.627Z · LW(p) · GW(p)
He used the term "ideology."
Are you being disingenuous here, or do you really think those connotationally equivalent?
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2009-05-14T14:25:23.942Z · LW(p) · GW(p)
Some people use ideology more broadly. Others use it exactly as ideologue. It's pretty clear from taw's later comment that he meant it as ideologue. I responded to the short comment rather than the long comment because it merely insinuates.
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2009-05-14T22:27:45.751Z · LW(p) · GW(p)
It's pretty clear from taw's later comment that he meant it as ideologue.
That does not seem clear to me. Are you certain you aren't reading too much into it?
Assume good faith, as Wikipedia would say.
↑ comment by thomblake · 2009-05-13T19:35:40.854Z · LW(p) · GW(p)
Indeed. "classical liberal" is the only way I use "liberal", though I'll only use the term at all if I'm actually discussing political philosophy.
Also, one's political philosophy is not necessarily isomorphic to one's ethics. The questions "Should I be a libertarian", "How should we arrange political institutions", and "How should I feel about other people telling me what to do" are all ethical questions, but their answers are far more complex than finding something that 'matches' one's ethics.
comment by Z_M_Davis · 2009-05-14T19:03:58.248Z · LW(p) · GW(p)
IQs (warning: self-reported numbers for notoriously hard-to-measure statistic)
Yeah, I'm extremely skeptical of the IQ data. Assuming a standard mean=100 SD=15 test (although at least one respondent says he took a test with SD=24), our reputed median is above the 0.003th percentile. I don't think any public blog is that elite.
ERRATUM: Oh, dear. I meant 99.7th percentile.
Replies from: Eliezer_Yudkowsky, anonym, AnnaSalamon↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-05-14T19:16:31.967Z · LW(p) · GW(p)
I'm skeptical of the IQ data because of the number of IQs above 140. Most IQ tests don't measure well above IQ 140, and so even if we have that many truly exceptional people, I would not expect it to show up in their measured IQs.
Replies from: Vladimir_Nesov, Alicorn↑ comment by Vladimir_Nesov · 2009-05-14T21:22:52.973Z · LW(p) · GW(p)
But if so many lied, it would also be a surprising fact, that doesn't seem to be a better explanation.
Replies from: anonym, SoullessAutomaton, AnnaSalamon↑ comment by anonym · 2009-05-17T01:11:43.853Z · LW(p) · GW(p)
It's only a little more surprising than somebody at an online forum for bodybuilders lying about how much they can bench press.
Replies from: Z_M_Davis↑ comment by Z_M_Davis · 2009-05-17T04:40:43.061Z · LW(p) · GW(p)
I take it the reason it's not equally surprising is that few bodybuilders are as monomaniacally obsessed with The Truth as we are?
Replies from: anonym↑ comment by anonym · 2009-05-17T17:37:35.895Z · LW(p) · GW(p)
Most human beings in any forum, anywhere, will be more obsessed with signaling and other concerns than The Truth -- even in a pseudo-anonymous survey -- and will be subject to most of the standard cognitive biases that bodybuilders will be, even if to a lesser degree. Being obsessed with The Truth does not mean never lying or exaggerating (or reporting just that one internet IQ test you took that was 1 std dev higher than your real-world test).
↑ comment by SoullessAutomaton · 2009-05-15T13:41:11.841Z · LW(p) · GW(p)
If a lot of people actually got scores outside the calibration range of whatever IQ test they took, they could have answered honestly and the resulting numbers still be as bogus as Eliezer suggests.
↑ comment by AnnaSalamon · 2009-05-15T03:53:59.914Z · LW(p) · GW(p)
We had similar data on the survey I ran (which I still need to write up the results of). I don't know that the numbers past 140 are intelligence-indicative, but I suspect people really did get their reported scores on IQ tests.
Replies from: AnnaSalamon, pepe_prime↑ comment by AnnaSalamon · 2009-05-15T07:03:00.928Z · LW(p) · GW(p)
Also, in the responses to my survey, people who said they were from the USA were no more or less likely than people who said they weren't to report scores over 140. Which argues against regional variation in what IQ tests mean. Although I don't know how consistent the meaning is of IQ tests within the USA; anyone have knowledge, here?
↑ comment by pepe_prime · 2016-03-31T08:13:19.361Z · LW(p) · GW(p)
Did you ever write up your results? They would make a valuable addition to the historical data.
↑ comment by anonym · 2009-05-17T01:06:32.199Z · LW(p) · GW(p)
If we were to assume a test with a standard deviation of 24, a median of 141.5 would be just below the 96th percentile. That still seems too high for the median user, but it's almost plausible -- much more so than 99.7th percentile.
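For reference, both percentile figures in this thread check out under a normal model (a quick sketch; 141.5 is the reported median score, and the mean-100 assumption matches standard IQ scaling):

```python
from statistics import NormalDist

# Percentile rank of a reported IQ of 141.5 under two test scalings (mean 100).
pct_sd15 = NormalDist(mu=100, sigma=15).cdf(141.5) * 100
pct_sd24 = NormalDist(mu=100, sigma=24).cdf(141.5) * 100
print(f"SD 15: {pct_sd15:.1f}th percentile")  # ~99.7
print(f"SD 24: {pct_sd24:.1f}th percentile")  # ~95.8
```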
It's also quite likely that LW readers with abnormally high IQs (relative to LW) are (A) much more likely to have been tested and to know (and remember) the result, and (B) include the score on the survey.
↑ comment by AnnaSalamon · 2009-05-15T03:56:53.124Z · LW(p) · GW(p)
It doesn't strike me as all that implausible, given how many other indicators of quirkiness we have as a group (e.g., the 95-97% male ratio, the 12% with PhDs (and 23% of members over 35 with PhDs), the portion with advanced math/compsci skill, etc.).
Replies from: Alicorn↑ comment by Alicorn · 2009-05-15T04:10:35.785Z · LW(p) · GW(p)
Math is not my strong suit, but my arithmetic comes out differently on the PhD bit. Are you counting as PhDs the people who have "student" in the "degree status" field?
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2009-05-15T06:20:34.736Z · LW(p) · GW(p)
I was working off the 233 people who filled out my earlier survey. I haven't analyzed Yvain's data; what percentage do you get there?
Replies from: Alicorn
comment by thomblake · 2009-05-13T14:13:17.833Z · LW(p) · GW(p)
Person who put "2172", you probably thought you were screwing up the results, but in fact you managed to counterbalance the other person who put "1700", allowing the mean to revert to within one year of the correct value :P
Not to worry - I am a believer in the wisdom of crowds, so I knew full well that I wasn't going to be screwing up anything. That response was pure noise.
I just don't like guessing, and so I put "0%" for my confidence on that question, so that one of my answers was definitely wrong and the other was definitely right.
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2009-05-13T15:10:10.589Z · LW(p) · GW(p)
I believe in the wisdom of crowds, but I also think that your actions were screwing up the results.
If you weren't going to take a question seriously, I wish you wouldn't have answered it at all.
ADDED: I decided not to downvote you because I don't want to discourage being honest/forthcoming.
Replies from: kpreid, randallsquared↑ comment by kpreid · 2009-05-13T16:18:03.015Z · LW(p) · GW(p)
0% confidence should mean zero weight when computing the results, no?
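Concretely (hypothetical guesses and confidences), a confidence-weighted mean gives a zero-confidence answer zero weight, so it cannot skew the result:

```python
# Hypothetical (year guess, confidence) pairs; the 0.0-confidence
# answer drops out of a confidence-weighted mean entirely.
answers = [(1879, 0.6), (1880, 0.9), (2172, 0.0)]
total_weight = sum(w for _, w in answers)
weighted_mean = sum(year * w for year, w in answers) / total_weight
print(weighted_mean)  # the zero-confidence 2172 contributes nothing
```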
Replies from: MichaelBishop, orthonormal↑ comment by Mike Bishop (MichaelBishop) · 2009-05-13T19:22:20.321Z · LW(p) · GW(p)
Yes, but what was the point of that survey question? Among other things, it could assess a) the distribution of the survey takers' accuracy, b) the distribution of the survey takers' calibration, c) the relationship of accuracy and calibration to other personal characteristics.
I don't mean to make an overly-big-deal about this, and I appreciate thomblake's other contributions to the LW community, but because he didn't really give us his best guess about when the lightbulb was invented, he reduced our ability to learn all these things.
Replies from: thomblake↑ comment by thomblake · 2009-05-13T19:28:18.673Z · LW(p) · GW(p)
I don't think much of the concepts of 'accuracy' and 'calibration' and whatnot as they are used here. As far as I'm concerned, the correct response choices were either the correct answer with 100% confidence, or "I don't know". So my contemptuous answer to the question can be used to relate my attitude towards that to other personal characteristics.
Replies from: Nominull, jimrandomh, Craig_Morgan, MichaelBishop↑ comment by jimrandomh · 2009-05-14T02:41:24.673Z · LW(p) · GW(p)
When providing what you think is the correct answer, there is still some probability that you're mistaken. That probability could be 10^-10, but it can't be zero. And when answering "I don't know", you can still guess, and produce a probability that your guess was correct. Lumping all low probabilities of being correct into a single qualitative judgment of "I don't know" sometimes makes sense, but sometimes a concrete probability is useful so you should know how to generate one.
↑ comment by Craig_Morgan · 2009-05-14T02:34:50.527Z · LW(p) · GW(p)
At the time of this comment, thomblake's above comment is at -3 points and there are no comments arguing against his opinion, or why he is wrong. We should not downvote a comment simply because we disagree with it. Thomblake expressed an opinion that differs (I presume) from the community majority. A better response to such an expressed opinion is to present arguments that correct his belief. Voting based on agreement/disagreement will lead people not to express viewpoints they believe differ from the community's.
Replies from: Z_M_Davis, jimrandomh↑ comment by Z_M_Davis · 2009-05-14T04:49:16.444Z · LW(p) · GW(p)
While I agree that voting shouldn't be based strictly on agreement/disagreement, voting is supposed to be an indicator of comment quality, with downvotes going to poorly-argued comments that one would like to see less of. It is worth bearing in mind that the more mistaken a conclusion is, the less likely one is to encounter strong comments in support of that conclusion.
If someone were to present specific, clearly-articulated arguments purporting to show that popular notions of accuracy and calibration are mistaken, that might well deserve an upvote in my book. But above, thomblake seems to be rejecting out of hand the very notion of decisionmaking under uncertainty, which seems to me to be absolutely fundamental to the study of rationality. (The very name Less Wrong denotes wanting beliefs that are closer to the truth, even if one knows that not everything one believes is perfectly true.) I've downvoted thomblake's comment for this reason, and I've downvoted your comment because I don't think it advances the discourse to discourage downvotes of poor comments.
Replies from: thomblake↑ comment by thomblake · 2009-05-14T14:16:09.319Z · LW(p) · GW(p)
rejecting out of hand the very notion of decisionmaking under uncertainty
Nope. I'm something of a Popperian. On things I care about, I find the best position I can and act as though I'm 100% certain of it. When another position is shown to be superior, I reject the original view entirely.
There are some circumstances where we need to make a decision without anything we can feel that strongly about, but I think that in most circumstances, bringing 'probability' into the process isn't helpful. Humans just aren't built to think like that, and I'd rather use just plain 'judgment'.
Replies from: Z_M_Davis↑ comment by Z_M_Davis · 2009-05-14T18:48:25.418Z · LW(p) · GW(p)
On things I care about, I find the best position I can and act as though I'm 100% certain of it. When another position is shown to be superior, I reject the original view entirely.
Thank you for the clarification. Although frankly, I don't see how that could possibly work. I mean, suppose someone flips a coin that you know to be slightly biased towards heads. Would you be willing to bet a thousand dollars that the coin comes up heads?
Okay, coinflip examples are always somewhat contrived (although I really could offer you that bet), but we can come up with much more realistic scenarios, too. Say you've just been on a job interview that went really well, and it seems like you're going to get the position. Do you therefore not bother to apply anywhere else, acting with 100% certainty that you will get the job? Or I guess you could alternatively say that you simply "don't know" whether you'll get the position--but can "I don't know" really be a complete summary of your epistemological state? Wouldn't you at least have some qualitative feeling of "The interview went well; I'll probably get hired" versus "The interview did not go well at all; I probably won't get hired"?
Humans just aren't built to think like that
I certainly agree that humans aren't built to naturally think in terms of probabilities, but I see no reason to believe that human-default modes of reasoning are normative: we could just be systematically stupid on an absolute scale, and indeed I am rather convinced this is the case.
bringing 'probability' into the process isn't helpful
I agree that it would be incredibly silly to try to explicitly calculate all your beliefs using probability theory. But a qualitative or implicit notion of probability does seem natural. You don't have to think in terms of likelihood ratios to say things like "It's probably going to rain today" or "I think I locked the door, but I'm not entirely sure." Is this the sort of thing that you mean by the word judgment? In any case, even if bringing probability into the process isn't helpful, bringing in this dichotomy between absolutely-certain-until-proven-otherwise and complete ignorance seems downright harmful. I mean, what do you do when you think you've locked the door, but you're not entirely sure? Or does that just not happen to you?
Replies from: JGWeissman↑ comment by JGWeissman · 2009-05-14T19:29:57.669Z · LW(p) · GW(p)
I mean, suppose someone flips a coin that you know to be slightly biased towards heads. Would you be willing to bet a thousand dollars that the coin comes up heads?
Well, that is what Bayesian decision theory would suggest you do, provided your utility function is linear with respect to money.
But, to illustrate the problem with acting as though you were 100% certain of your best theory, suppose I offer you the following bet. I will roll an ordinary six-sided die, and if the result is between 1 and 4 (inclusive), I will pay you $10. But if the result is 5 or 6, you will pay me $100. Since a result between 1 and 4 is more likely than a 5 or a 6, you treat it as certain and accept my bet, which you assign an expected value of $10. But really, the expected value is (2/3)($10) - (1/3)($100) = -$80/3. On average, you lose about $27 with this bet.
The problem here is that, by acting as though you are 100% sure, you give no weight to the potential costs of being wrong (including the opportunity of cost of the potential benefits of a different decision).
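The arithmetic in the bet above, spelled out:

```python
# Expected value of the die bet: win $10 on a roll of 1-4,
# lose $100 on a roll of 5 or 6.
p_win = 4 / 6
naive_ev = 10  # what you get by treating "1-4" as certain
true_ev = p_win * 10 + (1 - p_win) * (-100)
print(f"true EV = ${true_ev:.2f}")  # about -$26.67, not the naive +$10
```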
Replies from: Z_M_Davis, thomblake↑ comment by thomblake · 2009-05-15T17:43:26.319Z · LW(p) · GW(p)
I was talking about ordinary circumstances. I've never bet money on the roll of a die, nor shall I. If it were to come up, I might well do the sort of analysis you suggest, as probability seems like it's correctly applied to die rolling. Can you think of a better example, that might actually occur in my life?
Replies from: JGWeissman, Cyan↑ comment by JGWeissman · 2009-05-15T18:45:45.152Z · LW(p) · GW(p)
[P]robability seems like it's correctly applied to die rolling.
Die rolls are deterministic. Given the initial orientation, the mass and elasticity of the die, the position, velocity, and angular momentum it is released with (which are themselves deterministic), and the surface it is rolled on, it is possible in principle to deduce what the result will be. (Quantum effects will be negligible; the classical approximation is valid in this domain. Imagine the die is thrown by a mechanical device if you are worried this does not apply to the nervous system of the die roller.)
The probability does not describe randomness in the die, because the die is not random. The probability describes your ignorance of the relevant factors and your lack of logical omniscience to compute the result from those factors.
If you reject this argument in the case of dice rolling, how do you accept it (or what alternative do you use) in other cases of probability representing uncertainty?
↑ comment by Cyan · 2009-05-15T17:50:36.631Z · LW(p) · GW(p)
Do you wear a seatbelt when you ride in a car? (I'm aware of at least one libertarian who didn't.) The most probable theory is that you won't need to, but even a small chance that it might prevent harm is generally thought to be worth the effort to put it on. Any action you take that fits this pattern qualifies.
Replies from: thomblake↑ comment by thomblake · 2009-05-15T18:18:04.833Z · LW(p) · GW(p)
I'm happy to report that I have made the decision to wear seat belts without evaluating anything using probability. If the justification is really:
but even a small chance that it might prevent harm is generally thought to be worth the effort to put it on
Then you're not explicitly assigning probabilities. Change 'small chance' to '5%' and I'd wonder how you got that number, and what would happen if the chance were 4.99%.
Replies from: JGWeissman, Cyan↑ comment by JGWeissman · 2009-05-15T18:59:31.751Z · LW(p) · GW(p)
How did you make the decision to wear seat belts then? If it is because you were taught to at a young age, or it is the law, then can you think of any safety precaution you take (or don't take) because it prevents or mitigates a problem that you believe would have less than 50% chance of occurring any particular time you do not take the precaution?
Then you're not explicitly assigning probabilities.
Often we make decisions based on our vague feelings of uncertainty, which are difficult to describe as a probability that could be communicated to others or explicitly analyzed mathematically. This difficulty is a failure of introspection, but the uncertainty we feel does somewhat approximate Bayesian probability theory. Many biases represent the limits of this approximation.
↑ comment by Cyan · 2009-05-15T18:31:35.823Z · LW(p) · GW(p)
I was arguing against:
On things I care about, I find the best position I can and act as though I'm 100% certain of it. When another position is shown to be superior, I reject the original view entirely.
with the implicit assumption that "best positions" are about states of the world, and not synonymous with "best decisions".
I guess we need to go back to Z. M. Davis's last paragraph, reproduced here for your convenience:
I agree that it would be incredibly silly to try to explicitly calculate all your beliefs using probability theory. But a qualitative or implicit notion of probability does seem natural. You don't have to think in terms of likelihood ratios to say things like "It's probably going to rain today" or "I think I locked the door, but I'm not entirely sure." Is this the sort of thing that you mean by the word judgment? In any case, even if bringing probability into the process isn't helpful, bringing in this dichotomy between absolutely-certain-until-proven-otherwise and complete ignorance seems downright harmful. I mean, what do you do when you think you've locked the door, but you're not entirely sure? Or does that just not happen to you?
↑ comment by jimrandomh · 2009-05-14T02:52:22.931Z · LW(p) · GW(p)
We should not downvote a comment simply because we disagree with it.
This sounds great in theory, but other communities have applied that policy with terrible results. Whether I agree with something or not is the only information I have as to whether it's true/wise, and that should be the main factor determining score. Excluding disagreement as grounds for downvoting leaves only presentation, resulting in posts that are eloquent, highly rated, and wrong. Those are mental poison.
Replies from: JGWeissman↑ comment by JGWeissman · 2009-05-14T04:07:25.821Z · LW(p) · GW(p)
When someone honestly presents their position, and is open to discussing it further, there is no need to down vote their comment for being wrong. In fact, it is counterproductive. By discouraging them from expressing their incorrect position, you do not cause them to relinquish it. By instead explaining why you think it is wrong, you help them to adopt a better position. And if it happens that they were right and you were wrong, then you have the opportunity to learn something.
I tend to down vote comments that are off topic, incoherent, arrogant, or present a conclusion without support.
I tend to up vote comments when they are eloquent, insightful, and correct, or sometimes, when they say pretty much what I was planning to say.
↑ comment by Mike Bishop (MichaelBishop) · 2009-05-13T19:49:58.675Z · LW(p) · GW(p)
I'm intrigued. Please point me to a discussion of these issues or make a top level post.
Replies from: dfranke↑ comment by orthonormal · 2009-05-13T18:28:06.070Z · LW(p) · GW(p)
That's an interesting idea, but I think Yvain just averaged the answers without regard to confidence.
↑ comment by randallsquared · 2009-05-13T16:31:14.232Z · LW(p) · GW(p)
I believe in the wisdom of crowds, but I also think that your actions were screwing up the results.
This seems contradictory. Care to explain?
Replies from: orthonormal↑ comment by orthonormal · 2009-05-13T17:11:32.563Z · LW(p) · GW(p)
The "wisdom of crowds" would only apply if everyone is trying to actually get the answer right, and if the errors of incompetence are somewhat random. A large number of intentional pranksters (or one prankster who says "a googolplex") can predictably screw up the average by introducing large variance or acting in a non-random fashion.
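A toy simulation of that point, with hypothetical numbers: a single absurd answer wrecks the mean but leaves the median untouched.

```python
from statistics import mean, median

# Hypothetical: 99 honest answers plus one "googolplex"-style prank.
honest_guesses = [1879] * 99
guesses = honest_guesses + [10**100]

print(median(guesses))          # 1879.0: unmoved by the outlier
print(mean(guesses) > 10**97)   # True: the mean is astronomically wrong
```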
comment by Unnamed · 2009-05-13T03:29:16.339Z · LW(p) · GW(p)
There was a .453 correlation between this number and actual IQ; that is, 45% of the variance in how likely you thought you were to have a higher-than-average IQ could be explained by your actual IQ.
Correlation is r and percent of variance explained is r^2, so I think that should be 21% rather than 45%. There's also a typo where you say ".5 level" and presumably mean .05.
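The correction is just the square of the reported correlation:

```python
# r is the correlation coefficient; r^2 is the fraction of
# variance explained.
r = 0.453
print(f"r = {r}, r^2 = {r**2:.2f}")  # r^2 is about 0.21, i.e. 21%, not 45%
```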
Replies from: Yvain↑ comment by Scott Alexander (Yvain) · 2009-05-13T06:40:15.577Z · LW(p) · GW(p)
Thanks. Edited out.
Replies from: Unnamed
comment by blink · 2009-05-13T03:05:47.578Z · LW(p) · GW(p)
Probability of some Creator God: 4.2, 0, 14.6. Probability of something supernatural existing: 4.1, 0, 12.8.
It looks like some of us have yet to overcome the conjunction fallacy.
Replies from: MichaelVassar↑ comment by MichaelVassar · 2009-05-13T03:41:17.788Z · LW(p) · GW(p)
Some people may believe in natural simulator/creator/gods or some other sort of natural god.
Replies from: homunq
comment by dfranke · 2009-05-12T23:50:31.116Z · LW(p) · GW(p)
If you intend to hide something by shuffling the results, it's probably also a good idea to remove the "timestamp" column :-)
Replies from: Yvain↑ comment by Scott Alexander (Yvain) · 2009-05-13T06:38:41.232Z · LW(p) · GW(p)
...right. Done.
comment by Peter_de_Blanc · 2009-05-15T01:06:45.419Z · LW(p) · GW(p)
Having a copy of the original survey would be nice.
comment by timtyler · 2009-05-12T22:41:07.090Z · LW(p) · GW(p)
Re: Probability of an average person cryonically frozen today being successfully revived: 22.3, 10, 26.2.
An enormous estimate, IMO - close to that given by the salesmen(!):
http://www.alcor.org/Library/html/WillCryonicsWork.html
Replies from: MichaelVassar, Lawliet, timtyler↑ comment by MichaelVassar · 2009-05-13T03:39:29.049Z · LW(p) · GW(p)
That's because cryonics salesmen are generally amateur rationalists who are actually trying to believe rationally and report their beliefs honestly.
Replies from: timtyler↑ comment by timtyler · 2009-05-13T07:08:41.266Z · LW(p) · GW(p)
I am more inclined to believe that they are a self-selected group - drawn from the section of the population with the most optimistic estimates of whether cryonics will work. Usually, "most optimistic" != "most realistic".
Replies from: homunq↑ comment by timtyler · 2009-05-12T22:45:30.329Z · LW(p) · GW(p)
From that document:
"If all my best case figures are used, P(now) from the Warren Equation is 0.15, or a bit better than one chance in seven. This is my most optimistic scenario. The pessimistic scenario puts P at 0.0023, or less than one chance in 400."
comment by Jake · 2009-05-13T12:33:27.597Z · LW(p) · GW(p)
It's interesting that Yvain credited The Great Filter with the huge standard deviations seen in the existence of aliens question. I don't recall seeing any qualifier about conscious or intelligent beings. When in doubt, blame Cached Thoughts :)
Also, does observable mean within our future or past light cone?
comment by John_Maxwell (John_Maxwell_IV) · 2009-05-13T05:01:01.469Z · LW(p) · GW(p)
Here is an anomalous finding I didn't expect: the higher the probability you assign to the truth of revealed religion, the less confident you are that your IQ is above average (even though no correlation between this religious belief and IQ was actually found). Significance is at the .025 level. I have two theories on this: first, that we've been telling religious people they're stupid for so long that it's finally starting to sink in :) Second, that most people here are not religious, and so the people who put a "high" probability for revealed religion may be assigning it 5% or 10%, not because they believe it but because they're just underconfident people who overadjust for their biases a little too much. This same underconfidence leads them to underestimate the possibility that their IQ is above average.
Christianity exalts humility. I wouldn't be surprised if other religions do as well. But your second explanation seems very plausible.
comment by steven0461 · 2009-05-13T19:52:39.948Z · LW(p) · GW(p)
It saddens me that so many people assigned extreme probabilities to propositions that may well be false. Makes me wonder what the entire OB/LW project has been for.
↑ comment by Emily · 2009-05-13T16:40:31.036Z · LW(p) · GW(p)
Uh... sorry you think so, why?
comment by thomblake · 2009-05-13T15:40:33.049Z · LW(p) · GW(p)
14 people (10.9%) didn't believe in morality.
I'd really like to know what these folks are thinking. Are they using 'morality' in the way Nietzsche did when he called himself an amoralist? Or do they really think there's nothing to the concepts of 'good/bad' and 'right/wrong'?
Supposing one wants to open a pickle jar, and one considers the acts of (a) twisting the top until it comes off, (b) smashing the jar with a hammer, and (c) cutting off one's own hand with a chainsaw, do these folks think (for instance) that (a) is no better than (c)?
↑ comment by AllanCrossman · 2009-05-13T15:43:53.332Z · LW(p) · GW(p)
I would guess that they would say that one can certainly have preferences, without there being anything worth calling "morality".
↑ comment by orthonormal · 2009-05-13T17:17:32.087Z · LW(p) · GW(p)
It's probably more of a statement about our jargon: most OB veterans are probably on board with the concept that "morality" should be used to generally talk about our goal systems and decision processes, and not as if it implied naive moral realism.
I'd suspect that some of the 14 are relative newcomers who thought that the question was asking whether they accepted some form of moral realism or not. I'd also expect that some of them are veterans who simply disagreed that the term "morality" should be extended in the above fashion.
↑ comment by randallsquared · 2009-05-13T16:24:04.340Z · LW(p) · GW(p)
Someone can believe in an action being good or bad for a purpose without believing that there is any ultimate reason to choose one purpose over another. Once you've assumed very high-level goals, further discussion is about effectiveness rather than morality. Further, except for sub-goals, where goal X is required or useful for reaching goal Y, rationality doesn't have anything to say about "choosing" goals, which means you cannot rationally argue about morality with someone whose highest goal conflicts with your own.
Replies from: thomblake↑ comment by thomblake · 2009-05-13T16:34:31.540Z · LW(p) · GW(p)
But ethics doesn't just apply to these high-level goals. A utilitarian is committed to whatever action generates the most overall net utility - even when choosing how to (for instance) open a pickle jar. (of course, it's been rightly argued that even a true utilitarian might do best in fact to not consider the question while making the decision, due to the cost of considering the decision). If it turns out (b) results in more overall net utility than (a), then the utilitarian says (a) was the wrong thing to do.
If someone nonetheless thinks one should do (a) instead of (b) because one should choose the option that most effectively reaches one's goals without terrible side-effects, then that person would disagree with the utilitarian above about ethics. If you don't believe in ethics, then you have no grounds for disagreeing with the utilitarian.
↑ comment by conchis · 2009-05-13T17:58:49.597Z · LW(p) · GW(p)
See e.g. non-cognitivism and error theory.
comment by andrewc · 2009-05-13T02:46:31.234Z · LW(p) · GW(p)
Of the 102 people who cared about the ending to 3 Worlds Collide, 68 (66.6%) preferred to see the humans blow up Huygens, while 34 (33.3%) thought we'd be better off cooperating with the aliens and eating delicious babies.
I'm shocked. Are there any significant variations in the responses of babyeaters compared to freedom fighters to other questions?
↑ comment by John_Maxwell (John_Maxwell_IV) · 2009-05-13T05:21:23.299Z · LW(p) · GW(p)
Can I make a pro-babyeater argument?
Here is a dialogue between an imaginary six-year-old child named Dennis and myself.
Me: Hi Dennis, do you like broccoli?
Dennis: No, I hate it!
Me: But it's good for you, right?
Dennis: I don't care! It tastes awful!
Me: Would you like to like broccoli?
Dennis: No, I can't stand broccoli! That stuff is gross!
Me: What if I told you some magic words that would make it so that every piece of broccoli you ever ate would taste just like chocolate if you said them? Would you say the magic words?
Dennis: Well...
Me: You like chocolate, don't you?
Dennis: Yes, but...
Me: What?
Dennis: Your questions are too hard.
I think everyone has conflicts between their different wants. I want to do well in my classes, but I don't want to study. And yet I can't think of any conflicts between my metawants: If I could choose to like studying just as much as I like my favorite computer game, I would make that choice. The wants offered to the humans in the babyeaters story seem fairly sensible from a utilitarian perspective. They promote peace throughout the galaxy and mean lots of fun for everyone. What's not to like?
↑ comment by dclayh · 2009-05-14T23:25:26.004Z · LW(p) · GW(p)
I wish someone would do a post on metawants. Personally I view them with deep suspicion.
↑ comment by Alicorn · 2009-05-14T23:52:12.400Z · LW(p) · GW(p)
What about metawants (a.k.a. second-order desire) do you want to see a post on?
↑ comment by dclayh · 2009-05-15T00:06:12.244Z · LW(p) · GW(p)
Well, their ontological, epistemological, and ethical statuses, for three. Specifically, how it's possible to want X and simultaneously want to not want X (while remaining more or less sane/rational). Whether metawants have any special status when making utilitarian ethical calculations. That sort of thing. Even the history of thought on the subject (e.g. Buddhism, where the stated (and only?) metawant is to eliminate all first-order wants).
↑ comment by andrewc · 2009-05-14T00:41:15.181Z · LW(p) · GW(p)
I get the argument, but I assign a high value to self-determination. Like Arthur Dent, I don't want my brain replaced (unless by choice), even if the new brain is programmed to be ok with being replaced. Which ending did you pick in Deus Ex 2? I felt guilty gunning down JC and his brother, but it seemed the least wrong (according to my preferences) thing to do.
↑ comment by dclayh · 2009-05-14T23:30:44.607Z · LW(p) · GW(p)
I don't want my brain replaced (unless by choice)
A rather vacuous statement, no?
I felt guilty gunning down JC and his brother, but it seemed the least wrong (according to my preferences) thing to do.
Isn't human nature funny* that we have qualms about behaving immorally in a sufficiently realistic simulation, yet can hear cold numbers about enormous real disutility (genocides, natural disasters, etc.) and feel nothing? That's speaking for myself incidentally, not casting aspersions on you.
*(where by "funny" I mean "designed by a blind idiot god", naturally)
↑ comment by John_Maxwell (John_Maxwell_IV) · 2009-05-14T22:42:35.490Z · LW(p) · GW(p)
Like Arthur Dent, I don't want my brain replaced (unless by choice), even if the new brain is programmed to be ok with being replaced.
I don't think you're being very fair to your new brain. Do you?
I haven't played Deus Ex 2, sorry.
↑ comment by MichaelHoward · 2009-05-13T13:06:22.663Z · LW(p) · GW(p)
As things that were good for us in the ancestral environment (fat and sugar) tend to taste good, and things that might be bad (suspect plants) taste yucky, Imaginary Dennis' reaction makes adaptive sense. Do you want to want to eat poison?
comment by lukeprog · 2011-01-21T06:44:01.080Z · LW(p) · GW(p)
Oh God. Even LWers are libertarians! We are doomed.
:)
↑ comment by Paul Crowley (ciphergoth) · 2011-01-21T11:35:04.274Z · LW(p) · GW(p)
Oh come on, liberals + socialists outnumber libertarians or even libertarians + conservatives...
↑ comment by CaveJohnson · 2011-07-01T07:10:49.038Z · LW(p) · GW(p)
Yes, but among the educated, liberal or social-democratic views are socially expected and rewarded (the default, in other words).
Edit: Am I wrong on this?
↑ comment by arundelo · 2011-01-24T06:57:06.090Z · LW(p) · GW(p)
Source (a 2009 LW survey) for ciphergoth's comment.