Open Thread: June 2010

post by Morendil · 2010-06-01T18:04:48.504Z · LW · GW · Legacy · 663 comments

To whom it may concern:

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

(After the critical success of part II, and the strong box office sales of part III in spite of mixed reviews, will part IV finally see the June Open Thread jump the shark?)

Comments sorted by top scores.

comment by Scott Alexander (Yvain) · 2010-06-01T23:17:06.181Z · LW(p) · GW(p)

Cleaning out my computer I found some old LW-related stuff I made for graphic editing practice. Now that we have a store and all, maybe someone here will find it useful:

Replies from: ata, pjeby, fburnaby, Unnamed, gaffa, Roko, Houshalter
comment by ata · 2010-06-05T01:54:50.960Z · LW(p) · GW(p)

You are magnificent.

(Alternate title for the LW tabloid — "The Rational Enquirer"?)

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2010-06-05T10:28:59.468Z · LW(p) · GW(p)

That's... brilliant. I might have to do another one just for that title.

comment by pjeby · 2010-06-02T03:06:15.911Z · LW(p) · GW(p)

Less Wrong tabloid magazine

Sweet!

Replies from: cousin_it
comment by cousin_it · 2010-06-02T09:42:34.493Z · LW(p) · GW(p)

Yep, it was probably the first rationalist joke ever that made me laugh.

Replies from: FourFire
comment by FourFire · 2015-05-07T17:58:23.137Z · LW(p) · GW(p)

I didn't see that until right now; it made me chuckle.

comment by fburnaby · 2010-06-03T18:54:17.930Z · LW(p) · GW(p)

Is your boyfriend a frequentist?

Nearly killed me.

comment by Unnamed · 2010-06-02T00:36:29.227Z · LW(p) · GW(p)

We have a store? Where?

comment by gaffa · 2010-06-05T01:25:43.623Z · LW(p) · GW(p)

Tabloid 100% gold. Hanson slayed me.

comment by Roko · 2010-06-02T22:39:18.529Z · LW(p) · GW(p)

Oh dear oh dear oh dear oh dear...

comment by Houshalter · 2010-06-02T22:18:02.949Z · LW(p) · GW(p)

"Aliens ate my baby!"

Lol, although, what does astrology have to do with anything Less Wrong-ish?

Replies from: cousin_it
comment by cousin_it · 2010-06-02T22:21:28.128Z · LW(p) · GW(p)

That's a reference to Three Worlds Collide.

comment by NaN · 2010-06-01T22:14:47.076Z · LW(p) · GW(p)

Why is LessWrong not an Amazon affiliate? I recall buying at least one book due to it being mentioned on LessWrong, and I haven't been around here long. I can't find any reliable data on the number of active LessWrong users, but I'd guess it would number in the 1000s. Even if only 500 are active, and assuming only 1/4 buy at least one book mentioned on LessWrong, assuming a mean purchase value of $20 (books mentioned on LessWrong probably tend towards the academic, expensive side), that would work out at $375/year.

IIRC, it only took me a few minutes to sign up as an Amazon affiliate. They (stupidly) require a different account for each Amazon website, so 5*4 minutes (.com, .co.uk, .de, .fr), +20 for GeoIP database, +3-90 (wide range since coding often takes far longer than anticipated) to set up URL rewriting (and I'd be happy to code this) would give a 'worst case' scenario of $173 annualized returns per hour of work.

Now, the math is somewhat questionable, but the idea seems like a low-risk, low-investment and potentially high-return one, and I note that Metafilter and StackOverflow do this, though sadly I could not find any information on the returns they see from this. So, is there any reason why nobody has done this, or did nobody just think of it/get around to it?
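Unpacking that arithmetic (every input below is a guess from the comment above, not measured data; note that the $375 figure implies a 15% effective commission, which is at the high end of Amazon's historical affiliate rates):

```python
# Back-of-the-envelope estimate; all inputs are the commenter's guesses.
active_users = 500
buyer_fraction = 0.25
mean_purchase = 20.0  # dollars

purchase_volume = active_users * buyer_fraction * mean_purchase  # $2,500/year
implied_commission = 375 / purchase_volume  # 0.15, i.e. a 15% effective rate

# Worst-case setup time: 4 affiliate accounts at 5 min each,
# 20 min for the GeoIP database, 90 min of coding.
worst_case_hours = (5 * 4 + 20 + 90) / 60

print(purchase_volume)                # 2500.0
print(round(375 / worst_case_hours))  # 173 dollars per hour of work
```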

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-06-02T01:28:25.831Z · LW(p) · GW(p)

From your link, a further link doesn't make it sound great at SO - 2-4x the utter failure. But they are very positive about it because the cost of implementation was very low. Just top-level posts or no geolocating would be even cheaper.

You may be amused (or something) by this search.

Replies from: mattnewport
comment by mattnewport · 2010-06-02T01:47:23.248Z · LW(p) · GW(p)

A possibly relevant data point: I usually post any links to books I put online with my amazon affiliate link and in the last 3 months I've had around 25 clicks from links to books I believe I posted in Less Wrong comments and no conversions.

comment by bentarm · 2010-06-01T22:53:33.601Z · LW(p) · GW(p)

The entire world media seems to have had a mass rationality failure about the recent suicides at Foxconn. There have been 10 suicides there so far this year, at a company which employs more than 400,000 people. This is significantly lower than the base rate of suicide in China. However, everyone is up in arms about the 'rash', 'spate', 'wave'/whatever of suicides going on there.

When I first read the story I was reading a plausible explanation of what causes these suicides by a guy who's usually pretty on the ball. Partly due to the neatness of the explanation, it took me a while to realise that there was nothing to explain.

Your strength as a rationalist is your ability to be more confused by fiction than by reality. It's even harder to achieve this when the fiction comes ready-packaged with a plausible explanation (especially one which fits neatly with your political views).
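The base-rate claim above checks out on rough numbers (the ~20-per-100,000 annual rate for China is an approximation; published estimates from around 2010 vary):

```python
# Rough annualized comparison; the base rate is an approximate figure.
workforce = 400_000
observed = 10   # suicides in roughly the first five months of 2010
months = 5

expected_per_year = 20 * workforce / 100_000   # baseline of ~20 per 100k/year
observed_annualized = observed * 12 / months

print(expected_per_year)    # 80.0 suicides "expected" per year at baseline
print(observed_annualized)  # 24.0 per year at the observed pace
```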

Replies from: kodos96, mattnewport, Bo102010, Houshalter
comment by kodos96 · 2010-06-02T04:47:53.908Z · LW(p) · GW(p)

That's what I thought as well, until I read this post from "Fake Steve Jobs". Not the most reliable source, obviously, but he does seem to have a point:

But, see, arguments about national averages are a smokescreen. Sure, people kill themselves all the time. But the Foxconn people all work for the same company, in the same place, and they’re all doing it in the same way, and that way happens to be a gruesome, public way that makes a spectacle of their death. They’re not pill-takers or wrist-slitters or hangers. ... They’re jumpers. And jumpers, my friends, are a different breed. Ask any cop or shrink who deals with this stuff. Jumpers want to make a statement. Jumpers are trying to tell you something.

Now I'm not entirely sure of the details, but if it's true that all the suicides in the recent cluster consisted of jumping off the Foxconn factory roof, that does seem to be more significant than just 15 employees committing suicide in unrelated incidents. In fact, it seems like it might even be the case that there are a lot more suicides than the ones we've heard about, and the cluster of 15 are just those who've killed themselves via this particular, highly visible, method (I'm just speculating here).

I'm not sure what to make of this - without knowing more of the details it's probably impossible to say what's going on. But the basic point seems sound: the argument about being below the national average suicide rate doesn't really hold up if there's something specific about a particular group of incidents that makes them non-independent. As an example, if the members of some cult commit suicide en masse, you can't look at the region the event happened in and say "well, the overall suicide rate for the region is still below the national average, so there's nothing to see here".

Replies from: Douglas_Knight, Torben
comment by Douglas_Knight · 2010-06-02T05:04:28.012Z · LW(p) · GW(p)

Suicide and methods of suicide are contagious, FWIW.

Replies from: Eliezer_Yudkowsky, wedrifid
comment by wedrifid · 2010-06-02T05:33:48.068Z · LW(p) · GW(p)

I was surprised when I read a statistical analysis on national death rates. Whenever there was a suicide by a particular method published in newspapers or on television, deaths of that form spiked in the following weeks. This is despite the copycat deaths often being called 'accidents' (examples included crashed cars and aeroplanes). Scary stuff (or very impressive statistics-fu).

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-02T05:44:34.370Z · LW(p) · GW(p)

Yes, this is connected to the existence of suicide epidemics. The most famous example is the ongoing suicide epidemic over the last fifty years in Micronesia, where both the causes and methods of suicide have been the same (hanging). See for example this discussion.

comment by Torben · 2010-06-02T05:14:09.373Z · LW(p) · GW(p)

If all the members of a cult committed suicide then the local rate is 100%.

The most local rate that we so far know of is 15/400,000 which is 4x below baseline. If these 15 people worked at, say, the same plant of 1,000 workers you may have a point. But we don't know.

At this point there is nothing to explain.

Replies from: kodos96
comment by kodos96 · 2010-06-02T06:23:18.796Z · LW(p) · GW(p)

If all the members of a cult committed suicide then the local rate is 100%.

Fair enough - my example was poorly thought out in retrospect.

But I don't think it's correct that there's nothing to explain. If it's true that all 15 committed suicide by the same method - a fairly rare method frequently used by people who are trying to make a public statement with their death - then there seems to be something needing to be explained. As Fake Steve Jobs points out later in the cited article, if 15 employees of Walmart committed suicide within the span of a few months, all of them by way of jumping off the roof of their Walmart, wouldn't you think that was odd? Don't you think that would be more significant, and more deserving of an explanation, than the same 15 Walmart employees committing suicide in a variety of locations, by a variety of different methods?

I'm not committing to any particular explanation here (Douglas Knight's suggestion, for one, sounds like a plausible explanation which doesn't involve any wrongdoing on Foxconn's part), I'm just saying that I do think there's "something to explain".

Replies from: kodos96
comment by kodos96 · 2010-06-02T20:30:10.306Z · LW(p) · GW(p)

Just curious: why the downvote? Was this just a case of downvote = disagree? If so, what do you disagree with specifically?

Replies from: SilasBarta
comment by SilasBarta · 2010-06-02T20:58:08.013Z · LW(p) · GW(p)

Strange. I thought it made a good point, so I just upvoted it.

comment by mattnewport · 2010-06-01T23:01:05.264Z · LW(p) · GW(p)

The first question that came to mind when I heard about this story was 'what's the base rate?'. I didn't investigate further but a quick mental estimate made me doubt that this represented a statistically significant increase above the base rate. It's disappointing yet unsurprising that few if any media reports even consider this point.

comment by Bo102010 · 2010-06-02T00:53:16.263Z · LW(p) · GW(p)

Wasn't there a somewhat well-publicized "spate" of suicides at a large French telecom a while back? I remember the explanation being the same - the number observed was just about what you'd expect for an employer of that size.

ETA: http://en.wikipedia.org/wiki/France_Telecom

Replies from: mattnewport
comment by mattnewport · 2010-06-02T01:04:10.848Z · LW(p) · GW(p)

Even if the suicide rate was somewhat higher than average, it still doesn't necessarily tell you much. You should really be looking at the probability of that number of suicides occurring in some distinct subset of the population - given all the subsets of a population that you can identify, you will expect some to have higher suicide rates than the population as a whole. The relevant question is 'what is the probability that you would observe this number of suicides by chance in some randomly selected subset of this size?'

Incidentally the rate appears to be below that of Cambridge University students:

RESULTS: We identified 157 student deaths during academic years 1970-1996, of which 36 appeared to be suicides. The overall suicide rate was 11.3/100,000 person years at risk. Suicide rates were similar to those seen amongst 15- to 24-year-olds in the general population. There were non-significant trends for male postgraduates to be over-represented and first-year undergraduates under-represented. Examination times were not associated with excess suicide. CONCLUSIONS: Suicide rates in University of Cambridge students do not appear to be unduly high.
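One way to formalize that question is as a Poisson tail probability: how likely is a count at least this large in a random subset of a given size? A sketch with hypothetical numbers (the base rate and time window are assumptions, not data from the thread):

```python
from math import exp, factorial

def poisson_tail(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    return 1 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

# 400,000 people at ~20 suicides/100k/year, observed for half a year:
# the expected count is 40, so 10 observed events are entirely unremarkable.
print(poisson_tail(10, 40.0))

# The same 10 events in a hypothetical plant of only 1,000 workers
# (expected count 0.1) would be wildly improbable by chance alone.
print(poisson_tail(10, 0.1))
```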

Replies from: gwern
comment by gwern · 2010-06-02T17:52:25.639Z · LW(p) · GW(p)

Yes, this is my counter-counter-criticism as well. 'Sure, the overall China rate may be the same, but what's the suicide rate for young, employed workers employed by a technical company with bright prospects? I'll bet it's lower than the overall rate...'

Replies from: SilasBarta, mattnewport
comment by SilasBarta · 2010-06-02T17:57:50.903Z · LW(p) · GW(p)

Agreed. Also, I think what got the suicides in China into the news was that the victims attributed their suicides specifically to some weird policy or rule the company adhered to. It could be that the "normal" suicides at the company are being ignored, and the ones being reported are suicides on top of this, justifying the concern that this is abnormal.

comment by mattnewport · 2010-06-02T18:11:56.713Z · LW(p) · GW(p)

This was why I went looking for stats on suicides amongst university students. I remembered some talk when I was at Cambridge of a high suicide rate, which you might see as somewhat similarly counter-intuitive to a high suicide rate for 'young, employed workers employed by a technical company with bright prospects'.

Actually, there are a number of reasons to expect a somewhat elevated suicide rate in a relatively high pressure environment where large numbers of young people have left home for the first time and are living in close proximity to large numbers of strangers their own age. Stories about high suicide rates at elite universities tend to take a very different tack to stories about Chinese workers however.

comment by Houshalter · 2010-06-02T02:08:48.020Z · LW(p) · GW(p)

There's a recreation centre, but the engineers I was training told me they had never been there. Then I saw on TV that there's a stress room full of these dolls that look like Japanese warriors. You get a bat and you beat them. That's how they are encouraged to relieve the stress.

Ya, I can see how something like this could happen. By the way, a few statistics don't exactly prove anything. Were there 10 deaths last year? The year before? Do other factories have similar problems? Etc. Too many variables.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-02T02:25:32.538Z · LW(p) · GW(p)

Incidentally, note that the evidence strongly suggests that actively taking out your aggression actually increases rather than decreases stress and aggression levels. See for example, Berkowitz's 1970 paper "Experimental investigation of hostility catharsis" in the Journal of Consulting and Clinical Psychology.

comment by steven0461 · 2010-06-01T22:35:58.077Z · LW(p) · GW(p)

Marginal Revolution linked to A Fine Theorem, which has summaries of papers in decision theory and other relevant econ, including the classic "agreeing to disagree" results. A paper linked there claims that the probability settled on by Aumann-agreers isn't necessarily the same one as the one they'd reach if they shared their information, which is something I'd been wondering about. In retrospect this seems obvious: if Mars and Venus only both appear in the sky when the apocalypse is near, and one agent sees Mars and the other sees Venus, then they conclude the apocalypse is near if they exchange info, but if the probabilities for Mars and Venus are symmetrical, then no matter how long they exchange probabilities they'll both conclude the other one probably saw the same planet they did. The same thing should happen in practice when two agents figure out different halves of a chain of reasoning. Do I have that right?

ETA: it seems, then, that if you're actually presented with a situation where you can communicate only by repeatedly sharing probabilities, you're better off just conveying all your info by using probabilities of 0 and 1 as Morse code or whatever.

ETA: the paper works out an example in section 4.

Replies from: HalFinney, Roko, cousin_it
comment by HalFinney · 2010-06-03T21:46:59.182Z · LW(p) · GW(p)

I thought of a simple example that illustrates the point. Suppose two people each roll a die privately. Then they are asked, what is the probability that the sum of the dice is 9?

Now if one sees a 1 or 2, he knows the probability is zero. But let's suppose both see 3-6. Then there is exactly one value for the other die that will sum to 9, so the probability is 1/6. Both players exchange this first estimate. Now, curiously, although they agree, it is not common knowledge that this value of 1/6 is their shared estimate. After hearing 1/6, they know that the other die is one of the four values 3-6. So actually the probability is calculated by each as 1/4, and this is now common knowledge (why?).

And of course this estimate of 1/4 is not what they would come up with if they shared their die values; they would get either 0 or 1.
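A minimal simulation of this exchange (a sketch; it assumes the protocol and all prior announcements are common knowledge, so each agent's possibility set is public) reproduces the 1/6-then-1/4 dynamics:

```python
from fractions import Fraction

DIE = range(1, 7)

def estimate(own, possible_other):
    """P(sum == 9) given your own roll and the values you still
    consider possible for the other die (uniform prior)."""
    hits = sum(1 for o in possible_other if own + o == 9)
    return Fraction(hits, len(possible_other))

def exchange(d1, d2, rounds=4):
    """Simultaneous-announcement protocol for private rolls d1, d2."""
    k1 = set(DIE)  # values agent 1 considers possible for die 2
    k2 = set(DIE)  # values agent 2 considers possible for die 1
    history = []
    for _ in range(rounds):
        a1, a2 = estimate(d1, k1), estimate(d2, k2)
        history.append((a1, a2))
        # Each agent rules out values of the other's die that would have
        # produced a different announcement (both sets are common knowledge,
        # since they depend only on the public announcements so far).
        k1, k2 = ({o for o in k1 if estimate(o, k2) == a2},
                  {o for o in k2 if estimate(o, k1) == a1})
    return history

print(exchange(3, 4))
# Announcements go 1/6, 1/6 and then settle at 1/4, 1/4: a common-knowledge
# agreement on 1/4, even though sharing the rolls would give 0 (3 + 4 = 7).
```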

Replies from: HalFinney
comment by HalFinney · 2010-06-04T19:11:36.962Z · LW(p) · GW(p)

Here is a remarkable variation on that puzzle. A tiny change makes it work out completely differently.

Same setup as before, two private dice rolls. This time the question is, what is the probability that the sum is either 7 or 8? Again they will simultaneously exchange probability estimates until their shared estimate is common knowledge.

I will leave it as a puzzle for now in case someone wants to work it out, but it appears to me that in this case, they will eventually agree on an accurate probability of 0 or 1. And they may go through several rounds of agreement where they nevertheless change their estimates - perhaps related to the phenomenon of "violent agreement" we often see.

Strange how this small change to the conditions gives such different results. But it's a good example of how agreement is inevitable.

comment by Roko · 2010-06-02T22:49:25.450Z · LW(p) · GW(p)

But in reality, what happens when people try to aumann involves a different set of problems, such as status-signalling, especially the idea that updating toward someone else's probability is instinctively seen as giving them status.

comment by cousin_it · 2010-06-03T10:41:22.289Z · LW(p) · GW(p)

Thanks a lot for both links. I already understood common knowledge, but the paper is a very pleasing and thorough treatment of the topic.

comment by Alexandros · 2010-06-02T08:50:49.016Z · LW(p) · GW(p)

Observation: The May open thread, part 2, had very few posts in its last days, whereas this one has exploded within the first 24 hours of opening. I know I deliberately withheld content from it, since once a thread is superseded by a new one, few go back and look at the posts in the previous one. This predicts a slowing of content in the open threads as the month draws to a close, and a sudden burst at the start of the next month, a distortion that is an artifact of the way we organise discussion. Does anybody else follow the same rule for their open thread postings? Is there something that should be done to solve this artificial throttling of discussion?

Replies from: billswift, Kaj_Sotala
comment by billswift · 2010-06-02T16:16:12.757Z · LW(p) · GW(p)

Some sites have gone to an every Friday open thread; maybe we should do it weekly instead of monthly, too.

Replies from: Blueberry, RobinZ
comment by Blueberry · 2010-06-02T20:12:30.206Z · LW(p) · GW(p)

I would support that.

comment by RobinZ · 2010-06-02T20:50:17.850Z · LW(p) · GW(p)

From observations even of previous "Part 2"s, it would seem that there is enough content to support that frequency of open thread.

comment by Kaj_Sotala · 2010-06-02T21:07:41.276Z · LW(p) · GW(p)

I don't post in the open threads much, but if I run into a good rationality quote I tend to wait until the next rationality quotes thread is opened unless the current one is less than a week or so old.

comment by Briareos · 2010-06-05T15:44:40.859Z · LW(p) · GW(p)

I think my only other comment here has been "Hi." But, the webcomic SMBC has a treatment of the prisoner's dilemma today and I thought of you guys.

comment by Will_Newsome · 2010-06-02T07:28:20.729Z · LW(p) · GW(p)

So I've started drafting the very beginnings of a business plan for a Less Wrong (book) store-ish type thingy. If anybody else is already working on something like this and is advanced enough that I should not spend my time on this mini-project, please reply to this comment or PM me. However, I would rather not be inundated with ideas as to how to operate such a store yet: I may make a Less Wrong post in the future to gather ideas. Thanks!

comment by Kaj_Sotala · 2010-06-02T07:18:45.469Z · LW(p) · GW(p)

My theory of happiness.

In my experience, happy people tend to be more optimistic and more willing to take risks than sad people. This makes sense, because we tend to be more happy when things are generally going well for us: that is when we can afford to take risks. I speculate that the emotion of happiness has evolved for this very purpose, as a mechanism that regulates our risk aversion and makes us more willing to risk things when we have the resources to spare.

Incidentally, this would also explain why people falling in love tend to be intensely happy at first. In order to get and keep a mate, you need to be ready to take risks. Also, if happiness is correlated with resources, then being happy signals having lots of resources, increasing the chance that a prospective mate accepts you. [...]

I was previously talking with Will about the degree to which people's happiness might affect their tendency to lean towards negative or positive utilitarianism. We came to the conclusion that people who are naturally happy might favor positive utilitarianism, while naturally unhappy people might favor negative utilitarianism. If this theory of happiness is true, then that makes perfect sense: risk aversion and a desire to avoid pain corresponds to negative utilitarianism, and willingness to tolerate pain corresponds to positive utilitarianism.

Note that most Western humans have a far greater access to resources than our ancestors did, so we are likely all far more risk-averse than would be optimal given the environment.

Replies from: Houshalter, Alexandros, Will_Newsome
comment by Houshalter · 2010-06-02T13:40:50.985Z · LW(p) · GW(p)

How does this make sense exactly? A happy person, with more resources, would be better off not taking risks that could result in him losing what he has. On the other hand, a sad person with few resources would need to take more risks than the happy person to get the same results. If you told a rich person, jump off that cliff and I'll give you a million dollars, they probably wouldn't do it. On the other hand, if you told a poor person the same thing, they might do it as long as there was a chance they could survive.

My idea of why people were happy wasn't a static value of how many resources they had, but a comparative value. A rich person thrown into poverty would be very unhappy, but the poor person might be happy.

Replies from: pjeby, Kaj_Sotala
comment by pjeby · 2010-06-02T16:19:25.261Z · LW(p) · GW(p)

How does this make sense exactly? A happy person, with more resources, would be better off not taking risks that could result in him losing what he has. On the other hand, a sad person with few resources would need to take more risks than the happy person to get the same results.

Kaj's hypothesis is a bit off: what he's actually talking about is the explore/exploit tradeoff. An animal in a bad (but not-yet catastrophic) situation is better off exploiting available resources than scouting new ones, since in the EEA, any "bad" situation is likely to be temporary (winter, immediate presence of a predator, etc.) and it's better to ride out the situation.

OTOH, when resources are widely available, exploring is more likely to be fruitful and worthwhile.

The connection to happiness and risk-taking is more tenuous.

If you told a rich person, jump off that cliff and I'll give you a million dollars, they probably wouldn't do it. On the other hand, if you told a poor person the same thing, they might do it as long as there was a chance they could survive.

I'd be interested in seeing the results of that experiment. But "rich" and "poor" are even more loosely correlated with the variables in question - there are unhappy "rich" people and unhappy "poor" people, after all.

(In other words, this is all about internal, intuitive perceptions of resource availability, not rational assessments of actual resource availability.)

Replies from: RobinZ, Kaj_Sotala
comment by RobinZ · 2010-06-02T16:41:01.663Z · LW(p) · GW(p)

If I were to wager a guess, the people who would accept the deal are those who feel they are in a catastrophic situation.

Speaking of catastrophic situations, have you seen The Wages of Fear or any of the remakes? I've only seen Sorcerer, but it was quite good. It's a rather more realistic situation than jumping off a cliff, but the structure is the same: a group of desperate people driving cases of nitroglycerin-sweating dynamite across rough terrain to get enough money that they can escape.

Replies from: Houshalter
comment by Houshalter · 2010-06-02T22:11:20.373Z · LW(p) · GW(p)

It's a rather more realistic situation than jumping off a cliff

Or maybe not...

Driving in teams of two, they meet various hazards on their journey, including a dilapidated rope-suspension bridge swinging violently in a huge storm over a flood-swollen river, a massive tree blocking the road, and a number of desperate, dangerous bandits.

Replies from: RobinZ
comment by RobinZ · 2010-06-02T22:33:47.381Z · LW(p) · GW(p)

I'd buy "main road incorporating rope suspension bridges" over "millionaire hiring people to throw themselves off cliffs", but I see what you mean.

comment by Kaj_Sotala · 2010-06-02T21:11:40.491Z · LW(p) · GW(p)

Kaj's hypothesis is a bit off: what he's actually talking about is the explore/exploit tradeoff.

I believe you're right, now that I think about that.

comment by Kaj_Sotala · 2010-06-02T21:11:12.885Z · LW(p) · GW(p)

I was kind of thinking expected value. In principle, if you always go by expected value, in the long run you will end up maximizing your value. But this may not be the best move to make if you're low on resources, because with bad luck you'll run out of them and die even though you made the moves with the highest expected value.

However, your objection does make sense and Eby's reformulation of my theory is probably the superior one, now that I think about it.
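The point about expected value versus ruin can be seen in a toy simulation (all parameters are invented for illustration): an agent that always takes a positive-expected-value gamble still risks dying when its resources are low, while a resource-rich agent can take the same gamble almost freely.

```python
import random

def survival_rate(start, risky, rounds=100, trials=5000):
    """Fraction of trials in which an agent starting with `start`
    resources survives `rounds` decisions without hitting zero."""
    alive = 0
    for _ in range(trials):
        r = start
        for _ in range(rounds):
            if risky:
                # Positive-EV gamble: +1.5 or -1.0 at even odds (EV +0.25).
                r += 1.5 if random.random() < 0.5 else -1.0
            else:
                r += 0.1  # safe but lower-EV move
            if r <= 0:
                break  # ruin: the agent is dead and EV no longer matters
        else:
            alive += 1
    return alive / trials

random.seed(0)
print(survival_rate(2.0, risky=True))    # poor EV-maximizer: frequent ruin
print(survival_rate(50.0, risky=True))   # rich EV-maximizer: ruin is rare
print(survival_rate(2.0, risky=False))   # poor but cautious: always survives
```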

comment by Alexandros · 2010-06-02T08:46:35.627Z · LW(p) · GW(p)

Hi Kaj, I really liked the article. I had a relevant theory to explain the perceived difference of attitudes of north Europeans versus south Europeans. I guess you could call it a theory of unhappiness. Here goes:

I take as granted that mildly depressed people tend to make more accurate depictions of reality, and that north Europeans have a higher incidence of depression and also much better-functioning economies and democracies. Given a low-resource environment, one needs to plan further ahead and make more rational projections of the future. If being on the depressive side makes one more introspective and thoughtful, then it would be conducive to having better long-term plans. In a sense, happiness could be greed-inducing, in a greedy-algorithm sense. This more or less agrees with Kaj's theory. OTOH, non-happiness would encourage long-term planning and even more co-operative behaviour.

In the current environment, resources may not be scarce, but our world has become much more complex, with actions having much deeper consequences than in the ancestral environment (Nassim Nicholas Taleb makes this point in The Black Swan), and therefore needing better-thought-out courses of action. So northern Europeans have lucked out where their adaptation to climate has been useful for the current reality. If one sees corruption as a local-greedy behaviour as opposed to lawfulness as a global-cooperative behaviour, this would also explain why, going closer to the equator, you generally see an increase in corruption and also failures in democratic government. Taken further, it would imply that near-equator peoples are simply not well-adapted to democratic rule, which demands a certain limiting of short-term individual freedom for the longer-term common good, and a more distributed/localised form of governance would do much better. I think this (rambling) theory can more or less be pieced together with Kaj's, adding long-term planning as a second dimension.

Disclaimer: Before anyone accuses me of discrimination, I am in fact a south European (Greek), living in north Europe (the UK), and while this does not absolve me of all possibility of racism against my own, this theory has formed from my effort to explain the cultural differences I experience on a daily basis. Take it for what it's worth.

Replies from: Jayson_Virissimo, RomanDavis
comment by Jayson_Virissimo · 2010-06-04T05:46:42.002Z · LW(p) · GW(p)

Before anyone accuses me of discrimination...

If any given instance of discrimination increases the degree of correspondence between your map and the territory, then there is no need for apology. Are these sorts of disclaimers really necessary here?

comment by RomanDavis · 2010-06-03T20:12:28.929Z · LW(p) · GW(p)

Relevant to your interests:

http://www.youtube.com/watch?v=A3oIiH7BLmg&feature=channel

Replies from: Alexandros
comment by Alexandros · 2010-06-03T21:21:43.748Z · LW(p) · GW(p)

Greatly appreciated.

present-oriented vs. future oriented is a good way to put it and I suspect there is some more research I could find if I dig further behind that speech.

comment by Will_Newsome · 2010-06-02T07:21:44.176Z · LW(p) · GW(p)

And a very condensed note I wrote to myself (in brainstormish mode, without regard for feasibility or testability):

Emotions are filters on the brain, brain subsystems activated for different reasons in response to different cognitive stimuli. This would explain why those who are happy have a hard time remembering things that are saddening or vice versa (possibly causing cascades). It seems that flow is the opposite of suffering, as both are responses to difficult problems such as the ones the brain evolved to solve. Pain asymbolia may be the opposite of something like bipolar disorder or multiple personality disorder, and the difference may be strength of emotion or the cognitive subsystems similar to emotion. It is odd that people who suffer are more often negative utilitarians: this is probably because the suffering filter is affecting what sorts of memories of experience they have access to, and biasing their thoughts in that direction.

comment by Nisan · 2010-06-04T09:18:51.982Z · LW(p) · GW(p)

Searle has some weird beliefs about consciousness. Here is his description of a "Fading Qualia" thought experiment, where your neurons are replaced, one by one, with electronics:

... as the silicon is progressively implanted into your dwindling brain, you find that the area of your conscious experience is shrinking, but that this shows no effect on your external behavior. You find, to your total amazement, that you are indeed losing control of your external behavior. You find, for example, that when the doctors test your vision, you hear them say, ‘‘We are holding up a red object in front of you; please tell us what you see.’’ You want to cry out, ‘‘I can’t see anything. I’m going totally blind.’’ But you hear your voice saying in a way that is completely out of your control, ‘‘I see a red object in front of me.’’

(J.R. Searle, The rediscovery of the mind, 1992, p. 66, quoted by Nick Bostrom here.)

This nightmarish passage made me really understand why the more imaginative people who do not subscribe to a computational theory of mind are afraid of uploading.

My main criticism of this story would be: What does Searle think is the physical manifestation of those panicked, helpless thoughts?

Replies from: DanArmak, Vladimir_M
comment by DanArmak · 2010-06-04T18:26:22.217Z · LW(p) · GW(p)

I don't have Searle's book, and may be missing some relevant context. Does Searle believe normal humans with unmodified brains can consciously affect their external behavior?

If yes, then there's a simple solution to this fear: do the experiment he describes, and then gradually return the test subject to his original, all-biological condition. Ask him to describe his experience. If he reports (now that he's free of non-biological computing substrate) that he actually lost his sight and then regained it, then we'll know Searle is right, and we won't upload. Nothing for Searle to fear.

But if, as I gather, Searle believes that our "consciousness" only experiences things and is never a cause of external behavior, then this is subject to the same criticism as Searle's support of zombies.

Namely: if Searle is right, then the reason he is giving us this warning isn't because he is conscious. Maybe in fact his consciousness is screaming inside his head, knowing that his thesis is false, but is unable to stop him from publishing his books. Maybe his consciousness is already blind, and has been blind from birth due to a rare developmental accident, and it doesn't know what words he types in his books at all. Why should we listen to him, if his words about conscious experience are not caused by conscious experience?

Replies from: torekp
comment by torekp · 2010-06-06T00:19:09.286Z · LW(p) · GW(p)

Searle thinks that consciousness does cause behavior. In the scary story, the normal cause of behavior is supplanted, causing the outward appearance of normality. Thus, it's not that consciousness doesn't affect things, but just that its effects can be mimicked.

Nisan's criticism is devastating, and has the advantage of not requiring technological marvels to assess. I do like the elegance of your simple solution, though.

comment by Vladimir_M · 2010-06-06T00:32:45.988Z · LW(p) · GW(p)

David Chalmers discusses this particular passage by Searle extensively in his paper "Absent Qualia, Fading Qualia, Dancing Qualia":
http://consc.net/papers/qualia.html

He demonstrates very convincingly that Searle's view is incoherent except under the assumption of strong dualism, using an argument based on more or less the same basic idea as your objection.

comment by gwern · 2010-06-03T18:41:41.703Z · LW(p) · GW(p)

http://www.kk.org/quantifiedself/2010/05/eric-boyd-and-his-haptic-compa.php

'Here is Eric Boyd's talk about the device he built called North Paw - a haptic compass anklet that continuously vibrates in the direction of North. It's a project of Sensebridge, a group of hackers that are trying to "make the invisible visible".'

The technology itself is pretty interesting; see also http://www.wired.com/wired/archive/15.04/esp.html

comment by Alexandros · 2010-06-02T08:53:14.922Z · LW(p) · GW(p)

To the powers that be: Is there a way for the community to have some insight into the analytics of LW? That could range from periodic reports, to selective access, to open access. There may be a good reason why not, but I can't think of it. Beyond generic transparency brownie points, since we are a community interested in popularising the website, access to analytics may produce good, unforeseen insights. Also, authors would be able to see viewership of their articles, and related keyword searches, and so be better able to adapt their writing to the audience. For me, a downside of posting here instead of my own blog is the inability to access analytics. Obviously I still post here, but this is a downside that may not have to exist.

comment by roland · 2010-06-02T01:21:24.769Z · LW(p) · GW(p)

LW too focused on verbalizable rationality

This comment got me thinking about it. Of course, LW being a website, it can only deal with verbalizable information (rationality). So what are we missing? Skillsets that are not verbalizable and have to be learned in other, practical ways: interpersonal relationships being just one of many. I also think the emotional brain is part of it. There might be people here who are brilliant thinkers yet emotionally miserable because of their personal context or upbringing, and I think dealing with that would be important. I think a holistic approach is required. Eliezer had already suggested the idea of a rationality dojo. What do you think?

Replies from: Will_Newsome, RomanDavis, JoshB, realitygrill
comment by Will_Newsome · 2010-06-02T07:35:57.698Z · LW(p) · GW(p)

I've been talking to various people about the idea of a Rationality Foundation (working title) which might end up sponsoring or facilitating something like rationality dojos. Needless to say this idea is in its infancy.

Replies from: Morendil
comment by Morendil · 2010-06-02T14:29:33.972Z · LW(p) · GW(p)

The example of coding dojos for programmers might be relevant, and not just for the coincidence in metaphors.

comment by RomanDavis · 2010-06-02T02:04:27.151Z · LW(p) · GW(p)

I'm a draftsman and it always struck me how absolutely terrible the English language is for talking about ludicrously simple visual concepts precisely. Words like parallel and perpendicular should be one syllable long.

I wonder if there's a way to apply rationality/mathematical thinking beyond geometry and to the world of art.

comment by JoshB · 2010-06-03T08:53:56.897Z · LW(p) · GW(p)

According to wiki: "Tacit knowledge (as opposed to formal or explicit knowledge) is knowledge that is difficult to transfer to another person by means of writing it down or verbalizing it"

Thus: "Effective transfer of tacit knowledge generally requires extensive personal contact and trust. Another example of tacit knowledge is the ability to ride a bicycle."

Supports the dojo idea...perhaps in SecondLife once the graphics are better?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-06-03T09:45:27.847Z · LW(p) · GW(p)

Thus: "Effective transfer of tacit knowledge generally requires extensive personal contact and trust. Another example of tacit knowledge is the ability to ride a bicycle."

How much personal contact and trust does it take to learn to ride a bicycle?

Replies from: RobinZ, cousin_it, Morendil
comment by RobinZ · 2010-06-03T12:23:10.445Z · LW(p) · GW(p)

As someone who learned cycling as a near-adult, the main insight is that you turn the wheel in the direction in which the bike is falling to push it back vertical. Once I had been told that negative-feedback mechanism, the only delay was until I got frustrated enough with going slowly to say, "heck with this 'rolling down a slight slope' game, I'm just going to turn the pedals." Whereupon I was genuinely riding the bicycle.

...for about a minute, until I got the bright idea of trying to jump the curb. Did you know that rubbing the knee off a pair of jeans will leave a streak of blue on concrete?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-06-03T18:35:05.247Z · LW(p) · GW(p)

What was your total time frame in learning to ride? Was there a period before you were told about turning the wheel?

Replies from: RobinZ
comment by RobinZ · 2010-06-03T18:40:02.233Z · LW(p) · GW(p)

I estimate the total time between donning the helmet and hitting the sidewalk was less than an hour - but it was probably a decade ago, so I don't trust my recollections.

comment by cousin_it · 2010-06-03T09:48:52.190Z · LW(p) · GW(p)

Hahaha, great catch. Though maybe they meant personal contact with a bicycle!

comment by Morendil · 2010-06-03T14:01:47.657Z · LW(p) · GW(p)

Uh, lots? Who did you learn it from?

Replies from: SilasBarta, Richard_Kennaway
comment by SilasBarta · 2010-06-03T14:57:34.043Z · LW(p) · GW(p)

Per my upcoming "Explain Yourself!" article, I am skeptical about the concept of "tacit knowledge". For one thing, it puts up a sign that says, "Hey, don't bother trying to explain this in words", which leads to, "This is a black box; don't look inside", which leads to "It's okay not to know how this works".

Second, tacit knowledge often turns out to be verbalizable, questioning whether the term "tacit" is really calling out a valid cluster in thingspace[1]. For example, take the canonical example of learning to ride a bike. It's true that you can learn it hands-on, using the inscrutable, patient training of the master. But you can also learn it by being told the primary counterintuitive insights ("as long as you keep moving, you won't tip over"), and then a little practice on your own.

In that case, the verbal knowledge has substituted one-for-one with (much of) the tacit learning you would have gained on your own from practice. So how much of it was "really" tacit all along? How much of it are you just calling tacit because the master never reflected on what they were doing?

So for me, the appeal to "difficulty of verbalizing it" certainly has some truth to it, but I find it mainly functions to excuse oneself from critical introspection, and from opening important black boxes. I advise people to avoid using this concept if remotely possible; it tends to say more about you than the inherent inscrutability of the knowledge.

[1] To someone who sucks at programming, the ability to revise a recipe to produce more servings is "tacit knowledge".

Replies from: Morendil, Tyrrell_McAllister
comment by Morendil · 2010-06-03T15:29:17.050Z · LW(p) · GW(p)

As someone who has made much of the concept of tacit knowledge in the past, I'll have to say you have a point.

(I'm now considering the addendum: "made much of it because it served my interests to present some knowledge I claimed to have as being of that sort". I'm not necessarily endorsing that hypothesis, just acknowledging its plausibility.)

It still feels as if, once we toss that phrase out the window, we need something to take its place: words are not universally an effective method of instruction, practice clearly plays a vital part in learning (why?), and the hypothesis that a learner reconstructs knowledge rather than being the recipient of a "transfer" in a literal sense strikes me as facially plausible given the sum of my learning experiences.

Perhaps an adult can comprehend "as long as you keep moving, you won't tip over", but I have a strong intuition it wouldn't go over very well with kids, depending on age and dispositions. My parenting experience (anecdotal evidence as it may be) backs that up. You need to see what a kid is doing right or wrong to encourage the former and correct the latter, you need a hefty dose of patience as the kid's anxieties get in the way sometimes for a long while.

Learning to ride a bike is a canonical example because it is taught early on, there is hedonic value in learning it early on, but it is typically taught at an age when a kid rarely (or so my hunch says) has the learning-ability to understand advice such as "as long as you keep moving, you won't tip over". There is such a thing as learning to learn (and just how verbalizable is that skill?).

It's all too easy to overgeneralize from a sparse set of examples and obtain a simple, elegant, convincing, but false theory of learning. I hope your article doesn't fall into that trap. :)

Replies from: SilasBarta
comment by SilasBarta · 2010-06-03T18:08:58.219Z · LW(p) · GW(p)

It still feels as if, once we toss that phrase out the window, we need something to take its place: words are not universally an effective method of instruction, practice clearly plays a vital part in learning (why?), and the hypothesis that a learner reconstructs knowledge rather than being the recipient of a "transfer" in a literal sense strikes me as facially plausible given the sum of my learning experiences.

I don't disagree, but I don't see how it contradicts my position either. The evidence you give against words being effective is that, basically, they don't fully constrain what the other person is being told to do, so they can always mess up in unpredictable ways. That's true, but it just shows that you need to understand the listener's epistemic state to know which insights they lack that would allow them to bridge the gap.

People do get this wrong, and end up giving "let them eat cake" advice -- advice that would only be useful if the problem were already solved. But at the same time, a good understanding of where they are can lead to remarkably informative advice. (I've noticed Roko and HughRistik are excellent at this when it comes to human sociality, while some are stuck in "let them eat cake" land.)

Perhaps an adult can comprehend "as long as you keep moving, you won't tip over", but I have a strong intuition it wouldn't go over very well with kids, depending on age and dispositions.

Well, in my case, once it clicked for me, my thought was, "Oh, so if you just keep moving, you won't tip over, it's only when you stop or slow down that you tip -- why didn't he just tell me that?"

It's all too easy to overgeneralize from a sparse set of examples and obtain a simple, elegant, convincing, but false theory of learning. I hope your article doesn't fall into that trap. :)

Well, if it were a sparse set I wouldn't be so confident. I have a frustratingly long history of people telling me something can't be explained or is really hard to explain, followed by me explaining it to newbies with relative ease. And of cases where someone appeals to their inarticulable personal experience for justification, when really it was an articulable hidden assumption they could have found with a little effort.

Anyone is welcome to PM me for an advance draft of the article if they're interested in giving feedback.

Replies from: NancyLebovitz, Richard_Kennaway, mattnewport
comment by NancyLebovitz · 2010-06-03T23:01:26.788Z · LW(p) · GW(p)

I'm in general agreement, but

And of cases where someone appeals to their inarticulable personal experience for justification, when really it was an articulable hidden assumption they could have found with a little effort.

leaves me wondering if you underestimate how much effort it takes to notice and express how to do things which are usually non-verbal.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-03T23:27:32.273Z · LW(p) · GW(p)

I don't understand. The part you quoted isn't about expressing how to do non-verbal things; it's about people who say, "when you get to be my age, you'll agree, [and no I can't explain what experiences you have as you approach my age that will cause you to agree because that would require a claim regarding how to interpret the experience which you have a chance of refuting]"

What does that have to do with the effort need to express how to do non-verbal things?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-04T01:11:05.508Z · LW(p) · GW(p)

Excuse me-- I wasn't reading carefully enough to notice that you'd shifted from claims that it was too hard to explain non-verbal skills to claims that it was too hard to explain the lessons of experience.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-04T01:55:53.556Z · LW(p) · GW(p)

Okay. Well, then, assuming your remark was a reply to a different part of my comment, my answer is that yes, it may be hard, but for most people, I'm not convinced they even tried.

comment by mattnewport · 2010-06-03T18:22:54.965Z · LW(p) · GW(p)

Am I interpreting you correctly that you are not denying that some skills can only be learned by practicing the skill (rather than by reading about or observing the skill) but are saying that verbal or written instruction is just as effective as an aid to practice as demonstration if done well?

I'm still a bit skeptical about this claim. When I was learning to snowboard for example it was clear that some instructors were better able to verbalize certain key information (keep your weight on your front foot, turn your body first and let the board follow rather than trying to turn the board, etc.) but I don't think the verbal instructions would have been nearly as effective if they were not accompanied by physical demonstrations.

It's possible that a sufficiently good instructor could communicate just as effectively through purely verbal instruction but I'm not sure such an instructor exists. The fact that this is a rare skill also seems relevant even if it is possible - there are many more instructors who can be effective if they are allowed to combine verbal instruction with physical demonstrations.

Replies from: SilasBarta, Blueberry
comment by SilasBarta · 2010-06-03T21:53:25.478Z · LW(p) · GW(p)

Good points, but keep in mind snowboarding instructors aren't optimizing the same thing that a rationalist (in their capacity as a rationalist) is optimizing. If you just want to make money, quickly, and churn out good snowboarders, then use the best tools available to you -- you have no reason to convert the instruction into words where you don't have to.

But if you're approaching this as a rationalist, who wants to open the black box and understand why certain things work, then it is a tremendously useful exercise to try to verbalize it, and identify the most important things people need to know -- knowledge that can allow them to leapfrog a few steps in learning, even and especially if they can't reach the Holy Grail of full transmission of the understanding.

And I'd say (despite the first paragraph in this comment) that it's a good thing to do anyway. I suspect that people's inability to explain things stems in large part from a lack of trying -- specifically, a lack of trying to understand what mental processes are going on inside them that allow a skill to work like it does. They fail to imagine what it is like not to have this skill, and assume certain things are easy or obvious which really aren't.

To more directly answer your question, yes, I think verbal instruction, if it understands the epistemic state of the student, can replace a lot of what normally takes practice to learn. There are things you can say that get someone in just the right mindset to bypass a huge number of errors that are normally learned hands-on.

My main point, though, is that people severely overestimate the extent of their knowledge which can't be articulated, because the incentives for such a self-assessment are very high. Most people would do well to avoid appeals to tacit knowledge, and instead introspect on their knowledge so as to gain a deeper understanding of how it works, labeling knowledge as "tacit" only as a last resort.

comment by Blueberry · 2010-06-03T18:51:23.254Z · LW(p) · GW(p)

It's possible that a sufficiently good instructor could communicate just as effectively through purely verbal instruction but I'm not sure such an instructor exists.

I would suspect this has more to do with the skill of the student in translating verbal descriptions into motions. You can perfectly understand a series of motions to be executed under various conditions, without having the motor skill to assess the conditions and execute them perfectly in real-time.

comment by Tyrrell_McAllister · 2010-06-03T15:10:37.729Z · LW(p) · GW(p)

For example, take that canonical example of learning to ride a bike. It's true that you can learn it hands-on, using the inscrutable, patient training of the master. But you can also learn it by being told the primary counterintuitive insights ("as long as you keep moving, you won't tip over"), and then a little practice on your own.

In that case, the verbal knowledge has substituted one-for-one with tacit learning you would have gained on your own from practice.

I'm looking forward to your article, and I think that you're right to emphasize the vast gap between "unverbalizable" and "I don't know at the moment how to verbalize it".

But, to really pass the "bicycle test", wouldn't you have to be able to explain verbally how to ride a bike so well that someone could get right on the bike and ride perfectly on the first try? That is, wouldn't you have to be able to eliminate even that "little practice on your own"?

Or is there some part of being able to ride a bike that you don't count as knowledge, and which forms the ineliminable core that needs to be practiced?

Replies from: SilasBarta
comment by SilasBarta · 2010-06-03T15:32:50.692Z · LW(p) · GW(p)

But, to really pass the "bicycle test", wouldn't you have to be able to explain verbally how to ride a bike so well that someone could get right on the bike and ride perfectly on the first try? That is, wouldn't you have to be able to eliminate even that "little practice on your own"?

Depends on what the "bicycle test" is testing. For me, the fact that something is staked out as a canonical, grounding example of tacit knowledge, and then is shown to be largely verbalizable, blows a big hole in the concept. It shows that "hey, this part I can't explain" was groundless in several subcases.

I do agree that some knowledge probably deserves to be called tacit. But given the apparent massive relativity of tacitness, and the above example, it seems that these cases are so rare, you're best off working from the assumption that nothing is tacit, than from looking for cases that you can plausibly claim are tacit.

It's like any other case where one possibility should be considered last. If you do a random test on General Relativity and find it to be way off, you should first work from the assumption that you, rather than GR, made a mistake somewhere. Likewise, if your instinct is to label some of your knowledge as tacit, your first assumption should be, "there's some way I can open up this black box; what am I missing?". Yes, these beliefs could be wrong -- but you need a lot more evidence before rejecting them should even be on the radar.

(And to be clear, I don't claim my thesis about tacitness to deserve the same odds as GR!)

Replies from: Morendil
comment by Morendil · 2010-06-03T15:44:08.061Z · LW(p) · GW(p)

something is staked out as a canonical, grounding example of tacit knowledge, and then is shown to be largely verbalizable

Just to be clear, I don't think it has been shown in the case of bike-riding that the knowledge can be transferred verbally. You can give someone verbal instruction that will help them improve faster at bike-riding, that isn't at issue. It's much less clear that telling someone the actual control algorithm you use when you ride a bike is sufficient to transform them from novice into proficient bike rider.

You can program a robot to ride a bike and in that sense the knowledge is verbalizable, but looking at the source code would not necessarily be an effective method of learning how to do it.
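To illustrate that distinction with a concrete sketch: the "steer into the fall" rule is simple enough to write down as code, yet reading the listing is clearly not the same as being able to ride. The following is a toy linearized balance loop, not any real bike model; all constants and gains are made-up illustrative assumptions.

```python
# Toy "steer into the fall" balance controller (hypothetical linearized
# model: sin(phi) ~ phi, steering modeled as a direct corrective torque).

def simulate(kp, kd, phi0=0.2, dt=0.01, steps=1000, g_over_h=9.81):
    """Return the final lean angle (rad) after Euler-integrating the loop."""
    phi, phi_dot = phi0, 0.0              # lean angle and lean rate
    for _ in range(steps):
        steer = kp * phi + kd * phi_dot   # steer toward the direction of fall
        phi_ddot = g_over_h * phi - steer # gravity tips; steering corrects
        phi_dot += phi_ddot * dt
        phi += phi_dot * dt
    return phi

upright = simulate(kp=20.0, kd=5.0)  # with feedback, the lean decays toward 0
falling = simulate(kp=0.0, kd=0.0)   # with no steering, the lean blows up
```

The point survives the sketch: the control law fits in three lines, but knowing those three lines gives a novice none of the real-time sensing and motor timing the skill actually consists of.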

Replies from: SilasBarta
comment by SilasBarta · 2010-06-03T18:17:26.287Z · LW(p) · GW(p)

I think being able to verbally transmit the knowledge that solves most of the problem for them is proof that at least some of the skill can be transferred verbally. And of course it doesn't help to tell someone the detailed control algorithm to ride a bike, and I wouldn't recommend doing so as an explanation -- that's not the kind of information they need!

One day, I think it will be possible to teach someone to ride a bike before they ever use one, or even carry out similar actions, though you might need a neural interface rather than spoken words to do so. The first step in such a quest is to abandon appeals to tacit knowledge, even if there are cases where it really does exist.

comment by Richard_Kennaway · 2010-06-03T14:08:56.976Z · LW(p) · GW(p)

None, and nobody. I got a bicycle and tried to ride it until I could ride it. It took about three weeks from never having sat on a bicycle to confidently mixing with heavy traffic. (At the age of 22, btw. I never had a bicycle as a child.)

The first line that JoshB quoted from Wikipedia is fine -- there is this class of knowledge -- but I don't agree with the second at all. Some things you can learn just by having a go untutored. Where an instructor is needed, e.g. in martial arts, the only trust required is enough confidence in the competence of the teacher to do as he says before you know why.

Replies from: Morendil
comment by Morendil · 2010-06-03T14:17:10.982Z · LW(p) · GW(p)

How typical is that bike-learning history in your estimation?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-06-03T14:32:49.291Z · LW(p) · GW(p)

I guess that more people learn to ride a bike in childhood than as adults, but I believe that the usual method at any age is to get on it and ride it. There really isn't much you can do to teach someone how to do it.

Replies from: Morendil, Blueberry
comment by Morendil · 2010-06-03T14:55:34.140Z · LW(p) · GW(p)

OK, so I suppose it doesn't take much personal contact and trust to acquire a skill of the bike-riding type, particularly if you're an autonomous enough learner and the skill is relatively basic.

The original assertion, though, was about personal contact and trust being required to transfer a skill of the bike-riding type, and perhaps one reason to make this assertion is that the usual method involves a parent dispensing encouragement and various other forms of help, vis-a-vis a child. (I learnt it from my grandfather, and have a lot of positive affect to accompany the memories.)

Providing an environment in which learning, an intrinsically risky activity, becomes safe and pleasurable - I know from experience that this takes rapport and trust; it doesn't just happen. Such an environment is perhaps not a prerequisite to acquiring a non-verbalized skill, but it does help a lot; as such, it makes learning possible for people who would otherwise give up before they made it to the first plateau.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-06-03T15:19:52.437Z · LW(p) · GW(p)

learning, an intrinsically risky activity

We must have had very different experiences of many things. Tell me more about learning being risky. I have been learning Japanese drumming since the beginning of last year (in a class), and stochastic calculus in the last few months (from books), and "risky" is not a word it would occur to me to apply to either process. The only risk I can see in learning to ride a bicycle is the risk of crashing.

Replies from: Morendil
comment by Morendil · 2010-06-03T15:35:20.507Z · LW(p) · GW(p)

One major risk involved in learning is to your self-esteem: feeling ridiculous when you make a mistake, feeling frustrated when you can't get an exercise right for hours of trying, and so on.

As you note, in physical aptitudes there is a non-trivial risk of injury.

There is the risk, too, of wasting a lot of time on something you'll turn out not to be good at.

Perhaps these things seem "safe" to you, but that's what makes you a learner, in contrast with large numbers of people who can't be bothered to learn anything new once they're out of school and in a job. They'd rather risk their skills becoming obsolete and ending up unemployable than risk learning: that's how scary learning is to most people.

Replies from: Richard_Kennaway, JoshuaZ
comment by Richard_Kennaway · 2010-06-03T15:55:48.436Z · LW(p) · GW(p)

One major risk involved in learning is to your self-esteem: feeling ridiculous when you make a mistake, feeling frustrated when you can't get an exercise right for hours of trying, and so on.

I would say that the problem then is with the individual, not with learning. Those feelings rest on false beliefs that no-one is born with. Those who acquire them learn them from unfortunate experiences. Others chance to have more fortunate experiences and learn different attitudes. And some manage in adulthood to expose their false beliefs to the light of day, clearly perceive their falsity, and stop believing them.

Thus it is said, "The things that we learn prevent us from learning."

comment by JoshuaZ · 2010-06-03T15:39:02.337Z · LW(p) · GW(p)

They'd rather risk their skills becoming obsolete and ending up unemployable than risk learning: that's how scary learning is to most people.

I doubt people are consciously making this decision, but rather they aren't calculating the potential rewards as opposed to potential risks well. A risk that is in the far future is often taken less seriously than a small risk now.

Replies from: Morendil
comment by Morendil · 2010-06-03T15:54:51.262Z · LW(p) · GW(p)

People who buy insurance are demonstrating ability to trade off small risks now against bigger risks in the future, but often the same people invest less in keeping their professional skills current than they do in insurance.

Personal experience tells me that I had (and still have) a bunch of Ugh fields related to learning, which suggest that there are actual negative consequences of engaging in the activity (per the theory of Ugh fields).

My hunch is that the perceived risks of learning accounts in a significant part for why people don't invest in learning, compared to the low perceived reward of learning. I could well be wrong. How could we go about testing this hypothesis?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-03T15:57:59.564Z · LW(p) · GW(p)

I'm not sure. It may require a more precise statement to make it testable.

comment by Blueberry · 2010-06-04T08:24:42.613Z · LW(p) · GW(p)

I believe that the usual method at any age is to get on it and ride it. There really isn't much you can do to teach someone how to do it.

Are you serious? I could never have learned to ride a bike without my parents spending hours and hours trying to teach me. Did you also learn to swim by jumping into water and trying not to drown? I'd be very surprised if most people learned to ride a bike without instruction, but I may be unusual.

Replies from: Morendil, Richard_Kennaway, RomanDavis
comment by Morendil · 2010-06-04T08:32:56.719Z · LW(p) · GW(p)

Did you also learn to swim by jumping into water and trying not to drown?

There was actually at some point a theory that "babies are born knowing how to swim", and on one occasion at around age three, at a holiday resort the family was staying at, I was thrown into a swimming pool by a caretaker who subscribed to this theory.

It seems that after that episode nobody could get me to feel comfortable enough in water to get any good at swimming (in spite of summer vacations by the seaside for ten years straight, under the care of my grandad who taught me how to ride a bike). I only learned the basics of swimming, mostly by myself with verbal instruction from a few others, around age 30.

Replies from: Blueberry
comment by Blueberry · 2010-06-04T08:40:35.729Z · LW(p) · GW(p)

I'm so sorry. That is truly horrific abuse.

comment by Richard_Kennaway · 2010-06-04T11:09:14.196Z · LW(p) · GW(p)

I could never have learned to ride a bike without my parents spending hours and hours trying to teach me.

Maybe there's a cultural difference, but I don't know what country you're in (or were in). I've never heard of anyone learning to ride a bike except by riding it. But clearly we need some evidence. I don't care for the bodge of using karma to conduct a poll, so I'll just ask anyone reading this who can ride a bicycle to post a reply to this comment saying how they learned, and in what country. "Taught" should mean active instruction, something more than just someone being around to provide comfort for scrapes and to keep children out of traffic until they're ready.

Results so far:

RichardKennaway: self-taught as adult, late 70's, UK

Morendil: taught in childhood by grandfather, UK?

Blueberry: taught in childhood by parents, where?

So that's two to one against my current view, but those replies may be biased: other self-taught people will not have had as strong a reason to post agreement.

Replies from: SilasBarta, Blueberry, Jowibou, NancyLebovitz, Morendil
comment by SilasBarta · 2010-06-04T15:49:31.952Z · LW(p) · GW(p)

I don't know how much this will support your position, but: mid 1980s, Texas, USA, by my father.

And as I said above, it did take a while to learn, but afterward, my reaction was, "Wait -- all I have to do is keep in motion and I won't fall over. Why didn't he just say that all along?" That began my long history of encountering people who overestimate the difficulty of, or fail to simplify, the process of teaching or justifying something.

ETA: Also, I haven't ridden a bike in over 15 years, so that might be a good test of whether my "just keep in motion" heuristic allows me to preserve the knowledge.

Replies from: mattnewport
comment by mattnewport · 2010-06-04T16:22:22.165Z · LW(p) · GW(p)

Also, I haven't ridden a bike in over 15 years, so that might be a good test of whether my "just keep in motion" heuristic allows me to preserve the knowledge.

The fact that 'like riding a bike' is a saying used to describe skills that you never forget suggests that it wouldn't be a very good test.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-04T16:28:24.418Z · LW(p) · GW(p)

Yeah, I wasn't so sure it would be a good test. Still, I'm not sure how well the "you don't forget how to ride a bike" hypothesis is tested, nor how much of its unforgettability is due to the simplicity of the key insights.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-05T11:04:17.927Z · LW(p) · GW(p)

Most people don't store the insights of bike riding verbally-- the insights are stored kinesthetically. It seems to be much easier to forget math.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-05T23:35:56.867Z · LW(p) · GW(p)

I don't disagree, but there's typically a barrier, increasing with time since last use, that must be overcome to re-access that kinesthetic knowledge. And I think verbal heuristics like the one I gave can greatly shorten the time you need to complete this process.

comment by Blueberry · 2010-06-04T15:38:58.134Z · LW(p) · GW(p)

Early 90s, US. I also had training wheels for a while first, which didn't actually teach me anything. I didn't learn until they were removed. And I also had someone running along for reassurance.

comment by Jowibou · 2010-06-04T13:23:05.936Z · LW(p) · GW(p)

Canada, mid 1960s. Brother tried to teach me but I mostly ignored him. Used bike with training wheels, which I raised higher and higher and removed completely after a couple of weeks.

comment by NancyLebovitz · 2010-06-04T11:42:04.610Z · LW(p) · GW(p)

United States, early 60s (I think it's worth mentioning when because cultures change), just given a bike with training wheels, and I figured it out myself.

comment by Morendil · 2010-06-04T11:17:45.899Z · LW(p) · GW(p)

France, but close enough. ;)

There's some variation in method of instruction. My grandpa had fitted my bike with a long handle in the back and used that to help me balance after taking the training wheels off. With one of my kids I tried the method of gradually lifting the training wheels to make the balance more precarious over time. One of the other two just "got it", as I remember, in one or two sessions. Otherwise it was the standard riding down a slight slope and advising them "keep your feet on the pedals", and running alongside for reassurance.

comment by RomanDavis · 2010-06-04T09:08:04.757Z · LW(p) · GW(p)

The truth is, that's how most skilled artists learned to draw. In the past there was a more formalized teaching role, often starting at age eight, but today you can go through school and even get through art school having been given so little knowledge that, if you know how to draw a human from imagination, you can confidently say you are an autodidact.

It's not that art (particularly representational figure drawing, from imagination or not) is inherently unteachable, but a lot of people tend to think so.

This is not the only skill like this, although I think it's one that's perhaps the least understood and where misinformation is the most tolerated.

comment by realitygrill · 2010-06-02T02:53:30.814Z · LW(p) · GW(p)

I think it would be great to systematically explore and develop useful skillsets, perhaps in a modular fashion. We do have sequences. I would join a rationality dojo immediately.

What do you mean practical ways? I understand the difficulty of transferring kinesthetic or social understanding, but how can we overcome that in nonverbalized fashion?

Replies from: roland
comment by roland · 2010-06-02T03:01:09.711Z · LW(p) · GW(p)

What do you mean practical ways? I understand the difficulty of transferring kinesthetic or social understanding, but how can we overcome that in nonverbalized fashion?

Some things have to be shown; you sometimes have to take part in an activity to "get" it, learn by trial and error, get feedback pointing out mistakes that you are unaware of, etc.

Replies from: CannibalSmith
comment by CannibalSmith · 2010-06-02T13:10:16.407Z · LW(p) · GW(p)

Some things

For example?

Replies from: RomanDavis
comment by RomanDavis · 2010-06-02T17:04:23.529Z · LW(p) · GW(p)

Do you think you could describe this image to an arbitrarily talented artist and end up with an image that even looked like it was based on it?

http://smithandgosling.files.wordpress.com/2009/05/the-reader.jpg

It's not so much, "Such insolence, our ideas are so awesome they can not be broken down by mere reductionism" as "Wow, words are really bad at describing things that are very different from what most of the people speaking the language do."

I think you could make an elaborate set of equations on a Cartesian graph and come up with a drawing that looked like it, and say "fill in RGB value #zzzzzz at coordinates x,y" or whatever, but that seems like a cop-out, since that doesn't tell you anything about how Fragonard did it.

Replies from: bogdanb, Risto_Saarelma
comment by bogdanb · 2010-06-03T11:10:10.700Z · LW(p) · GW(p)

This reminds me of an exercise we did in school. (I don’t remember either when or for what subject.)

Everyone was to make a relatively simple image, composed of lines, circles, triangles and such. Then, without showing one’s image to the others, each of us was to describe it, and the others were to draw according to the description. The “target” was to obtain reproductions as close as possible to the original image.

It was a very interesting exercise for all involved: it’s surprisingly hard to describe precisely, even given quite simple drawings, in such a way that everyone interprets the description the way you intended. I vaguely remember I did quite well compared with my classmates in the describing part, and still had several “transcriptions” that didn’t look anywhere close to what I was saying.

I think the lesson was about the importance of clear specifications, but then again it might have been just something like English (foreign language for me) vocabulary training.


An example:

Draw a square, with horizontal & vertical sides. Copy the square twice, once above and once to the right, so that the two new squares share their bottom and, respectively, left sides with the original square. Inside the rightmost square, touching its bottom-right corner, draw another square of half the original’s size. (Thus, the small square shares its bottom-right corner with its host, and its top-left corner is on the center of its host.) Inside the topmost square, draw another half-size square, so that it shares both diagonals with its host square. Above the same topmost square, draw an isosceles right-angled triangle; its sides around the right angle are the same length as the large squares’; its hypotenuse is horizontal, just touching the top side of the topmost square; its right angle points upwards, and is horizontally aligned with the center of the original square. (Thus, the original square, its copy above, and the triangle above that, should form an upwards-pointing arrow.) Then make a copy of everything you have, to the right of the image, mirrored horizontally. The copy should be vertically aligned with the original, and share its left-most line with the right-most line of the original.

Try to follow the instructions above, and then compare your drawing with the non-numbered part of this image.

The exercise we did in school was a bit harder: the images had fewer parts (a rectangle, an ellipse, a triangle, and a couple lines, IIRC), but with more complex relationships for alignment, sizes and angles.

Replies from: Larks, RobinZ
comment by Larks · 2010-06-06T12:18:14.024Z · LW(p) · GW(p)

My mum had to do this task for her work, except with building blocks, and for the learning-impaired. Instructions like 'place the block flat on the ground, like a bar of soap' were useful.

One nit-pick: when you say squares half the size, you presumably mean half the side length, which is one quarter of the area.

comment by RobinZ · 2010-06-03T16:16:18.747Z · LW(p) · GW(p)

Color and line weight have not been specified, I note. Nor position relative to the canvas.

comment by Risto_Saarelma · 2010-06-03T11:39:45.574Z · LW(p) · GW(p)

You could probably get pretty good results without messing with complex equations, by first describing the full picture, then describing what's in four quadrants made by drawing vertical and horizontal lines that split the image exactly in half, then describing quadrants of these quadrants, split in a similar way and so on. The artist could use their skills to draw the details without an insanely complex encoding scheme, and the grid discipline would help fix the large-scale geometry of the image.

Edit: A 3x3 grid might work better in practice, it's more natural to work with a center region than to put the split point right in the middle of the image, which most probably contains something interesting. On the other hand, maybe the lines breaking up the recognizable shapes in the picture (already described in casual terms for the above-level description) would help bring out their geometrical properties better.

Edit 2: Michael Baxandall's book Patterns of Intention has some great stuff on using language to describe images.
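(The recursive quadrant split can be sketched in a few lines. This is a minimal illustration of the scheme described above, not a full describing protocol; the function name and the (path, bounding box) representation are my own.)

```python
def describe_regions(x0, y0, x1, y1, depth, path=()):
    """Yield (path, bounding box) for every region of a recursive 2x2
    split of the image, coarsest first. A human describer would attach
    a verbal description of the contents to each box."""
    yield path, (x0, y0, x1, y1)
    if depth == 0:
        return
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    quadrants = [
        ("top-left", (x0, y0, mx, my)),
        ("top-right", (mx, y0, x1, my)),
        ("bottom-left", (x0, my, mx, y1)),
        ("bottom-right", (mx, my, x1, y1)),
    ]
    for name, box in quadrants:
        yield from describe_regions(*box, depth - 1, path + (name,))

# Two levels of splitting give 1 + 4 + 16 = 21 regions to describe.
regions = list(describe_regions(0, 0, 1, 1, 2))
print(len(regions))  # 21
```

The describer and the artist only need to agree on the split rule; the large-scale geometry is then fixed by the boxes, and casual language can fill in the details inside each one.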

Replies from: RomanDavis
comment by RomanDavis · 2010-06-04T09:36:15.286Z · LW(p) · GW(p)

Drawing a photograph with the aid of a grid is a common technique for making copying easier, although it's also sometimes used as a teaching tool for early artists.

I'm not in love with this explanation (Loomis does much better) but this should give you the essential idea:

http://drawsketch.about.com/od/drawinglessonsandtips/ss/griddrawing.htm

As a teaching tool for people who can't draw, I haven't seen it be effective; it's so easy for novice artists to screw up even when they have the image right in front of them. But it's awesome if you've got a deadline and don't want to spend all your time checking and rechecking your proportions.

There's a more effective method which uses a ruler or compass and is often used to copy Bargue drawings. Use precise measurements around a line at the meridian and essentially connect the dots. For the curious:

http://conceptart.org/forums/showthread.php?t=121170

This might work long distance: "Okay, draw the next dot 9/32nds of an inch a way at 12 degrees down to the right."

This still seems like a bit of a cop out, though. Yes, there are ways to assemble copies of images using a grid, but it doesn't help us figure out how such freehand images were made in the first place. We're not even taking a crack at the little black box.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-04T15:16:54.541Z · LW(p) · GW(p)

Drawing on the Right Side of the Brain seems to be the classic for teaching people how to draw. It's a bunch of methods for seeing the details of what you're seeing (copying a drawing held upside down, drawing shadows rather than objects) so that you draw what you see rather than a mental simplified hieroglyphic of what you see.

comment by steven0461 · 2010-06-01T23:25:25.264Z · LW(p) · GW(p)

New papers from Nick Bostrom's site.

Replies from: timtyler, gwern
comment by timtyler · 2010-06-02T13:08:44.237Z · LW(p) · GW(p)

2nd one "ANTHROPIC SHADOW: OBSERVATION SELECTION EFFECTS AND HUMAN EXTINCTION RISKS" - is good reading.

comment by gwern · 2010-06-03T18:40:40.920Z · LW(p) · GW(p)

Speaking of the Simulation argument, I just stumbled across (but haven't read) http://www.arsdisputandi.org/publish/articles/000338/index.html / http://www.arsdisputandi.org/publish/articles/000338/article.pdf :

"Theological Implications of the Simulation Argument"

"Nick Bostrom’s Simulation Argument (SA) has many intriguing theological implications. We work out some of them here. We show how the SA can be used to develop novel versions of the Cosmological and Design Arguments. We then develop some of the affinities between Bostrom’s naturalistic theogony and more traditional theological topics. We look at the resurrection of the body and at theodicy. We conclude with some reflections on the relations between the SA and Neoplatonism (friendly) and between the SA and theism (less friendly)."

comment by James_K · 2010-06-01T20:46:54.462Z · LW(p) · GW(p)

This post is about the distinctions between Traditional and Bayesian Rationality, specifically the difference between refusing to hold a position on an idea until a burden of proof is met versus Bayesian updating.

Good quality government policy is an important issue to me (it's my Something to Protect, or the closest I have to one), and I tend to approach rationality from that perspective. This gives me a different perspective from many of my fellow aspiring rationalists here at Less Wrong.

There are two major epistemological challenges in policy advice, in addition to the normal difficulties we all have to deal with:

1) Policy questions fall almost entirely within the social sciences. That means the quality of evidence is much lower than it is in the physical sciences. Uncontrolled observations, analysed with statistical techniques, are generally the strongest possible evidence, and sometimes you have nothing but theory or professional instinct to work with.

2) You have a very limited time in which to find an answer. Cabinet Ministers often want an answer within weeks; a timeframe measured in months is luxurious. And often a policy proposal is too sensitive to discuss with the general public, or sometimes with anyone outside your team.

By the standards of Traditional Rationality, policy advice is often made without meeting a burden of proof. Best guesses and theoretical considerations are too weak to reach conclusions. A proper practitioner of Traditional Rationality wouldn't be able to make any kind of recommendation; one could identify some promising initial hypotheses, but that's it.

But just because you didn't have time to come up with a good answer doesn't mean that Ministers don't expect an answer. And a practitioner of Bayesian Rationality always has a best guess as to what is true; even if the evidence base is non-existent you can fall back on your prior. You don't want to be overconfident in stating your position: assumptions must be outlined and sensitivities should be explored. But you still need to give an answer, and that's what attracts me to Bayesian approaches: you don't have to be officially agnostic until being presented with a level of evidence that is unrealistically high for policy work.

It seems to me that if you have very good quality evidence then Bayesian and Traditional Rationality are very similar. Good evidence either proves or disproves a proposition for a Traditional Rationalist, while for a Bayesian Rationalist it shifts their probability estimate and increases their confidence a lot. The biggest difference seems to me to be that Bayesian Rationality is able to make use of weak evidence in a way Traditional Rationality can't.
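(A toy illustration of that last point, using Bayes' rule in odds form; the numbers are made up for the example, not anything from policy work.)

```python
def update(prior, likelihood_ratio):
    """Bayes' rule in odds form:
    posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# Weak evidence (likelihood ratio 2) nudges a 50% prior to ~67%;
# strong evidence (likelihood ratio 100) pushes it to ~99%.
print(round(update(0.5, 2), 3))    # 0.667
print(round(update(0.5, 100), 3))  # 0.99
```

The weak evidence doesn't settle anything, but it still moves the estimate, which is exactly what a Traditional Rationalist's burden-of-proof threshold throws away.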

Replies from: xamdam, realitygrill, billswift
comment by xamdam · 2010-06-02T16:15:26.856Z · LW(p) · GW(p)

Reminded me of one of my favorite movie dialogues, from Sunshine. The context was actually physics, but the complexity of the situation and the time frame put the characters in the same position as you with the Cabinet Ministers.

Capa: It's the problem right there. Between the boosters and the gravity of the sun the velocity of the payload will get so great that space and time will become smeared together and everything will distort. Everything will be unquantifiable.

Kaneda: You have to come down on one side or the other. I need a decision.

Capa: It's not a decision, it's a guess. It's like flipping a coin and asking me to decide whether it will be heads or tails.

Kaneda: And?

Capa: Heads... We harvested all Earth's resources to make this payload. This is humanity's last chance... our last, best chance... Searle's argument is sound. Two last chances are better than one.

http://www.imdb.com/title/tt0448134/quotes?qt0386955

Replies from: James_K
comment by James_K · 2010-06-02T22:06:19.516Z · LW(p) · GW(p)

Yes, that's a good example. There are times when a decision has to be made, and saying you don't know isn't very useful. Even if you have very little to go on, you still have to decide one way or the other.

comment by realitygrill · 2010-06-02T03:05:53.992Z · LW(p) · GW(p)

I am not at all like you. I don't have much interest in policy at all, and I do tend to refuse to hold a position, being very mindful of how easy it is to be completely off course (Probably from reading too much history of science. It's "the graveyard of dead ideas", after all.). I'm likely to tell the Cabinet Ministers to get off my back or they'll have absolutely useless recommendations.

However, I think you have hit upon the point that makes Bayesianism attractive to me: it's rationality you can use to act in real-time, under uncertainty, in normal life. Traditional Rationality is slow.

Replies from: James_K
comment by James_K · 2010-06-02T03:38:38.446Z · LW(p) · GW(p)

I see your point; the trouble is that a recommendation that comes too late is often absolutely useless. A lot of policy is time-dependent: if you don't act within a certain time frame then you might as well do nothing. While sometimes doing nothing is the right thing to do, a late recommendation is often no better than no recommendation.

Replies from: realitygrill
comment by realitygrill · 2010-06-02T05:02:12.670Z · LW(p) · GW(p)

Yeah, I forgot to add that you've budged me slightly from my staunch positivist attitude toward social science. Thanks. Reading up on complex adaptive systems has made me just that much more skeptical about our ability to predict policy's effects, and perhaps biased me.

Replies from: James_K
comment by James_K · 2010-06-02T06:05:33.731Z · LW(p) · GW(p)

It's nice to know I've had an influence :)

As it happens, I'm pretty sceptical as to how much we can know as well. There's nothing like doing policy to gain an understanding of how messy it can be. The social sciences have a less than wonderful record in developing knowledge (look at the record of development economics, as one example), and economic forecasting is still not much better than voodoo, but it's not like there's another group out there with all the answers. We don't have all of the answers, or even most of them, but we're better than nothing, which is the only alternative.

Replies from: matt
comment by matt · 2010-06-02T21:48:18.262Z · LW(p) · GW(p)

Nothing is often a pretty good alternative. Government action always comes at a cost, even if only the deadweight loss of taxation (see the "public choice" literature for reasons you might expect the cost to be higher than that). I'm not trying to turn this into a political debate, but you should consider that doing nothing is not necessarily a bad thing, and that what you do is not necessarily better.

Replies from: James_K, mattnewport
comment by James_K · 2010-06-03T05:48:22.051Z · LW(p) · GW(p)

When I said "better than nothing" I was referring to advice, not the actual actions taken. My background is in economics so I'm quite familiar with both dead-weight loss of taxation and public choice theory, though these days I lean more toward Bryan Caplan's rational irrationality theory of government failure.

I agree that nothing is often a good thing for governments to do, and in many cases that is the advice that Cabinet receives.

comment by mattnewport · 2010-06-02T22:01:50.766Z · LW(p) · GW(p)

Politicians' logic: “Something must be done. This is something. Therefore we must do it.”

comment by billswift · 2010-06-02T16:48:30.345Z · LW(p) · GW(p)

Good quality government policy

There is no more evidence for that than there is for God. Indeed:

Indeed, belief in the legitimacy and wisdom of government seemed to require more blind faith than belief in God. -- George H. Smith, Atheism, Ayn Rand, and Other Heresies

Replies from: CronoDAS, JoshuaZ, JoshuaZ, James_K, RobinZ, Houshalter
comment by CronoDAS · 2010-06-02T23:52:26.002Z · LW(p) · GW(p)

Amazingly, there really are domains in which socialism actually works. In the first half of the nineteenth century, the U.S. had privatized firefighting. It was horrible. After the American Civil War, firefighting was taken over by governments, and, astoundingly enough, things actually got better!

comment by JoshuaZ · 2010-06-02T17:03:42.801Z · LW(p) · GW(p)

Simply responding with a Randian quote doesn't show that government doesn't work. Moreover, there are some things where government has worked well. At the most basic level, one needs governments to protect property rights, without which markets can't function. Similarly, various forms of pooled goods are useful (you are welcome to try to have roads run by private industry and see how well that works). But even beyond that, government policies are helpful for dealing with negative externalities. In particular, some forms of harm are by nature spread out and not connected strongly to any single source. The classic example is pollution. Since pollution is spread out, the transaction cost is prohibitively high for any given individual to try to reduce the pollution levels they are subject to. But a government, using regulation and careful taxation, can do this efficiently. In some situations, this can even be done in conjunction with market forces (such as cap-and-trade systems). In the US, this was very successful in efficiently handling levels of sulfur dioxide. See this paper. Governments are often slow and inefficient. But to claim that well-thought-out policies never exist? That's simply at odds with reality.

Replies from: RomanDavis, mattnewport
comment by RomanDavis · 2010-06-02T18:17:22.942Z · LW(p) · GW(p)

In particular, some forms of harm are by nature spread out and not connected strongly to any single source. The classic example is pollution. Since pollution is spread out, the transaction cost is prohibitively high for any given individual to try to reduce pollution levels they are subject to. But a government, using regulation and careful taxation, can do this efficiently. In some situations, this can even be done in conjunction with market forces (such as cap and trade systems). In the US, this was very successful in efficiently handling levels of sulfur dioxide.

Even from a libertarian point of view, pollution is something that causes harm, like murder or theft. The government's job is to enforce laws that mitigate sources of harm and, when possible, correct harms against individuals. A person or corporation that puts out some amount of pollution should be forced to pay for any cleanup or harm that they cause.

If you drive a car, you emitted some fraction of the pollution that caused temperatures to go up, caused smog-induced illness, and did some other miscellaneous harms that cost some amount of money. If that amount of money was 40 billion dollars, and you contributed one billionth of the harm, you should pay 40 dollars.

This should be even less controversial than imprisoning murderers.
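(The arithmetic above is just proportional liability; as a one-line sketch, with the $40 billion figure being the comment's hypothetical, not a real estimate:)

```python
total_harm_usd = 40_000_000_000  # hypothetical total cost of the harm ($40 billion)
equal_polluters = 1_000_000_000  # you contributed one billionth of the harm
payment = total_harm_usd / equal_polluters
print(payment)  # 40.0
```

Each polluter's bill scales linearly with their share of the harm, which is what makes this a Pigouvian-style correction rather than a flat fine.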

Replies from: SilasBarta
comment by SilasBarta · 2010-06-02T18:36:05.903Z · LW(p) · GW(p)

This should be even less controversial than imprisoning murderers

Sadly it isn't. I consider(ed) myself libertarian, and then found that most self-identified ones reject that reasoning entirely. Pity.

I was also unpleasantly surprised to find that there was a group of people griping about programs that would make it easier to identify cars that weren't liability-insured or pollution-tested, and that this was called a "libertarian" position.

ETA: And libertarian-leaning academics don't seem to "get" why paying polluters to go away isn't a solution, and don't even understand what problem is supposed to be solved, even when hypothetically placed in such a situation! (See the exchange between me and Hanson in the link.)

ETA2: I edited an EDF graphic to make this cute picture about the pollution issue and Coasean reasoning. ETA3: Full blog post with original graphic

Replies from: RomanDavis, mattnewport
comment by RomanDavis · 2010-06-02T18:57:53.616Z · LW(p) · GW(p)

It's not so much that it doesn't solve the problem as things just don't work that way. For starters, current energy distribution methods are local monopolies, so they are strongly regulated on price because the competition mechanism doesn't work as it should. The idea that a customers might "choose" cleaner energy doesn't always work.

Second, some logging companies tried that. They had an outside company come in, do an inspection, and certify the ecological viability of their practices. There were a fair number of people who actually were willing to pay a little more. The problem is, another set of companies came by, inspected and approved themselves (with a different label that they invented), and customers weren't able to tell the difference. That's a problem.

Replies from: CronoDAS
comment by CronoDAS · 2010-06-03T01:10:40.394Z · LW(p) · GW(p)

It's not so much that it doesn't solve the problem as things just don't work that way. For starters, current energy distribution methods are local monopolies, so they are strongly regulated on price because the competition mechanism doesn't work as it should. The idea that a customers might "choose" cleaner energy doesn't always work.

Also, to a great extent, electricity is fungible. Suppose you have both windmills and coal-fired plants connected to the same electrical grid, and they both generate equal amounts of power. Now suppose I tell the electric company that I only want to buy power from the windmills, so instead of getting half wind power and half coal power, I get 100% wind power (on paper). However, the electric company doesn't actually have to change the way it produces electricity in order to do this. All they have to do is slightly increase the percentage of coal power that they deliver to everyone else (on paper). So all that changes is numbers on paper, and there's exactly as much coal power being generated as before.

comment by mattnewport · 2010-06-02T18:50:29.365Z · LW(p) · GW(p)

Your noise pollution example is a potentially problematic one for libertarians but the obvious answer that occurs to me is the one I would expect many thoughtful libertarians to make. You are assuming a libertarian world with largely unchanged amounts of public space which is a problematic combination. The space outside your window has no reason to be public space. You would see a lot more 'gated community' type arrangements in a more libertarian society. People with low noise tolerance could choose to live in communities where the 'public' space was owned by a municipal service provider with strict rules about noise pollution. Anyone not adhering to these rules could be ejected from the property.

Many common problems with imagined libertarian societies dissolve when you allow for much greater private ownership of currently public land than currently exists.

Replies from: CronoDAS
comment by CronoDAS · 2010-06-03T00:07:15.741Z · LW(p) · GW(p)

What's the difference between a government and a "municipal service provider"?

Replies from: RomanDavis, mattnewport
comment by RomanDavis · 2010-06-03T00:11:06.563Z · LW(p) · GW(p)

It's easier to move out? You are not born under a landlord. You do not swear fealty to the flag of the landlord. Nobody thinks the landlord should be able to draft you for civil service. The landlord cannot put you in jail for failing to pay rent. There's a long, long list of other differences where the landlord as government analogy breaks down. I'm surprised anyone still brings it up.

EDIT: Ha. You changed it. In reality, not necessarily that much, although it's nice to have an extra-governmental agency that you can choose to pay or not, and that is accountable to the government in a transparent way. Asking the government to regulate itself is almost as dumb as asking a logging company to regulate itself.

Replies from: CronoDAS
comment by CronoDAS · 2010-06-03T00:56:57.951Z · LW(p) · GW(p)

Well, if you expect a landlord to perform the functions of a government by, say, regulating noise levels for the benefit of tenants, then doesn't the analogy hold in this particular case? If regulation is bad, does it matter whether it's regulation by landlord or regulation by city council?

Replies from: Bo102010
comment by Bo102010 · 2010-06-03T02:55:35.396Z · LW(p) · GW(p)

It does matter if one has guns (or SWAT teams) and the other relies on non-violent persuasion.

Replies from: CronoDAS
comment by CronoDAS · 2010-06-03T07:41:10.583Z · LW(p) · GW(p)

::does some Googling::

If a landlord tries to have you evicted, and you refuse to leave when a court rules that you must do so, local law enforcement is allowed to physically remove you from the property. That doesn't sound non-violent to me.

Replies from: Bo102010
comment by Bo102010 · 2010-06-03T14:48:31.233Z · LW(p) · GW(p)

This is a fair point. I would note, however, that eviction typically requires repeated notification, and gives you opportunities to modify your behavior before encountering violence.

Contrast with how your local sheriff can bust down your door in the middle of the night, shoot your dogs, destroy your property, and arrest you merely on suspicion of possessing marijuana. And then be praised for it even if you are innocent.

comment by mattnewport · 2010-06-03T00:28:05.297Z · LW(p) · GW(p)

Municipal services are generally provided by a local government but this is largely an artifact of the way modern democracies are organized. Private arrangements are fairly rare in the modern world but cruise ships, private resorts, corporate campuses and on a smaller scale large managed apartment buildings provide examples of decoupling the idea of provision of municipal services and government.

Replies from: Houshalter, CronoDAS
comment by Houshalter · 2010-06-03T02:24:45.363Z · LW(p) · GW(p)

What if you had a dozen different companies that provided services like that? Each would have a monopoly in its own area; however, local governments would still be able to choose which one they wanted, and at any time they were displeased they could switch. Actually, this is a good idea!

Replies from: mattnewport
comment by mattnewport · 2010-06-03T04:11:17.119Z · LW(p) · GW(p)

You can probably go further than that. Municipal services can be unbundled and can operate without a geographical monopoly. This is already widely done for cable and telecoms in the US and UK and for electricity and gas in the UK. Some countries do it for water and sanitation services. There are examples worldwide of it being done for transportation, refuse collection, health and education. Arguments that such services are a 'natural monopoly' are usually promoted most strongly by those who wish to operate that monopoly with government protection.

comment by CronoDAS · 2010-06-03T00:52:49.071Z · LW(p) · GW(p)

Allow me to rephrase.

If the "municipal service provider" has the power to enforce its edicts on noise level (because it has the power to exile those who violate them), then doesn't that mean that it has exactly the same power over noise that a government would - and the same potential to misuse that power?

Replies from: mattnewport
comment by mattnewport · 2010-06-03T01:06:58.530Z · LW(p) · GW(p)

I tend to think that the right of exit is the ultimate and fundamental check on such abuses of power. This is why I favour decentralization / federalization / devolution as improvements to the status quo of increasing centralization of political power. I think that on more or less every level of government we would benefit from decentralization of power. City-wide bylaws on noise pollution are too coarse-grained for example. An entertainment district or an area popular with students should have different standards than a residential area with many working families. Zoning rules are an attempt to make such allowances but I think private solutions are likely to work better. I'd at least like to see them tried so we can start to see what works.

Replies from: CronoDAS
comment by CronoDAS · 2010-06-03T01:36:55.629Z · LW(p) · GW(p)

So the issue is that of scale, then?

And the right of exit is conditional on there being somewhere to go. Finding such a place can sometimes be difficult.

Replies from: mattnewport
comment by mattnewport · 2010-06-03T02:08:45.390Z · LW(p) · GW(p)

"Keep, ancient lands, your storied pomp!" cries she
with silent lips. "Give me your tired, your poor,
Your huddled masses yearning to breathe free,
The wretched refuse of your teeming shore,
Send these, the homeless, tempest-tost to me,
I lift my lamp beside the golden door!"

It worked out pretty well for the US.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-03T10:29:38.576Z · LW(p) · GW(p)

It worked out pretty well for the US, but a distressingly high proportion of Americans don't seem to know that.

comment by mattnewport · 2010-06-02T18:03:49.994Z · LW(p) · GW(p)

Moreover, there are some things where government has worked well.

Hitler was kind to animals. Even accepting your dubious claim, it is not enough to show that government sometimes achieves positive outcomes (and don't forget to ask what criteria are being used to determine 'positive'). The relevant question is whether government intervention produces an overall net benefit. Generally it seems you can make the strongest case for this in small, relatively homogeneous countries. These results do not necessarily scale.

you are welcome to try to have roads run by private industry and see how well that works

There are an awful lot of hidden assumptions in this statement.

But a government, using regulation and careful taxation, can do this efficiently.

Can in theory and ever actually do in practice are worlds apart. Negative externalities are one of the stronger economic arguments for government intervention but actual examples of government regulation rarely approximate the theoretical regulatory framework proposed by economists. This is largely because the behaviour of governments is determined primarily by public choice theory and not by the benevolent, enlightened pursuit of economic rationality.

Replies from: JoshuaZ, RomanDavis
comment by JoshuaZ · 2010-06-02T18:24:03.979Z · LW(p) · GW(p)

I agree with most of what you said. That's one of the reasons I gave the historical example of SO2. The claim being made by the person I was responding to was not a remark about net gain but the claim that regarding "Good quality government policy" that "There is no more evidence for that than there is for God" and then backing it up with an argument from irrelevant authority. So giving examples to show that's not the case accomplishes the basic goal.

comment by RomanDavis · 2010-06-02T18:21:37.825Z · LW(p) · GW(p)

There are an awful lot of hidden assumptions in this statement.

There's a pretty good precedent for this happening in the form of the railway system in early America. I think I'd classify it as a market failure as private roads and railways have a way of becoming local monopolies and having an enormous advantage when it comes to rent-seeking behavior.

It's not that it's impossible, I just don't think it's a very good idea.

Replies from: mattnewport
comment by mattnewport · 2010-06-02T18:31:38.321Z · LW(p) · GW(p)

One of the hidden assumptions I was thinking of is the assumption that government built roads have been a net benefit for America. The highway system has been a large implicit subsidy for all kinds of business models and lifestyle choices that are not obviously optimal. America's dependence on oil and outsize energy demands are in large part a function of the incentives created by huge government expenditure on highways. Suburban sprawl, McMansions, retail parks and long commutes are all unintended consequences of the implicit subsidies inherent in large scale government road construction.

American culture and society would probably look quite different without a history of government road construction. It's not obvious to me that it would not look better by many measures.

Replies from: Houshalter, RomanDavis
comment by Houshalter · 2010-06-02T22:35:48.786Z · LW(p) · GW(p)

Yes, but you'd be stuck with a complex and inefficient series of toll roads. It might work, but I doubt it. Not efficiently, anyway.

Replies from: RomanDavis
comment by RomanDavis · 2010-06-02T22:40:58.887Z · LW(p) · GW(p)

Not necessarily. If you've ever been to Disney World, it's not like that. And hell, government roads in the States and Japan often dissolve into a complex and inefficient series of toll roads, at least in some areas.

I'm much more worried about uncompetitive practices, like powerful local monopolies and rent seeking behavior.

Replies from: Houshalter
comment by Houshalter · 2010-06-02T23:13:40.742Z · LW(p) · GW(p)

Disney World owns the land, so they can do whatever they want. But here, in order to make efficient roads, we have to use eminent domain. A private company wouldn't be able to do that. In order to have a governmentless society, you have to a) create a nearly impossible-to-maintain system of total anarchy like exists in parts of Afghanistan today or b) create a very corrupt and broken society ruled by private corporations, which is essentially a government anyways.

Replies from: LucasSloan, RomanDavis
comment by LucasSloan · 2010-06-03T00:21:22.268Z · LW(p) · GW(p)

But here in order to make efficient roads, we have to use eminent domain.

The Kelo case allows government to use its eminent domain powers on the behalf of private companies. Why couldn't a private road builder borrow this government power?

Replies from: RomanDavis, Houshalter
comment by RomanDavis · 2010-06-03T00:25:57.544Z · LW(p) · GW(p)

You actually support the Kelo case? To me that's like a Glenn Beck conspiracy theory come to life.

Yup. Mind killed. I'm out, guys. Was fun while it lasted.

Replies from: LucasSloan
comment by LucasSloan · 2010-06-03T01:02:54.132Z · LW(p) · GW(p)

Why do you assume I support the Court's decision? All I did was state that under current United States law, Houshalter's objection was possible to overcome.

comment by Houshalter · 2010-06-03T02:30:03.421Z · LW(p) · GW(p)

The government does use private contractors in many cases for different projects. It might work on roads; I'm not sure if they already use it, but it's still a lot different from asking a private corporation to decide when and where to build roads.

Replies from: RomanDavis
comment by RomanDavis · 2010-06-03T02:42:11.809Z · LW(p) · GW(p)

They do. And private corporations or councils already decide where to build the roads for some things, it's just that all of those things only work if they're already connected to other infrastructure, which, in the US, means public federal, state and locally built roads.

comment by RomanDavis · 2010-06-02T23:23:57.361Z · LW(p) · GW(p)

Well, I think you aren't really imaginative enough in your view of anarchy, but... I'm not an anarchist and I'm not going to defend anarchy.

I disagree with the idea that efficient roads require eminent domain. It's not even hard to prove. All I have to do is give one example of a business that was made without eminent domain. The railroad system, which I brought up before.

I still mostly think a nation of private roads is a bad idea, since it's hard to imagine a way or scenario in which they wouldn't be a local monopoly.

Replies from: CronoDAS, JoshuaZ
comment by CronoDAS · 2010-06-02T23:43:27.063Z · LW(p) · GW(p)

All I have to do is give one example of a business that was made without eminent domain. The railroad system, which I brought up before.

Actually, in the U.S. at least, railroads did get lots of land grants, right-of-way rights, and similar subsidies from the government. So yeah.

Replies from: RomanDavis
comment by RomanDavis · 2010-06-02T23:55:37.813Z · LW(p) · GW(p)

Which is part of the reason I think it's a bad idea. The railroads constantly petitioned for those rights and that money, and essentially leeched off the American people. That's what rent seeking means.

comment by JoshuaZ · 2010-06-02T23:42:03.373Z · LW(p) · GW(p)

Are railroads that good an example? Some railroads and subways were built using eminent domain, although I don't know how much. And many of the large railroads built in the US in the second half of the 19th century went through land that did not have any private ownership but was given to the railroads by the government.

Replies from: RomanDavis
comment by RomanDavis · 2010-06-03T00:00:31.402Z · LW(p) · GW(p)

Railroads are a good example of a bad idea. The reason I picked them is that they were terrible; if I were going to pick innovative and creative real estate purchases by private industry, I'd be talking about McDonald's or Starbucks.

Replies from: Houshalter
comment by Houshalter · 2010-06-03T00:15:32.801Z · LW(p) · GW(p)

Railroads weren't a terrible idea. The canal system was a terrible idea, not railroads. Railroads created lots of industry that wouldn't have been possible without them. Many 19th century leaders thought of them as the best thing that ever happened to America.

Replies from: LucasSloan, RomanDavis
comment by LucasSloan · 2010-06-03T00:26:49.809Z · LW(p) · GW(p)

The system of canals built in the early 19th century in the United States allowed the settlement of the old west and the development of industry in the north east (by allowing grain from western farms to reach the east). Why do you consider them a terrible idea? They were one of the centerpieces of the American System, which was largely successful.

Replies from: Houshalter
comment by Houshalter · 2010-06-03T00:31:23.866Z · LW(p) · GW(p)

Because they would dump the waste off the left side of the boat, and get drinking water from the right. The actual sides would switch depending on which way they were going. I've been on those canal boats before; they are very, very slow. They had orphans walk alongside the boat and guide the donkey (ass) that pulled it. They also took a long time to build, and didn't last that long.

Replies from: JoshuaZ, LucasSloan
comment by JoshuaZ · 2010-06-03T00:46:51.583Z · LW(p) · GW(p)

Because they would dump the waste off the left side of the boat, and get drinking water from the right.

This was a general problem more connected to cleanliness as a whole in 19th century America. Read a history of old New York, and realize that it took multiple plagues before they even started discussing not having livestock roaming the city.

I've been on those canal boats before, they are very, very slow.

Of course they were slow. They were an efficient method of moving a lot of cargo. Each boat moved slowly, but the total cargo moved was a lot more than could often be moved by other means. Think of it as high latency and high bandwidth.

They had orphans walk on the side of the boat and guide the donkey (ass) that pulled it.

In general, 19th century attitudes towards child labor weren't great. But what does this have to do with the canal system itself? Compared to many jobs they could have had, this would have been a pretty good one. And this isn't at all connected to using orphans; it isn't like the canals were powered by the souls of forsaken children. They were simply the form of cheap labor used during that time period for many purposes.

They also took a long time to build, and didn't last that long.

The first point isn't relevant unless you are trying to make a detailed economic estimate of whether they paid for themselves. The second is simply because they weren't maintained after a few years once many of them were made obsolete by rail lines. If the rails had not come in, the canals would have lasted much longer.

comment by LucasSloan · 2010-06-03T00:42:33.602Z · LW(p) · GW(p)

So they're a terrible idea because of bad sanitation and child labor? In that case, the entire history of economic ideas is bad up until 1920-ish. They unquestionably achieved their goal of providing better transportation. Am I to infer that you believe that government run highways are wrong because there is trash strewn on the sides of the road?

Replies from: Houshalter
comment by Houshalter · 2010-06-03T03:04:33.726Z · LW(p) · GW(p)

Maybe, but that's not the point. They might have worked, maybe even made a profit, but I still say that they were inefficient, which is why we don't use them today (all that's left is a few large pieces of stone jutting out of rivers that passers-by can't explain).

Replies from: LucasSloan
comment by LucasSloan · 2010-06-03T05:35:53.531Z · LW(p) · GW(p)

Were telegraphs a bad idea? Horse-drawn plows? Why does the fact a technology was superseded mean that it's a terrible idea?

comment by RomanDavis · 2010-06-03T00:19:57.387Z · LW(p) · GW(p)

I think they might have been better as either a fully government venture or a fully private one. When they merge, a conflict of interest becomes immediately present.

comment by RomanDavis · 2010-06-02T18:46:01.875Z · LW(p) · GW(p)

That's interesting. I wouldn't expect there to be many examples of working privatized roads and their effects on a nationwide scale, but if there were, I'd love to see more about them, or even a good paper based on a hypothetical.

Replies from: mattnewport
comment by mattnewport · 2010-06-02T18:55:59.013Z · LW(p) · GW(p)

I think you're stuck in the mindset of 'if it wasn't for our government-provided roads, where would we drive our cars?'. Such a world would probably have fewer private cars and be arranged in such a way that many ordinary people could get by perfectly well without a car, as is the case in many European and Japanese cities.

This article might help you understand some of the hidden assumptions many Americans operate under. Note: this guy has some rather wacky ideas but his articles on 'traditional cities' are pretty interesting.

Replies from: Mass_Driver, JoshuaZ, RomanDavis, Houshalter
comment by Mass_Driver · 2010-06-02T19:28:58.453Z · LW(p) · GW(p)

I strongly agree with you that the US federal government has spent too much on road subsidies over the years and should decrease its current spending.

That said, not everywhere is Juneau, Alaska; not all sites connected to government roads are a "Suburban Hell," and not all inhabitants of the suburbs would prefer to live in a "Traditional City." Roads are useful for accommodating a highly mobile, atomistic society that exploits new resources and adopts new local trade routes every 20 years or so. Cars and parking lots are useful for separating people who have recently immigrated from all different places and who really don't like each other and don't want to have much to do with each other. Interstate highways were built for evacuation and civil defense as well as for actual transport. Finally, regardless of whether you prefer roads or trains, some level of government subsidy and/or coordination is probably needed to get the most efficient transportation system possible.

In any case, this thread started out as a discussion of Traditional vs. Bayesian rationality, did it not? Improving government policy was merely the example chosen to illustrate a point. It seems unsportsmanlike to shoot that point down on the grounds that virtually all government does more harm than good. Even if such a claim were true, one might still want to know how to generate government policies that do relatively less harm, given a set of political constraints that temporarily prevent enacting a strong version of (anarcho)libertarianism.

Replies from: mattnewport
comment by mattnewport · 2010-06-02T20:21:30.616Z · LW(p) · GW(p)

Even if such a claim were true, one might still want to know how to generate government policies that do relatively less harm

The failure of government is not a problem of not knowing which government policies would do relatively less harm. The primary problem of government is that there is little incentive to implement such policies. Trying to improve government by working to figure out better policies is like trying to avoid being eaten by a lion by making a sound logical argument for the ethics of vegetarianism. The lion has no more interest in the finer points of ethics than a politician does in the effects of policy on anything other than his own self-interest.

Replies from: NancyLebovitz, Mass_Driver
comment by NancyLebovitz · 2010-06-03T00:25:51.220Z · LW(p) · GW(p)

Some governments cause much less damage than others, so I think there's something to study.

Replies from: mattnewport
comment by mattnewport · 2010-06-03T00:30:13.056Z · LW(p) · GW(p)

I mentioned elsewhere that governments of relatively small states with relatively homogeneous populations seem to do better than average. Scaling these relative successes up appears problematic.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-03T01:17:15.370Z · LW(p) · GW(p)

Even among large heterogeneous states, some do better than others.

If small homogeneous states do best, then campaigning for devolution to the best available approximation of such might be the best move.

Replies from: mattnewport, Blueberry
comment by mattnewport · 2010-06-03T01:27:22.015Z · LW(p) · GW(p)

If small homogeneous states do best, then campaigning for devolution to the best available approximation of such might be the best move.

Yes, that or seasteading. I'm also a firm believer in the 'voting with your feet' approach to campaigning. I have no desire to wait around until a democratic majority are convinced for improvements to happen locally. Migration is one of the few competitive pressures on governments today.

comment by Blueberry · 2010-06-03T01:41:15.448Z · LW(p) · GW(p)

That's one of the principal aims of the states' rights movement.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-03T10:21:48.226Z · LW(p) · GW(p)

And possibly one of the reasons it's disreputable-- afaik the states involved aren't all that homogeneous.

comment by Mass_Driver · 2010-06-02T20:33:58.136Z · LW(p) · GW(p)

Your link provides very little evidence for your claim. At the national level, to say that a program costs $1 million per year is unimpressive. Suppose, for the sake of argument, that the multiplier effect for mohair production is quite low, say, 0.5. I suspect that it is rather higher than that, since multiple people will go and card and weave and spin the damn fibers and then sell them to each other at art fairs, but let's say it's 0.5. That means you're wasting $500,000 a year. In the context of a $5 trillion annual budget, you're looking at 1 part per 10 million, or a 0.00001% increase in efficiency. Why should one of our 545 elected representatives, or even one of their 20,000 staffers, make this a priority to eliminate? The amazing thing is that the subsidy was eliminated at all, not that it crept back in. All systems have some degree of parasitism, 'rent', or waste. This is not exactly low-hanging fruit we're talking about here.

More generally, I have worked for a few different politicians, and so far as I could tell, most of them mostly cared about figuring out better policies subject to maintaining a high probability of being re-elected. None of them appeared to have the slightest interest in directly profiting from their work as public servants, nor in exploiting their positions for fame, sex, etc. Those are just the cases that make the news. In my opinion, based on a moderate level of personal experience, the assumption that politicians are primarily motivated by self-interest at the margin in equilibrium is simply false.

Replies from: mattnewport
comment by mattnewport · 2010-06-02T20:49:03.554Z · LW(p) · GW(p)

Your link provides very little evidence for your claim.

What did you take my claim to be? The example in the link is intended to illustrate the fact that the problem of politics is not one of figuring out better policy. It is an example of a policy that is universally agreed to be bad and yet has persisted for over 60 years, despite a brief period in which it was temporarily stamped out. The magnitude of the subsidy in this case may be small but there are many thousands of such bad policies, some of much greater individual magnitude, and they add up. The example is intentionally a small and un-controversial example since it is intended to illustrate that if even minor bad policies like this are hard to kill then vastly larger ones are unlikely to be eliminated without structural reform.

None of them appeared to have the slightest interest in directly profiting from their work as public servants, nor in exploiting their positions for fame, sex, etc.

Giving this appearance is fairly important to succeeding as a politician so this is not indicative of much. I find it more relevant to judge by actual actions and results produced rather than by words or carefully cultivated appearances.

In my opinion, based on a moderate level of personal experience, the assumption that politicians are primarily motivated by self-interest at the margin in equilibrium is simply false.

As a well known politician once noted, you can fool some of the people all of the time.

Replies from: Mass_Driver, RobinZ
comment by Mass_Driver · 2010-06-03T01:29:24.554Z · LW(p) · GW(p)

As a well known politician once noted, you can fool some of the people all of the time.

Indeed you can! Be aware, though, that memes about government corruption and the people who peddle them may have just as much power to fool you as the 'official' authorities. Hollywood, for example, has a much larger propaganda budget than the US Congress. When's the last time a Hollywood movie showcased virtuous politicians?

Also, beware of insulated arguments. If you assume that (a) politicians are amazingly good at disguising their motives, and (b) politicians do in fact routinely disguise their motives, your assertions are empirically unfalsifiable. If you disagree, consider this: what could a politician do to convince you that he was honestly motivated by something like altruism?

Replies from: mattnewport
comment by mattnewport · 2010-06-03T02:03:14.664Z · LW(p) · GW(p)

When's the last time a Hollywood movie showcased virtuous politicians?

An Inconvenient Truth? Seriously though, I don't think Hollywood is particularly tough on politicians. It's a major enabler for the cult of the presidency with heroic presidents saving the world from aliens, asteroids and terrorists. Evil corporations and businessmen get a far worse rap. The mainstream media is much too soft on politicians in the US in my opinion as well. Where's the US Paxman?

If you disagree, consider this: what could a politician do to convince you that he was honestly motivated by something like altruism?

I think some politicians actually believe that they are acting for the 'greater good'. Sometimes when they lobby for special interests they really convince themselves they are doing a good thing. It is sometimes easier to convince others when you believe your own spiel - this is well known in sales. They surely often think they are saving others from themselves by restricting their liberties and trampling on their rights. Ultimately what they really believe is somewhat irrelevant. I judge them by how they respond to incentives, whose interests they actually promote and what results they achieve.

I don't think being motivated by altruism is desirable and I don't think pure altruism exists to any significant degree.

Replies from: Mass_Driver, Douglas_Knight
comment by Mass_Driver · 2010-06-03T05:32:05.190Z · LW(p) · GW(p)

Good examples!

I agree with you that Hollywood is soft on Presidents, and that the mainstream media is soft on just about everyone, with the possible exception of people who might be robbing a convenience store and/or selling marijuana in your neighborhood, details at eleven.

That still leaves legislators, bureaucrats, administrators, police chiefs, mayors, governors, and military officers as Rent-A-Villains (tm) for Hollywood action flicks and dramas.

Sometimes when they lobby for special interests they really convince themselves they are doing a good thing.

From my end, it still looks like you're starting with the belief that government is wrong, and deducing that politicians must be doing harm. Your arguments are sophisticated enough that I'm assuming you've read most of the sequences already, but you might want to review The Bottom Line.

I'm not sure to what extent either of us has an open mind about our fundamental political assumptions. I'm also unsure as to whether the LW community has any interest in reading a sustained duel about abstract versions of anarcholibertarianism and representative democracy. Worse, I at least sympathize with some of your arguments; my main complaint is that you phrase them too strongly, too generally, and with too much certainty. For all those reasons, I'm not going to post on this particular thread in public for a few weeks. I will read and ponder one more public post on this thread by you, if any -- I try to let opponents get in the last word whenever I move the previous question.

All that said, if you'd like to talk politics for a while, you're more than welcome to private message me. You seem like a thoughtful person.

Replies from: mattnewport
comment by mattnewport · 2010-06-03T16:33:04.951Z · LW(p) · GW(p)

I'm not sure to what extent either of us has an open mind about our fundamental political assumptions.

I described myself as a socialist 10 years ago when I was at university. My parents are lifelong Labour voters. I have changed my political views over time, which gives me some confidence that I am open minded in my fundamental political assumptions. Caveats are that my big 5 personality factors are correlated with libertarian politics (suggesting I may be biologically hardwired to think that way) and from some perspectives I could be seen as following the cliched route of moving to the right in my political views as I get older.

my main complaint is that you phrase them too strongly, too generally, and with too much certainty.

This is partly a stylistic thing - I feel that padding comments with disclaimers tends to detract from readability and distracts from the main point. I try to avoid saying things like 'in my opinion' (it should be obvious given I'm writing it) or variations on the theme of 'the balance of evidence leads me to conclude' (where else would conclusions derive from?) or making comments merely to remind readers that 0 and 1 are not probabilities (here of all places I hope that this goes without saying). I used to make heavy use of such caveats but I think they tend to increase verbiage without adding much information. If it helps, imagine that I've added all these disclaimers as a footnote to anything I say.

All that said, if you'd like to talk politics for a while, you're more than welcome to private message me. You seem like a thoughtful person.

I tend to subscribe to the idea that the best hope for improving politics is to change incentives, not minds but periodically I get drawn into political debates despite myself. I'll try to leave the topic for a while.

Replies from: NancyLebovitz, Mass_Driver
comment by NancyLebovitz · 2010-06-03T22:18:01.366Z · LW(p) · GW(p)

I tend to subscribe to the idea that the best hope for improving politics is to change incentives, not minds but periodically I get drawn into political debates despite myself. I'll try to leave the topic for a while.

Incentives (or incentive structures, like markets [1]) are the result of human decisions.

Perhaps you mean changing the minds of the people who set the incentives.

[1] A market's incentives aren't set in detail, but permitting the market to operate in public or not is the result of a relatively small number of decisions.

Replies from: mattnewport, RomanDavis
comment by mattnewport · 2010-06-03T23:09:23.954Z · LW(p) · GW(p)

Perhaps you mean changing the minds of the people who set the incentives.

Part of the thinking behind competitive government is that we are the people who set the incentives.

Seasteading is explicitly designed to create alternative social systems that operate somewhat outside the boundaries of existing states. An analogy is trying to introduce revolutionary technologies by convincing a democratic majority to vote for your idea vs. founding a startup and taking the 'if you build it they will come' route. The latter approach generally appears to have a better track record.

Charter cities were born out of a slightly different agenda but embody similar principles.

A simple step that individuals can take is to move to a jurisdiction in line with their values rather than trying to change their current jurisdiction through the political process. Competition works to improve products in ordinary markets because customers take their business to the companies that best satisfy their preferences. Migration is one of the few forces that applies some level of competitive pressure to governments.

Other potential approaches are to support secession or devolution movements, things like the free state project, supporting the sovereignty of tax havens, 'starving the beast' by structuring your affairs to minimize the amount of tax you pay, personal offshoring and other direct individual action that creates competitive pressure on jurisdictions.

comment by RomanDavis · 2010-06-03T22:35:05.242Z · LW(p) · GW(p)

I think he's talking from a government perspective or a perspective of power.

Obviously, you can educate people that malaria is bad and beg people to solve the problem of malaria. It is, however, possible to know a lot about it and not do anything about it.

Or you could pay people a lot of money if they would show work that might help the problem of malaria. I tend to think this method would be more effective, although there are other effective incentives than money.

comment by Mass_Driver · 2010-06-03T17:10:22.741Z · LW(p) · GW(p)

Voted up. I think you should consider writing a top-level post summarizing some of the themes from Thousand Nations.

comment by Douglas_Knight · 2010-06-03T03:39:32.842Z · LW(p) · GW(p)

I don't think being motivated by altruism is desirable and I don't think pure altruism exists to any significant degree.

The common form "I don't believe in X, but X would be bad if it did exist" seems to me like a bad sign; of what, I'm not sure, perhaps motivated cognition.

Replies from: mattnewport
comment by mattnewport · 2010-06-03T03:46:52.217Z · LW(p) · GW(p)

It can be a bad pattern but there are cases where it is legitimate, for example "I don't believe in the Christian god but if he did exist he would appear to be a major asshole."

comment by RobinZ · 2010-06-02T20:54:11.693Z · LW(p) · GW(p)

In my opinion, based on a moderate level of personal experience, the assumption that politicians are primarily motivated by self-interest at the margin in equilibrium is simply false.

As a well known politician once noted, you can fool some of the people all of the time.

It would either be polite or impolite to make explicit who the "some of the people" are that you refer to in this sentence, and what relevance this has to Mass_Driver's remark. I am curious to hear which.

Replies from: mattnewport
comment by mattnewport · 2010-06-02T20:56:50.107Z · LW(p) · GW(p)

Mass_Driver appears to be one of the people who can be fooled all of the time since he judges politicians by what they say and how they present themselves rather than by what their actions say about their incentives and motivations. I did not intend to be ambiguous.

Replies from: RobinZ
comment by RobinZ · 2010-06-02T21:04:06.623Z · LW(p) · GW(p)

Thank you - I had suspected that might be your meaning, but I prefer not to pronounce negative judgments on people without clear cause, and I have read plenty of comments which appeared equally damning but were of an innocent nature upon elaboration. Carry on.

Replies from: mattnewport
comment by mattnewport · 2010-06-02T21:22:10.182Z · LW(p) · GW(p)

I appreciate the irony of your veiled criticism. Upvoted.

Replies from: RobinZ
comment by RobinZ · 2010-06-02T21:38:36.813Z · LW(p) · GW(p)

I appreciate your unusually deft grasp of the English language. Upvoted.

(I also appreciate the paucity of my education in the sociology of representative government, and must therefore bow out of the discussion. Please discount my opinion appropriately.)

comment by JoshuaZ · 2010-06-02T19:06:57.786Z · LW(p) · GW(p)

Wow. That's really very eye-opening. And as someone who has spent time in old cities outside the US and doesn't even drive, I'm a bit shocked about how much of an assumption I seem to be operating with about what a city should look like.

comment by RomanDavis · 2010-06-02T19:02:19.014Z · LW(p) · GW(p)

Japanese cities still have massive infrastructure and public transportation subsidies. It's not "OMG, how can we not have cars?"; it's "OMG, how can we actually have transportation in a non-governmental way that operates in a healthy market?"

Replies from: mattnewport
comment by mattnewport · 2010-06-02T19:11:40.910Z · LW(p) · GW(p)

City scale transportation infrastructure doesn't require large amounts of governmental involvement. Traditional European cities evolved for much of their history with minimal government involvement. City level infrastructure would be well within the capabilities of private enterprise in a world with more private ownership of public space. Large privately constructed resorts (think Disneyland) illustrate the feasibility of the concept although they are not necessarily great adverts for its desirability.

comment by Houshalter · 2010-06-02T22:55:08.872Z · LW(p) · GW(p)

That site you linked to has an article comparing Toledo, Ohio to Toledo, Spain. It's kind of unfair because Toledo, Ohio is a relatively small city and is dying economically. I was kind of offended because I live really close to there, but he does make a point.

Replies from: mattnewport
comment by mattnewport · 2010-06-02T23:32:07.999Z · LW(p) · GW(p)

Toledo, Spain: Pop 80,810, Unemployment 10% (estimated from Wikipedia figures). Toledo, Ohio: Pop 316,851 (city), Unemployment 13%.

Replies from: Houshalter
comment by Houshalter · 2010-06-03T00:27:28.833Z · LW(p) · GW(p)

Huh. Well, Toledo just seems like a craphole. Once they get around to demolishing all of those old buildings it will look better. And I can't explain how people live without cars. It boggles me. Sure, we have big roads, but seriously, who wants to walk 20 miles every day?

Replies from: mattnewport
comment by mattnewport · 2010-06-03T00:33:42.733Z · LW(p) · GW(p)

And I can't explain how people live without cars. It boggles me. Sure we have big roads, but seriously, who wants to walk for 20 miles every day?

The point made in the discussion of traditional cities I linked is that living without a car can be a nightmare in places that were designed around cars but that many cities that were not designed around cars are very livable without them. I've lived in Vancouver for 7 years without a car quite happily and it's not even particularly pedestrian friendly compared to many European cities (though it is by North American standards). I only walk about 3-4 miles a day.

Replies from: Houshalter
comment by Houshalter · 2010-06-03T03:00:23.467Z · LW(p) · GW(p)

I live in the middle of nowhere in northwest Ohio, actually. I don't exactly consider it "the country", but it is compared to other places I've been. The roads make 1-mile grids, and each has a dozen houses on it and a few fields and woods. Walking to town would take the better part of a day. Also, why are many modern cities built in the 19th century designed around cars if cars were only invented in the latter half of that century and became popular nearly half a century after that?

Replies from: RomanDavis
comment by RomanDavis · 2010-06-03T03:04:08.587Z · LW(p) · GW(p)

Because suburbs were built afterward, around the cities, like a tumor, and usually after World War II.

comment by JoshuaZ · 2010-06-03T19:23:15.356Z · LW(p) · GW(p)

Ok. It looks like someone just did a driveby and downvoted every single entry in this subthread by 1. (I noticed because I saw my karma drop by 13 points within about a 5-minute span since my last click on a LW page, and then, glancing through, saw that a lot of entries in this thread (including many that are not mine) had lower karma than when I last looked at the thread this morning, with many comments previously at 0 now at -1.) Can the person who did this please explain their logic?

Replies from: RobinZ
comment by RobinZ · 2010-06-03T19:29:13.244Z · LW(p) · GW(p)

Request for explanation seconded - I have had four comments (one, two, three, four) downvoted in the same timespan, with several surrounding comments visibly downvoted.

comment by James_K · 2010-06-02T22:04:46.295Z · LW(p) · GW(p)

When it comes to government policy I tend to grade on a curve. I actually agree with you that the quality of government policy is generally quite poor. But it's not equally poor everywhere, and improving government's function (which will in some cases meaning having it do less) can do a lot of good for a lot of people.

I should also point out that choosing to take no action is still a policy decision. To give you an example, a few years ago some crazy woman pulled a knife on a plane, leading to a bit of an incident. There was a review of airline security regulation for domestic flights (which usually have no searches or metal detectors in my country). Cabinet decided, on the basis of advice from officials, that existing regulation was sufficient, and the only thing that needed to be done was to put a lockable door on the cabin, which was being phased in already. Would you regard this as a good policy decision?

Replies from: mattnewport, realitygrill
comment by mattnewport · 2010-06-02T22:10:16.460Z · LW(p) · GW(p)

Would you regard this as a good policy decision?

I'd question the need to have government involved in the decision at all. Why not let the airlines decide their own security policies?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-02T22:22:04.111Z · LW(p) · GW(p)

I'd question the need to have government involved in the decision at all. Why not let the airlines decide their own security policies?

At least three reasons:

  1. Because airlines have these large objects that can function as missiles and bring down buildings. So failing to secure them harms lots of other people.

  2. As with other industries, individuals do not have the resources to make detailed judgments themselves about safety procedures. This is similar to the need for government inspection and regulation of drugs and food.

  3. Violation of security procedures is (for a variety of good reasons) a criminal offense. In order for that to make any sense, you need the government to have some handle in what procedures do and do not make sense.

Replies from: SilasBarta, mattnewport
comment by SilasBarta · 2010-06-02T23:11:32.697Z · LW(p) · GW(p)

The first two reasons only justify requiring that airlines carry liability insurance policies against the external damage that can be caused by their planes and injuries/deaths of passengers. Then, the insurer would specify what protocols airlines must follow before the insurer will offer an affordable policy. Passengers would not have to make such judgments in that case.

Remember to look for the third alternative!

I don't understand the point you're making in 3.

ETA: Actually, you know what? This has devolved into a political debate. Not cool. Can we wind this down? (To avoid the obvious accusation, anyone can feel free to reply to my arguments here and I won't reply.)

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-02T23:28:56.325Z · LW(p) · GW(p)

Well, my general approach is to think that we should continue political discussions as long as they are not showing signs of mind-killing. For example, I find your point about liability insurance to be very interesting, and not one I had thought about before. It is certainly worth thinking about, but even then, that's a different type of regulation, not a lack of regulation as a whole.

Replies from: SilasBarta, mattnewport
comment by SilasBarta · 2010-06-02T23:35:02.271Z · LW(p) · GW(p)

Well, my general approach is to think that we should continue political discussions as long as they are not indicating mind-killing.

If it's not there in your judgment then, I'll continue.

For example, I find your point about liability insurance to be very interesting, and not one I had thought about before. It is certainly worth thinking about, but even then, that's a different type of regulation, not a lack of regulation as a whole.

Yes, but it certainly makes a difference in how many choices and alternatives regulation chokes off. Even if you believe in regulation as a necessary evil, you should favor the kind that accomplishes the same result with less intrusion. And there's a big difference between "Follow this specific federal code for airline security", versus "Do anything that convinces an insurer to underwrite you for a lot of potential damages."

Similarly, when it comes to restricting carbon emissions, it makes much more sense to assign a price or scarcity to the emissions themselves, rather than try to regulate loose correlates, such as banning products that someone has deemed "inefficient".

If you consider all that obvious, then you should understand my frustration when libertarians have to pull teeth to get people to agree to mere simplifications of regulation like I describe above.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-02T23:39:07.617Z · LW(p) · GW(p)

Yeah, no disagreement with those points. (Although now thinking more about the use of insurance underwriting there may be a problem getting large enough insurance. For example, in some areas there have been home insurance companies that went bankrupt after major natural disasters and didn't have enough money to pay everything out. One could see similar problems occurring when one has potential loss in the multi-billion dollar range.)

Replies from: mattnewport
comment by mattnewport · 2010-06-02T23:42:34.813Z · LW(p) · GW(p)

Reinsurance.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-02T23:51:11.810Z · LW(p) · GW(p)

Good point, although again, would then push the regulation back one level to make sure that the insurance companies risk was appropriately allocated.

Replies from: mattnewport
comment by mattnewport · 2010-06-02T23:54:31.281Z · LW(p) · GW(p)

One of the oldest reinsurers originally had unlimited liability for members. I think that provides much more effective oversight of risk allocation than any regulation.

Replies from: gwern, CronoDAS
comment by gwern · 2010-06-03T19:14:17.456Z · LW(p) · GW(p)

I think that provides much more effective oversight of risk allocation than any regulation.

No, it didn't. Did you miss the part where Lloyds imploded, and the unlimited liability destroyed scores of lives (and caused multiple suicides)? The 'reinsurance spiral' certainly was not effective oversight. Even counting the Names' net worth, Lloyds had less reserves and greater risk exposure than regular corporate insurance giants that it competed with, like Swiss Re and Munich Re.

EDIT: It occurs to me that the obvious rebuttal is that Lloyds was quite profitable for a century or two, and so we shouldn't hold the asbestos disaster against it. But it seems to me that any fool can capably insure against risks that eventuate every month or year; high quality risk management is known from how well the extremely rare events are handled.

Replies from: mattnewport
comment by mattnewport · 2010-06-03T19:49:39.501Z · LW(p) · GW(p)

Did you miss the part where Lloyds imploded, and the unlimited liability destroyed scores of lives (and caused multiple suicides)?

Issue Status: Closed.

Reason: As Designed.

comment by CronoDAS · 2010-06-03T00:03:53.405Z · LW(p) · GW(p)

Their liability is still limited by the laws regarding personal bankruptcy. You can't pay back money you don't have. (In the old days, there was debtor's prison, but that really doesn't help anyone.)

comment by mattnewport · 2010-06-02T23:39:22.952Z · LW(p) · GW(p)

Some libertarians oppose limited liability for shareholders of corporations because it distorts the incentives to reduce the risk of harm to third parties. I tend to lean in that direction although I can see the merit in some arguments in favour of limited liability.

comment by mattnewport · 2010-06-02T22:31:21.797Z · LW(p) · GW(p)

Ah yes, the orthodox doctrine of the Church of Unlimited Government. I'm a heretic and don't accept any of these as self evident. I find it interesting that it doesn't even occur to most people to ask the question whether any given issue should even be considered as a legitimate concern of government. From the second link (emphasis mine):

Do you remember the flap recently about the airline that was going to charge for carry-on luggage? And then a Congressman said we need to pass a law saying that the airlines cannot do that? Now, the merits of the issue are debatable (as a passenger, I think I might actually prefer to fly on an airline that charges for carry-on luggage), but that is not the point. Even if we all felt really strongly that charging for carry-on luggage is evil, are we willing to say that government should stay out of the issue, on principle? The libertarian says that indeed the government should stay out of it. The member of the Church does not. Again, being ok with government staying out of it gets you libertarian points only if you care about the issue. If you are ambivalent about charging for carry-on luggage or you think it's a really minor issue, then it's not in the set of social problems that you feel are important.

I bring up the carry-on luggage example because to me it illustrates the relative strength of the forces for limited government and the forces for unlimited government. From my standpoint, the idea of regulating the pricing of carry-on luggage is nutty as a fruitcake. But it seemed perfectly normal to most people--certainly to most of our "thought leaders." It seems to me that I belong to the Dissenting Church, and the established church is the Church of Unlimited Government.

Replies from: JoshuaZ, Blueberry, Houshalter
comment by JoshuaZ · 2010-06-02T22:40:42.086Z · LW(p) · GW(p)

I'm not at all sure what any of this has to do with anything. I agree with the quoted section that having the government step in to regulate how much carryon luggage people can have is an example of people making bad assumptions about government. Indeed, this one is particularly stupid because it is economically equivalent to charging a higher price and then offering a discount for people who don't bring carryon luggage. And psych studies show that if anything people react more positively to things framed as a discount.
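The equivalence mentioned above is pure arithmetic, and can be sketched with invented numbers (the fares and fees here are entirely hypothetical):

```python
# Hypothetical fares, made up purely for illustration.
base_fare, bag_fee = 200, 25

# Framing A: lower base fare, separate charge for a carry-on bag.
price_with_bag_a = base_fare + bag_fee   # traveler with a carry-on pays 225
price_without_bag_a = base_fare          # traveler without one pays 200

# Framing B: higher base fare, "discount" for flying without a carry-on.
high_fare, no_bag_discount = 225, 25
price_with_bag_b = high_fare                        # 225
price_without_bag_b = high_fare - no_bag_discount   # 200

# Both framings yield identical prices for every traveler.
assert (price_with_bag_a, price_without_bag_a) == (price_with_bag_b, price_without_bag_b)
```

Same payoffs either way; only the framing (surcharge vs. discount) differs.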

But I don't see what this has to do with anything I listed. Can you explain for example how the fact that airplanes are effectively large missiles is not a good reason for the government to be concerned about their security? The use of airplanes as weapons is not fictional.

Similarly, regarding my second point are you claiming that people in general do have the time and resources to determine if any given drug is safe or is even what it is claimed to be? I'm curious how other than government regulation you intend to prevent people from diluting drugs for examples.

Edit: And having now read the essays you linked to I have to say that I'm a bit confused. The notion that the US of all countries has a religious belief in unlimited government is difficult for me to understand. The US often has far less regulation and government intervention than, say, most of Europe. So the claim that the US has a religion of "Unlimited Government" as a replacement for an established religion clashes with the simple fact that many countries which do have established or semi-established religions still have far more government intervention. Meanwhile, it seems that it is frequently politically helpful in the US to talk about "getting the government off of people's backs" or something similar. So how the heck is this a religion in the US?

Replies from: mattnewport
comment by mattnewport · 2010-06-02T23:12:33.790Z · LW(p) · GW(p)

I'm not at all sure what any of this has to do with anything. I agree with the quoted section that having the government step in to regulate how much carryon luggage people can have is an example of people making bad assumptions about government.

This rather illustrates my point. You can see the lack of justification for a fairly extreme example like the carry on luggage but can't see how that relates to the question of airline security. From my perspective the idea that government should even be discussing what to do about airline security in the original example is at least as ridiculous as the luggage example is from your perspective.

Can you explain for example how the fact that airplanes are effectively large missiles is not a good reason for the government to be concerned about their security? The use of airplanes as weapons is not fictional.

Airlines already have a strong economic incentive to take measures to avoid hijacking and terrorist attacks, both due to the high cost of losing a plane and to the reputational damage and possible liability claims resulting from passenger deaths and from the destruction of the target. I would expect them to do a better job of developing efficient security measures to mitigate these risks if government were not involved and also to do a better job of trading off increased security against increased inconvenience for travelers. There is absolutely no reason why a potentially dangerous activity necessitates government involvement to mitigate risks.

Similarly, regarding my second point are you claiming that people in general do have the time and resources to determine if any given drug is safe or is even what it is claimed to be? I'm curious how other than government regulation you intend to prevent people from diluting drugs for examples.

You can make the same argument with regard to many goods and services available in our complex modern world. It is equally flawed when applied to drugs as when applied to computers, cars or financial products. There is no reason why government has to play the role of gatekeeper, guardian and guarantor. In markets where government involvement is minimal other entities fill these roles quite effectively.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-02T23:32:49.746Z · LW(p) · GW(p)

There is absolutely no reason why a potentially dangerous activity necessitates government involvement to mitigate risks.

Since the italics are yours, I'm going to focus on that term and ask what you mean by necessitate? Do you mean society will inevitably fall apart without it? Obviously no one is going to make that argument. Do you mean just that there are potentially ways to try to approach the problem other than the government? That's a much weaker claim.

You can make the same argument with regard to many goods and services available in our complex modern world. It is equally flawed when applied to drugs as when applied to computers, cars or financial products. There is no reason why government has to play the role of gatekeeper, guardian and guarantor. In markets where government involvement is minimal other entities fill these roles quite effectively.

Really? Cars are extensively regulated. The failure of government regulation is seen by many as part of the current financial crisis. And computers don't (generally) have the same fatality concerns. What sort of institution would you replace the FDA with ?

Replies from: mattnewport
comment by mattnewport · 2010-06-03T00:19:59.204Z · LW(p) · GW(p)

Since the italics are yours, I'm going to focus on that term and ask what you mean by necessitate?

I mean that recognizing the existence of a perceived problem does not need to lead automatically to considering ways that government can 'fix' it. Drug prohibition is a classic example here. Many people see that there are problems associated with drug use and jump straight to the conclusion that therefore there is a need for government to regulate drug use. Not every problem requires a government solution. The mindset that all perceived problems with the world necessitate government convening a commission and devising regulation is what I am criticizing.

What sort of institution would you replace the FDA with ?

I'd abolish the FDA but I wouldn't replace it with anything. That's kind of the point. People would still want independent assessments of the safety and efficacy of medical treatments and without the crowding out effects of a government supported monopoly there would be strong incentives for private institutions to satisfy that demand. The fact that the nature of these institutions would not be designed in advance by government but would evolve to meet the needs of the market is a feature, not a bug.

Replies from: Houshalter
comment by Houshalter · 2010-06-03T02:20:11.099Z · LW(p) · GW(p)

I can kind of see how a private company could test and recommend/approve drugs, but what about snake oil salesmen? No, this system wouldn't work at all. Too many people would die or be seriously hurt for no reason.

Replies from: RomanDavis, mattnewport
comment by RomanDavis · 2010-06-03T02:36:42.472Z · LW(p) · GW(p)

True, and they wouldn't deserve it, but the truth is, there are a lot of really awesome, effective drugs that either take forever to get approved or don't get approved at all. This kills people, too.

And there are a lot of diseases, like bronchitis, that are easy for a person to diagnose in themselves, and know that they need an antibiotic, but it costs a hundred dollars to see a doctor to tell him what he already knows so he can get the medicine. And if that's the difference between him paying the rent or not... then, hypothetically, he dies because it goes untreated.

It's more a problem of political viability than anything else.

Replies from: Blueberry, cupholder
comment by Blueberry · 2010-06-03T05:30:32.070Z · LW(p) · GW(p)

And there are a lot of diseases, like bronchitis, that are easy for a person to diagnose in themselves, and know that they need an antibiotic,

And then they misdiagnose it, and antibiotic resistance increases, and then the antibiotic doesn't work when they need it. Or they diagnose it but miss a warning sign for another disease that a doctor would have noticed and tested for. No thanks, I'd much rather have people who have gone to medical school for years make that decision.

Replies from: thomblake
comment by thomblake · 2010-06-03T18:34:50.543Z · LW(p) · GW(p)

No thanks, I'd much rather have people who have gone to medical school for years make that decision.

And I'd much rather the decision to trust doctors be made by the people to be affected, rather than politicians (who have not done any school / training in particular).

Replies from: Blueberry
comment by Blueberry · 2010-06-03T18:39:58.463Z · LW(p) · GW(p)

The "people to be affected" are the general public, who suffer when contagious diseases aren't treated properly, and the general public makes these decisions through elected politicians. Also, these decisions are frequently based on recommendations by administrators with degrees in Public Health.

comment by cupholder · 2010-06-03T19:11:05.088Z · LW(p) · GW(p)

Some day I hope someone without an axe to grind does an in-depth study estimating how badly people would be harmed with drug regulation v. without drug regulation. I've seen the 'yeah but regulation causes harms' versus 'yeah but non-regulation causes harms' argument before, but I can't remember seeing anyone try to rigorously and comprehensively quantify the respective pros and cons of both courses of action and compare them.

Replies from: Douglas_Knight, RomanDavis, None
comment by Douglas_Knight · 2010-06-05T15:19:48.963Z · LW(p) · GW(p)

Have you looked at the academic studies on the topic? Are these the "axe-grinding" "arguments" that you dismiss? Simple comparisons of the US vs Europe during times when one was systematically more conservative seems to me to be a pretty reasonable methodology, but maybe you don't consider it "rigorous" or "comprehensive."

Maybe I'm overdoing the scare quotes, but those words were not helpful for me to identify what you have looked at, whether our disagreement is due to your ignorance or my lower standards.

Replies from: cupholder
comment by cupholder · 2010-06-05T16:37:06.978Z · LW(p) · GW(p)

Have you looked at the academic studies on the topic? Are these the "axe-grinding" "arguments" that you dismiss?

I have not, and my comment was not intended to slam whatever genuinely unbiased academic studies of the topic there are.

My comment's referring to the times I've been a bystander for arguments about the utility of pharmaceutical drug regulation, both in real life and online; a pattern I noticed is the arguers failing to cite hard, quantitative evidence or make an argument based on the numbers. At best they might cite particular claims from think tanks or other writers/groups with a political agenda that would plausibly bias the analysis.

So when I say I've seen the argument before, I'm not thinking of the abstract debate over whether what the FDA does is a net good or not, or particular pieces of academic work; I'm thinking of concrete occasions where people have started arguing about it in my presence, and the failure of the people I've witnessed arguing about it to present detailed evidence.

I haven't tried to research the topic in detail, so I don't know precisely what ground the academic studies cover. At any rate, I didn't mean to claim knowledge of the field and to imply that there aren't any. I genuinely do just mean that I haven't seen them, because laymen (including the parent posters in this subthread, at least so far) don't mention them when they argue about the issue. As I wrote before, I added the 'axe to grind' warning not as a preemptive slam on academics, but because I suspect there have already been some overtly partisan analyses of the subject, and I want to discourage people from suggesting them to me.

Simple comparisons of the US vs Europe during times when one was systematically more conservative seems to me to be a pretty reasonable methodology, but maybe you don't consider it "rigorous" or "comprehensive."

In this context, what I mean by 'rigorously and comprehensively' is that the analysis should satisfy basic standards for causal inference - all important confounding variables should be accounted for, and so on. For example, it would not be 'rigorous' to just collect a list of countries and compare the lifespan of those with an FDA-like administration with those that don't, because there are almost certainly confounding variables involved, and it's not clear that lifespan is a suitably relevant outcome variable. We might pick a more suitable outcome variable and use a regression to try controlling for one or two confounders, but we still wouldn't have a 'comprehensive' analysis without a list of all of the significant confounding variables, and a way to adjust for them or vitiate their effects.

One rigorous and comprehensive way to evaluate the question, although not a very realistic one, would be a global randomized trial. We might agree on a set of outcome variables, carefully measure them in every country in the world, randomly assign half the countries to having an FDA and the other half no FDA, and then come back after a pre-agreed number of years to re-measure the outcome variables and check for an effect in the countries with an FDA.

Now of course we don't have that dataset, so if we want evidence we have to make do with what we have, perhaps by comparing the US and Europe as you mention. That could be a pretty good way to test for a positive/negative effect of drug regulation, or it could be a pretty bad way, but I'd need to hear more details about the precise method to say.

Maybe I'm overdoing the scare quotes, but those words were not helpful for me to identify what you have looked at, whether our disagreement is due to your ignorance or my lower standards.

I'm not sure what you believe we're disagreeing about. I think you might have gotten the wrong impression of my intentions - I wasn't trying to score points off RomanDavis or Houshalter or mattnewport or anyone else in this thread, or imply that drug regulation is obviously good/bad and only an axe grinder could think otherwise. At any rate, if you have citations for academic studies you think I'd find informative, I'd like them.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-06-05T17:26:25.667Z · LW(p) · GW(p)

The disagreement was just that you seemed to say (by the phrasing "some day") that there had not been any good work on the subject.

The only such paper I remember reading is Gieringer. That link is to a whole bibliography, compiled by people with a definite slant, so I can't guarantee that there aren't contradictory papers with equally good methodology.

I genuinely do just mean that I haven't seen them, because laymen (including the parent posters in this subthread, at least so far) don't mention them when they argue about the issue.

I'm reminded of Bruce Bueno de Mesquita, who gives the impression of having fabricated the papers assessing him, but they're real.

Replies from: cupholder
comment by cupholder · 2010-06-05T18:12:49.046Z · LW(p) · GW(p)

Fair enough. Thanks for the Gieringer 1985 cite; it's 25 pages long so I haven't read it yet, but skimming through it I see a couple of quantitative tables, which is a good sign, and that it was published in the Cato Journal, which is not such a good sign. But it's something!

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-06-05T20:08:51.551Z · LW(p) · GW(p)

published in the Cato Journal, which is not such a good sign

I said my standards were lower. My point was that your original comment could be taken as implying you had read this and dismissed it.

Replies from: cupholder
comment by cupholder · 2010-06-05T20:26:44.936Z · LW(p) · GW(p)

I had noticed that you said that. I was originally not going to draw attention to the paper's source, but it occurred to me that someone might then have asked me whether I was aware of the paper's source, referring to my earlier claim that I wanted to discourage people from offering me overtly partisan analyses. So I decided to pre-empt that possible confusion/accusation by acknowledging the paper's origin from a libertarian-leaning journal.

comment by RomanDavis · 2010-06-03T19:34:58.106Z · LW(p) · GW(p)

Yeah, I was thinking of bringing up examples myself, but because of the various axes involved, bringing one up might not be terribly effective.

Another person (I think it was cousin_it) brought up the idea that it should come down to a bet. If we bet ten dollars and one of us kept arguing after the evidence was in and the bet was lost, all it would come down to is, "If you're so smart, why aren't you rich?"

EDIT: Also, someone went and downvoted the crap out of me. Who'd I make mad, and why?

Replies from: cupholder, mattnewport
comment by cupholder · 2010-06-03T19:57:44.975Z · LW(p) · GW(p)

Yeah, I was thinking of bringing up examples myself, but because of the various axes involved, bringing one up might not be terribly effective.

Yup. I thought of the 'without an axe to grind' proviso because I expect some politically-aligned think tanks out there have already published pamphlets or reports arguing one side or the other, but I wouldn't be inclined to take their claims very seriously.

EDIT: Also, someone went and downvoted the crap out of me. Who'd I make mad, and why?

Whoever did it, it's not just you.

comment by mattnewport · 2010-06-03T19:50:35.250Z · LW(p) · GW(p)

EDIT: Also, someone went and downvoted the crap out of me. Who'd I make mad, and why?

Me too. Around 30 points in around 10 minutes. I'm flattered.

Replies from: thomblake, RobinZ
comment by thomblake · 2010-06-03T19:52:59.462Z · LW(p) · GW(p)

My guess for all this is that someone found the whole conversation off-topic and mind-killing. Which seems to justify downvotes.

comment by RobinZ · 2010-06-03T19:52:30.108Z · LW(p) · GW(p)

Did either of you perhaps post in any of the threads replying to billswift?

Replies from: mattnewport
comment by mattnewport · 2010-06-03T19:55:26.245Z · LW(p) · GW(p)

Yes. I think someone downvoted extra comments elsewhere for effect based on the magnitude and speed of the karma hit.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-03T19:59:22.574Z · LW(p) · GW(p)

Yes, it looks like almost all the comments related to the government policy issue got downvoted. This is annoying in that I, at least, thought it was a calm, rational discussion which was showing that political discussion isn't necessarily mind-killing. I'm particularly perplexed by the downvoting of comments which consisted either of interesting non-standard ideas or of evidence for claims.

Replies from: mattnewport
comment by mattnewport · 2010-06-03T20:00:53.806Z · LW(p) · GW(p)

It must be a relatively high karma user given the fact that downvotes are limited by total karma. Perhaps they'd care to explain themselves.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-03T20:02:42.491Z · LW(p) · GW(p)

The downvote limit is 4 times your karma, yes? So if the total downvote for the thread was around 60 points, the individual would only need around 15 karma.

Replies from: thomblake, mattnewport
comment by thomblake · 2010-06-03T20:05:49.533Z · LW(p) · GW(p)

The downvote limit is 4 times your karma, yes?

Yes. It was originally equal to your karma but some of us had already spent that many downvotes and the point of the policy wasn't to stop established users from being able to downvote.

comment by mattnewport · 2010-06-03T20:03:48.913Z · LW(p) · GW(p)

The downvote limit is 4 times your karma, yes?

I'm not sure. I've never actually run into it.

comment by [deleted] · 2010-06-05T14:52:09.234Z · LW(p) · GW(p)

I hope someone without an axe to grind does this; if there are axes involved, it's much more likely to turn out supporting whatever the person thought before, i.e. not strongly correlated with how people are hurt or helped by regulation.

comment by mattnewport · 2010-06-03T03:44:40.666Z · LW(p) · GW(p)

The FDA doesn't prevent snake oil salesmen: various kinds of alternative medicine escape regulation. It seems regulation primarily applies to treatments that might actually have a hope in hell of working.

Are you considering the other side of the ledger? The people deprived of potentially life saving new treatments because they have not yet been approved? The innovative new medical companies that never get started because of the barriers to entry formed by the regulatory agencies and the big pharmaceutical companies who know how to navigate their rules? The new treatments for rare diseases that are never developed because the market is too small to justify the costs of gaining regulatory approval? The effective anti-venoms already used successfully in other countries that are not available to treat rare snake bites in the US because FDA approval is too onerous?

The FDA doesn't even have a perfect track record achieving its stated aims. As with any large government agency, private alternatives would be more cost effective and better at the job.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-03T04:12:52.617Z · LW(p) · GW(p)

Alternative medicine used to be much more closely regulated. A lot of these products were more closely regulated until lobbying by the alternative medicine industry led to the Dietary Supplement Health and Education Act of 1994, which made it much harder for the FDA to regulate them.

comment by Blueberry · 2010-06-02T22:43:48.092Z · LW(p) · GW(p)

So how do you feel about the government regulating what credit card issuers or insurers are allowed to offer? I see this as similar to the carry-on luggage issue. I don't want credit card companies to be allowed to offer misleading rates or unfair policies like paying off the lowest interest rates first. I'm not sure about carry-on luggage, but what about charging for a bathroom? That seems clearly within the scope of legitimate concerns of government, given that air travel is already heavily regulated.

Replies from: JoshuaZ, RomanDavis
comment by JoshuaZ · 2010-06-02T22:47:17.713Z · LW(p) · GW(p)

That seems clearly within the scope of legitimate concerns of government, given that air travel is already heavily regulated

This argument doesn't work. Just because you already have heavy regulation doesn't justify having more regulation. Also, many libertarians would say that the solution should be to simply remove much of the heavy regulation of air travel.

Replies from: Blueberry
comment by Blueberry · 2010-06-02T23:00:03.720Z · LW(p) · GW(p)

This argument doesn't work. Just because you already have heavy regulation doesn't justify having more regulation.

Well, it doesn't by itself justify more regulation, but it makes additional regulation less burdensome. If trains were not regulated and planes were, it might be reasonable to add regulation of bathrooms to plane regulations, but not to introduce regulation to trains so we could regulate bathrooms.

Also, many libertarians would say that the solution should be to simply remove much of the heavy regulation of air travel.

Fair enough.

comment by RomanDavis · 2010-06-02T22:52:05.175Z · LW(p) · GW(p)

I think there are some credit card practices that could be framed as fraud (You can change my interest rate without telling me? And without telling me you won't tell me? Seriously? What the hell?) so the government would have to be involved even in a strict libertarian society, but I never like where this is going.

Libertarianism, as a political concept, was invented by David Nolan to suit his political theories. He had a chart, and a quarter of it covers various types of libertarians.

If you like more social liberties than the American center, and more economic liberties, and are willing to forgo some amount (even a small amount) of government services and protections to achieve them, then you are somewhere on that quarter of the map. You don't necessarily have to be way off in the corner with the anarchists or defend every idea they have.

comment by Houshalter · 2010-06-03T02:10:33.152Z · LW(p) · GW(p)

So basically it all comes down to "Should the government worry about this or not?" Are there any good heuristics or principles for determining whether or not the government should regulate something? I'm not upset at the system for being wrong per se, but I am upset about it being so inconsistent and unreliable.

Replies from: mattnewport
comment by mattnewport · 2010-06-03T02:16:06.424Z · LW(p) · GW(p)

Are there any good heuristics or principles for determining whether or not the government should regulate something?

A good heuristic is "no it shouldn't". Whether there are any exceptions to this rule is an open question.

comment by realitygrill · 2010-06-05T05:21:11.389Z · LW(p) · GW(p)

I guess I would say I don't know.

Have you read Taleb's The Black Swan? He has a counterfactual story that is extremely similar (though it uses 9/11); basically there aren't any (even negative) incentives for politicians to push such policies through until after some huge disaster happens.

Replies from: James_K
comment by James_K · 2010-06-05T07:22:28.813Z · LW(p) · GW(p)

I haven't read Taleb, but I have heard a few interviews of him where he got the opportunity to outline his ideas.

I think politicians in general have a tendency to overreact to adverse events, and often by doing things that involve signals of reassurance (such as security theatre) rather than steps to fix the problem. I'm open to the possibility that they don't do enough to prevent problems, but as a rule governments are very risk averse entities, usually preoccupied with things that might go wrong.

comment by RobinZ · 2010-06-02T18:31:11.024Z · LW(p) · GW(p)

In what way is this a useful response to James_K? What do you believe James_K is doing that he shouldn't be doing (or vice-versa), such that your comment is likely to lead him toward better action?

comment by Houshalter · 2010-06-02T22:22:43.418Z · LW(p) · GW(p)

What if there is evidence for God? Why do you assume there isn't? Plus, quote mining is a fallacy which doesn't prove anything. Make an argument.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-02T22:30:30.660Z · LW(p) · GW(p)

What if there is evidence for God? Why do you assume there isn't?

Note that the general Less Wrong consensus is that religion in almost all forms is very wrong. It is a safe operating assumption to work with on LW, in that you don't need to go through the logic every time to justify it. It probably isn't as safe a starting point as, say, the wrongness of a flat earth or the wrongness of phlogiston, but it is pretty safe.

Replies from: Houshalter
comment by Houshalter · 2010-06-02T23:07:08.408Z · LW(p) · GW(p)

Maybe, but I am a Christian and I don't agree that religion is "wrong".

Replies from: None, orthonormal, RobinZ, CronoDAS
comment by [deleted] · 2010-06-03T04:58:24.780Z · LW(p) · GW(p)

This is not a site that devotes a whole lot of space to debating religion. People aren't getting mean so much as they're using shorthand. It can save time, for atheists, not to explain why they're atheists over and over. Hence the links. The sequences are a pretty good expression of why the majority around here is atheist. They're the expansion of the shorthand. If you're anything like me, reading them will probably move some of your mental furniture around; even if not, you'll talk the lingo better.

comment by orthonormal · 2010-06-03T01:11:22.498Z · LW(p) · GW(p)

Chill with the downvotes, guys. Houshalter's new, looks to be participating well in other threads, and is just stating a belief for the first time.

Houshalter, this is a tangent to the current... tangent. It might be better to discuss theism in its own Open Thread comment or within a past discussion on the topic.

On a related note, have you looked through the Mysterious Answers to Mysterious Questions sequence yet? Not to throw a short book's worth of stuff at you, but there's a lot of stuff taken for granted around here when discussing theism, the supernatural, and evidence for such.

Replies from: Houshalter
comment by Houshalter · 2010-06-03T02:39:54.502Z · LW(p) · GW(p)

Chill with the downvotes, guys. Houshalter's new, looks to be participating well in other threads, and is just stating a belief for the first time.

Uh... thanks?

Houshalter, this is a tangent to the current... tangent. It might be better to discuss theism in its own Open Thread comment or within a past discussion on the topic.

I have debated my religion before, but ironically this looks like a bad place to make a stand, because everyone's against me and there's a karma system.

On a related note, have you looked through the Mysterious Answers to Mysterious Questions sequence yet? Not to throw a short book's worth of stuff at you, but there's a lot of stuff taken for granted around here when discussing theism, the supernatural, and evidence for such.

D: GAHHH!!! D: Hundreds of links to pages that contain hundreds of more links. D:

Replies from: Vladimir_Nesov, orthonormal, cupholder, Sniffnoy, Emile
comment by Vladimir_Nesov · 2010-06-03T04:33:55.114Z · LW(p) · GW(p)

I have debated my religion before, but ironically this looks like a bad place to make a stand, because everyone's against me and there's a karma system.

Don't take the adversarial attitude: "taking a stand", "against me". This leads to a broken mode of thought. Just study the concepts that will allow you to cut through semantic stopsigns and decide for yourself. Taking advice on an efficient way to learn may help as well.

comment by orthonormal · 2010-06-03T07:27:10.194Z · LW(p) · GW(p)

Chill with the downvotes, guys. Houshalter's new, looks to be participating well in other threads, and is just stating a belief for the first time.

Uh... thanks?

Occasionally someone will show up here and try to flame-bait us, not really arguing (or not responding to counterarguments) but just trying to provoke people with contrary opinions. (This is, after all, the Internet.) It's obvious from your other contributions that you're not doing that, but someone who'd only seen your two comments above might have wrongly assumed otherwise. I was explaining why the downvotes should be taken back, as it appears they were.

By the way, the mainstream view among Less Wrong readers is that any evidence we've seen for theism is far too weak to overcome the prior improbability of such a sneakily complex hypothesis (and that much of the evidence that we might expect from such a hypothesis is absent); but there are a few generally respected theists around here. The community norm on theism has more to do with how people conduct themselves in disputes than with the fact of disagreement— but you should be prepared for a lot of us to talk amongst ourselves as if atheism is a settled question, and not be too offended by that. (Consider it a role reversal from an atheist's social interactions with typical Americans.)

I've enjoyed my exchanges with you so far, and look forward to more!

Replies from: Houshalter
comment by Houshalter · 2010-06-03T15:38:18.643Z · LW(p) · GW(p)

I was explaining why the downvotes should be taken back, as it appears they were.

I recently found out that you can't downvote someone past zero, so that must be why they stopped :)

I might just delete the post anyways. Ah well.

Replies from: orthonormal, JoshuaZ
comment by orthonormal · 2010-06-03T19:05:25.128Z · LW(p) · GW(p)

It's considered poor form to delete a post or comment on LW, since it makes it impossible to tell what the replies were talking about. (Also, it doesn't restore the karma.)

What's preferable, if one regrets a comment, is to edit it in a manner that keeps it clear what the original comment was, or to add a disclaimer. Here's one example— note that if cousin_it had just deleted the post, it would be more difficult to understand the comments on it.

Or a fake example:

Oh yeah, well your MOM coherently extrapolated my volition last night

should probably be edited to

Oh yeah, well (inappropriate joke, sorry)

if the content is to be removed.

Replies from: Blueberry
comment by Blueberry · 2010-06-03T19:12:49.702Z · LW(p) · GW(p)

I enjoyed that example. I would hope it wouldn't get deleted.

comment by JoshuaZ · 2010-06-03T17:26:21.542Z · LW(p) · GW(p)

It might be better to just spend some time reading the sequences. A lot of people here like myself disagree with the LW consensus views on a fair number of issues, but we have a careful enough understanding of what those consensus views are to know when to be explicit about what assumptions and what methods of reasoning we are using.

comment by cupholder · 2010-06-03T05:38:49.911Z · LW(p) · GW(p)

I have debated my religion before, but ironically this looks like a bad place to make a stand, because everyone's against me and there's a karma system.

Awwwww, I'm not against you. I just think you're incorrect.

If you post on Less Wrong a lot, you'll eventually say something several posters will disagree with, and some of them will say so. Try not to interpret it as a personal attack - taking it personally makes it harder to rationally evaluate new arguments and evidence.

I wouldn't expect the karma system to be much of a problem, by the way. If I remember rightly, your karma can't go below 0, so you can continue posting comments even if it falls to zero.

Replies from: Houshalter
comment by Houshalter · 2010-06-03T13:54:52.662Z · LW(p) · GW(p)

It was at 20 yesterday; now it's at zero.

Replies from: cupholder
comment by cupholder · 2010-06-03T17:14:49.746Z · LW(p) · GW(p)

So it is. On the bright side, it looks like your karma loss is from getting downvoted on quite a lot of comments (about a dozen over the past 4 days, it looks like) rather than arguing about God as such. And I see you can still post. :-)

Replies from: Blueberry
comment by Blueberry · 2010-06-03T18:07:57.179Z · LW(p) · GW(p)

I downvoted several of Houshalter's comments for containing multiple spelling and punctuation errors, though I'd upvote a well-written defense of theism.

comment by Sniffnoy · 2010-06-03T02:45:18.719Z · LW(p) · GW(p)

D: GAHHH!!! D: Hundreds of links to pages that contain hundreds of more links. D:

Hm, had you not noticed the sequences yet? The "sequences" button is next to the "about" button. There's quite a few more of them. :)

comment by Emile · 2010-06-03T09:22:05.047Z · LW(p) · GW(p)

I have debated my religion before, but ironically this looks like a bad place to make a stand, because everyone's against me and there's a karma system.

You're probably getting most of the downvotes because, as orthonormal said, you're going off on a tangent to the current tangent, and with a somewhat adversarial stance.

comment by RobinZ · 2010-06-03T00:51:16.718Z · LW(p) · GW(p)

I think the essays most directly related to the rectitude of religion are "Religion's Claim to be Non-Disprovable", which CronoDAS linked, and "Atheism = Untheism + Antitheism". That said, the sort of thinking that led most of us to reject religion is illuminated to an extent in the Mysterious Answers to Mysterious Questions and Reductionism sequences.

comment by mkehrt · 2010-06-02T22:19:15.505Z · LW(p) · GW(p)

Forgive me if this is beating a dead horse, or if someone brought up an equivalent problem before; I didn't see such a thing.

I went through a lot of comments on dust specks vs. torture. (It seems to me like the two sides were miscommunicating in a very specific way, which I may attempt to make clear at some point.) But now I have an example that seems to be equivalent to DSvsT, is easily understandable via my moral intuition, and gives the "wrong" (i.e., not purely utilitarian) answer.

Suppose I have ten people and a stick. The appropriate infinitely powerful theoretical being offers me a choice. I can hit all ten of them with a stick, or I can hit one of them nine times. "Hitting with a stick" has some constant negative utility for all the people. What do I do?

This seems to me to be exactly dust specks vs. torture scaled down to humanly intuitable scales. I think the obvious answer is to hit all the people once. Examining my intuition tells me that this is because I think the aggregation function for utility is different across different people than across one person's possible futures. Specifically, my intuition tells me to maximize, across people, the minimum expected utility over an individual's future.

So, is there a name for this position?

Do people think my example is equivalent to DSvsT?

Do people get the same or different answer with this question as they do with DSvsT?

Replies from: Unnamed, None, Nick_Tarleton, Blueberry, Blueberry, RomanDavis, snarles
comment by Unnamed · 2010-06-03T03:38:03.881Z · LW(p) · GW(p)

DSvsT was not directly an argument for utilitarianism, it was an argument for tradeoffs and quantitative thinking and against any kind of rigid rules, sacred values, or qualitative thinking which prevents tradeoffs. For any two things, both of which have some nonzero value, there should be some point where you are willing to trade off one for the other - even if one seems wildly less important than the other (like dust specks compared to torture). Utilitarianism provides a specific answer for where that point is, but the DSvsT post didn't argue for the utilitarian answer, just that the point had to be at less than 3^^^3 dust specks. You would probably have to be convinced of utilitarianism as a theory before accepting its exact answer in this particular case.

The stick-hitting example doesn't challenge the claim about tradeoffs, since most people are willing to trade off one person getting hit multiple times with many people each getting hit once, with their choice depending on the numbers. In a stadium full of 100,000 people, for instance, it seems better for one person to get hit twice than for everyone to get hit once. Your alternative rule (maximin) doesn't allow some tradeoffs, so it leads to implausible conclusions in cases like this 100,000x1 vs. 1x2 example.
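The contrast between the aggregate rule and maximin can be made concrete with a few lines of Python. This is a sketch with made-up per-hit costs (each hit assumed to cost 1 utility unit), not anything computed in the original comment:

```python
# Toy comparison of an aggregate (utilitarian) rule vs. maximin for the
# 100,000-person stadium example. Each hit is assumed to cost 1 utility unit.
N = 100_000

# Each outcome is a list of per-person utilities (0 = unhurt).
everyone_hit_once = [-1] * N
one_person_hit_twice = [-2] + [0] * (N - 1)

def total(utilities):
    # Aggregate score: sum over everyone.
    return sum(utilities)

def maximin(utilities):
    # Score by the worst-off person.
    return min(utilities)

# The aggregate rule prefers one person taking both hits...
assert total(one_person_hit_twice) > total(everyone_hit_once)
# ...while maximin prefers the evenly spread pain, however many people share it.
assert maximin(everyone_hit_once) > maximin(one_person_hit_twice)
```

Maximin never lets the 99,999 unhurt spectators count for anything, which is exactly the blocked tradeoff described above.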

comment by [deleted] · 2010-06-02T22:36:03.802Z · LW(p) · GW(p)

I don't think maximising the minima is what you want. Suppose your choice is to hit one person 20 times, or five people 19 times each. Unless your intuition is different from mine, you'll prefer the first option.

comment by Nick_Tarleton · 2010-06-03T05:14:41.332Z · LW(p) · GW(p)

"Hitting with a stick" has some constant negative utility for all the people.

I don't think you can justifiably expect to be able to tell your brain something this self-evidently unrealistic, and have it update its intuitions accordingly.

comment by Blueberry · 2010-06-02T22:28:32.494Z · LW(p) · GW(p)

I went through a lot of comments on dust specks vs. torture. (It seems to me like the two sides were miscommunicating in a very specific way, which I may attempt to make clear at some point.)

Oh, and I'd love to hear what you mean about this.

comment by Blueberry · 2010-06-02T22:24:58.746Z · LW(p) · GW(p)

There's one difference, which is that the inequality of the distribution is much more apparent in your example, because one of the options distributes the pain perfectly evenly. If you value equality of distribution as worth more than one unit of pain, it makes sense to choose the equal distribution of pain. This is similar to economic discussions about policies that lead to greater wealth, but greater economic inequality.

comment by RomanDavis · 2010-06-02T22:30:00.750Z · LW(p) · GW(p)

I think the point of Dust Specks vs. Torture was scope failure. Even allowing for some sort of "negative marginal utility", once you hit a wacky number like 3^^^3, it doesn't matter. A .000001 negative utility point multiplied by 3^^^3 is worse than anything, because 3^^^3 is wacky huge.

For the stick example, I'd say it would have to depend on a lot of factors about human psychology and such, but I think I'd hit the one. Marginal utility tends to go down for a product, and I think that the shock of repeated blows would be less than the shock of the one against ten separate people.

I think your opinion basically is an appeal to egalitarianism, since you expect negative utility to yourself from an unfair world where one person gets something that ten other people did not, for no good or fair reason.

Replies from: NancyLebovitz, Blueberry
comment by NancyLebovitz · 2010-06-03T01:12:29.954Z · LW(p) · GW(p)

I think you're mistaken about the marginal utility-- being hit again after you've already been injured (especially if you're hit on the same spot) is probably going to be worse than the first blow.

Marginal disutility could plausibly work in the opposite direction from marginal utility.

Each 10% of your money that you lose impacts your quality of life more. Each 10% of money that you gain impacts your quality of life less. There might be threshold effects for both, but I think the direction is right.

Replies from: RomanDavis
comment by RomanDavis · 2010-06-03T01:38:28.996Z · LW(p) · GW(p)

I was thinking more along the lines of scope failure: if someone said you were going to be hit 11 times, would you really expect it to feel exactly 110% as bad as being hit ten times?

But yes, from a traditional economics point of view, your post makes a hell of a lot more sense. Upvoted.

comment by Blueberry · 2010-06-02T22:46:43.709Z · LW(p) · GW(p)

Marginal utility tends to go down for a product, and I think that the shock of repeated blows would be less than the shock of the one against ten separate people.

Part of the assumption of the problem was that hitting with a stick has some constant negative utility for all the people.

Replies from: None, RomanDavis
comment by [deleted] · 2010-06-02T23:01:32.659Z · LW(p) · GW(p)

Part of the assumption of the problem was that hitting with a stick has some constant negative utility for all the people.

It's always hard to think about this sort of thing. I read that in the original problem, but then I ended up thinking about actual hitting people with sticks when deciding what was best. Is there anything in the archives like The True Prisoner's Dilemma but for giving an intuitive version of problems with adding utility?

comment by RomanDavis · 2010-06-02T22:55:36.042Z · LW(p) · GW(p)

Then it depends. If you're a utilitarian, it is still better to hit the guy nine times than to hit ten people once each.

If you allow some ideas about the utility of equality, then things get more complicated. That's why I think most people reject the simple math that 9 < 10.

comment by snarles · 2010-06-03T14:53:26.305Z · LW(p) · GW(p)

I'd analyze your question this way. Ask any one of the ten people which they would prefer: A) to get hit once, or B) to have a 1/10 chance of getting hit 9 times.

Assuming rationality and constant disutility of getting hit, every one of them would choose B.
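Spelled out as expected utilities, with an arbitrary cost of 1 per hit standing in for the problem's constant-disutility assumption:

```python
# Each person's expected disutility under the two offers.
hit_cost = -1.0  # constant cost per hit, per the problem's assumption

option_a = 1 * hit_cost             # A: get hit once for certain
option_b = (1 / 10) * 9 * hit_cost  # B: a 1/10 chance of nine hits

# B's expected disutility (-0.9) is milder than A's (-1.0),
# so a risk-neutral person prefers B.
assert option_b > option_a
```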

comment by taw · 2010-06-03T04:56:46.813Z · LW(p) · GW(p)

I have a theory: super-smart people don't exist; it's all due to selection bias.

It's easy to think someone is extremely smart if you've only seen a sample of their most insightful thinking. But every time that happened to me and I found that such a promising person had a blog or something like that, it universally took very little time to find something terribly brain-hurtful they'd written there.

So the null hypothesis is: there's a large population of fairly-smart-but-nothing-special people who think and publish their thoughts a lot. Because the best thoughts get distributed, and average and worse thoughts don't, it's very easy from such small biased samples to believe some of them are far smarter than the rest, but their averages are pretty much the same.

(Feel free to replace "smart" with "rational"; the result is identical.)

Replies from: None, snarles, NancyLebovitz, dyokomizo, Jack
comment by [deleted] · 2010-06-03T05:09:08.198Z · LW(p) · GW(p)

I was thinking something similar just today:

Some people think out loud. Some people don't. Smart people who think out loud are perceived as "witty" or "clever." You learn a lot from being around them; you can even imitate them a little bit. They're a lot of fun. Smart people who don't think out loud are perceived as "geniuses." You only ever see the finished product, never their thought processes. Everything they produce is handed down complete as if from God. They seem dumber than they are when they're quiet, and smarter than they are when you see their work, because you have no window into the way they think.

In my experience, there are far more people who don't think out loud in math than in less quantitative fields. This may be part of why math is perceived as so hard; there are all these smart people who are hard to learn from, because they only reveal the finished product and not the rough draft. Rough drafts make things look feasible. Regular smart people look like geniuses if they leave no rough drafts. There may really be people who don't need rough drafts in the way that we mundanes do -- I've heard of historical figures like that, and those really are savants -- but it's possible that some people's "genius" is overstated just because they're cagey about expressing half-formed ideas.

Replies from: cousin_it, NancyLebovitz
comment by cousin_it · 2010-06-03T10:04:10.794Z · LW(p) · GW(p)

You may be right about math. Reading the Polymath research threads (like this one) made me aware that even Terry Tao thinks in small and well-understood steps that are just slightly better informed than those of the average mathematician.

comment by NancyLebovitz · 2010-06-03T11:03:58.807Z · LW(p) · GW(p)

I Am a Strange Loop by Hofstadter may be of interest-- it's got a lot about how he thinks as well as his conclusions.

comment by snarles · 2010-06-03T14:45:58.664Z · LW(p) · GW(p)

I'm not a psychologist but I thought I could improve on the vagueness of the original discussion.

There are a few factors which determine "smartness" (or potential for success):

  1. Speed. Having faster hardware.

  2. Pattern Recognition. Being better at "chunking".

  3. Memory.

  4. Creativity. (="divergent" thinking.)

  5. Detail-awareness.

  6. Experience. Having incorporated many routines into the subconscious thanks to extensive practice.

  7. Knowledge. (Quality is more important than quantity.)

The first five traits might be considered part of someone's "talent." Experience and knowledge, which I'll group together as "training", must be gained through hard work. Potential for success is determined by a geometric (rather than additive) combination of talent and training: that is, roughly,

potential for success = talent * training

All this math, of course, is not remotely intended to be taken at face value, but it's merely the most efficient way to make my point.

The "super-smart" start life with more talent than average. The rule of the bell curve holds, so they generally do not have an overwhelming cognitive advantage over the average person. But they have enough talent to justify investing much more of their resources into training. This is because a person with 15 talent will gain 15 success for every unit of time they put into training, while a unit of training is worth 17 success for a person with 17 talent. The less time you have to spend, the more time costs, so all other things being equal, the person with more talent will put more time into training. Suppose the person with 15 talent puts 100 units of time into training, and the person with 17 talent puts 110 units of time into training. Then:

person with 15 talent * 100 training => 1500 success

person with 17 talent * 110 training => 1870 success

Which is 25% more success for only 13% more talent.

There's probably some more formal work done along these lines, I'm not an economist either.
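A minimal sketch of the comment's multiplicative model, with the arithmetic computed directly (the products 15 * 100 and 17 * 110 come out to 1500 and 1870; the quoted ratios, about 25% more success for about 13% more talent, are what matter):

```python
# Multiplicative talent-training model from the comment above,
# with the toy arithmetic spelled out.
def success(talent, training):
    return talent * training

low = success(15, 100)    # 15 * 100 = 1500
high = success(17, 110)   # 17 * 110 = 1870

assert round(high / low - 1, 2) == 0.25   # ~25% more success
assert round(17 / 15 - 1, 2) == 0.13      # for ~13% more talent
```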

comment by NancyLebovitz · 2010-06-03T11:02:43.197Z · LW(p) · GW(p)

If you're interpreting "super-smart" to mean always right, or at least reasonable, and thus never severely wrong-headed, I think you're correct that no one like that exists, but it seems like a rather comic bookish idea of super-smartness.

Also, I have no idea how good your judgment is about whether what you call brain-hurtful is actually ideas I'd think were egregiously wrong.

I think there are a lot of folks smart enough to be special people-- those who come up with worthwhile insights frequently.

And even if it's just a matter of generating lots of ideas and then publishing the best, recognizing the best is a worthwhile skill. It's conceivable that idea-generation and idea-recognizing are done by two people who together give the impression of one person who's smarter than either of them.

comment by dyokomizo · 2010-06-07T00:59:33.591Z · LW(p) · GW(p)

How would you describe the writing patterns of super-smart people? Similarly, what would meeting, talking with, or debating them feel like?

Replies from: taw
comment by taw · 2010-06-08T18:09:55.415Z · LW(p) · GW(p)

I think my comment was rather vague, and people aren't sure what I meant.

These are all my impressions; as far as I can tell, the evidence for all this is rather underwhelming. I'm writing this more to explain my thinking than to "prove" anything.

It seems to me that people come in different levels of smartness. There are some people with all sorts of problems that make them incapable of even the human normal, but let's ignore them entirely here.

Then there are normal people who are pretty much incapable of original, highly insightful thought, critical thinking, rationality, etc. They can usually do OK in normal life, and can even be quite capable in their narrow area of expertise, but that's about it. They often make the most basic logic mistakes.

Then there are "smart" people who are capable of original insight and don't get too stupid too often. IQ tests don't measure exactly the same thing, but they are capable of distinguishing these people from normal people reasonably well. With smart people, both their top performance and their average performance are a lot better than with average people. In spite of that, all of them very often fail basic rationality in particular domains they feel too strongly about.

Now I'm conflicted about whether people who are as far above "smart" as "smart" is above normal really exist. A canonical example of such a person would be Feynman - from my limited information he seems just ridiculously smart. Eliezer seems to believe Einstein is like that, but I have even less information about him. You can probably think of a few other such people.

Unfortunately there's a second observation: there's no reason to believe such people existed only in the past, or that they would have an aversion to blogging - so if super-smart people exist, it's fairly certain that some of them have blogs. And if such blogs existed, I would expect to have found a few by now.

And yet, every time it seemed to me that someone might be just that smart and I started reading their blog, it turned out very quickly that my estimate of their smartness suffered a rapid regression to the mean. All my super-smart candidates managed to say such horrible things, and were deaf to such obvious arguments, that I doubt any of them really qualifies.

So here's an alternative theory. No human alive is much smarter than the "normally smart". Out of the population of normally smart people, some simply seem much smarter than that, thanks to domain expertise, wit and writing skill, compatibility with my beliefs (or at least happening to avoid my red flags), higher productivity, luck, and so on.

I'm not trolling here, but consider Eliezer - I've picked the example because it's well known here. For some time he was exactly such a candidate, however:

  • he is ridiculously good at writing - just look at his fanfics, biasing my perception
  • he manages to avoid many of my red flags, biasing my perception
  • he has cultural background pretty similar to mine, biasing my perception
  • his writing style is very good at avoiding unwarranted certainty - this might seem more rational, but it's really a style issue. People like Eliezer and Tyler Cowen who write cautiously just seem far smarter to me than people like Robin Hanson who write in a "no disclaimers" style - even though I know very well that Robin is fully aware that the contrarian theories he proposes are usually wrong, and that there are usually other factors beyond the one he happens to be writing about at the moment - and he says so every time he's asked. Style differences bias my perception again.
  • Eliezer usually manages to avoid writing about things I know more than him about, so he usually has advantage of expertise, biasing my perception.
  • So it's safe to guess that however smart Eliezer is, I'm overestimating him - nearly all the biases point in the same direction.
  • On the other hand, he sometimes makes ridiculously wrong statements, like his calculation of the cost of cryonics, which was blatantly an order of magnitude off - I still don't know if this was a massive brain failure (this and other such failures disqualifying him as a super-smart candidate), or a conscious attempt at dark arts (in which case he might still qualify, but he loses points for other reasons).

On the other hand - and this provides some counter-evidence to my theory - let's look at myself. I publish anything, on my blog and in comments everywhere, that seems to have expected public value higher than zero, and very often I'm in a hurry, sleep-deprived, or otherwise far below my top performance. I exaggerate to get the point across very often. I write outside my area of expertise a lot, not uncommonly making severe mistakes. I'm not that good at writing (not to mention that English is not my first language), so things I say may be very unclear.

Unfortunately a normally smart person with my behaviour patterns, and a super-smart person with my behaviour patterns, would probably both fail my super-smartness test.

As you can see, I'm not even terribly convinced that my "super-smart people don't exist" theory is true. I would love to see if other people have good evidence or insight one way or the other.

Another by-the-way: very often a blatantly wrong belief might still be the least-wrong belief given someone's web of beliefs. Often it's easier to believe some minor wrong than to rebuild your whole belief system, risking far more damage, just to make something small come out correct. So perhaps even my test for being really, really wrong is not all that useful.

Replies from: cousin_it, CronoDAS, dyokomizo, Mitchell_Porter, JoshuaZ, xamdam, cupholder, DanielVarga
comment by cousin_it · 2010-06-11T12:34:13.654Z · LW(p) · GW(p)

A few people who blog frequently and fit my criteria for "super-smart": Terence Tao, Cosma Shalizi, John Baez.

Replies from: Risto_Saarelma, cupholder
comment by Risto_Saarelma · 2010-06-12T08:03:04.802Z · LW(p) · GW(p)

I was thinking of Tao as well. Also, Oleg Kiselyov for programming/computer science.

Replies from: cousin_it
comment by cousin_it · 2010-06-12T10:11:07.727Z · LW(p) · GW(p)

Yep, seconding the recommendation of Oleg. I read a lot of his writings and I'd definitely have included him on the list.

comment by cupholder · 2010-06-11T23:41:52.402Z · LW(p) · GW(p)

Interesting picks. I hadn't thought of Cosma Shalizi as 'super-smart' before, just erudite and with a better memory for the books and papers he's read than me. Will have to think about that...

comment by CronoDAS · 2010-06-09T04:57:04.121Z · LW(p) · GW(p)

Then, there are normal people who are pretty much incapable of original highly insightful thought, critical thinking, rationality etc. They can usually do OK in normal life, and can even be quite capable in their narrow area of expertise and that's about it. They often make the most basic logic mistakes etc.

I think you're giving the "normal person" too little credit.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-09T08:50:51.214Z · LW(p) · GW(p)

Agreed. If nothing else, refugee situations aren't that uncommon in human history, and the majority are able to migrate and adapt if they're physically permitted to do so.

comment by dyokomizo · 2010-06-11T10:39:37.911Z · LW(p) · GW(p)

It doesn't seem to me that you have an accurate description of what a super-smart person would do or say, other than matching your beliefs and providing insightful thought. For example, do you expect super-smart people to be proficient in most areas of knowledge, or even able to quickly grasp the foundations of different areas through super-abstraction? Would you expect them to be mostly unbiased? Your definition needs to be more objective and predictive, instead of descriptive.

Replies from: taw
comment by taw · 2010-06-11T12:00:43.250Z · LW(p) · GW(p)

I don't know what the correct super-smartness cluster is, so I cannot make an objective, predictive definition, at least not yet. There's no need to suffer from physics envy here - a lot of useful knowledge has this kind of vagueness. Nobody has managed to define "pornography" yet, and that's a far easier concept than "super-smartness". This kind of speculation might end up producing something useful with some luck (or not).

Even defining by example would be difficult. My canonical examples would be Feynman and Einstein - they seem far smarter than the "normally smart" people.

Let's say I collected a sufficiently large sample of "people who seem super-smart", got as accurate information about them as possible, and did a proper comparison between them and a background population of normally smart people (it's pretty easy to get good data on those, even through generic proxies like education, so that's the part I'm least worried about) in a way that would be robust even against a large number of data errors. That's about the best I can think of.

Unfortunately it will be of no use, as my sample will consist not of random super-smart people but of those super-smart people who are also sufficiently famous for me to know about them and be aware of their super-smartness. This isn't what I want to measure at all. And I cannot think of any reasonable way to separate these.

So the project is most likely doomed. It was interesting to think about this anyway.

comment by Mitchell_Porter · 2010-06-12T06:30:20.501Z · LW(p) · GW(p)

if super-smart people exist, it's fairly certain that some blogs of such people exist. And if such blogs existed, I would expect to have found a few by now.

Why would they blog? They would already know that most people have nothing of interest to tell them; and if they want to tell other people something, they can do it through other channels. If such a person had a blog, it might be for a very narrow reason, and they would simply refrain from talking about matters guaranteed to produce nothing but time-consuming stupidity in response.

comment by JoshuaZ · 2010-06-08T18:27:54.014Z · LW(p) · GW(p)

I'm not sure that the ability to have original thoughts is at all closely connected to the ability to think rationally. What makes you reach that conclusion?

Unfortunately there's a second observation - there's no reason to believe such people existed only in the past, or would have aversion to blogging - so if super-smart people exist, it's fairly certain that some blogs of such people exist. And if such blogs existed, I would expect to have found a few by now.

Have you tried looking at Terence Tao's blog? I think he fits your model, but it may be that many of his posts will be too technical for a non-mathematician. I'm not sure in general that blogging is a good medium for finding this sort of thing. It is easy to see if a blogger isn't very smart; it isn't clear to me that the medium allows one to easily tell if someone is very smart.

comment by xamdam · 2010-06-11T16:30:49.811Z · LW(p) · GW(p)

I doubt your disproof of super-smart people, for the very same reasons you do, perhaps with a greater weight assigned to those reasons.

I am also not sure about your definition of super-smart. Is an idiot savant (in math, say) super-smart? If you mean super-smart = consistently rational, I suspect nothing prevents people of normal-smart IQ from scoring (super) well there, trading off quantity of ideas for quality. There is a ceiling there, as good ideas get more complex and require more processing power, but given how crazy this world is, I suspect Norm Smart the Rationalist can score surprisingly highly on a relative basis.

As a data point you might want to look at the "Monster Minds" chapter of Feynman's "Surely You're Joking", since you mentioned Feynman. The chapter is about Einstein.

Finally, where is your blog? ;)

Replies from: taw
comment by taw · 2010-06-11T18:21:17.750Z · LW(p) · GW(p)

My blog is here.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-11T18:22:46.759Z · LW(p) · GW(p)

You can set that in "preferences".

comment by cupholder · 2010-06-08T20:58:16.739Z · LW(p) · GW(p)

So here's an alternative theory. No human alive is much smarter than the "normally smart".

Reminds me of 'My Childhood Role Model'.

As for the actual meat of your comment, I don't have much to add. 'Smart' is a slippery enough word that I'd guess one's belief in 'super-smart people' depends on how one defines 'smart.'

comment by DanielVarga · 2010-06-12T10:07:40.636Z · LW(p) · GW(p)

There is an important systematic bias you only mention tangentially in your analysis. Super-smart people (more generally, very successful people) don't feel they have to prove themselves all the time. (Especially if they are tenured. :) ) Many of them like to talk before they think. There are very smart people around them who quickly spot the obvious mistakes and laboriously complete the half-baked ideas. It is just more economical this way.

comment by Jack · 2010-06-05T09:44:23.845Z · LW(p) · GW(p)

Have you never had an in-person conversation with a super-smart person?

Also, hi folks, I'm back. It is surprisingly difficult to dive back into LW after leaving it for a few weeks.

Replies from: taw
comment by taw · 2010-06-05T14:32:45.976Z · LW(p) · GW(p)

Obviously no, as I don't believe in their existence.

Replies from: Jack
comment by Jack · 2010-06-05T17:15:19.540Z · LW(p) · GW(p)

My point is that I have trouble telling the difference between a fairly-smart and super-smart person by their writing for exactly the reason you mentioned. But in-person conversations give you access to the raw material and, if I take myself to be fairly smart there are definitely super-smart people out there. For example, I imagine if you had got to talking to Richard Feynman while he was alive you would have quickly realized he was a super-smart person.

Replies from: JoshuaZ, taw
comment by JoshuaZ · 2010-06-05T17:21:06.331Z · LW(p) · GW(p)

I'm not sure about this. I have a lot of trouble distinguishing between just smart, super-smart, and smart-and-an-expert-in-their-field. Distinguishing them based on quick interactions doesn't seem to come easily. I can distinguish people in my own field to some extent, but outside my own area it is much more difficult. Worse, there are serious cognitive biases in intelligence estimation. People are more likely to think of someone as smart if they share interests, and also more likely to think of someone as smart if they agree on issues. (Actually I don't have a citation for this one and a quick Google search doesn't turn it up; does someone else have a citation?) One could imagine that many people, on meeting a near copy of themselves, would conclude that the copy was a genius. That said, I'm pretty sure there are at least a few people out there who reasonably qualify as super-smart. But to some extent, that's based more on their myriad accomplishments than on any personal interaction.

comment by taw · 2010-06-06T01:14:09.177Z · LW(p) · GW(p)

I'd guess it's far, far easier to fool someone in person, with all the noise of primate social cues, so such information is worth a lot less than writing.

comment by NancyLebovitz · 2010-06-05T09:25:32.995Z · LW(p) · GW(p)

The Unreasonable Effectiveness of My Self-Exploration by Seth Roberts.

This is an overview of his self-experiments (to improve his mood and sleep, and to lose weight), with arguments that self-experimentation, especially on the brain, is remarkably effective in finding useful, implausible, low-cost improvements in quality of life, while institutional science is not.

There's a lot about status and science (it took Roberts 10 years to start getting results, and it's just too risky for scientists' careers to take on projects that last that long), and some intriguing theory at the end: that activities can be classified into exploitation (low risk, low reward) and exploration (high risk, high reward), and that people aren't apt to want to do exploration full time, so, if given a job that's full-time exploration (like institutional science), they'll turn most of it into exploitation.

comment by Seth_Goldin · 2010-06-02T17:57:48.188Z · LW(p) · GW(p)

Good article on the abuse of p-values: http://www.sciencenews.org/view/feature/id/57091/title/Odds_are,_its_wrong
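For a concrete illustration of one failure mode that kind of article tends to cover - multiple comparisons - here's a toy simulation; it's my own sketch, not an example from the article:

```python
# Toy simulation of multiple-comparisons abuse: test enough true null
# hypotheses and roughly 5% will look "significant" at p < 0.05 purely
# by chance. Uses a two-sided normal approximation for a fair coin.
import math
import random

random.seed(0)

def pvalue_fair_coin(n=100):
    """Two-sided normal-approximation p-value for n flips of a fair coin."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    z = abs(heads - n / 2) / math.sqrt(n / 4)
    return math.erfc(z / math.sqrt(2))

trials = 1000
false_positives = sum(pvalue_fair_coin() < 0.05 for _ in range(trials))
print(false_positives / trials)  # roughly 0.05, even though every null is true
```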

Replies from: ocr-fork, Daniel_Burfoot
comment by ocr-fork · 2010-06-03T01:39:46.227Z · LW(p) · GW(p)

Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos.

I winced.

comment by Daniel_Burfoot · 2010-06-02T21:01:01.821Z · LW(p) · GW(p)

I would like to see a top-level link post and discussion of this article (and maybe other related papers).

Replies from: cupholder, Seth_Goldin
comment by cupholder · 2010-06-03T20:42:41.544Z · LW(p) · GW(p)

I'm slightly tempted to, because that article is sloppy and unfocused enough that it annoys me, even though it's broadly accurate. (I mean, 'the standard statistical system for drawing conclusions is, in essence, illogical'? Really?) But I don't know what I'd have to add to it, really, other than basically whining 'it is so unfair!'

comment by Seth_Goldin · 2010-06-02T22:49:05.947Z · LW(p) · GW(p)

Yeah, that would be great, but I can't do it; I don't have the technical background, so I hereby delegate the task to someone else willing to write it up.

comment by Spurlock · 2010-06-01T19:46:09.033Z · LW(p) · GW(p)

I've been reading the Quantum Mechanics sequence, and I have a question about Many-Worlds. My understanding of MWI and the rest of QM is pretty much limited to the LW sequence and a bit of Wikipedia, so I'm sure there will be no shortage of people here who have a better knowledge of it and can help me.

My question is this: why are the Born Probabilites a problem for MWI?

I'm sure it's a very difficult problem, I think I just fail to understand the implications of some step along the way. FWIW, my understanding of the Born Probabilities mainly clicks here:

If a whole gigantic human experimenter made up of quintillions of particles,

Interacts with one teensy little atom whose amplitude factor has a big bulge on the left and a small bulge on the right,

Then the resulting amplitude distribution, in the joint configuration space,

Has a big amplitude blob for "human sees atom on the left", and a small amplitude blob of "human sees atom on the right".

And what that means, is that the Born probabilities seem to be about finding yourself in a particular blob, not the particle being in a particular place.

Firstly, I know "probability" is the wrong word, but I'm going to use it here, inadequately, in the same way it's normally (and just as inadequately) used to talk about QM. I sure hope that's okay, because this is a pain to nail down in English.

So... if a quantum event has a 30% chance of going LEFT and a 70% chance of going RIGHT (which you could observe without entangling yourself, for example by blasting a whole bunch of photons through slits and looking at the overall density pattern without measuring individual photons) (I think), then if you entangle yourself with a single instance of it, you'll have a 30% probability of observing LEFT and a 70% probability of observing RIGHT.

So why is this surprising? Obviously if we're just counting observers then we would expect a 50/50 probability spread, but I assume the problem isn't that naive. Obviously if the particles themselves exhibit a 30/70 preference, then we, being made of particles, should expect to do the same. Or... if the particles themselves can exist along a (pseudo)probability continuum, then why should we, the entangled, not expect to do the same? If those quarks are 70/30, then why aren't yours? Why should MWI necessarily imply the sudden creation of exactly 2 worlds with equal weight, as opposed to just dividing experience, locally and where necessary, into a weighted continuum?

I think I'll try this from another angle. MWI gets points for treating people/observers as particles, governed by the same laws as everything else. But are we really treating ourselves equally if we don't assume that we too follow this 30/70 split? It seems like this should be the default assumption, the one requiring no extra postulates: that we divide up not into discrete worlds but along a weighted continuum. Obviously it's easier on our typical conception of consciousness if we can just have the whole universe split neatly in two, but that feels to me like putting the weirdness where it logically belongs (on our comparatively weak understanding of conscious experience).

Hope this makes at least some sense to someone who can steer me in the right direction. I'd appreciate responses pointing out where specifically I've erred, as this will continue to bug me until I see exactly where I went wrong. Thanks in advance.

Replies from: None
comment by [deleted] · 2010-06-01T22:07:21.984Z · LW(p) · GW(p)

So... If a quantum event has a 30% chance of going LEFT and a 70% chance of going right . . . you'll have a 30% probability of observing LEFT and a 70% probability of observing RIGHT.

So why is this surprising?

The surprising (or confusing, mysterious, what have you) thing is that quantum theory doesn't talk about a 30% probability of LEFT and a 70% probability of RIGHT; what it talks about is how LEFT ends up with an "amplitude" of 0.548 and RIGHT with an "amplitude" of 0.837. We know that the observed probability ends up being the square of the absolute value of the amplitude, but we don't know why, or how this even makes sense as a law of physics.
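A quick numeric check of that squared-amplitude rule, using the two amplitudes above (the code and the rounding are mine, not part of the theory):

```python
# The Born rule: observed probability = |amplitude|^2.
# 0.548 and 0.837 are the amplitudes from the comment above.
amp_left, amp_right = 0.548, 0.837

p_left = abs(amp_left) ** 2    # square of the absolute value
p_right = abs(amp_right) ** 2

print(round(p_left, 2), round(p_right, 2))  # 0.3 0.7
print(round(p_left + p_right, 2))           # 1.0 (the two branches normalize)
```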

Replies from: Spurlock
comment by Spurlock · 2010-06-01T22:19:51.438Z · LW(p) · GW(p)

Ah. So it's not the idea that it's weighted so much as the specific act of squaring the amplitude. "Why squaring the amplitude, why not something else?".

I suppose the way I had been reading, I thought that the problem came from expecting a different result given the squared amplitude probability thing, not from the thing itself.

That is helpful, many thanks.

Replies from: Douglas_Knight, None
comment by Douglas_Knight · 2010-06-01T23:20:48.537Z · LW(p) · GW(p)

"Why squaring the amplitude, why not something else?"

That's one issue, but as Warrigal said, the other issue is "how this even makes sense." It seems to say that the amplitude is a measure of how real the configuration is.

comment by [deleted] · 2010-06-01T22:25:21.636Z · LW(p) · GW(p)

Yes, precisely.

comment by JamesAndrix · 2010-06-01T18:27:14.171Z · LW(p) · GW(p)

http://fora.tv/2010/05/22/Adam_Savage_Presents_Problem_Solving_How_I_Do_It

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-01T20:16:17.319Z · LW(p) · GW(p)

Delightful, and has a nice breakdown of the sort of questions to ask yourself (what exactly is the problem, how much precision is actually needed, what is the condition of the tools, etc.) if you want to get things done efficiently.

comment by khafra · 2010-06-03T04:56:26.552Z · LW(p) · GW(p)

After more-or-less successfully avoiding it for most of LW's history, we've plunged headlong into mind-killer territory. I'm a little bit worried, and I'm intrigued to find out what long-time LWers, especially those who've been hesitant about venturing that direction, expect to see as a result over the next month or two.

Replies from: cousin_it, simplicio, mattnewport, Matt_Duing
comment by cousin_it · 2010-06-03T10:15:39.706Z · LW(p) · GW(p)

It doesn't look encouraging. The discussions just don't converge, they meander all over the place and leave no crystalline residue of correct answers. (Achievement unlocked: Mixed Metaphor)

comment by simplicio · 2010-06-03T05:59:28.206Z · LW(p) · GW(p)

It is problematic but necessary, in my opinion. Politics IS the mind-killer, but politics DOES matter. Avoiding the topic would seem to be an admission that this rationality thing is really just a pretty toy.

But it would be nice to lay down some ground-rules.

comment by mattnewport · 2010-06-03T05:00:26.895Z · LW(p) · GW(p)

I don't think anyone has mentioned a political party or a specific current policy debate yet. That's when things really go downhill.

Replies from: khafra
comment by khafra · 2010-06-03T16:37:47.794Z · LW(p) · GW(p)

I think a current policy debate has potential for better results, since it would allow betting, and avoid some of the self-identification and loyalty that's hard to escape when applying a model as simple as a political philosophy to something as complex as human culture.

Replies from: fburnaby
comment by fburnaby · 2010-06-03T18:50:51.216Z · LW(p) · GW(p)

Since we've had some discussion about additions/modifications to the site, and LW -- as I understand it -- was originally a sort of spin-off from OB, maybe the addition of a karma-based prediction market of some sort would be suitable (and very interesting).

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-03T18:53:19.111Z · LW(p) · GW(p)

Maybe make bets with karma? That might be very interesting. It would have less bite than monetary stakes, but highly risk-averse individuals might be more willing to join the system.
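To make the suggestion concrete, a karma bet could be as simple as an escrowed stake. A minimal sketch - the class, the equal-stakes rule, and the user names are all hypothetical, nothing like this exists on the site:

```python
# Hypothetical sketch of a karma bet: both sides escrow an equal stake,
# and the winner takes the loser's stake when the bet resolves.

class KarmaBet:
    def __init__(self, yes_user, no_user, stake):
        self.yes_user = yes_user
        self.no_user = no_user
        self.stake = stake  # karma escrowed by each side

    def resolve(self, outcome_is_yes):
        """Return the karma change for each user; winner takes the pot."""
        winner = self.yes_user if outcome_is_yes else self.no_user
        loser = self.no_user if outcome_is_yes else self.yes_user
        return {winner: +self.stake, loser: -self.stake}

bet = KarmaBet("alice", "bob", stake=10)
print(bet.resolve(outcome_is_yes=True))  # {'alice': 10, 'bob': -10}
```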

Replies from: fburnaby
comment by fburnaby · 2010-06-04T18:49:26.674Z · LW(p) · GW(p)

I think having such a low-stakes game to play would be beneficial not only to highly risk-averse individuals, but to anyone. It would provide a useful training ground (maybe even a competitive ladder in a rationality dojo) for anyone who wants to also play with higher stakes elsewhere.

Edit: I'm currently a mediocre programmer (and intend to become good via some practice). And while I don't participate often in the community (yet), this could be fun and educational enough that I would be willing to contribute a fairly substantial amount of labour to it. If anyone with marginally more know-how is willing to implement such an idea, let me know and I'll join up.

comment by Matt_Duing · 2010-06-03T05:17:06.617Z · LW(p) · GW(p)

My feelings on this are mixed. I've found LW to be a refreshing refuge from such quarrels. On the other hand, without careful thought political debates reliably descend into madness quickly, and it is not as if politics is unimportant. Perhaps taking the mental techniques discussed here to other forums could improve the generally atrocious level of reasoning usually found in online political discussions, though I expect the effect would be small.

comment by Eneasz · 2010-06-02T21:07:05.440Z · LW(p) · GW(p)

Are there any rationalist psychologists?

Also, more specifically but less generally relevant to LW; as a person being pressured to make use of psychological services, are there any rationalist psychologists in the Denver, CO area?

Replies from: Kevin, torekp, NancyLebovitz
comment by Kevin · 2010-06-06T02:37:51.282Z · LW(p) · GW(p)

As a start, http://en.wikipedia.org/wiki/Cognitive_behavioral_therapy is a branch of psychotherapy with some respect around here because of the evidence that it sometimes works, compared to the other fields of psychotherapy with no evidence.

Replies from: RomanDavis
comment by RomanDavis · 2010-06-06T03:16:14.991Z · LW(p) · GW(p)

Do they really have such a poor track record? I know some scientists have very little respect for the "soft" sciences, but sociologists can at least make generalizations from studies done on large scales. Psychotherapy makes a lot of people incredulous, but is it really fair to say that most methods in practice today are ~0% effective?

Yes, this is essentially a post stating my incredulity. Would you mind quelling it?

Replies from: pjeby, Kevin
comment by pjeby · 2010-06-06T04:14:12.026Z · LW(p) · GW(p)

Psychotherapy makes a lot of people incredulous, but is it really fair to say that most methods in practice today are ~0% effective?

It's not that they're 0% effective, it's that they're not much more effective than placebo therapy (i.e. being put on a waiting list for therapy), or keeping a journal.

CBT is somewhat more effective, but I've also heard that it's not as effective for high-ruminators... i.e., people who already obsess about their thinking.

Replies from: AlanCrowe, Douglas_Knight
comment by AlanCrowe · 2010-06-06T20:08:27.881Z · LW(p) · GW(p)

Scientific medicine is difficult and expensive. I worry that the apparent success of CBT may be because methodological compromises needed to make the research practical happen to flatter CBT more than they flatter other approaches.

I might be worrying about the wrong thing. Do we know anything about the usefulness of Prozac in treating depression? Since we turn a blind eye to the unblinding of all our studies by the sexual side-effects of Prozac, and also refuse to consider the direct impact of those side-effects, it could be argued that we don't actually have any scientific knowledge of the effectiveness of the drug.

comment by Douglas_Knight · 2010-06-06T11:49:42.177Z · LW(p) · GW(p)

The claim I've seen associated with Robyn Dawes is that therapy is useful (which I read as "more useful than being on a waiting list"), but that untrained therapists are just as good as those trained under most methods. (ETA: and, contrary to Kevin, they have been tested and found wanting)

comment by Kevin · 2010-06-06T03:44:02.796Z · LW(p) · GW(p)

It's not that other forms of psychotherapy are scientifically shown to be 0% effective; it's just that evidence-based psychotherapy is a surprisingly recent field. Psychotherapy can still work even if some fields of it have not had rigorous studies showing their effectiveness... but you might as well go with a therapist that has training in a field of psychotherapy that has some scientific method behind it.

http://www.mentalhelp.net/poc/view_doc.php?type=doc&id=13023&cn=5

comment by torekp · 2010-06-06T01:00:23.233Z · LW(p) · GW(p)

I can't help you with the Denver area in particular, but the general answer is a definite yes. In an interesting juxtaposition, American Psychologist magazine had a recent issue prominently featuring discussion of how to get past the misuse of statistics discussed in this very LW open thread. And it's not the first time the magazine addressed the point.

comment by NancyLebovitz · 2010-06-03T00:33:12.614Z · LW(p) · GW(p)

Does cognitive rationalist therapy count as both rationalist and psychology for purposes of this question?

I think Learning Methods is a more sophisticated rationalist approach than CBT (it does a more meticulous job of identifying underlying thoughts), and might be worth checking into.

Replies from: pjeby
comment by pjeby · 2010-06-06T04:31:14.439Z · LW(p) · GW(p)

I think Learning Methods is a more sophisticated rationalist approach than CBT

Interesting. I found the site to be not very helpful, until I hit this page, which strongly suggests that at least one thing people are learning from this training is the practical application of the Mind Projection Fallacy:

Was the movie good or bad? If you answer BOTH, think it through. In a factual sense, can the same movie be good AND bad? If it’s good, how can it be bad? The only way to make sense of a movie being both good and bad is to realize that the goodness and badness does not exist IN the movie, but IN Jack and IN Jill as a reflection of how the movie matches their individual criteria.

The quote is from an article written by an LM student, and some insights from the learning process that helped her overcome her stage fright.

IOW, at least one aspect of LM sounds a bit like "rationality dojo" to me (in the sense that here's an ordinary person with no special interest in rationalism, giving a beautiful (and more detailed than I quoted here) explanation of the Mind Projection Fallacy, based on her practical applications of it in everyday life).

(Bias disclaimer: I might be positively inclined to what I'm reading because some of it resembles or is readily translatable to aspects of my own models. Another article that I'm in the middle of reading, for example, talks about the importance of addressing the origins of nonconsciously-triggered mental and physical reactions, vs. consciously overriding symptoms -- another approach I personally favor.)

comment by cousin_it · 2010-06-01T21:52:35.155Z · LW(p) · GW(p)

The blog of Scott Adams (author of Dilbert) is generally quite awesome from a rationalist perspective, but one recent post really stood out for me: Happiness Button.

Suppose humans were born with magical buttons on their foreheads. When someone else pushes your button, it makes you very happy. But like tickling, it only works when someone else presses it. Imagine it's easy to use. You just reach over, press it once, and the other person becomes wildly happy for a few minutes.

What would happen in such a world?

...

Replies from: Christian_Szegedy, Vladimir_Nesov, Alicorn
comment by Christian_Szegedy · 2010-06-02T21:46:23.270Z · LW(p) · GW(p)

We already have these buttons on LessWrong... ;)

Replies from: cousin_it
comment by cousin_it · 2010-06-02T21:59:04.719Z · LW(p) · GW(p)

Karma does make me feel important, but when it comes to happiness karma can't hold a candle to loud music, alcohol and girls (preferably in combination). I wish more people recognized these for the eternal universal values they are. If only someone invented a button to send me some loud music, alcohol and girls, that would be the ultimate startup ever.

comment by Vladimir_Nesov · 2010-06-01T22:11:03.622Z · LW(p) · GW(p)

What would happen in such a world?

Classical game theorists establish a scientific consensus that the only rational course of action is not to push the buttons. Anyone who does is regarded with contempt or pity and gets lowered in the social stratum, before finally managing to rationalize the idea out of conscious attention, with the help of the instinct to conformity. A few free-riders smugly teach the remaining naive pushers a bitter lesson, only to stop receiving the benefit. Everyone gets back to business as usual, crazy people spinning the wheels of a mad world.

Replies from: Wei_Dai, AlephNeil, Houshalter
comment by Wei Dai (Wei_Dai) · 2010-06-02T04:15:17.080Z · LW(p) · GW(p)

Are you saying that classical game theorists would model the button-pushing game as one-shot PD? Why would they fail to notice the repetitive nature of the game?

Replies from: khafra, Vladimir_Nesov
comment by khafra · 2010-06-02T13:37:51.496Z · LW(p) · GW(p)

I'd be far more willing to believe in game theorists calling for defection on the iterated PD than in mathematicians steering mainstream culture.

However, with the positive-sum nature of this game, I'd expect theorists to go with Schelling instead of Nash; and then be completely disregarded by the general public who categorize it under "physical ways of causing pleasure" and put sexual taboos on it.

comment by Vladimir_Nesov · 2010-06-02T08:58:19.836Z · LW(p) · GW(p)

The theory says to defect in the iterated dilemma as well (under some assumptions).

Replies from: cousin_it
comment by cousin_it · 2010-06-02T12:18:05.221Z · LW(p) · GW(p)

Here's what the theory actually says: if you know the number of iterations exactly, it's a Nash equilibrium for both to defect on all iterations. But if you know the chance that this iteration will be the last, and this chance isn't too high (e.g. below 1/3, can't be bothered to give an exact value right now), it's a Nash equilibrium for both to cooperate as long as the opponent has cooperated on previous iterations.
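The threshold in the comment above depends on the payoff matrix. Here is a minimal sketch of the calculation for the grim-trigger equilibrium, using one standard set of PD payoffs (my choice for illustration, not values from the thread) — with these numbers, mutual cooperation is a Nash equilibrium whenever the continuation probability `delta` is at least (T - R) / (T - P):

```python
# Sketch of the grim-trigger condition: cooperate forever vs. defect
# once and suffer mutual punishment thereafter. Payoffs are my own
# illustrative choices (sucker's payoff = 0).
T, R, P = 5.0, 3.0, 1.0  # temptation, reward, punishment

def cooperate_value(delta):
    """Expected total payoff from cooperating forever against grim trigger."""
    return R / (1 - delta)

def defect_value(delta):
    """Expected total payoff from defecting now, then mutual defection forever."""
    return T + delta * P / (1 - delta)

# Cooperation is sustainable iff delta >= (T - R) / (T - P).
threshold = (T - R) / (T - P)
for delta in (0.3, 0.5, 0.7, 0.9):
    sustainable = cooperate_value(delta) >= defect_value(delta)
    assert sustainable == (delta >= threshold)
```

With these payoffs the threshold comes out to 0.5, i.e. cooperation survives as long as the chance that each round is the last stays at or below one half; different payoffs give different cutoffs, which is presumably why the comment hedges on the exact value.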

comment by AlephNeil · 2010-06-02T19:17:42.261Z · LW(p) · GW(p)

This comment was very entertaining... but...

I actually do think people in such a world ought not to press buttons. But not very strongly... only about the same "oughtnotness" as people ought not to waste time looking at porn.

The argument is the same: Aren't there better things we could be doing?

Ideally, in button-world, people will devise a way to remove their buttons.

But if that couldn't be done, and we're seriously asking "what would happen?" I suppose it might end up being treated like sex. Having one's button publicly visible is "indecent" - buttons are only pushed in private. Etc. etc.

Replies from: Blueberry, Mass_Driver
comment by Blueberry · 2010-06-02T21:38:50.680Z · LW(p) · GW(p)

I suppose it might end up being treated like sex. Having one's button publicly visible is "indecent" - buttons are only pushed in private.

The analogy to sex is rough. From a historical and evolutionary perspective, sex is treated the way it is because it leads to gene replication and parenthood, not because it leads to pleasure. The lack of side effects from the buttons makes them more comparable to rubbing someone's back, smiling, or saying something nice to someone.

Replies from: AlephNeil
comment by AlephNeil · 2010-06-02T21:59:49.125Z · LW(p) · GW(p)

OK - well that's one possibility. But in discussing either of these analogies, aren't we just showing (a) that the pleasure-button scenario is underdetermined, because there are many different kinds of pleasure and (b) that it's redundant, because people can actually give each other pats on the back, or hand-jobs or whatever.

comment by Mass_Driver · 2010-06-02T19:53:56.724Z · LW(p) · GW(p)

I dunno, this strikes me as a somewhat sex-negative attitude. Responding seriously to your question about the better things we could be doing, it strikes me that we people spend most of our time doing worthless things. We seldom really know whether we are happy, what it means to be happy, or how what we are doing might connect to somebody's future happiness.

If the buttons actually made people happy from time to time, it could be quite useful as a 'reality check.' People suspecting that X led to happiness could test and falsify their claim by seeing whether X produced the same mental/emotional state that the button did.

Obviously we shouldn't spend all our time pressing buttons, having sex, or looking at porn. But I sometimes wonder whether we wouldn't be better off if most people, especially in the developed world where labor seems to be over-supplied and the opportunity cost of not working is low, spent a couple hours a day doing things like that.

Replies from: AlephNeil
comment by AlephNeil · 2010-06-02T21:28:06.942Z · LW(p) · GW(p)

If the buttons actually made people happy from time to time, it could be quite useful as a 'reality check.' People suspecting that X led to happiness could test and falsify their claim by seeing whether X produced the same mental/emotional state that the button did.

Isn't that a bit like snorting some coke (or perhaps just masturbating) after a happy experience (say, proving a particularly interesting theorem) to test whether it was really 'happy'?

There are many different kinds of 'happiness', and what makes an experience a happy or an unhappy one is not at all simple to pin down. A kind of happiness that one can obtain at will, as often as desired, and which is unrelated to any "objective improvement" in oneself or the things one cares about, isn't really happiness at all.

Pretend it's new year's eve and you're planning some goals for next year - some things that, if you achieve them, you will look back with pride and a sense of accomplishment. Is 'looking at lots of porn' on your list (even assuming that it's free and no-one was harmed in producing it)?

I don't mean to imply anything about sex, because sex has a whole lot of things associated with it that make it extremely complicated. But the 'pleasure button' scenario gives us a clean slate to work from, and to me it seems an obvious reductio ad absurdum of the idea that pleasure = utility.

Replies from: Blueberry, Mass_Driver, NancyLebovitz
comment by Blueberry · 2010-06-02T21:36:15.721Z · LW(p) · GW(p)

You seem to be confusing happiness with accomplishment:

A kind of happiness that one can obtain at will, as often as desired, and which is unrelated to any "objective improvement" in oneself or the things one cares about, isn't really happiness at all.

Sure it is. It may not be accomplishment, or meaningfulness, but it is happiness, by definition. I think the confusion comes because you seem to value many other things more than happiness, such as pride and accomplishment. Happiness is just a feeling; it's not defined as something that you need to value most, or gain the most utility from.

Replies from: AlephNeil, RomanDavis
comment by AlephNeil · 2010-06-02T21:49:14.204Z · LW(p) · GW(p)

How do you distinguish a degenerate case of 'happiness' from 'satiation of a need'. Is the smoker or heroin addict made 'happy' by their fix? Does a glass of water make you 'happy' if you're dying from thirst, or does it just satiate the thirst?

And can't the same sensation be either 'happy' or 'unhappy' depending on the circumstances? A person with persistent sexual arousal syndrome isn't made 'happy' by the orgasms they can't help but 'endure'.

The idea that there's a "raw happiness feeling" detachable from the information content that goes with it is intuitively appealing but fatally flawed.

Replies from: Blueberry
comment by Blueberry · 2010-06-02T21:57:33.057Z · LW(p) · GW(p)

And can't the same sensation be either 'happy' or 'unhappy' depending on the circumstances? A person with persistent sexual arousal syndrome isn't made 'happy' by the orgasms they can't help but 'endure'.

Yes, this is true. We will need to assume that the button can analyze the context to determine how to provide happiness for the particular brain it's attached to.

My point is that happiness is not necessarily associated with accomplishment or objective improvement in oneself (though it can be). In such a situation, some people might not value this kind of detached happiness, but that doesn't mean it's not happiness.

comment by RomanDavis · 2010-06-02T21:43:04.819Z · LW(p) · GW(p)

Depends on how you define happiness. If you define it as "how much dopamine is in my system" ,"joy" or "these are the neat brainwaves my brain is giving off" then yes, you could achieve happiness by pressing a button (in theory).

A lot of people seem to assume happiness = utility measured in utilons, which is a whole different thing altogether.

Sort of like seeing some one writhe in ecstasy after jamming a needle in their arm and saying, "I'm so happy I'm not a heroin addict."

Replies from: SilasBarta, Blueberry
comment by SilasBarta · 2010-06-02T21:49:34.823Z · LW(p) · GW(p)

Depends on how you define happiness. If you define it as "how much dopamine is in my system" ,"joy" or "these are the neat brainwaves my brain is giving off" then yes, you can achieve happiness by pressing a button.

Oh, really? How can I get a cheap, legal, repeatable dopamine rush to my brain?

Replies from: RomanDavis
comment by RomanDavis · 2010-06-02T21:53:46.919Z · LW(p) · GW(p)

Edited my post to reflect your point. Although, I'm a young male and can achieve orgasm multiple times in under ten minutes with the aid of some lube and free porn. You probably didn't want to know that.

Replies from: Blueberry
comment by Blueberry · 2010-06-02T21:59:51.356Z · LW(p) · GW(p)

That's amazing. A drug that could eliminate refractory period like that would sell better than Viagra.

Replies from: cousin_it, RomanDavis
comment by cousin_it · 2010-06-02T22:16:57.668Z · LW(p) · GW(p)

It seems the pharma industry discovered the effect of PDE5 inhibitors on erectile dysfunction pretty much by accident. The stuff was initially developed to treat heart disease, initial tests showed it didn't work, but male test subjects reported a useful side effect. Reminds me of the story of post-it notes: the guy who developed them actually wanted to create the ultimate glue, but sadly the result of his best efforts didn't stick very well, so he just went ahead and commercialized what he had.

If big pharma is listening, I'd like to post a request for exercise pills.

comment by RomanDavis · 2010-06-02T22:04:02.020Z · LW(p) · GW(p)

Actually, orgasms are usually much less intense and don't result in ejaculation if I achieve them in under a certain amount of time. I find the best are in the 20-30 minute period.

comment by Blueberry · 2010-06-02T21:51:16.403Z · LW(p) · GW(p)

A lot of people seem to assume happiness = utility measured in utilons, which is a whole different thing altogether.

Yes, I've noticed that assumption, and I think even Jeremy Bentham talked about pleasure in utility terms. I don't think it's accurate for everyone, for instance, someone who values accomplishment more than happiness will assign higher utility to choices that lead to unhappy accomplishment than to unproductive leisure.

Replies from: RomanDavis
comment by RomanDavis · 2010-06-02T21:56:31.250Z · LW(p) · GW(p)

...and then they're happier working. By definition. Welcome to semantics.

Replies from: Blueberry
comment by Blueberry · 2010-06-02T22:02:18.984Z · LW(p) · GW(p)

That's a strange definition of "happier". They're happier with a choice just because they prefer that choice? Even if they appear frustrated and tired and grumpy all the time? Even if they tell you they're not happy and they prefer this unhappiness to not accomplishing anything?

(In real life, I suspect happy people actually accomplish more, but consider a hypothetical where you have to choose between unhappy accomplishment and unproductive leisure.)

Replies from: RomanDavis
comment by RomanDavis · 2010-06-02T22:07:50.475Z · LW(p) · GW(p)

Eliezer did this whole thing in the Fun Theory sequence. Yes, not doing anything would be very boring, and being filled with cool drugs sounds like a horror story to my current utility curve. Let's hope the future isn't some form of ironic hell.

comment by Mass_Driver · 2010-06-03T01:39:15.014Z · LW(p) · GW(p)

AlephNeil, I was taking Scott Adams' assertion that the button produces "happiness" at face value. I was being rather literal, I'm afraid. I think you're right to worry that no actual mechanism we can imagine in the near future would act like Scott's button.

I stand by my point, though, that if we really did have a literal happiness button, it would probably be a good thing.

As perhaps a somewhat more neutral example, I like to splash around in a swimming pool. It's fun. I hope to do that a lot over the next year or so. If I successfully play in the pool a lot during time that otherwise might have been spent reading marginally interesting articles, staring into space, harassing roommates, or working overtime on projects I don't care about, I will consider it a minor accomplishment.

More to the point, if regular bouts of aquatic playtime keep me well-adjusted and accurately tuned-in to what it means to be happy, then I will rationally expect to accomplish all kinds of other things that make me and others happy. I will consider this to be a moderate accomplishment.

There is a difference between pleasure and utility, but I don't think it's ridiculous at all to have a pleasure term in one's utility function. A more pleasant life, all else being equal, is a better one. There may be diminishing returns involved, but, well, that's why we shouldn't literally spend all day pressing the button.

comment by NancyLebovitz · 2010-06-03T00:35:41.175Z · LW(p) · GW(p)

That depends on how people react. It's at least plausible that people need some amount of pleasure in order to be able to focus on their other goals.

comment by Houshalter · 2010-06-01T22:31:08.885Z · LW(p) · GW(p)

How does that work? I suppose it makes sense a little considering that the world has to go on and can't stop because everyone's on the ground being "happy", but it wouldn't mean that people wouldn't do it, or even that it wouldn't be the "rational" thing to do.

Replies from: mattnewport
comment by mattnewport · 2010-06-01T22:33:25.017Z · LW(p) · GW(p)

Is everyone missing the obvious subtext in the original article - that we already live in just such a world but the button is located not on the forehead but in the crotch?

Perhaps some people would give their button-pushing services away for free, to anyone who asked. Let's call those people generous, or as they would become known in this hypothetical world: crazy sluts.

Replies from: CronoDAS, Richard_Kennaway, Blueberry, Houshalter, Vladimir_Nesov
comment by CronoDAS · 2010-06-01T23:27:43.184Z · LW(p) · GW(p)

But you can touch that button yourself...

Replies from: SilasBarta
comment by SilasBarta · 2010-06-02T00:45:27.778Z · LW(p) · GW(p)

How does that compare to when someone else touches your button with their button?

Replies from: CronoDAS
comment by CronoDAS · 2010-06-02T01:46:21.901Z · LW(p) · GW(p)

I've never done that, so I don't know.

comment by Richard_Kennaway · 2010-06-02T10:16:01.255Z · LW(p) · GW(p)

I see that subtext, but I also see a subtext of geeks blaming the obvious irrationality of everyone else for them not getting any, like, it's just poking a button, right?

comment by Blueberry · 2010-06-01T22:48:54.969Z · LW(p) · GW(p)

Except that sex, unlike the button in the story, doesn't always make people happy. Sometimes, for some people, it comes with complications that decrease net utility. (Also, it is possible to push your own button with sex.)

Replies from: mattnewport
comment by mattnewport · 2010-06-01T22:58:07.377Z · LW(p) · GW(p)

Sure, but it's not my comparison - I'm just saying it appears to be the obvious subtext of the original article.

Button pushing would become an issue of power and politics within relationships and within business. The rich and famous would get their buttons pushed all day long, while the lonely would fantasize about how great that would be.

Replies from: Houshalter
comment by Houshalter · 2010-06-01T23:10:08.305Z · LW(p) · GW(p)

The rich and famous would get their buttons pushed all day long, while the lonely would fantasize about how great that would be.

But two poor, "lonely" people could just get together and push each other's buttons. That's the problem with this: any two people who can cooperate with each other can get the advantage. There was once an experiment to evolve different programs in a genetic algorithm that could play the prisoner's dilemma. I'm not sure exactly how it was organized, which would really make or break different strategies, but the result was a program which always cooperated except when the other wasn't, and it continued refusing to cooperate with the other until it believed they were "even".
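A toy version of the strategy recalled above can be sketched like this: cooperate by default, but after being defected against, keep defecting until the scores are "even" again. The payoffs and round count are my own illustrative choices, not details from the actual experiment:

```python
# Standard PD payoffs (my choice): (my payoff, their payoff) per move pair.
PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strategy_a, strategy_b, rounds=100):
    """Run an iterated PD between two strategies and return their scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b, score_a, score_b)
        move_b = strategy_b(history_b, history_a, score_b, score_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def even_scores(my_history, their_history, my_score, their_score):
    # Retaliate only while behind: keep defecting until we're "even".
    return 'D' if their_score > my_score else 'C'

def always_defect(my_history, their_history, my_score, their_score):
    return 'D'
```

Running `play(even_scores, even_scores)` gives both players the full mutual-cooperation payoff, while pairing `even_scores` with `always_defect` drags both down near the mutual-defection floor — the usual reason retaliatory cooperators do well in such tournaments.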

Replies from: mattnewport
comment by mattnewport · 2010-06-01T23:14:41.773Z · LW(p) · GW(p)

Are you thinking of tit for tat?

I'm not trying to argue for or against the comparison. Would you agree that the subtext exists in the original article or do you think I'm over-interpreting?

Replies from: bentarm
comment by bentarm · 2010-06-02T09:31:30.970Z · LW(p) · GW(p)

No, the subtext is definitely there in the original article. At least, I saw it immediately, as did most of the commenters:

My invisible friend says that having your happiness button pushed will cause you to spend eternity boiling in a lava pit.

comment by Houshalter · 2010-06-01T22:58:51.952Z · LW(p) · GW(p)

I think the best analogy would be drugs, but those have bad things associated with them that the button example doesn't. They take up money, they cause health problems, etc.

comment by Vladimir_Nesov · 2010-06-01T22:42:11.623Z · LW(p) · GW(p)

That would not model the True Prisoner's Dilemma.

Replies from: mattnewport
comment by mattnewport · 2010-06-01T22:57:15.358Z · LW(p) · GW(p)

What's that got to do with the price of eggs?

comment by Alicorn · 2010-06-01T23:27:34.495Z · LW(p) · GW(p)

A social custom would be established that buttons are only to be pressed by knocking foreheads together. Offering to press a button in a fashion that doesn't ensure mutuality is seen as a pathetic display of low status.

Replies from: Wei_Dai, cousin_it
comment by Wei Dai (Wei_Dai) · 2010-06-02T04:15:06.044Z · LW(p) · GW(p)

Pushing someone's happiness button is like doing them a favor, or giving them a gift. Do we have social customs that demand favors and gifts always be exchanged simultaneously? Well, there are some customs like that, but in general no, because we have memory and can keep mental score.

comment by cousin_it · 2010-06-02T09:21:28.324Z · LW(p) · GW(p)

Hah. Status is relative, remember? Your setup just ensures that "dodging" at the last moment, getting your button pressed without pressing theirs, is seen as a glorious display of high status.

comment by dclayh · 2010-06-01T20:52:46.555Z · LW(p) · GW(p)

William Saletan at Slate is writing a series of articles on the history and uses of memory falsification, dealing mainly with Elizabeth Loftus and the ethics of her work. Quote from the latest article:

Loftus didn't flinch at this step. "A therapist isn't supposed to lie to clients," she conceded. "But there's nothing to stop a parent from trying something like [memory modification] with an overweight child or teen." Parents already lied to kids about Santa Claus and the tooth fairy, she observed. To her, it was a no-brainer: "A white lie that might get them to eat broccoli and asparagus vs. a lifetime of obesity and diabetes: Which would you rather have for your kid?"

(This topic has, of course, been done to death around these parts.)

Replies from: billswift
comment by billswift · 2010-06-02T17:01:41.745Z · LW(p) · GW(p)

Interesting. I have read several of Loftus's books, but the last one was The Myth of Repressed Memory: False Memories and Allegations of Sexual Abuse over ten years ago. I think I'll go see what she has written since. Thanks for reminding me of her work.

comment by cousin_it · 2010-06-01T18:23:36.905Z · LW(p) · GW(p)

This might be old news to everyone "in", or just plain obvious, but a couple days ago I got Vladimir Nesov to admit he doesn't actually know what he would do if faced with his Counterfactual Mugging scenario in real life. The reason: if today (before having seen any supernatural creatures) we intend to reward Omegas, we will lose for certain in the No-mega scenario, and vice versa. But we don't know whether Omegas outnumber No-megas in our universe, so the question "do you intend to reward Omega if/when it appears" is a bead jar guess.

Replies from: Vladimir_Nesov, Nisan, Jonathan_Graehl
comment by Vladimir_Nesov · 2010-06-01T19:08:29.971Z · LW(p) · GW(p)

The caveat is of course that Counterfactual Mugging or Newcomb Problem are not to be analyzed as situations you encounter in real life: the artificial elements that get introduced are specified explicitly, not by an update from surprising observation. For example, the condition that Omega is trustworthy can't be credibly expected to be observed.

The thought experiments explicitly describe the environment you play your part in, and your knowledge about it, the state of things that is much harder to achieve through a sequence of real-life observations, by updating your current knowledge.

Replies from: cousin_it
comment by cousin_it · 2010-06-02T11:45:00.735Z · LW(p) · GW(p)

I dunno, Newcomb's Problem is often presented as a situation you'd encounter in real life. You're supposed to believe Omega because it played the same game with many other people and didn't make mistakes.

In any case I want a decision theory that works on real life scenarios. For example, CDT doesn't get confused by such explosions of counterfactuals, it works perfectly fine "locally".

ETA: My argument shows that modifying yourself to never "regret your rationality" (as Eliezer puts it) is impossible, and modifying yourself to "regret your rationality" less rather than more requires elicitation of your prior with humanly impossible accuracy (as you put it). I think this is a big deal, and now we need way more convincing problems that would motivate research into new decision theories.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-02T11:49:45.460Z · LW(p) · GW(p)

If you do present observations that move the beliefs to represent the thought experiment, it'll work just as well as the magically contrived thought experiment. But the absence of relevant No-megas is part of the setting, so it too should be a conclusion one draws from those observations.

Replies from: cousin_it
comment by cousin_it · 2010-06-02T11:58:23.091Z · LW(p) · GW(p)

Yes, but you must make the precommitment to love Omegas and hate No-megas (or vice versa) before you receive those observations, because that precommitment of yours is exactly what they're judging. (I think you see that point already, and we're probably arguing about some minor misunderstanding of mine.)

Replies from: Vladimir_Nesov, Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-22T07:37:11.960Z · LW(p) · GW(p)

You never have to decide in advance, to precommit. Precommitment is useful as a signal to those that can't follow your full thought process, and so you replace it with a simple rule from some point on ("you've already decided"). For Omegas and No-megas, you don't have to precommit, because they can follow any thought process.

Replies from: cousin_it
comment by cousin_it · 2010-07-22T07:50:08.612Z · LW(p) · GW(p)

I thought about it some more and I think you're either confused somewhere, or misrepresenting your own opinions. To clear things up let's convert the whole problem statement into observational evidence.

Scenario 1: Omega appears and gives you convincing proof that Upsilon doesn't exist (and that Omega is trustworthy, etc.), then presents you with CM.

Scenario 2: Upsilon appears and gives you convincing proof that Omega doesn't exist, then presents you with anti-CM, taking into account your counterfactual action if you'd seen scenario 1.

You wrote: "If you do present observations that move the beliefs to represent the thought experiment, it'll work just as well as the magically contrived thought experiment." Now, I'm not sure what this sentence was supposed to mean, but it seems to imply that you would give up $100 in scenario 1 if faced with it in real life, because receiving the observations would make it "work just as well as the thought experiment". This means you lose in scenario 2. No?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-22T08:04:59.431Z · LW(p) · GW(p)

Omega would need to convince you that Upsilon not just doesn't exist, but couldn't exist, and that's inconsistent with scenario 2. Otherwise, you haven't moved your beliefs to represent the thought experiment. Upsilon must be actually impossible (less probable) in order for it to be possible for Omega to correctly convince you (without deception).

Being updateless, your decision algorithm is only interested in observations insofar as they resolve logical uncertainty and say which situations you actually control (again, a sort of logical uncertainty), but observations can't refute what is logically possible, so they can't make Upsilon impossible if it wasn't already impossible.

Replies from: cousin_it
comment by cousin_it · 2010-07-22T08:14:22.919Z · LW(p) · GW(p)

Omega would need to convince you that Upsilon not just doesn't exist, but couldn't exist, and that's inconsistent with scenario 2.

No it's not inconsistent. Counterfactual worlds don't have to be identical to the real world. You might as well say that Omega couldn't have simulated you in the counterfactual world where the coin came up heads, because that world is inconsistent with the real world. Do you believe that?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-22T08:28:22.620Z · LW(p) · GW(p)

By "Upsilon couldn't exist", I mean that Upsilon doesn't live in any of the possible worlds (or only in insignificantly few of them), not that it couldn't appear in the possible world where you are speaking with Omega.

The convention is that the possible worlds don't logically contradict each other, so two different outcomes of coin tosses exist in two slightly different worlds, both of which you care about (this situation is not logically inconsistent). If Upsilon lives on such a different possible world, and not on the world with Omega, it doesn't make Upsilon impossible, and so you care what it does. In order to replicate Counterfactual Mugging, you need the possible worlds with Upsilons to be irrelevant, and it doesn't matter that Upsilons are not in the same world as the Omega you are talking to.

(How to correctly perform counterfactual reasoning on conditions that are logically inconsistent (such as the possible actions you could make that are not your actual action), or rather how to mathematically understand that reasoning is the septillion dollar question.)

Replies from: cousin_it
comment by cousin_it · 2010-07-22T08:43:23.110Z · LW(p) · GW(p)

Ah, I see. You're saying Omega must prove to you that your prior made Upsilon less likely than Omega all along. (By the way, this is an interesting way to look at modal logic, I wonder if it's published anywhere.) This is a very tall order for Omega, but it does make the two scenarios logically inconsistent. Unless they involve "deception" - e.g. Omega tweaking the mind of counterfactual-you to believe a false proof. I wonder if the problem still makes sense if this is allowed.

comment by Vladimir_Nesov · 2010-06-02T12:51:00.682Z · LW(p) · GW(p)

Sorry, can't parse that, you'd need to unpack more.

comment by Nisan · 2010-06-01T18:50:20.473Z · LW(p) · GW(p)

Whatever our prior for encountering No-mega, it should be counterbalanced by our prior for encountering Yes-mega (who rewards you if you are counterfactually-muggable).

Replies from: cousin_it
comment by cousin_it · 2010-06-01T18:53:16.165Z · LW(p) · GW(p)

You haven't considered the full extent of the damage. What is your prior over all crazy mind-reading agents that can reward or punish you for arbitrary counterfactual scenarios? How can you be so sure that it will balance in favor of Omega in the end?

Replies from: Nisan
comment by Nisan · 2010-06-01T19:29:46.770Z · LW(p) · GW(p)

In fact, I can consider all crazy mind-reading reward/punishment agents at once: For every such hypothetical agent, there is its hypothetical dual, with the opposite behavior with respect to my status as being counterfactually-muggable (the one rewarding what the other punishes, and vice versa). Every such agent is the dual of its own dual; in the universal prior, being approached by an agent is about as likely as being approached by its dual; and I don't think I have any evidence that one agent will be more likely to appear than its dual. Thus, my total expected payoff from these agents is 0.

Omega itself does not belong to this class of agent; it has no dual. (ETA: It has a dual, but the dual is a deceptive Omega, which is much less probable than Omega. See below.) So Omega is the only one I should worry about.

I should add that I feel a little uneasy because I can't prove that these infinitesimal priors don't dominate everything when the symmetry is broken, especially when the stakes are high.
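The pairing argument above can be made concrete with made-up numbers: each crazy agent pays some amount X if it predicts you're muggable and Y otherwise, and its dual swaps X and Y. If an agent and its dual get equal prior weight, your choice of policy doesn't move the expectation at all:

```python
# Illustration of the agent/dual cancellation, with random made-up
# payoffs (these numbers are mine, purely for demonstration).
import random

random.seed(0)
agents = []
for _ in range(50):
    x, y = random.uniform(-100, 100), random.uniform(-100, 100)
    agents.append((x, y))  # pays x if you're muggable, y if not
    agents.append((y, x))  # the dual, given the same prior weight

def expected_payoff(muggable):
    """Expected payoff from the whole class, uniform prior over agents."""
    return sum((x if muggable else y) for x, y in agents) / len(agents)

# Both policies face the same expectation, so this entire class of
# agents drops out of the decision; only an unpaired agent (like a
# trustworthy Omega) can break the tie.
```

This is only a sketch of the symmetry claim, of course; it says nothing about the separate worry of infinitesimal priors dominating when stakes are unbounded.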

Replies from: cousin_it
comment by cousin_it · 2010-06-01T20:01:05.045Z · LW(p) · GW(p)

Omega itself does not belong to this class of agent; it has no dual.

Why? Can't your definition of dual be applied to Omega? I admit I don't completely understand the argument.

Replies from: Nisan
comment by Nisan · 2010-06-01T20:23:14.853Z · LW(p) · GW(p)

Okay, I'll be more explicit: I am considering the class of agents who behave one way if they predict you're muggable and behave another way if they predict you're unmuggable. The dual of an agent behaves exactly the same as the original agent, except the behaviors are reversed. In symbols:

  • An agent A has two behaviors.
  • If it predicts you'd give Omega $5, it will exhibit behavior X; otherwise, it will exhibit behavior Y.
  • The dual agent A* exhibits behavior Y if it predicts you'd give Omega $5, and X otherwise.
  • A and A* are equally likely in my prior.

What about Omega?

  • Omega has two behaviors.
  • If it predicts you'd give Omega $5, it will flip a coin and give you $100 on heads; otherwise, nothing. In either case, it will tell you the rules of the game.

What would Omega* be?

  • If Omega* predicts you'd give Omega $5, it will do nothing. Otherwise, it will flip a coin and give you $100 on heads. In either case, it will assure you that it is Omega, not Omega*.

So the dual of Omega is something that looks like Omega but is in fact deceptive. By hypothesis, Omega is trustworthy, so my prior probability of encountering Omega* is negligible compared to meeting Omega.

(So yeah, there is a dual of Omega, but it's much less probable than Omega.)

Then, when I calculate expected utility, each agent A is balanced by its dual A*, but Omega is not balanced by Omega*.
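(A toy sketch of the cancellation argument, with invented payoffs and an assumed uniform prior over each agent/dual pair — all names and numbers here are illustrative, not part of the original problem:)

```python
# Toy model of the symmetry argument (illustrative only).
# An "agent" maps your disposition (muggable or not) to a payoff.
# Its dual swaps the two behaviors. If each agent is as likely as
# its dual, the pairs cancel and neither disposition comes out ahead.

def dual(agent):
    """Return the agent with behaviors X and Y swapped."""
    return lambda muggable: agent(not muggable)

def expected_payoff(muggable, agents, prior):
    """Expected payoff of a disposition under a prior over agents."""
    return sum(p * a(muggable) for a, p in zip(agents, prior))

# Two arbitrary crazy mind-readers (payoffs are made up):
a1 = lambda muggable: 100 if muggable else -50
a2 = lambda muggable: -30 if muggable else 70

agents = [a1, dual(a1), a2, dual(a2)]
prior = [0.25, 0.25, 0.25, 0.25]  # each agent as likely as its dual

# Dual pairs cancel: being muggable and being unmuggable get the
# same expected payoff from this whole class of agents.
assert expected_payoff(True, agents, prior) == expected_payoff(False, agents, prior)
```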

Replies from: cousin_it
comment by cousin_it · 2010-06-01T20:42:46.892Z · LW(p) · GW(p)

If we assume you can tell "deceptive" agents from "non-deceptive" ones and shift probability weight accordingly, then not every agent is balanced by its dual, because some "deceptive" agents probably have "non-deceptive" duals and vice versa. No?

(Apologies if I'm misunderstanding - this stuff is slowly getting too complex for me to grasp.)

Replies from: Nisan
comment by Nisan · 2010-06-02T00:05:46.718Z · LW(p) · GW(p)

The reason we shift probability weight away from the deceptive Omega* is that, in the original problem, we are told that we believe Omega to be non-deceptive. The reasoning goes like this: If it looks like Omega and talks like Omega, then it might be Omega or Omega*. But if it were Omega*, then it would be deceiving us, so it's most probably Omega.

In the original problem, we have no reason to believe that No-mega and friends are non-deceptive.

(But if we did, then yes, the dual of a non-deceptive agent would be deceptive, and so have lower prior probability. This would be a different problem, but it would still have a symmetry: We would have to define a different notion of dual, where the dual of an agent has the reversed behavior and also reverses its claims about its own behavior.

What would Omega* be in that case? It would not claim to be Omega. It would truthfully tell you that if it predicted you would not give it $5 on tails, then it would flip a coin and give you $100 on heads; and otherwise it would not give you anything. This has no bearing on your decision in the Omega problem.)

Edit: Formatting.

Replies from: cousin_it
comment by cousin_it · 2010-06-02T10:10:05.727Z · LW(p) · GW(p)

By your definitions, Omega* would condition its decision on you being counterfactually muggable by the original Omega, not on you giving money to Omega* itself. Or am I losing the plot again? This notion of "duality" seems to be getting more and more complex.

Replies from: Nisan
comment by Nisan · 2010-06-02T15:16:39.468Z · LW(p) · GW(p)

"Duality" has become more complex because we're now talking about a more complex problem — a version of Counterfactual Mugging where you believe that all superintelligent agents are trustworthy. The old version of duality suffices for the ordinary Counterfactual Mugging problem.

My thesis is that there's always a symmetry in the space of black swans like No-mega.

In the case currently under consideration, I'm assuming Omega's spiel goes something like "I just flipped a coin. If it had been heads, I would have predicted what you would do if I had approached you and given my spiel...." Notice the use of first-person pronouns. Omega* would have almost the same spiel verbatim, also using first-person pronouns, and make no reference to Omega. And, being non-deceptive, it would behave the way it says it does. So it wouldn't condition on your being muggable by Omega.

You could object to this by claiming that Omega actually says "I am Omega. If Omega had come up to you and said....", in which case I can come up with a third notion of duality.

Replies from: cousin_it
comment by cousin_it · 2010-06-02T17:21:54.628Z · LW(p) · GW(p)

If Omega* makes no reference to the original Omega, I don't understand why they have "opposite behavior with respect to my status as being counterfactually-muggable" (by the original Omega), which was your reason for inventing "duality" in the first place. I apologize, but at this point it's unclear to me that you actually have a proof of anything. Maybe we can take this discussion to email?

comment by Jonathan_Graehl · 2010-06-01T18:58:27.692Z · LW(p) · GW(p)

Surely the last thing on anyone's mind, having been persuaded they're in the presence of Omega in real life, is whether or not to give $100 :)

I like the No-mega idea (it's similar to a refutation of Pascal's wager by invoking contrary gods), but I wouldn't raise my expectation for the number of No-mega encounters I'll have by very much upon encountering a solitary Omega.

Generalizing No-mega to include all sorts of variants that reward stupid or perverse behavior (are there more possible God-likes that reward things strange and alien to us?), I'm not in the least bit concerned.

I suppose it's just a good argument not to make plans for your life on the basis of imagined God-like beings. There should be as many gods who, when pleased with your action, intervene in your life in a way you would not consider pleasant, and are pleased at things you'd consider arbitrary, as those who have similar values they'd like us to express, and/or actually reward us copacetically.

Replies from: cousin_it
comment by cousin_it · 2010-06-01T19:03:11.580Z · LW(p) · GW(p)

I wouldn't raise my expectation for the number of No-mega encounters I'll have by very much upon encountering a solitary Omega.

You don't have to. Both Omega and No-mega decide based on what your intentions were before seeing any supernatural creatures. If right now you say "I would give money to Omega if I met one" - factoring in all belief adjustments you would make upon seeing it - then you should say the reverse about No-mega, and vice versa.

ETA: Listen, I just had a funny idea. Now that we have this nifty weapon of "exploding counterfactuals", why not apply it to Newcomb's Problem too? It's an improbable enough scenario that we can make up a similarly improbable No-mega that would reward you for counterfactual two-boxing. Damn, this technique is too powerful!

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-06-01T19:21:33.947Z · LW(p) · GW(p)

By not believing No-mega is probable just because I saw an Omega, I mean that I plan on considering such situations as they arise on the basis that only the types of godlike beings I've seen to date (so far, none) exist. I'm inclined to say that I'll decide in the way that makes me happiest, provided I believe that the godlike being is honest and really can know my precommitment.

I realize this leaves me vulnerable to the first godlike huckster offering me a decent exclusive deal; I guess this implies that I think I'm much more likely to encounter 1 godlike being than many.

comment by Alexandros · 2010-06-03T09:08:37.897Z · LW(p) · GW(p)

I would have thought everyone here would have seen this by now, but I hadn't until today so it may be new to someone else as well:

Charlie Munger on the 24 Standard Causes of Human Misjudgment

http://freebsd.zaks.com/news/msg-1151459306-41182-0/

comment by SilasBarta · 2010-06-02T22:11:50.627Z · LW(p) · GW(p)

Thought I might pass this along and file it under "failure of rationality". Sadly, this kind of thing is increasingly common -- getting deep in education debt, but not having increased earning power to service the debt, even with a degree from a respected university.

Summary: Cortney Munna, 26, went $100K into debt to get worthless degrees and is deferring payment even longer, making interest pile up further. She works in an unrelated area (photography) for $22/hour, and it doesn't sound like she has a lot of job security.

We don't find out until the end of the article that her degrees are in women's studies and religious studies.

There are much better ways to spend $100K. Twentysomethings like her are filling up the workforce. I'm worried about the future implications.

I thank my lucky stars I'm not in such a position (in the respects listed in the article -- Munna's probably better off in other respects). I didn't handle college planning as well as I could have, and I regret it to this day. But at least I didn't go deep into debt for a worthless degree.

Replies from: NancyLebovitz, Seth_Goldin
comment by NancyLebovitz · 2010-06-03T01:04:09.556Z · LW(p) · GW(p)

Twentysomethings like her are filling up the workforce.

Do you mean young people with unrepayable college debt, or young people with unrepayable debt for degrees which were totally unlikely to be of any use?

Replies from: SilasBarta
comment by SilasBarta · 2010-06-03T03:08:43.162Z · LW(p) · GW(p)

What's the substantive difference? In both cases, the young person has taken out a debt intended to amplify earnings by more than the debt costs, but that isn't going to happen. What does it matter whether the degree was of "any use" or not? What matters is whether it was of enough use to cover the debt, not simply whether there exists some gain in earnings due to the debt (which there probably is, though only via signaling, not direct enhancement of human capital).

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-03T10:37:22.188Z · LW(p) · GW(p)

I was making a distinction between extreme bad judgment (as shown in the article) and moderately bad judgment and/or bad luck.

Your emphasis upthread seemed to be on how foolish that woman and her family were.

comment by Seth_Goldin · 2010-06-03T00:23:49.479Z · LW(p) · GW(p)

Arnold Kling has some thoughts about the plight of the unskilled college grad.

1 2

Replies from: SilasBarta
comment by SilasBarta · 2010-06-03T03:28:32.351Z · LW(p) · GW(p)

Thanks for the links, I had missed those.

I agree with his broad points, but on many issues, I notice he often perceives a world that I don't seem to live in. For example, he says that people who can simply communicate in clear English and think clearly are in such short supply that he'd hire someone or take them on as a grad student simply for meeting that, while I haven't noticed the demand for my labor (as someone well above and beyond that) being like what that kind of shortage would imply.

Second, he seems to have this belief that the consumer credit scoring system can do no wrong. Back when I was unable to get a mortgage at prime rates due to lacking credit history despite being an ideal candidate [1], he claimed that the refusals were completely justified because I must have been irresponsible with credit (despite not having borrowed...), and he has no reason to believe my self-serving story ... even after I offered to send him my credit report and the refusals!

[1] I had no other debts, no dependents, no bad incidents on my credit report, stable work history from the largest private employer in the area, and the mortgage would be for less than 2x my income and have less than 1/6 of my gross in monthly payments. Yeah, real subprime borrower there...

Replies from: Vladimir_M, SoullessAutomaton
comment by Vladimir_M · 2010-06-03T17:40:19.756Z · LW(p) · GW(p)

One reason why the behavior of corporations and other large organizations often seems so irrational from an ordinary person's perspective is that they operate in a legal minefield. Dodging the constant threats of lawsuits and regulatory penalties while still managing to do productive work and turn a profit can require policies that would make no sense at all without these artificially imposed constraints. This frequently comes off as sheer irrationality to common people, who tend to imagine that big businesses operate under a far more laissez-faire regime than they actually do.

Moreover, there is the problem of diseconomies of scale. Ordinary common-sense decision criteria -- such as e.g. looking at your life history as you describe it and concluding that, given these facts, you're likely to be a responsible borrower -- often don't scale beyond individuals and small groups. In a very large organization, decision criteria must instead be bureaucratic and formalized in a way that can be, with reasonable cost, brought under tight control to avoid widespread misbehavior. For this reason, scalable bureaucratic decision-making rules must be clear, simple, and based on strictly defined categories of easily verifiable evidence. They will inevitably end up producing at least some decisions that common-sense prudence would recognize as silly, but that's the cost of scalability.

Also, it should be noted that these two reasons are not independent. Consistent adherence to formalized bureaucratic decision-making procedures is also a powerful defense against predatory plaintiffs and regulators. If a company can produce papers with clearly spelled out rules for micromanaging its business at each level, and these rules are per se consistent with the tangle of regulations that apply to it and don't give any grounds for lawsuits, it's much more likely to get off cheaply than if its employees are given broad latitude for common-sense decision-making.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-03T22:54:25.495Z · LW(p) · GW(p)

As nearly as I can figure it, people who rely on credit ratings mostly want to avoid loss, but aren't very concerned about missing chances to make good loans.

comment by SoullessAutomaton · 2010-06-03T03:43:30.520Z · LW(p) · GW(p)

For what it's worth, the credit score system makes a lot more sense when you realize it's not about evaluating "this person's ability to repay debt", but rather "expected profit for lending this person money at interest".

Someone who avoids carrying debt (e.g., paying interest) is not a good revenue source any more than someone who fails to pay entirely. The ideal lendee is someone who reliably and consistently makes payment with a maximal interest/principal ratio.

This is another one of those Hanson-esque "X is not about X-ing" things.
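(A toy illustration of the "expected profit, not ability to repay" point — every number here is invented for the example, not a real lender's model:)

```python
# Toy lender's-eye view: revenue is interest collected minus expected
# default losses, so a reliable balance-carrier can be worth more than
# a perfectly solvent customer who never pays a cent of interest.

def lender_revenue(balance_carried, apr, months, default_prob, loss):
    """Expected revenue from one cardholder (crude, illustrative)."""
    interest = balance_carried * (apr / 12) * months
    return (1 - default_prob) * interest - default_prob * loss

# Someone who pays in full every month: no interest, negligible risk.
transactor = lender_revenue(balance_carried=0, apr=0.20, months=12,
                            default_prob=0.01, loss=0)

# Someone who reliably carries a balance: steady interest, modest risk.
revolver = lender_revenue(balance_carried=3000, apr=0.20, months=12,
                          default_prob=0.05, loss=3000)

# The reliable revolver is the better revenue source, even though the
# transactor is "better" at handling debt.
assert revolver > transactor
```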

Replies from: NancyLebovitz, Douglas_Knight
comment by NancyLebovitz · 2010-06-03T10:44:05.023Z · LW(p) · GW(p)

I think there's also some Conservation of Thought (1) involved-- if you have a credit history to be looked at, there are Actual! Records!. If someone is just solvent and reliable and has a good job, then you have to evaluate that.

There may also be a weirdness factor if relatively few people have no debt history.

(1) Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed is partly about how a lot of what looks like tyranny when you're on the receiving end of it is motivated by the people in charge's desire to simplify your behavior enough to keep track of you and control you.

Replies from: JGWeissman, SilasBarta
comment by JGWeissman · 2010-06-03T22:15:10.924Z · LW(p) · GW(p)

what looks like tyranny when you're on the receiving end of it is motivated by the people in charge's desire to simplify your behavior enough to keep track of you and control you.

Simplifying my behavior enough to keep track of me and control me is tyranny.

comment by SilasBarta · 2010-06-03T15:04:31.276Z · LW(p) · GW(p)

I think there's also some Conservation of Thought (1) involved-- if you have a credit history to be looked at, there are Actual! Records!. If someone is just solvent and reliable and has a good job, then you have to evaluate that.

Except that there are records (history of paying bills, rent), it's just that the lenders won't look at them.

There may also be a weirdness factor if relatively few people have no debt history.

Maybe financial gurus should think about that before they say "stay away from credit cards entirely". It should be "You MUST get a credit card, but pay the balance." (This is another case of addictive stuff that can't addict me.)

(Please, don't bother with advice, the problem has since been solved; credit unions are run by non-idiots, it seems, and don't make the above lender errors.)

ETA: Sorry for the snarky tone; your points are valid, I just disagree about their applicability to this specific situation.

Replies from: Vladimir_M, NancyLebovitz, CronoDAS
comment by Vladimir_M · 2010-06-03T17:48:03.147Z · LW(p) · GW(p)

SilasBarta:

Except that there are records (history of paying bills, rent), it's just that the lenders won't look at them.

Well, is it really possible that lenders are so stupid that they're missing profit opportunities because such straightforward ideas don't occur to them? I would say that lacking insider information on the way they do business, the rational conclusion would be that, for whatever reasons, either they are not permitted to use these criteria, or these criteria would not be so good after all if applied on a large scale.

(See my above comment for an elaboration on this topic.)

(Please, don't bother with advice, the problem has since been solved; credit unions are run by non-idiots, it seems, and don't make the above lender errors.)

Or maybe the reason is that credit unions are operating under different legal constraints and, being smaller, they can afford to use less tightly formalized decision-making rules?

Replies from: SilasBarta, Douglas_Knight
comment by SilasBarta · 2010-06-03T17:54:36.996Z · LW(p) · GW(p)

Well, is it really possible that lenders are so stupid that they're missing profit opportunities because such straightforward ideas don't occur to them? I would say that lacking insider information on the way they do business, the rational conclusion would be that, for whatever reasons, either they are not permitted to use such criteria, or such criteria would not be so good after all if applied on a large scale.

No, they do require that information to get the subprime loan; it's just that they classified me as subprime based purely on the lack of credit history, irrespective of that non-loan history. Providing that information, though required, doesn't get you back into prime territory.

Or maybe the reason is that credit unions are operating under different legal constraints and, being smaller, they can afford to use less tightly formalized decision-making rules?

Considering that in the recent financial industry crisis, the credit unions virtually never needed a bailout, while most of the large banks did, there is good support for the hypothesis of CU = non-idiot, larger banks/mortgage brokers = idiot.

(Of course, I do differ from the general subprime population in that if I see that I can only get bad terms on a mortgage, I don't accept them.)

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-03T18:16:40.537Z · LW(p) · GW(p)

SilasBarta:

No, they do require that information to get the subprime loan; it's just that they classified me as subprime based purely on the lack of credit history, irrespective of that non-loan history. Providing that information, though required, doesn't get you back into prime territory.

This merely means that their formal criteria for sorting out loan applicants into officially recognized categories disallow the use of this information -- which would be fully consistent with my propositions from the above comments.

Mortgage lending, especially subprime lending, has been a highly politicized issue in the U.S. for many years, and this business presents an especially dense and dangerous legal minefield. Multifarious politicians, bureaucrats, courts, and prominent activists have a stake in that game, and they have all been using whatever means are at their disposal to influence the major lenders, whether by carrots or by sticks. All this has undoubtedly influenced the rules under which loans are handed out in practice, making the bureaucratic rules and procedures of large lenders seem even more nonsensical from the common person's perspective than they would otherwise be.

(I won't get into too many specifics in order to avoid raising controversial political topics, but I think my point should be clear at least in the abstract, even if we disagree about the concrete details.)

Considering that in the recent financial industry crisis, the credit unions virtually never needed a bailout, while most of the large banks did, there is good support for the hypothesis of CU = non-idiot, larger banks/mortgage brokers = idiot.

Why do you assume that the bailouts are indicative of idiocy? You seem to be assuming that -- roughly speaking -- the major financiers have been engaged in more or less regular market-economy business and done a bad job due to stupidity and incompetence. That, however, is a highly inaccurate model of how the modern financial industry operates and its relationship with various branches of the government -- inaccurate to the point of uselessness.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-03T18:27:26.829Z · LW(p) · GW(p)

I actually agree with most of those points, and I've made many such criticisms myself. So perhaps larger banks are forced into a position where they rely too much on credit scores at one stage. Still, credit unions won, despite having much less political pull, while significantly larger banks toppled. Much as I disagree with the policies you've described, some of the banks' errors (like assumptions about repayment rates) were bad, no matter what government policy is.

If lending had really been regulated to the point of (expected) unprofitability, they could have gotten out of the business entirely, perhaps spinning off mortgage divisions as credit unions to take advantage of those laws. Instead, they used their political power to "dance with the devil", never adjusting for the resulting risks, either political or in real estate. There's stupidity in that somewhere.

Replies from: mattnewport
comment by mattnewport · 2010-06-03T18:40:23.671Z · LW(p) · GW(p)

Still, credit unions won, despite having much less political pull, while significantly larger banks toppled.

In some cases this was an example of the principal–agent problem - the interests of bank employees were not necessarily aligned with the interests of the shareholders. Bank executives can 'win' even when their bank topples.

Replies from: RobinZ
comment by RobinZ · 2010-06-03T18:50:40.339Z · LW(p) · GW(p)

The principal-agent problem should always be on the list of candidates, but it can occasionally be eliminated as an explanation. I was listening to the This American Life episode "Return to the Giant Pool of Money", and more than one of the agents in the chain had large amounts of their resources wiped out.

Replies from: mattnewport
comment by mattnewport · 2010-06-03T18:55:00.941Z · LW(p) · GW(p)

The question of whether an agent's interests are aligned with the principal's is largely orthogonal to the question of whether the agent achieves a positive return. The agent's expected return is more relevant.

Replies from: RobinZ
comment by RobinZ · 2010-06-03T19:08:14.196Z · LW(p) · GW(p)

There were many agents involved in the recent financial unpleasantness whose harm was enabled by the principal-agent problem. My intended examples did not suffer that problem. I could have made that clearer.

comment by Douglas_Knight · 2010-06-03T18:09:21.998Z · LW(p) · GW(p)

Well, is it really possible that lenders are so stupid ... not be so good after all if applied on a large scale.

These are not such different answers. Working on a large scale tends to require hiring (potentially) stupid people and giving them little flexibility.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-03T18:22:34.063Z · LW(p) · GW(p)

Yes, that's certainly true. In fact, what you say is very similar to one of the points I made in my first comment in this thread (see its second paragraph).

comment by NancyLebovitz · 2010-06-03T15:26:19.571Z · LW(p) · GW(p)

Except that there are records (history of paying bills, rent), it's just that the lenders won't look at them.

Fair point. This does replicate the Conservation of Thought theme. I think a good bit about business can be explained as not bothering because one's competitors haven't bothered either.

I've seen financial gurus recommend getting a credit card and paying the balance.

And thanks for the ETA.

Replies from: mattnewport
comment by mattnewport · 2010-06-03T17:56:38.778Z · LW(p) · GW(p)

I've seen financial gurus recommend getting a credit card and paying the balance.

Ramit Sethi for example. I had the impression that this was actually pretty much the standard advice from personal finance experts. Most of them are not worth listening to anyway though.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-03T22:08:23.755Z · LW(p) · GW(p)

This might be what they say in their books, where they give a detailed financial plan, though I doubt even that. What they advise is usually directed at the average mouthbreather who gets deep into credit card debt. They don't need to advise such people to build a credit history by getting a credit card solely for that purpose -- that ship has already sailed!

All I ever hear from them is "Stay away from credit cards entirely! Those are a trap!" I had never once heard a caveat about, "oh, but make sure to get one anyway so you don't find yourself at 24 without a credit history, just pay the balance." No, for most of what they say to make sense, you have to start from the assumption that the listener typically doesn't pay the full balance, and is somehow enlightened by moving to such a policy.

Notice how the citation you give is from a chapter-length treatment from a less-known finance guru (than Ramsey, Orman, Howard, etc.), and it's about "optimizing credit cards", a kind of complex, niche strategy. Not standard, general advice from a household name.

Replies from: Blueberry
comment by Blueberry · 2010-06-04T01:26:04.921Z · LW(p) · GW(p)

All I ever hear from them is "Stay away from credit cards entirely! Those are a trap!"

That would be an insanely stupid thing for anyone to say. Credit cards are very useful if used properly. I agree with mattnewport that the standard advice given in financial books is to charge a small amount every month to build up a credit rating. Also, charge large purchases at the best interest rate you can find when you'll use the purchases over time and you have a budget that will allow you to pay them off.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-04T02:05:40.325Z · LW(p) · GW(p)

Well, then I don't know what to tell you. I'd listened to financial advice shows on and off and had read Clark Howard's book before applying for the mortgage back then, and never once did I hear or read that you should get a credit card merely to establish a credit history (and this is not why they issue them). I suspect it's because their advice begins from the assumption that you're in credit card debt, and you need to get out of that first, "you bozo".

And your comment about the usefulness of credit cards for borrowing is a bit ivory-tower. In actual experience, based on all the expose reports and news stories I've seen, it's pretty much impossible to do that kind of planning, since credit card companies reserve the right to make arbitrary changes to the terms -- and use that right.

I remember one case where a bank issued a card that had a "guaranteed" 1.9% rate for ~6 months with a ~$5000 limit -- but if you actually used anything approaching that limit, they would invoke the credit risk clauses of the agreement, deem you a high risk because of all the debt you're carrying, and jack up your rate to over 20%. So, a 1.9% loan that they can immediately change to 20% if they feel like it -- in what sense was it a 1.9% loan?

For that reason, I don't even consider using a credit card for installment purchases.

Replies from: Blueberry
comment by Blueberry · 2010-06-05T17:49:44.124Z · LW(p) · GW(p)

Wow, they can jack up the rate like that? I would definitely consider that fraud and abuse. That's not common, however, and Congress recently passed legislation to prevent that sort of abuse. Currently, I don't have the option of not using a credit card; I would starve to death without it.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-05T23:33:38.507Z · LW(p) · GW(p)

Wow, they can jack up the rate like that? I would definitely consider that fraud and abuse. That's not common ...

I thought so too, but then was overwhelmed with stories like that. Most credit cards agreements are written with a clause that says, "we can do whatever we want, and the most you can do to reject the new terms is pay off the entire debt in 15 days". This is one of the few instances where courts will honor a contract that gives one party such open-ended power over the other.

If you haven't been burned this way, it's just a matter of time.

And if you google the topic, I'm sure you'll find enough to satisfy your evidence threshold.

Currently, I don't have the option of not using a credit card; I would starve to death without it.

Would you starve to death with it? If you can service the debts, let me loan you the money; at this point, most investors would sell out their mother to get a fraction of the interest rate on their savings that most credit cards charge. (Not that I would, but I'd turn down the offer without my trademark rudeness...)

comment by CronoDAS · 2010-06-05T19:12:37.647Z · LW(p) · GW(p)

::followed link::

Did you ever experience nicotine withdrawal symptoms? In people who aren't long-time smokers, they can take up to a week to appear.

Replies from: Vladimir_M, SilasBarta
comment by Vladimir_M · 2010-06-06T00:40:15.089Z · LW(p) · GW(p)

For what that's worth, when I quit smoking, I didn't feel any withdrawal symptoms except being a bit nervous and irritable for a single day (and I'm not even sure if quitting was the cause, since it coincided with some stressful issues at work that could well have caused it regardless). That was after a few years of smoking something like two packs a week on average (and much more than that during holidays and other periods when I went out a lot).

From my experience, as well as what I observed from several people I know very well, most of what is nowadays widely believed about addiction is a myth.

comment by SilasBarta · 2010-06-05T23:26:47.251Z · LW(p) · GW(p)

No, never did. My best guess is that I didn't smoke heavily enough to get a real addiction, though I smoked enough to get the psychoactive effects.

Replies from: Kevin
comment by Kevin · 2010-06-05T23:36:49.574Z · LW(p) · GW(p)

Yes, I would think it would take around 5-10 cigarettes a day (or more) for at least a week to develop an addiction. While cigarettes (and heroin, and caffeine) are very physically addictive, it still takes sustained, moderately high use to develop a physical addiction. Most cigarette smokers describe their addictions in terms of "x packs per day".

Replies from: SilasBarta
comment by SilasBarta · 2010-06-05T23:40:23.590Z · LW(p) · GW(p)

Most cigarette smokers describe their addictions in terms of "x packs per day".

Okay, then I guess my case isn't informative ... I'd use the pack/year metric instead of the pack/day.

Replies from: CronoDAS
comment by CronoDAS · 2010-06-06T17:50:35.096Z · LW(p) · GW(p)

I wish I could direct you to this Scientific American article so I could ask how it compares to your experiences, but it's behind a paywall.

Replies from: SilasBarta
comment by SilasBarta · 2010-06-06T19:29:56.261Z · LW(p) · GW(p)

From what I can see before the paywall, it looks like I definitely didn't meet the threshold under the best science, but I could probably cross it with 5 cigarettes per day. I'd only try that out if I were rewarded for doing it (but not for stopping, as that would defeat the purpose of such an experiment).

Replies from: CronoDAS
comment by CronoDAS · 2010-06-06T21:20:09.062Z · LW(p) · GW(p)

I read the article on paper before it was hidden in a paywall, so I can summarize some of the findings:

1) Rat brains are irrevocably changed by a single dose of nicotine.

2) Brains of rats that have never been exposed to nicotine ("non-smokers"), those that are currently given nicotine on a regular basis ("current smokers"), and those that used to be given nicotine on a regular basis but have been deprived of it for a long time ("former smokers") are all distinguishable from each other.

3) The author notes that the primary effect of nicotine on addicted human smokers appears to be suppressing craving for itself.

4) The author hypothesizes that the brain has a craving-generating system and a separate craving-suppression system. (These systems apply to appetites in general, such as the desire to eat food.) He further goes on to speculate that the primary action of nicotine is to suppress craving. This has the effect of throwing the two systems out of equilibrium, so the brain's craving-generation system "works harder" to counter the effects of nicotine. When the effects of nicotine wear off (which can take much longer than the time it takes for the nicotine to leave the body), the equilibrium is once again thrown out of balance, resulting in cravings. (The effects of smoking on weight are mentioned as support for this hypothesis.)

comment by Douglas_Knight · 2010-06-03T14:59:09.283Z · LW(p) · GW(p)

For what it's worth, the credit score system makes a lot more sense when you realize it's not about evaluating "this person's ability to repay debt", but rather "expected profit for lending this person money at interest".

Expected profit explains much behavior of credit card companies, but I don't think it helps at all with the behavior of the credit score system or mortgage lenders (Silas's example!). Nancy's answer looks much better to me (except her use of the word "also").

comment by university_student · 2010-06-01T23:13:48.985Z · LW(p) · GW(p)

(Wherein I seek advice on what may be a fairly important decision.)

Within the next week, I'll most likely be offered a summer job where the primary project will be porting a space weather modeling group's simulation code to the GPU platform. (This would enable them to start doing predictive modeling of solar storms, which are increasingly having a big economic impact via disruptions to power grids and communications systems.) If I don't take the job, the group's efforts to take advantage of GPU computing will likely be delayed by another year or two. This would be a valuable educational opportunity for me in terms of learning about scientific computing and gaining general programming/design skill; as I hope to start contributing to FAI research within 5-10 years, this has potentially big instrumental value.

In "Why We Need Friendly AI", Eliezer discussed Moore's Law as a source of existential risk:

Moore’s Law does make it easier to develop AI without understanding what you’re doing, but that’s not a good thing. Moore’s Law gradually lowers the difficulty of building AI, but it doesn’t make Friendly AI any easier. Friendly AI has nothing to do with hardware; it is a question of understanding. Once you have just enough computing power that someone can build AI if they know exactly what they’re doing, Moore’s Law is no longer your friend. Moore’s Law is slowly weakening the shield that prevents us from messing around with AI before we really understand intelligence. Eventually that barrier will go down, and if we haven’t mastered the art of Friendly AI by that time, we’re in very serious trouble. Moore’s Law is the countdown and it is ticking away. Moore’s Law is the enemy.

Due to the quality of the models used by the aforementioned research group and the prevailing level of interest in more accurate models of solar weather, successful completion of this summer project will probably result in a nontrivial increase in demand for GPUs. It seems that the next best use of my time this summer would be to work full time on the expression-simplification abilities of a computer algebra system.

Given all this information and the goal of reducing existential risk from unFriendly AI, should I take the job with the space weather research group, or not? (To avoid anchoring on other people's opinions, I'm hoping to get input from at least a couple of LW readers before mentioning the tentative conclusion I've reached.)

ETA: I finally got an e-mail response from the research group's point of contact and she said all their student slots have been taken up for this summer, so that basically takes care of the decision problem. But I might be faced with a similar choice next summer, so I'd still like to hear thoughts on this.

Replies from: orthonormal, NaN, Kaj_Sotala, Roko, rwallace
comment by orthonormal · 2010-06-03T01:27:24.565Z · LW(p) · GW(p)

The amount you could slow down Moore's Law by any strategy is minuscule compared to the amount you can contribute to FAI progress if you choose. It's like feeling guilty over not recycling a paper cup, when you're planning to become a lobbyist for an environmentalist group later.

comment by NaN · 2010-06-01T23:21:03.619Z · LW(p) · GW(p)

Uninformed opinion: space weather modelling doesn't seem like a huge market, especially when you compare it to the truly massive gaming market. I doubt the increase in demand would be significant, and if what you're worried about is rate of growth, it seems like delaying it a couple of years would be wholly insignificant.

comment by Kaj_Sotala · 2010-06-02T21:18:46.355Z · LW(p) · GW(p)

I would say that there seem to be a lot of companies that are in one way or another trying to advance Moore's law. As long as the one you're working for doesn't seem to have a truly revolutionary advantage compared to the others, just taking the money but donating a large portion of it to existential risk reduction is probably an okay move.

(Full disclosure: I'm an SIAI Visiting Fellow so they're paying my upkeep right now.)

comment by Roko · 2010-06-02T23:26:48.887Z · LW(p) · GW(p)

Personally trying to slow Moore's Law down is the kind of foolishness that Eliezer seems to inspire in young people...

Replies from: university_student
comment by university_student · 2010-06-03T04:17:26.270Z · LW(p) · GW(p)

Do you mean that he actively seeks to encourage young people to try and slow Moore's Law, or that this is an unintentional consequence of his writings on AI risk topics?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-03T04:25:26.882Z · LW(p) · GW(p)

I'm pretty sure that Roko means the second. If this idea got mentioned to Eliezer I'm pretty sure he'd point out the minimal impact that any single human can have on this, even before one gets to whether or not it is a good idea.

comment by rwallace · 2010-06-02T23:45:09.994Z · LW(p) · GW(p)

If you get an opportunity like that, take it. It's one thing to gain emotional comfort from believing fantasies about computers with magical powers, but when fantasy is being used as a reason to close off real life opportunity, something is badly wrong.

comment by roland · 2010-06-01T18:23:10.216Z · LW(p) · GW(p)

Should we buy insurance at all?

There is a small remark in Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making about insurance saying that all insurance has negative expected utility, we pay too high a price for too little a risk, otherwise insurance companies would go bankrupt. If this is the case should we get rid of all our insurances? If not, why not?

Replies from: SilasBarta, RobinZ, gwern, Jonathan_Graehl, JamesAndrix, gwern
comment by SilasBarta · 2010-06-01T19:19:06.193Z · LW(p) · GW(p)

There is a small remark in Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making about insurance saying that all insurance has negative expected utility, we pay too high a price for too little a risk, otherwise insurance companies would go bankrupt.

No -- Insurance has negative expected monetary return, which is not the same as expected utility. If your utility function obeys the law of diminishing marginal utility, then it also obeys the law of increasing marginal disutility. So, for example, losing 10x will be more than ten times as bad as losing x. (Just as gaining 10x is less than ten times as good as gaining x.)

Therefore, on your utility curve, a guaranteed loss of x can be better than a 1/1000 chance of losing 1000x.

ETA: If it helps, look at a logarithmic curve and treat it as your utility as a function of some quantity. Such a curve obeys diminishing marginal utility. At any given point, your utility increases less than proportionally going up, but more than proportionally going down.
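The log-utility point above can be made concrete with a small sketch. All numbers here are illustrative, not from the comment: a wealth of $100,000, a 1-in-1000 chance of a $90,000 loss, versus paying a $150 premium to avoid it (well above the $90 expected monetary loss, so the insurer profits on average):

```python
import math

# Illustrative numbers (assumptions, not from the comment above):
wealth = 100_000.0
p_loss = 1 / 1000
loss = 90_000.0
premium = 150.0  # note: premium > p_loss * loss, so the insurer profits

def log_utility(w):
    # A utility curve obeying diminishing marginal utility.
    return math.log(w)

# Expected utility if uninsured: a small chance of a catastrophic hit.
eu_uninsured = (p_loss * log_utility(wealth - loss)
                + (1 - p_loss) * log_utility(wealth))

# Expected utility if insured: a certain small loss (the premium).
eu_insured = log_utility(wealth - premium)

print(eu_uninsured, eu_insured)
# With log utility, buying insurance beats going without, even though
# the premium exceeds the expected monetary loss.
assert eu_insured > eu_uninsured
```

The same code with a trivial loss (say, $500) shows the insured option losing, which matches the standard advice elsewhere in this thread to self-insure small risks.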

(Incidentally, I actually wrote an embarrassing article arguing in favor of the thesis roland presents, and you can probably still find it on the internet. That exchange is also an example of someone being bad at explaining. If my opponent had simply stated the equivalence between DMU and IMD, I would have understood why that argument about insurance is wrong. Instead, he just resorted to lots of examples of when people buy insurance, which are totally unconvincing if you accept the quoted argument.)

Replies from: mkehrt
comment by mkehrt · 2010-06-01T20:25:51.674Z · LW(p) · GW(p)

I voted this up, but I want to comment to point out that this is a really important point. Don't be tricked into not getting insurance just because it has a negative expected monetary value.

Replies from: mattnewport
comment by mattnewport · 2010-06-01T20:49:06.639Z · LW(p) · GW(p)

I voted Silas up as well because it's an important point but it shouldn't be taken as a general reason to buy as much insurance as possible (I doubt Silas intended it that way either). Jonathan_Graehl's point that you should self-insure if you can afford to and only take insurance for risks you cannot afford to self-insure is probably the right balance.

Personally I don't directly pay for any insurance. I live in Canada (universal health coverage) and have extended health insurance through work (much to my dismay I cannot decline it in favor of cash) which means I have far more health insurance than I would purchase with my own money. Given my aversion to paperwork I don't even fully use what I have. I do not own a house or a car which are the other two areas arguably worth insuring. I don't have dependents so have no need for life or disability coverage. All other forms of insurance fall into the 'self-insure' category for me given my relatively low risk aversion.

comment by RobinZ · 2010-06-01T18:51:42.824Z · LW(p) · GW(p)

Risk is more expensive when you have a smaller bankroll. Many slot machines actually offer positive expected value payouts - they make their return on people plowing their winnings back in until they go broke.

Replies from: Douglas_Knight, roland
comment by Douglas_Knight · 2010-06-01T20:39:44.384Z · LW(p) · GW(p)

Citation please? A cursory search suggests that machines go through +EV phases, just like blackjack, but that individual machines are -EV. It's not just that they expect people to plow the money back in, but that pros have to wait for fish to plow money in to get to the +EV situation.

The difference with blackjack is that you can (in theory) adjust your bet to take advantage of the different phases of blackjack. Your first sentence seems to match Roland's comment about the Kelly criterion (you lose betting against snake eyes if you bet your whole bankroll every time), but that doesn't make sense with fixed-bet slots. There, if it made sense to make the first bet, it makes sense to continuing betting after a jackpot.

Replies from: Dagon, CronoDAS, RobinZ
comment by Dagon · 2010-06-01T22:29:07.332Z · LW(p) · GW(p)

This comes up frequently in gambling and statistics circles. "Citation please" is the correct response - casinos do NOT expect to make a profit by offering losing (for them) bets and letting "gambler's ruin" pay them off. It just doesn't work that way.

The fact that a +moneyEV bet can be -utilityEV for a gambler does NOT imply that a -moneyEV bet can be +utilityEV for the casino. It's -utility for both participants.

The only reason casinos offer such bets ever is for promotional reasons, and they hope to make the money back on different wagers the gambler will make while there.

The Kelly calculations work just fine for all these bets - for cyclic bets, it ends up you should bet 0 when -EV. When +EV, bet some fraction of your bankroll that maximizes mean-log-outcome for each wager.
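The rule Dagon describes (bet 0 when -EV, otherwise the fraction maximizing mean log outcome) has a closed form for a simple bet paying b-to-1 with win probability p: f* = (bp - q)/b. A minimal sketch, with illustrative numbers not taken from the thread:

```python
import math

def kelly_fraction(p, b):
    # Kelly fraction for a bet paying b-to-1 with win probability p;
    # clipped at 0, since for -EV bets the optimal wager is nothing.
    q = 1 - p
    return max(0.0, (b * p - q) / b)

def expected_log_growth(f, p, b):
    # Mean log outcome per wager when betting fraction f of bankroll.
    q = 1 - p
    return p * math.log(1 + f * b) + q * math.log(1 - f)

p, b = 0.55, 1.0               # a 55% win chance at even odds: +EV
f_star = kelly_fraction(p, b)  # bet about 10% of bankroll

# Brute-force check that f* maximizes mean log outcome.
best = max((expected_log_growth(f / 1000, p, b), f / 1000)
           for f in range(1000))
print(f_star, best[1])

# For a -EV bet, Kelly says bet zero, matching Dagon's point.
assert kelly_fraction(0.45, 1.0) == 0.0
```

This also matches the earlier observation about fixed-bet slots: the Kelly fraction depends only on p and b, so if the first bet was worth making, it remains worth making after a jackpot.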

comment by CronoDAS · 2010-06-01T22:30:12.623Z · LW(p) · GW(p)

Some casinos advertise that they have slots with "up to" a 101% rate of return. Good luck finding the one machine in the casino that actually has a positive EV, though!

comment by RobinZ · 2010-06-01T22:08:08.048Z · LW(p) · GW(p)

On the scale from "saw it in The Da Vinci Code" to "saw it in Nature", I'd have to say all I have is an anecdote from a respectable blogger:

Because slot machines are designed to hook you in, you're going to get some return on investment from them if you hold yourself to a specific amount. At the Casino de Lac Leamy, up in Canada (run, I would add, by the Quebec provincial government. Now that's a lottery system), the slots are 'loose.' They pay out relatively often. In fact, when Weds and I have played twenty dollars worth of slots together, we've never failed to leave the casino floor with more money than we had entering the floor. That twenty dollars has been anything from thirty to sixty-five dollars, the three or four times we've done this.

I'll give you that "many" is almost certainly flat wrong, on reflection, but such machines are (were?) probably out there.

Replies from: SilasBarta, bentarm
comment by SilasBarta · 2010-06-01T22:13:40.518Z · LW(p) · GW(p)

On the scale from "saw it in The Da Vinci Code"

That movie was full of falsehoods. For example, people named Silas are actually no more or less likely than the general population to be tall homicidal albino monks -- but you wouldn't guess that from seeing the movie, now, would you?

Replies from: RobinZ
comment by RobinZ · 2010-06-02T02:28:04.046Z · LW(p) · GW(p)

That's why it represents the bottom end of my "source-reliability" scale.

comment by bentarm · 2010-06-01T23:02:55.117Z · LW(p) · GW(p)

The only relevant part of the quote seems to be:

That twenty dollars has been anything from thirty to sixty-five dollars, the three or four times we've done this.

I'm pretty sure it's not that unlikely to come up ahead 'three or four' times when playing slot machines (if it weren't so late I'd actually do the sums). It seems much more plausible that the blog author was just lucky than that the machines were actually set to regularly pay out positive amounts.
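Those sums can be sketched as a quick Monte Carlo simulation. Every parameter here is an assumption for illustration (the blog quote gives none): a $1 machine with a 90% payback rate, paid out as a 1-in-30 chance of winning $27, played until a $20 stake is exhausted, with winnings pocketed rather than replayed:

```python
import random

random.seed(0)  # deterministic run for reproducibility

def session_profit(stake=20, bet=1, p_win=1 / 30, payout=27):
    # Play `stake` one-dollar spins; pocket winnings instead of
    # replaying them. Returns net profit for the session.
    winnings = 0
    for _ in range(stake):
        if random.random() < p_win:
            winnings += payout
    return winnings - stake

trials = 100_000
ahead = sum(session_profit() > 0 for _ in range(trials))
print(ahead / trials)  # estimated chance of leaving one session ahead
```

Under these assumptions a single $20 session ends ahead nearly half the time, despite the machine being -EV, so coming out ahead three or four times is unlikely but hardly astonishing, which supports the "just lucky" reading.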

comment by roland · 2010-06-01T18:59:29.440Z · LW(p) · GW(p)

Ahh, Kelly criterion, correct?

Replies from: RobinZ
comment by RobinZ · 2010-06-01T20:18:38.135Z · LW(p) · GW(p)

...

*looks up Kelly criterion*

That's definitely a related result. (So related, in fact, that thinking about the +EV slots the other day got me wondering what the optimal fraction of your wealth was to bid on an arbitrary bet - which, of course, is just the Kelly criterion.)

comment by gwern · 2010-06-07T00:04:50.879Z · LW(p) · GW(p)

I'd like to pose a related question. Why is insurance structured as up-front payments and unlimited coverage, and not as conditional loans?

For example, one could imagine car insurance as an options contract (or perhaps a futures) where if your car is totaled, you get a loan sufficient for replacement. One then pays off the loan with interest.

The person buying this form of insurance makes fewer payments upfront, reducing their opportunity costs and also the risk of letting insurance lapse due to random fluctuations. The entity selling this form of insurance reduces the risk of moral hazard (i.e., someone taking out insurance, torching their car, and then letting insurance lapse the next month).

Short of assuming strange consumer preferences or irrationality, I don't see any obvious reason why this form of insurance isn't superior to the usual kind.

Replies from: Vladimir_M, Nick_Tarleton
comment by Vladimir_M · 2010-06-07T01:01:28.848Z · LW(p) · GW(p)

Well, look at a more extreme example. Imagine an accident in which you not just total a car, but you're also on the hook for a large bill in medical costs, and there's no way you can afford to pay this bill even if it's transmuted into a loan with very favorable terms. With ordinary insurance, you're off the hook even in this situation -- except possibly for the increased future insurance costs now that the accident is on your record, which you'll still likely be able to afford.

The goal of insurance is to transfer money from a large mass of people to a minority that happens to be struck by an improbable catastrophic event (with the insurer taking a share as the transaction-facilitating middleman, of course). Thus a small possibility of a catastrophic cost is transmuted into the certainty of a bearable cost. This wouldn't be possible if instead of getting you off the hook, the insurer burdened you with an immense debt in case of disaster.

(A corollary of this observation is that the notion of "health insurance" is one of the worst misnomers to ever enter public circulation.)

Replies from: gwern
comment by gwern · 2010-06-07T01:12:06.442Z · LW(p) · GW(p)

Alright, so this might not work for medical disasters late in life, things that directly affect future earning power. (Some of those could be handled by savings made possible by not having to make insurance payments.)

But that's just one small area of insurance. You've got housing, cars, unemployment, and this is just what comes to mind for consumers, never mind all the corporate or business need for insurance. Are all of those entities buying insurance really not in a position to repay a loan after a catastrophe's occurrence? Even nigh-immortal institutions?

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-07T03:27:09.680Z · LW(p) · GW(p)

I wouldn't say that the scenarios I described are "just one small area of insurance." Most things for which people buy insurance fit under that pattern -- for a small to moderate price, you buy the right to claim a large sum that saves you, or at least alleviates your position, if an improbable ruinous event occurs. (Or, in the specific case of life insurance, that sum is supposed to alleviate the position of others you care about who would suffer if you die unexpectedly.)

However, it should also be noted that the role of insurance companies is not limited to risk pooling. Since in case of disaster the burden falls on them, they also specialize in specific forms of damage control (e.g. by aggressive lawyering, and generally by having non-trivial knowledge of how to make the best out of specific bad situations). Therefore, the expected benefit from insurance might actually be higher than the cost even regardless of risk aversion. Of course, insurers could play the same role within your proposed emergency loan scheme.

It could also be that certain forms of insurance are mandated by regulations even when it comes to institutions large enough that they'd be better off pooling their own risk, or that you're not allowed to do certain types of transactions except under the official guise of "insurance." I'd be surprised if the modern infinitely complex mazes of business regulation don't give rise to at least some such situations.

Moreover, there is also the confusion caused by the fact that governments like to give the name of "insurance" to various programs that have little or nothing to do with actuarial risk, and in fact represent more or less pure transfer schemes. (I'm not trying to open a discussion about the merits of such schemes; I'm merely noting that they, as a matter of fact, aren't based on risk pooling that is the basis of insurance in the true sense of the term.)

Replies from: gwern
comment by gwern · 2010-06-07T13:42:39.655Z · LW(p) · GW(p)

I wouldn't say that the scenarios I described are "just one small area of insurance." Most things for which people buy insurance fit under that pattern -- for a small to moderate price, you buy the right to claim a large sum that saves you, or at least alleviates your position, if an improbable ruinous event occurs.

Intrinsically, the average person must pay in more than they get out. Otherwise the insurance company would go bankrupt.

Since in case of disaster the burden falls on them, they also specialize in specific forms of damage control (e.g. by aggressive lawyering, and generally by having non-trivial knowledge on how to make the best out specific bad situations).

No reason a loan style insurance company couldn't do the exact same thing.

I'd be surprised if the modern infinitely complex mazes of business regulation don't give rise to at least some such situations.

'Rent-seeking' and 'regulatory capture' are certainly good answers to the question why doesn't this exist.

comment by Nick_Tarleton · 2010-06-07T00:43:46.958Z · LW(p) · GW(p)

For one thing, insurance makes expenses more predictable; though the desire for predictability (in order to budget, or the like) does probably indicate irrationality and/or bounded rationality.

Replies from: gwern
comment by gwern · 2010-06-07T01:01:13.532Z · LW(p) · GW(p)

What's unpredictable about a loan? You can predict what you'll be paying pretty darn precisely, and there's no intrinsic reason that your monthly loan repayments would have to be higher than your insurance pre-payments.

Replies from: Nick_Tarleton, Nick_Tarleton
comment by Nick_Tarleton · 2010-06-10T03:26:43.314Z · LW(p) · GW(p)

You can't predict when you'll have to start paying.

comment by Nick_Tarleton · 2010-06-10T03:26:23.237Z · LW(p) · GW(p)

It's not predictable when you'll have to start making payments.

comment by Jonathan_Graehl · 2010-06-01T19:09:52.086Z · LW(p) · GW(p)

Obviously if you know your utility function and the true distribution of possible risks, it's easy to decide whether to take a particular insurance deal.

The standard advice is that if you can afford to self-insure, you should, for the reason you cite (that insurance companies make a profit, on average).

That's a heuristic that holds up fine except when you know (for reasons you will keep secret from insurers) your own risk is higher than they could expect; then, depending on how competitive insurers are, even if you're not too risk-averse, you might find a good deal, even to the extent that you turn an expected (discounted) profit, and so should buy it even if you have zero risk aversion. Apparently in California, auto insurers are required to publish the algorithm by which they assign premiums (and are possibly prohibited from using certain types of information).

Conversely, you may choose to have no insurance (or extremely high deductible) in cases where you believe your personal risk is far below what the insurer appears to believe, even when you're actually averse to that risk.

Of course, it's not sufficient to know how wrong the insurer's estimate of your risk is; they insist on a pretty wide vig - not just to survive both uncertainties in their estimation of risk and the market returns on the float, but also to compensate for the observed amount of successful adverse selection that results from people applying the above heuristic.

I suppose it may also be possible that the insurer won't pay. I don't know exactly what guarantees we have in the U.S.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-06-01T21:46:27.818Z · LW(p) · GW(p)

to compensate for the observed amount of successful adverse selection that results from people applying the above heuristic.

Actually, I think that for voluntary insurance, the observed adverse selection is negative, but I can't find the cite. People simply don't do cost-benefit calculations. People who buy insurance are those who are terribly risk-averse or see it as part of their role. Such people tend to be more careful than the general population. In a competitive market, the price of insurance would be bid down to reflect this, but it isn't.

comment by JamesAndrix · 2010-06-03T05:15:30.435Z · LW(p) · GW(p)

We should form large nonprofit risk pools.

comment by gwern · 2010-06-02T17:46:10.389Z · LW(p) · GW(p)

Some insurances are obviously not worth getting, like insurance on laptops or music players. But the claim that insurance in general has negative expected utility assumes no risk aversion. If you can handle the risks on your own (if you are effectively self-insuring) then you probably should do that. But a house burning down, or a rare cancer that will cost millions to treat: these are not self-insurable risks unless you are a millionaire.

comment by Alexandros · 2010-06-05T11:08:52.022Z · LW(p) · GW(p)

Guided by Parasites: Toxoplasma Modified Humans

a ~20 minute (absolutely worth every minute) interview with Dr. Robert Sapolsky, a leading researcher in the study of Toxoplasma and its effects on humans. This is a must-see. Also, towards the end there is discussion of the effect of stress on telomere shortening. Fascinating stuff.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-05T18:18:34.306Z · LW(p) · GW(p)

Thanks for the link.

If people's desires are influenced by parasites, what does that do to CEV?

Replies from: Blueberry
comment by Blueberry · 2010-06-05T18:29:00.443Z · LW(p) · GW(p)

If your desires are influenced by parasites, then the parasites are part of what makes you you. You may as well ask "If people's desires are influenced by their past experience, what does that do to CEV?" or "If people's desires are influenced by their brain chemistry, what does that do to CEV?"

Replies from: Alexandros
comment by Alexandros · 2010-06-05T19:22:49.911Z · LW(p) · GW(p)

So what if Dr. Evil releases a parasite that rewires humanity's brains in a predetermined manner? Should CEV take that into account or should it aim to become Coherent Extrapolated Disinfected Volition?

Replies from: cupholder, Blueberry
comment by cupholder · 2010-06-05T20:19:39.753Z · LW(p) · GW(p)

What if Dr. Evil publishes a book or makes a movie that rewires humanity's brains in a predetermined manner?

Replies from: Alexandros
comment by Alexandros · 2010-06-05T20:31:15.995Z · LW(p) · GW(p)

Yep, I made a reference to cultural influence here. That's why I suspect CEV should be applied uniformly to the identity-space of all possible humans rather than the subset of humans that happen to exist when it gets applied. In that case defining humanity becomes very, very important.

Of course, perhaps the current formulation of CEV covers the entire identity-space equally and treats the living population as a sample, and I have misunderstood. But if that is the case, Wei Dai's last article is also bunk, and I trust him to have better understanding of all things FAI than myself.

Replies from: cupholder
comment by cupholder · 2010-06-05T21:25:11.247Z · LW(p) · GW(p)

Heh - my first instinct is to bite the bullet and apply CEV to existing humans only. I couldn't give a strong argument for that, though; I just can't immediately think of a reason to exclude non-culturally influenced humans while including culturally influenced humans.

Replies from: NancyLebovitz, Alexandros
comment by NancyLebovitz · 2010-06-06T01:15:23.629Z · LW(p) · GW(p)

It's hard to tell what counts as an influence and what doesn't.

It would be interesting to see what would happen if the effects of parasites could be identified and reversed. The results wouldn't necessarily all be good, though.

comment by Alexandros · 2010-06-05T21:53:21.472Z · LW(p) · GW(p)

I am not sure I follow your last sentence. Can you elaborate?

Replies from: cupholder
comment by cupholder · 2010-06-05T22:28:09.409Z · LW(p) · GW(p)

I'll give it a try. A human's mind and preferences might be influenced by cultural things like books and TV, and they might be influenced by non-cultural things like parasites. (And of course a lot of people will be influenced by both.) I can't think of a reason to include the former in CEV and exclude the latter that feels non-arbitrary to me, so I don't feel as if parasitically modified brains warrant different treatment, such as altering CEV to cover the space of all possible humans. My gut evaluates the prospect of parasite-driven brains as just another kind of human brain. (I'm presuming as well that CEV as currently formulated is just meant to cover existing humans, not all possible humans.) That makes me content to apply CEV to existing humans only - I don't feel I have to try to account for brain changes due to culture or parasites or what have you by expanding it to incorporate all of brain space.

comment by Blueberry · 2010-06-05T20:13:05.715Z · LW(p) · GW(p)

You may as well ask: "What if Dr. Evil kills every other living organism? Should CEV take that into account or should it aim to become Coherent Extrapolated Resurrected Volition?"

Of course, if someone modifies or kills all the other humans, that will change the result of CEV. Garbage in, garbage out.

comment by byrnema · 2010-06-27T06:47:33.984Z · LW(p) · GW(p)

I'm not certain this comment will be coherent, but I would like to compose it before I lose my train of thought. (I'm in an atypical mental state, so I easily could forget the pieces when feeling more normal.) The writing below sounds rather choppy and emphatic, but I'm actually feeling neutral and unconvinced. I wonder if anyone would be able to 'catch this train' and steer it somewhere else perhaps..?

It's an argument for dualism. Here is some background:


I've always been a monist: believing that everything should be coherent from within this reality. This is the idea that if things don't make sense, it is due to limited knowledge and a limited brain, not an incomplete universe. (Where the universe is the physical material world.)

While composing Less Wrong comments, I've often thought about what an incomplete universe would look like. (Since this is what dualists claim -- what do they mean by something existing differently or beyond material existence?)

I've written before that a simulation (a simulation is a reality S that is a subset of something larger) is just as good as (or the same as) "reality" if the simulation is complete within itself. That is, if an agent within the simulation would find that in principle everything within the simulation is coherent and can be understood from within the simulation. Importantly, there is no hint within the simulation of anything existing outside the simulation. (For example, in the multiple worlds theory, if the many worlds don't interact, each world is its own independent complete reality. The worlds are simulated within a larger entity of all the worlds.)

When physical materialists claim that the physical material world is our entire reality, they are claiming that the physical material world is a reality X, and you cannot deduce anything beyond X from within X. That is, there doesn't exist anything but X, as far as we're concerned. (We can speculate about many worlds, but unless the worlds interact, one world cannot deduce the others.) I've always found this to be obvious, because if you can deduce anything beyond X from within X, then what you've deduced is part of the physical material world (because you deduced it, through interaction) and it's part of X after all.

(end of background material)


It just occurred to me that we do have evidence that our physical material world X is incomplete. So I've stumbled on this argument for dualism. It's actually a very old one, but approached from a different angle. As I said, I stumbled upon it.

It's the problem of existence. Being a monist means believing that if things don't make sense, it is due to limited knowledge and a limited brain. But the problem of existence is such that no amount of knowledge will solve it: there's nothing we could ever learn (or even believe) within X that would solve this problem. Not a complete understanding of the physics of the beginning of the universe. Not even theism!

I cannot understand what the answer to the problem could possibly be, but I think that I can understand that there is no answer possible within X. So to the extent that I am correct that this problem is not in principle solvable within X, X is incomplete.

I could be incorrect about whether this problem is in principle unsolvable in X. But I am relatively certain of it, on the same level as having confidence in logic. If I lose confidence in logic, I have nothing to reason with. So for now, I would find it more reasonable to guess that I'm in a simulation of some kind where this particular conundrum is embedded. X is a subset of a larger reality Y where existence is explained.

Given what we know about X, and the problem of existence, what can we deduce about the larger universe Y where existence is explained? Anything? What about deducing anything from the peculiar fact that X is missing information about existence?

Replies from: ata, Blueberry
comment by ata · 2010-06-27T07:12:32.928Z · LW(p) · GW(p)

I don't see where dualism comes in. Specifically what kind of dualism are you talking about?


Being a monist means believing that if things don't make sense, it is due to limited knowledge and a limited brain. But the problem of existence is such that no amount of knowledge will solve it: there's nothing we could ever learn (or even believe) within X that would solve this problem. ... So to the extent that I am correct that this problem is not in principle solvable within X, X is incomplete.

A problem being unsolvable within some system does not imply that there is some outer system where it can be solved. Take the Halting Problem, for example: no algorithm can decide, for every program, whether that program halts, and this itself is provable. Yet in any given instance there is a right answer (a program halts or it doesn't), even though in some cases we can never know which.
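To make the Halting Problem point concrete, here is the standard diagonal construction sketched in Python. The `always_no` "decider" is a toy stand-in of my own invention; no real decider exists, which is the point — the construction defeats any candidate you plug in:

```python
def diagonal(halts):
    """Given any claimed halting decider, build a program it misjudges."""
    def d():
        if halts(d):
            # the decider said d halts, so loop forever
            while True:
                pass
        # the decider said d loops, so halt immediately
    return d

# Try it on a toy "decider" that always answers "never halts":
always_no = lambda prog: False
d = diagonal(always_no)
d()  # returns immediately, so always_no was wrong about d
```

The same trick works symmetrically: a decider that answered "halts" for `d` would send `d` into the infinite loop instead, again making the decider wrong.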

That you say "I cannot understand what the answer to the problem could possibly be" suggests that it is a wrong question. Ask "Why do I think the universe exists?" instead of "Why does the universe exist?". I have my tentatively preferred answer to that, but maybe you will come up with something interesting.

Replies from: Blueberry, byrnema, byrnema, wedrifid
comment by Blueberry · 2010-06-27T11:08:39.542Z · LW(p) · GW(p)

Ask "Why do I think the universe exists?" instead of "Why does the universe exist?". I have my tentatively preferred answer to that

What is it?

comment by byrnema · 2010-06-27T22:00:44.184Z · LW(p) · GW(p)

A problem being unsolvable within some system does not imply that there is some outer system where it can be solved.

Agreed, I was imprecise before. It is not generally 'a problem' if something is unknown. In the case of the halting problem, it's OK if the algorithm doesn't know when it is going to halt. (This doesn't make it incomplete.) However, it is a problem if X doesn't know how X was created (this makes X incomplete.)

The difference is that an algorithm can be implemented, fully aware of how it is implemented and knowing every line of its own code, without knowing where it is going to halt. Where it's going to halt isn't squirreled away in some other domain to be read at the right moment; the rules for halting are known to the algorithm, it just doesn't know when those rules will be satisfied.

In contrast, X could not have created itself without any source code to do so. The analogous situation would be an algorithm that has halted but doesn't know why it halted. If it cannot know through self-inspection why it halted, then it is incomplete: it must deduce that something outside itself caused it to halt.

comment by byrnema · 2010-06-27T18:10:41.873Z · LW(p) · GW(p)

I agree that when a question doesn't have any possibility of an answer, it's probably a wrong question. But in this case, I don't see how it could be a wrong question. It seems like a perfectly reasonable question that we've gotten habituated to not having an answer to. It's evidence -- if we were looking for evidence -- that X is incomplete and we are in a simulation.

We set a lot of store by the convenient fact that our reality is causal. So why can't we ask what caused reality?

I have my tentatively preferred answer to that, but maybe you will come up with something interesting.

No, I don't come up with anything. I feel like anything that a person could possibly come up with would be philosophy (a non-scientific answer outside X). But please do share your answer (even if it is philosophy, as I expect).

(By dualism, I mean that there are aspects of reality we interact with beyond science, so that physical materialism or scientism, etc., would be incomplete epistemologies.)

Replies from: ata
comment by ata · 2010-06-28T04:40:10.301Z · LW(p) · GW(p)

No, I don't come up with anything. I feel like anything that a person could possibly come up with would be philosophy (a non-scientific answer outside X). But please do share your answer (even if it is philosophy, as I expect).

Here's where I stated it most recently, and I wrote an earlier post getting at the same sort of thing (where I see you posted a few comments), but at this point I've decided to abstain from actually advocating it until I have a better handle on some of the currently-unanswered questions raised by it. At the same time, I do feel like this line of reasoning (the conclusion I like to sum up as "Existence is what mathematical possibility feels like from the inside") is a step in the right direction. I do realize now that it is not as complete a solution as I originally thought — it makes me feel less confused about existence, but newly confused about other things — but I do still have the sense that the ultimately correct explanation of existence will not specially privilege this reality over others, and that our mental algorithms regarding "existence" are leading us astray. That seems to be the only state of affairs that does not compel us to believe in an infinite regress of causality, which doesn't really seem to explain anything, if it even makes logical sense. In any case, although I definitely have to concede that this problem is not solved, I am not convinced that it is not solvable. Metaphysical cosmology has been one of the most difficult areas of philosophy to turn into science or math, but it may yet fall.

(By dualism, I mean that there are aspects of reality we interact with beyond science, so that physical materialism or scientism, etc., would be incomplete epistemologies.)

Alright, that's what threw me off. I think "dualism" is usually used to refer specifically to theories that postulate ontologically-basic mental substances or properties separate from normal physical interactions; not that "there are aspects of reality we interact with beyond science", but that our consciousness or minds are made of something beyond science. Your reasoning does not imply the latter, correct?

Replies from: byrnema
comment by byrnema · 2010-06-28T16:54:12.284Z · LW(p) · GW(p)

Oh, that was you. I think the Ultimate Ensemble idea is really appealing as an explanation of what existence is. (The way possibility feels from the inside, as you wrote.)

comment by wedrifid · 2010-06-27T12:02:22.530Z · LW(p) · GW(p)

Ask "Why do I think the universe exists?" instead of "Why does the universe exist?"

My answer to those questions should be the same. The process of answering either question should bring the two into line even if they were previously cached somewhat differently.

comment by Blueberry · 2010-06-27T11:08:12.245Z · LW(p) · GW(p)

By "problem of existence" you mean why we exist and how we came to exist? Why do you think that can't be answered within our world? And what do you think a world would look like if you could solve the problem in it?

Replies from: byrnema
comment by byrnema · 2010-06-27T17:55:59.229Z · LW(p) · GW(p)

By "problem of existence" you mean why we exist and how we came to exist?

Yes. Why and how anything exists, and what existence is.

Why do you think that can't be answered within our world?

The reason that I think this problem can't be answered within our world is that the lack of an answer doesn't seem to be a matter of lack of information. It's unique in that although it seems to be a reasonable question, there's no possibility of an answer, not even a false one.

It's a reasonable question because X is a causal reality, so it is reasonable to ask what caused X. There's no possibility of an answer to the question because causality is an arrow that always requires a point of departure. If you say the universe was created by a spark, and the rest followed by mathematics and logical necessity, still, what created that spark?

Religions have creation stories, but they explain the creation of X by the creation of X outside X. So creation stories don't resolve the conundrum of creation, they just move creation to someplace outside experience, where we cannot expect to understand anything. This may represent a universal insight that the existence of X cannot be explained within X.

And what do you think a world would look like if you could solve the problem in it?

This is analogous to being in flatland and wondering about edges. I suppose the main mysterious thing about the larger universe Y would be acausality. Here within X, it seems to be a rule, if not a logical principle, that everything is determined by something else. If something were to happen spontaneously, how did it decide to? What is the rule or pattern for its spontaneous appearance? These are all reasonable questions within X. Somehow Y gets around them.

Replies from: Blueberry
comment by Blueberry · 2010-06-28T08:31:18.029Z · LW(p) · GW(p)

There's no possibility of an answer to the question because causality is an arrow that always requires a point of departure.

What do you think of the following answer? There is some evidence that backward time travel may be possible under some circumstances in a way that is compatible with general relativity. So suppose, many years in the future, a team of physicists and engineers creates a wormhole in the universe and sends something back to the time of the Big Bang, causing it and creating our universe. That way, it's all self-contained.

Replies from: byrnema
comment by byrnema · 2010-06-28T16:42:51.049Z · LW(p) · GW(p)

Self-contained is good, though it doesn't resolve the existence problem. (What is the appropriate cliché there ... you can't pull yourself out of quicksand by pulling on your boots?)

Backward time travel itself opens up a number of wonderful possibilities, including universe self-reflection and the possibility of a post-hoc framework of objective value.

Replies from: wedrifid
comment by wedrifid · 2010-06-28T17:24:45.800Z · LW(p) · GW(p)

Backward time travel itself opens up a number of wonderful possibilities, including universe self-reflection and the possibility of a post-hoc framework of objective value.

It also makes encryption more difficult!

comment by Ruddiger · 2010-06-06T02:55:28.930Z · LW(p) · GW(p)

In Harry Potter and the Methods of Rationality, Quirrell talks about a list of the thirty-seven things he would never do as a Dark Lord.

Eliezer, do you have a full list of 37 things you would never do as a Dark Lord and what's on it?

  1. I will not go around provoking strong, vicious enemies.
  2. Don't Brag
  3. ?
Replies from: Richard_Kennaway, JoshuaZ, RomanDavis
comment by Richard_Kennaway · 2010-06-07T09:28:31.204Z · LW(p) · GW(p)

All of the replies to this should be in the thread for discussing HP&tMoR.

comment by JoshuaZ · 2010-06-06T03:27:35.400Z · LW(p) · GW(p)

This is a reference to the Evil Overlord List. That's why Harry starts snickering. Indeed, it is almost implied that Voldemort wrote the actual Evil Overlord List. For the most common version of the actual list, see Peter's Evil Overlord List. Having such a list for Voldemort seems to be at least partially just rule of funny.

Replies from: MBlume, RomanDavis
comment by MBlume · 2010-06-06T16:30:50.225Z · LW(p) · GW(p)

Did the Evil Overlord List exist publicly in 1991? I was actually a bit confused by Harry's laughter here. Eliezer seems to be working pretty hard to keep things actually in 1991 (truth and beauty, the Journal of Irreproducible Results, etc.)

Replies from: JoshuaZ, Oscar_Cunningham
comment by JoshuaZ · 2010-06-06T16:59:46.847Z · LW(p) · GW(p)

That's a good point. I'm pretty sure the Evil Overlord List didn't exist that far back, at least not publicly. It seems that for references to other fictional or nerd-culture elements he's willing to monkey around with time. Thus, for example, there was a Professor Summers for Defense Against the Dark Arts, which wouldn't fit with the standard chronology for Buffy at all.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-06T17:22:03.133Z · LW(p) · GW(p)

Checking Wikipedia, it looks possible but not likely that Harry could have seen the list in 1991.

Replies from: Blueberry
comment by Blueberry · 2010-06-06T18:11:59.597Z · LW(p) · GW(p)

Well, he and his father are described as being huge science fiction fans, so it's not that unlikely that they heard about the list at conventions, or had someone show them an early version of the list printed from email discussions, even if they didn't have Internet access back then.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-06T18:43:34.914Z · LW(p) · GW(p)

I'm pretty sure they did have internet access back then. It was more available through universities than it was to the general public.

Replies from: Blueberry
comment by Blueberry · 2010-06-07T00:49:24.868Z · LW(p) · GW(p)

I meant even if Harry's parents didn't have access back then, someone could still have printed out the list and showed it to them.

Replies from: RomanDavis
comment by RomanDavis · 2010-06-07T08:46:02.362Z · LW(p) · GW(p)

That doesn't sound very rational. The simplest answer seems to be, "Eliezer thought it would be funny" and he would have included the Evil Overlord List in the fanfic even if the Evil Overlord he was talking about was Caligula.

Replies from: Blueberry
comment by Blueberry · 2010-06-09T19:53:09.423Z · LW(p) · GW(p)

Of course it was included because Eliezer thought it would be funny. But I don't see what's so irrational about Harry reading the printed copy of the list.

Replies from: RomanDavis, JoshuaZ
comment by RomanDavis · 2010-06-09T23:22:03.560Z · LW(p) · GW(p)

Yes, but that's not the same as saying Eliezer actually went and looked up the earliest conceivable date to give Harry a reasonable chance of reading the list, or that he could pass the joke up even if he did.

comment by JoshuaZ · 2010-06-09T20:06:03.867Z · LW(p) · GW(p)

Well, would Harry have started laughing if he had just seen such a list before? I'm not sure, but the impression I got was that Harry was laughing because someone had made a list identical in form to a well-known geek list. If he had just happened to have seen such a list before, would it be as funny? Moreover, would that be what the reader would have expected to understand from the text?

Replies from: RobinZ
comment by RobinZ · 2010-06-09T20:33:24.033Z · LW(p) · GW(p)

Maybe 'Quirrell' posted his version to FidoNet.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-09T20:37:12.031Z · LW(p) · GW(p)

Would not then Harry have noticed that Quirrell's list overlapped with the one he had seen?

Replies from: RobinZ
comment by RobinZ · 2010-06-09T20:38:08.220Z · LW(p) · GW(p)

Harry did correctly guess Item #2...

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-09T20:45:37.052Z · LW(p) · GW(p)

Good point. That makes it much more plausible. Although given Harry's personality I'd then expect him to test by trying to guess the third and fourth.

comment by Oscar_Cunningham · 2010-06-06T16:51:41.888Z · LW(p) · GW(p)

Good call, although the fic doesn't explicitly mention the evil overlord list.

comment by RomanDavis · 2010-06-06T03:35:54.224Z · LW(p) · GW(p)

The reason I think it might actually be plot relevant is that most people can't resist making a list that is much longer than 37 rules long. Plus most of the rules are just lampshades for tropes that show up again and again in fiction with evil overlords. They rarely are such basic, practical advice as "stop bragging so much."

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-06T03:52:42.339Z · LW(p) · GW(p)

Ah. I'm pretty sure it isn't a real list because of the number 37. 37 is one of the most common numbers for people to pick when they want a small "random" number. Humans in general are very bad at random number generation. More specifically, they are more likely to pick an odd number, and given a range of the form 1 to n, they are most likely to pick a number around 3n/4. The really clear examples are 1 to 4 (around 40% pick 3), 1 to 10 (I don't remember the exact figure, but I think around 30% pick 7), and 1 to 50, where a very large percentage will pick 37. The upshot is that if you ever see an incomplete list claiming to have 37 items, you should assign a high probability that the rest of the list doesn't exist.

Replies from: Eliezer_Yudkowsky, Oscar_Cunningham
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-06-06T13:06:48.661Z · LW(p) · GW(p)

Ouch. I am burned.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-06T20:43:24.098Z · LW(p) · GW(p)

Well, that's ok. Because I just wrote a review of Chapter 23 criticizing Harry's rush to conclude that magic is a single-allele Mendelian trait and then read your chapter notes where you say the same thing. That should make us even.

comment by Oscar_Cunningham · 2010-06-06T13:18:02.393Z · LW(p) · GW(p)

It just occurred to me that the odd/even bias applies only because we work in base ten. Humans working in a prime base (like base 11) would be much less biased (in this respect).

Replies from: JoshuaZ, JoshuaZ
comment by JoshuaZ · 2010-06-06T17:16:34.164Z · LW(p) · GW(p)

Well, that seems plausible, although what is going on there is being divisible by 2, not being prime. If your general hypothesis is correct, then if we used a base 9 system, numbers divisible by 3 might seem off. However, I'm not aware of any bias against numbers divisible by 5. And there's some evidence that suggests that parity is ingrained in human thinking (children can much more easily grasp the notion of whether a number is even or odd, and can do basic arithmetic with even/oddness much faster than with higher moduli).

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2010-06-06T17:36:25.634Z · LW(p) · GW(p)

I searched for "human random number" in Google and three of the results were polls on internet fora. Polls A & C asked for numbers in the range 1 to 10; poll B was in the range 1 to 20. C had the best participation. (By coincidence, I had participated in poll B.)

I screwed up my experimental design by not thinking of a test before I looked at the results, so if anyone else wants to judge these they should think up a measure of whether certain numbers are preferred before they follow the links.
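One way to fix a measure before looking at results is a chi-square goodness-of-fit test against a uniform distribution, plus the fraction of odd picks. This sketch uses invented tallies purely for illustration, not the actual poll counts:

```python
# Hypothetical tallies for "pick a number from 1 to 10" (invented for
# illustration; substitute the real poll counts before judging anything).
tallies = {1: 3, 2: 5, 3: 12, 4: 6, 5: 8, 6: 7, 7: 30, 8: 9, 9: 13, 10: 7}

n = sum(tallies.values())
expected = n / len(tallies)  # uniform expectation per number

# Chi-square goodness-of-fit statistic against a uniform distribution
chi2 = sum((obs - expected) ** 2 / expected for obs in tallies.values())

# Critical value for 9 degrees of freedom at p = 0.05 is about 16.92
biased = chi2 > 16.92

# Secondary pre-registered measure: fraction of odd picks (uniform predicts 0.5)
odd_fraction = sum(count for v, count in tallies.items() if v % 2 == 1) / n

print(chi2, biased, odd_fraction)
```

The point of writing this down first is that the statistic and threshold are committed to before the data are seen, so the test can't be tuned to the results.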

A B C

(You have a double post btw)

Replies from: RobinZ
comment by RobinZ · 2010-06-07T12:39:58.632Z · LW(p) · GW(p)

JoshuaZ's statement implies a peak near 15 for B and outright states 30% of responses to A and C near 7. I would guess that 13 and 17 would be higher than 15 for B and that 7 will still be prominent, and that odd numbers (and, specifically, primes) will be disproportionately represented.

I will not edit this comment after posting.

Replies from: Blueberry
comment by Blueberry · 2010-06-07T17:11:07.928Z · LW(p) · GW(p)

Why primes?

Replies from: RobinZ
comment by RobinZ · 2010-06-07T18:49:27.980Z · LW(p) · GW(p)

My instinct is that numbers with obvious factors (even numbers and multiples of five especially) will appear less random - and in the range from 1 to 20, that's all the composites.

comment by JoshuaZ · 2010-06-06T17:05:22.746Z · LW(p) · GW(p)

Well, that seems plausible, although what is going on there is being divisible by 2, not being prime. If your general hypothesis is correct, then if we used a base 9 system, numbers divisible by 3 might seem off. However, I'm not aware of any bias against numbers divisible by 5. And there's some evidence that suggests that parity is ingrained in human thinking (children can much more easily grasp the notion of whether a number is even or odd, and can do basic arithmetic with even/oddness much faster than with higher moduli).

comment by RomanDavis · 2010-06-06T03:08:34.462Z · LW(p) · GW(p)

I have a feeling they are ammunition in Chekhov's Gun, and therefore any attempts to get more data will lead to spoilers.

comment by DanArmak · 2010-06-05T19:53:39.926Z · LW(p) · GW(p)

What does 'consciousness' mean?

I'm having an email conversation with a friend about Nick Bostrom's simulation argument and we're now trying to figure out what the word "consciousness" means in the first place.

People here use the C-word a lot, so it must mean something important. Unfortunately I'm not convinced it means the same thing for all of us. What does the theory that "X is conscious" predict? If we encounter an alien, what would knowing that it was "conscious" or "not conscious" tell us? How about if we encountered an android that looked and behaved identically to a human, but inside its head had a very different physical implementation? What would saying it was "conscious" or "not conscious" mean?

And, what does this have to do with my personal subjective experience? It's the foundation (or medium) of everything I know or believe; but most definitions of what it is tend to be dualism-like in that, once again, saying someone else has or doesn't have subjective experience tells us nothing about the physical world.

Help appreciated!

Replies from: Richard_Kennaway, RomanDavis
comment by Richard_Kennaway · 2010-06-07T10:23:35.154Z · LW(p) · GW(p)

What I mean by "consciousness" is my sensation of my own presence. Googling for definitions of "conscious" and "consciousness" gives mostly similar forms of words, so that concept would appear to be what is generally understood by the words.

Do philosophers have some other specific generally understood convention of exactly what they mean by these words?

Replies from: DanArmak
comment by DanArmak · 2010-06-07T14:36:38.459Z · LW(p) · GW(p)

What exactly do you mean by 'sensation'? Does it have to do with "subjective experience" and "qualia", or just the bare fact that you're modeling yourself as part of the world, like RomanDavis and Blueberry's definitions?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-06-07T15:21:46.083Z · LW(p) · GW(p)

By "sensation" I mean the subjective experience.

If you ask me what I mean by "subjective" and "experience", well, you could follow such a train of questions indefinitely and eventually I would have no answer. But what would that prove? You're not asking for a theory of how consciousness works, but a description of the thing that such a theory would be a theory of.

Ask someone five centuries ago what they mean by "water" and all they'll be able to say is something like "the wet stuff that flows in rivers and falls from the sky". And you can ask them what they mean by "rivers" and "sky", but to what end? All you're likely to get if you press the matter is some bad science about the four elements.

"Consciousness" is in a similar state. I have an experience I label with that word, but I can't tell you how that experience happens.

Replies from: DanArmak
comment by DanArmak · 2010-06-07T15:45:21.029Z · LW(p) · GW(p)

That's great - I use the word in the same way. As far as I can tell, some other people don't - see the comments by RomanDavis and Blueberry that I linked to. This confusion over the meaning of the word is what I wanted to highlight.

The way that some others use the word (to mean "an agent that models itself" or "an agent that perceives itself"), either they have successfully dissolved the question of what subjective experience is, or I don't understand them correctly, or indeed different people use the word to mean different things.

And the reason I started out talking about that is that I've seen this cause confusion both on LW and elsewhere.

comment by RomanDavis · 2010-06-05T20:10:33.819Z · LW(p) · GW(p)

There are a lot of hypotheses floating around.

Mine is:

We have awareness. That is, we observe things in the territory with our senses, and include them in our map of the territory. The phenomenon we observe as consciousness is just our ability to include ourselves (our own minds, and some of their inner sensations) in the territory.

Some people think there are things you can only know if you experience them yourself. In theory, you could run a decent simulation of what it's like to be a bat, but you would still have memories of being human, and therefore awareness of bat territory wouldn't be enough.

My solution: implant memories, including bat memories of not having human memories, into yourself. In theory, this should work.

Replies from: DanArmak
comment by DanArmak · 2010-06-05T20:41:30.866Z · LW(p) · GW(p)

There are a lot of hypotheses floating around.

I hope you don't mean you're hypothesizing what the word "consciousness" means; rather, your hypotheses are alternate predictions about physical unknowns or about the future. Which is it?

I'm asking what the definition, the meaning, of the word consciousness is. Hypothesizing what a word means feels like the wrong way to do things. Well, unless we're hypothesizing what other people mean when they say "consciousness". But if we're using the word here at LW we shouldn't need to hypothesize, we can all just tell one another what we mean...

The phenomenon we observe as consciousness is just our ability to include ourselves (our own minds, and some of it's inner sensations) in the territory.

Under that definition, any agent that models the world and includes its own behavior in the model (and any good general model will do that) is called conscious. (I would call that self-modeling or self-aware.)

So any moderately intelligent, effective agent - like my hypothetical aliens and androids - would be called conscious.

That's a fine definition, but if everyone thought that, there would be no place for arguments about whether it's possible for zombies (let alone p-zombies) to exist. It doesn't seem to me that people see consciousness as meaning merely self-modeling.

Replies from: RomanDavis
comment by RomanDavis · 2010-06-05T20:45:33.715Z · LW(p) · GW(p)

That's a fine definition, but if everyone thought that, there would be no place for arguments about whether it's possible for zombies (let alone p-zombies) to exist. It doesn't seem to me that people see consciousness as meaning merely self-modeling.

I think the consensus here is that the idea of p-zombies is silly.

Replies from: DanArmak
comment by DanArmak · 2010-06-05T20:52:33.606Z · LW(p) · GW(p)

Certainly. But is the idea of ordinary zombies also silly? That's what your definition implies.

ETA: not that I'm against that conclusion. It would make things so much simpler :-) I just have the experience that many people mean something else by "consciousness", something that would allow for zombies.

Replies from: RomanDavis
comment by RomanDavis · 2010-06-05T21:15:29.769Z · LW(p) · GW(p)

What's the difference?

Replies from: DanArmak, Vladimir_Nesov
comment by DanArmak · 2010-06-05T21:28:23.646Z · LW(p) · GW(p)

If you define "consciousness" in a way that allows for unconscious but intelligent, even human-equivalent agents, then those are called zombies. Aliens or AIs might well turn out to be zombies. Peter Watts's vampires from Blindsight are zombies.

ETA: a p-zombie is physically identical to a conscious human, but is still unconscious. (And we agree that makes no sense). A zombie is physically different from a conscious human, and as a result is unconscious - but is capable of all the behavior that humans are capable of.

(My original comment was wrong (thanks Blueberry!) and said: The difference between a zombie and a p-zombie is that p-zombies claim to be conscious, while zombies neither claim nor believe to be conscious.)

Replies from: Blueberry, Jack
comment by Blueberry · 2010-06-05T21:45:28.209Z · LW(p) · GW(p)

This is very different from my understanding of the definition of those terms, which is that p-zombies are physically identical to a conscious human, and a zombie is an unconscious human-equivalent with a physical, neurological difference.

I don't see any reason why an unconscious human-equivalent couldn't erroneously claim to be conscious, just as an unconscious computer could print out the sentence "I am conscious."

Replies from: DanArmak
comment by DanArmak · 2010-06-05T21:54:07.621Z · LW(p) · GW(p)

You're right. It's what I meant, but I see that my explanation came out wrong. I'll fix it.

I don't see any reason why an unconscious human-equivalent couldn't erroneously claim to be conscious

That's true. But the fact of the matter would be that such a zombie would be objectively wrong in its claim to be conscious.

My question is: what is being conscious defined to mean? If it's a property that is objectively present or not present and that you can be wrong about in this way, then it must be something more than a "pure subjective" experience or quale.

Replies from: torekp, Blueberry
comment by torekp · 2010-06-06T00:02:13.972Z · LW(p) · GW(p)

If a subjective experience is the same event, differently described, as a neural process, you can be wrong about whether you are having it. You can also be wrong about whether you and another being share the same or similar quale, especially if you infer such similarity solely from behavioral evidence.

Even aside from physical-side-of-the-same-coin considerations, a person can be mistaken about subjective experience. A tries the new soup at the restaurant and says "it tastes just like chicken". B says, "No, it tastes like turkey." A accepts the correction (and not just that it tastes like turkey to B). The plausibility of this scenario shows that we can be mistaken about qualia. Now, admittedly, that's a long way from being mistaken about whether one has qualia at all - but to rule that possibility in or out, we have to make some verbal choices clarifying what "qualia" will mean.

Roughly speaking, I see at least two alternatives for understanding "qualia". One would be to trot out a laundry list of human subjective feels: color sensations, pain, pleasure, tastes, etc., and then say "this kind of thing". That leaves the possibility of zombies wide open, since intelligent behavior is no guarantee of a particular familiar mental mechanism causing that behavior. (Compare: I see a car driving down the road, doing all the things an internal combustion engine-powered vehicle can do. That's no guarantee that internal combustion occurs within it.)

A second approach would be to define "qualia" by its role in the cognitive economy. Very roughly speaking, qualia are properties highly accessible to "executive function", which properties go beyond (are individuated more finely than by) their roles in representing, for the cognizer, the objective world. On this understanding of "qualia" zombies might be impossible - I'm not sure.

comment by Blueberry · 2010-06-05T22:14:15.533Z · LW(p) · GW(p)

But the fact of the matter would be that such a zombie would be objectively wrong in its claim to be conscious.

Well, the claim would be objectively incorrect; I'm not sure it's meaningful to say that the zombie would be wrong.

My question is: what is being conscious defined to mean? If it's a property that is objectively present or not present and that you can be wrong about in this way, then it must be something more than a "pure subjective" experience or quale.

As others have commented, it's having the capacity to model oneself and one's perceptions of the world. If p-zombies are impossible, which they are, there are no "pure subjective" experiences: any entity's subjective experience corresponds to some objective feature of its brain or programming.

Replies from: DanArmak
comment by DanArmak · 2010-06-06T00:25:50.248Z · LW(p) · GW(p)

it's having the capacity to model oneself and one's perceptions of the world.

That's not the definition that seems to be used in many of the discussions about consciousness. For instance, the term "Hard Problem of Consciousness" isn't talking about self-modeling.

Let's take the discussion about p-zombies as an example. P-zombies are physically identical to normal humans, so they (that is, their brains) clearly model themselves and their own perceptions of the world. Then the claim that they are unconscious is in direct contradiction to the definition of consciousness.

If proving that p-zombies are logically impossible was as simple as pointing this out, the whole debate wouldn't exist.

Beyond that example, I've gone through all LW posts that have "conscious" in their title:

I'm saying that "But you haven't explained consciousness!" doesn't reasonably seem like the responsibility of physicists, or an objection to a theory of fundamental physics.

And then he says:

however consciousness turns out to work, getting infected with virus X97 eventually causes your experience of dripping green slime.

I read that as using 'consciousness' to mean experience in the sense of subjective qualia.

If p-zombies are impossible, which they are, there are no "pure subjective" experiences: any entity's subjective experience corresponds to some objective feature of its brain or programming.

The reason "subjective experience" is called subjective is that it's presumed not to be part of the objective, material world. That definition is dated now, of course.

I don't want to turn this thread into a discussion of what consciousness is, or what subjective experience is. That's a discussion I'd be very interested in, but it should be separate. My original question was, what do people mean by "consciousness"? If I understood you correctly, that to you it simply means self-modeling systems, then I was right to think different people use the C-word to mean quite different things, even just here on LW.

Replies from: RomanDavis
comment by RomanDavis · 2010-06-06T01:22:12.676Z · LW(p) · GW(p)

Let's say you're having a subjective experience. Say, being stung by a wasp. How do you know? Right. You have to be aware of yourself, and your skin, and have pain receptors, and blah blah blah.

But if you couldn't feel the pain, let's say because you were numb, you would still feel conscious. And if you were infected with a virus that made a wasp sting feel sugary and purple, rather than itchy and painful, you would also still be conscious.

It's only when you don't have a model of yourself that consciousness becomes impossible.

Replies from: DanArmak
comment by DanArmak · 2010-06-07T14:40:32.385Z · LW(p) · GW(p)

It's only when you don't have a model of yourself that consciousness becomes impossible.

That doesn't mean they're the same thing. Unless you define them to mean the same thing. But as I described above, not everyone does that. There is no "Hard Problem of Modeling Yourself".

comment by Jack · 2010-06-05T22:10:55.640Z · LW(p) · GW(p)

ETA: a p-zombie is physically identical to a conscious human, but is still unconscious. (And we agree that makes no sense). A zombie is physically different from a conscious human, and as a result is unconscious - but is capable of all the behavior that humans are capable of.

Where the heck is this terminology coming from? As I learned it the 'philosophical' in "philosophical zombie" is just there to distinguish it from Romero-imagined brain-eating undead.

Replies from: Blueberry
comment by Blueberry · 2010-06-05T22:19:22.575Z · LW(p) · GW(p)

Yes, but we need some other term for "unconscious human-like entity". I read one paper that used the terms "p-zombie" and "b-zombie", where the p stood for "physical" as well as "philosophical" and the b stood for "behavioral".

Replies from: Jack
comment by Jack · 2010-06-05T22:29:49.524Z · LW(p) · GW(p)

I'd rather call the first an n-zombie (meaning neurologically identical to a human). And, yeah, let's use b-zombie instead of zombie, as all of these are varieties of philosophical zombie.

(But yes they're just words. Thanks for clarifying.)

comment by Vladimir_Nesov · 2010-06-05T21:25:58.664Z · LW(p) · GW(p)

P-zombies can write philosophical papers on p-zombies.

Replies from: RomanDavis
comment by RomanDavis · 2010-06-05T21:28:13.419Z · LW(p) · GW(p)

Oh, p-zombies are just the reductio ad absurdum version? Yeah, I don't believe in zombies.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-05T21:31:59.546Z · LW(p) · GW(p)

P-zombies aren't just a reductio ad absurdum, although most of LW does consider them to be. David Chalmers, who is a very respected philosopher, takes the idea quite seriously, as do a surprisingly large number of other philosophers.

Replies from: RomanDavis
comment by RomanDavis · 2010-06-05T21:35:50.060Z · LW(p) · GW(p)

Please explain to me how it is not.

You can't just say, "This smart guy takes this very seriously." Aristotle took a lot of things very seriously that turned out to be nonsense.

Replies from: RichardChappell, JoshuaZ
comment by RichardChappell · 2010-06-05T21:47:46.643Z · LW(p) · GW(p)

'Zombie Review' provides some background here...

comment by JoshuaZ · 2010-06-05T21:44:28.733Z · LW(p) · GW(p)

My point is that it isn't generally regarded as a reductio. Indeed, it was originally constructed as an argument against physicalism. I see it as a reductio too, or, even more to the point, as an indication of how far into a corner dualism has been pushed by science. The really scary thing is that some philosophers seem to think that p-zombies are a slam-dunk argument for dualism.

Replies from: Jack, RomanDavis
comment by Jack · 2010-06-05T22:00:37.865Z · LW(p) · GW(p)

The really scary thing is that some philosophers seem to think that P-zombies are a slam-dunk argument for dualism.

Who?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-05T22:12:14.421Z · LW(p) · GW(p)

Nagel and Chalmers both seem to think it is a strong argument. Kirk used to think it was, but has since gone about Pi radians on that. My impression is that Block also sees it as a strong argument, but I haven't actually read anything by Block; that's the impression I get from seeing Block mentioned in passing.

Replies from: RichardChappell
comment by RichardChappell · 2010-06-05T22:33:32.090Z · LW(p) · GW(p)

Thinking it's a strong argument is, of course, still a long way from thinking it's a "slam dunk" (nobody that I'm aware of thinks that).

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-05T22:38:59.995Z · LW(p) · GW(p)

Yeah, that wording was probably too strong, although the impression I get certainly is that Kirk was convinced it was a slam dunk for quite some time. Kirk's book "Zombies and Consciousness" (which I've only read parts of) seems to describe him as having once considered it pretty close to a slam dunk.

comment by RomanDavis · 2010-06-05T21:48:28.851Z · LW(p) · GW(p)

Okay, I agree.

It's just really easy to take the explicit, "this guy takes it seriously" and make the implicit connection, "and this is totally not a silly idea at all."

comment by Liron · 2010-06-04T07:16:52.675Z · LW(p) · GW(p)

What's the deal with female nymphomaniacs? Their existence seems a priori unlikely.

Replies from: RomanDavis, Vladimir_M, gwern, Richard_Kennaway, LucasSloan
comment by RomanDavis · 2010-06-05T04:07:02.627Z · LW(p) · GW(p)

Then your priors are wrong. Adjust accordingly.

Replies from: Liron
comment by Liron · 2010-06-05T07:43:13.042Z · LW(p) · GW(p)

"What's the deal with" means "What model would have generated a higher prior probability for". Noticing your confusion isn't the entire solution.

Replies from: Mitchell_Porter, RomanDavis
comment by Mitchell_Porter · 2010-06-05T08:11:26.164Z · LW(p) · GW(p)

If the existing model is sexual dimorphism, with high sexual desire a male trait, you could simply suppose that it's a "leaky" dimorphism, in which the sex-linked traits nonetheless show up in the other sex with some frequency. In humans this should especially be possible with male traits which depend not on the Y chromosome, but rather on having one X chromosome rather than two. That means that there is only one copy, rather than two, of the relevant gene, which means trait variance can be greater - in a woman, an unusual allele on one X chromosome may be diluted by a normal allele on the other X, whereas a man with an unusual X allele has no such counterbalance. But it would still be easy enough for a woman to end up with an unusual allele on both her Xs.

Also, regardless of the specific genetic mechanism, human dimorphism is just not very extreme or absolute (compared to many other species), and forms intermediate between stereotypical male and female extremes are quite common.

comment by RomanDavis · 2010-06-05T20:22:43.388Z · LW(p) · GW(p)

I thought it was pretty clear. Sexual dimorphism doesn't operate the way you think it does. Women with high sex drives aren't rare at all.

I have heard that, for most men and most women, the time of highest sex drive happens at very different ages (much younger for men than for women). This might account for the entire difference, especially if you're getting most of your information from the culture at large. As TVTropes will tell you, Most Writers Are Male.

comment by Vladimir_M · 2010-06-05T02:49:37.382Z · LW(p) · GW(p)

Their existence seems a priori unlikely.

Why?

comment by gwern · 2010-06-04T18:10:45.282Z · LW(p) · GW(p)

And they are accordingly rare, are they not?

Replies from: Blueberry, Liron
comment by Blueberry · 2010-06-05T02:36:02.508Z · LW(p) · GW(p)

No, women with a high sex drive are not rare.

comment by Liron · 2010-06-05T01:01:44.783Z · LW(p) · GW(p)

Maybe. I don't know.

comment by Richard_Kennaway · 2010-06-07T09:37:48.340Z · LW(p) · GW(p)

This question reads to me like it's out of the middle of some discussion I didn't hear the beginning of. Why were "nymphomaniacs" on your mind in the first place? What do you mean by the word? I don't think I've heard it in many years, and I associate it with the sexual superstitions of a former age.

comment by LucasSloan · 2010-06-07T02:46:05.379Z · LW(p) · GW(p)

female nymphomaniacs

What does the word "nymphomaniacs" mean? How do you judge someone to be sufficiently obsessed with sex to be a nymphomaniac? I think a lot of your confusion might be coming from your tendency to label people with this word with such negative connotations.

Does the question "what is with women who want to have sex [five times a week*] and will undertake to get it?" resolve any of your confusion? You should expect those women who have more sex to be more salient wrt people talking about them, so they would seem more prominent, even if they were only 2% of the population.

*not sure about this number, just picked one that seemed alright.

Replies from: Alicorn, JoshuaZ, Vladimir_M
comment by Alicorn · 2010-06-07T02:55:47.596Z · LW(p) · GW(p)

Five times a week wouldn't be remotely enough to diagnose. It has to be problematic and clinically significant.

Replies from: LucasSloan
comment by LucasSloan · 2010-06-07T04:42:50.684Z · LW(p) · GW(p)

I think that's kinda my point. I was attempting to point out that he's probably conflating the term "nymphomaniac", with its negative connotations, with "likes to have [vaguely defined 'a lot'] of sex."

Replies from: Blueberry
comment by Blueberry · 2010-06-07T06:52:01.348Z · LW(p) · GW(p)

"Nymphomaniac" hasn't been a clinical diagnosis for a long time. In my experience, the word is now most commonly used colloquially to mean "a woman who likes to have a lot of sex". Whether this has negative connotations depends on your attitude to sex, I suppose.

comment by JoshuaZ · 2010-06-07T03:13:08.559Z · LW(p) · GW(p)

Picking a number for this seems like a really bad idea. For most modern clinical definitions of disorders what matters is whether it interferes with normal daily behavior. Even that is questionable since what constitutes interference is very hard to tell.

Societies have had very different notions of what is acceptable sexuality for both males and females. Until fairly recently, homosexuality was considered a mental disorder in the US. And in the Victorian era, women were routinely diagnosed as nymphomaniacs for showing pretty minimal signs of sexuality.

comment by Vladimir_M · 2010-06-07T03:54:08.545Z · LW(p) · GW(p)

...[five times a week]...
[...]
not sure about this number, just picked one that seemed alright.

This is one of the more bizarre things I've read recently.

comment by Seth_Goldin · 2010-06-02T17:52:19.726Z · LW(p) · GW(p)

In A Technical Explanation of Technical Explanation, Eliezer writes,

You should only assign a calibrated confidence of 98% if you're confident enough that you think you could answer a hundred similar questions, of equal difficulty, one after the other, each independent from the others, and be wrong, on average, about twice. We'll keep track of how often you're right, over time, and if it turns out that when you say "90% sure" you're right about 7 times out of 10, then we'll say you're poorly calibrated.

...

What we mean by "probability" is that if you utter the words "two percent probability" on fifty independent occasions, it better not happen more than once

...

If you say "98% probable" a thousand times, and you are surprised only five times, we still ding you for poor calibration. You're allocating too much probability mass to the possibility that you're wrong. You should say "99.5% probable" to maximize your score. The scoring rule rewards accurate calibration, encouraging neither humility nor arrogance.
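
The scoring idea in the quoted passage can be sketched in a few lines of Python (a purely illustrative toy; the forecaster data is invented to match the "90% sure, right 7 times out of 10" example):

```python
def calibration(forecasts):
    """Group (stated_confidence, was_correct) pairs by confidence level
    and return the observed hit rate for each level."""
    buckets = {}
    for conf, correct in forecasts:
        buckets.setdefault(conf, []).append(correct)
    return {conf: sum(hits) / len(hits) for conf, hits in buckets.items()}

# A forecaster who says "90% sure" ten times but is right only seven
# times shows up as a 0.7 hit rate against a 0.9 stated confidence.
forecasts = [(0.9, True)] * 7 + [(0.9, False)] * 3
print(calibration(forecasts))  # {0.9: 0.7}
```

A well-calibrated forecaster's hit rate matches the stated confidence in every bucket.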

So I have a question. Is this not an endorsement of frequentism? I don't think I understand fully, but isn't counting the instances of the event exactly frequentist methodology? How could this be Bayesian?

Replies from: Morendil, None
comment by Morendil · 2010-06-02T18:21:25.731Z · LW(p) · GW(p)

As I understand it, frequentism requires large numbers of events for its interpretation of probability, whereas the bayesian interpretation allows the convergence of relative frequencies with probabilities but claims that probability is a meaningful concept when applied to unique events, as a "degree of plausibility".

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-03T20:01:22.640Z · LW(p) · GW(p)

Do you (or anyone else reading this) know of any attempts to give a precise non-frequentist interpretation of the exact numerical values of Bayesian probabilities? What I mean is someone trying to give a precise meaning to the claim that the "degree of plausibility" of a hypothesis (or prediction or whatever) is, say, 0.98, which wouldn't boil down to the frequentist observation that relative to some reference class, it would be right 98/100 of the time, as in the above quoted example.

Or to put it in a way that might perhaps be clearer, suppose we're dealing with the claim that the "degree of plausibility" of a hypothesis is 0.2. Not 0.19, or 0.21, or even 0.1999 or 0.2001, but exactly that specific value. Now, I have no intuition whatsoever for what it might mean that the "degree of plausibility" I assign to some proposition is equal to one of these numbers and not any of the other mentioned ones -- except if I can conceive of an experiment or observation (or at least a thought-experiment) that would yield that particular exact number via a frequentist ratio.

I'm not trying to open the whole Bayesian vs. frequentist can of worms at this moment; I'd just like to find out if I've missed any significant references that discuss this particular question.

Replies from: Wei_Dai, Morendil, JoshuaZ
comment by Wei Dai (Wei_Dai) · 2010-06-04T04:53:38.617Z · LW(p) · GW(p)

Have you seen my What Are Probabilities, Anyway? post?

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-04T18:25:58.061Z · LW(p) · GW(p)

Yes, I remember reading that post a while ago when I was still just lurking here. But I forgot about it in the meantime, so thanks for bringing it to my attention again. It's something I'll definitely need to think about more.

comment by Morendil · 2010-06-03T20:57:26.523Z · LW(p) · GW(p)

In the Bayesian interpretation, the numerical value of a probability is derived via considerations such as the principle of indifference - if I know nothing more about proposition A than I know about proposition B, then I hold both equally probable. (So, if all I know about a coin is that it is a biased coin, without knowing how it is biased, I still hold Heads and Tails to be equally probable outcomes of the next coin flip.)

If we do know something more about A or B, then we can apply formulae such as the sum rule or product rule, or Bayes' rule which is derived from them, to obtain a "posterior probability" based on our initial estimation (or "prior probability"). (In the coin example, I would be able to take into account any number of coin flips as evidence, but I would first need to specify through such a prior probability what I take "a biased coin" to mean in terms of probability; whereas a frequentist approach relies only on flipping the coin enough times to reach a given degree of confidence.)

(Note, this is my understanding based on having partially read through precisely one text - Jaynes' Probability Theory - on top of some Web browsing; not an expert's opinion.)
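
The coin example above can be turned into a small Python sketch of a discrete Bayesian update (the three candidate biases and the uniform prior are invented for illustration):

```python
def bayes_update(prior, likelihoods):
    """Multiply each hypothesis's prior by the likelihood of the
    observed data under it, then renormalize so the result sums to 1."""
    posterior = {h: p * likelihoods[h] for h, p in prior.items()}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Principle of indifference: three candidate biases, equally plausible.
prior = {0.3: 1 / 3, 0.5: 1 / 3, 0.7: 1 / 3}

# Observe one Heads: the likelihood of Heads under bias h is just h.
posterior = bayes_update(prior, {h: h for h in prior})
print(posterior)  # the 0.7-bias hypothesis is now the most plausible
```

Each further flip is another call to the same function, which is all the "exact system of inference" amounts to in this toy case.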

comment by JoshuaZ · 2010-06-03T20:04:24.523Z · LW(p) · GW(p)

Do you (or anyone else reading this) know of any attempts to give a precise non-frequentist interpretation of the exact numerical values of Bayesian probabilities?

Yes, you can do this precisely with measure theory, but some will argue that that is nice math but not a philosophically satisfying approach.

Edit: A more concrete approach is to just think about it as what bets you should make about possible outcomes.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-03T20:40:04.102Z · LW(p) · GW(p)

Yes, you can do this precisely with measure theory, but some will argue that that is nice math but not a philosophically satisfying approach.

I'm not sure I understand what exactly you have in mind. I am aware of the role of measure theory in the standard modern formalization of probability theory, and how it provides for a neat treatment of continuous probability distributions. However, what I'm interested in is not the math, but the meaning of the numbers in the real world.

Bayesians often make claims like, say, "I assign the probability of 0.2 to the hypothesis/prediction X." This is a factual claim, which asserts that some quantity is equal to 0.2, not any other number. This means that those making such claims should be able to point at some observable property of the real world related to X that gives rise to this particular number, not some other one. What I'd like to find out is whether there are attempts at non-frequentist responses to this sort of request.

Edit: A more concrete approach is to just think about it as what bets you should make about possible outcomes.

But it seems to me that betting advice is fundamentally frequentist in nature. As far as I can see, the only practical test of whether a betting strategy is good or bad is the expected gain (or loss) it will provide over a large number of bets. [Edit: I should have been more clear here -- I assume that you are not using an incoherent strategy vulnerable to a Dutch book. I had in mind strategies where you respect the axioms of probability, and the only question is which numbers consistent with them you choose.]

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2010-06-03T21:02:22.795Z · LW(p) · GW(p)

Bayesians would say that the probability is (some function of) the expected value of one bet.

Frequentists would say that it is (some function of) the actual value of many bets (as the number of bets goes to infinity).

The whole point of looking at many bets is to make the average value close to the expected value (so that frequentists don't have to think about what "expected" actually means). You never have to say "the expected gain ... over a large number of bets." That would be redundant.

What does "expected" actually mean? It's just the probability you should bet at to avoid the possibility of being Dutch-booked on any single bet.

ETA: When you are being Dutch-booked, you don't get to look at all the offered bets at once and say "hold on a minute, you're trying to trick me". You get given each of the bets one at a time, and you have to bet Bayesianly for each one if you want to avoid any possibility of sure losses.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-04T00:18:20.112Z · LW(p) · GW(p)

I might be mistaken, but I think this still doesn't answer my question. I understand -- or at least I think I do -- how the Dutch book argument can be used to establish the axioms of probability and the entire mathematical theory that follows from them (including the Bayes theorem).

The way I understand it, this argument says that once I've assigned some probability to an event, I must assign all the other probabilities in a way consistent with the probability axioms. For example, if I assign P(A) = 0.3 and P(B) = 0.4, I would be opening myself to a Dutch book if I assigned, say, P(~A) != 0.7 or P(A and B) > 0.3. So far, so good.

However, I still don't see what, if anything, the Dutch book argument tells us about the ultimate meaning of the probability numbers. If I claim that the probability of Elbonia declaring war on Ruritania before next Christmas is 0.3, then to avoid being Dutch-booked, I must maintain that the probability of that event not happening is 0.7, and all the other stuff necessitated by the probability axioms. However, if someone comes to me and claims that the probability is not 0.3, but 0.4 instead, in what way could he argue, under any imaginable circumstances and either before or after the fact, that his figure is correct and mine not? What fact observable in physical reality could he point out and say that it's consistent with one number, but not the other?

I understand that if we both stick to our different probabilities and make bets based on them, we can get Dutch-booked collectively (someone sells him a bet that pays off $100 if the war breaks out for $39, and to me a bet that pays off $100 in the reverse case for $69 -- and wins $8 whatever happens). But this merely tells us that something irrational is going on if we insist (and act) on different probability estimates. It doesn't tell us, as far as I can see, how one number could be correct, and all others incorrect -- unless we start talking about a large reference class of events and frequencies at some point.
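
The arithmetic of that collective Dutch book can be checked directly in Python (numbers taken from the paragraph above):

```python
# He assigns p(war) = 0.4, so a "$100 if war" ticket for $39 looks
# like a good deal to him; I assign p(war) = 0.3, so p(no war) = 0.7,
# and a "$100 if no war" ticket for $69 looks like a good deal to me.
collected = 39 + 69            # the bookie takes in $108 up front
for war in (True, False):
    payout = 100               # exactly one of the two tickets pays out
    print("war" if war else "no war", collected - payout)  # $8 either way
```

The bookie's guaranteed $8 is exactly the gap left by our incoherent joint estimates: my 0.7 for no-war plus his 0.4 for war sums to 1.1 rather than 1.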

Replies from: Morendil
comment by Morendil · 2010-06-04T05:56:24.426Z · LW(p) · GW(p)

There's nothing mysterious about it as far as I can tell, it's "just math".

Give me a six-sided die and I'll compute the probability of it coming up 4 as 1/6. This simple exercise can become more complicated in one of two ways. You can ask me to compute the probability of a more complex event, e.g. "three even numbers in a row". This still has an exact answer.
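
The "three even numbers in a row" figure is indeed exact, as a quick Python check shows (an illustrative sketch, not part of the original comment):

```python
from fractions import Fraction
from itertools import product

p_even = Fraction(3, 6)             # {2, 4, 6} out of six faces
p_three_in_a_row = p_even ** 3
print(p_three_in_a_row)             # 1/8

# Brute-force check over all 6**3 equally likely three-throw sequences.
count = sum(1 for seq in product(range(1, 7), repeat=3)
            if all(x % 2 == 0 for x in seq))
print(Fraction(count, 6 ** 3))      # 1/8
```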

The other complication is if the die is loaded. One way I might find out how that affects its single-throw probabilities is by throwing it a large number of times, but conceivably I can also X-ray the die, find out how its mass is distributed, and deduce from that how the single-throw probabilities differ. (Offhand I'd say that faces closer to the center of mass of the die are more likely to come up, but perhaps the calculation is more interesting than that.)

In the case of Elbonia vs Ruritania, the other guy has some information that you don't, perhaps for instance the transcript of an intercepted conversation between the Elbonian ruler and a nearby power assuring the former of their support against any unwarranted Ruritanian aggression: they think the war is more plausible given this information.

Further, if you agreed with that person in all other respects, i.e. if his derivation of the probability for war given all other relevant information was also 0.3 absent the interception, and you agreed on how verbal information translated into numbers, then you would have no choice but to also accept the final figure of 0.4 conditioning on the interception. Bayesian probability is presented as an exact system of inference (and Jaynes is pretty convincing on this point, I should add).

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-04T18:57:02.291Z · LW(p) · GW(p)

I agree about Jaynes and the exactness of Bayesian inference. (I haven't read his Probability Theory fully, but I should definitely get to it sometime. I did get through the opening chapters, however, and it's indeed mighty convincing.) Yet I honestly don't see how either Jaynes or your comments answer my question in full, though I see no significant disagreement with what you've written. Let me try rephrasing my question once more.

In natural sciences, when you characterize some quantity with a number, this number must make sense in some empirical way, testable in an experiment, or at least with a thought experiment if a real one isn't feasible in practice. Suppose that you've determined somehow that the temperature of a bowl of water is 300K, and someone asks you what exactly this number means in practice -- why 300, and not 290, or 310, or 299, or 301? You can reply by describing (or even performing) various procedures with that bowl of water that will give predictably different outcomes depending on its exact temperature -- and the results of some such procedures with this particular bowl are consistent only with a temperature of 300K plus/minus some small value that can be made extremely tiny with a careful setup, and not any other numbers.

Note that the question posed here is not how we've determined what the temperature of the water is in the first place. Instead, the question is: once we've made the claim that the temperature is some particular number, what practical observation can we make that will show that this particular number is consistent with reality, and others aren't? If an number can't be justified that way, then it is not something science can work with, and there is no reason to consider one value as "correct" and another "incorrect."

So now, when I ask the same question about probability, I'm not asking about the procedures we use to derive these numbers. I'm asking: once we've made the claim that the probability of some event is p, what practical observations can we make that will show that this particular number is consistent with reality, and others aren't -- except by pointing to frequencies of events? I understand how we would reach a probability figure in the Elbonia vs. Ruritania scenario, I agree that Bayesian inference is an exact system, and I see what the possible sources of disagreement could be and how they should be straightened out when asymmetrical information is eliminated. I am not arguing with any of that (at least in the present context). Rather, what I'd like to know is whether the figures ultimately reached make any practical sense in terms of some observable properties of the universe, except for the frequentist ratios predicted by them. (And if the latter is the only answer, this presumably means that any sensible interpretation of probability would have to incorporate significant frequentist elements.)

Replies from: Morendil
comment by Morendil · 2010-06-04T21:04:53.742Z · LW(p) · GW(p)

That question, interesting as it is, is above my pay grade; I'm happy enough when I get the equations to line up the right way. I'll let others tackle it if so inclined.

comment by [deleted] · 2010-06-03T13:06:23.233Z · LW(p) · GW(p)

Morendil's explanation is, as far as I can tell, correct. What's much more interesting is that presenting examples in terms of frequencies seems to be required to engage our normal intuitions about probability. There's at least some research indicating that when questions of estimation and probability are posed in terms of frequencies (i.e. asking 'how many problems do you think you got correct?' instead of 'what is your confidence for this answer?'), many biases disappear completely.

comment by CronoDAS · 2010-06-02T15:45:44.614Z · LW(p) · GW(p)

Here's an interesting video.

Drive: The Surprising Truth About What Motivates Us

Replies from: Antisuji
comment by Antisuji · 2010-06-02T16:58:38.631Z · LW(p) · GW(p)

An engaging video, thanks. The study sounded familiar, so I looked for it... turns out I'd seen the guy's TED talk a while back: http://www.ted.com/talks/dan_pink_on_motivation.html

comment by xamdam · 2010-06-03T13:51:49.546Z · LW(p) · GW(p)

First I'd like to point out a good interview with Ray Kurzweil, which I found more enjoyable than a lot of his monotonous talks. http://www.motherboard.tv/2009/7/14/singularity-of-ray-kurzweil

As a follow-up, I am curious whether anyone has attempted to mathematically model Ray's biggest and most disputed claim, which is the accelerating rate of technology. Most dispute the claim by pointing out that the data points are somewhat arbitrary and invoke data dredging. It would be interesting if the claim rested on a model rather than, essentially, a regression. I imagine a model that would represent the entire human society (including our technology) as an information-processing machine and would argue that the processing capability gets better by X% after each (rather artificial) 'cycle', contributing to the next cycle.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-03T13:59:26.723Z · LW(p) · GW(p)

Note that Kurzweil's responded to the data dredging complaint by taking major lists compiled by other people, combining them and showing that they fit a roughly exponential graph. (I don't have a citation for this unfortunately).

Edit: I'm not aware of anyone making a model of the sort you envision, but it seems to suffer the same problem that Kurzweil has in general, which is a potential overemphasis on information processing ability.

Replies from: xamdam
comment by xamdam · 2010-06-03T14:16:09.971Z · LW(p) · GW(p)

Why is basing this argument on information processing bad?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-03T14:27:34.407Z · LW(p) · GW(p)

Information processing isn't the whole story of what we care about. For example, the amount of energy available to societies and the per capita energy availability both matter. (In fairness, Kurzweil has discussed both of these, albeit not as extensively as information issues.)

Another obvious metric to look at is average lifespan. This is one where one doesn't get an exponential curve. Now, if you assert that most humans will live to at least 50 and so look at lifespan minus 50 in major countries over the last hundred years, then the data starts to look slightly more promising, but Kurzweil's never discussed this as far as I'm aware, because he hasn't discussed lifespan issues much at all, except in the most obvious fashion. You can modify the data in other ways also. One of my preferred metrics looks at the average lifespan of people who survive past age 3 (this helps deal with the fact that we've done a lot more to reduce infant mortality than we have to actually extend lifespan on the upper end). And when you do this, most gains in lifespan go away.

Replies from: xamdam
comment by xamdam · 2010-06-03T14:35:05.364Z · LW(p) · GW(p)

Good points. Still, I feel that basing the crux of the argument on information processing is valid, unless the other concerns you mention interfere with it at some point. Is that what you're saying?

Good observation about infant mortality; there should be an opposite metric of "% of centenarians", which would be a better measure in this context.

Replies from: JoshuaZ, JoshuaZ
comment by JoshuaZ · 2010-06-03T17:21:20.684Z · LW(p) · GW(p)

%Centenarians might not be a good metric given that one will get an increasing fraction of those as birth rates decline. For the US, going by the data here and here, I get a total of 1.4 × 10^-4 for the fraction of the US population that is over 100 in 1990, and a result of 1.7 × 10^-4 in 2000. But I'm not sure how accurate this data is. For example, in the first of the two links they throw out the 1970 census data as giving a clearly too high number. One needs a lot more data points to see if this curve looks exponential (obviously two isn't enough), but the linked paper claims that for the foreseeable future the fraction of the population that is over 100 will increase by 2/3rds each decade. If that is accurate, then that means we are seeing an exponential increase.
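To spell that last step out, here is a toy projection of the paper's stated rate forward from the 2000 estimate (a projection of the claim, not a fit to real census data):

```python
# Project the centenarian fraction forward at the linked paper's claimed rate
# of a 2/3 increase per decade, starting from the 2000 estimate of 1.7e-4.
fraction_2000 = 1.7e-4
factor_per_decade = 1 + 2 / 3  # a 2/3 increase per decade

projection = {2000 + 10 * d: fraction_2000 * factor_per_decade ** d
              for d in range(6)}
# A constant percentage increase per period is, by definition, exponential growth.
```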

Another metric to use might be the age of the oldest person by year of birth worldwide. That data shows a clear increasing trend, but the trend is very weak. Also, one would expect such an increase simply by increasing the general population (Edit: and better record keeping since the list includes only those with good verification), so without a fair bit of statistical crunching, it isn't clear that this data shows anything.

comment by JoshuaZ · 2010-06-03T14:46:00.998Z · LW(p) · GW(p)

Well, they do interfere. For example, lifespan issues help tell us whether we're actually taking advantage of the exponential growth in information processing, or, for that matter, whether taking advantage of it even matters. If, say, information processing ability increases exponentially but the marginal difficulty of improving other things (like lifespan) increases at a faster rate, then even with an upsurge in information processing one isn't necessarily going to see much in the way of direct improvements. Information processing is also clearly limited in use by energy availability. If I went back to 1950 and gave someone access to a set of black boxes that mimic modern computers, the overall rate of increase in tech wouldn't be that high, because information processing ability, while sometimes the rate-limiting step, often is not (for example, the generation of new ideas and the speed at which prototypes can be constructed and tested both matter). And this is even more apparent if I go further back in time. The span from 1900 to 1920 wouldn't look very different with those boxes added, to a large extent because people wouldn't know how to take advantage of them. So there are a lot of constraints other than just information processing and transmission capability.

Edit: Information processing might potentially work as one measure among a handful but by itself it is very crude.

comment by DZS · 2010-06-03T07:17:23.752Z · LW(p) · GW(p)

I couldn't post an article due to lack of karma, so I had to post here :P

I notice this site is pretty much filled with proponents of MWI, so I thought it'd be interesting to see if there is anyone on here who is actually against MWI, and if so, why?

After reading through some posts it seems the famous Probability, Preferred Basis and Relativity problems are still unsolved.

Are there any more?

Replies from: JamesPfeiffer
comment by JamesPfeiffer · 2010-06-05T05:56:09.514Z · LW(p) · GW(p)

Welcome!

Here is a comment by Mitchell Porter.

http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/1csi

Replies from: torekp
comment by torekp · 2010-06-06T00:38:29.222Z · LW(p) · GW(p)

Seconding Mitchell Porter's friendly attitude toward the Transactional Interpretation, I recommend this paper by Ruth Kastner and John Cramer.

comment by NancyLebovitz · 2010-06-01T20:17:42.442Z · LW(p) · GW(p)

Any recommendations for how much redundancy is needed to make ideas more likely to be comprehensible?

Replies from: Eliezer_Yudkowsky, None, None
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-06-02T07:44:59.704Z · LW(p) · GW(p)

There's a general rule in writing that if you don't know how many items to put in a list, you use three. So if you're giving examples and you don't know how many to use, use three. Don't know if that helps, but it's the main heuristic I know that's actually concrete.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2010-06-03T03:17:04.093Z · LW(p) · GW(p)

So if you're giving examples and you don't know how many to use, use three.

I'm not sure I follow. Could you give a couple more examples of when to use this heuristic?

comment by [deleted] · 2010-06-02T14:39:04.814Z · LW(p) · GW(p)

The only guideline I'm familiar with is "Tell me three times - tell me what you're going to explain, then explain it, then tell me what you just explained." This seems to work on multiple scales - from complete books to shorter essays (though I'm not sure if it works on the level of individual paragraphs).

Replies from: dclayh
comment by dclayh · 2010-06-02T18:05:44.220Z · LW(p) · GW(p)

I believe that's called the Bellman's Rule.

comment by [deleted] · 2010-06-01T21:52:08.579Z · LW(p) · GW(p)

It really depends upon the topic and upon how much inferential distance there is between your ideas and the reader's understanding of the topic. Eliezer's earlier posts are easily understandable to someone with no prior experience in statistics, cognitive science, etc. because he uses a number of examples and metaphors to clearly illustrate his point. In fact, it might be helpful to use his posts as a metric to help answer your question. In general, though, it's probably best to repeat yourself by summarizing your point at both the beginning and end of your essay/post/whatever and by using several examples to illustrate whatever you are talking about, especially if writing for non-experts.

comment by aausch · 2010-06-03T16:49:31.713Z · LW(p) · GW(p)

I sometimes look at human conscious thought as software which is running on partially re-programmable hardware.

The hardware can be reprogrammed by two actors - the conscious one, mostly indirectly, and the unconscious one, which seems to have direct access to the wiring of the whole mechanism (including the bits that represent the conscious actor).

I haven't yet seen a coherent discussion of this kind of model - maybe it exists and I'm missing it. Is there already a coherent discussion of this point of view on this site, or somewhere else?

Replies from: Jordan, pjeby, NancyLebovitz
comment by Jordan · 2010-06-03T23:00:16.683Z · LW(p) · GW(p)

I look at conscious thought like a person trying to simultaneously ride multiple animals. Each animal can manage itself; if left to its own devices it'll keep on walking in some direction, perhaps even a good one. The rider can devote different levels of attention to any given animal, but his level of control bottoms out at some point: he can't control the muscles of the animals, only the trajectory (and not always this).

One animal might be vision: it'll go on recognizing and paying attention to things unspurred, but the rider can rein the animal in and make it focus on one particular object, or even one point on that object.

The animals all interact with each other, and sometimes it's impossible to control one after it's been incited by another. And of course, the rider only has so much attention to devote to the numerous beasts, and often can only wrangle one or two at a time.

Some riders even have reins on themselves.

comment by pjeby · 2010-06-03T23:01:20.678Z · LW(p) · GW(p)

Is there already a coherent discussion of this point of view on this site, or somewhere else?

It's a little old, but there's always The Multiple Self.

comment by NancyLebovitz · 2010-06-03T22:46:47.389Z · LW(p) · GW(p)

I think that's a part of PJEby's theories.

comment by [deleted] · 2010-06-03T15:19:51.557Z · LW(p) · GW(p)

In the same vein as Roko's investigation of LessWrong's neurotypicalness, I'd be interested to know the spread of Myers-Briggs personality types that we have here. I'd guess that we have a much higher proportion of INTPs than the general population.

An online Myers-Briggs test can be found here, though I'm not sure how accurate it is.

Replies from: None, JoshuaZ
comment by [deleted] · 2010-06-05T09:41:16.572Z · LW(p) · GW(p)

del

Replies from: AdeleneDawner, mattnewport
comment by AdeleneDawner · 2010-06-06T01:35:51.662Z · LW(p) · GW(p)

http://lesswrong.com/lw/2a5/on_enjoying_disagreeable_company/22ga

That's a small sample, but we actually seem to score below average on Conscientiousness. Of the 7 responses to that request, the Conscientiousness scores were 1, 1, 8, 13, 41, 41, and 58.

Replies from: gimpf
comment by gimpf · 2010-06-06T21:03:55.088Z · LW(p) · GW(p)

Add another C5. Doesn't surprise me, given all the akrasia talk around here.

comment by mattnewport · 2010-06-05T21:39:32.213Z · LW(p) · GW(p)

I tend to score very high on openness to experience, average to low on extraversion, and only average to low on conscientiousness.

comment by JoshuaZ · 2010-06-03T15:33:55.675Z · LW(p) · GW(p)

There are a lot of problems with Myers-Briggs. For example, the test doesn't account for people saying things because they are considered socially good traits. Claims that Myers-Briggs is accurate often seem to be connected to the Forer effect. A paper which discusses these issues is Boyle's "Myers-Briggs Type Indicator (MBTI): Some psychometric limitations", 1995, Australian Psychologist 30, 71–74.

comment by CronoDAS · 2010-06-02T18:37:27.807Z · LW(p) · GW(p)

Anyone here live in California? Specifically, San Diego county?

The judicial election on June 8th has been subject to a campaign by a Christian conservative group. You probably don't want them to win, and this election is traditionally a low turnout one, so you might want to put a higher priority on this judicial election than you normally would. In other words, get out there and vote!