Open thread, July 29-August 4, 2013

post by David_Gerard · 2013-07-29T22:26:36.505Z · LW · GW · Legacy · 389 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Of course, for "every Monday", the last one should have been dated July 22-28. *cough*


comment by Lumifer · 2013-07-30T01:57:48.281Z · LW(p) · GW(p)

An interesting story -- about science, what gets published, and what the incentives for scientists are. But really it is about whether you ought to believe published research.

The summary has three parts (I am quoting from the story).

Part 1: We were inspired by the fast growing literature on embodiment that demonstrates surprising links between body and mind (Markman & Brendl, 2005; Proffitt, 2006) to investigate embodiment of political extremism. Participants from the political left, right and center (N = 1,979) completed a perceptual judgment task in which words were presented in different shades of gray. Participants had to click along a gradient representing grays from near black to near white to select a shade that matched the shade of the word. We calculated accuracy: How close to the actual shade did participants get? The results were stunning. Moderates perceived the shades of gray more accurately than extremists on the left and right (p = .01). Our conclusion: political extremists perceive the world in black-and-white, figuratively and literally. Our design and follow-up analyses ruled out obvious alternative explanations such as time spent on task and a tendency to select extreme responses.

Part 2: Before writing and submitting, we paused. ... We conducted a direct replication while we prepared the manuscript. We ran 1,300 participants, giving us .995 power to detect an effect of the original effect size at alpha = .05.

Part 3: The effect vanished (p = .59).
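(As a sanity check on the quoted numbers, here is a minimal sketch of the kind of power calculation described in Part 2. The story does not state the original effect size; the d = 0.25 below is an assumed placeholder, chosen only because it roughly reproduces the quoted .995 power figure with 1,300 participants.)

```python
# Hedged sketch of a power calculation like the one quoted above.
# The original effect size is NOT given in the story; d = 0.25 is an
# assumption that happens to reproduce ~.995 power with 650 per group.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(effect_size=0.25, nobs1=650, ratio=1.0, alpha=0.05)
print(f"Power at alpha = .05 with 2 x 650 participants: {power:.3f}")  # ~0.995
```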

Replies from: Vladimir_Golovin
comment by Vladimir_Golovin · 2013-07-31T05:47:15.618Z · LW(p) · GW(p)

Warning, a contrarian anecdote: I worked with a guy who was a hardcore ultra-nationalist, bordering on Nazism. As one would expect, he did perceive many aspects of the world in black-and-white. The catch is, the guy is an excellent digital painter, and his work often involved nuanced analogous color schemes, which rely on using shades of one or more similar colors.

comment by Prismattic · 2013-07-31T02:02:38.783Z · LW(p) · GW(p)

Ugh. I am generally in the unsympathetic-to-PUA thinking camp, so I offer the following not to bring up a controversial subject again, but because I think publicly acknowledging when one encounters inconvenient evidence for one's priors is a healthy habit to be in...

Recently I added the following (truthful) text to my OkCupid profile:

Note, July 2013 -- I can't claim to be in a relationship yet, but I have had a couple of dates with someone who had me totally enthralled within 30 minutes of meeting her. I'm flattered by the wave of other letters that have come in the past month, but I've put responding to anyone else on hold while I devote myself to worshiping the ground she walks on.

Having noted that I am a) unavailable and b) getting lots of competing offers -- a high-status combination -- the result is... in three days, the number of women rating my profile highly has gone from 61 to 113.

Replies from: Eliezer_Yudkowsky, RomeoStevens, Matt_Simpson, army1987, army1987, Viliam_Bur, army1987
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-31T02:31:18.803Z · LW(p) · GW(p)

+1 for acknowledging the inconvenient (without regard to subject matter).

Replies from: SilasBarta
comment by SilasBarta · 2013-08-01T19:50:30.422Z · LW(p) · GW(p)

+1 for a (+1 for acknowledging the inconvenient) on a subject you dislike discussion of.

comment by RomeoStevens · 2013-07-31T07:55:10.476Z · LW(p) · GW(p)

OTOH I wouldn't at all be shocked to find out that profiles rated highly and profiles most often responded to are significantly different sets. Signalling preferences vs revealed preference yada yada.

comment by Matt_Simpson · 2013-07-31T16:57:19.559Z · LW(p) · GW(p)

Funny, I read your post and my initial reaction was that this evidence cuts against PUA. (Now I'm not sure whether it supports PUA or not, but I lean towards support).

PUA would predict that this phrase

...while I devote myself to worshiping the ground she walks on.

is unattractive.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-31T17:26:23.682Z · LW(p) · GW(p)

I dunno, in the context it sounds clearly tongue-in-cheek -- though you usually can't countersignal to people who don't know you (see also).

Replies from: Prismattic, Matt_Simpson
comment by Prismattic · 2013-07-31T22:33:54.671Z · LW(p) · GW(p)

The irony is that the phrase was sort of serious, but in the context of a profile much of which is a lengthy exercise in countersignalling to people who don't know me, I can probably count on most people making the same assumption you did.

Replies from: army1987
comment by A1987dM (army1987) · 2013-08-01T12:46:26.364Z · LW(p) · GW(p)

More specifically: “I devote myself to worshiping the ground she walks on” is the kind of sentence you mainly say for its connotations, not its denotations. In isolation, the connotation would be ‘she's so much more awesome than me’, which is low status, but in context it's ‘she's so much more awesome than you’, which is high status.

comment by Matt_Simpson · 2013-08-01T19:42:55.654Z · LW(p) · GW(p)

Good point.

comment by A1987dM (army1987) · 2013-07-31T10:12:21.145Z · LW(p) · GW(p)

“People will be more likely to (say they) like you once you're in a relationship with someone else” isn't something only people in the sympathetic-to-PUA thinking camp usually say.

comment by A1987dM (army1987) · 2013-07-31T17:24:47.929Z · LW(p) · GW(p)

Note also that the same action may be interpreted as a sexual advance if the recipient is available (or at least there's no common knowledge to the contrary) and as a sincere compliment for its own sake otherwise; therefore, if someone is willing to do the former but not the latter for whatever reason (e.g. irrational fear of creep- or slut-shaming due to ethanol deficiency)...

comment by Viliam_Bur · 2013-08-10T21:32:24.941Z · LW(p) · GW(p)

in three days, the number of women rating my profile highly has gone from 61 to 113.

There is this competing hypothesis, that the women upvoted you for being honest with them, or for being faithful to the lady you wrote about. (As opposed to just trying to bed as many women as possible.)

So... how about the number of women contacting you -- has it increased, decreased, or remained the same? Perhaps that could provide some evidence to discriminate between the "he is unavailable, therefore attractive" and "he is unavailable, upvoted for not wasting my hopes" hypotheses.

comment by A1987dM (army1987) · 2013-08-06T11:21:27.760Z · LW(p) · GW(p)

in three days, the number of women rating my profile highly has gone from 61 to 113.

Wait a moment... How long did it take to go from 0 to 61? How long had you gone without logging into OkC before writing that? Maybe the increase is due to more people finding your profile when looking for people “Online today” or “Online this week”?

Replies from: Prismattic
comment by Prismattic · 2013-08-06T22:35:46.462Z · LW(p) · GW(p)

Alas, there are no loopholes here. 0-61 took almost exactly a year (it would have been more like 10 months, but you lose the votes of people who deactivate their profiles), and I was logging in at least weekly, usually more, during that time.

comment by Vaniver · 2013-07-30T21:38:01.636Z · LW(p) · GW(p)

I've noticed a few times how surprisingly easy it is to be in the upper echelon of some narrow area with a relatively small amount of expenditure (for an upper middle class American professional). This is easy to see in various entertainment hobbies -- an American professional adult who puts, say, 10% of his salary into Legos will have a massive collection by the standards of most people who own Legos. Similarly, putting 10% of a professional's salary into buying gadgets means that you would be buying a new one or two every month.

I recently came across an article on political donations and saw the same effect -- to be in the top .01% of American political donors, it only takes about $11k an election cycle (more in presidential years, less in legislative only years). Again, at 10% of income, that only takes an income of ~$55k a year (since the cycles occur every two years), which is comparable to the median American salary (and lower than the starting salaries for most of my friends who graduated with STEM bachelor's degrees).
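(A quick check of the arithmetic in the paragraph above, as a sketch; the numbers are the ones already quoted there.)

```python
# Income implied by a top-.01% donation budget of ~$11k per two-year
# election cycle, at 10% of annual income devoted to the niche.
donation_per_cycle = 11_000   # dollars per election cycle
years_per_cycle = 2
spending_rate = 0.10          # fraction of annual income

required_income = donation_per_cycle / (years_per_cycle * spending_rate)
print(f"Required annual income: ${required_income:,.0f}")  # $55,000
```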

It's not clear to me what percentage of people do this. It's the sort of thing that you could only do for a few narrow niches, since buying a ton of Legos impedes your ability to buy a bunch of gadgets, and it seems like most people go for broad niches instead of narrow niches. If most people spend 10% of their income on clothes, say, then to be in the top 1% of clothes-buyers you need to be in the top 1% of income-earners.

I know a handful of people in the LW sphere give a startlingly high percentage of their income to MIRI and are near the top of MIRI supporters. They probably also end up in the top percentile of charitable givers, but I don't have numbers on hand for that.

I'm curious if this is a worthwhile pattern to emulate. I currently do this for art collection in a narrow subfield, and noticed the benefits of being at the top percentage of expenditure mostly by accident, but don't have a good sense of how those benefits compare to marginal value comparisons between different potential hobbies. (Actually, now that I think about this, this might just be a special case of the general "specialization pays off" heuristic, where it may be better to have one extreme hobby than dabble in twenty things, but this may not be obvious when moving from twenty hobbies to nineteen hobbies.)

Replies from: Metus
comment by Metus · 2013-07-30T23:23:02.716Z · LW(p) · GW(p)

Some random points that came to my mind. The Pareto principle: 80% of the effect comes from 20% of the expenditure. So if we take the figure of 10,000 hours to mastery, 2,000 hours will already lead to ridiculous effects compared to the average Joe. The tighter the niche you choose, the less competition there will be, so sheer probability dictates that you are more likely to be in a higher percentile of the distribution.

Overall, it seems to be better to be extremely invested in one niche and take a low interest in a couple of others (for social purposes at least) than to dabble moderately in a lot of them. What are the 'benefits' you allude to?

Finally, people spending a little bit on a lot of hobbies may be a symptom of an S-shaped response curve to money spent. The first few dollars increase pleasure a lot. Then you are just throwing money at it without obvious return, so you stop paying the opportunity cost and get your high elsewhere. But should you for any reason get over this hypothetical plateau, you again reach an interval of high return, maybe even higher than in the beginning, and spend your money there.

Replies from: Vaniver
comment by Vaniver · 2013-07-31T01:38:33.168Z · LW(p) · GW(p)

What are the 'benefits' you alude to?

Mostly access to exceptional people / opportunities, and admiration / social status. For example, become a major donor to a wildlife rescue center, and you get invited to play with the tigers. I would be surprised if major MIRI donors who live in the Bay Area don't get invited to dinner parties / similar social events with MIRI people.

For the status question, I think it's better to be high status in a narrow niche than medium status in many niches. It's not clear to me how the costs compare, though.

Replies from: spqr0a1
comment by spqr0a1 · 2013-07-31T21:05:26.124Z · LW(p) · GW(p)

Activity in many niches could credibly signal high status in some circles by making available many insights with short inferential distance to the general public (outside any of your niches), allowing one to seem very experienced/intelligent.

Moreover, the benefits to being medium status in several hobby groups and the associated large number of otherwise unrelated social connections may be greater than readily apparent. https://en.wikipedia.org/wiki/Social_network#Structural_holes

Replies from: Vaniver
comment by Vaniver · 2013-08-01T05:30:29.345Z · LW(p) · GW(p)

Moreover, the benefits to being medium status in several hobby groups and the associated large number of otherwise unrelated social connections may be greater than readily apparent.

Agreed. It seems like there are several general-purpose hobby groups that seem to be particularly adept at serving this role, of which churches are the most obvious example.

comment by Tenoke · 2013-07-31T11:12:29.695Z · LW(p) · GW(p)

After a short discussion on IRC regarding basilisks, I declared that if anyone has any basilisks that they consider dangerous or potentially real in any way, they should please private message me about them. I am extending this invitation here as well. Furthermore, I will be completely at fault for any harm caused to me by learning about it. Please don't let my potential harm discourage you.

Replies from: wedrifid, pinyaka, Username, Oscar_Cunningham, HungryHippo, sixes_and_sevens, drethelin, Rukifellth, Lumifer
comment by wedrifid · 2013-08-01T06:08:44.232Z · LW(p) · GW(p)

After a short discussion on IRC regarding basilisks, I declared that if anyone has any basilisks that they consider dangerous or potentially real in any way, they should please private message me about them. I am extending this invitation here as well. Furthermore, I will be completely at fault for any harm caused to me by learning about it.

We almost need a list for this. This makes half a dozen people I've seen making the same declaration.

Please don't let my potential harm discourage you.

Without endorsing the reasoning at all, I note that those with information-suppressing inclinations put only a little weight on harm caused to you and even less on your preferences. If they believe that the basilisk is worthy of the name, they will expect giving it to you to result in you spreading it to others and thereby causing all sorts of unspeakable misery and so forth. It'd be like infecting a bat with Ebola.

comment by pinyaka · 2013-07-31T15:38:38.023Z · LW(p) · GW(p)

You are using basilisk in a manner that I don't understand. I assume you're not asking if anyone has a lizard that will literally turn you into stone, so what does basilisk mean in this context?

Replies from: Tenoke
comment by Tenoke · 2013-07-31T15:43:49.736Z · LW(p) · GW(p)

Memetic/Information Hazards -- the term comes from here. Basically, anything that makes you significantly worse off after you know it than before. Giving someone wrong instructions for how to build a bomb wouldn't count, for example, as I can just never build a bomb or just use other instructions, etc.

Warning: Could be dangerous to look into it

Replies from: Lumifer, HungryHippo
comment by Lumifer · 2013-07-31T16:44:58.655Z · LW(p) · GW(p)

Memetic/Information Hazards

They really should be called Medusas -- since it's you looking at them, not them looking at you.

Replies from: Rukifellth, Tenoke
comment by Rukifellth · 2013-07-31T23:39:24.308Z · LW(p) · GW(p)

I think they both need to make eye contact.

comment by Tenoke · 2013-07-31T17:06:17.532Z · LW(p) · GW(p)

Yup, Medusa is what some blogposts use to describe them.

Replies from: Rukifellth
comment by Rukifellth · 2013-08-03T01:50:51.546Z · LW(p) · GW(p)

Which blogposts are these?

comment by HungryHippo · 2013-07-31T16:47:28.606Z · LW(p) · GW(p)

Do you of anyone claiming to be in possession of such a fact?

Replies from: Richard_Kennaway, Rukifellth, Tenoke
comment by Richard_Kennaway · 2013-08-01T11:48:48.253Z · LW(p) · GW(p)

Eliezer is in possession of a fact that he considers to be highly dangerous to anyone who knows it, and who does not have sufficient understanding of exotic decision theory to avoid being vulnerable to it. This is the original basilisk that drew LessWrong's attention to the idea. Whether he is right is disputed (but the disputation cannot take place here).

In HPMOR, he has fictionally presented another basilisk: Harry cannot tell some other wizards, including Dumbledore, about the true Patronus spell, because that knowledge would render them incapable of casting the Patronus at all, leaving them vulnerable to having their minds eaten by Dementors.

comment by Rukifellth · 2013-07-31T23:28:48.707Z · LW(p) · GW(p)

I know one.

Also I think you're missing the word "know"

comment by Tenoke · 2013-07-31T17:05:12.258Z · LW(p) · GW(p)

I know some basilisks, yes, although there is nothing I regard as actually dangerous. However, sharing things like this publicly is considered bad etiquette on LessWrong.

Replies from: pinyaka, Rukifellth, MixedNuts
comment by pinyaka · 2013-08-02T19:08:57.070Z · LW(p) · GW(p)

If it's not dangerous, how does it constitute a hazard?

comment by Rukifellth · 2013-07-31T23:57:41.383Z · LW(p) · GW(p)

I tried to rot13 my previous discussion and was only mocked. The attitude towards basilisks seems to be one of glib reassurance.

Replies from: wedrifid
comment by wedrifid · 2013-08-01T06:14:05.790Z · LW(p) · GW(p)

I tried to rot13 my previous discussion and was only mocked. The attitude towards basilisks seems to be one of glib reassurance.

Not just glib reassurance. There is also the outright mockery of those who advocate taking (the known pseudo-examples of) them seriously.

Replies from: Rukifellth
comment by Rukifellth · 2013-08-01T10:40:13.278Z · LW(p) · GW(p)

I can't imagine that anyone is advocating taking them seriously.

comment by MixedNuts · 2013-08-01T09:58:35.793Z · LW(p) · GW(p)

Can you send me yours? Please PM me here or on IRC. I already know the most famous one here.

comment by Username · 2013-07-31T20:24:41.339Z · LW(p) · GW(p)

Could you post how many you receive and your realistic estimate of whether any are actually dangerous? Without specifics, of course. (If you take these things seriously, I suppose you should have a dead-man's switch.)

Though for the record, I think the LW policy of not being able to discuss basilisks is ridiculous -- a big banner at the top of a post saying, for example, 'Warning - Information Hazard to those who have suffered anxiety at the thought of AI acting acausally' should be fine. I strongly disagree with the outright banning of discussion about specific basilisks/medusas, especially seeing as LW is one of the only places where one could have a meaningful conversation about them.

comment by Oscar_Cunningham · 2014-08-12T18:30:11.837Z · LW(p) · GW(p)

Did anything come of this in the end? Were any of the basilisks harmful or otherwise interesting?

Replies from: Tenoke
comment by Tenoke · 2014-08-12T19:07:57.971Z · LW(p) · GW(p)

I got some responses, but I wouldn't say they were.

comment by HungryHippo · 2013-07-31T16:46:10.359Z · LW(p) · GW(p)

Please let us know if you receive anything interesting.

comment by sixes_and_sevens · 2013-07-31T14:00:14.181Z · LW(p) · GW(p)

Can you tell us what you're trying to achieve with this?

Replies from: Tenoke
comment by Tenoke · 2013-07-31T14:16:39.151Z · LW(p) · GW(p)

Interested in the responses, since I actually think I can learn some useful things if anyone actually shares something good. Also, I assign significantly less than a 1% chance that anyone will actually tell me anything 'dangerous' -- for example, I think Roko's is as dangerous as pie. I don't plan to release memetic hazards on unsuspecting citizens, if that's your fear.

Replies from: Rukifellth, sixes_and_sevens
comment by sixes_and_sevens · 2013-07-31T14:38:54.488Z · LW(p) · GW(p)

It's more that soliciting information hazards seems like really odd behaviour. Even if no-one sends you an Interactive Suicide Rock, you might still receive some horrible or annoying stuff you don't want to be carrying around in your head.

I'm really interested to find out what, if anything, people send you, but I'm not sure I want to know exactly what they are.

Replies from: Tenoke, David_Gerard
comment by Tenoke · 2013-07-31T14:53:22.888Z · LW(p) · GW(p)

I'm really interested to find out what, if anything, people send you, but I'm not sure I want to know exactly what they are.

Other people expressed a similar view and since I don't mind, I can at least help with satisfying people's curiosity in a way that would cause minimal harm. However, I have found nothing worth talking about after some fairly extensive google searches so I am currently trying to think if there is anyone knowledgeable that I can e-mail (already have a few people on the list) or if there are any good search terms that I haven't tried yet.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2013-07-31T15:09:46.625Z · LW(p) · GW(p)

It's probably worth clarifying what you consider a basilisk, as that might reduce any unpleasant-yet-irrelevant submissions.

comment by David_Gerard · 2013-07-31T22:02:16.828Z · LW(p) · GW(p)

The Motif of Harmful Sensation is a common fictional trope, but of real-life examples there are pretty much 0. (Excepting e.g. a subject with given mental susceptibilities such as depression or OCD.)

Replies from: gwern, FourFire, NancyLebovitz
comment by gwern · 2013-07-31T22:10:27.196Z · LW(p) · GW(p)

(Excepting e.g. a subject with given mental susceptibilities such as depression or OCD.)

And even more obviously, epilepsy. Yet, I don't understand why you would except them.

'You see, X does not exist, since I choose to ignore all the cases in which X does exist; I hope you'll agree that this argument is watertight once you grant my premises.'

Replies from: asr, David_Gerard
comment by asr · 2013-07-31T22:29:49.040Z · LW(p) · GW(p)

I think David has a point here.

The cases you two have mentioned of sensory hazards all affect people who have identifiable susceptibilities that those people usually know about in advance and that affect relatively small minorities.

Somebody might have a high confidence that they are non-depressed, non-OCD, non-epileptic, etc. Are there examples of sensory hazards that apply to people who do not have a recognized medical problem?

Replies from: gwern
comment by gwern · 2013-07-31T23:18:26.393Z · LW(p) · GW(p)

Are there examples of sensory hazards that apply to people who do not have a recognized medical problem?

But this is a different question. You have quietly redefined the question "are there harmful sensations to people?" - to which the answer is overwhelmingly, resoundingly, yes, there absolutely are - to 'are there harmful sensations to a newly redefined subset of people which we will immediately update if anyone produces further examples, so actually what I meant all along was "are there harmful sensations which we don't yet know about?"'

Or to put it more simply: 'Can you provide an example of a harmful sensation we don't yet know about?' Well... If I could produce a harmful sensation, you and David would simply say something like 'ah, well, I guess we now have a recognized medical problem, because look, we [commit suicide / collapse in convulsions / cease functioning / become obsessed with useless actions] if you expose us to X! That's a pretty serious psychiatric problem! But, are there examples of sensory hazards that apply to people who do not have a recognized medical problem?'

To which I can only shake my head no.

Replies from: asr
comment by asr · 2013-08-01T04:56:19.523Z · LW(p) · GW(p)

I hear you and I'm not trying to play the definition game or wriggle out of this. The way I conceptualized the question -- which I think the original poster had in mind and what I think is relevant to hazard risk assessment -- is more like one of these:

A) "What fraction of the public is seriously vulnerable to sensory hazards",

B) "Given that one knows one's medical history and demographics, what is the probability that there are sensory hazards one is vulnerable to but not already well aware of."

My hunch is that the answers are "less than 20%" and "close to zero." The example of epilepsy didn't shift my beliefs about either; epilepsy is rare and is rarely adult-onset for the non-elderly.

Replies from: gwern
comment by gwern · 2013-08-01T14:41:20.604Z · LW(p) · GW(p)

B) "Given that one knows one's medical history and demographics, what is the probability that there are sensory hazards one is vulnerable to but not already well aware of."

So you're asking, what new medical sensory hazards may be developed in the future.

Well, the example of photosensitive epilepsy, where no trigger is mentioned which could have existed before the 19th century or so, suggests you should be very wary of thinking the risk of new sensory hazards is close to zero. Flash grenades are another visual example of a historically novel sensation which badly damages ordinary people. Infrasound is another plausible candidate for future deliberate or accidental weaponization. And so on...

epilepsy is rare and is rarely adult-onset for the non-elderly.

There, see, you're doing it again! Why would you exclude the elderly? Keep in mind that you yourself should aspire to become elderly one day (after all, consider the most likely alternative...).

Replies from: asr
comment by asr · 2013-08-01T19:18:55.279Z · LW(p) · GW(p)

The photosensitive epilepsy and infrasound examples convinced me, thank you. I see that those are cases where a reasonably informed observer might be surprised by the vulnerability.

comment by David_Gerard · 2013-08-02T11:51:40.766Z · LW(p) · GW(p)

Gwern, this thread is about the Basilisk. Conflating that with epilepsy is knowing equivocation. Don't be dense, thanks.

Replies from: gwern
comment by gwern · 2013-08-02T14:35:27.973Z · LW(p) · GW(p)

No denser than thou, David:

The Motif of Harmful Sensation is a common fictional trope, but of real-life examples there are pretty much 0. (Excepting e.g. a subject with given mental susceptibilities such as depression or OCD.)

Who was it who brought up the Motif of Harmful Sensation, which is not limited to Roko's basilisk? Who was it who brought it up in order to define away examples of depression or OCD? Thou, David, thou.

Replies from: David_Gerard
comment by David_Gerard · 2013-08-02T20:15:55.950Z · LW(p) · GW(p)

The fictional trope is of one you wouldn't expect to be harmful. That's the literary point of it, and of the Basilisk: the surprise factor.

Replies from: gwern
comment by gwern · 2013-08-02T20:53:53.393Z · LW(p) · GW(p)

The fictional trope is of one you wouldn't expect to be harmful.

And surely the animators who made that Pokemon episode expected it to be harmful and they made those kids seize because they're simply evil.

No denser than thou, David.

comment by FourFire · 2013-08-01T09:08:54.993Z · LW(p) · GW(p)

I think that most of the general examples have been mentioned: Religion, among others, which has the rather mildly harmful "fear of hell" and its own propagation.

I think that any majorly harmful hazard which the general population was susceptible to would cause them all to shortly win Darwin Awards and remove themselves from the gene pool.

As such we only have minority groups which are vulnerable to specific stimuli.

Replies from: Leonhart
comment by Leonhart · 2013-08-05T19:17:33.615Z · LW(p) · GW(p)

the rather mildly harmful "fear of hell"

The Typical Mind Fallacy is strong with this one.

remove themselves from the gene pool

It's a good thing that isn't a mortal sin! Oh no wait.

Replies from: FourFire
comment by FourFire · 2013-08-06T16:27:29.298Z · LW(p) · GW(p)

In what way are you attempting to counter my argument?

By 'harmful' I mean detrimental to procreation probability. I assume that highly fanatical religious people are likely to be in an environment with members of the opposite sex who are relatively equal in level of "indoctrination" and are therefore able to reproduce, though some religious practices are arguably detrimental to reproductive ability.

By 'remove themselves from the gene pool' I mean, of course, failure to produce offspring.

But please do let me know if you meant something else entirely.

Replies from: Leonhart
comment by Leonhart · 2013-08-06T22:28:28.228Z · LW(p) · GW(p)

Yes, we are completely talking past each other. In my framing "harmful" relates to number and intensity of suffering-moments, not reproductive success. I'm still kind of boggling that you think that's relevant.
You are correct to look to religion for archetypal information hazards; certain conceptions of sin, for example. Unlike Omega, sin cares about your decision theory; it applies to you if and only if you know it does, and the news is always bad. It's a cognitive event horizon. The Motif of Harmful Sensation is completely damn irrelevant. Information hazards don't make you go bleeble-bleeble-bleeble, they make you lie awake at night.

To be honest, I wasn't making sufficient effort to engage with you; I was venting irritation with this whole subthread, which largely consists of the emotionally privileged giving each other high-fives for getting lucky with their absurdity heuristic. You briefly became the embodiment of my irritation by describing the fear of hell as "mildly harmful", which it sort of isn't when you measure harm in actual caused fear. Some thoughts are black, and go nowhere, and can teach nothing, and any energy used to think them pours out of the universe and is gone. But I'm tapping out before I make a fool of myself further.

Replies from: FourFire
comment by FourFire · 2013-08-09T22:05:33.441Z · LW(p) · GW(p)

I'll agree that there was a mutual misunderstanding, my point has failed to be made. Ok. ;)

comment by NancyLebovitz · 2013-08-02T15:43:00.131Z · LW(p) · GW(p)

How harmful does it have to be? Noise can be hard on people, and sufficiently loud noise causes permanent damage.

There's something interesting in here about what counts as a sensation for purposes of this discussion -- probably "a sensation which most people wouldn't expect to be harmful".

comment by drethelin · 2013-07-31T13:25:13.356Z · LW(p) · GW(p)

Some basilisks are potentially contagious.

Replies from: Tenoke
comment by Tenoke · 2013-07-31T13:27:22.952Z · LW(p) · GW(p)

Please give me examples.

Replies from: drethelin, linkhyrule5
comment by drethelin · 2013-07-31T20:22:34.562Z · LW(p) · GW(p)

I think the most obvious semi-basilisk example is certain strains of religion. Insofar as they make you believe you might go to hell, and all your friends are going to hell, these religions will make you feel bad and also make you want to spread them to everyone you know. Feeling bad is not the same as death or mental breakdown or other theoretical actual basilisk consequences, but in essence there are meme complexes that contain elements demanding you spread the whole complex. If someone is in possession of such a concept but has defeated it or is in some way immune, it may still be correct for them not to tell you, for fear that you are not and will spread it to others once it has worked its will on you.

Replies from: Rukifellth
comment by Rukifellth · 2013-08-03T02:32:39.698Z · LW(p) · GW(p)

What do Christians do with the idea of "you're not spreading His Word fast enough"? It would be the same kind of scenario if there's nothing restraining Christian evangelical obligation.

Replies from: drethelin
comment by drethelin · 2013-08-03T16:04:07.554Z · LW(p) · GW(p)

Depends on the sect and person

comment by linkhyrule5 · 2013-07-31T21:54:20.073Z · LW(p) · GW(p)

Ever seen one of those "If you don't forward this email to five friends, your (relation) will DIE!!1!!!one!" emails?

comment by Rukifellth · 2013-07-31T23:24:33.350Z · LW(p) · GW(p)

You magnificent, magnanimous son of a bitch.

Replies from: Benito
comment by Ben Pace (Benito) · 2013-07-31T23:29:39.069Z · LW(p) · GW(p)

Well that escalated quickly.

Replies from: Rukifellth
comment by Rukifellth · 2013-07-31T23:33:19.787Z · LW(p) · GW(p)

I think a level of gaiety and excitement is appropriate given the subject.

comment by Lumifer · 2013-07-31T16:49:54.921Z · LW(p) · GW(p)

The classic internet basilisk is goatse :-D

Replies from: David_Gerard, NotInventedHere
comment by David_Gerard · 2013-07-31T22:00:34.038Z · LW(p) · GW(p)

I still haven't seen 2 Girls 1 Cup and have no plans ever to do so.

Replies from: Bill_McGrath, Rukifellth, Lumifer
comment by Bill_McGrath · 2013-08-01T01:07:51.156Z · LW(p) · GW(p)

I didn't have a strong reaction to it. It's gross, I shrugged and moved on.

comment by Rukifellth · 2013-08-03T01:49:16.833Z · LW(p) · GW(p)

I watched 2 Girls 1 Cup, then had to watch it again after I realized my speakers were off.

comment by Lumifer · 2013-07-31T23:35:13.157Z · LW(p) · GW(p)

But you know it exists :-)

comment by NotInventedHere · 2013-08-01T13:09:04.393Z · LW(p) · GW(p)

I really can't say that I was affected by it all that much. Just thought "Ew." and moved on.

comment by tim · 2013-07-30T01:54:40.155Z · LW(p) · GW(p)

So according to this article, a large factor in rising tuition costs at American universities is attributable to increases in administration and overhead costs. For example,

Over the past four decades, though, the number of full-time professors or “full-time equivalents”—that is, slots filled by two or more part-time faculty members whose combined hours equal those of a full-timer—increased slightly more than 50 percent. That percentage is comparable to the growth in student enrollments during the same time period. But the number of administrators and administrative staffers employed by those schools increased by an astonishing 85 percent and 240 percent, respectively.

Certainly some of these increases are attributable to the need for more staff supporting new technological infrastructure such as network/computer administration but those needs don't explain the magnitude of the increases seen.

The author also highlights examples of excess and waste in administrative spending such as large pay hikes for top administrators in the face of budget cuts and the creation of pointless committees. How much these incidents contribute to the cost of tuition is somewhat questionable as the evidence is essentially a large list of anecdotes.

Anyway, this was surprising to me because I would naively predict that, if we were talking about almost any other product, we would begin to see less bureaucratically bloated competitors offering it for cheaper and driving the price down. What's unique about university that stops this from happening?

Possible explanations (based on an extremely basic understanding of economics, please correct),

  1. The author notes that the boards of trustees tend to be ill-prepared for making the kinds of decisions that might lead to a trimming of the fat. However, for this to be the reason (or at least a large part of the reason), boards would have to be almost universally incompetent; otherwise the few universities that take such action would have a market advantage over those that don't.

  2. Maybe, for whatever reason, it's difficult for universities to grow past a certain point. If the market is already saturated with demand and universities are unable to expand to accommodate it, then they have no incentive to lower tuition. However, you would still expect lots of new universities to pop up as a result of this (which may or may not be the case, as I couldn't find good statistics for this).

  3. The situation we find ourselves in appears to fit well with the signaling model of education. That is, college isn't about learning, it's about signaling your worth to potential employers via an expensive piece of paper. If this were the case it would be hard for a new or non-prestigious institution to break into the market or increase their market share even if the actual education was of high quality and inexpensive relative to competitors. In fact, under this model, more expensive schools may be preferred simply because they signal a higher level of prestige.

  4. Maybe I have been fooled by a misleading article that overblows the level of waste and inefficiency in American universities, and it would actually be quite difficult to run a modern educational institution without a comparable level of bureaucratic expenditure. There are parts of the article that do strike me as hyperbolic, but I've yet to come across a coherent argument contending that the current tuition levels are necessary, and several that posit the opposite.

Replies from: beoShaffer, Randaly, ESRogs, Randy_M
comment by beoShaffer · 2013-07-30T02:40:41.774Z · LW(p) · GW(p)

Also worth considering is the idea that increased administration is needed to deal with new regulations and/or norms. For example, many schools have added positions dealing with diversity, sexual assault, and disability accommodations.

Replies from: gjm
comment by gjm · 2013-08-11T01:59:19.929Z · LW(p) · GW(p)

It seems very unlikely to me that this could account for more than a very small fraction of the budget. Surely these administrators are neither many in number, nor extravagantly paid?

Replies from: beoShaffer
comment by beoShaffer · 2013-08-11T03:21:28.430Z · LW(p) · GW(p)

I don't have exact numbers, but the reason I made this suggestion is that I noticed my college has a large number of these people relative to our size. Also, that was just going by people's job titles; it's possible that several administrative departments that have other reasons for existing, but that intersect with heavily regulated areas, have added more staff to cope. Furthermore, I got the impression that they were fairly well compensated. I doubt they're a huge chunk of the total budget, but I think it's possible that they account for a decent amount of the increase in the size of the administration.

comment by Randaly · 2013-07-30T21:16:23.723Z · LW(p) · GW(p)

Anyway, this was surprising to me because I would naively predict that, if we were talking about almost any other product, we would begin to see less bureaucratically bloated competitors offering it for cheaper and driving the price down. What's unique about university that stops this from happening?

We do see competition.

ETA: Two additional points:

  • A lot of the spending/waste is on prestige projects like new buildings, rather than on administrators.

  • If you're wondering why nobody is challenging the top schools, I have three responses:

1) It would require too high an initial investment. 2) It would require attracting top students, which is more difficult given scholarships and lack of reputation. 3) This college is trying to do so.

comment by ESRogs · 2013-07-30T04:20:57.683Z · LW(p) · GW(p)

I think combining your 2 and 3 with the observation that demand is not particularly sensitive to price (citation needed) provides a strong argument for why administrators would not be incentivized to cut costs.

comment by Randy_M · 2013-07-30T15:18:08.832Z · LW(p) · GW(p)

Part of the reason the market can tolerate an increase in price is the same as the reason health care does likewise: the consumer is paying with someone else's money in many or most cases, and no one is looking closely at an itemized receipt/menu.

There are some new universities that arise and grow, especially technical colleges, things like U of Phoenix, etc., but almost by definition they will be low status (signaling), and there are accrediting hurdles and other regulations that help existing universities function as a cartel.

comment by gwern · 2013-08-03T02:23:59.226Z · LW(p) · GW(p)

Question: where can I upload jailbroken PDFs so that they are public & Google-visible?

For a job, I compiled ~100MB of lipreading research, some of it extremely obscure & hard to find (I also have some Japanese literature PDFs in a similar situation); while I have no personal interest in the topic and do not want to host the PDFs on gwern.net indefinitely, I feel it would be a massive waste to simply delete them.

I cannot simply put them in a Dropbox public folder because they wouldn't show up in Google, and Scribd is an abomination I despise.

(crosspost from Google+)

Replies from: Douglas_Knight, hg00, DanielLC, gwern
comment by Douglas_Knight · 2013-09-13T17:45:51.353Z · LW(p) · GW(p)

wordpress.com has a 3GB quota, and PDFs are visible to Google.

Replies from: gwern
comment by gwern · 2013-09-13T18:31:40.151Z · LW(p) · GW(p)

Interesting. I am giving it a try at http://gwern0.wordpress.com/ . We'll see in a month if any of the PDFs show up in Google.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-09-14T21:37:10.913Z · LW(p) · GW(p)

Where are the links to the documents?

Replies from: gwern
comment by gwern · 2013-09-14T22:22:36.216Z · LW(p) · GW(p)

I don't know. I uploaded the PDFs and 'attached' them to a post. I'm not sure what I'm supposed to do beyond that.

Replies from: Douglas_Knight, Douglas_Knight
comment by Douglas_Knight · 2013-09-14T23:30:08.852Z · LW(p) · GW(p)

How to use wordpress to upload and publicize files:

Files show up at gwern0.files.wordpress.com/2013/09/original_name.
There's also an "attachment page" at gwern0.wordpress.com/?attachment_id=##, but only after you publish the associated post, while the file is immediately world readable after upload, just secret.

To get wordpress to populate the post with links:

  1. Edit post
  2. "add media"
  3. (upload files via "upload files" pane)
  4. choose "media files" pane, if necessary
  5. select all files
  6. click "insert into post" at bottom.
Replies from: gwern
comment by gwern · 2013-09-15T18:24:43.704Z · LW(p) · GW(p)

I see, thanks. It looks like that works - I see PDF links in both posts now.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-09-15T22:06:32.378Z · LW(p) · GW(p)

OK, now I can find the links, but can Google? It's not supposed to follow links from LW. I think WP advertises new accounts somewhere, but I don't think it's worth much. I suggest you link to it from gwern.net and/or Google+. Also that you link to your Google Drive public folder.

(I predict that if you don't link to the WP page, Google will eventually find it and index it, but not index the PDFs. So if someone searches for the title of the article, Google will produce the hit, but Google Scholar won't have it. And "eventually" might be more than a month.)

Replies from: gwern, gwern
comment by gwern · 2013-10-13T22:25:35.300Z · LW(p) · GW(p)

So, I just opened up the WP blog and did Scholar searches for 3 or 4 of the lipreading PDFs. Not a single hit.

comment by gwern · 2013-09-15T22:16:30.810Z · LW(p) · GW(p)

We'll see in a month.

comment by Douglas_Knight · 2013-09-14T22:38:00.848Z · LW(p) · GW(p)

Let me get back to you about WordPress, but I wonder if this explains why Google Drive didn't work for you, when it did work for WB? Google could find everything on the Google Drive, unlike WP, but maybe they only look via links.

comment by hg00 · 2013-08-07T05:24:38.848Z · LW(p) · GW(p)

Scribd is an abomination I despise.

Hm? As far as I can tell, the worst thing they do is sometimes charge users to access older uploaded documents. They have to make money somehow. Would you rather they insert full-page ads in documents the way YouTube now plays ads before video clips?

Anyway, one idea is to find people who run sites on topics related to the PDFs and suggest that they upload them to their sites. Should increase the google juice of both the documents and the sites of those who upload them, so win/win, right?

Replies from: gwern
comment by gwern · 2013-08-07T23:31:25.486Z · LW(p) · GW(p)

As far as I can tell, the worst thing they do is sometimes charge users to access older uploaded documents.

Money which they have zero right to collect and which breaks the implied contract they had with their previous users who uploaded those documents.

And their interface is butt-ugly, with PDFs completely unreadable in their HTML version -- but of course they don't let you download the PDFs, because they're all behind the Scribd paywall.

Hosting documents. A pretty simple task, one would think, and yet Scribd manages to do it both scuzzily and poorly.

They have to make money somehow.

A fully-general excuse. But they are not owed a living.

comment by DanielLC · 2013-08-03T03:57:02.247Z · LW(p) · GW(p)

I'd guess Google Drive.

You could get a website that points to wherever the download actually is.

Replies from: gwern
comment by gwern · 2013-08-03T14:39:46.822Z · LW(p) · GW(p)

That's one of the suggestions on G+ too. I didn't think that they would show up in Google proper and get indexed, but someone said they had for him, so maybe I will go with that. (Even if it doesn't work, I can always redownload and upload somewhere else, presumably.)

comment by gwern · 2014-05-26T02:22:56.457Z · LW(p) · GW(p)

I'm currently trying http://pdf.yt/ for PDF hosting. It seems to talk the talk.

comment by RolfAndreassen · 2013-07-29T22:42:54.546Z · LW(p) · GW(p)

Open comment thread, Monday July 29th

If it's worth saying, but not worth its own top-level comment in the open thread, it goes here.

Regarding the obvious recursion, please note that jokes are generally only funny the first time. :)

Replies from: Dorikka, army1987, linkhyrule5
comment by Dorikka · 2013-07-29T22:52:22.822Z · LW(p) · GW(p)

Regarding the obvious recursion, please note that jokes are generally only funny the first time. :)

In some cases, true iff you point it out in advance.

comment by A1987dM (army1987) · 2013-07-30T13:05:51.894Z · LW(p) · GW(p)

Regarding the obvious recursion, please note that jokes are generally only funny the first time. :)

Also the n-th time for n >> 1.

comment by linkhyrule5 · 2013-07-31T02:38:16.980Z · LW(p) · GW(p)

Terrible pun of the day: Bias-ian.

comment by pinyaka · 2013-08-01T12:19:42.568Z · LW(p) · GW(p)

Does anyone know why GiveWell is registered with the IRS under a different name (Clear Fund)? I am including a link to their recommendation for the AMF on a wedding registry and have already gotten a question about why their name differs.

Replies from: army1987, Nisan
comment by A1987dM (army1987) · 2013-08-02T11:47:06.520Z · LW(p) · GW(p)

I had noticed that when I got a receipt for a donation I made to them, but I assumed “Clear Fund” was their former name and they hadn't bothered to legally change it or something and didn't worry too much about that.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-08-05T20:19:36.163Z · LW(p) · GW(p)

This is extremely common, though the link pinyaka gave has a column for "doing business as," which should say GiveWell, but is left blank.

comment by Nisan · 2013-08-04T15:53:59.450Z · LW(p) · GW(p)

I don't know, but I encourage you to ask them if you don't get an answer here.

comment by pragmatist · 2013-07-30T06:28:54.560Z · LW(p) · GW(p)

Any LW readers living in India? I recently moved here (specifically, New Delhi) from the United States and I'm interested in the possibility of a local meet-up.

Replies from: Ben_LandauTaylor
comment by Ben_LandauTaylor · 2013-07-30T14:05:31.033Z · LW(p) · GW(p)

The usual suggestion for cases like this is to unilaterally announce a meetup in a public place, and bring a book in case no one shows up. Best case: awesome people doing awesome things. Worst case: you spend a couple hours reading.

comment by PECOS-9 · 2013-08-01T17:31:23.094Z · LW(p) · GW(p)

I typed up the below message before discovering that the term I was looking for is "data dredging" or "hypothesis fishing." Still decided to post below so others know.

Is there a well-known term for the kind of error that pre-registration of scientific studies is meant to avoid? I mean the error where an experiment is designed to test something like "This drug cures the common cold," but then, when the results show no effect, the researchers repeatedly do the analysis on smaller slices of the data from the experiment, until eventually they have the result "This drug cures the common cold in males aged 40-60, p<.05," when of course that result is just due to random chance (because if you do the statistical tests on 20 subsets of the data, chances are one of them will show an effect with p<.05).

It's similar to the file drawer effect, except it's within a single experiment, not many.
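(A toy simulation makes the point concrete. The drug effect is exactly zero by construction, and the subgroup sizes are invented; the only thing being demonstrated is the false-positive rate of 20 uncorrected looks at null data.)

```python
# Data dredging in miniature: a drug with zero true effect, tested
# separately in 20 subgroups with no multiple-comparisons correction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hits = 0
for _ in range(20):
    treated = rng.normal(0.0, 1.0, 50)  # no real effect in either arm
    control = rng.normal(0.0, 1.0, 50)
    if stats.ttest_ind(treated, control).pvalue < 0.05:
        hits += 1
print(f"Subgroups 'significant' at p < .05: {hits} of 20")
# With 20 independent looks, P(at least one p < .05) = 1 - 0.95**20 ≈ 0.64.
```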

Replies from: DanielLC
comment by NancyLebovitz · 2013-07-31T06:49:23.371Z · LW(p) · GW(p)

As I understand applying Bayes to science, the aim is to direct research into areas that make sense. However, sometimes valuable discoveries are made by accident.

Is there any way to tell whether your research is over-focused? To improve the odds of noticing valuable anomalies?

Replies from: Ben_LandauTaylor
comment by Ben_LandauTaylor · 2013-07-31T17:27:35.140Z · LW(p) · GW(p)

To improve the odds of noticing valuable anomalies?

Knowing a diverse network of people working on valuable projects seems like it could help. I can only think of one example; are there more?

comment by Peter Wildeford (peter_hurford) · 2013-07-30T20:13:20.873Z · LW(p) · GW(p)

In the past, people like Eliezer Yudkowsky and, I think, Luke Muehlhauser have argued that MIRI has a medium probability of success. What is this probability estimate based on, and how is success defined? I've read standard MIRI literature (like "Evidence and Import" and "Five Theses"), but I may have missed something.

Replies from: NotInventedHere
comment by NotInventedHere · 2013-08-01T12:02:20.814Z · LW(p) · GW(p)

Do you have a permalink to any of those instances? It would be helpful to know what they defined medium as.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-08-01T13:25:30.343Z · LW(p) · GW(p)

see 1 [? · GW], 2 [? · GW], 3 [? · GW], 4 [? · GW], and 5 [? · GW].

comment by Fhyve · 2013-07-30T04:38:04.677Z · LW(p) · GW(p)

Does anyone know of a good textbook on public relations (PR), or a good resource/summary of the state of the field? I think it would be interesting to know about this, especially with regards to school clubs, meetups, and online rationality advocacy.

comment by ESRogs · 2013-07-30T04:10:56.301Z · LW(p) · GW(p)

I have a question about the Simulation Argument.

Suppose that it's some point in the future, and we're able to run conscious simulations of our ancestors. We're considering whether or not to run such simulations.

We are also curious about whether we are in a simulation ourselves, and we know that knowledge that civilizations like ours run ancestor simulations would be evidence for the proposition that we ourselves are in a simulation.

Could the choice at this point whether or not to run a simulation be used as a form of acausal control over the probability that we ourselves are living in a simulation?

Replies from: shminux, Jayson_Virissimo, Tenoke
comment by shminux · 2013-07-30T05:35:54.950Z · LW(p) · GW(p)

The most you can say is that all reflectively consistent ancestors would behave the same way you do. Wasn't there a Greg Egan's story about it?

Replies from: komponisto, ESRogs
comment by komponisto · 2013-07-30T08:58:51.149Z · LW(p) · GW(p)

Wasn't there a Greg Egan's story about it?

English tip: the possessive ending " 's " carries an implicit "the". Thus "Greg Egan's story" means "the story of Greg Egan", not just "story of Greg Egan". (This is unlike the corresponding construction in, for example, German.) Instead of the above, you wanted to write:

Wasn't there a Greg Egan story about it?

(This particular mistake occurs often among non-native-speakers, and indeed is a dead giveaway of one's status as such, so it's worth saying something about.)

Replies from: army1987, shminux
comment by A1987dM (army1987) · 2013-07-30T13:03:27.931Z · LW(p) · GW(p)

English tip: the possessive ending " 's " carries an implicit "the".

(Except in constructs like “girls' school” or “a ten minutes' walk”.)

Replies from: komponisto
comment by komponisto · 2013-07-30T16:34:10.249Z · LW(p) · GW(p)

You're right about "girls' school", but "a ten minutes' walk" is wrong (should be "a ten-minute walk" or "ten minutes' walk").

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-31T09:46:48.080Z · LW(p) · GW(p)

Thanks. I myself am a non-native speaker.

[Note to self: I should re-read the relevant chapter in my English grammar when I get back home. Meanwhile, I'll look at the overview here.]

(Semantically, “ten minutes' walk” still means ‘a ten-minute walk’ rather than ‘the ten-minute walk’, but your point in reply to shminux was about syntax not semantics anyway.)

Replies from: komponisto
comment by komponisto · 2013-07-31T20:27:25.774Z · LW(p) · GW(p)

(Semantically, “ten minutes' walk” still means ‘a ten-minute walk’ rather than ‘the ten-minute walk’, but your point in reply to shminux was about syntax not semantics anyway.)

The "proof of synonymy" looks like this:

ten minutes' walk = (the walk) of (ten minutes) = a (walk of ten minutes) = a ten-minute walk

...the second "equality" being where semantics is invoked.

comment by shminux · 2013-07-30T16:47:06.809Z · LW(p) · GW(p)

Thanks. This sounds plausible (if irrelevant), but I could not find an authoritative reference confirming it. Any links?

Replies from: arundelo, komponisto
comment by arundelo · 2013-07-30T17:54:39.094Z · LW(p) · GW(p)

A Student's Introduction to English Grammar, p. 90:

The determiner position in an NP [noun phrase] is usually filled by one of two kinds of expression.

  • In all the examples so far it has been a determinative [a word like the, a, this, some, or three], and some of these can be accompanied by their own modifiers, making a determinative phrase, abbreviated DP.

  • In addition, the determiner may have the form of a genitive NP.

Examples, with the determiners underlined [bolded], are given [below]:

DETERMINATIVE
**the** city
**some** rotten eggs

DP
**almost all** politicians
**very few** new books

GENITIVE NP
**her** income
**the senator's** young son

p. 109:

As a determiner, the genitive is always definite. Note, for example, that [one patient's father] corresponds to **the** father of one patient, not **a** father of one patient.

See also the Wikipedia determiner and genitive case articles.

Replies from: shminux, Lumifer
comment by shminux · 2013-07-30T18:48:24.063Z · LW(p) · GW(p)

Thanks! Now, if only someone linked that Egan story :)

Replies from: Zack_M_Davis, pragmatist
comment by pragmatist · 2013-07-31T04:33:51.798Z · LW(p) · GW(p)

This story is not by Egan, but it might be what you're looking for.

Replies from: shminux
comment by shminux · 2013-07-31T07:26:51.554Z · LW(p) · GW(p)

Ah, yes, thanks. I wondered why I couldn't find it :) Hmm, I thought it was longer...

comment by Lumifer · 2013-07-30T20:33:27.595Z · LW(p) · GW(p)

Note, for example, that [one patient's father] corresponds to **the** father of one patient, not **a** father of one patient.

Hm. So how do you express the concept of an undetermined relative of some patient? The text you quoted would say that [one patient's relative] means the relative of one patient -- how do I express a relative of one patient?

Replies from: Randy_M
comment by Randy_M · 2013-07-30T21:12:48.916Z · LW(p) · GW(p)

Didn't you just?

Replies from: Lumifer
comment by Lumifer · 2013-07-30T21:26:05.849Z · LW(p) · GW(p)

Well, of course there are ways to rephrase most anything. I am, however, interested in whether there's a way to express the "a relative of one patient" notion through the possessive 's.

A related question is whether a native speaker would be sure that one patient's relative necessarily means the relative, or whether he would find it ambiguous between the relative and a relative.

Replies from: komponisto
comment by komponisto · 2013-07-30T23:41:07.563Z · LW(p) · GW(p)

In a specialized context (such as among people who work at a hospital), "patient's relative" could conceivably become a set phrase, in which case sentences such as "there are some patient's relatives waiting outside" would become possible (contrast * "there are some Greg Egan's stories on the shelf").

This is presumably what happened with "girls' school". Very rarely, it can even happen with proper nouns, as in the mathematical term Green's function. But this is not part of the syntax of the possessive; it is the result of the whole possessive phrase being treated as a unit. (When you hear "the Green's function for this operator" for the first time, you immediately know that "Green's function" is a jargon phrase, because of the irregular syntax.)

comment by komponisto · 2013-07-30T17:43:12.946Z · LW(p) · GW(p)

(My comment was generated by the spontaneous reaction and reflection of a native speaker rather than memory of any deliberately learned rule.) Wikipedia has this to say:

In English and some other languages, the use of such a word implies the definite article. For example, my car implies the car that belongs to me/is used by me; it is not correct to precede possessives with an article (* the my car) or other definite determiner such as a demonstrative (* this my car)

One should indeed think of "'s" in this context as the equivalent for nouns of what "my" is for the pronoun "I".

comment by ESRogs · 2013-07-30T23:39:32.385Z · LW(p) · GW(p)

Haven't read it, but perhaps you mean this one? It sounds very interesting!

comment by Jayson_Virissimo · 2013-07-30T05:08:27.033Z · LW(p) · GW(p)

Taboo "acausal control."

Replies from: ESRogs
comment by ESRogs · 2013-07-30T09:57:29.524Z · LW(p) · GW(p)

Hmm, okay, to put it another way -- if we avoid running ancestor simulations for the purpose of maximizing the probability that we are not in a simulation, is it valid to, based on this fact, increase our credence in not being in a simulation?

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-07-30T22:42:27.538Z · LW(p) · GW(p)

I think so. If we decided not to run a simulation, any would-be-simulators analogous to us would also choose not to run a simulation, so you've eliminated a bunch of worlds where simulations are possible.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-07-31T00:52:05.489Z · LW(p) · GW(p)

Only if those simulators are extremely similar to us. It may only take a very minor difference to decide to run simulations.

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-07-31T01:07:49.548Z · LW(p) · GW(p)

That is true, but irrelevant. Making the decision eliminates possible worlds in which we are simulations. Therefore we end up with fewer simulation-worlds out of our total list of potential future worlds, and thus our probability estimate of not being in a simulation must increase.

Or, to put it in Bayesian terms: P(we're not in a simulation | we chose not to run simulations) / P(we're not in a simulation) is greater than 1.
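A toy sketch of that update (the world counts below are invented purely for illustration, just to show the direction of the shift):

```python
# Sketch: a made-up ensemble of possible worlds. Conditioning on "we chose
# not to run simulations" discards the worlds incompatible with that choice,
# and the surviving set is (here, by construction) richer in non-simulated
# worlds.
worlds = [
    {"simulated": False, "chooses_not_to_run": True},
    {"simulated": False, "chooses_not_to_run": False},
    {"simulated": True,  "chooses_not_to_run": True},   # simulated by a near-copy of us
    {"simulated": True,  "chooses_not_to_run": False},
    {"simulated": True,  "chooses_not_to_run": False},
]

prior = sum(not w["simulated"] for w in worlds) / len(worlds)
survivors = [w for w in worlds if w["chooses_not_to_run"]]
posterior = sum(not w["simulated"] for w in survivors) / len(survivors)

print(prior, posterior)  # 0.4 0.5 -- credence in "not simulated" went up
```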

Replies from: JoshuaZ
comment by JoshuaZ · 2013-07-31T01:14:55.349Z · LW(p) · GW(p)

Sure, but by how much? If the ratio is something like 2 or even 5 or 10 this isn't going to matter much.

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-07-31T02:30:00.254Z · LW(p) · GW(p)

That's not the question.

if we avoid running ancestor simulations for the purpose of maximizing the probability that we are not in a simulation, is it valid to, based on this fact, increase our credence in not being in a simulation?

That's the question, and the answer is "yes."

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-07-31T16:05:36.639Z · LW(p) · GW(p)

Unless you round sufficiently small increases down to zero, which is what people generally do. If somebody asked me that, and I estimated that the difference in probability was .00000000001, then I would answer "no".

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-07-31T19:00:58.036Z · LW(p) · GW(p)

That is granted. However, I'm also fairly sure (p=.75) that the probability isn't that small, because by deciding not to simulate a civilization yourself, you have greatly decreased the probability of being in an infinite descending chain. There remain singleton chance simulations and dynamic equilibria of nested simulations, but those are both intuitively less dense in clones of your universe - so you've ruled out a significant fraction of possible simulation-worlds by deciding not to run simulations of your own universe.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-08-03T17:21:04.931Z · LW(p) · GW(p)

you have greatly decreased the probability of being in an infinite descending chain.

No matter what there aren't going to be any infinitely descending chains unless our understanding of the laws of physics is drastically wrong. You can't simulate n+1 bits with n qubits. So, even if you assume a quantum simulation for a purely classical setting, you still have strict limits.

There remains singleton chance simulations and dynamic equilibria of nested simulations, but those are both intuitively less dense in clones of your universe

I'm not sure what you mean here. Can you expand?

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-08-03T21:24:07.702Z · LW(p) · GW(p)

Imagine that some Clarktech version of ourselves dedicates an entire galaxy to simulating the Milky Way. Would we have noticed by now?

Neither does the simulation need to be perfect: it only needs to be perfect wherever we actually look. This makes for a much more complex program, but might save on computing costs.

Anyway, yeah, you probably won't get an infinite chain, but you'll get a very long one, which leads to my second point:

A "singleton chance simulation" just means that someone randomly decided to simulate our universe in particular. This is rather unlikely.

A "dynamic equilibria of nested simulation" just means that Universe A simulates Universe B simulates Universe C which simulates Universe A, creating a descending chain that is not as dense as an immediate recursion, A->A->A.

Both these cases will contribute less possible universes than a (near-)infinite descending chain, so by eliminating the descending chain you've greatly decreased the probability of being in a simulation.

comment by Tenoke · 2013-07-30T10:27:38.669Z · LW(p) · GW(p)

No. It is unreasonable to think that all simulations are ancestral anyway. Even if no one runs ancestral simulations, people will still run simulations of other possible worlds for a variety of reasons, and we will likely be in one of those. And anyway, as soon as you can make a complete ancestral simulation (without knowing of any way to do so without giving consciousnesses/qualia/whatever to the simulated), you can be >99% confident that you live in a simulation, no matter whether you run anything yourself or not.

Replies from: NancyLebovitz, ESRogs, NancyLebovitz
comment by NancyLebovitz · 2013-07-30T16:57:42.759Z · LW(p) · GW(p)

I strongly recommend not using "stupid". It's less distracting to just point out mistakes without using insults.

Replies from: Tenoke
comment by Tenoke · 2013-07-30T19:53:01.939Z · LW(p) · GW(p)

Changed to "unreasonable", if that helps.

Replies from: Ben_LandauTaylor, NancyLebovitz
comment by Ben_LandauTaylor · 2013-07-30T20:50:55.661Z · LW(p) · GW(p)

That is less insulting, and therefore an improvement. A version that's not even a little insulting might look something like "Not all simulations are ancestral." That approach expresses disagreement with the original claim, but doesn't connote anything about the person who made it.

Replies from: Tenoke, army1987
comment by Tenoke · 2013-07-30T21:08:46.817Z · LW(p) · GW(p)

However, your version completely skips what I am actually saying - that I think that whole line of thinking is bad.

comment by A1987dM (army1987) · 2013-07-31T10:08:12.323Z · LW(p) · GW(p)

A version that's not even a little insulting might look something like "Not all simulations are ancestral."

There's a difference between “it is unreasonable to think X” and “not X”. (Let X equal “the sixteenth decimal digit of the fine structure constant is 3”, for example.)

(I'd use “There's no obvious good reason to think that all simulations are ancestral.”)

comment by NancyLebovitz · 2013-07-30T22:20:26.118Z · LW(p) · GW(p)

"Unreasonable" is an improvement, but I'd take it further to "mistaken" or "highly implausible".

Actually, I agree with you about the likelihood of numerous sorts of simulations that highly outnumber ancestor simulations.

comment by ESRogs · 2013-07-30T23:25:55.883Z · LW(p) · GW(p)

It is unreasonable to think that all simulations are ancestral anyway.

Point taken regarding ancestor simulations, but I don't think that resolves the question. What we choose to do is still evidence about what others will choose to do whether or not the choice is about simulating ancestors or just other possible worlds.

as soon as you can make a complete ancestral simulation ... you can be >99% confident that you live in a simulation

In Bostrom's formulation there is also the possibility that civilizations capable of ancestor simulations will overwhelmingly choose not to. It's not obvious to me that this is one of the horns of the trilemma to reject.

I can think of at least two reasons why it might be a convergent behavior not to run ancestor simulations:

1) Civilizations capable of running ancestor simulations might overwhelmingly have morals that dissuade them from subjecting sentient beings to such low standards of living as their ancestors had.

2) Such civilizations may wish to exert acausal control over whether they are in a simulation. This is the motivation for my question.

Replies from: Tenoke
comment by Tenoke · 2013-07-31T09:46:23.996Z · LW(p) · GW(p)

In Bostrom's formulation there is also the possibility that civilizations capable of ancestor simulations will overwhelmingly choose not to. It's not obvious to me that this is one of the horns of the trilemma to reject.

Again, you are making Bostrom's mistake of focusing on ancestral simulations. This is likely why this option seems plausible to you, as it did to him - it looks much more plausible that people will decide not to run any ancestral simulations because of their morals than that people will decide not to run any simulations whatsoever.

1) Civilizations capable of running ancestor simulations might overwhelmingly have morals that dissuade them from subjecting sentient beings to such low standards of living as their ancestors had.

This is theoretically possible, but realistically there is little reason to expect all posthuman civilizations to have such morals with regard to arbitrary creatures. We certainly don't seem to be the type of civilization which would sacrifice the utility gained by running simulations for some questionable moral reasons - or at least not with a probability that is close to 1. Additionally, the mindspace for all posthuman agents is huge - you need a large amount of evidence to conclude that it is likely for all posthuman civilizations to be so moral.

Such civilizations may wish to exert acausal control over whether they are in a simulation. This is the motivation for my question.

Similarly, mind space is huge and it seems really unlikely by default that most posthuman societies will never run a simulation just on that basis. Furthermore, it is enough if only 1 in every billion posthuman civilizations runs simulations for it to be more likely that we are in a simulation than not, provided that the average simulator civilization runs more than a billion simulations in its history.
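The observer-counting behind that last claim, as a sketch with purely illustrative numbers:

```python
# Illustrative numbers only: even a tiny fraction of simulator civilizations
# dominates the count of civilizations like ours, once each simulator runs
# enough simulations.
p_simulator = 1e-9   # 1 in a billion posthuman civilizations runs simulations
sims_each = 2e9      # ...and each of those runs 2 billion simulations

sims_per_base_civ = p_simulator * sims_each        # expected simulations per real civilization
frac_simulated = sims_per_base_civ / (sims_per_base_civ + 1)
print(frac_simulated)  # ~0.67: simulated civilizations outnumber real ones
```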

Furthermore, in order for most posthuman civilizations not to run any simulations, there needs to be some sort of 100% efficient way to prevent rogue agents from developing simulations. This could be possible but is still mostly unlikely. Even if somehow all posthuman societies always decide to never run a single simulation (for which there is no evidence), it is unlikely that all those civilizations also have a world-wide simulation-prevention mechanism in place from the very moment when simulations become technologically possible in that world.

Replies from: ESRogs
comment by ESRogs · 2013-07-31T11:02:02.612Z · LW(p) · GW(p)

you are making Bostrom's mistake of focusing on ancestral simulations

Again, this seems irrelevant. I talked about ancestor simulations because that's how it's worded in the Simulation Argument, but as I said in the post above, as far as I can tell the logic doesn't depend on it. Just replace 'simulations of ancestors' with 'simulations of worlds containing sentient beings'.

As for the rest of your post, those are fine arguments for why the second horn of the trilemma should be rejected. I don't find them absolutely convincing, so I still assign non-negligible credence to option 2 (and thus still find the acausal control question interesting), but I don't have strong counterarguments either, so if you do assign negligible credence to option 2, perhaps we'll have to agree to disagree on this point.

Replies from: Tenoke
comment by Tenoke · 2013-07-31T11:05:19.240Z · LW(p) · GW(p)

so if you do assign negligible credence to option 2, perhaps we'll have to agree to disagree on this point.

I do, and based on the wording of your comment, you have no real reason not to either.

Replies from: ESRogs
comment by ESRogs · 2013-07-31T11:23:04.545Z · LW(p) · GW(p)

Did you miss this part?

I don't find them absolutely convincing

Replies from: Tenoke
comment by Tenoke · 2013-07-31T11:41:42.128Z · LW(p) · GW(p)

Nope. They weren't meant to be absolutely convincing - option 2) is possible, just not probable.

Replies from: ESRogs
comment by ESRogs · 2013-07-31T14:23:07.695Z · LW(p) · GW(p)

Perhaps. I will have to think about it some more.

comment by NancyLebovitz · 2013-07-31T05:49:45.473Z · LW(p) · GW(p)

And anyway, as soon as you can make a complete ancestral simulation (without knowing of any way to do so without giving consciousnesses/qualia/whatever to the simulated), you can be >99% confident that you live in a simulation, no matter whether you run anything yourself or not.

Do inaccurate ancestral simulations count for anything in this argument? Admittedly, I'm extrapolating from humans as I know them, but the combination of incomplete research, simulations modified for convenience and/or tolerability and/or to improve the story, and interest in what-if scenarios implies that even if you're in an ancestor simulation run by an ancestor-simulation-creating civilization, you won't be that much like the actual ancestor.

Just for the fun of it, the Borgias on tv.

Replies from: Tenoke
comment by Tenoke · 2013-07-31T09:49:10.579Z · LW(p) · GW(p)

It doesn't matter at all whether you are a simulation of an accurate ancestor, an inaccurate ancestor, or HJPEV. As I am trying to point out, there is nothing special about ancestral simulations and no real reason to focus only on them.

comment by linkhyrule5 · 2013-08-02T08:04:07.300Z · LW(p) · GW(p)

Waffled between putting this here and putting this in the Stupid Questions thread:

Why is the default assumption that a superintelligence of any type will populate its light cone?

I can see why any sort of tiling AI would do this - paperclip maximizers and the like. And for obvious reasons there's an inherent problem with predicting the actions of an alien FAI (friendly relative to alien values).

But it certainly seems to me that a human CEV-equivalent wouldn't necessarily support lightspeed expansion. Certainly, humanity has expanded whenever it has had the opportunity - but not at its maximum speed, nor did entire population centers move. The top few percent of adventurous or less-affluent people leave, and that is all.

On top of this, I ... well, I can't say "can't imagine," but I find it unlikely that a CEV would support mass cloning or generation of humans (though if it supports mass uploading, then accelerated living might produce a population boom sufficient to support luminal expansion). In which case, an FAI that did occupy as much space as possible, as rapidly as possible, would find itself spending resources on planets that wouldn't be used for millennia, when it could instead focus on improving local life.

There is, of course, the intelligence-explosion argument, but I'd think even intelligence would hit diminishing marginal returns eventually.

So to sum up, it seems not unreasonable that certain plausible categories of superintelligences would willingly not expand at near-luminal velocities - in which case there's quite a bit more leeway in the Fermi Paradox.

Replies from: DanielLC, Oscar_Cunningham
comment by DanielLC · 2013-08-03T03:26:45.836Z · LW(p) · GW(p)

Due to the way the universe expands, even if you travel at the speed of light forever, you can only reach a finite portion of it. The longer you wait, the less that is. Because of this, an AI that doesn't send out probes as fast as possible and, to a lesser extent, as soon as possible, will only be able to control a smaller portion of the universe. If you have any preferences about what happens in the rest of the universe, you'd want to leave early.

Also, as Oscar said, you don't want the resources you can easily reach to go to waste while you're putting off using them.

comment by Oscar_Cunningham · 2013-08-02T11:22:48.297Z · LW(p) · GW(p)

It's because we want to secure as many resources as possible, before the aliens get to them.

I expect an FAI to expand rapidly, but merely securing resources and saving them for humans to use much later.

Replies from: Lumifer, linkhyrule5, wadavis
comment by Lumifer · 2013-08-02T20:04:12.974Z · LW(p) · GW(p)

I expect an FAI to expand rapidly, but merely securing resources and saving them for humans to use much later.

So maybe the Solar System has been secured by an alien-FAI and we're being saved for the aliens to use much later..?

Replies from: Oscar_Cunningham, None
comment by Oscar_Cunningham · 2013-08-02T20:36:52.125Z · LW(p) · GW(p)

It's totally possible, but given the reason nyan_sandwich gives, they'd have to have a good reason for staying hidden.

comment by [deleted] · 2013-08-02T20:16:15.768Z · LW(p) · GW(p)

Most valuable of those resources is free energy. The sun is burning that into low grade light and heat at an incredible rate.

Replies from: Lumifer
comment by Lumifer · 2013-08-02T20:41:20.227Z · LW(p) · GW(p)

So does that imply that a rapidly expanding resource-saving FAI would go around extinguishing stars?

Replies from: None, DanielLC, Oscar_Cunningham
comment by [deleted] · 2013-08-02T22:10:58.245Z · LW(p) · GW(p)

Seems prudent to do.

Unless it values the existence of stars more than it values other things it could do with that energy.

Replies from: Nisan
comment by Nisan · 2013-08-04T16:03:26.428Z · LW(p) · GW(p)

Upvoted for being the first instance I've seen of someone describing extinguishing all the stars in the night sky as being prudent.

comment by DanielLC · 2013-08-03T03:23:38.325Z · LW(p) · GW(p)

I suspect using them is more likely. They certainly aren't going to just let them keep wasting fuel. Not unless they have the opportunity to prevent even more waste. For example, they will send out probes to other systems before worrying too much about this system.

comment by Oscar_Cunningham · 2013-08-03T00:16:39.872Z · LW(p) · GW(p)

extinguishing stars

Is that even possible!? The FAI would want to somehow pause the burning of the star, allowing it to begin producing energy again when needed. For example, collapsing it into a black hole wouldn't be what we want, since the energy would be wasted.

Would star lifting be enough to slow the burning of a star to a standstill?

comment by linkhyrule5 · 2013-08-02T19:35:35.414Z · LW(p) · GW(p)

Hm. Point.

comment by wadavis · 2013-08-02T14:59:09.699Z · LW(p) · GW(p)

Read up on the Dominion Lands Act and the Homestead Act for a historic human precedent.

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-08-02T19:32:32.285Z · LW(p) · GW(p)

Right, but I'm not sure that's the right precedent to use. Space is big: it'd be more equivalent to, oh, dumping the Lost Roman Legion in a prehistoric Asia and expecting them to divvy up the continent as fast as they could march.

Replies from: wadavis
comment by wadavis · 2013-08-02T20:19:43.995Z · LW(p) · GW(p)

Davy Jones: One Soul is not equal to another

Jack Sparrow: Aha! So we've established my proposal is sound in principle, now we're just haggling over price.

-- Pirates of the Caribbean: Dead Man's Chest

Or in this case, scope instead of price.

Jokes aside, the point is that the sponsored settlement of the prairies had an influence on the negotiations of the Canada/U.S.A. border. If a human civilization believed that it might have future competition with aliens for territory in space, it would make sense for it to secure as much as possible as a Schelling Point in negotiations/conflicts.

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-08-03T00:32:34.745Z · LW(p) · GW(p)

Point granted.

... and once an FAI has sent out probes to claim territory anyway, it loses nothing by making those probes nanotech with a copy of the FAI loaded on them, so we would indeed expect to see lightspeed expansions of FAI-controlled civilizations. Fair enough, then.

comment by niceguyanon · 2013-07-31T23:02:44.795Z · LW(p) · GW(p)

If I believe that automation causing mass unemployment is around the corner (10-20 years), what do I do or invest in now to prepare for it?

Replies from: gwern, Username, Omid
comment by gwern · 2013-07-31T23:21:44.980Z · LW(p) · GW(p)

Acquire as much capital as you can, presumably. If the share of economic growth for labor is falling, that of capital must be rising. The topic has come up before but I'm not sure anyone had more concrete advice than index funds - it's tempting to try to invest in software or specific tech companies, except then you're basically being a VC and it's very hard to pick the winners.

Replies from: Jayson_Virissimo
comment by Username · 2013-08-02T17:02:51.891Z · LW(p) · GW(p)

You can train yourself in one of the industries you expect to thrive. This could either be the high-tech route of being the one programming and developing the machines, or it could be a job that never goes away, like plumbing/carpentry/welding. All of these can earn six figures; it's a matter of the type of work you like doing.

comment by Omid · 2013-08-03T05:14:44.342Z · LW(p) · GW(p)

Move to a socialist country.

comment by Adele_L · 2013-07-31T02:38:27.401Z · LW(p) · GW(p)

What is the function of the karma awards page?

Replies from: Nornagest
comment by Nornagest · 2013-07-31T03:46:53.068Z · LW(p) · GW(p)

There's been some discussion about incentivizing people to do useful things for the community by putting up karma bounties, thus removing some of the uncertainty inherent in upvotes. The most comprehensive thread I could find is here; two years old, but LW development grinds slow.

That's my best guess, anyway.

Replies from: Adele_L
comment by Adele_L · 2013-07-31T03:57:45.563Z · LW(p) · GW(p)

Ok, thanks! Seems like an interesting plan, I hope it can get implemented.

comment by sixes_and_sevens · 2013-07-30T12:20:58.529Z · LW(p) · GW(p)

Warning: politics, etc., etc.

What do conservative political traditions squabble over?

My upbringing and social circles are moderately left-wing. There's a well-observed failure mode in these circles, not entirely dissimilar to what's discussed in Why Our Kind Can't Cooperate, where participants sabotage cooperation by going out of their way to find things to disagree about, presumably for moral posturing and virtue-signalling reasons.

In recent years I have become fairly sceptical of intrinsic differences between political groups, which leads me to my opening question: what do conservative political traditions squabble over? I find it hard to imagine what form this sort of self-sabotaging moral posturing might take. Can anyone who grew up on the other side of the fence offer any insight?

Replies from: palladias, Lumifer, Randaly, Randy_M, Alejandro1, JoshuaZ
comment by palladias · 2013-07-30T16:51:21.839Z · LW(p) · GW(p)

We used to nutshell it as Trads vs Libertarians in college. Here are the relevant strawmen each group has of the other. (Hey, you asked what the fights look like!)

Trads see libertarians as: Just as prone to utopian thinking as those wretched liberals, or else shamelessly callous. Either they really do believe that people will just be naturally good without laws or institutions (what piffle!) or they just don't care about the casualties and trust that they themselves will rise to the top of their brutal, anarchic meritocracy. Not to mention that some of them could be more accurately described as libertines and just want an excuse for license.

Libertarians see trads as: Hidebound sticks-in-the-mud. They'd rather have people following arbitrary rules than thinking critically. They despise modernity, but don't actually have a positive vision of what they want instead (they're prone to ruefully shaking their heads and saying "Everything went downhill after the 1950s, or the American Revolution, or the Fall of Man"). By proposing ridiculous schemes (a surprising number have monarchist sympathies!) and washing their hands of governance in a show of 'epistemological modesty' and 'subsidiarity', they wriggle out of putting principles into practice.

comment by Lumifer · 2013-07-30T20:44:05.922Z · LW(p) · GW(p)

The left-to-right political axis is a very poor tool for looking at political goals/values/theories/opinions/etc.

First, to even talk about it you need to specify at least the locality. "Left" (or, say, "liberal") in the US means something different from what "left" (or "liberal") means in Europe. I'd wager it means something different yet in China, Russia, India...

Second, one dimension is clearly inadequate for political analysis. For example, consider a very important (IMHO) concept in politics: statism. Is the American left statist? Well, kinda. They are statist economically but not culturally. Is the American right statist? Well, kinda. They are statist morally but not economically. I'm, of course, speaking in crude generalizations here.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-31T10:47:18.001Z · LW(p) · GW(p)

First, to even talk about it you need to specify at least the locality. "Left" (or, say, "liberal") in the US means something different from what "left" (or "liberal") means in Europe.

“Left” and “liberal” in the US and “left” in Europe mean more-or-less similar things, whereas “liberal” in Europe often means something else entirely. (I once made a longer comment about that somewhere, I'll link to it when I find it. EDIT: here it is.)

Replies from: ChristianKl
comment by ChristianKl · 2013-07-31T12:51:27.294Z · LW(p) · GW(p)

Obama is considered left in the US.

From a German perspective he's a lot more right-wing than Angela Merkel, who is Germany's right-wing chancellor.

Angela Merkel wouldn't put the government employee who exposed torture into prison while not charging anyone who tortured with crimes.

Replies from: army1987, Eugine_Nier
comment by A1987dM (army1987) · 2013-07-31T13:19:20.891Z · LW(p) · GW(p)

I meant in a relative sense, not in an absolute one: AFAIK, Obama is more “left” than his competition (other mainstream American politicians), and Merkel is less “left” than her competition (other mainstream German politicians), where “left” in both cases refers to the south-westwards direction (direction, not region) on the Political Compass. AFAIK “liberal” in the US also generally refers to that direction, whereas ISTM that in Europe it often refers to the eastward direction.

Replies from: ChristianKl
comment by ChristianKl · 2013-07-31T14:06:48.897Z · LW(p) · GW(p)

Yes. In a relative sense I think left and right mean the same things.

Liberal in Europe refers to the southward direction on the compass. UK liberals wanted the UK to get rid of nuclear weapons because they considered them too expensive.

In Europe we also tend to speak about neoliberalism. That basically means the Washington consensus policies and all the policies for which corporate money pays. That means things like free trade agreements like NAFTA, putting children into school a year earlier so that they are sooner available to join the workforce, taking political power away from states and cities, PPP, reducing taxes, and cutting the social safety net.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-31T17:33:08.836Z · LW(p) · GW(p)

In Europe we also tend to speak about neoliberalism.

Yes, I guess that one was the meaning I was familiar with. (The Italian Liberal Party is in a centre-right coalition.)

comment by Eugine_Nier · 2013-08-02T04:50:17.698Z · LW(p) · GW(p)

From a German perspective he's a lot more right-wing than Angela Merkel, who is Germany's right-wing chancellor.

Angela Merkel wouldn't put the government employee who exposed torture into prison while not charging anyone who tortured with crimes.

That depends on the issue in question.

Replies from: ChristianKl
comment by ChristianKl · 2013-08-04T15:30:15.885Z · LW(p) · GW(p)

Could you give an example where Obama pushes a left policy that's more left than Merkel's position on the same issue?

Replies from: Jayson_Virissimo, Eugine_Nier
comment by Jayson_Virissimo · 2013-08-07T05:36:14.998Z · LW(p) · GW(p)

It depends when in time you compare them. Merkel did come out against a federal minimum wage at one point (during the election). IMO, that is more "right-wing" in the sense people usually mean by it (although I don't particularly like the term). As far as I know, Obama has never publicly criticized the federal minimum wage.

Replies from: ChristianKl
comment by ChristianKl · 2013-08-07T11:54:33.386Z · LW(p) · GW(p)

Basically, both politicians don't want to change anything about the minimum wage, but to stay with the status quo.

The German solution was, for a long time, to have binding contracts between employers and unions about what minimum wage had to be paid in certain sectors.

Even employers in those sectors that didn't take part in the negotiations were supposed to be bound by them.

Some sectors, such as temp work, have since gotten, by law, a minimum wage of €7.50 = $9.97 because there were no binding labor contracts. That's a lot higher than the US minimum wage of $7.25 = €5.44.

It's fairly recent in Germany that the left started to call for a minimum wage. I think nearly nobody who calls for a minimum wage in Germany would feel that he had achieved much if the minimum wage were at US levels.

Obama certainly isn't trying to get the minimum wage raised to the kind of level that the people who call for a minimum wage in Germany want.

comment by Eugine_Nier · 2013-08-06T01:47:53.888Z · LW(p) · GW(p)

I haven't been paying that much attention to German economic policy.

comment by Randaly · 2013-07-30T21:44:42.363Z · LW(p) · GW(p)

At least in the US since the 60's, another way to divide conservatives has been along the party's three big issues: economic classical liberalism, social conservatism, and foreign-policy neo-conservatism. The moderate, short-term goals of these groups are sometimes in alignment, but their desired end-states look very different:

  • Neo-conservatives want a big military and an aggressive foreign policy, whereas classical liberals hate war and want to shrink the military, along with the rest of the government; and religious conservatives (generally - the prevalence of the other groups has led to abnormalities in the most famous preachers) hate war and love peace.

  • Religious conservatives are generally fine with the welfare state and regulations, and support restrictive social laws; whereas classical liberals hate all of the above.

  • Classical liberals want to shrink (or drown) the government, which both of the other groups oppose for various reasons: some to most religious conservatives like environmentalism and the idea of a safety net, and neoconservatives love the military.

There's also a distinction between traditional politicians who support negotiation, moderation, and compromise, and the Tea Party-backed groups who don't.

comment by Randy_M · 2013-07-30T16:00:40.553Z · LW(p) · GW(p)

(entirely based on recent USA politics) My instinct is to say conservatives do less jockeying for status and have more substantive disagreements with each other (not without vitriol, of course). I think this is true, but likely not as much as it seems to me.

One main conservative divide is over how much to use the state to influence the country towards traditional institutions versus staying within a libertarian framework. Social conservatives vs fiscal conservatives. Generally the first group still wants to work within the democratic process, and sees left groups as wanting to appeal to judges to find novel interpretations of existing laws (i.e., conservatives amending the state constitution to define marriage vs liberals finding existing non-discrimination amendments to apply more broadly than they were likely intended).

Social conservatives will want ordered, controlled immigration vs the open, almost unregulated immigration favored by fiscal conservatives (probably justice vs pragmatism), though both will affirm legal immigrants and both will likely want to reduce direct incentives for immigrants (i.e., welfare).

A mirror of this in foreign policy is libertarian isolationism vs hawkish/neo-con interventionism, the latter falling out of favor lately, as anger fades and war weariness sets in (or more charitably, people learn lessons and modify their theories).

There are other divisions that I don't think fall along the same lines. Another broad category is how radically to enact change. There is a bit of fundamental tension in a "conservative" philosophy in that at some point after losing a battle there is almost an obligation to conserve the victories of your opponents while fighting their next expansion. (By analogy, picture two nations fighting over borders where A wants to annex B, but B has an ideological goal to keep the borders set in place by each most recent treaty. Hence, I suspect, the rise of internet Reactionaries who want to do more than draw new lines in the sand.)

For example, all conservatives are going to be in favor of free markets, but some may differ on the needed level of intervention by regulators or quasi-governmental groups like the Fed, where those in favor of less are viewed as more conservative but may be called "out of the mainstream" or such. There are some who self-identify as conservatives and argue for expanded state-business cooperation/interference, such as GW Bush proposing TARP.

Another division, perhaps more petty, is over how much to compromise and work with liberals/Democrats vs standing on, and losing with, principles. Some argue that if Republicans articulate a conservative vision and do not sell out, people will embrace that; some argue that people probably won't, but that we should then let them get what they want by electing Democrats, and not let policies that [conservatives view as] inevitable failures be painted with a bipartisan brush, so as to make an object lesson of them; others argue that politics is messy, and we have to compromise to get the best policies that we can while working together with the other side. Optimism vs pessimism vs pragmatism.

Despite being overly long, I don't know if this answers your question or says anything non-obvious, as you seem to be asking about more petty disputes. I think that those tend to be a magnification of a difference along some of the axes mentioned above into not just a quantitative difference but an unbridgeable qualitative one. But there are fundamental disagreements, such that one can't say "I'm more conservative than you because I want more x than you" and expect it to hold sway and earn status points across the ideology. Well, maybe lower taxes.

comment by Alejandro1 · 2013-07-30T21:08:01.786Z · LW(p) · GW(p)

At the most basic level, the definitions are that the right wing wants to keep things as they are and the left wing wants to change them. There is one way to do the first, and innumerable ways to do the second. This probably accounts for a large part of the effect you observe.

(There are, of course, many exceptions to the given definition; for example, conservatives wanting to eliminate government programs that are currently part of the status quo. But in this case, they are likely to frame this as a return to a previous state when they didn't exist, which is still a well-defined Schelling point. Right-wingers that do not fit this categorization, such as extreme libertarians calling for a minimal state that has never existed, are known to squabble among themselves as much as left-wingers.)

Replies from: Randaly
comment by Randaly · 2013-07-31T01:26:56.692Z · LW(p) · GW(p)

the right wing wants to keep things as they are

This is not actually accurate. On virtually any issue you can think of, the right-wing consensus supports changes in government policy. This is true to an extent such that some have argued that Republicans oppose everything about the liberal executive branch and civil service, simply because Obama is in office.

Replies from: Randy_M
comment by Randy_M · 2013-07-31T21:29:50.376Z · LW(p) · GW(p)

"This is true to an extent such that some have argued that Republicans oppose everything about the liberal executive branch and civil service, simply because Obama is in office." The arguments could be rhetorical, hence not demonstrative of the extent of the truth of such proposition. Weak evidence without discussing how those arguments are put forth.

Replies from: Randaly
comment by Randaly · 2013-07-31T23:33:44.150Z · LW(p) · GW(p)

Are you claiming that Republicans are only claiming to oppose Obama, and secretly support him on many issues despite their habit of verbal attacks, filibustering policies they claim to support as a means of threatening Obama on unrelated issues, and swearing to avoid compromise? I would need very strong evidence to believe this.

Replies from: Randy_M
comment by Randy_M · 2013-08-01T14:26:06.058Z · LW(p) · GW(p)

I don't know how you get that from what I said. I would claim the following three things, at least, that are relevant:

Republicans are not an especially united group; some will filibuster the same policies that others support, like Rand Paul vs John McCain on the NSA programs.

Republicans, or pluralities of them, do not oppose all of the President's policies, such as much of the foreign policy and bank bailouts.

The opposition to the President's policies drives opposition to him being in office, and not vice versa.

Also, Republican and right wing are not synonyms.

Replies from: Randaly
comment by Randaly · 2013-08-01T22:17:14.772Z · LW(p) · GW(p)

I don't know how you get that from what I said.

Looking back, I misread your first post - I thought you were claiming that the Republicans' arguments were rhetorical. My response would've been: a) your response didn't really address my argument, given the section you disagreed with, and b) you have no reason to assume bad faith.

Republicans are not an especially united group; some will filibuster the same policies that others support, like Rand Paul vs John McCain on the NSA programs.

Well, yes, I wasn't claiming that every conservative holds the exact same opinion on everything; this is not true in politics in general, and is more-or-less assumed.

Republicans, or pluralities of them, do not oppose all of the President's policies, such as much of the foreign policy and bank bailouts.

The bank bailouts were conducted under President Bush, not Obama, and in any case poll poorly with all Americans, including Republicans. Americans as a whole oppose Obama's foreign policy, which has a 16% approval rating among Republicans.

The opposition to the President's policies drives opposition to him being in office, and not vice versa.

This is disproven by the fact that strong pluralities of Republicans supported almost identical policies under a different president.

Also, Republican and right wing are not synonyms.

In general, people base their identities around political parties or organizations like the Tea Party, not general political affiliation. Therefore, the relevant groups are political parties, not 'left-wing' vs 'right-wing'. Party membership is also a lot easier to measure. Therefore, people in general talk about the parties, rather than specific points on the left-right axis. (e.g. note that the above poll broke data down by Republicans vs. Democrats, not left-wing vs. right-wing)

Replies from: Randy_M, Eugine_Nier
comment by Randy_M · 2013-08-02T14:29:13.561Z · LW(p) · GW(p)

"This is disproven by the fact that strong pluralities of Republicans supported almost identical policies under a different president."

Well, look, I think you are casting people as acting in bad faith, but it is a lot more complicated than that: for example, different nuances in how the policies are crafted, promoted, or enforced; learning from what are viewed as mistakes; or different sentiments among the population at large. It's hard to say because you haven't given any examples.

I'm also not sure if you mean congressional Republicans or individual voters or activists or what have you.

But I'm not really interested in defending Republicans any further than this here.

comment by Eugine_Nier · 2013-08-02T04:58:26.399Z · LW(p) · GW(p)

Americans as a whole oppose Obama's foreign policy, which has a 16% approval rating among Republicans.

The poll in question fails to deal with the question of whether they think it is too interventionist, not interventionist enough, or something else.

comment by JoshuaZ · 2013-07-30T13:10:14.953Z · LW(p) · GW(p)

Not speaking based on what I grew up with, but this seems slightly more common on the American left than the American right. That said, examples of squabbles of similar forms on the right include fights over religion, such as arguing over whether voting for Mitt Romney was ok given that he was a Mormon (see e.g. here, with similar attacks on Glenn Beck). Recently, parts of the Tea Party called for a boycott of Fox News for being too pro-Obama. Similarly, some of the Protestants on the right are still not ok with Catholics, although they aren't a very large group and seem to be getting smaller. There's also a running trend in the fight between the more interventionist end of the right and the more isolationist end. See e.g. here. Another example: when Rick Perry tried to make HPV vaccination mandatory in Texas, there was blowback from the right as well as from libertarians generally.

But it seems that overall, these sorts of fights occur at a smaller scale than they do on the left. They don't involve as much splintering of organizations. And like many of the similar issues on the left, few people who aren't personally involved are paying much attention to them and even when one does, the differences often look small to outsiders even as the arguments get very heated.

Replies from: Randaly
comment by Randaly · 2013-07-31T01:36:26.675Z · LW(p) · GW(p)

At least in American politics, this seems to me to be cyclical: conservatives were very tightly united during the 80's and 90's, and are presently fairly divided. (Their present divisions are partially papered over by the two other factors that lead to increased party-bloc voting: the end of racism as an effective issue that ran across party lines, and a general increase in party-line/ideological voting that also shows up among Democrats. Non-substantive votes, like the historic near-failure of Boehner's run for Speaker of the House, and the Party's internal discussions, show divisions better.)

Replies from: Prismattic, None
comment by Prismattic · 2013-07-31T02:08:53.263Z · LW(p) · GW(p)

There have been some substantive examples as well. The TARP vote was considerably more divisive for Republicans than for Democrats. Both parties were about equally divided on the recent Amash Amendment vote (to defund the NSA).

comment by [deleted] · 2013-07-31T16:24:54.517Z · LW(p) · GW(p)

I don't think racism as an effective issue is over. Atwater's southern strategy seems alive and well to me. This was first executed (successfully?) by Reagan, and the pattern seems to hold. Here's Atwater's quote on the matter:

Atwater: You start out in 1954 by saying, "Nigger, nigger, nigger." By 1968 you can't say "nigger" — that hurts you. Backfires. So you say stuff like forced busing, states' rights and all that stuff. You're getting so abstract now [that] you're talking about cutting taxes, and all these things you're talking about are totally economic things and a byproduct of them is [that] blacks get hurt worse than whites. And subconsciously maybe that is part of it. I'm not saying that. But I'm saying that if it is getting that abstract, and that coded, that we are doing away with the racial problem one way or the other. You follow me — because obviously sitting around saying, "We want to cut this," is much more abstract than even the busing thing, and a hell of a lot more abstract than "Nigger, nigger.

Replies from: Randaly
comment by Randaly · 2013-08-01T00:01:09.481Z · LW(p) · GW(p)

This is not relevant to what I said, for several reasons. First, guessing at your beliefs, you almost certainly believe that only one party today is racist; therefore, racism is not an effective issue that runs across party lines. (Note that until the 60's-70's, the South was split between Democrats and Republicans; there were effectively four political groups in the US: racist Democrats, racist Republicans, non-racist Democrats, non-racist Republicans. This screwed with party-based analysis of voting patterns.) The second is that, so far as I know, Congress no longer holds any straight-up-or-down votes on racism a la the Voting Rights Act; racism itself is not an issue, as nobody would vote for it.

comment by Transfuturist · 2013-07-30T02:35:17.709Z · LW(p) · GW(p)

I believe I've encountered a problem with either Solomonoff induction or my understanding of Solomonoff induction. I can't post about it in Discussion, as I have less than 20 karma, and the stupid questions thread is very full (I'm not even sure if it would belong there).

I've read about SI repeatedly over the last year or so, and I think I have a fairly good understanding of it - good enough to at least follow along with informal reasoning about it. Recently I was reading Rathmanner and Hutter's paper, and Legg's paper, due to renewed interest in AIXI as the theoretical "best intelligence," and the Arcade Learning Environment used to test the computable Monte Carlo AIXI approximation. Then this problem came to me.

Solomonoff Induction uses the size of the description of the smallest Turing machine that outputs a given bitstring. I saw this as a problem. Say AIXI was reasoning about a fair coin. It would guess before each flip whether it would come up heads or tails. *Because Turing machines are deterministic, AIXI cannot make hypotheses involving randomness.* To model the fair coin, AIXI would come up with increasingly convoluted Turing machines, attempting to compress a bitstring that approaches Kolmogorov randomness as its length approaches infinity. Meanwhile, AIXI would be punished and rewarded randomly. This is not a satisfactory conclusion for a theoretical "best intelligence." So is the italicized statement a valid issue? An AI that can't delay reasoning about a problem by at least labeling it "sufficiently random, solve later" doesn't seem like a good AI, particularly in the real world where chance plays a significant part.
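As a crude empirical sketch of that incompressibility - using zlib's compressed size as a stand-in for description length, which only upper-bounds the true Kolmogorov complexity:

```python
import os
import zlib

# A record of fair coin flips resists compression; a regular string of the
# same length collapses to a short description.
coin_flips = os.urandom(1000)   # stands in for 8000 fair coin flips
patterned = b"01" * 500         # same length, trivially regular

print(len(zlib.compress(coin_flips, 9)))  # ~1000+ bytes: essentially incompressible
print(len(zlib.compress(patterned, 9)))   # ~tens of bytes: a short "program" suffices
```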

Naturally, Eliezer has already thought of this, and wrote about it in Occam's Razor:

The formalism of Solomonoff Induction measures the "complexity of a description" by the length of the shortest computer program which produces that description as an output. To talk about the "shortest computer program" that does something, you need to specify a space of computer programs, which requires a language and interpreter. Solomonoff Induction uses Turing machines, or rather, bitstrings that specify Turing machines. What if you don't like Turing machines? Then there's only a constant complexity penalty to design your own Universal Turing Machine that interprets whatever code you give it in whatever programming language you like. Different inductive formalisms are penalized by a worst-case constant factor relative to each other, corresponding to the size of a universal interpreter for that formalism.

In the better (IMHO) versions of Solomonoff Induction, the computer program does not produce a deterministic prediction, but assigns probabilities to strings. For example, we could write a program to explain a fair coin by writing a program that assigns equal probabilities to all 2^N strings of length N. This is Solomonoff Induction's approach to fitting the observed data. The higher the probability a program assigns to the observed data, the better that program fits the data. And probabilities must sum to 1, so for a program to better "fit" one possibility, it must steal probability mass from some other possibility which will then "fit" much more poorly. There is no superfair coin that assigns 100% probability to heads and 100% probability to tails.

Does this warrant further discussion, if only to validate or refute this claim? I don't think Eliezer's proposal for a version of SI that assigns probabilities to strings is strong enough; it doesn't describe what form the hypotheses would take. Would hypotheses in this new description be universal nondeterministic Turing machines, with the aforementioned probability distribution summed over the nondeterministic outputs?

Replies from: Qiaochu_Yuan, Adele_L, pengvado, Wei_Dai, passive_fist, Richard_Kennaway
comment by Qiaochu_Yuan · 2013-07-30T04:14:36.583Z · LW(p) · GW(p)

Hypotheses in this description are probabilistic Turing machines. These can be cashed out to programs in a probabilistic programming language.
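As a sketch of what such a hypothesis looks like - plain Python standing in for a proper probabilistic programming language, with made-up function names - a fair-coin hypothesis both samples observations and assigns probability 2^-N to every length-N string:

```python
import random

def fair_coin_probability(bits):
    """Probability this hypothesis assigns to an observed bit sequence."""
    return 0.5 ** len(bits)

def fair_coin_sample(n):
    """Draw one length-n observation from the hypothesis."""
    return [random.randrange(2) for _ in range(n)]

print(fair_coin_probability([1, 0, 1, 1]))  # 0.0625, the same for any 4-bit string
print(fair_coin_sample(4))                  # e.g. [0, 1, 1, 0]
```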

I think it's going too far to call this a "problem with Solomonoff induction." Solomonoff induction makes no claims; it's just a tool that you can use or not. Solomonoff induction as a mathematical construct should be cleanly separated from the claim that AIXI is the "best intelligence," which is wrong for several reasons.

Replies from: Transfuturist
comment by Transfuturist · 2013-07-30T04:21:40.568Z · LW(p) · GW(p)

Can probabilistic Turing machines be considered a generalization of deterministic Turing machines, so that DTMs can be described in terms of PTMs?

Editing in reply to your edit: I thought Solomonoff Induction was made for a purpose. Quoting from Legg's paper:

Solomonoff's induction method is an attempt to design a general all purpose inductive inference system. Ideally such a system would be able to accurately learn any meaningful hypothesis from a bare minimum of appropriately formatted information.

I'm just pointing out what I see as a limitation in the domain of problems classical Solomonoff Induction can successfully model.

Replies from: Qiaochu_Yuan, Pfft
comment by Qiaochu_Yuan · 2013-07-30T05:14:55.736Z · LW(p) · GW(p)

Can probabilistic Turing machines be considered a generalization of deterministic Turing machines, so that DTMs can be described in terms of PTMs?

Yes.

I'm just pointing out what I see as a limitation in the domain of problems classical Solomonoff Induction can successfully model.

I don't think anyone claims that this limitation doesn't exist (and anyone who claims this is wrong). But if your concern is with actual coins in the real world, I suppose the hope is that AIXI would eventually learn enough about physics to just correctly predict the outcome of coin flips.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-07-30T12:50:52.382Z · LW(p) · GW(p)

The steelman is to replace coin flips with radioactive decay and then go through with the argument.

comment by Pfft · 2013-07-30T05:05:50.428Z · LW(p) · GW(p)

Yes.

comment by Adele_L · 2013-07-30T04:29:25.888Z · LW(p) · GW(p)

the stupid questions thread is very full

Might be worth having those more often too; the last one was very popular, and had lots of questions that open threads don't typically attract.

Because Turing machines are deterministic, AIXI cannot make hypotheses involving randomness. To model the fair coin, AIXI would come up with increasingly convoluted Turing machines, attempting to compress a bitstring that approaches Kolmogorov randomness as its length approaches infinity. Meanwhile, AIXI would be punished and rewarded randomly.

Just a naïve thought, but maybe it would come up with MWI fairly quickly because of this. (I can imagine this being a beisutsukai challenge – show a student radioactive decay, and see how long it takes them to come up with MWI.) A probabilistic one is probably better for the other reasons brought up, though.

Replies from: David_Gerard, Transfuturist
comment by David_Gerard · 2013-07-30T12:21:47.406Z · LW(p) · GW(p)

the stupid questions thread is very full

Might be worth having those more often too; the last one was very popular, and had lots of questions that open threads don't typically attract.

Someone want to start one day after tomorrow? Run monthly or something? Let's see what happens.

comment by Transfuturist · 2013-07-30T04:54:49.619Z · LW(p) · GW(p)

To come up with MWI, it would have to conceive of different potentialities and then a probabilistic selection. I don't know, I'm not seeing how deterministic Turing machines could model that.

Replies from: Kawoomba, Mitchell_Porter
comment by Kawoomba · 2013-07-30T08:41:46.603Z · LW(p) · GW(p)

Do you see how a nondeterministic Turing machine could model that?

If so ... ... ...

comment by Mitchell_Porter · 2013-07-30T07:55:47.786Z · LW(p) · GW(p)

Suppose in QM you have a wavefunction which recognizably evolves into a superposition of wavefunctions. I'll write that psi0, the initial wavefunction, becomes m.psi' + n.psi'', where m and n are coefficients, and psi' and psi'' are basis wavefunctions.

Something slightly analogous to the MWI interpretation of this could be seen in a Turing machine which started with one copy of a bitstring, PSI0, and which replaced it with M copies of the bitstring PSI' and N copies of the bitstring PSI''. That would be a deterministic computation which replaces one world, the single copy of PSI0, with many worlds, the multiple copies of PSI' and PSI''.

So it's straightforward enough for a deterministic state machine to invent rules corresponding to a proliferation of worlds. In fact, in the abstract theory of computation, this is one of the standard ways to model nondeterministic computation - have a deterministic computation which deterministically produces all the possible paths that could be produced by the nondeterministic process.

However, the way that QM works, and thus the way that a MWI theory would have to work, is rather more complicated, because the coefficients are complex numbers, the probabilities (which one might suppose correspond to the number of copies of each world) are squares of the absolute values of those complex numbers, and probability waves can recombine and destructively interfere, so you would need worlds / bitstrings to be destroyed as well as created.

In particular, it seems that you couldn't reproduce QM with a setup in which the only fact about each world / bitstring was the number of current copies - you need the "phase information" (angle in the complex plane) of the complex numbers, in order to know what the interference effects are. So your Turing machine's representation of the state of the multiverse would be something like:

(complex coefficient associated with the PSI' worlds) (list of M copies of the PSI' bitstring); (complex coefficient associated with the PSI'' worlds) (list of N copies of the PSI'' bitstring) ; ...

and the "lists of copies of worlds" would all be dynamically irrelevant, since the dynamics comes solely from recombining the complex numbers at the head of each list of copies. At each timestep, the complex numbers would be recomputed, and then the appropriate number of world-copies would be entered into each list.
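A minimal sketch of that bookkeeping (the amplitudes and the copy-count resolution below are made up; the point is that the dynamics touch only the complex numbers, and the copy lists are rebuilt from them):

```python
import math

RESOLUTION = 100  # total number of world-copies maintained on the "tape"

def rebuild_copy_counts(amplitudes):
    """Map {world: complex amplitude} -> {world: copy count ~ |amplitude|^2}."""
    weights = {w: abs(a) ** 2 for w, a in amplitudes.items()}
    total = sum(weights.values())
    return {w: round(RESOLUTION * p / total) for w, p in weights.items()}

# PSI0 has split into PSI' and PSI'' with unequal (made-up) amplitudes:
amps = {"PSI'": complex(math.sqrt(0.8), 0.0),
        "PSI''": complex(0.0, math.sqrt(0.2))}
print(rebuild_copy_counts(amps))  # {"PSI'": 80, "PSI''": 20} -- Born-rule weights
```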

But although it's dynamically irrelevant, that list of copies of identical worlds is still performing a function, namely, it's there to ensure that there actually are M out of every (M+N) observers experiencing worlds of type PSI', and N out of every (M+N) observers experiencing worlds of type PSI''. If the multiverse representation was just

(complex coefficient of PSI' world) (one copy of PSI' world) ; (complex coefficient of PSI'' world) (one copy of PSI'' world) ; ...

then all those complex numbers could still evolve according to the Schrodinger equation, but you would only have one observer seeing a PSI' world, and one observer seeing a PSI'' world, and this is inconsistent with observation, where we see that some quantum events are more probable than others.

This is the well-known problem of recovering the Born probabilities, or justifying the Born probability rule - mentioned in several places in the QM Sequence - but expressed in the unusual context of bit-strings on a Turing tape.

(Incidentally, I have skipped over the further problem that QM uses continuous rather than discrete quantities, because that's not a problem of principle - you can just represent the complex numbers the way we do on real computers, to some finite degree of binary precision.)

Replies from: Kawoomba
comment by Kawoomba · 2013-07-30T08:44:12.058Z · LW(p) · GW(p)

... keep in mind that deterministic Turing machines can trivially simulate nondeterministic Turing machines.

Replies from: JoshuaZ, Transfuturist
comment by JoshuaZ · 2013-07-30T12:49:06.739Z · LW(p) · GW(p)

... keep in mind that deterministic Turing machines can trivially simulate nondeterministic Turing machines.

The problem here seems to be one of terminology. You are using "nondeterministic Turing machine" in the formal sense of the term, whereas Mitchell seems to be using "nondeterministic" closer to "has a source of random bits."

comment by Transfuturist · 2013-07-30T19:01:20.837Z · LW(p) · GW(p)

Trivially? I was under the impression that it could involve up to an exponential slowdown, while probabilistic Turing machines can simulate deterministic Turing machines by merely assigning probability 1 to a single transition at each step.

Replies from: Kawoomba
comment by Kawoomba · 2013-07-30T19:07:05.236Z · LW(p) · GW(p)

Algorithmically trivially; I didn't see anyone concerned about running times.

Replies from: Transfuturist
comment by Transfuturist · 2013-07-30T19:36:49.397Z · LW(p) · GW(p)

Well, wouldn't that be because it's all theorizing about computational complexity?

I see the point. Are pseudorandom number generators what you mean by simulation of nondeterminism in a DTM? Would a deterministic UTM with an RNG be sufficient for AIXI to hypothesize randomness? I still don't see how SI would be able to hypothesize Turing machines that produce bitstrings probabilistically similar to the bitstring it is "supposed" to replicate.

comment by pengvado · 2013-07-30T20:44:40.801Z · LW(p) · GW(p)

Eliezer's proposal was a different notation, not an actual change in the strength of Solomonoff Induction. The usual form of SI with deterministic hypotheses is already equivalent to one with probabilistic hypotheses: a single hypothesis with prior probability P that assigns uniform probability to each of 2^N different bitstrings makes the same predictions as an ensemble of 2^N deterministic hypotheses, each of which has prior probability P*2^-N and predicts one of the bitstrings with certainty; and a Bayesian update in the former case is equivalent to just discarding falsified hypotheses in the latter. Given any computable probability distribution, you can, with O(1) bits of overhead, convert it into a program that samples from that distribution when given a uniform random string as input, and then convert that into an ensemble of deterministic programs with different hardcoded values of the random string. (The other direction of the equivalence is obvious: a computable deterministic hypothesis is just a special case of a computable probability distribution.)
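
(A tiny numerical check of that equivalence; my own toy example with N = 3. The shared prior factor P*2^-N cancels out of the predictions, so it is omitted.)

    from itertools import product

    N = 3
    strings = ["".join(bits) for bits in product("01", repeat=N)]

    def probabilistic_prediction(prefix):
        """The single probabilistic hypothesis: uniform over all 2^N strings,
        so the next bit is 1 with probability 0.5 whatever the prefix."""
        return 0.5

    def ensemble_prediction(prefix):
        """2^N deterministic hypotheses with equal priors; a Bayesian update
        just discards the ones falsified by the observed prefix."""
        alive = [s for s in strings if s.startswith(prefix)]
        return sum(s[len(prefix)] == "1" for s in alive) / len(alive)

    for prefix in ["", "0", "01"]:
        assert probabilistic_prediction(prefix) == ensemble_prediction(prefix)
    print("identical predictions at every prefix")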

Yes, if you put a Solomonoff Inductor in an environment that contains a fair coin, it would come up with increasingly convoluted Turing machines. This is a problem only if you care about the value of an intermediate variable (posterior probability assigned to individual programs), rather than the variable that SI was actually designed to optimize, namely accurate predictions of sensory inputs. This manifests in AIXI's limitation to using a sense-determined utility function. (Granted, a sense-determined utility function really isn't a good formalization of my preferences, so you couldn't build an FAI that way.)

comment by Wei Dai (Wei_Dai) · 2013-08-06T08:11:18.899Z · LW(p) · GW(p)

I don't think anyone has pointed you in quite the right direction for getting a fully satisfactory answer to your question. I think what you're looking for is the concept of Continuous Universal A Priori Probability:

The universal distribution m is defined in a discrete domain, its arguments are finite binary strings. For applications such as the prediction of growing sequences it is necessary to define a similar distribution on infinite binary sequences. This leads to the universal semi-measure M defined as the probability that the output of a monotone universal Turing machine U starts with x when provided with fair coin flips on the input tape.

For more details see the linked article, or if you're really interested in this field, get the referenced textbook by Li and Vitanyi.

EDIT: On second thought I'll spell out what I think is the answer, instead of just giving you this hint. This form of Solomonoff Induction, when faced with a growing sequence of fair coin flips, will quickly assign high probability to input tapes that start with the equivalent of "copy the rest of the input tape to the output tape as is", and therefore can be interpreted as assigning high probability to the hypothesis that it is facing a growing sequence of fair coin flips.
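
(A back-of-the-envelope check of that answer; my own sketch, where `copier` merely stands in for the "copy the rest of the input tape" program rather than a real universal monotone machine. Such a program of some fixed length K contributes 2^-K * 2^-len(x) to M(x) for every prefix x, i.e. exactly fair-coin statistics, which the simulation below confirms for the stand-in.)

    import random

    def copier(input_tape):
        """Stand-in for the monotone program "copy the rest of the input
        tape to the output tape as is"."""
        return input_tape

    prefix = [1, 0, 1]
    trials = 100_000
    hits = sum(copier([random.randint(0, 1) for _ in range(len(prefix))]) == prefix
               for _ in range(trials))
    print(hits / trials)   # ~0.125 = 2^-3, the fair-coin probability of the prefix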

comment by passive_fist · 2013-07-31T02:20:53.067Z · LW(p) · GW(p)

Qiaochu has already answered your question about SI, but to also attack your question about AIXI:

Be careful about what you're assuming. You're implicitly assuming that the AI doesn't know that what is being flipped is a random coin. If the AI had that knowledge, it could replace all those convoluted descriptions with a single simple one: "Generate a pseudorandom number". This would be just as effective as any other predictor, and indeed it would be very short and easy to run.

Now, what if the AI doesn't know this? Then you are feeding it random numbers and expecting it to find order in them. In other words, you're posing the hardest problem of all. It makes sense that it would expend a huge amount of computational power trying to find some order in random numbers. Put yourself in the computer's place. How on Earth would you ever be able to know if the string of 0's and 1's you are being presented with is really just random or the result of some incredibly complicated computer program? No one's telling you!

Finally, if the coin is actually a real physical coin, the computer will keep trying more and more complicated hypotheses until it has modelled your fingers, the fluid dynamics of the air, and the structure of the ground. Once it has done so, it will indeed be able to predict the outcome of the coin flip with accuracy.

Note that the optimality of AIXI is subject to several important gotchas. It is a general problem solver, and can do better than any other general problem solver, but there's no guarantee that it will do better than specific problem solvers on certain problems. This is because a specifically-designed problem solver carries problem-specific information with it - information that AIXI may not have access to.

Even a very small amount of information (say, a few tens of bits) about a problem can greatly reduce the search space. Just 14 bits of information (two ASCII characters) can reduce the search space by a factor of 2^14 = 16384.

comment by Richard_Kennaway · 2013-07-30T22:42:14.713Z · LW(p) · GW(p)

Naturally, Eliezer has already thought of this, and wrote about it in Occam's Razor.

It seems to completely answer your question. That is, one can think about probabilities and formulate and test probabilistic hypotheses, without needing to generate any random numbers.

comment by GuySrinivasan · 2013-08-01T16:00:09.185Z · LW(p) · GW(p)

(link) Effective Altruism: Professionals donate expertise. Toyota sends some industrial engineers to improve NYC's Food Bank charity.

HT Hacker News

comment by David Althaus (wallowinmaya) · 2013-07-30T22:52:11.161Z · LW(p) · GW(p)

Does anyone else have problems with the appearance of Less Wrong? My account is somehow at the bottom of the site, and the text of some posts overflows the white background. I noticed the problem about 2 days ago. I didn't change my browser (Safari) or anything else. I think.

Replies from: fubarobfusco
comment by fubarobfusco · 2013-08-03T18:06:00.577Z · LW(p) · GW(p)

When reporting problems with a user interface, it often helps to post screenshots. On the web, you can use an image-hosting service such as imgur to make them accessible to people who read your comment.

comment by Epiphany · 2013-08-03T04:26:19.605Z · LW(p) · GW(p)

I'm looking for a reading recommendation on the topic of perverse incentives, especially incentives that cause people to do unethical things. Yes, I checked "The Best Textbooks on Every Subject" thread and have recorded all the economics recommendations of interest. However, as interested as I am in reading about economics in general, my specific focus is on perverse incentives, especially ones that cause people to do unethical things. I was wondering if anyone has explored this in depth or happens to know a term for "perverse incentives that cause people to do unethical things" (regardless of whether it's part of economics or some other subject), as I can't seem to find one.

Replies from: NancyLebovitz, shminux
comment by NancyLebovitz · 2013-08-04T23:59:37.802Z · LW(p) · GW(p)

Hard Facts, Dangerous Half-Truths And Total Nonsense: Profiting From Evidence-Based Management has a fair amount about the limits of incentive plans.

From memory: incentives can work for work that's well-defined and can be done by one person. Otherwise, the result is people gaming the system and not cooperating with each other.

I don't remember whether the book covered something I heard about in the 70s or 80s about a car company which had incentives for teams assembling cars rather than an assembly line.

I was told about a shop owned by partners which had an incentive system for bringing in sales for the shifts the partners worked. The result was that the partners wouldn't tell customers to come back if it might be on someone else's shift.

Replies from: Epiphany
comment by Epiphany · 2013-08-06T17:52:15.698Z · LW(p) · GW(p)

Thank you. +1 karma.

Replies from: army1987
comment by A1987dM (army1987) · 2013-08-06T23:08:52.913Z · LW(p) · GW(p)

Who the hell downvotes this?

Replies from: Vladimir_Nesov, Epiphany
comment by Vladimir_Nesov · 2013-08-06T23:14:03.048Z · LW(p) · GW(p)

I do, mostly for the "+1 karma" ending, which I dislike stylistically.

Replies from: Epiphany
comment by Epiphany · 2013-08-20T01:35:55.156Z · LW(p) · GW(p)

Since we're supposed to use karma votes for weeding the garden, I assume they are supposed to mean "you're acting like a weed". If you press the "you're acting like a weed" button for anything other than a weed-like act, then you're essentially crying wolf with the karma button, which will result in people becoming indifferent, just as they do when any other false alarm is raised too often.

I like you Vladimir. I have observed that you've made an effort to be friendly and fair to me in the past. Since you have been honest with me, I'll be honest also: It's the fact that people use karma to express minor preferences like this one that keep me from taking most karma votes seriously.

I am also surprised to discover that people are communicating minor preferences using downvotes. Look at it from my point of view: there are over 1000 people who regularly use this site. We don't even have a consensus on things like Newcomb's problem, free will or dust specks, let alone stylistic preferences. Were I to hypothesize about all the possible reasons why one of 1000+ people might downvote me that are not obvious in the way a Schelling point would be, and calculate the probabilities of each, I would be spending an incredible amount of time just to figure out that you didn't like a particular turn of phrase.

This might be Expecting Short Inferential Distances. (I have this problem as well, though it comes out in different places.) I still like you, but I hope you will try not to communicate minor preferences to me via karma votes in the future.

Replies from: wedrifid
comment by wedrifid · 2013-08-22T02:01:40.467Z · LW(p) · GW(p)

We don't even have a consensus on things like Newcomb's problem, free will or dust specks, let alone stylistic preferences. Were I to hypothesize about all the possible reasons why one of 1000+ people might down vote me that are not obvious in the way a Schelling point would be and calculate the probabilities of each, I would be spending an incredible amount of time just to figure out that you didn't like a particular turn of phrase.

This is true; the overhead for adopting and learning somewhat arbitrary cultural norms can be significant. This is particularly the case for those whose instincts are less finely targeted towards social conformance, a class predictably overrepresented on Less Wrong. That said, you have now had the preference explained to you in clear English. The need for calculating probabilities for countless hypothetical downvote causes is largely removed, and the probability of this one hypothesis, "Saying +1 Karma causes some downvotes", is now comfortably high. Now it is a choice whether you want to spend emotional effort fighting that norm or whether you let it go and adapt.

There are many times where it is worthwhile to fight the tide and attempt to influence social norms in a desired direction. I do this constantly in my battle against what I call bullshit. However, it is important to choose one's battles, in no small part because if one spends one's influence attempting to fight irrelevant things then there is less credibility remaining for fighting the battles that matter.

In this case people don't like "+1 karma" as part of a general distaste for all unnecessary references to the karma-based meta-level. I expect that if you had responded "Ok, thank you for explaining. I'll adopt different word use in my reinforcement." then you would have been upvoted and also had people reverse their downvotes of the earlier comment. People being cooperative and updating tends to be appreciated.

I personally request that you change this detail of style rather than escalating your dissent. It is frustrating to watch otherwise rational people undermining their credibility due to what amounts to social awkwardness. Lose gracefully on small things so that you win more things that matter.

comment by Epiphany · 2013-08-20T03:37:07.204Z · LW(p) · GW(p)

Thank you for this, army1987. I am glad to know that others can see appreciation as being a good and necessary thing rather than treating it as spam, and am more glad to see that someone else is willing to show support for encouraging behaviors. +1 karma. The Power of Reinforcement, "What Works"

Replies from: wedrifid
comment by wedrifid · 2013-08-22T01:29:59.843Z · LW(p) · GW(p)

Thank you for this, army1987. I am glad to know that others can see appreciation as being a good and necessary thing rather than treating it as spam, and am more glad to see that someone else is willing to show support for encouraging behaviors. +1 karma. The Power of Reinforcement, "What Works"

This comment could be improved by the removal of "+1 karma.".

comment by shminux · 2013-08-03T05:25:33.022Z · LW(p) · GW(p)

perverse incentives that cause people to do unethical things

For example...?

Replies from: wedrifid
comment by wedrifid · 2013-08-03T15:55:51.047Z · LW(p) · GW(p)

For example...?

For example, allocating funds to fire departments based on how many fires they put out. That encourages them to stop putting work into fire prevention and, at the extreme, creates an incentive for outright arson.

The medical system. (Does that even need explaining?)

Replies from: Zaine
comment by Zaine · 2013-08-04T02:31:02.793Z · LW(p) · GW(p)

I gather Australia's medical system is just as notoriously bad as America's (as per Yvain's excoriations)?

Finland's healthcare system, and to a lesser extent the NHS, seem to mostly have proper incentives in place, as uncured folk mean less capacity for treating everyone else. Surely medical care the world over isn't guided by perverse incentives? That is more a question than an assertion.

comment by NancyLebovitz · 2013-07-30T15:35:51.895Z · LW(p) · GW(p)

You can't act on any object. You change its environment, and the object will flow.

Kate Stone, TED talk, paper with electronics

This seems like an interesting half truth since you can't change the environment without acting on objects. However, it's possible that the environment is a richer tool of influence than acting directly, and also possible that people are less apt to resent the environment for not doing what they want, therefore less likely to try to force it.

comment by linkhyrule5 · 2013-07-29T23:56:00.124Z · LW(p) · GW(p)

Random idea for the Lobian obstacle that turned out not to work, but I decided to post anyway on the off chance someone can salvage it:

Inspired by the human brain's bicameral system: split the system into two parts, A and B. A has ((B proves C) -> C); B has ((A proves C) -> C). A, trusting B, can build B' as strong as B; B, trusting A, can build A' as strong as A.

Obvious flaw: A has ((B proves ((A proves C) -> C)) -> ((A proves C) -> C)), so A has ((A proves C) -> C) and thus, by Löb's theorem, proves every C; and vice versa.
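
(Spelling the flaw out as a short derivation; a sketch in my own notation, with \Box_A and \Box_B standing for the provability predicates of A and B. The last step is where Löb's theorem turns recovered self-trust into inconsistency.)

    \begin{align*}
    A &\vdash \Box_B(\Box_A C \to C) && \text{(an axiom of $B$, which $A$ can verify)} \\
    A &\vdash \Box_B(\Box_A C \to C) \to (\Box_A C \to C) && \text{($A$'s trust schema, instantiated)} \\
    A &\vdash \Box_A C \to C && \text{(modus ponens)} \\
    A &\vdash C && \text{(L\"ob's theorem; $C$ was arbitrary, so $A$ is inconsistent)}
    \end{align*}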

comment by Omid · 2013-08-02T15:24:38.220Z · LW(p) · GW(p)

What's the most credible way to set up an information bounty?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-08-02T22:14:35.063Z · LW(p) · GW(p)

What's an information bounty? What kind of information are you looking for?

Replies from: Omid
comment by Omid · 2013-08-03T04:56:47.939Z · LW(p) · GW(p)

Sorry, I guess the proper term is "truth bounty". The Truth Seal originally offered to arbitrate truth bounties, but it quickly went defunct.

comment by John_Maxwell (John_Maxwell_IV) · 2013-08-06T16:32:57.050Z · LW(p) · GW(p)

Recent effective altruism job openings (all close within the next 10 days):

More info.

comment by Jayson_Virissimo · 2013-08-03T02:56:39.848Z · LW(p) · GW(p)

I'm planning on taking Algorithms Part 1 and Part 2 through Coursera to complement my first-year computer science (software engineering) courses. I am very much interested in collaborating with other LWers. The first course in the sequence starts August 23. Please let me know if you are interested and, if so, what form of collaboration you would be most comfortable with (weekly "book club" posts in Discussion, IRC study hall, etc.).

About the course:

An introduction to fundamental data types, algorithms, and data structures, with emphasis on applications and scientific performance analysis of Java implementations. Specific topics covered include: union-find algorithms; basic iterable data types (stacks, queues, and bags); sorting algorithms (quicksort, mergesort, heapsort) and applications; priority queues; binary search trees; red-black trees; hash tables; and symbol-table applications.

Recommended Background:

All you need is a basic familiarity with programming in Java. This course is primarily aimed at first- and second-year undergraduates interested in engineering or science, along with high school students and professionals with an interest (and some background) in programming.

Suggested Readings:

Although the lectures are designed to be self-contained, students wanting to expand their knowledge beyond what we can cover in a 6-week class can find a much more extensive coverage of this topic in our book Algorithms (4th Edition) , published by Addison-Wesley.

Course Format:

There will be two lectures (75 minutes each) each week. The lectures are each broken into about 4-6 pieces, separated by interactive quiz questions to help you process and understand the material. In addition, there will be a problem set and a programming assignment each week, and there will be a final exam.

comment by niceguyanon · 2013-08-01T21:31:30.039Z · LW(p) · GW(p)

My priors tell me that the chance of exploiting statistical arbitrage opportunities in online poker to net $100k a year is less than 2% for someone who has an IQ of 100. And that chance is likely diminishing quickly as the years go by.

A few reasons: bots are confirmed to be winning players in full-ring and NL games; online poker is mature and has better players; the rake; and the ratio of new "fish" to grinders is getting smaller.

Does anyone have thoughts to the contrary? Perhaps more sophisticated software to catch botters? Or new regulations legalizing online poker to increase new fish?

Replies from: Duke, Tenoke
comment by Duke · 2013-08-04T02:06:26.657Z · LW(p) · GW(p)

Depending on your current skill level, I'd think that the less-than-2% likelihood is a generous estimate. Online poker was a bubble back in the early-to-mid 00's. Presently, edges are razor thin and only a very elite group are making $100k+/year.

Players are highly skilled--and getting better all the time--and able to populate multiple tables simultaneously (as opposed to live poker where you can play only a single table at a time); rake is high; online poker legality is hazy in many parts of the world; transferring money off the site is problematic; you'll be paying taxes on your winnings; and, like you mentioned, fish are drying up.

Botting, player collusion and hacking certainly have negative effects on the game but it is unclear to what extent.

If you're an American and live near a casino, you're more likely to win $100k playing there in games with at least a $5 big blind. But, generally, playing poker for a living is a bitch for a lot of reasons, namely that you'll be spending a lot of your life in a casino with no windows. Also, statistical variance is difficult to handle emotionally--assuming that you become a winning player to begin with. For every story you read about some guy living high on his poker winnings, there are countless others who went broke and now are either hopeless degenerates scrounging around casinos or working square jobs.

If you do not have an obvious marketable skill set worth 100k/yr, might I suggest getting into sales of some sort. Generally, the barriers to entry are low, and while the success rates are small, the upper bounds of earning potential are very large.

comment by Tenoke · 2013-08-03T08:57:07.163Z · LW(p) · GW(p)

It's true and has been for years (since the early 00's boom). Except that bots (to my knowledge) are not really a big problem while the separation of countries (e.g. US players being able to play only with US players) from the general pool of players is. This is why I stopped playing in 2010.

comment by gothgirl420666 · 2013-08-01T18:16:38.786Z · LW(p) · GW(p)

Is there a word processing program for Windows that's similar to TextEdit on a Mac? I always preferred TextEdit over programs like Microsoft Word or Pages because it loads quickly and you can easily fit it in a small window for writing quick notes. In other words, it's "small", I guess you would say.

Right now I'm using CopyWriter, which is pretty good, but it has two problems: 1) no spell check, and 2) no autosave. Mostly I just use Evernote and Google Docs, though.

Any suggestions?

Replies from: Lumifer
comment by Lumifer · 2013-08-01T19:52:42.667Z · LW(p) · GW(p)

WordPad is the light word processor built into Windows. Other alternatives that come to mind are SciTE and Notepad++.

comment by CAE_Jones · 2013-08-01T02:04:09.625Z · LW(p) · GW(p)

A few months ago, I decided to try a "gather impossible problems, hold off on proposing solutions until we've thoroughly understood them, then solve them" 'campaign'. The problems I came up with focused on blindness, so I started the discussion here rather than LW. I was surprised when I looked it up today and found that it only lasted for four days--I had been sure it had managed to drag on a little longer than that.

I recall someone tried something similar on LW, though considerably less focused and more willing to take things they couldn't be expected to solve without many more resources. I also recall that little if anything came of it.

Something tells me we're doing it wrong.

comment by Ben Pace (Benito) · 2013-07-31T23:29:01.797Z · LW(p) · GW(p)

Starting to write introductions to LW for friends; here's my fast-track.

Please comment with thoughts here (or there).

Replies from: Adele_L
comment by Adele_L · 2013-08-01T00:17:52.368Z · LW(p) · GW(p)

I got a 'page not found' error when I clicked on that link because of the period at the end.

Replies from: Benito
comment by Ben Pace (Benito) · 2013-08-01T06:02:48.219Z · LW(p) · GW(p)

Fixed.

comment by A1987dM (army1987) · 2013-07-30T12:51:15.316Z · LW(p) · GW(p)

Has anyone else's inbox icon been behaving erratically (i.e., turning red even when there were no new messages or comment replies)?

Replies from: Scott Garrabrant, Manfred
comment by Scott Garrabrant · 2013-07-30T19:04:25.439Z · LW(p) · GW(p)

You might be confused because pressing the "back" button to a time when the message was unread will make the symbol turn red.

Replies from: Vaniver
comment by Vaniver · 2013-07-30T21:11:51.927Z · LW(p) · GW(p)

I've also had this effect by opening a bunch of tabs, with my inbox being the last one.

comment by Manfred · 2013-07-30T13:20:57.657Z · LW(p) · GW(p)

Not mine.

comment by [deleted] · 2013-08-01T01:32:56.488Z · LW(p) · GW(p)

This is a call for Less Wrong users who do not wish to personally identify as rationalists, or do not perfectly relate to the community at a cultural level:

What do you use Less Wrong for? And what are some reasons why you do not identify as a rationalist? Are there some functions that you wish the community would provide which it otherwise does not?

Replies from: CAE_Jones, Zaine
comment by CAE_Jones · 2013-08-01T01:50:50.451Z · LW(p) · GW(p)

I think of "rationalist" as "one who applies rationality to real life". By that definition, I've identified as rationalist since age 2 at the latest (I said identified, not "been any good at it").

LW culture is hard to grasp. Politics is a minefield; there's apparently a terrible feminism problem; and there seem to be two not-so-distinct factions: people who want more instrumental rationality, and people who get annoyed by this and only want to discuss philosophy. You have to read lots of things not optimized for keeping readers from falling asleep (I'm not talking about the sequences; I actually stay awake through those) in order to have the necessary background to participate in many discussions; I'm quite terrified of missteps (I make them quite often).

However, I know what I'm reading will be thoroughly vetted for truthfulness most of the time, and in spite of the utter failure to demonstrate rationality superpowers, applying science and reasoning to reality for good results is encouraged and seemingly the main thrust of the whole site. It's obviously far from optimal; otherwise we'd have tons of success stories rather than something trying very hard not to be a technoCult. But those flaws aren't really detraction enough, given the absence of a better alternative.

That, and solving CAPTCHAs is quite inconvenient, so I'm kinda selective about where I register. I registered here instead of Reddit, and that means this is the only place I'm going to be able to talk about HPMoR. :P

(Also, I like emoticons an awful lot considering that I can't see them. I haven't encountered any emoticons on LW. In any other comment, I would have been much more wary of using one. ??? )

comment by Zaine · 2013-08-01T02:04:28.883Z · LW(p) · GW(p)

Being 'part of a community' and having a term that defines one's identity are two different conditions. In the former, one's participation in a community is merely another aspect of one's personality or character, which can be all-expansive.

In the latter, one is tied to others who share the identifier. Even if 'rationalist' just means one who subscribes to the importance of instrumental and epistemic rationality in daily life, accepting and embracing that or any identifier can have negatives. The former condition, representing a choice rather than a fact of identity, lacks those negatives while retaining the positive aspects of communal connection.

Exempli gratia:
One is trying to appeal to some high-status figure. This high-status figure encounters a 'rationalist' and perceives them as low-status. If One has identified themselves as also being a rationalist, then the high-status person's perception of the 'rationalist' may taint their perception of One.
If One has instead identified themselves as being part of a certain community, to which this 'rationalist' may also claim affiliation, One can claim that while they find the community worthwhile for many pursuits, not all who flock to the community are representative of its worth.

If someone thinks this is a losing strategy, please speak up, as it's generally applicable. Notable exceptions to its applicability include claiming oneself as identifiable by one's association with a friend group or extended family, as in, "I am James Potter, Marauder," rather than, "I am James Potter, member of the Marauders"; and, "I am a Potter," rather than the simple, "My name is James Potter."

comment by Dorikka · 2013-07-29T22:50:30.203Z · LW(p) · GW(p)

...Apparently you can't delete your own comments with no replies anymore.

comment by tim · 2013-07-30T01:20:55.225Z · LW(p) · GW(p)

whoops, accidentally hit comment instead of show help. disregard for now.

comment by Multiheaded · 2013-08-04T09:26:40.718Z · LW(p) · GW(p)

I hate all the smug, condescending fascist fucks in this fucking community so much. Please just ban me or something, I can't fucking look at half the motherfucking comments here. From extreme, ruthless classism to casually invoked sexism to brazen authoritarianism to just complete fucking stone-cold inhumanity. I just can't go on.

You gentlemen can probably guess as to which ones of you I mean by this. Fuck you.

Replies from: MixedNuts, gothgirl420666, ArisKatsaris, Kawoomba, None, BlindIdiotPoster, cousin_it, wedrifid, David_Gerard, wedrifid, Richard_Kennaway
comment by MixedNuts · 2013-08-04T19:39:24.308Z · LW(p) · GW(p)

There's a certain breed of progressives that want to push widely-held positions out of the Overton window. While I feel a few shitloads more comfortable around such people than around people who are sympathetic to said positions, this worries me.

  • Shutting down debate (in every place Proper Decent People talk, not just specialised places where people want to move past the basic questions) is always somewhat dangerous, though admittedly that applies to every position. This can be circumvented by yelling at people who imply or baldly state these ideas are true, but not at those who argue them with enough formality and apologetic dances.

  • Condemning popular positions is going to make you yell at half the people you meet and isolate you from the mainstream. Inconvenient.

  • If we make it unpalatable to argue for one side but not the other, the goalposts shift to Crazytown quickly. Cull the least feminist person at every turn and soon enough Twisty Faster is sounding reasonable.

  • Relatedly, not having 101 debates over and over allows for semantic shift. It's all very well and good to point at horrific things and call them "ableism", but when the same word is then used to yell at me for saying "Crazytown" there is a leap in logic few think to plug.

  • Generic dangers of thinking people who disagree with you are evil rather than mistaken about what policies are helpful, and that there's no value in understanding their model of the world. (Even though you have a whole body of work about how these mistakes are made and you are ignoring it in favour of calling them mean names, rarrr.)

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-08-11T10:33:14.147Z · LW(p) · GW(p)

Cull the least feminist person at every turn and soon enough Twisty Faster is sounding reasonable.

Uhm, curiosity made me google Twisty Faster, and... well, how could anyone not love a blogger whose comment moderating policy includes: "Everyone dislikes reading un-excellent comments."? She says that men are not allowed to comment at her blog, which in my opinion is more fair than pretending to have a blog open for everyone and then silencing any man who dares to disagree ("check your privilege", "mansplaining"). She even quotes the S.C.U.M. Manifesto, and her ideas about preventing rape sound like a coherent extrapolation of ideas already existing in a weaker form.

But this article and its 200 comments are pure gold. If you don't want to read it, the essence is this: -- Imagine that I (a man) simply declare myself to be a woman and enter a women's sauna. (In the original article it was a women-only college, but the analogy with a sauna was made in the comments.) Should I be allowed to do that? -- The extra challenge is in properly explaining why not, without saying anything that could be interpreted as a sexist, trans-phobic, or otherwise politically incorrect argument. (For example you can't say it's because I'm a man, because I declared myself a woman, and who are you to question my identity?) Twisty Faster bites the bullet, if I understand her correctly, and says that I should be allowed to do that. Respect!

I am not sure if that lady is real, but the internal consistency of her opinions makes her sound deep. Also, she does a really good job moderating the discussion.

comment by gothgirl420666 · 2013-08-04T17:27:28.553Z · LW(p) · GW(p)

If you could write up an intelligent post arguing for progressivism, you would probably get a lot farther in convincing the far-right faction of this site than by telling them they are evil for holding their beliefs without giving reasons why. (The problem, of course, is that it requires time and effort.)

For what it's worth, you seem like a cool person... one of the few people on this site who I could see myself wanting to hang out with in real life. (I don't necessarily have a real reason to believe this, I just see the name Multiheaded and my thoughts are "that guy's cool".) So I would be averse to you leaving the community.

Also, if Yvain ever writes the mega-rebuttal to Reaction that he has planned, I think it could really be a game changer. So there's hope that if you aren't up for the task, someone else will take care of it.

Replies from: None, pragmatist, NancyLebovitz, David_Gerard
comment by [deleted] · 2013-08-11T02:08:28.410Z · LW(p) · GW(p)

Also, if Yvain ever writes the mega-rebuttal to Reaction that he has planned, I think it could really be a game changer. So there's hope that if you aren't up for the task, someone else will take care of it.

Except that he seems to have decided to write a few satellite snipes at non-core beliefs and leave it at that. He has explicitly said that he's not willing to engage with HBD, in a way that shocked me and broke my model of him as a reasonable rationalist. He has said nothing about the Cathedral, or "importing a new people" as a theoretical problem for democracy, instead focusing on proving that the present is broadly superior to the past.

In his defense, reactionaries have not exactly got their shit together with respect to a concise statement of what the core important beliefs actually are.

Replies from: gothgirl420666
comment by gothgirl420666 · 2013-08-11T04:47:19.117Z · LW(p) · GW(p)

I asked him if he was ever going to write his mega-rebuttal, and he said "it's on my list of things to do, but that list also includes 'write a perfect philosophical language' and 'reach enlightenment'." So I think that's a pretty clear "maybe".

He has explicitly said that he's not willing to engage with HBD, in a way that shocked me and broke my model of him as a reasonable rationalist.

To be honest... if I had a blog, especially one linked to my real name and real-life identity, I would probably do the same thing that he seems to be doing and refuse to talk about race. Unfortunately, it seems hard to imagine that with the current evidence we have today, a true rationalist could get farther than the position of "it seems very unlikely that there are significant mental differences between races", and yet that's essentially the right edge of the Overton window. If that's his reason for not discussing the topic, he can't go out and say it, because that's essentially like admitting he's outside of the window.

Replies from: None
comment by [deleted] · 2013-08-11T07:24:51.198Z · LW(p) · GW(p)

Unfortunately, it seems hard to imagine that with the current evidence we have today, a true rationalist could get farther than the position of "it seems very unlikely that there are significant mental differences between races", and yet that's essentially the right edge of the Overton window. If that's his reason for not discussing the topic, he can't go out and say it, because that's essentially like admitting he's outside of the window.

I'm not sure what you're saying here. Are you saying that the correct position is outside the window or inside it? (IMO we have pretty overwhelming evidence on all lines of inquiry that a certain position is correct, and that position happens to be quite outside of "civilized" discourse.)

Replies from: gothgirl420666
comment by gothgirl420666 · 2013-08-11T17:08:03.854Z · LW(p) · GW(p)

I don't feel confident enough to say what the correct rational position given the evidence is, not having fully examined the evidence myself, but I cannot imagine that the correct position is comfortably inside the Overton window.

comment by pragmatist · 2013-08-04T17:32:20.679Z · LW(p) · GW(p)

If you could write up an intelligent post arguing for progressivism, then you would probably get a lot farther on convincing the far-right faction of this site than by telling them they are evil for holding their beliefs without giving reasons as to why.

I doubt this very much. The differences between the far-right faction and the progressives (among whom I count myself) on this website are not primarily of the sort that can be bridged by intelligent argument, for a number of reasons.

If Multiheaded wrote a post of the kind you recommend with the intent of convincing LW conservatives, it would be a wasted effort. Also, I'm pretty sure a post of that sort would be heavily downvoted, and not just by people who disagree with him politically.

Replies from: gothgirl420666, None
comment by gothgirl420666 · 2013-08-04T17:47:31.034Z · LW(p) · GW(p)

Why?

Some of those guys certainly seem irrational and stuck in their ways, but... to be honest, if there are any coherent responses to Moldbug & co. I have yet to see them. It's not like there's a whole bunch of literature that they're stubbornly ignoring. If you actually brought them rational arguments that they were forced to confront, I think at least some of them would update their beliefs - this is LessWrong, after all.

EDIT: In response to your edit: (For those reading, it initially just said "I doubt this very much.")

The differences between the far-right faction and the progressives (among whom I count myself) on this website are not primarily of the sort that can be bridged by intelligent argument, for a number of reasons.

This doesn't seem obvious to me. Could you list those reasons?

Replies from: pragmatist, FiftyTwo
comment by pragmatist · 2013-08-04T19:02:17.439Z · LW(p) · GW(p)

I think a lot of the disagreement between the left and the right boils down to disagreement about the appropriate form of the social welfare function. I think this applies not just to economic issues but also issues of gender and race.

While there quite likely is some degree of resolvable factual disagreement about the extent of certain inequities, and maybe-somewhat-resolvable disagreement about how those inequities might be lessened, there is also disagreement about how much those inequities should matter to us and affect our behavior, both political and personal. This is not the sort of disagreement I expect to see someone resolve in a blog post.

Now for a more blatantly left-wing argument: It is hard to get people to realize the extent and import of their privilege, to acknowledge that certain social inequities that are of minor significance when viewed from a privileged position are in fact deeply oppressive from the perspective of the marginalized. This is not the sort of thing that can be communicated by presenting scientific studies, because such studies may establish the existence of an inequity, but they do not fully convey the impact of that inequity on the lives and psyches of the population affected. The best way to acquire that sort of information is to listen to anecdotes from a number of marginalized people, a difficult thing to do on a website with demographics like LW has.

Replies from: Lumifer, Eugine_Nier, gothgirl420666, Eugine_Nier
comment by Lumifer · 2013-08-06T23:55:59.229Z · LW(p) · GW(p)

It is hard to get people to realize the extent and import of their privilege, to acknowledge that certain social inequities that are of minor significance when viewed from a privileged position are in fact deeply oppressive from the perspective of the marginalized. ... . The best way to acquire that sort of information is to listen to anecdotes from a number of marginalized people

Heh. Well, there was a period in my life when I was very very poor. No money to take public transportation (so I walked), no money to buy a can of soda (so I drank water), etc. I lived in a mostly-black area of the city with gunshots heard at night every week or so.

Unfortunately for your argument, I'm not a leftist or a progressive; I do not get hysterical about social inequities, and you probably would say that I don't realize the extent and import of my current privilege (I'm not very poor any more).

Belief update time? :-D

Replies from: pragmatist
comment by pragmatist · 2013-08-07T04:22:05.104Z · LW(p) · GW(p)

Unfortunately for your argument I'm not a leftist or a progressive, I do not get hysterical about social inequities and you probably would say that I don't realize the extent and import of my current privilege (I'm not very poor any more).

I don't see how any of this is all that unfortunate for my argument. Perhaps you think I'm saying that only progressives can recognize their privilege along some axis, or that recognizing privilege is sufficient to induce support for progressive policies? Well, I don't believe either of these things. What I do believe is that recognizing the consequences and extent of privilege undercuts the force of several right-wing arguments.

Replies from: Lumifer
comment by Lumifer · 2013-08-07T05:14:31.685Z · LW(p) · GW(p)

Your argument was

It is hard to get people to realize the extent and import of their privilege, to acknowledge that certain social inequities that are of minor significance when viewed from a privileged position are in fact deeply oppressive from the perspective of the marginalized. ... The best way to acquire that sort of information is to listen to anecdotes from a number of marginalized people

I have been a marginalized person. I did not acquire a realization of "the extent and import" of my privilege, nor do I acknowledge that certain social inequities (you didn't specify which ones, so I can't be sure) are "deeply oppressive".

Replies from: pragmatist
comment by pragmatist · 2013-08-07T05:35:14.570Z · LW(p) · GW(p)

Ah, I see. My intent was not to suggest that all (or even most) marginalized people experience inequity as oppressive, although I can see how I could be read that way. I should also note that I believe there's something to the idea of false consciousness. Oppressed people often do not acknowledge the fact of their own oppression, although I'm not saying that's the case for past-you. Note that I didn't say the best way to acquire information about the impact of privilege is to be a marginalized person.

Also, the impact of marginalization along some axis (economic status, say) can be considerably mitigated by privilege along other axes (race/education/gender/etc.). I've been quite poor too -- while I was a grad student -- but my experience of poverty was, I'm pretty sure, qualitatively different from that of an inner-city African American single mother (even one with the same income I had) or a Dalit in rural India.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-09T04:30:34.836Z · LW(p) · GW(p)

I should also note that I believe there's something to the idea of false consciousness.

"False consciousness" seems suspiciously like an excuse to protect one's social theories from conflicting evidence.

Replies from: NancyLebovitz, pragmatist
comment by NancyLebovitz · 2013-08-11T03:11:38.155Z · LW(p) · GW(p)

Libertarian Feminism: Can This Marriage Be Saved-- an essay which I value because it drew parallels between the way libertarians think most people kid themselves about the value of government and the way (most?) feminists think most people fail to notice patriarchy.

comment by pragmatist · 2013-08-09T06:41:33.077Z · LW(p) · GW(p)

Perhaps the concept does have that role in Marxism, but I'm not a Marxist. I don't buy "false consciousness" because it is an integral part of some rickety theoretical superstructure that I need to preserve. I think "false consciousness" is a useful concept because there is evidence that various groups that are provably disadvantaged according to certain indicators either underestimate their disadvantage or deny it entirely when asked. There is also evidence that in many of these cases the cause of this is a social system that either hides relevant information from the disadvantaged group or molds their outlook on the world so that they are motivated to deny (or ignore) the evidence.

As such, it's no more an excuse to protect against conflicting evidence than, say, the claim that people in general dramatically overestimate their relative performance at everyday tasks.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-09T07:37:48.726Z · LW(p) · GW(p)

I think "false consciousness" is a useful concept because there is evidence that various groups that are provably disadvantaged according to certain indicators either underestimate their disadvantage or deny it entirely when asked.

As opposed to being evidence that you're looking at the wrong indicators. At best this amounts to "the people don't care enough about the things I think they should, therefore there's something wrong with the people".

Edit: Also up-thread you said regarding the basis of your argument:

This is not the sort of thing that can be communicated by presenting scientific studies, because such studies may establish the existence of an inequity, but they do not fully convey the impact of that inequity on the lives and psyches of the population affected. The best way to acquire that sort of information is to listen to anecdotes from a number of marginalized people, a difficult thing to do on a website with demographics like LW has.

And yet you're perfectly willing to dismiss those same anecdotes as "false consciousness" if they don't support your ideas about how much impact there should be on the "lives and psyches of the population affected".

Replies from: pragmatist
comment by pragmatist · 2013-08-09T11:19:10.572Z · LW(p) · GW(p)

At best this amounts to "the people don't care enough about the things I think they should, therefore there's something wrong with the people".

It could amount to this, I guess. But I don't see why you'd think this is all it could amount to at best. Do you really consider it outside the realm of possibility that people could be genuinely better off with certain social changes and yet fail to acknowledge this fact due to conditioning?

And yet you're perfectly willing to dismiss those same anecdotes as "false consciousness" if they don't support your ideas about how much impact there should be on the "lives and psyches of the population affected".

Just because I think an anecdote reflects false consciousness doesn't mean I'm dismissing its evidentiary value. A marginalized person doesn't have to be saying "Look how oppressed I am" in order for us to listen to them and realize they're oppressed. Judgments of oppression are judgments about the objective conditions of people's lives, not subjective facts about how they feel.

A personal example: I've volunteered to conduct surveys in rural India in the past, and this involved talking to women in Indian villages. Virtually none of these women explicitly referred to themselves as oppressed, and I doubt most of them consider themselves oppressed, because they have a host of bullshit religious and traditional beliefs that prevent that realization. But hearing about their lives, it was evident to someone who does not share those bullshit beliefs that they were in fact oppressed.

So when I said that one needs to listen to marginalized people in order to fully appreciate the impact of a lack of privilege, I wasn't just referring to marginalized people who're yelling about oppression. The only thing I'm "dismissing" (although this is probably not the right word) when I talk about false consciousness is the idea that people's subjective judgments about their oppression are a reliable guide to the objective facts.

And just to be somewhat even-handed, let me acknowledge that I think there are certain social justice communities where the unreliability runs in the opposite direction, where people are conditioned to view everything through a framework of oppression, and they overestimate the extent to which various practices are oppressive.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-10T04:08:08.125Z · LW(p) · GW(p)

Being "oppressed" is starting to seem like an XML tag with no connection to reality. At the very least can you give a definition of being "oppressed" that doesn't cash out as "whatever pragmatist says it is".

comment by Eugine_Nier · 2013-08-09T04:58:28.180Z · LW(p) · GW(p)

I think a lot of the disagreement between the left and the right boils down to disagreement about the appropriate form of the social welfare function. I think this applies not just to economic issues but also issues of gender and race.

As a right-winger I must strongly disagree with the characterization of the right wing position given in your comment. In particular it seems to me that the left-wing position contains a number of specific falsifiable (and false) beliefs, for example, the false belief that all the policies leftists tend to promote to "help the poor and oppressed" actually help the poor and oppressed in the long run.

In fact the main value disagreement that I can see is that some leftist tend to have a pathological form of egalitarianism where they're willing to pursue policies that make everyone worse off in order to make the distribution more equal.

Replies from: pragmatist, army1987, ILikeLogic
comment by pragmatist · 2013-08-09T06:28:44.939Z · LW(p) · GW(p)

I did say this:

While there quite likely is some degree of resolvable factual disagreement about the extent of certain inequities, and maybe-somewhat-resolvable disagreement about how those inequities might be lessened, there is also disagreement about how much those inequities should matter to us and affect our behavior, both political and personal.

So I agree there are a number of falsifiable beliefs on both sides. But the mere fact of falsifiability doesn't mean the disagreements are easy to resolve, partly for "politics is the mind-killer" type reasons, and partly because it is legitimately difficult to find conclusive experimental evidence for causal claims in the social sciences.

I do, however, think there are important value disagreements about how to trade off efficiency and equity between left and right, and I also think your description of the "main value disagreement" is a caricature. I'm pretty sure I could easily come up with socio-political thought experiments where all (non-moral) facts are made explicit, leaving no room for disagreement on them, but where we would still disagree about the best policy, and I assure you I'm not one of the "pathological" egalitarians you describe (although you would probably consider my views pathological for other reasons).

comment by A1987dM (army1987) · 2013-08-11T19:47:21.928Z · LW(p) · GW(p)

In fact the main value disagreement that I can see is that some leftist tend to have a pathological form of egalitarianism where they're willing to pursue policies that make everyone worse off in order to make the distribution more equal.

A few examples? (Preferably ones where the conclusion that the policy leads to an anti-Pareto improvement is based on real-world data rather than on dry-water economic models.)

comment by ILikeLogic · 2013-08-10T16:36:51.202Z · LW(p) · GW(p)

That's an interesting thought. Maybe I do think that it is better to make everyone a little bit worse off materially to make the distribution more equal. I don't think this is pathological. In somewhat of a paradox, what matters most to absolute well-being is our relative material wealth, not our absolute wealth. Now, of course, when looked at as a ranking, nothing can be done about the fact that some will have more wealth than others, short of trying to make everyone equal (and no one wants that). But the ranking is not the only thing that matters. There has always been a distribution of wealth, but those at the top have not always had so much more than the median. Making everyone a little worse off materially to make the distribution a bit narrower may make absolute well-being greater.

Also, I wonder whether right-wingers would support a redistributive policy to help the poor and oppressed even if such a policy were certain to be effective. My hunch is that they would not, because they are opposed, in principle, to any redistribution.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-08-10T21:56:16.128Z · LW(p) · GW(p)

Maybe I do think that it is better to make everyone a little bit worse off materially to make the distribution more equal.

Maybe some policies fail at helping the poor and at making people more equal.

I can imagine a policy done in the name of the poor which results in everyone being poorer... except for the people who organized the redistribution... you know, the powerful good guys.

comment by gothgirl420666 · 2013-08-04T22:17:35.812Z · LW(p) · GW(p)

I think a lot of the disagreement between the left and the right boils down to disagreement about the appropriate form of the social welfare function. I think this applies not just to economic issues but also issues of gender and race.

I'll be honest, it was really difficult for me to understand the linked wiki page. (I need to learn economics...) It sounds like what you're saying is maybe leftists tend to inherently value socioeconomic equality more than rightists do? But... I don't understand how this applies to race and gender.

(This is of interest to me because I'm currently politically agnostic and I plan on someday doing an unbiased inquiry in order to figure out what my views should be. Knowing what the disagreement between the left and the right stems from would be very useful.)

As for your last point, I can definitely see why privileged people would need emotional arguments to understand how marginalized people suffer. I think here on LW we have a perhaps deserved mistrust of emotional appeals in moral tradeoffs - we all know about scope insensitivity and how one dying child feels more painful than seven. The logical brain really does better than the emotional brain on this kind of stuff a lot of the time. But on the other hand, I can see how maybe I, a man, value sexual harassment as -5 utilons, whereas if I take the time to read an article explaining how sexual harassment feels from a female perspective I will realize that it should be more like -15 utilons. So my utilitarian math will be off unless I re-calibrate.

I disagree though that it's necessarily a difficult thing to do on LessWrong. Well, perhaps difficult, but definitely not impossible. I remember a blog post by Yvain where he was talking about unemployment, and at the beginning he linked to an article of some woman's experience in a terrible job, saying "read this first to get an emotional calibration for just how terrible minimum wage jobs can be". I don't see why we can't do the same here. It's not that hard to find stories of marginalized people's experiences on the Internet now that Tumblr SJ is becoming such a thing.

comment by Eugine_Nier · 2013-08-09T08:05:00.383Z · LW(p) · GW(p)

Also, while we're here, would you mind defining what you mean by "privilege"?

Replies from: FiftyTwo, Viliam_Bur
comment by FiftyTwo · 2013-08-11T11:00:51.757Z · LW(p) · GW(p)

To phrase it in more statistical terms, it would be something like: "take into account how selection bias has changed your impressions of things."

E.g., as a white male in a liberal Western nation, I intuitively think buying food or finding a place to live is easy, so I might not credit reports of someone else finding it difficult. But if prejudice against a group I am not part of were endemic, I wouldn't be aware of it. So checking your privilege is a reminder that your experience may differ from others' and to be aware of that.

comment by Viliam_Bur · 2013-08-10T21:59:06.004Z · LW(p) · GW(p)

Guessing by how this word is typically used, it means: "My opponents are cognitively inferior. They can't understand my situation, because they have never experienced it. On the other hand, I can perfectly understand their situation (despite never having experienced it either)."

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-08-11T03:17:14.708Z · LW(p) · GW(p)

I don't think it's implausible to believe that people pay more attention to those who have higher status than themselves, and less attention to those who have lower status. Furthermore, I believe in the snafu principle (people don't give accurate information if they'll be punished for it*).

Unfortunately, the true parts of the idea of privilege are apt to get swamped by the way it's used as a power grab.

*The original version framed this as an absolute. I'm quite willing to be probabilistic about it.

Replies from: Viliam_Bur, Eugine_Nier
comment by Viliam_Bur · 2013-08-11T09:01:40.494Z · LW(p) · GW(p)

I don't think it's implausible to believe that people pay more attention to those who have higher status than themselves, and less attention to those who have lower status.

Do I read it correctly as: "..therefore, to focus on the opinions of lower-status people, it is necessary to exclude the higher-status people from the debate (because otherwise people would by instinct turn their attention only to what the higher-status people said -- which is probably not new information for anyone -- and ignore the rest of the debate)."?

I would agree with that. -- And by the way, in some situations an average woman is actually higher-status than an average man, so perhaps we should debate those situations by excluding women's voices. Actually, if the "dating market" is an example of such a situation, that would explain the necessity of PUA debates (as in: the debate about dating is culturally framed on women's terms, so we need a place where men are allowed to explain how they feel without automatically taking a status hit for doing so).

Perhaps the problem is in not distinguishing between the "hypothesis generating" and "hypothesis debating" parts of reasoning. Excluding higher-status people from some hypothesis-generating discussions is good, because it allows people to hear the opinions they would otherwise not hear. But when those hypotheses are already generated, they shouldn't be accepted automatically. (There is a difference between "you oppress me by using your status to prevent me from speaking my hypothesis" and "you oppress me by providing an argument against my hypothesis".) In theory, a group of lower-status people doesn't have a monolithic opinion, so they could hold the debate among themselves. But sometimes the dissenting subgroup can be accused of being not-low-status-enough. (As in: "this topic should be only discussed by women, because only women understand how women feel. oh, you are a woman and you still disagree with me? well, that's because you are a privileged white woman!")

As an unpolitical analogy, it makes sense to use some special rules for brainstorming, to help generate new ideas. But it does not mean that the ideas generated by these special rules should be protected by them forever. It makes sense to use brainstorming for generating ideas, and then to use experiments and peer review for testing them. -- So while it can be good to use brainstorming to generate an idea for a peer-reviewed journal... it would be silly to insist that the journal must accept the idea uncritically, because otherwise it ruins the spirit of brainstorming.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-11T20:43:49.140Z · LW(p) · GW(p)

"..therefore, to focus on the opinions of lower-status people, it is necessary to exclude the higher-status people from the debate

I would like to point out that this is impossible by the definition of "high status".

comment by Eugine_Nier · 2013-08-11T21:50:09.154Z · LW(p) · GW(p)

I don't think it's implausible to believe that people pay more attention to those who have higher status than themselves, and less attention to those who have lower status. Furthermore, I believe in the snafu principle (people don't give accurate information if they'll be punished for it*).

I would like to point out that Yvain's post, which progressives like to cite elsewhere in this thread, makes the exact opposite argument.

comment by FiftyTwo · 2013-08-11T11:19:48.609Z · LW(p) · GW(p)

It's not like there's a whole bunch of literature that they're stubbornly ignoring.

The mainstream of political philosophy and political science is pretty much opposed to their positions. While none of it specifically addresses the topics covered by Moldbuggians and neo-reactionaries in the terms they use, the burden of proof seems to be on them to prove there is something massively wrong with the mainstream before the mainstream has to specifically craft responses to their arguments.

(For reference, here's an example of what I mean by mainstream "progressive" writing, which argues that democracy has empirically better outcomes for its citizens and outlines democratic peace theory.)

comment by [deleted] · 2013-08-11T02:14:42.315Z · LW(p) · GW(p)

I doubt this very much. The differences between the far-right faction and the progressives (among whom I count myself) on this website are not primarily of the sort that can be bridged by intelligent argument, for a number of reasons.

Wrong. I used to be a reasonable, well-adjusted progressive, even a quite ideologically passionate one (leaning towards anarchism). Moldbug's intelligent analysis (for all its flaws) convinced me that I was wrong. I can't be the only one who is capable of responding to argument.

comment by NancyLebovitz · 2013-08-06T23:30:46.210Z · LW(p) · GW(p)

If you could write up an intelligent post arguing for progressivism, then you would probably get a lot farther on convincing the far-right faction of this site than by telling them they are evil for holding their beliefs without giving reasons as to why. (The problem, of course, is that it requires time and effort.)

It may depend on what you mean by the far-right faction -- I think the farthest right has already left. It might be more possible to move the middle than to convince the extreme.

comment by David_Gerard · 2013-08-04T19:03:55.737Z · LW(p) · GW(p)

He's written substantial chunks of it on his blog, but there is no quantity of words he could write that could convince them.

(We're talking about a seriously minor viewpoint held by cranks, but they're cranks who include LW regulars, or you and I would never have heard of them in the first place. It is entirely unclear to me how convincing them is a game changer.)

Replies from: Risto_Saarelma, gothgirl420666
comment by Risto_Saarelma · 2013-08-05T04:51:28.330Z · LW(p) · GW(p)

The problem with the reactionaries getting rebuttals in the form of "that's a socially horrible thing to say and you're a horrible person, fuck you" is that viewpoints that seem supported by science but are socially shunned as unthinkable are catnip here. A rebuttal showing that the argument isn't actually very solid, or that the science isn't being interpreted right, or that large chunks of important stuff are being left out would have a lot more staying power. Just bringing on the social shaming sends the signal that the reactionaries might be on to something, since it seems hard to come up with a rebuttal that works on the same level as their arguments.

Replies from: David_Gerard, Document
comment by David_Gerard · 2013-08-05T06:43:41.870Z · LW(p) · GW(p)

... Did you actually read Yvain's posts on the matter? That's not an accurate description of them at all.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2013-08-05T07:12:39.878Z · LW(p) · GW(p)

Yvain's stuff is looking pretty good, though there could be more of it. I was talking about the general pattern of discussion that shows up in places like this subthread and seems to keep the dynamic going, and about the cranks being impossible to convince. There are always going to be hardliners who stick to their guns no matter what, but there's also going to be an audience who sees one side going, hey, argument and pile of citations here, and the other side going, that's a horrible thing to say and you're horrible, and drawing their conclusions.

I'd really like to have lots more stuff at the level of quality of the (embarrassingly, also Yvain's) Non-Libertarian FAQ arguing for progressive views, but don't really know where to look. Everything seems to be a soup of lazy ingroup flag-waving. (This might actually be another thing that makes reaction tick. With topics polite society is inimical to, anything with obvious argumentation flaws or sloppiness gets quickly torn down and ignored, leaving behind a small group of careful and cleverly argued articles. With progressive writings there isn't any similar mechanism culling the sloppy but well-intentioned pieces from the very clear, careful and well-researched ones, so the latter won't get similar visibility.)

Replies from: David_Gerard, Vaniver
comment by David_Gerard · 2013-08-05T11:22:35.821Z · LW(p) · GW(p)

With topics polite society is inimical to, anything with obvious argumentation flaws or sloppiness gets quickly torn down and ignored, leaving behind a small group of careful and cleverly argued articles. With progressive writings there isn't any similar mechanism culling the sloppy but well-intentioned pieces from the very clear, careful and well-researched ones, so the latter won't get similar visibility.

Yvain has also written on this (though I can't find the post quickly): that bad ideas will tend to have better arguments for them than good ideas, because the bad ideas need good arguments more. Though I think that is more in the form you put it: that unaccepted ideas will generally have better arguments than accepted ideas.

Replies from: Eugine_Nier, Risto_Saarelma
comment by Eugine_Nier · 2013-08-09T05:12:01.912Z · LW(p) · GW(p)

Yvain has also written on this (though I can't find the post quickly): that bad ideas will tend to have better arguments for them than good ideas, because the bad ideas need good arguments more. Though I think that is more in the form you put it: that unaccepted ideas will generally have better arguments than accepted ideas.

Yvain's post was about popular ideas, not necessarily good ideas. In particular this rephrasing violates the law of conservation of expected evidence.

Yvain also fails to note that his argument implies that, over time, the mainstream position will itself drift further and further away from truth, towards whatever is most convenient for signaling.

Replies from: army1987, pragmatist
comment by A1987dM (army1987) · 2013-08-11T19:04:30.459Z · LW(p) · GW(p)

Yvain also fails to note that his argument implies that, over time, the mainstream position will itself drift further and further away from truth, towards whatever is most convenient for signaling.

He did note that, in section IV of this post.

comment by pragmatist · 2013-08-09T06:45:44.710Z · LW(p) · GW(p)

In particular this rephrasing violates the law of conservation of expected evidence.

Can you elaborate? I don't see this.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-09T07:29:43.699Z · LW(p) · GW(p)

David wrote:

bad ideas will tend to have better arguments for them than good ideas,

So would he actually treat hearing a good argument for an idea as evidence against that idea?

Replies from: pragmatist, tut
comment by pragmatist · 2013-08-09T11:49:08.996Z · LW(p) · GW(p)

In my experience, most of the best philosophers working on the philosophy of religion are in fact theists, due to fairly obvious selection effects. Of course, most philosophers of religion, good or not, are theists, but I think in the upper echelons of the discipline the disparity is even more stark. Atheists who get into the field tend not to be very good philosophers, with a few honorable exceptions.

So a disproportionate number of the most careful and clever arguments (which, I think, is what David meant by "best arguments") in that field support the theistic side, the wrong side. If the only thing I knew about an article on the philosophy of religion is that it is extremely well argued, I would consider that evidence that its conclusion is false. Does this violate conservation of expected evidence?

Note that there's a distinction between all the arguments that exist in some Platonic sense and all the arguments that exist in published form, and that there's a distinction between arguments that are "good" in the sense of significantly raising the probability of their conclusions and arguments that are "good" in the sense of being clever and carefully constructed. In both cases, I think David was talking about the second option. He was attributing to Yvain the claim that most clever and carefully constructed arguments out of the set of published arguments are for bad ideas. I don't agree with this claim but I don't think it violates basic rules of probability.
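A toy Bayes calculation (all numbers invented purely for illustration) shows how a well-constructed argument can be evidence against a conclusion without breaking any rules of probability:

    # Prior that a given thesis in the field is true.
    p_true = 0.5
    # Assumed selection effect: the wrong side attracts the cleverer
    # advocates, so well-argued pieces are likelier for false theses.
    p_well_given_true = 0.2
    p_well_given_false = 0.6

    p_well = p_true * p_well_given_true + (1 - p_true) * p_well_given_false

    # Bayes' rule: posterior after seeing / not seeing a well-argued piece.
    post_if_well = p_true * p_well_given_true / p_well             # 0.25
    post_if_not = p_true * (1 - p_well_given_true) / (1 - p_well)  # ~0.67

    # Conservation of expected evidence: posteriors average to the prior.
    expected = p_well * post_if_well + (1 - p_well) * post_if_not
    assert abs(expected - p_true) < 1e-12

Seeing a well-argued piece lowers my credence and not seeing one raises it; the two cases average back to the prior, which is exactly what the conservation law demands.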

comment by tut · 2013-08-10T08:38:55.136Z · LW(p) · GW(p)

Unless I misunderstand, only if he hasn't already updated on how many believe in the idea.

comment by Risto_Saarelma · 2013-08-05T12:33:02.966Z · LW(p) · GW(p)

Yvain has also written on this (though I can't find the post quickly): that bad ideas will tend to have better arguments for them than good ideas, because the bad ideas need good arguments more.

Could be that too. But it seems like we're currently in the tail end of some 150 years of reactionary attitudes mostly being mainstream and mostly being argued for with poor and lazy ingroup flag-waving arguments, with the occasional clever argument for progressivism that the mainstream finds disagreeable popping up now and then and nudging things around. Then mainstream society started actually going progressive, and now we've started seeing the opposite pattern, even though the reactionary and progressive ideologies don't seem to have changed significantly.

Replies from: Eugine_Nier, Eugine_Nier
comment by Eugine_Nier · 2013-08-09T05:18:40.894Z · LW(p) · GW(p)

But it seems like we're currently in the tail end of some 150 years of reactionary attitudes mostly being mainstream and mostly being argued for with poor and lazy ingroup flag-waving arguments,

Have you ever actually read any of the original arguments from 150 years ago, or are you merely going by the progressive characterization of their opponents' arguments?

Replies from: FiftyTwo, Risto_Saarelma
comment by FiftyTwo · 2013-08-11T11:09:46.914Z · LW(p) · GW(p)

Mill's "Vindication of the Rights of Women" and "On liberty" are good examples of arguing for those positions before they were mainstream.

Replies from: Protagoras, Eugine_Nier
comment by Protagoras · 2013-08-12T00:30:08.330Z · LW(p) · GW(p)

Vindication of the Rights of Women was by Mary Wollstonecraft, and from well before Mill wrote The Subjection of Women. Still, Mill's views were certainly, as you say, radical for his time (and also more radical than those in Wollstonecraft's essay, if I recall it at all accurately).

comment by Eugine_Nier · 2013-08-11T21:23:55.792Z · LW(p) · GW(p)

Yes, and Mill's position would be considered libertarian today. In other words, if his books were published today, a lot of the people in this thread would denounce them as "reactionary", and probably by far worse names.

Replies from: Protagoras
comment by Protagoras · 2013-08-12T00:34:56.583Z · LW(p) · GW(p)

Which of Mill's views do you think would be regarded as reactionary? I admit some of his views would be regarded as weird in light of more recent experience (e.g. his views on education in On Liberty are based on a very different baseline than the modern situation), but I'm having a hard time thinking of clear cases of overlap between Mill's views and those commonly denounced as reactionary these days.

Replies from: Viliam_Bur, Eugine_Nier
comment by Viliam_Bur · 2013-09-20T10:49:54.204Z · LW(p) · GW(p)

Funny thing, I was also thinking about Mill's "On Liberty" when reading this thread. I believe the issue is deeper:

In politics you often have a winning side and a losing side. The winning side can use various techniques to silence the losing side. People sympathetic to the losing side will move to meta arguments about why it is wrong to silence your opponents. -- The unfortunate, but logical, consequence is that arguing about why it is wrong to silence your opponents becomes evidence of belonging to the losing side. An automatic status hit.

Therefore, it was easy to interpret Mill as an advocate for the losing side of his day; and it is also easy to believe that he would support the losing side of today (at least indirectly, by his meta arguments) if he were alive now... if what you know about him is that he argued it is wrong to silence your opponents instead of debating them (which is the part that impressed me strongly).

If Mill advocated that even people guilty of the horrible crime of atheism should be able to publish their opinions, even if just to increase the quality of the theist arguments against them... it seems logical that today he could say the same thing about people guilty of believing in differences between people, or similar stuff. (Of course this assumes that he would be consistent in his beliefs and willing to bite the bullet.)

The problem would not be with Mill's beliefs per se, but with inferences people would make from his meta arguments. And he would not even have to support the low-status people to create this association; the low-status people would create the association by quoting him often. -- And then he would have to choose between implicitly denying them his support, or being considered a silent supporter.

comment by Eugine_Nier · 2013-08-14T06:06:22.937Z · LW(p) · GW(p)

For example, his views on economics, which we would today call libertarian, have been denounced by several people in this thread.

comment by Risto_Saarelma · 2013-08-09T06:25:31.307Z · LW(p) · GW(p)

I was thinking that the random person's opinions about how the mainstream moral climate is the right and proper thing haven't been preserved for me to read, and that whatever did get preserved has probably been heavily filtered for being well-argued and interesting. But that's not quite right. They did have newspapers, which would be archived somewhere no matter what the content, and all sorts of weird random pamphlets probably are as well. Still a lot more editorial control than Reddit, but editorial control by the contemporary people, not by present-day scholars composing the Collection of Olden Time Moral Arguments Affirming The Great Historical Narrative For Moral And Intellectual Progress.

I was analogizing the current blogs-and-reddits thing to people ranting to each other at bars or something, with most of the arguments being at the quality of a random person ranting at you, but on second thought that's not really a good analogy. Face-to-face socializing has pretty different dynamics than media culture, and the media culture was editorialized newspapers and books and the odd self-published pamphlet by someone with enough money for that.

A third thing, which would be relevant and would be hard to go back and assess now, is how community-level social persuasion got done. What kind of arguments did priests use trying to convince the congregation that women's right to vote would lead to the apocalypse, and what kind of arguments did the scruffy guy on the soapbox use trying to convince factory workers to start hanging fat people with top hats and monocles from the lampposts, and so on.

So I could figure out what was a popular newspaper and go read through its archives, or try to figure out which books were bestsellers and see if I can somehow find a copy and read that, and I might actually learn something more interesting about the common quality of argumentation used than by just picking up filtered-by-present-day recommended books that might be systematic outliers. I haven't done this because it sounds like a lot of work.

EDIT: Adam Cadre's reviews of old post-apocalyptic books are informing my expectations about what sort of stuff I might find if I skipped the present-day list of exemplary books the people of a past era read and went digging into the piles of half-forgotten stuff they actually read.

comment by Eugine_Nier · 2013-08-10T03:00:48.884Z · LW(p) · GW(p)

with the occasional clever argument for progressivism that the mainstream finds disagreeable popping up now and then and nudging things around.

So why aren't those arguments being used to defend progressivism today? The answer, which isn't hard to notice if you actually look at old progressive arguments, is that those arguments tend to have premises that modern progressives no longer believe, and their conclusions are also very different from modern progressive positions.

even though the reactionary and progressive ideologies don't seem to have changed significantly.

This is not the case as I mentioned above.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2013-08-10T06:04:32.989Z · LW(p) · GW(p)

Has there been significant change in the underlying trends? That is: reactionaries are pessimistic about inherent human nature, consider it basically impossible to change significantly, and see the problem of developing social mechanisms to control it as vital to social stability, a problem that takes generations and centuries to solve and is likely to end up with constraining and unintuitive solutions which will nevertheless be the best bet available. Progressives, meanwhile, are optimistic that human nature is either benign or malleable enough that it's possible to enact large and fast social changes and eventually educate people to make the new system work across the board, without messy, nasty, and basically impossible-to-change facets of human nature causing persistent problems.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-08-11T20:49:41.259Z · LW(p) · GW(p)

Well, if you abstract things all the way up to that level.

comment by Vaniver · 2013-08-11T02:06:33.780Z · LW(p) · GW(p)

I'd really like to have lots more stuff at the level of quality of the (embarrassingly, also Yvain's) Non-Libertarian FAQ arguing for progressive views, but don't really know where to look.

I will point out that the Non-Libertarian FAQ isn't actually anti-Libertarian; it's targeted at a specific (vocal) branch of libertarianism, which I'd call Moral Libertarianism. From the FAQ:

To the first type of libertarian, I apologize for writing a FAQ attacking a caricature of your philosophy, but unfortunately that caricature is alive and well and posting smug slogans on Facebook.

comment by Document · 2013-08-08T20:14:31.124Z · LW(p) · GW(p)

Did you mean to post that here?

comment by gothgirl420666 · 2013-08-04T20:27:07.853Z · LW(p) · GW(p)

I mean a game changer in this community. Obviously Yvain's blog posts will not have far-reaching real-world political implications.

By the way, singularitarianism is also a seriously minor viewpoint held by cranks.

Replies from: David_Gerard
comment by David_Gerard · 2013-08-05T06:46:21.212Z · LW(p) · GW(p)

By the way, singularitarianism is also a seriously minor viewpoint held by cranks.

Yes, but that doesn't make crank magnetism a good idea.

Replies from: niceguyanon
comment by niceguyanon · 2013-08-05T18:33:22.848Z · LW(p) · GW(p)

Wow, that is awesome; I did not know there was a term for that. I always wondered why there seems to be a stacking effect when it comes to bad beliefs. Thanks.

Replies from: David_Gerard
comment by David_Gerard · 2013-08-05T18:57:10.787Z · LW(p) · GW(p)

I think a lot of it is the "vindication of all kooks" effect. NaturalNews is an excellent example.

comment by ArisKatsaris · 2013-08-08T10:37:01.543Z · LW(p) · GW(p)

I hate drunken hate-filled rants, so I'm downvoting you.

Replies from: BlindIdiotPoster
comment by BlindIdiotPoster · 2013-08-08T11:07:35.438Z · LW(p) · GW(p)

I mildly disapprove of posts with no purpose other than to state the poster's unqualified opinion. Public yea/nay voting is IMO not needed or desirable, especially on a forum with a karma system.

Richard Kennaway's post below yours is just as bad for exactly the same reason, of course.

comment by Kawoomba · 2013-08-04T10:33:22.028Z · LW(p) · GW(p)

If you want to leave this board, but suspect you lack the willpower to do so, then there's a better way than Suicide By Cop. Just scramble your password, add a firewall filter for LW (or a new "127.0.0.1 lesswrong.com" entry in your hosts file) and be done with it.
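(For reference: the hosts file puts the IP address first, and lives at /etc/hosts on Unix-like systems or C:\Windows\System32\drivers\etc\hosts on Windows. A minimal blocking entry would be something like:

    127.0.0.1    lesswrong.com
    127.0.0.1    www.lesswrong.com

Browsers sometimes cache lookups, so a browser restart may be needed before it takes effect.)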

No need to burn the commons on your way out.

If what you're looking for are reassurances, PMs to those you've positively interacted with are the way to achieve that (no bystander effect, less drama etc.).

If, however, you're drunk, um, don't comment while drunk? Yeah, I'm not too good with that one either.

This is meant as honest advice since IIRC I've upvoted you a number of times. (If I hadn't read Why Our Kind Can't Cooperate I might have gone for "Don't cry. You know, like a girl." Luckily I have, so I didn't.)

Replies from: Multiheaded
comment by Multiheaded · 2013-08-04T10:37:51.316Z · LW(p) · GW(p)

If, however, you're drunk

Yeah.

If you want to leave this board, but suspect you lack the willpower to do so

I feel like I have a duty before a community that I see massive potential in. To stand up for my values and denounce all the shit I hate here in an articulate, reasoned manner. But I'm very much not up to the task, and this makes me feel frustrated and miserable. And angry at my own impotence in the matter.

It'd be a big amount of work to even call out the most egregious shitty shit here on a regular basis, with some citations and explanations for why I did so. And it feels like people hardly even care.

Replies from: Viliam_Bur, Kawoomba, Vaniver, Oscar_Cunningham, ChristianKl, None, wedrifid
comment by Viliam_Bur · 2013-08-04T23:45:38.517Z · LW(p) · GW(p)

I feel like I have a duty before a community that I see massive potential in. To stand up for my values and denounce all the shit I hate here in an articulate, reasoned manner.

Then collect the worst examples and make an article of them. Preferably the ones that were upvoted (because if they were downvoted, it means the community already disagrees with them). If the situation is so horrible, you should have an easy job. Just create a text file on your desktop, and any time something pisses you off, put the permalink there. Wait until you have enough material (please, don't make it a series of short articles), then process it.

It'd be a big amount of work to even call out the most egregious shitty shit here on a regular basis, with some citations and explanations for why I did so.

If the things are really so horrible, is the explanation even necessary? Just give a dozen citations, which prove it wasn't a one-time event, and that's it. Preferably they should be citations of different people.

If you want to make drama, please put some work into it. Or use your rationality and outsource the job -- take 10 of your most ideologically close friends and ask them to spend one afternoon finding the most horrible upvoted comments, so that you can write a critical article.

comment by Kawoomba · 2013-08-04T10:59:15.403Z · LW(p) · GW(p)

I understand. However, there isn't some binary pass/fail criterion. This community can become incrementally better (obligatory "less wrong") or worse. Your contributions are helping steer it along a good path (ahem, usually).

If you've set extremely ambitious goals for yourself ("I will make this community live up to its full potential"), and those then stop you from pursuing more realistic milestones along that trajectory, then you've shot yourself in the foot:

The perfect is the enemy of the good, and all that. Compare "Can't stop world hunger altogether [assuming that's your ultimate objective], so I should stop donating anything towards that goal."

People care, at least more than we allow ourselves to think from within our protective cynical bubble.

comment by Vaniver · 2013-08-11T02:35:20.059Z · LW(p) · GW(p)

To stand up for my values and denounce all the shit I hate here in an articulate, reasoned manner. But I'm very much not up to the task, and this makes me feel frustrated and miserable. And angry at my own impotence in the matter.

I think it's important to separate out preferences and predictions, and try to limit values to the first. If you want to do something about posts you think should be responded to civilly, send them to me and I'll take a look.

That said...

You gentlemen can probably guess as to which ones of you I mean by this.

I feel like I should point out that I put about 20% probability that I'm included in this. In general, people are not as good at guessing this sort of information as you would expect.

comment by Oscar_Cunningham · 2013-08-04T17:28:26.535Z · LW(p) · GW(p)

denounce all the shit I hate here in an articulate, reasoned manner. But I'm very much not up to the task, and this makes me feel frustrated and miserable. And angry at my own impotence in the matter.

Can you just write a single big manifesto of why LessWrong is shit, and then link to it all the time?

Replies from: wedrifid
comment by wedrifid · 2013-08-05T05:06:24.652Z · LW(p) · GW(p)

Can you just write a single big manifesto of why LessWrong is shit, and then link to it all the time?

I'd really prefer he didn't. This practice would make LessWrong worse, not better.

comment by ChristianKl · 2013-08-04T16:06:32.903Z · LW(p) · GW(p)

To stand up for my values and denounce all the shit I hate here in an articulate, reasoned manner.

Calling people "smug, condescending fascist fucks" is not expressing yourself in an articulate, reasoned manner.

Replies from: None
comment by [deleted] · 2013-08-04T16:39:12.427Z · LW(p) · GW(p)

Did you not read the sentence immediately after?

comment by [deleted] · 2013-08-04T15:55:52.001Z · LW(p) · GW(p)

I feel like I have a duty before a community that I see massive potential in.

I don't think that community exists anymore.

And it feels like people hardly even care.

As far as I can tell, they don't.

comment by wedrifid · 2013-08-04T16:56:37.935Z · LW(p) · GW(p)

And angry at my own impotence in the matter.

I've heard that's one of the common symptoms.

comment by [deleted] · 2013-08-11T01:56:42.082Z · LW(p) · GW(p)

Methinks you hate too much.

extreme, ruthless classism
casually invoked sexism
brazen authoritarianism

There are two things you could be referring to with those:

  1. Unfounded hatred or harmful policies based on superficial concerns and lack of moral reasoning. If there is that kind of stuff here, I think you would not be alone in calling out those who perpetuate it. Please continue to confront it.

  2. Facts and hypotheses that contradict certain dominant social narratives. These things may need care in discussion due to their sensitive nature and similarity to #1, but I think it's incorrect to simply condemn them. What happens if reality doesn't cooperate with your politics? Perhaps you think it's improbable, but I think you should be able to handle the eventuality. You should especially expect that these things will come up in a community of people who are more interested in truth than politics. If reality is evil, I think the correct response is to condemn reality, not those who dare to study it.

Now of course, the first thing tries to seem like the second thing as much as it can, so I appreciate that someone bringing up certain subjects under the guise of the second thing is not strong enough evidence to overcome the higher base rate of haters relative to scholars. Still, I think a certain level of charity is warranted.

But now I have a question. Suppose I have come to "racist", "sexist", "classist", and "authoritarian" beliefs in the course of investigating reality, or at least I believe I have, but have no particular sympathy for ignorant hatred. What is your advice in this situation? I don't think of myself as evil, and don't seem to respond as intended to shaming, so the usual "advice" won't work.

From my perspective, there are three explanations for your behaviour:

  1. You are a passionate liberal, such that you feel the urge to condemn anything that looks anything like Xism, and don't think the type-1 Xism / type-2 Xism distinction I outlined above is legitimate.

  2. You are a reasonable person who is capable of appreciating the distinction, but nonetheless have reason to believe that certain facts would be so harmful to discuss that they need to be shamed out of consideration regardless of the negative consequences for the epistemic health of the community.

  3. You've had really bad experiences with Xism such that you have a visceral negative reaction to anything that looks like it, and are unable to engage with that subject rationally, and you would appreciate if other people avoided it. (If this is the case, I actually have a lot of sympathy. I know there are things like that for me.)

I'm trying to be charitable and assume you are a type-2 anti-Xist, except that if that is a reasonable position, I don't know yet what you could know that would justify it. So again I ask for your advice: given that I believe I am a type-2 Xist, and you are a type-2 anti-Xist, what is it that you think I should know?

You gentlemen can probably guess as to which ones of you I mean by this. Fuck you.

I love you too, friend.

Replies from: PrometheanFaun
comment by PrometheanFaun · 2013-08-11T04:43:16.414Z · LW(p) · GW(p)

Oh, hey. Is this the lecture hall for Utopic Fascism Deprogramming 101? Cool, d'you mind if I sit next to you? I'm really excited about this class. We might have to drop it though, I hear that the lecturer might not even be planning on showing up.

comment by BlindIdiotPoster · 2013-08-04T22:52:10.174Z · LW(p) · GW(p)

One thing that bothers me about this community is that we all clearly have political views and regularly express them, but for some reason explicit discussion and debate are discouraged. The end result is that lots of people casually assert extremely controversial opinions as fact, and people are expected to approve via silence.

Replies from: tut, Vladimir_Nesov, John_Maxwell_IV, PrometheanFaun
comment by tut · 2013-08-06T16:15:02.922Z · LW(p) · GW(p)

... and people are expected to approve via silence.

That's what the downvote (and upvote) button is for. Reading == agreeing isn't a very good heuristic, and with the karma system you don't have to use it.

comment by Vladimir_Nesov · 2013-08-05T13:33:47.968Z · LW(p) · GW(p)

and people are expected to approve via silence

False. Silent disapproval and indifference look exactly the same.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-08-10T21:49:11.952Z · LW(p) · GW(p)

The downvote button should be the difference.

If we are not willing to even spend the energy required to push the button to protect the level of discussion we have here, that almost seems like we don't care about it.

comment by John_Maxwell (John_Maxwell_IV) · 2013-08-07T05:17:39.166Z · LW(p) · GW(p)

people casually assert extremely controversial opinions as fact

So, is this something we want people to do? If not, maybe we should start calling it out? I suspect it's a bad thing myself.

Replies from: Richard_Kennaway, BlindIdiotPoster
comment by Richard_Kennaway · 2013-08-08T08:55:49.664Z · LW(p) · GW(p)

A problem is that people uttering political opinions often do not see themselves as doing that. For as has been written, what it feels like to have a belief, while you are having it, is not that you believe something, but that you are looking straight at reality.

Then someone contests the obliviously held belief, and it is they who are accused of bringing in politics.

The pattern is especially clear in spaces where there is a prevailing political consensus. Only people posting against it are accused of politics; everyone else is merely speaking the truth. I have seen this in both left-wing and right-wing spaces. In fact, it is the default behaviour in such spaces. Regardless of the cause, the stronger the consensus, the more invisible it becomes to its members.

comment by BlindIdiotPoster · 2013-08-08T08:42:44.165Z · LW(p) · GW(p)

I think it's a bad thing to the extent that it could lead to opinions propagating without debate.

In the wider world, even things like atheism are "extremely controversial," but I don't think we need to make dramatic shows of uncertainty and humility every time someone brings it up; almost all of us here are atheists and we need to move on and discuss the more difficult questions. What I worry about is that, with a community norm of being vocal about our opinions but not discussing them rationally (or even at all) most of the time, we may wind up deciding what to think via memetic exposure and perhaps evaporative cooling instead of rationality. This sort of effect would also be a danger if we had a norm of being verbally abusive to anyone with an unpopular opinion, of course.

Note that I can't offer evidence that this is a real risk or a phenomenon that actually happens in online communities, but it worries me.

Replies from: Vaniver
comment by Vaniver · 2013-08-11T02:01:10.137Z · LW(p) · GW(p)

In the wider world, even things like atheism are "extremely controversial," but I don't think we need to make dramatic shows of uncertainty and humility every time someone brings it up; almost all of us here are atheists and we need to move on and discuss the more difficult questions.

I don't think the primary reason not to discuss atheism and theism at LW is that most readers of LW are atheists. What that implies to me is "if we all believe X, X is not worth discussing; if we are conflicted about Y, then Y is worth discussing."

What I would say instead is "Z is worth discussing to the extent that discussing Z is productive." There are topics where it would be great if we all agreed, but discussing those topics predictably does not lead to more agreement. That is, I would view it not as us being interested in more difficult questions, but as us preferring easier discussions.

The easier discussions are often on more sophisticated topics. For example, it's often easier to have an abstract discussion on what it means to believe something, and what it means to change your mind, than a concrete discussion on Lewis's trilemma.

comment by PrometheanFaun · 2013-08-05T01:27:42.746Z · LW(p) · GW(p)

but for some reason explicit discussion and debate is discouraged

The reason is an assumption that if we discuss those topics, rationality will leave the building. Since rationality is what we're here for, we must not discuss them. Maybe one day we'll be ready, but I don't think we are at this point.

Replies from: David_Gerard
comment by David_Gerard · 2013-08-05T06:48:31.414Z · LW(p) · GW(p)

This doesn't make the approval by silence a good thing.

comment by cousin_it · 2013-08-04T19:01:35.403Z · LW(p) · GW(p)

In my experience, both "progressives" and "reactionaries" on LW often think that they're the true oppressed minority, and that the other side claims to be oppressed only to gain status or something. So if you're angry because you feel oppressed, it's probably pointless to direct that anger at the other side, because the other side feels the same way. Be angry at human nature instead...

Replies from: pragmatist
comment by pragmatist · 2013-08-04T19:05:08.282Z · LW(p) · GW(p)

I doubt his anger has to do with feeling like an oppressed minority for his political beliefs. It sounds like he's more angry about classism, sexism and authoritarianism.

Replies from: Eugine_Nier, cousin_it
comment by Eugine_Nier · 2013-08-09T04:42:46.386Z · LW(p) · GW(p)

Given his history, I suspect it's because he doesn't like the positions held by the "fascists" but can't refute their arguments.

comment by cousin_it · 2013-08-04T19:26:56.499Z · LW(p) · GW(p)

Whoops, sorry, you're right. I should've said "weak" instead of "oppressed".

comment by wedrifid · 2013-08-04T16:51:19.868Z · LW(p) · GW(p)

I hate all the smug, condescending fascist fucks in this fucking community so much.

There are smug condescending fucks in this fucking community, but the considered opinion is that there aren't fascists. I think that would be an exaggeration.

comment by David_Gerard · 2013-08-04T15:26:41.565Z · LW(p) · GW(p)

Which ones particularly annoy you? For me it's the racists, sexists and Libertarians in about that order, and particularly the assumption that these are fine positions to hold and variance from them is mind-killing.

Replies from: wedrifid, FiftyTwo
comment by wedrifid · 2013-08-04T16:52:17.097Z · LW(p) · GW(p)

Which ones particularly annoy you? For me it's the racists, sexists and Libertarians

Are Libertarian fascists a thing? Is that even possible?

Replies from: MixedNuts
comment by MixedNuts · 2013-08-04T18:04:08.431Z · LW(p) · GW(p)

Start out on a volunteer basis, use donations to accumulate wealth, and use that, rather than political power, as a lever to keep the Jews/women/poor down, make people have kids, and enforce other fascist policies? You can't use violence, but you can get a monopoly on everything and make people obey or starve.

Replies from: wedrifid
comment by wedrifid · 2013-08-04T18:34:33.397Z · LW(p) · GW(p)

I like the way you think.

comment by FiftyTwo · 2013-08-11T11:08:29.009Z · LW(p) · GW(p)

racists, sexists and Libertarians

Slightly off topic, but I genuinely don't think I've encountered explicit sexism or racism on LW. The neo-reactionary community includes some elements of that, but they seem to be mostly on their own blogs rather than on LW itself.

comment by wedrifid · 2013-08-04T16:59:07.602Z · LW(p) · GW(p)

You gentlemen can probably guess as to which ones of you I mean by this. Fuck you.

Aha! A clue. But an ambiguous one. Do you mean to say that none of the 'fucks' are female or that those that are female likely lack the ability to guess?

comment by Richard_Kennaway · 2013-08-08T08:54:54.448Z · LW(p) · GW(p)

I approve of this posting.

(In this case, I didn't think my silent upvote was enough.)

Replies from: wedrifid, BlindIdiotPoster
comment by wedrifid · 2013-08-08T10:11:18.062Z · LW(p) · GW(p)

I approve of this posting.

I disapprove of the posting. Moreover, I oppose the active encouragement of barely coherent temper tantrums.

(In this case, I didn't think my silent upvote was enough.)

Your upvote and then active expression of approval requires that I upgrade my silent downvoting of every comment that endorses, takes seriously or encourages the posting in question to a similar verbal expression. By way of contrast, if Multiheaded had thought through his frustrations and expressed them candidly while sober, and after having got a grip on his emotions, then I would encourage the expression. Rewarding tantrums with attention and approval is precisely the wrong thing to do. It tends to be bad both for the recipient and the community.

comment by BlindIdiotPoster · 2013-08-08T11:30:22.078Z · LW(p) · GW(p)

Be aware you're playing a zero-sum game at best here.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-08-08T11:58:06.725Z · LW(p) · GW(p)

I fail to see how. I had an opinion about the thread, and in addition, an opinion about that opinion: that it was worth expressing. So I did. Others may disagree with either of those opinions (for example you), but they do not choose my actions. I do. What is the score in this hypothetical game? (It isn't the karma rating of Multiheaded's post.)

Replies from: BlindIdiotPoster
comment by BlindIdiotPoster · 2013-08-08T12:16:46.207Z · LW(p) · GW(p)

If you make your opinion more prominent by expressing it in a post instead of an upvote, you encourage others to do the same; thus LessWrong gets more non-content posts and nothing much is accomplished by anyone. Since so far this thread has two posts of the type I describe, I guess the score is 1-1.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-08-08T12:31:29.250Z · LW(p) · GW(p)

If you make your opinion more prominent by expressing it in a post instead of an upvote, you encourage others to do the same

Only in the language of political correctness. In the real world, encouraging others to do the same looks like this: "Hey everyone, post your opinion!"

Since so far this thread has two posts of the type I describe, I guess the score is 1-1.

This "score" is in your own head. Anyone can keep "score" by whatever rules they like. It is of no importance.

Replies from: BlindIdiotPoster
comment by BlindIdiotPoster · 2013-08-08T12:59:11.747Z · LW(p) · GW(p)

I concede: my original post was poorly thought out and sort of meaningless.