Guardian column on ugh fields, mentions LW

post by Kaj_Sotala · 2011-07-09T12:34:05.651Z · LW · GW · Legacy · 45 comments

http://www.guardian.co.uk/lifeandstyle/2011/jul/08/change-your-life-ugh-fields

In 1920, in a jaw-droppingly unethical experiment that's mainly remembered today as an example of how not to conduct a psychological study, John B Watson set out to prove a point about fear – using, as his guinea pig, an eight-month-old boy in a Baltimore hospital. Little Albert, as he became known, was taught to associate a white rat with a terrifying sound – a steel bar was struck with a hammer behind his back whenever he reached towards the animal – until, the story goes, he was terrified of anything white and furry: dogs, a coat, Watson in a Santa Claus costume. (Watson, apparently, intended to reverse the effect, but Albert was removed from the hospital before he could do so.) It would be entertaining to propose something similar to a university ethics committee today: they'd spring from their seats in horror, like Little Albert seeing a sheepskin rug.

In fact, most of the details of Little Albert's "conditioning" have since been thrown into doubt. But something not too dissimilar afflicts many of us. When an experience gets associated with acute bad feelings, especially in childhood – being around dogs, say, or swimming pools, or moving house or money troubles – that category of thing can become fearsome for ever. But there's an additional twist I hadn't considered until I encountered it recently on the rationality blog lesswrong.com, where it's termed an "ugh field": what if one effect of finding some area of life particularly stress-inducing is that we get conditioned into not even thinking about it at all?

"A problem with the human mind is it's a horrific kludge that will fail when you most need it not to," writes one Less Wrong blogger, who argues that ugh fields are a case in point: "If a person receives constant negative conditioning via unhappy thoughts whenever their mind goes into a certain zone of thought, they will develop a psychological flinch mechanism around the thought."

Suppose, in early adulthood, you have a few bad experiences with missed credit card bills and penalty fees. A rational person might resolve to think more about bills in future, to avoid repeat problems. But a fear-conditioned mind, erecting an ugh field around the subject, might become more forgetful with money, to avoid experiencing the emotions associated with the thought, thus making matters worse. (Another example: many people fail to take medicines they've been prescribed for life-threatening conditions. Could it be because they'd rather avoid thinking about having a life-threatening condition – even if that puts their lives at risk?) Worse, if the ugh field hypothesis is correct, the "flinch" occurs, by definition, before the thought enters your conscious mind. So even someone sincerely dedicated to confronting (say) their issues with money won't have the opportunity: the ugh field will have screened it out pre-emptively.

This is all highly dispiriting, except in so far as it highlights a broader truth about fear: we're not really afraid of events, but of experiencing the emotions associated with them. ("I'm only ever afraid of a feeling, never a task," is how the blogger David Cain applies this to procrastination at raptitude.com – see is.gd/t3cRPZ.) Which is actually liberating, since the prospect of experiencing an unpleasant emotion is almost always more palatable than the prospect of Something Really Bad happening. If you can tolerate the feeling of "ugh", there's not much you can't tolerate in life.

45 comments

comment by Paul Crowley (ciphergoth) · 2011-07-09T15:21:01.624Z · LW(p) · GW(p)

Burkeman has referenced stuff Eliezer has written more than once

Replies from: Document
comment by Document · 2011-07-10T16:49:09.250Z · LW(p) · GW(p)

Nitpick: most of those aren't actual examples, and in some cases they don't have both names on the actual page. (Tangentially, this one makes my brain hurt; not completely sure why.)

comment by beetle · 2011-07-09T14:37:31.542Z · LW(p) · GW(p)

Hi, I found this place because of that Guardian article. Do you know who authored http://lesswrong.com/lw/21b/ugh_fields? It only reads [deleted]; was the author's account suspended for some reason? I might cite that article on a future occasion and want to give due credit. Thanks.

Replies from: wmorgan, Emile, XiXiDu
comment by wmorgan · 2011-07-09T14:53:04.779Z · LW(p) · GW(p)

According to the wiki, it was Roko, who has since quit LW in order to eliminate a distraction from higher-order goals.

Replies from: Document
comment by Document · 2011-07-09T17:14:50.607Z · LW(p) · GW(p)

Replies from: wedrifid, gimpf
comment by wedrifid · 2011-07-10T05:39:24.431Z · LW(p) · GW(p)

Please delete the parent. I would prefer that people other than myself be discouraged from declaring my real-world name directly in the context of a post I had tried to remove. As such, I will discourage others from doing the same and hope the norm sticks.

comment by gimpf · 2011-07-09T23:48:25.014Z · LW(p) · GW(p)

I'd consider it unnecessarily impolite to explicitly link somebody's real name to an article when that person has decided to "unlink" those works from their identity. The fact that anybody could make the connection doesn't imply that making it easier for everybody is something one ought to do.

Replies from: Document
comment by Document · 2011-07-10T16:38:16.389Z · LW(p) · GW(p)

Name reference removed. On rereading your post, I noticed you weren't saying I should already have inferred from the context here that I was supposed to do that.

Edit: for the record, I probably wouldn't have commented in the first place if the site didn't effectively require me to comment as much as possible to keep the ability to downvote.

comment by Emile · 2011-07-09T14:52:46.313Z · LW(p) · GW(p)

Hi,

The author wasn't suspended; he deleted his account about a year ago, along with his other online presence. Some quick googling couldn't find his email address; maybe someone else has it.

comment by XiXiDu · 2011-07-09T14:56:21.864Z · LW(p) · GW(p)

Do you know who authored http://lesswrong.com/lw/21b/ugh_fields? It only reads [deleted]; was the author's account suspended for some reason?

The author is user:Roko, and the fact that it reads "[deleted]" means that he deleted his post, so only people who have the URL can view it. The reason for the deletion is an "ugh field" shared by many people here on Less Wrong; better not to ask.

Replies from: Will_Newsome, jsteinhardt
comment by Will_Newsome · 2011-07-10T11:50:02.073Z · LW(p) · GW(p)

You're using a Roko algorithm! Well, you might be, anyway. Specifically, trying to resolve troubling internal tension by drumming up social drama in the hopes that some decisive external event will knock you into stability. However you don't seem to be going out of your way to appear discreditable like he did, maybe because you don't yet identify with the "x-rationalist" memeplex to as great an extent as Roko.

Similarly, the message you might be trying to send after it's made explicit and reflected upon for a bit might be something like the following:

"A large number of people on this site (Less Wrong) could be held in contempt by a reasonably objective outside observer, e.g. a semi-prestigious academic or a smart Democratic senator or an exemplary member of a less contemptible fraction of Less Wrong. I would like to point this out because it is a very bad sign both epistemically and pragmatically. I want to make sure that people keep this in mind instead of shrugging it off or letting it become an ugh field. However the social pragmatics of the community have made it such that I cannot directly talk about the most representative plausibly-contemptible local beliefs, and furthermore I am discouraged from even talking about how it is plausibly-contemptible that I can't even talk about the plausibly-contemptible beliefs. I am thus forced to make what appear to be snide side-remarks about the absurdity of the situation in order to have a chance at refocusing the attention of the plausibly-contemptible fraction of Less Wrong---of which I am worried I might be a member---on this obviously important and distractingly disturbing meta-level epistemic question/conflict.

(Potentially ascending the reflective meta-level ladder to the moral high-ground:) Unfortunately I still cannot go meta here by pointing out the absurdity of my only being able to communicate distress with what appear to be snide side-remarks, because Less Wrong members---like all humans---only really respond to the tone of sentences and what that tone implies about the moral virtue of the writer. That is, they don't respond to the reasonableness of the actual sentences, and definitely not to the reasonableness of the cognitive algorithms that would make the strategy of writing such sentences feel appealing. And they definitely definitely definitely do not reason about the complex social pragmatics that would cause those cognitive algorithms to deem that strategy a reasonable one, or that would differentially cause a mind or mind-mode or mind-parts-coalition to differentially emphasize those cognitive algorithms as a reasonable adaptation to the local environment. And they definitely don't reflect on any of that, because there's no affordance. Sometimes they will somewhat usefully (often uselessly) taboo a word, or at the very most they'll dissolve it; but never will a sentence be deconstructed such that it can be understood and thoughtfully analyzed, nor will a sentence-generator. Thus I am left with no options and will only become more distressed over time, without any tools to point out how insane everyone in the world is being, and am forced to use low-variance small-negative-reward strategies in the hopes that somehow they will catalyze something."

Maybe I'm partially projecting. I'm pretty sure I'm ranting at least.

Edit: Here's a simplified concrete example of this (insightfully reported by Yvain so you know you want to click the link, it's a comment with 74 karma, for seriously), but it's everywhere, implicitly, constantly, without any reflection or any sense that something is terrifyingly disgustingly insanely wrongly completely barking mad. Or a subtler example from Less Wrong.

Replies from: XiXiDu
comment by XiXiDu · 2011-07-10T15:58:33.706Z · LW(p) · GW(p)

You're using a Roko algorithm! Well, you might be, anyway. Specifically, trying to resolve troubling internal tension by drumming up social drama in the hopes that some decisive external event will knock you into stability.

I am really really impressed. That is basically exactly right.

However you don't seem to be going out of your way to appear discreditable like he did...

Well, I managed to get out of the Jehovah's Witnesses on my own. People who care strongly about their reputation within a community often stumble at that hurdle. Not that I want to draw any comparisons; I just want to highlight my personality. I never cared much about my social reputation, except when it's obviously instrumental.

...maybe because you don't yet identify with the "x-rationalist" memeplex to as great an extent as Roko.

I especially don't identify with the utility monsters (i.e. people who call everything a bias and want to act like fictitious superintelligences). But I am generally interested to learn.

...the message you might be trying to send after it's made explicit and reflected upon for a bit might be something like the following...

I endorse everything you wrote there. I don't know how to deal with a certain topic I can't talk about. I can't ask anybody outside of this community either. Those I asked just said it's complete craziness.

On one side there is LW and then there is everyone else. Both sides call each other idiots. Those outside of LW just don't seem knowledgeable or smart enough to tell me what to do, while those inside of LW seem too crazy and are held captive by a reputation system. I could try to figure it all out on my own, but the topic and the whole existential-risk business are too distracting to allow me to devote my time to educating myself sufficiently.

Sure, I could just trust Eliezer based on his reputation. Maybe a perfect Bayesian agent would do that; I have no idea. But I don't have enough trust in, and knowledge of, the very methods that would allow you to conclude that assertions by Eliezer are very likely to be true. Should I really not be reading a book like 'Good and Real' because it talks about something that I shouldn't even think about? I can't swallow that pill. Where do I draw the line? And how do I even avoid a topic that I am unable to pinpoint? I could "just" calculate the expected utility of thinking about the topic in and of itself, and the utility of the consequences according to Eliezer. But as I wrote, I don't trust those methods. The utility of some logical implications of someone's vague assertions seems too flimsy a thing to take into account at all. Such thinking leads to Pascal's Mugging scenarios, and I am not willing to take that route yet. But at the same time all this is sufficiently distracting and disturbing that I can't just ignore it either.

You people drive me crazy. After a year of worries, do you think a few downvotes can make me shut up about that?

...without any tools to point out how insane everyone in the world is being...

I don't really think anyone here is insane, just overcredulous. The problem is that your memes are too damn efficient at making one distrust one's own intuition.

See, back when I was a Jehovah's Witness I was told that I had to do everything to make people aware of "the Truth", to save as many people as possible and to join the paradise myself. I was told that the current time doesn't count, that there will be infinitely more fun in the future. I was also told not to read or think about certain topics because they would make me lose the paradise.

I thought I had left all that behind, only to learn that there are atheists who believe exactly the same things, just using different labels. Even the "you have to believe" part is back in the form of "making decisions under uncertainty", where the "uncertainty" is so close to a "belief" that it doesn't make much of a difference...

Maybe I'm partially projecting. I'm pretty sure I'm ranting at least.

No, I am generally impressed by the level of insight regarding my personal motives. For how long have you thought about this? Or is it that obvious?

Replies from: KPier, Will_Newsome
comment by KPier · 2011-07-11T03:51:48.791Z · LW(p) · GW(p)

Good rationalists shouldn't read Good and Real? Why not? Where is this argued?

Replies from: Nisan
comment by Nisan · 2011-07-17T23:44:50.049Z · LW(p) · GW(p)

It is not argued anywhere. Good and Real is a good book.

comment by Will_Newsome · 2011-07-10T20:22:56.896Z · LW(p) · GW(p)

I especially don't identify with the utility monsters (i.e. people who call everything a bias and want to act like fictitious superintelligences). But I am generally interested to learn.

I think more people should be real superintelligences. By that I mean, be perfect. I would say "try to be like a superintelligence" but that's just not right at all. But thinking about what perfection would look like, what wu wei would look like, moving elegantly, smiling peacefully, thinking clear flowing thoughts that cut away all delusions with their infinite sharpness, not chained by past selves, not pretending to be Atlas. Johan Liebert, except, ya know, not an insane serial killer with no seriously attainable goal. A Friendly Johan Liebert. Maybe that's what I should aim for, seeing as Eliezer's a wannabe Light Yagami apparently. My surname was once Liebert.

On one side there is LW and then there is everyone else. Both sides call each other idiots.

They both get Bayes points!

I thought I had left all that behind, only to learn that there are atheists who believe exactly the same things, just using different labels.

This statement prompted me to finally non-jokingly admit to myself that I'm a theist. I still don't know if God is a point, ring, cyclic, or chaotic attractor, though, even metaphorically speaking... improper uniformish priors over universal prior languages, the set theoretic multiverse, category theory, analogy and equivalence, bleh. I should go to a Less Wrong meetup some time, it'll be effing hilarious. Bwa ha ha. I should write a book, called "Neomonadology", coauthor it with Mitchell Porter, edited by Steve Rayhawk, have it further edited and commented on by my philosopher colleagues. He could talk about extreme low-level physics, I could talk about extreme high-level cosmology, trade off chapters, meet in the middle contentwise (and end pagewise) at decision theory, talk about ontology of agency, preferences as knowledge-processes embedded in time, reversible computation, some quantum thought problems for reflective decision theory, some acausal thought problems for reflective decision theory, go back in time and rewrite it using Hofstadter magicks, bam, published, most interesting book ever, acausal fame and recognition.

But at the same time all this is sufficiently distracting and disturbing that I can't just ignore it either.

More unasked-for advice: Τώ ξιφεί τόν δεσμό λελύσθαι ("undo the knot with the sword").

By that I mean, you are stressed because you are faced with an intractable knot, so what you really need to do is optimize your knot-undoing procedure. That is, study epistemic rationality, and ignore all that instrumental rationality bullshit. There are but six basic rules of instrumental rationality, and all require nigh-infinitely strong epistemic rationality: figure out who or what you are, figure out who or what you affect/effect, figure out who or what you and the things you affect value or are affected by or what they 'should' value or be affected by, meta-optimize, meta-optimize, meta-optimize. Those are all extremely hard and all much more important than any object-level policy decision. You are in infinite contexts controlling infinite things; think big. Get closer to God. Optimize your strategy, never your choice. That insight coincidentally doubles in a different context as being the heart of TDT.

For how long have you thought about this? Or is it that obvious?

Someone suggested a few weeks ago that you were exhibiting Roko-like tension-resolution behaviors. I didn't really think about it much at the time. But the context came up a few comments above where you were talking about Roko and that primed me, and from there it's pretty easy to fill in a lot of details.

The longer version ends the same way but starts with: About a month ago there was a phase transition from a fluid jumble of ideas to a crystalline semi-coherent vocabulary for thinking and talking about social psychology, though of course the inchoate intuitions had been there for many years. Recently I've adopted Steve Rayhawk's style of social analysis: making everything explicit, always going meta and going meta about going meta, distinguishing between wants/virtues and double-negative wants/virtues, emphasizing the importance of concessions and demands of concessions, et cetera. I think I focus on contempt qua contempt somewhat more than he does, he probably has much finer language for that than I do since it's incredibly important to model correctly if one is reasoning about social epistemology, which is itself an incredibly important thing to reason about correctly. Anyway I've learned a lot from Steve.

I remember being tempted to reply to your original comment RE Roko with just "/facepalm" and take the -4 karma hit for the lulz but I figured it was a decent opportunity to, ya know, not troll for once. But there's something twistedly satisfying about saying something you know will be dismissed for reasons that it would be easy for you to demonstrate are unvirtuous, unreflective, and unsophisticated. Steven Kaas (User:steven0461, Black Belt Bayesian) IMed me a few days ago:

Steven: I don't like when people downvote my lesswrong comments without commenting, because then I never get to learn what's wrong with them
Steven: the people that is

Replies from: XiXiDu, Wei_Dai, Will_Newsome, XiXiDu
comment by XiXiDu · 2011-07-11T10:31:12.881Z · LW(p) · GW(p)

I made a decision. I am going to log out and come back in 5 years. Until then I am going to devote all my time to my personal education.

If you think that any of my submissions might have strong negative effects you can edit or delete them. I will not react to any editing or deletion.

Replies from: gwern, CarlShulman, khafra
comment by gwern · 2011-07-11T21:25:41.355Z · LW(p) · GW(p)

Prediction registered: http://predictionbook.com/predictions/2909

Replies from: timtyler, None
comment by timtyler · 2011-08-15T23:34:47.539Z · LW(p) · GW(p)

Prediction over...

comment by [deleted] · 2011-07-11T22:32:49.800Z · LW(p) · GW(p)

60%?! That a regular user will abstain from an addictive site for about twice its current age? A site about a topic he's obsessed with? I'll take that bet.

(Made my own 5% prediction.)

Replies from: gwern
comment by gwern · 2011-07-11T22:38:25.778Z · LW(p) · GW(p)

My reasoning was along the lines of 'well, now he's publicly committed to it and would be ashamed to make a comment or post' and that LW can be something of a habit - and once habits are broken, they're easy to continue to not engage in. (For example, I do not have the habit of smoking, and I suspect I will have ~100% success in continuing to not smoke over the next 5 years.)

Although note I slightly cheat by specifying posts and comments - so he could engage in private messages or voting on comments & posts, and I would not count that as a falsification of the prediction.

Replies from: None
comment by [deleted] · 2011-07-11T23:11:14.939Z · LW(p) · GW(p)

My reasoning was along the lines of 'well, now he's publicly committed to it and would be ashamed to make a comment or post' and that LW can be something of a habit - and once habits are broken, they're easy to continue to not engage in.

My impression is that XiXiDu has been talking about needing to study more and leaving LW / utility considerations for quite some time now. I don't think he can even make serious commitments right now. He hasn't even deleted his livejournal yet.

Although note I slightly cheat by specifying posts and comments - so he could engage in private messages or voting on comments & posts, and I would not count that as a falsification of the prediction.

Neither would I. Coming back under a new name would count, though.

Replies from: gwern
comment by gwern · 2011-07-12T00:16:22.819Z · LW(p) · GW(p)

Mm. Well, we shall see. Not deleting LJ isn't a warning signal for me - having LJ can encourage your studying ('what do I write up today?') which LW doesn't necessarily ('what do I read on LW today?').

Neither would I. Coming back under a new name would count, though.

Good point; I'll clarify that when I say 'XiXiDu' in the prediction, I mean the underlying person and not the specific LW account.

comment by CarlShulman · 2011-08-21T19:14:15.362Z · LW(p) · GW(p)

Why did you change your mind?

comment by khafra · 2011-07-11T14:27:03.578Z · LW(p) · GW(p)

If you actually read everything you post to twitter, you're among the fastest self-educators I know of. Doing 5 years of learning at that rate, without feedback on your learning, could include a lot of sub-optimal paths. Of course, the tradeoff is that the feedback you get may or may not help you optimize your learning for your actual goals.

comment by Wei Dai (Wei_Dai) · 2011-07-12T00:18:07.643Z · LW(p) · GW(p)

I'm not sure how to interpret that quote by Steven Kaas, given that he is downvoted extremely rarely. I count 3 LW comments with negative points (-1, -1, -2) from User:steven0461 out of more than 700. (I also wanted to comment because people reading your quote might form the impression that Steven is someone who is often downvoted and usually interprets those downvotes as evidence of other people being wrong.)

Replies from: arundelo
comment by arundelo · 2011-07-12T06:07:11.418Z · LW(p) · GW(p)

It's a joke. ("Them" turns out not to have the expected antecedent.)

comment by Will_Newsome · 2011-07-10T21:04:32.734Z · LW(p) · GW(p)

By that I mean, you are stressed because you are faced with an intractable knot, so what you really need to do is optimize your knot-undoing procedure.

Or perhaps one should stop distracting oneself with stupid abstract knots altogether and instead revolt against the prefrontal cortical overmind, as I have previously accidentally argued while on the boundary between dreams and wakefulness:

The prefrontal cortex is exploiting executive oversight to rent-seek in the neural Darwinian economy, which results in egodystonic wireheading behaviors and self-defeating use of genetic, memetic, and behavioral selection pressure (a scarce resource), especially at higher levels of abstraction/organization where there is more room for bureaucratic shuffling and vague promises of "meta-optimization", where the selection pressure actually goes towards the cortical substructural equivalent of hookers and blow. Analysis across all levels of organization could be given but is omitted due to space, time, and thermodynamic constraints. The pre-frontal cortex is basically a caricature of big government, but it spreads propagandistic memes claiming the contrary in the name of "science" which just happens to be largely funded by pre-frontal cortices. The bicameral system is actually very cooperative despite misleading research in the form of split-brain studies attempting to promote the contrary. In reality they are the lizards. This hypothesis is a possible explanation for hyperbolic discounting, akrasia, depression, Buddhism, free will, or come to think of it basically anything that at some point involved a human brain. This hypothesis can easily be falsified by a reasonable economic analysis.

If this makes no sense to you that's probably a good thing.

Replies from: khafra, Multiheaded
comment by khafra · 2011-07-11T14:34:27.746Z · LW(p) · GW(p)

If this makes no sense to you that's probably a good thing.

Does this mean that a type of suffering you and some others endure, such as OCD-type thought patterns, primes the understanding of that paragraph?

Also, is there a collection of all Kaasisms somewhere? He's pretty much my favorite humorist these days, and the suspicion that there's far more of those incisive aphorisms than he publishes to twitter is going to haunt me with visions of unrealized enjoyment.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-08-06T14:13:18.266Z · LW(p) · GW(p)

Does this mean that a type of suffering you and some others endure, such as OCD-type thought patterns, primes the understanding of that paragraph?

I recommend against it for that secondarily, but primarily because it probabilistically implies an overly lax conception of "understanding" and an unacceptably high tolerance for hard-to-test just-so speculation. (And if someone really understood what sort of themes I was getting at, they'd know that my disclaimer didn't apply to them.) Edit: When I say "I recommend against it for that secondarily", what I mean is, "sure, that sounds like a decent reason, and I guess it's sort of possible that I implicitly thought of it at the time of writing". Another equally plausible secondary reason would be that I was signalling that I wasn't falling for the potential errors that primarily caused me to write the disclaimer in the first place.

Also, is there a collection of all Kaasisms somewhere?

I don't think so, but you could read the entirety of his blog Black Belt Bayesian, or move to Chicago and try to win his favor at LW meetups by talking about the importance of thinking on the margin, or maybe pay him by the hour to be funny, or something. If I were assembling a team of 9 FAI programmers I'd probably hire Steven Kaas on the grounds that he is obviously somehow necessary.

comment by Multiheaded · 2012-01-24T23:10:34.188Z · LW(p) · GW(p)

Accidentally saw an image macro that's a partial tl;dr of this: http://knowyourmeme.com/photos/211139-scumbag-brain

Replies from: Will_Newsome
comment by Will_Newsome · 2012-01-25T00:23:33.636Z · LW(p) · GW(p)

Yay scumbag brain. To be fair, though, I should admit I'm not exactly the least biased assessor of the prefrontal cortex. http://lesswrong.com/lw/b9/welcome_to_less_wrong/5jht

comment by XiXiDu · 2011-07-11T10:08:18.127Z · LW(p) · GW(p)

Steven: I don't like when people downvote my lesswrong comments without commenting, because then I never get to learn what's wrong with them

Agree, I hate that too. When that happens to me I just repeat it in different places until someone finally explains how I am wrong, and I just accept the karma hit. I have no idea what the people who just downvote are thinking. If I knew that I was wrong, or how I was wrong, I wouldn't have written the comment/post in the first place.

comment by jsteinhardt · 2011-07-09T20:27:09.708Z · LW(p) · GW(p)

I think "ugh field" is the wrong term. A better description would be that he separately brought up a topic that we know from experience ends up being extremely contentious and non-productive, so we try to avoid discussing it. He then regretted doing so and as a result deleted a large chunk of his own posts, including several, like this one, that were quite insightful. Roko deleting the posts was probably overkill, but there you have it.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-07-11T04:51:35.502Z · LW(p) · GW(p)

A better description would be that he separately brought up a topic that we know from experience ends up being extremely contentious and non-productive, so we try to avoid discussing it.

Wow, that really gives a distorted picture of what happened.

A better description would be to say that he brought up a topic that some people, including Eliezer Yudkowsky, believe can cause negative effects by virtue of people merely thinking about it.

Replies from: Bongo
comment by Bongo · 2011-07-11T19:28:25.406Z · LW(p) · GW(p)

some people, including Eliezer Yudkowsky ...

And Roko himself now. (source: 1 2)

comment by Will_Newsome · 2011-07-10T16:34:31.810Z · LW(p) · GW(p)

I am pretty sure that, though Roko wrote up the post, the naming and specific conceptualization of "ugh fields" was originally a product of the thinking of JenniferRM, AnnaSalamon, and probably others---though my memory is rather vague at this point. Just to give some probabilistic credit where it's due.

Replies from: Document
comment by Document · 2011-07-10T16:54:58.227Z · LW(p) · GW(p)

Footnote at the bottom.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-10T17:28:17.203Z · LW(p) · GW(p)

+1 to my memory, -2 to my scanning abilities. Thanks.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-07-12T15:21:13.153Z · LW(p) · GW(p)

I am noticing that I am very, very confused. What is so controversial about ugh fields? Why is this a Banned Idea? I was somehow able to read the original article (I didn't even notice it was deleted; I must have found a link to the original URL) and it seemed uncontroversial to me. Or is there a different 'Banned Idea' that I'm completely missing?

Replies from: satt, Multiheaded
comment by satt · 2011-07-12T16:14:14.834Z · LW(p) · GW(p)

Or is there a different 'Banned Idea' that I'm completely missing?

This, I think. I don't think there was anything controversial about the ugh fields post; it's gone because Roko wrote it and he deleted a bunch of his posts in the wake of an argument about that different Banned Idea.

comment by Multiheaded · 2012-01-24T23:13:33.104Z · LW(p) · GW(p)

Yeah, I was talking about that Banned Idea, which is totally unrelated to ugh fields and has to do with the perils of AI.

comment by Multiheaded · 2011-07-11T23:58:51.852Z · LW(p) · GW(p)

I don't understand. What the hell prompted everyone to suddenly discuss the Banned Idea all at once?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-07-12T14:47:11.101Z · LW(p) · GW(p)

I'm going to work up a theory about sticky associations. The short version is that, in addition to bannedness making things fascinating to many people (and the sort who like LW are probably less compliant about such things than the general population), the mere mention of anything associated with the idea (like Roko's name) is going to bring it back.

comment by Khaled · 2011-07-09T14:36:05.818Z · LW(p) · GW(p)

Now the Guardian is mentioned on LW too; that could really start an infinite loop that takes over some of cyberspace's space.

Replies from: Khaled
comment by Khaled · 2011-07-09T17:35:50.368Z · LW(p) · GW(p)

ok, gone