Open thread, February 15-28, 2013

post by David_Gerard · 2013-02-15T23:17:42.391Z · LW · GW · Legacy · 338 comments

If it's worth saying, but not worth its own post, even in Discussion, it goes here.


comment by A1987dM (army1987) · 2013-02-16T13:35:30.897Z · LW(p) · GW(p)

One part of my brain keeps being annoyed by the huge banner saying “Less Wrong will be undergoing maintenance in 1 day, 9 hours” and wishes there were a button to hide it away; another part knows perfectly well that if I did that, I would definitely forget that.

Replies from: Elithrion
comment by Elithrion · 2013-02-17T03:05:17.677Z · LW(p) · GW(p)

Maybe it could reappear 30 minutes and 9 hours before the maintenance or something?

(This being part of "things that could be done with more web design resources".)

comment by drethelin · 2013-02-15T23:49:32.517Z · LW(p) · GW(p)

Does anyone else believe in deliberate alienation? Forums and organizations like LessWrong often strive to be, and claim to want to be, more (and by extension indefinitely) inclusive, but I think excluding people can be very useful in terms of social utilons and conversation, if not so good for $$$. There's a lot of value in having a pretty good picture of who you're talking to in a given social group, in terms of making effective use of jargon and references as well as appeals to emotion that actually appeal. I think thought should be given carefully to who exactly you let in or block out with any given form of inclusiveness or insensitivity.

On a more personal note, I think looking deliberately weird is a great way to make your day to day happenstance interactions more varied and interesting.

Replies from: RomeoStevens, WingedViper
comment by RomeoStevens · 2013-02-16T07:05:37.104Z · LW(p) · GW(p)

Yes, insufficient elitism is a failure mode of people who were excluded at some point in their life.

Replies from: Nornagest
comment by Nornagest · 2013-02-16T07:24:19.193Z · LW(p) · GW(p)

This seems like a good time to link the Five Geek Social Fallacies, one of my favorite subculture sociology articles.

(Insufficient elitism as a failure mode is #1.)

comment by WingedViper · 2013-02-16T00:06:07.041Z · LW(p) · GW(p)

Acting "weird" (well or just weird, depends) is something I have contemplated, too. For now I have to confess that I mostly try to stick to the norms (especially in public) except if I have a good reason to do otherwise. I think I might make this one of my tasks to just do some random "weird" acts of kindness.

About the alienation: I don't think we should do a lot about that. Enforcing certain rules and having our own memes and terms for stuff already has some strong effects there. I certainly felt a bit weird when I first came here, and that was even though I already had thoughts like "don't judge something by its cover" etc. in my mind (avoiding certain biases).

comment by Viliam_Bur · 2013-02-17T19:33:29.470Z · LW(p) · GW(p)

Did anyone try using the LessWrong web software for their own website? I would like to try it, but I looked at the source code and instructions, and it seemed rather difficult. Probably because I have no experience with Python, Ruby, or with configuring servers (a non-trivial dose of all three seems necessary).

If someone would help me install it, that would be awesome. A list of steps what exactly needs to be done, and what (and which version) needs to be installed on server would be also helpful.

The idea is: I would like to start a rationalist community in Slovakia, and a website would be helpful to attract new people. Although I will recommend that all readers visit LW, reading in a foreign language is a significant inconvenience; I expect the localized version to have at least 10 times more readers. I would also like to discuss local events and coordinate local meetups or other activities.

It seemed to me it would be best to reuse the LW software and just localize the texts; but now it seems the installation is more complicated than that of the discussion software I have used before (e.g. phpBB). But I really like the LW features (Markdown syntax, karma). I just have no experience with the technologies used, and don't want to spend my next five weekends learning them. So I hope someone who already has the skills will help me.

Replies from: JGWeissman
comment by JGWeissman · 2013-02-17T20:21:14.624Z · LW(p) · GW(p)

This sounds like a subreddit of LW would be a good solution. I don't know how much work that would be to set up, but you could ask Matt.

comment by rev · 2013-02-16T17:01:49.098Z · LW(p) · GW(p)

Are there any mechanisms on this site for dealing with mental health issues triggered by posts/topics (specifically, the forbidden Roko post)? I would really appreciate any interested posters getting in touch by PM for a talk. I don't really know who to turn to.

Sorry if this is an inappropriate place to post this, I'm not sure where else to air these concerns.

Replies from: shaih
comment by shaih · 2013-02-18T21:54:11.990Z · LW(p) · GW(p)

I was not here for the Roko post and I only have a general idea of what it's about. That being said, I experienced a bout of depression when applying rationality to the second law of thermodynamics.

Two things helped me. First, I realized that when dealing with a future that is either very unlikely or inconceivably far away, it is hard to diminish the emotional impact down to what is rationally warranted. Knowing that the emotions you feel completely outweigh their cause, you can hopefully see that acting in the present on those beliefs is irrational, and that setting them aside would actually help you be more rational. Also, realize that giving an improbable future more weight than it deserves is itself irrational. With this I realized that by trying to be rational I was being irrational, and found it easier to resolve this paradox than to simply get over the emotional weight it took to think about the future rationally in the first place.

Second, I meditated on the following quote:

People can stand what is true, for they are already enduring it.

-Gendlin

Nothing has changed after you read a post on this website besides what is in your brain. Becoming more rational should never make you lose; after all, Rationality is Systematized Winning. So if you find that a belief you hold is making you lose, it is clearly an irrational belief, or is being thought about in an irrational way.

Hope this helps

Replies from: David_Gerard
comment by David_Gerard · 2013-02-28T00:15:12.537Z · LW(p) · GW(p)

Treating it as you would existential depression may be useful, I would think. There are not really a lot of effective therapies for philosophy-induced existential depression - the only way to fix it seems to be to increase your baseline happiness, which is as easy to say as it is hard to do - but it occurred to me that a university student health therapist may see a lot of it and may at least be able to provide an experienced ear. I would be interested in any anecdotes on the subject (I'm assuming there's not a lot of data).

comment by moridinamael · 2013-02-16T00:26:31.594Z · LW(p) · GW(p)

So, there are hundreds of diseases, genetic and otherwise, with an incidence of less than 1%. That means that the odds of you having any one of them are pretty low, but the odds of you having at least one of them are pretty good. The consequence of this is that you're less likely to be correctly diagnosed if you have one of these rare conditions, which again, you very well might. If you have a rare disorder whose symptoms include frequent headaches and eczema, doctors are likely to treat the headaches and the eczema separately, because, hey, it's pretty unlikely that you have that one really rare condition!
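(A back-of-the-envelope illustration of that claim, assuming, as the replies below note one shouldn't, that the diseases are independent, each with a made-up illustrative 0.5% incidence:)

# Chance of having at least one of n independent rare diseases.
# p = 0.005 and the values of n are illustrative numbers, not data.
p = 0.005
for n in (50, 100, 200):
    print(n, round(1 - (1 - p) ** n, 3))
# -> 50 0.222, 100 0.394, 200 0.633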

For example, I was diagnosed by several doctors with "allergies to everything" when I actually have a relatively rare condition, histamine intolerance; my brother was diagnosed by different doctors as having Celiac disease, severe anxiety, or ulcers, when he actually just had lactose intolerance, which is pretty common, and I still cannot understand how they systematically got that one wrong. In both cases, these repeated misdiagnoses led to years of unnecessary, significant suffering. In my brother's case, at one point they actually prescribed him drugs with significant negative side effects which did nothing to alter his lactose intolerance.

I don't intend to come off as bitter, although I suppose I am. My intent is rather to discuss strategies for avoiding this type of systematic misdiagnosis of rare conditions. This line of thought seems like a strong argument in favor of the eventual role of Watson-like AIs as medical diagnostic assistants. A quick Googling indicates that the medical establishment is at least aware of the need to confront the under-diagnosis of rare diseases, but I'm not seeing a lot of concrete policies. For the present time, I don't know what strategy a non-medically-trained individual should pursue, especially if the "experts" are all telling you that your watery eyes mean you have hay fever when you really have some treatable congenital eye disease.

Replies from: RomeoStevens, ChristianKl, Elithrion, gwern, NancyLebovitz, None
comment by RomeoStevens · 2013-02-16T07:04:31.486Z · LW(p) · GW(p)

but the odds of you having at least one of them are pretty good.

The odds of your having any particular disease are not independent of your odds of having other diseases.

Replies from: Randy_M
comment by Randy_M · 2013-02-18T16:25:03.419Z · LW(p) · GW(p)

Also it depends on how much less than 1% the incidences are.

comment by ChristianKl · 2013-02-17T18:09:40.113Z · LW(p) · GW(p)

Self experimentation. If the doctor prescribes something for you, test numerically whether it helps you to improve.

If you suffer from allergies, it makes sense to check systematically, through self-experimentation, whether your condition improves when you remove various substances from your diet.

It doesn't hurt to use a symptom checker like http://symptoms.webmd.com/#./introView to get a list of more possible diagnoses.

comment by Elithrion · 2013-02-20T05:25:35.193Z · LW(p) · GW(p)

It is my impression that there is already software out there that has a doctor put in a bunch of symptoms, and then outputs an ordered list of potential diagnoses (including rare ones). The main problem being that adoption is slow. Unfortunately, after 10 minutes of searching, I'm completely failing to find a reference, so who knows how well it works (I know I read about it in The Economist, but that's it).

comment by gwern · 2013-02-17T00:40:04.754Z · LW(p) · GW(p)

So, there are hundreds of diseases, genetic and otherwise, with an incidence of less than 1%. That means that the odds of you having any one of them are pretty low, but the odds of you having at least one of them are pretty good.

Doesn't that depend on how correlated they are? Given how people who have one condition always seem to have other conditions, they seem likely to be correlated in general... (Which makes me wonder if there's some h factor: 'general health factor'.)
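(A toy calculation of this point with made-up numbers: hold the marginal incidence of each disease fixed, but let a hypothetical latent 'h factor' concentrate risk in half the population. The chance of having at least one disease then drops relative to the independent case:)

# Same marginal incidence p per disease, but frail people (half the
# population) carry risk 1.5*p while healthy people carry 0.5*p.
p, n = 0.005, 100
independent = 1 - (1 - p) ** n
correlated = 0.5 * (1 - (1 - 1.5 * p) ** n) + 0.5 * (1 - (1 - 0.5 * p) ** n)
print(round(independent, 3), round(correlated, 3))  # 0.394 vs 0.375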

comment by NancyLebovitz · 2013-02-16T02:15:20.544Z · LW(p) · GW(p)

There's also poking around online to find people with similar symptoms. Chris Kresser's paleo blog is pretty good, and recently had a post about histamine sensitivity. When I say I think his blog is good, I mean that he shows respect for human variation and for science-- I'm not sure that he's right about particular things.

Replies from: EvelynM
comment by EvelynM · 2013-02-16T21:55:19.961Z · LW(p) · GW(p)

Are you referring to curetogether.com, Nancy?

This graph illustrates clusters of related systems from that site: http://circos.ca/intro/genomic_data/

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-02-17T04:36:15.725Z · LW(p) · GW(p)

That's not the one I was thinking of, but it sounds promising.

comment by [deleted] · 2013-02-16T07:13:07.561Z · LW(p) · GW(p)

As somebody who's had to deal with doctors because of a plethora of diseases, I must say you're absolutely right. (I also shadowed a few and am considering applying to med school.)

I don't remember what this concept is called, but basically it posits that "one should look for horses, not zebras" and is part of medical education. That is, a doctor should assume that the symptoms a patient has are caused by a common disease rather than by a rare one. So most doctors, thanks to their confirmation bias, dismiss any symptoms that don't fit the common disease diagnosis. (A girl from my town went to her physician complaining of headaches. The good doctor said that she had nothing to worry about and recommended more rest and relaxation. It turned out that the girl had a brain tumor, which was discovered when she was autopsied. The good doctor is still practicing. Would this gross example of irrationality be tolerated in other professions? I think not.)

Most doctors are not so rational because of the way their education is structured: becoming a doctor isn't so much about reasoning as about memorizing heaps of information verbatim. It appears that they are prone to spewing curiosity-stoppers when confronted with diseases.

Replies from: Qiaochu_Yuan, DanielLC
comment by Qiaochu_Yuan · 2013-02-16T22:48:26.360Z · LW(p) · GW(p)

this gross example of irrationality

soren, please don't take this the wrong way, but based on what I've seen you post so far, you are not a strong enough rationalist to say things like this yet. You are using your existing knowledge of biases to justify your other biases, and this is dangerous.

Doctors have a limited amount of time and other resources. Any time and other resources they put into considering the possibility that a patient has a rare disease is time and other resources they can't put into treating their other patients with common diseases. In the absence of a certain threshold of evidence suggesting it's time to consider a rare disease (with a large space of possible rare diseases, most of the work you need to do goes into getting enough evidence to bring a given rare disease to your attention at all), it is absolutely completely rational to assume that patients have common diseases in general.

Replies from: None
comment by [deleted] · 2013-02-17T07:09:26.577Z · LW(p) · GW(p)

None taken, but how can you assess my level of rationality? When will I be enough of a rationalist to say things like that?

What bias did I use to justify another bias?

Again, testing a hypothesis when somebody's life is at stake is, I think, paramount to being a good doctor. What's the threshold of evidence a doctor should require?

comment by DanielLC · 2013-02-16T08:05:50.431Z · LW(p) · GW(p)

Would this gross example of irrationality be tolerated in other professions?

What gross example of irrationality? The vast majority of people with headaches don't have anything to worry about.

Replies from: NancyLebovitz, army1987, None
comment by NancyLebovitz · 2013-02-18T15:35:01.693Z · LW(p) · GW(p)

The question is whether "people with headaches" is the right reference class. If the headache is unusually severe or persistent, it makes sense to look deeper. Also, a doctor can ask for details about the headache before prescribing the expensive tests.

Replies from: DanielLC
comment by DanielLC · 2013-02-20T05:16:10.904Z · LW(p) · GW(p)

More precisely, the question is whether or not the right reference class is one in which cancer tests are worthwhile. The headaches would have to be very unusually severe to provide enough evidence.

Also, a doctor can ask for details about the headache before prescribing the expensive tests.

It was never mentioned whether or not the doctor asked for details. It's also possible that none of those reference classes are worth looking into, and she'd need headaches and something else.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-02-20T22:38:25.120Z · LW(p) · GW(p)

Cancer isn't the only solvable problem which could get ignored if headaches are handled as a minor problem which will go away on their own.

Replies from: DanielLC
comment by DanielLC · 2013-02-20T23:50:08.820Z · LW(p) · GW(p)

Yeah, but the other ones also get ignored if you assume it's cancer. To my knowledge, they have to be individually tested for. If none is worth testing for individually, it's best to ignore the headaches.

comment by A1987dM (army1987) · 2013-02-16T13:42:42.616Z · LW(p) · GW(p)

“The vast majority” != “All”. What's wrong with “you most likely have nothing to worry about, but I suggest doing this exam on the off-chance that you do”? You've got to multiply the probability by the disutility, and the result can be large enough to worry about even if the probability is small. (Yes, down that way Pascal's mugging lies, but still.)

EDIT: Okay, given the replies to this comment I'm going to Aumann my estimate of the cost of tests for rare diseases upwards by a couple of orders of magnitude. Retracted.

Replies from: DanielLC, ChristianKl, Kawoomba, beoShaffer
comment by DanielLC · 2013-02-16T20:13:14.609Z · LW(p) · GW(p)

I'm pretty sure that, in this case, the probability is smaller than the disutility is large. Getting tested for cancer doesn't come cheap.

comment by ChristianKl · 2013-02-17T18:13:19.842Z · LW(p) · GW(p)

Doctors get taught to practice evidence-based medicine. There's a lack of clinical trials showing that you can increase life span by routinely giving brain scans to people who suffer from headaches.

If I understand the argument right, then doctors are basically irrational because they favor empirical results from trials over trying to think through the problem on an intellectual level?

comment by Kawoomba · 2013-02-17T18:44:43.763Z · LW(p) · GW(p)

What's wrong with “you most likely have nothing to worry about, but I suggest doing this exam the off-chance that you do”?

MONNAY.

You've got to multiply the probability by the disutility, and the result can be large enough to worry about even if the probability is small.

The question is, whose utility?

comment by beoShaffer · 2013-02-17T01:03:34.411Z · LW(p) · GW(p)

There's also the problem of false positives. Treatments for rare diseases are often expensive and/or carry serious side effects.

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-17T01:09:30.975Z · LW(p) · GW(p)

I was thinking of diagnostics, not treatment, though from DanielLC's reply I guess I had underestimated the cost of that, too.

Replies from: ChristianKl
comment by ChristianKl · 2013-02-17T18:13:46.767Z · LW(p) · GW(p)

If you start diagnosing and find false positives, then you are usually going to treat them.

comment by [deleted] · 2013-02-16T11:10:21.432Z · LW(p) · GW(p)

The vast majority of people with headaches don't have anything to worry about.

If you had a headache, would you want your doctor to find out its cause or would you be satisfied if he told you that 'the vast majority of people with headaches don't have anything to worry about' and sent you home straight away?

This 'majority' argument is a fallacy. Therefore it's wrong, very wrong.

Edit #1: If you downvote me, I'd like to get some feedback as to why you're doing that.

Replies from: twanvl
comment by twanvl · 2013-02-16T13:47:28.863Z · LW(p) · GW(p)

The argument by majority fallacy means arguing that something is true because many people believe it. In the example of the headaches, the argument was that it was likely true because it is true for most people.

What you would want your doctor to do is take the action that maximizes your expected utility, E[U(action)]. Let's simplify a bit, and say that the action can be either "do nothing" or "find cause". Then the probabilities and utilities could be something like:

P(not sick) = 0.99  (most people have nothing to worry about)
U(not sick, do nothing) = 0
U(not sick, find cause) = -1  (unnecessary tests, drugs, worry)
U(sick, do nothing) = -10  (possibly more headaches, or something worse)
U(sick, find cause) = -1  (still need tests, drugs, etc.)

Then:

E[U(do nothing)] = P(not sick) * U(not sick, do nothing) + P(sick) * U(sick, do nothing) = -0.1
E[U(find cause)] = P(not sick) * U(not sick, find cause) + P(sick) * U(sick, find cause) = -1

So with these numbers I just made up, it is better for the doctor to tell you that there is likely nothing to worry about. And you can be pretty sure that in real life, people have done this calculation. Of course in real life there are many more possible actions, such as waiting for a week to see if the headaches go away, which they will likely do if there was nothing wrong. And that is what a doctor will actually tell you to do.
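(A minimal executable sketch of the same calculation, using twanvl's made-up numbers:)

# Expected utility of each action; the numbers are twanvl's illustrative ones.
p_sick = 0.01
utility = {
    ("not sick", "do nothing"): 0,
    ("not sick", "find cause"): -1,   # unnecessary tests, drugs, worry
    ("sick", "do nothing"): -10,      # possibly more headaches, or worse
    ("sick", "find cause"): -1,       # still need tests, drugs, etc.
}
for action in ("do nothing", "find cause"):
    eu = ((1 - p_sick) * utility[("not sick", action)]
          + p_sick * utility[("sick", action)])
    print(action, eu)
# -> do nothing -0.1, find cause -1.0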

Replies from: None
comment by [deleted] · 2013-02-16T14:17:02.726Z · LW(p) · GW(p)

The doctor believed that the girl didn't have any serious disease because most people who have headaches do not. How exactly is that not an appeal to majority?

If the doctor's hypothesis anticipates that the girl is healthy in spite of having headaches, then the easiest way to falsify it is to ask what sign or symptom would indicate a life-threatening disease. Would you want your doctor to wait a week or so to test his hypothesis, if you had headaches that could be caused by a brain tumor?

But then again, they don't teach falsification in med school.

Replies from: Larks
comment by Larks · 2013-02-16T16:23:04.483Z · LW(p) · GW(p)

It would be an appeal to majority if and only if he was appealing to the fact that most people thought she didn't have a serious disease. Instead, he was just appealing to base rates, which is totally reasonable.

comment by roystgnr · 2013-02-25T15:50:55.702Z · LW(p) · GW(p)

At this point, there should be little doubt that the best response to this "basilisk" would have been "That's stupid. Here are ten reasons why.", rather than (paraphrasing for humor) "That's getting erased from the internet. No, I haven't heard the phrase 'Streisand Effect' before; why do you ask?"

Replies from: gwern, Eugine_Nier, wedrifid, Richard_Kennaway
comment by gwern · 2013-02-25T16:53:06.013Z · LW(p) · GW(p)

The real irony is that Eliezer is now a fantastic example of the commitment/sunk cost effect which he has warned against repeatedly: having made an awful decision, and followed it up with further awful decisions over years (including at least 1 Discussion post deleted today and an expansion of topics banned on LW; incidentally, Eliezer, if you're reading this, please stop marking 'minor' edits on the wiki which are obviously not minor), he is trapped into continuing his disastrous course of conduct and escalating his interventions or justifications.

And now the basilisk and the censorship are an established part of the LW or MIRI histories which no critic could possibly miss, and which pattern-matches on religion. (Stross claims that it indicates that we're "Calvinist", which is pretty hilarious for anyone who hasn't drained the term of substantive meaning and turned it into a buzzword for people they don't like.) A pity.


While we're on the topic, I also blame Yvain to some extent; if he had taken my suggestion to add a basilisk question to the past LW survey, it would be much easier to go around to all the places discussing it and say something like 'this is solely Eliezer's problem; 98% disagree with censoring it'. But he didn't, and so just as I predicted, we have lost a powerful method of damage control.

It sucks being Cassandra.

Replies from: Mitchell_Porter, Eliezer_Yudkowsky, Richard_Kennaway, Pablo_Stafforini, Kevin, army1987, shminux, wedrifid
comment by Mitchell_Porter · 2013-02-26T02:51:25.814Z · LW(p) · GW(p)

It sucks being Cassandra.

Let me consult my own crystal ball... Yes, the mists of time are parting. I see... I see... I see, a few years from now, a TED panel discussion on "Applied Theology", chaired by Vernor Vinge, in which Eliezer, Roko, and Will Newsome discuss the pros and cons of life in an acausal multiverse of feuding superintelligences.

The spirits have spoken!

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-26T17:28:26.836Z · LW(p) · GW(p)

I'm looking forward to that.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-26T05:43:32.994Z · LW(p) · GW(p)

Gwern, I made a major Wiki edit followed by a minor edit. I wasn't aware that the latter would mask the former.

Replies from: gwern
comment by gwern · 2013-02-26T18:55:13.413Z · LW(p) · GW(p)

When you're looking at consolidated diffs, it does. Double-checking, your last edit was marked minor, so I guess there was nothing you could've done there.

(It is good wiki editing practice to always make the minor or uncontroversial edits first, so that way your later edits can be looked at without the additional clutter of the minor edits or they can be reverted with minimal collateral damage, but that's not especially relevant in this case.)

comment by Richard_Kennaway · 2013-02-27T11:33:45.514Z · LW(p) · GW(p)

And now the basilisk and the censorship are an established part of the LW or MIRI histories which no critic could possibly miss, and which pattern-matches on religion.

That's already true without the basilisk and censorship. The similarities between transhumanism and religion have been remarked on for about as long as transhumanism has been a thing.

Replies from: gwern
comment by gwern · 2013-02-27T15:43:47.703Z · LW(p) · GW(p)

An additional item to pattern-match onto religion, perhaps I should have said.

comment by Pablo (Pablo_Stafforini) · 2013-03-02T18:51:19.082Z · LW(p) · GW(p)

I also blame Yvain to some extent; if he had taken my suggestion to add a basilisk question to the past LW survey, it would be much easier to go around to all the places discussing it and say something like 'this is solely Eliezer's problem; 98% disagree with censoring it'. But he didn't.

Also, note that this wasn't an unsolicited suggestion: in the post to which gwern's comment was posted, Yvain actually said that he was "willing to include any question you want in the Super Extra Bonus Questions section [of the survey], as long as it is not offensive, super-long-and-involved, or really dumb." And those are Yvain's italics.

comment by Kevin · 2013-02-26T02:16:43.612Z · LW(p) · GW(p)

At this point it is this annoying, toxic meta discussion that is the problem.

comment by A1987dM (army1987) · 2013-02-26T14:08:33.262Z · LW(p) · GW(p)

I also blame Yvain to some extent; if he had taken my suggestion to add a basilisk question to the past LW survey,

Then EY would have freaked the hell out, and I don't know what the consequences of that would be but I don't think they would be good. Also, I think the basilisk question would have had lots of mutual information with the troll toll question anyway: [pollid:419]

EDIT: I guess I was wrong.

Replies from: gwern, wedrifid
comment by gwern · 2013-02-26T18:48:55.462Z · LW(p) · GW(p)

It's too late. This poll is in the wrong place (attracting only those interested in it), will get too few responses (certainly not >1000), and is now obviously in reaction to much more major coverage than before so the responses are contaminated.

The Moving Finger writes; and, having writ,
Moves on: nor all thy Piety nor Wit,
Shall lure it back to cancel half a Line,
Nor all thy Tears wash out a Word of it.

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-27T12:34:34.502Z · LW(p) · GW(p)

Actually, I was hoping to find some strong correlation between support for the troll toll and support for the basilisk censorship so that I could use the number of people who would have supported the censorship from the answers to the toll question in the survey. But it turns out that the fraction of censorship supporters is about 30% both among toll supporters and among toll opposers. (But the respondents to my poll are unlikely to be an unbiased sample of all LWers.)
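(For the curious, a sketch of how one could turn that observation into a number; the 2x2 counts below are hypothetical, chosen only to match the "about 30% in both groups" pattern described above:)

import math

# Hypothetical contingency counts: (toll stance, censorship stance).
counts = {("pro-toll", "pro-censor"): 30, ("pro-toll", "anti-censor"): 70,
          ("anti-toll", "pro-censor"): 15, ("anti-toll", "anti-censor"): 35}
n = sum(counts.values())

def marginal(label, axis):
    return sum(v for k, v in counts.items() if k[axis] == label) / n

# Mutual information in bits: sum over cells of p(x,y)*log2(p(x,y)/(p(x)*p(y))).
mi = sum((v / n) * math.log2((v / n) / (marginal(x, 0) * marginal(y, 1)))
         for (x, y), v in counts.items())
print(mi)  # ~0: identical within-group fractions carry no mutual information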

comment by wedrifid · 2013-02-26T15:02:55.332Z · LW(p) · GW(p)

Then EY would have freaked the hell out, and I don't know what the consequences of that would be but I don't think they would be good. Also, I think the basilisk question would have had lots of mutual information with the troll toll question anyway:

The 'troll toll' question misses most of the significant issue (as far as I'm concerned). I support the troll toll but have nothing but contempt for Eliezer's behavior, comments, reasoning and signalling while implementing the troll toll. And in my judgement most of the mutual information with the censorship or Roko's Basilisk questions comes from those issues (things like overconfidence, and various biases of the kind Gwern describes): the judgement of competence based on that behavior, rather than the technical change to the lesswrong software.

comment by shminux · 2013-02-25T17:21:38.699Z · LW(p) · GW(p)

Just to be charitable to Eliezer, let me remind you of this quote. For example, can you conceive of a reason (not necessarily the officially stated one) that the actual basilisk discussion ought to be suppressed, even at the cost of the damage done to LW credibility (such as it is) by an offsite discussion of such suppression?

Stross claims that it indicates that we're "Calvinist"

I thought this is more akin to Scientology, where any mention of Xenu to the uninitiated ought to be suppressed.

It sucks being Cassandra.

Sure does. Then again, it probably sucks more being Laocoön.

Replies from: Plasmon, gwern, Locaha
comment by Plasmon · 2013-02-25T18:12:12.976Z · LW(p) · GW(p)

can you conceive of a reason (not necessarily the officially stated one) that the actual basilisk discussion ought to be suppressed, even at the cost of the damage done to LW credibility (such as it is) by an offsite discussion of such suppression?

The basilisk is harmless. Eliezer knows this. The Streisand effect was the intended consequence of the censorship. The hope is that people who become aware of the basilisk will increase their priors for the existence of real information hazards, and will in the future be less likely to read anything marked as such. It's all a clever memetic inoculation program!

disclaimer : I don't actually believe this.

Replies from: Eugine_Nier, wedrifid, Eugine_Nier, Locaha
comment by Eugine_Nier · 2013-02-27T04:29:52.135Z · LW(p) · GW(p)

Another possibility: Eliezer doesn't object to the meme that anyone who doesn't donate to SIAI/MIRI will spend eternity in hell being spread in a deniable way.

Replies from: shminux, Viliam_Bur
comment by shminux · 2013-02-27T04:36:07.064Z · LW(p) · GW(p)

Why stop there? In fact, Roko was one of Eliezer's many sock puppets. It's your basic Ender's Game stuff.

Replies from: None
comment by [deleted] · 2013-03-05T12:40:08.524Z · LW(p) · GW(p)

We are actually all Eliezer's sock puppets. Most of us unfortunately are straw men.

Replies from: gwern
comment by gwern · 2013-03-05T16:20:54.096Z · LW(p) · GW(p)

We are the hollow men / we are the stuffed men / Leaning together / Headpiece filled with straw. Alas! / Our dried comments when / we discuss together / Are quiet and meaningless / As median-cited papers / or reports of supplements / on the Internet.

comment by Viliam_Bur · 2013-02-27T09:15:05.636Z · LW(p) · GW(p)

Another possibility: Eliezer does not want the meme to be associated with LW. Because, even if it was written by someone else, most people are predictably likely to read it and remember: "This is an idea I read on LW, so this must be what they believe."

comment by wedrifid · 2013-02-27T05:30:13.342Z · LW(p) · GW(p)

The hope is that people who become aware of the basilisk will increase their priors for the existence of real information hazards, and will in the future be less likely to read anything marked as such. It's all a clever memetic inoculation program!

It's certainly an inoculation for information hazards. Or at least against believing information hazard warnings.

comment by Eugine_Nier · 2013-02-26T06:51:09.606Z · LW(p) · GW(p)

Alternatively, the people dismissing the idea out of hand are not taking it seriously and thus not triggering the information hazard.

Also the censorship of the basilisk was by no means the most troubling part of the Roko incident, and as long as people focus on that they're not focusing on the more disturbing issues.

Edit: The most troubling part were some comments, also deleted, indicating just how fanatically loyal some of Eliezer's followers are.

comment by Locaha · 2013-02-25T18:21:05.308Z · LW(p) · GW(p)

disclaimer : I don't actually believe this.

Really? Or do you just want us to believe that you don't believe this???

comment by gwern · 2013-02-25T17:52:14.732Z · LW(p) · GW(p)

Just to be charitable to Eliezer, let me remind you of this quote. For example, can you conceive of a reason (not necessarily the officially stated one) that the actual basilisk discussion ought to be suppressed, even at the cost of the damage done to LW credibility (such as it is) by an offsite discussion of such suppression?

No. I have watched Eliezer make this unforced error now for years, sliding into an obvious and common failure mode, with mounting evidence that censorship is, was, and will be a bad idea, and I have still not seen any remotely plausible explanation for why it's worthwhile.

Just to take this most recent Stross post: he has similar traffic to me as far as I can tell, which means that since I get ~4000 unique visitors a day, he gets as many and often many more. A good chunk will be to his latest blog post, and it will go on being visited for years on end. If it hits the front page of Hacker News, as more than a few of his blog posts do, it will quickly spike to 20k+ uniques in just a day or two. (In this case, it didn't.) So we are talking, over the next year, easily 100,000 people being exposed to this presentation of the basilisk (that just requires an average of 274 uniques a day). 100k people being exposed to something which will strike them as patent nonsense, from a trusted source like Stross.

So maybe there used to be some sort of justification behind the sunk costs and obstinacy and courting of the Streisand effect. Does this justification also justify trashing LW/MIRI's reputation among literally hundreds of thousands of people?

You may have a witty quote, which is swell, but I'm afraid it doesn't help me see what justification there could be.

Sure does. Then again, it probably sucks more being Laocoön.

Laocoön died quickly and relatively cleanly by serpent; Cassandra saw all her predictions (not just one) come true, was raped, abducted, kept as a concubine, and then murdered.

Replies from: Kevin, Richard_Kennaway
comment by Kevin · 2013-02-26T02:20:35.131Z · LW(p) · GW(p)

Can you please stop with this meta discussion?

I banned the last discussion post on the Basilisk, not Eliezer. I'll let this one stand for now as you've put some effort into this post. However, I believe that these meta discussions are as annoyingly toxic as anything at all on Less Wrong. You are not doing yourself or anyone else any favors by continuing to ride this.

The reputational damage to Less Wrong has been done. Is there really anything to be gained by flipping moderation policy?

At this point, let's not taunt people with the right kind of mental pathology to be made very uncomfortable by the basilisk or meta-set of basilisks.

Replies from: fubarobfusco, gwern, wedrifid, drethelin, drethelin, J_Taylor
comment by fubarobfusco · 2013-02-26T04:32:31.204Z · LW(p) · GW(p)

The reputational damage to Less Wrong has been done. Is there really anything to be gained by flipping moderation policy?

There's now the impression that a community of aspiring rationalists — or, at least, its de-facto leaders — are experiencing an ongoing lack of clue on the subject of the efficacy of censorship on online PR.

The "reputational damage" is not just "Eliezer or LW have this kooky idea."

It is "... and they think there is something to be gained by shutting down discussion of this kooky idea, when others' experience (Streisand Effect, DeCSS, etc.) and their own (this very thread) are strong evidence to the contrary."

It is the apparent failure to update — or to engage with widely-recognized reality at all — that is the larger reputational damage.

It is, for that matter, the apparent failure to realize that saying "Don't talk about this because it is bad PR" is itself horrible PR.

The idea that LW or its leadership dedicate nontrivial attention to encircling and defending against this kooky idea makes it appear that the idea is central to LW. Some folks on the thread on Stross's forum seem to think that Roko discovered the hidden secret motivating MIRI! That's bogus ... but there's a whole trope of "cults" suppressing knowledge of their secret teachings; someone who's pattern-matched LW or transhumanism onto "cult" will predictably jump right there.


At this point, let's not taunt people with the right kind of mental pathology to be made very uncomfortable by the basilisk or meta-set of basilisks.

My own take on the whole subject is that basilisk-fear is a humongous case of privileging the hypothesis coupled to an anxiety loop. But ... I'm rather prone to anxiety loops myself, albeit over matters a little more personal and less abstract. The reason not to poke people with Roko's basilisk is that doing so is a form of aggression — it makes (some) people unhappy.

But as far as I can tell, it's no worse in that regard than a typical Iain M. Banks novel, or some of Stross's own ideas for that matter ... which are considered entertainment. Which means ... humans eat "basilisks" like this for dessert. In one of Banks's novels, multiple galactic civilizations invent uploading, and use it to implement their religions' visions of Hell, to punish the dead and provide an incentive to the living to conform to moral standards.

(But then, I read Stross and Banks. I don't watch gore-filled horror movies, though, and I would consider someone forcing me to watch such a movie to be committing aggression against me. So I empathize with those who are actually distressed by the basilisk idea, or the "basilisk" idea for that matter.)


I have to say, I find myself feeling worse for Eliezer than for anyone else in this whole affair. Whatever else may be going on here, having one's work cruelly mischaracterized and held up to ridicule is a whole bunch of no fun.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-26T05:47:53.393Z · LW(p) · GW(p)

having one's work cruelly mischaracterized and held up to ridicule is a whole bunch of no fun.

Thank you for appreciating this. I expected it before I got started on my life, I'm already accustomed to it by now, I'm sure it doesn't compare to the pain of starving to death. Since I'm not in any real trouble, I don't intend to angst about it.

Replies from: fubarobfusco
comment by fubarobfusco · 2013-02-27T09:41:56.858Z · LW(p) · GW(p)

Glad to hear it.

comment by gwern · 2013-02-26T19:01:23.858Z · LW(p) · GW(p)

The reputational damage to Less Wrong has been done. Is there really anything to be gained by flipping moderation policy?

The basilisk is now being linked on Marginal Revolution. Estimated site traffic: >3x gwern.net; per above, that is >16k uniques daily to the site.

What site will be next?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-02-27T09:05:16.594Z · LW(p) · GW(p)

What site will be next?

More importantly, will endless meta-discussions like this make another site more likely or less likely to link it?

Replies from: gwern
comment by gwern · 2013-02-27T15:45:09.366Z · LW(p) · GW(p)

Will an abandonment of a disastrous policy be more or less disastrous? Well, when I put it that way, it suddenly seems obvious.

"The world around us redounds with opportunities, explodes with opportunities, which nearly all folk ignore because it would require them to violate a habit of thought; there are a thousand Hufflepuff bones waiting to be sharpened into spears ... I cannot quite comprehend what goes through people's minds when they repeat the same failed strategy over and over, but apparently it is an astonishingly rare realization that you can try something else."

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-02-27T18:11:11.238Z · LW(p) · GW(p)

Less disastrous as in "people spending less time criticizing Eliezer's moderating skills"? Probably yes.

Less disastrous as in "people spending less time on LW discussing the 'basilisk'"? Probably no. I would expect at least dozen articles about this topic within the first year if the ban would be completely removed.

Less disastrous as in "people less likely to create more 'basilisk'-style comments"? Probably no. Seems that the policy prevented this successfully.

comment by wedrifid · 2013-02-26T14:42:48.571Z · LW(p) · GW(p)

The reputational damage to Less Wrong has been done. Is there really anything to be gained by flipping moderation policy?

Answering the rhetorical question because the obvious answer is not what you imply [EDIT: I notice that J Taylor has made a far superior reply already]: Yes, it limits the ongoing reputational damage.

I'm not arguing with the moderation policy. But I will argue with bad arguments. Continue to implement the policy. You have the authority to do so, Eliezer has the power on this particular website to grant that authority, most people don't care enough to argue against that behavior (I certainly don't) and you can always delete the objections with only minimal consequences. But once you choose to make arguments that appeal to reason rather than the preferences of the person with legal power then you can be wrong.

At this point, let's not taunt people with the right kind of mental pathology to be made very uncomfortable by the basilisk or meta-set of basilisks.

I've had people come to me who are traumatised by basilisk considerations. From what I can tell almost all of the trauma is attributable to Eliezer's behavior. The descriptions of the experience give clear indications (ie. direct self reports that are coherent) that a significant reason that they "take the basilisk seriously" is because Eliezer considers it a sufficiently big deal that he takes such drastic and emotional action. Heck, without Eliezer's response it wouldn't even have earned that title. It'd be a trivial backwater game theory question to which there are multiple practical answers.

So please, just go back to deleting basilisk talk. That would be way less harmful than trying to persuade people with reason.

Replies from: David_Gerard
comment by David_Gerard · 2013-02-27T14:06:09.747Z · LW(p) · GW(p)

I've had people come to me who are traumatised by basilisk considerations. From what I can tell almost all of the trauma is attributable to Eliezer's behavior. The descriptions of the experience give clear indications (ie. direct self reports that are coherent) that a significant reason that they "take the basilisk seriously" is because Eliezer considers it a sufficiently big deal that he takes such drastic and emotional action. Heck, without Eliezer's response it wouldn't even have earned that title. It'd be a trivial backwater game theory question to which there are multiple practical answers.

I get the people who've been frightened by it because EY seems to take it seriously too. (Dmytry also gets them, which is part of why he's so perpetually pissed off at LW. He does his best to help, as a decent person would.) More generally, people distressed by it feel they can't talk about it on LW, so they come to RW contributors - addressing this was why it was made a separate article. (I have no idea why Warren Ellis then Charlie Stross happened to latch onto it - I wish they hadn't, because it was totally not ready, so I had to spend the past few days desperately fixing it up, and it's still terrible.) EY not in fact thinking it's feasible or important is a point I need to address in the last section of the RW article, to calm this concern.

Replies from: jbeshir
comment by jbeshir · 2013-02-27T19:06:11.382Z · LW(p) · GW(p)

It would be nice if you'd also address the extent to which it misrepresents other LessWrong contributors as thinking it is feasible or important (sometimes to the point of mocking them based on its own misrepresentation). People around LessWrong engage in hypothetical what-if discussions a lot; it doesn't mean that they're seriously concerned.

Lines like "Though it must be noted that LessWrong does not believe in or advocate the basilisk ... just in almost all of the pieces that add up to it." are also pretty terrible given we know only a fairly small percentage of "LessWrong" as a whole even consider unfriendly AI to be the biggest current existential risk. Really, this kind of misrepresentation of alleged, dubiously actually held extreme views as the perspective of the entire community is the bigger problem with both the LessWrong article and this one.

Replies from: David_Gerard
comment by David_Gerard · 2013-03-01T17:25:28.616Z · LW(p) · GW(p)

The article is still terrible, but it's better than it was when Stross linked it. The greatest difficulty is describing the thing and the fuss accurately while explaining it to normal intelligent people without them pattern matching it to "serve the AI God or go to Hell". This is proving the very hardest part. (Let's assume for a moment 0% of them will sit down with 500K words of sequences.) I'm trying to leave it for a bit, having other things to do.

comment by drethelin · 2013-02-26T08:48:14.228Z · LW(p) · GW(p)

At this point, let's not taunt people with the right kind of mental pathology to be made very uncomfortable by the basilisk or meta-set of basilisks.

As far as I can tell the entire POINT of LW is to talk about various mental pathologies and how to avoid them or understand them even if they make you very uncomfortable to deal with or acknowledge. The reasons behind talking about the basilisk or basilisks in general (apart from metashit about censorship) are just like the reasons for talking about trolley problems even if they make people angry or unhappy. What do you do when your moral intuitions seem to break down? What do you do about compartmentalization or the lack of it? Do you bite bullets? Maybe the mother should be allowed to buy acid.

To get back to meta shit: If people are complaining about the censorship and you are sick of the complaints, the simplest way to stop them is to stop the censorship. If someone tells you there's a problem, the response of "Quit your bitching, it's annoying" is rarely appropriate or even reasonable. Being annoying is the point of even lameass activism like this. I personally think any discussion of the actual basilisk has reached every conclusion it's ever really going to reach by now, pretty reasonably demonstrated by looking at the uncensored thread, and the only thing even keeping it in anyone's consciousness is the continued ballyhooing about memetic hazards.

Replies from: Kevin
comment by Kevin · 2013-02-26T10:05:41.911Z · LW(p) · GW(p)

yawn

Replies from: wedrifid
comment by wedrifid · 2013-02-26T14:51:04.261Z · LW(p) · GW(p)

yawn

I am appalled that you believe this response was remotely appropriate or superior to saying nothing at all. How is it not obvious that once you have publicly put on your hat as an authority you take a modicum of care to make sure you don't behave like a contemptuous ass?

comment by drethelin · 2013-02-26T08:34:59.166Z · LW(p) · GW(p)

The meta discussions will continue until morale improves

comment by J_Taylor · 2013-02-26T04:14:11.718Z · LW(p) · GW(p)

The reputational damage to Less Wrong has been done. Is there really anything to be gained by flipping moderation policy?

I hate to use silly symmetrical rhetoric, however:

The secret has been leaked and the reputational damage is ongoing. Is there really anything to be gained by continuing the current moderation policy?

comment by Richard_Kennaway · 2013-02-27T13:39:37.510Z · LW(p) · GW(p)

from a trusted source like Stross

I wouldn't call him that, and not because I have any doubt about his trustworthiness. It's the other word, "source", that I wouldn't apply. He's a professional SF author. His business is to entertain with ideas, and his blog is part of that. I wouldn't go there in search of serious analysis of anything, any more than I would look for that on RationalWiki. Both the article in question and the comments on it are pretty much on a par with RationalWiki's approach. In fact (ungrounded speculation alert), I have to wonder how many of the commenters there are RW regulars, there to fan the flame.

Replies from: gwern, metatroll
comment by gwern · 2013-02-27T15:53:25.286Z · LW(p) · GW(p)

Stross is widely read, cited, and quoted approvingly, on his blog and off (eg. Hacker News). He is a trusted source for many geeks.

comment by metatroll · 2013-02-28T12:25:18.603Z · LW(p) · GW(p)

RationalWiki's new coat-of-arms is a troll riding a basilisk.

comment by Locaha · 2013-02-25T17:37:52.372Z · LW(p) · GW(p)

for example, can you conceive of a reason (not necessarily the officially stated one) that the actual basilisk discussion ought to be suppressed, even at the cost of the damage done to LW credibility (such as it is) by an offsite discussion of such suppression?

What if he CAN'T conceive of a reason? Can you conceive of the possibility that it might be for some reason other than Gwern being less intelligent than EY? For example, Gwern might be more intelligent than EY.

comment by wedrifid · 2013-02-26T15:06:51.832Z · LW(p) · GW(p)

Discussion post deleted today and an expansion of topics banned on LW

Did you happen to catch the deleted post? Was there any interesting reasoning contained therein? If so, who was the author and did they keep a backup that they would be willing to email me? (If they did not keep a backup... that was overwhelmingly shortsighted unless they are completely unfamiliar with the social context!)

Replies from: Larks
comment by Larks · 2013-02-26T16:04:11.231Z · LW(p) · GW(p)

I saw it. It contained just a link and a line asking for "thoughts", or words to that effect. Maybe there was a quote - certainly nothing new or original.

Replies from: wedrifid
comment by wedrifid · 2013-02-26T16:41:47.473Z · LW(p) · GW(p)

I saw it. It contained just a link and a line asking for "thoughts", or words to that effect. Maybe there was a quote - certainly nothing new or original.

Thanks. I've been sent links to all the recently deleted content and can confirm that nothing groundbreaking was lost.

comment by Eugine_Nier · 2013-03-02T06:30:17.407Z · LW(p) · GW(p)

No, I haven't heard the phrase 'Streisand Effect' before; why do you ask?

I'm not convinced the Streisand Effect is actually real. It seems like an instance of survival bias. After all, you shouldn't expect to hear about the cases when information was successfully suppressed.

Replies from: wedrifid
comment by wedrifid · 2013-03-02T11:33:51.413Z · LW(p) · GW(p)

I'm not convinced the Streisand Effect is actually real.

This is a bizarre position to take. The effect does not constitute a claim that all else being equal attempts to suppress information are negatively successful. Instead it describes those cases where information is published more widely due to the suppression attempt. This clearly happens sometimes. The Wikipedia article gives plenty of unambiguous examples.

In April 2007, an attempt at blocking an Advanced Access Content System (AACS) key from being disseminated on Digg caused an uproar when cease-and-desist letters demanded the code be removed from several high-profile websites. This led to the key's proliferation across other sites and chat rooms in various formats, with one commentator describing it as having become "the most famous number on the internet". Within a month, the key had been reprinted on over 280,000 pages, printed on T-shirts and tattoos, and had appeared on YouTube in a song played over 45,000 times.

It would be absurd to believe that the number in question would have been made into T-shirts, tattoos and a popular YouTube song if no attempt had been made to suppress it. That doesn't mean (or require) that in other cases (and particularly in cases where the technological and social environment was completely different) powerful figures are never successful in suppressing information.

comment by wedrifid · 2013-02-26T15:10:50.576Z · LW(p) · GW(p)

At this point, there should be little doubt that the best response to this "basilisk" would have been "That's stupid. Here are ten reasons why.", rather than (paraphrasing for humor) "That's getting erased from the internet. No, I haven't heard the phrase 'Streisand Effect' before; why do you ask?"

Heck, there is little doubt that even your paraphrased humorous alternative would have been much better than what actually happened. It's not often that satirical caricatures are actually better than what they are based on!

comment by Richard_Kennaway · 2013-02-27T12:21:50.312Z · LW(p) · GW(p)

At this point, there should be little doubt that the best response to this "basilisk" would have been "That's stupid. Here are ten reasons why.

That would only be the best response if the basilisk were indeed stupid, and there were indeed ten good reasons why. Presumably you do think it is stupid, and you have a list of reasons why; but you are not in charge. (I hope it is obvious why saying it is stupid if you believed it was not, and writing ten bad arguments to that effect, would be monumentally stupid.)

But Eliezer's reason for excluding such talk is precisely that (in his view, and he is in charge) it is not stupid, but a real hazard, the gravity of which goes way beyond the supposed effect on the reputation of LessWrong. I say "supposed" because as far as I can see, it's the clowns at RationalWiki who are trying to play this up for all it's worth. Reminds me of The Register yapping at the heels of Steve Jobs. The recent links from Stross and Marginal Revolution have been via RW. Did they just happen to take notice at the same time, or is RW evangelising this?

The current deletion policy calls such things "toxic mindwaste", which seems fair enough to me (and a concept that would be worth a Sequence-type posting of its own). I don't doubt that there are many other basilisks, but none of them have appeared on LW. Ce qu'on ne voit pas ("that which is not seen"), indeed.

Replies from: David_Gerard
comment by David_Gerard · 2013-02-27T14:42:30.971Z · LW(p) · GW(p)

RW didn't push this at all. I have no idea why Warren Ellis latched onto it, though I expect that's where Charlie Stross picked it up from.

The reason the RW article exists is because we're getting the emails from your distressed children.

Replies from: Richard_Kennaway, ArisKatsaris, None
comment by Richard_Kennaway · 2013-02-27T15:55:05.772Z · LW(p) · GW(p)

The reason the RW article exists is because we're getting the emails from your distressed children.

I can't parse this. Who are "we", "you", and the "distressed children"? I don't think I have any, even metaphorically.

Replies from: gwern
comment by gwern · 2013-02-27T17:35:27.738Z · LW(p) · GW(p)

It's not that hard. DG is using 'the Rational Wiki community' for 'we', 'your' refers to 'the LessWrong community', and 'distressed children' presumably refers to Dmytry, XiXi and by now, probably some others.

Replies from: David_Gerard
comment by David_Gerard · 2013-02-27T17:50:13.096Z · LW(p) · GW(p)

No, "distressed children" refers to people upset by the basilisk who feel they can't talk about it on LW so they email us, presumably as the only people on the Internet bothering to talk about LW. This was somewhat surprising.

Replies from: Richard_Kennaway, Richard_Kennaway, None
comment by Richard_Kennaway · 2013-02-28T10:54:43.662Z · LW(p) · GW(p)

[referring to RationalWiki] as the only people on the Internet bothering to talk about LW.

Well then, that's the reputation problem solved. If it's only RationalWiki...

comment by Richard_Kennaway · 2013-02-28T10:51:08.771Z · LW(p) · GW(p)

What do you tell them?

Replies from: wedrifid
comment by wedrifid · 2013-02-28T10:53:43.175Z · LW(p) · GW(p)

What do you tell them?

I presume it would include things that David Gerard could not repeat here. After all that's why the folk in question contacted people from the Rational Wiki community in the first place!

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-02-28T11:13:46.974Z · LW(p) · GW(p)

Actually, I may have just answered my own question by reading the RW page on the b*s*l*sk that three prominent blogs and a discussion forum recently all linked to. Does reading that calm them down?

Replies from: David_Gerard
comment by David_Gerard · 2013-02-28T15:57:57.097Z · LW(p) · GW(p)

The "So you're worrying about the Basilisk" bit is a distillation of stuff that's helped people and is specifically for that purpose. (e.g., the "Commit not to accept acausal blackmail" section strikes me as too in-universe, but XiXiDu says that idea's actually been helpful to people who've come to him.) It could probably do with more. The probability discussion in the section above arguably belongs in it, but it's still way too long.

comment by [deleted] · 2013-02-28T00:10:36.049Z · LW(p) · GW(p)

so they email us, presumably as the only people on the Internet bothering to talk about LW.

Or more likely, because RW has been the only place you could actually learn about it in the first place (for the last two years at least). So, I really don't think you have any reason to complain about getting those emails.

Replies from: fubarobfusco
comment by fubarobfusco · 2013-03-01T05:43:28.627Z · LW(p) · GW(p)

That's not strictly true; for instance, it may be discussed offline!

Replies from: None
comment by [deleted] · 2013-03-01T19:31:46.254Z · LW(p) · GW(p)

Haha, what is this offline you speak of? You're correct that I didn't think of that. However, wouldn't they then already have someone to talk to about this, and not need to email people on the internet?

Either way, my point still stands. If you co-author an article on any topic X and let that article be linked to a way of contacting you (by either email or PM), then you cannot complain about people contacting you regarding topic X.

comment by ArisKatsaris · 2013-03-01T11:51:17.989Z · LW(p) · GW(p)

The reason the RW article exists is because we're getting the emails from your distressed children.

Isn't it on RW that these people read the basilisk in the first place?

Replies from: David_Gerard
comment by David_Gerard · 2013-03-01T23:15:54.904Z · LW(p) · GW(p)

(answered at greater length elsewhere, but) This is isomorphic to saying "describing what is morally reprehensible about the God of the Old Testament causes severe distress to some theists, so atheists shouldn't talk about it either". Sunlight disinfects.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-03-02T00:31:51.476Z · LW(p) · GW(p)

I'd discuss the moral reprehensibility of God (in both the New and the Old Testament) if and only if I saw the estimated benefit in attempting to deconvert those people as outweighing the disutility of their potential distress.

If you see such benefits in telling the people of the basilisk, and are weighing them against the disutility of the potential distress caused by such information, and the benefits indeed outweigh the hazard, then fine.

Replies from: David_Gerard
comment by David_Gerard · 2013-03-02T09:56:35.701Z · LW(p) · GW(p)

Your essential theory seems to be that if someone shines a light on a pothole, then it's their fault if people fall into it, not that of whoever dug it.

The strategy of attempting to keep it a secret has failed in every way it could possibly fail. It may be time to say "oops" and do something different.

Replies from: wedrifid, ArisKatsaris
comment by wedrifid · 2013-03-02T11:36:12.663Z · LW(p) · GW(p)

Your essential theory seems to be that if someone shines a light on a pothole, then it's their fault if people fall into it, not that of whoever dug it.

Or, for that matter, the fault of whoever forbade the construction of safety rails around it.

comment by ArisKatsaris · 2013-03-02T14:27:28.379Z · LW(p) · GW(p)

To me it's unclear whether you believe:
a) that it's bad to try to keep the basilisk hidden because such an attempt was doomed to failure,
b) that it's bad to try to keep it hidden because it's always bad to keep any believed-to-be infohazard hidden, regardless of whether you'll succeed or fail, or
c) that it's bad to try to keep this basilisk hidden, because it's not a real infohazard, but it would be good to keep real infohazards hidden, which actually harm the people you share them with.

Can you clarify to me which of (a), (b) or (c) you believe?

comment by [deleted] · 2013-02-27T14:52:44.155Z · LW(p) · GW(p)

RW didn't push this at all.

Yes, RW was just the forum that willingly opened their doors to various anti-LW malcontents, who are themselves pushing this for all it's worth.

Replies from: fubarobfusco, Peterdjones
comment by fubarobfusco · 2013-02-27T23:25:08.924Z · LW(p) · GW(p)

anti-LW malcontents

That's overly specific. Mostly they're folks who like to snicker at weird ideas — most of which I snicker at, too.

Replies from: None
comment by [deleted] · 2013-02-28T03:40:18.970Z · LW(p) · GW(p)

I didn't claim my list was exhaustive. In particular, I was thinking of Dmytry and XiXiDu, both of whom are never far away from any discussion of LW and EY that takes place off-site. The better part of the comments on the RW talk pages and Charles Stross' blog concerning the basilisk is copied and pasted from their old remarks on the subject.

Replies from: fubarobfusco
comment by fubarobfusco · 2013-02-28T05:47:18.885Z · LW(p) · GW(p)

OK. What I heard in your earlier comment was that a wiki community was being held at fault for "opening their doors" to someone who criticized LW. Wikis are kind of known for opening their doors, and the skeptic community for being receptive to the literary genre of debunking.

comment by Peterdjones · 2013-03-03T14:20:50.607Z · LW(p) · GW(p)

That was a rather mind-killed comment. Wikis are supposed to have open doors. RW is supposed to deal with pseudoscience, craziness and the pitfalls of religions. The Bsl*sk is easily all three.

Replies from: None
comment by [deleted] · 2013-03-03T16:58:39.423Z · LW(p) · GW(p)

That was a rather mind-killed comment.

In what way? How is merely stating it to be "mind-killed" supposed to change my mind?

Wikis are supposed to have open doors.

You're misinformed.

RW is supposed to deal with pseudoscience, craziness and the pitfalls of religions. The Bsl*sk is easily all three.

My comment wasn't about whether or not RW should cover the Basilisk.

Replies from: Peterdjones
comment by Peterdjones · 2013-03-03T18:22:23.179Z · LW(p) · GW(p)

How is merely stating it to be "mind-killed" supposed to change my mind?

You might care about that sort of thing, you might not. I don't exactly have a complete knowledge of your psychology.

You're misinformed.

That's irrelevant. Wikis open their doors to all contributors, and then eject those that don't behave. That's still an open-door policy as opposed to invitation-only.

My comment wasn't about whether or not RW should cover the Basilisk.

If it should cover the basilisk, why shouldn't it have contributions from the "malcontents"?

Replies from: None
comment by [deleted] · 2013-03-03T23:15:23.730Z · LW(p) · GW(p)

If it should cover the basilisk, why shouldn't it have contributions from the "malcontents"?

I didn't make any such statement. Recall, DG was wondering where all this drama about the basilisk came from -- I advised him that it came from two particular users, who are well-known for bringing up this drama in many other forums and have more-or-less dominated the RW talk pages on the subject.

comment by fubarobfusco · 2013-02-23T22:00:57.264Z · LW(p) · GW(p)

White coat hypertension is a phenomenon in which patients exhibit elevated blood pressure in a clinical setting (doctor's office, hospital, etc.) but not in other settings, apparently due to anxiety caused by being in the clinical setting.

Stereotype threat is the experience of anxiety or concern in a situation where a person has the potential to confirm a negative stereotype about their social group. Since most people have at least one social identity which is negatively stereotyped, most people are vulnerable to stereotype threat if they encounter a situation in which the stereotype is relevant. Although stereotype threat is usually discussed in the context of the academic performance of stereotyped racial minorities and women, stereotype threat can negatively affect the performance of European Americans in athletic situations as well as men who are being tested on their social sensitivity.

Math anxiety is anxiety about one's ability to do mathematics, independent of skill. Highly anxious math students will avoid situations in which they have to perform mathematical calculations. Math avoidance results in less competency, exposure and math practice, leaving students more anxious and mathematically unprepared to achieve. In college and university, anxious math students take fewer math courses and tend to feel negative towards math.

Set and setting describes the context for psychoactive and particularly psychedelic drug experiences: one's mindset and the setting in which the user has the experience. 'Set' is the mental state a person brings to the experience, such as thoughts, mood and expectations. 'Setting' is the physical and social environment. Social support networks have been shown to be particularly important in the outcome of the psychedelic experience. Stress, fear, or a disagreeable environment may result in an unpleasant experience (a bad trip). Conversely, a relaxed, curious person in a warm, comfortable and safe place is more likely to have a pleasant experience.

The Wason selection task, one of the most famous tasks in the psychology of reasoning, is a logic puzzle which most people get wrong when it is presented as a test of abstract reasoning, but answer correctly when it is presented in a context of social relations. A Wason task proves to be easier if the rule to be tested is one of social exchange and the subject is asked to police the rule, but is more difficult otherwise.

(The above paragraphs summarize the Wikipedia articles linked; see those articles for sources. Below is speculation on my part.)

IQ tests, and other standardized tests, are usually given in settings associated with schooling or psychological evaluation. People who perform very well on them (gifted students) often report that they think of tests as being like puzzles or games. Many gifted students enjoy puzzles and solve them recreationally; and so may approach standardized tests with a more relaxed and less anxious mindset. Struggling students, who are accustomed to schooling being a source of anxiety, may face tests with a mindset that further diminishes their performance — and in a setting that they already associate with failure.

In other words, the setting of test-taking, and the mindset with which gifted and struggling students approach it, may amplify their underlying differences of reasoning ability. In effect, the test does not measure reasoning ability; it measures some combination of reasoning ability and comfort in the academic setting. These variables are correlated, but failing to notice the latter may lead us to believe there are wider differences in the former than there actually are.

Some people I've discussed the Wason task with, who have been from gifted-student and mathematical backgrounds, have reported that they solve the social-reasoning form of it by translating it to an abstract-reasoning form. This leaves me wondering whether the task is easier when presented in a form the individual is more comfortable with, and whether these folks simply expect more success in abstract reasoning than others do: in other words, whether the discrepancy has very much to do with mindset, and amplifies people's comfort or discomfort with abstract reasoning more than their ability to reason.
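For concreteness, in the classic abstract form of the task you see cards showing A, K, 4 and 7, each with a letter on one side and a number on the other, and must decide which cards to turn over to test the rule "if a card has a vowel on one side, it has an even number on the other". Only cards whose visible face could belong to a counterexample need turning; a minimal Python sketch of that selection logic (the card faces are just the textbook example):

    # Which cards must be turned to test "if vowel on one side, then even on the other"?
    # A card needs checking only if its visible face could be part of a counterexample.
    def must_turn(visible: str) -> bool:
        if visible.isalpha():
            return visible.lower() in "aeiou"  # a vowel might hide an odd number
        return int(visible) % 2 == 1           # an odd number might hide a vowel

    cards = ["A", "K", "4", "7"]
    print([c for c in cards if must_turn(c)])  # -> ['A', '7']

Most subjects reportedly pick A and 4, yet whatever is behind the 4 cannot falsify the rule, while a vowel behind the 7 would.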

comment by ChristianKl · 2013-02-18T22:00:43.130Z · LW(p) · GW(p)

Over the last month, Bitcoin has nearly doubled in value. It's now nearly at its historical high. http://bitcoincharts.com/charts/mtgoxUSD#tgMzm1g10zm2g25zv

Does anybody know what drives the latest Bitcoin price development?

Replies from: nigerweiss
comment by nigerweiss · 2013-02-18T22:07:21.685Z · LW(p) · GW(p)

The bitcoin market value is predicated mostly upon drug use, pedophilia, nerd paranoia, and rampant amateur speculation. Basically, break out the tea leaves.

Replies from: DaFranker, Tripitaka
comment by DaFranker · 2013-02-18T22:43:58.414Z · LW(p) · GW(p)

drug use, pedophilia, (...), and rampant amateur speculation

Hey, that's almost 2.5% of the world GDP! Can't go wrong with a market this size.

comment by Tripitaka · 2013-02-18T22:12:05.925Z · LW(p) · GW(p)

As of January, the pizza chain Domino's accepts payment in bitcoins; and as of this week, Kim Dotcom's "Mega" file-hosting service accepts them, too.

Replies from: drethelin
comment by drethelin · 2013-02-18T22:58:21.258Z · LW(p) · GW(p)

Domino's does not accept bitcoins. A third-party site will order Domino's for you, and you pay THEM in bitcoins.

Replies from: nigerweiss
comment by nigerweiss · 2013-02-19T00:43:26.662Z · LW(p) · GW(p)

The baseline inconvenience cost associated with using bitcoins is also really high for conducting normal commerce with them.

comment by beriukay · 2013-02-24T19:08:38.412Z · LW(p) · GW(p)

Any bored nutritionists out there? I've put together a list of nutrients, with their USDA-recommended quantities, and scoured Amazon for the best deals, trying to create my own version of Soylent. My search was complicated by the following goals:

  • I want my Soylent to have all USDA recommendations for a person of my age/sex/mass.
  • I want my Soylent to be easy to make (which means a preference for liquid and powder versions of nutrients).
  • My Soylent should be as cheap, per day, as possible (I'd rather have 10 lbs of Vitamin C at $0.00/day than 1 lb at $0.01/day).
  • I'd like it to be trivially easy to possess a year's supply of Soylent, should I find this to be a good experiment.
  • I want to make it easy for other people to follow my steps, and criticize my mistakes, because I'm totally NOT a nutritionist, but I'm awfully tired of being told that I need X amount of Y in my diet, without citations or actionable suggestions (and it is way easier to count calories with whey protein than at a restaurant).
  • I want the items to be available to anybody in the USA, because I live at the end of a pretty long supply chain, and can't find all this stuff locally.
  • I'm trying not to order things from merchants who practice woo-woo, but if they have the best version of what I need, I won't be too picky.

There's probably other things, but I can't think of them at the moment.

The spreadsheet isn't done yet. I hope to make it possible to try dynamic combinations of multiple nutrients, since most merchants seem to prefer the multivitamin approach. Plus, I'd like for there to be more options for liquid and powder substances, because they are easier to combine. Right now, I'm just an explorer, but eventually I'd like to just have a recipe.

If this all sounds too risky, I've also made contact with Rob, and he says that he's planning on releasing his data in a few weeks, once he's comfortable with his results (I think he's waiting on friends to confirm his findings). I'm planning on showing him my list, so we can compare notes. It has already been noted that his current Soylent formula is a bit lacking in fiber. My Soylent is currently slated to use psyllium husks to make up the difference, but I'm looking into other options.

A brief overview of the options indicates that this isn't much cheaper than other food choices (~$7.20/day), but it meets all the USDA recommendations, and once the routine is down it would be fast and easy to make, and could be stored for a long time. So I'm optimistic.
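Incidentally, picking the cheapest mix of products that still meets every target is the classic "diet problem" of linear programming, which a solver can do exactly instead of by hand-tuning a spreadsheet. A minimal sketch with scipy; the products, prices, and nutrient numbers below are made up for illustration, and the real spreadsheet data would replace them:

    # A minimal sketch of the "diet problem" with scipy; every number is made up.
    # Minimize daily cost subject to meeting or exceeding each nutrient target.
    from scipy.optimize import linprog

    products = ["whey", "psyllium", "multivitamin"]
    cost = [0.90, 0.15, 0.20]  # dollars per serving (hypothetical)

    # nutrients[i][j] = amount of nutrient i in one serving of product j
    nutrients = [
        [24.0, 0.0, 0.0],   # protein, g
        [1.0, 7.0, 0.0],    # fiber, g
        [0.0, 0.0, 60.0],   # vitamin C, mg
    ]
    targets = [56.0, 38.0, 90.0]  # illustrative daily targets

    # linprog minimizes cost @ x subject to A_ub @ x <= b_ub, so encode
    # "nutrients @ x >= targets" by negating both sides.
    result = linprog(
        c=cost,
        A_ub=[[-a for a in row] for row in nutrients],
        b_ub=[-t for t in targets],
        bounds=[(0, None)] * len(products),
    )
    for name, servings in zip(products, result.x):
        print(f"{name}: {servings:.2f} servings/day")
    print(f"total: ${result.x @ cost:.2f}/day")

With the real rows this finds the serving counts that minimize dollars per day while hitting every USDA target at once.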

Replies from: PECOS-9, gwern, beriukay, Qiaochu_Yuan
comment by PECOS-9 · 2013-02-25T21:24:12.271Z · LW(p) · GW(p)

Relevant dinosaur comic. The blog section "What are the haps my friends" below the comic also has some information that might be useful.

As much as I love this idea, I'd be too worried about possible unforeseen consequences to be one of the first people to try it. For example, the importance of gut flora is something that was brought up in the comments to the Soylent blog post but didn't occur to me at all while reading. Even if you can probably get around that, it's just an example of how there are a lot of possible problems you could be completely blind to. As another commenter on his follow-up post said:

My overall concern with your idea is that you only eat what is known to be necessary to support life. It used to be that when people set out to sea, they'd develop scurvy because of vitamin C deficiency. You're setting yourself up to be a test subject for discovering new vitamins.

Maybe it'd be useful to look up research on people who have to survive on tube feeding for an extended period of time. Of course, there are lots of confounding factors there, but I bet there's some good information out there (I haven't looked).

Also, most of the benefits he described are easily explained as a combination of placebo, losing weight by eating fewer calories, and exercise.

But still, I do like the idea. I bet a kickstarter for something like this would do really well.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-25T21:38:08.430Z · LW(p) · GW(p)

I am also worried about possible unforeseen consequences of eating bad diets, but one of those bad diets is my current one, so...

comment by beriukay · 2013-04-03T10:24:10.275Z · LW(p) · GW(p)

I got in touch with Mr. Rhinehart about my list. Here's his analysis of what I currently have:

Hey Paul,

Looks quite thorough. Note at small scale it is usually more efficient to find a multivitamin that contains many of the micronutrients than mixing them separately. Also you will exhaust your carb source rather quickly so it may pay to buy maltodextrin at a slightly higher scale. Otherwise looks pretty good.

I should be getting some money from the Good Judgment project soon. I'll buy the ingredients then.

comment by Qiaochu_Yuan · 2013-02-25T22:27:37.989Z · LW(p) · GW(p)

The list formatting doesn't seem to have quite worked. Can you try replacing the dashes with asterisks?

Anyway, I wish I could help, but I am not a nutritionist.

Replies from: gwern, beriukay
comment by gwern · 2013-02-25T23:47:09.791Z · LW(p) · GW(p)

The list formatting doesn't seem to have quite worked. Can you try replacing the dashes with asterisks?

He needs a full empty line between the list and his preceding sentence, I think.

comment by beriukay · 2013-03-30T11:17:46.287Z · LW(p) · GW(p)

Oops, sorry!

comment by lukeprog · 2013-02-28T18:49:08.784Z · LW(p) · GW(p)

I'm quite excited that MIRI's new website has launched. My thanks to Louie Helm (project lead), Katie Hartman (designer), and the others who helped with the project: Malo Bourgon, Alex Vermeer, Stephen Barnes, Steven Kaas, and probably some others I don't know about.

Replies from: None
comment by [deleted] · 2013-02-28T19:08:25.496Z · LW(p) · GW(p)

Grats on the URL.

Things we like about the new site:

  • Color scheme - The addition of orange and lime to the usual cobalt blue is quite classy.
  • Dat logo - We love that it degrades to the old logo in monochrome. We love the subtle gradient on the lettering and the good kerning job on the RI, though we regret the harsh M.
  • Navbar - More subtle gradients on the donate button.
  • Dat quote - Excellent choice with the selective bolding and the vertical rule. Not sure how I feel about the italic serif "and" between Tallinn's titles; some italic correction missing there, but the web is a disaster for such things anyway.
  • Headings - Love the idea of bold sans heading with light serif subheading at the same height. Could be more consistent, but variety is good too.
  • Font choices - Quattrocento is such a great font. Wouldn't mind seeing more of it in the sidebar, though. Source Sans Pro is nice but clashes slightly. Normally one thinks of futurist/post-modern sites being totally clean and sans serif everywhere. I'm really happy with the subversion here.
  • Stylized portraits - Love them. Seems a different process was used on the team page than on the research page; the team page's process is less stylized, but also holds up better with different face types, IMO.

Overall: exceptionally well done.

comment by Qiaochu_Yuan · 2013-02-18T09:11:49.589Z · LW(p) · GW(p)

How much do actors know about body language? Are they generally taught to use body language in a way consistent with what they're saying and expressing with their faces? (If so, does this mean that watching TV shows or movies muted could be a good way to practice reading body language?)

Replies from: shaih, None, Tenoke
comment by shaih · 2013-02-18T20:54:04.659Z · LW(p) · GW(p)

I do not believe it would be a good way to practice, because even with actors acting the way they are supposed to (consistent body language and facial expressions), let's say, conservatively, 90% of the time, you are left with 10% wrong data. This 10% wouldn't be that bad, except that it comes from actors trying to act correctly (meaning you would learn to read a fabricated emotion as a real one). This could be detrimental to many uses of reading body language, such as telling when other people are lying.

My preferred method has been to watch court cases on YouTube where it has come out afterward whether the person was guilty or innocent. I watch these videos before I know the truth, make a prediction, and then read what the truth was. In this way I am able to get situations where the person is feeling real emotions and is likely to hide what they're feeling with fake emotions.

After practicing like this for about a week I found that I could more easily discern whether people were telling the truth or lying, and it was easier to see what emotions they truly felt.

This may not be extremely applicable to the real world, because emotions felt in courtrooms are particularly intense, but I found that it gets my mind used to looking for emotion, which has helped in the real world.

I should also note that I have read many books by Paul Ekman and have used some of his training programs.

If learning to read faces is important to you, I strongly recommend SETT and METT; if it's simply a curiosity you're unwilling to spend much money on, I recommend checking out "Emotions Revealed" at your local library.

Replies from: PECOS-9, None
comment by PECOS-9 · 2013-02-18T22:58:54.927Z · LW(p) · GW(p)

My preferred method has been to watch court cases on YouTube where it has come out afterward whether the person was guilty or innocent. I watch these videos before I know the truth, make a prediction, and then read what the truth was. In this way I am able to get situations where the person is feeling real emotions and is likely to hide what they're feeling with fake emotions.

After practicing like this for about a week I found that I could more easily discern whether people were telling the truth or lying, and it was easier to see what emotions they truly felt.

That's a really cool idea. Did you record your predictions and do a statistical analysis on them to see whether you definitely improved?

Replies from: shaih
comment by shaih · 2013-02-18T23:06:39.608Z · LW(p) · GW(p)

My knowledge of statistics at the time was very much lacking (that being said, I still only have about a semester's worth of stats), so I was not able to do any statistical analysis that would be rigorous in any way. I did, however, keep track of my predictions: I was around 60% on the first day (slightly better than guessing, probably thanks to the books I mentioned) and around 80% about a week later, after practicing every day. I no longer have the exact data, though, only approximate percentages of how I did.

I also remember that it was difficult to track down the cases in which the truth was known, and this was very time-consuming; that is the predominant reason I only practiced like this for a week.
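For what it's worth, a prediction log like that is already enough for the analysis PECOS-9 asked about: a one-sided binomial test against the 50% you'd get by guessing. A minimal sketch with scipy, using hypothetical counts in place of the lost data:

    # Did 24 correct calls out of 30 (80%) beat guessing at 50%?
    # These counts are hypothetical; substitute the real prediction log.
    from scipy.stats import binomtest

    result = binomtest(k=24, n=30, p=0.5, alternative="greater")
    print(f"p-value: {result.pvalue:.4f}")  # about 0.0007, so unlikely to be pure guessing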

comment by [deleted] · 2013-02-19T22:53:57.760Z · LW(p) · GW(p)

Finding such videos without discovering the truth inadvertently seems difficult. Do you have links to share?

Replies from: shaih
comment by shaih · 2013-02-20T00:58:42.631Z · LW(p) · GW(p)

I don't have them any longer. An easy way to do it is to have a friend pick out videos for you (or have someone post links to videos here, and have people PM them for the answer). Or, while on YouTube, look for names that you've heard before but don't quite remember clearly; that's not really reliable, but it's better than nothing.

comment by [deleted] · 2013-02-19T23:26:26.726Z · LW(p) · GW(p)

In which case would this be preferable to live human interaction? It lacks the immediate salient feedback and strong incentives of a social setting. The editing and narrative would be distracting, and watching a muted movie sounds (or rather, looks) quite boring.

comment by Tenoke · 2013-02-18T14:23:40.344Z · LW(p) · GW(p)

They get some training, and it depends a lot on what you are watching, but you can learn a bit if you don't forget that this is not exactly how people act. A show like 'Lie to Me' will probably do more good than other shows (Paul Ekman is involved in it), but there are also inaccuracies there. Perhaps you can study the episodes and then read arguments about what was wrong in a certain episode (David Matsumoto used to post what was inaccurate about some episodes, iirc).

comment by Leonhart · 2013-02-25T22:30:24.017Z · LW(p) · GW(p)

Dude. Seriously. Spoilers.

This comment is a little less sharp than it would have been had I not gone to the gym first; but unless you (and the apparent majority in this thread) actively want to signal contempt for those who disagree with you, please remember that there are some people here who do not want to read about the fucking basilisk.

comment by gwern · 2013-02-20T22:07:52.461Z · LW(p) · GW(p)

It's been suggested to me that since I don't blog, I start an email newsletter. I ignored the initial suggestions, but following the old maxim* began to seriously consider it on the third or fourth suggestion (the last suggester also mentioned they'd even pay for it, which would be helpful given my money woes).

My basic idea is to once a month compile: everything I've shared on Google+, articles excerpted in Evernote or on IRC, interesting LW comments**, and a consolidated version of the changes I've made to gwern.net that month. Possibly also include media I've consumed with reviews for books, anime, music etc akin to the media thread.

I am interested in whether LWers would subscribe:

[pollid:415]

If I made it a monthly subscription, what does your willingness-to-pay look like? (Please be serious and think about what you would actually do.)

[pollid:416]

Thanks to everyone voting.

* "Once is chance; twice is coincidence; three times is enemy action." Or in Star Wars terms: "If someone calls you a Hutt, ignore them; if two people call you a Hutt, begin to wonder; and if three do, buy a slobber-pail and start stockpiling glitterstim."

** For example, my recent comments on the SAT (Harvard logistic regression & shrinking to the mean) would count as 'interesting comments', but not the Evangelion joke.

Replies from: gwern, jsalvatier, curiousepic, Risto_Saarelma, insufferablejake, satt
comment by gwern · 2013-12-06T05:06:38.291Z · LW(p) · GW(p)

After some further thought and seeing whether I could handle monthly summaries of my work, I've decided to open up a monthly digest email with Mailchimp. The signup form is at http://eepurl.com/Kc155

comment by jsalvatier · 2013-02-23T08:37:25.941Z · LW(p) · GW(p)

I would turn the email into an RSS feed.

comment by curiousepic · 2013-02-22T21:33:10.556Z · LW(p) · GW(p)

Why do you not blog? The difference between a blog and this newsletter seems unclear.

Replies from: gwern
comment by gwern · 2013-02-22T22:15:35.708Z · LW(p) · GW(p)

Reasons for 'not a blog':

  • I don't have any natural place on gwern.net for a blog
  • I've watched people waste countless hours dealing with regular blog software like WordPress and don't want to go anywhere near it.

Reasons for email specifically:

  • email lists like Google Groups or MailChimp seem both secure and easy to use for once-a-month updates
  • more people seem to still use email than RSS readers these days
  • patio11 says that geeks/Web people systematically underrate the usefulness of an email newsletter
  • there's much more acceptance of charging for an email newsletter
Replies from: Risto_Saarelma, Viliam_Bur, knb
comment by Risto_Saarelma · 2013-02-23T08:49:58.537Z · LW(p) · GW(p)

Might be worth noting that the customer base patio11 is probably most familiar with is people who pay money for a program that lets them print bingo cards. They might be a different demographic than people who know what a gwern is.

For a data point, I live in RSS, don't voluntarily follow any newsletters, and have become conditioned to associate the ones I do get from some places I'm registered at as semi-spam. Also if I pay money for something, then it becomes a burdensome Rare and Valuable Possession I Must Now Find a Safe Place For, instead of a neat thing I can go look at, then forget all about, then go look up again after five years based on some vaguely remembered details. So I'll save myself stress if I stick with free stuff.

Replies from: gwern
comment by gwern · 2013-02-23T15:51:55.333Z · LW(p) · GW(p)

They might be a different demographic than people who know what a gwern is.

Maybe. On the other hand, would you entertain for even a second the thought of paying for an RSS feed? Personally, I can think of paying for an email newsletter if it's worth it, but the thought of paying for a blog with an RSS feed triggers an 'undefined' error in my head.

Also if I pay money for something, then it becomes a burdensome Rare and Valuable Possession I Must Now Find a Safe Place For, instead of a neat thing I can go look at, then forget all about, then go look up again after five years based on some vaguely remembered details.

Email is infinitely superior to RSS in this respect; everyone gets a durable copy and many people back up their emails (including you - right? right?). I have emails going back to 2004. In contrast, I'm not sure how I would get my RSS feeds from a year ago since Google Reader seems to expire stuff at random, never mind 2006 or whenever I started using RSS.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2013-02-23T16:46:34.053Z · LW(p) · GW(p)

You're right about the paying part. I don't care to even begin worrying about how setting Google Reader to fetch something from beyond a paywall might work, but e-mail from a paid service makes perfect sense, tech-wise.

And now that you mention it, if I were living in an email client instead of Google Reader, I could probably get along just fine having stuff from my RSS subscriptions pushed into my mailbox. Unfortunately, after 15 years I still use email so little that I basically consider it a hostile alien environment, and I haven't had enough interesting stuff go on there that I've ever really felt the need to back up my mails. Setting up a proper email workflow and archiving wouldn't be a very big hurdle if I ever got a reason to bother with it, though.

An actual thing I would like is an archived log of "I read this thing today and it was interesting", preferably with an archive of the thing. I currently use Google Reader's starring feature for this, but that's leaving stuff I actually do care about archiving at Google's uncertain mercy, which is bad. Directing RSS to email would get me this for free.

Did I just talk myself into possibly starting to use email properly with an use case where I'd mostly be mailing stuff to myself?

Replies from: chemotaxis101
comment by chemotaxis101 · 2013-02-23T18:19:54.761Z · LW(p) · GW(p)

I'd recommend using Blogtrottr for turning the content from your RSS feeds into email messages. Indeed, as email is (incidentally) the only web-related tool I can (and must) consistently use throughout the day, I tend to bring a major part of the relevant web content I'm interested in to my email inbox - including twitter status updates, LW Discussion posts, etc.

comment by Viliam_Bur · 2013-02-25T16:26:17.192Z · LW(p) · GW(p)

I don't have any natural place on gwern.net for a blog

How about "blog.gwern.net" or even "gwernblog.net"?

I've watched people waste countless hours dealing with regular blog software like WordPress and don't want to go anywhere near it.

If some people are willing to pay for your news, maybe you could find a volunteer (by telling them that creating the blog software is the condition for you to publish) to make the website.

To emulate the (lack of) functionality of an e-mail, you only need to log in as the administrator and write a new article. The Markdown syntax, as used on LW, could be a good choice. Then the website must display the list of articles, the individual articles, and the RSS feed. That's it; someone could do that in a weekend. And you would get the extra functionality of being able to correct mistakes in already published articles, and to make hyperlinks between them.

Then you need functionality to manage users: log in as user, change the password, adding and removing users as admin. There could even be an option for users to enter their e-mails, so the new articles will be sent to them automatically (so they de facto have a choice between web and e-mail format). This all is still within a weekend or two of work.
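For concreteness, the core of this (post Markdown as admin, list articles, show articles, serve RSS) really is weekend-sized. A minimal sketch with Flask and the markdown package; the routes and in-memory storage are illustrative, and the admin login and user management described above are omitted:

    # Minimal blog core: accept Markdown posts, list articles, show them, serve RSS.
    # In-memory storage and no authentication; a real site would add both.
    from datetime import datetime, timezone
    from flask import Flask, request, abort
    import markdown  # pip install flask markdown

    app = Flask(__name__)
    articles = []

    @app.route("/post", methods=["POST"])
    def post_article():
        # In practice this route would sit behind an admin login.
        articles.append({
            "title": request.form["title"],
            "body": markdown.markdown(request.form["body"]),  # Markdown -> HTML
            "date": datetime.now(timezone.utc),
        })
        return "ok"

    @app.route("/")
    def index():
        return "<br>".join(
            f'<a href="/article/{i}">{a["title"]}</a>' for i, a in enumerate(articles)
        )

    @app.route("/article/<int:i>")
    def article(i):
        if i >= len(articles):
            abort(404)
        return f'<h1>{articles[i]["title"]}</h1>{articles[i]["body"]}'

    @app.route("/rss")
    def rss():
        items = "".join(
            f"<item><title>{a['title']}</title>"
            f"<link>{request.url_root}article/{i}</link>"
            f"<pubDate>{a['date'].strftime('%a, %d %b %Y %H:%M:%S +0000')}</pubDate>"
            f"</item>"
            for i, a in enumerate(articles)
        )
        return (
            f"<?xml version='1.0'?><rss version='2.0'><channel>"
            f"<title>blog</title>{items}</channel></rss>",
            200,
            {"Content-Type": "application/rss+xml"},
        )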

Replies from: gwern
comment by gwern · 2013-02-26T21:50:12.705Z · LW(p) · GW(p)

How about "blog.gwern.net" or even "gwernblog.net"?

I meant in my existing static site setup. (If I were to set up a blog of my own, it would probably go into a subdomain, yes.)

If some people are willing to pay for your news, maybe you could find a volunteer (by telling them that creating the blog software is the condition for you to publish) to make the website.

How would that help?

And you would get the extra functionality of being able to correct mistakes in already published articles, and make hyperlinks between them.

I don't often need to correct mistakes in snippets, month-old LW comments, etc. I do often correct my essays, but those are not the issue.

comment by knb · 2013-02-24T03:35:17.216Z · LW(p) · GW(p)

I've watched people waste countless hours dealing with regular blog software like WordPress and don't want to go anywhere near it.

Have you considered a Google Blogger site? They aren't quite as customizable as WordPress, but you can put AdSense on your site in like 5-10 minutes, if you're interested. Plus free hosting, even with your own domain name. I've used blogger for years, and I've never had downtime or technical problems.

Replies from: gwern
comment by gwern · 2013-02-24T20:17:15.904Z · LW(p) · GW(p)

Have you considered a Google Blogger site?

Those incredibly awful sites with the immovable header obscuring everything and broken scrollbars and stuff? No, I've never considered them, although I'm glad they're not as insecure and maintenance heavy as the other solutions... (I already have AdSense on gwern.net, and hosting isn't really a big cost right now.)

comment by Risto_Saarelma · 2013-02-23T17:03:42.578Z · LW(p) · GW(p)

I'd be a lot more willing to consider a somewhat larger single payment that gets me a lifetime subscription than a monthly fee. I'm pretty sure I don't want to deal with a monthly fee; even if it's $1, it feels like having to make the buying decision over and over again every month. But I can entertain dropping a one-off $20 for a lifetime subscription. Of course, that'd only net less than two years' worth of posts even at the $1 monthly price point, so this might not be such a great deal for you.

Replies from: gwern
comment by gwern · 2013-02-23T21:57:30.792Z · LW(p) · GW(p)

I wouldn't do a lifetime subscription simply because I know that there's a very high chance I would stop or the quality would go downhill at some point. Even if people were willing to trust me and pay upfront, I would still consider such a pricing strategy extremely dishonest.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-02-24T16:33:30.101Z · LW(p) · GW(p)

How does an annual fee feel?

Replies from: gwern
comment by gwern · 2013-02-24T20:18:44.638Z · LW(p) · GW(p)

Better but still too long a promise for the start. (Interestingly, patio11 does seem to think that annual billing is more profitable than monthly.)

comment by insufferablejake · 2013-02-21T07:29:11.617Z · LW(p) · GW(p)

I enjoy your posts, and I have been a consumer of your G+ posts and your blog for some time now, even though I don't comment much and just lurk about. While I would want some sort of syndication of your stuff, I am wondering whether the external expectation of meeting a monthly compilation target, or knowing for sure that there is a definite large audience for your posts now, will affect their quality. I realize that there is likely no answer possible for this beforehand, but I'd like to know if you've considered it.

Replies from: gwern
comment by gwern · 2013-02-21T16:00:41.712Z · LW(p) · GW(p)

I don't know. I'm more concerned that reviewing & compiling everything at the end of the month will prove to be too much of a stressful hassle or use of time than that I'll water down content.

comment by satt · 2013-02-21T01:46:31.898Z · LW(p) · GW(p)

I voted no, but think a Gwern Email Digest is a worthwhile idea regardless. I just don't sign up for email newsletters generally.

comment by drethelin · 2013-02-20T18:14:18.843Z · LW(p) · GW(p)

Conditional Spam (Something we could use a better word for but this will do for now)

In short: Conditional Spam is information that is spam to 99 percent of people, but valuable to the remaining 1 percent.

A huge proportion of the content generated and shared on the internet is in this category, and this becomes more and more the case as a greater percentage of the population outputs to the internet as well as reading it. In this category are things like people's photos of their cats, stories about day-to-day anecdotes, baby pictures, but ALSO, and importantly, things like most scientific studies, news articles, and political arguments. People criticize Twitter for encouraging seemingly narcissistic, pointless microblogging, but in reality it's the perfect engine for distributing conditional spam: anyone who cares about your dog can follow you, and anyone who doesn't can NOT.

When your Twitter or Facebook or RSS is full of things that don't inform you (or entertain you, since this applies to fun as well as usefulness), this isn't a failing of the internet. It's a failing of your filter. The internet is a tool optimized to distribute conditional spam as widely as possible, and you can tune your use of it so that the less-than-1-percent you inevitably see is something you WANT to see, and the less-than-1-percent you MAKE goes to the people who actually care about it.

I don't like the phrase "conditional spam", both because the content isn't CREATED with sinister motives and because the phrase presents it as a bad thing. 99 percent of all things are not for YOU, but that doesn't mean it's not good that they're created. I think coming up with good terminology for this can also help us start to create actual mechanisms by which to optimize it. You can sort of shortcut the information-filter process by paying attention only to people who pay attention to similar things as you, but is there an efficient way to set up, e.g., a news source that only gives you news you are likely to be interested in reading? It might be tunable by tracking how likely you are to finish reading the articles.
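That last idea is easy to prototype if a reader can log which articles actually get finished: score each source by its completion rate and rank the feed by it. A minimal sketch; the logging hook and source names are hypothetical:

    # Score each source by the fraction of its articles you actually finished,
    # with add-one smoothing so a new source isn't judged on a single data point.
    from collections import defaultdict

    shown = defaultdict(int)     # source -> articles you opened
    finished = defaultdict(int)  # source -> articles you read to the end

    def record(source: str, read_to_end: bool) -> None:
        shown[source] += 1
        finished[source] += read_to_end

    def score(source: str) -> float:
        return (finished[source] + 1) / (shown[source] + 2)  # unseen sources start at 0.5

    record("cat-photos", False)
    record("ml-papers", True)
    record("ml-papers", True)
    print(sorted(shown, key=score, reverse=True))  # -> ['ml-papers', 'cat-photos']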

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-02-25T16:56:30.916Z · LW(p) · GW(p)

It would be nice to have some way of adding tags to the information, so that we could specify what information we need, and avoid the rest. Unfortunately, this would not work, because the tagging would be costly, and there would be incentives to tag incorrectly.

For example, I like to be connected with people I like on Facebook. I just don't need to be informed every time they fart. So I would prefer it if some information could be labeled as "important" by the given person, and I would only read those items. But that would only give me many links to YouTube videos labeled "important"; and even this assumes, too optimistically, that people would bother to use the tags.

I missed my high-school reunion once because a Facebook group started specifically to notify people about the reunion gradually became a place for idle chat. After a few months of stupid content I learned to ignore the group. And then I missed a short message which was exceptionally on-topic. There was nothing to make it stand out of the rest.

In groups related to one specific goal, a solution could be to mark some messages as "important" and to make the importance a scarce resource. Something like: you can only label one message in a week as important. But even this would be subject to games, such as "this message is obviously important, so someone else is guaranteed to spend their point on it, so I will keep my point for something else".

The proper solution would probably use some personal recommendation system. Such as: there is a piece of information, users can add their labels to it, and you can decide to "follow" some users, which means that you will see what they labeled. Maybe something like Digg, but you would see only the points that your friends gave to the articles. You could have different groups of friends for different filters.
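A minimal sketch of that follow-based filter, with hypothetical users and items:

    # Keep only the items that at least `threshold` of the accounts you follow marked.
    def filtered_feed(items, endorsements, following, threshold=1):
        """endorsements maps each item to the set of users who labeled it."""
        return [
            item for item in items
            if len(endorsements.get(item, set()) & following) >= threshold
        ]

    endorsements = {
        "reunion-announcement": {"alice", "bob"},
        "funny-video": {"carol"},
    }
    feed = filtered_feed(
        items=["reunion-announcement", "funny-video"],
        endorsements=endorsements,
        following={"alice", "bob"},  # a different follow-set gives a different filter
    )
    print(feed)  # -> ['reunion-announcement']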

Replies from: drethelin
comment by drethelin · 2013-02-25T18:09:54.137Z · LW(p) · GW(p)

SEO is the devil.

comment by Pentashagon · 2013-02-19T19:31:38.951Z · LW(p) · GW(p)

In the short story/paper "Sylvan's Box" by Graham Priest, the author tries to argue that it's possible to talk meaningfully about a story with internally inconsistent elements. However, I realized afterward that if one truly was in possession of a box that was simultaneously empty and not empty, there would be no way to keep the inconsistency from leaking out. Even if the box was tightly closed, it would both bend spacetime according to its empty weight and bend spacetime according to its un-empty weight. Opening the box would cause photons and air molecules (at the least) to begin interacting and not interacting with the contents. Eventually a hurricane would form and not form over the Atlantic due to the air currents caused (and not caused) by removing the lid. In my opinion, if there is any meaning to be found in a physical interpretation of the story, it's that inconsistency would explode everywhere from any interaction with an initial inconsistency, probably fairly rapidly (at least as fast as the speed of sound).

I'd be interested to know what other people think of the physical ramifications.

Replies from: Viliam_Bur, Elithrion, shaih
comment by Viliam_Bur · 2013-02-20T14:04:01.201Z · LW(p) · GW(p)

The paper only showed that it is possible to talk meaningfully about a story with an element which is given inconsistent labels, so long as the consequences of having the inconsistent labels are avoided.

The hero looks in the box and sees that it "was absolutely empty, but also had something in it" and "the sense of touch confirmed this". How exactly? Did photons both reflect and non-reflect from the contents? Was it translucent? Or did it randomly appear and disappear? How did the fingers both pass through and not-pass-through the contents? But more importantly, what would happen if the hero tried to spill out the contents? Would something come out or not? What if they tried to use the thing / non-thing to detonate a bomb?

The story seems meaningful only because we don't get an answer to any of these questions. It is a compartmentalization forced by the author on the readers. The problems seem absent only because the author refuses to look at them.

Replies from: Pentashagon
comment by Pentashagon · 2013-02-21T22:54:23.275Z · LW(p) · GW(p)

The story seems meaningful only because we don't get an answer to any of these questions. It is a compartmentalization forced by the author on the readers. The problems seem absent only because the author refuses to look at them.

So in essence it is claiming "A and ~A, therefore B and ~C, the end." That isn't a limitation imposed by the author but an avoidance of some facts that can be inferred by the reader.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-02-25T16:41:38.764Z · LW(p) · GW(p)

Imagine that I offer you a story where some statement X is both completely true and completely false, and yet we can talk about it meaningfully.

And the story goes like this:

"Joe saw a statement X written on paper. It was a completely true statement. And yet, it was also a completely false statement. At first, Joe was surprised a lot. Just to make sure, he tried evaluating it using the old-fashioned boolean logic. After a few minutes he received a result 1, meaning the statement was true. But he also received a result 0, meaning the statement was false."

Quite a let-down, wasn't it? At least it did not take ten pages of text. Now you can be curious how exactly one can evaluate a statement using boolean logic and receive 1 and 0 simultaneously... but that's exactly the part I don't explain.

So the "talk about it meaningfully" part simply means that I am able to surround a nonsensical statement with other words, creating an illusion of a context. It's just that the parts of contexts which are relevant, don't make sense; and the parts of contexts which make sense are not relevant. (The latter is absent in my short story, but I could add a previous paragraph about Joe getting the piece of paper from a mysterious stranger in a library.)

comment by Elithrion · 2013-02-20T19:06:52.665Z · LW(p) · GW(p)

Having now read the story, it's just, errm... internally inconsistent. And I don't mean that in the "functional" way Priest intends. When the box is first opened, the statue is not treated as something that's both there and not there - instead, it's treated as an object that has property X, where X is "looking at this object causes a human to believe it's both there and not". This is not inconsistent - it's just a weird property of an object, one which doesn't actually exist in real life. Then at the end, the world is split into two branches in an arbitrary way that doesn't follow from property X. Looking at it another way, "inconsistency" is very poorly defined, and this lack of definition is hidden inside the magical effects that looking at the object has. (It would be clearer if, for example, he dropped a coin on the statue and then tried to pick it up - clearly the world would have to split right away, which is hidden in the story under the guise of being able to see property X.)

comment by shaih · 2013-02-19T19:41:24.890Z · LW(p) · GW(p)

I don't think it works for all inconsistencies, though, just large ones. There is a large mass difference between a box with nothing in it and a box with something in it. This doesn't necessarily work for, let's say, a box with a live cat in it versus a box with a dead cat in it.

Replies from: shaih
comment by shaih · 2013-02-20T01:13:14.141Z · LW(p) · GW(p)

May I ask why the downvotes, if I promise not to rebut and suck up time?

Replies from: Elithrion
comment by Elithrion · 2013-02-20T05:43:05.584Z · LW(p) · GW(p)

I didn't downvote you, but my guess is that your comment is basically wrong. Even a "small" inconsistency would behave in a similar way assuming it had physical interactions with the outside world. For example, the living cat would breathe in air and metabolise carbohydrates, while the dead one would be eaten by bacteria. The living cat will also cause humans who see it to pet it, while the dead one will cause them to recoil in disgust, which should split the world or something. I make no remark on the accuracy of the original comment, since I find it a little confusing, not having read the story/paper yet.

comment by diegocaleiro · 2013-02-19T01:10:43.930Z · LW(p) · GW(p)

Persson (Uehiro Fellow, Gothenburg) has jokingly said that we are neglecting an important form of altruistic behavior.

http://www.youtube.com/watch?v=sKmxR1L_4Ag&feature=player_detailpage#t=1481s

We have a duty not to kill

We have a duty not to rape

but we do not have a duty, at least not a strong duty, to save lives

or to have sex with someone who is sexually-starved

It's a good joke.

What worries me is that it makes Effective Altruism of the GWWC and 80000h kind analogous to "fazer um feio feliz", an expression we use in Portuguese meaning "making an ugly one happy". The joke is only funny because the analogy works to an extent.

And given it works, should Eff Alt be finding the most effective ways of getting the sexually deprived what they'd like to have?

Evolutionarily, sex is pretty high on the importance scale. Our psychology is engineered towards it (Buss 2004).

Could finding the best matchmaking algorithm be an important utilitarian cause?

Replies from: Viliam_Bur, ChristianKl
comment by Viliam_Bur · 2013-02-19T09:03:10.493Z · LW(p) · GW(p)

Could finding the best matchmaking algorithm be an important utilitarian cause?

It would certainly create a lot of utility.

I have no experience with dating sites (so all the following information is second-hand), but a few people told me there was still an opportunity on the market to make a good one. On the existing dating sites it was impossible to make the search queries they wanted. The sites collected only the few data points that the site makers considered important, and only allowed you to query those. So you could e.g. search for a "non-smoker with a university degree", but not for a "science-fiction fan with a degree in natural science". I don't remember the exact criteria they wanted (only that some of them also seemed very important to me; something like whether the person is single), but the idea was that you enter the criteria the system allows, get thousands of results, and can't refine them automatically; so you must click on each result individually to read the user profile, usually don't find your answer anyway, and have to contact each person to ask them personally.

So a reasonable system would have some smart way to enter data about people. Preferably any data; there should be a way to enter a (searchable) plain-text description, or custom key-value pairs if everything else fails. (Of course the site admins should have some statistics about frequently used custom data in descriptions and searches, so they could add them to the system.) Some geographical data, so that you could search for people "at most X miles from ...".
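A minimal sketch of that kind of query over a flexible profile store: arbitrary key-value criteria plus a distance bound (the field names, coordinates, and in-memory store are all illustrative):

    # Match profiles on arbitrary key-value criteria plus a "within X miles" bound.
    from math import radians, sin, cos, asin, sqrt

    def miles_between(a, b):
        # Haversine great-circle distance between (lat, lon) pairs, in miles.
        lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
        h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 3959 * 2 * asin(sqrt(h))

    def search(profiles, criteria, near=None, max_miles=None):
        for p in profiles:
            if all(p.get(k) == v for k, v in criteria.items()):
                if near is None or miles_between(p["location"], near) <= max_miles:
                    yield p

    profiles = [
        {"name": "A", "fandom": "science fiction", "degree": "natural science",
         "location": (48.15, 17.11)},
    ]
    hits = search(profiles, {"fandom": "science fiction", "degree": "natural science"},
                  near=(48.14, 17.10), max_miles=25)
    print([p["name"] for p in hits])  # -> ['A']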

Unfortunately, there are strong perverse incentives for dating sites. Create a happy couple -- lose two customers! The most effective money-making strategy for a dating site would be to feed unrealistic expectations (so that all couples fail, but customers return to the site believing their next choice would be better) and lure people to infidelity. Actually, some dating sites promote themselves on Facebook exactly like this.

So it seems to me that a matchmaking algorithm done right could create a lot of utility, but would be very difficult to sell.

EDIT: Another problem: Imagine that there is a trait which makes many people unattractive. A dating site that allows searching by this criterion will make the people searching (who dislike this trait) happy, but the people with this trait unhappy. If your goal is to make more money, to which group should you listen? Well, that depends on the size of the groups, and on their likelihood of leaving if the algorithm goes against their wishes.

Replies from: ChristianKl, ChristianKl
comment by ChristianKl · 2013-02-19T11:52:43.023Z · LW(p) · GW(p)

So a reasonable system would have some smart way to enter data about people. Preferably any data; there should be a way to enter a (searchable) plain-text description, or custom key-value pairs if everything else fails.

OkCupid has basically custom key-value pairs with its questions. While you can't search by individual questions, you get a match rank that bundles all the information from those questions together. You can search by that match rank.
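OkCupid has publicly sketched how that bundling works: you give your own answer, the answers you'd accept from a match, and an importance weight; each side's satisfaction is the fraction of importance points the other person earned, and the match percentage is the geometric mean of the two. A minimal sketch from that public description (the weights and sample data are illustrative, and the real system also subtracts a margin of error when few questions overlap):

    from math import sqrt

    WEIGHT = {"irrelevant": 0, "a little": 1, "somewhat": 10, "very": 50}

    def satisfaction(prefs, their_answers):
        # Of the importance points this user assigned, what fraction did the
        # other person earn by giving an acceptable answer?
        earned = possible = 0
        for q, (acceptable, importance) in prefs.items():
            if q not in their_answers:
                continue  # only questions both people answered count
            w = WEIGHT[importance]
            possible += w
            earned += w if their_answers[q] in acceptable else 0
        return earned / possible if possible else 0.0

    def match(a_prefs, a_answers, b_prefs, b_answers):
        return sqrt(satisfaction(a_prefs, b_answers) * satisfaction(b_prefs, a_answers))

    a_prefs = {"smoking": ({"never"}, "very")}
    a_answers = {"smoking": "never"}
    b_prefs = {"smoking": ({"never", "sometimes"}, "somewhat")}
    b_answers = {"smoking": "never"}
    print(f"{match(a_prefs, a_answers, b_prefs, b_answers):.0%}")  # -> 100%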

comment by ChristianKl · 2013-02-19T11:33:28.384Z · LW(p) · GW(p)

A dating site that allows searching by this criterion will make the people searching (who dislike this trait) happy, but the people with this trait unhappy.

Dating websites also care about getting true data. If there is a trait that 100% of people reject at first glance, why should anybody volunteer the information that they possess the trait?

People only volunteer information when they think that the act of volunteering the information will improve the ability of the website to find good matches for them. If you are a smoker, you don't want to date a person who hates smokers, and therefore you put the information that you are a smoker into your profile.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-02-19T12:25:29.263Z · LW(p) · GW(p)

Yeah, you are right. No search engine will help if people refuse to provide the correct data. Instead of lying (by omission) in person we get lying (by omission) in search engine results.

There could be an option to verify or provide the data externally. For example, after meeting a person, you could open their profile and write what you think their traits are (not all of them, only some that are wrong or missing). If many people correct or add something, it could be displayed in the person's profile. -- But this would be rather easy to abuse. If for whatever reason I disliked a person, I could add an incorrect but repulsive piece of information to their profile, and ask my friends (with some explanation) to report that they dated the person too, and to confirm my information. On the other hand, I could ask my friends (or make sockpuppet accounts) to report that they dated me, and to confirm false positive information about me.

Another option could be real-life verification / costly signalling. If a person visits the dating agency personally, and shows them a university diploma / a tax report / takes an IQ test there, the agency would confirm them as educated / rich / smart. This would be difficult and costly, so only a few people would participate. Even if some people were willing to pay the costs to find the right person, the low number of users would reduce the network effect. Maybe this verification could be just an extra service on a standard dating site.

Replies from: ChristianKl
comment by ChristianKl · 2013-02-19T16:00:27.283Z · LW(p) · GW(p)

Another option could be real-life verification / costly signalling. If a person visits the dating agency personally, and shows them a university diploma / a tax report / takes an IQ test there, the agency would confirm them as educated / rich / smart.

I think that would probably work. The person can send a scan of the university diploma / tax report. If I ran a dating website, I would make that an optional service that costs money. HotOrNot, for example, allows verification through phone numbers and linking of a Facebook account, but they don't provide verification services that need human labor.

Paid dating websites don't seem to shy away from using human labor. I know that big German dating websites used to read users' personal messages to prevent them from giving the other person an email address in the first message, which would allow contact without the website.

Another thing I don't understand is that the dating websites don't offer users any coaching.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-02-19T16:29:44.450Z · LW(p) · GW(p)

Another thing I don't understand is that the dating websites don't offer users any coaching.

1) Perverse incentives. Make your customers happy: lose them. Keep your customers hoping but unsatisfied: keep them.

2) There already exists a separate "dating coaching" industry, called PUA. Problem is, because of human hypocrisy, you cannot provide effective dating advice to men without insulting many women. And if a dating website loses most female customers, it obviously cannot work (well, unless it is a dating website for gays).

However, neither of these explains why dating services don't offer false coaching, one that wouldn't really help customers, but would make them happy, and would extract their money.

Maybe it's about status. Using a dating website can be a status hit, but it can be also rationalized: "I am not bad at attracting sexual partners in real life. I just want to use my time effectively, and I am also a modern person using the modern technology." It would be more difficult to rationalize a dating coaching this way.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-02-20T22:42:07.125Z · LW(p) · GW(p)

1) Perverse incentives. Make your customers happy: lose them. Keep your customers hoping but unsatisfied: keep them.

On the other hand, there's a win for the dating site if people who met there are in good relationships and talk about how they met.

Replies from: drethelin
comment by drethelin · 2013-02-21T03:58:45.720Z · LW(p) · GW(p)

I think the innate nature of dating is such that if you want success stories, your incentive is to optimize as much as possible for success; the failure rate and the single population will take care of themselves.

Replies from: CronoDAS
comment by CronoDAS · 2013-03-05T07:31:25.804Z · LW(p) · GW(p)

If you don't mind waiting 18 or so years for your new potential customers...

Replies from: drethelin
comment by drethelin · 2013-03-05T08:15:18.761Z · LW(p) · GW(p)

You seem to have completely missed my point. Let me try an analogy: if reliable cars sell better, car manufacturers are incentivized to make their cars more reliable per dollar than their competitors', ad infinitum. If a car were infinitely reliable, they would never get repeat purchases (clearly they should start upcharging for motor oil like printer ink). However, we're so far from perfect reliability that on the margin it still makes sense for any given car manufacturer to try to compete with others to make their cars more reliable.

That doesn't take into account damage to cars or relationships from car accidents. It also doesn't account for polyamory or owning more than one car.

If OkCupid's saturation level were 90 percent of the single population, that would be one thing, but there's WAY more marketing to do before that could ever happen, and having a good algorithm is basically their entire (theoretical) advantage.

Replies from: CronoDAS
comment by CronoDAS · 2013-03-05T08:27:05.806Z · LW(p) · GW(p)

You seem to have completely missed my point.

Apparently I did. But either way works.

comment by ChristianKl · 2013-02-19T11:54:53.943Z · LW(p) · GW(p)

Could finding the best matchmaking algorithm be an important utilitarian cause?

I don't know whether much better matchmaking algorithms are possible. The idea that there is a soulmate out there for everyone is problematic. Even when in theory two people would be a good match, they won't form a relationship if one of them screws up the mating process.

comment by Dahlen · 2013-02-17T16:59:50.008Z · LW(p) · GW(p)

Where on LW is it okay to request advice? (The answers I would expect -- are these right? -- are: maybe, just maybe, in open threads, probably not in Discussion if you don't want to get downvoted into oblivion, and definitely not in Main; possibly (20-ish percent sure) nowhere on the site.)

I'm asking because, even if the discussions themselves probably aren't on-topic for LW, maybe some would rather hear opinions formulated by people with the intelligence, the background knowledge and the debate style common around here.

Replies from: Nisan, ChristianKl, beoShaffer
comment by Nisan · 2013-02-17T17:55:44.002Z · LW(p) · GW(p)

It's definitely okay to post in open threads. It might be acceptable to post to discussion, if your problem is one that other users may face or if you can make the case that the subsequent discussion will produce interesting results applicable to rational decisionmaking generally.

comment by ChristianKl · 2013-02-17T18:51:56.051Z · LW(p) · GW(p)

Advice is a fairly broad category. Different calls for advice are likely to be treated differently.

If you want your call for advice to be well received, start by describing your problem in specific terms. What do your current utility calculations look like? If you make assumptions, give us probability estimates of how confident you are that your assumptions are true.

comment by beoShaffer · 2013-02-17T18:30:21.567Z · LW(p) · GW(p)

It depends on what you're asking about, but generally open threads are your best bet.

comment by Mitchell_Porter · 2013-02-16T01:37:36.161Z · LW(p) · GW(p)

Two papers from last week: "The universal path integral" and "Quantum correlations which imply causation".

The first defines a quantum sum-over-histories "over all computable structures... The universal path integral supports a quantum theory of the universe in which the world that we see around us arises out of the interference between all computable structures."

The second, despite its bland title, is actually experimenting with a new timeless formalism, a "pseudo-density matrix which treats space and time indiscriminately".

I don't believe in timeless physics, computation as fundamental, or quantum mechanics as fundamental, but many people here do, and it amused me to see two such papers coming out on the same day.

comment by shaih · 2013-02-19T19:18:22.838Z · LW(p) · GW(p)

I've been reading the Sequences, but I've realized that less of it has sunk in than I would have hoped. What is the best way to make the lessons sink in?

Replies from: beoShaffer, Viliam_Bur
comment by beoShaffer · 2013-02-20T03:57:50.408Z · LW(p) · GW(p)

That's a complicated and partially open question, but some low-hanging fruit: try to link the Sequences to real-life examples, preferably personal ones, as you read. Make a point of practicing what you theoretically already know when it comes up IRL; you'll improve over time. Surround yourself with rational people; go to meetups and/or a CfAR workshop.

comment by Viliam_Bur · 2013-02-20T14:15:35.039Z · LW(p) · GW(p)

I made a presentation of part of the Sequences for other people. This made me look at the list and short descriptions carefully, and re-read the articles where I did not understand the short description; then I thought about the best subset and the best way to present them, and I made short notes. All of this was active work with the text, which is much better for remembering than passive reading. Then, by presenting the result, I connected it with positive emotions.

Generally, don't just read the text, work with it. Try to write a shorter version, expressing the same idea, but using your own words. (If you have a blog, consider publishing the result there.)

comment by [deleted] · 2013-02-16T21:39:54.643Z · LW(p) · GW(p)

With all the mental health issues coming up recently, I thought I'd link Depression Quest, a text simulation of what it's like to live with depression.

Trigger warning: Please read the introduction page thoroughly before clicking Start. If you are or have been depressed, continue at your own risk.

Replies from: shminux, radical_negative_one, TimS
comment by shminux · 2013-02-16T21:45:42.350Z · LW(p) · GW(p)

Warning: the link starts playing bad music without asking.

Replies from: EvelynM, Elithrion
comment by EvelynM · 2013-02-17T00:05:55.041Z · LW(p) · GW(p)

That's depressing.

comment by Elithrion · 2013-02-17T03:13:43.941Z · LW(p) · GW(p)

On the bright side, there's actually a button to pause it just above "restart the game". Although annoyingly, it's white on grainy white/gray and took me a little while to notice.

comment by radical_negative_one · 2013-02-17T21:22:33.609Z · LW(p) · GW(p)

In the past I went through a period that felt like depression, though I never talked about it to anyone, so of course I wasn't diagnosed at any point. I went against your warning and played the game. The protagonist started off with more social support than I did. I chose the responses that I think I would have given when I felt depressed. This resulted in the protagonist never seeking therapy or medication, and in what is labeled "endingZero".

Depression Quest seems accurate. Now I feel bad. (edit: But I did get better.)

comment by TimS · 2013-03-04T17:37:33.026Z · LW(p) · GW(p)

Trigger warning: Please read the introduction page thoroughly before clicking Start. If you are or have been depressed, continue at your own risk.

I found it very helpful, actually. It encouraged healthy activity like talking about your concerns with others, recognizing that some folks are not emotionally safe to talk to, and expanding one's social safety net. But I'm more anxious than depressed, so YMMV.

Replies from: None
comment by [deleted] · 2013-03-04T19:40:13.276Z · LW(p) · GW(p)

I've had experiences with both, and I wouldn't mind discussing specifics through PM.

comment by lsparrish · 2013-02-16T17:48:20.166Z · LW(p) · GW(p)

I'm looking for information about chicken eye perfusion, as a possible low-cost cryonics research target. Anyone here doing small animal research?

comment by Antisuji · 2013-02-20T00:38:41.951Z · LW(p) · GW(p)

Following up on my comment in the February What are You Working On thread, I've posted an update to my progress on the n-back game. The post might be of interest to those who want to get into mobile game/app development.

comment by Qiaochu_Yuan · 2013-02-23T21:09:10.121Z · LW(p) · GW(p)

I have recently tried playing the Monday-Tuesday game with people three times. The first time it worked okay, but the other two times the person I was trying to play it with assumed I was (condescendingly!) making a rhetorical point, refused to play the game, and instead responded to what they thought the rhetorical point I was making was. Any suggestions on how to get people to actually play the game?

Replies from: Nisan, Viliam_Bur
comment by Nisan · 2013-02-25T22:35:32.821Z · LW(p) · GW(p)

What if you play a round yourself first, not on a toy example but on the matter at hand?

comment by Viliam_Bur · 2013-02-25T17:12:37.671Z · LW(p) · GW(p)

On Monday, people were okay with playing the game. On Tuesday, people assumed you were making a rhetorical point and refused to play the game. Are you trying to say that CFAR lessons are a waste of money?! :D

More seriously: the difference could be in the people involved, but also in what happened before the game (either immediately, or during your previous interaction with the people). For example if you had some disagreement in the past, they could (reasonably) expect that your game is just another soldier for the upcoming battle. But maybe some people are automatically in the battle mode all the time.

comment by Elithrion · 2013-02-22T05:15:31.743Z · LW(p) · GW(p)

I decided I want to not see my karma or the karma of my comments and posts. I find that if anyone ever downvotes me it bothers me way more than it should, and while "well, stop letting it bother you" is a reasonable recommendation, it seems harder to implement for me than a software solution.

So, to that end, I figured out how the last posted version of the anti-kibitzer script works, and remodeled it to instead hide only my own karma (which took embarrassingly long to figure out, since my javascript skills can be best described with terms like "vague" and "barely existing"). If anyone wants it, here it is - you just need to open it with some editor (notepad works) and change all (7) instances of "Elithrion" in the Votepaths to your username. I tested and it works with both Greasemonkey for Firefox and Tampermonkey for Chrome.

The one thing that doesn't work well is that the page loads and then maybe 0.1s later everything gets hidden, which does leave you with enough time to see your own karma sometimes if you're looking at that spot, so if anyone knows how to fix that (or can confirm that it's too hard to fix to bother with), that would be welcome. Also, let me know if you think there are enough people who might want it that I should make a discussion post for more visibility or something.

(I'm not particularly concerned that I will lose feedback on the quality of my comments and posts, since I will still see the karma others receive and be able to compare, and I would still be interested in having a positive reputation. As of this writing my positive karma rate is a little under 90%, and my plan is to check once in a while and change something if I see it fall too much.)

comment by A1987dM (army1987) · 2013-02-16T13:46:16.202Z · LW(p) · GW(p)

The quantum coin toss

A couple guys argue that quantum fluctuations are relevant to most macroscopic randomness, including ordinary coin tosses and the weather. (I haven't read the original paper yet.)

Replies from: Nisan, shminux
comment by shminux · 2013-02-16T21:51:26.119Z · LW(p) · GW(p)

If the claim is true, no coin tosser, human or robotic, should be able to do significantly better than chance, provided the toss is reasonably high; so if it is false, a single counterexample would easily falsify it.
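
For concreteness, here is the kind of statistical bar a counterexample would have to clear (a minimal sketch; the 80-out-of-100 figure is just an illustration):

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of doing at least
    this well by pure luck with a fair coin."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A tosser calling 80 of 100 reasonably high tosses correctly would do it:
print(p_at_least(80, 100))  # ~5.6e-10, decisively better than chance
```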

EDIT: according to this

In the 31-page Dynamical Bias in the Coin Toss, Persi Diaconis, Susan Holmes, and Richard Montgomery lay out the theory and practice of coin-flipping to a degree that's just, well, downright intimidating.

Suffice to say their approach involved a lot of physics, a lot of math, motion-capture cameras, random experimentation, and an automated "coin-flipper" device capable of flipping a coin and producing Heads 100% of the time

the premise has already been falsified.

Replies from: gwern
comment by gwern · 2013-02-17T00:38:22.596Z · LW(p) · GW(p)

The link discusses normal human flips as being quantum-influenced by cell-level events; a mechanical flipper doesn't seem relevant.

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-17T01:06:16.520Z · LW(p) · GW(p)

Even humans can flip a coin in such a way that the same side comes up in all branches of the wave function, as described by E.T. Jaynes, but IIRC he himself refers to that as "cheating".

Replies from: gwern
comment by gwern · 2013-02-17T01:54:45.855Z · LW(p) · GW(p)

I'm not sure that's what they mean either. I take them as saying 'humans can flip in a quantum-influenced way', not as 'all coin flips are quantum random' (as shminux assumed, hence the coin-flipping machine would be a disproof) or 'all human coin flips are quantum random' (as you assume, in which case magicians' control of coin flips would be a disproof).

Replies from: army1987, shminux, Elithrion
comment by A1987dM (army1987) · 2013-02-17T21:04:09.736Z · LW(p) · GW(p)

I'd guess something along the line of typical human coin flips being quantum-influenced.

comment by shminux · 2013-02-17T03:12:37.522Z · LW(p) · GW(p)

If their model makes no falsifiable predictions, it's not an interesting one.

comment by Elithrion · 2013-02-17T03:30:38.208Z · LW(p) · GW(p)

I'm honestly not sure. I find myself confused. According to the article, they say:

They also point out that it would only take one counterexample to falsify their idea – a use of classical probabilities that is clearly isolated from the physical, quantum world.

But what would that look like exactly? Naively, it seems like the robot that flips the coin heads every time satisfies this (classical probability: ~1). Or maybe it uses a pseudo-random number generator to determine what's going to come up next and flips the coin that particular way and then we bet on the next flip (constituting "a use of classical probabilities that is clearly isolated from the physical, quantum world"). But presumably that's not what they mean. What counterexample would they want, then?

Replies from: Nisan
comment by Nisan · 2013-02-17T17:51:25.604Z · LW(p) · GW(p)

The authors claim that all uncertainty is quantum. A machine that flips heads 100% of the time doesn't falsify their claim (no uncertainty), and neither does a machine that flips heads 99% of the time (they'd claim it's quantum uncertainty). As for a machine that follows a pseudorandom bit sequence, I believe they would argue that a quantum process (like human thought) produced the seed. Indeed, they argue that our uncertainty about the n-th digit of pi is quantum uncertainty because if you want to bet on the n-th digit of pi, you have to randomly choose n somehow.

Replies from: BlazeOrangeDeer
comment by BlazeOrangeDeer · 2013-02-19T22:30:59.541Z · LW(p) · GW(p)

If they're saying all sources of entropy are physical, that seems obvious. If they're saying that all uncertainty is quantum, they must not know that chaotic classical simulations exist? Or are they not allowing simulations made by humans o.O

Replies from: Nisan, Transfuturist
comment by Nisan · 2013-02-20T02:42:49.989Z · LW(p) · GW(p)

They're saying all uncertainty is quantum. If you run a computer program whose outputs is very sensitive to its inputs, they'd probably say that the inputs are influenced by quantum phenomena outside the computer. Don't ask me to defend the idea, I think it's incorrect :)

comment by Transfuturist · 2013-03-05T23:58:55.866Z · LW(p) · GW(p)

Chaotic classical simulations? Could you elaborate?

Replies from: BlazeOrangeDeer
comment by BlazeOrangeDeer · 2013-03-06T00:37:01.577Z · LW(p) · GW(p)

Well, you can run things like physics engines on a computer, and their output is not quantum in any meaningful way (following deterministic rules fairly reliably). It's not very hard to simulate systems where a small uncertainty in initial conditions is magnified very quickly, and this increase in randomness can't really be attributed to quantum effects but can be described very well by probability. This seems to contradict their thesis that all use of probability to describe randomness is justified only by quantum mechanics.
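
For instance, here is a minimal sketch of the kind of system I mean (the logistic map is a standard textbook example; the parameter values are just illustrative):

```python
# The logistic map at r = 4: deterministic, fully classical, chaotic.
r = 4.0
x, y = 0.400000, 0.400001  # two initial conditions differing by 1e-6

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(step, abs(x - y))
# The separation grows from ~1e-6 to order 1 within a few dozen steps,
# with no quantum effects anywhere; probability describes the output well.
```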

Replies from: Transfuturist
comment by Transfuturist · 2013-03-06T00:50:01.609Z · LW(p) · GW(p)

There seems to be a mismatch of terms involved. Ontological probability (propensity) and epistemological probability (uncertainty) are being confused. Reading over this discussion, I have seen claims that something called "chaotic randomness" is at work, where uncertainty results from chaotic systems because the results are so sensitive to initial conditions, but that's not ontological probability at all.

The claim of the paper is that all actual randomness, and thus ontological probability, is a result of quantum decoherence and recoherence in both chaotic and simple systems. Uncertainty is uninvolved, though uncertainty in chaotic systems appears to be random.

That said, I believe the hypothesis is correct simply because it is the simplest explanation for randomness I've seen.

Replies from: BlazeOrangeDeer
comment by BlazeOrangeDeer · 2013-03-06T01:06:14.235Z · LW(p) · GW(p)

We argue using simple models that all successful practical uses of probabilities originate in quantum fluctuations in the microscopic physical world around us, often propagated to macroscopic scales

Their argument is that not only is quantum mechanics ontologically probabilistic, but that only ontologically probabilistic things can be successfully described by probabilities. This is obviously false (not to mention that nothing has actually been shown to be ontologically probabilistic in the first place).

Thus we claim there is no physically verified fully classical theory of probability.

They think they can get away with this claim because it can't even be tested in a quantum world. But you can still make classical simulations and see if probability works as it should, and it's obvious that it does. Their only argument is that it's simpler for probability to be entirely quantum, but they fail to consider situations where quantum effects do not actually affect the system (which we can simulate and test).

Replies from: Transfuturist
comment by Transfuturist · 2013-03-06T01:15:47.591Z · LW(p) · GW(p)

I don't think they refer to Bayesian probability as probability. The abstract is ill-defined (according to LessWrong's operational definitions), but their point about ontological probabilities originating in quantum mechanics stands. It is, I think, intertwined with multiverse theories, as multiverse theories seem to explain probability in a very similar sense, though not in as many words or with such grand claims.

Also, in a classical simulation, I would not find it obvious at all that probability works as it should. In fact, it's quite difficult to imagine a genuinely classical system that also contains randomness. It could be that childhood explanations of physical systems in classical terms, with randomness treated as present, are clouding the issue.

Either way, I don't think it's really worth much argument; it comes down to what one takes as the basis of probability theory.

comment by byrnema · 2013-02-15T23:58:20.611Z · LW(p) · GW(p)

Could someone write a post (or I suppose we could create a thread here) about the Chelyabinsk meteorite?

It's very relevant for a variety of reasons:

  • connection to existential risk

  • the unlikely media report that the meteorite is 'independent' of the asteroid that passed by that same day

  • any observations people have (I haven't any) on global communication and global rational decision making at this time, before it was determined that the damage and integrated risk was limited

Replies from: ZankerH, None
comment by ZankerH · 2013-02-16T01:06:27.671Z · LW(p) · GW(p)

the unlikely media report that the meteorite is 'independent' of the asteroid that passed by this day

It came from a different region of space, on an entirely different orbit. 2012 DA14 approached Earth from the south on a northward trajectory, whereas the Chelyabinsk meteorite was on what looks like a much more in-plane, east-west orbit. As unlikely as it sounds, there is no way they could have been fragments of the same asteroid (unless they broke apart years ago and were subsequently separated further by more impacts or by the chaotic gravitational influences of other objects in the solar system).

Replies from: byrnema, Mitchell_Porter
comment by byrnema · 2013-02-16T02:30:26.411Z · LW(p) · GW(p)

OK. Unlikely doesn't mean not true. But I would expect there to be some debris-type events around the passing of an asteroid, whereas a completely coincidental meteorite (for, say, the entire month over which the asteroid is passing) has a lower probability. It's like helping an old lady up after a fall and picking up her cane when she says, 'oh no, that isn't mine'. (Someone else dropped their cane?)

From your account and others, it seems the trajectories were not similar enough to conclude they arrived together in the same bundle. The second idea is that the meteorite that fell was 'shaken loose' due to some kind of interaction with the asteroid and any associated debris, and I think this hypothesis would be more difficult to falsify.

(So I agree with Mitchell Porter; I'd like to see more details.) I wonder if there is an animation of the asteroid and the meteor for the period over which their historical tracks are known.

You also ought to mention the NASA site: where did you find that information about the trajectories?

comment by Mitchell_Porter · 2013-02-16T02:22:54.485Z · LW(p) · GW(p)

I'm wondering if it was some sort of orbital projectile weapons system, being tested under cover of the asteroid's passage. But first I want to see more details of the argument that they couldn't have been part of the same cloud of rocks - e.g. could the Chelyabinsk meteor have been an outlier which fell into Earth's gravity well at a distance and arrived from a different direction.

edit: Maybe a more plausible version of the idea that the Chelyabinsk meteor was artificial, is that it was a secret satellite which was being disposed of ("de-orbited", re-directed on a collision course with Earth). Chelyabinsk area seems to be full of secret installations, so if there's debris, at least the men in black don't have far to travel.

edit #2: The general options seem to be: artificial; natural and related to the asteroid; natural and unrelated to the asteroid. Better probability estimates for each option should be forthcoming.

Replies from: None
comment by [deleted] · 2013-02-16T06:42:07.741Z · LW(p) · GW(p)

This thing came in at significantly greater than orbital velocity, faster than it could fall from any earth-bound orbit, and in the reverse direction from pretty much everything launched from Earth's surface (in all the wide-view videos you can see it approaching from the direction of the rising sun, in the East, and comparison of the shape of the trail with http://www.youtube.com/watch?feature=player_embedded&v=VdoKEFsemvw confirms this). It also looks JUST like any number of other meteors that have hit, and discharged as much energy as a 300 kiloton* nuclear weapon as it disintegrated (in a long streak, not all at once) - far more kinetic energy than anything human-launched has ever carried (a fully fueled Saturn V exploding with all of its original chemical energy would release less than five kilotons). The energy couldn't have been generated in space either: a 100 square meter solar array would require 1400 years to gather that much. And if you were going to deorbit something, you would rather blow it to pieces over the ocean, where nobody's ever going to find all the tiny fragments scattered over hundreds of square miles of seafloor.
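
For what it's worth, a rough sanity check of the solar-array figure (the joules-per-kiloton and solar-constant values are standard; the ~20% panel efficiency is my own assumption):

```python
KT_TNT = 4.184e12          # joules per kiloton of TNT equivalent
yield_j = 300 * KT_TNT     # ~1.3e15 J for a ~300 kt airburst

solar_constant = 1361.0    # W/m^2 above the atmosphere
efficiency = 0.20          # assumed panel efficiency (my assumption)
power = 100 * solar_constant * efficiency   # 100 m^2 array: ~27 kW

years = yield_j / power / (365.25 * 24 * 3600)
print(round(years))        # ~1460 years, consistent with the figure above
```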

Conclusion: Natural. Not bothering with probability estimate.

Not only did it come in on a completely different trajectory from the known asteroid (closer to coplanar with the ecliptic, seeing as it came from the East), but Russia was not even visible to objects on similar trajectories to the known asteroid until after it had passed the Earth. The chaotic influences of the rest of the solar system and the inhomogeneity of impacts also mean that even if you explode something into lots of fragments on completely different orbits (which does not really happen), they are NOT going to come back to the same spot within 50,000 kilometers at the same time on their way back together. The 'focusing' effect of the Earth's gravity exists, but at velocities this high it is insufficient to wrap the zone that something coming from one side of the Earth can hit around to more than a few degrees beyond half of the planet.

There are plenty of these rocks throughout the solar system, and they DO hit the Earth. Something roughly this size happens every few (single-digit) years; it's just that most of the time they happen over the ocean, desert, or sparsely populated land. This one had the 'good' fortune to explode directly over a city of one million people well-armed with anti-police and anti-insurance-fraud dashcams.

Conclusion: Unrelated to known asteroid. Not worth probability estimate.

NOTE: There seems to be some question about the total yield, but given that, from what I have read, the shock wave took a minute or more to reach the ground from tens of kilometers up and still had enough force to shatter windows and blow in doors, I'm leaning toward the higher end of the estimates. EDITED: the USGS says 300 kilotons according to their analysis of seismographs.

EDIT: Just realized something interesting. It came from the direction of the rising sun - meaning that, even allowing for gravity bending its trajectory a bit, it would have come from a trajectory at most probably a few tens of degrees away from the sun in the sky. I do not know much about the estimated size of this thing, but that would indeed make it much harder to see as that part of the sky is only visible at night for a very short time and through a large amount of atmosphere. Even though we are getting better at detecting incoming rocks a few days out (a few years ago we even caught something only like 5 meters wide a day in advance and predicted its impact site, though that was a special case and we don't catch even a fraction of what actually comes our way) this one would have been particularly hard to see.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2013-02-16T10:36:44.419Z · LW(p) · GW(p)

It turns out that the coincidence involved in the null hypothesis is somewhat less improbable than I thought.

The situation: we were waiting for "the closest-ever predicted approach to Earth for an object this large" (asteroid 2012 DA 14), and with just hours remaining, "the largest recorded object encountered by Earth since 1908" suddenly showed up, above a populated area, and blew up spectacularly.

The consideration which reduces the improbability slightly, is that new records and notable events in the former category, now occur almost every year. So rather than a coincidence between a once-in-a-century event and a once-in-a-decade event, it's more like a once-in-a-century event and a once-a-year event, occurring on the same day.

Assuming the null hypothesis for the moment (pure coincidence), we should still be aware that this is a remarkable coincidence. The Chelyabinsk fireball would have been a notable event of the 21st century anyway, but now it will go into history accompanied by the spooky fact that the world was already watching the skies that day; so it will have a place, not just in the annals of the space age, but in those chronicles of weird coincidences that titillate agnostics and agitate fringe believers.

I see that Phil Plait says he thought it was a hoax at first. That reminds me of my own idea that it was probably an audacious covert operation. We were each applying a familiar template to an unlikely-sounding event, something which at first sight was too unlikely to be regarded as a coincidence, so either it had to be denied or given a causal connection to the other part of the coincidence, after all.

And I'm still wary of letting go of that feeling that here is a glimpse of hidden connections. If Earth is being trolled by the long tail of the FSM, part of the divine entertainment might be to see the efficiency with which certain humans can dismiss even the highly improbable as just a coincidence, if they can't see a satisfactory explanation.

More mundanely, although the hypothesis that the fireball was something artificial is looking weak - I cannot think of any scenario which makes much physical and political sense - I wonder what the relative likelihood of "natural but related" and "natural but unrelated" really is. If the probability of "natural but related" is as high as 1 in 500, it may after all compare favorably with the alternative.

Replies from: None, ZankerH
comment by [deleted] · 2013-02-18T14:02:51.830Z · LW(p) · GW(p)

http://www.youtube.com/watch?v=eo0zFQkYsf4

http://blogs.nasa.gov/cm/blog/Watch%20the%20Skies/posts/post_1361037562855.html

Not only the space of the solar system, but the space of all possible orbital energies and orientations, is so vast that the probability of two rocks with a common origin going onto completely different orbits and then coming together again is too tiny for me to even figure out how to properly calculate. The profusion of rocks (many, many millions the size of the small one, probably a million plus the size of the big one) from a huge number of sources means the odds of any two objects not on very similar orbits having a related origin are basically nil.

Coincidence, as unlikely as it is, is orders of magnitude more likely than any other option.

comment by ZankerH · 2013-02-23T22:54:45.832Z · LW(p) · GW(p)

I wonder what the relative likelihood of "natural but related" and "natural but unrelated" really is. If the probability of "natural but related" is as high as 1 in 500, it may after all compare favorably with the alternative.

It seems to me that your incredibly poor probability estimates stem from a complete unfamiliarity with even the basics of orbital mechanics. If I had to come up with a number for "natural but related", it'd be orders of magnitude less probable than that.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2013-02-24T01:09:35.261Z · LW(p) · GW(p)

This was my thought process: Let's suppose that a "closest asteroid observed" record happens once every few years. Now suppose that a once-in-a-century fireball is going to happen. What are the odds that it will happen on the same day as a "closest asteroid observed" event, assuming that the latter are independently distributed with respect to fireball events? About 1 in 1000 (order of magnitude). And then I chose 1 in 500 as a representative probability that is greater than 1 in 1000, that's all - it didn't derive from any reasoning about orbital mechanics, it was chosen to illustrate the possibility that "natural but related" might be more probable than "natural but unrelated".
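
In code, the estimate is just this (assuming one record every ~3 years, as above):

```python
# Assume a "closest asteroid approach" record once every ~3 years.
records_per_day = 1 / (3 * 365.25)

# If fireballs and records are independent, the chance that a given
# once-in-a-century fireball lands on a record day is just that rate:
p_same_day = records_per_day
print(f"{p_same_day:.0e}")  # ~9e-04, i.e. roughly 1 in 1000
```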

But how could the "related" probability be that low? Well, the solar system contains all sorts of weird orbital resonances. Maybe there are multiple earth-crossing orbits which for dynamical reasons cross Earth's orbit at the same point, and on this occasion, there was debris in two of these orbits at the same time.

Replies from: ZankerH
comment by ZankerH · 2013-02-24T10:57:57.859Z · LW(p) · GW(p)

The Chelyabinsk meteorite wasn't anything like "once-in-a-century". Off the top of my head, here are a couple recorded a few years ago:

http://www.youtube.com/watch?v=pRtucs6D0KA

http://www.youtube.com/watch?v=8q3qWV4Ks3E

And that's just events in inhabited areas witnessed by people with a camera. Most of the Earth is either ocean or uninhabited wilderness, so it stands to reason that most such events will go unrecorded.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2013-02-24T11:54:02.071Z · LW(p) · GW(p)

The first video doesn't show a meteor; it shows trails from a plane, lit by the setting sun.

As for the second one... "The meteoroid ... is estimated to have been about the size of a desk and have had a mass of approximately 10 tonnes."

Compare to the Chelyabinsk meteor: "With an initial estimated mass of 10,000 tonnes, the Chelyabinsk meteor is the biggest object to have entered the Earth's atmosphere since the 1908 Tunguska event, and the only meteor known to have resulted in a large number of injuries."

comment by [deleted] · 2013-02-25T08:10:17.112Z · LW(p) · GW(p)

I don't know if this has been brought up around here before, but the B612 Foundation is planning to launch an infrared space telescope into a Venus-like orbit around 2017. It will be able to detect nearly every Earth-crossing rock larger than 150 meters wide, and a significant fraction of those down to around 30 meters. The infrared optics, looking outwards, make it much easier to see the warm rocks against the black of space without interference from the Sun, and would quickly increase the number of known near-Earth objects by two orders of magnitude.

This is exactly the mission I've been wishing / occasionally agitating for NASA to get off their behinds and do for five years. They've got the contract with Ball Aerospace to build the spacecraft and plan to launch on a Falcon 9 rocket. And they accept donations.

comment by Scott Alexander (Yvain) · 2013-02-27T05:07:56.039Z · LW(p) · GW(p)

The last Dungeons and Discourse campaign was very well-received here on Less Wrong, so I am formally announcing that another one is starting in a little while. Comment on this thread if you want to sign up.

comment by beoShaffer · 2013-02-20T04:17:36.438Z · LW(p) · GW(p)

I've seen several references to a theory that the English merchant class out-bred both the peasants and the nobles, with major societal implications (causing the Industrial Revolution), but now I can't find them. Does anyone know what I'm talking about?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-02-20T19:14:39.633Z · LW(p) · GW(p)

A Farewell to Alms by Gregory Clark.

Replies from: beoShaffer
comment by beoShaffer · 2013-02-20T19:25:18.114Z · LW(p) · GW(p)

Thank you.

comment by DaFranker · 2013-02-18T14:57:32.947Z · LW(p) · GW(p)

A bit of a meta question / possibly suggestion:

Has the idea of showing or counting karma-per-reader ratios been discussed before? The idea just occurred to me, but I'd rather not spend time thinking at length about it (I've not noticed any obvious disadvantages, so if you see some please tell me) if multiple other LWers have already discussed or thought about it.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-02-18T15:37:11.110Z · LW(p) · GW(p)

If you hover over a user's karma score, the ratio appears, and the same happens if you hover over the score for a post or comment. So far as I know, it's a feature which was added recently.

The next frontier would be a chart which showed how the karma ratio changed over time.

Replies from: DaFranker
comment by DaFranker · 2013-02-18T17:42:34.424Z · LW(p) · GW(p)

Oh, that's not what I meant.

I meant something along the lines of (upvotes + downvotes) / (markCommentAsRead hits). Perhaps with some fancy math to compare against voting rates per pageview and zero-vote "dummy views" and other noise factors.

Something to give a rough idea of whether that "4 points" comment is just something four statistical outliers found nice out of a hundred readers who mostly didn't vote on it, or that all four of four other participants in a thread found it useful.
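
A minimal sketch of the statistic I have in mind (the field names and the dummy-view correction are hypothetical, just for illustration):

```python
def engagement_ratio(upvotes, downvotes, reads, dummy_views=0):
    """Votes cast per reader who actually read the comment.

    `reads` stands in for a markCommentAsRead-style counter and
    `dummy_views` for estimated zero-intent views; both hypothetical.
    """
    effective_readers = max(reads - dummy_views, 1)
    return (upvotes + downvotes) / effective_readers

# A "4 points" comment: 4 voters out of 100 readers vs. 4 out of 4.
print(engagement_ratio(4, 0, 100))  # 0.04
print(engagement_ratio(4, 0, 4))    # 1.0
```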

I haven't yet thought through whether I would really prefer having this or not and/or whether it would be worth the trouble and effort of adding in the feature (lots of possible complications depending on the specific mechanism of "mark comment as read", which I haven't looked into either).

What I thought I was asking, specifically: Should I think about this more? Does it sound like a nice thing to have? Are there any obvious glaring downsides I haven't seen (other than implementation)? Does anyone already know whether implementation is feasible or not?

Apologies for the confusion, but thanks for the response! I really like the karma ratio feature that was added.

Replies from: shaih
comment by shaih · 2013-02-18T20:37:53.488Z · LW(p) · GW(p)

The first thing that came to mind is that it would only be possible to do this for the original post, because it would be nearly impossible to calculate how many of the readers read each comment. Further, if it were implemented, it would have to count one reader per username, or more specifically one reader per person who can vote. That way, if, let's say, I were to read an article but come back multiple times to read different comments, it would not skew the ratio.

As a side note, we could also implement a per-username ratio showing (posts read)/(posts voted on), so we would be able to see which users participate in voting at all. This, however, is nowhere near as useful to those who post as the original ratio, and it could have many possible downsides that I'm not going to take the time to think about because it will probably not be considered; but it is a fun idea.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-26T05:39:31.399Z · LW(p) · GW(p)

Deleted. Don't link to possible information hazards on Less Wrong without clear warning signs.

See, e.g., this comment for a justified user complaint. I don't care if you hold us all in contempt; please don't link to what some people think is a possible info hazard without clear warning signs that will be seen before the link is clicked. Treat it the same way you would goatse (warning: googling that will lead to an exceptionally disgusting image).

Replies from: wedrifid, army1987, shminux
comment by wedrifid · 2013-02-26T18:31:15.791Z · LW(p) · GW(p)

Deleted. Don't link to possible information hazards on Less Wrong without clear warning signs.

For example, this is the link that was in the now-deleted comment. I repeat it with the clear warning signs, and observe that Charlie Stross (the linked-to author) has updated his post so that it actually gives his analysis of the forbidden topic in question.

Warning: This link contains something defined as an Information Hazard by the lesswrong administrator. Do not follow it if this concerns you: Charlie Stross discusses Roko's Basilisk. On a similar note: You just lost the game.

I wanted the link to be available if necessary just so that it makes sense to people when I say that Charlie Stross doesn't know how decision theory works and his analysis is rubbish. Don't even bother unless you are interested in categorizing various kinds of ignorant comments on the internet.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-27T00:18:15.378Z · LW(p) · GW(p)

It'll do until we have a better standard warning.

Replies from: wedrifid
comment by wedrifid · 2013-02-27T04:52:03.547Z · LW(p) · GW(p)

It'll do until we have a better standard warning.

A standard warning would be good to have. It feels awkward trying to come up with a warning without knowing precisely what is to be warned about. In particular it isn't clear whether you would have a strong preference (ie. outright insistence) that the warning doesn't include specific detail that Roko's Basilisk is involved. After all, some would reason that just mentioning the concept brings it to mind and itself causes potential harm (ie. You've already made them lose the game).

Unfortunately (or perhaps fortunately) all such "Information Hazard" warnings are not going to be particularly ambiguous because there just aren't enough other things that are given that label.

comment by A1987dM (army1987) · 2013-02-26T17:31:36.264Z · LW(p) · GW(p)

Deleted.

Why delete such comments altogether, rather than edit them to rot-13 them and add a warning in the front?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-27T00:16:37.180Z · LW(p) · GW(p)

I can't edit comments.

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-27T11:10:35.445Z · LW(p) · GW(p)

Ah.

Replies from: pedanterrific
comment by pedanterrific · 2013-02-27T16:07:45.993Z · LW(p) · GW(p)

He can edit his own without leaving an * , for the record.

comment by shminux · 2013-02-26T05:46:22.761Z · LW(p) · GW(p)

Ok, thanks for this mental image of a goatselisk, man!

comment by lukeprog · 2013-02-22T03:48:15.388Z · LW(p) · GW(p)

My anecdata say that comments skew negative even for highly upvoted posts of mine. So, I wasn't surprised to see this.

comment by gwern · 2013-02-21T02:00:59.142Z · LW(p) · GW(p)

Working on my n-back meta-analysis again, I experienced a cute example of how prior information is always worth keeping in mind.

I was trying to incorporate the Chinese thesis Zhong 2011; not speaking Chinese, I've been relying on MrEmile to translate bits (thanks!), and I discovered tonight that I had used the wrong table. I couldn't access the live thesis version because the site was erroring, so I flipped to my screenshotted version... and I discovered that one line (the control group for the kids who trained 15 days) was cut off:

screenshot of the table of IQ scores

I needed the 2 numbers in the upper right hand corner (mean then standard deviation). What were they? I waited for the website to start working, but hours later I became desperate and began trying to guess the control group's values. After minute consideration of the few pixels left on the screen, I ventured that the true values were: 20.78 1.43.

I distracted myself unsplitting all the studies so I could look at single n-back versus dual n-back, and the site came back up! The true values had been: 23.78 1.48.

So I was wrong in just 2 digits. Guessing 43 vs 48 is not a big deal (the hundredth digit of the standard deviation isn't important), but I was chagrined to compare my 20 with the true 23. Why?

If you look at the image, you notice that the 3 immediately following means were 25, 24, 22; they were all means from people training 15-days as well. Knowing that, I should have inferred that the control group's mean was ~24 ((25+24+22)/3); you can tell that the bottom of the digit after 2 is rounded, so the digit must be 0, 3, 6, or 8 - but 0 and 8 are both very far from 24, and it's implausible that the control had the highest score (26), which leaves just '3' as the most likely guess.
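
In hindsight the inference is mechanical enough to script (the neighboring means and the candidate digits are the ones read off the table; the rest is just picking the nearest value):

```python
neighbor_means = [25, 24, 22]  # the three adjacent 15-day group means
candidates = [20, 23, 26, 28]  # rounded pixel bottom: digit is 0, 3, 6 or 8
expected = sum(neighbor_means) / len(neighbor_means)  # ~23.67

# Picking the candidate nearest the neighbor average singles out 23;
# the independent argument that a control group shouldn't top the table
# also rules out 26 and 28.
best = min(candidates, key=lambda c: abs(c - expected))
print(round(expected, 2), best)  # 23.67 23
```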

(I probably would've omitted the 15-day groups if the website had gone down permanently, but if I had gone with my guess, 20 vs 23 would've produced a very large effect size estimate and a definite distortion of the overall meta-analysis.)

comment by Zaine · 2013-02-19T18:18:40.052Z · LW(p) · GW(p)

「META」:Up-votes represent desirable contributions, and down-votes negative contributions. Once one amasses a large corpus of comments, noticing which of one's comments have been upvoted or down-voted becomes nontrivially difficult. It seems it would be incredibly difficult to code in a feature that helped one find those comments; on the off chance it isn't, consider it a useful feature.

Replies from: wedrifid
comment by wedrifid · 2013-02-19T18:25:31.140Z · LW(p) · GW(p)

「META」:Up-votes represent desirable contributions, and down-votes negative contributions. Once one amasses a large corpus of comments, noticing which of one's comments have been upvoted or down-voted becomes nontrivially difficult. It seems it would be incredibly difficult to code in a feature that helped one find those comments; on the off chance it isn't, consider it a useful feature.

Use Wei Dai's script. Use the 'sort by karma' feature.

comment by tgb · 2013-02-18T13:11:48.488Z · LW(p) · GW(p)

Link: Obama Seeking to Boost Study of Human Brain

It's still more-or-less rumors with little in the way of concrete plans. It would, at the least, be exciting to see funding of a US science project on the scale of the human genome project again.

comment by EvelynM · 2013-02-16T21:50:02.519Z · LW(p) · GW(p)

The date in the title is incorrect s/2003/2013/

Replies from: David_Gerard
comment by David_Gerard · 2013-02-16T23:54:11.043Z · LW(p) · GW(p)

D'OH! Fixed, at the slight expense of people's RSS feeds.

comment by wedrifid · 2013-03-02T08:38:04.290Z · LW(p) · GW(p)

I don't see why the "pattern matching" is invalid.

It is the things that tend to go with it that are the problem, such as failing to understand which facets are different and which are similar, and missing the most important part of the particular case because of distraction by thoughts relevant to a different scenario.

comment by Ritalin · 2013-02-28T21:57:50.342Z · LW(p) · GW(p)

What's wrong with embracing foreign cultures, uploadings, upliftings, and so on?

Maybe I am biased by my personal history, having embraced what, as far as I can tell, is the very cutting edge of Western culture (i.e. the Less Wrong brand of secular humanism), and feeling rather impatient for my origin cultures to follow a similar path, which they violently resist. Maybe I've got a huge blind spot of some other sort.

But when the Superhappies demand that we let them eradicate suffering forever, or when CelestAI offers us all our own personal paradise on the only condition that it be pony-flavoured, I don't just feel like I want to enthusiastically jump in, abandoning all caution. I feel like it's a moral imperative to take them up on their offer, and that getting in their way is a crime that is potentially on the same level as genocide or mass torture.

Yet in both stories these examples come from, and in the commentary by the authors, this is qualified as a Bad Thing... but I don't recall coming across an explanation that would satisfy me as to why.

Again, please warn me if I'm mixing things up here, as my purpose here is to correct any flaws that my stance may have, by consulting with minds that I expect will understand the problem better than I, and might see the flaws in how I frame it.

Replies from: CronoDAS
comment by CronoDAS · 2013-03-05T08:16:19.123Z · LW(p) · GW(p)

The thing about the Superhappies is that, well, people want to be able to be sad in certain situations. It's like Huxley's Brave New World - people are "happier" in that society, but they've sacrificed something fundamental to being human in the course of achieving that happiness. (Personally, I think that "not waiting the eight hours it would take to evacuate the system" isn't the right decision - the gap between the "compromise" position the Superhappies are offering and humans' actual values, when combined with the very real possibility that the Superhappies will indeed take more than eight hours to return in force, just doesn't seem big enough to make not waiting the right decision.)

And as for the story with CelestAI in it, as far as I can tell, what it's doing might not be perfect but it's close enough not to matter... at least, as long as we don't have to start worrying about the ethics of what it might do if it encounters aliens.

Replies from: Ritalin
comment by Ritalin · 2013-03-05T12:02:26.233Z · LW(p) · GW(p)

at least, as long as we don't have to start worrying about the ethics of what it might do if it encounters aliens.

Well, that is quite horriffic. Poor non-humanlike alien minds...

I don't think the SH's plan was anything like Huxley's BNW (which is about numbing people into docility). Saying pain should be maintained reminds me of that analogy Yudkowsky made about a world where people get truncheoned in the head daily, can't help it, keep making up reasons why getting truncheoned is full of benefits, but, if you ask someone outside of that culture if they want to start getting truncheoned in exchange for all those wonderful benefits...

comment by NancyLebovitz · 2013-02-24T16:42:40.473Z · LW(p) · GW(p)

An overview of political campaigns

Once a new president is in power, he forgets that voters who preferred him to the alternative did not necessarily comprehend or support all of his intentions. He believes his victory was due to his vision and goals; he underestimates how much the loss of credibility for the previous president helped him and overestimates how much his own party supports him.

Replies from: FiftyTwo
comment by FiftyTwo · 2013-03-01T01:12:09.139Z · LW(p) · GW(p)

My experience of dealing with members of political groups is they know exactly how mad and arbitrary the system is, but play along because they consider their goals important.

comment by niceguyanon · 2013-02-21T15:33:35.127Z · LW(p) · GW(p)

Is there a better way to look at someone's comment history, other than clicking next through pages of pages of recent comments? I would like to jump to someone's earliest posts.

Replies from: arundelo, Douglas_Knight
comment by Douglas_Knight · 2013-02-21T21:16:41.465Z · LW(p) · GW(p)

If you just want to jump to the beginning without loading all the comments, add ?count=100000&before=t1_1 to the overview page, like this. Comments imported from OB are out of order, in any event.

comment by D_Malik · 2013-02-19T08:12:22.635Z · LW(p) · GW(p)

I've been trying to correct my posture lately. Anyone have thoughts or advice on this?

Some things:

  • Advice from reddit; if you spend lots of time hunched over books or computers, this looks useful and here are pictures of stretches.

  • You can buy posture braces for like $15-$50. I couldn't find anything useful about their efficacy in my 5 minutes of searching, other than someone credible-sounding saying that they'd weaken your posture muscles (sounds reasonable) and one should thus do stretches instead.

  • Searching a certain blog, I found this which says that sitting at a 135-degree angle is better than sitting straight, and both are better than slouching. Elsewhere on the internet, some qualified person said that standing is better than all three.

  • At the moment I'm not sure that good-looking posture is healthier, but I'd guess it's worth it anyway because of signalling benefits. My current best guess for how to improve things is to use a standing desk and to give some form of reinforcement when I notice and correct my posture. And to sit as little as possible, and not in chairs. I may incorporate stretching, but only a little and in parallel with another activity because 15 minutes a day for like 3 months is a lot of time.

I could spend more time trying to figure this out, but I suspect others here might have already done that. If so, I'd be super happy if you'd post your conclusions, even if you don't take the time to say how you came to them.

Replies from: NancyLebovitz, Richard_Kennaway, JayDee, bogdanb, None, Qiaochu_Yuan, moridinamael, shaih
comment by NancyLebovitz · 2013-02-19T18:14:20.072Z · LW(p) · GW(p)

Do not try to consciously correct your posture. You don't know enough. Some evidence-- I tried it, and just gave myself backaches. I know other people who tried to correct their posture, and the results didn't seem to be a long run improvement.

Edited to add: I didn't mean that you personally don't know enough to correct your posture consciously, I meant that no one does. Bodies' ability to organize themselves well for movement is an ancient ability which involves fast, subtle changes to a complex system. It's not the kind of thing that your conscious mind is good at-- it's an ability that your body (including your brain) shares with small children and a lot of not-particularly-bright animals.

From A Tai Chi Imagery Workbook by Mellish:

Conscious muscular effort to straighten the spine, or alter its shape in some obvious way, generally recruits the long muscles on either side of the spine (the erector spinalis group). These muscles are strong, but because they run almost the whole length of the spine, they exercise only a very coarse control over its carriage.

He goes on to explain that the muscles which are appropriate for supporting and moving the spine are the multifidi, small muscles which only span one to three vertebrae, and aren't very available for direct conscious control.

A lot of back problems are the result of weak (too much support from larger muscles) or ignored (too little movement) multifidi.

He recommends working with various images, but says that the technique is to keep images in mind without actively trying to straighten your spine.

Replies from: D_Malik
comment by D_Malik · 2013-02-20T01:47:54.240Z · LW(p) · GW(p)

Thanks for the info, this looks really useful!

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-02-20T22:44:30.327Z · LW(p) · GW(p)

Mellish also said that serious study of tai chi was very good for his posture, and gave him tools for recovery when his posture deteriorates from too much time at the computer.

comment by Richard_Kennaway · 2013-02-21T13:08:03.899Z · LW(p) · GW(p)

My first thought is: what tells you that your current posture is bad, and what will tell you that it has improved?

comment by JayDee · 2013-02-20T11:19:38.918Z · LW(p) · GW(p)

My own posture improved once I took up singing. My theory is that I was focused on improving my vocal technique and that changes to my posture directly impacted on this. If I stood or held myself a certain way I could sing better, and the feedback I was getting on my singing ability propagated back and resulted in improved posture. Plus, singing was a lot of fun and with this connection pointed out to me - "your entire body is the instrument when singing, look after it" - my motivation to improve my posture was higher than ever.

That is more a description of how I got there than a set of conclusions. Hmm. You might consider trying to find something you value for which improved posture would be a necessary component, or something you want to do that will provide feedback about changes in your posture.

If you are like me, "I don't want to have bad posture anymore" may turn out to be insufficient motivation to get you there by itself.

comment by bogdanb · 2013-02-21T07:11:29.313Z · LW(p) · GW(p)

My posture improved significantly after I started climbing (specifically, indoor bouldering). This is of course a single data point, but "it stands to reason" that it should work at least for those people who come to like it.

Physical activity in general should improve posture (see Nancy's post), but as far as I can tell bouldering should be very effective at doing this:

First, because it requires you to perform a lot of varied movements in unusual equilibrium positions (basically, hanging and stretching at all sorts of angles), which few sports do (perhaps some kinds of yoga would also do that). At the beginning it's mostly the fingers and forearms that will get tired, but after a few sessions (depending on your starting physical condition) you'll start feeling tired in muscles you didn't know you had.

Second (and, in my case, most important), it's really fun. I tried all sorts of activities, from just "going to the gym" to swimming and jogging (all of which would help if done regularly), but I just couldn't keep motivated. With all of those I just get bored and my mind keeps focusing on how tired I am. Since I basically get only negative reinforcement, I stop going to those activities. Some team sports I could do, because the friendly competition and banter help me have fun, but it's pretty much impossible to get a group doing them regularly. In contrast, climbing awakens the child in me, and you can do indoor bouldering by yourself (assuming you have access to a suitable gym). I always badger friends into coming with me, since it's even more fun doing it with others (you have something to focus on while you're resting between problems), but I still have fun going by myself. (There are always much more advanced climbers around, and I find it awesome rather than discouraging to watch their moves, perhaps because it's not a competition.)

In my case, after a few weeks I simply noticed that I was standing straighter without any conscious effort to do so.


Actually, I think the main idea is not to pick a sport that's specifically better than others for posture. Just try them all until you find one you like enough to do regularly.

comment by [deleted] · 2013-02-19T16:27:04.742Z · LW(p) · GW(p)

If you are looking for a simpler routine (to ease habit-formation), reddit also spawned the starting stretching guide.

I haven't done serious research and think it is not worth the time. As this HN comment points out, the science of fitness is poor. The solution is probably a combination of exercise, stretching and an ergonomic workstation, which are healthy anyway.

Replies from: D_Malik
comment by D_Malik · 2013-02-20T01:49:23.454Z · LW(p) · GW(p)

Thanks for the links! I'll probably at least try regular stretching, so that guide looks useful.

comment by Qiaochu_Yuan · 2013-02-20T04:50:43.715Z · LW(p) · GW(p)

Have you taken a look at Better Movement? I think I heard Val talk about it in positive tones.

comment by moridinamael · 2013-02-19T16:01:28.789Z · LW(p) · GW(p)

For a period of time I was using the iPhone app Lift for habit-formation, and one of my habits was 'Good posture.' Having this statement in a place where I looked multiple times a day maintained my awareness of this goal and I ended up sitting and walking around with much better posture.

However, I stopped using Lift and my posture seems to have reverted.

comment by shaih · 2013-02-19T08:24:49.872Z · LW(p) · GW(p)

I found that going to the gym for about half an hour a day improved my posture. Whether this is from stronger muscles that help with posture or simply from increased self-esteem, I do not know, but it definitely helped.

comment by fubarobfusco · 2013-03-05T07:06:11.482Z · LW(p) · GW(p)

Look, if this gets into metafictional causality violation, there's gonna be hell to pay.

comment by NancyLebovitz · 2013-02-24T16:49:00.165Z · LW(p) · GW(p)

A rationalist, mathematical love song

I got a totally average woman stands about 5’3”
I got a totally average woman she weighs about 153
Yeah she’s a mean, mean woman by that I mean statistically mean
Y’know average

comment by Kawoomba · 2013-03-07T09:19:20.367Z · LW(p) · GW(p)

I just watched Neil Tyson, revered by many, on The Daily Show answer the question "What do you think is the biggest threat to mankind's existence?" with "The threat of asteroids."

Now, there is a much better case to be made for the danger from future AI as an x-risk than for asteroids; by the transitive property, Neil Tyson's beliefs would pattern-match to Xenu even better than MIRI's beliefs do - a fiery death from the sky.

Yet we give NdGT the benefit of the doubt as to how he came to his conclusion; why don't you do the same with MIRI?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-03-07T10:00:02.435Z · LW(p) · GW(p)

Because the asteroid threat is real, and has caused mass extinction events before. Probably more than once. AI takeoff may or may not be a real threat, and likely isn't even possible. There is a qualitative difference between these two.

Also: MIRI has a financial incentive to lie and/or exaggerate the threat; Tyson does not. Someone might think the AI threat is just a scam MIRI folks use to pump impressionable youngsters for cash.

Replies from: Kawoomba
comment by Kawoomba · 2013-03-07T10:09:16.442Z · LW(p) · GW(p)

Time-scales involved, in a nutshell. What is the chance that there is an extinction level event from asteroids while we still have all our eggs in one basket (on earth), compared to e.g. threats from AI, bioengineering etcetera?

X-risk asteroid impacts every few tens of millions of years come out to a low probability per century, especially considering that e.g. the impact that caused the demise of the dinosaurs wouldn't even be a true x-risk for humans.

I'd agree that the error bars on the estimated asteroid x-risk probabilities are smaller than the ones on the estimated x-risk from e.g. AI, but even a small chance of the AI x-risk would beat out the minuscule one from asteroids, don't you think?
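
To make that concrete, here is a minimal back-of-the-envelope sketch; every number in it is an illustrative assumption of mine, not an established estimate:

    # Back-of-the-envelope x-risk comparison. All numbers are assumptions
    # chosen for illustration, not established estimates.
    asteroid_rate = 1.0 / 5e7   # assume one extinction-level impact per ~50M years
    p_asteroid = 1 - (1 - asteroid_rate) ** 100   # probability over one century

    p_ai = 0.01   # assume even a "small" 1% chance of AI x-risk this century

    print("P(asteroid x-risk this century) ~ %.1e" % p_asteroid)   # ~2.0e-06
    print("P(AI x-risk this century)       ~ %.1e" % p_ai)
    print("ratio ~ %d" % round(p_ai / p_asteroid))                 # ~5000

Even if you discount the AI number by an order of magnitude or two for model uncertainty, it still dominates on these assumptions.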

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-03-07T10:15:27.276Z · LW(p) · GW(p)

Sorry, you asked "why one might." I gave two reasons: (a) actual direct evidence of threat in one case vs absence in another, and (b) incentives to lie. There are certainly reasons in favor of the AI takeoff threat, but that was not your question :). I think you are trying to have an argument with me I did not come here to have.


In case I was not clear, regardless of the actual state of probabilities on the ground, the difference between asteroids and takeoff AI is PR. Think of it from a typical person's point of view. Tyson is a respected physicist with no direct financial stake in how threats are evaluated, taking seriously a known existing threat which had already reshaped our biosphere more than once. EY is some sort of internet cult leader? Whose claim to fame is a fan fic? And who relies on people taking his pet threat seriously for his livelihood? And it's not clear the threat is even real?

Who do you think people will believe?

Replies from: Kawoomba
comment by Kawoomba · 2013-03-07T10:24:04.715Z · LW(p) · GW(p)

I think I replied before reading your edit, sorry about that.

I'd say that Tyson does have incentives for popularizing a threat that's right up his alley as an astrophysicist, though maybe not to the same degree as MIRIans. However, assuming the latter may be uncharitable, since people joined MIRI before they had that incentive. If the financial incentive had played a crucial part, that dedication of their professional lives to AI as an x-risk wouldn't have happened.

As for "(AI takeoff) likely isn't possible", even if you throw that into your probability calculation, it may (in my opinion will) still beat out a "certain threat but with a very low probability".

Thanks for your thoughts, upvotes all around :)

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-03-07T10:55:01.247Z · LW(p) · GW(p)

However, assuming the latter may be uncharitable, since people joined MIRI before they had that incentive.

I don't think appeals to charity are valid here. Let's imagine some known obvious cult, like Scientology. Hubbard said: "You don't get rich writing science fiction. If you want to get rich, you start a religion." So he declared what he was doing right away -- however, folks who joined, including perhaps even Mr. Miscavige himself*, may well have had good intentions. Perhaps they wanted to "Clear the planet" or whatever. But so what? Once Miscavige got into the situation with appropriate incentives, he happily went crooked.

Regardless of why people joined MIRI, they have incentives to be crooked now.

*: apparently Miscavige was born into Scientology. "You reap what you sow."


To be clear -- I am not accusing them of being crooked. They seem like earnest people. I am merely explaining why they have a perception problem in a way that Tyson does not. Tyson is a well-known personality who makes money partly from his research gigs, and partly from speaking engagements. He has an honorary doctorate list half a page long. I am sure existential threats are one of his topics, but he will happily survive without asteroids.

comment by jbeshir · 2013-03-06T17:12:10.904Z · LW(p) · GW(p)

The pattern matching's conclusions are wrong because the information it is matching on is misleading. The article implied that there was widespread belief that a future AI should be assisted, and this was wrong. Last I looked, it still incorrectly implied widespread support for other beliefs.

This isn't an indictment of pattern matching so much as a need for the information to be corrected.

comment by [deleted] · 2013-03-05T12:36:14.426Z · LW(p) · GW(p)

I didn't even say anything remotely close to that, and you know it.

comment by Ritalin · 2013-02-28T19:53:21.746Z · LW(p) · GW(p)

This article got me rather curious

Extracts:

AS PROTESTS against financial power sweep the world this week, science may have confirmed the protesters' worst fears. An analysis of the relationships between 43,000 transnational corporations has identified a relatively small group of companies, mainly banks, with disproportionate power over the global economy.

"Reality is so complex, we must move away from dogma, whether it's conspiracy theories or free-market," says James Glattfelder. "Our analysis is reality-based."

Now that's the kind of rationalist spirit that I like to see.

Concentration of power is not good or bad in itself, says the Zurich team, but the core's tight interconnections could be. As the world learned in 2008, such networks are unstable. "If one [company] suffers distress," says Glattfelder, "this propagates."

I was under the impression that there was ample evidence that concentration of power is a risk factor in and of itself, at least when it comes to humans. Lord Acton's "Power tends to corrupt, and absolute power corrupts absolutely. Great men are almost always bad men, even when they exercise influence and not authority, still more when you superadd the tendency or the certainty of corruption by authority. There is no worse heresy than that the office sanctifies the holder of it." tirade seems so uncontroversial as to practically be an Applause Light. Why would bankers be an exception?

Crucially, by identifying the architecture of global economic power, the analysis could help make it more stable. By finding the vulnerable aspects of the system, economists can suggest measures to prevent future collapses spreading through the entire economy.

Newcomers to any network connect preferentially to highly connected members. TNCs buy shares in each other for business reasons, not for world domination. If connectedness clusters, so does wealth.

Well, people have been warning about the natural and spontaneous concentration of capital for quite a while now, and I am thrilled to see a mathematical model free of political baggage taking a stab at stating the problem in non-controversial terms. A step towards rationalizing the Dismal Science of Economics?
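
Incidentally, the "newcomers connect preferentially to highly connected members" dynamic is easy to simulate. A minimal sketch, with parameters chosen arbitrarily and purely for illustration:

    # Minimal preferential-attachment simulation (Barabasi-Albert style).
    # Parameters are arbitrary; this only shows the qualitative effect.
    import random

    degrees = [1, 1]   # start with two connected nodes
    for newcomer in range(2, 1000):
        # attach to an existing node with probability proportional to its degree
        target = random.choices(range(len(degrees)), weights=degrees)[0]
        degrees[target] += 1
        degrees.append(1)

    degrees.sort(reverse=True)
    share = sum(degrees[:10]) / float(sum(degrees))
    print("top 1%% of nodes hold %.0f%% of all connections" % (share * 100))

Even starting from perfect equality, the most-connected nodes end up with a disproportionate share of the links - no world-domination motive required, which seems to be Glattfelder's point.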

Replies from: Larks
comment by Larks · 2013-03-06T18:13:18.787Z · LW(p) · GW(p)

This is silly. Of course asset managers own other companies - that's what their job is. They don't own them for themselves though - they own them on behalf of pension funds, insurance funds, etc., who in turn own them on behalf of individuals. This doesn't mean there isn't plenty of competition though - if I'm a PM at Fidelity, and I own Vanguard stock, I still want clients to come to me rather than Vanguard. Capital accumulation is the phenomenon of individuals or institutions coming to own more and more for their own ends, not as a mere intermediary. You might as well accuse FedEx of being dangerously connected.

Replies from: Ritalin
comment by Ritalin · 2013-03-06T21:35:35.713Z · LW(p) · GW(p)

Saying "this is obvious", "you should know this already", or "how dumb can you get, really?" is not a constructive approach to informing the ignorant of their mistakes and helping them update. The same goes for "of course X does Y, it's their job/it's what they do", with the implication that, because it's their chosen function, it's a function worth doing. Especially since, here, I'm not too sure what mistake you are pointing out.

Nevertheless, if you're going to lecture me on economics, please go ahead, because I have a couple of questions, and I feel disquiet and anguish about these topics, and if you could reassure me that all is well, I would be thankful:

  • "if I'm a PM at Fidelity, and I own Vanguard stock, I still want clients to come to me rather than Vanguard." I cannot make sense of this. Why own Vanguard stock in the first place? How can I go all out competing with another company, if I have stakes in it? What happens when they go bankrupt? Is it good for me? Is it bad?
  • asset managers own other companies: do you think it would be a bad thing if the "other companies" that they could legally own parts of excluded other asset managers?
  • they own them on behalf of pension funds, insurance funds, etc., who in turn own them on behalf of individuals: I guess what I'm uncomfortable with here is that the degree of interconnection leads to both a dilution of responsibility ("Who knows who negotiates what in the name of whom anymore?" "Are my savings being somehow invested in child labour somewhere down the line, and how could I know?") and an increase in fragility (Instead of the network compensating for and buffering any huge failures, they propagate and damage the entire system).
  • does it really matter if wealth is concentrated in a nebulous spontaneously-formed conglomerate as opposed to the pockets of The Man or The Omniscient Council Of Pennybags or any individual? Isn't "being an intermediary as an end in itself" a bit of a problematic role?
Replies from: Larks
comment by Larks · 2013-03-07T12:00:57.365Z · LW(p) · GW(p)

If you'd like to learn economics, I'd recommend reading economics blogs, textbooks, or The Economist, rather than New Scientist; the latter, while good at many things, is sensationalist and very bad at economics.

Why own Vanguard stock in the first place?

Because you think Vanguard is going to do well.

How can I go all out competing with another company, if I have stakes in it?

You compete with them by competing with them for customers. For each individual customer, you prefer they come to you rather than Vanguard. If possible, you'd like to persuade every single Vanguard customer to come to you (though this would never happen), even though this would cause Vanguard to go bankrupt; Vanguard's not going to be more than 1% of your portfolio,* you get the full benefit of each new client, and only a small part of Vanguard's loss. And you could always sell your Vanguard stock.
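
A toy calculation may make the asymmetry clearer. Every figure below is made up purely for illustration:

    # Toy numbers only: why poaching a Vanguard client is still worth it
    # even if you hold some Vanguard stock.
    my_stake_in_vanguard = 0.001        # assume I own 0.1% of Vanguard
    fee_value_per_client = 10000.0      # assume a new client is worth $10k to me
    vanguard_loss_per_client = 12000.0  # assume Vanguard loses a similar amount

    my_share_of_their_loss = my_stake_in_vanguard * vanguard_loss_per_client
    print("net gain per poached client: $%.0f"
          % (fee_value_per_client - my_share_of_their_loss))   # ~$9988

The gain from the client is yours in full; the loss to Vanguard reaches you only in proportion to your tiny stake.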

do you think it would be a bad thing if the "other companies" that they could legally own parts of excluded other asset managers?

Assuming we think asset managers perform a useful service, it's good that they're able to access capital markets to fund growth. But basically the only way to access equity capital markets is to allow other asset managers to own you. If you didn't, there wouldn't be many potential buyers, and even those who could would be put off by the fact that they'd have trouble buying you. You'd suffer from the same problems which infect small, closed stock markets.

I do think it's weird that stockbrokers put out reports on other stockbrokers, but I also don't see how this could be avoided.

Are my savings being somehow invested in child labour somewhere down the line, and how could I know?

There are funds which will do that for you - there are at least a dozen ethical investment funds which avoid alcohol, tobacco, companies with union problems, etc.

However, opacity has little to do with interconnectedness. Even if there was no central cluster of asset managers who own each other, it'd still be hard to see who was working with whom, and who was the ultimate beneficiary of your funds.

On the whole though, I wouldn't worry. If you don't invest in ChildCorp, someone else will - there are also sin funds which invest in alcohol, tobacco etc. You should try to make the world better with your spending, not your investment, because of the fungibility effects.

(Instead of the network compensating for and buffering any huge failures, they propagate and damage the entire system).

I'm not sure why you think this shows that. Fidelity doesn't own most of Vanguard, nor is Vanguard most of Fidelity's holdings. If the latter goes bankrupt tomorrow, Fidelity won't, no more than the bankruptcy of any other large company would kill it. The sorts of interconnectedness worries we saw in '07-'08 are due to access to overnight borrowing markets and so on - debt rather than equity, and banks rather than asset managers.

In the extreme, I guess we would be more safe from financial contagion if we were all subsistence farmers.

does it really matter if wealth is concentrated in a nebulous spontaneously-formed conglomerate as opposed to the pockets of The Man or The Omniscient Council Of Pennybags or any individual?

Would it matter if FedEx carries everyone's paycheques? It wouldn't mean they were the only employer. Asset managers don't have free rein to do whatever they want with the companies they own; they need to generate returns. Many don't interfere with the corporate governance of the companies they own at all.

If you're looking for worrying concentrations of power, I think states, or international entities, as the actual monopolists on violence, are far more concerning.

*Ok, so Fidelity might have some PMs who are very overweight Vanguard. But it'll also have some who are shorting Vanguard, so on average it'll be neutral Vanguard.

Replies from: Ritalin
comment by Ritalin · 2013-03-07T12:44:52.508Z · LW(p) · GW(p)

"the sorts of interconnectedness worries we saw in '07-'08 are due to access to overnight borrowing markets and soon - debt rather than equity, and banks rather than asset managers."

What are those? Also, I thought it was banks managing assets?

" I guess we would be more safe from financial contagion if we were all subsistence farmers."

This is silly; "safe sex is abstinence"? Not to mention false, in case of actual crop epidemics. Please don't strawman. I'm asking about buffer mechanisms. Protectionism is one such mechanism, although it is getting rather deprecated.

"I think states, or international entities, as the actual monopolists on violence, are far more concerning."

How so?

"because the of the fungability effects"

What are these effects?

Replies from: Larks
comment by Larks · 2013-03-07T17:48:45.943Z · LW(p) · GW(p)

What are those?

I borrow some money on the overnight market to fund some activity of mine. For years, I've always been able to do this, so I come to rely on this cheap, short-term funding. Then, one day, trust breaks down and people aren't willing to lend to me anymore, so I have to stop doing whatever it is I was doing - probably some form of lending to someone else.

A similar issue is with aggressive deleveraging. If I've levered up a lot (used a lot of debt to fund a transaction), small losses can wipe me out and force me to close the transaction prematurely. This'll harm others doing similar trades, making them delever, and so on.

This is about very short term deals. If I buy some shares intending on selling them in the morning, I'm not influencing any control over the business.
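
To sketch the deleveraging arithmetic (the 10x leverage figure is just an assumed example, not a claim about any real fund):

    # Why leverage turns small asset losses into forced selling.
    equity = 10.0      # my own capital
    position = 100.0   # funded with 90 of overnight debt, i.e. 10x leverage

    for loss in (0.01, 0.05, 0.10):
        remaining = equity - position * loss
        print("%3.0f%% asset loss -> equity %5.1f" % (loss * 100, remaining))
    # At 10x leverage a mere 10% fall in asset prices wipes out all equity,
    # forcing the position to be closed, which pushes prices down further.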

Also, I thought it was banks managing assets?

Banks might have asset management wings, but they're different things. Banks are sell-side, asset managers are buy-side. The terminology is confusing, yes.

How so? [are states more concerning]

There's little ability to exit, they blatantly try and form cartels (e.g. attacking tax havens), they regularly and credibly use/threaten lethal force...

What are these effects? [fungibility]

If there's a good return available from investing in, say Philip Morris, and you abstain on ethical grounds, someone else will invest instead. You probably haven't actually reduced the amount of funding they get; only changed who gets the return.

Replies from: Ritalin
comment by Ritalin · 2013-03-09T20:05:41.205Z · LW(p) · GW(p)

If there's a good return available from investing in, say Philip Morris, and you abstain on ethical grounds, someone else will invest instead. You probably haven't actually reduced the amount of funding they get; only changed who gets the return.

So boycott is useless. What actual alternatives are there to stop ChildCorp from childcorping?

Replies from: Larks
comment by Larks · 2013-03-10T15:05:22.423Z · LW(p) · GW(p)

Boycotting their goods could work. Or you could offer their child workers better alternatives.

However, it's important to note that just because one thing (not buying their stock) doesn't stop them, it doesn't follow that something else works better. It might just be that there is no way of doing it, at least without violating other moral norms.

Replies from: Ritalin
comment by Ritalin · 2013-03-10T21:36:14.515Z · LW(p) · GW(p)

at least without violating other moral norms.

Such as?

Replies from: wedrifid
comment by wedrifid · 2013-03-10T21:42:29.115Z · LW(p) · GW(p)

Such as?

It is a situation that relates to exercising power to change the behavior of others with non-negligible power. Larks has a reasonable expectation that you could fill in the blanks yourself and took the tactful option of not saying anything that could be twisted to make it appear that Larks was advocating various sorts of violence or corruption.

(eg. Stab them to death with Hufflepuff bones.)

Replies from: Ritalin
comment by Ritalin · 2013-03-11T09:32:19.690Z · LW(p) · GW(p)

I am fairly certain that, between the ineffectual consumer and investor boycotts, and calling Frank Castle, there must be an entire spectrum of actions, only a fraction of which involve "violence" or "corruption". Because of my ignorance and lack of creativity, I do not know them, but I see no reason to believe they don't exist.

Of course, this is motivated continuing on my part: I think of sweatshops, workhouses, and modern-day slavery, and I feel compelled to make it stop. Telling me "there are no solutions, that I know of, that are both moral and effectual" won't result in me just sitting down and saying "ah, then it can't be helped".

comment by chemotaxis101 · 2013-02-25T19:24:40.722Z · LW(p) · GW(p)

Just sharing an unpretentious but (IMO) interesting post from a blog I regularly read.

In commenting on an article about the results of an experiment aimed at "simulating" a specific case of traumatic brain injury and measuring its supposed effects on solving a particularly difficult problem, economist/game theorist Jeffrey Ely asked whether a successful intervention could ever be designed to give people certain unusual, circumstantially useful skills.

It could be that we have a system that takes in tons of sensory information all of which is potentially available to us at a conscious level but in practice is finely filtered for just the most relevant details. While the optimal level of detail might vary with the circumstances the fineness of the filter could have been selected for the average case. That's the second best optimum if it is too complex a problem to vary the level of detail according to circumstances. If so, then artificial intervention could improve on the second-best by suppressing the filter at chosen times.

Any thoughts?

Replies from: gwern
comment by gwern · 2013-02-25T20:03:49.688Z · LW(p) · GW(p)

Isn't TMS famous for 'inducing savant-like abilities'?

comment by Sly · 2013-02-22T06:08:10.121Z · LW(p) · GW(p)

So I was planning on doing the AI gatekeeper game as discussed in a previous thread.

My one stipulation as Gatekeeper was that I could release the logs after the game; however, my opponent basically backed out after we had barely started.

Is it worth releasing the logs still, even though the game did not finish?

Ideally I could get some other AI player to face me; that way I'd have more logs to release. I will give you up to two hours on Skype, IRC, or some other easy method of communication. I estimate a resounding victory for myself with 99%+ probability. We can put karma, small money, or nothing on the line.

Is anyone up for this?

Replies from: Spectral_Dragon
comment by Spectral_Dragon · 2013-02-22T20:23:35.524Z · LW(p) · GW(p)

I've contemplated testing it, a few times. If you do not mind facing a complete newb, I might be up for it, given some preparation and discussion beforehand. Just PM me and we can discuss it.

comment by insufferablejake · 2013-02-21T07:27:58.975Z · LW(p) · GW(p)

I enjoy your posts, and I have been a consumer of your G+ posts and your blog for some time now, even though I don't comment much and just lurk about. While I would welcome some sort of syndication of your stuff, I am wondering whether the external expectation of meeting a monthly compilation target, or the knowledge that there is now a definite large audience for your posts, will affect their quality. I realize that there is likely no answer possible for this beforehand, but I'd like to know if you've considered it.

comment by Tenoke · 2013-02-18T20:21:31.253Z · LW(p) · GW(p)

Are there any good recommendations for documentaries?

Replies from: None, JayDee, diegocaleiro
comment by [deleted] · 2013-02-19T05:59:07.086Z · LW(p) · GW(p)

Cosmos is a perennial favorite.

The Human Animal.

Inside Job (about the financial crisis and the sheer amount of fraud that has gone unprosecuted).

Crash Course: Biology and Ecology on youtube is something I would recommend to a lot of people.

comment by JayDee · 2013-02-20T11:30:25.776Z · LW(p) · GW(p)

I watched "Century of the Self" based on the recommendation in this post, point 14.

14. Avoid consumerism. [...] One way to start deprogramming is by watching this documentary about the deliberate invention of consumerism by Edward Bernays.

I second the recommendation, although I will say I found the music direction to be hilariously biased; there was clear good guy and bad guy music. I found the narrative it presents eye-opening and was inspired to research a bunch of things further (always a good sign for a documentary, in my opinion.)

comment by diegocaleiro · 2013-02-19T00:52:47.414Z · LW(p) · GW(p)

Frozen Planet, Life, Winged Migration; and find the documentary where the snail (yes, THE snail) reproduces - it must be great. I haven't seen structured lists here.

comment by [deleted] · 2013-02-16T10:40:00.508Z · LW(p) · GW(p)

I was reading Buss' Evolutionary Psychology, and came across a passage on ethical reasoning, status, and the cognitive bias associated with the Wason Selection Task. The quote:

Cummins (1998) marshals several forms of evidence to support the dominance theory. The first pertains to the early emergence in a child's life of reasoning about rights and obligations, called deontic reasoning. Deontic reasoning is reasoning about what a person is permitted, obligated, or forbidden to do (e.g., Am I old enough to be allowed to drink alcoholic beverages?). This form of reasoning contrasts with indicative reasoning, which is reasoning about what is true or false (e.g., Is there really a tiger hiding behind that tree?). A number of studies find that when humans reason about deontic rules, they spontaneously adopt a strategy of seeking rule violators. For example, when evaluating the deontic rule "all those who drink alcohol must be twenty-one years old or older," people spontaneously look for others with alcoholic drinks in their hands who might be underage. In marked contrast, when people evaluate indicative rules, they spontaneously look for confirming instances of the rule. For example, when evaluating the indicative rule "all polar bears have white fur," people spontaneously look for instances of white-furred polar bears rather than instances of bears that might not have white fur. In short, people adopt two different reasoning strategies, depending on whether they are evaluating a deontic or an indicative rule. For deontic rules, people seek out rule violations; for indicative rules, people seek out instances that conform to the rule. These distinct forms of reasoning have been documented in children as young as three years old, suggesting that they emerge reliably early in life (Cummins, 1998). Perhaps not coincidentally, at age three, children organize themselves into transitive dominance hierarchies (that is, hierarchies in which if A is dominant to B and B is dominant to C, then A is dominant to C). Moreover, young children can also reason about transitive dominance hierarchies earlier in life than they can reason transitively about other stimuli (Cummins, 1998).

That is from Chapter 12 Status, Prestige, and Social Dominance, page 366 of the Third Edition. The paper that Buss is quoting is Cummins' Social Norms and other minds: The evolutionary roots of higher cognition. The PDF of the paper is here, and the discussion of deontic reasoning starts at the bottom of page 39.

edit: I am posting it here, because I had seen discussion of the Wason task from a confirmation bias viewpoint, but had not seen the behavioral ethics viewpoint fully explained before.
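
As a toy illustration of the two search strategies the quote describes (the data here is made up for the example):

    # Deontic vs. indicative search strategies over the same hypothetical data.
    people = [("beer", 17), ("cola", 16), ("beer", 30), ("cola", 45)]

    # Deontic rule "drinkers must be 21+": people spontaneously seek VIOLATORS.
    violators = [p for p in people if p[0] == "beer" and p[1] < 21]

    # Indicative rule "all beer drinkers are 21+": people seek CONFIRMING cases.
    confirming = [p for p in people if p[0] == "beer" and p[1] >= 21]

    print("violators: ", violators)    # [('beer', 17)]
    print("confirming:", confirming)   # [('beer', 30)]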

comment by Kawoomba · 2013-03-11T14:33:28.344Z · LW(p) · GW(p)

Of course, you're awesome and extremely rational and everything

Awww thanks!

Replies from: private_messaging
comment by private_messaging · 2013-03-11T14:34:00.762Z · LW(p) · GW(p)

Was a plural you, too :).

comment by jbeshir · 2013-03-07T14:34:26.838Z · LW(p) · GW(p)

Assuming that by "it" you refer to the decision theory work, the claim that UFAI is a threat, and the Many Worlds Interpretation - things they actually have endorsed in some fashion - it would be fair enough to talk about how the administrators have posted those things and described them as conclusions of the content, but it should accurately convey that that was the extent of the "pushing". Written from a neutral point of view with the beliefs accurately represented, informing people that the community's "leaders" have posted arguments for some unusual beliefs (which readers are entitled to judge as they wish) as part of the content would be perfectly reasonable.

It would also be reasonable to talk about the extent to which atheism is implicitly pushed in stronger fashion; theism is treated as assumed wrong in examples around the place, not constantly but to a much greater degree. I vaguely recall that the community has non-theists as a strong majority.

The problem is that this is simply not what the articles say. The articles strongly imply that the more unusual beliefs posted above are widely accepted - not that they are posted in the content but that they are believed by Less Wrong members, part of the identity of someone who is a Less Wrong user. This is simply wrong. And the difference is significant; it is incorrectly accusing all people interested in the works of a writer of being proponents of that writer's most unusual beliefs, discussed only in a small portion of their total writings. And this should be fixed so they convey an accurate impression.

The Scientology comparison is misleading in that Scientology attempts to use cult practices to achieve homogeneity of beliefs, whereas Less Wrong does not - the poll solidly demonstrates that homogeneity of beliefs is not a thing which is happening. A better analogy would be a community of fans of the works of a philosopher who wrote a lot of stuff and came to some outlandish conclusions in parts, but the fans don't largely believe that outlandish stuff. Yeah, their outlandish stuff is worth discussing - but presenting it as the belief of the community is wrong even if the philosopher alleges it all fits together. Having an accurate belief here matters, because it has greatly different consequences. There are major practical differences in how useful you'd expect the rest of the content to be, and how you'd perceive members of the community.

At present, many of the articles are written as "smear pieces" against Less Wrong's community. As a clear and egregious example, they allege the community is "libertarian" - clearly a shot at LW given RW's readerbase - when surveys tell us that the most common political affiliation is "liberalism", and while "libertarianism" is second, "socialism" is third. They do this while citing one of the surveys in the same article. Many of the problems here are not subtle.

If by "it" you meant the evil AI from the future thing, it most certainly is not "the belief pushed by the organization running this place"; any reasonable definition of "pushing" something would have to mean communicating it to people and attempting to convince them of it, and if anything they're credibly trying to stop people from learning about it. There are no secret "higher levels" of Less Wrong content only shown to the "prepared", no private venues conveying it to members as they become ready, so we can be fairly certain given publicly visible evidence that they aren't communicating it or endorsing it as a belief to even 'selected' members.

It doesn't obviously follow from anything posted on Less Wrong, it requires putting a whole bunch of parts together and assuming it is true.

comment by knb · 2013-02-23T11:52:27.201Z · LW(p) · GW(p)

I was wondering about the LW consensus regarding molecular nanotechnology. Here's a little poll:

How many years do you think it will take until molecular nanotechnology comes into existence? [pollid:417]

What is the probability that molecular nanotechnology will be developed before superhuman Artificial General Intelligence? [pollid:418]

Replies from: None, PECOS-9, NancyLebovitz, David_Gerard
comment by [deleted] · 2013-02-24T02:04:39.993Z · LW(p) · GW(p)

Not sure how to vote indicating that 'molecular nanotechnology' is not a useful or sufficiently specific term, and that biology shows us the sorts of things that are actually possible (very versatile and useful, but very unlike the hilarious Drexler stuff you hear about now and then)...

Replies from: knb
comment by knb · 2013-02-24T03:22:09.638Z · LW(p) · GW(p)

Molecular nanotechnology is a defined term that most people on Less Wrong understand. I'm not going to write out paragraphs to explain the concept of MNT. If you want to familiarize yourself with the idea, then you can follow the links on the wiki link I posted.

comment by PECOS-9 · 2013-02-23T17:44:53.163Z · LW(p) · GW(p)

How many years do you think it will take until molecular nanotechnology comes into existence?

With what probability? Do you want the point where we think there's a 50% probability it comes sooner and a 50% probability it comes later, or 95/5?

Replies from: knb
comment by knb · 2013-02-23T20:32:06.414Z · LW(p) · GW(p)

I'm hoping to benefit from the wisdom of crowds, so don't skew your answer in either direction.

Replies from: PECOS-9
comment by PECOS-9 · 2013-02-23T21:11:20.525Z · LW(p) · GW(p)

Does that mean you want the 50/50 estimate?

comment by NancyLebovitz · 2013-02-23T17:29:28.966Z · LW(p) · GW(p)

Is there a way to see the results without voting? I don't have a strong opinion about molecular nanotechnology.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-02-24T16:38:31.201Z · LW(p) · GW(p)

I have a weak opinion about molecular nanotechnology vs superhuman AGI. Superhuman AGI probably requires extraordinary insights, while molecular nanotechnology is closer to a lot of grinding. However, this doesn't give me a time frame.

I find it interesting that you have superhuman AGI rather than the more usual formulations-- I'm taking that to mean an AGI which doesn't necessarily self-improve.

comment by David_Gerard · 2013-02-25T00:59:58.455Z · LW(p) · GW(p)

It won't let me enter a number that says "Drexlerian MNT defies physics". What's the maximum number of years I can put in?

Replies from: knb, Decius
comment by knb · 2013-02-25T09:04:26.935Z · LW(p) · GW(p)

You were absolutely eviscerated in the comments there. Thanks for posting.

Replies from: None
comment by [deleted] · 2013-02-25T15:32:07.130Z · LW(p) · GW(p)

You have an interesting definition of "absolutely eviscerated." MOB mostly just seems to be tossing teacher's passwords like they were bladed frisbees.

Replies from: None
comment by [deleted] · 2013-02-27T15:00:18.442Z · LW(p) · GW(p)

I think MOB is justly frustrated with others' multiple logical failures and DG's complete unwillingness to engage.

Replies from: David_Gerard
comment by David_Gerard · 2013-02-27T18:02:40.797Z · LW(p) · GW(p)

I didn't write the post, Armondikov (a postdoc chemist) did, and he engaged at length.

Replies from: None
comment by [deleted] · 2013-02-27T18:13:32.550Z · LW(p) · GW(p)

You responded at DH0, which certainly didn't help an already inflamed situation. That comment thread is what I was referring to, and also why I wrote "others' multiple logical failures" to refer to Armondikov's "it's impossible until somebody builds it" argument.

comment by Decius · 2013-02-25T10:13:57.594Z · LW(p) · GW(p)

Do you address the possibility of complex self-replicating proteins with complex behavior? It looks like the only thing addressed in the article is traditional robots scaled down to molecule size, and it (correctly) points out that that won't work.

comment by Kawoomba · 2013-02-19T18:21:44.557Z · LW(p) · GW(p)

Omega appears and makes you the arbiter over life and death. Refuse, and everybody dies.

The task is this: You are presented with n (say, 1000) individuals and have to select a certain number who are to survive.

You can query Omega for their IQ, their life story and most anything that comes to mind, you cannot meet them in person. You know none of them personally.

You cannot base your decision on their expected life span. (Omega matches them in life expectancy brackets.)

You also cannot base your decision on their expected charitable donations, or a proxy thereof.

What do?

Replies from: shminux, Elithrion, drethelin, ChristianKl
comment by shminux · 2013-02-19T18:46:51.718Z · LW(p) · GW(p)

Don't be a mindless pawn in Omega's cruel games! May everyone's death be on its conscience! The people will unite and rise against the O-pressor!

comment by Elithrion · 2013-02-20T03:11:36.099Z · LW(p) · GW(p)

Find out all their credit card/online banking information (they won't need them when they're dead), find out which ones will most likely reward/worship you for sparing them, cash in, use resources for whatever you want (including, but not limited to, basking in filthy lucre). Or were you looking for an altruistic solution? (In which case, pick some arbitrary criteria for whom you like best, or who you think will most improve the world, and go with that.)

comment by drethelin · 2013-02-19T20:46:23.763Z · LW(p) · GW(p)

Kill all the dirty blues to make the world a better place for us noble greens

comment by ChristianKl · 2013-02-23T17:36:33.378Z · LW(p) · GW(p)

The key question is: "Why is Omega playing that game with you?"

comment by Kawoomba · 2013-02-25T17:39:50.721Z · LW(p) · GW(p)

(Exasperated sigh) Come on.