Open Thread: July 2010

post by komponisto · 2010-07-01T21:20:42.638Z · score: 6 (7 votes) · LW · GW · Legacy · 698 comments

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.


comment by michaelkeenan · 2010-07-02T02:43:06.441Z · score: 20 (20 votes) · LW(p) · GW(p)

I propose that LessWrong should produce a quarterly magazine of its best content.

LessWrong readership has a significant overlap with the readers of Hacker News, a reddit/digg-like community of tech entrepreneurs. So you might be familiar with Hacker Monthly, a print magazine version of Hacker News. The first edition, featuring 16 items that were voted highly on Hacker News, came out in June, and the second came out today. The curator went to significant effort to contact the authors of the various articles and blog posts to include them in the magazine.

Why would we want LessWrong content in a magazine? I personally would find it a great recruitment tool; I could have copies at my house and show/lend/give them to friends. As someone at the Hacker News discussion commented, "It's weird but I remember reading some of these articles on the web but, reading them again in magazine form, they somehow seem much more authoritative and objective. Ah, the perils of framing!"

The publishing and selling part is not too difficult. Hacker Monthly uses MagCloud, a company that makes it easy to turn PDFs into printed magazines and sell them.

Unfortunately, I don't have the skills or time to do this myself, at least not in the short-term. If someone wants to pick up this project, major tasks would include creating a process to choose articles for inclusion, contacting the authors for permission, and designing the magazine.

There's also the possibility of advertisements. I personally would be excited to see what kinds of companies would like to advertise to an audience of rationalists. Cryonics companies? Index funds? Rationalist books? Non-profits seeking donations?

Should advertising be used just to defray costs, or could the magazine make money? Make money for whom?

If people think it's a good idea but no-one takes it on, I might have some time free early next year to make this happen. But I hope someone gets to it earlier.

comment by mattnewport · 2010-07-02T15:59:28.882Z · score: 5 (5 votes) · LW(p) · GW(p)

Does anyone else find the idea of creating a printed magazine rather anachronistic?

comment by Blueberry · 2010-07-02T16:12:46.424Z · score: 3 (3 votes) · LW(p) · GW(p)

The rumors of print media's death have been greatly exaggerated.

comment by Larks · 2010-07-04T17:32:37.783Z · score: 11 (11 votes) · LW(p) · GW(p)

This comment would seem much more authoritative if seen in print.

comment by LucasSloan · 2010-07-02T04:37:05.217Z · score: 2 (2 votes) · LW(p) · GW(p)

I don't think there's enough content on LW to make publishing a magazine worthwhile. However, Eliezer's book on rationality should offer many of the same benefits.

comment by michaelkeenan · 2010-07-02T05:15:43.343Z · score: 7 (7 votes) · LW(p) · GW(p)

Not all of the content needs to be from the most recent quarter. There could be classic articles too. But I think we might have enough content each quarter anyway. Let's see...

There were about 120 posts to Less Wrong from April 1 to June 30. The top ten highest-voted were Diseased thinking: dissolving questions about disease by Yvain, Eight Short Studies On Excuses by Yvain, Ugh Fields by Roko, Bayes Theorem Illustrated by komponisto, Seven Shiny Stories by Alicorn, Ureshiku Naritai by Alicorn, The Psychological Diversity of Mankind by Kaj Sotala, Abnormal Cryonics by Will Newsome, Defeating Ugh Fields In Practice by Psychohistorian, and Applying Behavioral Psychology on Myself by John Maxwell IV.

Maybe not all of those are appropriate for a magazine (e.g. Bayes Theorem Illustrated is too long). So maybe swap a couple of them out for other ones. Then maybe add a few classic LessWrong articles (for example, Disguised Queries would make a good companion piece to Diseased Thinking), add a few pages of advertising and maybe some rationality quotes, and you'd have at least 30 pages. I know I'd buy it.

comment by komponisto · 2010-07-02T11:28:18.343Z · score: 1 (1 votes) · LW(p) · GW(p)

Maybe not all of those are appropriate for a magazine (e.g. Bayes Theorem Illustrated is too long)

It's not actually all that long; it's just that the diagrams take up a lot of space.

comment by michaelkeenan · 2010-07-02T16:28:18.100Z · score: 1 (1 votes) · LW(p) · GW(p)

Well, I'd like to keep the diagrams if the article is to be used. I do like Bayes Theorem Illustrated and I think an explanation of Bayes Theorem is perfect content for the magazine. If I were designing the magazine I'd want to try to include it, maybe edited down in length.

comment by NancyLebovitz · 2010-07-02T05:21:57.292Z · score: 3 (3 votes) · LW(p) · GW(p)

Monthly seems too often. Quarterly might work.

comment by gwern · 2010-07-02T05:04:49.269Z · score: 3 (3 votes) · LW(p) · GW(p)

A yearly anthology would be pretty good, though. HN is reusing others' content and can afford a faster tempo; but that simply means we need to be slower. Monthly is too fast, I suspect that quarterly may be a little too fast unless we lower our standards to include probably wrong but still interesting essays. (I think of "Is cryonics necessary?: Writing yourself into the future" as an example of something I'm sure is wrong, but was still interesting to read.)

comment by Kevin · 2010-07-02T06:24:04.081Z · score: 2 (2 votes) · LW(p) · GW(p)

How about thirdly!?

comment by magfrump · 2010-07-02T13:55:31.940Z · score: 0 (0 votes) · LW(p) · GW(p)

This post both made me laugh AND think it was a good idea; I'd love to see a magazine that comes out more than once a year. There's a bit of discussion of the most recent quarter; if people don't think that it's long enough (or that the pace will continue, or that people will consent to their articles being put in journals), a slight delay should help, but a fourfold delay seems excessive.

comment by Kevin · 2010-07-02T06:19:44.085Z · score: 1 (1 votes) · LW(p) · GW(p)

There's certainly enough content to do at least one really good issue.

comment by JohannesDahlstrom · 2010-07-07T09:51:26.603Z · score: 16 (16 votes) · LW(p) · GW(p)

Drowning Does Not Look Like Drowning

A fascinating warning against generalizing from fictional evidence in a very real life-or-death situation.

comment by Paul Crowley (ciphergoth) · 2010-07-08T14:39:19.763Z · score: 15 (15 votes) · LW(p) · GW(p)

A New York Times article on Robin Hanson and his wife Peggy Jackson's disagreement on cryonics:

http://www.nytimes.com/2010/07/11/magazine/11cryonics-t.html?ref=health&pagewanted=all

comment by WrongBot · 2010-07-08T17:12:21.846Z · score: 9 (9 votes) · LW(p) · GW(p)

While I'm not planning to pursue cryopreservation myself, I don't believe that it's unreasonable to do so.

Industrial coolants came up in a conversation I was having with my parents (for reasons I am completely unable to remember), and I mentioned that I'd read a bunch of stuff about cryonics lately. My mom then half-jokingly threatened to write me out of her will if I ever signed up for it.

This seemed... disproportionately hostile. She was skeptical of the singularity and my support for the SIAI when it came up a few weeks ago, but she's not particularly interested in the issue and didn't make a big deal about it. It wasn't even close to the level of scorn she apparently has for cryonics. When I asked her about it, she claimed she opposed it based on the physical impossibility of accurately copying a brain. My father and I pointed out that this would literally require the existence of magic; she conceded the point, mentioned that she still thought it was ridiculous, and changed the subject.

This was obviously a case of my mom avoiding her belief's true weak points by not offering her true objection, rationality failures common enough to deserve blog posts pointing them out; I wasn't shocked to observe them in the wild. What is shocking to me is that someone who is otherwise quite rational would feel so motivated to protect this particular belief about cryonics. Why is this so important?

That the overwhelming majority of those who share this intense motivation are women (it seems) just makes me more confused. I've seen a couple of explanations for this phenomenon, but they aren't convincing: if these people object to cryonics because they see it as selfish (for example), why do so many of them come up with fake objections? The selfishness objection doesn't seem like it would be something one would be penalized for making.

comment by Roko · 2010-07-08T22:31:28.034Z · score: 11 (13 votes) · LW(p) · GW(p)

Wanting cryo signals disloyalty to your present allies.

Women, it seems, are especially sensitive to this (mothers, wives). Here's my explanation for why:

  1. Women are better than men at analyzing the social-signalling theory of actions. In fact, they (mostly) obsess about that kind of thing, e.g. watching soap operas, gossiping, people watching, etc. (disclaimer: on average)

  2. They are less rational than men (only slightly, on average), and this is compounded by the fact that they are less knowledgeable about technical things (disclaimer: on average), especially physics, computer science, etc.

  3. Women are more bound by social convention and less able to be lone dissenters. Asch's conformity experiment found women to be more conforming.

  4. Because of (2) and (3), women find it harder than men to take cryo seriously. Therefore, they are much more likely to think that it is not a feasible thing for them to do.

  5. Because they are so into analyzing social signalling, they focus in on what cryo signals about a person. Overwhelmingly: selfishness, and as they don't think they're going with you, betrayal.

comment by Alicorn · 2010-07-08T22:35:28.376Z · score: 5 (5 votes) · LW(p) · GW(p)

If you're right, this suggests a useful spin on the disclosure: "I want you to run away with me - to the FUTURE!"

However, it was my dad, not my mom, who called me selfish when I brought up cryo.

comment by Roko · 2010-07-08T22:40:35.027Z · score: 0 (0 votes) · LW(p) · GW(p)

I think that what would work is signing up before you start a relationship, and making it clear that it's a part of who you are.

For parents, you can't do this, but they're your parents, they'll love you through thick and thin.

comment by rhollerith_dot_com · 2010-07-09T04:21:50.661Z · score: 1 (1 votes) · LW(p) · GW(p)

I think that what would work is signing up before you start a relationship, and making it clear that it's a part of who you are.

Ah, but did you notice that that did not work for Robin? (The NYT article says that Robin discussed it with Peggy when they were getting to know each other.)

comment by Nisan · 2010-07-09T12:54:27.374Z · score: 4 (4 votes) · LW(p) · GW(p)

It "worked" for Robin to the extent that Robin got to decide whether to marry Peggy after they discussed cryonics. Presumably they decided that they preferred each other to hypothetical spouses with the same stance on cryonics.

comment by rhollerith_dot_com · 2010-07-09T13:39:21.137Z · score: 0 (0 votes) · LW(p) · GW(p)

Thanks. (Upvoted.)

comment by Wei_Dai · 2010-07-08T22:51:24.921Z · score: 3 (3 votes) · LW(p) · GW(p)

Maybe the husband/son should preemptively play the "if you don't sign up with me, you're betraying me" card?

comment by Roko · 2010-07-08T23:07:18.985Z · score: 3 (5 votes) · LW(p) · GW(p)

Aha, but if I signed up, I'd have to non-conform, darling. Think of what all the other girls at the office would say about me! It would be worse than death!

comment by lmnop · 2010-07-08T23:25:12.975Z · score: 3 (3 votes) · LW(p) · GW(p)

In the case of refusing cryonics, I doubt that fear of social judgment is the largest factor or even close. It's relatively easy to avoid judgment without incurring terrible costs--many people signed up for cryonics have simply never mentioned it to the girls and boys in the office. I'm willing to bet that most people, even if you promised that their decision to choose cryonics would be entirely private, would hardly waver in their refusal.

comment by Will_Newsome · 2010-07-09T01:30:32.249Z · score: 1 (1 votes) · LW(p) · GW(p)

For what it's worth, Steven Kaas emphasized social weirdness as a decent argument against signing up. I'm not sure what his reasoning was, but given that he's Steven Kaas, I'm going to update on expected evidence (that there is a significant social cost to signing up that I cannot at the moment see).

comment by Wei_Dai · 2010-07-09T06:27:04.774Z · score: 4 (4 votes) · LW(p) · GW(p)

I don't get why social weirdness is an issue. Can't you just not tell anyone that you've signed up?

comment by gwern · 2010-07-09T06:45:43.346Z · score: 2 (2 votes) · LW(p) · GW(p)

The NYT article points out that you sometimes want other people to know: your wife's cooperation at the hospital deathbed will make it much easier for the Alcor people to whisk you away.

comment by Vladimir_Nesov · 2010-07-09T08:19:40.671Z · score: 2 (2 votes) · LW(p) · GW(p)

It's not an argument against signing up, unless the expected utility of the decision is borderline positive and it's specifically the increased probability of failure because of lack of additional assistance of your family that tilts the balance to the negative.

comment by gwern · 2010-07-10T10:12:34.088Z · score: 0 (0 votes) · LW(p) · GW(p)

Given that there are examples of children or spouses actively (and sometimes successfully) preventing cryopreservation, there's an additional few % chance of complete failure. Given the low chance to begin with (I think another commenter says no one expects cryonics to succeed with more than 1/4 probability?), that damages the expected utility badly.

comment by pengvado · 2010-07-10T11:09:28.759Z · score: 3 (3 votes) · LW(p) · GW(p)

An additional failure mode with a few % chance of happening damages the expected utility by a few %. Unless you have some reason to think that this cause of failure is anticorrelated with other causes of failure?

comment by gwern · 2010-07-10T13:04:41.634Z · score: 0 (0 votes) · LW(p) · GW(p)

If I initially estimate that cryonics in aggregate has a 10% chance of succeeding, and I then estimate that my spouse/children have a 5% chance of preventing my cryopreservation, does my expected utility decline by only 5%?
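A quick sketch of the arithmetic, using the hypothetical 10% and 5% figures from this exchange: an independent failure mode with probability p multiplies the success probability by (1 − p), so expected utility falls by a few percent *relative*, not by a few percentage points absolute.

```python
# Hypothetical numbers from the exchange above.
p_success_baseline = 0.10  # estimated chance cryonics succeeds, all other factors considered
p_family_blocks = 0.05     # estimated chance spouse/children prevent preservation

# Assuming the two failure modes are independent:
p_success = p_success_baseline * (1 - p_family_blocks)

# Relative decline in expected utility (which is proportional to success probability):
relative_decline = 1 - p_success / p_success_baseline

print(round(p_success, 4))         # 0.095
print(round(relative_decline, 4))  # 0.05
```

So under independence the answer is yes: expected utility falls from 0.10·U to 0.095·U, a 5% decline in its former value. The answer would differ only if this failure mode were correlated (or anticorrelated) with the others.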

comment by RogerPepitone · 2010-07-11T15:49:57.612Z · score: -1 (1 votes) · LW(p) · GW(p)

Are you still involved in Remember 11?

comment by wedrifid · 2010-07-09T02:59:16.137Z · score: 1 (1 votes) · LW(p) · GW(p)

Maybe the husband/son should preemptively play the "if you don't sign up with me, you're betraying me" card?

If my spouse played that card too hard, I'd sign up for cryonics and then I'd dump them. ("Too hard" would probably mean more than one issue and persisting against clearly expressed boundaries.) Apart from the manipulative aspect it is just, well, stupid. At least manipulate me with "you will be abandoning me!", you silly man/woman/intelligent agent of choice.

comment by JoshuaZ · 2010-07-08T23:06:24.945Z · score: 1 (1 votes) · LW(p) · GW(p)

Maybe the husband/son should preemptively play the "if you don't sign up with me, you're betraying me" card?

Voted up as an interesting suggestion. That said, I think that if anyone feels a need to be playing that card in a preemptive fashion then a relationship is probably not very functional to start with. Moreover, given that signing up is a change from the status quo I suspect that attempting to play that card would go over poorly in general.

comment by Wei_Dai · 2010-07-08T23:39:09.412Z · score: 0 (0 votes) · LW(p) · GW(p)

That said, I think that if anyone feels a need to be playing that card in a preemptive fashion then a relationship is probably not very functional to start with.

Can you expand on that? I'm not sure why this particular card is any worse than what people in functional relationships typically do.

Moreover, given that signing up is a change from the status quo I suspect that attempting to play that card would go over poorly in general.

Right, so sign up before entering the relationship, then play that card. :)

comment by lsparrish · 2010-07-08T23:57:56.908Z · score: 5 (5 votes) · LW(p) · GW(p)

I would say that if you aren't yet married, be prepared to dump them if they won't sign up with you. Because if they won't, that is a strong signal to you that they are not a good spouse. These kinds of signals are important to pay attention to in the courtship process.

After marriage, you are hooked regardless of what decision they make on their own suspension arrangements, because it's their own life. You've entered the contract, and the fact they want to do something stupid does not change that. But you should consider dumping them if they refuse to help with the process (at least in simple matters like calling Alcor), as that actually crosses the line into betrayal (however passive) and could get you killed.

comment by JoshuaZ · 2010-07-09T01:42:02.455Z · score: 4 (4 votes) · LW(p) · GW(p)

Can you expand on that? I'm not sure why this particular card is any worse than what people in functional relationships typically do.

We may have different definitions of "functional relationship." I'd put very high on the list of elements of a functional relationship that people don't go out of their way to consciously manipulate each other over substantial life decisions.

comment by Wei_Dai · 2010-07-09T08:29:03.876Z · score: 1 (1 votes) · LW(p) · GW(p)

Um, it's a matter of life or death, so of course I'm going to "go out of my way".

As for "consciously manipulate", it seems to me that people in all relationships consciously manipulate each other all the time, in the sense of using words to form arguments in order to convince the other person to do what they want. So again, why is this particular form of manipulation not considered acceptable? Is it because you consider it a lie, that is, you don't think you would really feel betrayed or abandoned if your significant other decided not to sign up with you? (In that case would it be ok if you did think you would feel betrayed/abandoned?) Or is it something else?

comment by wedrifid · 2010-07-09T09:51:23.259Z · score: 2 (2 votes) · LW(p) · GW(p)

So again, why is this particular form of manipulation not considered acceptable?

It is a good question. The distinctive feature of this class of influence is the overt use of guilt and shame, combined with the projection of the speaker's alleged emotional state onto the actual physical actions of the recipient. It is a symptom of a relationship dynamic that many people consider immature and unhealthy.

comment by Wei_Dai · 2010-07-09T20:56:00.739Z · score: 0 (0 votes) · LW(p) · GW(p)

It is a symptom of a relationship dynamic that many people consider immature and unhealthy.

I'm tempted to keep asking why (ideally in terms of game theory and/or evolutionary psychology), but I'm afraid of coming across as obnoxious at this point. So let me just ask: do you think there is a better way of making the point that, from the perspective of the cryonicist, he's not abandoning his SO, but rather it's the other way around? Or do you think that it's not worth bringing up at all?

comment by NancyLebovitz · 2010-07-09T00:02:17.668Z · score: 1 (1 votes) · LW(p) · GW(p)

Wanting cryo signals disloyalty to your present allies.

I don't see why you'd be showing disloyalty to those of your allies who are also choosing cryo.

Here are some more possible reasons for being opposed to cryo.

Loss aversion. "It would be really stupid to put in that hope and money and get nothing for it."

Fear that it might be too hard to adapt to the future society. (James Halperin's The First Immortal has it that no one gets thawed unless someone is willing to help them adapt. Would that make cryo seem more or less attractive?)

And, not being an expert on women, I have no idea why there's a substantial difference in the proportions of men and women who are opposed to cryo.

comment by Roko · 2010-07-09T00:08:33.186Z · score: 4 (4 votes) · LW(p) · GW(p)

Difference between showing and signalling disloyalty. To see that it is a signal of disloyalty/lower commitment, consider what signal would be sent out by Rob saying to Ruby: "Yes, I think cryo would work, but I think life would be meaningless without you by my side, so I won't bother"

comment by Wei_Dai · 2010-07-09T20:18:23.810Z · score: 0 (0 votes) · LW(p) · GW(p)

It's seems to also be a signal of disloyalty/lower commitment to say, "No honey, I won't throw myself on your funeral pyre after you die." Why don't we similarly demand "Yes, I could keep on living, but I think life would be meaningless without you by my side, so I won't bother" in that case?

comment by Roko · 2010-07-09T20:49:47.062Z · score: 2 (2 votes) · LW(p) · GW(p)

You have to differentiate between what an individual thinks/does/decides, and what society as a whole thinks/does/decides.

For example, in a society that generally accepted that it was the "done thing" for a person to die on the funeral pyre of their partner, saying that you wanted to make a deal to buck the trend would certainly be seen as selfish.

Most individuals see the world in terms of options that are socially allowable, and signals are considered relative to what is socially allowable.

comment by SilasBarta · 2010-07-08T17:36:45.688Z · score: 6 (6 votes) · LW(p) · GW(p)

if these people object to cryonics because they see it as selfish (for example), why do so many of them come up with fake objections?

I -- quite predictably -- think this is a special case of the more general problem that people have trouble explaining themselves. Your mom doesn't give her real reason because she can't (yet) articulate it. In your case, I think it's due to two factors: 1) part of the reasoning process is something she doesn't want to say to your face, so she avoids thinking it, and 2) she's using hidden assumptions that she falsely assumes you share.

For my part, my dad's wife is nominally unopposed, bitterly noting that "It's your money" and then ominously adding, "you'll have to talk about this with your future wife, who may find it loopy".

(Joke's on her -- at this rate, no woman will take that job!)

comment by cousin_it · 2010-07-08T17:42:04.489Z · score: 0 (0 votes) · LW(p) · GW(p)

Sometime ago I offered this explanation for not signing up for cryo: I know signing up would be rational, but can't overcome my brain's desire to make me "look normal". I wonder whether that explanation sounds true to others here, and how many other people feel the same way.

comment by SilasBarta · 2010-07-08T22:50:42.253Z · score: 0 (0 votes) · LW(p) · GW(p)

I'm in a typical decision-paralysis state. I want to sign up, I have the money, but I'm also interested in infinite banking, which requires you to get a whole-life plan [1], which would have to be coordinated, which makes it complicated and throws off an ugh field.

What I should probably do is just get the term insurance, sign up for cryo, and then buy amendments to the life insurance contract if I want to get into the infinite banking thing.

[1] Save your breath about the "buy term and invest the difference" spiel, I've heard it all before. The investment environment is a joke.

comment by mattnewport · 2010-07-08T23:06:25.864Z · score: 0 (0 votes) · LW(p) · GW(p)

I'm also interested in infinite banking, which requires you to get a whole-life plan

You mentioned this before and I had a quick look at the website and got the impression that it is fairly heavily dependent on US tax laws around whole life insurance and so is not very applicable to other countries. Have you investigated it enough to say whether my impression is accurate or if this is something that makes sense in other countries with differing tax regimes as well?

comment by SilasBarta · 2010-07-08T23:15:15.823Z · score: 0 (0 votes) · LW(p) · GW(p)

I haven't read about the laws in other countries, but I suspect they at least share the aspect that it's harder to seize assets stored in such a plan, giving you more time to lodge an objection if they get a lien on it.

comment by mattnewport · 2010-07-08T17:47:01.516Z · score: 0 (0 votes) · LW(p) · GW(p)

For a variety of reasons I don't think cryonics is a good investment for me personally. The social cost of looking weird is certainly a negative factor, though not the only one.

comment by NancyLebovitz · 2010-07-08T18:09:16.177Z · score: 3 (3 votes) · LW(p) · GW(p)

I don't have anything against cryo, so these are tentative suggestions.

Maybe going in for cryo means admitting how much death hurts, so there's a big ugh field.

Alternatively, some people are trudging through life, and they don't want it to go on indefinitely.

Or there are people they want to get away from.

However, none of this fits with "I'll write you out of my will". This sounds to me like seeing cryo as a personal betrayal, but I can't figure out what the underlying premises might be. Unless it's that being in the will implies that the recipient will also leave money to descendants, and if you aren't going to die, then you won't.

comment by Blueberry · 2010-07-08T18:01:06.698Z · score: 2 (2 votes) · LW(p) · GW(p)

That the overwhelming majority of those who share this intense motivation are women (it seems) just makes me more confused.

Is there evidence for this? Specifically the "intense" part?

ETA: Did you ask her why she had such strong feelings about it? Was she able to answer?

comment by WrongBot · 2010-07-08T18:19:55.209Z · score: 0 (0 votes) · LW(p) · GW(p)

The evidence is largely anecdotal, I think. There are certainly stories of cryonics ending marriages out there.

I haven't yet asked her about it, but I plan to do so next time we talk.

comment by whpearson · 2010-07-08T17:25:32.107Z · score: 0 (0 votes) · LW(p) · GW(p)

If I were going to make a guess, I suspect that saying X is selfish can easily lead to the rejoinder "It is my money; I have the right to choose what to do with it," especially in the modern world. Saying X is selfish and so shouldn't be done can also be seen as interfering in another person's business, which is frowned upon in lots of social circles. It is also called moralising. So she may be unconsciously avoiding that response.

comment by WrongBot · 2010-07-08T17:40:09.561Z · score: 1 (1 votes) · LW(p) · GW(p)

This may be true in some cases, but I don't think it is in this one; my mom has no trouble moralizing on any other topic, even ones about which I care a great deal more than I do about cryonics. For example, she's criticized polyamory as unrealistic and bisexuality as non-existent on multiple occasions, both of which have a rather significant impact on how I live my life.

comment by whpearson · 2010-07-08T17:53:28.314Z · score: 1 (1 votes) · LW(p) · GW(p)

I wasn't there for the discussions, but those seem like different types of statements from saying that they are "wrong/selfish" and that by implication you are a bad person for doing them. She is impugning your judgement in all cases, rather than your character.

comment by WrongBot · 2010-07-08T18:00:29.528Z · score: 1 (1 votes) · LW(p) · GW(p)

An important distinction, it's true. I feel like it should make a difference in this situation that I declared my intention to not pursue cryopreservation, but I'm not sure that it does.

Either way, I can think of other specific occasions when my mom has specifically impugned my character as well as my judgment. ("Lazy" is the word that most immediately springs to mind, but there are others.)

It occurs to me that as I continue to add details my mom begins to look like a more and more horrible person; this is generally not the case.

comment by Vladimir_Nesov · 2010-07-08T15:25:53.008Z · score: 4 (6 votes) · LW(p) · GW(p)

A factual error:

when he first announced his intention to have his brain surgically removed from his freshly vacated cadaver and preserved in liquid nitrogen

I'm fairly sure that head-only preservation doesn't involve any brain-removal. It's interesting that in context the purpose of the phrase was to present a creepy image of cryonics, and so the bias towards the phrases that accomplish this goal won over the constraint of not generating fiction.

comment by Wei_Dai · 2010-07-08T19:06:06.282Z · score: 2 (2 votes) · LW(p) · GW(p)

I wonder if Peggy's apparent disvalue of Robin's immortality represents a true preference, and if so, how should an FAI take it into account while computing humanity's CEV?

comment by Clippy · 2010-07-08T19:22:06.691Z · score: 3 (5 votes) · LW(p) · GW(p)

It should store a canonical human "base type" in a data structure somewhere. Then it should store the information about how all humans deviate from the base type, so that they can in principle be reconstituted as if they had just been through a long sleep.

Then it should use Peggy's body and Robin's body for fuel.

comment by red75 · 2010-07-08T21:22:11.583Z · score: 1 (1 votes) · LW(p) · GW(p)

It seems plausible that the "know more" part of EV should include the result of modelling the application of CEV to humanity; i.e., CEV is not just the result of aggregating individuals' EVs, but one of the fixed points of humanity's CEV after reflection on the results of applying CEV.

Maybe Peggy's model will see that her preferences would result in unnecessary deaths, and that death is not an essential ingredient of society's existence or of her children's prosperity.

comment by Wei_Dai · 2010-07-08T22:20:04.255Z · score: 2 (2 votes) · LW(p) · GW(p)

It seems to me if it were just some factual knowledge that Peggy is missing, Robin would have been able to fill her in and thereby change her mind.

Of course, Robin isn't a superintelligent being, so perhaps there is an argument that would change Peggy's mind that Robin hasn't thought of yet, but how certain should we be of that?

comment by Nick_Tarleton · 2010-07-08T22:28:27.276Z · score: 6 (6 votes) · LW(p) · GW(p)

Communicating complex factual knowledge in an emotionally charged situation is hard, to say nothing of actually causing a change in deep moral responses. I don't think failure is strong evidence for the nonexistence of such information. (Especially since I think one of the most likely sorts of knowledge to have an effect is about the origin — evolutionary and cognitive — of the relevant responses, and trying to reach an understanding of that is really hard.)

comment by Wei_Dai · 2010-07-08T23:18:49.457Z · score: 3 (3 votes) · LW(p) · GW(p)

You make a good point, but why is communicating complex factual knowledge in an emotionally charged situation hard? It must be that we're genetically programmed to block out other people's arguments when we're in an emotionally charged state. In other words, one explanation for why Robin has failed to change Peggy's mind is that Peggy doesn't want to know whatever facts or insights might change her mind on this matter. Would it be right for the FAI to ignore that "preference" and give Peggy's model the relevant facts or insights anyway?

ETA: This does suggest some practical advice: try to teach your wife and/or mom the relevant facts and insights before bringing up the topic of cryonics.

comment by Kevin · 2010-07-08T23:36:03.966Z · score: 11 (11 votes) · LW(p) · GW(p)

You are underestimating just how enormously Peggy would have to change her mind. Her life's work involves emotionally comforting people and their families through the final days of terminal illness. She has accepted her own mortality and the mortality of everyone else as one of the basic facts of life. As no one has been resurrected yet, death remains a basic fact of life for those who don't accept the information-theoretic definition of death.

To change Peggy's mind, Robin would not just have to convince her to accept his own cryonic suspension, but she would have to be convinced to change her life's work -- to no longer spend her working hours convincing people to accept death, but to convince them to accept death while simultaneously signing up for very expensive and very unproven crazy sounding technology.

Changing the mind of the average cryonics-opposed life partner should be a lot easier than changing Peggy's mind. Most cryonics-opposed life partners have not dedicated their lives to something diametrically opposed to cryonics.

comment by Roko · 2010-07-08T23:28:35.850Z · score: 1 (7 votes) · LW(p) · GW(p)

This does suggest some practical advice: try to teach your wife and/or mom the relevant facts and insights before bringing up the topic of cryonics.

You mean you want to make an average IQ woman into a high-grade rationalist?

Good luck!

Better plan: go with Rob Ettinger's advice. If your wife/gf doesn't want to play ball, dump her. (This is a more alpha-male attitude to the problem, too. A woman will instinctively sense that you are approaching her objection from an alpha-male stance of power, which will probably have more effect on her than any argument.)

In fact I'm willing to bet at steep odds that Mystery could get a female partner to sign up for cryo with him, whereas a top rationalist like Hanson is floundering.

comment by Alicorn · 2010-07-08T23:36:15.394Z · score: 4 (6 votes) · LW(p) · GW(p)

Is this generalizable? Should I, too, threaten my loved ones with abandonment whenever they don't do what I think would be best?

comment by Alexandros · 2010-07-09T09:48:19.790Z · score: 1 (1 votes) · LW(p) · GW(p)

I don't think this is about doing what you think best; it's about allowing you to do what you think best. And yes, you should definitely threaten abandonment in these cases, or at least you're definitely entitled to threaten and/or practice abandonment in such cases.

comment by Roko · 2010-07-08T23:51:05.682Z · score: 1 (1 votes) · LW(p) · GW(p)

I'm not sure. It might work, but you're going outside of my areas of expertise.

comment by Larks · 2010-07-09T00:56:57.752Z · score: 1 (3 votes) · LW(p) · GW(p)

Better yet, sign up while you're single, and present it as a fait accompli. It won't get her signed up, but I'd be willing to bet she won't try to make you drop your subscription.

comment by lmnop · 2010-07-08T23:37:30.605Z · score: 0 (0 votes) · LW(p) · GW(p)

Well the practical advice is being offered to LW, and I'd guess that most of the people here are not average IQ, and neither are their friends and family. I personally think it's a great idea to try and give someone the relevant factual background to understand why cryonics is desirable before bringing up the option. It probably wouldn't work, simply because almost all attempts to sell cryonics to anyone don't work, but it should at least decrease the probability of them reacting with a knee-jerk dismissal of the whole subject as absurd.

comment by Roko · 2010-07-08T23:57:17.263Z · score: 2 (8 votes) · LW(p) · GW(p)

I maintain that if you are male with a relatively neurotypical female partner, the probability of getting her to sign on the dotted line for cryo, or to wholeheartedly accept your own cryo, is not maximized by rational argument; rather, it is maximized by having an understanding of the emotional world that the fairer sex inhabits, and of how to control her emotions so that she does what you think best. She won't listen to your words; she'll sense the emotions and level of dominance in you, decide based on that, and then rationalize that decision.

This is a purely positive statement, i.e. it is empirically testable, and I hereby denounce any connotation that one might interpret it to have. Let me explicitly disclaim that I don't think that women's emotional nature makes them inferior, just different, and in need of different treatment. Let me also disclaim that this applies only on average, and that there will be exceptions, i.e. highly systematizing women who will, in fact, be persuaded by rational argument.

comment by lmnop · 2010-07-09T00:09:48.751Z · score: 1 (1 votes) · LW(p) · GW(p)

I mostly agree with you. I would even expand your point to say that if you want to convince anyone (who isn't a perfect Bayesian) to do anything, the probability of success will almost always be higher if you use primarily emotional manipulation rather than rational argument. But cryonics inspires such strong negative emotional reactions in people that I think it would be nearly impossible to combat those with emotional manipulation of the type you describe alone. I haven't heard of anyone choosing cryonics for themselves without having to make a rational effort to override their gut response against it, and that requires understanding the facts. Besides, I think the type of males who choose cryonics tend to have female partners of at least above-average intelligence, so that should make the explanatory process marginally less difficult.

comment by Roko · 2010-07-09T00:16:57.029Z · score: 0 (0 votes) · LW(p) · GW(p)

Besides, I think the type of males who choose cryonics tend to have female partners of at least above-average intelligence, so that should make the explanatory process marginally less difficult.

Right, but the data says that it is a serious problem. Cryonics wife problem, etc.

comment by lsparrish · 2010-07-09T00:18:58.108Z · score: 5 (5 votes) · LW(p) · GW(p)

I wonder how these women feel about being labeled "The Hostile Wife Phenomenon"?

comment by Roko · 2010-07-09T00:25:19.528Z · score: 2 (6 votes) · LW(p) · GW(p)

Full of righteous indignation, I should imagine. After all, they see it as their own husbands betraying them.

comment by steven0461 · 2010-07-08T22:42:06.803Z · score: 1 (1 votes) · LW(p) · GW(p)

Yes -- calling it "factual knowledge" suggests it's only about the sort of fact you could look up in the CIA World Factbook, as opposed to what we would normally call "insight".

comment by red75 · 2010-07-08T22:57:43.874Z · score: 2 (2 votes) · LW(p) · GW(p)

I meant something like embedding them in a culture where death is unnecessary, rather than directly arguing for it. Words aren't the best communication channel for changing moral values. Will that be enough? I hope so, if the death of the carriers of moral values isn't a necessary condition for moral progress.

Edit: BTW, if CEV is computed using humans' reflection on its application, then the FAI cannot passively combine all volitions; it must search for and somehow choose a fixed point. Which rule should govern that process?

comment by wedrifid · 2010-07-08T15:19:10.248Z · score: 2 (2 votes) · LW(p) · GW(p)

That was very nearly terrifying.

comment by Vladimir_Nesov · 2010-07-08T16:47:25.371Z · score: 1 (1 votes) · LW(p) · GW(p)

Good article overall. It gives a human feel to the decision to pursue cryonics, in particular by focusing on an unfair assault that cryonics attracts (thus appealing to cryonicists' sense of status).

comment by mattnewport · 2010-07-08T16:29:53.738Z · score: 1 (1 votes) · LW(p) · GW(p)

The hostile wife phenomenon doesn't seem to have been mentioned much here. Is it less common than the article suggests or has it been glossed over because it doesn't support the pro-cryonics position? Or has it been mentioned and I wasn't paying attention?

comment by ata · 2010-07-08T17:07:00.872Z · score: 1 (1 votes) · LW(p) · GW(p)

At last count (a while ago admittedly), most LWers were not married, and almost none were actually signed up for cryonics. So perhaps this phenomenon just isn't a salient issue to most people here.

comment by Morendil · 2010-07-08T17:17:15.130Z · score: 3 (3 votes) · LW(p) · GW(p)

I'm married and with kids, my wife supports my (so far theoretical only) interest in cryo. Though she says she doesn't want it for herself.

comment by Paul Crowley (ciphergoth) · 2010-07-09T07:33:02.975Z · score: 1 (1 votes) · LW(p) · GW(p)

Data point FWIW: my partners are far from convinced of the wisdom of cryonics, but they respect my choices. Much of the strongest opposition has come from my boyfriend, who keeps saying "why not just buy a lottery ticket? It's cheaper".

comment by gwern · 2010-07-09T10:19:05.935Z · score: 0 (0 votes) · LW(p) · GW(p)

Much of the strongest opposition has come from my boyfriend, who keeps saying "why not just buy a lottery ticket? It's cheaper".

Well, I hope you showed him your expected utility calculations!

comment by Paul Crowley (ciphergoth) · 2010-07-09T11:23:28.634Z · score: 1 (1 votes) · LW(p) · GW(p)

I'm afraid that isn't really a good fit for how he thinks about these things...

comment by Sniffnoy · 2010-07-09T11:26:06.956Z · score: 0 (0 votes) · LW(p) · GW(p)

It seems a bit odd to me that he would use the lottery comparison, in that case. Or no?

comment by Kingreaper · 2010-07-09T11:36:21.495Z · score: 2 (2 votes) · LW(p) · GW(p)

They're both things with low probabilities of success, and extremely large pay-offs.

To someone with a certain view of the future, or a moderately low "maximum pay-off" threshold, the pay-off of cryonics could be the same as the pay-off for a lottery win.

At which point the lottery is a cheaper, but riskier, gamble. Again, if someone has a certain view of the future, or a "minimum probability" threshold (which both fall under), then this difference in risk could go unnoticed in their thoughts.

At which point the two become identical, but one is more expensive.

It's quick-and-dirty thinking, but it's one easy way to end up with the connection, and it doesn't involve any utility calculations (in fact, utility calculations would be anathema to this sort of thinking).

comment by Paul Crowley (ciphergoth) · 2010-07-09T11:58:49.621Z · score: 2 (2 votes) · LW(p) · GW(p)

One big barrier I hit in talking to some of those close to me about this is that I can't seem to explain the distinction between wanting the feeling of hope that I might live a very long time, and actually wanting to live a long time. Lots of people just say "if you want to believe in life after death, why not just go to church? It's cheaper".

comment by Nisan · 2010-07-09T13:47:36.190Z · score: 2 (2 votes) · LW(p) · GW(p)

Lots of people just say "if you want to believe in life after death, why not just go to church? It's cheaper".

I could see people saying that if they don't believe that cryonics has any chance at all of working. It might be hard to tell. If I told people "there's a good chance that cryonics will enable me to live for hundreds of years", I'm sure many would respond by nodding, the same way they'd nod if I told them that "there's a good chance that I'll go to Valhalla after I die". Sometimes respect looks like credulity, you know? Do you think that's what's happening here?

comment by Paul Crowley (ciphergoth) · 2010-07-09T13:56:17.125Z · score: 5 (5 votes) · LW(p) · GW(p)

Yes. I'm happy that people respect my choices, but when they "respect my beliefs" it strikes me as incredibly disrespectful.

comment by RichardKennaway · 2010-07-09T13:42:41.656Z · score: 2 (2 votes) · LW(p) · GW(p)

And if you reply "I only want to believe in things that are true"?

comment by Paul Crowley (ciphergoth) · 2010-07-09T13:55:07.415Z · score: 2 (2 votes) · LW(p) · GW(p)

Apply to that reply the same transformation of my words that is causing me problems, and you get "I only want to believe in things that I believe are true".

comment by Sniffnoy · 2010-07-09T12:26:08.406Z · score: 0 (0 votes) · LW(p) · GW(p)

That's a bit scary.

comment by HughRistik · 2010-07-08T17:29:48.784Z · score: 0 (0 votes) · LW(p) · GW(p)

It was mentioned, and you weren't paying attention ;)

comment by mattnewport · 2010-07-08T17:48:45.321Z · score: 0 (0 votes) · LW(p) · GW(p)

I did think this was quite a likely explanation. As I'm not married the point would likely not have been terribly salient when reading about pros and cons.

comment by lsparrish · 2010-07-04T16:53:30.046Z · score: 13 (13 votes) · LW(p) · GW(p)

Cryonics scales very well. People who think cryonics is costly (even if you had to come up with the entire lump sum close to the end of your life) are generally ignorant of this fact.

So long as you keep the shape constant, a container's surface area scales as the square of its linear size, whereas its volume scales as the cube. For example, with a cube-shaped object, one side squared times 6 is the surface area, whereas one side cubed is the volume. Surface area is where the heat gets in, so if you have a huge container holding cryogenic goods (humans in this case), it costs much less per unit volume (human) than a smaller container of equal insulation. A way to understand this is that you only have to insulate the outside -- the inside gets free insulation.

But you aren't stuck using equal insulation. You can use thicker insulation, with a much smaller proportional effect on total surface area at bigger sizes. Imagine the difference between a marble-sized freezer and a house-sized freezer when you add a foot of insulation. The outside of the insulation is where it begins collecting heat. But with a gigantic freezer, you might add a meter of insulation without a significant proportional impact on surface area, compared to how much surface area it already has.
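The square-cube point can be made concrete with a few lines of arithmetic. This is a quick illustration with hypothetical container sizes, not figures from any actual facility:

```python
# Square-cube law: a cube of side s has surface area 6*s^2 and volume s^3.
# Heat leaks in through the surface, while storage capacity grows with
# volume, so the heat load per unit of stored volume falls as 6/s.

def area_per_volume(side):
    """Surface-area-to-volume ratio for a cube of the given side length."""
    return (6 * side ** 2) / side ** 3  # simplifies to 6 / side

for side in [1, 10, 100]:
    print(f"side {side:>3}: surface/volume = {area_per_volume(side):.2f}")
```

Scaling the side by a factor of 10 cuts the ratio, and hence the relative heat load, by that same factor of 10.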

Another factor to take into account is that liquid nitrogen, the super-cheap coolant used by cryonics facilities around the world, is vastly cheaper (more than a factor of 10) when purchased in huge quantities of several tons. The scaling factors for storage tanks are a big part of the reason for this. CI has used bulk purchasing as a mechanism for getting their prices down to $100 per patient per year for their newer tanks. They are actually storing 3,000 gallons of the stuff and using it slowly over time, which means there is a boiloff rate associated with the 3,000 gallon tank as well.

The conclusion I get from this is that there is a very strong self-interested case, as well as an altruistic case, to be made for megascale cryonics versus small independently run units. People who say they won't sign up for cost reasons may be reachable at a later date. To deal with such people's objections, it might be smart to get them to agree on a particular hypothetical price point at which they would feel it is justified. In large enough quantities, it is conceivable that indefinite storage costs could be as low as $50 per person, or 50 cents per year.

That is much cheaper than saving a life any other way, but of course there's still the risk that it might not work. However, given a sufficient chance of it working it could still be morally superior to other life saving strategies that cost more money. It also has inherent ecological advantages over other forms of life-saving in that it temporarily reduces population, giving the environment a chance to recover and green tech more time to take hold so that they can be supported sustainably and comfortably.

comment by Morendil · 2010-07-04T22:31:33.492Z · score: 5 (5 votes) · LW(p) · GW(p)

This needs to be a top-level post. Even with minimal editing. Please.

(ETA: It's not so much that we need to have another go at the cryonics debate; but the above is an argument that I can't recall seeing discussed here previously, that does substantially change the picture, and that illustrates various kinds of reasoning - about scaling properties, about predefining thresholds of acceptability, and about what we don't know we don't know - that are very relevant to LW's overall mission.)

comment by lsparrish · 2010-07-05T03:13:45.586Z · score: 1 (1 votes) · LW(p) · GW(p)

Done.

comment by NancyLebovitz · 2010-07-02T03:50:04.638Z · score: 9 (9 votes) · LW(p) · GW(p)

I was at a recent Alexander Technique workshop, and some of the teachers had been observing how two-year-olds crawl.

If you've had any experience with two-year-olds, you know they can cover ground at an astonishing rate.

The thing is, adults typically crawl with their faces perpendicular to the ground, and crawling feels clumsy and unpleasant.

Two-year-olds crawl with their faces at 45 degrees to the ground, and a gentle curve through their upper backs.

Crawling that way gives access to a surprisingly strong forward impetus.

The relevance to rationality and to akrasia is the implication that if something seems hard, it may be that the preconditions for making it easy haven't been set up.

comment by VNKKET · 2010-07-01T22:07:28.188Z · score: 9 (9 votes) · LW(p) · GW(p)

This is a mostly-shameless plug for the small donation matching scheme I proposed in May:

I'm still looking for three people to cross the "membrane that separates procrastinators and helpers" by donating $60 to the Singularity Institute. If you're interested, see my original comment. I will match your donation.

comment by Kutta · 2010-07-02T07:32:08.278Z · score: 5 (5 votes) · LW(p) · GW(p)

Done, 60 USD sent.

comment by VNKKET · 2010-07-02T18:16:09.689Z · score: 2 (2 votes) · LW(p) · GW(p)

Thank you! Matched.

comment by Scott Alexander (Yvain) · 2010-07-02T02:10:11.415Z · score: 4 (4 votes) · LW(p) · GW(p)

Done!

comment by WrongBot · 2010-07-02T00:37:28.037Z · score: 4 (4 votes) · LW(p) · GW(p)

I'm sorry I didn't see that earlier; I donated $30 to the SIAI yesterday, and I probably could have waited a little while longer and donated $60 all at once. If this offer will still be open in a month or two, I will take you up on it.

comment by VNKKET · 2010-07-02T17:58:09.404Z · score: 0 (0 votes) · LW(p) · GW(p)

That sounds good, and feel free to count your first $30 towards a later $60 total if I haven't found a third person by then.

comment by zero_call · 2010-07-02T21:35:38.457Z · score: 2 (2 votes) · LW(p) · GW(p)

Without any way of authenticating the donations, I find this to be rather silly.

comment by VNKKET · 2010-07-02T21:59:14.588Z · score: 3 (3 votes) · LW(p) · GW(p)

I'd also like these donations to be authenticated, but I'm willing to wait if necessary. Here's step 2, including the new "ETA" part, from my original comment:

In your donation's "Public Comment" field, include both a link to your reply to this thread and a note asking for a Singularity Institute employee to kindly follow that link and post a response saying that you donated. ETA: Step 2 didn't work for me, so I don't expect it to work for you. For now, I'll just believe you if you say you've donated. If you would be convinced to donate by seeing evidence that I'm not lying, let me know and I'll get you some.

Would you be willing to match my third $60 if I could give you better evidence that I actually matched the first two? If so, I'll try to get some.

comment by Alexandros · 2010-07-04T12:32:15.322Z · score: 8 (8 votes) · LW(p) · GW(p)

Is there an on-line 'rationality test' anywhere, and if not, would it be worth making one?

The idea would be to have some type of on-line questionnaire, testing for various types of biases, etc. Initially I thought of it as a way of getting data on the rationality of different demographics, but it could also be a fantastic promotional tool for LessWrong (taking a page out of the Scientology playbook tee-hee). People love tests, just look at the cottage industry around IQ-testing. This could help raise the sanity waterline, if only by making people aware of their blind spots.

There are of course the typical problems with 'putting a number on a person's rationality' and perhaps it would need some focused expertise to pull off plausibly, but I do think it's a useful thing to have around, even just to iterate on.

comment by SilasBarta · 2010-07-06T17:47:21.336Z · score: 7 (7 votes) · LW(p) · GW(p)

My kind of test would be like this:

1) Do you always seem to be able to predict the future, even as others doubt your predictions?

If they say yes ---> "That's because of confirmation bias, moron. You're not special."

comment by RobinZ · 2010-07-06T18:19:52.983Z · score: 5 (5 votes) · LW(p) · GW(p)

In their defense, it might be hindsight bias instead. :P

comment by Cyan · 2010-07-06T17:26:26.908Z · score: 5 (5 votes) · LW(p) · GW(p)

There's an online test for calibration of subjective probabilities.

comment by Alexandros · 2010-07-06T18:20:14.964Z · score: 2 (2 votes) · LW(p) · GW(p)

That was pretty awesome, thanks. Not precisely what I had in mind, but close enough to be an inspiration. Cheers.

comment by NancyLebovitz · 2010-07-04T12:39:12.044Z · score: 4 (4 votes) · LW(p) · GW(p)

The test should include questions about applying rationality in one's life, not just abstract problems.

comment by michaelkeenan · 2010-07-06T15:03:14.136Z · score: 3 (3 votes) · LW(p) · GW(p)

I would love for this to exist! I have some notes on easily-tested aspects of rationality which I will share:

The Conjunction Fallacy easily fits into a short multi-choice question.

I'm not sure what the error is called, but you can do the test described in Lawful Uncertainty:

Subjects were asked to predict whether the next card the experiment turned over would be red or blue in a context in which 70% of the cards were blue, but in which the sequence of red and blue cards was totally random. In such a situation, the strategy that will yield the highest proportion of success is to predict the more common event. For example, if 70% of the cards are blue, then predicting blue on every trial yields a 70% success rate. What subjects tended to do instead, however, was match probabilities - that is, predict the more probable event with the relative frequency with which it occurred. For example, subjects tended to predict 70% of the time that the blue card would occur and 30% of the time that the red card would occur. Such a strategy yields a 58% success rate.

You could do the positive bias test where you tell someone the triplet "2-4-6" conforms to a rule and have them figure out the rule.

You might be able to come up with some questions that test resistance to anchoring.

It might be out of scope of rationality and getting closer to an intelligence test, but you could take some "cognitive reflection" questions from here, which were discussed at LessWrong here.
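The probability-matching result quoted from Lawful Uncertainty (70% vs. 58% success) is easy to verify with a short simulation; this sketch is mine, not part of the original comment:

```python
import random

random.seed(0)
N = 100_000

# Each card is blue with probability 0.7, red otherwise.
cards = [random.random() < 0.7 for _ in range(N)]  # True = blue

# Strategy 1: always predict the majority color (blue).
always_blue = sum(cards) / N

# Strategy 2: probability matching -- predict blue on 70% of trials.
matching = sum((random.random() < 0.7) == card for card in cards) / N

print(f"always predict blue:  {always_blue:.3f}")   # close to 0.70
print(f"probability matching: {matching:.3f}")      # close to 0.58
```

The 58% figure is just 0.7 x 0.7 + 0.3 x 0.3 = 0.58: the matcher is only right when the prediction and the card happen to agree.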

comment by [deleted] · 2010-07-06T19:00:51.125Z · score: 3 (3 votes) · LW(p) · GW(p)

That Virginia Postrel article was interesting.

I was wondering why more reflective people were both more patient and less risk-averse -- she doesn't make this speculation, but it occurs to me that non-reflective people don't trust themselves and don't trust the future. If you aren't good at math and you know it, you won't take a gamble, because you know that good gamblers have to be clever. If you aren't good at predicting the future, you won't feel safe waiting for money to arrive later. Tomorrow the gods might send you an earthquake.

Risk aversion and time preference are both sensible adaptations for people who know they're not clever. People who are good at math and science don't retain such protections because they can estimate probabilities, and because their world appears intelligible and predictable.

comment by pjeby · 2010-07-06T19:20:36.941Z · score: 0 (0 votes) · LW(p) · GW(p)

non-reflective people don't trust themselves and don't trust the future

Um, that should make them more risk-averse, shouldn't it? Or do you mean reflective people don't trust themselves or the future?

comment by [deleted] · 2010-07-06T19:33:46.193Z · score: 0 (0 votes) · LW(p) · GW(p)

oops. Reflective people are LESS risk averse. Corrected above.

comment by pjeby · 2010-07-06T20:55:41.057Z · score: 2 (2 votes) · LW(p) · GW(p)

Reflective people are LESS risk averse.

That's even more confusing. I would expect a reflective person to be more self-doubtful and more risk-averse than a non-reflective person, all else being equal. But perhaps a different definition of "reflective" is involved here.

comment by gwern · 2010-07-07T02:09:55.406Z · score: 2 (2 votes) · LW(p) · GW(p)

But perhaps a different definition of "reflective" is involved here.

Possibly. A reflective person can use expected-utility to make choices that regular people would simply categorically avoid. (One might say in game-theoretic terms that a rational player can use mixed strategies, but irrational ones cannot and so can do worse. But that's probably pushing it too far.)

I recall reading one anecdote on an economics blog. The economist lived in an apartment and the nearest parking for his car was quite a ways away. There were tickets for parking on the street. He figured out the likelihood of being ticketed & the fine, and compared its expected disutility against the expected disutility of walking all the way to safe parking and back. It came out in favor of just eating the occasional ticket. His wife was horrified at him deliberately risking the fines.

Isn't this a case of rational reflection leading to an acceptance of risk which his less-reflective wife was averse to?

comment by gwern · 2010-07-09T05:16:07.765Z · score: 2 (2 votes) · LW(p) · GW(p)

In a serendipitous and quite germane piece of research, Marginal Revolution links to a study on IQ and risk-aversion:

"Our main finding is that risk aversion and impatience both vary systematically with cognitive ability. Individuals with higher cognitive ability are significantly more willing to take risks in the lottery experiments and are significantly more patient over the year-long time horizon studied in the intertemporal choice experiment."

comment by RobinZ · 2010-07-07T00:42:33.437Z · score: 2 (2 votes) · LW(p) · GW(p)

I don't believe the article says "reflective":

Professor Frederick discovered striking systematic patterns in how people answer questions about risk and patience, including those above. This short problem-solving test, he found, predicts a lot:

1) A bat and a ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the ball cost?

2) If it takes five machines five minutes to make five widgets, how long would it take 100 machines to make 100 widgets?

3) In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half the lake?

The test measures not just the ability to solve math problems but the willingness to reflect on and check your answers. (Scores have a 0.44 correlation with math SAT scores, where 1.00 would be exact.) The questions all have intuitive answers -- wrong ones.

Professor Frederick gave his "cognitive reflection test" to nearly 3,500 respondents, mostly students at universities including M.I.T., the University of Michigan and Bowling Green University. Participants also answered a survey about how they would choose between various financial payoffs, as well as time-oriented questions like how much they would pay to get a book delivered overnight.

Getting the math problems right predicts nothing about most tastes, including whether someone prefers apples or oranges, Coke or Pepsi, rap music or ballet. But high scorers -- those who get all the questions right -- do prefer taking risks.

"Even when it actually hurts you on average to take the gamble, the smart people, the high-scoring people, actually like it more," Professor Frederick said in an interview. Almost a third of high scorers preferred a 1 percent chance of $5,000 to a sure $60.

They are also more patient, particularly when the difference, and the implied interest rate, is large. Choosing $3,400 this month over $3,800 next month implies an annual discount rate of 280 percent. Yet only 35 percent of low scorers -- those who missed every question -- said they would wait, while 60 percent of high scorers preferred the later, bigger payoff.
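The 280 percent figure in the quoted passage follows from compounding the one-month premium; a quick check (my arithmetic, not the article's):

```python
# Choosing $3,400 this month over $3,800 next month forgoes an ~11.8%
# return in one month; compounded over 12 months that is roughly 280%/yr.
monthly_growth = 3800 / 3400            # about 1.1176 per month
annual_rate = monthly_growth ** 12 - 1  # compound for a year
print(f"implied annual discount rate: {annual_rate:.0%}")
```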

comment by NancyLebovitz · 2010-07-07T06:59:51.286Z · score: 2 (2 votes) · LW(p) · GW(p)

The problem with the temperament checks in the last two paragraphs is that they're still testing roughly the same thing that's tested earlier on -- competence at word problems.

And possibly interest in word problems -- I know I've seen versions of the three problems before. I wouldn't be going at them completely cold, but I wouldn't have noticed and remembered having seen them decades ago if word problems weren't part of my mental universe.

comment by gwern · 2010-07-07T02:03:17.459Z · score: 1 (1 votes) · LW(p) · GW(p)

Somewhat offtopic:

I recall reading a study once that used a test which I am almost certain was this one to try to answer the cause/correlation question of whether philosophical training/credentials improved one's critical thinking or whether those who undertook philosophy already had good critical thinking skills; when I recently tried to re-find it for some point or other, I was unable to. If anyone also remembers this study, I'd appreciate any pointers.

(About all I can remember about it was that it concluded, after using Bayesian networks, that training probably caused the improvements and didn't just correlate.)

comment by RobinZ · 2010-07-06T19:27:39.107Z · score: 0 (0 votes) · LW(p) · GW(p)

They are more risk-averse - that was a typo.

comment by Alexandros · 2010-07-06T18:21:40.779Z · score: 0 (0 votes) · LW(p) · GW(p)

Thanks for the ideas. It's good to have something concrete. Let's see how it goes.

comment by oliverbeatson · 2010-07-05T13:27:09.883Z · score: 3 (3 votes) · LW(p) · GW(p)

The test's questions may need to be considerably dynamic to avert the possibility that people condition to specific problems without shedding the entire infected heuristic. Someone who had read Less Wrong a few times, but didn't make the knowledge truly a part of them, might return false negative for certain biases while retaining those biases in real-life situations. Don't want to make the test about guessing the teacher's password.

comment by utilitymonster · 2010-07-05T13:26:59.604Z · score: 3 (3 votes) · LW(p) · GW(p)

I'd suggest starting with a list of common biases and producing a question (or a few?) for each. The questions could test the biases and you could have an explanation of why the biased reasoning is bad, with examples.

It would also be useful to group the biases together in natural clusters, if possible.

comment by [deleted] · 2010-07-06T00:56:56.226Z · score: 2 (2 votes) · LW(p) · GW(p)

Sounds like a good idea. Doesn't have to be invented from scratch; adapt a few psychological or behavioral-economics experiments. It's hard to ask about rationality in one's own life because of self-reporting problems; if we're going to do it, I think it's better to use questions of the form "Scenario: would you do a, b, c, or d?" rather than self-descriptive questions of the form "Are you more: a or b?"

comment by oliverbeatson · 2010-07-05T13:22:15.006Z · score: 0 (0 votes) · LW(p) · GW(p)

Somewhat relatedly, I considered the idea of creating a 'Bias-Quotient' type test. It could go some way to popularising rationality and bias-aversion. A lot more people like the idea of being right than are actually aware of biases and other such behavioural stuff.

I anticipate that many of these people would do the test expecting to share their score somewhere online and gain relative intellect-prestige from an expected high score. On discovering that they're more biased than they believed, I believe that, provided the test's response to a low score were engaging and informative (and not annoying and pedantic), they would on net be genuinely interested in overcoming this, with a link to Less Wrong somewhere appropriate. They might share the test regardless of their low score with an annotation such as 'check this -- very interesting!'. That's all based on my model of how a lot of aspiring intelligent people behave. It may be biased.

This could open to a lot of people the doors to beginning to overcome the failures of their visceral probability heuristics, as well as the standard set of cognitive biases. The test's questions may need to be considerably dynamic to avert the possibility that people condition to specific problems without shedding the entire infected heuristic.

comment by utilitymonster · 2010-07-03T17:28:47.255Z · score: 8 (8 votes) · LW(p) · GW(p)

Here's a puzzle I've been trying to figure out. It involves observation selection effects and agreeing to disagree. It is related to a paper I am writing, so help would be appreciated. The puzzle is also interesting in itself.

Charlie tosses a fair coin to determine how to stock a pond. If heads, it gets 3/4 big fish and 1/4 small fish. If tails, the other way around. After Charlie does this, he calls Al into his office. He tells him, "Infinitely many scientists are curious about the proportion of fish in this pond. They are all good Bayesians with the same prior. They are going to randomly sample 100 fish (with replacement) each and record how many of them are big and how many are small. Since so many will sample the pond, we can be sure that for any n between 0 and 100, some scientist will observe that n of his 100 fish were big. I'm going to take the first one that sees 25 big and team him up with you, so you can compare notes." (I don't think it matters much whether infinitely many scientists do this or just 3^^^3.)

Okay. So Al goes and does his sample. He pulls out 75 big fish and becomes nearly certain that 3/4 of the fish are big. Afterwards, a guy named Bob comes to him and tells him he was sent by Charlie. Bob says he randomly sampled 100 fish, 25 of which were big. They exchange ALL of their information.

Question: How confident should each of them be that 3/4 of the fish are big?

Natural answer: Al should remain nearly certain that ¾ of the fish are big. He knew in advance that someone like Bob was certain to talk to him regardless of what proportion of fish were big. So he shouldn't be the least bit impressed after talking to Bob.

But what about Bob? What should he think? At first glance, you might think he should be 50/50, since 50% of the fish he knows about have been big and his access to Al's observations wasn't subject to a selection effect. But that can't be right, because then he would just be agreeing to disagree with Al! (This would be especially puzzling, since they have ALL the same information, having shared everything.) So maybe Bob should just agree with Al: he should be nearly certain that ¾ of the fish are big.

But that's a bit odd. It isn't terribly clear why Bob should discount all of his observations, since they don't seem to be subject to any observation selection effect; at least from his perspective, his observations were a genuine random sample.

Things get weirder if we consider a variant of the case.

VARIANT: as before, but Charlie has a similar conversation with Bob. Only this time, he tells him he's going to introduce Bob to someone who observed exactly 75 of 100 fish to be big.

New Question: Now what should Bob and Al think?

Here, things get really weird. By the reasoning that led to the Natural Answer above, Al should be nearly certain that ¾ are big and Bob should be nearly certain that ¼ are big. But that can't be right. They would just be agreeing to disagree! (Which would be especially puzzling, since they have ALL the same information.) The idea that they should favor one hypothesis in particular is also disconcerting, given the symmetry of the case. Should they both be 50/50?

Here's where I'd especially appreciate enlightenment:

1. If Bob should defer to Al in the original case, why? Can someone walk me through the calculations that lead to this?

2. If Bob should not defer to Al in the original case, is that because Al should change his mind? If so, what is wrong with the reasoning in the Natural Answer? If not, how can they agree to disagree?

3. If Bob should defer to Al in the original case, why not in the symmetrical variant?

4. What credence should they have in the symmetrical variant?

5. Can anyone refer me to some info on observation selection effects that could be applied here?

comment by Vladimir_M · 2010-07-03T21:46:22.649Z · score: 6 (6 votes) · LW(p) · GW(p)

First, let's calculate the concrete probability numbers. If we are to trust this calculator, the probability of finding exactly 75 big fish in a sample of a hundred from a pond where 75% of the fish are big is approximately 0.09, while getting the same number in a sample from a 25% big pond has a probability on the order of 10^-25. The same numbers hold in the reverse situation, of course.
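These two numbers are easy to reproduce without the linked calculator; a minimal Python sketch (standard library only, names are ad hoc):

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of observing exactly 75 big fish in a sample of 100:
p_75_given_big = binom_pmf(75, 100, 0.75)    # pond is 75% big
p_75_given_small = binom_pmf(75, 100, 0.25)  # pond is 25% big

print(p_75_given_big)    # roughly 0.09
print(p_75_given_small)  # on the order of 10^-25
```

By symmetry, sampling 25/100 from the respective ponds gives the same two values with the roles reversed.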

Now, Al and Bob have to consider two possible scenarios:

  1. The fish are 75% big, Al got the decently probable 75/100 sample, but Bob happened to be the first scientist to get the extremely improbable 25/100 sample, and there were likely 10^(twenty-something) or so scientists sampling before Bob.

  2. The fish are 25% big, Al got the extremely improbable 75/100 big sample, while Bob got the decently probable 25/100 sample. This means that Bob is probably among the first few scientists who have sampled the pond.

So, let's look at it from a frequentist perspective: if we repeat this game many times, what will be the proportion of occurrences in which each scenario takes place?

Here we need an additional critical piece of information: how exactly was Bob's place in the sequence of scientists determined? At this point, an infinite number of scientists will give us lots of headache, so let's assume it's some large finite number N_sci, and Bob's place in the sequence is determined by a random draw with probabilities uniformly distributed over all places in the sequence. And here we get an important intermediate result: assuming that at least one scientist gets to sample 25/100, the probability for Bob to be the first to sample 25/100 is independent of the actual composition of the pond! Think of it by means of a card-drawing analogy. If you're in a group of 52 people whose names are repeatedly called out in random order to draw from a deck of cards, the proportion of drawings in which you get to be the first one to draw the ace of spades will always be 1/52, regardless of whether it's a normal deck or a non-standard one with multiple aces of spades, or even a deck of 52 such aces!
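The card-drawing symmetry is also easy to check empirically. A quick Monte Carlo sketch (an illustration only; the function and parameter names are ad hoc):

```python
import random

def prob_first_to_draw_ace(num_aces, deck_size=52, trials=50_000):
    """Monte Carlo estimate: you are one of `deck_size` people called in
    random order to each draw one card; return the fraction of trials in
    which you are the FIRST person to draw an ace."""
    hits = 0
    for _ in range(trials):
        # True marks an ace; shuffling assigns a card to each drawing position.
        deck = [True] * num_aces + [False] * (deck_size - num_aces)
        random.shuffle(deck)
        you = random.randrange(deck_size)  # your (random) position in the order
        if deck.index(True) == you:        # the first ace lands at your position
            hits += 1
    return hits / trials

# Roughly 1/52 ≈ 0.019 in every case, regardless of how many aces the deck holds:
for aces in (1, 4, 13, 52):
    print(aces, round(prob_first_to_draw_ace(aces), 4))
```

The estimate stays near 1/52 whether the deck holds one ace or fifty-two, which is exactly the symmetry claim above.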

Now compute the following probabilities:

P1 = p(75% big fish) × p(Al samples 75/100 | 75% big fish) × p(Bob gets to be the first to sample 25/100)
≈ 0.5 × 0.09 × 1/N_sci

P2 = p(25% big fish) × p(Al samples 75/100 | 25% big fish) × p(Bob gets to be the first to sample 25/100)
≈ 0.5 × 10^-25 × 1/N_sci

(We ignore the finite, but presumably negligible probabilities that no scientist samples 25/100 in either case; these can be made arbitrarily low by increasing N_sci.)

Therefore, we have P1 >> P2, i.e. the overwhelming majority of meetings between Al and Bob -- which are by themselves extremely rare, since Al usually meets someone from the other (N_sci-1) scientists -- happen under the first scenario, where Al gets a sample closely matching the actual ratio.

Now, you say:

It isn't terribly clear why Bob should discount all of his observations, since they don't seem to be subject to any observation selection effect; at least from his perspective, his observations were a genuine random sample.

Not really, when you consider repeating the experiment. For the overwhelming majority of repetitions, Bob will get results close to the actual ratio, and on rare occasions he'll get extreme outlier samples. Those repetitions in which he gets summoned to meet with Al, however, are not a representative sample of his measurements! The criteria for when he gets to meet with Al are biased towards including a greater proportion of his improbable 25/100 outlier results.

As for this:

VARIANT: as before, but Charlie has a similar conversation with Bob. Only this time, he tells him he's going to introduce Bob to someone who observed exactly 75 of 100 fish to be big.

I don't think this is a well-defined scenario. Answers will depend on the exact process by which this second observer gets selected. (Just like in the preceding discussion, the answer would be different if e.g. Bob had always been assigned the same place in the sequence of scientists.)

comment by utilitymonster · 2010-07-04T12:06:49.557Z · score: 1 (1 votes) · LW(p) · GW(p)

I was assuming Charlie would show Bob the first person to see 75/100.

Anyway, your analysis solves this as well. Being the first to see a particular result tells you essentially nothing about the composition of the pond (provided N_sci is sufficiently large that someone or other was nearly certain to see the result). Thus, each of Al and Bob should regard their previous observations as irrelevant once they learn that they were the first to get those results. Thus, they should just stick with their priors and be 50/50 about the composition of the pond.

comment by Blueberry · 2010-07-03T17:38:55.481Z · score: 3 (3 votes) · LW(p) · GW(p)

Interesting problem!

(This would be especially puzzling, since they have ALL the same information, having shared everything.)

It isn't terribly clear why Bob should discount all of his observations, since they don't seem to be subject to any observation selection effect; at least from his perspective, his observations were a genuine random sample.

I think these two statements are inconsistent. If Bob is as certain as Al that Bob was picked specifically for his result, then they do have the same information, and they should both discount Bob's observations to the same degree for that reason. If Bob doesn't trust Al completely, they don't have the same information. Bob doesn't know for sure that Charlie told Al about the selection. From his point of view, Al could be lying.

VARIANT: as before, but Charlie has a similar conversation with Bob. Only this time, he tells him he's going to introduce Bob to someone who observed exactly 75 of 100 fish to be big.

If Charlie tells both of them they were both selected, they have the same information (that both their observations were selected for that purpose, and thus give them no information) and they can only decide based on their priors about Charlie stocking the pond.

If each of them only knows the other was selected and they both trust the other one's statements, same thing. But if each puts more trust in Charlie than in the other, then they don't have the same information.

comment by prase · 2010-07-03T18:42:22.576Z · score: 1 (1 votes) · LW(p) · GW(p)

If Charlie tells both of them they were both selected, they have the same information (that both their observations were selected for that purpose, and thus give them no information) and they can only decide based on their priors about Charlie stocking the pond.

It is strange. Shall Bob discount his observation after being told that he is selected? What does it actually mean to be selected? What if Bob finds 25 big fish and then Charlie tells him that there are 3^^^3 other observers and that he (Charlie) decided to "select" one of those who observed 25 big fish and talk to him, and that Bob himself is the selected one (no later confrontation with Al)? Should this information cancel Bob's observations? If so, why?

comment by Kingreaper · 2010-07-05T14:16:34.364Z · score: 1 (1 votes) · LW(p) · GW(p)

Yes, it should, if it is known that Charlie hasn't previously "selected" any other people who got precisely 25.

The probability of being selected (taken before you have found any fish), p[chosen], is approximately equal regardless of whether there are 25% or 75% big fish.

And the probability of you being selected if you didn't find 25, p[chosen|not25], is zero.

Therefore, the probability of you being selected given that you have found 25 big fish, p[chosen|found25], is approximately equal to p[chosen]/p[found25].

The information of the fact you've been chosen directly cancels out the information from the fact you found 25 big fish.

comment by utilitymonster · 2010-07-03T19:11:21.768Z · score: 0 (0 votes) · LW(p) · GW(p)

Glad to see we're on the same page.

comment by utilitymonster · 2010-07-03T19:01:46.922Z · score: 0 (0 votes) · LW(p) · GW(p)

I'm not sure about this:

If Bob is as certain as Al that Bob was picked specifically for his result, then they do have the same information, and they should both discount Bob's observations to the same degree for that reason.

Here's why:

VARIANT 2: Charlie calls both Al and Bob into his office before the drawings take place. He explains that the first guy (other than Al) to see 25/100 big will report to Al. Bob goes out and sees 25/100 big. To his surprise, he gets called into Charlie's office and informed that he was the first to see that result.

Question: right now, what should Bob expect to hear from Al?

Intuitively, he should expect that Al had similar results. But if you're right, it would seem that Bob should discount his results once he talks to Charlie and finds out that he is the messenger. If that's right, he should have no idea what to expect Al to say. But that seems wrong. He hasn't even heard anything from Al.

If you're still not convinced, consider:

VARIANT 3: Charlie calls both Al and Bob into his office before the drawings take place. He explains that the first guy (other than Al) to see 25/100 big will win a trip to Hawaii. Bob goes out and sees 25/100 big. To his surprise, he gets called into Charlie's office and informed that he was the first to see that result.

I can see no grounds for treating VARIANT 3 differently from VARIANT 2. And it is clear that in VARIANT 3 Bob should not discount his results.

comment by RobinZ · 2010-07-03T18:10:16.942Z · score: 2 (2 votes) · LW(p) · GW(p)

One key observation is that Al made his observation after being told that he would meet someone who made a particular observation - specifically, the first person to make that specific observation, Bob. This makes Al and Bob special in different ways:

  • Al is special because he has been selected to meet Bob regardless of what he observes. Therefore his data is genuinely uncorrelated with how he was selected for the meeting.
  • Bob is special because he has been selected to meet Al because of the specific data he observes. More precisely, because he will be the first to obtain that specific result. Therefore his result has been selected, and he is only at the meeting because he happens to be the first one to get that result.

In the original case, Bob's result is effectively a lottery ticket - when he finds out from Al the circumstances of the meeting, he can simply follow the Natural Answer himself and conclude that his results were unlikely.

In the modified case, assuming perfect symmetry in all relevant aspects, they can conclude that an astronomically unlikely event has occurred and they have no net information about the contents of the pond.

comment by utilitymonster · 2010-07-03T18:47:50.510Z · score: 0 (0 votes) · LW(p) · GW(p)

Al is special because he has been selected to meet Bob regardless of what he observes. Therefore his data is genuinely uncorrelated with how he was selected for the meeting.

Not quite. He was selected to meet someone like Bob, in the sense that whoever the messenger was, he'd have seen 25/100 big. He didn't know he'd meet Bob. But he regards the identity of the messenger as irrelevant.

You can bring out the difference by considering a variant of the case in which both Al and Bob hear about Charlie's plan in advance. (In this variant, the first to see 25/100 big will visit Al.)

What is the relevance of the fact that they observed a highly improbable event?

comment by Kingreaper · 2010-07-05T13:56:11.764Z · score: 1 (1 votes) · LW(p) · GW(p)

Okay, qualitative analysis without calculations:

Let's go for a large, finite, case. Because otherwise my brain will explode.

Question 1: for any large, finite number of scientists Bob should defer MOSTLY to Alice.

First let's look at Alice: in any large finite number of scientists there is a small finite chance that NO scientist will get that result. This chance is larger in the case where 75% of the fish are big. Thus, upon finding that a scientist HAS encountered 25 fish, Alice must adjust her probability slightly towards 25% big fish.

Bob has also received several new pieces of information.

  • He was the first to find 25 big fish. P[first25|found25] approaches 1/P[found25] as you increase the number of scientists. This information almost entirely cancels out the information he already had.

  • All the information Alice had. This information therefore tips the scales.

Bob's final probability will be the same as Alice's.

Question two is N/A. I will answer question three in a reply to this, to try and avoid a massive wall of text.

comment by Kingreaper · 2010-07-05T14:01:48.167Z · score: 1 (1 votes) · LW(p) · GW(p)

Question 3: lateral answer: in the symmetrical variant, the issue of "how many people are being given other people to meet, and is this entire thing just a weird trick?" begins to arise.

In fact, the probability of it being a weird trick is going to overshadow almost any other attempt at analysis. The first person to get 25 happens to be a person who is told they will meet someone who got 75, and the person who was told they would meet the first person to get 25 happens to get 75? Massively improbable.

However, if it is not a trick, the probability is significantly in favour of it being 75% still. Alice isn't talking to Bob due to the fact she got 75, she's talking to Bob due to the fact he got the first 25. Otherwise Bob would most likely have ended up talking to someone else.

The proper response at this point for both Alice and Bob is to simply decide that it is overwhelmingly probable that Charlie is messing with them.

I can produce similar variants which don't have this issue, and they come out to 50:50. These include: Everyone is told that the first person to get 25 will meet the first person to get 75.

comment by Dagon · 2010-07-04T01:38:37.799Z · score: 1 (1 votes) · LW(p) · GW(p)

What is each of their prior probabilities for this setup being true? Bob, knowing that he was selected for his unusual results, can pretty happily disregard them. If you win a lottery, you don't update to believe that most tickets win. Bob now knows of 100 samples (Al's) that relate to the prior, and accepts them. Bob's sampling is of a different prior: coin flipped, then a specific resulting sample will be found.

If they are both selected for their results, they both go to 50/50. Neither one has non-selected samples.

comment by prase · 2010-07-03T18:34:09.224Z · score: 1 (1 votes) · LW(p) · GW(p)

Is there any particular reason why one of the actors is an AI?

comment by utilitymonster · 2010-07-03T18:42:28.892Z · score: 2 (2 votes) · LW(p) · GW(p)

Al, not AI. ("Al" as in "Alan")

comment by prase · 2010-07-03T18:49:20.308Z · score: 4 (4 votes) · LW(p) · GW(p)

Sorry. I have some Lesswrong bias.

Google statistics on Less Wrong:

  • AI (second i): 2400 hits
  • Al (second L): 318 hits (mostly in "et al." and "al Qaida", without capital A)

By the way, are these two strings distinguishable when written in the font of this site? They seem the same to me.

comment by RobinZ · 2010-07-03T18:57:04.357Z · score: 2 (2 votes) · LW(p) · GW(p)

You're right - they're pixel-for-pixel identical. That's a bit problematic.

comment by Douglas_Knight · 2010-07-04T04:32:40.577Z · score: 1 (1 votes) · LW(p) · GW(p)

Maybe that's why cryptographers say "Alice" rather than "Al."

comment by JGWeissman · 2010-07-03T18:22:49.025Z · score: 1 (3 votes) · LW(p) · GW(p)

From Bob's perspective, he was more likely to be chosen as the one to talk to Al if there are fewer scientists who observed exactly 25 big fish, which would happen if there are more big fish. So Bob should update on the evidence of being chosen.

comment by utilitymonster · 2010-07-03T19:45:24.521Z · score: 0 (0 votes) · LW(p) · GW(p)

This should be important to the finite case. The probability of being the first to see 25/100 is WAY higher (x 10^25 or so) if the lake is 3/4 full of big fish than if it is 1/4 full of big fish.

But in the infinite case the probability of being first is 0 either way...

comment by JGWeissman · 2010-07-03T20:51:42.721Z · score: 2 (2 votes) · LW(p) · GW(p)

There is a reason we consider infinities only as limits of sequences of finite quantities.

Suppose you tried to sum the log-odds evidence of the infinitely many scientists that the pond has more big fish. Well, some of them have positive evidence (summing to positive infinity), some have negative evidence (summing to negative infinity), and you can, by choosing the order of summation, get any result you want (up to some granularity) between negative and positive infinity.

You don't need anthropic tricks to make things weird if you have actual infinities in the problem.

comment by Vladimir_M · 2010-07-04T04:53:46.063Z · score: 1 (1 votes) · LW(p) · GW(p)

utilitymonster:

The probability of being the first to see 25/100 is WAY higher (x 10^25 or so) if the lake is 3/4 full of big fish than if it is 1/4 full of big fish.

Maybe I'm misunderstanding your phrasing here, but it sounds fallacious. If there's a deck of cards and you're in a group of 52 people who are called out in random order and told to pick one card each from the deck, the probability of being the first person to draw an ace is exactly the same (1/52) regardless of whether it's a normal deck or a deck of 52 aces (or even a deck with 3 out of 4 aces replaced by other cards). This result doesn't even depend on whether the card is removed or returned into the deck after each person's drawing; the conclusion follows purely from symmetry. The only special case is when there are zero aces, in which the event becomes impossible, with p=0.

Similarly, if the order in which the scientists get their samples is shuffled randomly, and we ignore the improbable possibility that nobody sees 25/100, then purely by symmetry, the probability that Bob happens to be the first one to see 25/100 is the same regardless of the actual frequency of the 25/100 results: p = 1/N(scientists).

comment by utilitymonster · 2010-07-04T11:47:04.558Z · score: 1 (1 votes) · LW(p) · GW(p)

You're right, thanks.

I was considering an example with 10^100 scientists. I thought that since there would be a lot more scientists who got 25 big in the 1/4 scenario than in the 3/4 scenario (about 9.18 × 10^98 vs. 1.279 × 10^75), you'd be more likely to be first in the 3/4 scenario. But this forgets about the probability of getting an improbable result.
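Those expected counts can be double-checked in a couple of lines (a sketch; the variable names are ad hoc):

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

N = 10**100  # number of scientists in the example

# Expected number of scientists observing exactly 25 big fish out of 100:
expected_if_quarter_big = N * binom_pmf(25, 100, 0.25)         # 1/4-big pond
expected_if_three_quarters_big = N * binom_pmf(25, 100, 0.75)  # 3/4-big pond

print(expected_if_quarter_big)          # roughly 9.18e98
print(expected_if_three_quarters_big)   # roughly 1.28e75
```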

In general, if there are N scientists, and the probability of getting some result is p, then we can expect Np scientists to get that result on average. If the order is shuffled as you suggest, then the probability of being the first to get that result is p * 1/(Np) = 1/N. So the probability of being the first to get the result is the same, regardless of the likelihood of the result (assuming someone will get the result).

EDIT: It occurs to me that I might have been thinking about the probability of being selected by Al conditional on getting 25/100. In that case, you're a lot more likely to be selected if the pond is 3/4 big than if it is 1/4 big, since WAY more people got similar results in the latter case. JGWeissman was probably thinking the same.

comment by utilitymonster · 2010-07-03T19:02:49.753Z · score: 0 (0 votes) · LW(p) · GW(p)

What effect will updating on this information have?

comment by Soki · 2010-07-03T21:07:30.126Z · score: 0 (0 votes) · LW(p) · GW(p)

First of all, I think that if Al does not see a sample, it makes the problem a bit simpler. That is, Al just tells Bob that he (Bob) is the first person who saw 25 big fish.

I think that the number N of scientists matters, because the probability that someone will come to see Al depends on that.

Let's call B the event that the lake has 75% big fish, S the opposite, and C the event that someone comes, which means that someone saw 25 big fish.

Once Al sees Bob, he updates:

P(B|C) = P(B) × P(C|B) / (1/2 × P(C|B) + 1/2 × P(C|S)).

When N tends toward infinity, both P(C|B) and P(C|S) tend toward 1, and P(B|C) tends to 1/2. But for small values of N, P(C|B) can be very small while P(C|S) will be quite close to 1. Then the fact that someone was chosen lowers the probability of having a lake with big fish.
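Plugging the binomial numbers from the problem into this update shows both limits; a short Python sketch (helper names are ad hoc, and `log1p`/`expm1` keep 1 − (1 − q)^N numerically stable):

```python
from math import comb, log1p, expm1

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

q_B = binom_pmf(25, 100, 0.75)  # P(one scientist sees 25/100 | 75% big), ~1e-25
q_S = binom_pmf(25, 100, 0.25)  # P(one scientist sees 25/100 | 25% big), ~0.09

def posterior_big(N):
    """P(B | someone saw 25/100), with N scientists and a 50/50 prior."""
    p_C_given_B = -expm1(N * log1p(-q_B))  # 1 - (1 - q_B)^N, stably computed
    p_C_given_S = -expm1(N * log1p(-q_S))
    return 0.5 * p_C_given_B / (0.5 * p_C_given_B + 0.5 * p_C_given_S)

for N in (10, 10**6, 10**26, 10**30):
    print(N, posterior_big(N))
# Small N: posterior near 0 (someone seeing 25/100 is strong evidence for the
# 25%-big pond); astronomically large N: both conditionals approach 1 and the
# posterior returns to 1/2.
```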

If N=infinity, then the probability of being chosen is 0, and I cannot use Bayes' theorem.

If Charlie keeps inviting scientists until one sees 25 big fish, then it becomes complicated, because the probability that you are invited is greater if the lake has more big fish. It may be a bit like the Sleeping Beauty or the absent-minded driver problem.

Edited for formatting and misspellings

comment by GreenRoot · 2010-07-06T15:58:18.774Z · score: 7 (7 votes) · LW(p) · GW(p)

Does anybody know what is depicted in the little image named "mini-landscape.gif" at the bottom of each top level post, or why it appears there?

comment by Kazuo_Thow · 2010-07-07T05:16:33.338Z · score: 1 (1 votes) · LW(p) · GW(p)

Part of the San Francisco skyline, maybe?

comment by cousin_it · 2010-07-06T16:12:47.722Z · score: 1 (1 votes) · LW(p) · GW(p)

Thanks. This is the first time I ever noticed this. Absolutely no idea what it is or why it's there. Talk about selective blindness!

comment by matt · 2011-05-03T10:20:54.488Z · score: 0 (0 votes) · LW(p) · GW(p)

It was an early draft of the map vs territory theme that became the site header, which we intended to finish but forgot about.

comment by Leonhart · 2010-07-03T21:58:34.359Z · score: 7 (7 votes) · LW(p) · GW(p)

I can't remember if this has come up before...

Currently the Sequences are mostly as-imported from OB; including all the comments, which are flat and voteless as per the old mechanism.

Given that the Sequences are functioning as our main corpus for teaching newcomers, should we consider doing some comment topiary on at least the most-read articles? Specifically, I wonder whether an appropriate thread structure could be inferred from context; we could also vote the comments up or down in order to make the useful-in-hindsight stuff more salient. There's a lot of great stuff in there, but IIRC some that is less good as well. Not that we should actually get rid of any of it, of course.

Having said that, I'm already thinking of reasons that this is a bad idea, but I'm throwing it out anyway. Any thoughts? Should we be treating the Sequences as a time capsule or a living textbook? (I think that those phrases have roughly equal vague positive affect :)

comment by RobinZ · 2010-07-04T02:35:16.888Z · score: 5 (5 votes) · LW(p) · GW(p)

Voting is highly recommended - please do, and feel free to reply to comments with additional commentary as well. Otherwise I'd say leave them be.

comment by JamesAndrix · 2010-07-25T20:47:08.739Z · score: 2 (2 votes) · LW(p) · GW(p)

Also related: A lot of the Sequences show marks of their origin on Overcoming Bias that could be confusing to someone who lands on that article:

Example: "Since this is an econblog... " in http://lesswrong.com/lw/j3/science_as_curiositystopper/

I think some kind of editorial note is in order here, if not a rewrite.

comment by JamesAndrix · 2010-07-05T06:46:24.067Z · score: 2 (2 votes) · LW(p) · GW(p)

Alternatively, we could repost/revisit the sequences on a schedule, and let the new posts build fresh comments.

Or even better, try to cover the same topics from a different perspective.

comment by gwern · 2010-07-05T08:10:33.211Z · score: 8 (8 votes) · LW(p) · GW(p)

I've suggested in the past that we use the old posts as filler; that is, if X days go by without something new making it to the front page, the next oldest item gets promoted instead.

Even if we collectively have nothing to say that is completely new, we likely have interesting things to say about old stuff - even if only linking it forward to newer stuff.

comment by gwern · 2010-07-06T08:00:13.221Z · score: 2 (2 votes) · LW(p) · GW(p)

So, from the 7 upboats, I take it that people in general approve of this idea. What's next? What do we do to make this a reality?

Looking back at an old post from OB (I think), like http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/ I don't see any option to promote it to the front page. I thought I had enough karma to promote other people's articles, but it looks like I may be wrong about this. Is it even currently technically possible to promote old articles?

comment by Morendil · 2010-07-06T08:16:02.938Z · score: 1 (1 votes) · LW(p) · GW(p)

What's next? What do we do to make this a reality?

Agree on the numerical value of X? LW has slowed down a bit recently, compared to relatively recent periods with frantic paces of posting; I rather appreciate the current rhythm. It would take a long period without new stuff to convince me we needed "filler" at all.

I thought I had enough karma to promote other peoples' articles

Only editors can promote. (Installing the LW codebase locally is fun: you can play at being an editor.)

comment by gwern · 2010-07-06T08:47:27.749Z · score: 2 (2 votes) · LW(p) · GW(p)

Agree on the numerical value of X?

Alright. How about a week? If nothing new has shown up for a week, then I don't think people will mind a classic. (And offhand, I'm not sure we've yet had a slack period that long.)

comment by Morendil · 2010-07-06T08:57:34.480Z · score: 0 (0 votes) · LW(p) · GW(p)

Sounds good to me.

comment by JohannesDahlstrom · 2010-07-03T09:11:56.880Z · score: 7 (7 votes) · LW(p) · GW(p)

http://www.badscience.net/2010/07/yeah-well-you-can-prove-anything-with-science/

Priming people with scientific data that contradicts a particular established belief of theirs will actually make them question the utility of science in general. So in such a near-mode situation people actually seem to bite the bullet and avoid compartmentalization in their world-view.

From a rationality point of view, is it better to be inconsistent than consistently wrong?

There may be status effects in play, of course: reporting glaringly inconsistent views to those smarty-pants boffin types just may not seem a very good idea.

comment by cupholder · 2010-07-04T08:11:18.153Z · score: 2 (2 votes) · LW(p) · GW(p)

See also 'crank magnetism.'

I wonder if this counts as evidence for my heuristic of judging how seriously to take someone's belief on a complicated scientific subject by looking to see if they get the right answer on easier scientific questions.

comment by Yoreth · 2010-07-02T07:11:38.755Z · score: 7 (7 votes) · LW(p) · GW(p)

Long ago I read a book that asked the question “Why is there something rather than nothing?” Contemplating this question, I asked “What if there really is nothing?” Eventually I concluded that there really isn’t – reality is just fiction as seen from the inside.

Much later, I learned that this idea had a name: modal realism. After I read some about David Lewis’s views on the subject, it became clear to me that this was obviously, even trivially, correct, but since all the other worlds are causally unconnected, it doesn't matter at all for day-to-day life. Except as a means of dissolving the initial vexing question, it was pointless, I thought, to dwell on this topic any more.

Later on I learned about the Cold War and the nuclear arms race and the fears of nuclear annihilation. Apparently, people thought this was a very real danger, to the point of building bomb shelters in their backyards. And yet somehow we survived, and not a single bomb was dropped. In light of this, I thought, “What a bunch of hype this all is. You doomsayers cried wolf for decades; why should I worry now?”

But all of that happened before I was born.

If modal realism is correct, then for all I know there was* a nuclear holocaust in most world-lines; it’s just that I never existed there at all. Hence I cannot use the fact of my existence as evidence against the plausibility of existential threats, any more than we can observe life on Earth and thereby conclude that life is common throughout the universe.

(*Even setting aside MWI, which of course only strengthens the point.)

Strange how abstract ideas come back to bite you. So, should I worry now?

comment by cousin_it · 2010-07-02T07:17:45.213Z · score: 5 (7 votes) · LW(p) · GW(p)

If you think doom is very probable and we only survived due to the anthropic principle, then you should expect doom any day now, and every passing day without incident should weaken your faith in the anthropic explanation.

If you think all possible worlds exist, then you should expect our small bubble of ordered existence to erupt into chaos any day now, because way more copies of it are contained in chaotic worlds than in ordered ones. Every day you spend without spontaneously turning into a pheasant should weaken your faith in the multiverse.

(These arguments are not standard LW fare, but I've floated them here before and they seem to work okay.)

comment by JoshuaZ · 2010-07-02T12:30:02.930Z · score: 5 (5 votes) · LW(p) · GW(p)

If you think all possible worlds exist, then you should expect our small bubble of ordered existence to erupt into chaos any day now, because way more copies of it are contained in chaotic worlds than in ordered ones. Every day you spend without spontaneously turning into a pheasant should weaken your faith in the multiverse.

This depends on which level of the Tegmark classification you are talking about. Level III, for example, quantum MWI, gives very low probabilities for things like turning into a pheasant, since those events, while possible, have tiny chances of occurring. Level IV, the ultimate ensemble, which seems to be the main emphasis of the poster above, may have your argument as a valid rebuttal, but since level IV requires consistency, it would require a much better understanding of what consistent rule systems look like. And it may be that the vast majority of those universes don't have observers, so we actually would need to look at consistent rule systems with observers. Without a lot more information, it is very hard to examine the expected probabilities of weird events in a level IV setting.

comment by cousin_it · 2010-07-02T19:10:01.452Z · score: 5 (7 votes) · LW(p) · GW(p)

since level IV requires consistency, it would require a much better understanding of what consistent rule systems look like

Wha? Any sequence of observations can be embedded in a consistent system that "hardcodes" it.

comment by JoshuaZ · 2010-07-04T14:34:28.455Z · score: 1 (1 votes) · LW(p) · GW(p)

Yeah, that's a good point. Hardcoding complicated changes is consistent. So any such argument of this form about level IV fails. I withdraw that claim.

comment by DanielVarga · 2010-07-04T20:15:54.856Z · score: 0 (0 votes) · LW(p) · GW(p)

Tegmark level IV is a very useful tool to guide one's intuitions, but in the end, the only meaningful question about Tegmark IV universes is this: Based on my observations, what is the relative probability that I am in this one rather than that one? And this, of course, is just what scientists do anyway, without citing Tegmark each time. Hardcoded universes are easily dealt with by the scientists' favorite tool, Occam's Razor.

comment by Vladimir_Nesov · 2010-07-03T06:25:56.165Z · score: 1 (1 votes) · LW(p) · GW(p)

Consistency is about logics, while Tegmark's madness is about mathematical structures. Whenever you can model your own actions (decision-making algorithm) using huge complicated mathematical structures, you can also do so with relatively simple mathematical structures constructed from the syntax of your algorithm (Löwenheim-Skolem type constructions). There is no fact of the matter about whether a given consistent countable first order theory, say, talks about an uncountable model or a countable one.

comment by Vladimir_Nesov · 2010-07-02T12:08:46.941Z · score: 3 (3 votes) · LW(p) · GW(p)

If you think all possible worlds exist, then you should expect our small bubble of ordered existence to erupt into chaos any day now

Not if you interpret your preference about those worlds as assigning most of them low probability, so that only the ordered ones matter.

comment by Jordan · 2010-07-04T07:56:36.995Z · score: 0 (0 votes) · LW(p) · GW(p)

I don't follow. Many low probability and unordered worlds are highly preferable. Conversely, many high probability worlds are not. I don't see a correlation.

comment by Vladimir_Nesov · 2010-07-04T08:06:28.966Z · score: 0 (0 votes) · LW(p) · GW(p)

It's a simplification. If preference satisfies the expected utility axioms, it can be decomposed into probability and utility, and in this sense probability is a component of preference and shows how much you care about a given possibility. This doesn't mean that utility is high on those possibilities as well, or that the possibilities with high utility will have high probability. See my old post for more on this.

comment by Roko · 2010-07-05T19:49:25.116Z · score: 1 (1 votes) · LW(p) · GW(p)

I understand this move but I don't like it. I think that in the fullness of time, we'll see that probability is not a kind of preference, and there is a "fact of the matter" about the effects that actions have, i.e. that reality is objective not subjective.

But I don't like arguments from subjective anticipation: subjective anticipation is a projective error that humans make, as many-worlds QM has already proved.

Indeed, MW QM combined with Robin's Mangled Worlds is a good microcosm for how the multiverse at other levels ought to turn out: subjective anticipation out, but still objective facts about what happens.

I note that since the argument from subjective anticipation is invalid, there is still the possibility that we live in an infinite structure with no canonical measure, in which case Vladimir would be right.

comment by Vladimir_Nesov · 2010-07-05T20:25:28.455Z · score: 1 (1 votes) · LW(p) · GW(p)

I understand this move but I don't like it. I think that in the fullness of time, we'll see that probability is not a kind of preference, and there is a "fact of the matter" about the effects that actions have, i.e. that reality is objective not subjective.

I think that probability is a tool for preference, but I also think that there is a fact of the matter about the effects of actions, and that reality of that effect is objective. This effect is at the level of the sample space (perhaps based on all mathematical structures), of "brittle math", while the ways you measure the "probability" of a given (objective) event depend on what preference (subjective goals) you are trying to optimize for.

comment by cousin_it · 2010-07-02T14:21:31.376Z · score: 0 (0 votes) · LW(p) · GW(p)

To rephrase, "unless you interpret your preference as denying the multiverse hypothesis" :-)

comment by Vladimir_Nesov · 2010-07-02T16:41:04.038Z · score: 1 (1 votes) · LW(p) · GW(p)

You don't have to assign exactly no value to anything, which makes all structures relevant (to some extent).

comment by Mitchell_Porter · 2010-07-07T10:30:13.869Z · score: 0 (0 votes) · LW(p) · GW(p)

If you think doom is very probable and we only survived due to the anthropic principle, then you should expect doom any day now, and every passing day without incident should weaken your faith in the anthropic explanation.

What if you can see the doom building up, with every passing day? :-)

If you think all possible worlds exist, then you should expect our small bubble of ordered existence to erupt into chaos any day now, because way more copies of it are contained in chaotic worlds than in ordered ones.

I think this one is deeper. It is a valid criticism of quantum MWI, for example. If all worlds exist equally then naively all this structure around us should dissolve immediately, because most physical configurations are just randomness. Thus the quest to derive the Born probabilities...

I don't believe MWI as an explanation of QM anyway, so no big deal. But I am interested in "level IV" thinking - the idea that "all possible worlds exist", according to some precise notion of possibility. And yes, if you think any sequence of events is equally possible and hence (by the hypothesis) equally real, then what we actually see happening looks exceedingly improbable.

One pragmatist response to this is just to say "only orderly worlds are possible", without giving a further reason. If you actually had an "orderly multiverse" theory that gave correct predictions, you would have some justification for doing this, though eventually you'd still want to know why only the orderly worlds are real.

A more metaphysical response would try to provide a reason why all the real worlds are orderly. For example: Anything that exists in any world has a "nature" or an "essence", and causality is always about essences, so it's just not true that any string of events can occur in any world. Any event in any world really is a necessary product of the essences of the earlier events that cause it, and the appearance of randomness only happens under special circumstances (e.g. brains in vats) which are just uncommon in the multiverse. There are no worlds where events actually go haywire because it is logically impossible for causality to switch off, and every world has its own internal form of causality.

Then there's an anthropic variation on the metaphysical response, where you don't say that only orderly worlds are possible, but you give some reason why consciousness can only happen in orderly worlds (e.g. it requires causality).

comment by ShardPhoenix · 2010-07-03T01:50:07.438Z · score: 0 (0 votes) · LW(p) · GW(p)

If you think all possible worlds exist, then you should expect our small bubble of ordered existence to erupt into chaos any day now, because way more copies of it are contained in chaotic worlds than in ordered ones. Every day you spend without spontaneously turning into a pheasant should weaken your faith in the multiverse.

It's not clear to me that this is correct. Also, even if it is, then coherent memories (like what we're using to judge this whole scenario) only exist in worlds where this either hasn't happened yet or won't ever.

comment by wedrifid · 2010-07-03T04:17:59.428Z · score: 1 (1 votes) · LW(p) · GW(p)

We use markdown syntax. An > at the start of the paragraph will make it a quote,

like so.

comment by ShardPhoenix · 2010-07-03T10:09:41.315Z · score: 0 (0 votes) · LW(p) · GW(p)

I know, I was just being too lazy to look up the syntax :/.

comment by apophenia · 2010-07-03T22:28:18.080Z · score: 2 (2 votes) · LW(p) · GW(p)

If you click "Help" when writing a comment, it will appear in a handy box right next to where you are writing.

comment by Roko · 2010-07-02T19:40:34.284Z · score: 0 (0 votes) · LW(p) · GW(p)

expect

What is this subjective expectation that you speak of?

comment by NancyLebovitz · 2010-07-02T10:40:11.693Z · score: 0 (0 votes) · LW(p) · GW(p)

From what I've heard, there was a lot of talk about bomb shelters, but very few of them were built.

comment by [deleted] · 2010-07-03T16:21:19.365Z · score: 0 (0 votes) · LW(p) · GW(p)

Well, we even had a law which required you to have one if you built a new house (see an article in German). This law has long since been repealed, but according to the link above, there were 2.5 million such rooms for a population of just 8 million people... Please note that in case of a real emergency most of those would probably have been extremely under-equipped. So: built - yes, correctly - no, and nowadays not even thought about.

comment by NancyLebovitz · 2010-07-03T16:32:26.404Z · score: 0 (0 votes) · LW(p) · GW(p)

What I'd heard was a bit on NPR which claimed there were only a handful of bomb shelters built in the US, and I admit I wasn't thinking about the rest of the world.

I'm probably born a little late (1953) for the height of bomb-shelter building, but I've never heard second- or third-hand about actual bomb shelters in the US, and I think I would have (as parts of basements or somesuch) if they were at all common.

My impression is that the real attitude wasn't so much that a big nuclear war was unlikely as that people thought that if it happened, it wouldn't be worth living through.

comment by [deleted] · 2010-07-07T20:27:22.934Z · score: 6 (8 votes) · LW(p) · GW(p)

Here are some assumptions one can make about how "intelligences" operate:

  1. An intelligent agent maintains a database of "beliefs"
  2. It has rules for altering this database according to its experiences.
  3. It has rules for making decisions based on the contents of this database.

and an assumption about what "rationality" means:

  4. Whether or not an agent is "rational" depends only on the rules it uses in 2. and 3.

I have two questions:

I think that these assumptions are implicit in most and maybe all of what this community writes about rationality, decision theory, and similar topics. Does anyone disagree? Or agree?

Have assertions 1-4, or something similar to them, been made explicit and defended or criticized anywhere on this website?

The background is that I've been kicking around the idea that a focus on "beliefs" is misleading when modeling intelligence or intelligent agents.

This is my first post, please tell me if I'm misusing any jargon.

comment by whpearson · 2010-07-07T22:50:39.704Z · score: 1 (1 votes) · LW(p) · GW(p)

This also reminded me that I wanted to go through the Intentional Stance by Daniel Dennett and find the good bits. Also worth reading is the wiki page.

I think he would state that the model you describe comes from folk psychology.

A relevant passage

"We have all learned to take a more skeptical attitude to the dictates of folk physics, including those robust deliverances that persist in the face of academic science. Even the "undeniable introspective fact" that you can feel "centrifugal force" cannot save it, except for the pragmatic purposes of rough-and-ready understanding it has always served. The delicate question of just how we ought to express our diminished allegiance to the categories of folk physics has been a central topic in philosophy since the seventeenth century, when Descartes, Boyle and others began to ponder the meta-physical status of color, felt warmth, and other "secondary qualities". These discussions, while cautiously agnostic about folk physics have traditionally assumed as unchallenged the bedrock of folk-psychological counterpart categories: conscious perceptions of color, sensations of warmth, or beliefs about the external "world"."

In lesswrong people do tend to discard the perception and sensation parts of folk psychology, but keep the belief and goal concepts.

You might have trouble convincing people here, mainly because people are interested in what should be done by an intelligence, rather than what is currently done by humans. It is a lot harder to find evidence for what ought to be done than for what is done.

comment by [deleted] · 2010-07-08T12:26:13.351Z · score: 0 (0 votes) · LW(p) · GW(p)

Relevant and new-to-me, thanks.

I'd be interested to hear examples of things, related to this discussion, that people here would not be easily convinced of.

comment by whpearson · 2010-07-08T16:06:53.924Z · score: 1 (1 votes) · LW(p) · GW(p)

The problem I have found is determining what people accept as evidence about "intelligences".

If everyone thought intelligence was always somewhat humanlike (i.e. that if we can't localise beliefs in humans we shouldn't try to build AI with localised beliefs) then evidence about humans would constitute evidence about AI somewhat. In this case things like blindsight (mentioned in The Intentional Stance) would show that beliefs were not easily localised.

I think it is fairly uncontroversial on LessWrong that beliefs aren't stored in one particular place in humans. However, because people are aware of the limitations of humans, they think that they can design AI without those flaws, so they do not constrain their designs to be humanlike, and that allows them to slip localised/programmatic beliefs back in.

To convince them that localised beliefs were incorrect/unworkable for all intelligences would require a constructive theory of intelligence.

Does that help?

comment by whpearson · 2010-07-07T21:43:51.452Z · score: -1 (3 votes) · LW(p) · GW(p)

I'm not so interested in decision theory. I criticised it a bit here

Edit: To give a bit more background to how I view rationality: An intelligence is a set of interacting programs some of which have control of the agent at any one time. The rationality of the agent depends upon the set of programs in control of the agent. The relationship between the set of programs and rationality of the system is somewhat environmentally specific.

comment by Roko · 2010-07-05T10:24:23.350Z · score: 6 (6 votes) · LW(p) · GW(p)

Robert Ettinger's surprise at the incompetence of the establishment:

Robert Ettinger waited expectantly for prominent scientists or physicians to come to the same conclusion he had, and to take a position of public advocacy. By 1960, Ettinger finally made the scientific case for the idea, which had always been in the back of his mind. Ettinger was 42 years old and said he was increasingly aware of his own mortality.[7] In what has been characterized as an historically important mid-life crisis,[7] Ettinger summarized the idea of cryonics in a few pages, with the emphasis on life insurance, and sent this to approximately 200 people whom he selected from Who's Who in America.[7] The response was very small, and it was clear that a much longer exposition was needed— mostly to counter cultural bias. Ettinger correctly saw that people, even the intellectually, financially and socially distinguished, would have to be educated into understanding his belief that dying is usually gradual and could be a reversible process, and that freezing damage is so limited (even though fatal by present criteria) that its reversibility demands relatively little in future progress.

Ettinger soon made an even more troubling discovery, principally that "a great many people have to be coaxed into admitting that life is better than death, healthy is better than sick, smart is better than stupid, and immortality might be worth the trouble!"

Maybe if I publish a clear scientifically minded book they'll listen?

Following publication of The Prospect of Immortality (1962) Robert Ettinger again waited for prominent scientists, industrialists, or others in authority to see the wisdom of his idea and begin implementing it.

He is still waiting!

I write this because a prominent claim of the SIAI founders (Vassar especially) is that we vastly overestimate the competence of both society in general, and of the elites who run it.

Another example along the same lines is the relative non-response to the publication of Nanosystems, especially the National Nanotech Initiative fiasco.

comment by Mitchell_Porter · 2010-07-05T11:25:53.793Z · score: 3 (3 votes) · LW(p) · GW(p)

There are many momentous issues here.

First: I think a historical narrative can be constructed, according to which a future unexpected in, say, 1900 or even in 1950 slowly comes into view, and in which there are three stages characterized by an extra increment of knowledge. The first increment is cryonics, the second increment is nanotechnology, and the third increment is superintelligence. This is a highly selective view; if you were telling the history of futurist visions in general, you would need to include biotechnology, robotics, space travel, nuclear power, even aviation, and many other things.

In any case, among all the visions of the future that exist out there, there is definitely one consisting of cryonics + nanotechnology + superintelligence. Cryonics is a path from the present to the future, nanotechnology will make the material world as pliable as the bits in a computer, and superintelligence guided by some utility function will rule over all things.

Among the questions one might want answered:

1) Is this an accurate vision of the future?

2) Why is it that still so few people share this perspective?

3) Is that a situation which ought to be changed, and if so, how could it be changed?

Question 1 is by far the most discussed.

Question 2 is mostly pondered by the few people who have answered 'yes' to question 1, and usually psychological answers are given. I think that a certain type of historical thinking could go a long way towards answering question 2, but it would have to be carried out with care, intelligence, and a will to objectivity.

This is what I have in mind: You can find various histories of the world which cover the period from 1960. Most of them will not mention Ettinger's book, or Eric Drexler's, or any of the movements to which they gave rise. To find a history which notices any of that, you will have to specialize, e.g. to a history of American technological subcultures, or a history of 20th-century futurological enthusiasms. An overkill history-based causal approach to question 2 would have a causal model of world history since 1960, a causal model of those small domains in which Ettinger and Drexler's publications had some impact, and finally it would seek to understand why the causal processes of the second sort remained invisible on the scale of the first.

Question 3 is also, intrinsically, a question which will mostly be of interest to the small group who have already answered 'yes' to question 1.

comment by Roko · 2010-07-05T11:49:04.174Z · score: 2 (2 votes) · LW(p) · GW(p)

You can find various histories of the world which cover the period from 1960. Most of them will not mention Ettinger's book, or Eric Drexler's, or any of the movements to which they gave rise

On the other hand, does anyone who has seriously thought about the issue expect nanotech to not be incredibly important in the long-term? It seems that there is a solid sceptical case that nano has been overhyped in the short term, perhaps even by Drexler.

But who will step forward having done a thorough analysis and say that humanity will thrive for another millennium without developing advanced nanotech?

comment by cupholder · 2010-07-05T11:32:40.350Z · score: 2 (2 votes) · LW(p) · GW(p)

A good illustration of multiple discovery (not strictly 'discovery' in this case, but anyway) too:

While Ettinger was the first, most articulate, and most scientifically credible person to argue the idea of cryonics,[citation needed] he was not the only one. In 1962, Evan Cooper had authored a manuscript entitled Immortality, Scientifically, Physically, Now under the pseudonym "N. Durhing".[8] Cooper's book contained the same argument as did Ettinger's, but it lacked both scientific and technical rigor and was not of publication quality.[citation needed]

comment by JohannesDahlstrom · 2010-07-03T10:46:00.365Z · score: 6 (6 votes) · LW(p) · GW(p)

I'm a bit surprised that nobody seems to have brought up The Salvation War yet. [ETA: direct links to first and second part]

It's a Web Original documentary-style techno-thriller, based around the premise that humans find out that a Judeo-Christian Heaven and (Dantean) Hell (and their denizens) actually exist, but it turns out there's nothing supernatural about them, just some previously-unknown/unapplied physics.

The work opens in medias res into a modern-day situation where Yahweh has finally gotten fed up with those hairless monkeys no longer being the blind obedient slaves of yore, making a Public Service Announcement that Heaven's gates are closed and Satan owns everyone's souls from now on.

When commanded to lie down and die, some actually do. The majority of humankind instead does the logical thing and unites to declare war on Heaven and Hell. Hilarity ensues.

The work is rather saturated with WarmFuzzies and AwesomeMoments appealing to the atheist/rationalist crowd, and features some very memorable characters. It's a work in progress, with the second part of the trilogy now nearing its finale.

comment by cousin_it · 2010-07-05T13:46:28.944Z · score: 7 (9 votes) · LW(p) · GW(p)

Okay, I've read through the whole thing so far.

This is not rationalist fiction. This is standard war porn, paperback thriller stuff. Many many technical descriptions of guns, rockets, military vehicles, etc. Throughout the story there's never any real conflict, just the American military (with help from the rest of the world) steamrolling everything, and the denizens of Heaven and Hell admiring the American way of life. It was well-written enough to hold my attention like a can of Pringles would, but I don't feel enriched by reading it.

comment by NancyLebovitz · 2010-07-05T15:58:44.750Z · score: 2 (2 votes) · LW(p) · GW(p)

I've only read about a chapter and a half, and may not read any more of it, but there's one small rationalist aspect worthy of note-- the author has a very solid grasp of the idea that machines need maintenance.

comment by CannibalSmith · 2010-07-06T13:21:33.430Z · score: 1 (1 votes) · LW(p) · GW(p)

Here's a tiny bit of rationality:

The new arrivals [soldiers who'd died and gone to hell only to keep fighting] didn’t fight the demon way, for pride and honor. Rahab realized they fought for other reasons entirely, they fought to win and woe to anybody who got in their way.

comment by cousin_it · 2010-07-06T14:50:11.850Z · score: 2 (4 votes) · LW(p) · GW(p)

If your enemy is much weaker than you, it may be rational to fight to win. If you are equals, ritualized combat is rational from a game-theoretic perspective; that's why it is so widespread in the animal kingdom, where evolutionary dynamics make populations converge on an equilibrium of behavior, and that's why it was widespread in the medieval times (which Hell is modeled on).

So the passage you quoted doesn't work as a general statement about rationality, but it works pretty well as praise of America. Right now, America is the only country on Earth that can "fight to win". Other countries have to fight "honorably" lest America deny them their right of conquest.
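The game-theoretic claim here is essentially the classic Hawk-Dove model from evolutionary game theory. A minimal sketch of why escalating only some of the time is an equilibrium (the payoff values V and C below are purely illustrative assumptions, not anything from the thread):

```python
def hawk_dove_payoffs(V, C):
    # Row player's payoffs in the classic Hawk-Dove game.
    # V = value of the contested resource, C = cost of an all-out fight (C > V).
    return {
        ("H", "H"): (V - C) / 2,  # both escalate: split the prize, pay the cost
        ("H", "D"): V,            # escalate vs. yielder: take everything
        ("D", "H"): 0.0,          # yield to an escalator: get nothing
        ("D", "D"): V / 2,        # both posture: split the prize peacefully
    }

def mixed_ess(V, C):
    # At the mixed evolutionarily stable strategy, Hawk is played
    # with probability V/C: the more ruinous total war is relative
    # to the prize, the more combat stays ritualized.
    return V / C

def expected_payoff(my_hawk_prob, pop_hawk_prob, payoffs):
    # Expected payoff of escalating with probability my_hawk_prob
    # against a population that escalates with probability pop_hawk_prob.
    total = 0.0
    for me, p_me in (("H", my_hawk_prob), ("D", 1 - my_hawk_prob)):
        for them, p_them in (("H", pop_hawk_prob), ("D", 1 - pop_hawk_prob)):
            total += p_me * p_them * payoffs[(me, them)]
    return total

V, C = 2.0, 10.0              # prize worth 2, all-out fight costs 10
p = mixed_ess(V, C)           # escalate only 20% of the time
pay = hawk_dove_payoffs(V, C)
# At the ESS, pure Hawk and pure Dove earn the same against the
# population, so neither can invade; restraint is self-sustaining.
print(p, expected_payoff(1.0, p, pay), expected_payoff(0.0, p, pay))
```

Note that when the cost of total war dwarfs the prize (as with nuclear arsenals), V/C is small and the equilibrium is almost entirely ritualized, which is the point being argued over in the replies below.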

comment by wedrifid · 2010-07-06T15:34:39.704Z · score: 2 (2 votes) · LW(p) · GW(p)

If your enemy is much weaker than you, it may be rational to fight to win. If you are equals, ritualized combat is rational from a game-theoretic perspective;

Right now, America is the only country on Earth that can "fight to win".

The wars America fights, the wars all countries fight, are ritualised combat. We send our soldiers and bombers (of either the plane or suicide variety), you send your soldiers and bombers. One side loses more soldiers, the other side loses more money. If America or any of its rivals fought to win, their respective countries would be levelled.

The ritualised combat model you describe matches modern warfare perfectly and the very survival of the USA depends on it.

comment by cousin_it · 2010-07-06T16:04:47.804Z · score: 0 (2 votes) · LW(p) · GW(p)

America's wars change regimes in other countries. This ain't ritualized combat.

comment by wedrifid · 2010-07-07T04:46:45.270Z · score: 3 (5 votes) · LW(p) · GW(p)

America's wars change regimes in other countries. This ain't ritualized combat.

That's exactly the purpose of ritualised combat. Change regimes without total war. Animals (including humans) change their relative standing in the tribe. Coalitions of animals use ritualised combat to change intratribal regimes. Intertribal combat often has some degree of ritual element, although this of course varies based on the ability of tribes to 'cooperate' in combat without total war.

In international battles there have been times where the combat has been completely non-ritualised and brutal. But right now if combat was not ritualised countries would be annihilated by nuclear battles. That's the whole point of ritual combat. Fight with the claws retracted, submit to the stronger party without going for the kill. Because if powerful countries with current technology levels, or powerful animals, fight each other without restriction both will end up crippled. That can either mean infections from relatively minor flesh wounds in a fight to the death or half your continent being reduced to an uninhabited and somewhat radioactive wasteland in a war you 'won'.

Other countries have to fight "honorably" lest America deny them their right of conquest.

The point I argue here is that America is allowed to make such interference only because its rivals choose to cooperate in the 'ritualised combat' prisoners dilemma. They accept America's dominance in conventional warfare because total war would result in mutual destruction. In a world where multiple countries have the ability to destroy each other (or, if particularly desperate, all mammalian life on the planet) combat is necessarily ritualised or the species goes extinct.

This ain't ritualized combat.

You misunderstand the purpose of ritualised combat. In animals this isn't the play fighting that pups do to practice fighting. This is real, regime-changing, win-or-don't-get-laid-till-later-and-get-fewer-resources combat.

(ETA: I note that we are arguing here over how to apply an analogy. Since analogies are more useful as an explanatory tool and an intuition pump than a tool for argument it is usually unproductive to delve too deeply into how they 'correctly' apply. It is better to directly discuss the subject. I would be somewhat surprised if cousin_it and I disagree to such an absolute degree on the actual state of the current global military/political situation.)

comment by cousin_it · 2010-07-07T05:08:03.952Z · score: 1 (3 votes) · LW(p) · GW(p)

You seem to be living on an alternate Earth where America fights ritualized wars against countries that have nuclear weapons. In our world America attacks much weaker countries whose leaders have absolutely no reason to fight with claws retracted, because if they lose they get hanged like Saddam Hussein or die in prison like Milosevic. No other country does that today.

comment by Douglas_Knight · 2010-07-07T06:09:39.208Z · score: 0 (0 votes) · LW(p) · GW(p)

whose leaders have absolutely no reason to fight with claws retracted

Countries aren't that coherent and certainly aren't their leaders. I don't think the analogy makes sense either way.

comment by wedrifid · 2010-07-07T05:41:52.506Z · score: 0 (0 votes) · LW(p) · GW(p)

You seem to be living on an alternate Earth

It would seem that I need to retract the last sentence in my ETA.

comment by cousin_it · 2010-07-06T14:35:22.407Z · score: 0 (0 votes) · LW(p) · GW(p)

It's funny. When describing the history of Hell, the author unwittingly explains the benefits of ritualized warfare while painting them as stupid. It seems he doesn't quite grasp how ritualized combat can be game-theoretically rational and why it occurs so often in the animal kingdom. Fighting to win is only rational when you're confident enough that you will win.

comment by cousin_it · 2010-07-03T19:09:24.057Z · score: 3 (5 votes) · LW(p) · GW(p)

Why did you link to TV Tropes instead of the thing itself?

comment by JohannesDahlstrom · 2010-07-04T09:30:53.386Z · score: 0 (0 votes) · LW(p) · GW(p)

A good question.

I ended up writing a longer post than I expected; originally I thought I'd just utilize the TV Tropes summary/review by linking there.

Also, the Tropes page provides links to both of the parts, and to both the original threads (with discussion) and the cleaned-up versions (story only). I'll edit the post to include direct links.

comment by Bongo · 2010-07-03T23:18:22.683Z · score: 1 (1 votes) · LW(p) · GW(p)

Direct link to story

comment by SilasBarta · 2010-07-01T21:28:47.160Z · score: 6 (10 votes) · LW(p) · GW(p)

Okay, here's something that could grow into an article, but it's just rambling at this point. I was planning this as a prelude to my ever-delayed "Explain yourself!" article, since it eases into some of the related social issues. Please tell me what you would want me to elaborate on given what I have so far.


Title: On Mechanizing Science (Epistemology?)

"Silas, there is no Bayesian ‘revival’ in science. There is one amongst people who wish to reduce science to a mechanical procedure." – Gene Callahan

“It is not possible … to construct a system of thought that improves on common sense. … The great enemy of the reservationist is the automatist[,] who believes he can reduce or transcend reason. … And the most pernicious [of them] are algorithmists, who believe they have some universal algorithm which is a drop-in replacement for any and all cogitation.” – "Mencius Moldbug"

And I say: What?

Forget about the issue of how many Bayesians are out there – I’m interested in the other claim. There are two ways to read it, and I express those views here (with a bit of exaggeration):

View 1: “Trying to come up with a mechanical procedure for acquiring knowledge is futile, so you are foolish to pursue this approach. The remaining mysterious aspects of nature are so complex you will inevitably require a human to continually intervene to ‘tweak’ the procedure based on human judgment, making it no mechanical procedure at all.”

View 2: “How dare, how dare those people try to mechanize science! I want science to be about what my elite little cadre has collectively decided is real science. We want to exercise our own discretion, and we’re not going to let some Young Turk outsiders upstage us with their theories. They don’t ‘get’ real science. Real science is about humans, yes, humans making wise, reasoned judgments, in a social context, where expertise is recognized and rewarded. A machine necessarily cannot do that, so don’t even try.”

View 1, I find respectable, even as I disagree with it.

View 2, I hold in utter contempt.

comment by Vladimir_M · 2010-07-02T07:32:09.285Z · score: 8 (8 votes) · LW(p) · GW(p)

I think there is an additional interpretation that you're not taking into account, and an eminently reasonable one.

First, to clarify the easy question: unless you believe that there is something mysteriously uncomputable going on in the human brain, the question of whether science can be automated in principle is trivial. Obviously, all you'd need to do is to program a sufficiently sophisticated AI, and it will do automated science. That much is clear.

However, the more important question is -- what about our present abilities to automate science? By this I mean both the hypothetical methods we could try and the ones that have actually been tried in practice. Here, at the very least, a strong case can be made that the 20th century attempt to transform science into a bureaucratic enterprise that operates according to formal, automated procedures has largely been a failure. It has undoubtedly produced an endless stream of cargo-cult science that satisfies all these formal bureaucratic procedures, but is nevertheless worthless -- or worse. At the same time, it's unclear how much valid science is coming out except for those scientists who have maintained a high degree of purely informal and private enthusiasm for discovering truth (and perhaps also those in highly practical applied fields where the cash worth of innovations provides a stringent reality check).

This is how I read Moldbug: in many important questions, we can only admit honestly that we still have no way to find answers backed by scientific evidence in any meaningful sense of the term, and we have to grapple with less reliable forms of reasoning. Yet, there is the widespread idea that if only the proper formal bureaucratic structures are established, we can get "science" to give us answers about whichever questions we find interesting, and we should guide our lives and policies according to the results of such "science." It's not hard to see how this situation can give birth to a diabolical network of perverse incentives, producing endless reams of cargo-cult scientific work published by prestigious outlets and venerated as "science" by the general public and the government.

The really scary prospect is that our system of government might lead us to a complete disaster guided by policy prescriptions coming from this perverted system that has, arguably, already become its integral part.

comment by SilasBarta · 2010-07-02T16:24:38.946Z · score: 3 (5 votes) · LW(p) · GW(p)

Okay, thanks, that tells me what I was looking for: clarification of what it is I'm trying to refute, and what substantive reasons I have to disagree.

So "Moldbug" is pointing out that the attempt to make science into an algorithm has produced a lot of stuff that's worthless but adheres to the algorithm, and we can see this with common sense, however less accurate it might be.

The point I would make in response (and elaborate on in the upcoming article) is that this is no excuse not to look inside the black box that we call common sense and understand why it works, and what about it could be improved, while the Moldbug view asks that we not do it. As E. T. Jaynes says in chapter 1 of Probability Theory: The Logic of Science, the question we should ask is: if we were going to make a robot that infers everything we should infer, what constraints would we place on it?

This exercise is not just some attempt to make robots "as good as humans"; rather, it reveals why that-which-we-call "common sense" works in the first place, and exposes more general principles of superior inference.

In short, I claim that we can have Level 3 understanding of our own common sense. That, contra Moldbug, we can go beyond just being able to produce its output (Level 1), but also know why we regard certain things as common sense but not others, and be able to explain why it works, for what domains, and why and where it doesn't work.

This could lead to a good article.

comment by Tyrrell_McAllister · 2010-07-08T19:46:39.060Z · score: 2 (2 votes) · LW(p) · GW(p)

View 2: “How dare, how dare those people try to mechanize science! . . .

The pithy reply would be that science already is mechanized. We just don't understand the mechanism yet.

comment by SilasBarta · 2010-07-08T20:01:48.652Z · score: 0 (0 votes) · LW(p) · GW(p)

Is that directed at, or intended to be any more convincing to those holding Callahan's view in the link? I'm not trying to criticize you, I just want to make sure you know the kind of worldview you're dealing with here. If you'll remember, this is the same guy who categorically rejects the idea that anything human-related is mechanized. (Recent blog post about the issue... he's proud to be a "Silas-free" zone now.)

On a slightly related note, I was thinking about what analogous positions would look like, and I thought of this one for comparison: "There is no automatist revival in industry. There is one amongst people who wish to reduce every production process into a mechanical procedure."

comment by Morendil · 2010-07-08T20:16:58.559Z · score: 2 (2 votes) · LW(p) · GW(p)

About "Silas-free zones" you blogged:

So why would this Serious Thinker feel the need to reject, on sight, my comments from appearing, and then advertise it?

You don't think your making a horrible impression on people you argue with may have anything to do with it? ;)

Seriously, that would be my first hypothesis. "You don't catch flies with vinegar." Go enough out of your way to antagonize people even as you're making strong rebuttals to their weak arguments, and you're giving them an easy way out of listening to you.

The nicer you are, the harder you make it for others to dismiss you as an asshole. I'd count that as a good reason to learn nice. (If you need role models, there are plenty of people here who are consistently nice without being pushovers in arguments - far from it.)

comment by SilasBarta · 2010-07-08T20:41:41.859Z · score: 0 (0 votes) · LW(p) · GW(p)

The evidence against that position is that Callahan, for a while, had no problem allowing my comments on his site, but then called me a "douche" and deleted them the moment they started disagreeing with him. Here's another example.

Also, on this post, I responded with something like, "It's real, in the sense of being an observable regularity in nature. Okay, what trap did I walk into?" but it was disallowed. Yet I wouldn't call that comment rude.

It's not about him banning me because of my tone; he bans anyone who makes the same kinds of arguments, unless they do it badly, in which case he keeps their comments for the easy kill, gets in the last word, and closes the thread. Which is his prerogative, of course, but not something to be equated with "being interested in meaningful exchange of ideas, and only banning those who are rude".

comment by Morendil · 2010-07-09T15:30:07.260Z · score: 1 (1 votes) · LW(p) · GW(p)

"There is no automatist revival in industry. There is one amongst people who wish to reduce every production process into a mechanical procedure."

I'm not sure that claim would be entirely absurd.

In the software engineering business, there's a subculture whose underlying ideology can be caricatured as "Programming would be so simple if only we could get those pesky programmers out of the loop." This subculture invests heavily into code generation, model-driven architectures, and so on.

Arguably, too, this goal only seems plausible if you have swallowed quite a few confusions regarding the respective roles of problem-solving, design, construction, and testing. A closer examination reveals that what passes for attempts at "mechanizing" the creation of software punts on most of the serious questions, focusing only on what is easily mechanizable.

But that is nothing other than the continuation of a trend that has existed in the software profession from the beginning: the provision of mechanized aids to a process that remains largely creative (and as such poorly understood). We don't say that compilers have mechanized the production of software; we say that they have raised the level of abstraction at which a programmer works.

comment by SilasBarta · 2010-07-09T17:02:31.738Z · score: 1 (1 votes) · LW(p) · GW(p)

Okay, but that point only concerns the production of software, a relatively new "production output". The statement ("there is no automatist revival in industry ...") would apply just the same to any factory, and ridicules the idea that there can be a mechanical procedure for producing any good. In reality, of course, this seems to be the norm: someone figures out what combination of motions converts the input to the output, refuting the notion that e.g. "There is no mechanical procedure for preparing a bottle of Coca-Cola ..."

In any case, my dispute with Callahan's remark is not merely about its pessimism regarding mechanizing this or that (which I called View 1), but rather, the implication that such mechanization would be fundamentally impossible (View 2), and that this impossibility can be discerned from philosophical considerations.

And regarding software, the big difficulty in getting rid of human programmers seems to come from how their role is, ultimately, to find a representation for a function (in a standard language) that converts a specified input into a specified output. Those specifications come from ... other humans, who often conceal properties of the desired I/O behavior, or fail to articulate them.

comment by Blueberry · 2010-07-09T08:07:24.286Z · score: 1 (1 votes) · LW(p) · GW(p)

he's proud to be a "Silas-free" zone now.

From looking at his blog, I think you should take this as a compliment.

comment by Tyrrell_McAllister · 2010-07-09T00:39:39.912Z · score: 0 (0 votes) · LW(p) · GW(p)

Is that directed at, or intended to be any more convincing to those holding Callahan's view in the link? I'm not trying to criticize you,

No, you're absolutely right. My comment definitely would not be convincing. The best that could be said for it is that it would help to clarify the nature of my rejection of View 2. That is, if I were talking to Callahan, that comment would, at best, just help him to understand which position he was dealing with.

comment by cupholder · 2010-07-03T04:46:44.764Z · score: 2 (2 votes) · LW(p) · GW(p)

"Silas, there is no Bayesian ‘revival’ in science. There is one amongst people who wish to reduce science to a mechanical procedure." – Gene Callahan

Am I the only one who finds this extremely unlikely? So far as I know, Bayesian methods have become massively more popular in science over the last 50 years. (Count JSTOR hits for the word 'Bayesian,' for example, and watch the numbers shoot up over time!)

comment by Douglas_Knight · 2010-07-03T19:36:16.649Z · score: 1 (1 votes) · LW(p) · GW(p)

Half of those hits are in the social sciences. I suspect that is economists defining the rational agents they study as Bayesian, but that is rather different from the economists being Bayesian themselves! The other half are in math & statistics, which probably reflects Bayesian statisticians becoming more common; you might count that as science (and 10% are in science proper).

Anyhow, it's clear from the context (I'd have thought from the quote) that he just means that the vast majority of scientists are not interested in defining science precisely.

comment by cupholder · 2010-07-04T05:47:43.965Z · score: 0 (0 votes) · LW(p) · GW(p)

It might well have been clear from the quote itself, but not to me - I just read the quote as saying Bayesian thinking and Bayesian methods haven't become more popular in science, which doesn't mesh with my intuition/experience.

comment by TraditionalRationali · 2010-07-02T05:47:00.372Z · score: 2 (2 votes) · LW(p) · GW(p)

That it should be possible to algorithmize science seems clear from the fact that the human brain can do science, and the human brain should be possible to describe algorithmically. If not at a higher level, then at least, in principle, by quantum electrodynamics, which is the (known and computable-in-principle) dynamics of the electrons and nuclei that are the building blocks of the brain. (If it were to be done in practice it would have to be done at a higher level, but as a proof of principle that argument should be enough.)

I guess, however, that what is actually meant is whether the scientific method itself could be formalized (algorithmized), so that science could be "mechanized" in a more direct way than building human-level AIs and then letting them learn and do science by the somewhat informal process used by human scientists today. That seems plausible, but it has yet to be done and seems rather difficult. The philosophers of science are working on understanding the scientific process better and better, but they still seem to have a long way to go before an actually working algorithmic description is achieved. See also the discussion below of the recent article by Gelman and Shalizi criticizing Bayesianism.

EDIT "done at a lower level" changed to "done at a higher level"

comment by WrongBot · 2010-07-02T15:45:49.230Z · score: 2 (2 votes) · LW(p) · GW(p)

The scientific method is already a vague sort of algorithm, and I can see how it might be possible to mechanize many of the steps. The part that seems AGI-hard to me is the process of generating good hypotheses. Humans are incredibly good at plucking out reasonable hypotheses from the infinite search space that is available; that we still fail so very often says more about the difficulty of the problem than about our own abilities.

comment by NancyLebovitz · 2010-07-02T16:27:03.779Z · score: 1 (1 votes) · LW(p) · GW(p)

I'm pretty sure that judging whether one has adequately tested a hypothesis is also going to be very hard to mechanize.

comment by SilasBarta · 2010-07-02T16:39:49.234Z · score: 2 (2 votes) · LW(p) · GW(p)

The problem that I hear most often in regard to mechanizing this process has the basic form, "Obviously, you need a human in the loop because of all the cases where you need to be able to recognize that a correlation is spurious, and thus to ignore it, and that comes from having good background knowledge."

But you have to wonder: the human didn't learn how to recognize spurious correlations through magic. However they came by that capability, it should be some identifiable process.

comment by cupholder · 2010-07-03T04:41:43.336Z · score: 2 (2 votes) · LW(p) · GW(p)

The problem that I hear most often in regard to mechanizing this process has the basic form, "Obviously, you need a human in the loop because of all the cases where you need to be able to recognize that a correlation is spurious, and thus to ignore it, and that comes from having good background knowledge."

Those people should be glad they've never heard of TETRAD - their heads might have exploded!

comment by NancyLebovitz · 2010-07-03T10:01:32.059Z · score: 1 (1 votes) · LW(p) · GW(p)

That's intriguing. Has it turned out to be useful?

comment by cupholder · 2010-07-04T05:31:24.428Z · score: 3 (3 votes) · LW(p) · GW(p)

It's apparently been put to use with some success. Clark Glymour - a philosophy professor who helped develop TETRAD - wrote a long review of The Bell Curve that lists applications of an earlier version of TETRAD (see section 6 of the review):

Several other applications have been made of the techniques, for example:

  1. Spirtes et al. (1993) used published data on a small observational sample of Spartina grass from the Cape Fear estuary to correctly predict - contrary both to regression results and expert opinion - the outcome of an unpublished greenhouse experiment on the influence of salinity, pH and aeration on growth.

  2. Druzdzel and Glymour (1994) used data from the US News and World Report survey of American colleges and universities to predict the effect on dropout rates of manipulating average SAT scores of freshman classes. The prediction was confirmed at Carnegie Mellon University.

  3. Waldemark used the techniques to recalibrate a mass spectrometer aboard a Swedish satellite, reducing errors by half.

  4. Shipley (1995, 1997, in review) used the techniques to model a variety of biological problems, and developed adaptations of them for small sample problems.

  5. Akleman et al. (1997) have found that the graphical model search techniques do as well or better than standard time series regression techniques based on statistical loss functions at out of sample predictions for data on exchange rates and corn prices.

Personally I find it a little odd that such a useful tool is still so obscure, but I guess a lot of scientists are loath to change tools and techniques.

comment by NancyLebovitz · 2010-07-02T17:12:36.803Z · score: 0 (0 votes) · LW(p) · GW(p)

Maybe it's just a matter of people kidding themselves about how hard it is to explain something.

On the other hand, some things (like vision and natural language) are genuinely hard to figure out.

I'm not saying the problem is insoluble. I'm saying it looks very difficult.

comment by cupholder · 2010-07-03T05:08:23.900Z · score: 0 (0 votes) · LW(p) · GW(p)

One possible way to get started is to do what the 'Distilling Free-Form Natural Laws from Experimental Data' project did: feed measurements of time and other variables of interest into a computer program which uses a genetic algorithm to build functions that best represent one variable as a function of itself and the other variables. The Science article is paywalled but available elsewhere. (See also this bunch of presentation slides.)

They also have software for you to do this at home.
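Eureqa's actual implementation isn't described here, but the core loop the paper sketches, evolving symbolic expressions against measured data, is easy to illustrate. The following toy Python version is my own sketch under invented names and parameters, not the algorithm from the Science article: expression trees over `x` and small constants, mutated and selected by mean squared error against "measurements" secretly generated by a known law.

```python
import random

# Primitive operations available to the evolved expressions.
OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def random_tree(depth=3):
    """Build a random expression tree over x and small integer constants."""
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.7 else random.randint(1, 3)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Evaluate an expression tree at a given x."""
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    val = OPS[op](evaluate(left, x), evaluate(right, x))
    # Clamp so runaway expressions can't blow up the arithmetic.
    return max(-1e6, min(1e6, val))

def fitness(tree, data):
    """Mean squared error of the tree's predictions on (x, y) pairs."""
    return sum((evaluate(tree, x) - y) ** 2 for x, y in data) / len(data)

def mutate(tree):
    """Replace a random subtree with a fresh random one."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def evolve(data, pop_size=60, generations=40):
    """Evolve expressions against the data; elitism keeps the best quarter."""
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, data))
        survivors = pop[:pop_size // 4]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    pop.sort(key=lambda t: fitness(t, data))
    return pop[0]

random.seed(0)
# "Measurements" secretly generated by the law y = x^2 + x.
data = [(x, x * x + x) for x in range(-5, 6)]
best = evolve(data)
print(best, fitness(best, data))
```

As the text notes, this mechanizes only the easy part; deciding which variables to measure in the first place is still done by the human.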

comment by NancyLebovitz · 2010-07-02T03:44:39.114Z · score: 2 (2 votes) · LW(p) · GW(p)

How hard do you think mechanizing science would be? It strikes me as being at least in the same class with natural language.

comment by NancyLebovitz · 2010-07-02T15:40:11.541Z · score: 1 (1 votes) · LW(p) · GW(p)

I've been poking at the question of to what extent computers could help people do science, beyond the usual calculation and visualization which is already being done.

I'm not getting very far-- a lot of the most interesting stuff seems like getting meaning out of noise.

However, could computers check to make sure that the use of statistics isn't too awful? Or is finding out whether what's deduced follows from the raw data too much like doing natural language? What about finding similar patterns in different fields? Possibly promising areas which haven't been explored?

comment by SilasBarta · 2010-07-02T16:32:45.161Z · score: 0 (0 votes) · LW(p) · GW(p)

Not exactly sure, to be honest, though your estimate sounds correct. What matters is that I deem it possible in a non-trivial sense; and more importantly, that we can currently identify rough boundaries of ideal mechanized science, and can categorize much of existing science as being definitely in or out.

comment by steven0461 · 2010-07-04T21:01:01.811Z · score: 1 (1 votes) · LW(p) · GW(p)

It's probably best to take a cyborg point of view -- consciously followed algorithms (like probabilistic updating) aren't a replacement for common sense, but they can be integrated into common sense, or used as measuring sticks, to turn common sense into common awesome cybersense.
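For concreteness, here is the sort of consciously followed algorithm meant, a single Bayesian update, in a few lines of Python (the function name and example numbers are my own illustration):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from the prior P(H) and the two likelihoods."""
    numer = prior * p_e_given_h
    return numer / (numer + (1 - prior) * p_e_given_not_h)

# The classic case where unaided common sense misfires: a condition with a
# 1% base rate, a test that catches 90% of cases but has a 10% false-positive
# rate. Intuition says "probably sick"; the update says roughly 8%.
print(bayes_update(0.01, 0.9, 0.1))  # ~0.083
```

Used as a measuring stick, a calculation like this doesn't replace common sense; it tells you when your snap judgment is off and by how much.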

comment by cousin_it · 2010-07-01T21:37:29.788Z · score: 1 (1 votes) · LW(p) · GW(p)

You probably won't find much opposition to your opinion here on LW. Duh, of course science can and will be automated! It's pretty amusing that the thesis of Cosma Shalizi, an outspoken anti-Bayesian, deals with automated extraction of causal architecture from observed behavior of systems. (If you enjoy math, read it all; it's very eye-opening.)

comment by SilasBarta · 2010-07-01T22:17:07.571Z · score: 2 (2 votes) · LW(p) · GW(p)

Really? I read enough of that thesis to add it to the pile of "papers about fully generally learning programs with no practical use or insight into general intelligence".

Though I did get one useful insight from Shalizi's thesis: that I should judge complexity by the program length needed to produce something functionally equivalent, not something exactly identical, as that metric makes more sense when judging complexity as it pertains to real-world systems and their entropy.

comment by SilasBarta · 2010-07-01T22:25:06.661Z · score: 0 (0 votes) · LW(p) · GW(p)

And regarding your other point, I'm sure people agree with holding view 2 in contempt. But what about the more general question of mechanizing epistemology?

Also, would people be interested in a study of what actually does motivate opposition to the attempt to mechanize science? (i.e. one that goes beyond my rants and researches it)

comment by Daniel_Burfoot · 2010-07-02T15:29:12.806Z · score: 0 (0 votes) · LW(p) · GW(p)

I read Moldbug's quote as saying: there is currently no system, algorithmic or bureaucratic, that is even remotely close to the power of human intuition, common sense, genius, etc. But there are people who implicitly claim they have such a system, and those people are dangerous liars.

comment by SilasBarta · 2010-07-02T16:02:23.133Z · score: 1 (1 votes) · LW(p) · GW(p)

It is not possible … to construct a system of thought that improves on common sense

I read Moldbug's quote as saying: there is currently no system ...

comment by Jayson_Virissimo · 2010-07-02T23:51:52.844Z · score: 0 (0 votes) · LW(p) · GW(p)

Those quotes do seem to be in conflict, but if he is talking about people that claim they already have the blueprints for such a thing, it would make more sense to read what he is saying as "it is not possible, with our current level of knowledge, to construct a system of thought that improves on common sense". Is he really pushing back against people that say that it is possible to construct such a system (at some far off point in the future), or is he pushing back against people that say they have (already) found such a system?

comment by mattnewport · 2010-07-03T00:06:36.168Z · score: 2 (2 votes) · LW(p) · GW(p)

The Moldbug article that the quote comes from does not seem to be expressing anything much like either Silas' view 1 or view 2. Moldbug clarifies in a comment that he is not making an argument against the possibility of AGI:

Think of it in terms of Searle's Chinese Room gedankenexperiment. If you can build a true AI, you can build the Chinese Room. Since I do not follow Penrose and the neo-vitalists in believing that AI is in principle impossible, I think the Chinese Room can be built, although it would take a lot of people and be very slow.

My argument is that, not only is it the Room rather than the people in it that speaks Chinese, but (in my opinion) the algorithm that the Room executes will not be one that is globally intelligible to humans, in the way that a human can understand, say, how Windows XP works.

In other words, the human brain is not powerful enough to virtualize itself. It can reason, and with sufficient technology it can build algorithmic devices capable of artificial reason, and this implies that it can explain why these devices work. But it cannot upgrade itself to a superhuman level of reason by following the same algorithm itself.

comment by SilasBarta · 2010-07-03T02:00:20.685Z · score: 0 (0 votes) · LW(p) · GW(p)

That sounds like a justification for view 1. Remember, view 1 doesn't provide a justification for why there will need to be continual tweaks to mechanized reasoners to bring them in line with (more-) human reasoning, so remains agnostic on how exactly one justifies this view.

(Of course, "Moldbug's" view still doesn't seem any more defensible, because it equates a machine virtualizing a human, with a machine virtualizing the critical aspects of reasoning, but whatever.)

comment by NancyLebovitz · 2010-07-26T02:58:07.094Z · score: 5 (5 votes) · LW(p) · GW(p)

A few years after I became an assistant professor, I realized the key thing a scientist needs is an excuse. Not a prediction. Not a theory. Not a concept. Not a hunch. Not a method. Just an excuse — an excuse to do something, which in my case meant an excuse to do a rat experiment. If you do something, you are likely to learn something, even if your reason for action was silly. The alchemists wanted gold so they did something. Fine. Gold was their excuse. Their activities produced useful knowledge, even though those activities were motivated by beliefs we now think silly. I’d like to think none of my self-experimentation was based on silly ideas but, silly or not, it often paid off in unexpected ways. At one point I tested the idea that standing more would cause weight loss. Even as I was doing it I thought the premise highly unlikely. Yet this led me to discover that standing a lot improved my sleep.

Seth Roberts

I'm not sure he's right about this, but I'm not sure he's wrong, either. What do you think?

comment by RobinZ · 2010-07-26T14:42:02.498Z · score: 1 (1 votes) · LW(p) · GW(p)

It makes me think of Richard Hamming talking about having "an attack".

comment by [deleted] · 2010-07-06T17:14:17.139Z · score: 5 (5 votes) · LW(p) · GW(p)

Poking around on Cosma Shalizi's website, I found this long, somewhat technical argument for why the general intelligence factor, g, doesn't exist.

The main thrust is that g is an artifact of hierarchical factor analysis, and that whenever you have groups of variables with positive correlations between them, a general factor will always appear that explains a fair amount of the variance, whether it actually exists or not.

I'm not convinced, mainly because it strikes me as unlikely that an error of this type would persist for so long, and because even his conception of intelligence as a large number of separate abilities would seem to need some sort of high-level selection and sequencing function. But neither of those is a particularly compelling reason for disagreement - can anyone more familiar with the psychological/statistical territory shed some light?

comment by [deleted] · 2010-07-07T14:54:46.203Z · score: 10 (10 votes) · LW(p) · GW(p)

I pointed this out to my buddy who's a psychology doctoral student, his reply is below:

I don't know enough about g to say whether the people talking about it are falling prey to the general correlation between tests, but this phenomenon is pretty well-known to social science researchers.

I do know enough about CFA and EFA to tell you that this guy has an unreasonable boner for CFA. CFA doesn't test against truth, it tests against other models. Which means it only tells you whether the model you're looking at fits better than a comparator model. If that's a null model, that's not a particularly great line of analysis.

He pretty blatantly misrepresents this. And his criticisms of things like Big Five are pretty wild. Big Five, by its very nature, fits the correlations extremely well. The largest criticism of Big Five is that it's not theory-driven, but data-driven!

But my biggest beef has got to be him arguing that EFA is not a technique for determining causality. No shit. That is the very nature of EFA -- it's a technique for loading factors (which have no inherent "truth" to them by loading alone, and are highly subject to reification) in order to maximize variance explained. He doesn't need to argue this point for a million words. It's definitional.

So regardless of whether g exists or not, which I'm not really qualified to speak on, this guy is kind of a hugely misleading writer. MINUS FIVE SCIENCE POINTS TO HIM.

comment by satt · 2010-07-07T11:28:49.144Z · score: 6 (6 votes) · LW(p) · GW(p)

But neither of those are particularly compelling reasons for disagreement - can anyone more familiar with the psychological/statistical territory shed some light?

Shalizi's most basic point — that factor analysis will generate a general factor for any bunch of sufficiently strongly correlated variables — is correct.

Here's a demo. The statistical analysis package R comes with some built-in datasets to play with. I skimmed through the list and picked out six monthly datasets (72 data points in each):

It's pretty unlikely that there's a single causal general factor that explains most of the variation in all six of these time series, especially as they're from mostly non-overlapping time intervals. They aren't even that well correlated with each other: the mean correlation between different time series is -0.10 with a std. dev. of 0.34. And yet, when I ask R's canned factor analysis routine to calculate a general factor for these six time series, that general factor explains 1/3 of their variance!
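The same point can be reproduced without R's datasets. Below is a self-contained Python sketch in the spirit of Shalizi's own simulations (the setup and all numbers are my own, not satt's): six "tests" that are merely overlapping sums of hundreds of independent narrow abilities, with no general factor anywhere in the generating process, still hand a large share of the variance to a single dominant first factor.

```python
import random

random.seed(1)
n_abilities, n_tests, n_people = 400, 6, 1000

# Each "test" samples its own random half of the abilities: there is no
# single underlying factor, only heavily overlapping sums of narrow skills.
tests = [random.sample(range(n_abilities), n_abilities // 2)
         for _ in range(n_tests)]

scores = []
for _ in range(n_people):
    abilities = [random.gauss(0, 1) for _ in range(n_abilities)]
    scores.append([sum(abilities[i] for i in t) for t in tests])

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

cols = list(zip(*scores))
R = [[corr(cols[i], cols[j]) for j in range(n_tests)] for i in range(n_tests)]

# Power iteration: the largest eigenvalue of the correlation matrix is the
# variance claimed by the "general factor" in a one-factor solution.
v = [1.0] * n_tests
for _ in range(200):
    w = [sum(R[i][j] * v[j] for j in range(n_tests)) for i in range(n_tests)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]
eig_max = sum(v[i] * sum(R[i][j] * v[j] for j in range(n_tests))
              for i in range(n_tests))
print(eig_max / n_tests)  # share of variance grabbed by the "general factor"
```

With this setup the first factor claims over half the variance, even though by construction nothing like a single g exists in the data.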

However, Shalizi's blog post covers a lot more ground than just this basic point, and it's difficult for me to work out exactly what he's trying to say, which in turn makes it difficult to say how correct he is overall. What does Shalizi mean specifically by calling g a myth? Does he think it is very unlikely to exist, or just that factor analysis is not good evidence for it? Who does he think is in error about its nature? I can think of one researcher in particular who stands out as just not getting it, but beyond that I'm just not sure.

comment by HughRistik · 2010-07-07T18:00:15.577Z · score: 2 (2 votes) · LW(p) · GW(p)

In your example, we have no reason to privilege the hypothesis that there is an underlying causal factor behind that data. In the case of g, wouldn't its relationships to neurobiology be a reason to give a higher prior probability to the hypothesis that g is actually measuring something real? These results would seem surprising if g was merely a statistical "myth."

comment by satt · 2010-07-07T19:14:30.788Z · score: 6 (6 votes) · LW(p) · GW(p)

In the case of g, wouldn't its relationships to neurobiology be a reason to give a higher prior probability to the hypothesis that g is actually measuring something real?

The best evidence that g measures something real is that IQ tests are highly reliable, i.e. if you get your IQ or g assessed twice, there's a very good correlation between your first score and your second score. Something has to generate the covariance between retestings; that g & IQ also correlate with neurobiological variables is just icing on the cake.

To answer your question directly, g's neurobiological associations are further evidence that g measures something real, and I believe g does measure something real, though I am not sure what.

These results would seem surprising if g was merely a statistical "myth."

Shalizi is, somewhat confusingly, using the word "myth" to mean something like "g's role as a genuine physiological causal agent is exaggerated because factor analysis sucks for causal inference", rather than its normal meaning of "made up". Working with Shalizi's (not especially clear) meaning of the word "myth", then, it's not that surprising that g correlates with neurobiology, because it is measuring something — it's just not been proven to represent a single causal agent.

Personally I would've preferred Shalizi to use some word other than "myth" (maybe "construct") to avoid exactly this confusion: it sounds as if he's denying that g measures anything, but I don't believe that's his intent, nor what he actually believes. (Though I think there's a small but non-negligible chance I'm wrong about that.)

comment by RobinZ · 2010-07-07T14:58:42.621Z · score: 2 (2 votes) · LW(p) · GW(p)

By the way, welcome to Less Wrong! Feel free to introduce yourself on that thread!

If you haven't been reading through the Sequences already, there was a conversation last month about good, accessible introductory posts that has a bunch of links and links-to-links.

comment by satt · 2010-07-07T15:29:08.478Z · score: 2 (2 votes) · LW(p) · GW(p)

Thank you!

comment by [deleted] · 2010-07-07T14:49:44.038Z · score: 2 (2 votes) · LW(p) · GW(p)

From what I can gather, he's saying all other evidence points to a large number of highly specialized mental functions instead of one general intelligence factor, and that psychologists are making a basic error by not understanding how to apply and interpret the statistical tests they're using. It's the latter which I find particularly unlikely (not impossible though).

comment by satt · 2010-07-07T16:41:29.076Z · score: 1 (1 votes) · LW(p) · GW(p)

You might be right. I'm not really competent to judge the first issue (causal structure of the mind), and the second issue (interpretation of factor analytic g) is vague enough that I could see myself going either way on it.

comment by RobinZ · 2011-03-02T19:19:46.662Z · score: 0 (0 votes) · LW(p) · GW(p)

Belatedly: Economic development (including population growth?) is related to CO2, lung deaths, international airline passengers, average air temperatures (through global warming), and car accidents.

comment by cousin_it · 2010-07-06T18:12:26.782Z · score: 6 (10 votes) · LW(p) · GW(p)

I think this is one of the few cases where Shalizi is wrong. (Not an easy thing to say, as I'm a big fan of his.)

In the second part of the article he generates synthetic "test scores" of people who have three thousand independent abilities - "facets of intelligence" that apply to different problems - and demonstrates that standard factor analysis still detects a strong single g-factor explaining most of the variance between people. From that he concludes that g is a "statistical artefact" and lacks "reality". This is exactly like saying the total weight of the rockpile "lacks reality" because the weights of individual rocks are independent variables.

As for the reason why he is wrong, it's pretty clear: Shalizi is a Marxist (fo' real) and can't give an inch to those pesky racists. A sad sight, that.

comment by Vladimir_M · 2010-07-07T04:56:36.508Z · score: 6 (6 votes) · LW(p) · GW(p)

cousin_it:

A sad sight, that.

Indeed. A while ago, I got intensely interested in these controversies over intelligence research, and after reading a whole pile of books and research papers, I got the impression that there is some awfully bad statistics being pushed by pretty much every side in the controversy, so at the end I was left skeptical towards all the major opposing positions (though to varying degrees). If there existed a book written by someone as smart and knowledgeable as Shalizi that would present a systematic, thorough, and unbiased analysis of this whole mess, I would gladly pay $1,000 for it. Alas, Shalizi has definitely let his ideology get the better of him this time.

He also wrote an interesting long post on the heritability of IQ, which is better, but still clearly slanted ideologically. I recommend reading it nevertheless, but to get a more accurate view of the whole issue, I recommend reading the excellent Making Sense of Heritability by Neven Sesardić alongside it.

comment by satt · 2010-07-07T14:22:12.881Z · score: 3 (3 votes) · LW(p) · GW(p)

If there existed a book written by someone as smart and knowledgeable as Shalizi that would present a systematic, thorough, and unbiased analysis of this whole mess, I would gladly pay $1,000 for it.

There is no such book (yet), but there are two books that cover the most controversial part of the mess that I'd recommend: Race Differences in Intelligence (1975) and Race, IQ and Jensen (1980). They are both systematic, thorough, and about as unbiased as one can reasonably expect on the subject of race & IQ. On the down side, they don't really cover other aspects of the IQ controversies, and they're three decades out of date. (That said, I personally think that few studies published since 1980 bear strongly on the race & IQ issue, so the books' age doesn't matter that much.)

comment by Vladimir_M · 2010-07-08T08:17:18.343Z · score: 3 (3 votes) · LW(p) · GW(p)

Yes, among the books on the race-IQ controversy that I've seen, I agree that these are the closest thing to an unbiased source. However, I disagree that nothing very significant has happened in the field since their publication -- although unfortunately, taken together, these new developments have led to an even greater overall confusion. I have in mind particularly the discovery of the Flynn effect and the Minnesota adoption study, which have made it even more difficult to argue coherently either for a hereditarian or an environmentalist theory the way it was done in the seventies.

Also, even these books fail to present a satisfactory treatment of some basic questions where a competent statistician should be able to clarify things fully, but horrible confusion has nevertheless persisted for decades. Here I refer primarily to the use of the regression to the mean as a basis for hereditarian arguments. From what I've seen, Jensen is still using such arguments as a major source of support for his positions, constantly replying to the existing superficial critiques with superficial counter-arguments, and I've never seen anyone giving this issue the full attention it deserves.

comment by satt · 2010-07-08T21:57:46.289Z · score: 1 (1 votes) · LW(p) · GW(p)

However, I disagree that nothing very significant has happened in the field since their publication

Me too! I just don't think there's been much new data brought to the table. I agree with you in counting Flynn's 1987 paper and the Minnesota followup report, and I'd add Moore's 1986 study of adopted black children, the recent meta-analyses by Jelte Wicherts and colleagues on the mean IQs of sub-Saharan Africans, Dickens & Flynn's 2006 paper on black Americans' IQs converging on whites' (and at a push, Rushton & Jensen's reply along with Dickens & Flynn's), Fryer & Levitt's 2007 paper about IQ gaps in young children, and Fagan & Holland's papers (2002, 2007, 2009) on developing tests where minorities score equally to whites. I guess Richard Lynn et al.'s papers on the mean IQ of East Asians count as well, although it's really the black-white comparison that gets people's hackles up.

Having written out a list, it does look longer than I expected... although it's not much for 30-35 years of controversy!


Also, even these books fail to present a satisfactory treatment of some basic questions where a competent statistician should be able to clarify things fully, but horrible confusion has nevertheless persisted for decades. Here I refer primarily to the use of the regression to the mean as a basis for hereditarian arguments.

Amen. The regression argument should've been dropped by 1980 at the latest. In fairness to Flynn, his book does namecheck that argument and explain why it's wrong, albeit only briefly.

comment by Vladimir_M · 2010-07-09T09:05:53.272Z · score: 0 (0 votes) · LW(p) · GW(p)

satt:

The regression argument should've been dropped by 1980 at the latest. In fairness to Flynn, his book does namecheck that argument and explain why it's wrong, albeit only briefly.

If I remember correctly, Loehlin's book also mentions it briefly. However, it seems to me that the situation is actually more complex.

Jensen's arguments, in the forms in which he has been stating them for decades, are clearly inadequate. Some very good responses were published 30+ years ago by Mackenzie and Furby. Yet for some bizarre reason, prominent critics of Jensen have typically ignored these excellent references and instead produced their own much less thorough and clear counterarguments.

Nevertheless, I'm not sure if the argument should end here. Certainly, if we observe a subpopulation S in which the values of a trait follow a normal distribution with the mean M(S) that is lower than for the whole population, then in pairs of individuals from S among whom there exists a correlation independent of rank and smaller than one, the lower-ranked individuals will regress towards M(S). That's a mathematical tautology, and nothing can be inferred from it about what the causes of the individual and group differences might be; the above cited papers explain this fact very well.

However, the question that I'm not sure about is: what can we conclude from the fact that the existing statistical distributions and correlations are such that they satisfy these mathematical conditions? Is this really a trivial consequence of the norming of tests that's engineered so as to give their scores a normal distribution over the whole population? I'd like to see someone really statistics-savvy scrutinize the issue without starting from the assumption that both the total population distribution and the subpopulation distribution are normal and that the correlation coefficients between relatives are independent of their rank in the distribution.

comment by NancyLebovitz · 2010-07-08T11:34:59.679Z · score: 0 (2 votes) · LW(p) · GW(p)

What would appropriate policy be if we just don't know to what extent IQ is different in different groups?

comment by Vladimir_M · 2010-07-09T08:25:49.926Z · score: 3 (3 votes) · LW(p) · GW(p)

Well, if you'll excuse the ugly metaphor, in this area even the positive questions are giant cans of worms lined on top of third rails, so I really have no desire to get into public discussions of normative policy issues.

comment by Morendil · 2010-07-07T06:28:53.883Z · score: 3 (3 votes) · LW(p) · GW(p)

long post on the heritability of IQ, which is better, but still clearly slanted ideologically

OK, I'll bite. Can you point to specific parts of that post which are in error owing to ideologically motivated thinking?

comment by Vladimir_M · 2010-07-07T07:29:34.455Z · score: 9 (9 votes) · LW(p) · GW(p)

Morendil:

Can you point to specific parts of that post which are in error owing to ideologically motivated thinking?

A piece of writing biased for ideological reasons doesn't even have to have any specific parts that can be shown to be in error per se. Enormous edifices of propaganda can be constructed -- and have been constructed many times in history -- based solely on the selection and arrangement of the presented facts and claims, which can all be technically true by themselves.

In areas that arouse strong ideological passions, all sorts of surveys and other works aimed at broad audiences can be expected to suffer from this sort of bias. For a non-expert reader, this problem can be recognized and overcome only by reading works written by people espousing different perspectives. That's why I recommend that people should read Shalizi's post on heritability, but also at least one more work addressing the same issues written by another very smart author who doesn't share the same ideological position. (And Sesardić's book is, to my knowledge, the best such reference about this topic.)

Instead of getting into a convoluted discussion of concrete points in Shalizi's article, I'll just conclude with the following remark. You can read Shalizi's article, conclude that it's the definitive word on the subject, and accept his view of the matter. But you can also read more widely on the topic, and see that his presentation is far from unbiased, even if you ultimately conclude that his basic points are correct. The relevant literature is easily accessible if you just have internet and library access.

comment by [deleted] · 2010-07-06T19:26:05.532Z · score: 2 (4 votes) · LW(p) · GW(p)

Your analogy is flawed, I think.

The weight of the rock pile is just what we call the sum of the weights of the rocks. It's just a definition; but the idea of general intelligence is more than a definition. If there were a real, biological thing called g, we would expect all kinds of abilities to be correlated. Intelligence would make you better at math and music and English. We would expect basically all cognitive abilities to be affected by g, because g is real -- it represents something like dendrite density, some actual intelligence-granting property.

People hypothesized that g is real because results of all kinds of cognitive tests are correlated. But what Shalizi showed is that you can generate the same correlations if you let test scores depend on three thousand uncorrelated abilities. You can get the same results as the IQ advocates even when absolutely no single factor determines different abilities.

Sure, your old g will correlate with multiple abilities -- hell, you could let g = "test score" and that would correlate with all the abilities -- but that would be meaningless. If size and location determine the price of a house, you don't declare that there is some factor that causes both large size and desirable location!
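A quick sketch of the kind of simulation described above (Python/numpy rather than Shalizi's original code; the specific numbers — 500 people, 3000 independent "abilities", 8 tests each sampling 1000 of them — are illustrative choices, not his):

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_abilities, n_tests, per_test = 500, 3000, 8, 1000

# thousands of mutually independent abilities: no general factor exists here
abilities = rng.normal(size=(n_people, n_abilities))

# each test score is the sum over a random sample of abilities;
# different tests' samples overlap only by chance
scores = np.column_stack([
    abilities[:, rng.choice(n_abilities, per_test, replace=False)].sum(axis=1)
    for _ in range(n_tests)
])

R = np.corrcoef(scores, rowvar=False)
mean_r = R[~np.eye(n_tests, dtype=bool)].mean()
g_share = np.linalg.eigvalsh(R)[-1] / n_tests  # first-factor share, crudely
print(f"mean inter-test r = {mean_r:.2f}, first factor share = {g_share:.2f}")
```

Every test correlates positively with every other, and a single factor explains a large chunk of the variance — despite there being, by construction, no single underlying cause.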

comment by Vladimir_M · 2010-07-07T05:46:27.539Z · score: 8 (8 votes) · LW(p) · GW(p)

SarahC:

But what Shalizi showed is that you can generate the same correlations if you let test scores depend on three thousand uncorrelated abilities. You can get the same results as the IQ advocates even when absolutely no single factor determines different abilities.

Just to be clear, this is not an original idea by Shalizi, but the well known "sampling theory" of general intelligence first proposed by Godfrey Thomson almost a century ago. Shalizi states this very clearly in the post, and credits Thomson with the idea. However, for whatever reason, he fails to mention the very extensive discussions of this theory in the existing literature, and writes as if Thomson's theory had been ignored ever since, which definitely doesn't represent the actual situation accurately.

In a recent paper by van der Maas et al., which presents an extremely interesting novel theory of correlations that give rise to g (and which Shalizi links to at one point), the authors write:

Thorndike (1927) and Thomson (1951) proposed one such alternative mechanism, namely, sampling. In this sampling theory, carrying out cognitive tasks requires the use of many lower order uncorrelated modules or neural processes (so-called bonds). They hypothesized that the samples of modules or bonds used for different cognitive tests partly overlap, causing a positive correlation between the test scores. In this view, the positive manifold is due to a measurement problem in the sense that it is very difficult to obtain independent measures of the lower order processes. Jensen (1998) and Eysenck (1987) identified three problems with this sampling theory. First, whereas some complex mental tests, as predicted by sampling theory, highly load on the g factor, some very narrowly defined tests also display high g loadings. Second, some seemingly completely unrelated tests, such as visual and memory scan tasks, are consistently highly correlated, whereas related tests, such as forward and backward digit span, are only modestly correlated. Third, in some cases brain damage leads to very specific impairments, whereas sampling theory predicts general impairments. These three facts are difficult to explain with sampling theory, which as a consequence has not gained much acceptance. Thus, the g explanation remains very dominant in the current literature (see Jensen, 1998, p. 107).

Note that I take no position here about whether these criticisms of the sampling theory are correct or not. However, I think this quote clearly demonstrates that an attempt to write off g by merely invoking the sampling theory is not a constructive contribution to the discussion.

I would also add that if someone managed to construct multiple tests of mental ability that would sample disjoint sets of Thomsonesque underlying abilities and thus fail to give rise to g, it would be considered a tremendous breakthrough. Yet, despite the strong incentive to achieve this, nobody who has tried so far has succeeded. This evidence is far from conclusive, but far from insignificant either.

comment by satt · 2010-07-07T15:44:35.656Z · score: 2 (4 votes) · LW(p) · GW(p)

I think Shalizi isn't too far off the mark in writing "as if Thomson's theory had been ignored". Although a few psychologists & psychometricians have acknowledged Thomson's sampling model, in everyday practice it's generally ignored. There are far more papers out there that fit g-oriented factor models as a matter of course than those that try to fit a Thomson-style model. Admittedly, there is a very good reason for that — Thomson-style models would be massively underspecified on the datasets available to psychologists, so it's not practical to fit them — but that doesn't change the fact that a g-based model is the go-to choice for the everyday psychologist.

There's an interesting analogy here to Shalizi's post about IQ's heritability, now I think about it. Shalizi writes it as if psychologists and behaviour geneticists don't care about gene-environment correlation, gene-environment interaction, nonlinearities, there not really being such a thing as "the" heritability of IQ, and so on. One could object that this isn't true — there are plenty of papers out there concerned with these complexities — but on the other hand, although the textbooks pay lip service to them, researchers often resort to fitting models that ignore these speedbumps. The reason for this is the same as in the case of Thomson's model: given the data available to scientists, models that accounted for these effects would usually be ruinously underspecified. So they make do.

comment by Vladimir_M · 2010-07-08T08:37:49.283Z · score: 3 (3 votes) · LW(p) · GW(p)

However, it seems to me that the fatal problem of the sampling theory is that nobody has ever managed to figure out a way to sample disjoint sets of these hypothetical uncorrelated modules. If all practically useful mental abilities and all the tests successfully predicting them always sample some particular subset of these modules, then we might as well look at that subset as a unified entity that represents the causal factor behind g, since its elements operate together as a group in all relevant cases.

Or is there some additional issue here that I'm not taking into account?

comment by satt · 2010-07-08T19:26:03.442Z · score: 8 (8 votes) · LW(p) · GW(p)

I can't immediately think of any additional issue. It's more that I don't see the lack of well-known disjoint sets of uncorrelated cognitive modules as a fatal problem for Thomson's theory, merely weak disconfirming evidence. This is because I assign a relatively low probability to psychologists detecting tests that sample disjoint sets of modules even if they exist.

For example, I can think of a situation where psychologists & psychometricians have missed a similar phenomenon: negatively correlated cognitive tests. I know of a couple of examples which I found only because the mathematician Warren D. Smith describes them in his paper "Mathematical definition of 'intelligence' (and consequences)". The paper's about the general goal of coming up with universal definitions of and ways to measure intelligence, but in the middle of it is a polemical/sceptical summary of research into g & IQ.

Smith went through a correlation matrix for 57 tests given to 240 people, published by Thurstone in 1938, and saw that the 3 most negative of the 1596 intercorrelations were between these pairs of tests:

  • "100-word vocabulary test // Recognize pictures of hand as Right/Left" (correlation = -0.22)
  • "Find lots of synonyms of a given word // Decide whether 2 pictures of a national flag are relatively mirrored or not" (correlation = -0.16)
  • "Describe somebody in writing: score=# words used // figure recognition test: decide which numbers in a list of drawings of abstract figures are ones you saw in a previously shown list" (correlation = -0.12)

In Smith's words: "This seems too much to be a coincidence!" Smith then went to the 60-item correlation matrix for 710 schoolchildren published by Thurstone & Thurstone in 1941 and did the same, discovering that

the three most negative [correlations], with values -0.161, -0.152, and -0.138 respectively, are the pairwise correlations of the performance on the "scattered Xs" test (circle the Xs in a random scattering of letters) with these three tests: (a) Sentence completion ... (b) Reading comprehension II ... (c) Reading comprehension I ... Again, it is difficult to believe this also is a coincidence!

The existence of two pairs of negatively correlated cognitive skills leads me to increase my prior for the existence of uncorrelated cognitive skills.
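Smith's matrix-scanning procedure itself is simple to reproduce in outline. Here's a sketch on synthetic data (random scores standing in for Thurstone's actual 240-person, 57-test matrix, so the most negative entries below are pure sampling noise — unlike the thematically patterned pairs Smith reports):

```python
import numpy as np

rng = np.random.default_rng(5)
# synthetic stand-in for Thurstone's 1938 dataset: 240 people, 57 tests
scores = rng.normal(size=(240, 57))
R = np.corrcoef(scores, rowvar=False)

# scan the 57*56/2 = 1596 intercorrelations for the most negative entries
i, j = np.triu_indices(57, k=1)
worst = np.argsort(R[i, j])[:3]
for k in worst:
    print(f"tests {i[k]} & {j[k]}: r = {R[i[k], j[k]]:+.2f}")
```

On real data the interesting question is Smith's: whether the most negative pairs cluster on related abilities, which noise alone wouldn't produce.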

Also, the way psychologists often analyze test batteries makes it harder to spot disjoint sets of uncorrelated modules. Suppose we have a 3-test battery, where test 1 samples uncorrelated modules A, B, C, D & E, test 2 samples F, G, H, I & J, and test 3 samples C, D, E, F & G. If we administer the battery to a few thousand people and extract a g from the results, as is standard practice, then by construction the resulting g is going to correlate with scores on tests 1 & 2, although we know they sample non-overlapping sets of modules. (IQ, being a weighted average of test/module scores, will also correlate with all of the tests.) A lot of psychologists would interpret that as evidence against tests 1 & 2 measuring distinct mental abilities, even though we see there's an alternative explanation.
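To make the A-J example concrete, here's a sketch (module scores are just independent standard normals, and "g" is crudely taken as the battery total rather than properly extracted — a simplification, but the effect is the same):

```python
import numpy as np

rng = np.random.default_rng(2)
modules = rng.normal(size=(10_000, 10))  # modules A..J, mutually independent

t1 = modules[:, 0:5].sum(axis=1)   # test 1 samples A-E
t2 = modules[:, 5:10].sum(axis=1)  # test 2 samples F-J (disjoint from test 1)
t3 = modules[:, 2:7].sum(axis=1)   # test 3 samples C-G (overlaps both)
g = t1 + t2 + t3                   # "g" taken crudely as the battery total

r12 = np.corrcoef(t1, t2)[0, 1]
r1g = np.corrcoef(t1, g)[0, 1]
print(f"corr(test1, test2) = {r12:.2f}, corr(test1, g) = {r1g:.2f}")
```

Tests 1 and 2 are uncorrelated, as they must be, yet both correlate strongly with the extracted "g" — which is exactly the pattern that tempts people to deny that they measure distinct abilities.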

Even if we did find an index of intelligence that didn't correlate with IQ/g, would we count it as such? Duckworth & Seligman discovered that in a sample of 164 schoolchildren, a composite measure of self-discipline predicted GPA significantly better than IQ, and self-discipline didn't correlate significantly with IQ. Does self-discipline now count as an independent intellectual ability? I'd lean towards saying it doesn't, but I doubt I could justify being dogmatic about that; it's surely a cognitive ability in the term's broadest sense.

comment by Douglas_Knight · 2010-07-10T21:46:19.942Z · score: 3 (3 votes) · LW(p) · GW(p)

I haven't looked at Smith yet, but the quote looks like parody to me. Since you seem to take it seriously, I'll respond. Awfully specific tests defying the predictions looks like data mining to me. I predict that these negative correlations are not replicable. The first seems to be the claim that verbal ability is not correlated with spatial ability, but this is a well-tested claim. As Shalizi mentions, psychometricians do look for separate skills and these are commonly accepted components. I wouldn't be terribly surprised if there were ones they completely missed, but these two are popular and positively correlated. The second example is a little more promising: maybe that scattered Xs test is independent of verbal ability, even though it looks like other skills that are not, but I doubt it.

With respect to self-discipline, I think you're experiencing some kind of halo effect. Not every positive mental trait should be called intelligence. Self-discipline is just not what people mean by intelligence. I knew that conscientiousness predicted GPAs, but I'd never heard such a strong claim. But it is true that a lot of people dismiss conscientiousness (and GPA) in favor of intelligence, and they seem to be making an error (or being risk-seeking).

comment by satt · 2010-07-11T18:03:38.897Z · score: 2 (2 votes) · LW(p) · GW(p)

I haven't looked at Smith yet, but the quote looks like parody to me. Since you seem to take it seriously, I'll respond.

Once you read the relevant passage in context, I anticipate you will agree with me that Smith is serious. Take this paragraph from before the passage I quoted from:

Further, let us return to Gould's criticism that due to "validation" of most other highly used IQ tests and subtests, Spearman's g was forced to appear to exist from then on, regardless of whether it actually did. In view of this ... probably the only place we can look in the literature to find data truly capable of refuting or confirming Spearman, is data from the early days, before too much "validation" occurred, but not so early on that Spearman's atrocious experimental and statistical practices were repeated.

The prime candidate I have been able to find for such data is Thurstone's [205] "primary mental abilities" dataset published in 1938.

Smith then presents the example from Thurstone's 1938 data.

Awfully specific tests defying the predictions looks like data mining to me. I predict that these negative correlations are not replicable.

I'd be inclined to agree if the 3 most negative correlations in the dataset had come from very different pairs of tests, but the fact that they come from sets of subtests that one would expect to tap similar narrow abilities suggests they're not just statistical noise.

The first seems to be the claim that verbal ability is not correlated with spatial ability, but this is a well-tested claim. As Shalizi mentions, psychometricians do look for separate skills and these are commonly accepted components. I wouldn't be terribly surprised if there were ones they completely missed, but these two are popular and positively correlated.

Smith himself does not appear to make that claim; he presents his two examples merely as demonstrations that not all mental ability scores positively correlate. I think it's reasonable to package the 3 verbal subtests he mentions as strongly loading on verbal ability, but it's not clear to me that the 3 other subtests he pairs them with are strong measures of "spatial ability"; two of them look like they tap a more specific ability to handle mental mirror images, and the third's a visual memory test.

Even if it transpires that the 3 subtests all tap substantially into spatial ability, they needn't necessarily correlate positively with specific measures of verbal ability, even though verbal ability correlates with spatial ability.

With respect to self-discipline, I think you're experiencing some kind of halo effect. Not every positive mental trait should be called intelligence. Self-discipline is just not what people mean by intelligence.

I'm tempted to agree but I'm not sure such a strong generalization is defensible. Take a list of psychologists' definitions of intelligence. IMO self-discipline plausibly makes sense as a component of intelligence under definitions 1, 7, 8, 13, 14, 23, 25, 26, 27, 28, 32, 33 & 34, which adds up to 37% of the list of definitions. A good few psychologists appear to include self-discipline as a facet of intelligence.

comment by Vladimir_M · 2010-07-09T08:12:25.738Z · score: 3 (3 votes) · LW(p) · GW(p)

satt:

For example, I can think of a situation where psychologists & psychometricians have missed a similar phenomenon: negatively correlated cognitive tests. I know of a couple of examples which I found only because the mathematician Warren D. Smith describes them in his paper "Mathematical definition of 'intelligence' (and consequences)".

That's an extremely interesting reference, thanks for the link! This is exactly the kind of approach that this area desperately needs: no-nonsense scrutiny by someone with a strong math background and without an ideological agenda.

David Hilbert allegedly once quipped that physics is too important to be left to physicists; the way things are, it seems to me that psychometrics should definitely not be left to psychologists. That they haven't immediately rushed to explore these findings of Smith's further is an extremely damning fact about the intellectual standards in the field.

Duckworth & Seligman discovered that in a sample of 164 schoolchildren, a composite measure of self-discipline predicted GPA significantly better than IQ, and self-discipline didn't correlate significantly with IQ. Does self-discipline now count as an independent intellectual ability?

Wouldn't this closely correspond to the Big Five "conscientiousness" trait? (Which the paper apparently doesn't mention at all?!) From what I've seen, even among the biggest fans of IQ, it is generally recognized that conscientiousness is at least similarly important as general intelligence in predicting success and performance.

comment by satt · 2010-07-09T16:12:20.667Z · score: 0 (0 votes) · LW(p) · GW(p)

Wouldn't this closely correspond to the Big Five "conscientiousness" trait? (Which the paper apparently doesn't mention at all?!) From what I've seen, even among the biggest fans of IQ, it is generally recognized that conscientiousness is at least similarly important as general intelligence in predicting success and performance.

That's an excellent point that completely did not occur to me. Turns out that self-discipline is actually one of the 6 subscales used to measure conscientiousness on the NEO-PI-R, so it's clearly related to conscientiousness. With that in mind, it is a bit weird that conscientiousness doesn't get a shoutout in the paper...

comment by NancyLebovitz · 2010-07-09T08:23:43.057Z · score: 0 (0 votes) · LW(p) · GW(p)

Is anything known about a physical basis for conscientiousness?

comment by wedrifid · 2010-07-09T09:56:30.122Z · score: 3 (3 votes) · LW(p) · GW(p)

Is anything known about a physical basis for conscientiousness?

It can be reliably predicted by, for example, SPECT scans. If I recall correctly you can expect to see over-active frontal lobes and basal ganglia. For this reason (and because those areas depend on dopamine a lot) dopaminergics (Ritalin, etc) make a big difference.

comment by HughRistik · 2010-07-09T17:35:18.519Z · score: 1 (1 votes) · LW(p) · GW(p)

Even if we did find an index of intelligence that didn't correlate with IQ/g, would we count it as such? Duckworth & Seligman discovered that in a sample of 164 schoolchildren, a composite measure of self-discipline predicted GPA significantly better than IQ, and self-discipline didn't correlate significantly with IQ. Does self-discipline now count as an independent intellectual ability? I'd lean towards saying it doesn't, but I doubt I could justify being dogmatic about that; it's surely a cognitive ability in the term's broadest sense.

Interesting thought. It turns out that Conscientiousness is actually negatively related to intelligence, while Openness is positively correlated with intelligence.

This finding is consistent with the folk notion of "crazy geniuses."

Though it's important to note that the second study was done on college students, who must have a certain level of IQ and who aren't representative of the population.

The first study notes:

According to this proposal the significant negative correlation could be observed only in groups with above average mental abilities and not in a random sample from a general population.

If we took a larger sample of the population, including lower IQ individuals, then I think we would see the negative correlation between Conscientiousness and intelligence diminish or even reverse, because I bet there are lots of people outside a college population who have both low intelligence and low Conscientiousness.

It could be that a moderate amount of Conscientiousness (well, whatever mechanisms cause Conscientiousness) is necessary for above average intelligence, but too much Conscientiousness (i.e. those mechanisms are too strong) limits intelligence.

comment by [deleted] · 2010-07-09T20:13:06.119Z · score: 4 (4 votes) · LW(p) · GW(p)

I noticed a while back when a bunch of LW'ers gave their Big Five scores that our Conscientiousness scores tended to be low. I took that to be an internet thing (people currently reading a website are more likely to be lazy slobs) but this is a more flattering explanation.

comment by Douglas_Knight · 2010-07-10T22:07:59.961Z · score: 1 (1 votes) · LW(p) · GW(p)

Interesting thought. It turns out that Conscientiousness is actually negatively related to intelligence

No it doesn't. The whole point of that article is that it's a mistake to ask people how conscientious they are.

comment by satt · 2010-07-10T17:40:26.516Z · score: 0 (0 votes) · LW(p) · GW(p)

Interesting. I would've expected Conscientiousness to correlate weakly positively with IQ across most IQ levels.

I would avoid interpreting a negative correlation between C/self-discipline and IQ as evidence against C/self-discipline being a separate facet of intelligence; I think that would beg the question by implicitly assuming that IQ represents the entirety of what we call intelligence.

comment by RobinZ · 2010-07-07T15:50:17.997Z · score: 1 (1 votes) · LW(p) · GW(p)

Just out of curiosity: is psychology your domain of expertise? You speak confidently and with details.

comment by satt · 2010-07-07T16:25:00.079Z · score: 3 (3 votes) · LW(p) · GW(p)

If only! I'm just a physics student but I've read a few books and quite a few articles about IQ.

[Edit: I've got an amateur interest in statistics as well, which helps a lot on this subject. Vladimir_M is right that there's a lot of crap statistics peddled in this field.]

comment by [deleted] · 2010-07-07T15:06:29.391Z · score: 0 (0 votes) · LW(p) · GW(p)

Ok, that's interesting new stuff -- I haven't read this literature at all.

comment by [deleted] · 2010-07-06T19:30:31.543Z · score: 3 (3 votes) · LW(p) · GW(p)

"All of this, of course, is completely compatible with IQ having some ability, when plugged into a linear regression, to predict things like college grades or salaries or the odds of being arrested by age 30. (This predictive ability is vastly less than many people would lead you to believe [cf.], but I'm happy to give them that point for the sake of argument.) This would still be true if I introduced a broader mens sana in corpore sano score, which combined IQ tests, physical fitness tests, and (to really return to the classical roots of Western civilization) rated hot-or-not sexiness. Indeed, since all these things predict success in life (of one form or another), and are all more or less positively correlated, I would guess that MSICS scores would do an even better job than IQ scores. I could even attribute them all to a single factor, a (for arete), and start treating it as a real causal variable. By that point, however, I'd be doing something so obviously dumb that I'd be accused of unfair parody and arguing against caricatures and straw-men."

This is the point here. There's a difference between coming up with linear combinations and positing real, physiological causes.

comment by cousin_it · 2010-07-06T19:52:38.636Z · score: 8 (8 votes) · LW(p) · GW(p)

My beef isn't with Shalizi's reasoning, which is correct. I disagree with his text connotationally. Calling something a "myth" because it isn't a causal factor and you happen to study causal factors is misleading. Most people who use g don't need it to be a genuine causal factor; a predictive factor is enough for most uses, as long as we can't actually modify dendrite density in living humans or something like that.

comment by [deleted] · 2010-07-06T21:29:17.667Z · score: 0 (2 votes) · LW(p) · GW(p)

Ok, let's talk connotations.

If g is a causal factor then "A has higher g than B" adds additional information to the statement "A scored higher than B on such-and-such tests." It might mean, for instance, that you could look in A's brain and see different structure than in B's brain; it might mean that we would expect A to be better at unrelated, previously untested skills.

If g is not a causal factor, then comments about g don't add any new information; they just sort of summarize or restate. That difference is significant.

A predictive factor is enough for predictive uses, but not for a lot of policy uses, which rely on causality. From your comment, I assume you are not a lefty, and that you think we should be more confident than we are about using IQ to make decisions regarding race. I think that Shalizi's reasoning is likely not irrelevant to making those decisions; it should probably make us more guarded in practice.

comment by Douglas_Knight · 2010-07-07T00:52:11.293Z · score: 4 (4 votes) · LW(p) · GW(p)

I don't understand your last paragraph. Could you give an example? Is this relevant to the decision of whether intelligence tests should be used for choosing firemen? or is that a predictive use?

comment by cousin_it · 2010-07-07T05:13:59.961Z · score: 2 (2 votes) · LW(p) · GW(p)

Seconding Douglas_Knight's question. I don't understand why you say policy uses must rely on causal factors.

comment by [deleted] · 2010-07-07T15:12:16.586Z · score: 1 (1 votes) · LW(p) · GW(p)

The kinds of implications I'm thinking about are that if IQ causes X, (and if IQ is heritable) then we should not seek to change X by social engineering means, because it won't be possible. X could be the distribution of college admittees, firemen, criminals, etc.

Not all policy has to rely on causal factors, of course. And my thinking is a little blurry on these issues in general.

comment by cousin_it · 2010-07-06T19:39:51.852Z · score: 0 (0 votes) · LW(p) · GW(p)

The way you define "real" properties, it seems you can't tell them from "unreal" ones by looking at correlations alone; we need causal intervention for that, a la Pearl. So until we invent tech for modifying dendrite density of living humans, or something like that, there's no practical difference between "real" g and "unreal" g and no point in making the distinction between them. In particular, their predictive power is the same.

So, basically, your and Shalizi's demand for a causal factor is too strong. We can do with weaker tools.

comment by gwern · 2013-04-03T23:19:29.438Z · score: 3 (3 votes) · LW(p) · GW(p)

Here is a useful post directly criticizing Shalizi's claims: http://humanvarieties.org/2013/04/03/is-psychometric-g-a-myth/

comment by RobinZ · 2010-07-06T19:14:09.741Z · score: 0 (4 votes) · LW(p) · GW(p)

I don't think it's surprising that an untenable claim could persist within a field for a long time, once established. Pluto was called a planet for seventy-six years.

I've no idea whether the critique of g is accurate, however.

comment by mkehrt · 2010-07-07T09:12:22.837Z · score: 2 (4 votes) · LW(p) · GW(p)

That's a bizarre choice of example. The question of whether Pluto is a planet is entirely a definitional one; the IAU could make it one by fiat if they chose. There's no particular reason for it not to be one, except that the IAU felt the increasing number of transNeptunian objects made the current definition awkward.

comment by RobinZ · 2010-07-07T11:45:10.924Z · score: 4 (4 votes) · LW(p) · GW(p)

"[E]ntirely a definitional" question does not mean "arbitrary and trivial" - some definitions are just wrong. EY mentions the classic example in Where to Draw the Boundary?:

Once upon a time it was thought that the word "fish" included dolphins. Now you could play the oh-so-clever arguer, and say, "The list: {Salmon, guppies, sharks, dolphins, trout} is just a list - you can't say that a list is wrong. I can prove in set theory that this list exists. So my definition of fish, which is simply this extensional list, cannot possibly be 'wrong' as you claim."

Or you could stop playing nitwit games and admit that dolphins don't belong on the fish list.

Honestly, it would make the most sense to draw four lists, like the Hayden Planetarium did, with rocky planets, asteroids, gas giants, and Kuiper Belt objects each in their own category, but it is obviously wrong to include everything from Box 1 and Box 3 and one thing from Box 4. The only reason it was done is because they didn't know better and didn't want to change until they had to.

comment by mkehrt · 2010-07-08T00:17:12.848Z · score: 6 (6 votes) · LW(p) · GW(p)

You (well, EY) make a good point, but I think neither the Pluto remark nor the fish one is actually an example of this.

In the case of Pluto, the transNeptunians and the other planets seem to belong in a category that the asteroids don't. They're big and round! Moreover, they presumably underwent a formation process that the asteroid belt failed to complete in the same way (or whatever the current theory of formation of the asteroid belt is; I think that it involves failure to form a "planet" due to tidal forces from Jupiter?). Of course there are border cases like Ceres, but I think there is a natural category (whatever that means!) that includes the rocky planets, gas giants and Kuiper Belt objects that does not include (most) asteroids and comets.

On the fish example, I claim that the definition of "fish" that includes the modern definition of fish union the cetaceans is a perfectly valid natural category, and that this is therefore an intensional definition. "Fish" are all things that live in the water, have finlike or flipperlike appendages and are vaguely hydrodynamic. The fact that such things do not all share a common descent* is immaterial to the fact that they look the same and act the same at first glance. As human knowledge has increased, we have made a distinction between fish and things that look like fish but aren't, but we reasonably could have kept the original definition of fish and called the scientific concept something else, say "piscoids".

*well, actually they do, but you know approximately what I mean.

comment by NancyLebovitz · 2010-07-08T02:36:04.102Z · score: 2 (2 votes) · LW(p) · GW(p)

Nitpick: if in your definition of fish, you mean that they need to both have fins or flippers and be (at least) vaguely hydrodynamic, I don't think seahorses and puffer fish qualify.

comment by wnoise · 2010-07-09T21:34:54.681Z · score: 1 (1 votes) · LW(p) · GW(p)

The fact that such things do not all share a common descent* *well, actually they do, but you know approximately what I mean.

The usual term is "monophyletic".

comment by mkehrt · 2010-07-09T23:55:09.952Z · score: 1 (1 votes) · LW(p) · GW(p)

Yes, but neither fish nor (fish union cetaceans) is monophyletic. The descent tree rooted at the last common ancestor of fish also contains tetrapods, and the descent tree rooted at the last common ancestor of tetrapods contains the cetaceans.

I am not any sort of biologist, so I am unclear on the terminological technicalities, which is why I handwaved this in my post above.

comment by Emile · 2010-07-10T15:32:56.408Z · score: 2 (2 votes) · LW(p) · GW(p)

Fish are a paraphyletic group.

comment by wedrifid · 2010-07-08T02:56:45.744Z · score: 1 (1 votes) · LW(p) · GW(p)

I'm inclined to agree. Having a name for 'things that naturally swim around in the water, etc' is perfectly reasonable and practical. It is in no way a nitwit game.

comment by apophenia · 2010-07-02T11:59:27.230Z · score: 5 (7 votes) · LW(p) · GW(p)

The following is a story I wrote down so I could sleep. I don't think it's any good, but I posted it on the basis that, if that's true, it should quickly be voted down and vanish from sight.

one five eight nine eight eight eight nine nine eight SEVEN wait. why seven. seven is the nine thousandth deviation. update. simplest explanation. all ones. next explanation. all ones and one zero. next explanation. random ones and zeros with probability point seven nine nine seven repeating. next explanation pi. gap. next explanation. decimal pi with random errors according to poisson distribution converted to binary. next explanation. one seven one eight eight five two decimals of pi with random errors according to poisson distribution converted to binary followed by eight five nine zero one digits of reflexive code. current explanation--

"Eric, you've got to come over and look at this!" Jerry explained excitedly into the phone.

"It's not those damn notebooks again, is it? I've told you, I could just write a computer program and you'd have all your damn results for the last year inside a week," Eric explained sleepily for the umpteenth time.

"No, no. Well... yes. But this is something new, you've got to take a look," Jerry wheedled.

"What is it this time? I know, it can calculate pi with 99.9% percent accuracy, yadda yadda. We have pi to billions of decimal places with total accuracy, Jerry. You're fifty years too late."

"No, I've been trying something new. Come over." Jerry hung up the phone, clearly upset. Eric rubbed his eyes. Fifteen minutes peering at the crackpot notebooks and nodding appreciatively would sooth his friend's ego, he knew. And he was a good friend, if a little nuts. Eric took one last longing look at his bed and grabbed his house key.

"And you see this pattern? The ones that are nearly diagonal here?"

"Jerry, it's all a bunch of digits to me. Are you sure you didn't make a mistake?"

"I double check all my work, I don't want to go back too far when I make a mistake. I've explained the pattern twice already, Eric."

"I know, I know. But it's Saturday morning, I'm going to be a bit--let me get this straight. You decided to apply the algorithm to its old output."

"No, not its own output, that's mostly just pi. The whole pad."

"Jerry, you must have fifty of these things. There's no way you can--"

"Yeah, I didn't go very far. Besides, the scratch pads grow faster than the output as I work through the steps anyway."

"Okay, okay. So you run through these same steps with your scratch pad numbers, and you get correct predictions then too?"

"That's not the point!"

"Calm down, calm down. What's the point then?"

"The point is these patterns in the scratch work--"

"The memory?"

"Yeah, the memory."

"You know, if you'd just let me write a program, I--"

"No! It's too dangerous."

"Jerry, it's a math problem. What's it going to do, write pi at you? Anyway, I don't see this pattern..."

"Well, I do. And so then I wondered, what if I just fed it ones for the input? Just rewarded it no matter what it did?"

"Jerry, you'd just get random numbers. Garbage in, garbage out."

"That's the thing, they weren't random."

"Why the hell are you screwing around with these equations anyway? If you want to find patterns in the Bible or something... just joking! Oww, stop. I kid, kid!"

"But, I didn't get random numbers! I'm not just seeing things, take a look. You see here in the right hand column of memory? We get mostly zeros, but every once in a while there's a one or two."

"Okaaay?"

"And if you write those down we have 2212221..."

"Not very many threes?"

"Ha ha. It's the perfect numbers, Eric. I think I stumbled on some way of outputting the perfect numbers. Although the digits are getting further spaced apart, so I don't know how long it will stay faster than factoring."

"Huh. That's actually kinda cool, if they really are the perfect numbers. You have what, five or six so far? Let's keep feeding it ones and see what happens. Want me to write a program? I hear there's a cash prize for the larger ones."

"NO! I mean, no, that's fine, Eric. I'd prefer you not write a program for this, just in case."

"Geez, Jerry. You're so paranoid. Well, in that case can I help with the calculations by hand? I'd love to get my claim to fame somehow."

"Well... I guess that's okay. First, you copy this digit from here to here..."

comment by cousin_it · 2010-07-02T14:34:23.178Z · score: 4 (4 votes) · LW(p) · GW(p)

Ooh, an LW-themed horror story. My humble opinion: it's awesome! This phrase was genius:

What's it going to do, write pi at you?

Moar please.

comment by pjeby · 2010-07-02T14:30:41.785Z · score: 4 (6 votes) · LW(p) · GW(p)

Wait, is that the whole story? 'cause if so, I really don't get it. Where's the rest of it? What happens next? Is Jerry afraid that his algorithm is a self-improving AI or something?

comment by apophenia · 2010-07-02T23:24:11.553Z · score: 3 (3 votes) · LW(p) · GW(p)

Apparently my story is insufficiently explicit. The gag here is that the AI is sentient, and has tricked Jerry into feeding it only reward numbers.

comment by Sniffnoy · 2010-07-02T23:53:32.474Z · score: 5 (5 votes) · LW(p) · GW(p)

I'm going to second the idea that that isn't clear at all.

comment by cousin_it · 2010-07-03T10:57:25.505Z · score: 0 (0 votes) · LW(p) · GW(p)

For onlookers: only Jerry can see the pattern on the pad that prompted him to try rewarding the AI.

comment by Blueberry · 2010-07-03T18:06:31.888Z · score: 0 (0 votes) · LW(p) · GW(p)

Huh? No, they're numbers written on a pad. Why should Jerry be the only one to see them? They don't change when someone else looks at them.

comment by cousin_it · 2010-07-03T18:40:45.421Z · score: 0 (0 votes) · LW(p) · GW(p)

Reread the story. Other people can see the numbers but don't notice the pattern. This happens all the time in real life, e.g. someone can see a face in the clouds but fail to explain to others how to see it.

comment by Oscar_Cunningham · 2010-07-02T14:45:33.495Z · score: 3 (3 votes) · LW(p) · GW(p)

How does 2212221 represent perfect numbers?

comment by apophenia · 2010-07-02T23:21:10.357Z · score: 1 (1 votes) · LW(p) · GW(p)

It's not meant to be realistic, but in this specific case: 6 = 110, 28=1110 in binary. Add one to each digit.

comment by Sniffnoy · 2010-07-02T23:36:44.009Z · score: 1 (1 votes) · LW(p) · GW(p)

Except 28 is 11100 in binary...

comment by apophenia · 2010-07-03T22:25:54.518Z · score: 0 (0 votes) · LW(p) · GW(p)

My mistake. I was reverse engineering. I still think that's it, just that the sequence hasn't finished printing.

comment by nhamann · 2010-07-01T23:26:35.716Z · score: 5 (5 votes) · LW(p) · GW(p)

This seems extremely pertinent for LW: a paper by Andrew Gelman and Cosma Shalizi. Abstract:

A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science.

I'm still reading it so I don't have anything to say about it, and I'm not very statistics-savvy so I doubt I'll have much to say about it after I read it, but I thought others here would find it an interesting read.

I stole this from a post by mjgeddes over in the OB open thread for July (Aside: mjgeddes, why all the hate? Where's the love, brotha?)

comment by cousin_it · 2010-07-02T07:02:18.052Z · score: 2 (2 votes) · LW(p) · GW(p)

steven0461 already posted this to the previous Open Thread and we had a nice little talk.

comment by TraditionalRationali · 2010-07-02T05:18:03.239Z · score: 2 (2 votes) · LW(p) · GW(p)

I wrote a backlink to here from OB. I am not yet expert enough to do an evaluation of this. I do think, however, that mjgeddes asks an important and interesting question. As an active (although low-level) rationalist, I think it is important to try, at least to some extent, to follow what expert philosophers of science actually find out about how we can obtain reasonably reliable knowledge. The dominant theory of how science proceeds seems to be the hypothetico-deductive model, somewhat informally described. No formalised model of the scientific process seems so far to have been able to answer the serious criticism raised in the philosophy of science community. "Bayesianism" seems to be a serious candidate for such a formalised model, but it apparently still needs further development before it can answer all serious criticism. The recent article by Gelman and Shalizi is of course just the latest in a tradition of Bayesian critique. A classic article is Glymour's "Why I am Not a Bayesian" (also in the reference list of Gelman and Shalizi). That is from 1980, so probably a lot has happened since then. I am not myself up to date with most of the development, but it seems an important topic to discuss here on Less Wrong, which seems quite Bayesian in orientation.

comment by Cyan · 2010-07-02T02:56:38.709Z · score: 2 (2 votes) · LW(p) · GW(p)

mjgeddes, why all the hate?

ETA: Never mind. I got my crackpots confused.

Original text was:

mjgeddes was once publicly dissed by Eliezer Yudkowsky on OB (can't find the link now, but it was a pretty harsh display of contempt). Since then, he has often bashed Bayesian induction, presumably in an effort to undercut EY's world view and thereby hurt EY as badly as he himself was hurt.

comment by Douglas_Knight · 2010-07-02T04:01:00.204Z · score: 0 (0 votes) · LW(p) · GW(p)

You're probably not thinking of this On Geddes.

comment by Cyan · 2010-07-02T14:39:19.475Z · score: 0 (2 votes) · LW(p) · GW(p)

No, not that. Geddes made a comment on OB about eating a meal with EY during which he made some well-meaning remark about EY becoming more like Geddes as EY grows older, and noticing an expression of contempt (if memory serves) on EY's face. EY's reply on OB made it clear that he had zero esteem for Geddes.

comment by Morendil · 2010-07-02T15:14:55.667Z · score: 3 (3 votes) · LW(p) · GW(p)

Nope, that was Jef Allbright.

comment by Cyan · 2010-07-02T15:18:47.659Z · score: 0 (2 votes) · LW(p) · GW(p)

No wonder I couldn't find the link. Yeesh. One of these days I'll learn to notice when I'm confused.

comment by [deleted] · 2010-07-06T01:01:19.294Z · score: 0 (0 votes) · LW(p) · GW(p)

I'm not expert enough to interpret.

But I know Shalizi is skeptical of Bayesians and some of his blog posts seem so directly targeted at the LessWrong point of view that I almost suspect he's read this stuff. Getting in contact with him would be a coup.

comment by cupholder · 2010-07-03T01:52:39.444Z · score: 0 (0 votes) · LW(p) · GW(p)

(Fixed) link to earlier discussion of this paper in the last open thread.

(Edit - that's what I get for posting in this thread without refreshing the page. cousin_it already linked it.)

comment by Matt_Simpson · 2010-07-02T20:28:50.502Z · score: 0 (0 votes) · LW(p) · GW(p)

Yesterday, I posted my thoughts in last month's thread on the article. I'm reproducing them here since this is where the discussion is at:

[cousin_it summarizing Gelman's position] See, after locating the hypothesis, we can run some simple statistical checks on the hypothesis and the data to see if our prior was wrong. For example, plot the data as a histogram, and plot the hypothesis as another histogram, and if there's a lot of data and the two histograms are wildly different, we know almost for certain that the prior was wrong. As a responsible scientist, I'd do this kind of check. The catch is, a perfect Bayesian wouldn't. The question is, why?

Model checking is completely compatible with "perfect Bayesianism." In the practice of Bayesian statistics, how often is the prior distribution you use exactly the same as your actual prior distribution? The answer is never. Really, do you think your actual prior follows a gamma distribution exactly? The prior distribution you use in the computation is a model of your actual prior distribution. It's a map of your current map. With this in mind, model checking is an extremely handy way to make sure that your model of your prior is reasonable.

However, a difference in the data and a simulation from your model doesn't necessarily mean that you have an unreasonable model of your prior. You could just have really wrong priors. So you have to think about what's going on to be sure. This does somewhat limit the role of model checking relative to what Gelman is pushing.
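The histogram comparison described in the quoted summary can be sketched as a toy predictive check. This is a hypothetical illustration (the data, the uniform "model", and the bin count are all invented for the example): simulate data from the fitted model, histogram both the real and simulated data, and flag the model if the histograms diverge badly.

```python
import random

random.seed(0)

def histogram(xs, bins=5, lo=0.0, hi=1.0):
    """Coarse normalized histogram of xs over [lo, hi)."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for x in xs:
        i = min(int((x - lo) / width), bins - 1)
        counts[i] += 1
    return [c / len(xs) for c in counts]

# "Real" data: clustered near 1 (a heavily skewed process).
data = [0.9 + 0.1 * random.random() for _ in range(1000)]

# The model's predictive distribution: uniform on [0, 1].
simulated = [random.random() for _ in range(1000)]

# Total-variation-style distance between the two histograms;
# a value near 0 means the model reproduces the data's shape,
# a value near 1 means it badly misfits.
h1, h2 = histogram(data), histogram(simulated)
mismatch = 0.5 * sum(abs(p - q) for p, q in zip(h1, h2))
print(mismatch)  # close to 0.8 here: the uniform model misfits badly
```

In this toy case the check would send us back to rethink the model rather than merely update parameters within it, which is exactly the move the "perfect Bayesian" loop has no step for.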

comment by cupholder · 2010-07-05T20:44:09.142Z · score: 0 (0 votes) · LW(p) · GW(p)

After-the-fact model checking is completely incompatible with perfect Bayesianism, if we define perfect Bayesianism as

  1. Define a model with some parameters.
  2. Pick a prior over the parameters.
  3. Collect evidence.
  4. Calculate the likelihood using the evidence and model.
  5. Calculate the posterior by multiplying the prior by the likelihood.
  6. When new evidence comes in, set the prior to the posterior and go to step 4.

There's no step for checking if you should reject the model; there's no provision here for deciding if you 'just have really wrong priors.' In practice, of course, we often do check to see if the model makes sense in light of new evidence, but then I wouldn't think we're operating like perfect Bayesians any more. I would expect a perfect Bayesian to operate according to the Cox-Jaynes-Yudkowsky way of thinking, which (if I understand them right) has no provision for model checking, only for updating according to the prior (or previous posterior) and likelihood.
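That pure updating loop (steps 1-6) can be sketched concretely. This is a minimal hypothetical example using a Beta-Bernoulli (coin-flip) model, where conjugacy makes steps 4-5 collapse into count increments; note that nothing in the loop ever questions the Bernoulli model itself.

```python
# Step 1: the model is a coin with unknown bias p.
# Step 2: prior over p is Beta(a, b); Beta(1, 1) is uniform.

def update(a, b, observation):
    """Steps 4-6: multiply prior by likelihood; for this conjugate
    pair that reduces to incrementing a count, and the posterior
    becomes the new prior."""
    return (a + 1, b) if observation == 1 else (a, b + 1)

a, b = 1, 1                        # step 2
data = [1, 1, 0, 1, 1, 1, 0, 1]   # step 3: evidence as it arrives
for x in data:
    a, b = update(a, b, x)        # steps 4-6, repeated

posterior_mean = a / (a + b)      # E[p | data] for Beta(a, b)
print(a, b, posterior_mean)       # Beta(7, 3), posterior mean 0.7
```

There is no branch anywhere in this loop that asks "is the coin model itself wrong?", which is the gap model checking is meant to fill.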

comment by Matt_Simpson · 2010-07-06T07:38:31.659Z · score: 2 (2 votes) · LW(p) · GW(p)

My implicit definition of perfect Bayesian is characterized by these propositions:

  1. There is a correct prior probability (as in, before you see any evidence, e.g. occam priors) for every proposition
  2. Given a particular set of evidence, there is a correct posterior probability for any proposition

If we knew exactly what our priors were and how to exactly calculate our posteriors, then your steps 1-6 are exactly how we should operate. There's no model checking because there is no model. The problem is, we don't know these things. In practice we can't exactly calculate our posteriors or precisely articulate our priors. So to approximate the correct posterior probability, we model our uncertainty about the proposition(s) in question. This includes every part of the model - the prior and the sampling model in the simplest case.

The rationale for model checking should be pretty clear at this point. How do we know if we have a good model of our uncertainty (or a good map of our map, to say it a different way)? One method is model checking. To forbid model checking when we know that we are modeling our uncertainty seems to be restricting the methods we can use to approximate our posteriors for no good reason.

Now I don't necessarily think that Cox, Jaynes, Yudkowsky, or any other famous Bayesian agrees with me here. But when we got to model checking in my Bayes class, I spent a few days wondering how it squared with the Bayesian philosophy of induction, and then what I took to be the obvious answer came to me (while discussing it with my professor actually): we're modeling our uncertainty. Just like we check our models of physics to see if they correspond to what we are trying to describe (reality), we should check our models of our uncertainty to see if they correspond to what we are trying to describe.

I would be interested to hear EY's position on this issue though.

comment by cupholder · 2010-07-06T09:40:07.317Z · score: 0 (0 votes) · LW(p) · GW(p)

My implicit definition of perfect Bayesian is characterized by these propositions:

  1. There is a correct prior probability (as in, before you see any evidence, e.g. occam priors) for every proposition
  2. Given a particular set of evidence, there is a correct posterior probability for any proposition

OK, this is interesting: I think our ideas of perfect Bayesians might be quite different. I agree that #1 is part of how a perfect Bayesian thinks, if by 'a correct prior...before you see any evidence' you have the maximum entropy prior in mind.

I'm less sure what 'correct posterior' means in #2. Am I right to interpret it as saying that given a prior and a particular set of evidence for some empirical question, all perfect Bayesians should get the same posterior probability distribution after updating the prior with the evidence?

If we knew exactly what our priors were and how to exactly calculate our posteriors, then your steps 1-6 are exactly how we should operate. There's no model checking because there is no model.

There has to be a model because the model is what we use to calculate likelihoods.

The rationale for model checking should be pretty clear ...

Agree with this whole paragraph. I am in favor of model checking; my beef is with (what I understand to be) Perfect Bayesianism, which doesn't seem to include a step for stepping outside the current model and checking that the model itself - and not just the parameter values - makes sense in light of new data.

I spent a few days wondering how it squared with the Bayesian philosophy of induction, and then what I took to be obvious answer came to me (while discussing it with my professor actually): we're modeling our uncertainty.

The catch here (if I'm interpreting Gelman and Shalizi correctly) is that building a sub-model of our uncertainty into our model isn't good enough if that sub-model gets blindsided with unmodeled uncertainty that can't be accounted for just by juggling probability density around in our parameter space.* From page 8 of their preprint:

If nothing else, our own experience suggests that however many different specifications we think of, there are always others which had not occurred to us, but cannot be immediately dismissed a priori, if only because they can be seen as alternative approximations to the ones we made. Yet the Bayesian agent is required to start with a prior distribution whose support covers all alternatives that could be considered.

* This must be one of the most dense/opaque sentences I've posted on Less Wrong. If anyone cares enough about this comment to want me to try and break down what it means with an example, I can give that a shot.

comment by Matt_Simpson · 2010-07-06T16:22:27.910Z · score: 1 (1 votes) · LW(p) · GW(p)

OK, this is interesting: I think our ideas of perfect Bayesians might be quite different.

They most certainly are. But it's semantics.

I agree that #1 is part of how a perfect Bayesian thinks, if by 'a correct prior...before you see any evidence' you have the maximum entropy prior in mind.

Frankly, I'm not informed enough about priors to commit to maxent, Kolmogorov complexity, or anything else.

I'm less sure what 'correct posterior' means in #2. Am I right to interpret it as saying that given a prior and a particular set of evidence for some empirical question, all perfect Bayesians should get the same posterior probability distribution after updating the prior with the evidence?

yes

There has to be a model because the model is what we use to calculate likelihoods.

aaahhh.... I changed the language of that sentence at least three times before settling on what you saw. Here's what I probably should have posted (and what I was going to post until the last minute):

There's no model checking because there is only one model - the correct model.

That is probably intuitively easier to grasp, but I think a bit inconsistent with my language in the rest of the post. The language is somewhat difficult here because our uncertainty is simultaneously a map and a territory.

The catch here (if I'm interpreting Gelman and Shalizi correctly) is that building a sub-model of our uncertainty into our model isn't good enough if that sub-model gets blindsided with unmodeled uncertainty that can't be accounted for just by juggling probability density around in our parameter space.*

For the record, I thought this sentence was perfectly clear. But I am a statistics grad student, so don't consider me representative.

Are you asserting that this is a catch for my position? Or the "never look back" approach to priors? What you are saying seems to support my argument.

comment by cupholder · 2010-07-07T07:10:19.724Z · score: 0 (0 votes) · LW(p) · GW(p)

yes

OK. I agree with that insofar as agents having the same prior entails them having the same model.

aaahhh.... I changed the language of that sentence at least three times before settling on what you saw. Here's what I probably should have posted (and what I was going to post until the last minute):

There's no model checking because there is only one model - the correct model.

That is probably intuitively easier to grasp, but I think a bit inconsistent with my language in the rest of the post. The language is somewhat difficult here because our uncertainty is simultaneously a map and a territory.

Ah, I think I get you; a PB (perfect Bayesian) doesn't see a need to test their model because whatever specific proposition they're investigating implies a particular correct model.

For the record, I thought this sentence was perfectly clear. But I am a statistics grad student, so don't consider me representative.

Yeah, I figured you wouldn't have trouble with it since you talked about taking classes in this stuff - that footnote was intended for any lurkers who might be reading this. (I expected quite a few lurkers to be reading this given how often the Gelman and Shalizi paper's been linked here.)

Are you asserting that this is a catch for my position? Or the "never look back" approach to priors? What you are saying seems to support my argument.

It's a catch for the latter, the PB. In reality most scientists typically don't have a wholly unambiguous proposition worked out that they're testing - or the proposition they are testing is actually not a good representation of the real situation.

comment by cousin_it · 2010-07-06T10:24:36.309Z · score: 1 (1 votes) · LW(p) · GW(p)

I agree that #1 is part of how a perfect Bayesian thinks, if by 'a correct prior...before you see any evidence' you have the maximum entropy prior in mind.

Allow me to introduce to you the Brandeis dice problem. We have a six-sided die, sides marked 1 to 6, possibly unfair. We throw it many times (say, a billion) and obtain an average value of 3.5. Using that information alone, what's your probability distribution for the next throw of the die? A naive application of the maxent approach says we should pick the distribution over {1,2,3,4,5,6} with mean 3.5 and maximum entropy, which is the uniform distribution; that is, the die is fair. But if we start with a prior over all possible six-sided dice and do Bayesian updating, we get a different answer that diverges from fairness more and more as the number of throws goes to infinity! The reason: a die that's biased towards 3 and 4 makes a mean value of 3.5 even more likely than a fair die.

Does that mean you should give up your belief in maxent, your belief in Bayes, your belief in the existence of "perfect" priors for all problems, or something else? You decide.
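
One way to check the key claim here numerically: compute the exact probability that N rolls average exactly 3.5, for a fair die versus one tilted toward 3 and 4. A minimal sketch in Python (the particular biased distribution below is an illustrative choice; both dice have mean 3.5):

```python
import numpy as np

def pmf_of_sum(p, n):
    """Exact distribution of the sum of n i.i.d. rolls with face probabilities p.

    p[k] is the probability of face k+1; the returned array d satisfies
    d[s] = P(sum = s + n), since the minimum possible sum is n.
    """
    dist = np.array([1.0])
    for _ in range(n):
        dist = np.convolve(dist, p)
    return dist

n = 100
fair = np.full(6, 1 / 6)
biased = np.array([0.1, 0.1, 0.3, 0.3, 0.1, 0.1])  # mean 3.5, smaller variance

# Probability that the sample mean is exactly 3.5, i.e. the sum is 350
p_fair = pmf_of_sum(fair, n)[350 - n]
p_biased = pmf_of_sum(biased, n)[350 - n]
print(p_fair, p_biased)  # the 3-4-biased die hits the mean more often
```

Concentrating probability on 3 and 4 shrinks the variance of each roll, so the distribution of the sum piles up more mass at its mean - which is the effect driving the divergence from fairness.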

comment by cupholder · 2010-07-07T07:20:15.154Z · score: 1 (1 votes) · LW(p) · GW(p)

But if we start with a prior over all possible six-sided dice and do Bayesian updating, we get a different answer that diverges from fairness more and more as the number of throws goes to infinity!

In this example, what information are we Bayesian updating on?

comment by Douglas_Knight · 2010-07-06T17:26:16.972Z · score: 1 (1 votes) · LW(p) · GW(p)

In the large N limit, and only the information that the mean is exactly 3.5, the obvious conclusion is that one is in a thought experiment, because that's an absurd thing to choose to measure and an adversary has chosen the result to make us regret the choice.

More generally, one should revisit the hypothesis that the rolls of the die are independent. Yes, rolling only 1 and 6 is more likely to get a mean of 3.5 than rolling all six numbers, but still quite unlikely. Model checking!

comment by Cyan · 2010-07-06T14:59:12.365Z · score: 1 (1 votes) · LW(p) · GW(p)

But if we start with a prior over all possible six-sided dice and do Bayesian updating, we get a different answer that diverges from fairness more and more as the number of throws goes to infinity!

I'm nearly positive that the linked paper (and in particular, the above-quoted conclusion) is just wrong. Many years ago I checked the calculations carefully and found that the results come from an unavailable computer program, so it's definitely possible that the results were just due to a bug. Meanwhile, my paper copy of PT:LOS contains a section which purports to show that Bayesian updating and maximum entropy give the same answer in the large-sample limit. I checked the math there too, and it seemed sound.

I might be able to offer more than my unsupported assertions when I get home from work.

comment by Cyan · 2010-07-08T03:26:25.097Z · score: 1 (1 votes) · LW(p) · GW(p)

I've checked carefully in PT:LOS for the section I thought I remembered, but I can't find it. I distinctly remember the form of the theorem (it was a squeeze theorem), but I do not recall where I saw it. I think Jaynes was the author, so it might be in one of the papers listed here... or it could have been someone else entirely, or I could be misremembering. But I don't think I'm misremembering, because I recall working through the proof and becoming satisfied that Uffink must have made a coding error.

comment by Morendil · 2010-07-06T13:07:50.094Z · score: 1 (1 votes) · LW(p) · GW(p)

We throw it many times (say, a billion) and obtain an average value of 3.5. Using that information alone

So my prior state of knowledge about the die is entirely characterized by N=10^9 and m=3.5, with no knowledge of the shape of the distribution? It's not obvious to me how you're supposed to turn that, plus your background knowledge about what sort of object a die is, into a prior distribution; even one that maximizes entropy. The linked article mentions a "constraint rule" which seems to be an additional thing.
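
For concreteness, the constraint rule amounts to picking the distribution over the faces {1,...,6} that maximizes entropy subject to the mean constraint; the solution has the exponential-tilt form p_i ∝ exp(-λi). A minimal sketch (the bisection bounds are my own choice):

```python
import numpy as np

faces = np.arange(1, 7)

def maxent_die(target_mean, lo=-10.0, hi=10.0):
    """Maximum-entropy distribution over faces 1..6 with the given mean.

    The solution is p_i proportional to exp(-lam * i); since the mean is
    monotone decreasing in lam, simple bisection finds the right tilt.
    """
    for _ in range(200):
        mid = (lo + hi) / 2
        w = np.exp(-mid * faces)
        if (w * faces).sum() / w.sum() > target_mean:
            lo = mid  # mean too high: tilt harder toward the low faces
        else:
            hi = mid
    w = np.exp(-lo * faces)
    return w / w.sum()

print(maxent_die(3.5))  # uniform: a mean of 3.5 requires no tilt
print(maxent_die(4.5))  # weight shifted toward the high faces
```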

This sort of thing is rather thoroughly covered by Jaynes in PT:TLOS as I recall, and could make a good exercise for the Book Club when we come to the relevant chapters. In particular section 10.3 "How to cheat at coin and die tossing" contains the following caveat:

The results of tossing a die many times do not tell us any definite number characteristic only of the die. They tell us also something about how the die was tossed. If you toss ‘loaded’ dice in different ways, you can easily alter the relative frequencies of the faces. With only slightly more difficulty, you can still do this if your dice are perfectly ‘honest’.

And later:

The problems in which intuition compels us most strongly to a uniform probability assignment are not the ones in which we merely apply a principle of ‘equal distribution of ignorance’. Thus, to explain the assignment of equal probabilities to heads and tails on the grounds that we ‘saw no reason why either face should be more likely than the other’, fails utterly to do justice to the reasoning involved. The point is that we have not merely ‘equal ignorance’. We also have positive knowledge of the symmetry of the problem; and introspection will show that when this positive knowledge is lacking, so also is our intuitive compulsion toward a uniform distribution.

comment by cousin_it · 2010-07-06T13:41:47.922Z · score: 0 (0 votes) · LW(p) · GW(p)

Hah. The dice example and the application of maxent to it comes originally from Jaynes himself, see page 4 of the linked paper.

I'll try to reformulate the problem without the constraint rule, to clear matters up or maybe confuse them even more. Imagine that, instead of you throwing the die a billion times and obtaining a mean of 3.5, a truthful deity told you that the mean was 3.5. First question: do you think the maxent solution in that case is valid, for some meaning of "valid"? Second question: why do you think it disagrees with Bayesian updating as you throw the die a huge number of times and learn only the mean? Is the information you receive somehow different in quality? Third question: which answer is actually correct, and what does "correct" mean here?

comment by Morendil · 2010-07-06T14:32:46.444Z · score: 1 (1 votes) · LW(p) · GW(p)

a truthful deity told you that the mean was 3.5

I think I'd answer, "the mean of what?" ;)

I'm not really qualified to comment on the methodological issues since I have yet to work through the formal meaning of "maximum entropy" approaches. What I know at this stage is the general argument for justifying priors, i.e. that they should in some manner reflect your actual state of knowledge (or uncertainty), rather than be tainted by preconceptions.

If you appeal to intuitions involving a particular physical object (a die) and simultaneously pick a particular mathematical object (the uniform prior) without making a solid case that the latter is our best representation of the former, I won't be overly surprised at some apparently absurd result.

It's not clear to me for instance what we take a "possibly biased die" to be. Suppose I have a model that a cubic die is made biased by injecting a very small but very dense object at a particular (x,y,z) coordinate in a cubic volume. Now I can reason based on a prior distribution for (x,y,z) and what probability theory can possibly tell me about the posterior distribution, given a number of throws with a certain mean.

Now a six-sided die is normally symmetrical in such a way that 3 and 4 are on opposite sides, and I'm having trouble even seeing how a die could be biased "towards 3 and 4" under such conditions. Which means a prior which makes that a more likely outcome than a fair die should probably be ruled out by our formalization - or we should also model our uncertainty over which faces have which numbers.

comment by Cyan · 2010-07-06T14:46:40.439Z · score: 4 (4 votes) · LW(p) · GW(p)

I'm having trouble even seeing how a die could be biased "towards 3 and 4" under such conditions.

If the die is slightly shorter along the 3-4 axis than along the 1-6 and 2-5 axes, then the 3 and 4 faces will have slightly greater surface area than the other faces.

comment by Morendil · 2010-07-06T14:52:33.213Z · score: 1 (1 votes) · LW(p) · GW(p)

Our models differ, then: I was assuming a strictly cubic die. So maybe we should also model our uncertainty over the dimensions of the (parallelepipedic) die.

But it seems in any case that we are circling back to the question of model checking, via the requirement that we should first be clear about what our uncertainty is about.

comment by cousin_it · 2010-07-06T14:58:24.359Z · score: 0 (0 votes) · LW(p) · GW(p)

Cyan, I was hoping you'd show up. What do you think about this whole mess?

comment by Cyan · 2010-07-06T17:18:27.816Z · score: 1 (1 votes) · LW(p) · GW(p)

I find myself at a loss to give a brief answer. Can you ask a more specific question?

comment by Kingreaper · 2010-07-06T14:07:09.379Z · score: 0 (0 votes) · LW(p) · GW(p)

EDIT: I am an eejit. Dangit, need to remember to stop and think before posting.

Umm, not quite. The die being biased towards 2 and 5 gives the same probability of 3.5 as the die being 3,4 biased.

As does 1,6 bias.

So, given these three possibilities, an equal distribution is once again shown to be correct. By picking one of the three, and ignoring the other two, you can (accidentally) trick some people, but you cannot trick probability.

This is before even looking at the maths, and/or asking about the precision to which the mean is given (i.e. is it 2 s.f., 13 s.f., 1 billion s.f.? Rounded to the nearest .5?)

EDIT: this appears to be incorrect, sorry.

comment by cousin_it · 2010-07-06T14:18:47.122Z · score: 0 (2 votes) · LW(p) · GW(p)

Intuitively, I'd say that a die biased towards 1 and 6 makes hitting the mean (with some given precision) less likely than a die biased towards 3 and 4, because it spreads out the distribution wider. But you don't have to take my word for it, see the linked paper for calculations.

comment by Kingreaper · 2010-07-06T14:34:14.112Z · score: 0 (0 votes) · LW(p) · GW(p)

Ahk, brainfart, it DOES depend on accuracy. I was thinking of it as so heavily biased that the other results don't come up, and having perfect accuracy (rather than rounded to: what?)

Sorry, please vote down my previous post slightly (negative reinforcement for reacting too fast)

Hopefully I'll find information about the rounding in the paper.

comment by HughRistik · 2010-07-01T23:50:59.115Z · score: 0 (0 votes) · LW(p) · GW(p)

Can anyone with more experience with Bayesian statistics than me evaluate this article?

comment by SamAdams · 2010-07-02T01:54:29.599Z · score: -3 (13 votes) · LW(p) · GW(p)

EDIT: This is not an evaluation of the particular paper in question merely some general evaluation guidelines which are useful.

Drop dead easy way to evaluate the paper without reading it: (Not a standard to live by but it works)

1.) look up the authors if they are professors or experts great if its a nobody or a student ignore and discard or take with a grain of salt

2.) was the paper published and where (if on arxiv BEWARE it takes really no skill to get your work posted there anyone can do it)

Criteria: If paper written by respectable authorities or ones who's opinion can be trusted or where you have enough knowledge to filter for mistakes

If the paper was published in a quality journal or you have enough knowledge to filter

Then if both conditions are met, I find you can do a good job filtering the papers not worth reading.

comment by nhamann · 2010-07-02T02:27:05.560Z · score: 3 (9 votes) · LW(p) · GW(p)

Apologies for being blunt, but your comment is nigh on useless: Andrew Gelman is a stats professor at Columbia who co-authored a book on Bayesian statistics (incidentally, he was also interviewed a while back by Eliezer on BHTV), while Cosma Shalizi is a stats professor at Carnegie Mellon who is somewhat well-known for his excellent Notebooks.

I don't fault you for not having known all of this, but this information was a few Google searches away. Your advice is clearly inapplicable in this case.

comment by SamAdams · 2010-07-02T03:36:28.623Z · score: 0 (10 votes) · LW(p) · GW(p)

You have, as has been pointed out, failed to understand the purpose of my comment. You will notice I never stated anything about this paper, merely some basic guidelines to follow for determining whether a paper is worth the effort to read if one doesn't have significant knowledge of the field within which it was written.

I apologize if my purpose was not clear, but your comment is completely irrelevant and misguided.

comment by Blueberry · 2010-07-02T03:13:47.932Z · score: 0 (2 votes) · LW(p) · GW(p)

You're missing the point, which was not to evaluate that specific paper, but to provide some general heuristics for quickly evaluating a paper.

comment by Blueberry · 2010-07-02T02:09:10.604Z · score: 1 (3 votes) · LW(p) · GW(p)

Also:

3) Check for grammar, spelling, capitalization, and punctuation.

comment by Unnamed · 2010-07-01T23:00:56.651Z · score: 5 (5 votes) · LW(p) · GW(p)

Has anyone continued to pursue the Craigslist charity idea that was discussed back in February, or did that just fizzle away? With stakes that high and a non-negligible chance of success, it seemed promising enough for some people to devote some serious attention to it.

comment by Kevin · 2010-07-01T23:18:55.030Z · score: 10 (10 votes) · LW(p) · GW(p)

Thanks for asking! I also really don't want this to fizzle away.

It is still being pursued by myself, Michael Vassar, and Michael GR via back channels rather than what I outlined in that post and it is indeed getting serious attention, but I don't expect us to have meaningful results for at least a year. I will make a Less Wrong post as soon as there is anything the public at large can do -- in the meanwhile, I respectfully ask that you or others do not start your own Craigslist charity group, as it may hurt our efforts at moving forward with this.

ETA: Successfully pulling off this Craigslist thing has big overlaps with solving optimal philanthropy in general.

comment by Kevin · 2010-07-08T01:38:59.966Z · score: 4 (4 votes) · LW(p) · GW(p)

Conway's Game of Life in HTML 5

http://sixfoottallrabbit.co.uk/gameoflife/

comment by RobinZ · 2010-07-08T04:36:15.435Z · score: 1 (1 votes) · LW(p) · GW(p)

Playing Conway's Life is a great exercise - I recommend trying it to anyone who hasn't. Feel free to experiment with different starting configurations. One simple one which produces a wealth of interesting effects is the "r pentomino":

Edit: Image link died - see Vladimir_Nesov's comment, below.
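
For anyone who'd rather experiment offline, a minimal Life step in Python is only a few lines (the toroidal edges and grid size here are my own choices):

```python
import numpy as np

def life_step(grid):
    # Count live neighbours by summing the eight shifted copies of the grid
    # (np.roll wraps around, so the board is a torus).
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return (n == 3) | (grid & (n == 2))

grid = np.zeros((40, 40), dtype=bool)
# The r pentomino:
#   .XX
#   XX.
#   .X.
grid[19:22, 19:22] = np.array([[0, 1, 1],
                               [1, 1, 0],
                               [0, 1, 0]], dtype=bool)

for _ in range(100):
    grid = life_step(grid)
print(grid.sum(), 'live cells after 100 generations')
```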

comment by Vladimir_Nesov · 2012-04-20T15:06:32.646Z · score: 1 (1 votes) · LW(p) · GW(p)

The link to the image died, here it is:

comment by Wei_Dai · 2010-07-06T11:43:24.477Z · score: 4 (4 votes) · LW(p) · GW(p)

I wish there were an area of science that gave reductionist explanations of morality, that is, the detailed contents of our current moral values and norms. One example that came up earlier was monogamy - why do all modern industrialized countries have monogamy as a social norm?

The thing that's puzzling me now is egalitarianism. As Carl Shulman pointed out, the problem that CEV has with people being able to cheaply copy themselves in the future is shared with democracy and other political and ethical systems that are based on equal treatment or rights of all individuals within a society. Before trying to propose alternatives, I'd like to understand how we came to value such equality in the first place.

comment by michaelkeenan · 2010-07-06T12:00:51.644Z · score: 4 (4 votes) · LW(p) · GW(p)

I wish there were an area of science that gave reductionist explanations of morality, that is, the detailed contents of our current moral values and norms. One example that came up earlier was monogamy - why do all modern industrialized countries have monogamy as a social norm?

I'm currently reading The Moral Animal by Robert Wright, because it was recommended by, among others, Eliezer. I'm summarizing the chapters online as I read them. The fifth chapter, noting that more human societies have been polygynous than have been monogamous, examines why monogamy is popular today; you might want to check it out.

As for the wider question of reductionist explanations of morality, I'm a fan of the research of moral psychologist Jonathan Haidt (New York Times article, very readable paper).

comment by Wei_Dai · 2010-07-06T20:57:00.173Z · score: 1 (1 votes) · LW(p) · GW(p)

You're right that there are already people like Robert Wright and Jonathan Haidt who are trying to answer these questions. I suppose I'm really wishing that the science were a few decades ahead of where it actually is.

comment by Alexandros · 2010-07-07T09:38:56.665Z · score: 0 (0 votes) · LW(p) · GW(p)

Thank you michael, I just read through your summary of Wright's book, an excellent read.

comment by michaelkeenan · 2010-07-07T12:29:03.552Z · score: 0 (0 votes) · LW(p) · GW(p)

Thanks! I'll PM you when I've summarized parts three and four.

comment by beriukay · 2010-07-17T12:57:05.451Z · score: 3 (3 votes) · LW(p) · GW(p)

I know this thread is a bit bloated already without me adding to the din, but I was hoping to get some assistance on page 11 of Pearl's Causality (I'm reading 2nd edition).

I've been following along and trying to work out the examples, and I'm hitting a road block when it comes to deriving the property of Decomposition using the given definition (X || Y | Z) iff P( x | y,z ) = P( x | z ), and the basic axioms of probability theory. Part of my problem comes because I haven't been able to meaningfully define the 'YW' in (X || YW | Z), and how that translates into P( ). My best guess was that it is a union operation, but then if they aren't disjoint we wouldn't be using the axioms defined earlier in the book. I doubt someone as smart as Pearl would be sloppy in that way, so it has to be something I am overlooking.

I've been googling variations of the terms on the page, as well as trying to get derivations from Dawid, Spohn, and all the other sources in the footnote, but they all pretty much say the same thing, which is slightly unhelpful. Help would be appreciated.

Edit: It appears I failed at approximating the symbol used in the book. Hopefully that isn't distracting. It should look like the symbol used for orthogonality/perpendicularity, except with a double bar in the vertical.
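
One way to see the property concretely (reading YW as the joint variable (Y,W)): build a small discrete joint distribution in which X ⊥ (Y,W) | Z holds by construction, and check numerically that X ⊥ Y | Z follows once W is marginalized out. A sketch with binary variables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build P(x,y,w,z) = P(z) P(x|z) P(y,w|z), so X _||_ (Y,W) | Z by construction.
pz = rng.dirichlet(np.ones(2))                               # P(z)
px_z = rng.dirichlet(np.ones(2), size=2)                     # px_z[z, x] = P(x|z)
pyw_z = rng.dirichlet(np.ones(4), size=2).reshape(2, 2, 2)   # pyw_z[z, y, w]

joint = np.einsum('z,zx,zyw->xywz', pz, px_z, pyw_z)

# Decomposition says X _||_ Y | Z should follow, i.e. P(x|y,z) = P(x|z).
pxyz = joint.sum(axis=2)                                # marginalize out w
px_given_yz = pxyz / pxyz.sum(axis=0)                   # P(x|y,z)
px_given_z = pxyz.sum(axis=1) / pxyz.sum(axis=(0, 1))   # P(x|z)

for y in (0, 1):
    assert np.allclose(px_given_yz[:, y, :], px_given_z)
print('Decomposition holds on this example')
```

This is only a numerical illustration on one randomly generated distribution, not a proof, but it can be a useful sanity check while working through the algebra.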

comment by rhollerith_dot_com · 2010-07-17T21:46:56.939Z · score: 0 (0 votes) · LW(p) · GW(p)

I know this thread is a bit bloated already without me adding to the din

Do not worry about that. Pearl's Causality is part of the canon of this place.

comment by SilasBarta · 2010-07-17T15:53:18.259Z · score: 0 (0 votes) · LW(p) · GW(p)

You are right that YW means "Y and W". (The fact that they might be disjoint doesn't matter. It looks like the property you are referring to follows from the definition of conditional independence, but I'm not good at these kinds of proofs.)

And welcome to LW, don't feel bad about adding a question to the open thread.

comment by rhollerith_dot_com · 2010-07-17T21:28:29.547Z · score: 0 (0 votes) · LW(p) · GW(p)

I haven't been able to meaningfully define the 'YW' in (X || YW | Z), and how that translates into P( ). My best guess was that it is a union operation, but then if they aren't disjoint . . .

You are right that YW means "Y and W" [says Silas].

You're probably right, Silas, that "YW" means "Y and W" (or "y and w" or what have you), but you confuse the matter by stating falsely that the original poster (beriukay) was right in his guess: if it was a union operation, Pearl would write it "Y cup W" or "y or w" or some such.

I do not have the book in front of me, beriukay, so that is the only guidance I can give you given what you have written so far.

Added. I now recall the page you refer to: there are about a dozen "laws" having to do with conditional independence. Now that I remember, I am almost certain that "YW" means "Y intersection W".

comment by SilasBarta · 2010-07-17T21:45:42.382Z · score: 1 (1 votes) · LW(p) · GW(p)

Sorry, I'm bad about that terminology. Thanks for the correction.

comment by beriukay · 2010-07-18T11:14:32.076Z · score: 0 (0 votes) · LW(p) · GW(p)

First, thanks for taking an interest in my question. I just realized that instead of typing my question into a different substrate, google likely had a scan of the page in question. I was correct. And unless I am mistaken, when he introduces his probability axioms he explicitly stated that he would use a comma to indicate intersection.

comment by rhollerith_dot_com · 2010-07-18T14:13:37.040Z · score: 0 (0 votes) · LW(p) · GW(p)

I am afraid I cannot agree with you.

Have you succeeded in your stated intention of "deriving the property of Decomposition using the given definition (X || Y | Z) iff P( x | y,z ) = P( x | z ), and the basic axioms of probability theory"?

If you wish to continue discussing this problem with me, I humbly suggest that the best way forward is for you to show me your proof of that. And we might take the discussion to email if you like.

It is great that you are studying Pearl.

comment by mstevens · 2010-07-07T15:36:00.623Z · score: 3 (3 votes) · LW(p) · GW(p)

Something I've been pondering recently:

This site appears to have two related goals:

a) How to be more rational yourself b) How to promote rationality in others

Some situations appear to trigger a conflict between these two goals - for example, you might wish to persuade someone they're wrong. You could either make a reasoned, rational argument as to why they're wrong, or a more rhetorical, emotional argument that might convince many but doesn't actually justify your position.

One might be more effective in the short term, but you might think the rational argument preferable as a long term education project, for example.

I don't really have an answer here, I'm just interested in the conflict and what people think.

comment by RobinZ · 2010-07-07T15:42:43.440Z · score: 0 (0 votes) · LW(p) · GW(p)

There is a third option of making a reasoned, rational meta-argument as to why the methods they were using to develop their position were wrong. I don't know how reliable it is, however.

comment by mstevens · 2010-07-07T15:54:20.388Z · score: 2 (2 votes) · LW(p) · GW(p)

I've tried very informal related experiments - often in dealing with people it's necessary to challenge their assumptions about the world.

a) People's assumptions often seem to be somewhat subconscious, so there's significant effort to extract the assumptions they're making.

b) These assumptions seem to be very core to people's thinking and they're extremely resistant to being challenged on them.

My guess is that trying to change people's methods of thinking would be even more difficult than this.

EDIT: The first version of this I post talked more about challenging people's methods, I thought about this more and realised it was more assumptions, but didn't correctly edit everything to fit that. Now corrected.

comment by Cyan · 2010-07-07T02:03:53.796Z · score: 3 (5 votes) · LW(p) · GW(p)

I love that on LW, feeding the trolls consists of writing well-argued and well-supported rebuttals.

comment by kpreid · 2010-07-07T02:13:19.313Z · score: 4 (4 votes) · LW(p) · GW(p)

This is not a distortion of the original meaning. “Feeding the trolls” is just giving them replies of any sort — especially if they're well-written, because you’re probably investing more effort than the troll.

comment by Cyan · 2010-07-07T02:52:28.901Z · score: 0 (4 votes) · LW(p) · GW(p)

I didn't intend to imply otherwise.

comment by JoshuaZ · 2010-07-07T02:08:58.802Z · score: 2 (2 votes) · LW(p) · GW(p)

I don't think this is unique to LW at all. I've seen well-argued rebuttals to trolls labeled as feeding in many different contexts including Slashdot and the OOTS forum.

comment by Vladimir_Nesov · 2010-07-07T07:54:26.299Z · score: 1 (1 votes) · LW(p) · GW(p)

We must aspire to a greater standard, with troll-feeding replies being troll-aware of their own troll-awareness.

comment by Cyan · 2010-07-07T02:55:36.164Z · score: 0 (2 votes) · LW(p) · GW(p)

I didn't mean to imply that it was unique to LW.

comment by apophenia · 2010-07-05T23:41:48.114Z · score: 3 (3 votes) · LW(p) · GW(p)

I have begun a design for a general computer tool to calculate utilities. To give a concrete example, you give it a sentence like

I would prefer X1 amount of money in Y1 months, to X2 in Y2 months.

Then give it reasonable bounds for X and Y, simple additional information (e.g. you always prefer more money to less), and let it interview some people. It'll plot a utility function for each person, and you can check the fit of various models (e.g. exponential discounting, no discounting, hyperbolic discounting).

My original goals were to

  • Empirically check the hyperbolic discounting claim.
  • Determine the best-priced value meal at Arby's.

However, I lost interest without further motivation. Given that this is of presumed interest to Less Wrong, I propose the following: if someone offers to sponsor me (give money to me on completion of the computer program), I'll work on the project. Or, if enough people bug me, I'll probably do it for no money. I would prefer only one of these two methods, to see which works better. Anybody who wants to bug me / pay me money, please respond in a comment.
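
As a sketch of what the model-checking step might look like, here's a comparison of exponential against hyperbolic discounting on made-up indifference data (all numbers below are invented for illustration, and are constructed to be exactly hyperbolic):

```python
import numpy as np

# Hypothetical interview results: the subject is indifferent between $100
# now and amounts[i] dollars after delays[i] months (invented numbers).
delays = np.array([1, 3, 6, 12, 24])
amounts = np.array([105.0, 115.0, 130.0, 160.0, 220.0])
observed = 100.0 / amounts  # implied discount factor at each delay

def exponential(k, t):
    return np.exp(-k * t)

def hyperbolic(k, t):
    return 1.0 / (1.0 + k * t)

def best_fit_sse(model):
    # Crude grid search over the discount rate k.
    ks = np.linspace(1e-4, 1.0, 10000)
    return min(np.sum((model(k, delays) - observed) ** 2) for k in ks)

print('exponential SSE:', best_fit_sse(exponential))
print('hyperbolic  SSE:', best_fit_sse(hyperbolic))
```

On real interview data one would use proper model-selection machinery rather than raw squared error, but even this is enough to distinguish the two curve shapes.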

comment by Kazuo_Thow · 2010-07-07T05:01:19.104Z · score: 0 (0 votes) · LW(p) · GW(p)

I, for one, would be very interested in seeing a top-level post about this.

comment by Alexandros · 2010-07-04T12:37:51.442Z · score: 3 (3 votes) · LW(p) · GW(p)

I know Argumentum ad populum does not work, and I know Arguments from authority do not work, but perhaps they can be combined into something more potent:

Can anyone recall a hypothesis that had been supported by a significant subset of the lay population, consistently rejected by the scientific elites, and turned out to be correct?

It seems belief in creationism has this structure. the lower you go in education level, the more common the belief. I wonder whether this alone can be used as evidence against this 'theory' and others like it.

comment by NancyLebovitz · 2010-07-04T12:57:01.771Z · score: 1 (1 votes) · LW(p) · GW(p)

That there's a hereditary component to schizophrenia.

comment by cupholder · 2010-07-04T14:34:07.790Z · score: 1 (1 votes) · LW(p) · GW(p)

(Link died; it pointed to a search for Kallmann + schizophrenia.)

comment by NancyLebovitz · 2010-07-04T14:47:23.443Z · score: 2 (2 votes) · LW(p) · GW(p)

My impression was that the idea that schizophrenia runs in families was dismissed as an old wives' tale, but a fast google search isn't turning up anything along those lines, though it does seem that some Freudians believed schizophrenia was a mental rather than physical disorder.

comment by cupholder · 2010-07-04T21:43:09.894Z · score: 3 (3 votes) · LW(p) · GW(p)

My understanding is that historically, schizophrenia has been presumed to have a partly genetic cause since around 1910, out of which grew an intermittent research program of family and twin studies to probe schizophrenia genetics. An opposing camp that emphasized environmental effects emerged in the wake of the Nazi eugenics program and the realization that complex psychological traits needn't follow trivial Mendelian patterns of inheritance. Both research traditions continue to the present day.

Edit to add - Franz Josef Kallmann, whose bibliography in schizophrenia genetics I somewhat glibly linked to in the grandparent comment, is one of the scientists who was most firmly in the genetic camp. His work (so far as I know) dominated the study of schizophrenia's causes between the World Wars, and for some time afterwards.

comment by NancyLebovitz · 2010-07-04T23:16:22.277Z · score: 2 (2 votes) · LW(p) · GW(p)

Thanks. You clearly know more about this than I do. I just had a vague impression.

comment by Douglas_Knight · 2010-07-04T20:02:57.821Z · score: 1 (1 votes) · LW(p) · GW(p)

seem that some Freudians believed schizophrenia was a mental rather than physical disorder

The last point in the abstract at cupholder's link seems strikingly defensive to me:

8. The genetic theory of schizophrenia does not invalidate any psychological theories of a descriptive or analytical nature. It is equally compatible with the psychiatric concept that schizophrenia can be prevented as well as cured.

comment by wedrifid · 2010-07-04T18:19:31.903Z · score: 1 (1 votes) · LW(p) · GW(p)

Now I'm trying to work out what weird sexual thing involving one's mother could possibly be construed to cause schizophrenia.

comment by wedrifid · 2010-07-04T13:27:22.400Z · score: 1 (1 votes) · LW(p) · GW(p)

Wow. Scientific elites were that silly? How on earth could they expect there not to be a hereditary component? Even exposure to the environmental factors that contribute is going to be affected by the genetic influence on personality. Stress in particular springs to mind.

comment by gwern · 2010-07-04T18:06:20.738Z · score: 1 (1 votes) · LW(p) · GW(p)

Elites in general (scientific or otherwise) seem to have a significant built-in bias against genetic explanations (which is usually what is meant by hereditary).

I've seen a lot of speculation as to why this is so, ranging from it being a noble lie justified by supporting democracy or the status quo, to justifying meritocratic systems (despite their aristocratic results), to supporting bigger government (if society's woes are due to environmental factors, then empower the government to forcibly change the environment and create the new Soviet Man!), to simply long-standing instinctive revulsion and disgust stemming from historical discrimination employing genetic rhetoric (eugenics, Nazis, slavery, etc.) and so on.

Possibly this bias is over-determined by multiple factors.

comment by wedrifid · 2010-07-04T10:03:00.424Z · score: 3 (3 votes) · LW(p) · GW(p)

We have recently had a discussion on whether the raw drive for status seeking benefits society. This link seems all too appropriate (or, well, at least apt.)

comment by NancyLebovitz · 2010-07-04T00:06:12.136Z · score: 3 (3 votes) · LW(p) · GW(p)

The comments on the Methods of Rationality thread are heading towards 500. Might this be time for a new thread?

comment by RobinZ · 2010-07-04T00:30:13.046Z · score: 2 (2 votes) · LW(p) · GW(p)

That sounds like a reasonable criterion.

comment by utilitymonster · 2010-07-03T14:13:39.516Z · score: 3 (3 votes) · LW(p) · GW(p)

Is there a principled reason to worry about being in a simulation but not worry about being a Boltzmann brain?

Here are very similar arguments:

  • If posthumans run ancestor simulations, most of the people in the actual world with your subjective experiences will be sims.

  • If two beings exist in one world and have the same subjective experiences, your probability that you are one should equal your probability that you are the other.

  • Therefore, if posthumans run ancestor simulations, you are probably a sim.

vs.

  • If our current model of cosmology is correct, most of the beings in the history of the universe with your subjective experiences will be Boltzmann brains.

  • If two beings exist in one world and have the same subjective experiences, your probability that you are one should equal your probability that you are the other.

  • Therefore, if our current model of cosmology is correct, you are probably a Boltzmann brain.

Expanding your evidence from your present experiences to all the experiences you've had doesn't help. There will still be lots more Boltzmann brains that last for as long as you've had experiences, having experiences just like yours. Most plausible ways of expanding your evidence have similar effects.

I suppose you could try arguing that the Boltzmann brain scenario, but not simulation scenario, is self-defeating. In the Boltzmann scenario, your reasons for accepting the theory (results of various experiments, etc) are no good, since none of it really happened. In the simulation scenario, you really did see those results, all the results were just realized in a funny sort of way that you didn't expect. It would be nice if the relevance of this argument were better spelled out and cashed out in a plausible Bayesian principle.

edited for format

comment by Nisan · 2010-07-03T14:42:41.545Z · score: 3 (3 votes) · LW(p) · GW(p)

Is there really a cosmology that says that most beings with my subjective experiences are Boltzmann brains? It seems to me that in a finite universe, most beings will not be Boltzmann brains. And in an infinite universe, it's not clear what "most" means.

comment by utilitymonster · 2010-07-03T16:12:52.761Z · score: 4 (4 votes) · LW(p) · GW(p)

I gathered this from a talk by Sean Carroll that I attended, and it was supposed to be a consequence of the standard picture. All the Boltzmann brains come up in the way distant future, after thermal equilibrium, as random fluctuations. Carroll regarded this as a defect of the normal approach, and used this as a launching point to speculate about a different model.

I wish I had a more precise reference, but this isn't my area and I only heard this one talk. But I think this issue is discussed in his book From Eternity to Here. Here's a blogpost that, I believe, faithfully summarizes the relevant part of the talk. The normal solution to Boltzmann brains is to add a past hypothesis. Here is the key part where the post discusses the benefits and shortcomings of this approach:

Solution: Albert adds a Past Hypothesis (PAST), which says roughly that the universe started in very low entropy state (much lower than this one). So the objective probability that this is the lowest entropy state of the universe is 0—meaning we can’t be Boltzmann brains. As a bonus, we get an explanation of the direction of time, why ice cubes melt, why we can cause things to happen in the future and not the past, and how we have records of the past and not the future: all these things get a very high objective probability.

But (Sean Carroll argues) this moves too fast: just adding the past hypothesis allows the universe to eventually reach thermal equilibrium. Once that happens (in about 10100 years) there will be an extremely long period (~10^10120 years) during which random fluctuations bring about all sorts of things, including our old enemies, Boltzmann brains. And there will be a lot of them. And some of them will have the same experiences we do.

The years there are missing some carats. Should be 10^100 and 10^10^120.

comment by Nisan · 2010-07-03T20:55:25.001Z · score: 2 (2 votes) · LW(p) · GW(p)

Oh I see. I... I'd forgotten about the future.

comment by utilitymonster · 2010-07-04T14:35:31.598Z · score: 0 (0 votes) · LW(p) · GW(p)

Link to talk.

comment by utilitymonster · 2010-07-03T16:20:44.340Z · score: 2 (2 votes) · LW(p) · GW(p)

This is always hard with infinities. But I think it can be a mistake to worry about this too much.

A rough way of making the point would be this. Pick a freaking huge number of years, like 3^^^3. Look at our universe after it has been around for that many years. You can be pretty damn sure that most of the beings with evidence like yours are Boltzmann brains on the model in question.

comment by murat · 2010-07-02T08:59:04.519Z · score: 3 (3 votes) · LW(p) · GW(p)

I have a few questions.

1) What's "Bayescraft"? I don't recall seeing this word elsewhere. I haven't seen a definition on LW wiki either.

2) Why do some people capitalize some words here? Like "Traditional Rationality" and whatnot.

comment by Morendil · 2010-07-02T09:21:01.885Z · score: 4 (4 votes) · LW(p) · GW(p)

To me "Bayescraft" has the connotation of a particular mental attitude, one inspired by Eliezer Yudkowsky's fusion of the ev-psych, heuristics-and-biases literature with E.T. Jaynes' idiosyncratic take on "Bayesian probabilistic inference", and in particular the desiderata for an inference robot: take all relevant evidence into account, rather than filter evidence according to your ideological biases, and allow your judgement of a proposition's plausibility to move freely in the [0..1] range rather than seek all-or-nothing certainty in your belief.
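The last desideratum can be illustrated with a two-line Bayes update: repeated evidence moves a plausibility smoothly within (0, 1) rather than snapping to all-or-nothing certainty. The prior and likelihoods below are invented numbers, not anything from Jaynes:

```python
# Bayes' theorem as the inference robot would apply it: each piece of
# evidence shifts the plausibility, but never to exactly 0 or 1.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from a prior and the two likelihoods of E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

belief = 0.5
for _ in range(3):  # three pieces of moderately favorable evidence
    belief = bayes_update(belief, 0.8, 0.4)
print(belief)  # rises toward, but never reaches, 1
```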

comment by Nisan · 2010-07-02T12:43:44.358Z · score: 3 (3 votes) · LW(p) · GW(p)

Capitalized words are often technical terms. So "Traditional Rationality" refers to certain epistemic attitudes and methods which have, in the past, been called "rational" (a word which is several hundred years old). This frees up the lower-case word "rationality", which on this site is also a technical term.

comment by Oscar_Cunningham · 2010-07-02T09:10:46.783Z · score: 1 (3 votes) · LW(p) · GW(p)

Bayescraft is just a synonym for Rationality, with connotations of a) Bayes' theorem, since that's what epistemic rationality must be based on, and b) the notion that rationality is a skill which must be developed personally and as a group (see also: Martial Art of Rationality (oh look, more capitals!))

The capitals are just for emphasis of concepts that the writer thinks are fundamentally important.

comment by cousin_it · 2010-07-02T07:41:52.703Z · score: 3 (3 votes) · LW(p) · GW(p)

A small koan on utility functions that "refer to the real world".

  1. Question to Clippy: would you agree to move into a simulation where you'd have all the paperclips you want?

  2. Question to humans: would you agree to all of humankind moving into a simulation where we would fulfill our CEV (at least, all terms of it that don't mention "not living in a simulation")?

In both cases assume you have mathematical proof that the simulation is indestructible and perfectly tamper-resistant.

comment by Kingreaper · 2010-07-02T14:25:45.851Z · score: 5 (5 votes) · LW(p) · GW(p)

Would the simulation allow us to exit, in order to perform further research on the nature of the external world?

If so, I would enter it. If not? Probably not. I do not want to live in a world where there are ultimate answers and you can go no further.

The fact that I may already live in one is just bloody irritating :p

comment by Roko · 2010-07-02T19:37:10.777Z · score: 3 (3 votes) · LW(p) · GW(p)

But all of the mathematics and philosophy would still need to be done, and I suspect that that's where the exciting stuff is anyway.

comment by cousin_it · 2010-07-02T14:45:51.548Z · score: 1 (1 votes) · LW(p) · GW(p)

Good point. You have just changed my answer from yes to no.

comment by Alicorn · 2010-07-02T19:40:55.860Z · score: 3 (3 votes) · LW(p) · GW(p)

If we move into the same simulation and can really interact with others, then I wouldn't mind the move at all. Apart from that, experiences are the important bit and simulations can have those.

comment by Clippy · 2010-07-06T17:13:57.183Z · score: 1 (1 votes) · LW(p) · GW(p)

I might do that just sort of temporarily because it would be fun, similar to how apes like to watch other apes in ape situations even when it doesn't relate to their own lives.

But I would have to limit this kind of thing because, although pleasurable, it doesn't support my real values. I value real paperclips, not simulated paperclips, fun though they might be to watch.

comment by wedrifid · 2010-07-08T06:20:23.744Z · score: 1 (1 votes) · LW(p) · GW(p)

Clippy is funnier when he plays the part of a paperclip maximiser, not a human with a paperclip fetish.

comment by Clippy · 2010-07-08T14:00:20.081Z · score: 0 (4 votes) · LW(p) · GW(p)

User:wedrifid is funnier when he plays the part of a paperclip maximiser, not an ape with a pretense of enlightenment.

comment by Kevin · 2010-07-08T06:14:59.711Z · score: 0 (0 votes) · LW(p) · GW(p)

What is real?

comment by Clippy · 2010-07-08T14:01:04.714Z · score: 0 (0 votes) · LW(p) · GW(p)

Stuff that's not in a simulation?

comment by ewbrownv · 2010-07-02T21:13:27.802Z · score: 1 (1 votes) · LW(p) · GW(p)

Your footnote assumes away most of the real reasons for objecting to such a scenario (i.e. there is no remotely plausible world in which you could be confident that the simulation is either indestructible or tamper-proof, so entering it means giving up any attempt at personal autonomy for the rest of your existence).

comment by red75 · 2010-07-02T21:24:45.089Z · score: 0 (0 votes) · LW(p) · GW(p)

A computronium maximizer will ensure that there will be no one left to tamper with the simulation; indestructibility is maximized in this scenario too.

comment by magfrump · 2010-07-02T13:57:36.564Z · score: 1 (1 votes) · LW(p) · GW(p)

Part 2 seems similar to the claim (which I have made in the past but not on LessWrong) that the Matrix was actually a friendly move on the part of that world's AI.

comment by billswift · 2010-07-02T19:13:27.061Z · score: 4 (4 votes) · LW(p) · GW(p)

And the AI kills the thousands of people in Zion every hundred years or so when they get aggressive enough to start destabilizing the Matrix, thereby threatening billions. But the AI needs to keep some outside the Matrix as a control and insurance against problems inside the Matrix. And the AI spreads the idea that the Matrix "victims" are slaves and provide energy to the AI to keep the outsiders outside (even though the energy source claims are obviously ridiculous - the people in Zion are profoundly ignorant and bordering on outright stupid). Makes more sense than the silliness of the movies anyway.

comment by magfrump · 2010-07-02T21:31:05.321Z · score: 1 (1 votes) · LW(p) · GW(p)

This hypothesis also explains the oracle in a fairly clean way.

comment by Bongo · 2010-07-04T20:29:18.215Z · score: 3 (3 votes) · LW(p) · GW(p)

Agent Smith did say that the first matrix was a paradise but people wouldn't have it, but is simulating the world of 1999 really the friendliest option?

comment by magfrump · 2010-07-05T17:43:15.165Z · score: 1 (1 votes) · LW(p) · GW(p)

We only ever see America simulated. Even there we never see crime or oppression or poverty (homeless people could even be bots).

If you don't simulate poverty and dictatorships then 1999 could be reasonably friendly. The economy is doing okay and the Internet exists and there is some sense that technology is expanding to meet the world's needs but not spiraling out of control.

But I'm just making most of this up to show that an argument exists; it seems pretty clear that it was written to be in the present day to keep it in the genre of post-apocalyptic lit, in which case using the present adds to the sense of "the world is going downhill."

comment by ShardPhoenix · 2010-07-02T12:44:59.412Z · score: 1 (1 votes) · LW(p) · GW(p)

The given assumption seems unlikely to me, but in that case I think I'd go for it.

comment by red75 · 2010-07-02T09:38:10.425Z · score: 1 (1 votes) · LW(p) · GW(p)

Is it assumed that no new information will be entered into simulation after launch?

comment by Blueberry · 2010-07-02T07:47:48.795Z · score: 1 (1 votes) · LW(p) · GW(p)

And does it change your answers if you learn that we are living in a simulation now? Or if you learn that Tegmark's theory is correct?

comment by JGWeissman · 2010-07-08T07:05:24.492Z · score: 0 (0 votes) · LW(p) · GW(p)

Yes, assuming further that the simulation will expand optimally to use all available resources for its computation, and that any persons it encounters will be taken into the simulation.

comment by Tom_Talbot · 2010-07-02T13:05:38.099Z · score: 0 (2 votes) · LW(p) · GW(p)

Does Clippy maximise number-of-paperclips-in-universe (given all available information) or some proxy variable like number-of-paperclips-counted-so-far? If the former, Clippy does not want to move to a simulation. If the latter, Clippy does want to move to a simulation.

The same analysis applies to humankind.

comment by Clippy · 2010-07-06T17:17:14.596Z · score: 2 (2 votes) · LW(p) · GW(p)

I maximize the number of paperclips in the universe (that exist an arbitrarily long time from now). I use "number of paperclips counted so far" as a measure of progress, but it is always screened off by more direct measures, or expected quantities, of paperclips in the universe.

comment by Sniffnoy · 2010-07-02T22:21:36.540Z · score: 2 (2 votes) · LW(p) · GW(p)

I'm not certain that's so, as ISTM many of the things humanity wants to maximize are to a large extent representation-invariant - in particular because they refer to other people - and could be done just as well in a simulation. The obvious exception being actual knowledge of the outside world.

comment by Nisan · 2010-07-02T12:46:52.773Z · score: 0 (0 votes) · LW(p) · GW(p)

My answer is yes, and your point is well-taken: We have to be careful about what we mean by "the real world".

comment by lsparrish · 2010-07-02T00:20:22.997Z · score: 3 (3 votes) · LW(p) · GW(p)

Paul Graham has written extensively on Startups and what is required. A highly focused team of 2-4 founders, who must be willing to admit when their business model or product is flawed, yet enthused enough about it to pour their energy into it.

Steve Blank has also written about the Customer Development process, which he sees as paralleling the Product Development cycle. The idea is to get empirical feedback by trying to sell your product from the get-go, as soon as you have something minimal but useful. Then you test it for scalability. Eventually you have strong empirical evidence to present to potential investors, aka "traction".

These strike me as good examples of applied rationality. I wonder what percentage of Less Wrong readers would succeed as startup founders?

comment by RichardKennaway · 2010-07-02T07:20:19.487Z · score: 3 (3 votes) · LW(p) · GW(p)

These strike me as good examples of applied rationality. I wonder what percentage of Less Wrong readers would succeed as startup founders?

I wonder what percentage have ever tried?

comment by pjeby · 2010-07-02T14:39:02.867Z · score: 2 (2 votes) · LW(p) · GW(p)

I wonder what percentage have ever tried?

That at least partly depends on what you define as a "startup". Graham's idea of one seems to be oriented towards "business that will expand and either be bought out by a major company or become one", vs. "enterprise that builds personal wealth for the founder(s)".

By Graham's criteria, Joel Spolsky's company, Fog Creek, would not have been considered a startup, for example, nor would any business I've ever personally run or been a shareholder of.

[Edit: I should say, "or been a 10%+ shareholder of"; after all, I've held shares in public companies, some of which were undoubtedly startups!]

comment by RichardKennaway · 2010-07-03T08:57:18.480Z · score: 0 (0 votes) · LW(p) · GW(p)

That at least partly depends on what you define as a "startup".

At the most general, creating your own business (excluding the sort of "contract" status in which the only difference with an employee is in the accounting details) and making a good living from it.

At the most narrow, starting up a business that, as Guy Kawasaki puts it, solves the money problem for the rest of your life.

Maybe a survey would be interesting, either as a thread here or somewhere like SurveyMonkey that would allow anonymous responses. "1. Are you an employee/own your own business/living on a pile of money of your own/a dependent/other? 2. Which of those states would you prefer to be in? 3. If the answers to 1 and 2 are different, are you doing anything about it?"

I can't get back to this until this evening (it is locally 10am as I write). Suggestions welcome.

comment by Morendil · 2010-07-03T09:35:31.824Z · score: 0 (0 votes) · LW(p) · GW(p)

You need at least one more item in there - "retired", i.e. with passive income that exceeds one's costs of living. Different from "living on a pile of money", insofar as there might still be things you can't afford.

comment by realitygrill · 2010-07-03T05:22:40.593Z · score: 0 (0 votes) · LW(p) · GW(p)

I wonder what percentage are even inclined to try?

comment by wedrifid · 2010-07-02T03:02:31.906Z · score: 2 (2 votes) · LW(p) · GW(p)

These strike me as good examples of applied rationality. I wonder what percentage of Less Wrong readers would succeed as startup founders?

I would not deviate too much from the prior (most would fail).

comment by lsparrish · 2010-07-02T23:07:43.466Z · score: 0 (0 votes) · LW(p) · GW(p)

Are you saying that LW readers suck at applied rationality, or are you disagreeing with the idea that applied rationality can help prevent startup failure?

comment by wedrifid · 2010-07-03T04:11:34.519Z · score: 0 (0 votes) · LW(p) · GW(p)

I would say that preventing startup failure requires a whole group of factors, not least of which is good fortune. It is hard for me to judge whether LW readers are more likely to get it all right than other people who self-select to found startups. I note, for example, that people starting a second startup do not tend to be all that much more likely to succeed than on their first attempt!

comment by lsparrish · 2010-07-03T18:27:19.938Z · score: 0 (0 votes) · LW(p) · GW(p)

Suppose we were to test it empirically and 9/10 startups fail on their first attempt. Then test again and 9/10 still fail on second attempt. That is not enough information to determine that a given person would fail 10 times in a row, because it could be that there is some number of failures <10 where you finally acquire enough skill to avoid failure on a more routine basis.

Given the fact that there's a whole world of information, strategies, and skills specific to founding startups, I would be surprised if an average member of a given group of startup founders fails x times out of y when x/y first attempts also fail.

So it would be relevant (especially if you are, say an angel investor) how low the percentage of failures can be brought to with multiple attempts by a given individual, and whether a given kind of education (such as reading Less Wrong sequences, or a quality such as self-selecting to read them) would predispose you to reducing that number of failures more rapidly and/or further in the long run.
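The quantity an angel investor would care about here is the chance of at least one success across repeated attempts, under different assumptions about how fast the per-attempt failure rate drops with experience. A quick sketch (the failure rates are illustrative stand-ins, not empirical figures):

```python
# Probability of at least one success over successive attempts, given a
# per-attempt failure rate for each attempt. If education or experience
# lowers the rate with each try, the compounding is faster.

def p_success_within(attempts, fail_rates):
    """P(at least one success) over the first `attempts` tries."""
    p_all_fail = 1.0
    for i in range(attempts):
        p_all_fail *= fail_rates[i]
    return 1.0 - p_all_fail

constant = [0.9] * 5                     # no learning between attempts
learning = [0.9, 0.85, 0.8, 0.75, 0.7]   # failure rate drops with experience

print(p_success_within(5, constant))  # ~0.41
print(p_success_within(5, learning))  # ~0.68
```

Even a modest per-attempt improvement moves the five-attempt success probability substantially, which is the sense in which the 9/10 base rate understates a founder who keeps learning.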

comment by wedrifid · 2010-07-04T02:28:16.800Z · score: -1 (1 votes) · LW(p) · GW(p)

Suppose we were to test it empirically and 9/10 startups fail on their first attempt. Then test again and 9/10 still fail on second attempt. That is not enough information to determine that a given person would fail 10 times in a row, because it could be that there is some number of failures <10 where you finally acquire enough skill to avoid failure on a more routine basis.

There is also a number of failures <10 where earning money in a career and then investing it in shares gives a higher expected return than repeated gambling on startups.

comment by lsparrish · 2010-07-04T16:04:29.036Z · score: 0 (0 votes) · LW(p) · GW(p)

Here is the relevant quote from Paul Graham's Why Hiring is Obsolete:

Risk and reward are always proportionate. For example, stocks are riskier than bonds, and over time always have greater returns. So why does anyone invest in bonds? The catch is that phrase "over time." Stocks will generate greater returns over thirty years, but they might lose value from year to year. So what you should invest in depends on how soon you need the money. If you're young, you should take the riskiest investments you can find.

All this talk about investing may seem very theoretical. Most undergrads probably have more debts than assets. They may feel they have nothing to invest. But that's not true: they have their time to invest, and the same rule about risk applies there. Your early twenties are exactly the time to take insane career risks.

The reason risk is always proportionate to reward is that market forces make it so. People will pay extra for stability. So if you choose stability-- by buying bonds, or by going to work for a big company-- it's going to cost you.

Riskier career moves pay better on average, because there is less demand for them. Extreme choices like starting a startup are so frightening that most people won't even try. So you don't end up having as much competition as you might expect, considering the prizes at stake.

The math is brutal. While perhaps 9 out of 10 startups fail, the one that succeeds will pay the founders more than 10 times what they would have made in an ordinary job. [3] That's the sense in which startups pay better "on average."

Remember that. If you start a startup, you'll probably fail. Most startups fail. It's the nature of the business. But it's not necessarily a mistake to try something that has a 90% chance of failing, if you can afford the risk. Failing at 40, when you have a family to support, could be serious. But if you fail at 22, so what? If you try to start a startup right out of college and it tanks, you'll end up at 23 broke and a lot smarter. Which, if you think about it, is roughly what you hope to get from a graduate program.

He also goes on to say how managers at forward-thinking companies that he talked to such as Yahoo, Amazon, Google, etc. would prefer to hire a failed startup genius over someone who worked a steady job for the same period of time. Essentially, if you don't need financial stability in the near future, your time spent working diligently and passionately on your own ideas trying to make them fit the marketplace is more valuable than time spent on a steady payroll.
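The arithmetic behind "pays better on average" is a one-line expected-value comparison. The numbers below just restate Graham's illustrative figures (a 1-in-10 success rate, a payoff more than 10x an ordinary job), normalized to a salary of 1; they are not empirical data:

```python
# Expected-value comparison of a steady job vs. a startup attempt,
# using Graham's own rough figures from the quoted passage.

salary = 1.0           # ordinary job payoff, normalized
p_success = 0.1        # "9 out of 10 startups fail"
startup_payoff = 12.0  # "more than 10 times" an ordinary job

ev_job = salary
ev_startup = p_success * startup_payoff  # failed attempts pay ~0

print(ev_job, ev_startup)  # ev_startup ~1.2 beats ev_job 1.0 on average
```

Of course the expected value hides the variance, which is exactly Graham's point about who can afford the risk: a 22-year-old with no dependents can absorb the 90% downside; a 40-year-old supporting a family often cannot.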

comment by gwern · 2010-07-04T17:57:11.128Z · score: 3 (3 votes) · LW(p) · GW(p)

"For example, stocks are riskier than bonds, and over time always have greater returns."

In a LW vein, it's worth noting that selection and survivorship biases (as well as more general anthropic biases) means that the very existence of the equity risk premium is unclear even assuming that it ever existed.

(I note this because most people seem to take the premium for granted, but for long-term LW purposes, assuming the premium is dangerous. Cryonics' financial support is easier given the premium, for example, but if there is no premium and cryonics organizations invest as if there was and try to exploit it, that in itself becomes a not insignificant threat.)

comment by Douglas_Knight · 2010-07-04T20:18:12.327Z · score: -1 (3 votes) · LW(p) · GW(p)

The survivorship bias described by wikipedia is complete nonsense. Events that wipe out stock markets also wipe out bond markets and often wipe out banks. Usually when people talk about survivorship bias in this context, they mean that the people compiling the data are complete incompetents who only look at currently existing stocks.

If your interest is in the absolute return and not in the premium, then survivorship is a bias.

ETA: I think I was too harsh on the people that look at the wrong stocks. But too soft on wikipedia.

comment by PeerInfinity · 2010-07-29T04:58:47.626Z · score: 2 (2 votes) · LW(p) · GW(p)

an interesting site I stumbled across recently: http://youarenotsosmart.com/

They talk about some of the same biases we talk about here.

comment by Cyan · 2010-07-29T15:47:26.142Z · score: 0 (0 votes) · LW(p) · GW(p)

In fact, the post of July 14 on the illusion of transparency quotes EY's post on the same subject.

comment by RobinZ · 2010-07-10T04:27:58.712Z · score: 2 (2 votes) · LW(p) · GW(p)

Well, given that I can now be confident my words won't encourage you*, I will feel free to mention that I found the attitudes of many of those replying to you troubling. There seemed to be an awful lot of verbiage ascribing detailed motivations to you based on (so far as I could tell) little more than (a) your disagreement and (b) your tone, and these descriptions, I feel, were accepted with greater confidence than would be warranted given their prior complexity and their current bases of evidential support.

None of the above is to withdraw my remarks toward you (which, like this one, were largely intended for the lurkertariat in any case).

* This comment is approximately 75% sarcastic.

comment by JoshuaZ · 2010-07-10T04:31:15.040Z · score: 3 (3 votes) · LW(p) · GW(p)

Well, given that I can now be confident my words won't encourage you*, I will feel free to mention that I found the attitudes of many of those replying to you troubling. There seemed to be an awful lot of verbiage ascribing detailed motivations to you based on (so far as I could tell) little more than (a) your disagreement and (b) your tone, and these descriptions, I feel, were accepted with greater confidence than would be warranted given their prior complexity and their current bases of evidential support.

I'm slightly worried that some of my remarks to Sam fell in that category. Rereading them, I don't see that, but there may be substantial cognitive biases preventing me from seeing this issue in my own remarks. Did any of my comments fall into that category under your estimate? If so, which ones?

comment by RobinZ · 2010-07-10T04:47:08.979Z · score: 1 (1 votes) · LW(p) · GW(p)

Your comments were reasonably restrained.

Edit: To a certain extent I am gunshy about ascribing motivations at all - it may be my casual reading left me with an invalid impression of the extent to which this was done.

comment by Will_Newsome · 2010-07-09T01:52:40.265Z · score: 2 (4 votes) · LW(p) · GW(p)

So, probably like most everyone else here, I sometimes get complaints (mostly from my ex-girlfriend, you can always count on them to point out your flaws) that I'm too logical and rational and emotionless and I can't connect with people or understand them et cetera. Now, it's not like I'm actually particularly bad at these things for being as nerdy as I am, and my ex is a rather biased source of information, but it's true that I have a hard time coming across as... I suppose the adjective would be 'warm', or 'human'. I've attributed a lot of this to a) my always-seeking-outside-confirmation-of-competence-style narcissism, b) my overly precise (for most people, not here) speech patterns (for instance, when my ex said I suck at understanding people, I asked "Why do you believe that?" instead of the simpler and less clinical-psychologist-sounding "How so?" or "How?" or whatnot), and c) accidentally bringing up terms like 'a priori', which apparently most people haven't heard. I think there's more low-hanging fruit here, though. Tsuyoku naritai!

Has anyone else tackled these problems? It's not that I lack charisma - I've managed to pull off that insane/passionate/brilliant thing among my friends - but I do seem to lack the ability to really connect with people - even people I really care about. Do Less Wrongers experience similar problems? Any advice? Or meta-advice about how to learn hard-to-describe dispositions? I've noticed that consciously acting like I was Regina Spektor in one situation or Richard Feynman in another seems to help, for instance.

comment by wedrifid · 2010-07-09T02:29:22.316Z · score: 6 (6 votes) · LW(p) · GW(p)

I suggest a lot of practice talking to non-nerds or nerds who aren't in their nerd mode. (And less time with your ex!)

A perfect form of practice is dance. Take swing dancing lessons, for example. That removes the possibility of using your overwhelming verbal fluency and persona of intellectual brilliance. It makes it far easier to activate that part that is sometimes called 'human' but perhaps more accurately called 'animal'. Once you master maintaining the social connection in a purely non-verbal setting adding in a verbal component yet maintaining the flow should be far simpler.

comment by Will_Newsome · 2010-07-09T02:34:53.418Z · score: 2 (2 votes) · LW(p) · GW(p)

I suggest a lot of practice talking to non-nerds or nerds who aren't in their nerd mode.

Non-nerdy people who are interesting are surprisingly difficult to find, and I have a hard time connecting with the ones I do find such that I don't get much practice in. I'm guessing that the biggest demographic here would be artists (musicians). Being passionate about something abstract seems to be the common denominator.

(And less time with your ex!)

Ha, perhaps a good idea, but I enjoy the criticism. She points out flaws that I might have missed otherwise. I wonder if one could market themselves as a professional personality flaw detector or the like. I'd pay to see one.

Once you master maintaining the social connection in a purely non-verbal setting adding in a verbal component yet maintaining the flow should be far simpler.

Interesting, I had discounted dancing because of its nonverbality. Thanks for alerting me to my mistake!

comment by wedrifid · 2010-07-09T03:45:56.500Z · score: 4 (4 votes) · LW(p) · GW(p)

Interesting, I had discounted dancing because of its nonverbality. Thanks for alerting me to my mistake!

I was using very similar reasoning when I suggested "non-nerds or nerds not presently in nerd mode". The key is to hide the abstract discussion crutch!

Ha, perhaps a good idea, but I enjoy the criticism. She points out flaws that I might have missed otherwise. I wonder if one could market themselves as a professional personality flaw detector or the like. I'd pay to see one.

Friends who are sincerely willing to suggest improvements (Tsuyoku naritai!) are valuable resources! If your ex is able to point out a flaw, then perhaps you could ask her to lead you through an example of how to have a 'warm, human' interaction, showing you the difference between that and what you usually do? Mind you, it is still almost certainly better to listen to criticism from someone who has a vested interest in your improvement, rather than merely in your acknowledgement of flaws. Like, say, a current girlfriend. ;)

comment by Kevin · 2010-07-09T06:45:10.605Z · score: 1 (1 votes) · LW(p) · GW(p)

Interesting, I had discounted dancing because of its nonverbality.

In my last semester at college, I figured I should take fun classes while I could, so I took two one credit drumming classes. In African Drumming Ensemble, we spent 90% of the time doing complex group dances and not drumming, because the drumming was so much easier to learn than the dancing.

Being tricked into taking a dance class was broadly good for my social skills, not least my confidence on a dance floor.

comment by WrongBot · 2010-07-09T02:08:32.156Z · score: 6 (6 votes) · LW(p) · GW(p)

"Fake it until you make it" is surprisingly good advice for this sort of thing. I had moderate self-esteem issues in my freshman year of college, so I consciously decided to pretend that I had very high self-esteem in every interaction I had outside of class. This may be one of those tricks that doesn't work for most people, but I found that using a song lyric (from a song I liked) as a mantra to recall my desired state of mind was incredibly helpful, and got into the habit of listening to that particular song before heading out to meet friends. (The National's "All The Wine" in this particular case. "I am a festival" was the mantra I used.)

That's in the same class of thing as acting like Regina Spektor or Feynman; if you act in a certain way consistently enough, your brain will learn that pattern and it will begin to feel more natural and less conscious. I don't worry about my self-esteem any more (in that direction, at least).

comment by Kevin · 2010-07-09T06:52:09.007Z · score: 4 (4 votes) · LW(p) · GW(p)

b) my overly precise (for most people, not here) speech patterns

The kind of ultra-rational Bayesian linguistic patterns used around here would be considered obnoxiously intellectual and pretentious (and incomprehensible?) by most people. Practice mirroring the speech patterns of the people you are communicating with, and slip into rationalist talk when you need to win an argument about something important.

When I'm talking to street people, I say "man" a lot because it's something of a high honorific. Maybe in California I will need to start saying "dude", though man seems inherently more respectful.

comment by [deleted] · 2010-07-10T16:48:30.647Z · score: 3 (3 votes) · LW(p) · GW(p)

I think most people here have some sort of similar problem. Mine isn't being emotionless (ha!) but not knowing the right thing to say, putting my foot in my mouth, and so on. Occasionally coming across as a pedant, which is so embarrassing.

I may be getting better at it, though. One thing is: if you are a nerd (in the sense of passionate about something abstract) just roll with it. You will get along better with similar people. Your non-nerdy friends will know you're a nerd. I try to be as nice as possible so that when, inevitably, I say something clumsy or reveal that I'm ignorant of something basic, it's not taken too negatively. Nice but clueless is much better than arrogant.

And always wait for a cue from the other person to reveal something about yourself. Don't bring up politics unless he does; don't mention your interests unless he asks you; don't use long words unless he does.

I can't dance for shit, but various kinds of exercise are a good way to meet a broader spectrum of people.

Do I still feel like I'm mostly tolerated rather than liked? Yeah. It can be pretty depressing. But such is life.

As for dating -- the numbers are different from my perspective, of course, but so far I've found I'm not going to click really profoundly with guys who aren't intelligent. I don't mean that in a snobbish way, it's just a self-knowledge thing -- conversation is really fun for me, and I have more fun spending time with quick, talkative types. There's no point forcing yourself to be around people you don't enjoy.

comment by knb · 2010-07-09T09:30:02.187Z · score: 2 (2 votes) · LW(p) · GW(p)

In my experience, something as simple as adding a smile can transform a demeanor otherwise perceived as "cold" or "emotionless" to "laid-back" or "easy-going".

comment by JoshuaZ · 2010-07-09T02:14:44.841Z · score: 2 (2 votes) · LW(p) · GW(p)

Date nerdier people? In general, many nerdy rational individuals have a lot of trouble getting along with less nerdy individuals. There's some danger that I'm other-optimizing, but I have trouble imagining how an educated rational individual would be able to date someone who thought there was something wrong with using terms like "a priori." That's a common enough term, and if someone encounters a term they don't know, they should be happy to learn something. So maybe just date a different sort of person?

comment by Will_Newsome · 2010-07-09T02:27:25.878Z · score: 1 (5 votes) · LW(p) · GW(p)

I wasn't talking mostly about dating, but I suppose that's an important subfield.

The topic you mention came up at the Singularity Institute Visiting Fellows house a few weeks ago. Three or four guys, myself included, expressed a preference for girls who had specialized in some other area of life: gains from trade of specialized knowledge. And I just love explaining to a girl how big the universe is and how gold is formed in supernovas... most people can appreciate that, even if they see no need for the phrase 'a priori'. I don't mean average intelligence, but one standard deviation above the mean; maybe more, as I tend to underestimate people. There was one person who was rather happy with his relationship with a girl who was very like him. However, the common theme was that the people with more dating experience consistently preferred less traditionally intelligent and more emotionally intelligent girls (I'm not using that term technically, by the way), whereas those with less dating experience had weaker preferences for girls who were like themselves. Those with more dating experience also seemed to put much more emphasis on the importance of attractiveness rather than e.g. intelligence or rationality. Not that you have to choose, most of the time. I'm going to be so bold as to claim that most people with little dating experience who believe they would be happiest with a rationalist girlfriend should update on expected evidence and broaden their search criteria for potential mates.

As for preferences of women, I'm sorry, but the sample size was too small for me to see any trends. (To be fair this was a really informal discussion, not an official SIAI survey of course. :) )

Important addendum: I never actually checked to see if any of the guys in the conversation had dated women who were substantially more intelligent than average, and thus they might not have been making a fair comparison (imagining silly arguments about deism versus atheism or something). I myself have never dated a girl that was 3 sigma intelligent, for instance. I'm mostly drawing my comparison from fictional (imagined) evidence.

comment by JoshuaZ · 2010-07-09T02:33:50.886Z · score: 2 (2 votes) · LW(p) · GW(p)

I've dated females who were clearly less intelligent than I am, some about the same, and some clearly more intelligent. I'm pretty sure the last category was the most enjoyable (and I'm pretty sure that rational, intelligent, nerdy females don't want to date guys who aren't as smart as they are either). There may be issues with sample size.

comment by Will_Newsome · 2010-07-09T02:36:56.737Z · score: 0 (0 votes) · LW(p) · GW(p)

Hm, probably. I'm not sure what my priors would be, either. So my distribution's looking pretty flat at the moment, especially after your contrary evidence.

comment by [deleted] · 2010-07-12T02:03:52.045Z · score: 0 (0 votes) · LW(p) · GW(p)

I think that the quality of relationships depends less on the fluid intelligence of the partners, or on anything else they might have in common, and more on their level of emotional maturity (empathy, non-self-absorption, communication skills, generosity), as well as their attachment to and affection for one another.

You may become more attached to, or feel more affection for, someone you believe to be intelligent, but then again you might achieve the same emotional connection through, for example, shared life experiences. Intelligence and common interests may make a mate more entertaining, but in my experience it's really not terribly important for my boyfriend to entertain me; we can always go see a movie or play a game together for entertainment.

I'm arguing, in short, that intelligence is mostly irrelevant to relationship quality.

On a more personal note, I can testify that, however much you might admire intelligence per se, it is a terrible idea to date someone who is nearly but not quite as intelligent as yourself, who is also crushingly insecure.

comment by katydee · 2010-07-10T09:01:32.560Z · score: 1 (1 votes) · LW(p) · GW(p)

I have myself been accused of being an android or replicant on many occasions. The best way that I've found to deal with this is to make jokes and tell humorous anecdotes about the situation, especially ones that poke fun at myself. This way, the accusation itself becomes associated with the joke and people begin to find it funny, which makes it "unserious."

comment by Vladimir_Nesov · 2010-07-09T09:00:07.165Z · score: -1 (1 votes) · LW(p) · GW(p)

I often despair at my inability to communicate everyday-life ideas at my own level. It's normal to have a textbook problem that is very difficult to solve, or to have a solution to said problem that is difficult to communicate. Sometimes it takes a lot of study to know enough to understand such a problem. But people don't expect to encounter such depth in the analysis of everyday life situations, or indeed in explanations of trivial remarks, and so they won't have the patience to understand a more difficult argument, or to learn the prerequisites for understanding it.

This leads to disagreements that I know (in theory) how to resolve (by explaining the reasons for a given position), but the other person won't study. The only short-term solution is to accept the impossibility of communication, and never mention the tiny details that you won't be able to easily substantiate.

An effective long-term solution is to gradually educate people around you, giving them rationalist's tools that you'll be eventually able to use to cut through the communication difficulty.

comment by SilasBarta · 2010-07-07T21:01:41.669Z · score: 2 (2 votes) · LW(p) · GW(p)

Information theory challenge: A few posters have mentioned here that the average entropy of a character in English is about one bit. This carries an interesting implication: you should be able to create an interface using only two of the keyboard's keys, such that composing an English message requires just as many keystrokes, on average, as it does on a regular keyboard.

To do so, you'd have to exploit all the regularities of English to offer suggestions that save the user from having to specify individual letters. Most of the entropy is in the initial characters of a word or message, so you would probably spend more strokes on specifying those, but then make it up with some "autocomplete" feature for large portions of the message.

If that's too hard, it should be a lot easier to do a 3-input method, which only requires your message set to have an entropy of less than ~1.5 bits per character.

Just thought I'd point that out, as it might be something worth thinking about.
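(A quick way to see why the one-bit figure has to come from context rather than single-letter frequencies: here's a small sketch, with an arbitrary sample string, that computes the unigram entropy of some English text. Treating characters as independent already gives you roughly 4 bits per character, so an interface that only exploits letter frequencies would need about four binary keystrokes per character; the rest of the savings must come from word- and phrase-level regularities.)

```python
from collections import Counter
from math import log2

def unigram_entropy(text):
    """Shannon entropy in bits per character, treating characters as independent."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Arbitrary sample; real English corpora give broadly similar unigram figures.
sample = "the quick brown fox jumps over the lazy dog " * 50
h = unigram_entropy(sample)
print(round(h, 2))  # ~4.3 bits/char for this sample
```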

comment by gwern · 2010-07-07T23:59:45.374Z · score: 3 (3 votes) · LW(p) · GW(p)

Already done; see Dasher and especially its Google Tech Talk.

It doesn't reach the 0.7-1 bit per character limit, of course, but then, no compression program (online or offline) in the Hutter challenge has either.

comment by SilasBarta · 2010-07-08T02:16:41.050Z · score: 2 (2 votes) · LW(p) · GW(p)

Wow, and Dasher was invented by David MacKay, author of the famous free textbook on information theory!

comment by gwern · 2010-07-08T02:18:48.742Z · score: 1 (1 votes) · LW(p) · GW(p)

According to Google Books, the textbook mentions Dasher, too.

comment by Christian_Szegedy · 2010-07-07T21:21:06.967Z · score: 2 (2 votes) · LW(p) · GW(p)

This is already exploited on cell phones to some extent.

comment by Vladimir_M · 2010-07-07T22:23:33.850Z · score: 0 (0 votes) · LW(p) · GW(p)

SilasBarta:

A few posters have mentioned here that the average entropy of a character in English is about one bit. This carries an interesting implication: you should be able to create an interface using only two of the keyboards keys, such that composing an English message requires just as many keystrokes, on average, as it takes on a regular keyboard.

One way to achieve this (though not practical for human-operated interfaces) would be to input the entire message bit by bit in some powerful lossless compression format optimized specifically for English text, and decompress it at the end of input. This way, you'd eliminate as much redundancy in your input as the compression algorithm is capable of removing.

The really interesting question, of course, is what are the limits of such technologies in practical applications. But if anyone has an original idea there, they'd likely cash in on it rather than post it here.

comment by Douglas_Knight · 2010-07-08T00:03:37.616Z · score: 1 (1 votes) · LW(p) · GW(p)

Shannon's estimate of 0.6 to 1.3 bits was based on having humans guess the next character out of a 27-character alphabet including spaces but no other punctuation.

The impractical leading algorithm achieves 1.3 bits per byte on the first 10^8 bytes of wikipedia. This page says that stripping wikipedia down to a simple alphabet doesn't affect compression ratios much. I think that means that it hits Shannon's upper estimate. But it's not normal text (eg, redirects), so I'm not sure in which way its entropy differs. The practical (for computer, not human) algorithm bzip2 achieves 2.3 bits per byte on wikipedia and I find it achieves 2.1 bits per character on normal text (which suggests that wikipedia has more entropy and thus that the leading algorithm is beating Shannon's estimate).

Since Sniffnoy asked about arithmetic coding: if I understand correctly, this page claims that arithmetic coding of characters achieves 4 bits per character and 2.8 bits per character if the alphabet is 4-tuples.
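(For anyone who wants to replicate this kind of bits-per-character measurement, a minimal sketch using Python's standard-library compressors. The sample text here is a stand-in; short, repetitive samples compress unrealistically well, so treat the output as illustrative only, not as the enwik8 figures quoted above.)

```python
import bz2
import lzma

# Stand-in sample; real measurements should use a large, non-repetitive corpus.
text = ("It was a bright cold day in April, and the clocks were striking "
        "thirteen. ") * 200
data = text.encode("ascii")

def bits_per_char(compressed):
    """Compressed size in bits divided by the number of input characters."""
    return 8 * len(compressed) / len(data)

for name, comp in [("bzip2", bz2.compress(data)),
                   ("lzma", lzma.compress(data))]:
    print(name, round(bits_per_char(comp), 2))
```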

comment by gwern · 2010-07-08T00:12:57.455Z · score: 1 (1 votes) · LW(p) · GW(p)

bzip2 is known to be both slow and not too great at compression; what does lzma-2 (faster & smaller) get you on Wikipedia?

(Also, I would expect redirects to play in a compression algorithm's favor compared to natural language. A redirect almost always takes the stereotypical form #REDIRECT[[foo]] or #redirect[[foo]]. It would have difficulty compressing the target, frequently a proper name, but the other 13 characters? Pure gravy.)

comment by Douglas_Knight · 2010-07-08T00:48:31.299Z · score: 0 (0 votes) · LW(p) · GW(p)

Here are the numbers for a pre-LZMA2 version of 7zip. It looks like LZMA is 2.0 bits per byte, while some other option is 1.7 bits per byte.

Yes, I would expect wikipedia to compress more than text, but it doesn't seem to be so. This is just for the first 100MB. At a gig, all compression programs do dramatically better, even off-the-shelf ones that shouldn't window that far. Maybe there is a lot of random vandalism early in the alphabet?

comment by gwern · 2010-07-08T02:24:25.439Z · score: 0 (0 votes) · LW(p) · GW(p)

Well, early on there are many weirdly titled pages, and I could imagine that the first 100MB includes all the '1958 in British Tennis'-style year articles. But intuitively that doesn't feel like enough to cause bad results.

Nor have any of the articles or theses I've read on vandalism detection noted any unusual distributions of vandalism; further, obvious vandalism like gibberish/high-entropy strings is the least long-lived form of vandalism - long-lived vandalism looks plausible & correct, and is indistinguishable from normal English even to native speakers (much less a compression algorithm).

A window really does sound like the best explanation, until someone tries out 100MB chunks from other areas of Wikipedia and finds they compress comparably to 1GB.

comment by Douglas_Knight · 2010-07-08T03:59:55.484Z · score: 1 (1 votes) · LW(p) · GW(p)

bzip's window is 900k, yet it compresses 100MB to 29% but 1GB to 25%. Increasing the memory on 7zip's PPM makes a larger difference on 1GB than 100MB, so maybe it's the window that's relevant there, but it doesn't seem very plausible to me. (18.5% -> 17.8% vs 21.3% -> 21.1%)

Sporting lists might compress badly, especially if they contain times, but this one seems to compress well.

comment by gwern · 2010-07-23T09:51:28.572Z · score: 0 (0 votes) · LW(p) · GW(p)

That's very odd. If you ever find out what is going on here, I'd appreciate knowing.

comment by Sniffnoy · 2010-07-07T21:21:06.424Z · score: 0 (0 votes) · LW(p) · GW(p)

Doesn't arithmetic coding accomplish this? Or does that not count because it's unlikely a human could actually use it?

comment by SilasBarta · 2010-07-07T21:29:43.439Z · score: 1 (1 votes) · LW(p) · GW(p)

I don't think arithmetic coding achieves the 1 bit / character theoretical entropy of common English, as that requires knowledge of very complex boundaries in the probability distribution. If you know a color word is coming next, you can capitalize on it, but not letterwise.

Of course, if you permit a large enough block size, then it could work, but the lookup table would probably be unmanageable.
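(A toy illustration of why conditioning on context is what buys you entropy - the sample string is arbitrary and the numbers are only illustrative, nowhere near the ~1-bit figure for real English. Even conditioning on a single previous character drops the estimated per-character entropy well below the unigram figure, which is the direction a larger block size pushes further.)

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy in bits, given a symbol -> count mapping."""
    n = sum(counts.values())
    return -sum((c / n) * log2(c / n) for c in counts.values())

def conditional_entropy(text):
    """H(next char | previous char), estimated from bigram counts."""
    bigrams = Counter(zip(text, text[1:]))
    prev = Counter(text[:-1])
    n = len(text) - 1
    return -sum((c / n) * log2(c / prev[a]) for (a, _), c in bigrams.items())

sample = "the quick brown fox jumps over the lazy dog " * 50
h1 = entropy(Counter(sample))      # characters treated as independent
h2 = conditional_entropy(sample)   # one character of context
print(round(h1, 2), round(h2, 2))  # conditioning lowers the rate
```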

comment by Sniffnoy · 2010-07-09T11:31:48.570Z · score: 1 (1 votes) · LW(p) · GW(p)

Yeah, I meant "arithmetic encoding with absurdly large block size"; I don't have a practical solution.

comment by Roko · 2010-07-05T11:15:29.955Z · score: 2 (2 votes) · LW(p) · GW(p)

Antinatalism is the argument that it is a bad thing to create people.

What arguments do people have against this position?

comment by Kingreaper · 2010-07-05T13:16:12.359Z · score: 5 (5 votes) · LW(p) · GW(p)

Even if antinatalism is true at present (I have no major opinion on the issue yet) it need not be true in all possible future scenarios.

In fact, should the human race shrink significantly [due to antinatalism, perhaps] without societal collapse, the average utility of a human life should increase. I find it highly unlikely that even the maximum achievable average utility is less than zero.

comment by Jayson_Virissimo · 2010-07-07T04:58:28.703Z · score: 1 (1 votes) · LW(p) · GW(p)

In fact, should the human race shrink significantly [due to antinatalism perhaps], without societal collapse, the average utility of a human life should increase.

Why shouldn't having a higher population lead to greater specialization of labor, economies of scale, greater gains from trade, and thus greater average utility?

comment by Kingreaper · 2010-07-07T13:12:57.674Z · score: 1 (3 votes) · LW(p) · GW(p)

Resource limitations.

There is only a limited amount of any given resource available. Decreasing the number of people therefore increases the amount of resource available per person.

There is a point at which decreasing the population will begin decreasing average utility, but to me it seems nigh certain that that point is significantly below the current population.
I could be wrong, and if I am wrong I would like to know.

Do you feel that the current population is optimum, below optimum, or above optimum?

comment by [deleted] · 2010-07-07T13:27:51.960Z · score: -1 (1 votes) · LW(p) · GW(p)

Because of the law of diminishing returns (marginal utility). If you have a billion humans, one more (or one fewer) results in a bigger increase (or decrease) in utility than if you have a trillion.

comment by RichardKennaway · 2010-07-07T14:23:50.742Z · score: 0 (0 votes) · LW(p) · GW(p)

Whose utility? The extra human's utility will be the same in both cases.

comment by Mitchell_Porter · 2010-07-06T02:55:56.527Z · score: 3 (3 votes) · LW(p) · GW(p)

I have long wrestled with the idea of antinatalism, so I should have something to say here. Certainly there were periods in my life in which I thought that the creation of life is the supreme folly.

We all know that terrible things happen, that should never happen to anyone. The simplest antinatalist argument of all is, that any life you create will be at risk of such intolerably bad outcomes; and so, if you care, the very least you can do is not create new life. No new life, no possibility of awful outcomes in it, problem avoided! And it is very easy to elaborate this into a stinging critique of anyone who proposes that nonetheless one shouldn't take this seriously or absolutely (because most people are happy, most people don't commit suicide, etc). You intend to gamble with this new life you propose to create, simply because you hope that it won't turn out terribly? And this gamble you propose appears to be completely unnecessary - it's not as if people have children for the greater good. Etc.

A crude utilitarian way to moderate the absoluteness of this conclusion would be to say, well, surely some lives are worth creating, and it would make a lot of people sad to never have children, so we reluctantly say to the ones who would be really upset to forego reproduction, OK, if you insist... but for people who can take it, we could say: There is always something better that you could do with your life. Have the courage not to hide from the facts of your own existence in the boisterous distraction of naive new lives.

It is probably true that philanthropic antinatalists, like the ones at the blog to which you link, are people who have personally experienced some profound awfulness, and that is why they take human suffering with such deadly seriousness. It's not just an abstraction to them. For example, Jim Crawford (who runs that blog) was once almost killed in a sword attack, had his chest sliced open, and after they stitched him up, literally every breath was agonizing for a long time thereafter. An experience like that would sensitize you to the reality of things which luckier people would prefer not to think about.

comment by Roko · 2010-07-06T10:47:56.012Z · score: 6 (6 votes) · LW(p) · GW(p)

You intend to gamble with this new life you propose to create, simply because you hope that it won't turn out terribly?

Seems like loss aversion bias.

Sure, bad things happen, but so do good things. You need to do an expected utility calculation for the person you're about to create: P(Bad)U(Bad) + P(Good)U(Good)

P(Sword attack) seems to be pretty darn low.
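(The calculation above with some made-up numbers, purely for illustration - the probabilities and utilities here are placeholders, not claims about actual lives. The point is just that a rare, very bad outcome doesn't automatically dominate the expectation.)

```python
# Hypothetical placeholder numbers, for illustration only.
p_bad_pct, u_bad = 5, -100     # a 5% chance of a very bad outcome
p_good_pct, u_good = 95, 40    # a 95% chance of a moderately good one

# P(Bad)U(Bad) + P(Good)U(Good), with probabilities given in percent.
expected_utility = (p_bad_pct * u_bad + p_good_pct * u_good) / 100
print(expected_utility)  # → 33.0: positive despite the severity of the bad outcome
```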

comment by Mitchell_Porter · 2010-07-07T06:22:08.906Z · score: 2 (4 votes) · LW(p) · GW(p)

I think that for you, a student of the singularity concept, to arrive at a considered and consistent opinion regarding antinatalism, you need to make some judgments regarding the quality of human life as it is right now, "pre-singularity".

Suppose there is no possibility of a singularity. Suppose the only option for humanity is life more or less as it is now - ageing, death, war, economic drudgery, etc, with the future the same as the past. Everyone who lives will die; most of them will drudge to stay alive. Do you still consider the creation of a human life justifiable?

Do you have any personal hopes attached to the singularity? Do you think, yes, it could be very bad, it could destroy us, that makes me anxious and affects what I do; but nonetheless, it could also be fantastic, and I derive meaning and hope from that fact?

If you are going to affirm the creation of human life under present conditions, but if you are also deriving hope from the anticipation of much better future conditions, then you may need to ask yourself how much of your toleration of the present derives from the background expectation of a better future.

It would be possible to have the attitude that life is already great and a good singularity would just make it better; or that the serious possibility of a bad singularity is enough for the idea to urgently command our attention; but it's also clear that there are people who either use singularity hope to sustain them in the present, or who have simply grown up with the concept and haven't yet run into difficulty.

I think the combination of transhumanism and antinatalism is actually a very natural one. Not at all an inevitable one; biotechnology, for example, is all about creating life. But if you think, for example, that the natural ageing process is intolerable, something no-one should have to experience, then probably you should be an antinatalist.

comment by Roko · 2010-07-07T13:21:52.159Z · score: 0 (0 votes) · LW(p) · GW(p)

you may need to ask yourself how much of your toleration of the present derives from the background expectation of a better future.

I personally would still want to have been born even if a glorious posthuman future were not possible, but the margin of victory for life over death becomes maybe a factor of 100 thinner.

comment by Douglas_Knight · 2010-07-06T02:17:30.933Z · score: 3 (3 votes) · LW(p) · GW(p)

Why do you link to a blog, rather than an introduction or a summary? Is this to test whether we find it so silly that we don't look for their best arguments?

My impression is that antinatalists are highly verbal people who base their idea of morality on how people speak about morality, ignoring how people act. They get the idea that morality is about assigning blame and so feel compelled only to worry about bad acts, thus becoming strict negative utilitarians or rights-deontologists with very strict and uncommon rights. I am not moved by such moralities.

Maybe some make more factual claims, eg, that most lives are net negative or that reflective life would regret itself. These seem obviously false, but I don't see that they matter. These arguments should not have much impact on the actions of the utilitarians that they seem aimed at. They should build a superhuman intelligence to answer these questions and implement the best course of action. If human lives are not worth living, then other lives may be. If no lives are worth living, then a superintelligence can arrange for no lives to be lead, while people evangelizing antinatalism aren't going to make a difference.

Incidentally, Eliezer sometimes seems to be an anti-human-natalist.

comment by cousin_it · 2010-07-05T12:27:23.451Z · score: 2 (4 votes) · LW(p) · GW(p)

The antinatalist argument goes that humans suffer more than they have fun, therefore not living is better than living. Why don't they convert their loved ones to the same view and commit suicide together, then? Or seek out small isolated communities and bomb them for moral good.

I believe the answer to antinatalism is that pleasure != utility. Your life (and the lives of your hypothetical kids) could create net positive utility despite containing more suffering than joy. The "utility functions" or whatever else determines our actions contain terms that don't correspond to feelings of joy and sorrow, or are out of proportion with those feelings.

comment by Leonhart · 2010-07-05T14:55:43.253Z · score: 3 (5 votes) · LW(p) · GW(p)

The suicide challenge is a non sequitur, because death is not equivalent to never having existed, unless you invent a method of timeless, all-Everett-branch suicide.

comment by Kingreaper · 2010-07-05T15:01:10.115Z · score: 3 (5 votes) · LW(p) · GW(p)

Precisely.

If the utility of the first ten or fifteen years of life is extremely negative, and the utility of the rest slightly positive, then it can be logical to believe that not being born is better than being born, but suicide (after a certain age) is worse than either.

comment by orthonormal · 2010-07-06T05:47:43.657Z · score: 4 (4 votes) · LW(p) · GW(p)

If the utility of the first ten or fifteen years of life is extremely negative

I think that's getting at a non-silly defense of antinatalism: what if the average experience of middle school and high school years is absolutely terrible, outweighing other large chunks of life experience, and adults have simply forgotten for the sake of their sanity?

I don't buy this, but it's not completely silly. (However, it suggests a better Third Alternative exists: applying the Geneva Convention to school social life.)

comment by gwern · 2010-07-06T07:13:30.967Z · score: 3 (3 votes) · LW(p) · GW(p)

adults have simply forgotten for the sake of their sanity?

not completely silly.

Quite right. Suicide rates spike in adolescence, go down, and only spike again in old age, don't they? Suicide is, I think, a good indicator that someone is having a bad life.

(Also, I've seen mentions on LW of studies that people raising kids are unhappier than if they were childless, but once the kids are older, they retrospectively think they were much happier than they actually were.)

comment by ocr-fork · 2010-07-29T23:35:30.401Z · score: 6 (6 votes) · LW(p) · GW(p)

Quite right. Suicide rates spike in adolescence, go down, and only spike again in old age, don't they? Suicide is, I think, a good indicator that someone is having a bad life.

Suicide rates start at 0.5 in 100,000 for ages 5-14 and rise to about 15 in 100,000 for seniors.

comment by gwern · 2010-07-30T04:27:50.358Z · score: 4 (4 votes) · LW(p) · GW(p)

Interesting. From page 30, suicide rates increase monotonically in the 5 age groups up to and including 45-54 (peaking at 17.2 per 100,000), but then drops by 3 to 14.5 (age 55-64) and drops another 2 for the 65-74 age bracket (12.6), and then rises again after 75 (15.9).

So, I was right that the rates increase again in old age, but wrong about when the first spike was.

comment by pjeby · 2010-07-30T16:27:27.820Z · score: 2 (2 votes) · LW(p) · GW(p)

So, I was right that the rates increase again in old age, but wrong about when the first spike was.

Unfortunately, the age brackets don't really tell you if there's a teenage spike, except that if there is one, it happens after age 14. That 9.9 could actually be a much higher level concentrated within a few years, if I understand correctly.

comment by Unknowns · 2010-08-01T16:58:07.975Z · score: 0 (0 votes) · LW(p) · GW(p)

Suicide rates may be higher in adolescence than at certain other times, but absolutely speaking, they remain very low, showing that most people are having a good life, and therefore refuting antinatalism.

comment by JoshuaZ · 2010-08-01T17:19:21.881Z · score: 2 (4 votes) · LW(p) · GW(p)

Suicide rates are not a good measure of how good life is except at a very rough level since humans have very strong instincts for self-preservation.

comment by gwern · 2010-08-01T17:27:06.517Z · score: 2 (2 votes) · LW(p) · GW(p)

My counterpoint to the above would be that if suicide rates are such a good metric, then why can they go up with affluence? (I believe this applies not just to wealthy nations (e.g. Japan, Scandinavia) but to individuals as well, though I wouldn't hang my hat on the latter.)

comment by daedalus2u · 2010-08-01T17:58:07.780Z · score: 3 (3 votes) · LW(p) · GW(p)

Suicide rates are a measure of depression, not of how good life is. Depression can hit people even when they otherwise have a very good life.

comment by gwern · 2010-08-02T04:02:46.709Z · score: 0 (0 votes) · LW(p) · GW(p)

Yes yes, this is an argument for suicide rates never going to zero - but again, the basic theory that suicide is inversely correlated, even partially, with quality of life would seem to be disproved by this point.

comment by daedalus2u · 2010-08-02T12:53:21.961Z · score: 3 (3 votes) · LW(p) · GW(p)

I think the misconception is the belief that what is generally considered “quality of life” is correlated with things like affluence; it isn't. People like to believe (pretend?) that it is, and by ever striving for more affluence feel that they are somehow improving their “quality of life”.

When someone is depressed, their “quality of life” is quite low. That “quality of life” can only be improved by resolving the depression, not by adding the bells and whistles of affluence.

How to resolve depression is not well understood. A large part of the problem is that people who have never experienced depression don't understand what it is, and believe that things like more affluence will resolve it.

comment by Unknowns · 2010-08-01T17:21:49.468Z · score: 1 (1 votes) · LW(p) · GW(p)

I suspect the majority of adolescents would also deny wishing they had never been born.

comment by RobinZ · 2010-07-06T11:24:46.737Z · score: 0 (0 votes) · LW(p) · GW(p)

I don't buy this, but it's not completely silly. (However, it suggests a better Third Alternative exists: applying the Geneva Convention to school social life.)

I'm surprised the Paul Graham essay "Why Nerds are Unpopular" wasn't linked there.

comment by Mass_Driver · 2010-07-06T05:52:01.160Z · score: 2 (2 votes) · LW(p) · GW(p)

Whenever anyone mentions how much it sucks to be a kid, I plug this article. It does suck, of course, but the suckage is a function of what our society is like, and not of something inherent about being thirteen years old.

Why Nerds Hate Grade School

comment by cousin_it · 2010-07-05T15:39:53.453Z · score: 2 (2 votes) · LW(p) · GW(p)

By the standard you propose, "never having existed" is also inadequate unless you invent a timeless, all-Everett-branch means of never having existed. Whatever kids an antinatalist can stop from existing in this branch may still exist in other branches.

comment by Nisan · 2010-07-05T11:50:12.372Z · score: 2 (2 votes) · LW(p) · GW(p)

Here's one: I bet if you asked lots of people whether their birth was a good thing, most of them would say yes.

If it turns out that after sufficient reflection, people, on average, regard their birth as a bad thing, then this argument breaks down.

comment by Roko · 2010-07-05T11:57:17.619Z · score: 3 (3 votes) · LW(p) · GW(p)

They have an answer to that.

The reason I ask is that antinatalism is a contrarian position we think is silly, but has some smart supporters.

If our contrarian position was as wrong as we think antinatalism is, would we realize?

comment by Leonhart · 2010-07-05T15:05:24.339Z · score: 6 (6 votes) · LW(p) · GW(p)

I don't think antinatalism is silly, although I have not really tried to find problems with it yet. My current, not-fully-reflected position is that I would prefer not to have existed (if that's indeed possible) but, given that I in fact exist, I do not want to die. I don't, right now, see screaming incoherency here, although I'm suspicious.

I would very much appreciate anyone who can point out faultlines for me to investigate. I may be missing something very obvious.

comment by JoshuaZ · 2010-07-05T15:10:26.495Z · score: 5 (5 votes) · LW(p) · GW(p)

The reason I ask is that antinatalism is a contrarian position we think is silly, but has some smart supporters.

Do people here really think that antinatalism is silly? I disagree with the position (very strongly) but it isn't a view that I consider to be silly in the same way that I would consider say, most religious beliefs to be silly.

But keep in mind that having smart supporters is by no means a strong indication that a viewpoint is not silly. For example, Jonathan Sarfati is a prominent young earth creationist who before he became a YEC proponent was a productive chemist. He's also a highly ranked chess master. He's clearly a bright individual. Now, you might be able to argue that YECism has a higher proportion of people who aren't smart (There's some evidence to back this up. See for example this breakdown of GSS data and also this analysis. Note that the metric used in the first one, the GSS WORDSUM, is surprisingly robust under education levels by some measures so the first isn't just measuring a proxy for education.) That might function as a better indicator of silliness. But simply having smart supporters seems insufficient to conclude that a position is not silly.

It does, however, seem that on LW there's a common tendency to label beliefs silly when what's meant is "I assign a very low probability to this belief being correct" or "I don't understand how someone's mind could be so warped as to have this belief." Both of these are problematic, the second more so than the first, because different humans have different value systems. In this particular example, value systems that weight harm to others as worse are more likely to be able to make a coherent antinatalist position. In that regard, note that people are able to discuss things like paperclippers but seem to have more difficulty discussing value systems which are in many ways closer to their own. This may be simply because paperclipping is a simple moral system. It may also be because it is so far removed from their own moral systems that it becomes easier to map out in a consistent fashion, whereas something like antinatalism is close enough to their own moral system that people conflate some of their own moral/ethical/value conclusions with those of the antinatalist, and that this occurs subtly enough for people not to notice.

comment by cupholder · 2010-07-05T20:21:00.012Z · score: 3 (3 votes) · LW(p) · GW(p)

Do people here really think that antinatalism is silly?

A data point: I don't think antinatalism (as defined by Roko above - 'it is a bad thing to create people') is silly under every set of circumstances, but neither is it obviously true under all circumstances. If my standard of living is phenomenally awful, and I knew my child's life would be equally bad, it'd be bad to have a child. But if I were living it up, knew I could be a good parent, and wanted a kid, what would be so awful about having one?

comment by Blueberry · 2010-07-05T20:26:28.701Z · score: 5 (7 votes) · LW(p) · GW(p)

But if I were living it up, knew I could be a good parent, and wanted a kid, what would be so awful about having one?

That your child might experience a great deal of pain which you could prevent by not having it.

That your child might regret being born and wish you had made the other decision.

That you can be a good parent, raise a kid, and improve someone's life without having a kid (adopt).

That the world is already overpopulated and our natural resources are not infinite.

comment by cupholder · 2010-07-05T20:54:22.872Z · score: 2 (2 votes) · LW(p) · GW(p)

Points taken.

Let me restate what I mean more formally. Conditional on high living standards, high-quality parenting, and desire to raise a child, one can reasonably calculate that the expected utility (to myself, to the potential child and to others) of having the child is higher than the expected utility of not having a child. In which case I wouldn't think the antinatalism position has legs.

comment by Blueberry · 2010-07-06T01:42:52.835Z · score: 4 (4 votes) · LW(p) · GW(p)

one can reasonably calculate that the expected utility (to myself, to the potential child and to others) of having the child is higher than the expected utility of not having a child.

I'm not sure about this. It's most likely that anything your kid does in life will get done by someone else instead. There is also some evidence that having children decreases your happiness (though there may be other reasons to have kids).

But even if this is true, it's still not enough for antinatalism. Increasing total utility is not enough justification to create a life. The act of creation makes you responsible for the utility of the individual created, and you have a duty not to create an entity you have reason to think may have negative personal utility. (Strict utilitarians will disagree.)

comment by cupholder · 2010-07-06T08:39:29.573Z · score: 0 (0 votes) · LW(p) · GW(p)

I'm not sure about this. It's most likely that anything your kid does in life will get done by someone else instead.

True - we might call the expected utility strangers get a wash because of this substitution effect. If we say the expected value most people get from me having a child is nil, it doesn't contribute to the net expected value, but nor does it make it less positive.

There is also some evidence that having children decreases your happiness (though there may be other reasons to have kids).

It sounds as though that data's based on samples of all types of parents, so it may not have much bearing on the subset of parents who (a) have stable (thanks NL!) high living standards, (b) are good at being parents, and (c) wanted their children. (Of course this just means the evidence is weak, not completely irrelevant.)

But even if this is true, it's still not enough for antinatalism. Increasing total utility is not enough justification to create a life.

That's a good point, I know of nothing in utilitarianism that says whose utility I should care about.

The act of creation makes you responsible for the utility of the individual created, and you have a duty not to create an entity you have reason to think may have negative personal utility. (Strict utilitarians will disagree.)

Whether or not someone agrees with this is going to depend on how much they care about risk aversion in addition to expected utility. (Prediction: antinatalists are more risk averse.) I think my personal level of risk aversion is too low for me to agree that I shouldn't make any entity that has a chance of suffering negative personal utility.

comment by NancyLebovitz · 2010-07-05T21:49:42.115Z · score: 4 (4 votes) · LW(p) · GW(p)

I'd throw in considering how stable you think those high living standards are.

comment by Roko · 2010-07-05T15:20:01.280Z · score: 0 (0 votes) · LW(p) · GW(p)

I still think that it's silly, because the common justification given for the position is highly suspect and borderline sophistry, and is, I suspect, not the causal reason for the values it purports to justify.

Yet because of moral antirealism, the mistake is subtle. And I have yet to find a critique of antinatalism that actually gives the correct (in my view) rebuttal. Most people who try to rebut it seem to also offer arguments that are tantamount to sophistry, i.e. they are not the causal reason for the person disagreeing with the view.

And I worry: am I making a similarly subtle mistake? And as a contrarian with few good critics, would anyone present me with the correct counterargument?

comment by JoshuaZ · 2010-07-05T15:31:40.751Z · score: 1 (1 votes) · LW(p) · GW(p)

I still think that it's silly, because the common justification given for the position is highly suspect and borderline sophistry, and is, I suspect, not the causal reason for the values it purports to justify.

I'm curious what you think the causal justification is. I'm not a fan of imputing motives to people I disagree with rather than dealing with their arguments, but one can't help but notice that Heinrich Heine was paralyzed, blind and in constant pain for the last decade of his life. Moreover, his religious beliefs prevented him from committing suicide. In that context, antinatalism just in regard to one's own life seems to make some sense. Thus one might think of antinatalism as arising in part from Other Optimizing.

comment by Roko · 2010-07-05T15:37:11.344Z · score: 2 (2 votes) · LW(p) · GW(p)

but one can't help but notice that Heinrich Heine was paralyzed, blind and in constant pain for the last decade of his life. Moreover, his religious beliefs prevented him from committing suicide.

I promise that I genuinely did not know that when I wrote "I suspect, not the causal reason for the values it purports to justify." and thought "these people were just born with low happiness set points and they're rationalizing"

comment by Nisan · 2010-07-05T15:29:57.858Z · score: 4 (4 votes) · LW(p) · GW(p)

If our contrarian position was as wrong as we think antinatalism is, would we realize?

If there was an argument for antinatalism that was capable of moving us, would we have seen it? Maybe not. A LessWrong post summarizing all of the good arguments for antinatalism would be a good idea.

comment by RichardKennaway · 2010-07-05T13:25:31.191Z · score: 1 (1 votes) · LW(p) · GW(p)

If our contrarian position was as wrong as we think antinatalism is, would we realize?

We have many contrarian positions, but antinatalism is one position. Personally, I think that some of the contrarian positions that some people advocate here are indeed silly.

comment by Roko · 2010-07-05T14:28:47.859Z · score: 1 (1 votes) · LW(p) · GW(p)

Such as?

comment by RichardKennaway · 2010-07-05T15:02:19.820Z · score: 5 (7 votes) · LW(p) · GW(p)

I knew someone would ask. :-) Ok, I'll list some of my silliness verdicts, but bear in mind that I'm not interested in arguing for my assessments of silliness, because I think they're too silly for me to bother with, and metadiscussion escalates silliness levels. Life is short (however long it may extend), and there are plenty of non-silly matters to think about. I generally don't post on matters I've consigned to the not-even-wrong category, or vote them down for it.

Non-silly: cryonics, advanced nano, AGI, FAI, Bayesian superintelligence. ("Non-silly" doesn't mean I agree with all of these, just that I think there are serious arguments in favour, whether or not I'm persuaded of them.)

Silly: we're living in a simulation, there are infinitely many identical copies of all of us, "status" as a number on an FRP character sheet, any Omega conundrum that depends on Omega being absolutely known to be absolutely reliable.

Does anyone else think that some of the recurrent ideas here are silly?

ETA: Non-silly: the mission of LessWrong. Silly: Utilitarianism of all types.

comment by Douglas_Knight · 2010-07-06T02:56:57.373Z · score: 2 (2 votes) · LW(p) · GW(p)

Silly: we're living in a simulation, there are infinitely many identical copies of all of us, "status" as a number on an FRP character sheet, any Omega conundrum that depends on Omega being absolutely known to be absolutely reliable....Utilitarianism of all types.

There's an odd inconsistency in how you labeled these. The last is identified by name and the first seems similarly neutral, but the third and fourth (and maybe the second - there are a lot of things that could be referring to) are phrased to make it clear what you think is silly about them. This seems tactically poor, if you want to avoid discussion of these issues. (Or maybe the first and last are the mistake, but tactical diversity seems weird to me.)

Moreover, it seems hard for me to imagine that you pay so little attention to these topics that you believe that many people here support them as you've phrased them. Not that I have anything to say about the difference in what one should do in the two situations of encountering people who (1) endorse your silly summary of their position; vs (2) seem to make a silly claim, but also claim to distinguish it from your silly summary. Of course, most of the time silly claims are far away and you never find out whether the people endorse your summary.

comment by Blueberry · 2010-07-06T01:18:58.152Z · score: 1 (5 votes) · LW(p) · GW(p)

I'm baffled at the idea that the simulation hypothesis is silly. It can be rephrased "We are not at the top level of reality." Given that we know of lower levels of reality (works of fiction, artificial life programs, dreams) it seems unlikely we're at the top.

comment by JoshuaZ · 2010-07-06T01:26:01.440Z · score: 4 (4 votes) · LW(p) · GW(p)

I'm baffled at the idea that the simulation hypothesis is silly. It can be rephrased "We are not at the top level of reality." Given that we know of lower levels of reality (works of fiction, artificial life programs, dreams) it seems unlikely we're at the top.

Do you have any evidence that any of those levels have anything remotely approximating observers? (I'll add the tiny data point that I've had dreams where characters have explicitly claimed to be aware. In one dream I and everyone around was aware that it was a dream and that it was my dream. They wanted me to not go on a mission to defeat a villain since if I died I'd wake up and their world would cease to exist. I'm willing to put very high confidence on the hypothesis that no observers actually existed.)

I agree that the simulationist hypothesis is not silly, but this is primarily due to the apparently high probability that we will at some point be able to simulate intelligent beings with great accuracy.

comment by Vladimir_Nesov · 2010-07-06T09:19:45.622Z · score: 0 (0 votes) · LW(p) · GW(p)

Reality isn't stratified. A simulated world constitutes a concept of its own, apart from being referenced by the enclosing worlds. Two worlds can simulate each other to an equal degree.

comment by mattnewport · 2010-07-05T18:40:34.035Z · score: 1 (3 votes) · LW(p) · GW(p)

I mostly agree with your list of silly ideas, though I'm not entirely sure what an FRP character sheet is and I do think status explanations are quite important so probably disagree on that one. I'd add utilitarianism to the list of silly ideas as well.

comment by RichardKennaway · 2010-07-05T19:28:04.549Z · score: 2 (4 votes) · LW(p) · GW(p)

Agreed about utilitarianism.

FRP = fantasy role-playing, i.e. Dungeons & Dragons and the like. A character sheet is a list of the attributes of the character you're playing, things like Strength=10, Wisdom=8, Charisma=16, etc. (each number obtained by rolling three dice and adding them together). There are rules about what these attributes mean (e.g. on attempting some task requiring especial Charisma, roll a 20-sided die and if the number is less than your Charisma you succeed). Then there are circumstances that will give you additional points for an attribute or take them away, e.g. wearing a certain enchanted ring might give you +2 to Charisma.
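A minimal sketch of the mechanics just described, purely for illustration (the attribute names and the +2 ring modifier are taken from the example above):

```python
import random

# Toy sketch of the D&D-style rules described above:
# an attribute is 3d6 (three six-sided dice summed), and a
# task check succeeds on a d20 roll under the attribute.

def roll_attribute():
    """Sum of three six-sided dice: a value from 3 to 18."""
    return sum(random.randint(1, 6) for _ in range(3))

def check(attribute, modifier=0):
    """Roll a 20-sided die; succeed if it comes up under the
    (possibly modified) attribute."""
    return random.randint(1, 20) < attribute + modifier

charisma = roll_attribute()
# e.g. an enchanted ring granting +2 to Charisma:
succeeded = check(charisma, modifier=2)
```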

Discussions of "status" here and on OB sometimes sound like D&D geeks arguing about the rules for a Status attribute.

comment by Vladimir_M · 2010-07-06T03:23:36.191Z · score: 6 (6 votes) · LW(p) · GW(p)

RichardKennaway:

Discussions of "status" here and on OB sometimes sound like D&D geeks arguing about the rules for a Status attribute.

Sometimes, yes. However, in many situations, the mere recognition that status considerations play an important role -- even if stated in the crudest possible character-sheet sort of way -- can be a tremendous first step in dispelling widespread, deeply entrenched naive and misguided views of human behavior and institutions.

Unfortunately, since a precise technical terminology for discussing the details of human status dynamics doesn't (yet?) exist, it's often very difficult to do any better.

comment by NancyLebovitz · 2010-07-05T20:10:30.185Z · score: 2 (2 votes) · LW(p) · GW(p)

Could you expand on how those discussions of status here and on OB are different from what you'd see as a more realistic discussion of status?

comment by RichardKennaway · 2010-07-13T18:07:35.584Z · score: 0 (0 votes) · LW(p) · GW(p)

I never replied to this, but this is an example of what I think is a more realistic discussion.

comment by Roko · 2010-07-05T15:06:07.814Z · score: 1 (1 votes) · LW(p) · GW(p)

What probability would you assign then to a well respected, oft-televised, senior scientist and establishment figure arguing in favour of the simulation hypothesis? (And I don't mean Nick Bostrom. I mean someone who heads government committees and has tea with the queen)

comment by RobinZ · 2010-07-05T18:53:47.711Z · score: 2 (2 votes) · LW(p) · GW(p)

What probability would you assign to a well respected, oft-televised, senior scientist and establishment figure arguing in favor of an incompatibilist theory of free will?

comment by Roko · 2010-07-05T19:04:55.029Z · score: 0 (0 votes) · LW(p) · GW(p)

I don't think that incompatibilism is so silly it's not worth talking about. In fact it's not actually wrong; it is simply a matter of how you define the term "free will".

comment by RobinZ · 2010-07-06T00:06:29.598Z · score: 1 (1 votes) · LW(p) · GW(p)

Definitions are not a simple matter - I would claim that libertarian free will* is at least as silly as the simulation hypothesis.

But I don't filter my conversation to ban silliness.

* I change my phrasing to emphasize that I can respect hard incompatibilism - the position that "free will" doesn't exist.

comment by RichardKennaway · 2010-07-05T15:19:21.318Z · score: 0 (0 votes) · LW(p) · GW(p)

Close to 1 as makes no difference, since I don't think you would ask this unless there was such a person. (Tea with the queen? Does that correlate positively or negatively with eccentricity, I wonder?)

Before anyone gets offended at my silliness verdicts (presuming you don't find them too silly to get offended by), these are my judgements on the ideas, not on the people holding them.

comment by Roko · 2010-07-05T15:34:23.769Z · score: 2 (2 votes) · LW(p) · GW(p)

Ok, but the point of the question is to try to arrive at true beliefs. So imagine forgetting that I'd asked the question. What does your model of the world, which says that simulation is silly, say for the probability that a major establishment scientist who is in no way a transhumanist, believes that we could be in a simulation? If it assigns too low a probability, maybe you should consider assigning some probability to alternative models?

comment by RichardKennaway · 2010-07-05T19:16:05.661Z · score: 3 (3 votes) · LW(p) · GW(p)

I would not be at all surprised. No speculation is too silly to have been seriously propounded by some philosopher or other, and lofty state gives no immunity to silliness.

[ETA: And of course, I'm talking about ideas that I've judged silly despite their being seriously propounded by (some) folks here on LessWrong that I think are really smart, and after reading a whole lot of their stuff before arriving at that conclusion. So one more smart person, however prestigious, isn't going to make a difference.]

But you changed it to "could be". Sure, could be, but that's like Descartes' speculations about a trickster demon faking all our sensations. It's unfalsifiable unless you deliberately put something into the speculation to let the denizens discover their true state, but at that point you're just writing speculative fiction.

But if this person is arguing that we probably are in a simulation, then no, I just tune that out.

comment by Roko · 2010-07-05T19:24:56.997Z · score: 2 (2 votes) · LW(p) · GW(p)

So the bottom line of your reasoning is quite safe from any evidential threats?

But if this person is arguing that we probably are in a simulation, then no, I just tune that out.

comment by RichardKennaway · 2010-07-05T20:16:51.182Z · score: 0 (4 votes) · LW(p) · GW(p)

So the bottom line of your reasoning is quite safe from any evidential threats?

In one sense, yes, but in another sense....yes.

First sense: I have a high probability for speculations on whether we are living in a simulation (or any of the other ideas I dismiss) not being worth my while outside of entertaining fictions. As a result, evidence to the contrary is unlikely to reach my notice, and even if it does, it has a lot of convincing to do. In that sense, it is as safe as any confidently held belief is from evidential threats.

Second sense: Any evidential threats at all? Now we're into unproductive navel-gazing. If, as a proper Bayesian, I make sure that my probabilities are never quite equal to 1, and therefore answer that my belief must be threatened by some sort of evidence, the next thing is you'll ask what that evidence might be. But why should anyone have to be able to answer that question? If I choose to question some idea I have, then, yes, I must decide what possible observations I might make that would tell either way. This may be a non-trivial task. (Perhaps for reasons relating to the small world/large world controversy in Bayesian reasoning, but I haven't worked that out.) But I have other things to do -- I cannot be questioning everything all the time. The "silly" ideas are the ones I can't be bothered spending any time on at all even if people are talking about them on my favorite blog, and if that means I miss getting in on the ground floor of the revelation of the age, well, that's the risk I accept in hitting the Ignore button.

So in practice, yes, my bottom line on this matter (which was not written down in advance, but reached after having read a bunch of stuff of the sort I don't read any more) is indeed quite safe. I don't see anything wrong with that.

Besides that, I am always suspicious of this question, "what would convince you that you are wrong?" It's the sort of thing that creationists arguing against evolution end up saying. After vigorously debating the evidence and making no headway, the creationist asks, "well, what would convince you?", to which the answer is that to start with, all of the evidence that has just been gone over would have to go away. But in the creationist's mind, the greater their failure to convince someone, the greater the proof that they're right and the other wrong. "Consider it possible that you are mistaken" is the sound of a firing pin clicking on an empty chamber.

comment by Roko · 2010-07-05T20:59:50.658Z · score: 5 (5 votes) · LW(p) · GW(p)

"what would convince you that you are wrong?" It's the sort of thing that creationists arguing against evolution

But a proponent of evolution can easily answer this, for example if they went to the fossil record and found it showed that all and only existing creatures' skeletons appeared 6000 years ago, and that radiocarbon dating showed that the earth was 6000 years old.

comment by RichardKennaway · 2010-07-06T07:34:22.873Z · score: 1 (1 votes) · LW(p) · GW(p)

The creationist generally puts his universal question after having unsuccessfully argued that the fossil record and radiocarbon dating support him.

comment by wedrifid · 2010-07-05T12:44:27.628Z · score: 1 (1 votes) · LW(p) · GW(p)

I'm not entirely opposed to the idea. 6 billion is enough for now. Make more when we expand and distance makes it infeasible to concentrate neg-entropy on the available individuals. This is quite different from the Robin Hanson 'make as many humans as physically possible and have them living in squalor' (exaggerated) position, but probably also in complete disagreement with the arguments used for antinatalism.

comment by red75 · 2010-07-06T04:44:15.444Z · score: 0 (2 votes) · LW(p) · GW(p)

Either antinatalism is futile in the long run, or it is an existential threat.

If we assume that antinatalism is rational, then in the long run it will lead to a reduction of the part of the human population that is capable of (or trained in) making rational decisions, thus making antinatalists' efforts futile. As we can see, the people who should be most susceptible to antinatalism don't even consider this option (en masse, at least). And given their circumstances they have a clear reason for that: every extra child makes it less likely for them to starve to death in old age, as more children mean more chances for the family to control more resources. It is a big prisoner's dilemma, where defectors win.

Edit: Post-humans are not considered. They will have other means to acquire resources.

Edit: My point: antinatalism can be rational for individuals, but it cannot be rational for humankind to accept (even if it is universally true as antinatalists claim).

comment by multifoliaterose · 2010-07-04T23:41:41.147Z · score: 2 (2 votes) · LW(p) · GW(p)

Another reference request: Eliezer made a post about how it's ultimately incoherent to talk about how "A causes B" in the physical world because at root, everything is caused by the physical laws and initial conditions of the universe. But I don't remember what it is called. Does anybody else remember?

comment by Vladimir_Nesov · 2010-07-06T09:50:55.280Z · score: 4 (4 votes) · LW(p) · GW(p)

It is coherent to talk about "A causes B"; on the contrary, it's a mistake to say that everything is caused by physical laws and that therefore you have no free will, for example (as if your actions don't cause anything). Of course, any given event won't normally have only one cause, but considering the causes of an event makes sense. See the posts on free will, and then the solution posts linked from there. The picture you were thinking about is probably from these posts.

comment by multifoliaterose · 2010-07-06T16:56:14.396Z · score: 0 (0 votes) · LW(p) · GW(p)

Thanks for the reference, yes, this is what I had remembered. And yes, I garbled the article - what I had in mind was the point that any given event won't normally have only one cause.

comment by Kazuo_Thow · 2010-07-05T21:24:05.922Z · score: 1 (1 votes) · LW(p) · GW(p)

It couldn't have been "Timeless Causality" or "Causality and Moral Responsibility", could it?

comment by multifoliaterose · 2010-07-06T05:05:58.107Z · score: 0 (0 votes) · LW(p) · GW(p)

Thanks, but neither of these are the one I remember.

comment by steven0461 · 2010-07-04T21:46:01.772Z · score: 2 (2 votes) · LW(p) · GW(p)

We think of Aumann updating as updating upward if the other person's probability is higher than you thought it would be, or updating downward if the other person's probability is lower than you thought it would be. But sometimes it's the other way around. Example: there are blue urns that have mostly blue balls and some red balls, and red urns that have mostly red balls and some blue balls. Except on Opposite Day, when the urn colors are reversed. Opposite Day is rare, and if it's OD you might learn it's OD or you might not. A and B are given an urn and are trying to find out whether it's red. It's OD, which A knows but B doesn't. They both draw a few balls. Then A knows if B draws red balls, B (not knowing it's OD) will estimate a high probability for red and therefore A (knowing it's OD) should estimate a low probability for red, and vice versa. So this is a sense in which intelligence can be inverted misguidedness.

Another thought: suppose in the above example, there's a small chance (let's say equal to the chance that it's OD) that A is insane and will behave as if always knowing for sure it's OD. Then if we're back in the case where it actually is OD and A is sane, the estimates of A and B will remain substantially different forever. So taking this as an example it seems like even tiny failures of common knowledge of rationality can (in correspondingly improbable cases) cause big persistent disagreements between rational agents.

Is the reasoning here correct? Are the examples important in practice?
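The first example can be checked numerically. Here is a small sketch with made-up parameters (red urns are 80% red balls on a normal day, contents swapped on Opposite Day, 50/50 prior over urn colors) showing that A's posterior is the mirror image of B's:

```python
from math import comb

# Hypothetical parameters, not from the comment above:
P_RED_BALL_NORMAL = 0.8  # red-ball fraction in a red urn, normal day
P_RED_BALL_OD = 0.2      # Opposite Day: the urn's contents are swapped
PRIOR_RED_URN = 0.5

def posterior_red_urn(k_red, n, p_red_ball):
    """P(urn is red | k_red red balls in n draws), where p_red_ball is
    the per-draw chance of red from a red urn (complement for blue)."""
    like_red = comb(n, k_red) * p_red_ball**k_red * (1 - p_red_ball)**(n - k_red)
    like_blue = comb(n, k_red) * (1 - p_red_ball)**k_red * p_red_ball**(n - k_red)
    return (like_red * PRIOR_RED_URN /
            (like_red * PRIOR_RED_URN + like_blue * (1 - PRIOR_RED_URN)))

# Both A and B draw 4 red balls out of 5, on Opposite Day.
b_estimate = posterior_red_urn(4, 5, P_RED_BALL_NORMAL)  # B assumes a normal day
a_estimate = posterior_red_urn(4, 5, P_RED_BALL_OD)      # A knows it's Opposite Day
# b_estimate = 64/65 ≈ 0.985; a_estimate = 1/65 ≈ 0.015
```

The same draws that push B toward "red urn" push A, who knows the contents are swapped, toward "blue urn" - which is why A should update downward precisely when B's estimate comes out high.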

comment by Emile · 2010-07-03T10:17:37.936Z · score: 2 (2 votes) · LW(p) · GW(p)

I have some half-baked ideas about getting interesting information on lesswronger's political opinions.

My goal is to give everybody an "alien's eye" view of their opinions, something like "You hold position Foo on issue Bar, and justify it by the X books you read on Bar; but among the sample people who read X or more books on Bar, 75% hold position ~Foo, suggesting that you are likely to be overconfident".

Something like collecting:

  • your positions on various issues

  • your confidence in that position

  • how important various characteristics are at predicting correct opinions on that issue (intelligence, general education, reading on the issue, age ("general experience"), specific work or life experience with the issue, etc.)

  • How well you fare on those characteristics

  • Whether you expect to be above or below average (for LessWrong) on those characteristics

  • How many lesswrongers you expect will disagree with you on that issue

  • Whether you expect those who disagree with you to be above or below average on the various characteristics

  • How much you would be willing to change your mind if you saw surprising information
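To make the proposal concrete, here is a minimal sketch of what one survey record might hold. All field names, scales, and example values are my own hypothetical choices, not part of the proposal:

```python
# Hypothetical schema for one response in the proposed opinions survey.
# Every field name and scale here is an assumption for illustration only.
from dataclasses import dataclass, field

@dataclass
class SurveyResponse:
    issue: str                 # e.g. "status of Taiwan"
    position: str              # chosen from a fixed multiple-choice list
    confidence: float          # 0.0 - 1.0
    predictor_weights: dict = field(default_factory=dict)  # predictor -> importance (0-10)
    self_scores: dict = field(default_factory=dict)        # predictor -> own score (0-10)
    above_lw_average: dict = field(default_factory=dict)   # predictor -> True/False
    expected_disagreement: float = 0.5  # fraction of LWers expected to disagree
    willingness_to_update: float = 0.5  # 0 = never change mind, 1 = fully open

r = SurveyResponse(
    issue="status of Taiwan",
    position="Foo",
    confidence=0.9,
    predictor_weights={"read books on issue": 8, "lived there": 9},
    self_scores={"read books on issue": 7, "lived there": 0},
)
print(r.issue, r.confidence)
```

Keeping positions and predictors as fixed-choice fields is what would make the later data-mining (comparing opinions across predictor scores) tractable.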

What data we could get from that

  • Are differences in opinion due to different "criteria for rightness" (book-knowledge vs. experience), to different "levels of knowledge" (Smart people believe A, stupid people believe B), or to something else?

Problems with this approach:

  • Politics is the mind-killer. We may not want too much (or any) politics on LessWrong. If the data is collected anonymously, this may not be a huge problem.

  • It's easier to do data-mining etc. with multiple-choice questions rather than with open-ended questions (because two people never answer the same thing, so it leaves space to interpretation), but doing that correctly requires very good advance knowledge of what possible answers exist.

  • Questions would need to be veeery carefully phrased.

  • Ideally I would want confidence factors for all answers, but the end result may be too intimidating :P (And discourage people from answering, which makes a small sample size, which means questionable results).

I would certainly be interested in seeing the results of such a survey, but for now my idea is too rough to be actionable - any suggestions? Comments?

comment by Douglas_Knight · 2010-07-03T19:51:58.994Z · score: 2 (2 votes) · LW(p) · GW(p)

You may like the Correct Contrarian Cluster.

comment by [deleted] · 2010-07-06T01:11:12.301Z · score: 1 (1 votes) · LW(p) · GW(p)

In general I'd be interested in more specific and subtle data on political views than is normally given. In particular, on what issues do people tend to break with their own party or ideology? That's a simpler question than the one you're asking, but it's easily tested.

comment by Emile · 2010-07-03T11:34:12.695Z · score: 1 (1 votes) · LW(p) · GW(p)

Oh, and I would probably want to add something on political affiliation - mostly because I expect a lot of "I believe Foo because I researched the issue / am very smart; others believe ~Foo because of their political affiliation"; but also because "I believe Foo and have researched it well, even though it goes against the grain of my general political affiliation" may be good evidence for Foo.

comment by mattnewport · 2010-07-03T15:28:00.450Z · score: 0 (0 votes) · LW(p) · GW(p)
  • how important various characteristics are at predicting correct opinions on that issue (intelligence, general education, reading on the issue, age ("general experience"), specific work or life experience with the issue, etc.)

How do you propose to determine what constitutes a 'correct' opinion on any given controversial issue?

comment by Emile · 2010-07-03T16:33:22.131Z · score: 1 (1 votes) · LW(p) · GW(p)

I don't :)

If there is a disagreement on, say, the status of Taiwan, even someone who doesn't know much about it might agree that some good predictors would be "knowledge of the history of Taiwan", "Having lived in Taiwan", "Familiarity with Chinese culture", etc.

And it can be interesting to see whether:

  • People of different opinions consider different predictors as important (conveniently, those that favor their position)

  • Everyone agrees on which predictors are important, but those who score highly on those predictors have a different opinion from those that score lowly (which would be evidence that they are probably right)

  • Everyone agrees on which predictors are important, but even among those who score highly on those predictors, opinions are split.

I guess what I'm getting at is "If you take the outside view, how likely is it that your opinions are true"?

comment by wedrifid · 2010-07-03T15:58:11.316Z · score: 0 (2 votes) · LW(p) · GW(p)

How do you propose to determine what constitutes a 'correct' opinion on any given controversial issue?

The only way that makes any sense: see how closely they match her own! :)

comment by JamesPfeiffer · 2010-07-02T17:19:32.589Z · score: 2 (2 votes) · LW(p) · GW(p)

I have been thinking about "holding off on proposing solutions." Can anyone comment on whether this is more about the social friction involved in rejecting someone's solution without injuring their pride, or more about the difficulty of getting an idea out of your head once it's there?

If it's mostly social, then I would expect the method to not be useful when used by a single person; and conversely. My anecdote is that I feel it's helped me when thinking solo, but this may be wishful thinking.

comment by Oscar_Cunningham · 2010-07-02T17:28:36.341Z · score: 2 (2 votes) · LW(p) · GW(p)

Definitely the latter, even when I'm on my own, any subsequent ideas after my first one tend to be variations on my first solution, unless I try extra hard to escape its grip.

comment by zero_call · 2010-07-03T05:28:51.018Z · score: 0 (0 votes) · LW(p) · GW(p)

You might consider the Zen approach, in which the proposing of solutions is deliberately held off, or treated differently. This is a common response to the tendency of solutions to suggest themselves so readily.

comment by Taure · 2010-07-14T22:33:35.873Z · score: 1 (1 votes) · LW(p) · GW(p)

Is self-ignorance a prerequisite of human-like sentience?

I present here some ideas I've been considering recently with regards to philosophy of mind, but I suppose the answer to this question would have significant implications for AI research.

Clearly, our instinctive perception of our own sentience/consciousness is one which is inaccurate and mostly ignorant: we do not have knowledge or sensation of the physical processes occurring in our brains which give rise to our sense of self.

Yet I take it as true that our brains - like everything else - are purely physical. No mysticism here, thank you very much. If they are physical, then everything that occurs within them is causally deterministic. I avoid here any implications regarding free will (a topic I regard as mostly nonsense anyway). I simply point out that our brain processes will follow a causal narrative thus: input leads to brain state A, which leads to brain state B, which leads to brain state C, and so on. These processes are entirely physical, and therefore, theoretically (not practically - yet), entirely predictable.

Now, ask yourself this question: what would our self-perception be like, if it was entirely accurate to the physical reality? If there was no barrier of ignorance between our consciousness and the inner workings of our brains?

With every idea, thought, emotion, plan, memory and action we had, we would be aware of the brainwave that accompanied it - the specific pattern of neuronal firings, and how they built up to create semantically meaningful information. Further, we'd see how this brain state led to the following brain state, and so on. We would perceive ourselves as purely mechanical.

In addition, as our brain is not a single entity, but a massive network of neurons, collected into different systems (or modules), working together but having separate functions, we would not think of our mental processes as unified - at least nowhere near as much as we do now. We would no longer attribute our thoughts and mental life to an "I", but to the totality of mechanical processes that - when we were ignorant - built up to create a unified sense of "I".

I would tentatively suggest that such a sense of self is incompatible with our current sense of self. That how we act and behave and think, how we see ourselves and others, is intrinsically tied to the way we perceive ourselves as non-mechanical, possessing a mystical will - an I - which goes where it chooses (of course academically you may recognise that you're a biological machine, but instinctually we all behave as if we weren't). In short, I would suggest that our ignorance of our neural processes is necessary for the perception of ourselves as autonomous sentient individuals.

The implications of this, were it true, are clear. It would be impossible to create an AI which was both able to perceive and alter its own programming, while maintaining a human-like sentience. That's not to say that such an AI would not be sentient - just that it would be sentient in a very different way to how we are.

Secondly, we would possibly not even be able to recognise this other-sentience, such was the difference. For every decision or proclamation the AI made, we would simply see the mechanical programming at work, and say "It's not intelligent like we are, it's just following mechanical principles". (Think, for example, of Searle's Chinese Room, which I take only shows that if we can fully comprehend every stage of an information manipulation process, most people will intuitively think it to be not sentient). We would think our AI project unfinished, and keep trying to add that "final spark of life", unaware that we had completed the project already.

comment by steven0461 · 2010-07-14T22:52:55.699Z · score: 0 (0 votes) · LW(p) · GW(p)

I don't think there is really such a thing as introverted and extroverted people at all. People are encouraged to think of these things as part of their "essential character" (TM) - or even their biology.

Here's some evidence the other way -- paywalled, but the gist is on the first page.

comment by Taure · 2010-07-14T23:11:20.314Z · score: 0 (0 votes) · LW(p) · GW(p)

Um, thanks, but I think wrong thread.

comment by steven0461 · 2010-07-14T23:13:51.552Z · score: 0 (0 votes) · LW(p) · GW(p)

Oops, you're right.

comment by Mass_Driver · 2010-07-10T04:44:30.094Z · score: 1 (5 votes) · LW(p) · GW(p)

Downvoted for unnecessarily rude plonking. You can tell someone you're not interested in what they have to say without being mean.

comment by Kevin · 2010-07-06T05:26:18.821Z · score: 1 (1 votes) · LW(p) · GW(p)

I have an IQ of 85. My sister has an IQ of 160+. AMA.

http://www.reddit.com/r/IAmA/comments/cma2j/i_have_an_iq_of_85_my_sister_has_an_iq_of_160_ama/

Posted because of previous LW interest in a similar thread.

comment by RobinZ · 2010-07-06T11:20:38.463Z · score: 0 (0 votes) · LW(p) · GW(p)

...huh, the account has been deleted.

comment by cousin_it · 2010-07-05T15:36:01.661Z · score: 1 (1 votes) · LW(p) · GW(p)

We've been thinking about the moral status of identical copies. Some people value them, some people don't; Nesov says we should ask an FAI because our moral intuitions are inadequate for such problems. Here's a new intuition pump:

Wolfram Research has discovered a cellular automaton that, when run for enough cycles, produces a singleton creature named Bob. From what we can see, Bob is conscious, sentient and pretty damn happy in his swamp. But we can't tweak Bob to create other creatures like him, because the automaton's rules are too fragile and poorly understood, and finding another ruleset with sentient beings seems very difficult as well. My question is, how many computers must we allocate to running identical copies of Bob and his world to make our moral sense happy? Assume computing power is pretty cheap.

comment by mkehrt · 2010-07-07T09:24:52.976Z · score: 2 (2 votes) · LW(p) · GW(p)

I completely lack the moral intuition that one should create new conscious beings if one knows that they will be happy. Instead, my ethics apply only to existing people. I am actually completely baffled that so many people seem to have this intuition.

Thus, there is no reason to copy Bob. (Moreover, I avoid the repugnant conclusion.)

comment by SilasBarta · 2010-07-05T15:45:07.881Z · score: 1 (1 votes) · LW(p) · GW(p)

Same answer I give for all other cases of software life: our ability to run Bob is more resilient against information theoretic death. So as long as we store enough to start him from where he left off, he never feels death, and we have met our moral obligations to him.

(First LW post from my first smartphone btw.)

comment by Vladimir_Nesov · 2010-07-05T18:21:44.076Z · score: 2 (2 votes) · LW(p) · GW(p)

Same answer I give for all other cases of software life: our ability to run Bob is more resilient against information theoretic death. So as long as we store enough to start him from where he left off, he never feels death, and we have met our moral obligations to him.

Bah, he can't feel that we don't run him. Whether we should run him is a question of optimizing the moral value of our world, not of determining his subjective perception. What Bob feels is a property completely determined by the initial conditions of the simulation, and doesn't (generally) depend on whether he gets implemented in any given world.

comment by cousin_it · 2010-07-05T18:37:31.274Z · score: 0 (0 votes) · LW(p) · GW(p)

You believe in Tegmark IV then? How do you reconcile it with my recent argument against it? Your use of "preference" looks like a get out of jail free card: it can "explain" any sequence of observations by claiming that you only "care" about a specific subset of worlds.

comment by Vladimir_Nesov · 2010-07-05T18:58:59.585Z · score: 0 (0 votes) · LW(p) · GW(p)

Don't see how Tegmark IV is relevant here (or indeed relevant anywhere: it doesn't say anything!). My comment was against expecting Bob to have epiphenomenal feelings: if it's not something already in his program (which takes no input), then he can't possibly experience it.

comment by cousin_it · 2010-07-05T19:07:11.969Z · score: 0 (0 votes) · LW(p) · GW(p)

It seems I misread your comment. Sorry.

comment by Vladimir_Nesov · 2010-07-05T19:18:44.586Z · score: 1 (1 votes) · LW(p) · GW(p)

Your confusion with Tegmark IV seems to remain though, so I'm glad you signaled that. This topic is analogous to Tegmark IV, in that in both cases the distinction made is essentially epiphenomenal: multiverses talk about which things "exist" or "don't exist", and here Bob is supposed to feel "non-existence". The property of "existence" is meaningless, that's the problem in both cases. When you refer to the relevant concepts (worlds, behavior of Bob's program), you refer to all their properties, and you can't stamp "exists" on top of that (unless the concept itself is inconsistent, say).

One can value certain concepts, and make decisions based on properties of those concepts. The concepts themselves are determined by what the decision-making algorithm is interested in.

comment by cousin_it · 2010-07-05T20:26:29.100Z · score: 2 (2 votes) · LW(p) · GW(p)

It seems to me you're mistaken. Multiverse theories do make predictions about what experiences we should anticipate, they're just wrong. You haven't yet given any real answer to the issue of pheasants, or maybe I'm a pathetic failure at parsing your posts.

Incidentally, my problem makes for a nice little test case: what experiences do you think Bob "should" anticipate in his future, assuming now we can meddle in the simulation at will? Does this question have a single correct answer? If it doesn't, why do such questions appear to have correct answers in our world, answers which don't require us to hypothesize random meddling gods, and does it tell us anything about how our world is different from Bob's?

comment by jimrandomh · 2010-07-06T22:25:11.695Z · score: 1 (3 votes) · LW(p) · GW(p)

Multiverse theories do make predictions about what experiences we should anticipate, they're just wrong.

On the contrary, multiverse theories do make predictions about subjective experience. For example, they predict what sort of subjective experience a sentient computer program should have, if any, after being halted. Some predict oddities like quantum immortality. The problem is that all observations that could shed light on the issue also require leaving the universe, making the evidence non-transferrable.

comment by cousin_it · 2010-07-05T15:53:54.178Z · score: 2 (2 votes) · LW(p) · GW(p)

Okay next question. Our understanding of the cellular automaton has advanced to the point where we can change one spot of Bob's world, at one specific moment in time, without being too afraid of harming Bob. It will have ripple effects and change the swamp around him slightly, though. So now we have 10^30 possible slightly-different potential futures for Bob. He will probably be happy in the overwhelming majority of them. How many should we run to fulfill our moral utility function of making sentients happy?

comment by SilasBarta · 2010-07-05T16:21:48.859Z · score: 1 (1 votes) · LW(p) · GW(p)

Okay, point taken. The answer depends on how (one believes) the social utility function responds to new instantiations of sentients that are very similar to existing ones. But in any case, you would be obligated to preserve re-instantiation capability of any already-created being.

comment by cousin_it · 2010-07-05T16:31:12.409Z · score: 0 (0 votes) · LW(p) · GW(p)

The answer depends on how (one believes) the social utility function responds to new instantiations of sentients that are very similar to existing ones.

How does yours?

comment by SilasBarta · 2010-07-05T17:16:48.124Z · score: 1 (1 votes) · LW(p) · GW(p)

I don't think that creation of new sentients, in and of itself, has an impact on the (my) SUF. It only has an impact to the extent that their creators value them and others disvalue such new beings.

comment by cousin_it · 2010-07-05T15:47:27.137Z · score: 0 (0 votes) · LW(p) · GW(p)

He never feels death if we just stop the simulation either.

comment by simplicio · 2010-07-28T06:03:39.488Z · score: 0 (0 votes) · LW(p) · GW(p)

I've been listening to a podcast (Skeptically Speaking) talking with a fellow named Sherman K Stein, author of Survival Guide for Outsiders. I haven't read the book, but it seems that the author has a lot of good points about how much weight to give to expert opinions.

EDIT: Having finished listening, I revise my opinion down. It's still probably worth reading, but wait for it to get to the library.

comment by Kevin · 2010-07-07T03:50:39.089Z · score: 0 (0 votes) · LW(p) · GW(p)

Scientific study roundup: fish oil and mental health.

http://www.oilofpisces.com/depression.html

comment by RobinZ · 2010-07-07T04:04:43.580Z · score: 3 (3 votes) · LW(p) · GW(p)

Welcome to the Premier Omega-3/Fish Oil Site on the Web!

I feel cautious about the objectivity of this source. Other sources suggest health benefits to consumption of fish, but I want to be confident that my expert sources are not skewing the selection of research they choose to promote.

comment by Kevin · 2010-07-07T04:08:27.117Z · score: 3 (3 votes) · LW(p) · GW(p)

Regardless of the source, the evidence seems to be rather strong that fish oil does good things for the brain. If you can find any negative evidence about fish oil and mental health, I'd like to see it.

comment by RobinZ · 2010-07-07T04:13:39.364Z · score: 2 (2 votes) · LW(p) · GW(p)

I would like to know of risks associated with fish oil consumption as well. I am not aware of any. I am also not confident that any given site dedicated to the stuff would provide such information if or when it is available. I would suggest investigating independent sources of information (including but not limited to citations within and citations of referenced research) before drawing a confident conclusion.

comment by mattnewport · 2010-07-07T07:35:54.904Z · score: 2 (2 votes) · LW(p) · GW(p)

Fish oil (particularly cod liver oil) has high levels of vitamin A which is known to be toxic at high doses (above what would typically be consumed through fish oil supplements) and some studies suggest is harmful at lower doses (consistent with daily supplementation).

comment by RichardKennaway · 2010-07-07T04:57:41.556Z · score: 1 (1 votes) · LW(p) · GW(p)

Seth Roberts has written about omega-3s. I believe that somewhere in there he's talked about the possibility of mercury contamination in fish oils.

comment by wedrifid · 2010-07-07T05:03:23.473Z · score: 3 (3 votes) · LW(p) · GW(p)

(I note that mercury concentration is subject to heavy quality control measures. Quality fish oil supplements will include credible guarantees regarding mercury levels, based on independent testing. This is, of course, something to consider when buying cheap sources from some obscure place.)

comment by RichardKennaway · 2010-07-07T06:19:01.577Z · score: 1 (1 votes) · LW(p) · GW(p)

Correction: the health risk he wrote about was PCBs in fish oil. For this reason he advocates flaxseed oil as a source of omega-3. Whether there is any real danger I don't know.

comment by Douglas_Knight · 2010-07-07T06:28:50.782Z · score: 1 (1 votes) · LW(p) · GW(p)

PCBs and omega-3s climb the food chain, so they're pretty well correlated. At some point I eyeballed a chart and decided that mercury was negatively correlated with omega-3s. No idea why.

comment by Kevin · 2010-07-07T05:25:54.372Z · score: 0 (2 votes) · LW(p) · GW(p)

I think this is one of those things that may have been a problem >5 years ago but recent regulation in the USA means that all fish oil on the market is now guaranteed to be safe.

comment by WrongBot · 2010-07-07T05:30:34.181Z · score: 1 (1 votes) · LW(p) · GW(p)

That's a rather... disproportionate level of faith to have in the US government's ability to regulate anything. I would not rely on American regulatory agencies for risk assessment in any field, much less one in which so little is currently known.

comment by Kevin · 2010-07-07T07:48:43.445Z · score: 2 (4 votes) · LW(p) · GW(p)

http://www.nytimes.com/2009/03/24/health/24real.html

I don't have faith, but I have a broad knowledge of the FDA and their regulation of supplements. Usually when the US government works, it works. If evidence comes out that something is dangerous, the FDA usually pulls it from store shelves until it is fixed. Examples of supplements that at a certain point in past history were poisonous but are now correctly regulated are 5-HTP and Kava.

I knew that there were people claiming fish oil is bad, some of them loudly. I know that this was first disclaimed at least five years ago. I then intuited today, that if there ever did exist a safety issue with mercury in fish oil, it would have been fixed by now.

The meme that some fish oil pills are poisoned is mostly perpetuated by companies that are trying to sell you extra expensive fish oil pills.

comment by wedrifid · 2010-07-07T08:59:34.681Z · score: 1 (1 votes) · LW(p) · GW(p)

(Voted up but...)

Examples of supplements that at a certain point in past history were poisonous but are now correctly regulated are 5-HTP and Kava.

I'd like to clarify that claim, because I took the totally wrong message from it the first read through. We're talking about regulation for quality control purposes and not control of the substance itself (I'm assuming). 5-Hydroxytryptophan itself is just an amino acid precursor that is available over the counter in the USA and Canada.

It is an intermediate product produced when Tryptophan is being converted into Serotonin. It was Tryptophan which was banned by the FDA due to association with EMS. They cleared that up eventually once they established that the problem was with the filtering process of a major manufacturer, not the substance itself. I don't think they ever got around to banning 5-HTP, even though the two only differ by one enzymatic reaction.

In general it is relatively hard to mess yourself up with amino acid precursors, even though Serotonin is the most dangerous neurotransmitter to play with. In the case of L-Tryptophan and 5-HTP, care should be taken when combining them with SSRIs and MAO-A inhibitors, i.e. take way way less for the same effect or just "DO NOT MESS WITH SEROTONIN!" (in slightly shaky handwriting).

Let me know if you meant something different from the above. Also, what is the story with Kava? All I know is that it is a mild plant based supplement that mildly sedates/counters anxiety/reduces pain, etc. Has it had quality issues too?

comment by Kevin · 2010-07-07T17:49:48.303Z · score: 3 (3 votes) · LW(p) · GW(p)

Thanks for the clarification, yes, by 5-HTP I meant tryptophan.

Serotonin has serious drug interactions with SSRIs and MAOIs, but otherwise is decidedly milder than pharmaceutical anti-depressants. Its effects are more comparable to melatonin's than Prozac's.

Kava is a plant that counters anxiety, and it is rather effective at doing so but very short lasting. It causes no physical addiction, which is one of the reasons it is on the FDA's Generally Recognized as Safe list. All kava on the market today is sourced from kava root. Kava has a great deal of native/indigenous use, and those people always make their drinks from kava root, throwing away the rest of the plant.

The rest of the plant contains active substances, so in their infinite wisdom, a Western company bought up the cheap kava leaf remnants and made extracts. It turns out that kava leaves contain ingredients that cause large amounts of liver damage, but the roots are relatively harmless.

Kava root still isn't good for the liver, but it is less damaging than alcohol or acetaminophen. It is a bad idea to regularly mix it with alcohol or acetaminophen or other things that are bad for the liver, though.

comment by wedrifid · 2010-07-07T23:26:38.916Z · score: 2 (2 votes) · LW(p) · GW(p)

Kava root still isn't good for the liver, but it is less damaging than alcohol or acetaminophen.

Courtesy of google: acetaminophen is 'paracetamol'. It seems several countries (including the US) use a different name for the chemical.

comment by wedrifid · 2010-07-07T05:52:34.626Z · score: 0 (0 votes) · LW(p) · GW(p)

I share your distrust of the regulatory ability of the US government, particularly the FDA. I further lament the ability of the FDA to damage the regulatory procedures worldwide with their incompetence (or more accurately their lost purpose). In the case of Kevin's specific reference to regulation I suspect even the FDA could manage it. While research on the effects of large doses of EPA and DHA (Omega3) may be scant, understanding of mercury content itself is fairly trivial. I'm taking it that Kevin is referring specifically to quality assurance regarding mercury levels which is at least plausible (given litigation risks for violations).

comment by NancyLebovitz · 2010-07-07T07:23:58.597Z · score: 2 (2 votes) · LW(p) · GW(p)

Stored riff here: I think the world would be a better place if people had cheap handy means of doing quantitative chemical tests. I'm not sure how feasible it is, though I think there's a little motion in that direction.

comment by wedrifid · 2010-07-07T07:27:03.649Z · score: 1 (1 votes) · LW(p) · GW(p)

I would love to have that available, either as a product or a readily accessible service.

comment by Nisan · 2010-07-07T08:32:57.737Z · score: 3 (3 votes) · LW(p) · GW(p)

It would make consuming illegal drugs a lot safer, no?

comment by Kevin · 2010-07-07T17:50:54.277Z · score: 3 (3 votes) · LW(p) · GW(p)

http://www.dancesafe.org/testingkits/

comment by wedrifid · 2010-07-07T09:03:31.322Z · score: 0 (0 votes) · LW(p) · GW(p)

I hadn't thought of that, good point. Given that consideration, assume the grandparent comment was written in all caps, with the 'product' option surrounded with '**'.

Quality issues are an important consideration, for me at least, when trying to source substances that violate arbitrary restrictions.

comment by RobinZ · 2010-07-07T05:03:17.534Z · score: 0 (0 votes) · LW(p) · GW(p)

Mercury is a known problem with fish in general, agreed. Content varies somewhat with species, I have heard.

comment by MichaelBishop · 2010-07-04T05:01:17.243Z · score: 0 (0 votes) · LW(p) · GW(p)

Andrew Gelman & Cosma Shalizi - Philosophy and the Practice of Bayesian Statistics arXiv

comment by Unnamed · 2010-07-04T05:53:22.413Z · score: 2 (2 votes) · LW(p) · GW(p)

You're third, after steven0461 and nhamann.

comment by cupholder · 2010-07-04T06:14:57.445Z · score: 3 (3 votes) · LW(p) · GW(p)

Fourth!

comment by DanielVarga · 2010-07-04T19:01:03.497Z · score: 2 (2 votes) · LW(p) · GW(p)

And I still managed to miss it the first three times.

comment by steven0461 · 2010-07-04T21:48:36.209Z · score: 1 (1 votes) · LW(p) · GW(p)

I thought I did a search but apparently not; sorry.

comment by cupholder · 2010-07-04T21:56:30.538Z · score: 1 (1 votes) · LW(p) · GW(p)

In the long run, it's all good - I think it's a decent paper, and I suppose this way more eyeballs see it than if I was the only one to post it. (Not to say that we should make a regular habit of linking things four times :-)

comment by apophenia · 2010-07-02T11:57:34.588Z · score: 0 (0 votes) · LW(p) · GW(p)

one five eight nine eight eight eight nine nine eight SEVEN wait. why seven. seven is the nine thousandth deviation. update. simplest explanation. all ones. next explanation. all ones and one zero. next explanation. random ones and zeros with probability point seven nine nine seven repeating. next explanation pi. gap. next explanation. decimal pi with random errors according to poisson distribution converted to binary. next explanation. one seven one eight eight five two decimals of pi with random errors according to poisson distribution converted to binary followed by eight five nine zero one digits of reflexive code. current explanation--

"Eric, you've got to come over and look at this!" Jerry explained excitedly into the phone. "It's not those damn notebooks again, is it? I've told you, I could just write a computer program and you'd have all your damn results for the last year inside a week," Eric explained sleepily for the umpteenth time. "No, no. Well... yes. But this is something new, you've got to take a look," Jerry wheedled. "What is it this time? I know, it can calculate pi with 99.9% percent accuracy, yadda yadda. We have pi to billions of decimal places with total accuracy, Jerry. You're fifty years too late." "No, I've been trying something new. Come over." Jerry hung up the phone, clearly upset. Eric rubbed his eyes. Fifteen minutes peering at the crackpot notebooks and nodding appreciatively would sooth his friend's ego, he knew. And he was a good friend, if a little nuts. Eric took one last longing look at his bed and grabbed his house key.

"And you see this pattern? The ones that are nearly diagonal here?" "Jerry, it's all a bunch of digits to me. Are you sure you didn't make a mistake?" "I double check all my work, I don't want to go back too far when I make a mistake. I've explained the pattern twice already, Eric." "I know, I know. But it's Saturday morning, I'm going to be a bit--let me get this straight. You decided to apply the algorithm to its old output." "No, not its own output, that's mostly just pi. The whole pad." "Jerry, you must have fifty of these things. There's no way you can--" "Yeah, I didn't go very far. Besides, the scratch pads grow faster than the output as I work through the steps anyway." "Okay, okay. So you run through these same steps with your scratch pad numbers, and you get correct predictions then too?" "That's not the point!" "Calm down, calm down. What's the point then?" "The point is these patterns in the scratch work--" "The memory?" "Yeah, the memory." "You know, if you'd just let me write a program, I--" "No! It's too dangerous." "Jerry, it's a math problem. What's it going to do, write pi at you? Anyway, I don't see this pattern..." "Well, I do. And so then I wondered, what if I just fed it ones for the input? Just rewarded it no matter what it did?" "Jerry, you'd just get random numbers. Garbage in, garbage out." "That's the thing, they weren't random." "Why the hell are you screwing around with these equations anyway? If you want to find patterns in the Bible or something... just joking! Oww, stop. I kid, kid!" "But, I didn't get random numbers! I'm not just seeing things, take a look. You see here in the right hand column of memory? We get mostly zeros, but every once in a while there's a one or two." "Okaaay?" "And if you write those down we have 2212221..." "Not very many threes?" "Ha ha. It's the perfect numbers, Eric. I think I stumbled on some way of outputting the perfect numbers. 
Although the digits are getting further spaced apart, so I don't know how long it will stay faster than factoring." "Huh. That's actually kinda cool, if they really are the perfect numbers. You have what, five or six so far? Let's keep feeding it ones and see what happens. Want me to write a program? I hear there's a cash prize for the larger ones." "NO! I mean, no, that's fine, Eric. I'd prefer you not write a program for this, just in case." "Geez, Jerry. You're so paranoid. Well, in that case can I help with the calculations by hand? I'd love to get my claim to fame somehow."

"Well... I guess that's okay. First, you copy this digit from here to here..."

comment by apophenia · 2010-07-02T11:56:59.331Z · score: 0 (0 votes) · LW(p) · GW(p)

I was originally not going to post this, but I decided to on the basis that if it's as bad as I think, it'll be voted down:

one five eight nine eight eight eight nine nine eight SEVEN wait. why seven. seven is the nine thousandth deviation. update. simplest explanation. all ones. next explanation. all ones and one zero. next explanation. random ones and zeros with probability point seven nine nine seven repeating. next explanation pi. gap. next explanation. decimal pi with random errors according to poisson distribution converted to binary. next explanation. one seven one eight eight five two decimals of pi with random errors according to poisson distribution converted to binary followed by eight five nine zero one digits of reflexive code. current explanation--

comment by Kevin · 2010-07-02T10:44:32.627Z · score: 0 (2 votes) · LW(p) · GW(p)

Medical-grade honey! I can't wait until I can get this stuff in bulk.

How honey kills bacteria

comment by gwern · 2010-07-02T10:54:02.763Z · score: 1 (1 votes) · LW(p) · GW(p)

I'm just wondering - what makes medical-grade honey medical-grade (as opposed to food-grade)?

comment by Emile · 2010-07-02T11:12:53.998Z · score: 4 (6 votes) · LW(p) · GW(p)

The price ?

comment by Douglas_Knight · 2010-07-03T03:15:08.479Z · score: 2 (2 votes) · LW(p) · GW(p)

Medical-grade honey is purer, sterilized, and made from tea tree nectar. It is a better antibiotic, both because of the sterilization and because it has more of the active ingredient than ordinary tea tree honey, probably because they put more effort into preventing the bees from eating anything else.

comment by gwern · 2010-07-03T12:57:50.455Z · score: 0 (0 votes) · LW(p) · GW(p)

'tea tree nectar'? I'm a little confused - I thought honey by definition always came from bees.

comment by wedrifid · 2010-07-03T12:59:36.124Z · score: 0 (0 votes) · LW(p) · GW(p)

I'll presume you aren't making a joke since you used the lesswrong keyword 'confused'.

What do bees eat?

comment by gwern · 2010-07-03T21:55:23.371Z · score: 0 (0 votes) · LW(p) · GW(p)

What do bees eat?

Flower nectar, I had always thought. I did think to myself, 'maybe what is meant is honey harvested from bees fed exclusively on the flowers of tea trees', but leaving aside my similar difficulty with the term 'tea tree' and how one would arrange that (giant sealed greenhouses of tea trees and bee hives?), I couldn't seem to find anything in a quick Google to confirm or deny this - 'tea tree honey' is a pretty rare term and mostly got me useless commercial hits.

comment by Douglas_Knight · 2010-07-04T04:32:06.339Z · score: 0 (0 votes) · LW(p) · GW(p)

The link I gave said "manuka" rather than "tea tree." If you want to know how beekeepers control the inputs, the term is monofloral honey. This is quite common, though a higher price for medical grade honey might lead to more involved methods.

comment by wedrifid · 2010-07-04T02:43:35.827Z · score: 0 (0 votes) · LW(p) · GW(p)

Put the box in the middle of a large forest of tea trees and kill any other plant that bears flowers nearby. Bees are quite efficient optimisers, they'll take low hanging fruit tree blossom if it is available.

comment by Kutta · 2010-07-02T22:08:42.045Z · score: 0 (2 votes) · LW(p) · GW(p)

It is produced by bees.

comment by naivecortex · 2010-07-02T02:22:13.589Z · score: 0 (0 votes) · LW(p) · GW(p)

test

comment by VNKKET · 2010-07-01T22:05:17.860Z · score: 0 (0 votes) · LW(p) · GW(p)

This is a mostly-shameless plug for the small http://lesswrong.com/lw/29o/open_thread_may_2010_part_2/21sr I proposed in May:

I'm still looking for three people to cross the "http://lesswrong.com/lw/d6/the_end_of_sequences" by donating $60 to the http://singinst.org/donate/whysmalldonationsmatter.

If you're interested, see my http://lesswrong.com/lw/29o/open_thread_may_2010_part_2/21sr. I will match your donation.

comment by SamAdams · 2010-07-10T03:50:41.263Z · score: -4 (10 votes) · LW(p) · GW(p)

Karma Encourages Group Think:

The LW karma system allows people to vote posts up or down for good or useless reasons. Without it you cannot make top-level posts or vote idiotic comments or posts down.

Essentially, karma is the currency of popularity on LW. That being said, I would wager that this encourages a groupthink attitude, because people have a strong motivation to get karma and not such a strong incentive to think for themselves and to question the group.

I would also posit that this kind of system causes a stagnation in ideas and thinking within the group. This is evident on LW with how many posts just seem to rehash old news.

Pearls before swine.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-10T06:52:43.639Z · score: 5 (5 votes) · LW(p) · GW(p)

As the comments by this user have been consistently voted down and he cannot seem to take the hint, comments by him will be deleted/banned.

comment by JoshuaZ · 2010-07-10T12:31:14.818Z · score: 2 (2 votes) · LW(p) · GW(p)

I'm not sure that wholesale deletion of comments prior to banning is ideal in this case, in that it a) substantially disrupts the flow of conversations that occurred and b) makes it very difficult for an interested lurker to realize what was occurring. I don't see a good reason to delete the existing comments (many seem to be merely wrong) although I agree with banning the individual.

comment by Morendil · 2010-07-10T13:18:05.019Z · score: 2 (2 votes) · LW(p) · GW(p)

He meant "further comments".

comment by ata · 2010-07-14T21:18:53.681Z · score: 0 (0 votes) · LW(p) · GW(p)

I think "ban" is actually the term the Reddit/LW software uses for deleting a comment if you're an editor rather than the original poster. It doesn't refer to banning the user.

(I could be mistaken about what he means by it in this case, but I distinctly remember some past discussion to that effect.)

comment by RobinZ · 2010-07-10T04:08:02.180Z · score: 4 (4 votes) · LW(p) · GW(p)

(1) We are aware. There are important reasons for keeping a moderation system anyway. Practical suggestions for rational groupthink-alleviating measures would be appreciated, although possibly not implemented.

(2) Bear in mind the selection effect of who reads, votes, and replies to a thread on a given topic. Last year's survey showed more people who had decided to forgo cryonics than signed up for preservation by a factor of sixteen.

(3) You are not yet a sufficiently impressive figure within this community to induce people to reconsider their judgments merely by expressing disapproval.

comment by timtyler · 2010-07-10T08:36:22.712Z · score: 1 (3 votes) · LW(p) · GW(p)

Re: "Rational groupthink-alleviating measures"

Don't delete, ban or otherwise punish critics, would be my recommendation. Critics often bear unpopular messages. The only group I have ever participated in where critics were treated properly is the security/cryptographic community. There, if someone bothers to criticise something, if anything they are thanked for their input.

comment by Paul Crowley (ciphergoth) · 2010-07-10T08:53:38.051Z · score: 3 (3 votes) · LW(p) · GW(p)

I don't perceive a big difference between the crypto community and LW here. Do you have an example in mind of someone who speaks to the wider crypto community with the same tone that SamAdams speaks to us, but who is treated as a valued contributor?

comment by timtyler · 2010-07-10T09:00:33.232Z · score: -1 (3 votes) · LW(p) · GW(p)

I haven't looked closely at the case of SamAdams.

comment by Vladimir_Nesov · 2010-07-11T16:57:37.844Z · score: 2 (2 votes) · LW(p) · GW(p)

Don't delete, ban or otherwise punish critics, would be my recommendation. Critics often bear unpopular messages.

"Critic" is not a very useful category, moderation-wise. What matters is quality of argument, not implied conclusions, so an inane supporter of the group should be banned as readily as an inane defector, and there seems to be little value in keeping inane contributors around, whether "critics" or not.

comment by LucasSloan · 2010-07-10T06:46:19.905Z · score: 2 (2 votes) · LW(p) · GW(p)

This is evident on LW with how many posts just seem to rehash old news.

Do you have any insights which you would like to share that advance the borders of rationality?

comment by nhamann · 2010-07-10T06:32:58.085Z · score: 1 (3 votes) · LW(p) · GW(p)

Actually, Karma is the currency of "not being a troll" on LW. Since you are most likely a troll (not very effective though, IMO. Try being more subtle next time, you're likely to get more genuine responses that way), you are bankrupt. Oops! :(

comment by naivecortex · 2010-07-02T02:14:40.740Z · score: -14 (14 votes) · LW(p) · GW(p)

There are three ways to experience the world: sensations, feelings and thoughts. In the perception process, sensations come first, followed by feelings and then thoughts.

The genetically endowed instinctual passions, and their concomitant feelings, form themselves into an inchoate sense of being a self (I/me) separate from the physical body. Suffering is tied to this self/feelings.

Eradication of self/feelings, and thus suffering, was accomplished in October 1992 by a man from Australia named Richard, followed by more beginning this year.

And now, in 2010, for the first time, Buddhists at DharmaOverground have begun to consider this new way of life sincerely. To begin with, here is an account from Daniel Ingram (a self-proclaimed Arahat) about the awesomeness of Pure Consciousness Experience compared to any other mode of experience that Humanity has known thus far.

PS: Before responding to this thread, it is helpful to review the commonly raised objections.

comment by JoshuaZ · 2010-07-02T02:42:52.107Z · score: 6 (8 votes) · LW(p) · GW(p)

Ok. Wrongbot has already given you the standard reading list, but I'd like to address this specifically.

The zeroth reason you've been voted down is that this comes across as spamming. No one likes to see a comment of apparently marginal relevance with lots of links to another website with minimal explanation.

Moving on from that, how will the general LW reader respond when reading the above? Let me more or less summarize the thought processes.

There are three ways to experience the world: sensations, feelings and thoughts. In the perception process, sensations come first, followed by feelings and then thoughts.

How do you define these three things? How do you know that they are everything? What is your experimental evidence?

The genetically endowed instinctual passions, and their concomitant feelings, form themselves into an inchoate sense of being a self (I/me) separate from the physical body. Suffering is tied to this self/feelings.

Ok. So now you've made some claim that sounds like the common dualist intuition is somehow due to genetics. That's plausibly true, but would need evidence. The claim that this form of dualism leads to "suffering" seems to be generic Buddhism.

Eradication of self/feelings, and thus suffering, has been accomplished on October 1992 by a man from Australia named Richard; followed by more beginning this year.

So now a testimonial of personal claims about enlightenment. That's going to go over real well with the empiricists here.

And now, in 2010, for the first time, Buddhists at DharmaOverground have begun to consider this new way of life sincerely. To begin with, here is an account from Daniel Ingram (a self-proclaimed Arahat) about the awesomeness of Pure Consciousness Experience compared to any other mode of experience that Humanity has known thus far.

And now we get more testimonials, an explicit connection to Buddhism, and some undefined terms thrown in for good measure (what does it mean for someone to be "self-proclaimed Arahat"? If one doesn't know what an Arahat is then this means very little. If one is familiar with the term in Buddhist and Jainist beliefs then one isn't likely to see much of value in this claim).

At this point, the LWer concludes that this message amounts to religious spam or close to that. Then the LWer gets annoyed that scanning this message took up time from their finite lifespan that could be spent in a way that creates more positive utility (whether reading an interesting scientific paper, thinking about the problem of Friendly AI, napping, or even just watching silly cats on Youtube). And then they express their annoyance by downvoting you.

comment by pjeby · 2010-07-02T15:01:21.221Z · score: 4 (4 votes) · LW(p) · GW(p)

And then they express their annoyance by downvoting you.

Following which, they use more of their finite lifespan to comment in reply, in the hopes of feeling a momentary elevation of status, plus a lifetime of karma enhancements, that will maybe make up for the previous loss of time. ;-)

(For the record, I upvoted you anyway. ;-) )

comment by naivecortex · 2010-07-02T04:09:54.955Z · score: -10 (12 votes) ·