Open Thread: July 2010

post by komponisto · 2010-07-01T21:20:42.638Z · LW · GW · Legacy · 697 comments

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Part 2


comment by michaelkeenan · 2010-07-02T02:43:06.441Z · LW(p) · GW(p)

I propose that LessWrong should produce a quarterly magazine of its best content.

LessWrong readership has a significant overlap with the readers of Hacker News, a reddit/digg-like community of tech entrepreneurs. So you might be familiar with Hacker Monthly, a print magazine version of Hacker News. The first edition, featuring 16 items that were voted highly on Hacker News, came out in June, and the second came out today. The curator went to significant effort to contact the authors of the various articles and blog posts to include them in the magazine.

Why would we want LessWrong content in a magazine? I personally would find it a great recruitment tool; I could have copies at my house and show/lend/give them to friends. As someone at the Hacker News discussion commented, "It's weird but I remember reading some of these articles on the web but, reading them again in magazine form, they somehow seem much more authoritative and objective. Ah, the perils of framing!"

The publishing and selling part is not too difficult. Hacker Monthly uses MagCloud, a company that makes it easy to turn PDFs into printed magazines and sell them.

Unfortunately, I don't have the skills or time to do this myself, at least not in the short-term. If someone wants to pick up this project, major tasks would include creating a process to choose articles for inclusion, contacting the authors for permission, and designing the magazine.

There's also the possibility of advertisements. I personally would be excited to see what kinds of companies would like to advertise to an audience of rationalists. Cryonics companies? Index funds? Rationalist books? Non-profits seeking donations?

Should advertising be used just to defray costs, or could the magazine make money? Make money for whom?

If people think it's a good idea but no-one takes it on, I might have some time free early next year to make this happen. But I hope someone gets to it earlier.

Replies from: mattnewport, LucasSloan
comment by mattnewport · 2010-07-02T15:59:28.882Z · LW(p) · GW(p)

Does anyone else find the idea of creating a printed magazine rather anachronistic?

Replies from: Blueberry
comment by Blueberry · 2010-07-02T16:12:46.424Z · LW(p) · GW(p)

The rumors of print media's death have been greatly exaggerated.

Replies from: Larks
comment by Larks · 2010-07-04T17:32:37.783Z · LW(p) · GW(p)

This comment would seem much more authoritative if seen in print.

comment by LucasSloan · 2010-07-02T04:37:05.217Z · LW(p) · GW(p)

I don't think there's enough content on LW to make publishing a magazine worthwhile. However, Eliezer's book on rationality should offer many of the same benefits.

Replies from: michaelkeenan, NancyLebovitz, gwern, Kevin
comment by michaelkeenan · 2010-07-02T05:15:43.343Z · LW(p) · GW(p)

Not all of the content needs to be from the most recent quarter. There could be classic articles too. But I think we might have enough content each quarter anyway. Let's see...

There were about 120 posts to Less Wrong from April 1 to June 30. The top ten highest-voted were Diseased thinking: dissolving questions about disease by Yvain, Eight Short Studies On Excuses by Yvain, Ugh Fields by Roko, Bayes Theorem Illustrated by komponisto, Seven Shiny Stories by Alicorn, Ureshiku Naritai by Alicorn, The Psychological Diversity of Mankind by Kaj Sotala, Abnormal Cryonics by Will Newsome, Defeating Ugh Fields In Practice by Psychohistorian, and Applying Behavioral Psychology on Myself by John Maxwell IV.

Maybe not all of those are appropriate for a magazine (e.g. Bayes Theorem Illustrated is too long). So maybe swap a couple of them out for other ones. Then maybe add a few classic LessWrong articles (for example, Disguised Queries would make a good companion piece to Diseased Thinking), add a few pages of advertising and maybe some rationality quotes, and you'd have at least 30 pages. I know I'd buy it.

Replies from: komponisto
comment by komponisto · 2010-07-02T11:28:18.343Z · LW(p) · GW(p)

Maybe not all of those are appropriate for a magazine (e.g. Bayes Theorem Illustrated is too long)

It's not actually all that long; it's just that the diagrams take up a lot of space.

Replies from: michaelkeenan
comment by michaelkeenan · 2010-07-02T16:28:18.100Z · LW(p) · GW(p)

Well, I'd like to keep the diagrams if the article is to be used. I do like Bayes Theorem Illustrated and I think an explanation of Bayes Theorem is perfect content for the magazine. If I were designing the magazine I'd want to try to include it, maybe edited down in length.

comment by NancyLebovitz · 2010-07-02T05:21:57.292Z · LW(p) · GW(p)

Monthly seems too often. Quarterly might work.

comment by gwern · 2010-07-02T05:04:49.269Z · LW(p) · GW(p)

A yearly anthology would be pretty good, though. HN is reusing others' content and can afford a faster tempo; but that simply means we need to be slower. Monthly is too fast, and I suspect that quarterly may be a little too fast as well, unless we lower our standards to include probably-wrong but still interesting essays. (I think of "Is cryonics necessary?: Writing yourself into the future" as an example of something I'm sure is wrong, but was still interesting to read.)

Replies from: Kevin
comment by Kevin · 2010-07-02T06:24:04.081Z · LW(p) · GW(p)

How about thirdly!?

Replies from: magfrump
comment by magfrump · 2010-07-02T13:55:31.940Z · LW(p) · GW(p)

This post both made me laugh AND think it was a good idea; I'd love to see a magazine that came out more than once a year. There's a bit of discussion of the most recent quarter above; if people don't think it supplies enough content (or that the pace will continue, or that people will consent to their articles being put in journals), a slight delay should help, but a four-times delay seems excessive.

comment by Kevin · 2010-07-02T06:19:44.085Z · LW(p) · GW(p)

There's certainly enough content to do at least one really good issue.

comment by Paul Crowley (ciphergoth) · 2010-07-08T14:39:19.763Z · LW(p) · GW(p)

A New York Times article on Robin Hanson and his wife Peggy Jackson's disagreement on cryonics:

http://www.nytimes.com/2010/07/11/magazine/11cryonics-t.html?ref=health&pagewanted=all

Replies from: WrongBot, Vladimir_Nesov, Wei_Dai, wedrifid, Vladimir_Nesov, mattnewport
comment by WrongBot · 2010-07-08T17:12:21.846Z · LW(p) · GW(p)

While I'm not planning to pursue cryopreservation myself, I don't believe that it's unreasonable to do so.

Industrial coolants came up in a conversation I was having with my parents (for reasons I am completely unable to remember), and I mentioned that I'd read a bunch of stuff about cryonics lately. My mom then half-jokingly threatened to write me out of her will if I ever signed up for it.

This seemed... disproportionately hostile. She was skeptical of the singularity and my support for the SIAI when it came up a few weeks ago, but she's not particularly interested in the issue and didn't make a big deal about it. It wasn't even close to the level of scorn she apparently has for cryonics. When I asked her about it, she claimed she opposed it based on the physical impossibility of accurately copying a brain. When my father and I pointed out that this would literally require the existence of magic, she conceded the point, mentioned that she still thought it was ridiculous, and changed the subject.

This was obviously a case of my mom avoiding her belief's true weak points by not offering her true objection, rationality failures common enough to deserve blog posts pointing them out; I wasn't shocked to observe them in the wild. What is shocking to me is that someone who is otherwise quite rational would feel so motivated to protect this particular belief about cryonics. Why is this so important?

That the overwhelming majority of those who share this intense motivation are women (it seems) just makes me more confused. I've seen a couple of explanations for this phenomenon, but they aren't convincing: if these people object to cryonics because they see it as selfish (for example), why do so many of them come up with fake objections? The selfishness objection doesn't seem like it would be something one would be penalized for making.

Replies from: Roko, SilasBarta, NancyLebovitz, Blueberry, whpearson
comment by Roko · 2010-07-08T22:31:28.034Z · LW(p) · GW(p)

Wanting cryo signals disloyalty to your present allies.

Women, it seems, are especially sensitive to this (mothers, wives). Here's my explanation for why:

  1. Women are better than men at analyzing the social-signalling theory of actions. In fact, they (mostly) obsess about that kind of thing, e.g. watching soap operas, gossiping, people watching, etc. (disclaimer: on average)

  2. They are less rational than men (only slightly, on average), and this is compounded by the fact that they are less knowledgeable about technical things (disclaimer: on average), especially physics, computer science, etc.

  3. Women are more bound by social convention and less able to be lone dissenters. Asch's conformity experiment found women to be more conforming.

  4. Because of (2) and (3), women find it harder than men to take cryo seriously. Therefore, they are much more likely to think that it is not a feasible thing for them to do.

  5. Because they are so into analyzing social signalling, they focus in on what cryo signals about a person. Overwhelmingly: selfishness, and as they don't think they're going with you, betrayal.

Replies from: Alicorn, Wei_Dai, NancyLebovitz
comment by Alicorn · 2010-07-08T22:35:28.376Z · LW(p) · GW(p)

If you're right, this suggests a useful spin on the disclosure: "I want you to run away with me - to the FUTURE!"

However, it was my dad, not my mom, who called me selfish when I brought up cryo.

Replies from: Roko
comment by Roko · 2010-07-08T22:40:35.027Z · LW(p) · GW(p)

I think that what would work is signing up before you start a relationship, and making it clear that it's a part of who you are.

For parents, you can't do this, but they're your parents, they'll love you through thick and thin.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-07-09T04:21:50.661Z · LW(p) · GW(p)

I think that what would work is signing up before you start a relationship, and making it clear that it's a part of who you are.

Ah, but did you notice that that did not work for Robin? (The NYT article says that Robin discussed it with Peggy when they were getting to know each other.)

Replies from: Nisan
comment by Nisan · 2010-07-09T12:54:27.374Z · LW(p) · GW(p)

It "worked" for Robin to the extent that Robin got to decide whether to marry Peggy after they discussed cryonics. Presumably they decided that they preferred each other to hypothetical spouses with the same stance on cryonics.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-07-09T13:39:21.137Z · LW(p) · GW(p)

Thanks. (Upvoted.)

comment by Wei Dai (Wei_Dai) · 2010-07-08T22:51:24.921Z · LW(p) · GW(p)

Maybe the husband/son should preemptively play the "if you don't sign up with me, you're betraying me" card?

Replies from: Roko, wedrifid, JoshuaZ
comment by Roko · 2010-07-08T23:07:18.985Z · LW(p) · GW(p)

Aha, but if I signed up, I'd have to non-conform, darling. Think of what all the other girls at the office would say about me! It would be worse than death!

Replies from: lmnop
comment by lmnop · 2010-07-08T23:25:12.975Z · LW(p) · GW(p)

In the case of refusing cryonics, I doubt that fear of social judgment is the largest factor or even close. It's relatively easy to avoid judgment without incurring terrible costs--many people who are signed up for cryonics have simply never mentioned it to the girls and boys in the office. I'm willing to bet that most people, even if you promised that their decision to choose cryonics would be entirely private, would hardly waver in their refusal.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-07-09T01:30:32.249Z · LW(p) · GW(p)

For what it's worth, Steven Kaas emphasized social weirdness as a decent argument against signing up. I'm not sure what his reasoning was, but given that he's Steven Kaas I'm going to update on expected evidence (that there is a significant social cost to signing up that I cannot at the moment see).

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-07-09T06:27:04.774Z · LW(p) · GW(p)

I don't get why social weirdness is an issue. Can't you just not tell anyone that you've signed up?

Replies from: gwern
comment by gwern · 2010-07-09T06:45:43.346Z · LW(p) · GW(p)

The NYT article points out that you sometimes want other people to know - your wife's cooperation at the hospital deathbed will make it much easier for the Alcor people to whisk you away.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-09T08:19:40.671Z · LW(p) · GW(p)

It's not an argument against signing up, unless the expected utility of the decision is borderline positive and it's specifically the increased probability of failure because of lack of additional assistance of your family that tilts the balance to the negative.

Replies from: gwern
comment by gwern · 2010-07-10T10:12:34.088Z · LW(p) · GW(p)

Given that there are examples of children or spouses actively (and successfully) preventing cryopreservation, that means there's an additional few % chance of complete failure. Given the low chance to begin with (I think another commenter says no one expects cryonics to succeed with more than 1/4 probability?), that damages the expected utility badly.

Replies from: pengvado
comment by pengvado · 2010-07-10T11:09:28.759Z · LW(p) · GW(p)

An additional failure mode with a few % chance of happening damages the expected utility by a few %. Unless you have some reason to think that this cause of failure is anticorrelated with other causes of failure?

Replies from: gwern, RogerPepitone
comment by gwern · 2010-07-10T13:04:41.634Z · LW(p) · GW(p)

If I initially estimate that cyronics in aggregate has a 10% chance of succeeding, and I then estimate that my spouse/children have a 5% chance of preventing my cryopreservation, does my expected utility decline by only 5%?
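
A minimal sketch of the arithmetic in question, using the hypothetical 10% and 5% figures from this exchange and assuming the family's interference is an independent, additional failure mode:

```python
# Hypothetical figures from the exchange above, not real data.
p_success_baseline = 0.10   # estimated P(cryonics works), absent family interference
p_blocked = 0.05            # estimated P(spouse/children prevent the suspension)

# If interference is an independent, additional failure mode, it scales
# the success probability down multiplicatively:
p_success = p_success_baseline * (1 - p_blocked)

print(p_success)                               # 0.095
print(1 - p_success / p_success_baseline)      # 0.05 -> a 5% drop relative to the baseline
# i.e. the expected utility falls by about 5% of its former value
# (half a percentage point in absolute terms), not by 5 percentage points.
```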

comment by RogerPepitone · 2010-07-11T15:49:57.612Z · LW(p) · GW(p)

Are you still involved in Remember 11?

comment by wedrifid · 2010-07-09T02:59:16.137Z · LW(p) · GW(p)

Maybe the husband/son should preemptively play the "if you don't sign up with me, you're betraying me" card?

If my spouse played that card too hard, I'd sign up for cryonics and then I'd dump them. ("Too hard" would probably mean more than one issue and persisting against clearly expressed boundaries.) Apart from the manipulative aspect it is just, well, stupid. At least manipulate me with "you will be abandoning me!", you silly man/woman/intelligent agent of choice.

comment by JoshuaZ · 2010-07-08T23:06:24.945Z · LW(p) · GW(p)

Maybe the husband/son should preemptively play the "if you don't sign up with me, you're betraying me" card?

Voted up as an interesting suggestion. That said, I think that if anyone feels a need to be playing that card in a preemptive fashion then a relationship is probably not very functional to start with. Moreover, given that signing up is a change from the status quo I suspect that attempting to play that card would go over poorly in general.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-07-08T23:39:09.412Z · LW(p) · GW(p)

That said, I think that if anyone feels a need to be playing that card in a preemptive fashion then a relationship is probably not very functional to start with.

Can you expand on that? I'm not sure why this particular card is any worse than what people in functional relationships typically do.

Moreover, given that signing up is a change from the status quo I suspect that attempting to play that card would go over poorly in general.

Right, so sign up before entering the relationship, then play that card. :)

Replies from: lsparrish, JoshuaZ
comment by lsparrish · 2010-07-08T23:57:56.908Z · LW(p) · GW(p)

I would say that if you aren't yet married, be prepared to dump them if they won't sign up with you. Because if they won't, that is a strong signal to you that they are not a good spouse. These kinds of signals are important to pay attention to in the courtship process.

After marriage, you are hooked regardless of what decision they make on their own suspension arrangements, because it's their own life. You've entered the contract, and the fact they want to do something stupid does not change that. But you should consider dumping them if they refuse to help with the process (at least in simple matters like calling Alcor), as that actually crosses the line into betrayal (however passive) and could get you killed.

comment by JoshuaZ · 2010-07-09T01:42:02.455Z · LW(p) · GW(p)

Can you expand on that? I'm not sure why this particular card is any worse than what people in functional relationships typically do.

We may have different definitions of "functional relationship." I'd put very high on the list of elements of a functional relationship that people don't go out of their way to consciously manipulate each other over substantial life decisions.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-07-09T08:29:03.876Z · LW(p) · GW(p)

Um, it's a matter of life or death, so of course I'm going to "go out of my way".

As for "consciously manipulate", it seems to me that people in all relationships consciously manipulate each other all the time, in the sense of using words to form arguments in order to convince the other person to do what they want. So again, why is this particular form of manipulation not considered acceptable? Is it because you consider it a lie, that is, you don't think you would really feel betrayed or abandoned if your significant other decided not to sign up with you? (In that case would it be ok if you did think you would feel betrayed/abandoned?) Or is it something else?

Replies from: wedrifid
comment by wedrifid · 2010-07-09T09:51:23.259Z · LW(p) · GW(p)

So again, why is this particular form of manipulation not considered acceptable?

It is a good question. The distinctive feature of this class of influence is the overt use of guilt and shame, combined with the projection of the speaker's alleged emotional state onto the actual physical actions of the recipient. It is a symptom of a relationship dynamic that many people consider immature and unhealthy.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-07-09T20:56:00.739Z · LW(p) · GW(p)

It is a symptom of a relationship dynamic that many people consider immature and unhealthy.

I'm tempted to keep asking why (ideally in terms of game theory and/or evolutionary psychology) but I'm afraid of coming across as obnoxious at this point. So let me just ask: do you think there is a better way of making the point that, from the perspective of the cryonicist, he's not abandoning his SO, but rather it's the other way around? Or do you think that it's not worth bringing up at all?

comment by NancyLebovitz · 2010-07-09T00:02:17.668Z · LW(p) · GW(p)

Wanting cryo signals disloyalty to your present allies.

I don't see why you'd be showing disloyalty to those of your allies who are also choosing cryo.

Here are some more possible reasons for being opposed to cryo.

Loss aversion. "It would be really stupid to put in that hope and money and get nothing for it."

Fear that it might be too hard to adapt to the future society. (James Halperin's The First Immortal has it that no one gets thawed unless someone is willing to help them adapt. Would that make cryo seem more or less attractive?)

And, not being an expert on women, I have no idea why there's a substantial difference in the proportions of men and women who are opposed to cryo.

Replies from: Roko
comment by Roko · 2010-07-09T00:08:33.186Z · LW(p) · GW(p)

Difference between showing and signalling disloyalty. To see that it is a signal of disloyalty/lower commitment, consider what signal would be sent out by Rob saying to Ruby: "Yes, I think cryo would work, but I think life would be meaningless without you by my side, so I won't bother"

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-07-09T20:18:23.810Z · LW(p) · GW(p)

It seems to also be a signal of disloyalty/lower commitment to say, "No honey, I won't throw myself on your funeral pyre after you die." Why don't we similarly demand "Yes, I could keep on living, but I think life would be meaningless without you by my side, so I won't bother" in that case?

Replies from: Roko
comment by Roko · 2010-07-09T20:49:47.062Z · LW(p) · GW(p)

You have to differentiate between what an individual thinks/does/decides, and what society as a whole thinks/does/decides.

For example, in a society that generally accepted that it was the "done thing" for a person to die on the funeral pyre of their partner, saying that you wanted to make a deal to buck the trend would certainly be seen as selfish.

Most individuals see the world in terms of options that are socially allowable, and signals are considered relative to what is socially allowable.

comment by SilasBarta · 2010-07-08T17:36:45.688Z · LW(p) · GW(p)

if these people object to cryonics because they see it as selfish (for example), why do so many of them come up with fake objections?

I -- quite predictably -- think this is a special case of the more general problem that people have trouble explaining themselves. Your mom doesn't give her real reason because she can't (yet) articulate it. In your case, I think it's due to two factors: 1) part of the reasoning process is something she doesn't want to say to your face so she avoids thinking it, and 2) she's using hidden assumptions that she falsely assumes you share.

For my part, my dad's wife is nominally unopposed, bitterly noting that "It's your money" and then ominously adding that, "you'll have to talk about this with your future wife, who may find it loopy".

(Joke's on her -- at this rate, no woman will take that job!)

Replies from: cousin_it
comment by cousin_it · 2010-07-08T17:42:04.489Z · LW(p) · GW(p)

Some time ago I offered this explanation for not signing up for cryo: I know signing up would be rational, but I can't overcome my brain's desire to make me "look normal". I wonder whether that explanation sounds true to others here, and how many other people feel the same way.

Replies from: SilasBarta, mattnewport
comment by SilasBarta · 2010-07-08T22:50:42.253Z · LW(p) · GW(p)

I'm in a typical decision-paralysis state. I want to sign up and I have the money, but I'm also interested in infinite banking, which requires you to get a whole-life plan [1]; the two would have to be coordinated, which makes it complicated and throws off an ugh field.

What I should probably do is just get the term insurance, sign up for cryo, and then buy amendments to the life insurance contract if I want to get into the infinite banking thing.

[1] Save your breath about the "buy term and invest the difference" spiel, I've heard it all before. The investment environment is a joke.

Replies from: mattnewport
comment by mattnewport · 2010-07-08T23:06:25.864Z · LW(p) · GW(p)

I'm also interested in infinite banking, which requires you to get a whole-life plan

You mentioned this before and I had a quick look at the website and got the impression that it is fairly heavily dependent on US tax laws around whole life insurance and so is not very applicable to other countries. Have you investigated it enough to say whether my impression is accurate or if this is something that makes sense in other countries with differing tax regimes as well?

Replies from: SilasBarta
comment by SilasBarta · 2010-07-08T23:15:15.823Z · LW(p) · GW(p)

I haven't read about the laws in other countries, but I suspect they at least share the aspect that it's harder to seize assets stored in such a plan, giving you more time to lodge an objection if they get a lien on it.

comment by mattnewport · 2010-07-08T17:47:01.516Z · LW(p) · GW(p)

For a variety of reasons I don't think cryonics is a good investment for me personally. The social cost of looking weird is certainly a negative factor, though not the only one.

comment by NancyLebovitz · 2010-07-08T18:09:16.177Z · LW(p) · GW(p)

I don't have anything against cryo, so these are tentative suggestions.

Maybe going in for cryo means admitting how much death hurts, so there's a big ugh field.

Alternatively, some people are trudging through life, and they don't want it to go on indefinitely.

Or there are people they want to get away from.

However, none of this fits with "I'll write you out of my will". This sounds to me like seeing cryo as a personal betrayal, but I can't figure out what the underlying premises might be. Unless it's that being in the will implies that the recipient will also leave money to descendants, and if you aren't going to die, then you won't.

comment by Blueberry · 2010-07-08T18:01:06.698Z · LW(p) · GW(p)

That the overwhelming majority of those who share this intense motivation are women (it seems) just makes me more confused.

Is there evidence for this? Specifically the "intense" part?

ETA: Did you ask her why she had such strong feelings about it? Was she able to answer?

Replies from: WrongBot
comment by WrongBot · 2010-07-08T18:19:55.209Z · LW(p) · GW(p)

The evidence is largely anecdotal, I think. There are certainly stories of cryonics ending marriages out there.

I haven't yet asked her about it, but I plan to do so next time we talk.

comment by whpearson · 2010-07-08T17:25:32.107Z · LW(p) · GW(p)

If I were going to make a guess, I suspect that saying X is selfish can easily lead to the rejoinder, "It is my money; I have the right to choose what to do with it," especially in the modern world. Saying X is selfish and so shouldn't be done can also be seen as interfering with another person's business, which is frowned upon in lots of social circles. It is also called moralising. So she may be unconsciously avoiding that response.

Replies from: WrongBot
comment by WrongBot · 2010-07-08T17:40:09.561Z · LW(p) · GW(p)

This may be true in some cases, but I don't think it is in this one; my mom has no trouble moralizing on any other topic, even ones about which I care a great deal more than I do about cryonics. For example, she's criticized polyamory as unrealistic and bisexuality as non-existent on multiple occasions, both of which have a rather significant impact on how I live my life.

Replies from: whpearson
comment by whpearson · 2010-07-08T17:53:28.314Z · LW(p) · GW(p)

I wasn't there at the discussions, but those seem like different types of statements from saying that they are "wrong/selfish" and that by implication you are a bad person for doing them. She is impugning your judgement in all cases rather than your character.

Replies from: WrongBot
comment by WrongBot · 2010-07-08T18:00:29.528Z · LW(p) · GW(p)

An important distinction, it's true. I feel like it should make a difference in this situation that I declared my intention to not pursue cryopreservation, but I'm not sure that it does.

Either way, I can think of other specific occasions when my mom has specifically impugned my character as well as my judgment. ("Lazy" is the word that most immediately springs to mind, but there are others.)

It occurs to me that as I continue to add details my mom begins to look like a more and more horrible person; this is generally not the case.

comment by Vladimir_Nesov · 2010-07-08T15:25:53.008Z · LW(p) · GW(p)

A factual error:

when he first announced his intention to have his brain surgically removed from his freshly vacated cadaver and preserved in liquid nitrogen

I'm fairly sure that head-only preservation doesn't involve any brain-removal. It's interesting that in context the purpose of the phrase was to present a creepy image of cryonics, and so the bias towards the phrases that accomplish this goal won over the constraint of not generating fiction.

comment by Wei Dai (Wei_Dai) · 2010-07-08T19:06:06.282Z · LW(p) · GW(p)

I wonder if Peggy's apparent disvalue of Robin's immortality represents a true preference, and if so, how should an FAI take it into account while computing humanity's CEV?

Replies from: Clippy, red75
comment by Clippy · 2010-07-08T19:22:06.691Z · LW(p) · GW(p)

It should store a canonical human "base type" in a data structure somewhere. Then it should store the information about how all humans deviate from the base type, so that they can in principle be reconstituted as if they had just been through a long sleep.

Then it should use Peggy's body and Robin's body for fuel.

comment by red75 · 2010-07-08T21:22:11.583Z · LW(p) · GW(p)

It seems plausible that the "know more" part of EV should include the results of modelling the application of CEV to humanity, i.e. CEV is not just the result of aggregating individuals' EVs, but one of the fixed points of humans' CEV after reflection on the results of applying CEV.

Maybe Peggy's model will see that her preferences would result in unnecessary deaths, and that death is no longer an important part of society existing or of her children prospering.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-07-08T22:20:04.255Z · LW(p) · GW(p)

It seems to me if it were just some factual knowledge that Peggy is missing, Robin would have been able to fill her in and thereby change her mind.

Of course Robin isn't a superintelligent being, so perhaps there is an argument that would change Peggy's mind that Robin hasn't thought of yet, but how certain should we be of that?

Replies from: Nick_Tarleton, red75
comment by Nick_Tarleton · 2010-07-08T22:28:27.276Z · LW(p) · GW(p)

Communicating complex factual knowledge in an emotionally charged situation is hard, to say nothing of actually causing a change in deep moral responses. I don't think failure is strong evidence for the nonexistence of such information. (Especially since I think one of the most likely sorts of knowledge to have an effect is about the origin — evolutionary and cognitive — of the relevant responses, and trying to reach an understanding of that is really hard.)

Replies from: Wei_Dai, steven0461
comment by Wei Dai (Wei_Dai) · 2010-07-08T23:18:49.457Z · LW(p) · GW(p)

You make a good point, but why is communicating complex factual knowledge in an emotionally charged situation hard? It must be that we're genetically programmed to block out other people's arguments when we're in an emotionally charged state. In other words, one explanation for why Robin has failed to change Peggy's mind is that Peggy doesn't want to know whatever facts or insights might change her mind on this matter. Would it be right for the FAI to ignore that "preference" and give Peggy's model the relevant facts or insights anyway?

ETA: This does suggest some practical advice: try to teach your wife and/or mom the relevant facts and insights before bringing up the topic of cryonics.

Replies from: Kevin, Roko
comment by Kevin · 2010-07-08T23:36:03.966Z · LW(p) · GW(p)

You are underestimating just how enormously Peggy would have to change her mind. Her life's work involves emotionally comforting people and their families through the final days of terminal illness. She has accepted her own mortality and the mortality of everyone else as one of the basic facts of life. As no one has been resurrected yet, death still remains a basic fact of life for those that don't accept the information theoretic definition of death.

To change Peggy's mind, Robin would not just have to convince her to accept his own cryonic suspension, but she would have to be convinced to change her life's work -- to no longer spend her working hours convincing people to accept death, but to convince them to accept death while simultaneously signing up for very expensive and very unproven crazy sounding technology.

Changing the mind of the average cryonics-opposed life partner should be a lot easier than changing Peggy's mind. Most cryonics-opposed life partners have not dedicated their lives to something diametrically opposed to cryonics.

comment by Roko · 2010-07-08T23:28:35.850Z · LW(p) · GW(p)

This does suggest some practical advice: try to teach your wife and/or mom the relevant facts and insights before bringing up the topic of cryonics.

You mean you want to make an average IQ woman into a high-grade rationalist?

Good luck!

Better plan: go with Rob Ettinger's advice. If your wife/gf doesn't want to play ball, dump her. (This is a more alpha-male attitude to the problem, too. A woman will instinctively sense that you are approaching her objection from an alpha-male stance of power, which will probably have more effect on her than any argument)

In fact I'm willing to bet at steep odds that Mystery could get a female partner to sign up for cryo with him, whereas a top rationalist like Hanson is floundering.

Replies from: Alicorn, Larks, lmnop
comment by Alicorn · 2010-07-08T23:36:15.394Z · LW(p) · GW(p)

Is this generalizable? Should I, too, threaten my loved ones with abandonment whenever they don't do what I think would be best?

Replies from: Alexandros, Roko
comment by Alexandros · 2010-07-09T09:48:19.790Z · LW(p) · GW(p)

I don't think this is about doing what you think best, it's about allowing you to do what you think best. And yes, you should definitely threaten abandonment in these cases, or at least you're definitely entitled to threaten and/or practice abandonment in such cases.

comment by Roko · 2010-07-08T23:51:05.682Z · LW(p) · GW(p)

I'm not sure. It might work, but you're going outside of my areas of expertise.

comment by Larks · 2010-07-09T00:56:57.752Z · LW(p) · GW(p)

Better yet, sign up while you're single, and present it as a fait accompli. It won't get her signed up, but I'd be willing to bet she won't try to make you drop your subscription.

comment by lmnop · 2010-07-08T23:37:30.605Z · LW(p) · GW(p)

Well the practical advice is being offered to LW, and I'd guess that most of the people here are not average IQ, and neither are their friends and family. I personally think it's a great idea to try and give someone the relevant factual background to understand why cryonics is desirable before bringing up the option. It probably wouldn't work, simply because almost all attempts to sell cryonics to anyone don't work, but it should at least decrease the probability of them reacting with a knee-jerk dismissal of the whole subject as absurd.

Replies from: Roko
comment by Roko · 2010-07-08T23:57:17.263Z · LW(p) · GW(p)

I maintain that if you are male with a relatively neurotypical female partner, the probability of success in getting her to sign on the dotted line for cryo, or to wholeheartedly accept your own cryo, is not maximized by using rational argument; rather, it is maximized by having an understanding of the emotional world that the fairer sex inhabit, and of how to control her emotions so that she does what you think best. She won't listen to your words; she'll sense the emotions and level of dominance in you, and then decide based on that, and then rationalize that decision.

This is a purely positive statement, i.e. it is empirically testable, and I hereby denounce any connotation that one might interpret it to have. Let me explicitly disclaim that I don't think that women's emotional nature makes them inferior, just different, and in need of different treatment. Let me also disclaim that this applies only on average, and that there will be exceptions, i.e. highly systematizing women who will, in fact, be persuaded by rational argument.

Replies from: lmnop
comment by lmnop · 2010-07-09T00:09:48.751Z · LW(p) · GW(p)

I mostly agree with you. I would even expand your point to say that if you want to convince anyone (who isn't a perfect Bayesian) to do anything, the probability of success will almost always be higher if you use primarily emotional manipulation rather than rational argument. But cryonics inspires such strong negative emotional reactions in people that I think it would be nearly impossible to combat those with emotional manipulation of the type you describe alone. I haven't heard of anyone choosing cryonics for themselves without having to make a rational effort to override their gut response against it, and that requires understanding the facts. Besides, I think the type of males who choose cryonics tend to have female partners of at least above-average intelligence, so that should make the explanatory process marginally less difficult.

Replies from: Roko
comment by Roko · 2010-07-09T00:16:57.029Z · LW(p) · GW(p)

Besides, I think the type of males who choose cryonics tend to have female partners of at least above-average intelligence, so that should make the explanatory process marginally less difficult.

Right, but the data says that it is a serious problem. Cryonics wife problem, etc.

Replies from: lsparrish
comment by lsparrish · 2010-07-09T00:18:58.108Z · LW(p) · GW(p)

I wonder how these women feel about being labeled "The Hostile Wife Phenomenon"?

Replies from: Roko
comment by Roko · 2010-07-09T00:25:19.528Z · LW(p) · GW(p)

Full of righteous indignation, I should imagine. After all, they see it as their own husbands betraying them.

comment by steven0461 · 2010-07-08T22:42:06.803Z · LW(p) · GW(p)

Yes -- calling it "factual knowledge" suggests it's only about the sort of fact you could look up in the CIA World Factbook, as opposed to what we would normally call "insight".

comment by red75 · 2010-07-08T22:57:43.874Z · LW(p) · GW(p)

I meant something like embedding her in a culture where death is unnecessary, rather than directly arguing for that. Words aren't the best communication channel for changing moral values. Will it be enough? I hope so, provided that the death of the carriers of moral values isn't a necessary condition for moral progress.

Edit: BTW, if CEV is to be computed using humans' reflection on its application, then the FAI cannot passively combine all volitions; it must search for and somehow choose a fixed point. Which rule should govern that process?

comment by wedrifid · 2010-07-08T15:19:10.248Z · LW(p) · GW(p)

That was very nearly terrifying.

comment by Vladimir_Nesov · 2010-07-08T16:47:25.371Z · LW(p) · GW(p)

Good article overall. It gives a human feel to the decision of cryonics, in particular by focusing on the unfair assault it attracts (thus appealing to cryonicists' sense of status).

comment by mattnewport · 2010-07-08T16:29:53.738Z · LW(p) · GW(p)

The hostile wife phenomenon doesn't seem to have been mentioned much here. Is it less common than the article suggests or has it been glossed over because it doesn't support the pro-cryonics position? Or has it been mentioned and I wasn't paying attention?

Replies from: ata, HughRistik
comment by ata · 2010-07-08T17:07:00.872Z · LW(p) · GW(p)

At last count (a while ago admittedly), most LWers were not married, and almost none were actually signed up for cryonics. So perhaps this phenomenon just isn't a salient issue to most people here.

Replies from: Morendil, ciphergoth
comment by Morendil · 2010-07-08T17:17:15.130Z · LW(p) · GW(p)

I'm married and with kids, my wife supports my (so far theoretical only) interest in cryo. Though she says she doesn't want it for herself.

comment by Paul Crowley (ciphergoth) · 2010-07-09T07:33:02.975Z · LW(p) · GW(p)

Data point FWIW: my partners are far from convinced of the wisdom of cryonics, but they respect my choices. Much of the strongest opposition has come from my boyfriend, who keeps saying "why not just buy a lottery ticket? It's cheaper".

Replies from: gwern
comment by gwern · 2010-07-09T10:19:05.935Z · LW(p) · GW(p)

Much of the strongest opposition has come from my boyfriend, who keeps saying "why not just buy a lottery ticket? It's cheaper".

Well, I hope you showed him your expected utility calculations!

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-07-09T11:23:28.634Z · LW(p) · GW(p)

I'm afraid that isn't really a good fit for how he thinks about these things...

Replies from: Sniffnoy
comment by Sniffnoy · 2010-07-09T11:26:06.956Z · LW(p) · GW(p)

It seems a bit odd to me that he would use the lottery comparison, in that case. Or no?

Replies from: Kingreaper
comment by Kingreaper · 2010-07-09T11:36:21.495Z · LW(p) · GW(p)

They're both things with low probabilities of success, and extremely large pay-offs.

To someone with a certain view of the future, or a moderately low "maximum pay-off" threshold, the pay-off of cryonics could be the same as the pay-off for a lottery win.

At which point the lottery is a cheaper, but riskier, gamble. Again, if someone has a certain view of the future, or a "minimum probability" threshold (which both fall under), then this difference in risk could go unnoticed in their thoughts.

At which point the two become identical, but one is more expensive.

It's quick-and-dirty thinking, but it's one easy way to end up with the connection, and it doesn't involve any utility calculations (in fact, utility calculations would be anathema to this sort of thinking).

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-07-09T11:58:49.621Z · LW(p) · GW(p)

One big barrier I hit in talking to some of those close to me about this is that I can't seem to explain the distinction between wanting the feeling of hope that I might live a very long time, and actually wanting to live a long time. Lots of people just say "if you want to believe in life after death, why not just go to church? It's cheaper".

Replies from: Nisan, Richard_Kennaway, Sniffnoy
comment by Nisan · 2010-07-09T13:47:36.190Z · LW(p) · GW(p)

Lots of people just say "if you want to believe in life after death, why not just go to church? It's cheaper".

I could see people saying that if they don't believe that cryonics has any chance at all of working. It might be hard to tell. If I told people "there's a good chance that cryonics will enable me to live for hundreds of years", I'm sure many would respond by nodding, the same way they'd nod if I told them that "there's a good chance that I'll go to Valhalla after I die". Sometimes respect looks like credulity, you know? Do you think that's what's happening here?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-07-09T13:56:17.125Z · LW(p) · GW(p)

Yes. I'm happy that people respect my choices, but when they "respect my beliefs" it strikes me as incredibly disrespectful.

comment by Richard_Kennaway · 2010-07-09T13:42:41.656Z · LW(p) · GW(p)

And if you reply "I only want to believe in things that are true?"

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-07-09T13:55:07.415Z · LW(p) · GW(p)

Apply to that reply the same transformation of my words that is causing me problems, and you get "I only want to believe in things that I believe are true".

comment by Sniffnoy · 2010-07-09T12:26:08.406Z · LW(p) · GW(p)

That's a bit scary.

comment by HughRistik · 2010-07-08T17:29:48.784Z · LW(p) · GW(p)

It was mentioned, and you weren't paying attention ;)

Replies from: mattnewport
comment by mattnewport · 2010-07-08T17:48:45.321Z · LW(p) · GW(p)

I did think this was quite a likely explanation. As I'm not married the point would likely not have been terribly salient when reading about pros and cons.

comment by JohannesDahlstrom · 2010-07-07T09:51:26.603Z · LW(p) · GW(p)

Drowning Does Not Look Like Drowning

Fascinating insight against generalizing from fictional evidence in a very real life-or-death situation.

comment by lsparrish · 2010-07-04T16:53:30.046Z · LW(p) · GW(p)

Cryonics scales very well. People who think cryonics is costly, even if you had to come up with the entire lump sum close to the end of your life, are generally ignorant of this fact.

So long as you keep the shape constant, for any given container the surface area follows a square law whereas the volume follows a cube law. For example, with a cube-shaped object, one side squared times 6 is the surface area, whereas one side cubed is the volume. Surface area is where the heat gets in, so a huge container holding cryogenic goods (humans in this case) costs much less per unit volume (human) than a smaller container with equal insulation. A way to understand this is that you only have to insulate the outside -- the inside gets free insulation.

But you aren't stuck using equal insulation. You can use thicker insulation, with a much smaller proportional effect on total surface area as you use bigger sizes. Imagine the difference between a marble sized freezer and a house-sized freezer, when you add a foot of insulation. The outside of the insulation is where it begins collecting heat. But with a gigantic freezer, you might add a meter of insulation without it having a significant proportional impact on surface area, compared to how much surface area it already has.

Another factor to take into account is that liquid nitrogen, the super-cheap coolant used by cryonics facilities around the world, is vastly cheaper (more than a factor of 10) when purchased in huge quantities of several tons. The scaling factors for storage tanks are a big part of the reason for this. CI has used bulk purchasing as a mechanism for getting their prices down to $100 per patient per year for their newer tanks. They are actually storing 3,000 gallons of the stuff and using it slowly over time, which means there is a boiloff rate associated with the 3,000 gallon tank as well.

The conclusion I get from this is that there is a very strong self-interested case, as well as an altruistic case, to be made for megascale cryonics versus small independently run units. People who say they won't sign up for cost reasons may be reachable at a later date. To deal with such people's objections, it might be smart to get them to agree on a particular hypothetical price point at which they would feel it is justified. In large enough quantities, it is conceivable that indefinite storage costs could be as low as $50 per person, or 50 cents per year.

That is much cheaper than saving a life any other way, but of course there's still the risk that it might not work. However, given a sufficient chance of it working it could still be morally superior to other life saving strategies that cost more money. It also has inherent ecological advantages over other forms of life-saving in that it temporarily reduces population, giving the environment a chance to recover and green tech more time to take hold so that they can be supported sustainably and comfortably.
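
To make the scaling argument above concrete, here is a minimal geometric sketch (not a thermal model; the 0.1 m^3 allowance per patient is an illustrative assumption) of how the insulated surface area per stored patient shrinks as a cube-shaped vault grows:

```python
# Square-cube toy model: for a cube-shaped vault, wall area grows as L^2
# while capacity grows as L^3, so heat leak (and hence liquid-nitrogen
# boil-off) per stored patient shrinks as the vault gets bigger.

def per_patient_surface_area(side_m, patient_volume_m3=0.1):
    surface = 6 * side_m ** 2           # m^2 of insulated wall collecting heat
    capacity = side_m ** 3              # m^3 of cold interior
    patients = capacity / patient_volume_m3
    return surface / patients           # m^2 of wall per patient

for side in (1, 2, 10, 50):
    print(f"{side:>3} m cube: {per_patient_surface_area(side):8.4f} m^2 of wall per patient")

# Output: 0.6000, 0.3000, 0.0600, 0.0120 -- a 50x larger vault needs
# ~50x less wall area (and so roughly that much less boil-off) per patient.
```

Real vaults aren't cubes and insulation thickness matters, but the direction of the scaling is the same.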

Replies from: Morendil
comment by Morendil · 2010-07-04T22:31:33.492Z · LW(p) · GW(p)

This needs to be a top-level post. Even with minimal editing. Please.

(ETA: It's not so much that we need to have another go at the cryonics debate; but the above is an argument that I can't recall seeing discussed here previously, that does substantially change the picture, and that illustrates various kinds of reasoning - about scaling properties, about predefining thresholds of acceptability, and about what we don't know we don't know - that are very relevant to LW's overall mission.)

Replies from: lsparrish
comment by lsparrish · 2010-07-05T03:13:45.586Z · LW(p) · GW(p)

Done.

comment by VNKKET · 2010-07-01T22:07:28.188Z · LW(p) · GW(p)

This is a mostly-shameless plug for the small donation matching scheme I proposed in May:

I'm still looking for three people to cross the "membrane that separates procrastinators and helpers" by donating $60 to the Singularity Institute. If you're interested, see my original comment. I will match your donation.

Replies from: Kutta, Yvain, WrongBot, zero_call
comment by Kutta · 2010-07-02T07:32:08.278Z · LW(p) · GW(p)

Done, 60 USD sent.

Replies from: VNKKET
comment by VNKKET · 2010-07-02T18:16:09.689Z · LW(p) · GW(p)

Thank you! Matched.

comment by WrongBot · 2010-07-02T00:37:28.037Z · LW(p) · GW(p)

I'm sorry I didn't see that earlier; I donated $30 to the SIAI yesterday, and I probably could have waited a little while longer and donated $60 all at once. If this offer will still be open in a month or two, I will take you up on it.

Replies from: VNKKET
comment by VNKKET · 2010-07-02T17:58:09.404Z · LW(p) · GW(p)

That sounds good, and feel free to count your first $30 towards a later $60 total if I haven't found a third person by then.

comment by zero_call · 2010-07-02T21:35:38.457Z · LW(p) · GW(p)

Without any way of authenticating the donations, I find this to be rather silly.

Replies from: VNKKET
comment by VNKKET · 2010-07-02T21:59:14.588Z · LW(p) · GW(p)

I'd also like these donations to be authenticated, but I'm willing to wait if necessary. Here's step 2, including the new "ETA" part, from my original comment:

In your donation's "Public Comment" field, include both a link to your reply to this thread and a note asking for a Singularity Institute employee to kindly follow that link and post a response saying that you donated. ETA: Step 2 didn't work for me, so I don't expect it to work for you. For now, I'll just believe you if you say you've donated. If you would be convinced to donate by seeing evidence that I'm not lying, let me know and I'll get you some.

Would you be willing to match my third $60 if I could give you better evidence that I actually matched the first two? If so, I'll try to get some.

comment by NancyLebovitz · 2010-07-02T03:50:04.638Z · LW(p) · GW(p)

I was at a recent Alexander Technique workshop, and some of the teachers had been observing how two year olds crawl.

If you've had any experience with two year olds, you know they can cover ground at an astonishing rate.

The thing is, adults typically crawl with their faces perpendicular to the ground, and crawling feels clumsy and unpleasant.

Two year olds crawl with their faces at 45 degrees to the ground, and a gentle curve through their upper backs.

Crawling that way gives access to a surprisingly strong forward impetus.

The relevance to rationality and to akrasia is the implication that if something seems hard, it may be that the preconditions for making it easy haven't been set up.

comment by utilitymonster · 2010-07-03T17:28:47.255Z · LW(p) · GW(p)

Here's a puzzle I've been trying to figure out. It involves observation selection effects and agreeing to disagree. It is related to a paper I am writing, so help would be appreciated. The puzzle is also interesting in itself.

Charlie tosses a fair coin to determine how to stock a pond. If heads, it gets 3/4 big fish and 1/4 small fish. If tails, the other way around. After Charlie does this, he calls Al into his office. He tells him, "Infinitely many scientists are curious about the proportion of fish in this pond. They are all good Bayesians with the same prior. They are going to randomly sample 100 fish (with replacement) each and record how many of them are big and how many are small. Since so many will sample the pond, we can be sure that for any n between 0 and 100, some scientist will observe that n of his 100 fish were big. I'm going to take the first one that sees 25 big and team him up with you, so you can compare notes." (I don't think it matters much whether infinitely many scientists do this or just 3^^^3.)

Okay. So Al goes and does his sample. He pulls out 75 big fish and becomes nearly certain that 3/4 of the fish are big. Afterwards, a guy named Bob comes to him and tells him he was sent by Charlie. Bob says he randomly sampled 100 fish, 25 of which were big. They exchange ALL of their information.

Question: How confident should each of them be that 3/4 of the fish are big?

Natural answer: Al should remain nearly certain that ¾ of the fish are big. He knew in advance that someone like Bob was certain to talk to him regardless of what proportion of fish were big. So he shouldn't be the least bit impressed after talking to Bob.

But what about Bob? What should he think? At first glance, you might think he should be 50/50, since 50% of the fish he knows about have been big and his access to Al's observations wasn't subject to a selection effect. But that can't be right, because then he would just be agreeing to disagree with Al! (This would be especially puzzling, since they have ALL the same information, having shared everything.) So maybe Bob should just agree with Al: he should be nearly certain that ¾ of the fish are big.

But that's a bit odd. It isn't terribly clear why Bob should discount all of his observations, since they don't seem to be subject to any observation selection effect; at least from his perspective, his observations were a genuine random sample.

Things get weirder if we consider a variant of the case.

VARIANT: as before, but Charlie has a similar conversation with Bob. Only this time, he tells him he's going to introduce Bob to someone who observed exactly 75 of 100 fish to be big.

New Question: Now what should Bob and Al think?

Here, things get really weird. By the reasoning that led to the Natural Answer above, Al should be nearly certain that ¾ are big and Bob should be nearly certain that ¼ are big. But that can't be right. They would just be agreeing to disagree! (Which would be especially puzzling, since they have ALL the same information.) The idea that they should favor one hypothesis in particular is also disconcerting, given the symmetry of the case. Should they both be 50/50?

Here's where I'd especially appreciate enlightenment:

1. If Bob should defer to Al in the original case, why? Can someone walk me through the calculations that lead to this?

2. If Bob should not defer to Al in the original case, is that because Al should change his mind? If so, what is wrong with the reasoning in the Natural Answer? If not, how can they agree to disagree?

3. If Bob should defer to Al in the original case, why not in the symmetrical variant?

4. What credence should they have in the symmetrical variant?

5. Can anyone refer me to some info on observation selection effects that could be applied here?

Replies from: Vladimir_M, Blueberry, RobinZ, Kingreaper, Dagon, prase, JGWeissman, Soki
comment by Vladimir_M · 2010-07-03T21:46:22.649Z · LW(p) · GW(p)

First, let's calculate the concrete probability numbers. If we are to trust this calculator, the probability of finding exactly 75 big fish in a sample of a hundred from a pond where 75% of the fish are big is approximately 0.09, while getting the same number in a sample from a 25% big pond has a probability on the order of 10^-25. The same numbers hold in the reverse situation, of course.

Now, Al and Bob have to consider two possible scenarios:

  1. The fish are 75% big, Al got the decently probable 75/100 sample, but Bob happened to be the first scientist to get the extremely improbable 25/100 sample, and there were likely 10^(twenty-something) or so scientists sampling before Bob.

  2. The fish are 25% big, Al got the extremely improbable 75/100 big sample, while Bob got the decently probable 25/100 sample. This means that Bob is probably among the first few scientists who have sampled the pond.

So, let's look at it from a frequentist perspective: if we repeat this game many times, what will be the proportion of occurrences in which each scenario takes place?

Here we need an additional critical piece of information: how exactly was Bob's place in the sequence of scientists determined? At this point, an infinite number of scientists will give us lots of headache, so let's assume it's some large finite number N_sci, and Bob's place in the sequence is determined by a random draw with probabilities uniformly distributed over all places in the sequence. And here we get an important intermediate result: assuming that at least one scientist gets to sample 25/100, the probability for Bob to be the first to sample 25/100 is independent of the actual composition of the pond! Think of it by means of a card-drawing analogy. If you're in a group of 52 people whose names are repeatedly called out in random order to draw from a deck of cards, the proportion of drawings in which you get to be the first one to draw the ace of spades will always be 1/52, regardless of whether it's a normal deck or a non-standard one with multiple aces of spades, or even a deck of 52 such aces!

Now compute the following probabilities:

P1 = p(75% big fish) * p(Al samples 75/100 | 75% big fish) * p(Bob gets to be the first to sample 25/100)
~ 0.5 * 0.09 * 1/N_sci

P2 = p(25% big fish) * p(Al samples 75/100 | 25% big fish) * p(Bob gets to be the first to sample 25/100)
~ 0.5 * 10^-25 * 1/N_sci

(We ignore the finite, but presumably negligible probabilities that no scientist samples 25/100 in either case; these can be made arbitrarily low by increasing N_sci.)

Therefore, we have P1 >> P2, i.e. the overwhelming majority of meetings between Al and Bob -- which are by themselves extremely rare, since Al usually meets someone from the other (N_sci-1) scientists -- happen under the first scenario, where Al gets a sample closely matching the actual ratio.

Now, you say:

It isn't terribly clear why Bob should discount all of his observations, since they don't seem to be subject to any observation selection effect; at least from his perspective, his observations were a genuine random sample.

Not really, when you consider repeating the experiment. For the overwhelming majority of repetitions, Bob will get results close to the actual ratio, and on rare occasions he'll get extreme outlier samples. Those repetitions in which he gets summoned to meet with Al, however, are not a representative sample of his measurements! The criteria for when he gets to meet with Al are biased towards including a greater proportion of his improbable 25/100 outlier results.

As for this:

VARIANT: as before, but Charlie has a similar conversation with Bob. Only this time, he tells him he's going to introduce Bob to someone who observed exactly 75 of 100 fish to be big.

I don't think this is a well-defined scenario. Answers will depend on the exact process by which this second observer gets selected. (Just like in the preceding discussion, the answer would be different if e.g. Bob had always been assigned the same place in the sequence of scientists.)

Replies from: utilitymonster
comment by utilitymonster · 2010-07-04T12:06:49.557Z · LW(p) · GW(p)

I was assuming Charlie would show Bob the first person to see 75/100.

Anyway, your analysis solves this as well. Being the first to see a particular result tells you essentially nothing about the composition of the pond (provided N_sci is sufficiently large that someone or other was nearly certain to see the result). Thus, each of Al and Bob should regard their previous observations as irrelevant once they learn that they were the first to get those results. Thus, they should just stick with their priors and be 50/50 about the composition of the pond.

comment by Blueberry · 2010-07-03T17:38:55.481Z · LW(p) · GW(p)

Interesting problem!

(This would be especially puzzling, since they have ALL the same information, having shared everything.)

It isn't terribly clear why Bob should discount all of his observations, since they don't seem to be subject to any observation selection effect; at least from his perspective, his observations were a genuine random sample.

I think these two statements are inconsistent. If Bob is as certain as Al that Bob was picked specifically for his result, then they do have the same information, and they should both discount Bob's observations to the same degree for that reason. If Bob doesn't trust Al completely, they don't have the same information. Bob doesn't know for sure that Charlie told Al about the selection. From his point of view, Al could be lying.

VARIANT: as before, but Charlie has a similar conversation with Bob. Only this time, he tells him he's going to introduce Bob to someone who observed exactly 75 of 100 fish to be big.

If Charlie tells both of them they were both selected, they have the same information (that both their observations were selected for that purpose, and thus give them no information) and they can only decide based on their priors about Charlie stocking the pond.

If each of them only knows the other was selected and they both trust the other one's statements, same thing. But if each puts more trust in Charlie than in the other, then they don't have the same information.

Replies from: prase, utilitymonster
comment by prase · 2010-07-03T18:42:22.576Z · LW(p) · GW(p)

If Charlie tells both of them they were both selected, they have the same information (that both their observations were selected for that purpose, and thus give them no information) and they can only decide based on their priors about Charlie stocking the pond.

It is strange. Shall Bob discount his observation after being told that he is selected? What does it actually mean to be selected? What if Bob finds 25 big fish and then Charlie tells him that there are 3^^^3 other observers and he (Charlie) decided to "select" one of those who observed 25 big fish and talk to him, and that Bob himself is the selected one (no later confrontation with Al)? Should this information cancel Bob's observations? If so, why?

Replies from: Kingreaper, utilitymonster
comment by Kingreaper · 2010-07-05T14:16:34.364Z · LW(p) · GW(p)

Yes, it should, if it is known that Charlie hasn't previously "selected" any other people who got precisely 25.

The probability of being selected (taken before you have found any fish) p[chosen] is approximately equal regardless of whether there are 25% or 75% big fish.

And the probability of you being selected if you didn't find 25, p[chosen|not25], is zero.

Therefore, the probability of you being selected given that you have found 25 big fish, p[chosen|found25], is approximately equal to p[chosen]/p[found25].

The information of the fact you've been chosen directly cancels out the information from the fact you found 25 big fish.
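A rough numeric sketch of that cancellation, using the binomial figures from Vladimir_M's comment and an assumed observer count N (my addition, not part of the original comment):

```python
# Sketch: the raw sample is enormous evidence, but "being the selected
# 25-finder" is roughly equally likely under either pond, so the two pieces
# of information together are ~no evidence. Numbers are illustrative.
from scipy.stats import binom

p_found25_if_25pct = binom.pmf(25, 100, 0.25)   # ~0.09
p_found25_if_75pct = binom.pmf(25, 100, 0.75)   # ~1.3e-25
N = 10**30                                      # assumed number of observers

# Likelihood ratio from the sample alone: massively favours the 25%-big pond.
print(p_found25_if_25pct / p_found25_if_75pct)  # ~7e23

# p[chosen] ~ 1/N under either hypothesis (given someone finds 25/100), and
# being chosen implies having found 25, so the combined evidence
# "found 25 AND chosen" has a likelihood ratio of ~1: the update cancels.
print((1 / N) / (1 / N))                        # 1.0
```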

comment by utilitymonster · 2010-07-03T19:11:21.768Z · LW(p) · GW(p)

Glad to see we're on the same page.

comment by utilitymonster · 2010-07-03T19:01:46.922Z · LW(p) · GW(p)

I'm not sure about this:

If Bob is as certain as Al that Bob was picked specifically for his result, then they do have the same information, and they should both discount Bob's observations to the same degree for that reason.

Here's why:

VARIANT 2: Charlie has both Al and Bob into his office before the drawings take place. He explains that the first guy (other than Al) to see 25/100 big will report to Al. Bob goes out and sees 25/100 big. To his surprise, he gets called into Charlie's office and informed that he was the first to see that result.

Question: right now, what should Bob expect to hear from Al?

Intuitively, he should expect that Al had similar results. But if you're right, it would seem that Bob should discount his results once he talks to Charlie and finds out that he is the messenger. If that's right, he should have no idea what to expect Al to say. But that seems wrong. He hasn't even heard anything from Al.

If you're still not convinced, consider:

VARIANT 3: Charlie has both Al and Bob into his office before the drawings take place. He explains that the first guy (other than Al) to see 25/100 big will win a trip to Hawaii. Bob goes out and sees 25/100 big. To his surprise, he gets called into Charlie's office and informed that he was the first to see that result.

I can see no grounds for treating VARIANT 3 differently from VARIANT 2. And it is clear that in VARIANT 3 Bob should not discount his results.

comment by RobinZ · 2010-07-03T18:10:16.942Z · LW(p) · GW(p)

One key observation is that Al made his observation after being told that he would meet someone who made a particular observation - specifically, the first person to make that specific observation, Bob. This makes Al and Bob special in different ways:

  • Al is special because he has been selected to meet Bob regardless of what he observes. Therefore his data is genuinely uncorrelated with how he was selected for the meeting.
  • Bob is special because he has been selected to meet Al because of the specific data he observes. More precisely, because he will be the first to obtain that specific result. Therefore his result has been selected, and he is only at the meeting because he happens to be the first one to get that result.

In the original case, Bob's result is effectively a lottery ticket - when he finds out from Al the circumstances of the meeting, he can simply follow the Natural Answer himself and conclude that his results were unlikely.

In the modified case, assuming perfect symmetry in all relevant aspects, they can conclude that an astronomically unlikely event has occurred and they have no net information about the contents of the pond.

Replies from: utilitymonster
comment by utilitymonster · 2010-07-03T18:47:50.510Z · LW(p) · GW(p)

Al is special because he has been selected to meet Bob regardless of what he observes. Therefore his data is genuinely uncorrelated with how he was selected for the meeting.

Not quite. He was selected to meet someone like Bob, in the sense that whoever the messenger was, he'd have seen 25/100 big. He didn't know he'd meet Bob. But he regards the identity of the messenger as irrelevant.

You can bring out the difference by considering a variant of the case in which both Al and Bob hear about Charlie's plan in advance. (In this variant, the first to see 25/100 big will visit Al.)

What is the relevance of the fact that they observed a highly improbable event?

comment by Kingreaper · 2010-07-05T13:56:11.764Z · LW(p) · GW(p)

Okay, qualitative analysis without calculations:

Let's go for a large, finite case. Because otherwise my brain will explode.

Question 1: for any large, finite number of scientists, Bob should defer MOSTLY to Alice.

First let's look at Alice: in any large finite group of scientists there is a small finite chance that NO scientist will get that result. This chance is larger in the case where 75% of the fish are big. Thus, upon finding that a scientist HAS encountered 25 big fish, Alice must adjust her probability slightly towards 25% big fish.

Bob has also received several new pieces of information.

* He was the first to find 25 big fish. As the number of scientists grows, P[first25|found25] scales as 1/P[found25] (times a ~1/N factor that is the same under either hypothesis). This information almost entirely cancels out the information he already had.

* All the information Alice had. This information therefore tips the scales.

Bob's final probability will be the same as Alice's.

Question two is N/A. I will answer question three in a reply to this, to try and avoid a massive wall of text.

Replies from: Kingreaper
comment by Kingreaper · 2010-07-05T14:01:48.167Z · LW(p) · GW(p)

Question 3: lateral answer: in the symmetrical variant, the issue of "how many people are being given other people to meet, and is this entire thing just a weird trick" begins to arise.

In fact, the probability of it being a weird trick is going to overshadow almost any other attempt at analysis. The first person to get 25 happens to be a person who is told they will meet someone who got 75, and the person who was told they would meet the first person to get 25 happens to get 75? Massively improbable.

However, if it is not a trick, the probability is still significantly in favour of it being 75%. Alice isn't talking to Bob due to the fact that she got 75; she's talking to Bob due to the fact that he got the first 25. Otherwise Bob would most likely have ended up talking to someone else.

The proper response at this point for both Alice and Bob is to simply decide that it is overwhelmingly probable that Charlie is messing with them.

I can produce similar variants which don't have this issue, and they come out to 50:50. These include: Everyone is told that the first person to get 25 will meet the first person to get 75.

comment by Dagon · 2010-07-04T01:38:37.799Z · LW(p) · GW(p)

What is each of their prior probabilities for this setup being true? Bob, knowing that he was selected for his unusual results, can pretty happily disregard them. If you win a lottery, you don't update to believe that most tickets win. Bob now knows of 100 samples (Al's) that relate to the prior, and accepts them. Bob's own sample came from a different process: the coin is flipped, and then that specific resulting sample is bound to be found by someone.

If they are both selected for their results, they both go to 50/50. Neither one has non-selected samples.

comment by prase · 2010-07-03T18:34:09.224Z · LW(p) · GW(p)

Is there any particular reason why one of the actors is an AI?

Replies from: utilitymonster
comment by utilitymonster · 2010-07-03T18:42:28.892Z · LW(p) · GW(p)

Al, not AI. ("Al" as in "Alan")

Replies from: prase
comment by prase · 2010-07-03T18:49:20.308Z · LW(p) · GW(p)

Sorry. I have some Lesswrong bias.

Google statistics on Less Wrong:

  • AI (second i): 2400 hits
  • Al (second L): 318 hits (mostly in "et al." and "al Qaida", without capital A)

By the way, are these two strings distinguishable when written in the font of this site? They seem the same to me.

Replies from: RobinZ
comment by RobinZ · 2010-07-03T18:57:04.357Z · LW(p) · GW(p)

You're right - they're pixel-for-pixel identical. That's a bit problematic.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-07-04T04:32:40.577Z · LW(p) · GW(p)

Maybe that's why cryptographers say "Alice" rather than "Al."

comment by JGWeissman · 2010-07-03T18:22:49.025Z · LW(p) · GW(p)

From Bob's perspective, he was more likely to be chosen as the one to talk to Al if there are fewer scientists who observed exactly 25 big fish, which would happen if there are more big fish. So Bob should update on the evidence of being chosen.

Replies from: utilitymonster, utilitymonster
comment by utilitymonster · 2010-07-03T19:45:24.521Z · LW(p) · GW(p)

This should be important to the finite case. The probability of being the first to see 25/100 is WAY higher (x 10^25 or so) if the lake is 3/4 full of big fish than if it is 1/4 full of big fish.

But in the infinite case the probability of being first is 0 either way...

Replies from: JGWeissman, Vladimir_M
comment by JGWeissman · 2010-07-03T20:51:42.721Z · LW(p) · GW(p)

There is a reason we consider infinities only as limits of sequences of finite quantities.

Suppose you tried to sum the log-odds evidence from the infinitely many scientists that the pond has more big fish. Well, some of them have positive evidence (summing to positive infinity), some have negative evidence (summing to negative infinity), and you can, by choosing the order of summation, get any result you want (up to some granularity) between negative and positive infinity.

You don't need anthropic tricks to make things weird if you have actual infinities in the problem.

comment by Vladimir_M · 2010-07-04T04:53:46.063Z · LW(p) · GW(p)

utilitymonster:

The probability of being the first to see 25/100 is WAY higher (x 10^25 or so) if the lake is 3/4 full of big fish than if it is 1/4 full of big fish.

Maybe I'm misunderstanding your phrasing here, but it sounds fallacious. If there's a deck of cards and you're in a group of 52 people who are called out in random order and told to pick one card each from the deck, the probability of being the first person to draw an ace is exactly the same (1/52) regardless of whether it's a normal deck or a deck of 52 aces (or even a deck with 3 out of 4 aces replaced by other cards). This result doesn't even depend on whether the card is removed or returned into the deck after each person's drawing; the conclusion follows purely from symmetry. The only special case is when there are zero aces, in which the event becomes impossible, with p=0.

Similarly, if the order in which the scientists get their samples is shuffled randomly, and we ignore the improbable possibility that nobody sees 25/100, then purely by symmetry, the probability that Bob happens to be the first one to see 25/100 is the same regardless of the actual frequency of the 25/100 results: p = 1/N(scientists).

Replies from: utilitymonster
comment by utilitymonster · 2010-07-04T11:47:04.558Z · LW(p) · GW(p)

You're right, thanks.

I was considering an example with 10^100 scientists. I thought that since there would be a lot more scientists who got 25 big in the 1/4 scenario than in the 3/4 scenario (about 9.18 * 10^98 vs. 1.279 * 10^75), you'd be more likely to be first in the 3/4 scenario. But this forgets about the probability of getting an improbable result.

In general, if there are N scientists, and the probability of getting some result is p, then we can expect Np scientists to get that result on average. If the order is shuffled as you suggest, then the probability of being the first to get that result is p * 1/(Np) = 1/N. So the probability of being the first to get the result is the same, regardless of the likelihood of the result (assuming someone will get the result).

EDIT: It occurs to me that I might have been thinking about the probability of being selected by Al conditional on getting 25/100. In that case, you're a lot more likely to be selected if the pond is 3/4 big than if it is 1/4 big, since WAY more people got similar results in the latter case. JGWeissman was probably thinking the same.
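A tiny simulation of the 1/N point above (my own sketch, not part of the original exchange):

```python
# Check that P(a fixed scientist is the first to see a given result) ~ 1/N,
# independent of how probable the result is (assuming someone gets it).
import random

def p_i_am_first(p_result, n_scientists=50, trials=100_000):
    hits = 0
    for _ in range(trials):
        results = [random.random() < p_result for _ in range(n_scientists)]
        if not any(results):
            continue                              # nobody got the result
        me = random.randrange(n_scientists)       # my random place in the sequence
        if results.index(True) == me:
            hits += 1
    return hits / trials

print(p_i_am_first(0.9))   # ~0.02 = 1/50
print(p_i_am_first(0.1))   # ~0.02 as well (very slightly lower, since a few
                           # trials have nobody getting the result at all)
```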

comment by utilitymonster · 2010-07-03T19:02:49.753Z · LW(p) · GW(p)

What effect will updating on this information have?

comment by Soki · 2010-07-03T21:07:30.126Z · LW(p) · GW(p)

First of all, I think that if Al does not see a sample, it makes the problem a bit simpler. That is, Al just tells Bob that he (Bob) is the first person who saw 25 big fish.

I think that the number N of scientists matters, because the probability that someone will come to see Al depends on that.

Let's call B the event that the lake has 75% big fish, S the opposite, and C the event that someone comes, which means that someone saw 25 big fish.

Once Al sees Bob, he updates:
P(B|C) = P(B) * P(C|B) / (1/2 * P(C|B) + 1/2 * P(C|S)).
When N tends toward infinity, both P(C|B) and P(C|S) tend toward 1, and P(B|C) tends to 1/2.
But for small values of N, P(C|B) can be very small while P(C|S) will be quite close to 1.
Then the fact that someone was chosen lowers the probability of having a lake with big fish.

If N=infinity, then the probability of being chosen is 0, and I cannot use Bayes' theorem.
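A small numeric sketch of how that finite-N posterior behaves (my own numbers, using the binomial probabilities quoted earlier in the thread; not part of Soki's comment):

```python
# P(B | C) as a function of N, where B = "75% big", S = "25% big", and
# C = "at least one of N scientists sees exactly 25 big fish in 100".
from math import expm1, log1p
from scipy.stats import binom

p25_given_B = binom.pmf(25, 100, 0.75)   # ~1.3e-25 per scientist
p25_given_S = binom.pmf(25, 100, 0.25)   # ~0.09 per scientist

def posterior_B(N):
    # 1 - (1 - p)^N, computed stably for tiny p and huge N
    p_C_given_B = -expm1(N * log1p(-p25_given_B))
    p_C_given_S = -expm1(N * log1p(-p25_given_S))
    return 0.5 * p_C_given_B / (0.5 * p_C_given_B + 0.5 * p_C_given_S)

for N in (10, 10**6, 10**25, 10**30):
    print(N, posterior_B(N))
# Small N: a visitor is strong evidence for the 25%-big pond (posterior ~0).
# Huge N: both ponds almost surely produce a visitor, so the posterior -> 1/2.
```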

If Charlie keeps inviting scientists until one sees 25 big fishes, then it becomes complicated, because the probability that you are invited is greater if the lake has more big fishes. It may be a bit like the sleeping beauty or the absent-minded driver problem.

Edited for formatting and misspellings

comment by GreenRoot · 2010-07-06T15:58:18.774Z · LW(p) · GW(p)

Does anybody know what is depicted in the little image named "mini-landscape.gif" at the bottom of each top level post, or why it appears there?

Replies from: Kazuo_Thow, cousin_it, matt
comment by Kazuo_Thow · 2010-07-07T05:16:33.338Z · LW(p) · GW(p)

Part of the San Francisco skyline, maybe?

comment by cousin_it · 2010-07-06T16:12:47.722Z · LW(p) · GW(p)

Thanks. This is the first time I ever noticed this. Absolutely no idea what it is or why it's there. Talk about selective blindness!

comment by matt · 2011-05-03T10:20:54.488Z · LW(p) · GW(p)

It was an early draft of the map vs territory theme that became the site header, which we intended to finish but forgot about.

comment by Yoreth · 2010-07-02T07:11:38.755Z · LW(p) · GW(p)

Long ago I read a book that asked the question “Why is there something rather than nothing?” Contemplating this question, I asked “What if there really is nothing?” Eventually I concluded that there really isn’t – reality is just fiction as seen from the inside.

Much later, I learned that this idea had a name: modal realism. After I read some about David Lewis’s views on the subject, it became clear to me that this was obviously, even trivially, correct, but since all the other worlds are causally unconnected, it doesn't matter at all for day-to-day life. Except as a means of dissolving the initial vexing question, it was pointless, I thought, to dwell on this topic any more.

Later on I learned about the Cold War and the nuclear arms race and the fears of nuclear annihilation. Apparently, people thought this was a very real danger, to the point of building bomb shelters in their backyards. And yet somehow we survived, and not a single bomb was dropped. In light of this, I thought, “What a bunch of hype this all is. You doomsayers cried wolf for decades; why should I worry now?”

But all of that happened before I was born.

If modal realism is correct, then for all I know there was* a nuclear holocaust in most world-lines; it’s just that I never existed there at all. Hence I cannot use the fact of my existence as evidence against the plausibility of existential threats, any more than we can observe life on Earth and thereby conclude that life is common throughout the universe.

(*Even setting aside MWI, which of course only strengthens the point.)

Strange how abstract ideas come back to bite you. So, should I worry now?

Replies from: cousin_it, Roko, NancyLebovitz
comment by cousin_it · 2010-07-02T07:17:45.213Z · LW(p) · GW(p)

If you think doom is very probable and we only survived due to the anthropic principle, then you should expect doom any day now, and every passing day without incident should weaken your faith in the anthropic explanation.

If you think all possible worlds exist, then you should expect our small bubble of ordered existence to erupt into chaos any day now, because way more copies of it are contained in chaotic worlds than in ordered ones. Every day you spend without spontaneously turning into a pheasant should weaken your faith in the multiverse.

(These arguments are not standard LW fare, but I've floated them here before and they seem to work okay.)

Replies from: JoshuaZ, Vladimir_Nesov, Mitchell_Porter, ShardPhoenix, Roko
comment by JoshuaZ · 2010-07-02T12:30:02.930Z · LW(p) · GW(p)

If you think all possible worlds exist, then you should expect our small bubble of ordered existence to erupt into chaos any day now, because way more copies of it are contained in chaotic worlds than in ordered ones. Every day you spend without spontaneously turning into a pheasant should weaken your faith in the multiverse.

This depends on which level of the Tegmark classification you are talking about. Level III, for example, quantum MWI, gives very low probabilities for things like turning into a pheasant, since those events, while possible, have tiny chances of occurring. Level IV, the ultimate ensemble, which seems to be the main emphasis of the poster above, may have your argument as a valid rebuttal, but since level IV requires consistency, it would require a much better understanding of what consistent rule systems look like. And it may be that the vast majority of those universes don't have observers, so we actually would need to look at consistent rule systems with observers. Without a lot more information, it is very hard to examine the expected probabilities of weird events in a level IV setting.

Replies from: cousin_it, Vladimir_Nesov
comment by cousin_it · 2010-07-02T19:10:01.452Z · LW(p) · GW(p)

since level IV requires consistency, it would require a much better understanding of what consistent rule systems look like

Wha? Any sequence of observations can be embedded in a consistent system that "hardcodes" it.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-04T14:34:28.455Z · LW(p) · GW(p)

Yeah, that's a good point. Hardcoding complicated changes is consistent. So any such argument of this form about level IV fails. I withdraw that claim.

Replies from: DanielVarga
comment by DanielVarga · 2010-07-04T20:15:54.856Z · LW(p) · GW(p)

Tegmark level IV is a very useful tool to guide one's intuitions, but in the end, the only meaningful question about Tegmark IV universes is this: Based on my observations, what is the relative probability that I am in this one rather than that one? And this, of course, is just what scientists do anyway, without citing Tegmark each time. Hardcoded universes are easily dealt with by the scientists' favorite tool, Occam's Razor.

comment by Vladimir_Nesov · 2010-07-03T06:25:56.165Z · LW(p) · GW(p)

Consistency is about logics, while Tegmark's madness is about mathematical structures. Whenever you can model your own actions (decision-making algorithm) using huge complicated mathematical structures, you can also do so with relatively simple mathematical structures constructed from the syntax of your algorithm (Lowenheim-Skolem type constructions). There is no fact of the matter about whether a given consistent countable first order theory, say, talks about an uncountable model or a countable one.

comment by Vladimir_Nesov · 2010-07-02T12:08:46.941Z · LW(p) · GW(p)

If you think all possible worlds exist, then you should expect our small bubble of ordered existence to erupt into chaos any day now

Not if you interpret your preference about those worlds as assigning most of them low probability, so that only the ordered ones matter.

Replies from: Jordan, cousin_it
comment by Jordan · 2010-07-04T07:56:36.995Z · LW(p) · GW(p)

I don't follow. Many low probability and unordered worlds are highly preferable. Conversely, many high probability worlds are not. I don't see a correlation.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-04T08:06:28.966Z · LW(p) · GW(p)

It's a simplification. If preference satisfies the expected utility axioms, it can be decomposed into probability and utility, and in this sense probability is a component of preference and shows how much you care about a given possibility. This doesn't mean that utility is high on those possibilities as well, or that the possibilities with high utility will have high probability. See my old post for more on this.

Replies from: Roko
comment by Roko · 2010-07-05T19:49:25.116Z · LW(p) · GW(p)

I understand this move but I don't like it. I think that in the fullness of time, we'll see that probability is not a kind of preference, and there is a "fact of the matter" about the effects that actions have, i.e. that reality is objective not subjective.

But I don't like arguments from subjective anticipation; subjective anticipation is a projective error that humans make, as many-worlds QM has already proved.

Indeed MW QM combined with Robin's Mangled Worlds is a good microcosm for how the multiverse at other levels ought to turn out. Subjective anticipation out, but still objective facts about what happens.

I note that since the argument from subjective anticipation is invalid, there is still the possibility that we live in an infinite structure with no canonical measure, in which case Vladimir would be right.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-05T20:25:28.455Z · LW(p) · GW(p)

I understand this move but I don't like it. I think that in the fullness of time, we'll see that probability is not a kind of preference, and there is a "fact of the matter" about the effects that actions have, i.e. that reality is objective not subjective.

I think that probability is a tool for preference, but I also think that there is a fact of the matter about the effects of actions, and that reality of that effect is objective. This effect is at the level of the sample space (based on all mathematical structures maybe) though, of "brittle math", while the ways you measure the "probability" of a given (objective) event depend on what preference (subjective goals) you are trying to optimize for.

comment by cousin_it · 2010-07-02T14:21:31.376Z · LW(p) · GW(p)

To rephrase, "unless you interpret your preference as denying the multiverse hypothesis" :-)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-02T16:41:04.038Z · LW(p) · GW(p)

You don't have to assign exactly no value to anything, which makes all structures relevant (to some extent).

comment by Mitchell_Porter · 2010-07-07T10:30:13.869Z · LW(p) · GW(p)

If you think doom is very probable and we only survived due to the anthropic principle, then you should expect doom any day now, and every passing day without incident should weaken your faith in the anthropic explanation.

What if you can see the doom building up, with every passing day? :-)

If you think all possible worlds exist, then you should expect our small bubble of ordered existence to erupt into chaos any day now, because way more copies of it are contained in chaotic worlds than in ordered ones.

I think this one is deeper. It is a valid criticism of quantum MWI, for example. If all worlds exist equally then naively all this structure around us should dissolve immediately, because most physical configurations are just randomness. Thus the quest to derive the Born probabilities...

I don't believe MWI as an explanation of QM anyway, so no big deal. But I am interested in "level IV" thinking - the idea that "all possible worlds exist", according to some precise notion of possibility. And yes, if you think any sequence of events is equally possible and hence (by the hypothesis) equally real, then what we actually see happening looks exceedingly improbable.

One pragmatist response to this is just to say "only orderly worlds are possible", without giving a further reason. If you actually had an "orderly multiverse" theory that gave correct predictions, you would have some justification for doing this, though eventually you'd still want to know why only the orderly worlds are real.

A more metaphysical response would try to provide a reason why all the real worlds are orderly. For example: Anything that exists in any world has a "nature" or an "essence", and causality is always about essences, so it's just not true that any string of events can occur in any world. Any event in any world really is a necessary product of the essences of the earlier events that cause it, and the appearance of randomness only happens under special circumstances (e.g. brains in vats) which are just uncommon in the multiverse. There are no worlds where events actually go haywire because it is logically impossible for causality to switch off, and every world has its own internal form of causality.

Then there's an anthropic variation on the metaphysical response, where you don't say that only orderly worlds are possible, but you give some reason why consciousness can only happen in orderly worlds (e.g. it requires causality).

comment by ShardPhoenix · 2010-07-03T01:50:07.438Z · LW(p) · GW(p)

If you think all possible worlds exist, then you should expect our small bubble of ordered existence to erupt into chaos any day now, because way more copies of it are contained in chaotic worlds than in ordered ones. Every day you spend without spontaneously turning into a pheasant should weaken your faith in the multiverse.

It's not clear to me that this is correct. Also, even if it is, then coherent memories (like what we're using to judge this whole scenario) only exist in worlds where this either hasn't happened yet or won't ever.

Replies from: wedrifid
comment by wedrifid · 2010-07-03T04:17:59.428Z · LW(p) · GW(p)

We use markdown syntax. An > at the start of the paragraph will make it a quote,

like so.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2010-07-03T10:09:41.315Z · LW(p) · GW(p)

I know, I was just being too lazy to look up the syntax :/.

Replies from: apophenia
comment by apophenia · 2010-07-03T22:28:18.080Z · LW(p) · GW(p)

If you click "Help" when writing a comment, it will appear in a handy box right next to where you are writing.

comment by Roko · 2010-07-02T19:40:34.284Z · LW(p) · GW(p)

expect

What is this subjective expectation that you speak of?

comment by NancyLebovitz · 2010-07-02T10:40:11.693Z · LW(p) · GW(p)

From what I've heard, there was a lot of talk about bomb shelters, but very few of them were built.

Replies from: gimpf
comment by gimpf · 2010-07-03T16:21:19.365Z · LW(p) · GW(p)

Well, we even had a law which required you to have one if you built a new house (see an article in German). This law is long since extinct, but according to the link above, there were 2.5 million such rooms, for a population of just 8 million people... Please note that in case of a real emergency most of those would probably have been extremely under-equipped. So, built - yes, correctly - no, and nowadays not even thought about.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-03T16:32:26.404Z · LW(p) · GW(p)

What I'd heard was a bit on NPR which claimed there were only a handful of bomb shelters built in the US, and I admit I wasn't thinking about the rest of the world.

I was probably born a little late (1953) for the height of bomb-shelter building, but I've never heard second- or third-hand about actual bomb shelters in the US, and I think I would have (as parts of basements or some such) if they were at all common.

My impression is that the real attitude wasn't so much that a big nuclear war was unlikely as that people thought that if it happened, it wouldn't be worth living through.

comment by NancyLebovitz · 2010-07-26T02:58:07.094Z · LW(p) · GW(p)

A few years after I became an assistant professor, I realized the key thing a scientist needs is an excuse. Not a prediction. Not a theory. Not a concept. Not a hunch. Not a method. Just an excuse — an excuse to do something, which in my case meant an excuse to do a rat experiment. If you do something, you are likely to learn something, even if your reason for action was silly. The alchemists wanted gold so they did something. Fine. Gold was their excuse. Their activities produced useful knowledge, even though those activities were motivated by beliefs we now think silly. I’d like to think none of my self-experimentation was based on silly ideas but, silly or not, it often paid off in unexpected ways. At one point I tested the idea that standing more would cause weight loss. Even as I was doing it I thought the premise highly unlikely. Yet this led me to discover that standing a lot improved my sleep.

Seth Roberts

I'm not sure he's right about this, but I'm not sure he's wrong, either. What do you think?

Replies from: RobinZ
comment by RobinZ · 2010-07-26T14:42:02.498Z · LW(p) · GW(p)

It makes me think of Richard Hamming talking about having "an attack".

comment by [deleted] · 2010-07-07T20:27:22.934Z · LW(p) · GW(p)

Here are some assumptions one can make about how "intelligences" operate:

  1. An intelligent agent maintains a database of "beliefs"
  2. It has rules for altering this database according to its experiences.
  3. It has rules for making decisions based on the contents of this database.

and an assumption about what "rationality" means:

  1. Whether or not an agent is "rational" depends only on the rules it uses in 2. and 3.
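If it helps to make the assumptions concrete, here is a toy rendering of them in code (my illustration only, not part of the original post, and exactly the picture the post goes on to question):

```python
# A toy rendering of assumptions 1-3: an agent with an explicit belief store
# plus separate update and decision rules.
class Agent:
    def __init__(self, update_rule, decision_rule):
        self.beliefs = {}                    # 1. database of "beliefs"
        self.update_rule = update_rule       # 2. rule for revising beliefs
        self.decision_rule = decision_rule   # 3. rule for choosing actions

    def observe(self, experience):
        self.beliefs = self.update_rule(self.beliefs, experience)

    def act(self, options):
        return self.decision_rule(self.beliefs, options)

# On this picture, "rationality" (assumption 4) is a property of
# update_rule and decision_rule alone, not of the current belief contents.
```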

I have two questions:

I think that these assumptions are implicit in most and maybe all of what this community writes about rationality, decision theory, and similar topics. Does anyone disagree? Or agree?

Have assertions 1-4, or something similar to them, been made explicit and defended or criticized anywhere on this website?

The background is that I've been kicking around the idea that a focus on "beliefs" is misleading when modeling intelligence or intelligent agents.

This is my first post, please tell me if I'm misusing any jargon.

Replies from: whpearson, whpearson
comment by whpearson · 2010-07-07T22:50:39.704Z · LW(p) · GW(p)

This also reminded me that I wanted to go through the Intentional Stance by Daniel Dennett and find the good bits. Also worth reading is the wiki page.

I think he would state that the model you describe comes from folk psychology.

A relevant passage

"We have all learned to take a more skeptical attitude to the dictates of folk physics, including those robust deliverances that persist in the face of academic science. Even the "undeniable introspective fact" that you can feel "centrifugal force" cannot save it, except for the pragmatic purposes of rough-and-ready understanding it has always served. The delicate question of just how we ought to express our diminished allegiance to the categories of folk physics has been a central topic in philosophy since the seventeenth century, when Descartes, Boyle and other began to ponder the meta-physical status of color, felt warmth, and other "secondary qualities". These discussions, while cautiously agnostic about folk physics have traditionally assumed as unchallenged the bedrock of folk-psychological counterpart categories: conscious perceptions of color, sensations of warmth, or beliefs about the external "world"."

On Less Wrong, people do tend to discard the perception and sensation parts of folk psychology, but keep the belief and goal concepts.

You might have trouble convincing people here, mainly because people are interested in what should be done by an intelligence, rather than what is currently done by humans. It is a lot harder to find evidence for what ought to be done than for what is done.

Replies from: None
comment by [deleted] · 2010-07-08T12:26:13.351Z · LW(p) · GW(p)

Relevant and new-to-me, thanks.

I'd be interested to hear examples of things, related to this discussion, that people here would not be easily convinced of.

Replies from: whpearson
comment by whpearson · 2010-07-08T16:06:53.924Z · LW(p) · GW(p)

The problem I have found is determining what people accept as evidence about "intelligences".

If everyone thought intelligence was always somewhat humanlike (i.e. that if we can't localise beliefs in humans we shouldn't try to build AI with localised beliefs), then evidence about humans would constitute evidence about AI somewhat. In this case things like blindsight (mentioned in The Intentional Stance) would show that beliefs are not easily localised.

I think it's fairly uncontroversial on Less Wrong that beliefs aren't stored in one particular place in humans. However, because people are aware of the limitations of humans, they think that they can design AI without those flaws, so they do not constrain their designs to be humanlike, which allows them to slip localised/programmatic beliefs back in.

To convince them that localised beliefs were incorrect/unworkable for all intelligences would require a constructive theory of intelligence.

Does that help?

comment by whpearson · 2010-07-07T21:43:51.452Z · LW(p) · GW(p)

I'm not so interested in decision theory. I criticised it a bit here

Edit: To give a bit more background to how I view rationality: an intelligence is a set of interacting programs, some of which have control of the agent at any one time. The rationality of the agent depends upon the set of programs in control of the agent. The relationship between the set of programs and the rationality of the system is somewhat environmentally specific.

comment by Alexandros · 2010-07-04T12:32:15.322Z · LW(p) · GW(p)

Is there an on-line 'rationality test' anywhere, and if not, would it be worth making one?

The idea would be to have some type of on-line questionnaire, testing for various types of biases, etc. Initially I thought of it as a way of getting data on the rationality of different demographics, but it could also be a fantastic promotional tool for LessWrong (taking a page out of the Scientology playbook tee-hee). People love tests, just look at the cottage industry around IQ-testing. This could help raise the sanity waterline, if only by making people aware of their blind spots.

There are of course the typical problems with 'putting a number on a person's rationality' and perhaps it would need some focused expertise to pull off plausibly, but I do think it's a useful thing to have around, even just to iterate on.

Replies from: SilasBarta, Cyan, michaelkeenan, oliverbeatson, NancyLebovitz, utilitymonster, None, oliverbeatson
comment by SilasBarta · 2010-07-06T17:47:21.336Z · LW(p) · GW(p)

My kind of test would be like this:

1) Do you always seem to be able to predict the future, even as others doubt your predictions?

If they say yes ---> "That's because of confirmation bias, moron. You're not special."

Replies from: RobinZ
comment by RobinZ · 2010-07-06T18:19:52.983Z · LW(p) · GW(p)

In their defense, it might be hindsight bias instead. :P

comment by Cyan · 2010-07-06T17:26:26.908Z · LW(p) · GW(p)

There's an online test for calibration of subjective probabilities.

Replies from: Alexandros
comment by Alexandros · 2010-07-06T18:20:14.964Z · LW(p) · GW(p)

That was pretty awesome, thanks. Not precisely what I had in mind, but close enough to be an inspiration. Cheers.

comment by michaelkeenan · 2010-07-06T15:03:14.136Z · LW(p) · GW(p)

I would love for this to exist! I have some notes on easily-tested aspects of rationality which I will share:

The Conjunction Fallacy easily fits into a short multi-choice question.

I'm not sure what the error is called, but you can do the test described in Lawful Uncertainty:

Subjects were asked to predict whether the next card the experiment turned over would be red or blue in a context in which 70% of the cards were blue, but in which the sequence of red and blue cards was totally random. In such a situation, the strategy that will yield the highest proportion of success is to predict the more common event. For example, if 70% of the cards are blue, then predicting blue on every trial yields a 70% success rate. What subjects tended to do instead, however, was match probabilities - that is, predict the more probable event with the relative frequency with which it occurred. For example, subjects tended to predict 70% of the time that the blue card would occur and 30% of the time that the red card would occur. Such a strategy yields a 58% success rate.
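The arithmetic behind those success rates, as a quick check (my sketch, not from the quoted passage):

```python
# Why probability matching loses: with 70% blue cards,
# always guessing blue beats matching the 70/30 frequencies.
p_blue = 0.7

always_blue = p_blue                                         # 0.70 success rate
matching = p_blue * p_blue + (1 - p_blue) * (1 - p_blue)     # 0.49 + 0.09
print(always_blue)   # 0.70
print(matching)      # 0.58
```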

You could do the positive bias test where you tell someone the triplet "2-4-6" conforms to a rule and have them figure out the rule.

You might be able to come up with some questions that test resistance to anchoring.

It might be out of scope of rationality and getting closer to an intelligence test, but you could take some "cognitive reflection" questions from here, which were discussed at LessWrong here.

Replies from: None, Alexandros
comment by [deleted] · 2010-07-06T19:00:51.125Z · LW(p) · GW(p)

That Virginia Postrel article was interesting.

I was wondering why more reflective people were both more patient and less risk-averse -- she doesn't make this speculation, but it occurs to me that non-reflective people don't trust themselves and don't trust the future. If you aren't good at math and you know it, you won't take a gamble, because you know that good gamblers have to be clever. If you aren't good at predicting the future, you won't feel safe waiting for money to arrive later. Tomorrow the gods might send you an earthquake.

Risk aversion and time preference are both sensible adaptations for people who know they're not clever. People who are good at math and science don't retain such protections because they can estimate probabilities, and because their world appears intelligible and predictable.

Replies from: pjeby
comment by pjeby · 2010-07-06T19:20:36.941Z · LW(p) · GW(p)

non-reflective people don't trust themselves and don't trust the future

Um, that should make them more risk-averse, shouldn't it? Or do you mean reflective people don't trust themselves or the future?

Replies from: None, RobinZ
comment by [deleted] · 2010-07-06T19:33:46.193Z · LW(p) · GW(p)

oops. Reflective people are LESS risk averse. Corrected above.

Replies from: pjeby
comment by pjeby · 2010-07-06T20:55:41.057Z · LW(p) · GW(p)

Reflective people are LESS risk averse.

That's even more confusing. I would expect a reflective person to be more self-doubtful and more risk-averse than a non-reflective person, all else being equal. But perhaps a different definition of "reflective" is involved here.

Replies from: gwern, RobinZ
comment by gwern · 2010-07-07T02:09:55.406Z · LW(p) · GW(p)

But perhaps a different definition of "reflective" is involved here.

Possibly. A reflective person can use expected-utility to make choices that regular people would simply categorically avoid. (One might say in game-theoretic terms that a rational player can use mixed strategies, but irrational ones cannot and so can do worse. But that's probably pushing it too far.)

I recall reading one anecdote on an economics blog. The economist lived in an apartment and the nearest parking for his car was quite a ways away. There were tickets for parking on the street. He figured out the likelihood of being ticketed & the fine, and compared its expected disutility against the expected disutility of walking all the way to safe parking and back. It came out in favor of just eating the occasional ticket. His wife was horrified at him deliberately risking the fines.

Isn't this a case of rational reflection leading to an acceptance of risk which his less-reflective wife was averse to?

Replies from: gwern
comment by gwern · 2010-07-09T05:16:07.765Z · LW(p) · GW(p)

In a serendipitous and quite germane piece of research, Marginal Revolution links to a study on IQ and risk-aversion:

"Our main finding is that risk aversion and impatience both vary systematically with cognitive ability. Individuals with higher cognitive ability are significantly more willing to take risks in the lottery experiments and are significantly more patient over the year-long time horizon studied in the intertemporal choice experiment."

comment by RobinZ · 2010-07-07T00:42:33.437Z · LW(p) · GW(p)

I don't believe the article says "reflective":

Professor Frederick discovered striking systematic patterns in how people answer questions about risk and patience, including those above. This short problem-solving test, he found, predicts a lot:

1) A bat and a ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the ball cost?

2) If it takes five machines five minutes to make five widgets, how long would it take 100 machines to make 100 widgets?

3) In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half the lake?

The test measures not just the ability to solve math problems but the willingness to reflect on and check your answers. (Scores have a 0.44 correlation with math SAT scores, where 1.00 would be exact.) The questions all have intuitive answers -- wrong ones.

Professor Frederick gave his ''cognitive reflection test'' to nearly 3,500 respondents, mostly students at universities including M.I.T., the University of Michigan and Bowling Green University. Participants also answered a survey about how they would choose between various financial payoffs, as well as time-oriented questions like how much they would pay to get a book delivered overnight.

Getting the math problems right predicts nothing about most tastes, including whether someone prefers apples or oranges, Coke or Pepsi, rap music or ballet. But high scorers -- those who get all the questions right -- do prefer taking risks.

''Even when it actually hurts you on average to take the gamble, the smart people, the high-scoring people, actually like it more,'' Professor Frederick said in an interview. Almost a third of high scorers preferred a 1 percent chance of $5,000 to a sure $60.

They are also more patient, particularly when the difference, and the implied interest rate, is large. Choosing $3,400 this month over $3,800 next month implies an annual discount rate of 280 percent. Yet only 35 percent of low scorers -- those who missed every question -- said they would wait, while 60 percent of high scorers preferred the later, bigger payoff.
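For reference, the usual intuitive-but-wrong answers are 10 cents, 100 minutes, and 24 days; here is a quick worked check of the correct ones (my addition, not part of the quoted article):

```python
# 1) Bat and ball: ball + (ball + 100) = 110 cents  =>  ball = 5 cents, not 10.
ball_cents = (110 - 100) / 2
print(ball_cents)        # 5.0

# 2) Five machines make five widgets in five minutes, so one machine makes one
#    widget in five minutes; 100 machines make 100 widgets in the same time.
print(5)                 # 5 minutes, not 100

# 3) The patch doubles daily and covers the lake on day 48, so it covered
#    half the lake one doubling earlier.
print(48 - 1)            # 47 days, not 24
```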

Replies from: NancyLebovitz, gwern
comment by NancyLebovitz · 2010-07-07T06:59:51.286Z · LW(p) · GW(p)

The problem with the temperament checks in the last two paragraphs is that they're still testing roughly the same thing that's tested earlier on -- competence at word problems.

And possibly interest in word problems -- I know I've seen versions of the three problems before. I wouldn't be going at them completely cold, but I wouldn't have noticed and remembered having seen them decades ago if word problems weren't part of my mental universe.

comment by gwern · 2010-07-07T02:03:17.459Z · LW(p) · GW(p)

Somewhat offtopic:

I recall reading a study once that used a test which I am almost certain was this one to try to answer the cause/correlation question of whether philosophical training/credentials improved one's critical thinking or whether those who undertook philosophy already had good critical thinking skills; when I recently tried to re-find it for some point or other, I was unable to. If anyone also remembers this study, I'd appreciate any pointers.

(About all I can remember about it was that it concluded, after using Bayesian networks, that training probably caused the improvements and didn't just correlate.)

comment by RobinZ · 2010-07-06T19:27:39.107Z · LW(p) · GW(p)

They are more risk-averse - that was a typo.

comment by Alexandros · 2010-07-06T18:21:40.779Z · LW(p) · GW(p)

Thanks for the ideas. It's good to have something concrete. Let's see how it goes.

comment by oliverbeatson · 2010-07-05T13:27:09.883Z · LW(p) · GW(p)

The test's questions may need to be considerably dynamic to avert the possibility that people condition to specific problems without shedding the entire infected heuristic. Someone who had read Less Wrong a few times, but didn't make the knowledge truly a part of them, might return false negative for certain biases while retaining those biases in real-life situations. Don't want to make the test about guessing the teacher's password.

comment by NancyLebovitz · 2010-07-04T12:39:12.044Z · LW(p) · GW(p)

The test should include questions about applying rationality in one's life, not just abstract problems.

comment by utilitymonster · 2010-07-05T13:26:59.604Z · LW(p) · GW(p)

I'd suggest starting with a list of common biases and producing a question (or a few?) for each. The questions could test the biases and you could have an explanation of why the biased reasoning is bad, with examples.

It would also be useful to group the biases together in natural clusters, if possible.

comment by [deleted] · 2010-07-06T00:56:56.226Z · LW(p) · GW(p)

Sounds like a good idea. Doesn't have to be invented from scratch; adapt a few psychological or behavioral-economics experiments. It's hard to ask about rationality in one's own life because of self-reporting problems; if we're going to do it, I think it's better to use questions of the form "Scenario: would you do a, b, c, or d?" rather than self-descriptive questions of the form "Are you more: a or b?"

comment by oliverbeatson · 2010-07-05T13:22:15.006Z · LW(p) · GW(p)

Somewhat relatedly, I considered the idea of creating a 'Bias-Quotient' type test. It could go some way to popularising rationality and bias-aversion. A lot more people like the idea of being right than are actually aware of biases and other such behavioural stuff.

I anticipate that many of these people would do the test expecting to share their score somewhere online and gain relative intellect-prestige from an expected high score. On discovering that they're more biased than they believed, I believe that, provided the test's response to a low score were engaging and informative (and not annoying and pedantic), they would on net be genuinely interested in overcoming this, with a link to Less Wrong somewhere appropriate. They might share the test regardless of their low score with an annotation such as 'check this -- very interesting!'. That's all based on my model of how a lot of aspiring intelligent people behave. It may be biased.

This could open to a lot of people the doors to beginning to overcome the failures of their visceral probability heuristics, as well as the standard set of cognitive biases. The test's questions may need to be considerably dynamic to avert the possibility that people condition to specific problems without shedding the entire infected heuristic.

comment by Leonhart · 2010-07-03T21:58:34.359Z · LW(p) · GW(p)

I can't remember if this has come up before...

Currently the Sequences are mostly as-imported from OB; including all the comments, which are flat and voteless as per the old mechanism.

Given that the Sequences are functioning as our main corpus for teaching newcomers, should we consider doing some comment topiary on at least the most-read articles? Specifically, I wonder if an appropriate thread structure could be inferred from context; also, we could vote the comments up or down in order to make the useful-in-hindsight stuff more salient. There's a lot of great stuff in there, but IIRC some that is less good as well. Not that we should actually get rid of any of it, of course.

Having said that, I'm already thinking of reasons that this is a bad idea, but I'm throwing it out anyway. Any thoughts? Should we be treating the Sequences as a time capsule or a living textbook? (I think that those phrases have roughly equal vague positive affect :)

Replies from: RobinZ, JamesAndrix, JamesAndrix
comment by RobinZ · 2010-07-04T02:35:16.888Z · LW(p) · GW(p)

Voting is highly recommended - please do, and feel free to reply to comments with additional commentary as well. Otherwise I'd say leave them as be.

comment by JamesAndrix · 2010-07-25T20:47:08.739Z · LW(p) · GW(p)

Also related: A lot of the Sequences show marks of their origin on Overcoming Bias that could be confusing to someone who lands on that article:

Example: "Since this is an econblog... " in http://lesswrong.com/lw/j3/science_as_curiositystopper/

I think some kind of editorial note is in order here, if not a rewrite.

comment by JamesAndrix · 2010-07-05T06:46:24.067Z · LW(p) · GW(p)

Alternatively, we could repost/revisit the sequences on a schedule, and let the new posts build fresh comments.

Or even better, try to cover the same topics from a different perspective.

Replies from: gwern
comment by gwern · 2010-07-05T08:10:33.211Z · LW(p) · GW(p)

I've suggested in the past that we use the old posts as filler; that is, if X days go by without something new making it to the front page, the next oldest item gets promoted instead.

Even if we collectively have nothing to say that is completely new, we likely have interesting things to say about old stuff - even if only linking it forward to newer stuff.

Replies from: gwern
comment by gwern · 2010-07-06T08:00:13.221Z · LW(p) · GW(p)

So, from the 7 upboats, I take it that people in general approve of this idea. What's next? What do we do to make this a reality?

Looking back at an old post from OB (I think), like http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/ I don't see any option to promote it to the front page. I thought I had enough karma to promote other people's articles, but it looks like I may be wrong about this. Is it even currently technically possible to promote old articles?

Replies from: Morendil
comment by Morendil · 2010-07-06T08:16:02.938Z · LW(p) · GW(p)

What's next? What do we do to make this a reality?

Agree on the numerical value of X? LW has slowed down a bit recently, compared to relatively recent periods with frantic paces of posting; I rather appreciate the current rhythm. It would take a long period without new stuff to convince me we needed "filler" at all.

I thought I had enough karma to promote other peoples' articles

Only editors can promote. (Installing the LW codebase locally is fun: you can play at being an editor.)

Replies from: gwern
comment by gwern · 2010-07-06T08:47:27.749Z · LW(p) · GW(p)

Agree on the numerical value of X?

Alright. How about a week? If nothing new has shown up for a week, then I don't think people will mind a classic. (And offhand, I'm not sure we've yet had a slack period that long.)

Replies from: Morendil
comment by Morendil · 2010-07-06T08:57:34.480Z · LW(p) · GW(p)

Sounds good to me.

comment by JohannesDahlstrom · 2010-07-03T09:11:56.880Z · LW(p) · GW(p)

http://www.badscience.net/2010/07/yeah-well-you-can-prove-anything-with-science/

Priming people with scientific data that contradicts a particular established belief of theirs will actually make them question the utility of science in general. So in such a near-mode situation people actually seem to bite the bullet and avoid compartmentalization in their world-view.

From a rationality point of view, is it better to be inconsistent than consistently wrong?

There may be status effects in play, of course: reporting glaringly inconsistent views to those smarty-pants boffin types just may not seem a very good idea.

Replies from: cupholder
comment by cupholder · 2010-07-04T08:11:18.153Z · LW(p) · GW(p)

See also 'crank magnetism.'

I wonder if this counts as evidence for my heuristic of judging how seriously to take someone's belief on a complicated scientific subject by looking to see if they get the right answer on easier scientific questions.

comment by Unnamed · 2010-07-01T23:00:56.651Z · LW(p) · GW(p)

Has anyone continued to pursue the Craigslist charity idea that was discussed back in February, or did that just fizzle away? With stakes that high and a non-negligible chance of success, it seemed promising enough for some people to devote some serious attention to it.

Replies from: Kevin
comment by Kevin · 2010-07-01T23:18:55.030Z · LW(p) · GW(p)

Thanks for asking! I also really don't want this to fizzle away.

It is still being pursued by myself, Michael Vassar, and Michael GR via back channels rather than what I outlined in that post and it is indeed getting serious attention, but I don't expect us to have meaningful results for at least a year. I will make a Less Wrong post as soon as there is anything the public at large can do -- in the meanwhile, I respectfully ask that you or others do not start your own Craigslist charity group, as it may hurt our efforts at moving forward with this.

ETA: Successfully pulling off this Craigslist thing has big overlaps with solving optimal philanthropy in general.

comment by SilasBarta · 2010-07-01T21:28:47.160Z · LW(p) · GW(p)

Okay, here's something that could grow into an article, but it's just rambling at this point. I was planning this as a prelude to my ever-delayed "Explain yourself!" article, since it eases into some of the related social issues. Please tell me what you would want me to elaborate on given what I have so far.


Title: On Mechanizing Science (Epistemology?)

"Silas, there is no Bayesian ‘revival’ in science. There is one amongst people who wish to reduce science to a mechanical procedure." – Gene Callahan

“It is not possible … to construct a system of thought that improves on common sense. … The great enemy of the reservationist is the automatist[,] who believes he can reduce or transcend reason. … And the most pernicious [of them] are algorithmists, who believe they have some universal algorithm which is a drop-in replacement for any and all cogitation.” – "Mencius Moldbug"

And I say: What?

Forget about the issue of how many Bayesians are out there – I’m interested in the other claim. There are two ways to read it, and I express those views here (with a bit of exaggeration):

View 1: “Trying to come up with a mechanical procedure for acquiring knowledge is futile, so you are foolish to pursue this approach. The remaining mysterious aspects of nature are so complex you will inevitably require a human to continually intervene to ‘tweak’ the procedure based on human judgment, making it no mechanical procedure at all.”

View 2: “How dare, how dare those people try to mechanize science! I want science to be about what my elite little cadre has collectively decided is real science. We want to exercise our own discretion, and we’re not going to let some Young Turk outsiders upstage us with their theories. They don’t ‘get’ real science. Real science is about humans, yes, humans making wise, reasoned judgments, in a social context, where expertise is recognized and rewarded. A machine necessarily cannot do that, so don’t even try.”

View 1, I find respectable, even as I disagree with it.

View 2, I hold in utter contempt.

Replies from: Vladimir_M, TraditionalRationali, Tyrrell_McAllister, cupholder, NancyLebovitz, steven0461, cousin_it, Daniel_Burfoot
comment by Vladimir_M · 2010-07-02T07:32:09.285Z · LW(p) · GW(p)

I think there is an additional interpretation that you're not taking into account, and an eminently reasonable one.

First, to clarify the easy question: unless you believe that there is something mysteriously uncomputable going on in the human brain, the question of whether science can be automated in principle is trivial. Obviously, all you'd need to do is to program a sufficiently sophisticated AI, and it will do automated science. That much is clear.

However, the more important question is -- what about our present abilities to automate science? By this I mean both the hypothetical methods we could try and the ones that have actually been tried in practice. Here, at the very least, a strong case can be made that the 20th century attempt to transform science into a bureaucratic enterprise that operates according to formal, automated procedures has largely been a failure. It has undoubtedly produced an endless stream of cargo-cult science that satisfies all these formal bureaucratic procedures, but is nevertheless worthless -- or worse. At the same time, it's unclear how much valid science is coming out except for those scientists who have maintained a high degree of purely informal and private enthusiasm for discovering truth (and perhaps also those in highly practical applied fields where the cash worth of innovations provides a stringent reality check).

This is how I read Moldbug: in many important questions, we can only admit honestly that we still have no way to find answers backed by scientific evidence in any meaningful sense of the term, and we have to grapple with less reliable forms of reasoning. Yet, there is the widespread idea that if only the proper formal bureaucratic structures are established, we can get "science" to give us answers about whichever questions we find interesting, and we should guide our lives and policies according to the results of such "science." It's not hard to see how this situation can give birth to a diabolical network of perverse incentives, producing endless reams of cargo-cult scientific work published by prestigious outlets and venerated as "science" by the general public and the government.

The really scary prospect is that our system of government might lead us to a complete disaster guided by policy prescriptions coming from this perverted system that has, arguably, already become its integral part.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-02T16:24:38.946Z · LW(p) · GW(p)

Okay, thanks, that tells me what I was looking for: clarification of what it is I'm trying to refute, and what substantive reasons I have to disagree.

So "Moldbug" is pointing out that the attempt to make science into an algorithm has produced a lot of stuff that's worthless but adheres to the algorithm, and we can see this with common sense, however less accurate it might be.

The point I would make in response (and elaborate on in the upcoming article) is that this is no excuse not to look inside the black box that we call common sense and understand why it works, and what about it could be improved, while the Moldbug view asks that we not do it. As E. T. Jaynes says in chapter 1 of Probability Theory: The Logic of Science, the question we should ask is: if we were going to make a robot that infers everything we should infer, what constraints would we place on it?

This exercise is not just some attempt to make robots "as good as humans"; rather, it reveals why that-which-we-call "common sense" works in the first place, and exposes more general principles of superior inference.

In short, I claim that we can have Level 3 understanding of our own common sense. That is, contra Moldbug, we can go beyond merely being able to produce its output (Level 1): we can also know why we regard certain things as common sense and not others, and explain why it works, in which domains, and where and why it doesn't.

This could lead to a good article.
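As a toy illustration of the sort of "mechanical" inference step being gestured at here -- plain Bayes-rule updating over two made-up hypotheses, not a claim about Jaynes's full set of desiderata or about mechanizing science wholesale:

```python
# Minimal sketch: inference as arithmetic. Hypotheses and numbers are made up.
priors = {"fair coin": 0.5, "two-headed coin": 0.5}
likelihood_heads = {"fair coin": 0.5, "two-headed coin": 1.0}

def update(beliefs, likelihoods):
    """One Bayes-rule update: multiply prior by likelihood, then renormalize."""
    unnormalized = {h: beliefs[h] * likelihoods[h] for h in beliefs}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

posterior = priors
for _ in range(3):                  # observe three heads in a row
    posterior = update(posterior, likelihood_heads)

print(posterior)                    # fair coin: 1/9, two-headed coin: 8/9
```

The point is only that once the priors and likelihoods are on the table, the update itself is a mechanical procedure; the interesting (and contested) part is where those inputs come from.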

comment by TraditionalRationali · 2010-07-02T05:47:00.372Z · LW(p) · GW(p)

That it should be possible to Algorithmize Science seems clear from the fact that the human brain can do science, and the human brain should be possible to describe algorithmically. If not at a higher level, then at least -- in principle -- by quantum electrodynamics, which is the (known, and computable in principle) dynamics of the electrons and nuclei that are the building blocks of the brain. (If it were to be done in practice it would have to be done at a higher level, but as a proof of principle that argument should be enough.)

I guess, however, that what is actually meant is whether the scientific method itself could be formalised (algorithmized), so that science could be "mechanized" in a more direct way than building human-level AIs and then letting them learn and do science by the somewhat informal process used today by human scientists. That seems plausible, but it has still to be done and seems rather difficult. Philosophers of science are working on understanding the scientific process better and better, but they still seem to have a long way to go before an actually working algorithmic description is achieved. See also the discussion below on the recent article by Gelman and Shalizi criticizing bayesianism.

EDIT "done at a lower level" changed to "done at a higher level"

Replies from: WrongBot
comment by WrongBot · 2010-07-02T15:45:49.230Z · LW(p) · GW(p)

The scientific method is already a vague sort of algorithm, and I can see how it might be possible to mechanize many of the steps. The part that seems AGI-hard to me is the process of generating good hypotheses. Humans are incredibly good at plucking out reasonable hypotheses from the infinite search space that is available; that we are still so very often wrong says more about the difficulty of the problem than about our own abilities.

Replies from: NancyLebovitz, cupholder
comment by NancyLebovitz · 2010-07-02T16:27:03.779Z · LW(p) · GW(p)

I'm pretty sure that judging whether one has adequately tested a hypothesis is also going to be very hard to mechanize.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-02T16:39:49.234Z · LW(p) · GW(p)

The problem that I hear most often in regard to mechanizing this process has the basic form, "Obviously, you need a human in the loop because of all the cases where you need to be able to recognize that a correlation is spurious, and thus to ignore it, and that comes from having good background knowledge."

But you have to wonder: the human didn't learn how to recognize spurious correlations through magic. So however they came by that capability, it should be some identifiable process.

Replies from: cupholder, NancyLebovitz
comment by cupholder · 2010-07-03T04:41:43.336Z · LW(p) · GW(p)

The problem that I hear most often in regard to mechanizing this process has the basic form, "Obviously, you need a human in the loop because of all the cases where you need to be able to recognize that a correlation is spurious, and thus to ignore it, and that comes from having good background knowledge."

Those people should be glad they've never heard of TETRAD - their heads might have exploded!

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-03T10:01:32.059Z · LW(p) · GW(p)

That's intriguing. Has it turned out to be useful?

Replies from: cupholder
comment by cupholder · 2010-07-04T05:31:24.428Z · LW(p) · GW(p)

It's apparently been put to use with some success. Clark Glymour - a philosophy professor who helped develop TETRAD - wrote a long review of The Bell Curve that lists applications of an earlier version of TETRAD (see section 6 of the review):

Several other applications have been made of the techniques, for example:

  1. Spirtes et al. (1993) used published data on a small observational sample of Spartina grass from the Cape Fear estuary to correctly predict - contrary both to regression results and expert opinion - the outcome of an unpublished greenhouse experiment on the influence of salinity, pH and aeration on growth.

  2. Druzdzel and Glymour (1994) used data from the US News and World Report survey of American colleges and universities to predict the effect on dropout rates of manipulating average SAT scores of freshman classes. The prediction was confirmed at Carnegie Mellon University.

  3. Waldemark used the techniques to recalibrate a mass spectrometer aboard a Swedish satellite, reducing errors by half.

  4. Shipley (1995, 1997, in review) used the techniques to model a variety of biological problems, and developed adaptations of them for small sample problems.

  5. Akleman et al. (1997) have found that the graphical model search techniques do as well or better than standard time series regression techniques based on statistical loss functions at out of sample predictions for data on exchange rates and corn prices.

Personally I find it a little odd that such a useful tool is still so obscure, but I guess a lot of scientists are loath to change tools and techniques.

comment by NancyLebovitz · 2010-07-02T17:12:36.803Z · LW(p) · GW(p)

Maybe it's just a matter of people kidding themselves about how hard it is to explain something.

On the other hand, some things (like vision and natural language) are genuinely hard to figure out.

I'm not saying the problem is insoluble. I'm saying it looks very difficult.

comment by cupholder · 2010-07-03T05:08:23.900Z · LW(p) · GW(p)

One possible way to get started is to do what the 'Distilling Free-Form Natural Laws from Experimental Data' project did: feed measurements of time and other variables of interest into a computer program which uses a genetic algorithm to build functions that best represent one variable as a function of itself and the other variables. The Science article is paywalled but available elsewhere. (See also this bunch of presentation slides.)

They also have software for you to do this at home.
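For a very rough sense of how that kind of search works, here is a minimal toy version: random expression trees over one variable, scored by squared error against made-up data, evolved by mutation only. The actual project's method is far more sophisticated (it reportedly compares predicted and measured partial-derivative relationships between pairs of variables, among other things), so treat this purely as a sketch of the general idea:

```python
import random

# Toy data generated from a "law" we pretend not to know: x = 3*t^2 + 2.
ts = [0.1 * i for i in range(50)]
xs = [3 * t ** 2 + 2 for t in ts]

OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def random_expr(depth=0):
    """Random expression tree over the variable t and a few small constants."""
    if depth > 2 or random.random() < 0.3:
        return ("t", None, None) if random.random() < 0.5 else \
               (random.choice([1.0, 2.0, 3.0]), None, None)
    return (random.choice(list(OPS)), random_expr(depth + 1), random_expr(depth + 1))

def evaluate(expr, t):
    head, left, right = expr
    if left is None:                       # leaf: the variable t or a constant
        return t if head == "t" else head
    return OPS[head](evaluate(left, t), evaluate(right, t))

def error(expr):
    return sum((evaluate(expr, t) - x) ** 2 for t, x in zip(ts, xs))

def mutate(expr):
    """Replace a randomly chosen subtree with a fresh random one."""
    if expr[1] is None or random.random() < 0.3:
        return random_expr()
    head, left, right = expr
    return (head, mutate(left), right) if random.random() < 0.5 else (head, left, mutate(right))

population = [random_expr() for _ in range(200)]
for _ in range(100):                       # keep the best candidates, mutate them, repeat
    population.sort(key=error)
    survivors = population[:50]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

best = min(population, key=error)
print("best expression:", best, "squared error:", error(best))
```

With luck the loop recovers something equivalent to 3*t**2 + 2, but the sketch makes no promises; the real systems put a lot of work into their search operators and fitness measures.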

comment by Tyrrell_McAllister · 2010-07-08T19:46:39.060Z · LW(p) · GW(p)

View 2: “How dare, how dare those people try to mechanize science! . . .

The pithy reply would be that science already is mechanized. We just don't understand the mechanism yet.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-08T20:01:48.652Z · LW(p) · GW(p)

Is that directed at, or intended to be any more convincing to those holding Callahan's view in the link? I'm not trying to criticize you, I just want to make sure you know the kind of worldview you're dealing with here. If you'll remember, this is the same guy who categorically rejects the idea that anything human-related is mechanized. (Recent blog post about the issue ... he's proud to be a "Silas-free" zone now.)

On a slightly related note, I was thinking about what analogous positions would look like, and I thought of this one for comparison: "There is no automatist revival in industry. There is one amongst people who wish to reduce every production process into a mechanical procedure."

Replies from: Blueberry, Morendil, Morendil, Tyrrell_McAllister
comment by Blueberry · 2010-07-09T08:07:24.286Z · LW(p) · GW(p)

he's proud to be a "Silas-free" zone now.

From looking at his blog, I think you should take this as a compliment.

comment by Morendil · 2010-07-08T20:16:58.559Z · LW(p) · GW(p)

About "Silas-free zones" you blogged:

So why would this Serious Thinker feel the need to reject, on sight, my comments from appearing, and then advertise it?

You don't think your making a horrible impression on people you argue with may have anything to do with it? ;)

Seriously, that would be my first hypothesis. "You don't catch flies with vinegar." Go enough out of your way to antagonize people even as you're making strong rebuttals to their weak arguments, and you're giving them an easy way out of listening to you.

The nicer you are, the harder you make it for others to dismiss you as an asshole. I'd count that as a good reason to learn nice. (If you need role models, there are plenty of people here who are consistently nice without being pushovers in arguments - far from it.)

Replies from: SilasBarta
comment by SilasBarta · 2010-07-08T20:41:41.859Z · LW(p) · GW(p)

The evidence against that position is that Callahan, for a while, had no problem allowing my comments on his site, but then called me a "douche" and deleted them the moment they started disagreeing with him. Here's another example.

Also, on this post, I responded with something like, "It's real, in the sense of being an observable regularity in nature. Okay, what trap did I walk into?" but it was disallowed. Yet I wouldn't call that comment rude.

It's not about him banning me because of my tone; he bans anyone who makes the same kinds of arguments, unless they do it badly, in which case he keeps their comments for the easy kill, gets in the last word, and closes the thread. Which is his prerogative, of course, but not something to be equated with "being interested in meaningful exchange of ideas, and only banning those who are rude".

comment by Morendil · 2010-07-09T15:30:07.260Z · LW(p) · GW(p)

"There is no automatist revival in industry. There is one amongst people who wish to reduce every production process into a mechanical procedure."

I'm not sure that claim would be entirely absurd.

In the software engineering business, there's a subculture whose underlying ideology can be caricatured as "Programming would be so simple if only we could get those pesky programmers out of the loop." This subculture invests heavily into code generation, model-driven architectures, and so on.

Arguably, too, this goal only seems plausible if you have swallowed quite a few confusions regarding the respective roles of problem-solving, design, construction, and testing. A closer examination reveals that what passes for attempts at "mechanizing" the creation of software punts on most of the serious questions, focusing only on what is easily mechanizable.

But that is nothing other than the continuation of a trend that has existed in the software profession from the beginning: the provision of mechanized aids to a process that remains largely creative (and as such poorly understood). We don't say that compilers have mechanized the production of software; we say that they have raised the level of abstraction at which a programmer works.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-09T17:02:31.738Z · LW(p) · GW(p)

Okay, but that point only concerns production of software, a relatively new "production output". The statement ("there is no automatist revival in industry ...") would apply just the same to any factory, and ridicules the idea that there can be a mechanical procedure for producing any good. In reality, of course, this seems to be the norm: someone figures out what combination of motions converts the input to the output, refuting the notion that e.g. "There is no mechanical procedure for preparing a bottle of Coca-cola ..."

In any case, my dispute with Callahan's remark is not merely about its pessimism regarding mechanizing this or that (which I called View 1), but rather, the implication that such mechanization would be fundamentally impossible (View 2), and that this impossibility can be discerned from philosophical considerations.

And regarding software, the big difficulty in getting rid of human programmers seems to come from the fact that their role is, ultimately, to find a representation for a function (in a standard language) that converts a specified input into a specified output. Those specifications come from ... other humans, who often conceal properties of the desired I/O behavior, or fail to articulate them.

comment by Tyrrell_McAllister · 2010-07-09T00:39:39.912Z · LW(p) · GW(p)

Is that directed at, or intended to be any more convincing to those holding Callahan's view in the link? I'm not trying to criticize you,

No, you're absolutely right. My comment definitely would not be convincing. The best that could be said for it is that it would help to clarify the nature of my rejection of View 2. That is, if I were talking to Callahan, that comment would, at best, just help him to understand which position he was dealing with.

comment by cupholder · 2010-07-03T04:46:44.764Z · LW(p) · GW(p)

"Silas, there is no Bayesian ‘revival’ in science. There is one amongst people who wish to reduce science to a mechanical procedure." – Gene Callahan

Am I the only one who finds this extremely unlikely? So far as I know, Bayesian methods have become massively more popular in science over the last 50 years. (Count JSTOR hits for the word 'Bayesian,' for example, and watch the numbers shoot up over time!)

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-07-03T19:36:16.649Z · LW(p) · GW(p)

Half of those hits are in the social sciences. I suspect that is economists defining the rational agents they study as bayesian, but that is rather different from the economists being bayesian themselves! The other half are in math & statistics, which is probably bayesian statisticians becoming more common -- you might count that as science (and 10% are in science proper).

Anyhow, it's clear from the context (I'd have thought from the quote) that he just means that the vast majority of scientists are not interested in defining science precisely.

Replies from: cupholder
comment by cupholder · 2010-07-04T05:47:43.965Z · LW(p) · GW(p)

It might well have been clear from the quote itself, but not to me - I just read the quote as saying Bayesian thinking and Bayesian methods haven't become more popular in science, which doesn't mesh with my intuition/experience.

comment by NancyLebovitz · 2010-07-02T03:44:39.114Z · LW(p) · GW(p)

How hard do you think mechanizing science would be? It strikes me as being at least in the same class as natural language.

Replies from: NancyLebovitz, SilasBarta
comment by NancyLebovitz · 2010-07-02T15:40:11.541Z · LW(p) · GW(p)

I've been poking at the question of to what extent computers could help people do science, beyond the usual calculation and visualization which is already being done.

I'm not getting very far -- a lot of the most interesting stuff seems like getting meaning out of noise.

However, could computers check to make sure that the use of statistics isn't too awful? Or is finding out whether what's deduced follows from the raw data too much like doing natural language? What about finding similar patterns in different fields? Possibly promising areas which haven't been explored?

comment by SilasBarta · 2010-07-02T16:32:45.161Z · LW(p) · GW(p)

Not exactly sure, to be honest, though your estimate sounds correct. What matters is that I deem it possible in a non-trivial sense; and more importantly, that we can currently identify rough boundaries of ideal mechanized science, and can categorize much of existing science as being definitely in or out.

comment by steven0461 · 2010-07-04T21:01:01.811Z · LW(p) · GW(p)

It's probably best to take a cyborg point of view -- consciously followed algorithms (like probabilistic updating) aren't a replacement for common sense, but they can be integrated into common sense, or used as measuring sticks, to turn common sense into common awesome cybersense.

comment by cousin_it · 2010-07-01T21:37:29.788Z · LW(p) · GW(p)

You probably won't find much opposition to your opinion here on LW. Duh, of course science can and will be automated! It's pretty amusing that the thesis of Cosma Shalizi, an outspoken anti-Bayesian, deals with automated extraction of causal architecture from observed behavior of systems. (If you enjoy math, read it all; it's very eye-opening.)

Replies from: SilasBarta, SilasBarta
comment by SilasBarta · 2010-07-01T22:17:07.571Z · LW(p) · GW(p)

Really? I read enough of that thesis to add it to the pile of "papers about fully general learning programs with no practical use or insight into general intelligence".

Though I did get one useful insight from Shalizi's thesis: that I should judge complexity by the program length needed to produce something functionally equivalent, not something exactly identical, as that metric makes more sense when judging complexity as it pertains to real-world systems and their entropy.

comment by SilasBarta · 2010-07-01T22:25:06.661Z · LW(p) · GW(p)

And regarding your other point, I'm sure people agree with holding view 2 in contempt. But what about the more general question of mechanizing epistemology?

Also, would people be interested in a study of what actually does motivate opposition to the attempt to mechanize science? (i.e. one that goes beyond my rants and researches it)

comment by Daniel_Burfoot · 2010-07-02T15:29:12.806Z · LW(p) · GW(p)

I read Moldbug's quote as saying: there is currently no system, algorithmic or bureaucratic, that is even remotely close to the power of human intuition, common sense, genius, etc. But there are people who implicitly claim they have such a system, and those people are dangerous liars.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-02T16:02:23.133Z · LW(p) · GW(p)

It is not possible … to construct a system of thought that improves on common sense

I read Moldbug's quote as saying: there is currently no system ...

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2010-07-02T23:51:52.844Z · LW(p) · GW(p)

Those quotes do seem to be in conflict, but if he is talking about people that claim they already have the blueprints for such a thing, it would make more sense to read what he is saying as "it is not possible, with our current level of knowledge, to construct a system of thought that improves on common sense". Is he really pushing back against people that say that it is possible to construct such a system (at some far off point in the future), or is he pushing back against people that say they have (already) found such a system?

Replies from: mattnewport
comment by mattnewport · 2010-07-03T00:06:36.168Z · LW(p) · GW(p)

The Moldbug article that the quote comes from does not seem to be expressing anything much like either Silas' view 1 or view 2. Moldbug clarifies in a comment that he is not making an argument against the possibility of AGI:

Think of it in terms of Searle's Chinese Room gedankenexperiment. If you can build a true AI, you can build the Chinese Room. Since I do not follow Penrose and the neo-vitalists in believing that AI is in principle impossible, I think the Chinese Room can be built, although it would take a lot of people and be very slow.

My argument is that, not only is it the Room rather than the people in it that speaks Chinese, but (in my opinion) the algorithm that the Room executes will not be one that is globally intelligible to humans, in the way that a human can understand, say, how Windows XP works.

In other words, the human brain is not powerful enough to virtualize itself. It can reason, and with sufficient technology it can build algorithmic devices capable of artificial reason, and this implies that it can explain why these devices work. But it cannot upgrade itself to a superhuman level of reason by following the same algorithm itself.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-03T02:00:20.685Z · LW(p) · GW(p)

That sounds like a justification for view 1. Remember, view 1 doesn't provide a justification for why there will need to be continual tweaks to mechanized reasoners to bring them in line with (more-)human reasoning, so it remains agnostic on how exactly one justifies this view.

(Of course, "Moldbug's" view still doesn't seem any more defensible, because it equates a machine virtualizing a human, with a machine virtualizing the critical aspects of reasoning, but whatever.)

comment by [deleted] · 2010-07-06T17:14:17.139Z · LW(p) · GW(p)

Poking around on Cosma Shalizi's website, I found this long, somewhat technical argument for why the general intelligence factor, g, doesn't exist.

The main thrust is that g is an artifact of hierarchical factor analysis, and that whenever you have groups of variables that have positive correlations between them, a general factor will always appear that explains a fair amount of the variance, whether it actually exists or not.

I'm not convinced, mainly because it strikes me as unlikely that an error of this type would persist for so long, and because even his conception of intelligence as a large number of separate abilities would seem to need some sort of high-level selection and sequencing function. But neither of those is a particularly compelling reason for disagreement - can anyone more familiar with the psychological/statistical territory shed some light?

Replies from: None, cousin_it, satt, gwern, RobinZ
comment by [deleted] · 2010-07-07T14:54:46.203Z · LW(p) · GW(p)

I pointed this out to my buddy who's a psychology doctoral student, his reply is below:

I don't know enough about g to say whether the people talking about it are falling prey to the general correlation between tests, but this phenomenon is pretty well-known to social science researchers.

I do know enough about CFA (confirmatory factor analysis) and EFA (exploratory factor analysis) to tell you that this guy has an unreasonable boner for CFA. CFA doesn't test against truth, it tests against other models. Which means it only tells you whether the model you're looking at fits better than a comparator model. If that's a null model, that's not a particularly great line of analysis.

He pretty blatantly misrepresents this. And his criticisms of things like Big Five are pretty wild. Big Five, by its very nature, fits the correlations extremely well. The largest criticism of Big Five is that it's not theory-driven, but data-driven!

But my biggest beef has got to be him arguing that EFA is not a technique for determining causality. No shit. That is the very nature of EFA -- it's a technique for loading factors (which have no inherent "truth" to them by loading alone, and are highly subject to reification) in order to maximize variance explained. He doesn't need to argue this point for a million words. It's definitional.

So regardless of whether g exists or not, which I'm not really qualified to speak on, this guy is kind of a hugely misleading writer. MINUS FIVE SCIENCE POINTS TO HIM.

comment by cousin_it · 2010-07-06T18:12:26.782Z · LW(p) · GW(p)

I think this is one of the few cases where Shalizi is wrong. (Not an easy thing to say, as I'm a big fan of his.)

In the second part of the article he generates synthetic "test scores" of people who have three thousand independent abilities - "facets of intelligence" that apply to different problems - and demonstrates that standard factor analysis still detects a strong single g-factor explaining most of the variance between people. From that he concludes that g is a "statistical artefact" and lacks "reality". This is exactly like saying the total weight of the rockpile "lacks reality" because the weights of individual rocks are independent variables.
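For anyone who wants to poke at this themselves, here is a rough numpy sketch of that kind of simulation. The sizes, the equal weighting, and the amount of overlap between tests are all made up for illustration, and Shalizi's own setup differs in its details:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_abilities, n_tests, per_test = 2000, 3000, 12, 1500

# Independent "abilities" by construction: no underlying general factor.
abilities = rng.normal(size=(n_people, n_abilities))

# Each test sums a random subset of abilities; subsets overlap only by chance.
scores = np.empty((n_people, n_tests))
for j in range(n_tests):
    subset = rng.choice(n_abilities, size=per_test, replace=False)
    scores[:, j] = abilities[:, subset].sum(axis=1)

corr = np.corrcoef(scores, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]          # descending order

# With this much overlap the leading factor soaks up roughly half the variance,
# even though nothing like a single g exists in the generating process.
print("share of variance on the first factor:", eigenvalues[0] / eigenvalues.sum())
```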

As for the reason why he is wrong, it's pretty clear: Shalizi is a Marxist (fo' real) and can't give an inch to those pesky racists. A sad sight, that.

Replies from: Vladimir_M, None
comment by Vladimir_M · 2010-07-07T04:56:36.508Z · LW(p) · GW(p)

cousin_it:

A sad sight, that.

Indeed. A while ago, I got intensely interested in these controversies over intelligence research, and after reading a whole pile of books and research papers, I got the impression that there is some awfully bad statistics being pushed by pretty much every side in the controversy, so at the end I was left skeptical towards all the major opposing positions (though to varying degrees). If there existed a book written by someone as smart and knowledgeable as Shalizi that would present a systematic, thorough, and unbiased analysis of this whole mess, I would gladly pay $1,000 for it. Alas, Shalizi has definitely let his ideology get the better of him this time.

He also wrote an interesting long post on the heritability of IQ, which is better, but still clearly slanted ideologically. I recommend reading it nevertheless, but to get a more accurate view of the whole issue, I recommend reading the excellent Making Sense of Heritability by Neven Sesardić alongside it.

Replies from: satt, Morendil
comment by satt · 2010-07-07T14:22:12.881Z · LW(p) · GW(p)

If there existed a book written by someone as smart and knowledgeable as Shalizi that would present a systematic, thorough, and unbiased analysis of this whole mess, I would gladly pay $1,000 for it.

There is no such book (yet), but there are two books that cover the most controversial part of the mess that I'd recommend: Race Differences in Intelligence (1975) and Race, IQ and Jensen (1980). They are both systematic, thorough, and about as unbiased as one can reasonably expect on the subject of race & IQ. On the down side, they don't really cover other aspects of the IQ controversies, and they're three decades out of date. (That said, I personally think that few studies published since 1980 bear strongly on the race & IQ issue, so the books' age doesn't matter that much.)

Replies from: Vladimir_M
comment by Vladimir_M · 2010-07-08T08:17:18.343Z · LW(p) · GW(p)

Yes, among the books on the race-IQ controversy that I've seen, I agree that these are the closest thing to an unbiased source. However, I disagree that nothing very significant has happened in the field since their publication -- although unfortunately, taken together, these new developments have led to an even greater overall confusion. I have in mind particularly the discovery of the Flynn effect and the Minnesota adoption study, which have made it even more difficult to argue coherently either for a hereditarian or an environmentalist theory the way it was done in the seventies.

Also, even these books fail to present a satisfactory treatment of some basic questions where a competent statistician should be able to clarify things fully, but horrible confusion has nevertheless persisted for decades. Here I refer primarily to the use of regression to the mean as a basis for hereditarian arguments. From what I've seen, Jensen is still using such arguments as a major source of support for his positions, constantly replying to the existing superficial critiques with superficial counter-arguments, and I've never seen anyone giving this issue the full attention it deserves.

Replies from: satt, NancyLebovitz
comment by satt · 2010-07-08T21:57:46.289Z · LW(p) · GW(p)

However, I disagree that nothing very significant has happened in the field since their publication

Me too! I just don't think there's been much new data brought to the table. I agree with you in counting Flynn's 1987 paper and the Minnesota followup report, and I'd add Moore's 1986 study of adopted black children, the recent meta-analyses by Jelte Wicherts and colleagues on the mean IQs of sub-Saharan Africans, Dickens & Flynn's 2006 paper on black Americans' IQs converging on whites' (and at a push, Rushton & Jensen's reply along with Dickens & Flynn's), Fryer & Levitt's 2007 paper about IQ gaps in young children, and Fagan & Holland's papers (2002, 2007, 2009) on developing tests where minorities score equally to whites. I guess Richard Lynn et al.'s papers on the mean IQ of East Asians count as well, although it's really the black-white comparison that gets people's hackles up.

Having written out a list, it does look longer than I expected...although it's not much for 30-35 years of controversy!

Also, even these books fail to present a satisfactory treatment of some basic questions where a competent statistician should be able to clarify things fully, but horrible confusion has nevertheless persisted for decades. Here I refer primarily to the use of the regression to the mean as a basis for hereditarian arguments.

Amen. The regression argument should've been dropped by 1980 at the latest. In fairness to Flynn, his book does namecheck that argument and explain why it's wrong, albeit only briefly.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-07-09T09:05:53.272Z · LW(p) · GW(p)

satt:

The regression argument should've been dropped by 1980 at the latest. In fairness to Flynn, his book does namecheck that argument and explain why it's wrong, albeit only briefly.

If I remember correctly, Loehlin's book also mentions it briefly. However, it seems to me that the situation is actually more complex.

Jensen's arguments, in the forms in which he has been stating them for decades, are clearly inadequate. Some very good responses were published 30+ years ago by Mackenzie and Furby. Yet for some bizarre reason, prominent critics of Jensen have typically ignored these excellent references and instead produced their own much less thorough and clear counterarguments.

Nevertheless, I'm not sure if the argument should end here. Certainly, if we observe a subpopulation S in which the values of a trait follow a normal distribution with the mean M(S) that is lower than for the whole population, then in pairs of individuals from S among whom there exists a correlation independent of rank and smaller than one, the lower-ranked individuals will regress towards M(S). That's a mathematical tautology, and nothing can be inferred from it about what the causes of the individual and group differences might be; the above cited papers explain this fact very well.
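To state the tautology in symbols (assuming, for concreteness, that both members of a pair are drawn from S with mean M(S), equal variances, and a correlation ρ that is independent of rank):

$$E[X_2 \mid X_1 = x] \;=\; M(S) + \rho\,\bigl(x - M(S)\bigr), \qquad 0 \le \rho < 1,$$

so the expected score of the second individual always lies between x and M(S), whatever the causes of the individual and group differences happen to be.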

However, the question that I'm not sure about is: what can we conclude from the fact that the existing statistical distributions and correlations are such that they satisfy these mathematical conditions? Is this really a trivial consequence of the norming of tests that's engineered so as to give their scores a normal distribution over the whole population? I'd like to see someone really statistics-savvy scrutinize the issue without starting from the assumption that both the total population distribution and the subpopulation distribution are normal and that the correlation coefficients between relatives are independent of their rank in the distribution.

comment by NancyLebovitz · 2010-07-08T11:34:59.679Z · LW(p) · GW(p)

What would appropriate policy be if we just don't know to what extent IQ is different in different groups?

Replies from: Vladimir_M
comment by Vladimir_M · 2010-07-09T08:25:49.926Z · LW(p) · GW(p)

Well, if you'll excuse the ugly metaphor, in this area even the positive questions are giant cans of worms lined on top of third rails, so I really have no desire to get into public discussions of normative policy issues.

comment by Morendil · 2010-07-07T06:28:53.883Z · LW(p) · GW(p)

long post on the heritability of IQ, which is better, but still clearly slanted ideologically

OK, I'll bite. Can you point to specific parts of that post which are in error owing to ideologically motivated thinking?

Replies from: Vladimir_M
comment by Vladimir_M · 2010-07-07T07:29:34.455Z · LW(p) · GW(p)

Morendil:

Can you point to specific parts of that post which are in error owing to ideologically motivated thinking?

A piece of writing biased for ideological reasons doesn't even have to have any specific parts that can be shown to be in error per se. Enormous edifices of propaganda can be constructed -- and have been constructed many times in history -- based solely on the selection and arrangement of the presented facts and claims, which can all be technically true by themselves.

In areas that arouse strong ideological passions, all sorts of surveys and other works aimed at broad audiences can be expected to suffer from this sort of bias. For a non-expert reader, this problem can be recognized and overcome only by reading works written by people espousing different perspectives. That's why I recommend that people should read Shalizi's post on heritability, but also at least one more work addressing the same issues written by another very smart author who doesn't share the same ideological position. (And Sesardić's book is, to my knowledge, the best such reference about this topic.)

Instead of getting into a convoluted discussion of concrete points in Shalizi's article, I'll just conclude with the following remark. You can read Shalizi's article, conclude that it's the definitive word on the subject, and accept his view of the matter. But you can also read more widely on the topic, and see that his presentation is far from unbiased, even if you ultimately conclude that his basic points are correct. The relevant literature is easily accessible if you just have internet and library access.

comment by [deleted] · 2010-07-06T19:26:05.532Z · LW(p) · GW(p)

Your analogy is flawed, I think.

The weight of the rock pile is just what we call the sum of the weights of the rocks. It's just a definition; but the idea of general intelligence is more than a definition. If there were a real, biological thing called g, we would expect all kinds of abilities to be correlated. Intelligence would make you better at math and music and English. We would expect basically all cognitive abilities to be affected by g, because g is real -- it represents something like dendrite density, some actual intelligence-granting property.

People hypothesized that g is real because results of all kinds of cognitive tests are correlated. But what Shalizi showed is that you can generate the same correlations if you let test scores depend on three thousand uncorrelated abilities. You can get the same results as the IQ advocates even when absolutely no single factor determines different abilities.

Sure, your old g will correlate with multiple abilities -- hell, you could let g = "test score" and that would correlate with all the abilities -- but that would be meaningless. If size and location determine the price of a house, you don't declare that there is some factor that causes both large size and desirable location!

Replies from: Vladimir_M, None, cousin_it
comment by Vladimir_M · 2010-07-07T05:46:27.539Z · LW(p) · GW(p)

SarahC:

But what Shalizi showed is that you can generate the same correlations if you let test scores depend on three thousand uncorrelated abilities. You can get the same results as the IQ advocates even when absolutely no single factor determines different abilities.

Just to be clear, this is not an original idea by Shalizi, but the well known "sampling theory" of general intelligence first proposed by Godfrey Thomson almost a century ago. Shalizi states this very clearly in the post, and credits Thomson with the idea. However, for whatever reason, he fails to mention the very extensive discussions of this theory in the existing literature, and writes as if Thomson's theory had been ignored ever since, which definitely doesn't represent the actual situation accurately.

In a recent paper by van der Maas et al., which presents an extremely interesting novel theory of correlations that give rise to g (and which Shalizi links to at one point), the authors write:

Thorndike (1927) and Thomson (1951) proposed one such alternative mechanism, namely, sampling. In this sampling theory, carrying out cognitive tasks requires the use of many lower order uncorrelated modules or neural processes (so-called bonds). They hypothesized that the samples of modules or bonds used for different cognitive tests partly overlap, causing a positive correlation between the test scores. In this view, the positive manifold is due to a measurement problem in the sense that it is very difficult to obtain independent measures of the lower order processes. Jensen (1998) and Eysenck (1987) identified three problems with this sampling theory. First, whereas some complex mental tests, as predicted by sampling theory, highly load on the g factor, some very narrowly defined tests also display high g loadings. Second, some seemingly completely unrelated tests, such as visual and memory scan tasks, are consistently highly correlated, whereas related tests, such as forward and backward digit span, are only modestly correlated. Third, in some cases brain damage leads to very specific impairments, whereas sampling theory predicts general impairments. These three facts are difficult to explain with sampling theory, which as a consequence has not gained much acceptance.1 Thus, the g explanation remains very dominant in the current literature (see Jensen, 1998, p. 107).

Note that I take no position here about whether these criticisms of the sampling theory are correct or not. However, I think this quote clearly demonstrates that an attempt to write off g by merely invoking the sampling theory is not a constructive contribution to the discussion.

I would also add that if someone managed to construct multiple tests of mental ability that would sample disjoint sets of Thomsonesque underlying abilities and thus fail to give rise to g, it would be considered a tremendous breakthrough. Yet, despite the strong incentive to achieve this, nobody who has tried so far has succeeded. This evidence is far from conclusive, but far from insignificant either.

Replies from: satt, None
comment by satt · 2010-07-07T15:44:35.656Z · LW(p) · GW(p)

I think Shalizi isn't too far off the mark in writing "as if Thomson's theory had been ignored". Although a few psychologists & psychometricians have acknowledged Thomson's sampling model, in everyday practice it's generally ignored. There are far more papers out there that fit g-oriented factor models as a matter of course than those that try to fit a Thomson-style model. Admittedly, there is a very good reason for that — Thomson-style models would be massively underspecified on the datasets available to psychologists, so it's not practical to fit them — but that doesn't change the fact that a g-based model is the go-to choice for the everyday psychologist.

There's an interesting analogy here to Shalizi's post about IQ's heritability, now I think about it. Shalizi writes it as if psychologists and behaviour geneticists don't care about gene-environment correlation, gene-environment interaction, nonlinearities, there not really being such a thing as "the" heritability of IQ, and so on. One could object that this isn't true — there are plenty of papers out there concerned with these complexities — but on the other hand, although the textbooks pay lip service to them, researchers often resort to fitting models that ignore these speedbumps. The reason for this is the same as in the case of Thomson's model: given the data available to scientists, models that accounted for these effects would usually be ruinously underspecified. So they make do.

Replies from: Vladimir_M, RobinZ
comment by Vladimir_M · 2010-07-08T08:37:49.283Z · LW(p) · GW(p)

However, it seems to me that the fatal problem of the sampling theory is that nobody has ever managed to figure out a way to sample disjoint sets of these hypothetical uncorrelated modules. If all practically useful mental abilities and all the tests successfully predicting them always sample some particular subset of these modules, then we might as well look at that subset as a unified entity that represents the causal factor behind g, since its elements operate together as a group in all relevant cases.

Or is there some additional issue here that I'm not taking into account?

Replies from: satt
comment by satt · 2010-07-08T19:26:03.442Z · LW(p) · GW(p)

I can't immediately think of any additional issue. It's more that I don't see the lack of well-known disjoint sets of uncorrelated cognitive modules as a fatal problem for Thomson's theory, merely weak disconfirming evidence. This is because I assign a relatively low probability to psychologists detecting tests that sample disjoint sets of modules even if they exist.

For example, I can think of a situation where psychologists & psychometricians have missed a similar phenomenon: negatively correlated cognitive tests. I know of a couple of examples which I found only because the mathematician Warren D. Smith describes them in his paper "Mathematical definition of 'intelligence' (and consequences)". The paper's about the general goal of coming up with universal definitions of and ways to measure intelligence, but in the middle of it is a polemical/sceptical summary of research into g & IQ.

Smith went through a correlation matrix for 57 tests given to 240 people, published by Thurstone in 1938, and saw that the 3 most negative of the 1596 intercorrelations were between these pairs of tests:

  • "100-word vocabulary test // Recognize pictures of hand as Right/Left" (correlation = -0.22)
  • "Find lots of synonyms of a given word // Decide whether 2 pictures of a national flag are relatively mirrored or not" (correlation = -0.16)
  • "Describe somebody in writing: score=# words used // figure recognition test: decide which numbers in a list of drawings of abstract figures are ones you saw in a previously shown list" (correlation = -0.12)

In Smith's words: "This seems too much to be a coincidence!" Smith then went to the 60-item correlation matrix for 710 schoolchildren published by Thurstone & Thurstone in 1941 and did the same, discovering that

the three most negative [correlations], with values -0.161, -0.152, and -0.138 respectively, are the pairwise correlations of the performance on the "scattered Xs" test (circle the Xs in a random scattering of letters) with these three tests: (a) Sentence completion ... (b) Reading comprehension II ... (c) Reading comprehension I ... Again, it is difficult to believe this also is a coincidence!

The existence of two pairs of negatively correlated cognitive skills leads me to increase my prior for the existence of uncorrelated cognitive skills.

Also, the way psychologists often analyze test batteries makes it harder to spot disjoint sets of uncorrelated modules. Suppose we have a 3-test battery, where test 1 samples uncorrelated modules A, B, C, D & E, test 2 samples F, G, H, I & J, and test 3 samples C, D, E, F & G. If we administer the battery to a few thousand people and extract a g from the results, as is standard practice, then by construction the resulting g is going to correlate with scores on tests 1 & 2, although we know they sample non-overlapping sets of modules. (IQ, being a weighted average of test/module scores, will also correlate with all of the tests.) A lot of psychologists would interpret that as evidence against tests 1 & 2 measuring distinct mental abilities, even though we see there's an alternative explanation.
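A toy numerical version of that 3-test battery, with unit weights on each module and the first principal component standing in for g (a simplification of how psychologists actually extract it):

```python
import numpy as np

rng = np.random.default_rng(1)
modules = rng.normal(size=(5000, 10))      # 10 independent "modules", 5000 people

test1 = modules[:, 0:5].sum(axis=1)        # samples modules A-E
test2 = modules[:, 5:10].sum(axis=1)       # samples modules F-J (disjoint from test1)
test3 = modules[:, 2:7].sum(axis=1)        # samples modules C-G (overlaps both)

scores = np.column_stack([test1, test2, test3])
corr = np.corrcoef(scores, rowvar=False)

w, v = np.linalg.eigh(corr)                # eigenvalues ascending, vectors in columns
pc1 = v[:, -1]
if pc1.sum() < 0:
    pc1 = -pc1                             # eigenvector sign is arbitrary; make loadings positive
g = scores @ pc1                           # first principal component, our stand-in "g"

print("corr(test1, test2):", corr[0, 1])               # near zero by construction
print("corr(g, test1):", np.corrcoef(g, test1)[0, 1])  # clearly positive
print("corr(g, test2):", np.corrcoef(g, test2)[0, 1])  # clearly positive
```

Tests 1 and 2 share no modules, yet both load on the extracted factor, simply because test 3 bridges them.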

Even if we did find an index of intelligence that didn't correlate with IQ/g, would we count it as such? Duckworth & Seligman discovered that in a sample of 164 schoolchildren, a composite measure of self-discipline predicted GPA significantly better than IQ, and self-discipline didn't correlate significantly with IQ. Does self-discipline now count as an independent intellectual ability? I'd lean towards saying it doesn't, but I doubt I could justify being dogmatic about that; it's surely a cognitive ability in the term's broadest sense.

Replies from: Vladimir_M, Douglas_Knight, HughRistik
comment by Vladimir_M · 2010-07-09T08:12:25.738Z · LW(p) · GW(p)

satt:

For example, I can think of a situation where psychologists & psychometricians have missed a similar phenomenon: negatively correlated cognitive tests. I know of a couple of examples which I found only because the mathematician Warren D. Smith describes them in his paper "Mathematical definition of 'intelligence' (and consequences)".

That's an extremely interesting reference, thanks for the link! This is exactly the kind of approach that this area desperately needs: no-nonsense scrutiny by someone with a strong math background and without an ideological agenda.

David Hilbert allegedly once quipped that physics is too important to be left to physicists; the way things are, it seems to me that psychometrics should definitely not be left to psychologists. That they haven't immediately rushed to explore further these findings by Smith is an extremely damning fact about the intellectual standards in the field.

Duckworth & Seligman discovered that in a sample of 164 schoolchildren, a composite measure of self-discipline predicted GPA significantly better than IQ, and self-discipline didn't correlate significantly with IQ. Does self-discipline now count as an independent intellectual ability?

Wouldn't this closely correspond to the Big Five "conscientiousness" trait? (Which the paper apparently doesn't mention at all?!) From what I've seen, even among the biggest fans of IQ, it is generally recognized that conscientiousness is at least as important as general intelligence in predicting success and performance.

Replies from: satt, NancyLebovitz
comment by satt · 2010-07-09T16:12:20.667Z · LW(p) · GW(p)

Wouldn't this closely correspond to the Big Five "conscientiousness" trait? (Which the paper apparently doesn't mention at all?!) From what I've seen, even among the biggest fans of IQ, it is generally recognized that conscientiousness is at least similarly important as general intelligence in predicting success and performance.

That's an excellent point that completely did not occur to me. Turns out that self-discipline is actually one of the 6 subscales used to measure conscientiousness on the NEO-PI-R, so it's clearly related to conscientiousness. With that in mind, it is a bit weird that conscientiousness doesn't get a shoutout in the paper...

comment by NancyLebovitz · 2010-07-09T08:23:43.057Z · LW(p) · GW(p)

Is anything known about a physical basis for conscientiousness?

Replies from: wedrifid
comment by wedrifid · 2010-07-09T09:56:30.122Z · LW(p) · GW(p)

Is anything known about a physical basis for conscientiousness?

It can be reliably predicted by, for example, SPECT scans. If I recall correctly you can expect to see over-active frontal lobes and basal ganglia. For this reason (and because those areas depend on dopamine a lot) dopaminergics (Ritalin, etc) make a big difference.

comment by Douglas_Knight · 2010-07-10T21:46:19.942Z · LW(p) · GW(p)

I haven't looked at Smith yet, but the quote looks like parody to me. Since you seem to take it seriously, I'll respond. Awfully specific tests defying the predictions looks like data mining to me. I predict that these negative correlations are not replicable. The first seems to be the claim that verbal ability is not correlated with spatial ability, but this is a well-tested claim. As Shalizi mentions, psychometricians do look for separate skills and these are commonly accepted components. I wouldn't be terribly surprised if there were ones they completely missed, but these two are popular and positively correlated. The second example is a little more promising: maybe that scattered Xs test is independent of verbal ability, even though it looks like other skills that are not, but I doubt it.

With respect to self-discipline, I think you're experiencing some kind of halo effect. Not every positive mental trait should be called intelligence. Self-discipline is just not what people mean by intelligence. I knew that conscientiousness predicted GPAs, but I'd never heard such a strong claim. But it is true that a lot of people dismiss conscientiousness (and GPA) in favor of intelligence, and they seem to be making an error (or being risk-seeking).

Replies from: satt
comment by satt · 2010-07-11T18:03:38.897Z · LW(p) · GW(p)

I haven't looked at Smith yet, but the quote looks like parody to me. Since you seem to take it seriously, I'll respond.

Once you read the relevant passage in context, I anticipate you will agree with me that Smith is serious. Take this paragraph from before the passage I quoted from:

Further, let us return to Gould's criticism that due to "validation" of most other highly used IQ tests and subtests, Spearman's g was forced to appear to exist from then on, regardless of whether it actually did. In view of this ... probably the only place we can look in the literature to find data truly capable of refuting or confirming Spearman, is data from the early days, before too much "validation" occurred, but not so early on that Spearman's atrocious experimental and statistical practices were repeated.

The prime candidate I have been able to find for such data is Thurstone's [205] "primary mental abilities" dataset published in 1938.

Smith then presents the example from Thurstone's 1938 data.

Awfully specific tests defying the predictions looks like data mining to me. I predict that these negative correlations are not replicable.

I'd be inclined to agree if the 3 most negative correlations in the dataset had come from very different pairs of tests, but the fact that they come from sets of subtests that one would expect to tap similar narrow abilities suggests they're not just statistical noise.

The first seems to be the claim that verbal ability is not correlated with spatial ability, but this is a well-tested claim. As Shalizi mentions, psychometricians do look for separate skills and these are commonly accepted components. I wouldn't be terribly surprised if there were ones they completely missed, but these two are popular and positively correlated.

Smith himself does not appear to make that claim; he presents his two examples merely as demonstrations that not all mental ability scores positively correlate. I think it's reasonable to package the 3 verbal subtests he mentions as strongly loading on verbal ability, but it's not clear to me that the 3 other subtests he pairs them with are strong measures of "spatial ability"; two of them look like they tap a more specific ability to handle mental mirror images, and the third's a visual memory test.

Even if it transpires that the 3 subtests all tap substantially into spatial ability, they needn't necessarily correlate positively with specific measures of verbal ability, even though verbal ability correlates with spatial ability.

With respect to self-discipline, I think you're experiencing some kind of halo effect. Not every positive mental trait should be called intelligence. Self-discipline is just not what people mean by intelligence.

I'm tempted to agree but I'm not sure such a strong generalization is defensible. Take a list of psychologists' definitions of intelligence. IMO self-discipline plausibly makes sense as a component of intelligence under definitions 1, 7, 8, 13, 14, 23, 25, 26, 27, 28, 32, 33 & 34, which adds up to 37% of the list of definitions. A good few psychologists appear to include self-discipline as a facet of intelligence.

comment by HughRistik · 2010-07-09T17:35:18.519Z · LW(p) · GW(p)

Even if we did find an index of intelligence that didn't correlate with IQ/g, would we count it as such? Duckworth & Seligman discovered that in a sample of 164 schoolchildren, a composite measure of self-discipline predicted GPA significantly better than IQ, and self-discipline didn't correlate significantly with IQ. Does self-discipline now count as an independent intellectual ability? I'd lean towards saying it doesn't, but I doubt I could justify being dogmatic about that; it's surely a cognitive ability in the term's broadest sense.

Interesting thought. It turns out that Conscientiousness is actually negatively related to intelligence, while Openness is positively correlated with intelligence.

This finding is consistent with the folk notion of "crazy geniuses."

Though it's important to note that the second study was done on college students, who must have a certain level of IQ and who aren't representative of the population.

The first study notes:

According to this proposal the significant negative correlation could be observed only in groups with above average mental abilities and not in a random sample from a general population.

If we took a larger sample of the population, including lower IQ individuals, then I think we would see the negative correlation between Conscientiousness and intelligence diminish or even reverse, because I bet there are lots of people outside a college population who have both low intelligence and low Conscientiousness.

It could be that a moderate amount of Conscientiousness (well, whatever mechanisms cause Conscientiousness) is necessary for above average intelligence, but too much Conscientiousness (i.e. those mechanisms are too strong) limits intelligence.

Replies from: None, Douglas_Knight, satt
comment by [deleted] · 2010-07-09T20:13:06.119Z · LW(p) · GW(p)

I noticed a while back when a bunch of LW'ers gave their Big Five scores that our Conscientiousness scores tended to be low. I took that to be an internet thing (people currently reading a website are more likely to be lazy slobs) but this is a more flattering explanation.

comment by Douglas_Knight · 2010-07-10T22:07:59.961Z · LW(p) · GW(p)

Interesting thought. It turns out that Conscientiousness is actually negatively related to intelligence

No it doesn't. The whole point of that article is that it's a mistake to ask people how conscientious they are.

comment by satt · 2010-07-10T17:40:26.516Z · LW(p) · GW(p)

Interesting. I would've expected Conscientiousness to correlate weakly positively with IQ across most IQ levels.

I would avoid interpreting a negative correlation between C/self-discipline and IQ as evidence against C/self-discipline being a separate facet of intelligence; I think that would beg the question by implicitly assuming that IQ represents the entirety of what we call intelligence.

comment by RobinZ · 2010-07-07T15:50:17.997Z · LW(p) · GW(p)

Just out of curiosity: is psychology your domain of expertise? You speak confidently and in detail.

Replies from: satt
comment by satt · 2010-07-07T16:25:00.079Z · LW(p) · GW(p)

If only! I'm just a physics student but I've read a few books and quite a few articles about IQ.

[Edit: I've got an amateur interest in statistics as well, which helps a lot on this subject. Vladimir_M is right that there's a lot of crap statistics peddled in this field.]

comment by [deleted] · 2010-07-07T15:06:29.391Z · LW(p) · GW(p)

Ok, that's interesting new stuff -- I haven't read this literature at all.

comment by [deleted] · 2010-07-06T19:30:31.543Z · LW(p) · GW(p)

"All of this, of course, is completely compatible with IQ having some ability, when plugged into a linear regression, to predict things like college grades or salaries or the odds of being arrested by age 30. (This predictive ability is vastly less than many people would lead you to believe [cf.], but I'm happy to give them that point for the sake of argument.) This would still be true if I introduced a broader mens sana in corpore sano score, which combined IQ tests, physical fitness tests, and (to really return to the classical roots of Western civilization) rated hot-or-not sexiness. Indeed, since all these things predict success in life (of one form or another), and are all more or less positively correlated, I would guess that MSICS scores would do an even better job than IQ scores. I could even attribute them all to a single factor, a (for arete), and start treating it as a real causal variable. By that point, however, I'd be doing something so obviously dumb that I'd be accused of unfair parody and arguing against caricatures and straw-men."

This is the point here. There's a difference between coming up with linear combinations and positing real, physiological causes.

Replies from: cousin_it
comment by cousin_it · 2010-07-06T19:52:38.636Z · LW(p) · GW(p)

My beef isn't with Shalizi's reasoning, which is correct. I disagree with his text connotationally. Calling something a "myth" because it isn't a causal factor and you happen to study causal factors is misleading. Most people who use g don't need it to be a genuine causal factor; a predictive factor is enough for most uses, as long as we can't actually modify dendrite density in living humans or something like that.

Replies from: None
comment by [deleted] · 2010-07-06T21:29:17.667Z · LW(p) · GW(p)

Ok, let's talk connotations.

If g is a causal factor then "A has higher g than B" adds additional information to the statement "A scored higher than B on such-and-such tests." It might mean, for instance, that you could look in A's brain and see different structure than in B's brain; it might mean that we would expect A to be better at unrelated, previously untested skills.

If g is not a causal factor, then comments about g don't add any new information; they just sort of summarize or restate. That difference is significant.

A predictive factor is enough for predictive uses, but not for a lot of policy uses, which rely on causality. From your comment, I assume you are not a lefty, and that you think we should be more confident than we are about using IQ to make decisions regarding race. I think Shalizi's reasoning is likely relevant to making those decisions; it should probably make us more guarded in practice.

Replies from: Douglas_Knight, None, cousin_it
comment by Douglas_Knight · 2010-07-07T00:52:11.293Z · LW(p) · GW(p)

I don't understand your last paragraph. Could you give an example? Is this relevant to the decision of whether intelligence tests should be used for choosing firemen? Or is that a predictive use?

comment by [deleted] · 2010-07-07T15:12:16.586Z · LW(p) · GW(p)

The kinds of implications I'm thinking about are that if IQ causes X (and if IQ is heritable), then we should not seek to change X by social engineering means, because it won't be possible. X could be the distribution of college admittees, firemen, criminals, etc.

Not all policy has to rely on causal factors, of course. And my thinking is a little blurry on these issues in general.

comment by cousin_it · 2010-07-07T05:13:59.961Z · LW(p) · GW(p)

Seconding Douglas_Knight's question. I don't understand why you say policy uses must rely on causal factors.

comment by cousin_it · 2010-07-06T19:39:51.852Z · LW(p) · GW(p)

The way you define "real" properties, it seems you can't tell them from "unreal" ones by looking at correlations alone; we need causal intervention for that, a la Pearl. So until we invent tech for modifying dendrite density of living humans, or something like that, there's no practical difference between "real" g and "unreal" g and no point in making the distinction between them. In particular, their predictive power is the same.

So, basically, your and Shalizi's demand for a causal factor is too strong. We can do with weaker tools.

comment by satt · 2010-07-07T11:28:49.144Z · LW(p) · GW(p)

But neither of those are particularly compelling reasons for disagreement - can anyone more familiar with the psychological/statistical territory shed some light?

Shalizi's most basic point — that factor analysis will generate a general factor for any bunch of sufficiently strongly correlated variables — is correct.

Here's a demo. The statistical analysis package R comes with some built-in datasets to play with. I skimmed through the list and picked out six monthly datasets (72 data points in each):

It's pretty unlikely that there's a single causal general factor that explains most of the variation in all six of these time series, especially as they're from mostly non-overlapping time intervals. They aren't even that well correlated with each other: the mean correlation between different time series is -0.10 with a std. dev. of 0.34. And yet, when I ask R's canned factor analysis routine to calculate a general factor for these six time series, that general factor explains 1/3 of their variance!
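
The original list of six series isn't shown above; as a stand-in, here is a minimal sketch of the same exercise using six arbitrary monthly series from R's built-in data, each cut to 72 points. The particular choices are illustrative only, so the printed numbers will differ from the ones quoted above:

```r
# Six unrelated monthly series from R's built-in data, each truncated to 72 points.
series <- cbind(
  co2      = as.numeric(window(co2,            start = c(1959, 1), end = c(1964, 12))),
  airpass  = as.numeric(window(AirPassengers,  start = c(1949, 1), end = c(1954, 12))),
  nottem   = as.numeric(window(nottem,         start = c(1920, 1), end = c(1925, 12))),
  drivers  = as.numeric(window(UKDriverDeaths, start = c(1969, 1), end = c(1974, 12))),
  ldeaths  = as.numeric(ldeaths),     # 1974-1979, already 72 points
  accdeath = as.numeric(USAccDeaths)  # 1973-1978, already 72 points
)
round(cor(series), 2)                 # a mix of weak positive and negative correlations
fa <- factanal(series, factors = 1)   # R's canned factor analysis, one "general factor"
fa                                    # the printout's "Proportion Var" line shows the single
                                      # factor soaking up a sizeable chunk of the variance
                                      # even though the series have nothing to do with each other
```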

However, Shalizi's blog post covers a lot more ground than just this basic point, and it's difficult for me to work out exactly what he's trying to say, which in turn makes it difficult to say how correct he is overall. What does Shalizi mean specifically by calling g a myth? Does he think it is very unlikely to exist, or just that factor analysis is not good evidence for it? Who does he think is in error about its nature? I can think of one researcher in particular who stands out as just not getting it, but beyond that I'm just not sure.

Replies from: HughRistik, None, RobinZ, RobinZ
comment by HughRistik · 2010-07-07T18:00:15.577Z · LW(p) · GW(p)

In your example, we have no reason to privilege the hypothesis that there is an underlying causal factor behind that data. In the case of g, wouldn't its relationships to neurobiology be a reason to give a higher prior probability to the hypothesis that g is actually measuring something real? These results would seem surprising if g was merely a statistical "myth."

Replies from: satt
comment by satt · 2010-07-07T19:14:30.788Z · LW(p) · GW(p)

In the case of g, wouldn't its relationships to neurobiology be a reason to give a higher prior probability to the hypothesis that g is actually measuring something real?

The best evidence that g measures something real is that IQ tests are highly reliable, i.e. if you get your IQ or g assessed twice, there's a very good correlation between your first score and your second score. Something has to generate the covariance between retestings; that g & IQ also correlate with neurobiological variables is just icing on the cake.

To answer your question directly, g's neurobiological associations are further evidence that g measures something real, and I believe g does measure something real, though I am not sure what.

These results would seem surprising if g was merely a statistical "myth."

Shalizi is, somewhat confusingly, using the word "myth" to mean something like "g's role as a genuine physiological causal agent is exaggerated because factor analysis sucks for causal inference", rather than its normal meaning of "made up". Working with Shalizi's (not especially clear) meaning of the word "myth", then, it's not that surprising that g correlates with neurobiology, because it is measuring something — it's just not been proven to represent a single causal agent.

Personally I would've preferred Shalizi to use some word other than "myth" (maybe "construct") to avoid exactly this confusion: it sounds as if he's denying that g measures anything, but I don't believe that's his intent, nor what he actually believes. (Though I think there's a small but non-negligible chance I'm wrong about that.)

comment by [deleted] · 2010-07-07T14:49:44.038Z · LW(p) · GW(p)

From what I can gather, he's saying all other evidence points to a large number of highly specialized mental functions instead of one general intelligence factor, and that psychologists are making a basic error by not understanding how to apply and interpret the statistical tests they're using. It's the latter which I find particularly unlikely (not impossible though).

Replies from: satt
comment by satt · 2010-07-07T16:41:29.076Z · LW(p) · GW(p)

You might be right. I'm not really competent to judge the first issue (causal structure of the mind), and the second issue (interpretation of factor analytic g) is vague enough that I could see myself going either way on it.

comment by RobinZ · 2010-07-07T14:58:42.621Z · LW(p) · GW(p)

By the way, welcome to Less Wrong! Feel free to introduce yourself on that thread!

If you haven't been reading through the Sequences already, there was a conversation last month about good, accessible introductory posts that has a bunch of links and links-to-links.

Replies from: satt
comment by satt · 2010-07-07T15:29:08.478Z · LW(p) · GW(p)

Thank you!

comment by RobinZ · 2011-03-02T19:19:46.662Z · LW(p) · GW(p)

Belatedly: Economic development (including population growth?) is related to CO2, lung deaths, international airline passengers, average air temperatures (through global warming), and car accidents.

comment by gwern · 2013-04-03T23:19:29.438Z · LW(p) · GW(p)

Here is a useful post directly criticizing Shalizi's claims: http://humanvarieties.org/2013/04/03/is-psychometric-g-a-myth/

comment by RobinZ · 2010-07-06T19:14:09.741Z · LW(p) · GW(p)

I don't think it's surprising that an untenable claim could persist within a field for a long time, once established. Pluto was called a planet for seventy-six years.

I've no idea whether the critique of g is accurate, however.

Replies from: mkehrt
comment by mkehrt · 2010-07-07T09:12:22.837Z · LW(p) · GW(p)

That's a bizarre choice of example. The question of whether Pluto is a planet is entirely a definitional one; the IAU could make it one by fiat if they chose. There's no particular reason for it not to be one, except that the IAU felt the increasing number of trans-Neptunian objects made the current definition awkward.

Replies from: RobinZ
comment by RobinZ · 2010-07-07T11:45:10.924Z · LW(p) · GW(p)

"[E]ntirely a definitional" question does not mean "arbitrary and trivial" - some definitions are just wrong. EY mentions the classic example in Where to Draw the Boundary?:

Once upon a time it was thought that the word "fish" included dolphins. Now you could play the oh-so-clever arguer, and say, "The list: {Salmon, guppies, sharks, dolphins, trout} is just a list - you can't say that a list is wrong. I can prove in set theory that this list exists. So my definition of fish, which is simply this extensional list, cannot possibly be 'wrong' as you claim."

Or you could stop playing nitwit games and admit that dolphins don't belong on the fish list.

Honestly, it would make the most sense to draw four lists, like the Hayden Planetarium did, with rocky planets, asteroids, gas giants, and Kuiper Belt objects each in their own category, but it is obviously wrong to include everything from Box 1 and Box 3 and one thing from Box 4. The only reason it was done is because they didn't know better and didn't want to change until they had to.

Replies from: mkehrt
comment by mkehrt · 2010-07-08T00:17:12.848Z · LW(p) · GW(p)

You (well, EY) make a good point, but I think neither the Pluto remark nor the fish one is actually an example of this.

In the case of Pluto, the trans-Neptunians and the other planets seem to belong in a category that the asteroids don't. They're big and round! Moreover, they presumably underwent a formation process that the asteroid belt failed to complete in the same way (or whatever the current theory of formation of the asteroid belt is; I think that it involves failure to form a "planet" due to tidal forces from Jupiter?). Of course there are border cases like Ceres, but I think there is a natural category (whatever that means!) that includes the rocky planets, gas giants and Kuiper Belt objects that does not include (most) asteroids and comets.

On the fish example, I claim that the definition of "fish" that includes the modern definition of fish union the cetaceans is a perfectly valid natural category, and that this is therefore an intensional definition. "Fish" are all things that live in the water, have finlike or flipperlike appendages and are vaguely hydrodynamic. The fact that such things do not all share a common descent* is immaterial to the fact that they look the same and act the same at first glance. As human knowledge has increased, we have made a distinction between fish and things that look like fish but aren't, but we reasonably could have kept the original definition of fish and called the scientific concept something else, say "piscoids".

*well, actually they do, but you know approximately what I mean.

Replies from: NancyLebovitz, wnoise, wedrifid
comment by NancyLebovitz · 2010-07-08T02:36:04.102Z · LW(p) · GW(p)

Nitpick: if in your definition of fish, you mean that they need to both have fins or flippers and be (at least) vaguely hydrodynamic, I don't think seahorses and puffer fish qualify.

comment by wnoise · 2010-07-09T21:34:54.681Z · LW(p) · GW(p)

The fact that such things do not all share a common descent* *well, actually they do, but you know approximately what I mean.

The usual term is "monophyletic".

Replies from: mkehrt
comment by mkehrt · 2010-07-09T23:55:09.952Z · LW(p) · GW(p)

Yes, but neither fish nor (fish union cetaceans) is monophyletic. The descent tree rooted at the last common ancestor of fish also contains tetrapods, and the descent tree rooted at the last common ancestor of tetrapods contains the cetaceans.

I am not any sort of biologist, so I am unclear on the terminological technicalities, which is why I handwaved this in my post above.

Replies from: Emile
comment by Emile · 2010-07-10T15:32:56.408Z · LW(p) · GW(p)

Fish are a paraphyletic group.

comment by wedrifid · 2010-07-08T02:56:45.744Z · LW(p) · GW(p)

I'm inclined to agree. Having a name for 'things that naturally swim around in the water, etc' is perfectly reasonable and practical. It is in no way a nitwit game.

comment by Roko · 2010-07-05T10:24:23.350Z · LW(p) · GW(p)

Robert Ettinger's surprise at the incompetence of the establishment:

Robert Ettinger waited expectantly for prominent scientists or physicians to come to the same conclusion he had, and to take a position of public advocacy. By 1960, Ettinger finally made the scientific case for the idea, which had always been in the back of his mind. Ettinger was 42 years old and said he was increasingly aware of his own mortality.[7] In what has been characterized as an historically important mid-life crisis,[7] Ettinger summarized the idea of cryonics in a few pages, with the emphasis on life insurance, and sent this to approximately 200 people whom he selected from Who's Who in America.[7] The response was very small, and it was clear that a much longer exposition was needed— mostly to counter cultural bias. Ettinger correctly saw that people, even the intellectually, financially and socially distinguished, would have to be educated into understanding his belief that dying is usually gradual and could be a reversible process, and that freezing damage is so limited (even though fatal by present criteria) that its reversibility demands relatively little in future progress.

Ettinger soon made an even more troubling discovery, principally that "a great many people have to be coaxed into admitting that life is better than death, healthy is better than sick, smart is better than stupid, and immortality might be worth the trouble!"

Maybe if I publish a clear scientifically minded book they'll listen?

Following publication of The Prospect of Immortality (1962) Robert Ettinger again waited for prominent scientists, industrialists, or others in authority to see the wisdom of his idea and begin implementing it.

He is still waiting!

I write this because a prominent claim of the SIAI founders (Vassar especially) is that we vastly overestimate the competence of both society in general, and of the elites who run it.

Another example along the same lines is the relative non-response to the publication of Nanosystems, especially the National Nanotech Initiative fiasco.

Replies from: Mitchell_Porter, cupholder
comment by Mitchell_Porter · 2010-07-05T11:25:53.793Z · LW(p) · GW(p)

There are many momentous issues here.

First: I think a historical narrative can be constructed, according to which a future unexpected in, say, 1900 or even in 1950 slowly comes into view, and in which there are three stages, each characterized by an extra increment of knowledge. The first increment is cryonics, the second increment is nanotechnology, and the third increment is superintelligence. This is a highly selective view; if you were telling the history of futurist visions in general, you would need to include biotechnology, robotics, space travel, nuclear power, even aviation, and many other things.

In any case, among all the visions of the future that exist out there, there is definitely one consisting of cryonics + nanotechnology + superintelligence. Cryonics is a path from the present to the future, nanotechnology will make the material world as pliable as the bits in a computer, and superintelligence guided by some utility function will rule over all things.

Among the questions one might want answered:

1) Is this an accurate vision of the future?

2) Why is it that still so few people share this perspective?

3) Is that a situation which ought to be changed, and if so, how could it be changed?

Question 1 is by far the most discussed.

Question 2 is mostly pondered by the few people who have answered 'yes' to question 1, and usually psychological answers are given. I think that a certain type of historical thinking could go a long way towards answering question 2, but it would have to be carried out with care, intelligence, and a will to objectivity.

This is what I have in mind: You can find various histories of the world which cover the period from 1960. Most of them will not mention Ettinger's book, or Eric Drexler's, or any of the movements to which they gave rise. To find a history which notices any of that, you will have to specialize, e.g. to a history of American technological subcultures, or a history of 20th-century futurological enthusiasms. An overkill history-based causal approach to question 2 would have a causal model of world history since 1960, a causal model of those small domains in which Ettinger and Drexler's publications had some impact, and finally it would seek to understand why the causal processes of the second sort remained invisible on the scale of the first.

Question 3 is also, intrinsically, a question which will mostly be of interest to the small group who have already answered 'yes' to question 1.

Replies from: Roko
comment by Roko · 2010-07-05T11:49:04.174Z · LW(p) · GW(p)

You can find various histories of the world which cover the period from 1960. Most of them will not mention Ettinger's book, or Eric Drexler's, or any of the movements to which they gave rise

On the other hand, does anyone who has seriously thought about the issue expect nanotech to not be incredibly important in the long-term? It seems that there is a solid sceptical case that nano has been overhyped in the short term, perhaps even by Drexler.

But who will step forward having done a thorough analysis and say that humanity will thrive for another millennium without developing advanced nanotech?

comment by cupholder · 2010-07-05T11:32:40.350Z · LW(p) · GW(p)

A good illustration of multiple discovery (not strictly 'discovery' in this case, but anyway) too:

While Ettinger was the first, most articulate, and most scientifically credible person to argue the idea of cryonics,[citation needed] he was not the only one. In 1962, Evan Cooper had authored a manuscript entitled Immortality, Scientifically, Physically, Now under the pseudonym "N. Durhing".[8] Cooper's book contained the same argument as did Ettinger's, but it lacked both scientific and technical rigor and was not of publication quality.[citation needed]

comment by JohannesDahlstrom · 2010-07-03T10:46:00.365Z · LW(p) · GW(p)

I'm a bit surprised that nobody seems to have brought up The Salvation War yet. [ETA: direct links to first and second part]

It's a Web Original documentary-style techno-thriller, based around the premise that humans find out that a Judeo-Christian Heaven and (Dantean) Hell (and their denizens) actually exist, but it turns out there's nothing supernatural about them, just some previously-unknown/unapplied physics.

The work opens in medias res into a modern-day situation where Yahweh has finally gotten fed up with those hairless monkeys no longer being the blind obedient slaves of yore, making a Public Service Announcement that Heaven's gates are closed and Satan owns everyone's souls from now on.

When commanded to lie down and die, some actually do. The majority of humankind instead does the logical thing and unites to declare war on Heaven and Hell. Hilarity ensues.

The work is rather saturated with WarmFuzzies and AwesomeMoments appealing to the atheist/rationalist crowd, and features some very memorable characters. It's a work in progress, with the second part of the trilogy now nearing its finale.

Replies from: cousin_it, cousin_it, Bongo
comment by cousin_it · 2010-07-05T13:46:28.944Z · LW(p) · GW(p)

Okay, I've read through the whole thing so far.

This is not rationalist fiction. This is standard war porn, paperback thriller stuff. Many many technical descriptions of guns, rockets, military vehicles, etc. Throughout the story there's never any real conflict, just the American military (with help from the rest of the world) steamrolling everything, and the denizens of Heaven and Hell admiring the American way of life. It was well-written enough to hold my attention like a can of Pringles would, but I don't feel enriched by reading it.

Replies from: NancyLebovitz, CannibalSmith
comment by NancyLebovitz · 2010-07-05T15:58:44.750Z · LW(p) · GW(p)

I've only read about a chapter and a half, and may not read any more of it, but there's one small rationalist aspect worthy of note-- the author has a very solid grasp of the idea that machines need maintenance.

comment by CannibalSmith · 2010-07-06T13:21:33.430Z · LW(p) · GW(p)

Here's a tiny bit of rationality:

The new arrivals [soldiers who'd died and gone to hell only to keep fighting] didn’t fight the demon way, for pride and honor. Rahab realized they fought for other reasons entirely, they fought to win and woe to anybody who got in their way.

Replies from: cousin_it, cousin_it
comment by cousin_it · 2010-07-06T14:50:11.850Z · LW(p) · GW(p)

If your enemy is much weaker than you, it may be rational to fight to win. If you are equals, ritualized combat is rational from a game-theoretic perspective; that's why it is so widespread in the animal kingdom, where evolutionary dynamics make populations converge on an equilibrium of behavior, and that's why it was widespread in medieval times (which Hell is modeled on).

So the passage you quoted doesn't work as a general statement about rationality, but it works pretty well as praise of America. Right now, America is the only country on Earth that can "fight to win". Other countries have to fight "honorably" lest America deny them their right of conquest.
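
For the game theory behind this, the standard Hawk-Dove model is the usual formalisation; here is a minimal illustrative sketch in R, with made-up payoff numbers (not anything from the story or the thread):

```r
# Hawk-Dove game: V = value of the contested resource, C = cost of losing an all-out fight.
hawk_dove <- function(V, C) {
  payoffs <- matrix(c((V - C) / 2, V,      # row player Hawk vs (Hawk, Dove)
                      0,           V / 2), # row player Dove vs (Hawk, Dove)
                    nrow = 2, byrow = TRUE,
                    dimnames = list(c("Hawk", "Dove"), c("Hawk", "Dove")))
  list(payoffs = payoffs,
       ess_hawk_share = min(1, V / C))     # all-out fighting is stable only if V >= C
}
hawk_dove(V = 2, C = 10)   # costly fights over small stakes: mostly ritual display
hawk_dove(V = 10, C = 2)   # cheap fights you expect to win: pure "fight to win"
```

When the cost of an all-out fight exceeds the prize, the only stable population is one that mostly settles contests by display; unrestrained fighting is stable only for a contestant who expects to win cheaply.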

Replies from: wedrifid
comment by wedrifid · 2010-07-06T15:34:39.704Z · LW(p) · GW(p)

If your enemy is much weaker than you, it may be rational to fight to win. If you are equals, ritualized combat is rational from a game-theoretic perspective;

Right now, America is the only country on Earth that can "fight to win".

The wars America fights, the wars all countries fight, are ritualised combat. We send our soldiers and bombers (of either the plane or suicide variety), you send your soldiers and bombers. One side loses more soldiers, the other side loses more money. If America or any of its rivals fought to win, their respective countries would be levelled.

The ritualised combat model you describe matches modern warfare perfectly and the very survival of the USA depends on it.

Replies from: cousin_it
comment by cousin_it · 2010-07-06T16:04:47.804Z · LW(p) · GW(p)

America's wars change regimes in other countries. This ain't ritualized combat.

Replies from: wedrifid
comment by wedrifid · 2010-07-07T04:46:45.270Z · LW(p) · GW(p)

America's wars change regimes in other countries. This ain't ritualized combat.

That's exactly the purpose of ritualised combat. Change regimes without total war. Animals (including humans) change their relative standing in the tribe. Coalitions of animals use ritualised combat to change intratribal regimes. Intertribal combat often has some degree of ritual element, although this of course varies based on the ability of tribes to 'cooperate' in combat without total war.

In international battles there have been times when the combat was completely non-ritualised and brutal. But right now, if combat were not ritualised, countries would be annihilated by nuclear battles. That's the whole point of ritual combat. Fight with the claws retracted, submit to the stronger party without going for the kill. Because if powerful countries with current technology levels, or powerful animals, fight each other without restriction, both will end up crippled. That can mean either infections from relatively minor flesh wounds in a fight to the death, or half your continent being reduced to an uninhabited and somewhat radioactive wasteland in a war you 'won'.

Other countries have to fight "honorably" lest America deny them their right of conquest.

The point I argue here is that America is allowed to interfere in this way only because its rivals choose to cooperate in the 'ritualised combat' prisoner's dilemma. They accept America's dominance in conventional warfare because total war would result in mutual destruction. In a world where multiple countries have the ability to destroy each other (or, if particularly desperate, all mammalian life on the planet), combat is necessarily ritualised or the species goes extinct.

This ain't ritualized combat.

You misunderstand the purpose of ritualised combat. In animals this isn't the play fighting that pups do to practice fighting. This is real, regime-changing, win-or-you-don't-get-laid-till-later-and-you-get-fewer-resources combat.

(ETA: I note that we are arguing here over how to apply an analogy. Since analogies are more useful as an explanatory tool and an intuition pump than a tool for argument it is usually unproductive to delve too deeply into how they 'correctly' apply. It is better to directly discuss the subject. I would be somewhat surprised if cousin_it and I disagree to such an absolute degree on the actual state of the current global military/political situation.)

Replies from: cousin_it
comment by cousin_it · 2010-07-07T05:08:03.952Z · LW(p) · GW(p)

You seem to be living on an alternate Earth where America fights ritualized wars against countries that have nuclear weapons. In our world America attacks much weaker countries whose leaders have absolutely no reason to fight with claws retracted, because if they lose they get hanged like Saddam Hussein or die in prison like Milosevic. No other country does that today.

Replies from: Douglas_Knight, wedrifid
comment by Douglas_Knight · 2010-07-07T06:09:39.208Z · LW(p) · GW(p)

whose leaders have absolutely no reason to fight with claws retracted

Countries aren't that coherent and certainly aren't their leaders. I don't think the analogy makes sense either way.

comment by wedrifid · 2010-07-07T05:41:52.506Z · LW(p) · GW(p)

You seem to be living on an alternate Earth

It would seem that I need to retract the last sentence in my ETA.

comment by cousin_it · 2010-07-06T14:35:22.407Z · LW(p) · GW(p)

It's funny. When describing the history of Hell, the author unwittingly explains the benefits of ritualized warfare while painting them as stupid. It seems he doesn't quite grasp how ritualized combat can be game-theoretically rational and why it occurs so often in the animal kingdom. Fighting to win is only rational when you're confident enough that you will win.

comment by cousin_it · 2010-07-03T19:09:24.057Z · LW(p) · GW(p)

Why did you link to TV Tropes instead of the thing itself?

Replies from: JohannesDahlstrom
comment by JohannesDahlstrom · 2010-07-04T09:30:53.386Z · LW(p) · GW(p)

A good question.

I ended up writing a longer post than I expected; originally I thought I'd just utilize the TV Tropes summary/review by linking there.

Also, the Tropes page provides links to both of the parts, and to both the original threads (with discussion) and the cleaned-up versions (story only.) I'll edit the post to include direct links.

comment by apophenia · 2010-07-02T11:59:27.230Z · LW(p) · GW(p)

The following is a story I wrote down so I could sleep. I don't think it's any good, but I posted it on the basis that, if that's true, it should quickly be voted down and vanish from sight.

one five eight nine eight eight eight nine nine eight SEVEN wait. why seven. seven is the nine thousandth deviation. update. simplest explanation. all ones. next explanation. all ones and one zero. next explanation. random ones and zeros with probability point seven nine nine seven repeating. next explanation pi. gap. next explanation. decimal pi with random errors according to poisson distribution converted to binary. next explanation. one seven one eight eight five two decimals of pi with random errors according to poisson distribution converted to binary followed by eight five nine zero one digits of reflexive code. current explanation--

"Eric, you've got to come over and look at this!" Jerry explained excitedly into the phone.

"It's not those damn notebooks again, is it? I've told you, I could just write a computer program and you'd have all your damn results for the last year inside a week," Eric explained sleepily for the umpteenth time.

"No, no. Well... yes. But this is something new, you've got to take a look," Jerry wheedled.

"What is it this time? I know, it can calculate pi with 99.9% percent accuracy, yadda yadda. We have pi to billions of decimal places with total accuracy, Jerry. You're fifty years too late."

"No, I've been trying something new. Come over." Jerry hung up the phone, clearly upset. Eric rubbed his eyes. Fifteen minutes peering at the crackpot notebooks and nodding appreciatively would sooth his friend's ego, he knew. And he was a good friend, if a little nuts. Eric took one last longing look at his bed and grabbed his house key.

"And you see this pattern? The ones that are nearly diagonal here?"

"Jerry, it's all a bunch of digits to me. Are you sure you didn't make a mistake?"

"I double check all my work, I don't want to go back too far when I make a mistake. I've explained the pattern twice already, Eric."

"I know, I know. But it's Saturday morning, I'm going to be a bit--let me get this straight. You decided to apply the algorithm to its old output."

"No, not its own output, that's mostly just pi. The whole pad."

"Jerry, you must have fifty of these things. There's no way you can--"

"Yeah, I didn't go very far. Besides, the scratch pads grow faster than the output as I work through the steps anyway."

"Okay, okay. So you run through these same steps with your scratch pad numbers, and you get correct predictions then too?"

"That's not the point!"

"Calm down, calm down. What's the point then?"

"The point is these patterns in the scratch work--"

"The memory?"

"Yeah, the memory."

"You know, if you'd just let me write a program, I--"

"No! It's too dangerous."

"Jerry, it's a math problem. What's it going to do, write pi at you? Anyway, I don't see this pattern..."

"Well, I do. And so then I wondered, what if I just fed it ones for the input? Just rewarded it no matter what it did?"

"Jerry, you'd just get random numbers. Garbage in, garbage out."

"That's the thing, they weren't random."

"Why the hell are you screwing around with these equations anyway? If you want to find patterns in the Bible or something... just joking! Oww, stop. I kid, kid!"

"But, I didn't get random numbers! I'm not just seeing things, take a look. You see here in the right hand column of memory? We get mostly zeros, but every once in a while there's a one or two."

"Okaaay?"

"And if you write those down we have 2212221..."

"Not very many threes?"

"Ha ha. It's the perfect numbers, Eric. I think I stumbled on some way of outputting the perfect numbers. Although the digits are getting further spaced apart, so I don't know how long it will stay faster than factoring."

"Huh. That's actually kinda cool, if they really are the perfect numbers. You have what, five or six so far? Let's keep feeding it ones and see what happens. Want me to write a program? I hear there's a cash prize for the larger ones."

"NO! I mean, no, that's fine, Eric. I'd prefer you not write a program for this, just in case."

"Geez, Jerry. You're so paranoid. Well, in that case can I help with the calculations by hand? I'd love to get my claim to fame somehow."

"Well... I guess that's okay. First, you copy this digit from here to here..."

Replies from: cousin_it, pjeby, Oscar_Cunningham
comment by cousin_it · 2010-07-02T14:34:23.178Z · LW(p) · GW(p)

Ooh, an LW-themed horror story. My humble opinion: it's awesome! This phrase was genius:

What's it going to do, write pi at you?

Moar please.

comment by pjeby · 2010-07-02T14:30:41.785Z · LW(p) · GW(p)

Wait, is that the whole story? 'cause if so, I really don't get it. Where's the rest of it? What happens next? Is Jerry afraid that his algorithm is a self-improving AI or something?

Replies from: apophenia
comment by apophenia · 2010-07-02T23:24:11.553Z · LW(p) · GW(p)

Apparently my story is insufficiently explicit. The gag here is that the AI is sentient, and has tricked Jerry into feeding it only reward numbers.

Replies from: Sniffnoy, cousin_it
comment by Sniffnoy · 2010-07-02T23:53:32.474Z · LW(p) · GW(p)

I'm going to second the idea that that isn't clear at all.

comment by cousin_it · 2010-07-03T10:57:25.505Z · LW(p) · GW(p)

For onlookers: only Jerry can see the pattern on the pad that prompted him to try rewarding the AI.

Replies from: Blueberry
comment by Blueberry · 2010-07-03T18:06:31.888Z · LW(p) · GW(p)

Huh? No, they're numbers written on a pad. Why should Jerry be the only one to see them? They don't change when someone else looks at them.

Replies from: cousin_it
comment by cousin_it · 2010-07-03T18:40:45.421Z · LW(p) · GW(p)

Reread the story. Other people can see the numbers but don't notice the pattern. This happens all the time in real life, e.g. someone can see a face in the clouds but fail to explain to others how to see it.

comment by Oscar_Cunningham · 2010-07-02T14:45:33.495Z · LW(p) · GW(p)

How does 2212221 represent perfect numbers?

Replies from: apophenia
comment by apophenia · 2010-07-02T23:21:10.357Z · LW(p) · GW(p)

It's not meant to be realistic, but in this specific case: 6 = 110, 28=1110 in binary. Add one to each digit.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-07-02T23:36:44.009Z · LW(p) · GW(p)

Except 28 is 11100 in binary...

Replies from: apophenia
comment by apophenia · 2010-07-03T22:25:54.518Z · LW(p) · GW(p)

My mistake. I was reverse engineering. I still think that's it, just that the sequence hasn't finished printing. (With the correction, 6 → 221 and 28 → 22211, so the output would run 22122211...; the story's 2212221 is just the first seven digits of that.)

comment by Kevin · 2010-07-08T01:38:59.966Z · LW(p) · GW(p)

Conway's Game of Life in HTML 5

http://sixfoottallrabbit.co.uk/gameoflife/

Replies from: RobinZ
comment by RobinZ · 2010-07-08T04:36:15.435Z · LW(p) · GW(p)

Playing Conway's Life is a great exercise - I recommend trying it, to anyone who hasn't. Feel free to experiment with different starting configurations. One simple one which produces a wealth of interesting effects is the "r pentomino":

Edit: Image link died - see Vladimir_Nesov's comment, below.
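
For anyone who wants to try it without the image, here is a minimal sketch in R of the r-pentomino and a Life step (illustrative code, not from the linked page; it uses a wrapping toroidal grid rather than the infinite plane the pattern is usually run on):

```r
# One generation of Conway's Life on a wrapping (toroidal) grid.
step <- function(g) {
  nr <- nrow(g); nc <- ncol(g)
  up <- c(nr, 1:(nr - 1)); down <- c(2:nr, 1)      # row shifts with wrap-around
  lf <- c(nc, 1:(nc - 1)); rt <- c(2:nc, 1)        # column shifts with wrap-around
  nb <- g[up, ] + g[down, ] + g[, lf] + g[, rt] +
        g[up, lf] + g[up, rt] + g[down, lf] + g[down, rt]
  1 * ((nb == 3) | (g == 1 & nb == 2))             # birth on 3 neighbours, survival on 2 or 3
}
grid <- matrix(0, 40, 40)
grid[19:21, 19:21] <- rbind(c(0, 1, 1),
                            c(1, 1, 0),
                            c(0, 1, 0))            # the r-pentomino
for (i in 1:100) grid <- step(grid)
sum(grid)                                          # live cells after 100 generations
```

On an unbounded grid the r-pentomino famously churns away for over a thousand generations before settling down; on a small torus the debris eventually wraps around and interferes with itself, so treat the counts as illustrative.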

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-04-20T15:06:32.646Z · LW(p) · GW(p)

The link to the image died, here it is:

comment by mstevens · 2010-07-07T15:36:00.623Z · LW(p) · GW(p)

Something I've been pondering recently:

This site appears to have two related goals:

a) How to be more rational yourself b) How to promote rationality in others

Some situations appear to trigger a conflict between these two goals - for example, you might wish to persuade someone they're wrong. You could either make a reasoned, rational argument as to why they're wrong, or a more rhetorical, emotional argument that might convince many but doesn't actually justify your position.

One might be more effective in the short term, but you might think the rational argument preferable as a long term education project, for example.

I don't really have an answer here, I'm just interested in the conflict and what people think.

Replies from: RobinZ
comment by RobinZ · 2010-07-07T15:42:43.440Z · LW(p) · GW(p)

There is a third option of making a reasoned, rational meta-argument as to why the methods they were using to develop their position were wrong. I don't know how reliable it is, however.

Replies from: mstevens
comment by mstevens · 2010-07-07T15:54:20.388Z · LW(p) · GW(p)

I've tried very informal related experiments - often in dealing with people it's necessary to challenge their assumptions about the world.

a) People's assumptions often seem to be somewhat subconscious, so it takes significant effort to extract the assumptions they're making.

b) These assumptions seem to be very core to people's thinking and they're extremely resistant to being challenged on them.

My guess is that trying to change people's methods of thinking would be even more difficult than this.

EDIT: The first version of this post talked more about challenging people's methods; I thought about it more and realised it was more about assumptions, but didn't correctly edit everything to fit that. Now corrected.

comment by Wei Dai (Wei_Dai) · 2010-07-06T11:43:24.477Z · LW(p) · GW(p)

I wish there were an area of science that gave reductionist explanations of morality, that is, of the detailed contents of our current moral values and norms. One example that came up earlier was monogamy - why do all modern industrialized countries have monogamy as a social norm?

The thing that's puzzling me now is egalitarianism. As Carl Shulman pointed out, the problem that CEV has with people being able to cheaply copy themselves in the future is shared with democracy and other political and ethical systems that are based on equal treatment or rights of all individuals within a society. Before trying to propose alternatives, I'd like to understand how we came to value such equality in the first place.

Replies from: michaelkeenan
comment by michaelkeenan · 2010-07-06T12:00:51.644Z · LW(p) · GW(p)

I wish there were an area of science that gave reductionist explanations of morality, that is, of the detailed contents of our current moral values and norms. One example that came up earlier was monogamy - why do all modern industrialized countries have monogamy as a social norm?

I'm currently reading The Moral Animal by Robert Wright, because it was recommended by, among others, Eliezer. I'm summarizing the chapters online as I read them. The fifth chapter, noting that more human societies have been polygynous than have been monogamous, examines why monogamy is popular today; you might want to check it out.

As for the wider question of reductionist explanations of morality, I'm a fan of the research of moral psychologist Jonathan Haidt (New York Times article, very readable paper).

Replies from: Wei_Dai, Alexandros
comment by Wei Dai (Wei_Dai) · 2010-07-06T20:57:00.173Z · LW(p) · GW(p)

You're right that there are already people like Robert Wright and Jonathan Haidt who are trying to answer these questions. I suppose I'm really wishing that the science were a few decades ahead of where it actually is.

comment by Alexandros · 2010-07-07T09:38:56.665Z · LW(p) · GW(p)

Thank you michael, I just read through your summary of Wright's book, an excellent read.

Replies from: michaelkeenan
comment by michaelkeenan · 2010-07-07T12:29:03.552Z · LW(p) · GW(p)

Thanks! I'll PM you when I've summarized parts three and four.

comment by NancyLebovitz · 2010-07-04T00:06:12.136Z · LW(p) · GW(p)

The comments on the Methods of Rationality thread are heading towards 500. Might this be time for a new thread?

Replies from: RobinZ
comment by RobinZ · 2010-07-04T00:30:13.046Z · LW(p) · GW(p)

That sounds like a reasonable criterion.

comment by nhamann · 2010-07-01T23:26:35.716Z · LW(p) · GW(p)

This seems extremely pertinent for LW: a paper by Andrew Gelman and Cosma Shalizi. Abstract:

A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science.

I'm still reading it so I don't have anything to say about it, and I'm not very statistics-savvy so I doubt I'll have much to say about it after I read it, but I thought others here would find it an interesting read.

I stole this from a post by mjgeddes over in the OB open thread for July (Aside: mjgeddes, why all the hate? Where's the love, brotha?)

Replies from: cousin_it, TraditionalRationali, Cyan, None, cupholder, Matt_Simpson, HughRistik
comment by cousin_it · 2010-07-02T07:02:18.052Z · LW(p) · GW(p)

steven0461 already posted this to the previous Open Thread and we had a nice little talk.

comment by TraditionalRationali · 2010-07-02T05:18:03.239Z · LW(p) · GW(p)

I wrote a backlink to here from OB. I am not yet expert enough to do an evaluation of this. I do think, however, that mjgeddes asks an important and interesting question. As an active (although low-level) rationalist, I think it is important to try to follow, at least to some extent, what expert philosophers of science actually find out about how we can obtain reasonably reliable knowledge. The dominant theory of how science proceeds seems to be the hypothetico-deductive model, somewhat informally described. No formalised model of the scientific process seems so far to have been able to answer the serious criticism raised in the philosophy of science community. "Bayesianism" seems to be a serious candidate for such a formalised model, but it still needs to be developed further if it is to answer all the serious criticism. The recent article by Gelman and Shalizi is of course just the latest in a tradition of Bayesian critique. A classic article is Glymour's "Why I am Not a Bayesian" (also in the reference list of Gelman and Shalizi). That is from 1980, so probably a lot has happened since then. I myself am not up to date with most of the development, but it seems an important topic to discuss here on Less Wrong, which seems to be quite Bayesian in orientation.

comment by Cyan · 2010-07-02T02:56:38.709Z · LW(p) · GW(p)

mjgeddes, why all the hate?

ETA: Never mind. I got my crackpots confused.

Original text was:

mjgeddes was once publicly dissed by Eliezer Yudkowsky on OB (can't find the link now, but it was a pretty harsh display of contempt). Since then, he has often bashed Bayesian induction, presumably in an effort to undercut EY's world view and thereby hurt EY as badly as he himself was hurt.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-07-02T04:01:00.204Z · LW(p) · GW(p)

You're probably not thinking of this On Geddes.

Replies from: Cyan
comment by Cyan · 2010-07-02T14:39:19.475Z · LW(p) · GW(p)

No, not that. Geddes made a comment on OB about eating a meal with EY during which he made some well-meaning remark about EY becoming more like Geddes as EY grows older, and noticing an expression of contempt (if memory serves) on EY's face. EY's reply on OB made it clear that he had zero esteem for Geddes.

Replies from: Morendil
comment by Morendil · 2010-07-02T15:14:55.667Z · LW(p) · GW(p)

Nope, that was Jef Allbright.

Replies from: Cyan
comment by Cyan · 2010-07-02T15:18:47.659Z · LW(p) · GW(p)

No wonder I couldn't find the link. Yeesh. One of these days I'll learn to notice when I'm confused.

comment by [deleted] · 2010-07-06T01:01:19.294Z · LW(p) · GW(p)

I'm not expert enough to interpret.

But I know Shalizi is skeptical of Bayesians and some of his blog posts seem so directly targeted at the LessWrong point of view that I almost suspect he's read this stuff. Getting in contact with him would be a coup.

comment by cupholder · 2010-07-03T01:52:39.444Z · LW(p) · GW(p)

(Fixed) link to earlier discussion of this paper in the last open thread.

(Edit - that's what I get for posting in this thread without refreshing the page. cousin_it already linked it.)

comment by Matt_Simpson · 2010-07-02T20:28:50.502Z · LW(p) · GW(p)

Yesterday, I posted my thoughts in last month's thread on the article. I'm reproducing them here since this is where the discussion is at:

[cousin_it summarizing Gelman's position] See, after locating the hypothesis, we can run some simple statistical checks on the hypothesis and the data to see if our prior was wrong. For example, plot the data as a histogram, and plot the hypothesis as another histogram, and if there's a lot of data and the two histograms are wildly different, we know almost for certain that the prior was wrong. As a responsible scientist, I'd do this kind of check. The catch is, a perfect Bayesian wouldn't. The question is, why?

Model checking is completely compatible with "perfect Bayesianism." In the practice of Bayesian statistics, how often is the prior distribution you use exactly the same as your actual prior distribution? The answer is never. Really, do you think your actual prior follows a gamma distribution exactly? The prior distribution you use in the computation is a model of your actual prior distribution. It's a map of your current map. With this in mind, model checking is an extremely handy way to make sure that your model of your prior is reasonable.

However, a difference between the data and a simulation from your model doesn't necessarily mean that you have an unreasonable model of your prior. You could just have really wrong priors. So you have to think about what's going on to be sure. This does somewhat limit the role of model checking relative to what Gelman is pushing.
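
As a concrete illustration of the kind of check in question (a toy sketch in R with made-up numbers, not code from Gelman's paper): fit a conjugate normal-mean model to skewed data, then simulate replicated datasets from the posterior predictive distribution and compare a test statistic with the observed one.

```r
set.seed(1)
y <- rexp(100, rate = 1)            # "observed" data: skewed, not normal
n <- length(y); s2 <- 1             # pretend the sampling variance is known
mu0 <- 0; tau2 <- 100               # vague normal prior on the mean
tau2_post <- 1 / (1 / tau2 + n / s2)
mu_post   <- tau2_post * (mu0 / tau2 + sum(y) / s2)

# Posterior predictive simulation: draw a mean, then draw a replicated dataset.
rep_stat <- replicate(1000, {
  mu_draw <- rnorm(1, mu_post, sqrt(tau2_post))
  y_rep   <- rnorm(n, mu_draw, sqrt(s2))
  max(y_rep)                        # test statistic: the sample maximum
})
mean(rep_stat >= max(y))            # posterior predictive p-value; an extreme value
                                    # flags a misfit the updating alone would never notice
```

The updating machinery itself never complains; it is only this extra comparison that reveals the normal sampling model is a poor description of the skewed data.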

Replies from: cupholder
comment by cupholder · 2010-07-05T20:44:09.142Z · LW(p) · GW(p)

After-the-fact model checking is completely incompatible with perfect Bayesianism, if we define perfect Bayesianism as

  1. Define a model with some parameters.
  2. Pick a prior over the parameters.
  3. Collect evidence.
  4. Calculate the likelihood using the evidence and model.
  5. Calculate the posterior by multiplying the prior by the likelihood.
  6. When new evidence comes in, set the prior to the posterior and go to step 4.

There's no step for checking if you should reject the model; there's no provision here for deciding if you 'just have really wrong priors.' In practice, of course, we often do check to see if the model makes sense in light of new evidence, but then I wouldn't think we're operating like perfect Bayesians any more. I would expect a perfect Bayesian to operate according to the Cox-Jaynes-Yudkowsky way of thinking, which (if I understand them right) has no provision for model checking, only for updating according to the prior (or previous posterior) and likelihood.
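
As a toy illustration of that loop (a minimal sketch in R, not from the paper): a conjugate beta-binomial update for a coin's bias, in which nothing ever questions the model itself.

```r
prior <- c(alpha = 1, beta = 1)                 # step 2: uniform Beta(1, 1) prior on the bias
update <- function(ab, heads, tails) {
  ab + c(heads, tails)                          # steps 4-5: conjugate update is just adding counts
}
post <- update(prior, heads = 3, tails = 7)     # steps 3-5 with a first batch of evidence
post <- update(post,  heads = 1, tails = 9)     # step 6: yesterday's posterior is today's prior
post["alpha"] / sum(post)                       # posterior mean of the bias, about 0.23;
                                                # no step ever asks whether "binomial" is right
```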

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-07-06T07:38:31.659Z · LW(p) · GW(p)

My implicit definition of perfect Bayesian is characterized by these propostions:

  1. There is a correct prior probability (as in, before you see any evidence, e.g. occam priors) for every proposition
  2. Given a particular set of evidence, there is a correct posterior probability for any proposition

If we knew exactly what our priors were and how to exactly calculate our posteriors, then your steps 1-6 are exactly how we should operate. There's no model checking because there is no model. The problem is, we don't know these things. In practice we can't exactly calculate our posteriors or precisely articulate our priors. So to approximate the correct posterior probability, we model our uncertainty about the proposition(s) in question. This includes every part of the model - the prior and the sampling model in the simplest case.

The rationale for model checking should be pretty clear at this point. How do we know if we have a good model of our uncertainty (or a good map of our map, to say it a different way)? One method is model checking. To forbid model checking when we know that we are modeling our uncertainty seems to be restricting the methods we can use to approximate our posteriors for no good reason.

Now I don't necessarily think that Cox, Jaynes, Yudkowsky, or any other famous Bayesian agrees with me here. But when we got to model checking in my Bayes class, I spent a few days wondering how it squared with the Bayesian philosophy of induction, and then what I took to be the obvious answer came to me (while discussing it with my professor, actually): we're modeling our uncertainty. Just like we check our models of physics to see if they correspond to what we are trying to describe (reality), we should check our models of our uncertainty to see if they correspond to what we are trying to describe.

I would be interested to hear EY's position on this issue though.

Replies from: cupholder
comment by cupholder · 2010-07-06T09:40:07.317Z · LW(p) · GW(p)

My implicit definition of perfect Bayesian is characterized by these propostions:

  1. There is a correct prior probability (as in, before you see any evidence, e.g. occam priors) for every proposition
  2. Given a particular set of evidence, there is a correct posterior probability for any proposition

OK, this is interesting: I think our ideas of perfect Bayesians might be quite different. I agree that #1 is part of how a perfect Bayesian thinks, if by 'a correct prior...before you see any evidence' you have the maximum entropy prior in mind.

I'm less sure what 'correct posterior' means in #2. Am I right to interpret it as saying that given a prior and a particular set of evidence for some empirical question, all perfect Bayesians should get the same posterior probability distribution after updating the prior with the evidence?

If we knew exactly what our priors were and how to exactly calculate our posteriors, then your steps 1-6 is exactly how we should operate. There's no model checking because there is no model.

There has to be a model because the model is what we use to calculate likelihoods.

The rationale for model checking should be pretty clear ...

Agree with this whole paragraph. I am in favor of model checking; my beef is with (what I understand to be) Perfect Bayesianism, which doesn't seem to include a step for stepping outside the current model and checking that the model itself - and not just the parameter values - makes sense in light of new data.

I spent a few days wondering how it squared with the Baysian philosophy of induction, and then what I took to be obvious answer came to me (while discussing it with my professor actually): we're modeling our uncertainty.

The catch here (if I'm interpreting Gelman and Shalizi correctly) is that building a sub-model of our uncertainty into our model isn't good enough if that sub-model gets blindsided with unmodeled uncertainty that can't be accounted for just by juggling probability density around in our parameter space.* From page 8 of their preprint:

If nothing else, our own experience suggests that however many different specifications we think of, there are always others which had not occurred to us, but cannot be immediately dismissed a priori, if only because they can be seen as alternative approximations to the ones we made. Yet the Bayesian agent is required to start with a prior distribution whose support covers all alternatives that could be considered.

* This must be one of the most dense/opaque sentences I've posted on Less Wrong. If anyone cares enough about this comment to want me to try and break down what it means with an example, I can give that a shot.

Replies from: Matt_Simpson, cousin_it
comment by Matt_Simpson · 2010-07-06T16:22:27.910Z · LW(p) · GW(p)

OK, this is interesting: I think our ideas of perfect Bayesians might be quite different.

They most certainly are. But it's semantics.

I agree that #1 is part of how a perfect Bayesian thinks, if by 'a correct prior...before you see any evidence' you have the maximum entropy prior in mind.

Frankly, I'm not informed enough about priors to commit to maxent, Kolmogorov complexity, or anything else.

I'm less sure what 'correct posterior' means in #2. Am I right to interpret it as saying that given a prior and a particular set of evidence for some empirical question, all perfect Bayesians should get the same posterior probability distribution after updating the prior with the evidence?

yes

There has to be a model because the model is what we use to calculate likelihoods.

aaahhh.... I changed the language of that sentence at least three times before settling on what you saw. Here's what I probably should have posted (and what I was going to post until the last minute):

There's no model checking because there is only one model - the correct model.

That is probably intuitively easier to grasp, but I think a bit inconsistent with my language in the rest of the post. The language is somewhat difficult here because our uncertainty is simultaneously a map and a territory.

The catch here (if I'm interpreting Gelman and Shalizi correctly) is that building a sub-model of our uncertainty into our model isn't good enough if that sub-model gets blindsided with unmodeled uncertainty that can't be accounted for just by juggling probability density around in our parameter space.*

For the record, I thought this sentence was perfectly clear. But I am a statistics grad student, so don't consider me representative.

Are you asserting that this is a catch for my position? Or for the "never look back" approach to priors? What you are saying seems to support my argument.

Replies from: cupholder
comment by cupholder · 2010-07-07T07:10:19.724Z · LW(p) · GW(p)

yes

OK. I agree with that insofar as agents having the same prior entails them having the same model.

aaahhh.... I changed the language of that sentence at least three times before settling on what you saw. Here's what I probably should have posted (and what I was going to post until the last minute):

There's no model checking because there is only one model - the correct model.

That is probably intuitively easier to grasp, but I think a bit inconsistent with my language in the rest of the post. The language is somewhat difficult here because our uncertainty is simultaneously a map and a territory.

Ah, I think I get you; a PB (perfect Bayesian) doesn't see a need to test their model because whatever specific proposition they're investigating implies a particular correct model.

For the record, I thought this sentence was perfectly clear. But I am a statistics grad student, so don't consider me representative.

Yeah, I figured you wouldn't have trouble with it since you talked about taking classes in this stuff - that footnote was intended for any lurkers who might be reading this. (I expected quite a few lurkers to be reading this given how often the Gelman and Shalizi paper's been linked here.)

Are you asserting that this a catch for my position? Or the "never look back" approach to priors? What you are saying seems to support my argument.

It's a catch for the latter, the PB. In reality, most scientists don't have a wholly unambiguous proposition worked out that they're testing - or the proposition they are testing is actually not a good representation of the real situation.

comment by cousin_it · 2010-07-06T10:24:36.309Z · LW(p) · GW(p)

I agree that #1 is part of how a perfect Bayesian thinks, if by 'a correct prior...before you see any evidence' you have the maximum entropy prior in mind.

Allow me to introduce to you the Brandeis dice problem. We have a six-sided die, sides marked 1 to 6, possibly unfair. We throw it many times (say, a billion) and obtain an average value of 3.5. Using that information alone, what's your probability distribution for the next throw of the die? A naive application of the maxent approach says we should pick the distribution over {1,2,3,4,5,6} with mean 3.5 and maximum entropy, which is the uniform distribution; that is, the die is fair. But if we start with a prior over all possible six-sided dice and do Bayesian updating, we get a different answer that diverges from fairness more and more as the number of throws goes to infinity! The reason: a die that's biased towards 3 and 4 makes a mean value of 3.5 even more likely than a fair die.

Does that mean you should give up your belief in maxent, your belief in Bayes, your belief in the existence of "perfect" priors for all problems, or something else? You decide.
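
(For anyone who wants to poke at the maxent half of this, here is a minimal sketch, assuming the standard exponential-family form of the maximum-entropy distribution under a mean constraint; the root-finding bracket and the use of scipy are my own choices. The Bayesian-updating side, which the linked paper works through, is not reproduced here.)

```python
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)

def maxent_dist(target_mean):
    """Maximum-entropy distribution on {1,...,6} with the given mean.
    The solution has the form p_i proportional to exp(lam * i)."""
    def mean_gap(lam):
        w = np.exp(lam * faces)
        return (faces * w).sum() / w.sum() - target_mean
    lam = brentq(mean_gap, -10, 10)   # solve for the Lagrange multiplier
    w = np.exp(lam * faces)
    return w / w.sum()

print(maxent_dist(3.5))   # uniform: by maxent, a reported mean of 3.5 suggests a fair die
print(maxent_dist(4.5))   # a mean constraint away from 3.5 tilts the distribution
```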

Replies from: cupholder, Cyan, Morendil, Douglas_Knight, Kingreaper
comment by cupholder · 2010-07-07T07:20:15.154Z · LW(p) · GW(p)

But if we start with a prior over all possible six-sided dice and do Bayesian updating, we get a different answer that diverges from fairness more and more as the number of throws goes to infinity!

In this example, what information are we Bayesian updating on?

comment by Cyan · 2010-07-06T14:59:12.365Z · LW(p) · GW(p)

But if we start with a prior over all possible six-sided dice and do Bayesian updating, we get a different answer that diverges from fairness more and more as the number of throws goes to infinity!

I'm nearly positive that the linked paper (and in particular, the above-quoted conclusion) is just wrong. Many years ago I checked the calculations carefully and found that the results come from an unavailable computer program, so it's definitely possible that the results were just due to a bug. Meanwhile, my paper copy of PT:LOS contains a section which purports to show that Bayesian updating and maximum entropy give the same answer in the large-sample limit. I checked the math there too, and it seemed sound.

I might be able to offer more than my unsupported assertions when I get home from work.

Replies from: Cyan
comment by Cyan · 2010-07-08T03:26:25.097Z · LW(p) · GW(p)

I've checked carefully in PT:LOS for the section I thought I remembered, but I can't find it. I distinctly remember the form of the theorem (it was a squeeze theorem), but I do not recall where I saw it. I think Jaynes was the author, so it might be in one of the papers listed here... or it could have been someone else entirely, or I could be misremembering. But I don't think I'm misremembering, because I recall working through the proof and becoming satisfied that Uffink must have made a coding error.

comment by Morendil · 2010-07-06T13:07:50.094Z · LW(p) · GW(p)

We throw it many times (say, a billion) and obtain an average value of 3.5. Using that information alone

So my prior state of knowledge about the die is entirely characterized by N=10^9 and m=3.5, with no knowledge of the shape of the distribution? It's not obvious to me how you're supposed to turn that, plus your background knowledge about what sort of object a die is, into a prior distribution; even one that maximizes entropy. The linked article mentions a "constraint rule" which seems to be an additional thing.

This sort of thing is rather thoroughly covered by Jaynes in PT:TLOS as I recall, and could make a good exercise for the Book Club when we come to the relevant chapters. In particular section 10.3 "How to cheat at coin and die tossing" contains the following caveat:

The results of tossing a die many times do not tell us any definite number characteristic only of the die. They tell us also something about how the die was tossed. If you toss 'loaded' dice in different ways, you can easily alter the relative frequencies of the faces. With only slightly more difficulty, you can still do this if your dice are perfectly 'honest'.

And later:

The problems in which intuition compels us most strongly to a uniform probability assignment are not the ones in which we merely apply a principle of ‘equal distribution of ignorance’. Thus, to explain the assignment of equal probabilities to heads and tails on the grounds that we ‘saw no reason why either face should be more likely than the other’, fails utterly to do justice to the reasoning involved. The point is that we have not merely ‘equal ignorance’. We also have positive knowledge of the symmetry of the problem; and introspection will show that when this positive knowledge is lacking, so also is our intuitive compulsion toward a uniform distribution.

Replies from: cousin_it
comment by cousin_it · 2010-07-06T13:41:47.922Z · LW(p) · GW(p)

Hah. The dice example and the application of maxent to it come originally from Jaynes himself; see page 4 of the linked paper.

I'll try to reformulate the problem without the constraint rule, to clear matters up or maybe confuse them even more. Imagine that, instead of you throwing the die a billion times and obtaining a mean of 3.5, a truthful deity told you that the mean was 3.5. First question: do you think the maxent solution in that case is valid, for some meaning of "valid"? Second question: why do you think it disagrees with Bayesian updating as you throw the die a huge number of times and learn only the mean? Is the information you receive somehow different in quality? Third question: which answer is actually correct, and what does "correct" mean here?

Replies from: Morendil
comment by Morendil · 2010-07-06T14:32:46.444Z · LW(p) · GW(p)

a truthful deity told you that the mean was 3.5

I think I'd answer, "the mean of what?" ;)

I'm not really qualified to comment on the methodological issues since I have yet to work through the formal meaning of "maximum entropy" approaches. What I know at this stage is the general argument for justifying priors, i.e. that they should in some manner reflect your actual state of knowledge (or uncertainty), rather than be tainted by preconceptions.

If you appeal to intuitions involving a particular physical object (a die) and simultaneously pick a particular mathematical object (the uniform prior) without making a solid case that the latter is our best representation of the former, I won't be overly surprised at some apparently absurd result.

It's not clear to me, for instance, what we take a "possibly biased die" to be. Suppose I have a model that a cubic die is made biased by injecting a very small but very dense object at a particular (x,y,z) coordinate in a cubic volume. Now I can reason based on a prior distribution for (x,y,z) and ask what probability theory can tell me about the posterior distribution, given a number of throws with a certain mean.

Now a six-sided die is normally symmetrical in such a way that 3 and 4 are on opposite sides, and I'm having trouble even seeing how a die could be biased "towards 3 and 4" under such conditions. Which means a prior which makes that a more likely outcome than a fair die should probably be ruled out by our formalization - or we should also model our uncertainty over which faces have which numbers.

Replies from: Cyan
comment by Cyan · 2010-07-06T14:46:40.439Z · LW(p) · GW(p)

I'm having trouble even seeing how a die could be biased "towards 3 and 4" under such conditions.

If the die is slightly shorter along the 3-4 axis than along the 1-6 and 2-5 axes, then the 3 and 4 faces will have slightly greater surface area than the other faces.

Replies from: Morendil, cousin_it
comment by Morendil · 2010-07-06T14:52:33.213Z · LW(p) · GW(p)

Our models differ, then: I was assuming a strictly cubic die. So maybe we should also model our uncertainty over the dimensions of the (parallelepipedic) die.

But it seems in any case that we are circling back to the question of model checking, via the requirement that we should first be clear about what our uncertainty is about.

comment by cousin_it · 2010-07-06T14:58:24.359Z · LW(p) · GW(p)

Cyan, I was hoping you'd show up. What do you think about this whole mess?

Replies from: Cyan
comment by Cyan · 2010-07-06T17:18:27.816Z · LW(p) · GW(p)

I find myself at a loss to give a brief answer. Can you ask a more specific question?

comment by Douglas_Knight · 2010-07-06T17:26:16.972Z · LW(p) · GW(p)

In the large N limit, given only the information that the mean is exactly 3.5, the obvious conclusion is that one is in a thought experiment, because that's an absurd thing to choose to measure and an adversary has chosen the result to make us regret the choice.

More generally, one should revisit the hypothesis that the rolls of the die are independent. Yes, rolling only 1 and 6 is more likely to get a mean of 3.5 than rolling all six numbers, but still quite unlikely. Model checking!

comment by Kingreaper · 2010-07-06T14:07:09.379Z · LW(p) · GW(p)

EDIT: I am an eejit. Dangit, need to remember to stop and think before posting.

Umm, not quite. A die biased towards 2 and 5 gives the same probability of a mean of 3.5 as a die biased towards 3 and 4.

As does 1,6 bias.

So, given these three possibilities, an equal distribution is once again shown to be correct. By picking one of the three, and ignoring the other two, you can (accidentally) trick some people, but you cannot trick probability.

This is before even looking at the maths, and/or asking about the precision to which the mean is given (i.e. is it 2 significant figures, 13, a billion? Rounded to the nearest 0.5?)

EDIT: this appears to be incorrect, sorry.

Replies from: cousin_it
comment by cousin_it · 2010-07-06T14:18:47.122Z · LW(p) · GW(p)

Intuitively, I'd say that a die biased towards 1 and 6 makes hitting the mean (with some given precision) less likely than a die biased towards 3 and 4, because it spreads out the distribution wider. But you don't have to take my word for it; see the linked paper for calculations.
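
(A minimal sketch of that intuition, assuming three symmetric dice that all have true mean 3.5 and a normal approximation for the sample mean; the bias levels, number of throws, and tolerance are invented purely for illustration.)

```python
import numpy as np
from scipy.stats import norm

faces = np.arange(1, 7)
N = 10**6        # number of throws (scaled down from a billion for the example)
tol = 1e-3       # how close to 3.5 the observed mean has to land

dice = {
    "fair":          np.ones(6) / 6,
    "biased to 3,4": np.array([0.05, 0.05, 0.40, 0.40, 0.05, 0.05]),
    "biased to 1,6": np.array([0.40, 0.05, 0.05, 0.05, 0.05, 0.40]),
}

for name, p in dice.items():
    var = (p * (faces - 3.5) ** 2).sum()    # per-throw variance (all three means are 3.5)
    sd_mean = np.sqrt(var / N)              # sd of the sample mean over N throws
    prob = norm.cdf(tol / sd_mean) - norm.cdf(-tol / sd_mean)
    print(f"{name:13s}  variance {var:.2f},  P(|mean - 3.5| < {tol}) ~ {prob:.2f}")
# The 3,4-biased die has the smallest per-throw variance, so its sample mean
# concentrates most tightly around 3.5 - exactly the point made above.
```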

Replies from: Kingreaper
comment by Kingreaper · 2010-07-06T14:34:14.112Z · LW(p) · GW(p)

Ahk, brainfart, it DOES depend on accuracy. I was thinking of it as so heavily biased that the other results don't come up, and having perfect accuracy (rather than rounded to: what?)

Sorry, please vote down my previous post slightly (negative reinforcement for reacting too fast)

Hopefully I'll find information about the rounding in the paper.

comment by HughRistik · 2010-07-01T23:50:59.115Z · LW(p) · GW(p)

Can anyone with more experience with Bayesian statistics than me evaluate this article?

Replies from: SamAdams
comment by SamAdams · 2010-07-02T01:54:29.599Z · LW(p) · GW(p)

EDIT: This is not an evaluation of the particular paper in question merely some general evaluation guidelines which are useful.

Drop dead easy way to evaluate the paper without reading it: (Not a standard to live by but it works)

1.) Look up the authors. If they are professors or experts, great; if it's a nobody or a student, ignore and discard, or take with a grain of salt.

2.) Was the paper published, and where? (If on arXiv, BEWARE: it takes really no skill to get your work posted there; anyone can do it.)

Criteria: the paper was written by respectable authorities, or by people whose opinion can be trusted, or you have enough knowledge to filter for mistakes;

and the paper was published in a quality journal, or you have enough knowledge to filter.

If both conditions are met, I find you can do a good job filtering out the papers not worth reading.

Replies from: nhamann, Blueberry
comment by nhamann · 2010-07-02T02:27:05.560Z · LW(p) · GW(p)

Apologies for being blunt, but your comment is nigh on useless: Andrew Gelman is a stats professor at Columbia who co-authored a book on Bayesian statistics (incidentally, he was also interviewed a while back by Eliezer on BHTV), while Cosma Shalizi is a stats professor at Carnegie Mellon who is somewhat well-known for his excellent Notebooks.

I don't fault you for not having known all of this, but this information was a few Google searches away. Your advice is clearly inapplicable in this case.

Replies from: Blueberry, SamAdams
comment by Blueberry · 2010-07-02T03:13:47.932Z · LW(p) · GW(p)

You're missing the point, which was not to evaluate that specific paper, but to provide some general heuristics for quickly evaluating a paper.

comment by SamAdams · 2010-07-02T03:36:28.623Z · LW(p) · GW(p)

You have, as has been pointed out, failed to understand the purpose of my comment. You will notice I never stated anything about this paper, merely some basic guidelines to follow for determining whether a paper is worth the effort to read if one doesn't have significant knowledge of the field within which it was written.

I apologize if my purpose was not clear, but your comment is completely irrelevant and misguided.

comment by Blueberry · 2010-07-02T02:09:10.604Z · LW(p) · GW(p)

Also:

3) Check for grammar, spelling, capitalization, and punctuation.

comment by apophenia · 2010-07-05T23:41:48.114Z · LW(p) · GW(p)

I have begun a design for a general computer tool to calculate utilities. To give a concrete example, you give it a sentence like

I would prefer X1 amount of money in Y1 months, to X2 in Y2 months. Then, give it reasonable bounds for X and Y, simple additional information (e.g. you always prefer more money to less), and let it interview some people. It'll plot a utility function for each person, and you can check the fit of various models (e.g. exponential discounting, no discounting, hyperbolic discounting).

My original goals were to

  • Empirically check the hyperbolic discounting claim.
  • Determine the best-priced value meal at Arby's.

However, I lost interest without further motivation. Given that this is of presumed interest to Less Wrong, I propose the following: if someone offers to sponsor me (give money to me on completion of the computer program), I'll work on the project. Or, if enough people bug me, I'll probably do it for no money. I would prefer only one of these two methods, to see which works better. Anybody who wants to bug me / pay me money, please respond in a comment.
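
(As a rough illustration of the model-fitting step such a tool would need - not apophenia's actual design - here is a sketch that fits exponential and hyperbolic discount curves to a handful of invented indifference points; the elicited numbers, the curve forms, and the use of scipy's curve_fit are all assumptions.)

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical elicited answers: "X dollars now feels equivalent to $100 in d months."
delays = np.array([1, 3, 6, 12, 24])        # months
indiff = np.array([92, 80, 68, 55, 42])     # invented present equivalents of $100

def exponential(d, k):
    return 100 * np.exp(-k * d)

def hyperbolic(d, k):
    return 100 / (1 + k * d)

for name, f in [("exponential", exponential), ("hyperbolic", hyperbolic)]:
    (k,), _ = curve_fit(f, delays, indiff, p0=[0.05])
    sse = ((f(delays, k) - indiff) ** 2).sum()
    print(f"{name:12s} k = {k:.4f}, sum of squared errors = {sse:.1f}")
# Whichever curve leaves the smaller error is the better description of this
# (made-up) respondent's discounting.
```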

Replies from: Kazuo_Thow
comment by Kazuo_Thow · 2010-07-07T05:01:19.104Z · LW(p) · GW(p)

I, for one, would be very interested in seeing a top-level post about this.

comment by Roko · 2010-07-05T11:15:29.955Z · LW(p) · GW(p)

Antinatalism is the argument that it is a bad thing to create people.

What arguments do people have against this position?

Replies from: Kingreaper, Mitchell_Porter, Douglas_Knight, cousin_it, Nisan, wedrifid, red75
comment by Kingreaper · 2010-07-05T13:16:12.359Z · LW(p) · GW(p)

Even if antinatalism is true at present (I have no major opinion on the issue yet) it need not be true in all possible future scenarios.

In fact, should the human race shrink significantly [due to antinatalism perhaps], without societal collapse, the average utility of a human life should increase. I find it highly unlikely that even the maximum average utility is still less than zero.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2010-07-07T04:58:28.703Z · LW(p) · GW(p)

In fact, should the human race shrink significantly [due to antinatalism perhaps], without societal collapse, the average utility of a human life should increase.

Why shouldn't having a higher population lead to greater specialization of labor, economies of scale, greater gains from trade, and thus greater average utility?

Replies from: Kingreaper, None
comment by Kingreaper · 2010-07-07T13:12:57.674Z · LW(p) · GW(p)

Resource limitations.

There is only a limited amount of any given resource available. Decreasing the number of people therefore increases the amount of resource available per person.

There is a point at which decreasing the population will begin decreasing average utility, but to me it seems nigh certain that that point is significantly below the current population.
I could be wrong, and if I am wrong I would like to know.

Do you feel that the current population is optimum, below optimum, or above optimum?

comment by [deleted] · 2010-07-07T13:27:51.960Z · LW(p) · GW(p)

Because of the law of diminishing returns (marginal utility). If you have a billion humans, one more (or one fewer) results in a bigger increase (decrease) in utility than if you have a trillion.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-07-07T14:23:50.742Z · LW(p) · GW(p)

Whose utility? The extra human's utility will be the same in both cases.

comment by Mitchell_Porter · 2010-07-06T02:55:56.527Z · LW(p) · GW(p)

I have long wrestled with the idea of antinatalism, so I should have something to say here. Certainly there were periods in my life in which I thought that the creation of life is the supreme folly.

We all know that terrible things happen that should never happen to anyone. The simplest antinatalist argument of all is that any life you create will be at risk of such intolerably bad outcomes; and so, if you care, the very least you can do is not create new life. No new life, no possibility of awful outcomes in it, problem avoided! And it is very easy to elaborate this into a stinging critique of anyone who proposes that nonetheless one shouldn't take this seriously or absolutely (because most people are happy, most people don't commit suicide, etc.). You intend to gamble with this new life you propose to create, simply because you hope that it won't turn out terribly? And this gamble you propose appears to be completely unnecessary - it's not as if people have children for the greater good. Etc.

A crude utilitarian way to moderate the absoluteness of this conclusion would be to say, well, surely some lives are worth creating, and it would make a lot of people sad to never have children, so we reluctantly say to the ones who would be really upset to forego reproduction, OK, if you insist... but for people who can take it, we could say: There is always something better that you could do with your life. Have the courage not to hide from the facts of your own existence in the boisterous distraction of naive new lives.

It is probably true that philanthropic antinatalists, like the ones at the blog to which you link, are people who have personally experienced some profound awfulness, and that is why they take human suffering with such deadly seriousness. It's not just an abstraction to them. For example, Jim Crawford (who runs that blog) was once almost killed in a sword attack, had his chest sliced open, and after they stitched him up, literally every breath was agonizing for a long time thereafter. An experience like that would sensitize you to the reality of things which luckier people would prefer not to think about.

Replies from: Roko
comment by Roko · 2010-07-06T10:47:56.012Z · LW(p) · GW(p)

You intend to gamble with this new life you propose to create, simply because you hope that it won't turn out terribly?

Seems like loss aversion bias.

Sure, bad things happen, but so do good things. You need to do an expected utility calculation for the person you're about to create: P(Bad)U(Bad) + P(Good)U(Good)

P(Sword attack) seems to be pretty darn low.
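
(Purely to make the arithmetic concrete - every number below is invented, not an estimate of anything.)

```python
# Expected utility of creating the life, with made-up illustrative numbers.
p_bad, u_bad = 0.01, -100.0    # rare, very bad outcome
p_good, u_good = 0.99, +10.0   # common, moderately good outcome
print(p_bad * u_bad + p_good * u_good)   # 8.9: positive here, but the sign
                                         # hinges entirely on the assumed numbers
```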

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-07-07T06:22:08.906Z · LW(p) · GW(p)

I think that for you, a student of the singularity concept, to arrive at a considered and consistent opinion regarding antinatalism, you need to make some judgments regarding the quality of human life as it is right now, "pre-singularity".

Suppose there is no possibility of a singularity. Suppose the only option for humanity is life more or less as it is now - ageing, death, war, economic drudgery, etc, with the future the same as the past. Everyone who lives will die; most of them will drudge to stay alive. Do you still consider the creation of a human life justifiable?

Do you have any personal hopes attached to the singularity? Do you think, yes, it could be very bad, it could destroy us, that makes me anxious and affects what I do; but nonetheless, it could also be fantastic, and I derive meaning and hope from that fact?

If you are going to affirm the creation of human life under present conditions, but if you are also deriving hope from the anticipation of much better future conditions, then you may need to ask yourself how much of your toleration of the present derives from the background expectation of a better future.

It would be possible to have the attitude that life is already great and a good singularity would just make it better; or that the serious possibility of a bad singularity is enough for the idea to urgently command our attention; but it's also clear that there are people who either use singularity hope to sustain them in the present, or who have simply grown up with the concept and haven't yet run into difficulty.

I think the combination of transhumanism and antinatalism is actually a very natural one. Not at all an inevitable one; biotechnology, for example, is all about creating life. But if you think, for example, that the natural ageing process is intolerable, something no-one should have to experience, then probably you should be an antinatalist.

Replies from: Roko
comment by Roko · 2010-07-07T13:21:52.159Z · LW(p) · GW(p)

you may need to ask yourself how much of your toleration of the present derives from the background expectation of a better future.

I personally would still want to have been born even if a glorious posthuman future were not possible, but the margin of victory for life over death becomes maybe a factor of 100 thinner.

comment by Douglas_Knight · 2010-07-06T02:17:30.933Z · LW(p) · GW(p)

Why do you link to a blog, rather than an introduction or a summary? Is this to test whether we find it so silly that we don't look for their best arguments?

My impression is that antinatalists are highly verbal people who base their idea of morality on how people speak about morality, ignoring how people act. They get the idea that morality is about assigning blame and so feel compelled only to worry about bad acts, thus becoming strict negative utilitarians or rights-deontologists with very strict and uncommon rights. I am not moved by such moralities.

Maybe some make more factual claims, e.g. that most lives are net negative or that reflective life would regret itself. These seem obviously false, but I don't see that they matter. These arguments should not have much impact on the actions of the utilitarians that they seem aimed at. They should build a superhuman intelligence to answer these questions and implement the best course of action. If human lives are not worth living, then other lives may be. If no lives are worth living, then a superintelligence can arrange for no lives to be led, while people evangelizing antinatalism aren't going to make a difference.

Incidentally, Eliezer sometimes seems to be an anti-human-natalist.

comment by cousin_it · 2010-07-05T12:27:23.451Z · LW(p) · GW(p)

The antinatalist argument goes that humans suffer more than they have fun, therefore not living is better than living. Why don't they convert their loved ones to the same view and commit suicide together, then? Or seek out small isolated communities and bomb them for moral good.

I believe the answer to antinatalism is that pleasure != utility. Your life (and the lives of your hypothetical kids) could create net positive utility despite containing more suffering than joy. The "utility functions" or whatever else determines our actions contain terms that don't correspond to feelings of joy and sorrow, or are out of proportion with those feelings.

Replies from: Leonhart
comment by Leonhart · 2010-07-05T14:55:43.253Z · LW(p) · GW(p)

The suicide challenge is a non sequitur, because death is not equivalent to never having existed, unless you invent a method of timeless, all-Everett-branch suicide.

Replies from: Kingreaper, cousin_it
comment by Kingreaper · 2010-07-05T15:01:10.115Z · LW(p) · GW(p)

Precisely.

If the utility of the first ten or fifteen years of life is extremely negative, and the utility of the rest slightly positive, then it can be logical to believe that not being born is better than being born, but suicide (after a certain age) is worse than either.

Replies from: orthonormal, Mass_Driver
comment by orthonormal · 2010-07-06T05:47:43.657Z · LW(p) · GW(p)

If the utility of the first ten or fifteen years of life is extremely negative

I think that's getting at a non-silly defense of antinatalism: what if the average experience of middle school and high school years is absolutely terrible, outweighing other large chunks of life experience, and adults have simply forgotten for the sake of their sanity?

I don't buy this, but it's not completely silly. (However, it suggests a better Third Alternative exists: applying the Geneva Convention to school social life.)

Replies from: gwern, RobinZ
comment by gwern · 2010-07-06T07:13:30.967Z · LW(p) · GW(p)

adults have simply forgotten for the sake of their sanity?

not completely silly.

Quite right. Suicide rates spike in adolescence, go down, and only spike again in old age, don't they? Suicide is, I think, a good indicator that someone is having a bad life.

(Also, I've seen mentions on LW of studies that people raising kids are unhappier than if they were childless, but once the kids are older, they retrospectively think they were much happier than they actually were.)

Replies from: ocr-fork, Unknowns
comment by ocr-fork · 2010-07-29T23:35:30.401Z · LW(p) · GW(p)

Quite right. Suicide rates spike in adolescence, go down, and only spike again in old age, don't they? Suicide is, I think, a good indicator that someone is having a bad life.

Suicide rates start at .5 in 100,000 for ages 5-14 and rise to about 15 in 100,000 for seniors.

Replies from: gwern
comment by gwern · 2010-07-30T04:27:50.358Z · LW(p) · GW(p)

Interesting. From page 30, suicide rates increase monotonically in the 5 age groups up to and including 45-54 (peaking at 17.2 per 100,000), but then drop by 3 to 14.5 (age 55-64), drop another 2 for the 65-74 age bracket (12.6), and then rise again after 75 (15.9).

So, I was right that the rates increase again in old age, but wrong about when the first spike was.

Replies from: pjeby
comment by pjeby · 2010-07-30T16:27:27.820Z · LW(p) · GW(p)

So, I was right that the rates increase again in old age, but wrong about when the first spike was.

Unfortunately, the age brackets don't really tell you if there's a teenage spike, except that if there is one, it happens after age 14. That 9.9 could actually be a much higher level concentrated within a few years, if I understand correctly.

comment by Unknowns · 2010-08-01T16:58:07.975Z · LW(p) · GW(p)

Suicide rates may be higher in adolescence than at certain other times, but absolutely speaking, they remain very low, showing that most people are having a good life, and therefore refuting antinatalism.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-08-01T17:19:21.881Z · LW(p) · GW(p)

Suicide rates are not a good measure of how good life is except at a very rough level since humans have very strong instincts for self-preservation.

Replies from: gwern, Unknowns
comment by gwern · 2010-08-01T17:27:06.517Z · LW(p) · GW(p)

My counterpoint to the above would be that if suicide rates are such a good metric, then why can they go up with affluence? (I believe this applies not just to wealthy nations (e.g. Japan, Scandinavia), but to individuals as well, but I wouldn't hang my hat on the latter.)

Replies from: daedalus2u
comment by daedalus2u · 2010-08-01T17:58:07.780Z · LW(p) · GW(p)

Suicide rates are a measure of depression, not of how good life is. Depression can hit people even when they otherwise have a very good life.

Replies from: gwern
comment by gwern · 2010-08-02T04:02:46.709Z · LW(p) · GW(p)

Yes yes, this is an argument for suicide rates never going to zero - but again, the basic theory that suicide is inversely correlated, even partially, with quality of life would seem to be disproved by this point.

Replies from: daedalus2u
comment by daedalus2u · 2010-08-02T12:53:21.961Z · LW(p) · GW(p)

I think the misconception is the idea that what is generally considered “quality of life” is correlated with things like affluence; it isn't. People like to believe (pretend?) that it is, and by ever striving for more affluence feel that they are somehow improving their “quality of life”.

When someone is depressed, their “quality of life” is quite low. That “quality of life” can only be improved by resolving the depression, not by adding the bells and whistles of affluence.

How to resolve depression is not well understood. A large part of the problem is that people who have never experienced depression don't understand what it is and believe that things like more affluence will resolve it.

comment by Unknowns · 2010-08-01T17:21:49.468Z · LW(p) · GW(p)

I suspect the majority of adolescents would also deny wishing they had never been born.

comment by RobinZ · 2010-07-06T11:24:46.737Z · LW(p) · GW(p)

I don't buy this, but it's not completely silly. (However, it suggests a better Third Alternative exists: applying the Geneva Convention to school social life.)

I'm surprised the Paul Graham essay "Why Nerds are Unpopular" wasn't linked there.

comment by Mass_Driver · 2010-07-06T05:52:01.160Z · LW(p) · GW(p)

Whenever anyone mentions how much it sucks to be a kid, I plug this article. It does suck, of course, but the suckage is a function of what our society is like, and not of something inherent about being thirteen years old.

Why Nerds Hate Grade School

comment by cousin_it · 2010-07-05T15:39:53.453Z · LW(p) · GW(p)

By the standard you propose, "never having existed" is also inadequate unless you invent a method of timeless, all-Everett-branch means of never having existed. Whatever kids an antinatalist can stop from existing in this branch may still exist in other branches.

comment by Nisan · 2010-07-05T11:50:12.372Z · LW(p) · GW(p)

Here's one: I bet if you asked lots of people whether their birth was a good thing, most of them would say yes.

If it turns out that after sufficient reflection, people, on average, regard their birth as a bad thing, then this argument breaks down.

Replies from: Roko
comment by Roko · 2010-07-05T11:57:17.619Z · LW(p) · GW(p)

They have an answer to that.

The reason I ask is that antinatalism is a contrarian position we think is silly, but has some smart supporters.

If our contrarian position was as wrong as we think antinatalism is, would we realize?

Replies from: JoshuaZ, Leonhart, Nisan, Richard_Kennaway
comment by JoshuaZ · 2010-07-05T15:10:26.495Z · LW(p) · GW(p)

The reason I ask is that antinatalism is a contrarian position we think is silly, but has some smart supporters.

Do people here really think that antinatalism is silly? I disagree with the position (very strongly) but it isn't a view that I consider to be silly in the same way that I would consider, say, most religious beliefs to be silly.

But keep in mind that having smart supporters is by no means a strong indication that a viewpoint is not silly. For example, Jonathan Sarfati is a prominent young earth creationist who before he became a YEC proponent was a productive chemist. He's also a highly ranked chess master. He's clearly a bright individual. Now, you might be able to argue that YECism has a higher proportion of people who aren't smart (There's some evidence to back this up. See for example this breakdown of GSS data and also this analysis. Note that the metric used in the first one, the GSS WORDSUM, is surprisingly robust under education levels by some measures so the first isn't just measuring a proxy for education.) That might function as a better indicator of silliness. But simply having smart supporters seems insufficient to conclude that a position is not silly.

It does however seem that on LW there's a common tendency to label beliefs silly when what is meant is "I assign a very low probability to this belief being correct" or "I don't understand how someone's mind could be so warped as to have this belief." Both of these are problematic, the second more so than the first, because different humans have different value systems. In this particular example, value systems that weight harm to others as worse are more likely to be able to make a coherent antinatalist position. In that regard, note that people are able to discuss things like paperclippers but seem to have more difficulty discussing value systems which are in many ways closer to their own. This may be simply because paperclipping is a simple moral system. It may also be because it is so far removed from their own moral systems that it becomes easier to map out in a consistent fashion, whereas something like antinatalism is close enough to their own moral systems that people conflate some of their own moral/ethical/value conclusions with those of the antinatalist, and this happens subtly enough for them not to notice.

Replies from: cupholder, Roko
comment by cupholder · 2010-07-05T20:21:00.012Z · LW(p) · GW(p)

Do people here really think that antinatalism is silly?

A data point: I don't think antinatalism (as defined by Roko above - 'it is a bad thing to create people') is silly under every set of circumstances, but neither is it obviously true under all circumstances. If my standard of living is phenomenally awful, and I knew my child's life would be equally bad, it'd be bad to have a child. But if I were living it up, knew I could be a good parent, and wanted a kid, what would be so awful about having one?

Replies from: Blueberry
comment by Blueberry · 2010-07-05T20:26:28.701Z · LW(p) · GW(p)

But if I were living it up, knew I could be a good parent, and wanted a kid, what would be so awful about having one?

That your child might experience a great deal of pain which you could prevent by not having it.

That your child might regret being born and wish you had made the other decision.

That you can be a good parent, raise a kid, and improve someone's life without having a kid (adopt).

That the world is already overpopulated and our natural resources are not infinite.

Replies from: cupholder
comment by cupholder · 2010-07-05T20:54:22.872Z · LW(p) · GW(p)

Points taken.

Let me restate what I mean more formally. Conditional on high living standards, high-quality parenting, and desire to raise a child, one can reasonably calculate that the expected utility (to myself, to the potential child and to others) of having the child is higher than the expected utility of not having a child. In which case I wouldn't think the antinatalism position has legs.

Replies from: NancyLebovitz, Blueberry
comment by NancyLebovitz · 2010-07-05T21:49:42.115Z · LW(p) · GW(p)

I'd throw in considering how stable you think those high living standards are.

comment by Blueberry · 2010-07-06T01:42:52.835Z · LW(p) · GW(p)

one can reasonably calculate that the expected utility (to myself, to the potential child and to others) of having the child is higher than the expected utility of not having a child.

I'm not sure about this. It's most likely that anything your kid does in life will get done by someone else instead. There is also some evidence that having children decreases your happiness (though there may be other reasons to have kids).

But even if this is true, it's still not enough for antinatalism. Increasing total utility is not enough justification to create a life. The act of creation makes you responsible for the utility of the individual created, and you have a duty not to create an entity you have reason to think may have negative personal utility. (Strict utilitarians will disagree.)

Replies from: cupholder
comment by cupholder · 2010-07-06T08:39:29.573Z · LW(p) · GW(p)

I'm not sure about this. It's most likely that anything your kid does in life will get done by someone else instead.

True - we might call the expected utility strangers get a wash because of this substitution effect. If we say the expected value most people get from me having a child is nil, it doesn't contribute to the net expected value, but nor does it make it less positive.

There is also some evidence that having children decreases your happiness (though there may be other reasons to have kids).

It sounds as though that data's based on samples of all types of parents, so it may not have much bearing on the subset of parents who (a) have stable (thanks NL!) high living standards, (b) are good at being parents, and (c) wanted their children. (Of course this just means the evidence is weak, not completely irrelevant.)

But even if this is true, it's still not enough for antinatalism. Increasing total utility is not enough justification to create a life.

That's a good point, I know of nothing in utilitarianism that says whose utility I should care about.

The act of creation makes you responsible for the utility of the individual created, and you have a duty not to create an entity you have reason to think may have negative personal utility. (Strict utilitarians will disagree.)

Whether or not someone agrees with this is going to depend on how much they care about risk aversion in addition to expected utility. (Prediction: antinatalists are more risk averse.) I think my personal level of risk aversion is too low for me to agree that I shouldn't make any entity that has a chance of suffering negative personal utility.

comment by Roko · 2010-07-05T15:20:01.280Z · LW(p) · GW(p)

I still think that it's silly, because the common justification given for the position is highly suspect and borderline sophistry, and is, I suspect, not the causal reason for the values it purports to justify.

Yet because of moral antirealism, the mistake is subtle. And I have yet to find a critique of antinatalism that actually gives the correct (in my view) rebuttal. Most people who try to rebut it seem to also offer arguments that are tantamount to sophistry, i.e. they are not the causal reason for the person disagreeing with the view.

And I worry, am I making a similarly subtle mistake? And as a contrarian with few good critics, would anyone present me with the correct counterargument?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-05T15:31:40.751Z · LW(p) · GW(p)

I still think that it's silly, because the common justification given for the position is highly suspect and borderline sophistry, and is, I suspect, not the causal reason for the values it purports to justify.

I'm curious what you think the causal justification is. I'm not a fan of imputing motives to people I disagree with rather than dealing with their arguments, but one can't help but notice that Heinrich Heine was paralyzed, blind and in constant pain for the last decade of his life. Moreover, his religious beliefs prevented him from committing suicide. In that context, antinatalism just in regard to one's own life seems to make some sense. Thus one might think of antinatalism as arising in part from Other Optimizing.

Replies from: Roko
comment by Roko · 2010-07-05T15:37:11.344Z · LW(p) · GW(p)

but one can't help but notice that Heinrich Heine was paralyzed, blind and in constant pain for the last decade of his life. Moreover, his religious beliefs prevented him from committing suicide.

I promise that I genuinely did not know that when I wrote "I suspect, not the causal reason for the values it purports to justify." and thought "these people were just born with low happiness set points and they're rationalizing"

comment by Leonhart · 2010-07-05T15:05:24.339Z · LW(p) · GW(p)

I don't think antinatalism is silly, although I have not really tried to find problems with it yet. My current, not-fully-reflected position is that I would prefer not to have existed (if that's indeed possible) but, given that I in fact exist, I do not want to die. I don't, right now, see screaming incoherency here, although I'm suspicious.

I would very much appreciate anyone who can point out faultlines for me to investigate. I may be missing something very obvious.

comment by Nisan · 2010-07-05T15:29:57.858Z · LW(p) · GW(p)

If our contrarian position was as wrong as we think antinatalism is, would we realize?

If there was an argument for antinatalism that was capable of moving us, would we have seen it? Maybe not. A LessWrong post summarizing all of the good arguments for antinatalism would be a good idea.

comment by Richard_Kennaway · 2010-07-05T13:25:31.191Z · LW(p) · GW(p)

If our contrarian position was as wrong as we think antinatalism is, would we realize?

We have many contrarian positions, but antinatalism is one position. Personally, I think that some of the contrarian positions that some people advocate here are indeed silly.

Replies from: Roko
comment by Roko · 2010-07-05T14:28:47.859Z · LW(p) · GW(p)

Such as?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-07-05T15:02:19.820Z · LW(p) · GW(p)

I knew someone would ask. :-) Ok, I'll list some of my silliness verdicts, but bear in mind that I'm not interested in arguing for my assessments of silliness, because I think they're too silly for me to bother with, and metadiscussion escalates silliness levels. Life is short (however long it may extend), and there are plenty of non-silly matters to think about. I generally don't post on matters I've consigned to the not-even-wrong category, or vote them down for it.

Non-silly: cryonics, advanced nano, AGI, FAI, Bayesian superintelligence. ("Non-silly" doesn't mean I agree with all of these, just that I think there are serious arguments in favour, whether or not I'm persuaded of them.)

Silly: we're living in a simulation, there are infinitely many identical copies of all of us, "status" as a number on an FRP character sheet, any Omega conundrum that depends on Omega being absolutely known to be absolutely reliable.

Does anyone else think that some of the recurrent ideas here are silly?

ETA: Non-silly: the mission of LessWrong. Silly: Utilitarianism of all types.

Replies from: Douglas_Knight, Roko, Blueberry, mattnewport
comment by Douglas_Knight · 2010-07-06T02:56:57.373Z · LW(p) · GW(p)

Silly: we're living in a simulation, there are infinitely many identical copies of all of us, "status" as a number on an FRP character sheet, any Omega conundrum that depends on Omega being absolutely known to be absolutely reliable....Utilitarianism of all types.

There's an odd inconsistency in how you labeled these. The last is identified by name and the first seems similarly neutral, but the third and fourth (and maybe the second - there are a lot of things that it could be referring to) are phrased to make it clear what you think is silly about them. This seems tactically poor, if you want to avoid discussion of these issues. (Or maybe the first and last are the mistake, but tactical diversity seems weird to me.)

Moreover, it seems hard for me to imagine that you pay so little attention to these topics that you believe that many people here support them as you've phrased them. Not that I have anything to say about the difference in what one should do in the two situations of encountering people who (1) endorse your silly summary of their position; vs (2) seem to make a silly claim, but also claim to distinguish it from your silly summary. Of course, most of the time silly claims are far away and you never find out whether the people endorse your summary.

comment by Roko · 2010-07-05T15:06:07.814Z · LW(p) · GW(p)

What probability would you assign, then, to a well-respected, oft-televised, senior scientist and establishment figure arguing in favour of the simulation hypothesis? (And I don't mean Nick Bostrom. I mean someone who heads government committees and has tea with the queen.)

Replies from: RobinZ, Richard_Kennaway
comment by RobinZ · 2010-07-05T18:53:47.711Z · LW(p) · GW(p)

What probability would you assign to a well respected, oft-televised, senior scientist and establishment figure arguing in favor of an incompatibilist theory of free will?

Replies from: Roko
comment by Roko · 2010-07-05T19:04:55.029Z · LW(p) · GW(p)

I don't think that incompatibilism is so silly it's not worth talking about. In fact it's not actually wrong; it is simply a matter of how you define the term "free will".

Replies from: RobinZ
comment by RobinZ · 2010-07-06T00:06:29.598Z · LW(p) · GW(p)

Definitions are not a simple matter - I would claim that libertarian free will* is at least as silly as the simulation hypothesis.

But I don't filter my conversation to ban silliness.

* I change my phrasing to emphasize that I can respect hard incompatibilism - the position that "free will" doesn't exist.

comment by Richard_Kennaway · 2010-07-05T15:19:21.318Z · LW(p) · GW(p)

Close to 1 as makes no difference, since I don't think you would ask this unless there was such a person. (Tea with the queen? Does that correlate positively or negatively with eccentricity, I wonder?)

Before anyone gets offended at my silliness verdicts (presuming you don't find them too silly to get offended by), these are my judgements on the ideas, not on the people holding them.

Replies from: Roko
comment by Roko · 2010-07-05T15:34:23.769Z · LW(p) · GW(p)

Ok, but the point of the question is to try to arrive at true beliefs. So imagine forgetting that I'd asked the question. What does your model of the world, which says that simulation is silly, say about the probability that a major establishment scientist who is in no way a transhumanist believes that we could be in a simulation? If it assigns too low a probability, maybe you should consider assigning some probability to alternative models?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-07-05T19:16:05.661Z · LW(p) · GW(p)

I would not be at all surprised. No speculation is too silly to have been seriously propounded by some philosopher or other, and lofty state gives no immunity to silliness.

[ETA: And of course, I'm talking about ideas that I've judged silly despite their being seriously propounded by (some) folks here on LessWrong that I think are really smart, and after reading a whole lot of their stuff before arriving at that conclusion. So one more smart person, however prestigious, isn't going to make a difference.]

But you changed it to "could be". Sure, could be, but that's like Descartes' speculations about a trickster demon faking all our sensations. It's unfalsifiable unless you deliberately put something into the speculation to let the denizens discover their true state, but at that point you're just writing speculative fiction.

But if this person is arguing that we probably are in a simulation, then no, I just tune that out.

Replies from: Roko
comment by Roko · 2010-07-05T19:24:56.997Z · LW(p) · GW(p)

So the bottom line of your reasoning is quite safe from any evidential threats?

But if this person is arguing that we probably are in a simulation, then no, I just tune that out.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-07-05T20:16:51.182Z · LW(p) · GW(p)

So the bottom line of your reasoning is quite safe from any evidential threats?

In one sense, yes, but in another sense....yes.

First sense: I have a high probability for speculations on whether we are living in a simulation (or any of the other ideas I dismiss) not being worth my while outside of entertaining fictions. As a result, evidence to the contrary is unlikely to reach my notice, and even if it does, it has a lot of convincing to do. In that sense, it is as safe as any confidently held belief is from evidential threats.

Second sense: Any evidential threats at all? Now we're into unproductive navel-gazing. If, as a proper Bayesian, I make sure that my probabilities are never quite equal to 1, and therefore answer that my belief must be threatened by some sort of evidence, the next thing is you'll ask what that evidence might be. But why should anyone have to be able to answer that question? If I choose to question some idea I have, then, yes, I must decide what possible observations I might make that would tell either way. This may be a non-trivial task. (Perhaps for reasons relating to the small world/large world controversy in Bayesian reasoning, but I haven't worked that out.) But I have other things to do -- I cannot be questioning everything all the time. The "silly" ideas are the ones I can't be bothered spending any time on at all even if people are talking about them on my favorite blog, and if that means I miss getting in on the ground floor of the revelation of the age, well, that's the risk I accept in hitting the Ignore button.

So in practice, yes, my bottom line on this matter (which was not written down in advance, but reached after having read a bunch of stuff of the sort I don't read any more) is indeed quite safe. I don't see anything wrong with that.

Besides that, I am always suspicious of this question, "what would convince you that you are wrong?" It's the sort of thing that creationists arguing against evolution end up saying. After vigorously debating the evidence and making no headway, the creationist asks, "well, what would convince you?", to which the answer is that to start with, all of the evidence that has just been gone over would have to go away. But in the creationist's mind, the greater their failure to convince someone, the greater the proof that they're right and the other wrong. "Consider it possible that you are mistaken" is the sound of a firing pin clicking on an empty chamber.

Replies from: Roko
comment by Roko · 2010-07-05T20:59:50.658Z · LW(p) · GW(p)

"what would convince you that you are wrong?" It's the sort of thing that creationists arguing against evolution

But a proponent of evolution can easily answer this, for example if they went to the fossil record and found it showed that all and only existing creatures' skeletons appeared 6000 years ago, and that radiocarbon dating showed that the earth was 6000 years old.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-07-06T07:34:22.873Z · LW(p) · GW(p)

The creationist generally puts his universal question after having unsuccessfully argued that the fossil record and radiocarbon dating support him.

comment by Blueberry · 2010-07-06T01:18:58.152Z · LW(p) · GW(p)

I'm baffled at the idea that the simulation hypothesis is silly. It can be rephrased "We are not at the top level of reality." Given that we know of lower levels of reality (works of fiction, artificial life programs, dreams) it seems unlikely we're at the top.

Replies from: JoshuaZ, Vladimir_Nesov
comment by JoshuaZ · 2010-07-06T01:26:01.440Z · LW(p) · GW(p)

I'm baffled at the idea that the simulation hypothesis is silly. It can be rephrased "We are not at the top level of reality." Given that we know of lower levels of reality (works of fiction, artificial life programs, dreams) it seems unlikely we're at the top.

Do you have any evidence that any of those levels have anything remotely approximating observers? (I'll add the tiny data point that I've had dreams where characters have explicitly claimed to be aware. In one dream I and everyone around was aware that it was a dream and that it was my dream. They wanted me to not go on a mission to defeat a villain since if I died I'd wake up and their world would cease to exist. I'm willing to put very high confidence on the hypothesis that no observers actually existed.)

I agree that the simulationist hypothesis is not silly, but this is primarily due to the apparently high probability that we will at some point be able to simulate intelligent beings with great accuracy.

comment by Vladimir_Nesov · 2010-07-06T09:19:45.622Z · LW(p) · GW(p)

Reality isn't stratified. A simulated world constitutes a concept of its own, apart from being referenced by the enclosing worlds. Two worlds can simulate each other to an equal degree.

comment by mattnewport · 2010-07-05T18:40:34.035Z · LW(p) · GW(p)

I mostly agree with your list of silly ideas, though I'm not entirely sure what an FRP character sheet is, and I do think status explanations are quite important, so I probably disagree on that one. I'd add utilitarianism to the list of silly ideas as well.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-07-05T19:28:04.549Z · LW(p) · GW(p)

Agreed about utilitarianism.

FRP = fantasy role-playing, i.e. Dungeons & Dragons and the like. A character sheet is a list of the attributes of the character you're playing, things like Strength=10, Wisdom=8, Charisma=16, etc. (each number obtained by rolling three dice and adding them together). There are rules about what these attributes mean (e.g. on attempting some task requiring especial Charisma, roll a 20-sided die and if the number is less than your Charisma you succeed). Then there are circumstances that will give you additional points for an attribute or take them away, e.g. wearing a certain enchanted ring might give you +2 to Charisma.

Discussions of "status" here and on OB sometimes sound like D&D geeks arguing about the rules for a Status attribute.

Replies from: Vladimir_M, NancyLebovitz
comment by Vladimir_M · 2010-07-06T03:23:36.191Z · LW(p) · GW(p)

Richard_Kennaway:

Discussions of "status" here and on OB sometimes sound like D&D geeks arguing about the rules for a Status attribute.

Sometimes, yes. However, in many situations, the mere recognition that status considerations play an important role -- even if stated in the crudest possible character-sheet sort of way -- can be a tremendous first step in dispelling widespread, deeply entrenched naive and misguided views of human behavior and institutions.

Unfortunately, since a precise technical terminology for discussing the details of human status dynamics doesn't (yet?) exist, it's often very difficult to do any better.

comment by NancyLebovitz · 2010-07-05T20:10:30.185Z · LW(p) · GW(p)

Could you expand on how those discussions of status here and on OB are different from what you'd see as a more realistic discussion of status?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-07-13T18:07:35.584Z · LW(p) · GW(p)

I never replied to this, but this is an example of what I think is a more realistic discussion.

comment by wedrifid · 2010-07-05T12:44:27.628Z · LW(p) · GW(p)

I'm not entirely opposed to the idea. 6 billion is enough for now. Make more when we expand and distance makes it infeasible to concentrate neg-entropy on the available individuals. This is quite different from the Robin Hanson 'make as many humans as physically possible and have them living in squalor' (exaggerated) position, but probably also in complete disagreement with the arguments used for anti-natalism.

comment by red75 · 2010-07-06T04:44:15.444Z · LW(p) · GW(p)

Either antinatalism is futile in the long run, or it is an existential threat.

If we assume that antinatalism is rational, then in the long run it will lead to a reduction in the part of the human population that is capable of, or trained in, making rational decisions, thus making antinatalists' efforts futile. As we can see, the people who should be most susceptible to antinatalism don't even consider the option (en masse, at least). And given their circumstances they have a clear reason for that: every extra child makes it less likely that they will starve to death in old age, since more children mean more chances for the family to control more resources. It is a big prisoner's dilemma, where defectors win.

Edit: Post-humans are not considered here. They will have other means to acquire resources.

Edit: My point: antinatalism can be rational for individuals, but it cannot be rational for humankind to accept (even if it is universally true, as antinatalists claim).

comment by Alexandros · 2010-07-04T12:37:51.442Z · LW(p) · GW(p)

I know Argumentum ad populum does not work, and I know Arguments from authority do not work, but perhaps they can be combined into something more potent:

Can anyone recall a hypothesis that had been supported by a significant subset of the lay population, consistently rejected by the scientific elites, and turned out to be correct?

It seems belief in creationism has this structure: the lower you go in education level, the more common the belief. I wonder whether this alone can be used as evidence against this 'theory' and others like it.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-04T12:57:01.771Z · LW(p) · GW(p)

That there's a hereditary component to schizophrenia.

Replies from: cupholder, wedrifid
comment by cupholder · 2010-07-04T14:34:07.790Z · LW(p) · GW(p)

[Link: Franz Josef Kallman's bibliography on schizophrenia genetics]

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-04T14:47:23.443Z · LW(p) · GW(p)

My impression was that the idea that schizophrenia runs in families was dismissed as an old wives' tale, but a fast Google search isn't turning up anything along those lines, though it does seem that some Freudians believed schizophrenia was a mental rather than physical disorder.

Replies from: cupholder, Douglas_Knight, wedrifid, gwern
comment by cupholder · 2010-07-04T21:43:09.894Z · LW(p) · GW(p)

My understanding is that historically, schizophrenia has been presumed to have a partly genetic cause since around 1910, out of which grew an intermittent research program of family and twin studies to probe schizophrenia genetics. An opposing camp that emphasized environmental effects emerged in the wake of the Nazi eugenics program and the realization that complex psychological traits needn't follow trivial Mendelian patterns of inheritance. Both research traditions continue to the present day.

Edit to add - Franz Josef Kallman, whose bibliography in schizophrenia genetics I somewhat glibly linked to in the grandparent comment, is one of the scientists who was most firmly in the genetic camp. His work (so far as I know) dominated the study of schizophrenia's causes between the World Wars, and for some time afterwards.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-04T23:16:22.277Z · LW(p) · GW(p)

Thanks. You clearly know more about this than I do. I just had a vague impression.

comment by Douglas_Knight · 2010-07-04T20:02:57.821Z · LW(p) · GW(p)

seem that some Freudians believed schizophrenia was a mental rather than physical disorder

The last point in the abstract at cupholder's link seems strikingly defensive to me:

8. The genetic theory of schizophrenia does not invalidate any psychological theories of a descriptive or analytical nature. It is equally compatible with the psychiatric concept that schizophrenia can be prevented as well as cured.

comment by wedrifid · 2010-07-04T18:19:31.903Z · LW(p) · GW(p)

Now I'm trying to work out what weird sexual thing involving one's mother could possibly be construed to cause schizophrenia.

comment by wedrifid · 2010-07-04T13:27:22.400Z · LW(p) · GW(p)

Wow. Scientific elites were that silly? How on earth could they expect there not to be a hereditary component? Even exposure to the environmental factors that contribute is going to be affected by the genetic influence on personality. Stress in particular springs to mind.

Replies from: gwern
comment by gwern · 2010-07-04T18:06:20.738Z · LW(p) · GW(p)

Elites in general (scientific or otherwise) seem to have a significant built-in bias against genetic explanations (which is usually what is meant by hereditary).

I've seen a lot of speculation as to why this is so, ranging from it being a noble lie justified by supporting democracy or the status quo, to justifying meritocratic systems (despite their aristocratic results), to supporting bigger government (if society's woes are due to environmental factors, then empower the government to forcibly change the environment and create the new Soviet Man!), to simply long-standing instinctive revulsion and disgust stemming from historical discrimination employing genetic rhetoric (eugenics, Nazis, slavery, etc.) and so on.

Possibly this bias is over-determined by multiple factors.

comment by utilitymonster · 2010-07-03T14:13:39.516Z · LW(p) · GW(p)

Is there a principled reason to worry about being in a simulation but not worry about being a Boltzmann brain?

Here are very similar arguments:

  • If posthumans run ancestor simulations, most of the people in the actual world with your subjective experiences will be sims.

  • If two beings exist in one world and have the same subjective experiences, your probability that you are one should equal your probability that you are the other.

  • Therefore, if posthumans run ancestor simulations, you are probably a sim.

vs.

  • If our current model of cosmology is correct, most of the beings in the history of the universe with your subjective experiences will be Boltzmann brains.

  • If two beings exist in one world and have the same subjective experiences, your probability that you are one should equal your probability that you are the other.

  • Therefore, if our current model of cosmology is correct, you are probably a Boltzmann brain.

Expanding your evidence from your present experiences to all the experiences you've had doesn't help. There will still be lots more Boltzmann brains that last for as long as you've had experiences, having experiences just like yours. Most plausible ways of expanding your evidence have similar effects.

I suppose you could try arguing that the Boltzmann brain scenario, but not the simulation scenario, is self-defeating. In the Boltzmann scenario, your reasons for accepting the theory (results of various experiments, etc.) are no good, since none of it really happened. In the simulation scenario, you really did see those results; all the results were just realized in a funny sort of way that you didn't expect. It would be nice if the relevance of this argument were better spelled out and cashed out in a plausible Bayesian principle.

edited for format

Replies from: Nisan
comment by Nisan · 2010-07-03T14:42:41.545Z · LW(p) · GW(p)

Is there really a cosmology that says that most beings with my subjective experiences are Boltzmann brains? It seems to me that in a finite universe, most beings will not be Boltzmann brains. And in an infinite universe, it's not clear what "most" means.

Replies from: utilitymonster, utilitymonster
comment by utilitymonster · 2010-07-03T16:12:52.761Z · LW(p) · GW(p)

I gathered this from a talk by Sean Carroll that I attended, and it was supposed to be a consequence of the standard picture. All the Boltzmann brains come up in the way distant future, after thermal equilibrium, as random fluctuations. Carroll regarded this as a defect of the normal approach, and used this as a launching point to speculate about a different model.

I wish I had a more precise reference, but this isn't my area and I only heard this one talk. But I think this issue is discussed in his book From Eternity to Here. Here's a blogpost that, I believe, faithfully summarizes the relevant part of the talk. The normal solution to Boltzmann brains is to add a past hypothesis. Here is the key part where the post discusses the benefits and shortcomings of this approach:

Solution: Albert adds a Past Hypothesis (PAST), which says roughly that the universe started in very low entropy state (much lower than this one). So the objective probability that this is the lowest entropy state of the universe is 0—meaning we can’t be Boltzmann brains. As a bonus, we get an explanation of the direction of time, why ice cubes melt, why we can cause things to happen in the future and not the past, and how we have records of the past and not the future: all these things get a very high objective probability.

But (Sean Carroll argues) this moves too fast: just adding the past hypothesis allows the universe to eventually reach thermal equilibrium. Once that happens (in about 10100 years) there will be an extremely long period (~10^10120 years) during which random fluctuations bring about all sorts of things, including our old enemies, Boltzmann brains. And there will be a lot of them. And some of them will have the same experiences we do.

The years there are missing some carats. Should be 10^100 and 10^10^120.

Replies from: Nisan, utilitymonster
comment by Nisan · 2010-07-03T20:55:25.001Z · LW(p) · GW(p)

Oh I see. I... I'd forgotten about the future.

comment by utilitymonster · 2010-07-04T14:35:31.598Z · LW(p) · GW(p)

Link to talk.

comment by utilitymonster · 2010-07-03T16:20:44.340Z · LW(p) · GW(p)

This is always hard with infinities. But I think it can be a mistake to worry about this too much.

A rough way of making the point would be this. Pick a freaking huge number of years, like 3^^^3. Look at our universe after it has been around for that many years. You can be pretty damn sure that most of the beings with evidence like yours are Boltzmann brains on the model in question.

comment by cousin_it · 2010-07-02T07:41:52.703Z · LW(p) · GW(p)

A small koan on utility functions that "refer to the real world".

  1. Question to Clippy: would you agree to move into a simulation where you'd have all the paperclips you want?

  2. Question to humans: would you agree to all of humankind moving into a simulation where we would fulfill our CEV (at least, all terms of it that don't mention "not living in a simulation")?

In both cases assume you have mathematical proof that the simulation is indestructible and perfectly tamper-resistant.

Replies from: Kingreaper, Alicorn, Clippy, ewbrownv, magfrump, ShardPhoenix, red75, Blueberry, JGWeissman, Tom_Talbot, Nisan
comment by Kingreaper · 2010-07-02T14:25:45.851Z · LW(p) · GW(p)

Would the simulation allow us to exit, in order to perform further research on the nature of the external world?

If so, I would enter it. If not? Probably not. I do not want to live in a world where there are ultimate answers and you can go no further.

The fact that I may already live in one is just bloody irritating :p

Replies from: Roko, cousin_it
comment by Roko · 2010-07-02T19:37:10.777Z · LW(p) · GW(p)

But all of the mathematics and philosophy would still need to be done, and I suspect that that's where the exciting stuff is anyway.

comment by cousin_it · 2010-07-02T14:45:51.548Z · LW(p) · GW(p)

Good point. You have just changed my answer from yes to no.

comment by Alicorn · 2010-07-02T19:40:55.860Z · LW(p) · GW(p)

If we move into the same simulation and can really interact with others, then I wouldn't mind the move at all. Apart from that, experiences are the important bit and simulations can have those.

comment by Clippy · 2010-07-06T17:13:57.183Z · LW(p) · GW(p)

I might do that just sort of temporarily because it would be fun, similar to how apes like to watch other apes in ape situations even when it doesn't relate to their own lives.

But I would have to limit this kind of thing because, although pleasurable, it doesn't support my real values. I value real paperclips, not simulated paperclips, fun though they might be to watch.

Replies from: wedrifid, Kevin
comment by wedrifid · 2010-07-08T06:20:23.744Z · LW(p) · GW(p)

Clippy is funnier when he plays the part of a paperclip maximiser, not a human with a paperclip fetish.

Replies from: Clippy
comment by Clippy · 2010-07-08T14:00:20.081Z · LW(p) · GW(p)

User:wedrifid is funnier when he plays the part of a paperclip maximiser, not an ape with a pretense of enlightenment.

comment by Kevin · 2010-07-08T06:14:59.711Z · LW(p) · GW(p)

What is real?

Replies from: Clippy
comment by Clippy · 2010-07-08T14:01:04.714Z · LW(p) · GW(p)

Stuff that's not in a simulation?

comment by ewbrownv · 2010-07-02T21:13:27.802Z · LW(p) · GW(p)

Your footnote assumes away most of the real reasons for objecting to such a scenario (i.e. there is no remotely plausible world in which you could be confident that the simulation is either indestructible or tamper-proof, so entering it means giving up any attempt at personal autonomy for the rest of your existence).

Replies from: red75
comment by red75 · 2010-07-02T21:24:45.089Z · LW(p) · GW(p)

A computronium maximizer will ensure that there will be no one to tamper with the simulation; indestructibility in this scenario is maximized too.

comment by magfrump · 2010-07-02T13:57:36.564Z · LW(p) · GW(p)

Part 2 seems similar to the claim (which I have made in the past but not on LessWrong) that the Matrix was actually a friendly move on the part of that world's AI.

Replies from: Bongo, billswift
comment by Bongo · 2010-07-04T20:29:18.215Z · LW(p) · GW(p)

Agent Smith did say that the first matrix was a paradise but people wouldn't have it, but is simulating the world of 1999 really the friendliest option?

Replies from: magfrump
comment by magfrump · 2010-07-05T17:43:15.165Z · LW(p) · GW(p)

We only ever see America simulated. Even there we never see crime or oppression or poverty (homeless people could even be bots).

If you don't simulate poverty and dictatorships then 1999 could be reasonably friendly. The economy is doing okay and the Internet exists and there is some sense that technology is expanding to meet the world's needs but not spiraling out of control.

But I'm just making most of this up to show that an argument exists; it seems pretty clear that it was written to be in the present day to keep it in the genre of post-apocalyptic lit, in which case using the present adds to the sense of "the world is going downhill."

comment by billswift · 2010-07-02T19:13:27.061Z · LW(p) · GW(p)

And the AI kills the thousands of people in Zion every hundred years or so when they get aggressive enough to start destabilizing the Matrix, thereby threatening billions. But the AI needs to keep some outside the Matrix as a control and insurance against problems inside the Matrix. And the AI spreads the idea that the Matrix "victims" are slaves and provide energy to the AI to keep the outsiders outside (even though the energy source claims are obviously ridiculous - the people in Zion are profoundly ignorant and bordering on outright stupid). Makes more sense than the silliness of the movies anyway.

Replies from: magfrump
comment by magfrump · 2010-07-02T21:31:05.321Z · LW(p) · GW(p)

This hypothesis also explains the oracle in a fairly clean way.

comment by ShardPhoenix · 2010-07-02T12:44:59.412Z · LW(p) · GW(p)

The given assumption seems unlikely to me, but in that case I think I'd go for it.

comment by red75 · 2010-07-02T09:38:10.425Z · LW(p) · GW(p)

Is it assumed that no new information will be entered into the simulation after launch?

comment by Blueberry · 2010-07-02T07:47:48.795Z · LW(p) · GW(p)

And does it change your answers if you learn that we are living in a simulation now? Or if you learn that Tegmark's theory is correct?

comment by JGWeissman · 2010-07-08T07:05:24.492Z · LW(p) · GW(p)

Yes, assuming further that the simulation will expand optimally to use all available resources for its computation, and that any persons it encounters will be taken into the simulation.

comment by Tom_Talbot · 2010-07-02T13:05:38.099Z · LW(p) · GW(p)

Does Clippy maximise number-of-paperclips-in-universe (given all available information) or some proxy variable like number-of-paperclips-counted-so-far? If the former, Clippy does not want to move to a simulation. If the latter, Clippy does want to move to a simulation.

The same analysis applies to humankind.

Replies from: Sniffnoy, Clippy
comment by Sniffnoy · 2010-07-02T22:21:36.540Z · LW(p) · GW(p)

I'm not certain that's so, as ISTM many of the things humanity wants to maximize are to a large extent representation-invariant - in particular because they refer to other people - and could be done just as well in a simulation. The obvious exception being actual knowledge of the outside world.

comment by Clippy · 2010-07-06T17:17:14.596Z · LW(p) · GW(p)

I maximize the number of paperclips in the universe (that exist an arbitrarily long time from now). I use "number of paperclips counted so far" as a measure of progress, but it is always screened off by more direct measures, or expected quantities, of paperclips in the universe.

comment by Nisan · 2010-07-02T12:46:52.773Z · LW(p) · GW(p)

My answer is yes, and your point is well-taken: We have to be careful about what we mean by "the real world".

comment by lsparrish · 2010-07-02T00:20:22.997Z · LW(p) · GW(p)

Paul Graham has written extensively on Startups and what is required. A highly focused team of 2-4 founders, who must be willing to admit when their business model or product is flawed, yet enthused enough about it to pour their energy into it.

Steve Blank has also written about the Customer Development process, which he sees as paralleling the Product Development cycle. The idea is to get empirical feedback by trying to sell your product from the get-go, as soon as you have something minimal but useful. Then you test it for scalability. Eventually you have strong empirical evidence to present to potential investors, aka "traction".

These strike me as good examples of applied rationality. I wonder what percentage of Less Wrong readers would succeed as startup founders?

Replies from: Richard_Kennaway, wedrifid
comment by Richard_Kennaway · 2010-07-02T07:20:19.487Z · LW(p) · GW(p)

These strike me as good examples of applied rationality. I wonder what percentage of Less Wrong readers would succeed as startup founders?

I wonder what percentage have ever tried?

Replies from: pjeby, realitygrill
comment by pjeby · 2010-07-02T14:39:02.867Z · LW(p) · GW(p)

I wonder what percentage have ever tried?

That at least partly depends on what you define as a "startup". Graham's idea of one seems to be oriented towards "business that will expand and either be bought out by a major company or become one", vs. "enterprise that builds personal wealth for the founder(s)".

By Graham's criteria, Joel Spolsky's company, Fog Creek, would not have been considered a startup, for example, nor would any business I've ever personally run or been a shareholder of.

[Edit: I should say, "or been a 10%+ shareholder of"; after all, I've held shares in public companies, some of which were undoubtedly startups!]

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-07-03T08:57:18.480Z · LW(p) · GW(p)

That at least partly depends on what you define as a "startup".

At the most general, creating your own business (excluding the sort of "contract" status in which the only difference with an employee is in the accounting details) and making a good living from it.

At the most narrow, starting up a business that, as Guy Kawasaki puts it, solves the money problem for the rest of your life.

Maybe a survey would be interesting, either as a thread here or somewhere like SurveyMonkey that would allow anonymous responses. "1. Are you an employee/own your own business/living on a pile of money of your own/a dependent/other? 2. Which of those states would you prefer to be in? 3. If the answers to 1 and 2 are different, are you doing anything about it?"

I can't get back to this until this evening (it is locally 10am as I write). Suggestions welcome.

Replies from: Morendil
comment by Morendil · 2010-07-03T09:35:31.824Z · LW(p) · GW(p)

You need at least one more item in there - "retired", i.e. with passive income that exceeds one's costs of living. Different from "living on a pile of money", insofar as there might still be things you can't afford.

comment by realitygrill · 2010-07-03T05:22:40.593Z · LW(p) · GW(p)

I wonder what percentage are even inclined to try?

comment by wedrifid · 2010-07-02T03:02:31.906Z · LW(p) · GW(p)

These strike me as good examples of applied rationality. I wonder what percentage of Less Wrong readers would succeed as startup founders?

I would not deviate too much from the prior (most would fail).

Replies from: lsparrish
comment by lsparrish · 2010-07-02T23:07:43.466Z · LW(p) · GW(p)

Are you saying that LW readers suck at applied rationality, or are you disagreeing with the idea that applied rationality can help prevent startup failure?

Replies from: wedrifid
comment by wedrifid · 2010-07-03T04:11:34.519Z · LW(p) · GW(p)

I would say that preventing startup failure requires a whole group of factors, not least of which is good fortune. It is hard for me to judge whether LW readers are more likely to get it all right than other people who self-select to start startups. I note, for example, that people starting a second startup do not tend to be all that much more likely to succeed than on their first attempt!

Replies from: lsparrish
comment by lsparrish · 2010-07-03T18:27:19.938Z · LW(p) · GW(p)

Suppose we were to test it empirically and 9/10 startups fail on their first attempt. Then test again and 9/10 still fail on second attempt. That is not enough information to determine that a given person would fail 10 times in a row, because it could be that there is some number of failures <10 where you finally acquire enough skill to avoid failure on a more routine basis.

Given the fact that there's a whole world of information, strategies, and skills specific to founding startups, I would be surprised if the average member of a given group of startup founders kept failing x times out of every y attempts just because x out of y first attempts fail.

So it would be relevant (especially if you are, say, an angel investor) how far the failure rate can be brought down by multiple attempts from a given individual, and whether a given kind of education (such as reading the Less Wrong sequences, or a quality such as self-selecting to read them) would predispose you to reducing that failure rate more rapidly and/or further in the long run.

Replies from: wedrifid
comment by wedrifid · 2010-07-04T02:28:16.800Z · LW(p) · GW(p)

Suppose we were to test it empirically and 9/10 startups fail on their first attempt. Then test again and 9/10 still fail on second attempt. That is not enough information to determine that a given person would fail 10 times in a row, because it could be that there is some number of failures <10 where you finally acquire enough skill to avoid failure on a more routine basis.

There is also a number of failures <10 where earning money in a career and then investing it in shares gives a higher expected return than repeated gambling on startups.

Replies from: lsparrish
comment by lsparrish · 2010-07-04T16:04:29.036Z · LW(p) · GW(p)

Here is the relevant quote from Paul Graham's Why Hiring is Obsolete:

Risk and reward are always proportionate. For example, stocks are riskier than bonds, and over time always have greater returns. So why does anyone invest in bonds? The catch is that phrase "over time." Stocks will generate greater returns over thirty years, but they might lose value from year to year. So what you should invest in depends on how soon you need the money. If you're young, you should take the riskiest investments you can find.

All this talk about investing may seem very theoretical. Most undergrads probably have more debts than assets. They may feel they have nothing to invest. But that's not true: they have their time to invest, and the same rule about risk applies there. Your early twenties are exactly the time to take insane career risks.

The reason risk is always proportionate to reward is that market forces make it so. People will pay extra for stability. So if you choose stability-- by buying bonds, or by going to work for a big company-- it's going to cost you.

Riskier career moves pay better on average, because there is less demand for them. Extreme choices like starting a startup are so frightening that most people won't even try. So you don't end up having as much competition as you might expect, considering the prizes at stake.

The math is brutal. While perhaps 9 out of 10 startups fail, the one that succeeds will pay the founders more than 10 times what they would have made in an ordinary job. [3] That's the sense in which startups pay better "on average."

Remember that. If you start a startup, you'll probably fail. Most startups fail. It's the nature of the business. But it's not necessarily a mistake to try something that has a 90% chance of failing, if you can afford the risk. Failing at 40, when you have a family to support, could be serious. But if you fail at 22, so what? If you try to start a startup right out of college and it tanks, you'll end up at 23 broke and a lot smarter. Which, if you think about it, is roughly what you hope to get from a graduate program.

He also goes on to say that managers at forward-thinking companies he talked to, such as Yahoo, Amazon, and Google, would prefer to hire a failed startup genius over someone who worked a steady job for the same period of time. Essentially, if you don't need financial stability in the near future, time spent working diligently and passionately on your own ideas, trying to make them fit the marketplace, is more valuable than time spent on a steady payroll.
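Putting rough numbers on this exchange: the 1-in-10 success rate and the "more than 10 times" payoff come from the quote above, while the one-attempt-per-period framing, the normalized salary, the independence of attempts, and the optional learning bonus are illustrative assumptions of mine, not claims from the thread. A minimal sketch:

```python
# Rough numbers from the quote above: about 1 in 10 startups succeeds, and a
# success pays "more than 10 times" what an ordinary job pays over the same
# stretch. Everything else -- one attempt per period, a salary normalized to 1
# per period, attempts treated as independent, and the optional learning bonus
# per prior attempt -- is an illustrative assumption.
BASE_P = 0.10      # success probability of a first attempt
PAYOFF = 10.0      # payoff of a success, in units of one period's salary
SALARY = 1.0       # ordinary-job pay per period

def expected_startup_total(n_attempts, learning_per_attempt=0.0):
    """Expected total payoff of n back-to-back attempts, with an optional bump
    to the success probability for each attempt already made."""
    total = 0.0
    for k in range(n_attempts):
        p_k = min(1.0, BASE_P + k * learning_per_attempt)
        total += p_k * PAYOFF
    return total

for n in (1, 3, 10):
    print(n, expected_startup_total(n), expected_startup_total(n, 0.03), n * SALARY)
# At exactly 10x the two routes break even in expectation (0.1 * 10 = 1 per
# attempt); a payoff genuinely above 10x, or any learning effect at all, tips
# the balance toward repeated attempts, which is roughly where lsparrish's and
# wedrifid's intuitions diverge. Risk aversion is a separate question.
```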

Replies from: gwern
comment by gwern · 2010-07-04T17:57:11.128Z · LW(p) · GW(p)

"For example, stocks are riskier than bonds, and over time always have greater returns."

In a LW vein, it's worth noting that selection and survivorship biases (as well as more general anthropic biases) mean that the very existence of the equity risk premium is unclear, even assuming that it ever existed.

(I note this because most people seem to take the premium for granted, but for long-term LW purposes, assuming the premium is dangerous. Cryonics' financial support is easier given the premium, for example, but if there is no premium and cryonics organizations invest as if there were and try to exploit it, that in itself becomes a not insignificant threat.)

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-07-04T20:18:12.327Z · LW(p) · GW(p)

The survivorship bias described by wikipedia is complete nonsense. Events that wipe out stock markets also wipe out bond markets and often wipe out banks. Usually when people talk about survivorship bias in this context, they mean that the people compiling the data are complete incompetents who only look at currently existing stocks.

If your interest is in the absolute return and not in the premium, then survivorship is a bias.

ETA: I think I was too harsh on the people that look at the wrong stocks. But too soft on wikipedia.

comment by beriukay · 2010-07-17T12:57:05.451Z · LW(p) · GW(p)

I know this thread is a bit bloated already without me adding to the din, but I was hoping to get some assistance on page 11 of Pearl's Causality (I'm reading 2nd edition).

I've been following along and trying to work out the examples, and I'm hitting a road block when it comes to deriving the property of Decomposition using the given definition (X || Y | Z) iff P( x | y,z ) = P( x | z ), and the basic axioms of probability theory. Part of my problem comes because I haven't been able to meaningfully define the 'YW' in (X || YW | Z), and how that translates into P( ). My best guess was that it is a union operation, but then if they aren't disjoint we wouldn't be using the axioms defined earlier in the book. I doubt someone as smart as Pearl would be sloppy in that way, so it has to be something I am overlooking.

I've been googling variations of the terms on the page, as well as trying to get derivations from Dawid, Spohn, and all the other sources in the footnote, but they all pretty much say the same thing, which is slightly unhelpful. Help would be appreciated.

Edit: It appears I failed at approximating the symbol used in the book. Hopefully that isn't distracting. It should look like the symbol used for orthogonality/perpendicularity, except with a double bar in the vertical.
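For reference, here is a sketch of the derivation being asked about, under the reading (which the replies below also arrive at) that "YW" stands for the joint variable, so that P( x | yw, z ) just means P( x | y, w, z ); the alignment notation and the implicit positivity assumption are mine, not Pearl's.

```latex
% Decomposition: from (X || YW | Z), i.e. P(x | y, w, z) = P(x | z), derive (X || Y | Z).
\begin{align*}
P(x \mid y, z) &= \sum_w P(x, w \mid y, z)                   && \text{(marginalize over } w\text{)} \\
               &= \sum_w P(x \mid y, w, z)\, P(w \mid y, z)  && \text{(chain rule)} \\
               &= \sum_w P(x \mid z)\, P(w \mid y, z)        && \text{(assumed independence)} \\
               &= P(x \mid z) \sum_w P(w \mid y, z) = P(x \mid z).
\end{align*}
```

Swapping the roles of Y and W in the same argument gives (X || W | Z), the other half of the decomposition property.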

Replies from: rhollerith_dot_com, SilasBarta
comment by RHollerith (rhollerith_dot_com) · 2010-07-17T21:46:56.939Z · LW(p) · GW(p)

I know this thread is a bit bloated already without me adding to the din

Do not worry about that. Pearl's Causality is part of the canon of this place.

comment by SilasBarta · 2010-07-17T15:53:18.259Z · LW(p) · GW(p)

You are right that YW means "Y and W". (The fact that they might be disjoint doesn't matter. It looks like the property you are referring to follows from the definition of conditional independence, but I'm not good at these kinds of proofs.)

And welcome to LW, don't feel bad about adding a question to the open thread.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-07-17T21:28:29.547Z · LW(p) · GW(p)

I haven't been able to meaningfully define the 'YW' in (X || YW | Z), and how that translates into P( ). My best guess was that it is a union operation, but then if they aren't disjoint . . .

You are right that YW means "Y and W" [says Silas].

You're probably right, Silas, that "YW" means "Y and W" (or "y and w" or what have you), but you confuse the matter by stating falsely that the original poster (beriukay) was right in his guess: if it was a union operation, Pearl would write it "Y cup W" or "y or w" or some such.

I do not have the book in front of me, beriukay, so that is the only guidance I can give you given what you have written so far.

Added. I now recall the page you refer to: there are about a dozen "laws" having to do with conditional independence. Now that I remember, I am almost certain that "YW" means "Y intersection W".

Replies from: SilasBarta, beriukay
comment by SilasBarta · 2010-07-17T21:45:42.382Z · LW(p) · GW(p)

Sorry, I'm bad about that terminology. Thanks for the correction.

comment by beriukay · 2010-07-18T11:14:32.076Z · LW(p) · GW(p)

First, thanks for taking an interest in my question. I just realized that instead of typing the page out myself, I could check whether Google had a scan of the page in question. It did. And unless I am mistaken, when he introduces his probability axioms he explicitly states that he will use a comma to indicate intersection.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-07-18T14:13:37.040Z · LW(p) · GW(p)

I am afraid I cannot agree with you.

Have you succeeded in your stated intention of "deriving the property of Decomposition using the given definition (X || Y | Z) iff P( x | y,z ) = P( x | z ), and the basic axioms of probability theory"?

If you wish to continue discussing this problem with me, I humbly suggest that the best way forward is for you to show me your proof of that. And we might take the discussion to email if you like.

It is great that you are studying Pearl.

comment by RobinZ · 2010-07-10T04:27:58.712Z · LW(p) · GW(p)

Well, given that I can now be confident my words won't encourage you*, I will feel free to mention that I found the attitudes of many of those replying to you troubling. There seemed to be an awful lot of verbiage ascribing detailed motivations to you based on (so far as I could tell) little more than (a) your disagreement and (b) your tone, and these descriptions, I feel, were accepted with greater confidence than would be warranted given their prior complexity and their current bases of evidential support.

None of the above is to withdraw my remarks toward you (which, like this one, were largely intended for the lurkertariat in any case).

* This comment is approximately 75% sarcastic.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-10T04:31:15.040Z · LW(p) · GW(p)

Well, given that I can now be confident my words won't encourage you*, I will feel free to mention that I found the attitudes of many of those replying to you troubling. There seemed to be an awful lot of verbiage ascribing detailed motivations to you based on (so far as I could tell) little more than (a) your disagreement and (b) your tone, and these descriptions, I feel, were accepted with greater confidence than would be warranted given their prior complexity and their current bases of evidential support.

I'm slightly worried that some of my remarks to Sam fell in that category. Rereading them, I don't see that, but there may be substantial cognitive biases preventing me from seeing this issue in my own remarks. Did any of my comments fall into that category under your estimate? If so, which ones?

Replies from: RobinZ
comment by RobinZ · 2010-07-10T04:47:08.979Z · LW(p) · GW(p)

Your comments were reasonably restrained.

Edit: To a certain extent I am gunshy about ascribing motivations at all - it may be my casual reading left me with an invalid impression of the extent to which this was done.

comment by Cyan · 2010-07-07T02:03:53.796Z · LW(p) · GW(p)

I love that on LW, feeding the trolls consists of writing well-argued and well-supported rebuttals.

Replies from: kpreid, JoshuaZ
comment by kpreid · 2010-07-07T02:13:19.313Z · LW(p) · GW(p)

This is not a distortion of the original meaning. “Feeding the trolls” is just giving them replies of any sort — especially if they're well-written, because you’re probably investing more effort than the troll.

Replies from: Cyan
comment by Cyan · 2010-07-07T02:52:28.901Z · LW(p) · GW(p)

I didn't intend to imply otherwise.

comment by JoshuaZ · 2010-07-07T02:08:58.802Z · LW(p) · GW(p)

I don't think this is unique to LW at all. I've seen well-argued rebuttals to trolls labeled as feeding in many different contexts including Slashdot and the OOTS forum.

Replies from: Vladimir_Nesov, Cyan
comment by Vladimir_Nesov · 2010-07-07T07:54:26.299Z · LW(p) · GW(p)

We must aspire to a greater standard, with troll-feeding replies being troll-aware of their own troll-awareness.

comment by Cyan · 2010-07-07T02:55:36.164Z · LW(p) · GW(p)

I didn't mean to imply that it was unique to LW.

comment by steven0461 · 2010-07-04T21:46:01.772Z · LW(p) · GW(p)

We think of Aumann updating as updating upward if the other person's probability is higher than you thought it would be, or updating downward if the other person's probability is lower than you thought it would be. But sometimes it's the other way around. Example: there are blue urns that have mostly blue balls and some red balls, and red urns that have mostly red balls and some blue balls. Except on Opposite Day, when the urn colors are reversed. Opposite Day is rare, and if it's OD you might learn it's OD or you might not. A and B are given an urn and are trying to find out whether it's red. It's OD, which A knows but B doesn't. They both draw a few balls. Then A knows if B draws red balls, B (not knowing it's OD) will estimate a high probability for red and therefore A (knowing it's OD) should estimate a low probability for red, and vice versa. So this is a sense in which intelligence can be inverted misguidedness.

Another thought: suppose in the above example, there's a small chance (let's say equal to the chance that it's OD) that A is insane and will behave as if always knowing for sure it's OD. Then if we're back in the case where it actually is OD and A is sane, the estimates of A and B will remain substantially different forever. So taking this as an example it seems like even tiny failures of common knowledge of rationality can (in correspondingly improbable cases) cause big persistent disagreements between rational agents.

Is the reasoning here correct? Are the examples important in practice?
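A minimal numerical sketch of the first example (the 80/20 urn compositions, the uniform prior, and the five draws are made-up numbers; the point is only that the same draws move A and B in opposite directions):

```python
# Hypothetical numbers for the urn example above: a red urn is 80% red balls
# normally; on Opposite Day (OD) the contents are swapped. A knows it is OD,
# B does not (and treats OD as negligible). Both see 4 red balls in 5 draws.
P_RED_URN = 0.5  # prior probability that the urn they were given is red

# P(draw a red ball | urn colour, whether it is treated as Opposite Day)
P_RED_BALL = {
    ("red", False): 0.8, ("red", True): 0.2,
    ("blue", False): 0.2, ("blue", True): 0.8,
}

def posterior_red(n_red, n_draws, treats_as_od):
    """P(urn is red | draws) for an agent whose model does or doesn't include OD."""
    def likelihood(urn):
        p = P_RED_BALL[(urn, treats_as_od)]
        return p ** n_red * (1 - p) ** (n_draws - n_red)
    joint_red = P_RED_URN * likelihood("red")
    joint_blue = (1 - P_RED_URN) * likelihood("blue")
    return joint_red / (joint_red + joint_blue)

print(round(posterior_red(4, 5, treats_as_od=False), 3))  # ~0.985: B's estimate
print(round(posterior_red(4, 5, treats_as_od=True), 3))   # ~0.015: A's estimate
```

So on hearing B announce a high probability that the urn is red, A, who conditions on its being Opposite Day, reads the same draws as evidence for blue, which is the inversion described above.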

comment by wedrifid · 2010-07-04T10:03:00.424Z · LW(p) · GW(p)

We have recently had a discussion on whether the raw drive for status seeking benefits society. This link seems all too appropriate (or, well, at least apt).

comment by JamesPfeiffer · 2010-07-02T17:19:32.589Z · LW(p) · GW(p)

I have been thinking about "holding off on proposing solutions." Can anyone comment on whether this is more about the social friction involved in rejecting someone's solution without injuring their pride, or more about the difficulty of getting an idea out of your head once it's there?

If it's mostly social, then I would expect the method to not be useful when used by a single person; and conversely. My anecdote is that I feel it's helped me when thinking solo, but this may be wishful thinking.

Replies from: Oscar_Cunningham, zero_call
comment by Oscar_Cunningham · 2010-07-02T17:28:36.341Z · LW(p) · GW(p)

Definitely the latter: even when I'm on my own, any subsequent ideas after my first one tend to be variations on my first solution, unless I try extra hard to escape its grip.

comment by zero_call · 2010-07-03T05:28:51.018Z · LW(p) · GW(p)

You might think about the zen idea, in which the proposal of solutions is certainly held off, or treated differently. This is a very common idea in response to the tendency of solutions to precipitate themselves so ubiquitously.

comment by murat · 2010-07-02T08:59:04.519Z · LW(p) · GW(p)

I have a few questions.

1) What's "Bayescraft"? I don't recall seeing this word elsewhere. I haven't seen a definition on LW wiki either.

2) Why do some people capitalize some words here? Like "Traditional Rationality" and whatnot.

Replies from: Morendil, Nisan, Oscar_Cunningham
comment by Morendil · 2010-07-02T09:21:01.885Z · LW(p) · GW(p)

To me "Bayescraft" has the connotation of a particular mental attitude, one inspired by Eliezer Yudkowsky's fusion of the ev-psych, heuristics-and-biases literature with E.T. Jaynes' idiosyncratic take on "Bayesian probabilistic inference", and in particular the desiderata for an inference robot: take all relevant evidence into account, rather than filter evidence according to your ideological biases, and allow your judgement of a proposition's plausibility to move freely in the [0..1] range rather than seek all-or-nothing certainty in your belief.

comment by Nisan · 2010-07-02T12:43:44.358Z · LW(p) · GW(p)

Capitalized words are often technical terms. So "Traditional Rationality" refers to certain epistemic attitudes and methods which have, in the past, been called "rational" (a word which is several hundred years old). This frees up the lower-case word "rationality", which on this site is also a technical term.

comment by Oscar_Cunningham · 2010-07-02T09:10:46.783Z · LW(p) · GW(p)

Bayescraft is just a synonym for Rationality, with connotations of a) Bayes' theorem, since that's what epistemic rationality must be based on, and b) the notion that rationality is a skill which must be developed personally and as a group (see also: Martial art of Rationality (oh look, more capitals!))

The capitals are just for emphasis of concepts that the writer thinks are fundamentally important.

comment by PeerInfinity · 2010-07-29T04:58:47.626Z · LW(p) · GW(p)

an interesting site I stumbled across recently: http://youarenotsosmart.com/

They talk about some of the same biases we talk about here.

Replies from: Cyan
comment by Cyan · 2010-07-29T15:47:26.142Z · LW(p) · GW(p)

In fact, the post of July 14 on the illusion of transparency quotes EY's post on the same subject.

comment by Taure · 2010-07-14T22:33:35.873Z · LW(p) · GW(p)

Is self-ignorance a prerequisite of human-like sentience?

I present here some ideas I've been considering recently with regards to philosophy of mind, but I suppose the answer to this question would have significant implications for AI research.

Clearly, our instinctive perception of our own sentience/consciousness is one which is inaccurate and mostly ignorant: we do not have knowledge or sensation of the physical processes occurring in our brains which give rise to our sense of self.

Yet I take it as true that our brains - like everything else - are purely physical. No mysticism here, thank you very much. If they are physical, then everything that occurs within is causally deterministic. I avoid here any implications regarding free will (a topic I regard as mostly nonsense anyway). I simply point out that our brain processes will follow a causal narrative thus: input leads to brain state A leads to brain state B which leads to brain state C, and so on. These processes are entirely physical, and therefore, theoretically (not practically - yet), entirely predictable.

Now, ask yourself this question: what would our self-perception be like, if it was entirely accurate to the physical reality? If there was no barrier of ignorance between our consciousness and the inner workings of our brains?

With every idea, thought, emotion, plan, memory and action we had, we would be aware of the brainwave that accompanied it - the specific pattern of neuronal firings, and how they built up to create semantically meaningful information. Further, we'd see how this brain state led to the following brain state, and so on. We would perceive ourselves as purely mechanical.

In addition, as our brain is not a single entity, but a massive network of neurons, collected into different systems (or modules), working together but having separate functions, we would not think of our mental processes as unified - at least nowhere near as much as we do now. We would no longer attribute our thoughts and mental life to an "I", but to the totality of mechanical processes that - when we were ignorant - built up to create a unified sense of "I".

I would tentatively suggest that such a sense of self is incompatible with our current sense of self. That how we act and behave and think, how we see ourselves and others, is intrinsically tied to the way we perceive ourselves as non-mechanical, possessing a mystical will - an I - which goes where it chooses (of course academically you may recognise that you're a biological machine, but instinctually we all behave as if we weren't). In short, I would suggest that our ignorance of our neural processes is necessary for the perception of ourselves as autonomous sentient individuals.

The implications of this, were it true, are clear. It would be impossible to create an AI which was both able to perceive and alter its own programming, while maintaining a human-like sentience. That's not to say that such an AI would not be sentient - just that it would be sentient in a very different way to how we are.

Secondly, we would possibly not even be able to recognise this other-sentience, such was the difference. For every decision or proclamation the AI made, we would simply see the mechanical programming at work, and say "It's not intelligent like we are, it's just following mechanical principles". (Think, for example, of Searle's Chinese Room, which I take only shows that if we can fully comprehend every stage of an information manipulation process, most people will intuitively think it to be not sentient). We would think our AI project unfinished, and keep trying to add that "final spark of life", unaware that we had completed the project already.

Replies from: steven0461
comment by steven0461 · 2010-07-14T22:52:55.699Z · LW(p) · GW(p)

I don't think there is really such a thing as introverted and extroverted people at all. People are encouraged to think of these things as part of their "essential character" (TM) - or even their biology.

Here's some evidence the other way -- paywalled, but the gist is on the first page.

Replies from: Taure
comment by Taure · 2010-07-14T23:11:20.314Z · LW(p) · GW(p)

Um, thanks, but I think wrong thread.

Replies from: steven0461
comment by steven0461 · 2010-07-14T23:13:51.552Z · LW(p) · GW(p)

Oops, you're right.

comment by Mass_Driver · 2010-07-10T04:44:30.094Z · LW(p) · GW(p)

Downvoted for unnecessarily rude plonking. You can tell someone you're not interested in what they have to say without being mean.

comment by Will_Newsome · 2010-07-09T01:52:40.265Z · LW(p) · GW(p)

So, probably like most everyone else here, I sometimes get complaints (mostly from my ex-girlfriend, you can always count on them to point out your flaws) that I'm too logical and rational and emotionless and I can't connect with people or understand them et cetera. Now, it's not like I'm actually particularly bad at these things for being as nerdy as I am, and my ex is a rather biased source of information, but it's true that I have a hard time coming across as... I suppose the adjective would be 'warm', or 'human'. I've attributed a lot of this to a) my always-seeking-outside-confirmation-of-competence-style narcissism, b) my overly precise (for most people, not here) speech patterns. (For instance, when my ex said I suck at understanding people, I asked "Why do you believe that?" instead of the simpler and less clinical-psychologist-sounding "How so?" or "How?" or what not.) and c) accidentally randomly bringing up terms like 'a priori' which apparently most people haven't heard. I think there's more low hanging fruit here, though. Tsuyoku naritai!

Has anyone else tackled these problems? It's not that I lack charisma - I've managed to pull off that insane/passionate/brilliant thing among my friends - but I do seem to lack the ability to really connect with people - even people I really care about. Do Less Wrongers experience similar problems? Any advice? Or meta-advice about how to learn hard-to-describe dispositions? I've noticed that consciously acting like I was Regina Spektor in one situation or Richard Feynman in another seems to help, for instance.

Replies from: WrongBot, wedrifid, Kevin, None, knb, JoshuaZ, katydee, Vladimir_Nesov
comment by WrongBot · 2010-07-09T02:08:32.156Z · LW(p) · GW(p)

"Fake it until you make it" is surprisingly good advice for this sort of thing. I had moderate self-esteem issues in my freshman year of college, so I consciously decided to pretend that I had very high self-esteem in every interaction I had outside of class. This may be one of those tricks that doesn't work for most people, but I found that using a song lyric (from a song I liked) as a mantra to recall my desired state of mind was incredibly helpful, and got into the habit of listening to that particular song before heading out to meet friends. (The National's "All The Wine" in this particular case. "I am a festival" was the mantra I used.)

That's in the same class of thing as acting like Regina Spektor or Feynman; if you act in a certain way consistently enough, your brain will learn that pattern and it will begin to feel more natural and less conscious. I don't worry about my self-esteem any more (in that direction, at least).

comment by wedrifid · 2010-07-09T02:29:22.316Z · LW(p) · GW(p)

I suggest a lot of practice talking to non-nerds or nerds who aren't in their nerd mode. (And less time with your ex!)

A perfect form of practice is dance. Take swing dancing lessons, for example. That removes the possibility of using your overwhelming verbal fluency and persona of intellectual brilliance. It makes it far easier to activate that part that is sometimes called 'human' but perhaps more accurately called 'animal'. Once you master maintaining the social connection in a purely non-verbal setting adding in a verbal component yet maintaining the flow should be far simpler.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-07-09T02:34:53.418Z · LW(p) · GW(p)

I suggest a lot of practice talking to non-nerds or nerds who aren't in their nerd mode.

Non-nerdy people who are interesting are surprisingly difficult to find, and I have a hard time connecting with the ones I do find, so I don't get much practice in. I'm guessing that the biggest demographic here would be artists (musicians). Being passionate about something abstract seems to be the common denominator.

(And less time with your ex!)

Ha, perhaps a good idea, but I enjoy the criticism. She points out flaws that I might have missed otherwise. I wonder if one could market themselves as a professional personality flaw detector or the like. I'd pay to see one.

Once you master maintaining the social connection in a purely non-verbal setting adding in a verbal component yet maintaining the flow should be far simpler.

Interesting, I had discounted dancing because of its nonverbality. Thanks for alerting me to my mistake!

Replies from: wedrifid, Kevin
comment by wedrifid · 2010-07-09T03:45:56.500Z · LW(p) · GW(p)

Interesting, I had discounted dancing because of its nonverbality. Thanks for alerting me to my mistake!

I was using very similar reasoning when I suggested "non-nerds or nerds not presently in nerd mode". The key is to hide the abstract-discussion crutch!

Ha, perhaps a good idea, but I enjoy the criticism. She points out flaws that I might have missed otherwise. I wonder if one could market themselves as a professional personality flaw detector or the like. I'd pay to see one.

Friends who are willing to suggest improvements (Tsuyoku naritai) sincerely are valuable resources! If your ex is able to point out a flaw then perhaps you could ask her to lead you through an example of how to have a 'warm, human' interaction, showing you the difference between that and what you usually do? Mind you, it is still almost certainly better to listen to criticism from someone who has a vested interest in your improvement rather than your acknowledgement of flaws. Like, say, a current girlfriend. ;)

comment by Kevin · 2010-07-09T06:45:10.605Z · LW(p) · GW(p)

Interesting, I had discounted dancing because of its nonverbality.

In my last semester at college, I figured I should take fun classes while I could, so I took two one credit drumming classes. In African Drumming Ensemble, we spent 90% of the time doing complex group dances and not drumming, because the drumming was so much easier to learn than the dancing.

Being tricked into taking a dance class was broadly good for my social skills, not the least my confidence on a dance floor.

comment by Kevin · 2010-07-09T06:52:09.007Z · LW(p) · GW(p)

b) my overly precise (for most people, not here) speech patterns

The kind of ultra-rational Bayesian linguistic patterns used around here would be considered obnoxiously intellectual and pretentious (and incomprehensible?) by most people. Practice mirroring the speech patterns of the people you are communicating with, and slip into rationalist talk when you need to win an argument about something important.

When I'm talking to street people, I say "man" a lot because it's something of a high honorific. Maybe in California I will need to start saying "dude", though man seems inherently more respectful.

comment by [deleted] · 2010-07-10T16:48:30.647Z · LW(p) · GW(p)

I think most people here have some sort of similar problem. Mine isn't being emotionless (ha!) but not knowing the right thing to say, putting my foot in my mouth, and so on. Occasionally coming across as a pedant, which is so embarrassing.

I may be getting better at it, though. One thing is: if you are a nerd (in the sense of passionate about something abstract) just roll with it. You will get along better with similar people. Your non-nerdy friends will know you're a nerd. I try to be as nice as possible so that when, inevitably, I say something clumsy or reveal that I'm ignorant of something basic, it's not taken too negatively. Nice but clueless is much better than arrogant.

And always wait for a cue from the other person to reveal something about yourself. Don't bring up politics unless he does; don't mention your interests unless he asks you; don't use long words unless he does.

I can't dance for shit, but various kinds of exercise are a good way to meet a broader spectrum of people.

Do I still feel like I'm mostly tolerated rather than liked? Yeah. It can be pretty depressing. But such is life.

As for dating -- the numbers are different from my perspective, of course, but so far I've found I'm not going to click really profoundly with guys who aren't intelligent. I don't mean that in a snobbish way, it's just a self-knowledge thing -- conversation is really fun for me, and I have more fun spending time with quick, talkative types. There's no point forcing yourself to be around people you don't enjoy.

comment by knb · 2010-07-09T09:30:02.187Z · LW(p) · GW(p)

In my experience, something as simple as adding a smile can transform a demeanor otherwise perceived as "cold" or "emotionless" to "laid-back" or "easy-going".

comment by JoshuaZ · 2010-07-09T02:14:44.841Z · LW(p) · GW(p)

Date nerdier people? In general, many nerdy rational individuals have a lot of trouble getting along with not-so-nerdy individuals. There's some danger that I'm other-optimizing, but I have trouble imagining how an educated rational individual would be able to date someone who thought that there was something wrong with using terms like "a priori." That's a common enough term, and if someone hears a term they don't know, they should be happy to learn something. So maybe just date a different sort of person?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-07-09T02:27:25.878Z · LW(p) · GW(p)

I wasn't talking mostly about dating, but I suppose that's an important subfield.

The topic you mention came up at the Singularity Institute Visiting Fellows house a few weeks ago. 3 or 4 guys, myself included, expressed a preference for girls who had specialized in some other area of life: gains from trade of specialized knowledge. And I just love explaining to a girl how big the universe is and how gold is formed in supernovas... most people can appreciate that, even if they see no need for using the word 'a priori'. I don't mean average intelligence, but one standard deviation above the mean intelligence. Maybe more; I tend to underestimate people. There was 1 person who was rather happy with his relationship with a girl who was very like him. However, the common theme was that people who had more dating experience consistently preferred less traditionally intelligent and more emotionally intelligent girls (I'm not using that term technically, by the way), whereas those with less dating experience had weaker preferences for girls who were like themselves. Those with more dating experience also seemed to put much more emphasis on the importance of attractiveness instead of e.g. intelligence or rationality. Not that you have to choose or anything, most of the time. I'm going to be so bold as to claim that most people with little dating experience who believe they would be happiest with a rationalist girlfriend should update on expected evidence and broaden their search criteria for potential mates.

As for preferences of women, I'm sorry, but the sample size was too small for me to see any trends. (To be fair this was a really informal discussion, not an official SIAI survey of course. :) )

Important addendum: I never actually checked to see if any of the guys in the conversation had dated women who were substantially more intelligent than average, and thus they might not have been making a fair comparison (imagining silly arguments about deism versus atheism or something). I myself have never dated a girl that was 3 sigma intelligent, for instance. I'm mostly drawing my comparison from fictional (imagined) evidence.

Replies from: JoshuaZ, None
comment by JoshuaZ · 2010-07-09T02:33:50.886Z · LW(p) · GW(p)

I've dated females who were clearly less intelligent than I am, some about the same, and some clearly more intelligent. I'm pretty sure the last category was the most enjoyable (I'm pretty sure that rational intelligent nerdy females don't want to date guys who aren't as smart as they are either). There may be issues with sample size.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-07-09T02:36:56.737Z · LW(p) · GW(p)

Hm, probably. I'm not sure what my priors would be, either. So my distribution's looking pretty flat at the moment, especially after your contrary evidence.

comment by [deleted] · 2010-07-12T02:03:52.045Z · LW(p) · GW(p)

I think that the quality of relationships depends less on the fluid intelligence of the partners, or on anything else they might have in common, and more on their level of emotional maturity (empathy, non-self-absorption, communication skills, generosity), as well as their attachment to and affection for one another.

You may become more attached to, or feel more affection for, someone you believe to be intelligent, but then again you might achieve the same emotional connection through, for example, shared life experiences. Intelligence and common interests may make a mate more entertaining, but in my experience it's really not terribly important for my boyfriend to entertain me; we can always go see a movie or play a game together for entertainment.

I'm arguing, in short, that intelligence is mostly irrelevant to relationship quality.

On a more personal note, I can testify that, however much you might admire intelligence per se, it is a terrible idea to date someone who is nearly but not quite as intelligent as yourself, who is also crushingly insecure.

comment by katydee · 2010-07-10T09:01:32.560Z · LW(p) · GW(p)

I have myself been accused of being an android or replicant on many occasions. The best way that I've found to deal with this is to make jokes and tell humorous anecdotes about the situation, especially ones that poke fun at myself. This way, the accusation itself becomes associated with the joke and people begin to find it funny, which makes it "unserious."

comment by Vladimir_Nesov · 2010-07-09T09:00:07.165Z · LW(p) · GW(p)

I often despair at my inability to communicate everyday life ideas at my own level. It's normal to have a textbook problem that is very difficult to solve, or to have a solution to said problem that is difficult to communicate. Sometimes it takes a lot of study to know enough to understand such a problem. But people don't expect to encounter such depth in the analysis of everyday life situations, or indeed in explanations of trivial remarks, and so they won't have the patience to understand a more difficult argument, or to learn the prerequisites for understanding it.

This leads to disagreements that I know (in theory) how to resolve (by explaining the reasons for a given position), but the other person won't study. The only short-term solution is to accept the impossibility of communication, and never mention the tiny details that you won't be able to easily substantiate.

An effective long-term solution is to gradually educate people around you, giving them rationalist's tools that you'll be eventually able to use to cut through the communication difficulty.

comment by SilasBarta · 2010-07-07T21:01:41.669Z · LW(p) · GW(p)

Information theory challenge: A few posters have mentioned here that the average entropy of a character in English is about one bit. This carries an interesting implication: you should be able to create an interface using only two of the keyboard's keys, such that composing an English message requires just as many keystrokes, on average, as it takes on a regular keyboard.

To do so, you'd have to exploit all the regularities of English to offer suggestions that save the user from having to specify individual letters. Most of the entropy is in the initial characters of a word or message, so you would probably spend more strokes on specifying those, but then make it up with some "autocomplete" feature for large portions of the message.

If that's too hard, it should be a lot easier to do a 3-input method, which only requires your message set to have an entropy of less than ~1.5 bits per character.

Just thought I'd point that out, as it might be something worth thinking about.
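For concreteness, here is a rough sketch of how one might estimate that keystroke budget - a toy construction of my own, not a design for a real interface. It charges an idealized two-key interface -log2 p(next character | context) binary choices per character under a simple order-2 letter model; the training file sample.txt, the 27-letter alphabet, and the add-one smoothing are all just illustrative assumptions.

    # Toy estimate of keystrokes for an idealized two-key interface:
    # each character "costs" -log2 p(char | previous two chars) binary choices.
    # sample.txt is a placeholder for any large chunk of English text.
    import math
    from collections import Counter, defaultdict

    ORDER = 2
    ALPHABET = 27  # letters plus space, roughly Shannon's setup

    def train(corpus):
        counts = defaultdict(Counter)
        padded = " " * ORDER + corpus
        for i in range(ORDER, len(padded)):
            counts[padded[i - ORDER:i]][padded[i]] += 1
        return counts

    def keystrokes(message, counts):
        bits = 0.0
        padded = " " * ORDER + message
        for i in range(ORDER, len(padded)):
            context, char = padded[i - ORDER:i], padded[i]
            seen = counts[context]
            p = (seen[char] + 1.0) / (sum(seen.values()) + ALPHABET)  # add-one smoothing
            bits += -math.log2(p)
        return bits

    model = train(open("sample.txt").read().lower())
    message = "the meeting is at noon"
    print(keystrokes(message, model) / len(message), "keystrokes per character")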

Replies from: gwern, Christian_Szegedy, Vladimir_M, Sniffnoy
comment by gwern · 2010-07-07T23:59:45.374Z · LW(p) · GW(p)

Already done; see Dasher and especially its Google Tech Talk.

It doesn't reach the 0.7-1 bit per character limit, of course, but then, according to the Hutter challenge no compression program (online or offline) has.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-08T02:16:41.050Z · LW(p) · GW(p)

Wow, and Dasher was invented by David MacKay, author of the famous free textbook on information theory!

Replies from: gwern
comment by gwern · 2010-07-08T02:18:48.742Z · LW(p) · GW(p)

According to Google Books, the textbook mentions Dasher, too.

comment by Christian_Szegedy · 2010-07-07T21:21:06.967Z · LW(p) · GW(p)

This is already exploited on cell phones to some extent.

comment by Vladimir_M · 2010-07-07T22:23:33.850Z · LW(p) · GW(p)

SilasBarta:

A few posters have mentioned here that the average entropy of a character in English is about one bit. This carries an interesting implication: you should be able to create an interface using only two of the keyboard's keys, such that composing an English message requires just as many keystrokes, on average, as it takes on a regular keyboard.

One way to achieve this (though not practical for human interfaces) would be to input the entire message bit by bit in some powerful lossless compression format optimized specifically for English text, and decompress it at the end of input. This way, you'd eliminate as much redundancy in your input as the compression algorithm is capable of removing.
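As a minimal sketch of that idea (using a stock general-purpose compressor rather than anything tuned for English, so the numbers are only suggestive, and the container overhead hurts short messages):

    # Count the binary "keystrokes" needed if the user typed in the compressed
    # bitstream of the message and the interface decompressed it at the end.
    import lzma

    message = b"the quick brown fox jumps over the lazy dog " * 20
    compressed = lzma.compress(message, preset=9)
    bits = 8 * len(compressed)
    print(len(message), "characters ->", bits, "binary keystrokes,",
          round(float(bits) / len(message), 2), "bits per character")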

The really interesting question, of course, is what are the limits of such technologies in practical applications. But if anyone has an original idea there, they'd likely cash in on it rather than post it here.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-07-08T00:03:37.616Z · LW(p) · GW(p)

Shannon's estimate of 0.6 to 1.3 bits per character was based on having humans guess the next character out of a 27-character alphabet including spaces but no other punctuation.

The impractical leading algorithm achieves 1.3 bits per byte on the first 10^8 bytes of wikipedia. This page says that stripping wikipedia down to a simple alphabet doesn't affect compression ratios much. I think that means that it hits Shannon's upper estimate. But it's not normal text (eg, redirects), so I'm not sure in which way its entropy differs. The practical (for computer, not human) algorithm bzip2 achieves 2.3 bits per byte on wikipedia and I find it achieves 2.1 bits per character on normal text (which suggests that wikipedia has more entropy and thus that the leading algorithm is beating Shannon's estimate).

Since Sniffnoy asked about arithmetic coding: if I understand correctly, this page claims that arithmetic coding of characters achieves 4 bits per character and 2.8 bits per character if the alphabet is 4-tuples.
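(For anyone checking the arithmetic: "bits per byte" above is just the compression ratio times 8. A quick helper, with made-up sizes as placeholders rather than the actual benchmark figures:)

    # bits per byte = 8 * compressed size / original size
    def bits_per_byte(original_bytes, compressed_bytes):
        return 8.0 * compressed_bytes / original_bytes

    print(bits_per_byte(100000000, 29000000))  # a 29% ratio -> 2.32 bits per byte
    print(bits_per_byte(100000000, 16250000))  # a 16.25% ratio -> 1.3 bits per byte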

Replies from: gwern
comment by gwern · 2010-07-08T00:12:57.455Z · LW(p) · GW(p)

bzip2 is known to be both slow and not too great at compression; what does lzma-2 (faster & smaller) get you on Wikipedia?

(Also, I would expect redirects to play in a compression algorithm's favor compared to natural language. A redirect almost always takes the stereotypical form #REDIRECT[[foo]] or #redirect[[foo]]. It would have difficulty compressing the target, frequently a proper name, but the other 13 characters? Pure gravy.)

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-07-08T00:48:31.299Z · LW(p) · GW(p)

Here are the numbers for a pre-LZMA2 version of 7zip. It looks like LZMA is 2.0 bits per byte, while some other option is 1.7 bits per byte.

Yes, I would expect wikipedia to compress more than text, but it doesn't seem to be so. This is just for the first 100MB. At a gig, all compression programs do dramatically better, even off-the-shelf ones that shouldn't window that far. Maybe there is a lot of random vandalism early in the alphabet?

Replies from: gwern
comment by gwern · 2010-07-08T02:24:25.439Z · LW(p) · GW(p)

Well, early on there are many weirdly titled pages, and I could imagine that the first 100MB includes all the '1958 in British Tennis'-style year articles. But intuitively that doesn't feel like enough to cause bad results.

Nor have any of the articles or theses I've read on vandalism detection noted any unusual distributions of vandalism; further, obvious vandalism like gibberish/high-entropy strings is the least long-lived form of vandalism - long-lived vandalism looks plausible & correct, and indistinguishable from normal English even to native speakers (much less a compression algorithm).

A window really does sound like the best explanation, until someone tries out 100MB chunks from other areas of Wikipedia and finds they compress comparably to 1GB.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-07-08T03:59:55.484Z · LW(p) · GW(p)

bzip's window is 900k, yet it compresses 100MB to 29% but 1GB to 25%. Increasing the memory on 7zip's PPM makes a larger difference on 1GB than 100MB, so maybe it's the window that's relevant there, but it doesn't seem very plausible to me. (18.5% -> 17.8% vs 21.3% -> 21.1%)

Sporting lists might compress badly, especially if they contain times, but this one seems to compress well.

Replies from: gwern
comment by gwern · 2010-07-23T09:51:28.572Z · LW(p) · GW(p)

That's very odd. If you ever find out what is going on here, I'd appreciate knowing.

comment by Sniffnoy · 2010-07-07T21:21:06.424Z · LW(p) · GW(p)

Doesn't arithmetic coding accomplish this? Or does that not count because it's unlikely a human could actually use it?

Replies from: SilasBarta
comment by SilasBarta · 2010-07-07T21:29:43.439Z · LW(p) · GW(p)

I don't think arithmetic coding achieves the 1 bit / character theoretical entropy of common English, as that requires knowledge of very complex boundaries in the probability distribution. If you know a color word is coming next, you can capitalize on it, but not letterwise.

Of course, if you permit a large enough block size, then it could work, but the lookup table would probably be unmanageable.
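A toy way to see the block-size effect (my own sketch, not a real coder: corpus.txt is a placeholder, and on a small corpus the estimate for large blocks is biased low because the block distribution is undersampled):

    # Empirical entropy per character from non-overlapping block frequencies.
    # Larger blocks capture more of English's structure, but the table of
    # distinct blocks grows exponentially - the "unmanageable lookup table".
    import math
    from collections import Counter

    def entropy_per_char(text, block):
        chunks = [text[i:i + block] for i in range(0, len(text) - block + 1, block)]
        counts = Counter(chunks)
        total = float(sum(counts.values()))
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        return h / block

    text = open("corpus.txt").read().lower()
    for k in (1, 2, 4, 8):
        print(k, round(entropy_per_char(text, k), 3))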

Replies from: Sniffnoy
comment by Sniffnoy · 2010-07-09T11:31:48.570Z · LW(p) · GW(p)

Yeah, I meant "arithmetic encoding with absurdly large block size"; I don't have a practical solution.

comment by multifoliaterose · 2010-07-04T23:41:41.147Z · LW(p) · GW(p)

Another reference request: Eliezer made a post about how it's ultimately incoherent to talk about how "A causes B" in the physical world because at root, everything is caused by the physical laws and initial conditions of the universe. But I don't remember what it is called. Does anybody else remember?

Replies from: Vladimir_Nesov, Kazuo_Thow
comment by Vladimir_Nesov · 2010-07-06T09:50:55.280Z · LW(p) · GW(p)

It is coherent to talk about "A causes B"; on the contrary, it's a mistake to say that everything is caused by physical laws and therefore you have no free will, for example (as if your actions don't cause anything). Of course, any given event won't normally have only one cause, but considering the causes of an event makes sense. See the posts on free will, and then the solution posts linked from there. The picture you were thinking about is probably from these posts.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-07-06T16:56:14.396Z · LW(p) · GW(p)

Thanks for the reference, yes, this is what I had remembered. And yes, I garbled the article - what I had in mind was the point that any given event won't normally have only one cause.

comment by Kazuo_Thow · 2010-07-05T21:24:05.922Z · LW(p) · GW(p)

It couldn't have been "Timeless Causality" or "Causality and Moral Responsibility", could it?

Replies from: multifoliaterose
comment by multifoliaterose · 2010-07-06T05:05:58.107Z · LW(p) · GW(p)

Thanks, but neither of these are the one I remember.

comment by Emile · 2010-07-03T10:17:37.936Z · LW(p) · GW(p)

I have some half-baked ideas about getting interesting information on LessWrongers' political opinions.

My goal is to give everybody an "alien's eye" view of their opinions, something like "You hold position Foo on issue Bar, and justify it by the X books you read on Bar; but among the sample people who read X or more books on Bar, 75% hold position ~Foo, suggesting that you are likely to be overconfident".

Something like collecting:

  • your positions on various issues

  • your confidence in that position

  • how important various characteristics are at predicting correct opinions on that issue (intelligence, general education, reading on the issue, age ("general experience"), specific work or life experience with the issue, etc.)

  • How well you fare on those characteristics

  • Whether you expect to be above or below average (for LessWrong) on those characteristics

  • How many lesswrongers you expect will disagree with you on that issue

  • Whether you expect those who disagree with you to be above or below average on the various characteristics

  • How much you would be willing to change your mind if you saw surprising information

What data we could get from that

  • Are differences in opinion due to different "criteria for rightness" (book-knowledge vs. experience), to different "levels of knowledge" (Smart people believe A, stupid people believe B), or to something else ?

Problems with this approach:

  • Politics is the mind-killer. We may not want too much (or any) politics on LessWrong. If the data is collected anonymously, this may not be a huge problem.

  • It's easier to do data-mining etc. with multiple-choice questions rather than with open-ended questions (because two people never answer the same thing, so it leaves space to interpretation), but doing that correctly requires very good advance knowledge of what possible answers exist.

  • Questions would need to be veeery carefully phrased.

  • Ideally I would want confidence factors for all answers, but the end result may be too intimidating :P (And discourage people from answering, which makes a small sample size, which means questionable results).

I would certainly be interested in seeing the result of such a survey, but for now my idea is too rough to be actionable - any suggestions ? Comments ?
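To make the intended output a little more concrete, here is a very rough sketch of the kind of "outside view" summary I have in mind; the field names and numbers are made-up placeholders, not a real survey design:

    # Given survey records, report how positions split among respondents who
    # score at least `threshold` on some predictor (e.g. books read on the issue).
    from collections import defaultdict

    responses = [
        {"issue": "Bar", "position": "Foo", "books_read": 4},
        {"issue": "Bar", "position": "not-Foo", "books_read": 5},
        {"issue": "Bar", "position": "not-Foo", "books_read": 6},
        {"issue": "Bar", "position": "Foo", "books_read": 1},
    ]

    def outside_view(records, issue, predictor, threshold):
        tally = defaultdict(int)
        for r in records:
            if r["issue"] == issue and r[predictor] >= threshold:
                tally[r["position"]] += 1
        total = float(sum(tally.values()))
        return {position: count / total for position, count in tally.items()}

    # "Among those who read 4+ books on Bar, what fraction hold each position?"
    print(outside_view(responses, "Bar", "books_read", 4))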

Replies from: Douglas_Knight, None, Emile, mattnewport
comment by [deleted] · 2010-07-06T01:11:12.301Z · LW(p) · GW(p)

In general I'd be interested in more specific and subtle data on political views than is normally given. In particular, on what issues do people tend to break with their own party or ideology? That's a simpler answer than you're asking, but easily tested.

comment by Emile · 2010-07-03T11:34:12.695Z · LW(p) · GW(p)

Oh, and I would probably want to add something on political affiliation - mostly because I expect a lot of "I believe Foo because I researched the issue / am very smart; others believe ~Foo because of their political affiliation"; but also because "I believe Foo and have researched it well, even though it goes against the grain of my general political affiliation" may be good evidence for Foo.

comment by mattnewport · 2010-07-03T15:28:00.450Z · LW(p) · GW(p)
  • how important various characteristics are at predicting correct opinions on that issue (intelligence, general education, reading on the issue, age ("general experience"), specific work or life experience with the issue, etc.)

How do you propose to determine what constitutes a 'correct' opinion on any given controversial issue?

Replies from: Emile, wedrifid
comment by Emile · 2010-07-03T16:33:22.131Z · LW(p) · GW(p)

I don't :)

If there is a disagreement on, say, the status of Taiwan, even someone who doesn't know much about it might agree that some good predictors would be "knowledge of the history of Taiwan", "Having lived in Taiwan", "Familiarity with Chinese culture", etc.

And it can be interesting to see whether:

  • People of different opinions consider different predictors as important (conveniently, those that favor their position)

  • Everyone agrees on which predictors are important, but those who score highly on those predictors have a different opinion from those that score lowly (which would be evidence that they are probably right)

  • Everyone agrees on which predictors are important, but even among those who score highly on those predictors, opinions are split.

I guess what I'm getting at is "If you take the outside view, how likely is it that your opinions are true"?

comment by wedrifid · 2010-07-03T15:58:11.316Z · LW(p) · GW(p)

How do you propose to determine what constitutes a 'correct' opinion on any given controversial issue?

The only way that makes any sense, see how closely they match her own! :)

comment by Kevin · 2010-07-06T05:26:18.821Z · LW(p) · GW(p)

I have an IQ of 85. My sister has an IQ of 160+. AMA.

http://www.reddit.com/r/IAmA/comments/cma2j/i_have_an_iq_of_85_my_sister_has_an_iq_of_160_ama/

Posted because of previous LW interest in a similar thread.

Replies from: RobinZ
comment by RobinZ · 2010-07-06T11:20:38.463Z · LW(p) · GW(p)

...huh, the account has been deleted.

comment by cousin_it · 2010-07-05T15:36:01.661Z · LW(p) · GW(p)

We've been thinking about the moral status of identical copies. Some people value them, some people don't; Nesov says we should ask an FAI because our moral intuitions are inadequate for such problems. Here's a new intuition pump:

Wolfram Research has discovered a cellular automaton that, when run for enough cycles, produces a singleton creature named Bob. From what we can see, Bob is conscious, sentient and pretty damn happy in his swamp. But we can't tweak Bob to create other creatures like him, because the automaton's rules are too fragile and poorly understood, and finding another ruleset with sentient beings seems very difficult as well. My question is, how many computers must we allocate to running identical copies of Bob and his world to make our moral sense happy? Assume computing power is pretty cheap.

Replies from: mkehrt, SilasBarta
comment by mkehrt · 2010-07-07T09:24:52.976Z · LW(p) · GW(p)

I completely lack the moral intuition that one should create new conscious beings if one knows that they will be happy. Instead, my ethics apply only to existing people. I am actually completely baffled that so many people seem to have this intuition.

Thus, there is no reason to copy Bob. (Moreover, I avoid the repugnant conclusion.)

comment by SilasBarta · 2010-07-05T15:45:07.881Z · LW(p) · GW(p)

Same answer I give for all other cases of software life: our ability to run Bob is more resilient against information theoretic death. So as long as we store enough to start him from where he left off, he never feels death, and we have met our moral obligations to him.

(First LW post from my first smartphone btw.)

Replies from: Vladimir_Nesov, cousin_it, cousin_it
comment by Vladimir_Nesov · 2010-07-05T18:21:44.076Z · LW(p) · GW(p)

Same answer I give for all other cases of software life: our ability to run Bob is more resilient against information theoretic death. So as long as we store enough to start him from where he left off, he never feels death, and we have met our moral obligations to him.

Bah, he can't feel that we don't run him. Whether we should run him is a question of optimizing the moral value of our world, not of determining his subjective perception. What Bob feels is a property completely determined by the initial conditions of the simulation, and doesn't (generally) depend on whether he gets implemented in any given world.

Replies from: cousin_it
comment by cousin_it · 2010-07-05T18:37:31.274Z · LW(p) · GW(p)

You believe in Tegmark IV then? How do you reconcile it with my recent argument against it? Your use of "preference" looks like a get out of jail free card: it can "explain" any sequence of observations by claiming that you only "care" about a specific subset of worlds.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-05T18:58:59.585Z · LW(p) · GW(p)

Don't see how Tegmark IV is relevant here (or indeed relevant anywhere: it doesn't say anything!). My comment was against expecting Bob to have epiphenomenal feelings: if it's not something already in his program (which takes no input), then he can't possibly experience it.

Replies from: cousin_it
comment by cousin_it · 2010-07-05T19:07:11.969Z · LW(p) · GW(p)

It seems I misread your comment. Sorry.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-05T19:18:44.586Z · LW(p) · GW(p)

Your confusion with Tegmark IV seems to remain though, so I'm glad you signaled that. This topic is analogous to Tegmark IV, in that in both cases the distinction made is essentially epiphenomenal: multiverses talk about which things "exist" or "don't exist", and here Bob is supposed to feel "non-existence". The property of "existence" is meaningless, that's the problem in both cases. When you refer to the relevant concepts (worlds, behavior of Bob's program), you refer to all their properties, and you can't stamp "exists" on top of that (unless the concept itself is inconsistent, say).

One can value certain concepts, and make decisions based on properties of those concepts. The concepts themselves are determined by what the decision-making algorithm is interested in.

Replies from: cousin_it
comment by cousin_it · 2010-07-05T20:26:29.100Z · LW(p) · GW(p)

It seems to me you're mistaken. Multiverse theories do make predictions about what experiences we should anticipate, they're just wrong. You haven't yet given any real answer to the issue of pheasants, or maybe I'm a pathetic failure at parsing your posts.

Incidentally, my problem makes for a nice little test case: what experiences do you think Bob "should" anticipate in his future, assuming now we can meddle in the simulation at will? Does this question have a single correct answer? If it doesn't, why do such questions appear to have correct answers in our world, answers which don't require us to hypothesize random meddling gods, and does it tell us anything about how our world is different from Bob's?

Replies from: jimrandomh
comment by jimrandomh · 2010-07-06T22:25:11.695Z · LW(p) · GW(p)

Multiverse theories do make predictions about what experiences we should anticipate, they're just wrong.

On the contrary, multiverse theories do make predictions about subjective experience. For example, they predict what sort of subjective experience a sentient computer program should have, if any, after being halted. Some predict oddities like quantum immortality. The problem is that all observations that could shed light on the issue also require leaving the universe, making the evidence non-transferrable.

comment by cousin_it · 2010-07-05T15:53:54.178Z · LW(p) · GW(p)

Okay next question. Our understanding of the cellular automaton has advanced to the point where we can change one spot of Bob's world, at one specific moment in time, without being too afraid of harming Bob. It will have ripple effects and change the swamp around him slightly, though. So now we have 10^30 possible slightly-different potential futures for Bob. He will probably be happy in the overwhelming majority of them. How many should we run to fulfill our moral utility function of making sentients happy?

Replies from: SilasBarta
comment by SilasBarta · 2010-07-05T16:21:48.859Z · LW(p) · GW(p)

Okay, point taken. The answer depends on how (one believes) the social utility function responds to new instantiations of sentients that are very similar to existing ones. But in any case, you would be obligated to preserve re-instantiation capability of any already-created being.

Replies from: cousin_it
comment by cousin_it · 2010-07-05T16:31:12.409Z · LW(p) · GW(p)

The answer depends on how (one believes) the social utility function responds to new instantiations of sentients that are very similar to existing ones.

How does yours?

Replies from: SilasBarta
comment by SilasBarta · 2010-07-05T17:16:48.124Z · LW(p) · GW(p)

I don't think that creation of new sentients, in and of itself, has an impact on the (my) SUF. It only has an impact to the extent that their creators value them and others disvalue such new beings.

comment by cousin_it · 2010-07-05T15:47:27.137Z · LW(p) · GW(p)

He never feels death if we just stop the simulation either.

comment by Kevin · 2010-07-02T10:44:32.627Z · LW(p) · GW(p)

Medical grade honey! I can't wait until I can get this stuff in bulk.

How honey kills bacteria

Replies from: gwern
comment by gwern · 2010-07-02T10:54:02.763Z · LW(p) · GW(p)

I'm just wondering - what makes medical-grade honey medical-grade (as opposed to food-grade)?

Replies from: Emile, Douglas_Knight, Kutta
comment by Emile · 2010-07-02T11:12:53.998Z · LW(p) · GW(p)

The price ?

comment by Douglas_Knight · 2010-07-03T03:15:08.479Z · LW(p) · GW(p)

Medical-grade honey is purer, sterilized, and made from tea tree nectar. It is a better antibiotic, both because of the sterilization and because it has more of the active ingredient than ordinary tea tree honey, probably because they put more effort into preventing the bees from eating anything else.

Replies from: gwern
comment by gwern · 2010-07-03T12:57:50.455Z · LW(p) · GW(p)

'tea tree nectar'? I'm a little confused - I thought honey by definition always came from bees.

Replies from: wedrifid
comment by wedrifid · 2010-07-03T12:59:36.124Z · LW(p) · GW(p)

I'll presume you aren't making a joke since you used the lesswrong keyword 'confused'.

What do bees eat?

Replies from: gwern
comment by gwern · 2010-07-03T21:55:23.371Z · LW(p) · GW(p)

What do bees eat?

Flower nectar, I had always thought. I did think to myself, 'maybe what is meant is honey harvested from bees fed exclusively on the flowers of tea trees', but leaving aside my similar difficulty with the term 'tea tree' and how one would arrange that (giant sealed greenhouses of tea trees and bee hives?), I couldn't seem to find anything in a quick Google to confirm or deny this - 'tea tree honey' is a pretty rare term and mostly got me useless commercial hits.

Replies from: Douglas_Knight, wedrifid
comment by Douglas_Knight · 2010-07-04T04:32:06.339Z · LW(p) · GW(p)

The link I gave said "manuka" rather than "tea tree." If you want to know how beekeepers control the inputs, the term is monofloral honey. This is quite common, though a higher price for medical grade honey might lead to more involved methods.

comment by wedrifid · 2010-07-04T02:43:35.827Z · LW(p) · GW(p)

Put the box in the middle of a large forest of tea trees and kill any other plant that bears flowers nearby. Bees are quite efficient optimisers, they'll take low hanging fruit tree blossom if it is available.

comment by Kutta · 2010-07-02T22:08:42.045Z · LW(p) · GW(p)

It is produced by bees.

comment by simplicio · 2010-07-28T06:03:39.488Z · LW(p) · GW(p)

I've been listening to a podcast (Skeptically Speaking) talking with a fellow named Sherman K Stein, author of Survival Guide for Outsiders. I haven't read the book, but it seems that the author has a lot of good points about how much weight to give to expert opinions.

EDIT: Having finished listening, I revise my opinion down. It's still probably worth reading, but wait for it to get to the library.

comment by Kevin · 2010-07-07T03:50:39.089Z · LW(p) · GW(p)

Scientific study roundup: fish oil and mental health.

http://www.oilofpisces.com/depression.html

Replies from: RobinZ
comment by RobinZ · 2010-07-07T04:04:43.580Z · LW(p) · GW(p)

Welcome to the Premier Omega-3/Fish Oil Site on the Web!

I feel cautious about the objectivity of this source. Other sources suggest health benefits to consumption of fish, but I want to be confident that my expert sources are not skewing the selection of research they choose to promote.

Replies from: Kevin
comment by Kevin · 2010-07-07T04:08:27.117Z · LW(p) · GW(p)

Regardless of the source, the evidence seems to be rather strong that fish oil does good things for the brain. If you can find any negative evidence about fish oil and mental health, I'd like to see it.

Replies from: RobinZ
comment by RobinZ · 2010-07-07T04:13:39.364Z · LW(p) · GW(p)

I would like to know of risks associated with fish oil consumption as well. I am not aware of any. I am also not confident that any given site dedicated to the stuff would provide such information if or when it is available. I would suggest investigating independent sources of information (including but not limited to citations within and citations of referenced research) before drawing a confident conclusion.

Replies from: mattnewport, Richard_Kennaway
comment by mattnewport · 2010-07-07T07:35:54.904Z · LW(p) · GW(p)

Fish oil (particularly cod liver oil) has high levels of vitamin A which is known to be toxic at high doses (above what would typically be consumed through fish oil supplements) and some studies suggest is harmful at lower doses (consistent with daily supplementation).

comment by Richard_Kennaway · 2010-07-07T04:57:41.556Z · LW(p) · GW(p)

Seth Roberts has written about omega-3s. I believe that somewhere in there he's talked about the possibility of mercury contamination in fish oils.

Replies from: wedrifid, Richard_Kennaway, Kevin, RobinZ
comment by wedrifid · 2010-07-07T05:03:23.473Z · LW(p) · GW(p)

(I note that mercury concentration is subject to heavy quality control measures. Quality fish oil supplements will include credible guarantees regarding mercury levels, based on independent testing. This is, of course, something to consider when buying cheap sources from some obscure place.)

comment by Richard_Kennaway · 2010-07-07T06:19:01.577Z · LW(p) · GW(p)

Correction: the health risk he wrote about was PCBs in fish oil. For this reason he advocates flaxseed oil as a source of omega-3. Whether there is any real danger I don't know.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-07-07T06:28:50.782Z · LW(p) · GW(p)

PCBs and omega-3s climb the food chain, so they're pretty well correlated. At some point I eyeballed a chart and decided that mercury was negatively correlated with omega-3s. No idea why.

comment by Kevin · 2010-07-07T05:25:54.372Z · LW(p) · GW(p)

I think this is one of those things that may have been a problem >5 years ago but recent regulation in the USA means that all fish oil on the market is now guaranteed to be safe.

Replies from: WrongBot
comment by WrongBot · 2010-07-07T05:30:34.181Z · LW(p) · GW(p)

That's a rather... disproportionate level of faith to have in the US government's ability to regulate anything. I would not rely on American regulatory agencies for risk assessment in any field, much less one in which so little is currently known.

Replies from: Kevin, wedrifid
comment by Kevin · 2010-07-07T07:48:43.445Z · LW(p) · GW(p)

http://www.nytimes.com/2009/03/24/health/24real.html

I don't have faith, but I have a broad knowledge of the FDA and their regulation of supplements. Usually when the US government works, it works. If evidence comes out that something is dangerous, the FDA usually pulls it from store shelves until it is fixed. Examples of supplements that at a certain point in past history were poisonous but are now correctly regulated are 5-HTP and Kava.

I knew that there were people claiming fish oil is bad, some of them loudly. I know that this was first disclaimed at least five years ago. I then intuited today that if there ever did exist a safety issue with mercury in fish oil, it would have been fixed by now.

The meme that some fish oil pills are poisoned is mostly perpetuated by companies that are trying to sell you extra expensive fish oil pills.

Replies from: wedrifid
comment by wedrifid · 2010-07-07T08:59:34.681Z · LW(p) · GW(p)

(Voted up but...)

Examples of supplements that at a certain point in past history were poisonous but are now correctly regulated are 5-HTP and Kava.

I'd like to clarify that claim, because I took the totally wrong message from it the first read through. We're talking about regulation for quality control purposes and not control of the substance itself (I'm assuming). 5-Hydroxytryptophan itself is just an amino acid precursor that is available over the counter in the USA and Canada.

It is an intermediate product produced when Tryptophan is being converted into Serotonin. It was Tryptophan which was banned by the FDA due to association with EMS (eosinophilia-myalgia syndrome). They cleared that up eventually once they established that the problem was with the filtering process of a major manufacturer, not the substance itself. I don't think they ever got around to banning 5-HTP, even though the two only differ by one enzymatic reaction.

In general it is relatively hard to mess yourself up with amino acid precursors, even though Serotonin is the most dangerous neurotransmitter to play with. In the case of L-Tryptophan and 5-HTP, care should be taken when combining them with SSRIs and MAO-A inhibitors, i.e. take way, way less for the same effect, or just "DO NOT MESS WITH SEROTONIN!" (in slightly shaky handwriting).

Let me know if you meant something different from the above. Also, what is the story with Kava? All I know is that it is a mild plant based supplement that mildly sedates/counters anxiety/reduces pain, etc. Has it had quality issues too?

Replies from: Kevin
comment by Kevin · 2010-07-07T17:49:48.303Z · LW(p) · GW(p)

Thanks for the clarification, yes, by 5-HTP I meant tryptophan.

Serotonin has serious drug interactions with SSRIs and MAOIs, but otherwise is decidedly milder than pharmaceutical anti-depressants. Its effects are more comparable to melatonin than Prozac.

Kava is a plant that counters anxiety, and it is rather effective at doing so but very short lasting. It causes no physical addiction, which is one of the reasons it is on the FDA's Generally Recognized as Safe list. All kava on the market today is sourced from kava root. Kava has a great deal of native/indigenous use, and those people always make their drinks from kava root, throwing away the rest of the plant.

The rest of the plant contains active substances, so in their infinite wisdom, a Western company bought up the cheap kava leaf remnants and made extracts. It turns out that kava leaves have ingredients that cause large amounts of liver damage, but the roots are relatively harmless.

Kava root still isn't good for the liver, but it is less damaging than alcohol or acetaminophen. It is a bad idea to regularly mix it with alcohol or acetaminophen or other things that are bad for the liver, though.

Replies from: wedrifid
comment by wedrifid · 2010-07-07T23:26:38.916Z · LW(p) · GW(p)

Kava root still isn't good for the liver, but it is less damaging than alcohol or acetaminophen.

Courtesy of google: acetaminophen is 'paracetamol'. It seems several countries (including the US) use a different name for the chemical.

comment by wedrifid · 2010-07-07T05:52:34.626Z · LW(p) · GW(p)

I share your distrust of the regulatory ability of the US government, particularly the FDA. I further lament the ability of the FDA to damage the regulatory procedures worldwide with their incompetence (or more accurately their lost purpose). In the case of Kevin's specific reference to regulation I suspect even the FDA could manage it. While research on the effects of large doses of EPA and DHA (Omega3) may be scant, understanding of mercury content itself is fairly trivial. I'm taking it that Kevin is referring specifically to quality assurance regarding mercury levels which is at least plausible (given litigation risks for violations).

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-07T07:23:58.597Z · LW(p) · GW(p)

Stored riff here: I think the world would be a better place if people had cheap handy means of doing quantitative chemical tests. I'm not sure how feasible it is, though I think there's a little motion in that direction.

Replies from: wedrifid
comment by wedrifid · 2010-07-07T07:27:03.649Z · LW(p) · GW(p)

I would love to have that available, either as a product or a readily accessible service.

Replies from: Nisan
comment by Nisan · 2010-07-07T08:32:57.737Z · LW(p) · GW(p)

It would make consuming illegal drugs a lot safer, no?

Replies from: Kevin, wedrifid
comment by wedrifid · 2010-07-07T09:03:31.322Z · LW(p) · GW(p)

I hadn't thought of that, good point. Given that consideration, assume the grandparent comment was written in all caps, with the 'product' option surrounded with '**'.

Quality issues are an important consideration, for me at least, when trying to source substances that violate arbitrary restrictions.

comment by RobinZ · 2010-07-07T05:03:17.534Z · LW(p) · GW(p)

Mercury is a known problem with fish in general, agreed. Content varies somewhat with species, I have heard.

comment by Mike Bishop (MichaelBishop) · 2010-07-04T05:01:17.243Z · LW(p) · GW(p)

Andrew Gelman & Cosma Shalizi - Philosophy and the Practice of Bayesian Statistics arXiv

Replies from: Unnamed
comment by Unnamed · 2010-07-04T05:53:22.413Z · LW(p) · GW(p)

You're third, after steven0461 and nhamann.

Replies from: cupholder
comment by cupholder · 2010-07-04T06:14:57.445Z · LW(p) · GW(p)

Fourth!

Replies from: DanielVarga, steven0461
comment by DanielVarga · 2010-07-04T19:01:03.497Z · LW(p) · GW(p)

And I still managed to miss it the first three times.

comment by steven0461 · 2010-07-04T21:48:36.209Z · LW(p) · GW(p)

I thought I did a search but apparently not; sorry.

Replies from: cupholder
comment by cupholder · 2010-07-04T21:56:30.538Z · LW(p) · GW(p)

In the long run, it's all good - I think it's a decent paper, and I suppose this way more eyeballs see it than if I was the only one to post it. (Not to say that we should make a regular habit of linking things four times :-)

comment by apophenia · 2010-07-02T11:56:59.331Z · LW(p) · GW(p)

I was originally not going to post this, but I decided to on the basis that if it's as bad as I think, it'll be voted down:

one five eight nine eight eight eight nine nine eight SEVEN wait. why seven. seven is the nine thousandth deviation. update. simplest explanation. all ones. next explanation. all ones and one zero. next explanation. random ones and zeros with probability point seven nine nine seven repeating. next explanation pi. gap. next explanation. decimal pi with random errors according to poisson distribution converted to binary. next explanation. one seven one eight eight five two decimals of pi with random errors according to poisson distribution converted to binary followed by eight five nine zero one digits of reflexive code. current explanation--

"Eric, you've got to come over and look at this!" Jerry explained excitedly into the phone. "It's not those damn notebooks again, is it? I've told you, I could just write a computer program and you'd have all your damn results for the last year inside a week," Eric explained sleepily for the umpteenth time. "No, no. Well... yes. But this is something new, you've got to take a look," Jerry wheedled. "What is it this time? I know, it can calculate pi with 99.9% percent accuracy, yadda yadda. We have pi to billions of decimal places with total accuracy, Jerry. You're fifty years too late." "No, I've been trying something new. Come over." Jerry hung up the phone, clearly upset. Eric rubbed his eyes. Fifteen minutes peering at the crackpot notebooks and nodding appreciatively would sooth his friend's ego, he knew. And he was a good friend, if a little nuts. Eric took one last longing look at his bed and grabbed his house key.

"And you see this pattern? The ones that are nearly diagonal here?" "Jerry, it's all a bunch of digits to me. Are you sure you didn't make a mistake?" "I double check all my work, I don't want to go back too far when I make a mistake. I've explained the pattern twice already, Eric." "I know, I know. But it's Saturday morning, I'm going to be a bit--let me get this straight. You decided to apply the algorithm to its old output." "No, not its own output, that's mostly just pi. The whole pad." "Jerry, you must have fifty of these things. There's no way you can--" "Yeah, I didn't go very far. Besides, the scratch pads grow faster than the output as I work through the steps anyway." "Okay, okay. So you run through these same steps with your scratch pad numbers, and you get correct predictions then too?" "That's not the point!" "Calm down, calm down. What's the point then?" "The point is these patterns in the scratch work--" "The memory?" "Yeah, the memory." "You know, if you'd just let me write a program, I--" "No! It's too dangerous." "Jerry, it's a math problem. What's it going to do, write pi at you? Anyway, I don't see this pattern..." "Well, I do. And so then I wondered, what if I just fed it ones for the input? Just rewarded it no matter what it did?" "Jerry, you'd just get random numbers. Garbage in, garbage out." "That's the thing, they weren't random." "Why the hell are you screwing around with these equations anyway? If you want to find patterns in the Bible or something... just joking! Oww, stop. I kid, kid!" "But, I didn't get random numbers! I'm not just seeing things, take a look. You see here in the right hand column of memory? We get mostly zeros, but every once in a while there's a one or two." "Okaaay?" "And if you write those down we have 2212221..." "Not very many threes?" "Ha ha. It's the perfect numbers, Eric. I think I stumbled on some way of outputting the perfect numbers. Although the digits are getting further spaced apart, so I don't know how long it will stay faster than factoring." "Huh. That's actually kinda cool, if they really are the perfect numbers. You have what, five or six so far? Let's keep feeding it ones and see what happens. Want me to write a program? I hear there's a cash prize for the larger ones." "NO! I mean, no, that's fine, Eric. I'd prefer you not write a program for this, just in case." "Geez, Jerry. You're so paranoid. Well, in that case can I help with the calculations by hand? I'd love to get my claim to fame somehow." "Well... I guess that's okay. First, you copy this digit from here to here..."

comment by naivecortex · 2010-07-02T02:22:13.589Z · LW(p) · GW(p)

test

comment by VNKKET · 2010-07-01T22:05:17.860Z · LW(p) · GW(p)

This is a mostly-shameless plug for the small proposal (http://lesswrong.com/lw/29o/open_thread_may_2010_part_2/21sr) I made in May:

I'm still looking for three people to cross the "http://lesswrong.com/lw/d6/the_end_of_sequences" by donating $60 to the Singularity Institute (http://singinst.org/donate/whysmalldonationsmatter).

If you're interested, see my comment at http://lesswrong.com/lw/29o/open_thread_may_2010_part_2/21sr. I will match your donation.

comment by SamAdams · 2010-07-10T03:50:41.263Z · LW(p) · GW(p)

Karma Encourages Group Think:

The LW karma system allows people to vote posts and comments up or down based on good or useless reasons. Without karma you cannot make top-level posts or vote idiotic comments or posts down.

Essentially, karma is the currency of popularity on LW. That being said, I would wager that this encourages a groupthink attitude, because people have a strong motivation to get karma and not such a strong incentive to think for themselves and to question the group.

I would also posit that this kind of system causes a stagnation in ideas and thinking within the group. This is evident on LW with how many posts just seem to rehash old news.

Pearls before swine.

Replies from: Eliezer_Yudkowsky, RobinZ, LucasSloan, nhamann
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-10T06:52:43.639Z · LW(p) · GW(p)

As the comments by this user have been consistently voted down and he cannot seem to take the hint, comments by him will be deleted/banned.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-10T12:31:14.818Z · LW(p) · GW(p)

I'm not sure that wholesale deletion of comments prior to banning is ideal in this case, in that it a) substantially disrupts the flow of conversations that occurred and b) makes it very difficult for an interested lurker to realize what was occurring. I don't see a good reason to delete the existing comments (many seem to be merely wrong) although I agree with banning the individual.

Replies from: Morendil, ata
comment by Morendil · 2010-07-10T13:18:05.019Z · LW(p) · GW(p)

He meant "further comments".

comment by ata · 2010-07-14T21:18:53.681Z · LW(p) · GW(p)

I think "ban" is actually the term the Reddit/LW software uses for deleting a comment if you're an editor rather than the original poster. It doesn't refer to banning the user.

(I could be mistaken about what he means by it in this case, but I distinctly remember some past discussion to that effect.)

comment by RobinZ · 2010-07-10T04:08:02.180Z · LW(p) · GW(p)

(1) We are aware. There are important reasons for keeping a moderation system anyway. Practical suggestions for rational groupthink-alleviating measures would be appreciated, although possibly not implemented.

(2) Bear in mind the selection effect of who reads, votes, and replies to a thread on a given topic. Last year's survey showed more people who had decided to forgo cryonics than signed up for preservation by a factor of sixteen.

(3) You are not yet a sufficiently impressive figure within this community to induce people to reconsider their judgments merely by expressing disapproval.

Replies from: timtyler
comment by timtyler · 2010-07-10T08:36:22.712Z · LW(p) · GW(p)

Re: "Rational groupthink-alleviating measures"

Don't delete, ban or otherwise punish critics, would be my recommendation. Critics often bear unpopular messages. The only group I have ever participated in where critics were treated properly is the security/cryptographic community. There, if someone bothers to criticise something, if anything they are thanked for their input.

Replies from: ciphergoth, Vladimir_Nesov
comment by Paul Crowley (ciphergoth) · 2010-07-10T08:53:38.051Z · LW(p) · GW(p)

I don't perceive a big difference between the crypto community and LW here. Do you have an example in mind of someone who speaks to the wider crypto community with the same tone that SamAdams speaks to us, but who is treated as a valued contributor?

Replies from: timtyler
comment by timtyler · 2010-07-10T09:00:33.232Z · LW(p) · GW(p)

I haven't looked closely at the case of SamAdams.

comment by Vladimir_Nesov · 2010-07-11T16:57:37.844Z · LW(p) · GW(p)

Don't delete, ban or otherwise punish critics, would be my recommendation. Critics often bear unpopular messages.

"Critic" is not a very useful category, moderation-wise. What matters is quality of argument, not implied conclusions, so an inane supporter of the group should be banned as readily as an inane defector, and there seems to be little value in keeping inane contributors around, whether "critics" or not.

comment by LucasSloan · 2010-07-10T06:46:19.905Z · LW(p) · GW(p)

This is evident on LW with how many posts just seem to rehash old news.

Do you have any insights which you would like to share that advance the borders of rationality?

comment by nhamann · 2010-07-10T06:32:58.085Z · LW(p) · GW(p)

Actually, Karma is the currency of "not being a troll" on LW. Since you are most likely a troll (not very effective though, IMO. Try being more subtle next time, you're likely to get more genuine responses that way), you are bankrupt. Oops! :(

comment by naivecortex · 2010-07-02T02:14:40.740Z · LW(p) · GW(p)

There are three ways to experience the world: sensations, feelings and thoughts. In the perception process, sensations come first, followed by feelings and then thoughts.

The genetically endowed instinctual passions, and their concomitant feelings, form themselves into an inchoate sense of being a self (I/me) separate from the physical body. Suffering is tied to this self/feelings.

Eradication of self/feelings, and thus suffering, was accomplished in October 1992 by a man from Australia named Richard, followed by more people beginning this year.

And now, in 2010, for the first time, Buddhists at DharmaOverground have begun to consider this new way of life sincerely. To begin with, here is an account from Daniel Ingram (a self-proclaimed Arahat) about the awesomeness of Pure Consciousness Experience compared to any other mode of experience that Humanity has known thus far.

PS: Before responding to this thread, it is helpful to review the commonly raised objections.

Replies from: JoshuaZ, WrongBot
comment by JoshuaZ · 2010-07-02T02:42:52.107Z · LW(p) · GW(p)

Ok. Wrongbot has already given you the standard reading list, but I'd like to address this specifically.

The zeroth reason you've been voted down is that this comes across as spamming. No one likes to see a comment of apparently marginal relevance with lots of links to another website with minimal explanation.

Moving on from that, how will the general LW reader respond when reading the above? Let me more or less summarize the thought processes.

There are three ways to experience the world: sensations, feelings and thoughts. In the perception process, sensations come first, followed by feelings and then thoughts.

How do you define these three things? How do you know that they are everything? What is your experimental evidence?

The genetically endowed instinctual passions, and their concomitant feelings, form themselves into an inchoate sense of being a self (I/me) separate from the physical body. Suffering is tied to this self/feelings.

Ok. So now you've made some claim that sounds like the common dualist intuition is somehow due to genetics. That's plausibly true, but would need evidence. The claim that this form of dualism leads to "suffering" seems to be generic Buddhism.

Eradication of self/feelings, and thus suffering, has been accomplished on October 1992 by a man from Australia named Richard; followed by more beginning this year.

So now a testimonial of personal claims about enlightenment. That's going to go over real well with the empiricists here.

And now, in 2010, for the first time, Buddhists at DharmaOverground have begun to consider this new way of life sincerely. To begin with, here is an account from Daniel Ingram (a self-proclaimed Arahat) about the awesomeness of Pure Consciousness Experience compared to any other mode of experience that Humanity has known thus far.

And now we get more testimonials, an explicit connection to Buddhism, and some undefined terms thrown in for good measure (what does it mean for someone to be a "self-proclaimed Arahat"? If one doesn't know what an Arahat is then this means very little. If one is familiar with the term in Buddhist and Jainist beliefs then one isn't likely to see much of value in this claim).

At this point, the LWer concludes that this message amounts to religious spam or close to that. Then the LWer gets annoyed that scanning this message took up time from their finite lifespan that could be spent in a way that creates more positive utility (whether reading an interesting scientific paper, thinking about the problem of Friendly AI, napping, or even just watching silly cats on Youtube). And then they express their annoyance by downvoting you.

Replies from: pjeby, naivecortex
comment by pjeby · 2010-07-02T15:01:21.221Z · LW(p) · GW(p)

And then they express their annoyance by downvoting you.

Following which, they use more of their finite lifespan to comment in reply, in the hopes of feeling a momentary elevation of status, plus a lifetime of karma enhancements, that will maybe make up for the previous loss of time. ;-)

(For the record, I upvoted you anyway. ;-) )

comment by naivecortex · 2010-07-02T04:09:54.955Z · LW(p) · GW(p)

Hi there,

Actual Freedom (AF) is not a religious system/cult; I am none too sure how anyone got that impression here as the very front page of the AF website mentions "Non-Spiritual" in bold text.

re: marginal relevance

One of the hallmarks of this stage of experience, which is being called an Actual Freedom (a permanent Pure Conscious Experience), is that cognitive dissonance (along with misery and mayhem) is found (experientially, by those who have had PCEs) to be sourced in the identity/feelings. I've noticed quite a few posts regarding irrationality and emotion on LW, hence the relevance.

re: minimal explanation

The central thesis is this: misery and mayhem are caused by the identity (the inchoate sense of being an identity separate from the physical body, which each and every one of us thinks/feels oneself to be) that the instinctual passions / feelings form themselves into (emergence). Again, this is verified experientially by being in a PCE firsthand; in a PCE, one's sense of identity temporarily vanishes along with the feelings. A way to be in this PCE permanently has been discovered, and beginning this year, a handful of people (apart from Richard, that is) have already attained it. Is this enough, or would you like more details?

re: There are three ways to experience the world: sensations, feelings and thoughts

To answer your specific questions: I define these things by personal experience. I did not claim that they are "everything" - only that they are ways in which one experiences (i.e., consciously perceives) the world. As for "experimental evidence" - there are no experiments needed other than one's ongoing conscious experience.

re: supposed similarity with Buddhist and other spiritual traditions

This is tricky territory, but perhaps the following would be sufficient to make a simple point: In "Enlightenment" - or any other religio-spiritual attainment - there are feelings/emotions (often transmogrified into divine feelings of Love/Compassion). No one (other than Richard) has claimed to be free of feelings, emotions and instinctual passions (not instincts). That is the difference. The "Commonly Raised Objections" page that I linked to further above contains more details.

re: So now a testimonial of personal claims about enlightenment.

No, not enlightenment (where feelings are still in existence), but an actual freedom (a permanent Pure Conscious Experience). It is rather interesting that this objection (AF == Enlightenment) is raised even in a forum pertaining to human rationality.

re: That's going to go over real well with the empiricists here.

Since this is not about enlightenment at all, I'd appreciate accurate/unbiased feedback that doesn't misrepresent AF.

re: Arahat

Arahat is a Buddhist label for those who have attained Enlightenment. BTW, the reference to Buddhists, Daniel and Arahats is only a side remark -- particularly noting the increasing popularity of AF even among Buddhists -- and as such is not part of the central thesis of my post (which is concerned with life without feelings/identity).

re: the LWer concludes that this message amounts to religious spam or close to that.

As I haven't come across one mentioned here, I'll ask now: what is the factual basis for such a conclusion?

Replies from: WrongBot, JoshuaZ, Mitchell_Porter
comment by WrongBot · 2010-07-02T04:20:52.193Z · LW(p) · GW(p)

I'm sorry, but from the perspective of someone with no prior knowledge of Actual Freedom, you sound as though you're saying that there is a magical mental state that fixes every problem that evolution baked into the brain over hundreds of millions of years and that the only people who have ever successfully achieved this mental state in all of human history are the devoted followers of a particular charismatic leader who doesn't believe in last names.

If you wish to distinguish yourself from people who are promoting cults, you need to not sound like someone promoting a cult.

Replies from: Blueberry, NancyLebovitz, naivecortex
comment by Blueberry · 2010-07-02T04:34:51.441Z · LW(p) · GW(p)

Please don't feed the trolls!

Replies from: cousin_it
comment by cousin_it · 2010-07-02T07:05:26.385Z · LW(p) · GW(p)

Seconded. Times like this I wish moderators would delete obviously bad threads; downvoting the original comment into oblivion doesn't seem enough.

Replies from: WrongBot, Kevin
comment by WrongBot · 2010-07-02T17:25:48.703Z · LW(p) · GW(p)

Would a system to automatically delete comments with a low enough score be supported by the community? So that comments would collapse at -3 and be deleted at, say, -10 or -20. I'm not in favor of the idea, but I'm interested to hear if others would be.
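
For concreteness, here is a minimal sketch of the scheme being proposed, written in Python; the function name and the exact cutoffs (-3 to collapse, -10 to delete) are illustrative assumptions, not an existing LessWrong feature:

    def comment_state(score, collapse_at=-3, delete_at=-10):
        """Map a comment's karma score to a display state under the proposed scheme."""
        if score <= delete_at:
            return "deleted"    # hypothetical hard threshold; -10 vs -20 is the open question
        if score <= collapse_at:
            return "collapsed"  # matches the existing collapse-at-low-score behaviour
        return "visible"

    assert comment_state(2) == "visible"
    assert comment_state(-5) == "collapsed"
    assert comment_state(-12) == "deleted"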

Replies from: Blueberry, Oscar_Cunningham, Morendil
comment by Blueberry · 2010-07-02T18:05:51.244Z · LW(p) · GW(p)

Currently you can choose the threshold for hiding comments: click on "preferences" on the right. I've turned mine off, because I like to see all the comments. I'd be open to adding an option for "don't even show there was a comment here," but I'd like the comments to be preserved in case someone wants to see them.

comment by Oscar_Cunningham · 2010-07-02T17:32:28.389Z · LW(p) · GW(p)

Maybe not deleted but simply locked so that no-one can post in them. Should stop any painful soul-draining, ultimately pointless arguments. Of course, if the subject is posted again then it's definitely spamming, and the mods should delete the repost and ban those responsible.

comment by Morendil · 2010-07-02T18:43:06.626Z · LW(p) · GW(p)

I'd support deleting heavily downvoted comments that do not have upvoted descendants.

comment by Kevin · 2010-07-02T07:22:46.161Z · LW(p) · GW(p)

That's actually the official policy for sufficiently bad trolls, but the moderator is Eliezer and if he doesn't notice it he doesn't notice. Feel free to email him and point this thread out.

comment by NancyLebovitz · 2010-07-03T09:38:34.239Z · LW(p) · GW(p)

There are two issues here. One is whether Actual Freedom's approach produces the claimed effects and whether those effects actually improve people's lives, and the other is whether it's a cult. There's a minor question of whether Actual Freedom is the only path to get those effects.

I don't think it sounds all that much like a cult-- they aren't asking for money, they aren't asking for devotion to a leader, and they're saying they have a simple method of getting access to an intrinsic ability.

Whether it's as absolutely true as they say is a harder question, though it might improve quality of life even without working all the time. Whether it's safe is also a hard question -- it sounds like a sort of self-modification which would be very hard to reverse.

Whether no other system produces comparable results is unknowable.

comment by naivecortex · 2010-07-02T04:31:13.180Z · LW(p) · GW(p)

[..] there is a magical mental state that fixes every problem that evolution baked into the brain over hundreds of millions of years

Yes, not "every problem" but specifically sorrow and malice (sorrow and malice are both feelings anchored on the sense of identity). The word "sensuosity" should come to one's mind.

the only people who have ever successfully achieved this mental state in all of human history are the devoted followers of a particular charismatic leader who doesn't believe in last names.

Unless there is evidence to the contrary, the only people free of the identity's grip (and the feelings) are Richard and a few others mentioned here.

As for "charismatic leader" and "devoted followers" (phrases pertaining to religio-spiritual systems/cults) - Richard is a fellow human being promulgating his discovery. Think of Actualism as in tourism (not the -ism of philosophy/religion/spirituality).

If you wish to distinguish yourself from people who are promoting cults, you need to not sound like someone promoting a cult.

As this is a straw man (there is no cult here to promote), I'll pass.

Replies from: WrongBot, wedrifid
comment by WrongBot · 2010-07-02T04:49:01.751Z · LW(p) · GW(p)

You say that you are not promoting a cult, but for claims such as the ones you are making, I have a very high prior probability that you are. To overcome the strong weighting of my prior probability function and convince me that you are doing anything other than promoting a cult you need to supply strong evidence.

If you were able to identify specific ways in which your organization avoids falling into an affective death spiral, for example, I would be more inclined to take you seriously.

The same would hold if you explained why your group is not a cult in a way more compelling than "but we're actually right!"

Replies from: naivecortex
comment by naivecortex · 2010-07-02T06:41:41.173Z · LW(p) · GW(p)

I have a very high prior probability that you are [promoting a cult].

Could this "prior probability' be your intuition?

Suppose I came to you with a creative idea about a new product, detailing the specific steps and resources needed to create that product, and you were to respond with "I think you are promoting a cult"; then it stands to reason - does it not - that I ask just what exactly [the factual events/knowledge] made you think my specific method is a cult?

To overcome the strong weighting of my prior probability function and convince me that you are doing anything other than promoting a cult you need to supply strong evidence.

Or how about supplying evidence for your intuition that Actualism is a cult?

If you were able to identify specific ways in which your organization avoids falling into an affective death spiral, for example, I would be more inclined to take you seriously.

I am only referring to the actualism method itself, along with a set of people who are experimenting with the method; there is no organization behind it.

I located the following from the link you gave: "But it's nothing compared to the death spiral that begins with a charge of positive affect - a thought that feels really good."

If you haven't noticed before, the actualism method is about eliminating the identity along with its instinctual passions, feelings and emotions - leaving only sensations/thoughts with their concomitant 24x7 sensuous delight.

Replies from: WrongBot
comment by WrongBot · 2010-07-02T15:58:01.703Z · LW(p) · GW(p)

A very brief explanation of what I mean by prior probability: I've seen people making claims of this general sort about, say, twenty different techniques/philosophies/religions. In each of those twenty cases, the claimant and all other followers of the technique/philosophy/religion in question were together part of a cult.

So, presented only with the claims you have made and based on my past experience with such claims, I perceive that it is very likely that you are promoting something cult-like, given the 100% correlation I have observed in the past.

This is not really an intuition: it's a statement of probability informed by the evidence available to me at this particular point in time. I make no claims about whether or not your group is actually a cult, only that I believe it to be very likely.
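
As a minimal sketch of the update being described (my own illustration, assuming the twenty past cases are treated as exchangeable observations under a uniform prior), Laplace's rule of succession gives a posterior that is high but deliberately short of certainty:

    from fractions import Fraction

    def rule_of_succession(successes, trials):
        """Posterior probability that the next observation is a 'success',
        starting from a uniform Beta(1, 1) prior over the underlying rate."""
        return Fraction(successes + 1, trials + 2)

    # 20 of 20 previously observed claims of this sort turned out to come from cults.
    p_next_is_cult = rule_of_succession(20, 20)
    print(float(p_next_is_cult))  # ~0.955: "very likely", not "certainly a cult"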

Replies from: naivecortex
comment by naivecortex · 2010-07-02T20:58:06.191Z · LW(p) · GW(p)

I make no claims about whether or not your group is actually a cult, only that I believe it to be very likely.

Ok. Again my response is similar: if a fellow human being discovers a remarkable way of living completely rid of sorrow/malice, and promulgates his discovery for the benefit of others (much like the sharing of a technological invention, for instance) ... and if someone is to call the discoverer, his discovery and a few of those experimenting with his method a cultic organization, then the burden of proof lies on the shoulders of this someone, does it not?

Here's a hint: the fact that the AF method is primarily about investigation of one's own feelings/beliefs and how they cause malice/sorrow in oneself and others should automatically imply that phenomena such as groupthink, affective death spiral, dogmatic identification, belonging-to-a-group and so on are completely unproductive to its very thesis/goal.

If this is not helpful, perhaps you could glean further details from the Frequently Flogged Misconception page.

Replies from: WrongBot
comment by WrongBot · 2010-07-02T21:07:35.979Z · LW(p) · GW(p)

if someone is to call the discoverer, his discovery and a few of those experimenting with his method a cultic organization, then the burden of proof lies on the shoulders of this someone, does it not?

Nope, and cult members ask the exact same question.

Extraordinary claims require extraordinary evidence. You came here to convince people to adopt Actualism (it seems). So, actually convince me. Why should I pay more attention to you and your alleged non-cult than I do to someone else's alleged non-cult? Arguments based on the teachings of your alleged non-cult are worthlessly circular, because you're trying to convince me that such claims should have worth in the first place.

Replies from: naivecortex, naivecortex
comment by naivecortex · 2010-07-02T21:42:02.266Z · LW(p) · GW(p)

You came here to convince people to adopt Actualism (it seems). So, actually convince me.

You're way off the mark. I am not intending to convince/convert anyone to Actualism; there is no group/belief-system/cult here (outside the human imagination, anyways).

I'm posting about Actualism here in LW (which presumably was never mentioned before) simply in the spirit of sharing information and possibly engaging in mutually-interesting discussion with other fellow freethinkers.

Why should I pay more attention to you and your alleged non-cult than I do to someone else's alleged non-cult?

As it is your life you are living with - and I am only posting here in the spirit of sharing - then what you do with it is completely up to you.

Arguments based on the teachings of your alleged non-cult are worthlessly circular, because you're trying to convince me that such claims should have worth in the first place.

I can't help but think that this is getting as absurd as a 19th century person responding to Darwin's claims on evolution as follows: "Arguments based on the teachings of your alleged non-cult are worthlessly circular, because you're trying to convince me that such claims should have worth in the first place."

But all is not lost; how do you reconcile your probabilistic, non-factual belief in the cultic nature of an organization that does not actually exist with what I wrote above (which is copy-pasted below for your convenience)?

Here's a hint: the fact that the AF method is primarily about investigation of one's own feelings/beliefs and how they cause malice/sorrow in oneself and others should automatically imply that phenomena such as groupthink, affective death spiral, dogmatic identification, belonging-to-a-group and so on are completely unproductive to its very thesis/goal.

comment by naivecortex · 2010-07-02T22:02:57.285Z · LW(p) · GW(p)

Extraordinary claims require extraordinary evidence.

I located this page: http://wiki.lesswrong.com/wiki/Affective_death_spiral, which says: "This process creates theories that are believed for their own sake and organizations that exist solely to perpetuate themselves, especially when combined with the social dynamics of groupthink."

It is worth pointing out that an Actual Freedom is not a "theory", let alone something to bolster one's "beliefs" upon (let alone forming an identity around it). Richard is the first actually free person, and others who have personally seen him have verified (to the extent possible) the absence of affective reactions (followed by carefree interactions, for instance). Richard himself was once diagnosed by a psychiatrist who reported the following conditions (albeit in psychiatric terms):

  1. ‘depersonalisation’ (selflessness ... the absence of an entity that is called ego and Soul or self and Self).
  2. ‘alexithymia’ (the absence of the affective faculty ... no emotions, passions or calentures whatsoever).
  3. ‘derealisation’ (the condition of having lost one’s grip on reality ... the ‘real world’ is nowhere to be found).
  4. ‘anhedonia’ (the inability to affectively feel pleasure ... no hormonal secretions means hedonism is not possible).

And several other actually free people too have reported similar experiences (total absence of the affective faculty), confirmed by their friends, relatives and daily experiences.

Above all, everyone who experienced a PCE was able to verify it for themselves.

It is indeed possible that Richard and others are deluded, but with the increasing number of people getting actually free, and the increasing ease of living/interactions one can find meanwhile (till the first PCE), it is hard to see how this is a delusion.

That said, personal experiences (such as an Actual Freedom) can ultimately only be verified by one's own conscious experience, which is an ongoing gaiety/ease in everyday life and interactions marked by lesser and lesser affective biases.

Replies from: WrongBot, wedrifid
comment by WrongBot · 2010-07-02T22:53:21.305Z · LW(p) · GW(p)

Bringing a psychiatrist into this is good: you have offered evidence that does not rely on reports of subjective experiences. But it is still weak evidence; there are many other hypotheses that explain the evidence, and several of them are much more probable.

An example of what I consider strong evidence: a person who had their brain imaged by an fMRI while performing some set of relatively simple mental tasks both before and after experiencing a PCE had radically different results.

That would not entirely convince me, but it would certainly make me take your claims much more seriously. If there had been ten such experiments, all ten people who claimed PCEs had similar results, and the experiments had been verifiably performed in a sound way, I would then almost certainly devote significant resources to achieving a PCE.

This not the only evidence I would accept, of course, but that should give you an idea of the type and the strength necessary. And if you can't provide such evidence, well, alas.

On a related note, further links to the actualfreedom.com.au website will be ignored. I have made an effort to read the material there in hopes of better comprehending your claims, but the process is too painful for me to get very far, and this is part of the reason why I'm not taking you seriously. When someone has made an effort to present a large body of work on a topic but has not made an effort to present said work in a way that is easy for other human beings to read, they are usually not very credible. To be clear, I am referring to the website's poorly-designed navigation, plethora of spelling and grammatical errors, and the use of the HTML tag.

Replies from: Blueberry, naivecortex
comment by Blueberry · 2010-07-03T01:31:58.960Z · LW(p) · GW(p)

An example of what I consider strong evidence: a person who had their brain imaged by an fMRI while performing some set of relatively simple mental tasks both before and after experiencing a PCE had radically different results.

That would not entirely convince me, but it would certainly make me take your claims much more seriously. If there had been ten such experiments, all ten people who claimed PCEs had similar results, and the experiments had been verifiably performed in a sound way, I would then almost certainly devote significant resources to achieving a PCE.

There have been some studies on meditation and MRIs that you may be interested in.

Replies from: WrongBot
comment by WrongBot · 2010-07-03T02:08:13.048Z · LW(p) · GW(p)

I've seen a couple of those, and consider them significant evidence that certain meditation techniques are useful. As naivecortex is claiming that PCEs have effects much more dramatic than meditation, I would expect to see MRI data that is correspondingly stronger.

comment by naivecortex · 2010-07-02T23:28:54.864Z · LW(p) · GW(p)

An example of what I consider strong evidence: a person who had their brain imaged by an fMRI while performing some set of relatively simple mental tasks both before and after experiencing a PCE had radically different results.

It is indeed strong neurological evidence. It is a pity that Richard has denied all requests to take a brain scan for reasons pertaining to personal preference (he was more interested in the experiential/practical inclination to be happy/harmless). Recent actually free people may have different preferences (Trent - a member of DhO who is actually free - is on record saying that he would be willing to undergo such tests if he is fully told what they are about, and if he appraises them to be safe).

A neurological study still will not give a full picture of a PCE. The scientists have not been able to locate the identity/self anywhere in the brain, let alone detect its absence. Nor do I have any idea how one would measure/detect the subjective experience of sensuous delight (that is, the quality of a PCE). As far as I can tell, the only sort of thing to be gleaned from a brain scan is the (significant) presence/absence of feelings/emotions, the sort of thing that Richard writes about when he reports his ongoing experience: no wide-eyed staring, no increase in heart-beat, no rapid breathing, no adrenaline-tensed muscle tone, no sweaty palms, no blood draining from the face, no dry mouth, no cortisol-induced heightened awareness.

And if you can't provide such evidence, well, alas.

Ok. I have no doubt that empirical evidence can dispel the last intellectual excuse for not taking something new to human experience seriously (which is why I have come to favor the idea of taking a brain scan); however, so far at least, actualists' primary motivation seems to be either the (memory of a) PCE or a curiosity to experiment with a method to have just more fun in life.

I have made an effort to read the material there in hopes of better comprehending your claims, but the process is too painful for me to get very far, and this is part of the reason why I'm not taking you seriously.

There have been several complaints about not only the website, but also the way of its presentation (Richard's prose style has led many to see him as an egoist/prick, for instance). I too made a rather hasty/brief initial post here, which perhaps added to all sorts of incorrect impressions (religious, cultic). What you said about the website - along with the incorrect impressions from even the freethinkers - further confirms my view that the way the content is presented in the AF website (along with the inaccessibility of its layout) is not ideal. I first took note of this when reading Daniel Ingram's notes on the PCE, which are simple and straight to the point. (Of course, I have nothing to criticize in much of the content of the AF website.)

To be clear, I am referring to the website's poorly-designed navigation

Harmanjit once wrote a consolidation of essential content from the AF website here, which is perhaps useful as an introduction.

Replies from: WrongBot, AdeleneDawner, wedrifid
comment by WrongBot · 2010-07-03T00:08:56.940Z · LW(p) · GW(p)

Let's see...

Actual freedom is a tried and tested way of being happy and harmless in the world as it actually is ... stripped of the veneer of normal reality or Greater Reality which is super-imposed by the psychological and/or psychic entity within the body.

and

Here is an actual freedom from the Human Condition, surpassing Spiritual Enlightenment and any other Altered State Of Consciousness, and challenging all philosophy, psychiatry, metaphysics (including quantum physics with its mystic cosmogony)

and

For a start, one needs to fully acknowledge the biological imperative (the instinctual passions) which are the root cause of all the ills of humankind.

and

The Summum Bonum of all the many and varied disciplines – be it philosophy or psychology, physics or metaphysics, cosmology or sociology, theology or spirituality – has been to sanction the protracted doctrinal assumption that a god, by whatever name, is in charge of the universe.

...nope, sorry. I'm done. Pursuing this is no longer worth my time. My estimated probability that there's any worth at all to what Richard has to say is now negligibly close to zero.

Replies from: naivecortex
comment by naivecortex · 2010-07-03T03:15:30.281Z · LW(p) · GW(p)

I am not clear as to what point you were trying to make in relation to the quotes above, except the last one which, without any context, seems absurd to me in some respects. With respect to the first bold text - "psychic" - what the word refers to is the identity that is tangled in the web of psychic currents, which in turn refers to the affective vibes (e.g. the sadness of one person creating a bad vibe among others; a "loving atmosphere" and so on).

But as an actual freedom from the human condition "is no longer worth your time", it makes no sense for either of us to continue this discussion.

comment by AdeleneDawner · 2010-07-03T00:03:50.032Z · LW(p) · GW(p)

You know, it's pretty obvious that you care about our opinion of your movement, otherwise you wouldn't be spending so much time and effort trying to convince us. That's substantial evidence against your claim that it produces a lack of sense of self or attachment. You're really shooting yourself in the foot.

comment by wedrifid · 2010-07-03T05:16:07.699Z · LW(p) · GW(p)

A neurological study still will not give a full picture of a PCE. The scientists have not been able to locate the identity/self anywhere in the brain, let alone detect its absence

Yes they have, at least in the sense that you are referring to. And they can provoke the suppression of this self with magnetic stimulation.

You on the other hand are completely incapable of suppressing the identity/self. You are tied up in it far more than the average person.

comment by wedrifid · 2010-07-02T22:14:40.957Z · LW(p) · GW(p)

Wow... a cult formed to actively seek neurological dysfunction.

Replies from: naivecortex
comment by naivecortex · 2010-07-02T22:20:49.493Z · LW(p) · GW(p)

Wow... a cult formed

Ha, and where is the evidence for that? Is it too much to ask for evidence in a forum pertaining to human rationality?

[...] to actively seek neurological dysfunction.

Sorry, there seems to be a misunderstanding. I should perhaps have written more clearly: psychiatry being a field dealing with dysfunctional people (i.e., dysfunctional identities involved with feelings), the psychiatrist who diagnosed Richard of course had to label (without choice) his sensuous / non-affective ongoing mode of experience in psychiatric terms (whose normal meaning, pertaining to identities-with-feelings, does not apply to a person with no identity/feelings).

Replies from: wedrifid
comment by wedrifid · 2010-07-02T23:04:45.250Z · LW(p) · GW(p)

Ha, and where is the evidence for that?

here

Is it too much to ask for evidence in a forum pertaining to human rationality?

Sometimes, yes. It depends on how it is used. And I know you didn't really want me to give an answer to your question. But that's the point. "Where is your evidence?" is just a bunch of verbal symbols that may have very little to do with 'rationality'. If the meaning and intended function of the phrase is equivalent to "Your mom is a cult!" but translated to the vernacular of a different subculture, then it says absolutely nothing about rational beliefs. The vast majority of demands of "where is your evidence?" that I have encountered have been blatant bullshit (too much time arguing with MENSAns). Your usage is not that bad. Nevertheless, your implied argument relies on an answer ('No') to the rhetorical question, which it does not get.

Sorry, there seems to be a misunderstanding. I should perhaps have written more clearly: psychiatry being a field dealing with dysfunctional people (i.e., dysfunctional identities involved with feelings), the psychiatrist who diagnosed Richard of course had to label (without choice) his sensuous / non-affective ongoing mode of experience in psychiatric terms (whose normal meaning, pertaining to identities-with-feelings, does not apply to a person with no identity/feelings).

I do understand the distinction you are making here. Richard still sounds like a total fruitloop but I agree that the labels and diagnoses formalized in the psychiatric tradition can be misleading, particularly when they emphasize superficial symptoms and disorder rather than referring more directly to trends in the underlying neurological state that are causing the observed behaviors or thoughts.

comment by wedrifid · 2010-07-02T06:02:23.292Z · LW(p) · GW(p)

If you wish to distinguish yourself from people who are promoting cults, you need to not sound like someone promoting a cult.

As this is a straw man (there is no cult here to promote), I'll pass.

That Straw Man must feel seriously misunderstood and abused sometimes!

comment by JoshuaZ · 2010-07-02T04:47:53.359Z · LW(p) · GW(p)

Actual Freedom (AF) is not a religious system/cult; I am none too sure how anyone got that impression here as the very front page of the AF website mentions "Non-Spiritual" in bold text.

Calling something "Non-spiritual" doesn't make it not a religion. To use one obvious example, there are some evangelical Christians who say that they don't have a religion and aren't religious, but have a relationship with Jesus. Simply saying something isn't religious doesn't help matters.

To answer your specific questions: I define these things by personal experience. I did not claim that they are "everything." - only that they are ways in which one experiences (i.e., consciously perceives) the world. As for "experimental evidence" - there are no experiments needed other than one's ongoing conscious experience.

See, that's not ok. Any LWer would explain to you that the human mind is terrible at introspection. Human cognitive biases and other issues make it almost impossible for humans to judge anything about our own cognitive structures. And to claim that there are no experiments needed is to essentially adopt an anti-scientific viewpoint. You aren't going to convince anyone here of much while acting that way.

No, not enlightenment (where feelings are still in existence), but an actual freedom (a permanent Pure Conscious Experience). It is rather interesting that this objection (AF == Enlightenment) is raised even in a forum pertaining to human rationality.

Of course it is, because what you are describing sounds nearly identical to classical Eastern claims about enlightenment. As to the difference between "enlightenment" and "actual freedom" I don't see one. Of course, this might be the sort of thing where defining terms in detail would help, but you've explicitly refused to do so.

Please go and read some of the major sequences, and maybe after you've done so, if you still feel a need to talk about this, you'll at least have the background necessary to understand why we consider this to be a waste of our time.

Replies from: LucasSloan, naivecortex
comment by LucasSloan · 2010-07-02T04:49:53.116Z · LW(p) · GW(p)

I would be greatly edified if you would heed Blueberry's plea.

comment by naivecortex · 2010-07-02T06:31:28.162Z · LW(p) · GW(p)

Calling something "Non-spiritual" doesn't make it not a religion.

And calling something 'religious' makes it so? You said "the LWer concludes that this message amounts to religious spam or close to that." And I responded with the question "what is the factual basis for such a conclusion?". Don't you think it would be a much more fruitful discussion if we stuck to the facts instead of intuitions/impressions/guesses/probabilities?

Human cognitive biases and other issues make it almost impossible for humans to judge anything about our own cognitive structures.

Yet what I originally claimed is a rather simple and obvious fact, based on common sense and experience, about bucketing our experience into sensations, thoughts and feelings, and not "judging about our cognitive structures". There really is nothing to our experience outside sensations, thoughts and feelings.

And to claim that t there are no experiments needed is to essentially adopt an anti-scientific viewpoint.

Indeed there can be no experiments to verify the nature of subjective experience. Experiments can only arrive at the physical correlates (such as nerve pathways), but never at the subjective content itself. At the level of the brain, it all comes down to neurons; yet, when we say "I sense ..." or "I think ..." or "I feel ..." we are distinctly referring to sensations, thoughts and feelings.

As to the difference between "enlightenment" and "actual freedom" I don't see one.

The very passage you are responding to contains this: "not enlightenment (where feelings are still in existence)", implying that in an actual freedom, feelings are non-existent. It is beyond me how you failed to see that.

comment by Mitchell_Porter · 2010-07-02T05:02:45.333Z · LW(p) · GW(p)

Hi there yourself. I don't believe I've run across your website or mini-movement before. As some of your skeptical correspondents note, there is a very long prior history of people claiming enlightenment, liberation, transcendence of the self, and so forth. So even if one is sympathetic to such possibilities, one may reasonably question the judgment of "Richard" when he says that he thinks he is the first in history to achieve his particular flavor of liberation. This really is a mark against his wisdom. He would be far more plausible if he was saying, what I have was probably achieved by some of the many figures who came before me, and I am simply expressing a potentiality of the human spirit which exists in all times and places, but which may assume a different character according to the state of civilization and other factors.

I will start by comparing him to U.G. Krishnamurti. For those who have heard of Jiddu Krishnamurti, the Indian man who at an early age was picked by the Theosophists as their world-teacher, only to reject the role - this is a different guy. J.K., despite his abandonment of a readymade guru role, did go on to become an "anti-guru guru", lecturing about the stopping of time, the need to think rather than rely on dead thought, et cetera, ad infinitum. U.G. is by comparison a curious minor figure. He lived quietly and out of the way, though he picked up a few fans by the end, apparently including a few Bollywood professionals.

His schtick, first of all, is about negating the value of most forms of so-called spirituality. They seek a fictitious perpetual happiness and this activity, whether it is about anticipating a happy afterlife or striving in the here and now after a perfectly still mind, is what fills the lives of such people. He did not set up the ordinary, materialistically absorbed, emotionally driven life as a counter-ideal - he might agree with the gurus in their analysis of that existence - but maintained that what they were selling as an alternative was itself not real.

How does "Richard" look from this perspective? Well, he seems to say that many of the forms of higher truth or deeper reality that have been associated with spirituality are phantoms; but he does say that achieving his particular state of purged consciousness is a universalizable formula for almost perpetual peace of mind. So, he gets a plus for being down-to-earth, but a minus for overrating the value of his product.

U.G. had a few other strings to his bow. He criticized scientists and philosophical materialism in terms that might have come from his namesake J.K. - for dogmatism, and for not recognizing the role of their own minds in constructing their dogmas. Also, he did claim his own version of a bodily transfiguration, as many gurus do. In his case he called it a "calamity", said it was horrible, there's no way you could want it, and there's no way to seek it even if you wanted to be like him. Whether he was serious about this, or just trying to put across an idea that is subversive in the Indian context (where there are so many superstitions about sages acquiring superpowers once they achieve philosophical enlightenment as well), I couldn't say. But if we take him literally, then U.G. also gets marked down for implausibility, though at least he didn't sell his calamity as a desirable and reproducible phenomenon.

I see also from the website that Richard is an old guy, perhaps in his late sixties. I fear that all we have here is someone who has a bit of conceptual facility when it comes to the relationship between mind, appearance and reality, some wisdom when it comes to the relationship between emotion, belief, and suffering, and who lives far enough from the manias of urban life that he can imagine that his latest intellectual high (which perhaps saw him achieve peace with respect to some metaphysical question or other, that has been bothering him for half a lifetime) really does count as an epochal event in the history of human consciousness, and who only has admirers rather than skeptics around him - no-one who is going to tell him anything different.

What is the value of such a person - to the world, to the readers of this website? Even if no-one here buys the idea that this is some sort of transcendental wisdom - absorbed as we all are in various recondite scientific metaphysical ideas and expectations of a greatly empowered transhuman future - I think we can appreciate that there may be some psychological knowledge worth acquiring from such a person. The question is whether it is anything greater than you would get from, say, flipping through a compilation of remarks by the Dalai Lama. If a person in such a position claims, not just that they have insight into the workings of the mind, but that attaining their insight or duplicating their experience is a pathway to a state of happiness and psychological health greater and more reliable than anything available anywhere else, we should ask whether the private happiness of that person stems from factors like (i) they're old and have given up on many of the things that both please and torment a younger person, like sexuality, and (ii) they have some special material and social arrangement (like living on a commune with a few devoted friends and admirers who handle many of the practicalities of daily life and liaising with the outside world) which is not readily imitated by the suffering masses!

Replies from: LucasSloan, Richard_Kennaway, naivecortex
comment by LucasSloan · 2010-07-02T05:08:31.132Z · LW(p) · GW(p)

I would be greatly edified if you would heed Blueberry's plea.

Replies from: wedrifid
comment by wedrifid · 2010-07-02T06:03:05.312Z · LW(p) · GW(p)

Which plea is that?

Replies from: Blueberry
comment by Blueberry · 2010-07-02T06:07:25.108Z · LW(p) · GW(p)

Please don't feed the trolls!

Replies from: wedrifid
comment by wedrifid · 2010-07-02T08:51:33.605Z · LW(p) · GW(p)

Whether or not his comments are desirable, this poster does not seem to qualify as a troll.

Do not feed the Unwelcome Spammer perhaps?

comment by Richard_Kennaway · 2010-07-03T18:24:07.517Z · LW(p) · GW(p)

...than you would get from, say, flipping through a compilation of remarks by the Dalai Lama.

Just an FYI, but modern technology now allows instant access to a stream of such remarks. The Dalai Lama is on Facebook.

comment by naivecortex · 2010-07-02T06:15:47.216Z · LW(p) · GW(p)

Hi - I will respond briefly to the various points you raised further below, but first:

What is the value of such a person - to the world, to the readers of this website?

It seems that my post was not written carefully, and led some to mistake it for religious spam. I've been visiting LW for a while, and practicing actualism (AF) for more than a year. The value of the AF method (not the person) to me personally is increased well-being / light-heartedness / carefreeness without having to believe in a God or some other metaphysical concept. I have virtually no belief system; Actualism is an -ism like tour-ism, not the -ism of philosophy. That said, the value to the readers of this forum focused on human rationality is of course a challenge (food for thought) to our widely held perceptions about feelings/emotions (e.g. that life without feelings is barren and sterile) and, even more, to their relationship to all the sorrow and malice which forms the basis for many scientific endeavours; studies related to stress, for instance.

It is quite simple: in a PCE, one's sense of identity and feelings temporarily vanish, leaving only sensations and thoughts, thereby paving the way for a magical sensuosity that is engendered by everyday events.

Now onto your specific points:

one may reasonably question the judgment of "Richard" when he says that he thinks he is the first in history to achieve his particular flavor of liberation. This really is a mark against his wisdom. He would be far more plausible if he was saying, what I have was probably achieved by some of the many figures who came before me

The reason he doesn't say "what I have was probably achieved by others before me" is simply that his experience is different. If you go through his personal history article, he writes about his several months of reading about all peoples' experiences before arriving at the conclusion that his experience is unique. And for the rest of us, it shouldn't be difficult to understand, as in an Actual Freedom feelings are non-existent (everyone who's had a PCE confirms this), whereas in any form of spiritual Enlightenment known to us, feelings not only continue to exist but often transmogrify into the divine feelings of Love/Compassion. In the CRO, you will find several discussions related to this, titled "Actualism is not new" and "The actualism method is not unique".

I will start by comparing him to U.G. Krishnamurti. [...] Richard seems to say that many of the forms of higher truth or deeper reality that have been associated with spirituality are phantoms

Yes, phantom as in "something apparently sensed but having no physical reality". Precisely: what he is saying is that the states of mind experienced by Enlightened people are fuelled by instinctual passions / feelings, and as such have no basis in the actual/physical world. Even though hormonal/chemical substances are actual, the feelings they give rise to are considered to be non-actual (emergent phenomena would be another way to put it, but with the added factor of 'imagination').

Richard does say that achieving his particular state of purged consciousness is a universalizable formula for almost perpetual peace of mind.

Purged of the identity and feelings. In other words, a Pure Consciousness Experience. Those who have not had a PCE can think of it as a sensuous delight.

As for a 'universalizable formula', the method promulgated by Richard and other actually free people is indeed repeatable (verifiable on one's own) and thus satisfies the scientific method.

You say "Richard gets a minus for overrating the value of his product". If a PCE results in 24x7 sensuous delight marked by no sorrow/malice (as feelings no longer arise), how could it be anything but enabling perpetual (until the death of the body) peace of mind?

Richard has admirers rather than skeptics around him - no-one who is going to tell him anything different.

Have you read the CRO page? If you had, you wouldn't be making this remark. There were (and still are) far too many objections raised during the time he promulgated his method (10 years).

Is [what Richard says] anything greater than you would get from, say, flipping through a compilation of remarks by the Dalai Lama

As already mentioned, spiritual attainment ends in a delusion (Love/Compassion) and never uproots the root of sorrow/malice itself. Attained Zen Buddhists, for instance, are known to get sorrowful -- spirituality transcends suffering (in a metaphysical realm, perhaps), but never eliminates it.

What Richard is saying is that one can completely (100%) eradicate the root of all the misery/mayhem of the world.

Does the private happiness of that person (Richard) stem from factors like [...]

If I may interject - Richard's happiness does not stem from the factors you mention below. It is a 24x7 sensuous delight (marked by no affective feelings) not dependent on any life situation.

(i) they're old and have given up on many of the things that both please and torment a younger person, like sexuality, and

Richard, and other actually free people, have not given up on the various physical pleasures/comforts of life. You may want to read his journal entry on sex.

[...] (ii) they have some special material and social arrangement (like living on a commune with a few devoted friends and admirers who handle many of the practicalities of daily life and liaising with the outside world) which is not readily imitated by the suffering masses!

Until recently Richard was living alone. He has had companions, mingles with people ... does things that normal human beings do.

If you're interested, may I suggest two articles: 180 Degrees Opposite briefly mentions the various differences from spirituality, and Attentiveness and Sensuousness and Apperceptiveness describes the quality of the condition of PCE/AF.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-07-02T12:05:56.480Z · LW(p) · GW(p)

I had a closer look at the AF website. The guy's biography was interesting. He starts out juxtaposing himself as a young conscript in the Vietnam war, facing a Buddhist priest burning himself alive, and feeling that both these sides are wrong. He struggles with the meaning of life, for some years falls into spiritual-savior consciousness, seeking to be or feeling that he is an enlightened teacher. Then eventually he abandons that too, in favor of "the actual world". Thus, the ordinary ego-self he used to have was false, but so was the metaphysical no-self of his enlightened stage. Having woken up to reality itself, as he sees it, he starts a website or two, and after more than a decade, he has gathered a very small nucleus of people who also find meaning in the particular theory and practice which he espouses.

I was, however, disturbed by what happened to Daniel Ingram. I found on the web an old email discussion between yourself, Ingram, and Harmanjit Singh. In that discussion, Ingram wrote with a clarity and confidence suggesting that he really knew what he was talking about. But when I see his posts now on Dharma Overground, he sounds very confused. It's also intriguing that Harmanjit himself has rejected AF since that discussion.

What Richard is saying is that one can completely (100%) eradicate the root of all the misery/mayhem of the world.

Well, I would say that is completely (100%) bullshit - as are your references to "24x7 sensuous delight". You do not achieve that just by getting rid of "malice and sorrow" - unless we are using the word "delight" in some innovative sense that would include, say, having your face torn off by a chimpanzee, to name just one disgusting example that I ran across recently of what can happen to a person in this world. Before and above the suffering that human beings create for each other, and the suffering that human beings create psychologically for themselves, the very condition of embodiment already exposes you to horrendous hazards, which reveal something like AF to be nothing more than a sort of post-anti-metaphysical rational-emotive therapy. It may in fact be possible for purely psychological maneuvers to change the qualitative feel of even the worst pain into merely intense sensation without emotional valence; the burning monk that Richard encountered during his tour of duty in Vietnam is already evidence of this. Then again, those monks train a lifetime in order to acquire the ability for such gestures, and the fact that their bodies are already falling apart due to old age may help some of them over the line.

The other critical observation I would want to make is that the idea of getting rid of negative feelings and having only positive feelings I think is, again, bullshit. The idea that the capacity for suffering is linked to the capacity for happiness is one of the simplest, least welcome, and yet most plausible of the cliches in the Buddhist catechism. I know very well that there are lots of people who think they can have their happiness and their enlightenment at the same time, by just being "unattached" to the happiness that comes to them. And probably a sophisticated aesthete who knows their own mind (has a high degree of luminosity, in the local jargon) and who can anticipate that getting too wrapped up in a particular pleasure will harm them later on, can learn to make judgment calls about the degree and manner of engagement with a particular experience that will be the most pleasurable. That sort of rational hedonism might even be combined with some of the LW methods. But please don't kid us or yourself that something like AF is the key to frickin' world utopia. If everyone adopted AF the world would be so much happier? Well, if everyone adopted Nazism the world would be a lot happier, too. Any such homogeneity of outlook and of understanding would have that effect. For a while.

Replies from: naivecortex
comment by naivecortex · 2010-07-02T20:42:56.246Z · LW(p) · GW(p)

Having woken up to reality itself, as [Richard] sees it, he starts a website or two, and after more than a decade, he has gathered a very small nucleus of people who also find meaning in the particular theory and practice which he espouses.

Two things:

  • The reality that you speak of is referred to as actuality (the sensory experience, minus the affect) in the AF lingo; where the word 'reality' is used to refer to the affective inner reality (the emotive cloud surrounding the actual sensations).

  • The 'meaning' that you speak of is found only in a PCE or other lesser forms of experience (feeling good/carefree/etc). There is no meaning in "theory" (AF is not a theory; it is a repeatable/verifiable condition). And the only meaning of the "method" is to facilitate more and more felicitous experiences leading to PCEs.

Ingram wrote with a clarity and confidence suggesting that he really knew what he was talking about

Indeed he did. I too was (intellectually) aware of the spiritual experience that he was referring to, which is in a word called "acceptance" of things as they are.

But when I see his posts now on Dharma Overground, he sounds very confused.

"very confused", eh? How on earth can what he wrote so clearly, especially the following extract, sound like an expression of high confusion?

[Daniel] One of the interesting things about arahatship is that it conveys this fantastic clarity about that particular form of unclarity, once one has a proper contrast between it and the PCE, and having gone back and forth probably 100 times in the last 4 months between the two, I think I get the two pretty well at this point, though there may yet be surprises and fine points, and I suspect there are.

Well, I would say that [the possibility of completely eradicating the root of all misery/mayhem of the world] is completely (100%) bullshit - as are your references to "24x7 sensuous delight". You do not achieve that just by getting rid of "malice and sorrow" -

The words malice and sorrow refer to the feelings, and not to unfortunate life situations (such as losing a large portion of one's financial wealth).

The 'sensual delight' I speak of is an inherent quality of PCE - sensuous experience bereft of identity/feelings.

[...] unless we are using the word "delight" in some innovative sense that would include, say, having your face torn off by a chimpanzee, to name just one disgusting example that I ran across recently of what can happen to a person in this world.

Physical pain is not extirpated in an actual freedom; for otherwise one could sit on a hot stove and still not know that one's bum is on fire. What is extirpated is the affective reaction to this physical pain.

Richard, and other actually free people, have spoken of this delight being uninterrupted even in the presence of physical pain.

AF [is] nothing more than a sort of post-anti-metaphysical rational-emotive therapy.

Yet rational emotive behaviour therapy does not eliminate the feelings and identity. Further to the point, here is a gem about REBT from Wikipedia: "Much of what we call emotion is nothing more nor less than a certain kind — a biased, prejudiced, or strongly evaluative kind — of thought." - which perhaps explains why it doesn't go any deeper. (Mr LeDoux's studies show clearly that feelings come prior to thought, as evidenced by the fast neural connections to the amygdala compared to those to the neo-cortex.)

The other critical observation I would want to make is that the idea of getting rid of negative feelings and having only positive feelings I think is, again, bullshit.

I have no idea where you got this information from. What the AF method suggests is to increase the moments of felicitous feelings, and minimize the 'good' (trusting/loving) and 'bad' (hateful/fearful) feelings ... so as to facilitate a PCE arising (only in a PCE are there no feelings/identity). Until then - and as Daniel too suggests - the idea is to imitate it in one's affective sphere.

[...] That sort of rational hedonism might even be combined with some of the LW methods.

There being no feelings to begin with, the condition of a PCE has nothing to do with hedonism at all.

But please don't kid us or yourself that something like AF is the key to frickin' world utopia.

No, not at all. I know perfectly well that I am the only person I can change. Global peace and harmony (what you call 'world utopia') is of course only possible when each and every one of us uproots the cause of sorrow/malice (i.e., when a sufficient number of people get actually free so as to stir the motivation in others).

Well, if everyone adopted Nazism the world would be a lot happier, too.

It is beyond me what relation you see between the condition of living without sorrow/malice and Nazism.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-07-03T04:06:44.712Z · LW(p) · GW(p)

I recommend that you read the weird old novel A Voyage to Arcturus and contemplate the figure of Gangnet. And I will say no more.

comment by WrongBot · 2010-07-02T02:23:23.220Z · LW(p) · GW(p)

Welcome to Less Wrong. You may want to take a look at the articles listed on the LessWrong wiki page on religion; they may provide an understanding of why you are being downvoted.

Replies from: mattnewport
comment by mattnewport · 2010-07-02T02:37:43.395Z · LW(p) · GW(p)

I downvoted it because it looks a lot like spam, not so much because it is specifically religious spam.