Open Thread: May 2010

post by Jack · 2010-05-01T05:29:40.871Z · LW · GW · Legacy · 558 comments

You know what to do.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

558 comments

Comments sorted by top scores.

comment by [deleted] · 2010-05-01T18:25:05.189Z · LW(p) · GW(p)

Is anyone else here disturbed by the recent Harvard incident, in which Stephanie Grace's perfectly reasonable email, one that merely expresses agnosticism about the possibility that the well-documented IQ differences between groups are partially genetic, was judged worthy of harsh and inaccurate condemnation by the Harvard Law School dean?

I feel sorry for the girl, since she trusted the wrong people (the email was allegedly leaked by one of her girlfriends after a dispute with her over a man). We need to be extra careful to self-censor any rationalist discussions about cows "everyone" agrees are holy. These are things I don't feel comfortable even discussing here, since they have ruined many careers and lives through relentless persecution. Even recanting doesn't help at the end of the day, since you are a Google search away and people who may not even understand the argument will hate you intensely. Scary.

I mean, surely everyone here agrees that the only way to discover truth is to let all the hypotheses stand on their own, without granting a few of them the privilege of suppressing the competition. Why is our society so insane that this regularly happens even concerning views that many relevant academics hold in private (or that even a majority hold in certain fields, if the polling is anonymous)?

PS: Also, why does the Dean equate intelligence with genetic superiority and, implicitly, even with worth as a person? This is a disturbing view, since half the population will by definition always be below average. And given that we're all going to be terribly stupid compared to AIs in the near future, such implicit values are dangerous in the context of the time we may be living in.

Replies from: None, None, Nick_Tarleton, timtyler, steven0461, Rain, RobinZ, Morendil, JanetK
comment by [deleted] · 2010-05-02T12:39:50.412Z · LW(p) · GW(p)

I'm a bit upset.

In my world, that's dinner-table conversation. If it's wrong, you argue with it. If it upsets you, you are more praiseworthy the more you control your anger. If your anti-racism is so fragile that it'll crumble if you don't shut students up -- if you think that is the best use of your efforts to help people, or to help the cause of equality -- then something has gone a little screwy in your mind.

The idea that students -- students! -- are at risk if they write about ideas in emails is damn frightening to me. I spent my childhood in a university town. This means that political correctness -- that is, not being rude on the basis of race or ethnicity -- is as deep in my bones as "please" and "thank you." I generally think it's a good thing to treat everyone with respect. But the other thing I got from my "university values" is that freedom to look for the truth is sacrosanct. And if it's tempting to shut someone up, take a few deep cleansing breaths and remember your Voltaire.

My own beef with those studies is that you cannot (to my knowledge) isolate the genetics of race from the experience of race. Every single black subject whose IQ is tested has also lived his whole life as black. And we have a history and culture that makes race matter. You can control for income and education level, because there are a variety of incomes and education levels among all races. You can control for home environment with adoption and twin studies, I guess. But you can't control for what it's like to live as a black person in a society where race matters, because all black people do. So I can't see how such a study can really ever isolate genetics alone. (But correct me if I'm missing something.)

Replies from: Jack
comment by Jack · 2010-05-02T18:56:50.731Z · LW(p) · GW(p)

Since a mixed racial background should make a difference in genes but makes only a small difference in the way our culture treats a person, if the IQ gap is the result of genetics we should find that those with mixed-race backgrounds have higher IQs than those of mostly or exclusively African descent. This has been approximated with skin-tone studies in the past; my recollection is that one study showed a slight correlation between lighter skin tone and IQ and another showed no correlation. There just hasn't been much research done, and I doubt there will ever be much research (which is fine by me).

Replies from: None, NancyLebovitz
comment by [deleted] · 2010-05-03T14:39:27.354Z · LW(p) · GW(p)

I'm still not confident because we're not, as Nancy mentioned, completely binary about race even in the US.

What you'd really need to do is a comparative study between the US and somewhere like Brazil or Cuba, which had a different history regarding mixed race. (The US worked by the one-drop-of-blood rule; Spanish and Portuguese colonies had an elaborate caste system where more white blood meant more legal rights.) If it's mainly a cultural distinction, we ought to see a major difference between the two countries -- the light/dark gap should be larger in the former Spanish or Portuguese colonies than it is in the US. If culture doesn't matter much, and the gap is purely genetic, it should be the same all around the world.

The other thing I would add, which is easy to lose track of, is that this is not research that should be done exclusively by whites, and especially not exclusively by whites who have an axe to grind about race. Bias can go in that direction as well, and a subject like this demands extraordinary care in controlling for it. Coming out with a bad, politically motivated IQ study could be extremely harmful.

Replies from: Jack, NancyLebovitz
comment by Jack · 2010-05-03T16:53:48.904Z · LW(p) · GW(p)

The other thing I would add, which is easy to lose track of, is that this is not research that should be done exclusively by whites, and especially not exclusively by whites who have an axe to grind about race.

Frankly, I'm not sure why the research should be done at all.

comment by NancyLebovitz · 2010-05-03T15:04:41.955Z · LW(p) · GW(p)

The Minnesota Trans-Racial Adoption Study suggests that a lot of the difference is cultural and/or that white parents are better able to protect their children from the effects of prejudice.

I also have no idea what the practical difference of 4 IQ points might be.

I don't know where you'd find people who were interested enough in racial differences in intelligence to do major studies on it, but who didn't have preconceived ideas.

comment by NancyLebovitz · 2010-05-02T19:05:49.236Z · LW(p) · GW(p)

Afaik, skin tone, hair texture, and facial features make a large difference in how African Americans treat each other.

White people, in my experience, are apt to think of race in binary terms, but the above suggests that skin tone does affect how African Americans actually get treated.

comment by [deleted] · 2010-05-01T19:37:32.872Z · LW(p) · GW(p)

Here is the leaked email by Stephanie Grace if anyone is interested.

… I just hate leaving things where I feel I misstated my position.

I absolutely do not rule out the possibility that African Americans are, on average, genetically predisposed to be less intelligent. I could also obviously be convinced that by controlling for the right variables, we would see that they are, in fact, as intelligent as white people under the same circumstances. The fact is, some things are genetic. African Americans tend to have darker skin. Irish people are more likely to have red hair. (Now on to the more controversial:)

Women tend to perform less well in math due at least in part to prenatal levels of testosterone, which also account for variations in mathematics performance within genders. This suggests to me that some part of intelligence is genetic, just like identical twins raised apart tend to have very similar IQs and just like I think my babies will be geniuses and beautiful individuals whether I raise them or give them to an orphanage in Nigeria. I don’t think it is that controversial of an opinion to say I think it is at least possible that African Americans are less intelligent on a genetic level, and I didn’t mean to shy away from that opinion at dinner.

I also don’t think that there are no cultural differences or that cultural differences are not likely the most important sources of disparate test scores (statistically, the measurable ones like income do account for some raw differences). I would just like some scientific data to disprove the genetic position, and it is often hard given difficult to quantify cultural aspects. One example (courtesy of Randall Kennedy) is that some people, based on crime statistics, might think African Americans are genetically more likely to be violent, since income and other statistics cannot close the racial gap. In the slavery era, however, the stereotype was of a docile, childlike, African American, and they were, in fact, responsible for very little violence (which was why the handful of rebellions seriously shook white people up). Obviously group wide rates of violence could not fluctuate so dramatically in ten generations if the cause was genetic, and so although there are no quantifiable data currently available to “explain” away the racial discrepancy in violent crimes, it must be some nongenetic cultural shift. Of course, there are pro-genetic counterarguments, but if we assume we can control for all variables in the given time periods, the form of the argument is compelling.

In conclusion, I think it is bad science to disagree with a conclusion in your heart, and then try (unsuccessfully, so far at least) to find data that will confirm what you want to be true. Everyone wants someone to take 100 white infants and 100 African American ones and raise them in Disney utopia and prove once and for all that we are all equal on every dimension, or at least the really important ones like intelligence. I am merely not 100% convinced that this is the case.

Please don’t pull a Larry Summers on me,

A few minor fallacies, but overall quite respectable and even stimulating conversation; nothing any reasonable person would consider grounds for ostracism. Note the reference to the opinion being expressed at dinner. She was betrayed by someone she socialised with.

And yes, I am violating my own advice by bolding that one sentence. ;) I just wanted to drive home how close she may be to a well-meaning, if perhaps slightly tactless, poster on Less Wrong. Again, we need to be careful. What society considers taboo changes over time as well, so one must get a feel for where on the scale of the forbidden a subject sits at any given time, and which way the winds of change are blowing, before deciding whether to discuss it online. Something innocuous could cost you your job a decade or so in the future.

Edit: For anyone wondering what a "Larry Summers" is.

Replies from: arundelo, CronoDAS, Matt_Simpson, CronoDAS, novalis
comment by CronoDAS · 2010-05-01T20:55:26.605Z · LW(p) · GW(p)

One of the people criticizing the letter accused the letter writer of privileging the hypothesis - that it's only because of historical contingency (i.e. racism) that someone would decide to carve reality between "African-Americans" and "whites" instead of, say, "people with brown eyes" and "people with blue eyes". (She didn't use that exact phrase, but it's what she meant.)

Replies from: None, Roko, Rain
comment by [deleted] · 2010-05-01T22:01:20.448Z · LW(p) · GW(p)

Isn't nearly everything a social construct though? We can divide people into two groups, those with university degrees and those without. People with them may tend to live longer or die earlier, they may earn more money or less, etc. We may also divide people into groups based on self-identification: do blondes really have more fun than brunettes, do hipsters really feel superior to non-hipsters, do religious people have lower IQs than self-identified atheists, etc.? Concepts like species, subspecies and family are also constructs that are just about as arbitrary as race.

It doesn't really matter in the end. Regardless of how we carve up reality, we can then proceed to ask questions and get answers. Suppose in 1900 we ran a global test to see whether blue-eyed or brown-eyed people have higher IQs. Lo and behold, we see that brown-eyed people have higher IQs. But in 2050 the reverse is true. What happened? The population with brown eyes was heterogeneous and its demographics changed! However, if we looked at skin-cancer rates, we would still see that people with blue eyes have higher rates of skin cancer in both periods.
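(A minimal simulation sketch of the composition effect just described; all of the numbers are made up purely for illustration.)

```python
# The eye-colour group's mean "IQ" flips between 1900 and 2050 only because the
# group's composition changes; a trait tied directly to eye colour (such as
# skin-cancer risk) would stay stable. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

def brown_eyed_mean_iq(share_subpop_a, n=100_000):
    # The brown-eyed group is a mixture of two subpopulations with different means.
    from_a = rng.random(n) < share_subpop_a
    scores = np.where(from_a, rng.normal(105, 15, n), rng.normal(95, 15, n))
    return scores.mean()

print(brown_eyed_mean_iq(0.8))  # ~103 in 1900, when subpopulation A dominates
print(brown_eyed_mean_iq(0.2))  # ~97 in 2050, after the demographics have shifted
```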

So why should we bother carving up reality on this racial metric and asking questions about it? For the same reason we bother to carve up reality on the family or gender metric: we base policy on it. If society were colour-blind, there would be no need for this. But I hope everyone here can see that society isn't colour-blind.

For example, affirmative action's ethical status (currently framed as a necessary adjustment against biases, not as reparations for past wrongs) depends on what the data have to say about group differences.

If the data show that people with blue eyes in our country have lower mean IQs when controlling for socioeconomic status and the like, we shouldn't blame racism for their higher college drop-out rates if those rates are what we would expect after controlling for IQ. Keeping the policy would then mean discriminating against competent brown-eyed people. But if there is no difference, then the policy is justified, unless it turns out there is some other reason behind the gap that has nothing to do with discrimination.

I hope, however, that you agree that (regardless of the truth of this particular matter) someone should not be vilified for asking questions or proposing hypotheses about social constructs we have in place, regularly operate with, and even make quantifiable claims about.

Replies from: Jack, cupholder, CronoDAS
comment by Jack · 2010-05-01T22:22:47.601Z · LW(p) · GW(p)

Concepts like species, subspecies and family are also constructs that are just about as arbitrary as race.

This is a matter of much dispute and a lot of confusion. See here.

Replies from: kim0, None
comment by kim0 · 2010-05-02T12:52:36.824Z · LW(p) · GW(p)

I wondered how humans are grouped, so I got some genes from around the world and did an eigenvalue analysis, and this is what I found:

http://kim.oyhus.no/EigenGenes.html

As you can see, humans are indeed clustered in subspecies.
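(For anyone unfamiliar with this kind of analysis, the general technique, a principal component analysis of a genotype matrix, looks roughly like the sketch below. The genotype data here are random placeholders rather than the dataset behind the link, so they will not show any clustering.)

```python
# Sketch of an eigen-decomposition (PCA) of genotype data. With real, structured
# data, individuals from different ancestral populations separate into clusters
# along the top components; the random placeholder matrix below will not.
import numpy as np

rng = np.random.default_rng(0)
n_individuals, n_snps = 200, 1000

# Genotypes coded as 0/1/2 copies of the minor allele at each SNP.
genotypes = rng.integers(0, 3, size=(n_individuals, n_snps)).astype(float)

# Center each SNP column, then take the SVD; the leading singular vectors are the
# principal components along which individuals spread out.
centered = genotypes - genotypes.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

pcs = U[:, :2] * S[:2]                   # each individual's coordinates on PC1/PC2
print(pcs.shape)                         # (200, 2)
print(S[:5] ** 2 / (n_individuals - 1))  # variance carried by the leading components
```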

Replies from: Jack
comment by Jack · 2010-05-02T18:27:38.875Z · LW(p) · GW(p)

This doesn't demonstrate subspecies.

comment by [deleted] · 2010-05-01T22:37:09.863Z · LW(p) · GW(p)

Thanks for the link, I'm reading it now.

I just want to clear up that in that sentence I'm referring to species and subspecies in the biological sense, and to family in the ordinary everyday sense, not the taxonomic category between order and genus.

comment by cupholder · 2010-05-02T18:47:23.786Z · LW(p) · GW(p)

For example, affirmative action's ethical status (currently framed as a necessary adjustment against biases, not as reparations for past wrongs) depends on what the data have to say about group differences.

Only if you accept that particular framing, I would have thought? If one chooses to justify affirmative action as reparations for past wrongs, 'what the data have to say about group differences' won't change your opinion of affirmative action.

(ETA - Also.)

Replies from: None
comment by [deleted] · 2010-05-03T17:10:10.803Z · LW(p) · GW(p)

Of course one can do this. But then you get into the sticky issue of why we should group reparations by race. Aren't the Catholic Irish entitled to reparations for their mistreatment as immigrant labour and the discrimination against them based on their religion, if the same is true of the Chinese? Aren't Native Americans a bit more entitled to reparations than, say, Indian immigrants? Also, why are African Americans descended from slaves not differentiated from those who migrated to the US a generation ago (after the civil rights era)?

And for how long should such reparations be paid? Indefinitely?

I hope that from the above you can see why there would need to be a new debate on affirmative action if one reframes it.

Replies from: cupholder
comment by cupholder · 2010-05-03T23:19:02.563Z · LW(p) · GW(p)

I don't believe affirmative action is justified by 'past wrongs' - I used that as an example only because you mentioned it. (Personally, I believe it makes more sense to justify affirmative action as a device to offset present disadvantages.) I meant only to make the point that the statement 'affirmative action's ethical status...depends on what the data have to say about group differences' is too broad, because there are justifications for affirmative action that do not hinge on the nature of IQ differences between blacks and whites.

Replies from: None
comment by [deleted] · 2010-05-05T17:24:19.892Z · LW(p) · GW(p)

I wrote 'affirmative action as it is currently framed'. I consider that an important distinction. I never denied that other frames were possible; I'm just saying the current support for affirmative action amongst groups that are harmed by it is loosely based on the notion that it is offsetting an unwarranted privilege (bias by employers, in other words) of the majority.

Replies from: cupholder
comment by cupholder · 2010-05-06T01:51:07.113Z · LW(p) · GW(p)

I think we both agree that 'what the data have to say about group differences' does not necessarily affect 'affirmative action's ethical status' in general - only if one justifies it on grounds that make assumptions about the nature of IQ differences between groups. That just wasn't clear to me as of four days ago due to your phrasing.

comment by CronoDAS · 2010-05-01T22:49:52.354Z · LW(p) · GW(p)

I didn't say I agreed.

Replies from: None
comment by [deleted] · 2010-05-01T23:14:24.962Z · LW(p) · GW(p)

I never said you did. :) Would you however agree with the sentiment of my last paragraph?

This thread of conversation is easily derailed since whether group differences exist isn't really its topic.

Replies from: CronoDAS
comment by CronoDAS · 2010-05-02T01:47:15.213Z · LW(p) · GW(p)

Yeah, I do...

comment by Roko · 2010-05-02T23:56:56.322Z · LW(p) · GW(p)

Black people routinely outperform whites at elite running events, and Asians already rule at math and science, so the hypothesis that there are genetic differences in performance between blacks and whites is already something one should consider likely.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-05-03T00:13:30.137Z · LW(p) · GW(p)

IAWYC, but "Asians rule at math and science" seems to have a huge cultural basis, and it's at least no more obvious that it has a genetic component than that racial IQ gaps do.

Replies from: Roko
comment by Roko · 2010-05-03T00:41:05.716Z · LW(p) · GW(p)

To someone who knows that Asian math achievement has a fully or almost fully cultural basis, the Asian math link doesn't do work privileging the hypothesis that there might be a black/white genetic IQ difference.

However, to someone who simply sees math classes full of people with yellow skin, and doesn't know why, it does do work privileging the hypothesis that there might be a black/white genetic IQ difference, rather than e.g. anti-black discrimination causing lower grades for blacks, etc.

Of course, if you saw Asian-filled math classes, there must have already been something that made you assign some probability to the hypothesis that genes, not memes were responsible.

and it's at least no more obvious that it has a genetic component than that racial IQ gaps do.

I don't think it has to be more obvious or clear-cut; it moves you evidentially simply by being another instance of the same thing.

If the only racial-feature correlation in the world was that black people tested low on IQ, then the idea that genes rather than, say, discrimination were responsible would be something of a stretch.

But when you see a whole collection of racial-feature correlations, the idea that genes are responsible to some extent becomes more plausible. It is a reasonable AI/machine learning heuristic to jump from co-variation to some kind of causal link: if you see skin color (which is well known to indicate DNA-type) covary with ability at sport, ability at math, ability at IQ tests, criminality, etc, you place some weight on the hypothesis that DNA directly causally influences these things. Of course, you don't put all of your weight on that.

Replies from: None
comment by [deleted] · 2010-05-03T16:50:37.069Z · LW(p) · GW(p)

@Nick Tarleton:

Can you explain how you know Asian math achievement has a fully cultural basis? Haven't cross-racial adoption studies shown that adopted East Asian children do better than their white peers on IQ tests? I also remember hearing claims that Asians generally do better than whites on the visual-spatial component of IQ tests.

Edit: Originally addressed @Roko

Replies from: Roko
comment by Roko · 2010-05-03T18:24:00.762Z · LW(p) · GW(p)

Nick Tarleton said it, not me ;-)

I have not seen evidence either way; my arguments given above are not dependent upon it being true or false.

Replies from: None
comment by [deleted] · 2010-05-03T20:54:49.484Z · LW(p) · GW(p)

I misread the first sentence. Thanks for the correction, I'll put a @Nick Tarleton in there then.

comment by Rain · 2010-05-01T21:38:39.992Z · LW(p) · GW(p)

I think it would be fascinating if people with blue eyes were more or less intelligent, when controlling for the variables, than people with brown eyes.

That said, I would expect a larger genetic variation when choosing between long-separated and isolated populations rather than eye colors.

Replies from: None
comment by [deleted] · 2010-05-01T22:27:34.282Z · LW(p) · GW(p)

I'm using eye color as an example here since CronoDAS mentioned it. Replace it with a particular gene, future time orientation, nose type or whatever. If society makes quantifiable claims about a particular category into which we slice up reality (i.e. "Atheists are more likely to rape and murder!"), an individual should have the right to either test or demand proof of that quantifiable claim.

Race is a pretty good proxy for which populations your ancestors came from. It's not perfect, since for example the Black race has the most genetic diversity, and gene flow has increased since the rise of civilization and especially globalisation. Knowing, however, whether most of your ancestors lived outside of Africa for the last 60,000 years, or that your group of ancestors diverged from the other guy's group of ancestors 40,000 years ago, is also relevant information.

I stole this graph from Razib's site (gene expression) for a quick reference of what current biology has to say about ancestral populations.

http://www.gnxp.com/wp/wp-content/uploads/2010/02/PIIS096098220902065X.gr2_.lrg_.jpg

comment by Matt_Simpson · 2010-05-01T20:46:36.199Z · LW(p) · GW(p)

A few minor fallacies

Care to point them out?

Replies from: None
comment by [deleted] · 2010-05-01T21:31:26.371Z · LW(p) · GW(p)

Most escape me right now, but I do recall something that bothered me... She implicitly uses stereotypes of African American behaviour, and how they change over time, as an indicator of the actual change in violent behaviour.

I'm sure it correlates somewhat, but considering how much stronger the changes in wider society were, and how much people's interests changed over time regarding what it was best to have other people believe about Black behaviour, I don't think you can base an argument on this either way.

comment by CronoDAS · 2010-05-03T21:01:28.071Z · LW(p) · GW(p)

Here's a bit more on the "privileging the hypothesis" bit, taken from here:

UPDATE: A lot of commenters have said that not addressing the substance of Stephanie’s email — the contention that it’s possible that black people are genetically inferior to white people — is a mistake, and weakens my point. So, why am I not addressing the idea that maybe black people are intellectually inferior, even if just to quickly debunk Stephanie’s argument? Because if I did that, the comments to this post would turn into a referendum on the genetics of intelligence, and there are always going to be a few very vocal people who have a lot invested in the falsity that black people are genetically inferior, and those people are not going to be convinced by any amount of evidence. It’s also impossible to prove, beyond any scientific doubt at all, that there is no genetic differentiation between racial and ethnic groups. That is, basically, how science works — it’s the reason that people who have some political or religious or personal investment in the idea that evolution is a crock will fall back on the “well evolution is only a theory!” line. Yes, it is “only” a theory, but it’s a theory that has a whole mountain of evidence behind it; and it’s called a theory because scientists are awfully hedgy, for good reason, about calling anything The Absolute Proven Truth. I’ll quote commenter MJ, who makes this point well:

One hears this kind of statement often from advocates of quasi-racist positions. “Oh, of course I could be convinced of perfect equality, if only someone could show me a study the proves that no differences exist!” It’s an extremely disingenuous argument and reflects a fundamental (deliberate?) misunderstanding about statistics.

No study can ever “prove” that no difference between two groups exists—a study can only fail to detect a difference of a certain size with a certain confidence level. Any experiment with enough statistical power will be able to find differences between any two groups, even two flasks of genetically identical bacteria, if you try hard enough.

My point is that asking for a study that demonstrates equality may sound reasonable, but is in fact just a rhetorical technique that can never be satisfied and serves as a shield for racist ideas.

Intelligence, too, is impossible to separate from environment and socialization, again making it impossible for anyone to say with absolute certainty that there is absolutely no biological or genetic difference at all ever between racial and ethnic groups. Intelligence is also incredibly difficult to accurately measure. But for all intents and purposes, the evidence is pretty clear that there aren’t discernible genetic differences when it comes to intelligence. But it’s always possible to make the argument that “We haven’t proven that there are no differences.” That argument tells you a lot more about the person making it than it does about any scientific fact.

I take people who argue that maybe there are race-based genetic differences that determine intelligence about as seriously as I take people who argue that maybe God did create the earth in 7 days with all humans and animals in the exact same form as we find them today. And you know, opening up a free-for-all discussion about race-based genetic difference will be about as fruitful as opening up a discussion about Creationism vs. Evolution. Discussing why Creationists are wrong and trying to convince anyone to switch “sides” in that debate (if you can even call it that) is pointless; if you really feel the need to repeat, “But evolution is just a theory and it doesn’t explain everything, so Creationism can’t be totally ruled out”, then you have some personal or religious or political or cultural investment in that idea, which won’t be toppled by evidence or rationality. Similarly, if you feel the need to repeat, “But it can’t be totally ruled out that there may be genetic differences between the races which make black people intellectually inferior,” you have some personal or political or cultural investment in that idea, which probably won’t be toppled by evidence or rationality. There is no “winning” in this debate.

And the greater danger of even opening up the debate is that, unlike creationism vs. evolution, the question of “are some people genetically inferior to others?” has been used in the service of injustice great and small. Even if we put aside the point that the genetics question has been used to justify slavery, mass sterilization, genocide, incarceration and violence — not a small point to put aside, certainly — the fact remains that the continued asking such an absurd, disproven question does harm. I can understand, for people who are not in the group that has been deemed potentially genetically inferior, that just raising the issue may not feel harmful. But for the people who are in that group? Who know the history? Who are routinely treated to questions like this under the guise of “I’m only asking the question!” but who know quite well that “only asking the question” is, itself, a way of suggesting that the answer to the question just might be yes? Who, by having to respond to the question over and over are basically being told, “You may just not be as intelligent as white people, genetically; you, as you were born, are just less”? It is harmful. It is part of a generations-long continuum of harm. It is a kind of psychological warfare that white people have waged on people of color and other less “fit” populations for centuries, which has augmented, supported and justified physical warfare, slavery, colonialism and genocide.

So no, I am not going to open up a discussion as to why Stephanie Grace’s suggestion that black people may be genetically intellectually inferior to white people is wrong. The asking of the question, and taking the question seriously, suggests that the answer just might be yes, no matter how many times the evidence points in the opposite direction. That does real harm to members of our community; it has done real harm historically to huge numbers of people, and will continue to do real harm in the future. There are many, many places on the internet where you all can talk about this to your heart’s content. I feel no obligation to provide a forum for such a pointless, hateful and harmful debate.
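(To illustrate the statistical-power point in the quoted comment above, here is a minimal, purely illustrative simulation. The 0.02-standard-deviation "true" difference and the sample sizes are arbitrary, made-up numbers.)

```python
# With a tiny true difference between two groups, a small study "finds nothing",
# while a sufficiently large study reports a "significant" difference.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def two_sample_p(n):
    # Two groups whose true means differ by only 0.02 standard deviations.
    a = rng.normal(100.0, 15.0, n)
    b = rng.normal(100.3, 15.0, n)
    z = (b.mean() - a.mean()) / sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal approximation

print(two_sample_p(100))        # typically well above 0.05: "no difference detected"
print(two_sample_p(2_000_000))  # typically far below 0.05: "significant difference"
```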

Replies from: Emile
comment by Emile · 2010-05-04T12:19:14.516Z · LW(p) · GW(p)

My "wrong-headed thinking" radar is picking up more bleeps from this than from the incriminating email:

  • "There are people with vested interests" is basically unverifiable, she's basically assuming anybody who disagrees is a fundamentally racist mutant
  • "People won't change their mind anyway, the discussion will be pointless" can be said of any controversial subject
  • The comparison to creationists can also be used to tar any opponent, there should be some version of Godwin's law for that
  • The argument that "one can always find a difference if one looks hard enough"
  • "No study can ever “prove” that no difference between two groups exists" seems to be besides the point - the question isn't whether any difference exists, but whether this specific difference exists, something that can be proved or disproved by experiment. (Well, more exactly, the topic would be what the cause of the difference is)
comment by novalis · 2010-05-02T20:46:38.623Z · LW(p) · GW(p)

As the prior thread makes clear, distinguishing between genetic and environmental causes of intelligence is immensely complicated -- especially given the confusion over what intelligence is.

However, it is well known that people don't like being told that they're statistically less likely to be intelligent. There are a fair number of studies showing that promoting stereotypes can actually reduce test scores; this is called "stereotype threat". While there is a recent meta-study claiming that the effect is an artifact of publication bias, that study had not been published when Grace wrote her email.

Grace (a) has no new data, and (b) has no new arguments. When she makes the claim that the search for evidence that the race-iq correlation is not genetic has been "unsuccessful", she hurts people. But she does not, in return, contribute anything at all to the discourse.

She cannot even claim the virtue of curiosity -- note that her open-mindedness extends to the idea that African Americans might be as smart as whites, but not to the idea that they might be smarter.

Someone whose grasp of evidence is that weak should not be working in the law.

Someone who callously performs any act which she knows, or should know, will cause harm to people without any offsetting benefit should probably be publicly shamed.

Replies from: wnoise, Jack, Matt_Simpson
comment by wnoise · 2010-05-03T18:21:22.021Z · LW(p) · GW(p)

When she makes the claim that the search for evidence that the race-iq correlation is not genetic has been "unsuccessful", she hurts people. But she does not, in return, contribute anything at all to the discourse.

Someone who callously performs any act which she knows, or should know, will cause harm to people without any offsetting benefit should probably be publicly shamed.

She was talking to friends at dinner. No harm there. The harm comes when months later one of the dinner companions forwards the e-mail to those who will likely be hurt.

Replies from: Richard_Kennaway, novalis
comment by Richard_Kennaway · 2010-05-04T05:30:48.356Z · LW(p) · GW(p)

The harm comes when months later one of the dinner companions forwards the e-mail to those who will likely be hurt

It is the dinner companion who should be condemned, if this account of the matter is accurate.

comment by novalis · 2010-05-04T03:12:37.218Z · LW(p) · GW(p)

There is always harm when a person makes a statement without regard to its truth.

comment by Jack · 2010-05-02T21:19:18.046Z · LW(p) · GW(p)

Your standards for a dinner time discussion among law students are awfully high.

Someone whose grasp of evidence is that weak should not be working in the law.

Incidentally, the only poster here who has ever claimed to be a practicing attorney (afaik) was Brazil, from the prior thread.

Replies from: cupholder, novalis
comment by cupholder · 2010-05-02T21:49:46.670Z · LW(p) · GW(p)

Incidentally, the only poster here who has ever claimed to be a practicing attorney (afaik) was Brazil, from the prior thread.

So that's why I felt like he was cross-examining me in that thread. Mystery solved...

Replies from: Jack
comment by Jack · 2010-05-02T21:57:47.525Z · LW(p) · GW(p)

Well perhaps, fundamental attribution error and all that. Maybe he was just having a bad week or got defensive after we ganged up on him.

(Edit: but his global warming blog had the same kind of tone and approach)

Replies from: cupholder
comment by cupholder · 2010-05-02T22:00:05.352Z · LW(p) · GW(p)

Good point.

comment by novalis · 2010-05-03T13:58:36.539Z · LW(p) · GW(p)

I believe that "choose what to believe based on evidence" is not too high a standard.

The law connection is that Grace is a law student, going to clerk for a judge. Since the comment was not about her correctness but about her treatment, it's reasonable to question whether the treatment was justified.

comment by Matt_Simpson · 2010-05-02T21:04:17.294Z · LW(p) · GW(p)

Grace (a) has no new data, and (b) has no new arguments. When she makes the claim that the search for evidence that the race-iq correlation is not genetic has been "unsuccessful", she hurts people. But she does not, in return, contribute anything at all to the discourse.

Isn't acknowledging what few others will acknowledge contributing to the discourse? A substantial portion of intellectuals refuse to even acknowledge the possibility that there is a correlation between race and intelligence (controlling for culture, etc). And they don't get publicly shamed for shoddy science. Yet Grace should get publicly shamed for pointing out that the evidence suggests such a correlation? It's not as if she claimed a high degree of certainty. Besides, the best way to overcome any disadvantages one race might have in intelligence is to understand why there are differences in the first place. Refusing to believe in a substantial portion of the hypothesis space for no good reason is a potentially huge detriment to this aim.

Grace certainly made a social error, and for that perhaps she can be criticized, but it shouldn't be a social error to acknowledge different possibilities and the evidence for those possibilities in an intellectual* conversation.

* I.e., truth seeking. The evidence/possibilities shouldn't be used in a condescending way, of course.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-05-03T00:00:52.738Z · LW(p) · GW(p)

Besides, the best way to overcome any disadvantages one race might have in intelligence is to understand why there are differences in the first place.

It gets a lot more complicated when those differences are significantly directly affected by publicly discussing them, as seems to be the case. This statement may very well be true, but it's also an applause light, and makes it sound like you think reality is obligated to be set up so that truthseeking wins.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-05-03T00:14:55.878Z · LW(p) · GW(p)

It gets a lot more complicated when those differences are significantly affected by publicly discussing them, as seems to be the case.

Fair enough, though I'll point out that the discussion was over dinner/email, not in an extremely public forum where many people will be exposed (though there is still the possibility that friends tell friends who tell friends, etc.).

...and makes it sound like you think reality is obligated to be set up so that truthseeking wins.

Yes, I see that now. How about this: it's unclear that the best strategy for combating any racial disadvantages is not talking about them, rather than determining the cause and attempting to do something proactive about it.

comment by Nick_Tarleton · 2010-05-02T23:55:52.202Z · LW(p) · GW(p)

PS: Also, why does the Dean equate intelligence with genetic superiority and, implicitly, even with worth as a person?

See Michael Vassar's discussion of this phenomenon. Also, I think that people discussing statements they see as dangerous often implicitly (and unconsciously) adopt the frames that make those statements dangerous, which they (correctly) believe many people unreflectively hold and can't easily be talked out of, and treat those frames as simple reality, in order to more simply and credibly call the statement and the person who made it dangerous and Bad.

comment by timtyler · 2010-05-02T12:17:43.116Z · LW(p) · GW(p)

The Harvard incident is business as usual: http://timtyler.org/political_correctness/

comment by steven0461 · 2010-05-02T21:26:13.413Z · LW(p) · GW(p)

I think there's something to be said for not posting opinions such that 1) LW is likely to agree with the opinion, and 2) sites perceived as agreeing with the opinion are likely to be the target of hate campaigns.

Replies from: mattnewport, jimmy, Nick_Tarleton
comment by mattnewport · 2010-05-02T22:35:02.479Z · LW(p) · GW(p)

This is the best exposition I have seen so far of why I believe strongly that you are very wrong.

On a Bus in Kiev

I remember very little about my childhood in the Soviet Union; I was only seven when I left. But one memory I have is being on a bus with one of my parents, and asking something about a conversation we had had at home, in which Stalin and possibly Lenin were mentioned as examples of dictators. My parent took me off the bus at the next stop, even though it wasn’t the place we were originally going.

Please read the whole thing and remember that this is where the road inevitably leads.

Replies from: Nick_Tarleton, steven0461
comment by Nick_Tarleton · 2010-05-02T23:40:50.943Z · LW(p) · GW(p)

Yes, self-censorship is Prisoner's Dilemma defection, but unilaterally cooperating has costs (in terms of LW's nominal purpose) which may outweigh that (and which may in turn be outweighed by considerations having nothing to do with this particular PD).

Also, I think that's an overly dramatic choice of example, especially in conjunction with the word "inevitably".

Replies from: mattnewport
comment by mattnewport · 2010-05-02T23:53:42.295Z · LW(p) · GW(p)

Also, I think that's an overly dramatic choice of example, especially in conjunction with the word "inevitably".

I don't, which is why I posted it.

In the end the Party would announce that two and two made five, and you would have to believe it. It was inevitable that they should make that claim sooner or later: the logic of their position demanded it. Not merely the validity of experience, but the very existence of external reality was tacitly denied by their philosophy. The heresy of heresies was common sense. And what was terrifying was not that they would kill you for thinking otherwise, but that they might be right. For, after all how do we know that two and two make four? Or that the force of gravity works? Or that the past is unchangeable? If both the past and the external world exist only in the mind, and if the mind itself is controllable – what then?

  • Winston Smith in George Orwell’s 1984
Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-05T01:30:52.080Z · LW(p) · GW(p)

It isn't inevitable. There's a trivial demonstration that censorship and self-censorship don't necessarily form a collective downward spiral: there are societies that at one point had much heavier censorship and now don't. That's not easily made consistent with your claim.

Censorship is bad. Self-censorship is very bad. Especially on a website devoted to improving rationality, we shouldn't censor what we have to say. But the notion that small bits of self-censorship will eventually lead to believing that 2+2=5 if the Party says so is simply not called for. This is a classic example where a strong argument can be made for a claim, but the claim is undermined by the use of a very weak argument in place of the strong one.

(Incidentally, generalization from fictional evidence also comes up here).

Replies from: mattnewport
comment by mattnewport · 2010-05-05T02:27:09.215Z · LW(p) · GW(p)

It isn't inevitable. There's a trivial demonstration that censorship and self-censorship don't necessarily form a collective downward spiral: there are societies that at one point had much heavier censorship and now don't. That's not easily made consistent with your claim.

I am claiming that this road leads to totalitarianism. That is not the same as claiming that the road is one-way with no exits and no U-turns. If I thought otherwise, there would be little point in my expressing my concerns. As long as society keeps its foot on the pedal and fails to realize it is heading in the wrong direction, however, that is where it will end up. Inevitably.

(Incidentally, generalization from fictional evidence also comes up here).

This is not generalizing from fictional evidence. It is using a literary quote to express an idea more eloquently than I can myself. Since the book can be seen as a parable illustrating the same concerns I am emphasizing, I believe it is quite appropriate to quote from it. I am not using the fictional story as proof of my claim; I am quoting it to elaborate on what it is I am claiming.

comment by steven0461 · 2010-05-03T03:15:21.616Z · LW(p) · GW(p)

I'm sympathetic to this as a general principle, but it's not clear to me that LW doesn't have specific battles to fight that are more important than the general principle.

comment by jimmy · 2010-05-04T23:33:58.758Z · LW(p) · GW(p)

Perhaps there should be a "secret underground members only" section where we can discuss these things?

Replies from: RobinZ
comment by RobinZ · 2010-05-05T01:19:15.516Z · LW(p) · GW(p)

Logic would suggest that such a section would be secret, if it existed. It would be simple enough to send private messages to trusted members alerting them to the existence of a private invitation-only forum on another website where such discussions could be held.

Naturally, I would say none of this if I knew of such a forum, or had any intention of creating such. And I would not appreciate any messages informing me of the existence of such a forum - if for no other reason than that I am the worst keeper of secrets I have ever known.

Replies from: mattnewport, jimmy
comment by mattnewport · 2010-05-05T02:31:10.473Z · LW(p) · GW(p)

The first rule of rationality club is: you do not talk about rationality club.

comment by jimmy · 2010-05-05T02:22:20.375Z · LW(p) · GW(p)

There could still be a lower level of 'secrecy' where it won't show up on Google and you can't actually read it unless you have the minimum karma, but its existence is acknowledged.

It's not where you'd plan to take over the world, but I'd hope it'd be sufficient for talking about race/intelligence issues.

comment by Nick_Tarleton · 2010-05-02T23:48:03.831Z · LW(p) · GW(p)

I share your concern. Literal hate campaigns seem unlikely to me, but such opinions probably do repulse some people, and make it considerably easier for us to lose credibility in some circles, that we might (or might not) care about. On the other hand, we pretty strongly want rationalists to be able to discuss, and if necessary slay, sacred cows, for which purpose leading by example might be really valuable.

comment by Rain · 2010-05-01T18:54:28.126Z · LW(p) · GW(p)

Undiscriminating skepticism strikes again: here's the thread on the very topic of genetic IQ differences.

Replies from: Jack, None
comment by Jack · 2010-05-01T19:12:13.889Z · LW(p) · GW(p)

Oh good. Make it convenient for the guys running background searches.

comment by [deleted] · 2010-05-01T19:06:06.312Z · LW(p) · GW(p)

Thanks for the link! I'm new here and really appreciate stuff to read up on, since it's mostly new to me. :)

comment by RobinZ · 2010-05-01T18:54:21.770Z · LW(p) · GW(p)

I agree with what you've written, with particular emphasis on the problem of privacy on the Internet (and off, for that matter).

Given that I don't even know who Stephanie Grace is, though, I think I don't care.

Replies from: None
comment by [deleted] · 2010-05-01T19:21:35.633Z · LW(p) · GW(p)

I think that when arguing via email or other online means about really controversial things that don't fit your tribe's beliefs, it's best to use those channels only to send sources and citations. Avoid comments, any comments whatsoever, perhaps even quotes or, Galileo forbid, bolding anything but the title.
Encourage the people involved in the debate to do the same.

Keep any controversial conclusions gleaned from the data, or endorsements of any paper, out of the electronic record. Then, when you are in private, ask: did you manage to read the Someguyson study I sent you in email #6? When you've exhausted the mailed links, switch to gossip or the weather.

If the mail is leaked and they don't have you on record saying anything forbidden, just mailing around sources, how exactly will they tar and feather you?

I can say this mode of conversation is actually quite stimulating, since I've engaged in it before, though I've only tested it on non-controversial but complex subjects. It lets you learn what starting points the other person is coming from, and it gives you time to cool off in heated arguments. It is something, however, that drags on for weeks, so it's not really appropriate with strangers.

comment by Morendil · 2010-05-02T23:54:23.997Z · LW(p) · GW(p)

I'm more directly disturbed by the bias present in your exposition: "perfectly reasonable", "merely expresses agnosticism", "well documented", "harsh and inaccurate".

Starting off a discussion with assorted applause lights and boo lights strikes me as unlikely to lead to much insight.

What would be likely to lead to useful insight? Making use of the tools LessWrong's mission is to introduce us to, such as the applications of Bayesian reasoning.

"Intelligence has a genetic component" strikes me as a causal statement. If it is, we ought to be able to represent it formally as such, tabooing the terms that give rise to cognitive muddles, until we can tell precisely what kind of data would advance our knowledge on that topic.

I've only just cracked open Pearl's Causality, and started playing with the math, so am still very much an apprentice at such things. (I have my own reasons to be fooling with that math, which are not related to the race-IQ discussion.) But it has already convinced me that probability and causality are deep topics which it's very easy to draw mistaken conclusions about if you rely solely on a layman's intuition.

For instance, "the well documented IQ differences between groups" are purely probabilistic data, which tell us very little about causal pathways generating the data, until and unless we have either controlled experiments, or further data sets which do discriminate between the competing causal models (only very grossly distinguished into "nature" and "nurture").

I don't know if the email you quoted (thanks for that, BTW, it's a treat to have access to a primary source without needing to chase it down) is racist, but it does sound very ignorant to me. It makes unwarranted inferential leaps, e.g. from "skin and hair color are definitely genetic" to "some part of intelligence is genetic", omitting the very different length of developmental chains leading from genes to pigmentation on the one hand, and intelligence on the other. It comes across as arrogant and elitist as well as ignorant when saying "I think my babies will be geniuses and beautiful individuals whether I raise them or give them to an orphanage in Nigeria".

It is not bad science to be on the lookout specifically for data that claims to be "scientific proof" of some old and demonstrably harmful prejudices, and to hold such claims to a higher standard. Just as we do hold claims of "scientific proof of ESP" to a higher standard - at least of scrutiny and replicability - than, say, claims of a correlation between apparel color and competitive performance. We have more reason to suspect ulterior motives in the former case than in the latter.

Replies from: Jack, Tyrrell_McAllister
comment by Jack · 2010-05-03T00:54:30.693Z · LW(p) · GW(p)

Dinnertime conversations between regular, even educated people do not contain probabilistic causal analyses. In the email Grace claimed something was a live possibility and gave some reasons why. Her argument was not of the quality we expect comments to have here at Less Wrong. And frankly, she does sound kind of annoying.

But that all strikes me as irrelevant compared to being made into a news story and attacked on all sides, by her dean, her classmates and dozens of anonymous bloggers. By the standards of normal, loose social conversation she did nothing deserving of this reaction.

I feel a chilling effect and I've only ever argued against the genetic hypothesis. Frankly, you should too since in your comment you quite clearly imply that you don't know for sure there is no genetic component. My take from the reaction to the email is that the only socially acceptable response to encountering the hypothesis is to shout "RACIST! RACIST!" at the top of your lungs. If you think we'd be spared because we're more deliberate and careful when considering the hypothesis you're kidding yourself.

Replies from: Morendil
comment by Morendil · 2010-05-03T01:39:49.759Z · LW(p) · GW(p)

By the standards of normal, loose social conversation she did nothing deserving of this reaction.

Sure. What I do find disturbing is how, knowing what she was doing (and who she was sending it to), the "friend" who leaked that email went ahead and did it anyway. That's positively Machiavellian, especially six months after the fact.

However, I do not feel a need to censor myself when discussing the race-IQ hypothesis. If intelligence has a genetic component, I want to see the evidence and understand how the evidence rules out alternatives. I would feel comfortable laying out the case for and against in an argument map, more or less as I feel comfortable laying out my current state of uncertainty regarding cryonics in the same format.

Neither do I feel a need to shout at the top of my lungs, but it does seem clear to me that racism was a strong enough factor in human civilization that it is necessary, for the time being, to systematically compensate, even at the risk of over-compensating.

"I absolutely do not rule out the possibility [of X]" can be a less than open-minded, even-handed stance, depending on what X you declare it about. (Consider "I absolutely do not rule of the possibility that I will wake up tomorrow with my left arm replaced by a blue tentacle.") Saying this and mistaking it for an "agnostic" stance is kidding oneself.

Replies from: None
comment by [deleted] · 2010-05-03T16:36:18.968Z · LW(p) · GW(p)

Since people are discussing group differences anyway, I would just like people to be a bit clearer in their phrasing.

Intelligence does have a genetic component. I hope no one argues that the cognitive differences between the average chimpanzee and rhesus monkey are the result of nurture. The question is whether there is any variation in the genetic component among humans.

Studies have shown a high heritability for IQ. This doesn't necessarily mean much of it is genetic, but it does seem a strong position to take, especially considering the results from twin studies. A good alternative explanation I can think of, one that could be considered equivalent in explanatory power, would be differences in prenatal environment beyond those controlled for in previous studies (which could get sticky, since such differences may also show group genetic variation: for example, the average length of pregnancy and the risks associated with post-term complications do vary slightly between races).
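(For reference, one standard and much simplified way heritability is estimated from twin correlations is Falconer's formula; the correlation values below are invented for illustration.)

```python
# Falconer's formula: a simplified textbook estimate of heritability from twin
# correlations. The correlation values are made up for illustration.
r_mz = 0.85  # IQ correlation between identical (monozygotic) twins
r_dz = 0.60  # IQ correlation between fraternal (dizygotic) twins

h2 = 2 * (r_mz - r_dz)   # heritability estimate
c2 = r_mz - h2           # shared (common) environment estimate
e2 = 1 - r_mz            # non-shared environment plus measurement error

print(h2, c2, e2)        # approximately 0.5, 0.35, 0.15 with these illustrative numbers
```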

The question discussed here, however, is whether there are any meaningful differences between human groups regarding their genetic predispositions towards mental faculties.

We know quite a bit from genetic analysis about where people with certain markers have spread and which groups have been isolated. Therefore the real question we face is twofold:

  1. Just how evolutionarily recent are abstract thinking and the other mental tricks the IQ test measures? Consider, for example, the late advent of behavioral modernity compared with the early appearance of anatomically near-modern humans. Some claim it was an evolutionary change following the well-documented recent bottleneck of the human species; others say the advent of modern behaviour was a radical cultural adaptation to an abrupt environmental change, or just part of a long and slow process of rising population density and material-culture complexity that we haven't yet spotted. Considering how sketchy the archaeological record is, we shouldn't be surprised at all if it turns out we've been wrong for decades and modern behaviour isn't recent at all.

  2. Is the selective value of intelligence, compared to other traits, identical in all the environments encountered by Homo sapiens? Remember, we may already have some evidence that intelligence is sometimes not that useful for hominids, depending on how we interpret the fossils of Homo floresiensis. Could this also be true of some Homo sapiens populations?

The answers to these two questions would tell us how likely it is that such differences would appear, and how noticeable they might be, within the time window that current biology estimates was available for differences between populations to arise.

Note: This is from Razib Khan's site (Gene Expression); I'm reposting it here so you don't need to hunt it down in my other comment. http://www.gnxp.com/wp/wp-content/uploads/2010/02/PIIS096098220902065X.gr2_.lrg_.jpg

Replies from: mattnewport
comment by mattnewport · 2010-05-03T18:15:01.064Z · LW(p) · GW(p)

If genetic differences in intelligence could not be relevant to reproductive success within a single generation, it is difficult to see how human intelligence could have evolved.

Replies from: None
comment by [deleted] · 2010-05-04T19:51:08.656Z · LW(p) · GW(p)

Group selection may help you imagine more.

Replies from: mattnewport
comment by mattnewport · 2010-05-04T20:18:59.387Z · LW(p) · GW(p)

Isn't group selection largely discredited?

comment by Tyrrell_McAllister · 2010-05-03T01:14:52.550Z · LW(p) · GW(p)

I don't know if the email you quoted (thanks for that, BTW, it's a treat to have access to a primary source without needing to chase it down) is racist, but it does sound very ignorant to me. It makes unwarranted inferential leaps, e.g. from "skin and hair color are definitely genetic" to "some part of intelligence is genetic", omitting the very different length of developmental chains leading from genes to pigmentation on the one hand, and intelligence on the other.

Let's be careful here. The letter does not assert baldly that "some part of intelligence is genetic". Rather, the letter asserts that some evidence "suggests to me that some part of intelligence is genetic".

Furthermore, that particular inferential leap does not begin with the observation that "skin and hair color are definitely genetic". Rather, the inferential leap begins with the claim that "Women tend to perform less well in math due at least in part to prenatal levels of testosterone, which also account for variations in mathematics performance within genders." Therefore, at least with regards to that particular inference, it is not fair to criticize the author for "omitting the very different length of developmental chains leading from genes to pigmentation on the one hand, and intelligence on the other."

[ETA: Of course, the inference that the author did make is itself open to criticism, just not the criticism that you made.]

I say all this as someone who considers Occam to be pretty firmly on the side of nongenetic explanations for the racial IQ gaps. But no progress in these kinds of discussions is possible without assiduous effort to avoid misrepresenting the other side's reasoning.

comment by JanetK · 2010-05-03T20:09:48.790Z · LW(p) · GW(p)

We have a very long thread about a not very interesting subject, but it has finally made me annoyed. I hope it does not show too much because I accept that being polite is important.

1)Nature or nurture, genetics or environment, is a discredited dichotomy. Both are probably active at the same time for each and every aspect of intelligence (as they would be for any complex trait). Not only is it unlikely to be one or the other, it is unlikely to be x% for one and 1-x% for the other. They are too interwoven, and the patterns of interaction too individual and unique to each person's life, to be thought of as two separate influences. Is it nurture or nature? - wrong question.

2)Intelligence as a measure of the worth of a person is not reasonable. We define the worth of a person legally. We say that all people have equal worth. Their intelligence does not change that definition.

3)Measurement of intelligence in the form of IQ scores is not without problems and always needs to be examined to see if it is likely that factors other than 'g' have biased the results. It is only theoretically valid within a group and not between groups. Each human group should have an average IQ of 100 by definition.

4)Comparisons between groups never apply to individuals and are therefore useless in judging the potential of a particular person. One has to wonder of what use such group comparisons really are.

5)If it is considered a good idea to try to increase the intelligence of individuals in the society, then it is going to be easier and more socially acceptable to use environmental rather than genetic methods. Good nutrition, lack of lead and other poisons, lack of continuous stress, an enriched environment, good education and so on will help all, the smart and the not so smart.

6)Dividing a population into two (or a handful of) racial types is simplistic in societies as genetically and ethnically diverse as North America or Western Europe.

7)Differences in mean IQ that have been reported (for what they are worth given the points above) are so small compared to the natural range of IQ within groups that they are of no value other than in arguments about various stereotypes, pro and con.

Investigating how genes affect the structure and functioning of the brain and investigating how environmental aspects affect the brain are both reasonable science. They are both likely to give results that are valuable. Comparing the IQ of racial groups is plainly no longer science but racial politics. It is hard to credit, in this day and age, that educated people are so ignorant that they would actually believe that the brain is built and operated without genes, or that an organ that functions to perceive and move in the environment is immune to its effects. Give me a break! Anyone who insists that genetics is not important to intelligence has a big ax to grind. Anyone who insists that environment is not important to intelligence also has a big ax to grind. Trust the motives of neither. The PC gang and the racialist/sexist gang are both acting like bullies.

Replies from: Rain
comment by Rain · 2010-05-05T17:22:33.874Z · LW(p) · GW(p)

I completely agree with each of these points. Thank you!

comment by Jack · 2010-05-01T08:47:23.934Z · LW(p) · GW(p)

He who controls the karma controls the world.

Less Wrong dystopian speculative fiction: An excerpt.

JulXutil sat, legs crossed in the lotus position, at the center of the Less Wrong hedonist-utilitarian subreddit. Above him, in a foot-long green oval, was his karma total: 230450036. The subreddit was a giant room with a wooden floor and rice paper walls. In the middle the floor was raised, and then raised again to form a shallow step pyramid with bamboo staircases linking the levels. The subreddit was well lit. Soft light emanated from the rice paper walls as if they were lit from behind and Japanese lanterns hung from the ceiling.

Foot soldiers, users JerelYu and Maxine, stood at the top of each staircase to deal with the newbies who wanted to bother the world famous JulXutil and to spot and downvote trolls before they did much damage. They also kept their eyes out for members of rival factions because while /lw/hedutil was officially public, every Less Wrong user knew this subreddit was Wireheader territory and had been since shortly after Lewis had published his famous Impossibility Proof for Friendliness. The stitched image of an envelope on JulXutil’s right sleeve turned red. He tapped it twice and the dojo disappeared and was replaced by his inbox. He tapped on the new message and its sender appeared before him.

Henry_Neil: Jul, I just heard from my source at Alcor. The procedure was successful. He’s been admitted. It'll go public in the morning.

JulXutil: Exciting, terrifying news. What will happen to his account?

Henry_Neil: It won't go anywhere. But users who haven’t logged in for thirty days don’t get counted when the server computes controlling karma. That leaves his 40% up for grabs.

JulXutil: How much support we end up with will depend on how organized the opposition is. We need full admin powers and enough backing to amend the constitution. Henry, I need you to take care of a few high karma players. They'd interfere with our plans. I’ll tell you whom. It’ll have to be timed just right. Contact me again when you've selected your men.

Henry_Neil: If the Blindsighters have heard the news they'll try the same thing. Your karmic reputation is in danger. Take precautions, stay out of the main subreddits, especially EvPsych. You’ll hear from me soon.

To be continued...

Replies from: Kutta, Thomas, cousin_it
comment by Kutta · 2010-05-01T09:52:56.489Z · LW(p) · GW(p)

This is golden. I demand continuation.

comment by Thomas · 2010-05-01T15:42:03.939Z · LW(p) · GW(p)

It's a real question where the karma system leads. In the long run, we might see quite unexpected and unwanted results. But there is probably no other way to find out than to wait and see where it actually goes. I guess a kind of conformism will prevail, if it hasn't already.

comment by cousin_it · 2010-05-02T13:48:39.581Z · LW(p) · GW(p)

The karma=wireheading angle is wonderful, and I think new.

comment by khafra · 2010-05-03T06:02:01.536Z · LW(p) · GW(p)

Ask A Rationalist--choosing a cryonics provider:

I'm sold on the concept. We live in a world beyond the reach of god; if I want to experience anything beyond my allotted threescore and ten, I need a friendly singularity before my metabolic processes cease; or information-theoretic preservation from that cessation onward.

But when one gets down to brass tacks, the situation becomes murkier. Alcor whole body suspension is nowhere near as cheap as the numbers that get thrown around in discussions on cryonics--if you want to be prepared for senescence as well as accidents, a 20-year payoff on whole life insurance plus Alcor dues runs near $200/month; painful but not impossible for me.

The other primary option, Cryonics Institute, is 1/5th the price; but the future availability--even at additional cost--of timely suspension is called into question by their own site.

Alcor shares case reports, but no numbers for average time between death and deep freeze, which seems to stymie any easy comparison on effectiveness. I have little experience reading balance sheets, but both companies seem reasonably stable. What's a prospective immortal on a budget to do?

Replies from: Jack, matt, ata
comment by Jack · 2010-05-03T06:27:59.545Z · LW(p) · GW(p)

Alcor whole body suspension

Why not save some money and lose what's below the neck?

Replies from: khafra
comment by khafra · 2010-05-03T11:38:48.306Z · LW(p) · GW(p)

That saves about half the life insurance cost while leaving the Alcor dues the same, dropping the cost from ~$200/month to ~$140/month. This doesn't make it a clearly preferable option to me.

Replies from: None
comment by [deleted] · 2010-05-03T15:44:22.979Z · LW(p) · GW(p)

If I recall correctly, preservation of the brain is supposed to be easier, and on average of better quality, with the decapitated option (I know I'm using the uncool term) than with the whole-body option.

comment by matt · 2010-05-04T23:21:42.044Z · LW(p) · GW(p)

For what it's worth, I've been finding Alcor to be bureaucratic and very slow to respond. I've been trying to sign up (from Australia) for several months and am not over the line yet.

comment by ata · 2010-05-03T06:18:43.125Z · LW(p) · GW(p)

I second this query. I've been meaning to post something similar.

comment by Alexandros · 2010-05-12T06:52:15.026Z · LW(p) · GW(p)

I have an idea that may create a (small) revenue stream for LW/SIAI. There are a lot of book recommendations, with links to Amazon, going around on LW, and many of them do not use an affiliate code. Having a script add a LessWrong affiliate code to those links that don't already have one may lead to some income, especially given that affiliate codes persist and may get credited for unrelated purchases later in the day.

I believe Posterous did this, and there was a minor PR hubbub about it, but the main issue was that they did not communicate the change properly (or at all). Also, given that LW/SIAI are not-for-profit endeavours, this is much easier to swallow. In fact, if it can be done in an easy-to-implement way, I think quite a few members with popular blogs may be tempted to apply this modification to their own blogs.
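
In case it helps make the idea concrete, here is a minimal sketch of the kind of rewrite such a script could apply (the tag value and function name are my own placeholders, not an existing LW feature); it only touches Amazon links and leaves any existing affiliate code alone:

```python
# Sketch: add an affiliate tag to Amazon links that lack one.
# "lesswrong-20" is a made-up placeholder affiliate ID.
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

AFFILIATE_TAG = "lesswrong-20"  # hypothetical affiliate ID

def add_affiliate_tag(url: str) -> str:
    parts = urlparse(url)
    if "amazon." not in parts.netloc:
        return url                      # only touch Amazon links
    query = parse_qs(parts.query)
    if "tag" in query:
        return url                      # respect existing affiliate codes
    query["tag"] = [AFFILIATE_TAG]
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

print(add_affiliate_tag("http://www.amazon.com/dp/0385533853"))
# -> http://www.amazon.com/dp/0385533853?tag=lesswrong-20
```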

Does this sound viable?

Replies from: RobinZ
comment by RobinZ · 2010-05-12T11:24:50.058Z · LW(p) · GW(p)

Yes, under two conditions:

  1. It is announced in advance and properly implemented.

  2. It does not delete other affiliate codes if links are posted with affiliate codes.

Breaking both these rules is one of the many things which Livejournal has done wrong in the last few years, which is why I mention them.

comment by gwern · 2010-05-04T21:17:37.051Z · LW(p) · GW(p)

I have a (short) essay, 'Drug heuristics' in which I take a crack at combining Bostrom's evolutionary heuristics and nootropics - both topics I consider to be quite LW-germane but underdiscussed.

I'm not sure, though, that it's worth pursuing in any greater depth and would appreciate feedback.

Replies from: Metacognition, jimmy, NancyLebovitz
comment by Metacognition · 2010-05-05T19:07:14.784Z · LW(p) · GW(p)

Interesting essay.

comment by jimmy · 2010-05-04T23:06:58.458Z · LW(p) · GW(p)

I'd like to see this pursued further. In particular, I'd like to hear your thoughts on modafinil.

JustinShovelain's post on caffeine was similar, and upvoted.

Replies from: gwern, gwern
comment by gwern · 2011-07-18T01:15:52.997Z · LW(p) · GW(p)

Modafinil is now done: http://www.gwern.net/Drug%20heuristics#modafinil

Replies from: wedrifid
comment by wedrifid · 2011-07-18T02:53:28.658Z · LW(p) · GW(p)

As of the time I reply there is nothing about modafinil on that page.

Replies from: gwern
comment by gwern · 2011-07-18T03:10:44.258Z · LW(p) · GW(p)

I use aggressive caching settings on gwern.net since most of the content doesn't change very often. Force-refresh, and you'll see it.

comment by gwern · 2010-05-05T19:21:51.338Z · LW(p) · GW(p)

Anything besides modafinil? In part I'm stuck because I don't know what else to discuss; Justin's post was similarly short, but it was mainly made of links.

comment by NancyLebovitz · 2010-05-04T21:50:07.726Z · LW(p) · GW(p)

I'd like to see it pursued further. Where does alcohol fit in your schema?

Replies from: gwern, jimmy
comment by gwern · 2010-05-04T22:19:20.243Z · LW(p) · GW(p)

I don't know terribly much about alcohol, so take this with a grain of salt.

I think I would probably put it as an out-of-date adaptation; my understanding is that alcoholic beverages would have been extremely energy-rich, and also hard to come by, and so alcohol is in the same category as sugars and fats - they are now bad for us though they used to be good. ('Superstimulus', I think, is the term.)

Given that, it's more harmful than helpful and to be avoided.

I'll admit that the issue of resveratrol confuses me. But assuming that it has any beneficial effect in humans, AFAIK one should be able to get it just by drinking grape juice - resveratrol is not created in the fermentation process.

Replies from: CronoDAS
comment by CronoDAS · 2010-05-07T00:56:52.490Z · LW(p) · GW(p)

Fermented beverages also had the advantage of usually being free of dangerous bacteria; ethanol is an antiseptic that kills the bacteria that cause most water-borne diseases. (And water-borne disease used to be very common.)

Replies from: gwern
comment by gwern · 2010-05-07T14:04:32.861Z · LW(p) · GW(p)

That's a good second way in which it's an out-of-date optimization.

comment by jimmy · 2010-05-04T23:10:28.474Z · LW(p) · GW(p)

You might find this paper interesting.

In a sentence, it suggests that people drink to signal trustworthiness.

comment by [deleted] · 2010-05-02T00:55:13.119Z · LW(p) · GW(p)

Today, while I was attending an honors banquet, a girl in my class and her boyfriend were arguing over whether or not black was a color. When she had somewhat convinced him that it wasn't (I say somewhat because the argument was more-or-less ending and he didn't have a rebuttal), I asked "Wait, are you saying I can't paint with black paint?" She conceded that, of course, black paint can be used to paint with, but that black wasn't technically a color. At which point I explained that we were likely using two different definitions of color, and that we should explain what we mean. I gave two definitions: 1] The various shades which the human eye sees and the brain processes. 2] The specific wavelengths of light that the human eye can pick up. The boyfriend and I were using definition 1, whereas she was using definition 2. And with that cleared up, the debate ended.

Note: Both definitions aren't word for word, but somewhat close. I was simply making the distinction between the wavelength itself and the process of seeing something and placing it in a certain color category.

Replies from: cousin_it, sketerpot, Liron, zero_call
comment by cousin_it · 2010-05-02T13:42:43.249Z · LW(p) · GW(p)

One could argue that definition 2 is Just Wrong, because it implies that purple isn't a color (purple doesn't have a wavelength, it is non-spectral).

comment by sketerpot · 2010-05-03T19:44:45.065Z · LW(p) · GW(p)

By her definition, the yellow color you see on a computer screen is not a color at all, since it's made up of two wavelengths of light which happen to stimulate the red and green cone cells in your retina in approximately the same way that yellow light would.

comment by Liron · 2010-05-03T03:18:18.138Z · LW(p) · GW(p)

This will replace Eliezer's tree falling in a forest sound as my go-to example of how an algorithm feels on the inside about wrong questions.

comment by zero_call · 2010-05-02T03:56:34.029Z · LW(p) · GW(p)

Huzzah! That's all too common a problem... sometimes the main problem...

comment by JamesPfeiffer · 2010-05-05T05:38:32.707Z · LW(p) · GW(p)

I noticed something recently which might be a positive aspect of akrasia, and a reason for its existence.

Background: I am generally bad at getting things done. For instance, I might put off paying a bill for a long time, which seems strange considering the whole process would take < 5 minutes.

A while back, I read about a solution: when you happen to remember a small task, if you are capable of doing it right then, then do it right then. I found this easy to follow, and quickly got a lot better at keeping up with small things.

A week or two into it, I thought of something evil to do, and following my pattern, quickly did it. Within a few minutes, I regretted it and thankfully, was able to undo it. But it scared me, and I discontinued my habit.

I'm not sure how general a conclusion I can draw from this; perhaps I am unusually prone to these mistakes. But since then I've considered akrasia as a sort of warning: "Some part of you doesn't want to do this. How about doing something else?"

Now when the part of you protesting is the non-exercising part or the ice-cream eating part, then akrasia isn't being helpful. But... it's worth listening to that feeling and seeing why you are avoiding the action.

Replies from: NancyLebovitz, Morendil, Leafy, bogdanb
comment by NancyLebovitz · 2010-05-05T10:52:36.740Z · LW(p) · GW(p)

The most extreme example is depressed people having an increased risk of suicide if an antidepressant lifts their akrasia before it improves their mood.

Replies from: MineCanary, Nisan
comment by MineCanary · 2010-05-14T17:17:50.713Z · LW(p) · GW(p)

I've also read that people with bipolar disorder are more likely to commit suicide as their depression lifts.

But antidepressant effects can be very complicated. I know someone who says one med made her really really want to sleep with her feet where her head normally went. I once reacted to an antidepressant by spending three days cycling through the thoughts, "I should cut off a finger" (I explained to myself why that was a bad idea) "I should cut off a toe" (ditto) "I should cut all the flesh from my ribs" (explain myself out of it again), then back to the start.

The akrasia-lifting explanation certainly seems plausible to me (although "mood" may not be the other relevant variable--it may be worldview and plans; I've never attempted suicide, but certainly when I've self-harmed or sabotaged my own life it's often been on "autopilot", carrying out something I've been thinking about a lot, not directly related to mood--mood and beliefs are related, but I've noticed a lag between one changing and the other changing to catch up to it; someone might no longer be severely depressed but still believe that killing themself is a good course of action). Still, I would also believe an explanation that certain meds cause suicidal impulses in some people, just as they can cause other weird impulses.

Replies from: CronoDAS
comment by CronoDAS · 2010-05-14T17:29:34.284Z · LW(p) · GW(p)

My antidepressant gave me a sweet tooth.

comment by Nisan · 2010-05-10T15:35:50.184Z · LW(p) · GW(p)

Interesting. Are you sure that is going on when antidepressants have paradoxical effects?

Replies from: NancyLebovitz, CronoDAS
comment by NancyLebovitz · 2010-05-10T16:36:33.768Z · LW(p) · GW(p)

Not absolutely certain. It's an impression I've picked up from mass media accounts, and it seems reasonable to me.

It would be good to have both more science and more personal accounts.

Thanks for asking.

comment by CronoDAS · 2010-05-14T17:34:15.644Z · LW(p) · GW(p)

My mom is a psychiatrist, and she's given an explanation basically equivalent to that one - that people with very severe depression don't have the "energy" to do anything at all, including taking action to kill themselves, and that when they start taking medication, they get their energy back and are able to act on their plans.

comment by Morendil · 2010-05-05T08:46:10.286Z · LW(p) · GW(p)

Good observations.

Sometimes I procrastinate for weeks about doing something, generally non-urgent, only to have something happen that would have made the doing of it unnecessary. (For instance, I procrastinate about getting train tickets for a short trip to visit a client, and the day before the visit is due the client rings me to call it off.)

The useful notion here is that it generally pays to defer action or decision until "the last responsible moment"; it is the consequence of applying the theory of options valuation, specifically real options, to everyday decisions.

A top-level post about this would probably be relevant to the LW readership, as real options are a non-trivial instance of a procedure for decision under uncertainty. I'm not entirely sure I'm qualified to write it, but if no one else steps up I'll volunteer to do the research and write it up.

Replies from: ig0r, cousin_it
comment by ig0r · 2010-05-08T17:07:58.318Z · LW(p) · GW(p)

I work in finance (trading) and go through my daily life quantifying everything in terms of EV.

I would just caution that, yes, procrastinating provides you with some real option value as you mentioned, but you need to weigh this against the probability of you exercising that option value, as well as the other obvious costs of delaying the task.

Certain tasks are inherently valuable to delay as long as possible and can be identified as such beforehand. As an example, work-related emails that require me to make a decision or choice I put off as long as is politely possible, in case new information comes in which would influence my decision.

On the other hand, certain tasks can be identified as possessing little or no option value when weighted with the appropriate probabilities. What is the probability that delaying the payment of your cable bill will have value to you? Perhaps if you experience an emergency cash crunch. Or the off chance that your cable stops working and you decide to try to withhold payment (not that this will necessarily do you any good).
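
To make that weighing concrete, here's a toy calculation with made-up numbers - buying a train ticket now versus waiting until the last responsible moment when there's some chance the trip gets called off:

```python
# Toy expected-cost comparison: buy a non-refundable ticket now vs. wait.
# All figures are invented for illustration.
ticket_price = 60.0     # price whenever you buy; sunk if you buy now
late_surcharge = 10.0   # extra cost if you wait and the trip does happen
p_cancel = 0.20         # chance the client calls the visit off

cost_buy_now = ticket_price                                # paid either way
cost_wait = (1 - p_cancel) * (ticket_price + late_surcharge)

print(f"buy now: {cost_buy_now:.2f}, wait: {cost_wait:.2f}")
# buy now: 60.00, wait: 56.00 -- here the option value of waiting wins,
# but a higher surcharge or a lower cancellation probability flips it.
```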

comment by cousin_it · 2010-05-07T10:35:31.231Z · LW(p) · GW(p)

I'd be interested in reading it.

comment by Leafy · 2010-05-06T08:05:49.088Z · LW(p) · GW(p)

Continuing on the "last responsible moment" comment from one of the other responders - would it not be helpful to consider the putting off of a task until the last moment as an attempt to gather the largest amount of information pursuant to the task without incurring any penalty?

Having poor focus and attention span I use an online todo-list for work and home life where I list every task as soon as I think of it, whether it is to be done within the next hour or year. The list soon mounts up, occasionally causing me anxiety, and I regularly have cause to carry a task over to the next day for weeks at a time - but what I have found is that a large number of tasks get removed because a change makes the task no longer necessary, and a small proportion get notes added to them while they stay on the list, so that by the time the task gets actioned it has been enhanced by the extra information.

By having everything captured I can be sure no task will be lost, but by procrastinating I can ensure the highest level of efficiency in the tasks that I do eventually perform.

Thoughts?

comment by bogdanb · 2010-05-05T12:34:36.353Z · LW(p) · GW(p)

I suspect it’s just a figure of speech, but can you elaborate on what you meant by “evil” above?

comment by MartinB · 2010-05-01T13:03:29.013Z · LW(p) · GW(p)

Question: Which strongly held opinion did you change in a notable way, since learning more about rationality/thinking/biases?

Replies from: None, Matt_Simpson, JoshuaZ, gelisam, MartinB
comment by [deleted] · 2010-05-02T12:56:37.881Z · LW(p) · GW(p)

Theism. Couldn't keep it. In the end, it wasn't so much that the evidence was good -- it had always been good -- as that I lost the conviction that "holding out" or "staying strong" against atheism was a virtue.

Standard liberal politics, of the sort that involved designing a utopia and giving it to people who didn't want it. I had to learn, by hearing stories, some of them terrible, that you have no choice but to respect and listen to other people, if you want to avoid hurting them in ways you really don't want to hurt them.

Replies from: Liron, gelisam
comment by Liron · 2010-05-03T03:21:19.212Z · LW(p) · GW(p)

I just listened to UC Berkeley's "Physics for Future Presidents" course on iTunes U (highly recommended) and I thought, "Surely no one can take theism seriously after experiencing what it's like to have real knowledge about the universe."

Replies from: MartinB, Tyrrell_McAllister
comment by MartinB · 2010-05-03T15:48:27.289Z · LW(p) · GW(p)

Disagreed. My current opinion is that you can be a theist and combine that with pretty much any other knowledge. Eliezer points to Robert Aumann as an example. For someone who has theism hardcoded into their brain and treats it as a different kind of knowledge than physics, there can be virtually no visible difference in everyday life from a normal atheist. I think the problem is not so much the theism itself, but that people base decisions on it.

Replies from: None
comment by [deleted] · 2010-05-03T16:01:07.740Z · LW(p) · GW(p)

Oh, it's true. I know deeply religious scientists. Some of them are great scientists. Let's not get unduly snide about this.

comment by Tyrrell_McAllister · 2010-05-03T19:12:19.242Z · LW(p) · GW(p)

There seems to be a common thought-pattern among intelligent theists. When they learn a lot about the physics of the Universe, they don't think "I should only be satisfied with beliefs in things that I understand in this deep way." Instead, they think, "As smart as I am, I have only this dim understanding of the universe. Imagine how smart I would have to be to create it! Truly, God is wonderful beyond comprehension."

Replies from: RobinZ
comment by RobinZ · 2010-05-03T19:20:34.467Z · LW(p) · GW(p)

[...] Instead, they think, "As smart as I am, I have only this dim understanding of the universe. Imagine how smart I would have to be to create it! Truly, God is wonderful beyond comprehension."

"Wonderful" I could believe, but I don't think John Horton Conway is actually wonderful beyond comprehension. To make an analogy.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-05-03T19:26:46.624Z · LW(p) · GW(p)

"Wonderful" I could believe, but I don't think John Horton Conway is actually wonderful beyond comprehension. To make an analogy.

If Conway used the Turing-completeness of Life to create within it a universe like our own, he would be wonderful beyond my comprehension :).

Replies from: RobinZ
comment by RobinZ · 2010-05-03T19:30:13.160Z · LW(p) · GW(p)

If Flatland would do, he could do it 'naturally' given enough scale and time. (:

comment by gelisam · 2010-05-02T15:46:21.664Z · LW(p) · GW(p)

Could you link some of these stories, please? I am known to entertain utopian ideas from time to time, but if utopias really do hurt people, then I'd rather believe that they hurt people.

Replies from: None, Matt_Simpson, xamdam
comment by [deleted] · 2010-05-02T15:49:27.311Z · LW(p) · GW(p)

Personal stories, from a friend, so no, there's no place to link them. Well-meaning liberals have either hurt, or failed to help, him and people close to him.

comment by Matt_Simpson · 2010-05-02T16:48:33.851Z · LW(p) · GW(p)

Communism is one utopia that ended in disaster, see Rummel's Death by Government

comment by xamdam · 2010-05-02T16:23:13.917Z · LW(p) · GW(p)

I recommend reading Blank Slate to get a good perspective on the Utopian issues; the examples (I was born in USSR) are trivial to come by, but the book will give you a mental framework to deal with the issues.

comment by Matt_Simpson · 2010-05-01T20:51:36.512Z · LW(p) · GW(p)

I'm no longer a propertarian/Lockean/natural rights libertarian. Learning about rationality essentially made me feel comfortable letting go of a position that I honestly didn't have a good argument for (and I knew it). The ev-psych stuff scared the living hell out of me (and the libertarianism* apparently).

*At least that sort of libertarianism

comment by JoshuaZ · 2010-05-19T03:29:31.884Z · LW(p) · GW(p)

I stopped being a theist a few years ago. That was due more to what Less Wrong people would call "traditional rationalism" than the sort often advocated here (I actually identify as closer to a traditional rationalist than a strict Bayesian, but I suspect that the level of disagreement is smaller than Eliezer makes it out to be). And part of this was certainly also an emotional reaction to having the theodicy problem thrown in my face, rather than direct logic.

One major update that occurred when I first took intro psych was realizing how profoundly irrational the default human thinking processes were. Before then, my general attitude was very close to humans as the rational animal. I'm not sure how relevant that is, since that's saying something like "learning about biases taught me that we are biased." I don't know if that's very helpful.

My political views have updated a lot on a variety of different issues. But I suspect that some of those are due to spending time with people who have those views rather than actually getting relevant evidence.

I've updated on how dangerous extreme theism is. It may sound strange, but this arose not so much out of things like terrorism as out of becoming more aware of how many strongly held beliefs about the nature of the world are out there that are motivated by religion and utterly at odds with reality. This was not about evolution, which even in my religious phases I understood and was annoyed to see religious compatriots fail to understand. Rather, this has included geocentrism among the Abrahamic religions, flat-Earthism among some Islamic extremists, spontaneous generation among ultra-Orthodox Jews (no, really. Not a joke. And not even microscopic spontaneous generation but spontaneous generation of mice), and the belief among some ultra-Orthodox Jews that the kidneys are the source of moral guidance (which they use as an argument against kidney transplants).

My three most recent major updates (last six months or so) are: 1) Thinking that cryonics has a substantial success probability (although I still think it is very low). This came not from actually learning more about rationality, but rather from going back, after reading some of the stuff here, and trying to find out more about cryonics. Learning that the ice formation problem is close to completely solved substantially changed my attitude. 2) Deciding that there's a high chance that we'll have space elevators before we have practical fusion power. (This is a less trivial observation than one might think, since once one has a decent space elevator it becomes pretty cheap to put up solar power satellites.) This is to some extent a reevaluation based primarily on time-frames given by relevant experts. 3) Deciding that there's a substantial chance that P=NP may be undecidable in ZFC. This update occurred because I was reading about how complexity results can be connected to the provability of certain classes of statements in weakened forms of the Peano axioms. That makes it sound like P=NP might potentially be in a class of problems that have decent reasons for being undecidable.

Replies from: MartinB
comment by MartinB · 2010-05-19T09:08:56.562Z · LW(p) · GW(p)

I'm not sure how relevant that is, since that's saying something like "learning about biases taught me that we are biased." I don't know if that's very helpful.

It is! I am repeatedly surprised by a) basic-level insights that are not widespread, b) insights that other people consider basic that I do not have, and c) applications of an idea I understand to an area I did not think of applying it to.

To list a few: People are biased => I am biased! Change is possible. Understanding is possible. I am a brain in a vat. Real life rocks :-)

Even after learning about cached thoughts, happy death spirals and many others, I still managed to fall into those traps.

So I consider it helpful to see where someone applies biases.

My political views have updated a lot on a variety of different issues. But I suspect that some of those are due to spending time with people who have those views rather than actually getting relevant evidence.

That statement in itself looks like a warning sign.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-19T14:13:51.021Z · LW(p) · GW(p)

That statement in itself looks like a warning sign.

Yeah, being aware that there are biases at play doesn't always mean I'm at all sure I'm able to correct for all of them. The problem is made more complicated by the fact that for each of the views in questions, I can point to new information leading to the updates. But I don't know if in general that's the actual cause of the updates.

comment by gelisam · 2010-05-01T13:35:45.155Z · LW(p) · GW(p)

I started to believe in the Big Bang here. I was convinced by the evidence, but as this comment indicates, not by the strongest evidence I was given; rather, it was necessary to contradict the specific reasoning I used to disbelieve the Big Bang in the first place.

Is this typical? I think it would be very helpful if, in addition to stating which opinion you have changed, you stated whether the evidence convinced you because it was strong or because it broke the chain of thought which led to your pre-change opinion.

comment by MartinB · 2010-05-01T21:09:54.749Z · LW(p) · GW(p)

To answer my own question:

  • changed political and economic views (similar to Matt).

  • changed views on the effects of nutrition and activity on health (including the actions that follow from that)

  • changed view on the dangers of GMO (yet again)

  • I became aware of areas where I am very ignorant of opposing arguments, and try to counterbalance

  • I finally understand the criticisms of the skeptics movement

  • I repeatedly underestimated the amount of ignorance in the world, and got shocked when discovering that

And on the funnier side: last week I found out that I had learned a minor physics fact wrong. It was not a strongly held opinion, just a fact I never looked up again till now. For some reason I was always convinced that the volume increase in freshly frozen water is 10x, while it's actually more like 9%.

Replies from: Vladimir_Golovin, mattnewport, TraditionalRationali, Jack
comment by Vladimir_Golovin · 2010-05-02T09:15:40.095Z · LW(p) · GW(p)

As a result of reading this post, I uninstalled a 10-year-old habit -- drinking a cup of strong coffee every morning. Now I drink coffee only when I feel that I need a short-term boost.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-05-02T10:10:18.377Z · LW(p) · GW(p)

Coffee and concentration experiment

Article about self-measurement

A few months ago, Barooah began to wean himself from coffee. His method was precise. He made a large cup of coffee and removed 20 milliliters weekly. This went on for more than four months, until barely a sip remained in the cup. He drank it and called himself cured. Unlike his previous attempts to quit, this time there were no headaches, no extreme cravings. Still, he was tempted, and on Oct. 12 last year, while distracted at his desk, he told himself that he could probably concentrate better if he had a cup. Coffee may have been bad for his health, he thought, but perhaps it was good for his concentration.

Barooah wasn’t about to try to answer a question like this with guesswork. He had a good data set that showed how many minutes he spent each day in focused work. With this, he could do an objective analysis. Barooah made a chart with dates on the bottom and his work time along the side. Running down the middle was a big black line labeled “Stopped drinking coffee.” On the left side of the line, low spikes and narrow columns. On the right side, high spikes and thick columns. The data had delivered their verdict, and coffee lost.

This doesn't mean you don't get a boost, but it might be worth checking.

Replies from: Vladimir_Golovin
comment by Vladimir_Golovin · 2010-05-02T11:55:04.769Z · LW(p) · GW(p)

My experience is quite similar to what is described in the first article -- no coffee leads to better concentration for me. The caffeine 'boost' I was talking about reduces my concentration but makes me more inclined to action -- I found it useful for breaking through procrastination periods. The effect of Red Bull on me is similar but more pronounced.

The effect seems to be physical, but I don't rule out placebo (and frankly, it's fine with me either way.)

comment by mattnewport · 2010-05-01T21:20:49.247Z · LW(p) · GW(p)

For some reason I was always convinced that the volume increase in freshly frozen water is 10x, while it's actually more like 9%.

Have you never made ice cubes?

comment by TraditionalRationali · 2010-05-19T02:43:00.046Z · LW(p) · GW(p)

Very interesting. If you find time, could you elaborate on these? I am particularly interested in hearing more on the criticism of the skeptics movement.

Replies from: MartinB
comment by MartinB · 2010-05-19T08:56:41.290Z · LW(p) · GW(p)

I think it was mentioned here before. Skeptics do a decent job of raising the sanity waterline, and that's all nice and important.

I watched all of Randi's YouTube videos, Penn & Teller's Bullshit!, Dawkins, Derren Brown and whatever else looked interesting. But as some keep pointing out, Randi is not a scientist! He talks about stuff that should be obvious to elementary school kids. P&T get stuff wrong on their show (I have identified 2 topics so far), and they use a style of edutainment that might make you think a bit, or move in-groups. But you don't learn more about reasoning from it. I am not sure, but you might be able to compare it to any standard theist shoutout show. (To be fair, they generally do a decent job of representing opposing views, but they might have learned some tricks from a certain Michael Moore.)

All those skeptics push saner beliefs into the public and make it cool to hold those beliefs in their respective subculture. As a fellow rationalist I sometimes feel smug listening to them. But telling me stuff I already know is not too effective, and I don't have any indicators of whether they reach a target audience where an opinion shift is really needed.

And: skeptics are not particularly rational. (I don't think they are even into the concept of learning more about thought processes or how science works.)

When you spend your time battling idiots you might not notice when you are wrong yourself.

Find a skeptic who will listen to your criticism of the traditional scientific method, and/or to how awesome Bayesianism is :-)

On a personal note: there is a distinct line of highly accidental circumstances that led me to become involved in this particular group here. Each step involved people I learned from who knew more than my general surroundings. But each of those people got stuck at their personal level of thought (and field of interest, respectively), and didn't follow me any further. Becoming an atheist and reading sceptics' stuff was one of those steps, but I am very glad I didn't get stuck there. I gave a few lectures on scepticism and atheism to one of my peer groups, was highly surprised how difficult it is to bring the most basic points across, and have now basically given up on that and concentrate on my own education.

comment by Jack · 2010-05-01T21:31:46.468Z · LW(p) · GW(p)

For some reason I was always convinced that the volume increase in freshly frozen water is 10x, while it's actually more like 9%.

Not to hit you over the head with this, as I've noticed before how common it is that someone learns a random fact or two much later than they should. But, you never, say, made frozen popsicles? I mean a whole lot of havoc would get wreaked... imagine frozen pipes... water in cracks in the road...

Related to this subject, my sister was 14 before someone corrected her belief that "North" on a map corresponded to the sky above her head (which if you think about it is the intuitive interpretation when maps are placed vertically on classroom walls).

Replies from: MartinB
comment by MartinB · 2010-05-01T21:44:26.431Z · LW(p) · GW(p)

Both numbers serve as an explanation for why pipes crack. I never did any visualization of it. (It's not that uncommon for people to have inconsistent beliefs.) IIRC I read that fact in the Mickey Mouse magazine at the appropriate age, but never tried it myself.

Since reading about memory biases I am deeply afraid of having false or corrupted memories, while also wanting to experience such an effect. Finding minor mistakes in my knowledge of physics is similarly disturbing. The content of the example itself doesn't really change anything about my life, but I am left wondering how many other mistakes I carry around.

Replies from: mattnewport, MartinB, Jack
comment by mattnewport · 2010-05-01T21:54:31.306Z · LW(p) · GW(p)

Do you have any scientific/engineering training? A habit I note that people with such training tend to develop is to do a little mental arithmetic when confronted with some new numerical 'fact' and do some basic sanity checking against their existing beliefs. I often find when I am reading a news story that I notice some inconsistency in the numbers presented (something as simple as percentages for supposedly mutually exclusive things adding up to more than 100 for example) that I am amazed slipped past both the writer and the editor. The fact that most journalists lack any real scientific or engineering training is probably the reason for this. This ice 'fact' should have been immediately obviously wrong to someone applying this habit.

It's perfectly understandable if this is just one of those things you picked up as a child and never had any cause to examine but it is indicative of a common failing and I would suggest that as a rule developing this 'engineering mindset' is valuable for any aspiring rationalist regardless of whether their job involves the routine application of such habits.

Replies from: MartinB
comment by MartinB · 2010-05-01T22:09:19.839Z · LW(p) · GW(p)

I am in the final stages of becoming a computer scientist, so: 'no'.

In school I had physics as one of my main subjects. I don't think I saw any actual science training anywhere in my education, but that might be due to my own ignorance.

I still do not do math as often as I should, but sometimes.

What might have contributed to sustaining the mistake is my very early knowledge of how badly intuition judges scaling volumes.

I should really milk this mistake for systematic causes....

Replies from: mattnewport
comment by mattnewport · 2010-05-01T22:14:33.097Z · LW(p) · GW(p)

In school I had physics as one of the depend subjects. I don't think I saw any actual science training anywhere in my education. But that might be due to my own ignorance.

Unfortunately this is not something that is generally taught well in high school science classes even though it would be of much more practical use to most students than what they are actually being taught. It is conveyed better in university science courses that have a strong experimental component and in engineering courses.

Replies from: MartinB
comment by MartinB · 2010-05-01T22:35:41.601Z · LW(p) · GW(p)

It might not be too surprising that I totally agree.

In CS we don't do that much experimentation. And I have some beef with the lack of teaching good ways to actually make software. I don't think the words 'version control' were ever uttered anywhere.

comment by MartinB · 2010-05-01T21:52:38.356Z · LW(p) · GW(p)

Additional side note: I am deeply troubled by the fact that all of the important things in my life happened by pure accident. I am generally happy with the development of the ideas I hold true and dear so far, but wouldn't have minded some shortcuts. There is no clear-cut path that has me ending up in the place I would want to be in, and I do not see anything systematic I can do about that. I don't 'choose' to become a rationalist or not; instead I get sucked in by interesting articles that carry ideas I find pleasant. But it would have been equally likely that I spent the weeks I initially spent reading OB/LW on TVTropes instead. I recently checked an atheist board for good recommendations on rational thought (considering that my path down to science started with the reasoned-atheism bit) and was shocked by the lack of anything that resembled even a reasonable recommendation.

I don't like accidental developments.

Replies from: orthonormal
comment by orthonormal · 2010-05-06T00:13:46.337Z · LW(p) · GW(p)

Just because you weren't aware of any conscious reasoning behind your choices doesn't imply that they were fully accidents. The mind manages some very important things subconsciously, especially in emotionally charged domains where explicit awareness of motivations might hurt someone else's feelings or one's own self-image.

comment by cwillu · 2010-05-01T21:21:31.245Z · LW(p) · GW(p)

Has anybody considered starting a folding@home team for lesswrong? Seems like it would be a fairly cheap way of increasing our visibility.

After a brief 10 word discussion on #lesswrong, I've made a lesswrong team :p

Our team number is 186453; enter this into the folding@home client, and your completed work units will be credited.

Replies from: nhamann, Jack, Jack
comment by nhamann · 2010-05-02T00:55:07.579Z · LW(p) · GW(p)

Does anyone know the relative merits of folding@home and rosetta@home, which I currently run? I don't understand enough of the science involved to compare them, yet I would like to contribute to the project which is likely to be more important. I found this page, which explains the differences between the projects (and has some information about other distributed computing projects), but I'm still not sure what to think about which project I should prefer to run.

Replies from: MichaelGR
comment by MichaelGR · 2010-05-03T18:14:22.545Z · LW(p) · GW(p)

Personally I run Rosetta@home because, based on my research, it could be more useful for designing new proteins and computationally predicting the function of proteins. Folding@home seems to be more about understanding how proteins fold, which can help with some diseases, but isn't nearly as game-changing as in silico design and shape prediction would be.

I also think that the SENS Foundation (Aubrey de Grey & co) have some ties to Rosetta, and might use it in the future to design some proteins.

I'm a member of the Lifeboat Foundation team: http://lifeboat.com/ex/rosetta.home

But we could also create a Less Wrong team if there's enough interest.

comment by Jack · 2010-05-01T22:03:41.600Z · LW(p) · GW(p)

So I think I have it working but... there's nothing to tell me if my CPU is actually doing any work. It says it's running but... is there supposed to be something else? I used to do SETI@home back in the day and they had some nice feedback that made you feel like you were actually doing something (of course, you weren't, because your computer was looking for non-existent signals, but still).

Replies from: zero_call, cwillu
comment by zero_call · 2010-05-02T00:49:27.332Z · LW(p) · GW(p)

...of course, you weren't because your computer was looking for non-existent signals...

The existence of ET signals is an open question. SETI is a fully legitimate organization run according to a well-thought-out plan for collecting data to help answer this question.

Replies from: Jack
comment by Jack · 2010-05-02T01:00:19.973Z · LW(p) · GW(p)

I think the probability they ever find what they're looking for is extraordinarily low. But I don't have anything against the organization.

Replies from: zero_call
comment by zero_call · 2010-05-02T01:14:29.029Z · LW(p) · GW(p)

Right on, but just so you know, other (highly informed) people think that we may find a signal by 2027, so there you go. For an excellent short article (explaining this prediction), see here.

Replies from: Jack
comment by Jack · 2010-05-02T01:55:41.838Z · LW(p) · GW(p)

I don't think the author deals with the Fermi paradox very well, and the paradox is basically my reason for assigning a low probability to SETI finding something.

Replies from: zero_call
comment by zero_call · 2010-05-02T02:08:13.615Z · LW(p) · GW(p)

The Fermi paradox also struck me as a big issue when I first looked into these ideas, but now it doesn't bother me so much. Maybe this should be the subject of another open thread.

comment by cwillu · 2010-05-01T22:28:25.669Z · LW(p) · GW(p)

I use the origami client manager thingie; it handles deploying the folding client, and gives a nice progress meter. The 'normal' clients should have similar information available (I'd expect that origami is just polling the clients themselves).

comment by Jack · 2010-05-01T21:35:53.342Z · LW(p) · GW(p)

What is this?

Replies from: MichaelGR, Rain
comment by MichaelGR · 2010-05-03T18:16:18.632Z · LW(p) · GW(p)

I wrote a quick introduction to distributed computing a while ago:

http://michaelgr.com/distributed-computing/

My favorite project (the one which I think could benefit humanity the most) is Rosetta@home.

comment by Rain · 2010-05-01T21:41:21.657Z · LW(p) · GW(p)

Donating money to scientific organizations (in the form of a larger power bill). You run your CPU (otherwise idle) to crunch difficult, highly parallel problems like protein folding.

Replies from: cwillu
comment by cwillu · 2010-05-01T22:16:47.773Z · LW(p) · GW(p)

Granted that in many cases, it's donating money that you were otherwise going to burn.

Replies from: mattnewport
comment by mattnewport · 2010-05-01T22:23:57.037Z · LW(p) · GW(p)

Granted that in many cases, it's donating money that you were otherwise going to burn.

No, modern CPUs use considerably less power when they are idle. A computer running folding at home will be drawing more power than if it were not.

Replies from: rwallace, cwillu
comment by rwallace · 2010-05-01T23:56:47.447Z · LW(p) · GW(p)

But you've already paid for the hardware, you've already paid for the power to run the CPU at baseload, and the video card, and the hard disk, and all the other components; if you turn the machine off overnight, you're paying for wear and tear on the hardware turning it off and on every day, and paying for the time you spend booting up, reloading programs and reestablishing your context before you can get back to work.

In other words, the small amount of money spent on the extra electricity enables the useful application of a much larger chunk of resources.

That means if you run Folding@home, your donation is effectively being matched not just one for one but severalfold, and not by another philanthropist, but by the universe.

Replies from: Rain, CarlShulman, Vladimir_Golovin, Jack, mattnewport
comment by Rain · 2010-05-03T14:17:32.719Z · LW(p) · GW(p)

if you turn the machine off overnight, you're paying for wear and tear on the hardware turning it off and on every day, and paying for the time you spend booting up, reloading programs and reestablishing your context before you can get back to work.

I've seen numerous discussions about whether it's better / more economical to turn off your machine or to leave it running all the time, and I have never seen a satisfactory conclusion based on solid evidence.

Replies from: RobinZ
comment by RobinZ · 2010-05-03T14:35:05.551Z · LW(p) · GW(p)

That's because it depends on the design. On the lifetime point, for example: if the machine tends to fail based on time spent running (solder creep, perhaps), leaving it running more often will reduce the life, but if the machine tends to fail based on power cycling (low-cycle fatigue, perhaps), turning it on and off more often will reduce the life.

Given that I've dropped my MacBook from a height of four feet onto a concrete slab, I figure the difference is roundoff error as far as I am concerned.

comment by CarlShulman · 2010-05-02T03:40:09.680Z · LW(p) · GW(p)

A severalfold match isn't very impressive if the underlying activity is at least several orders of magnitude less efficient than alternatives, which seems likely here.

Replies from: rwallace
comment by rwallace · 2010-05-02T10:43:46.259Z · LW(p) · GW(p)

It seems highly unlikely to me. Biomedical research in general and protein folding in particular are extremely high leverage areas. I think you will be very hard put to it to find a way to spend resources even a single order of magnitude more efficiently (let alone make a case that the budget of any of us here is already being spent more efficiently, either on average or at the margin).

Replies from: CarlShulman
comment by CarlShulman · 2010-05-02T20:44:06.339Z · LW(p) · GW(p)
  1. Moore's Law means that the cost of computation is falling exponentially. Even if one thought that providing computing power was the best way to spend money (on electricity), it would likely be better to save the money spent on electric power and buy more computing power later, unless the computation is much, much more useful now (see the sketch after this list).

  2. Biomedical research already gets an outsized portion of all R&D, with diminishing returns. The NIH budget is over $30 billion.

  3. Slightly accelerating protein folding research doesn't benefit very much from astronomical waste considerations compared to improving the security of future progress with existential risk reduction.
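
The sketch referred to in point 1 - a toy calculation with made-up numbers, assuming cost per unit of computation halves every two years - just illustrates how much further the same electricity budget goes if the spend is deferred:

```python
# Sketch: computation a fixed budget buys now vs. later, under an assumed
# Moore's-law doubling time. All figures are illustrative.
doubling_time_years = 2.0          # assumed halving time for cost per FLOP
budget_dollars = 100.0             # hypothetical spend on electricity
flops_per_dollar_today = 1e12      # made-up baseline

for years in (0, 2, 4, 10):
    flops_per_dollar = flops_per_dollar_today * 2 ** (years / doubling_time_years)
    print(f"spent in {years:2d} years: {budget_dollars * flops_per_dollar:.2e} FLOPs")
# 1.00e+14, 2.00e+14, 4.00e+14, 3.20e+15 -- a 32x difference over a decade
```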

Replies from: Kaj_Sotala, rwallace
comment by Kaj_Sotala · 2010-05-12T06:50:36.167Z · LW(p) · GW(p)

it would likely be better to save the money spent on the electric power and buy more computing power later, unless the computation is much much more useful now.

In principle, this is true; in practice, saying things like this seems more likely to make the people in question simply cease donating electricity, rather than cease donating electricity and donate the saved money to something more useful. Installing a program and running it all the time doesn't really feel like you're spending money, but explicitly donating money requires you to cross the mental barrier between free and paid in a way that running the program doesn't.

For those reasons, I'd be very hesitant about arguing against running programs like Folding@Home; it seems likely to cause more harm than good.

Replies from: CarlShulman
comment by CarlShulman · 2010-05-12T08:14:04.825Z · LW(p) · GW(p)

http://lesswrong.com/lw/1d9/doing_your_good_deed_for_the_day/

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-05-12T09:59:16.957Z · LW(p) · GW(p)

But on the other hand http://lesswrong.com/lw/4e/cached_selves/ ; it doesn't seem clear to me which effect dominates, so we should be careful about drawing inferences based on that.

Furthermore, it seems to me like things like F@H are rather unlikely to cause a "good deed of the day" effect for very long: by their nature, they're continuing processes that rather quickly fade into the background of your consciousness and you partially forget about. If F@H automatically starts up whenever you boot your computer, then having it running wouldn't count for a day's good deed for most people. Constantly seeing the icon might boost a cached self effect of "I should do useful things", though.

comment by rwallace · 2010-05-02T21:03:38.888Z · LW(p) · GW(p)
  1. In practice, it is worth doing the computation now -- we can easily establish this by looking at the past, and noting that the people who performed large computations then, would not have been better off waiting until now.

  2. $30 billion is a lot of money compared to what you and I have in our pockets. It's dirt cheap compared to the trillions being spent on unsuccessful attempts to treat people who are dying for lack of better biotechnology.

  3. By far the most important way to reduce real life existential risks is speed.

  4. Even if you could find a more cost effective research area to finance, it is highly unlikely that you are actually spending every penny you can spare in that way. The value of spending resources on X, needs to be compared to the other ways you are actually spending those resources, not to the other ways you hypothetically could be spending them.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-02T21:12:41.227Z · LW(p) · GW(p)

Whether it makes sense in general to do a calculation now or just wait isn't always so clear cut. Also, at least historically, there hasn't always been a choice. For example, in the 1940s and 1950s, mathematicians studying the Riemann zeta function really wanted to do hard computations to look at more of the non-trivial zeros, but this was given very low priority by the people who controlled computers and by the people who programmed them. The priority was so low that by the time it advanced up the queue the computer in question would already be labeled as obsolete and thus would not be maintained. It wasn't until the late 1950s that the first such calculation was actually performed.

comment by Vladimir_Golovin · 2010-05-03T06:46:29.928Z · LW(p) · GW(p)

and the video card

They have high-performance GPU clients that are a lot faster than CPU-only ones.

comment by Jack · 2010-05-02T00:00:41.177Z · LW(p) · GW(p)

Assuming whatever gets learned through folding@home has applications they should offer users partial ownership of the intellectual property.

Replies from: rwallace
comment by rwallace · 2010-05-02T03:19:07.393Z · LW(p) · GW(p)

It's scientific research, the results are freely published.

comment by mattnewport · 2010-05-01T23:59:26.704Z · LW(p) · GW(p)

I'm not saying it isn't a net gain; it may well be, according to your own personal weighing of the factors. I'm just saying it is not free. Nothing is.

comment by cwillu · 2010-05-01T22:43:48.841Z · LW(p) · GW(p)

Many != all.

My desktop is old enough that it uses very little more power at full capacity than it does at idle.

Additionally, you can configure (may be the default, not sure) the client to not increase the clock rate.

Replies from: mattnewport, Rain
comment by mattnewport · 2010-05-01T23:28:03.049Z · LW(p) · GW(p)

Many != all.

It is also not equal to 'some'. The vast majority of computers today will use more power when running folding at home than they would if they were not running folding at home. There may be some specific cases where this is not true but it will generally be true.

My desktop is old enough that it uses very little more power at full capacity than it does at idle.

You've measured that, have you? Here's an example of some actual measurements of power draw at idle and under load for a range of current processors. It's not a vast difference, but it is real, and it ranges from about 30W / 40% increase in total system power draw to around 100W / 100% increase.
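To put that delta in money terms, here is a rough back-of-the-envelope sketch; the 50 W load delta and the $0.12/kWh electricity price are illustrative assumptions picked from roughly the middle of that range, not measurements:

```python
# Back-of-the-envelope estimate of the electricity cost of running a
# distributed-computing client around the clock.
# Assumed figures (illustrative only): 50 W extra draw, $0.12 per kWh.
extra_watts = 50
hours_per_year = 24 * 365
price_per_kwh = 0.12  # USD

extra_kwh = extra_watts * hours_per_year / 1000
annual_cost = extra_kwh * price_per_kwh
print(f"{extra_kwh:.0f} kWh/year, about ${annual_cost:.0f}/year")
# -> roughly 438 kWh/year, about $53/year
```

Not a huge number, but it is a real, recurring cost rather than a free donation.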

Additionally, you can configure (may be the default, not sure) the client to not increase the clock rate.

I couldn't find mention of any such setting on their site. Do you have a link to an explanation of this setting?

Replies from: cwillu
comment by cwillu · 2010-05-02T01:39:18.382Z · LW(p) · GW(p)

On further consideration, my complaint wasn't my real/best argument; consider this a redirect to rwallace's response above :p

That said, I personally don't take 'many' as meaning 'most', but more in the sense of "a significant fraction", which may be as little as 1/5 and as much as 4/5. I'd be somewhat surprised if the fraction of old machines (5+ years old) in use wasn't in that range.

re: scaling, the Ubuntu folding team's wiki describes the approach.

comment by Rain · 2010-05-01T23:02:00.316Z · LW(p) · GW(p)

Idle could also mean 'off', which would be significant power savings even (especially?) for older CPUs.

Replies from: cwillu
comment by cwillu · 2010-05-02T01:18:40.699Z · LW(p) · GW(p)

One who refers to their powered-off computer as 'idle' might find themselves missing an arm.

Replies from: Rain
comment by Rain · 2010-05-03T14:15:36.576Z · LW(p) · GW(p)

Except I'm talking about opportunity cost rather than redefining the word. You can turn off a machine you aren't using, a machine that's idle.

comment by NancyLebovitz · 2010-05-08T03:11:17.421Z · LW(p) · GW(p)

Self-forgiveness limits procrastination

Wohl's team followed 134 first year undergrads through their first mid-term exams to just after their second lot of mid-terms. Before the initial exams, the students reported how much they'd procrastinated with their revision and how much they'd forgiven themselves. Next, midway between these exams and the second lot, the students reported how positive or negative they were feeling. Finally, just before the second round of mid-terms, the students once more reported how much they had procrastinated in their exam preparations.

The key finding was that students who'd forgiven themselves for their initial bout of procrastination subsequently showed less negative affect in the intermediate period between exams and were less likely to procrastinate before the second round of exams. Crucially, self-forgiveness wasn't related to performance in the first set of exams but it did predict better performance in the second set.

comment by SilasBarta · 2010-05-03T15:36:17.263Z · LW(p) · GW(p)

I recalled the strangest thing an AI could tell you thread, and I came up with another one in a dream. Tell me how plausible you think this one is:

Claim: "Many intelligent mammals (e.g. dogs, cats, elephants, cetaceans, and apes) act just as intelligently as feral humans, and would be capable of human-level intelligence with the right enculturation."

That is, if we did to pet mammals something analogous to what we do to feral humans when they are discovered, we could assimilate them; their deficiencies are the result of a) our not knowing what assimilation regimen is necessary for pet/zoo mammals, and b) mammals in the wild being currently at a lower level of cultural development, one that humans passed through at some point.

Thoughts?

Replies from: wiresnips, NancyLebovitz, RobinZ
comment by wiresnips · 2010-05-04T17:07:30.310Z · LW(p) · GW(p)

I don't know that we've ever successfully assimilated a feral human either.

comment by NancyLebovitz · 2010-05-03T16:21:29.553Z · LW(p) · GW(p)

Sounds plausible to me. I suspect people aren't able to develop enculturation for animals-- the sensory systems and communication methods are too different.

I also believe people have been unintentionally selecting wild animals for intelligence.

comment by RobinZ · 2010-05-03T16:20:27.874Z · LW(p) · GW(p)

Next step: "Okay, what should I do to test this?"

Replies from: SilasBarta
comment by SilasBarta · 2010-05-03T17:25:22.323Z · LW(p) · GW(p)

Find some method of communication that the mammal can use, and raise it in a society of children that use that method of communication. See if its behavior tracks that of the children in terms of intelligence.

I believe such an experiment has already been performed, involving deaf children and using sign language as the communication method, and some kind of ape as the mammal. The ape reportedly adapted very comfortably, behaving just as the children did (except that they taught it to ask for drugs), but the experiment had to be cut off on the grounds that, after another year of growth, the ape would be too strong, and therefore too dangerous, to risk leaving in the presence of children.

I can't find a cite at the moment, but I remember a friend telling me about this and it checked out in an online search.

Replies from: NancyLebovitz, Jack
comment by NancyLebovitz · 2010-05-03T18:23:55.410Z · LW(p) · GW(p)

Human languages (including sign) are adapted for human beings. While there's some flexibility, I wouldn't expect animals using human language to be at their best.

comment by Jack · 2010-05-03T17:52:22.635Z · LW(p) · GW(p)

What they need to do is include like 5 or 6 apes with the children and then when they're removed they can continue socializing with each other.

The problem is coming up with methods of communication. Aside from apes and sign language I can't think of any...

Replies from: CronoDAS, NancyLebovitz
comment by CronoDAS · 2010-05-03T19:06:10.101Z · LW(p) · GW(p)

One major difference between humans and apes is this:

Humans teach each other. When we discover something new, we immediately go and tell everybody. Apes don't. If an ape discovers something, it doesn't spread to the other members of its social group until they happen to watch the ape using its discovery. And apes that are taught sign language don't pass it on to their children.

Replies from: cupholder, SilasBarta
comment by cupholder · 2010-05-03T22:58:42.199Z · LW(p) · GW(p)

Which means apes don't get the benefit of cultural evolution (or gene-culture co-evolution). I wonder if that was a key barrier to the development of ape culture.

comment by SilasBarta · 2010-05-04T18:00:44.735Z · LW(p) · GW(p)

Hm, I thought I had a counterexample, but it looks like it was just a case of learning by imitation. Also, vervet monkeys teach their proto-language (of "eagles!", "snakes!", and "leopards!") to their young by smacking them when they give the wrong call.

As for other mammals, there are cases of them teaching each other when they learn something new, for example when an elephant learned how to unlock her cage.

comment by NancyLebovitz · 2010-05-03T18:26:17.678Z · LW(p) · GW(p)

African gray parrots and spoken language.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-03T18:46:57.285Z · LW(p) · GW(p)

Yes, and there's been a lot of work with African Greys already. Irene Pepperberg and her lab have done most of the really pioneering work. They've shown that Greys can recognize colors and small numbers, and in some cases produce very large vocabularies. There's also evidence that Greys sometimes overcorrect. That is, they apply general grammatical rules to conjugate/decline words even when the words are irregular. This happens with human children as well: human children will frequently say "runned" when they mean "ran" or "mouses" when they mean "mice", and many similar examples. This is strong evidence that they are internalizing general rules rather than simply repeating words they've heard. Since Greys do the same thing, we can conclude that parrots aren't just parroting.

Replies from: Jack
comment by Jack · 2010-05-03T18:50:03.395Z · LW(p) · GW(p)

This is strong evidence that they are internalizing general rules rather than simply repeating words they've heard.

Yes, it is! I hadn't heard that before. Is there a journal article somewhere?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-03T19:03:04.387Z · LW(p) · GW(p)

I'm not aware of any journal articles for overcorrection and a quick Google search doesn't turn any up. I'll go bug my ornithology friends. In the meantime, here's a BBC article that discusses the matter: http://web.archive.org/web/20060519061120/http://news.bbc.co.uk/2/hi/science/nature/3430481.stm . They give the example of N'kisi using "flied" for the past tense of "fly" rather than "flew."

Edit: Fixed link. Edit: Link's accuracy is questionable. See Mass Driver's remarks below.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-05-04T04:14:32.125Z · LW(p) · GW(p)

The link seems to be dead or misspelled.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-04T04:26:48.670Z · LW(p) · GW(p)

Misspelled. Edited for correct link.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-05-04T05:12:39.978Z · LW(p) · GW(p)

The BBC appears to have at least partially withdrawn their article about the parrot in question:

Note: This story about animal communication has replaced an earlier one on this page which contained factual inaccuracies we were unable to correct. As a result, the original story is no longer in our archive. It is still visible elsewhere, via the link below: 'Parrot oratory stuns scientists'

New BBC News Article

Replies from: thomblake, JoshuaZ
comment by JoshuaZ · 2010-05-04T05:17:26.867Z · LW(p) · GW(p)

Hmm, that's very interesting. I think I've seen the overcorrection claim before but then definitely don't have anything resembling a good citation.

comment by CronoDAS · 2010-05-28T06:18:22.600Z · LW(p) · GW(p)

I am thinking of making a top-level post criticizing libertarianism, in spite of the current norm against discussing politics. Would you prefer that I write the post, or not write it?

Replies from: tut, Blueberry, cupholder, Alicorn, cousin_it, kodos96
comment by tut · 2010-05-28T10:53:22.226Z · LW(p) · GW(p)

I will vote it down unless you say something that I have not seen before. I think that it was a good idea to not make LW a site for rehearsing political arguments, but if you have thought of something that hasn't been said before and if you can explain how you came up with it then it might be a good reasoning lesson.

Replies from: NancyLebovitz, Blueberry
comment by NancyLebovitz · 2010-05-28T12:11:52.258Z · LW(p) · GW(p)

I will only vote it up if there's something I haven't seen before, but will only vote it down if I think it's dreadful.

We may not be ready for it yet, but at some point we need to be able to pass the big test of addressing hard topics.

comment by Blueberry · 2010-05-28T14:22:48.942Z · LW(p) · GW(p)

I will vote it up to cancel the above downvote, to encourage you to make the post in case the threat of downvoting scares you off.

comment by Blueberry · 2010-05-28T06:51:28.673Z · LW(p) · GW(p)

I'd love to read it, though I may well disagree with a lot of it. I'd prefer it if it were kept more abstract and philosophical, as opposed to discussing current political parties and laws and so forth: I think that would increase the light-to-heat ratio.

comment by cupholder · 2010-05-29T07:14:15.143Z · LW(p) · GW(p)

Upvoted your comment for asking in the first place.

If your post was a novel explanation of some aspect of rationality, and wasn't just about landing punches on libertarianism, I'd want to see it. If it was pretty much just about criticizing libertarianism, I wouldn't.

I say this as someone very unsympathetic to libertarianism (or at least what contemporary Americans usually mean by 'libertarianism') - I'm motivated by a feeling that LW ought to be about rationality and things that touch on it directly, and I set the bar high for mind-killy topics, though I know others disagree with me about that, and that's OK. So, though I personally would want to downvote a top-level post only about libertarianism, I likely wouldn't, unless it were obnoxiously bare-faced libertarian baiting.

Replies from: ata
comment by ata · 2010-05-29T07:30:03.173Z · LW(p) · GW(p)

I agree on most counts.

However, I'd also enjoy reading it if it were just a critique of libertarianism but done in an exceptionally rational way, such that if it is flawed, it will be very clear why. At minimum, I'd want it to explicitly state what terminal values or top-level goals it is assuming we want a political system to maximize, consider only the least convenient possible interpretation of libertarianism, avoid talking about libertarians too much (i.e. avoid speculating on their motives and their psychology; focus as much as possible on the policies themselves), separate it from discussion of alternatives (except insofar as is necessary to demonstrate that there is at least one system from which we can expect better outcomes than libertarianism), not appear one-sided, avoid considering it as a package deal whenever possible, etc.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-05-29T16:59:58.650Z · LW(p) · GW(p)

done in an exceptionally rational way, such that if it is flawed, it will be very clear why

That standard sounds pretty weird. If it is so clear that it is flawed, wouldn't you expect it to be clear to the author and thus not posted? Perhaps you mean clear what your core disagreement is?

comment by Alicorn · 2010-05-28T06:18:58.104Z · LW(p) · GW(p)

I'm interested.

comment by cousin_it · 2010-05-28T11:59:37.679Z · LW(p) · GW(p)

Not enough information to answer. I will upvote your post if I find it novel and convincing by rationalist lights. Try sending draft versions to other contributors that you trust and incorporate their advice before going public. I can offer my help, if being outside of American politics doesn't disqualify me from that.

comment by kodos96 · 2010-05-28T08:18:30.125Z · LW(p) · GW(p)

ergh.... after the recent flamewar I was involved in, I had resolved to not allow myself to get wrapped up in another one, but if there's going to be a top level post on this, I don't realistically see myself staying out of it.

I'm not saying don't write it though. If you do, I'd recommend you let a few people you trust read it over first before you put it up, to check for anything unnecessarily inflammatory. Also, what Blueberry said.

comment by Paul Crowley (ciphergoth) · 2010-05-06T16:27:14.044Z · LW(p) · GW(p)

The Cognitive Bias song:

http://www.youtube.com/watch?v=3RsbmjNLQkc

Not very good, but, you know, it's a song about cognitive bias, how cool is that?

comment by arundelo · 2010-05-13T04:35:49.740Z · LW(p) · GW(p)

Kaj_Sotala is doing a series of interviews with people in the SIAI house. The first is with Alicorn.

Edit: They are tagged as "siai interviews".

comment by Mitchell_Porter · 2010-05-09T06:47:47.116Z · LW(p) · GW(p)

Entertainment for out-of-work Judea Pearl fans: go to your local job site and search on the word "causal", and then imagine that all those ads aren't just mis-spelling the word "casual"...

comment by NancyLebovitz · 2010-05-10T12:22:44.258Z · LW(p) · GW(p)

Most people's intuition is that assassination is worse than war, but simple utilitarianism suggests that war is much worse.

I have some ideas about why assassination isn't a tool for getting reliable outcomes-- leaders are sufficiently entangled in the groups that they lead that removing a leader isn't like removing a counter from a game, it's like cutting a piece out of a web which is going to rebuild itself in not quite the same shape-- but this doesn't add up to why assassination could be worse than war.

Is there any reason to think the common intuition is right?

Replies from: bogdanb, PhilGoetz, Nick_Tarleton
comment by bogdanb · 2010-05-21T14:27:56.514Z · LW(p) · GW(p)

TLDR: “War” is the inter-group version of “duel” (ie, lawful conflict). “Assassination” is the inter-group version of “murder” (ie, unlawful conflict).

My first “intuition about the intuition” is that it’s a historical consequence: during most of history, things like freedom, and the power and responsibility for enforcing rules when conflicts (freedom vs. freedom) occurred, were stratified. Conflicts between individuals in a family are resolved by the family (e.g. by the head thereof), conflicts between families (or individuals in different families) by tribal leaders or the like. During feudalism the “scale” was formalized, but even before that we had a long series of family → group → tribe → city → barony → kingdom → empire.

The key point about this system is that attempts to “cross the borders” within it, for instance punishing someone from a different group directly rather than invoking punishment from that group’s leadership, are seen as an intrusion into that group’s affairs.

So assassination comes to be seen as the between-group version of murder: going around the established rules of society. That’s something that is selected against in social environments (and has been discussed elsewhere).

By contrast, war is the “normal” result when there is no higher authority to appeal to in a conflict between groups. Note that, analogously, for much of history duels were considered a correct method of conflict resolution between certain individuals, as long as they respected certain rules. So as long as there are, at least in theory, laws of war, war is considered a direct extension of that instinct. Assassination is seen as breaking the rules, so it’s seen differently.

A few other points:

  • war is very visible, so you can expend a lot of signaling to dehumanize the adversary.
  • but assassination is supposed to be done in secret, so you can’t use propaganda as well (assassinating opposing leadership during a war is not seen as that big a problem; they’re all infidels/drug lords/terrorists anyway!)
  • assassination was a bit harder (even now, drones are expensive), and failed assassination attempts would often lead to escalation into war anyway
  • assassination is oriented towards leaders, who have an interest to discourage, as much as they can, the concept. You can do that, e.g., via the meme that conflict is only honorable when it’s between armored knights on horses and the like. (For best results, add another meme which implies that observing that peasants are not allowed to own armor and horses is “dissent”.)
Replies from: JanetK
comment by JanetK · 2010-05-22T09:30:54.751Z · LW(p) · GW(p)

What an excellent analysis. I voted up. The only thing I can think of that could be added is that making a martyr can backfire.

comment by PhilGoetz · 2010-05-15T04:13:37.461Z · LW(p) · GW(p)

Who thinks assassination is worse than war?

I could make an argument for it, though: If countries engaged regularly in assassination, it would never come to a conclusion, and would not reduce (and might increase) the incidence of war. Phrasing it as "which is worse" makes it sound like we can choose one or the other. This assumes that an assassination can prevent a war (and doesn't count the cases where it starts a war).

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-05-15T06:14:58.768Z · LW(p) · GW(p)

It seems to me that the vast majority of people think of war as a legitimate tool of national policy, but are horrified by assassination.

comment by Nick_Tarleton · 2010-05-10T18:25:57.622Z · LW(p) · GW(p)

I've always assumed that the norm against assassination, causally speaking, exists mostly due to historical promotion by leaders who wanted to maintain a low-assassination equilibrium, now maintained largely by inertia. (Of course, it could be normatively supported by other considerations.)

It makes sense to me that people would oversimplify the effect of assassination in basically the way you describe, overestimating the indispensability of leaders. I know I've seen a study on the effects of assassination on terrorist groups, but can't find a link or remember the conclusions.

comment by Kevin · 2010-05-06T20:12:21.397Z · LW(p) · GW(p)

Neanderthal genome reveals interbreeding with humans:

http://www.newscientist.com/article/dn18869-neanderthal-genome-reveals-interbreeding-with-humans.

Replies from: None, Kevin
comment by [deleted] · 2010-05-07T00:15:52.790Z · LW(p) · GW(p)

Whoooohooo! Awesomest thing in the last ten years of genetic news for me! YAAY! WHO HOO!!! /does a little dance / I want to munch on that delicious data!

Ahem.

Sorry about that.

But people, 1 to 4% admixture! This is big! This gets an emotional response from me! That admixture survived more than a thousand generations of selection; the bulk of it is probably neutral, but think about how many perfectly useful, working alleles we may have today (since the Neanderthals were close to us to start with). Something like 600,000 years of separation: these guys evolved separately from us for nearly as long as the fictional Vampires in Blindsight.

It seems some of us have, in our genes, a bit that our ancestors picked up from another species! Could this have anything to do with behavioural modernity, which started off at about the same time the populations crossbred in the Middle East ~100,000 years ago? Which adaptations did we pick up? Think of the possibilities!

Ok, I'll stop the torrent of downvote-magnet words and get back to reading about this. And then on to everything else my grubby little paws can get hold of about Neanderthals; I need to brush up!

Edit: I just realized that part of the reason I got so excited is that it shows I may have a bit of exotic ancestry. Considering how much people, all else being equal, like to play up their "foreign" or "unusual" semi-mythical ancestors or roots (in conversation, in national myths, or on the census) instead of the ethnicity of the majority of their ancestors, this may be a more general bias. I could of course quickly justify it with an evo-psych "just so" story, but I'll refrain from that and instead search for what studies have to say about it.

Replies from: Kevin
comment by Kevin · 2010-05-07T00:30:14.307Z · LW(p) · GW(p)

I definitely think this is top-level post material, but I didn't have enough to say to avoid pissing off the people who think all top-level posts need to be at least 500 words long.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-07T00:31:38.676Z · LW(p) · GW(p)

I think this is very interesting but I'm not sure it should be a top-level post. Not due to the length but simply because it isn't terribly relevant to LW. Something can be very interesting and still not the focus here.

Replies from: Kevin
comment by Kevin · 2010-05-07T00:33:18.730Z · LW(p) · GW(p)

There is interesting discussion to be had that is relevant to LW.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-07T00:34:41.346Z · LW(p) · GW(p)

How so? I'm not seeing it.

Replies from: Kevin
comment by Kevin · 2010-05-07T00:36:27.785Z · LW(p) · GW(p)

That's because there isn't a top-level post yet! :P

The point being that many, many more people read /new than read 400 comments deep in the open thread.

Replies from: RobinZ
comment by RobinZ · 2010-05-07T01:44:15.800Z · LW(p) · GW(p)

It is easier to convince people that there is an interesting discussion to be had relevant to LW if you can discuss its relevance to LW in an interesting fashion when you post it.

More seriously, if there isn't some barrier to posting, /new will suffer a deluge of marginally interesting material, and after the transients die out nobody will be watching the posts there, either. I read most new posts because most new posts are substantive.

comment by simplicio · 2010-05-03T23:19:15.648Z · LW(p) · GW(p)

I have a request. My training is in science & engineering, but I am totally ignorant of basic economics. I have come to see this as a huge blind spot. I feel my views on social issues are fairly well-reasoned, but when it comes to anything fiscal, it's all very touchy-feely at present.

Can anyone recommend intro material on economics (books, tutorials)? I ask on LW because I have no idea where to start and who to trust. If you offer a recommendation of a book pushing some particular economic "school of thought," that's fine, but I'd like to know what that school is.

Thanks!

Replies from: mattnewport, CronoDAS, steven0461, None, Vladimir_Nesov
comment by mattnewport · 2010-05-03T23:35:36.772Z · LW(p) · GW(p)

Economics in One Lesson by Henry Hazlitt is a good slim introduction to the economic mindset. For a different approach focused on the application of economic thinking to everyday life The Logic of Life by Tim Harford is worth a look. Neither book covers much of the math of economics but I think that is a good thing since most of the math heavy parts of economics are the least useful and relevant.

ETA: Economics In One Lesson is a heavily free market / free trade 'classical' economic slant.

comment by CronoDAS · 2010-05-04T09:09:56.418Z · LW(p) · GW(p)

The book I used in my college Econ 101 class was this one.

comment by steven0461 · 2010-05-04T00:25:56.015Z · LW(p) · GW(p)

MIT OpenCourseWare has a lot of material. I also like Bryan Caplan's lecture notes (these sometimes have a libertarian slant).

comment by [deleted] · 2010-05-04T15:34:40.426Z · LW(p) · GW(p)

Pick up a copy of Basic Economics by Thomas Sowell. It is, by far, the best introduction to economics I've ever read. Don't be put off by the size of the thing — it's a very easy read and requires no background in math. There's a follow-up book by the same author, too: Applied Economics.

comment by Vladimir_Nesov · 2010-05-03T23:20:37.821Z · LW(p) · GW(p)

I believe a textbook is always a better first step.

Replies from: wnoise
comment by wnoise · 2010-05-03T23:59:42.838Z · LW(p) · GW(p)

A good textbook is a better first step. Unfortunately, there are some truly dire textbooks out there.

comment by Matt_Stein · 2010-05-03T04:32:54.611Z · LW(p) · GW(p)

So, I'm somewhat new to this whole rationality/Bayesianism/(nice label that would describe what we do here on LessWrong). Are there any podcasts or good audiobooks that you'd recommend on the subjects of LessWrong? I have a large amount of time at work that I can listen to audio, but I'm not able to read during this time. Does anyone have any suggestions for essential listening/reading on subjects similar to the ones covered here?

Replies from: None
comment by [deleted] · 2010-05-03T19:07:04.008Z · LW(p) · GW(p)

I know you said you don't have a ton of time to read but Gary Drescher's Good and Real has been called Less Wrong in book form on occasion. If nothing else, I found it an enjoyable read that gives a good start to getting into the mindset people have in this community.

comment by mattnewport · 2010-05-20T19:01:55.606Z · LW(p) · GW(p)

This seems like a potentially significant milestone: 'Artificial life' breakthrough announced by scientists

Scientists in the US have succeeded in developing the first synthetic living cell.

The researchers constructed a bacterium's "genetic software" and transplanted it into a host cell.

The resulting microbe then looked and behaved like the species "dictated" by the synthetic DNA.

Replies from: retiredurologist
comment by retiredurologist · 2010-05-21T15:15:39.975Z · LW(p) · GW(p)

Given that this now opens the door for artificially designed and deployed harmful viruses, perhaps unfriendly AI falls a few notches on the existential risk ladder.

comment by Alexandros · 2010-05-20T10:08:58.429Z · LW(p) · GW(p)

I remember hearing a few anecdotes about abstaining from food for a period of time (fasting) and improved brain performance. I also seem to recall some pop-sci explanation involving detoxification of the body and the like. Today something triggered interest in this topic again, but a quick Google search did not return much on the topic (fasting is drowned in religious references).

I figure this is well within LW scope, so does anyone have any knowledge or links that offer more concrete insight into (or rebuttal of) this notion?

comment by cousin_it · 2010-05-19T10:17:27.489Z · LW(p) · GW(p)

Rolf Nelson's AI deterrence doesn't work for Schellingian reasons: the Rogue AI has incentive to modify itself to not understand such threats before it first looks at the outside world. This makes you unable to threaten, because when you simulate the Rogue AI you will see its precommitment first. So the Rogue AI negates your "first mover advantage" by becoming the first mover in your simulation :-) Discuss.

Replies from: rolf_nelson, Vladimir_Nesov
comment by rolf_nelson · 2010-05-20T02:14:56.573Z · LW(p) · GW(p)

I agree that AI deterrence will necessarily fail if:

  1. All AIs modify themselves to ignore threats from all agents (including ones they consider irrational), and

  2. any deterrence simulation counts as a threat.

Why do you believe that both or either of these statements are true? Do you have some concrete definition of 'threat' in mind?

Replies from: cousin_it
comment by cousin_it · 2010-05-20T07:05:55.919Z · LW(p) · GW(p)

I don't believe statement 1 and don't see why it's required. After all, we are quite rational, and so is our future FAI.

comment by Vladimir_Nesov · 2010-05-19T11:24:46.118Z · LW(p) · GW(p)

The notion of "first mover" is meaningless, where the other player's program is visible from the start.

comment by Richard_Kennaway · 2010-05-19T08:08:07.734Z · LW(p) · GW(p)

In another comment I coined (although not for the first time, it turns out) the expression "Friendly Human Intelligence". Which is simply geekspeak for how to bring up your kids right and not make druggie losers, wholesale killers, or other sorts of paperclippers. I don't recall seeing this discussed on LessWrong. Maybe most of us don't have children, and Eliezer has said somewhere that he doesn't consider himself ready to create new people, but as the saying goes, if not now, when, and if not this, what?

I don't have children and don't intend to. I have two nephews and a niece, but have not had much to do with their lives, beyond sending them improving books for birthdays and Christmas. I wonder if LessWrongers, with or without children, have anything to say on how to raise children to be rational non-paperclippers?

Replies from: JanetK, Zack_M_Davis
comment by JanetK · 2010-05-22T09:08:12.989Z · LW(p) · GW(p)

I think that question is a conversation stopper because those who do not have children do not feel qualified, and those that do have children know what a complex and tricky question it is. Personally I don't think there is a method that fits all children and all relationships with them. But... you might try activities rather than presents. 'Oh cool, uncle is going to make a video with us and we're going to do it at the zoo.' If you get the right activity (it depends on the child), they will remember it, and what you did and said, for years. I had an uncle that I only saw a few times, but he showed me how to make and throw a boomerang. He explained why it returned. I have thanked him for that day for 60 years.

Replies from: cupholder
comment by cupholder · 2010-05-22T19:06:54.177Z · LW(p) · GW(p)

I think that question is a conversation stopper because those who do not have children do not feel qualified, and those that do have children know what a complex and tricky question it is.

I don't have children and I didn't try answering the question because I knew what a complex and tricky question it is - I don't expect it to be much different than the bigger question of how to improve human rationality for people in general.

comment by PhilGoetz · 2010-05-15T04:06:27.557Z · LW(p) · GW(p)

Today the Pope finally admitted there has been a problem with child sex abuse by Catholic priests. He blamed it on sin.

What a great answer! It covers any imaginable situation. Sin could be the greatest tool for bad managers everywhere since Total Quality Management.

"Sir, your company, British Petroleum, is responsible for the biggest environmental disaster in America this decade. How did this happen, and what is being done to prevent it happening again?"

"Senator, I've made a thorough investigation, and I'm afraid there has been sin in the ranks of British Petroleum. BP has a deep need to re-learn penance, to accept purification, to learn on one hand forgiveness but also the need for justice."

"Thank you, Mr. Hayward. I'm glad you're on top of the situation."

I wonder if I can use this at work.

Replies from: Morendil
comment by Morendil · 2010-05-15T08:04:06.341Z · LW(p) · GW(p)

Sin could be the greatest tool for bad managers everywhere since Total Quality Management.

That sounds like the kind of remark that goes out of its way to offend several categories of people at once. :)

But in that category the gold standard remains Evelyn Waugh's “now that they no longer defrock priests for sexual perversities, one can no longer get any decent proofreading.”

comment by Will_Newsome · 2010-05-10T05:11:47.572Z · LW(p) · GW(p)

You know, lots of people claim to be good cooks, or know good cooks, or have an amazing recipe for this or that. But Alicorn's cauliflower soup... it's the first food that, upon sneakily shoveling a fourth helping into my bowl, made me cackle maniacally like an insane evil sorcerer high on magic potions of incredible power, unable to keep myself from alerting three other soup-enjoying people to my glorious triumph. It's that good.

Replies from: Alicorn, SilasBarta
comment by Alicorn · 2010-05-10T05:37:19.860Z · LW(p) · GW(p)

Awwwww :D

PS: If this endorsement of house food quality encourages anyone to apply for an SIAI fellowship, note your inspiration in the e-mail! We receive referral rewards!

Replies from: NancyLebovitz, Jack
comment by NancyLebovitz · 2010-05-10T12:17:15.462Z · LW(p) · GW(p)

Would you be willing to post the recipe?

Replies from: Alicorn
comment by Alicorn · 2010-05-10T16:10:09.798Z · LW(p) · GW(p)

http://improvisationalsoup.wordpress.com/2009/05/31/cream-of-cauliflower-soup/

I have taken to also adding two or three parsnips per batch.

Replies from: bogdanb
comment by bogdanb · 2010-05-21T14:36:26.268Z · LW(p) · GW(p)

Can you describe that “better than bouillon” thing, for us non-US (I assume) readers?

Also, how much cream do you use, and what’s “a ton” of garlic? (In my kitchen, that could mean half a pound — we use garlic paste as ketchup around here...)

Replies from: Alicorn
comment by Alicorn · 2010-05-21T19:00:07.661Z · LW(p) · GW(p)

Better than Bouillon is paste-textured reduced stock. It's gloopy, not very pretty, and adds excellent flavor to just about any savory dish. Instead of water and BTB, you could use a prepared stock, or instead of just the BTB, use a bouillon cube, but I find they have dramatically inferior flavors unless you make your own stock at home. I haven't tried cooking down a batch of homemade stock to see if I could get paste, but I think it would probably take too long.

I guess on the cream until the color looks about right. I use less cream if I overshot on the water when I was cooking the veggies, more if it's a little too thick.

"A ton" of garlic means "lots, to taste". I'd put one bulb in a batch of cauliflower soup mostly because it's convenient to grab one bulb out of a bag of garlic bulbs. If you're that enthusiastic about garlic, go ahead and use two, three, four - it's kind of hard to overdo something that wonderful.

comment by Jack · 2010-05-10T05:50:39.094Z · LW(p) · GW(p)

How long are the fellowships for?

Replies from: Alicorn
comment by Alicorn · 2010-05-10T06:01:32.623Z · LW(p) · GW(p)

As long as three months (and the possibility of sticking around after if everything goes swimmingly), but you could come for considerably shorter if you have scheduling constraints. We've also been known to have people over for a day or two just to visit and see how cool we are. Totally e-mail Anna if you have any interest at all! Don't be shy! She isn't scary!

comment by SilasBarta · 2010-05-10T16:28:39.097Z · LW(p) · GW(p)

Does Alicorn's presence prohibit me from applying for an SIAI fellowship?

Replies from: AnnaSalamon, Will_Newsome
comment by AnnaSalamon · 2010-05-10T19:38:49.625Z · LW(p) · GW(p)

Nope. All applications are welcome.

comment by Will_Newsome · 2010-05-11T00:03:12.658Z · LW(p) · GW(p)

I second Anna, but I will also note that we plan on moving into a biggg house or possibly two big houses, and this would hopefully minimize friction in the event that two Visiting Fellows don't quite get along. I hope you apply!

comment by PhilGoetz · 2010-05-08T17:55:28.512Z · LW(p) · GW(p)

If we get forums, I'd like a projects section. A person could create a project, which is a form centered around a problem to work on with other people over an extended period of time.

Replies from: NihilCredo
comment by NihilCredo · 2010-05-17T15:57:52.272Z · LW(p) · GW(p)

This seems like the sort of activity Google Wave is (was?) meant for.

comment by SilasBarta · 2010-05-06T22:40:58.720Z · LW(p) · GW(p)

Tough financial question about cryonics: I've been looking into the infinite banking idea, which actually has credible supporters, and basically involves using a mutual whole life insurance policy as a tax shelter for your earnings, allowing you to accumulate dividends thereon tax free ("'cause it's to provide for the spouse and kids"), withdraw from your premiums, and borrow against yourself (and pay yourself back).

Would having one mutual whole life insurance policy keep you from having a separate policy of the kind of life insurance needed to fund a cryonic self-preservation project? Would the mutual whole life policy itself be a way to fund cryopreservation?
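(For what it's worth, the tax-deferral part of the pitch is easy to illustrate with a toy calculation. The sketch below uses made-up illustrative numbers (a 6% return, a 25% tax rate, a 30-year horizon) and says nothing about how any actual policy behaves or is taxed:)

```python
# Illustrative only: why deferring tax on investment gains matters.
# Assumed figures (not actual policy terms): $10,000 principal,
# 6% annual return, 25% tax rate, 30 years.
principal, rate, tax, years = 10_000.0, 0.06, 0.25, 30

# Gains taxed every year: only the after-tax return compounds.
taxed_annually = principal * (1 + rate * (1 - tax)) ** years

# Gains compound untaxed; tax is paid once on withdrawal.
gross = principal * (1 + rate) ** years
taxed_at_end = principal + (gross - principal) * (1 - tax)

print(f"taxed annually: ${taxed_annually:,.0f}")  # ~ $37,453
print(f"taxed at end:   ${taxed_at_end:,.0f}")    # ~ $45,576
```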

comment by Alex Flint (alexflint) · 2010-05-05T22:41:50.354Z · LW(p) · GW(p)

Apparently it is all too easy to draw neat little circles around concepts like "science" or "math" or "rationality" and forget the awesome complexity and terrifying beauty of what is inside the circles. I certainly did. I recommend all 1400 pages of "Molecular Biology Of The Cell" (well, at least the first 600 pages) as an antidote. A more spectacularly extensive, accessible, or beautifully illustrated textbook I have never seen.

comment by AllanCrossman · 2010-05-04T21:18:29.630Z · LW(p) · GW(p)

Is Eliezer alive and well? He's not said anything here (or on Hacker News, for that matter) for a month...

Replies from: TraditionalRationali, gwern, CarlShulman, Jack, RobinZ
comment by TraditionalRationali · 2010-05-16T01:48:16.823Z · LW(p) · GW(p)

Eliezer Yudkowsky and Massimo Pigliucci just recently had a dialogue on Bloggingheads.tv. The title is The Great Singularity Debate.

After Yudkowsky at the beginning gives three different definitions of "the singularity", they discuss strong artificial intelligence and consciousness. Pigliucci is the one who quite quickly takes the discussion from intelligence to consciousness. Just before that they discuss whether simulated intelligence is actually intelligence. Yudkowsky made an argument (something like): if the AI can solve problems over a sufficiently broad range of areas and give answers, then that is what we mean by intelligence, so if it manages to do that then it has intelligence. I.e., it is then not "just simulating having intelligence" but is actually intelligent. Pigliucci, however, seems to want to distinguish between those and say "well, it may then just simulate intelligence, but maybe it does not actually have it". (Too difficult for me to summarize very well; you have to look for yourself if you want it more accurately.)

There it seemed to me (but I am certainly not an expert in the field) that Yudkowsky's definition looked reasonable. It would have been interesting to have that point elaborated in more detail though.

Pigliucci's point seemed to be something like this: for the only intelligence that we know of so far (humans, and to a lesser extent other higher animals), intelligence comes together with consciousness. About consciousness we know less, maybe only that the human biological brain somehow manages to have it, and therefore we of course do not know whether, e.g., a computer simulating the brain on a different substrate would also be conscious. Yudkowsky seemed to think this very likely, while Pigliucci seemed to think it very unlikely. But what I found lacking in the discussion is this: what do we know (or reasonably conjecture) about the connection between intelligence and consciousness? Of course Pigliucci is right that in the only intelligence we know of so far (the human brain) intelligence and consciousness come together. But to me (who does not know much about this subject matter) that seems a weak argument for discussing them so closely together when it comes to artificial intelligence. Maybe someone here on Less Wrong knows more about the connection, or lack of one, between intelligence and consciousness? To a naive non-expert like me, intelligence seems (rather) easy to test for: just test how well the thing solves general problems. Whereas to test whether something has consciousness, I would guess a working theory of consciousness would have to be developed before a test could even be designed.

This was the second recent BHTV dialogue where Pigliucci discussed singularity/transhumanism-related questions. The previous one I mentioned here. As mentioned there, it seems to have started with a blog post of Pigliucci's where he criticized transhumanism. I think it interesting that Pigliucci continues his interest in the topic. I personally see it as a very positive establishing of contact between the "traditional rationalist/skeptic/(cis-)humanist" community and the "LessWrong-style rationalist/transhumanist" community. Massimo Pigliucci very much gave the impression of enjoying the discussion with Eliezer Yudkowsky! I am also pleased to have noticed that recently Pigliucci's blog has now and then linked to LessWrong/Eliezer Yudkowsky (mostly via Julia Galef if I remember correctly (too lazy to locate the exact links right now)). I would very much like to see this continue (e.g. Yudkowsky discussing with people like Paul Kurtz, Michael Shermer, Richard Dawkins, Sean Carroll, Steven Weinberg, or Victor Stenger (realizing of course that they are probably too busy for it to happen)).

Previous BHTV dialogues with Eliezer Yudkowsky I have seen noticed here on LessWrong, but not this one (I hope it is not that I have just missed that post). Therefore I posted this here; I did not find a perfect place for it, and this was the least bad one I noticed. Although my post here is only partly about "Is Eliezer alive and well" (he surely looked so on BHTV), I hope it is not considered too far off-topic.

Replies from: kodos96, Zack_M_Davis, Jack, Kevin, PeerInfinity
comment by kodos96 · 2010-05-20T21:28:30.696Z · LW(p) · GW(p)

I found this diavlog entertaining, but not particularly enlightening - the two of them seemed to mostly just be talking past each other. Pigliucci kept on conflating intelligence and consciousness, continually repeating his photosynthesis analogy, which makes sense in the context of consciousness, but not intelligence, and Eliezer would respond by explaining why that doesn't make sense in the context of intelligence, and then they'd just go in circles. I wish Eliezer had been more strict about forcing him to explicitly differentiate between intelligence/consciousness. Frustrating.... but worth watching regardless.

Note that I'm not saying I agree with Pigliucci's photosynthesis analogy, even when applied to consciousness, just that it seems at least to be coherent in that context, unlike in the context of intelligence, where it's just silly. Personally, I don't see any reason for consciousness to be substrate-dependent, but I feel much less confident in asserting that it isn't, just because I don't really know what consciousness is, so it seems more arrogant to make any definitive pronouncement about it.

Replies from: Christian_Szegedy
comment by Christian_Szegedy · 2010-05-23T08:21:17.781Z · LW(p) · GW(p)

That diavlog was a total shocker!

Pigliucci is not a nobody: he is a university professor, has authored several books, and holds three PhDs.

Still, he made an utterly confused impression on me. I don't think people must agree on everything, especially when it comes to hard questions like consciousness, but his views were so weak and incoherent that it was just too painful to watch. My head still aches... :(

comment by Zack_M_Davis · 2010-05-20T23:15:28.714Z · LW(p) · GW(p)

I personally see it as a very positive establishing of contact between "traditional rationalist/skeptic/(cis-)humanist"-community

I'm going to have to remember to use the word cishumanism more often.

Replies from: komponisto
comment by Jack · 2010-05-16T01:57:58.592Z · LW(p) · GW(p)

SIAI may have built an automaton to keep donors from panicking

comment by Kevin · 2010-05-16T10:58:41.764Z · LW(p) · GW(p)

You should post this as a top-level post for +10x karma.

comment by PeerInfinity · 2010-05-16T02:51:08.389Z · LW(p) · GW(p)

random, possibly off-topic question:

Is there an index somewhere of all of Eliezer's appearances on BHTV? Or a search tool on the BHTV site that I can use to find them?

Replies from: ata, NancyLebovitz
comment by ata · 2010-05-16T10:32:35.156Z · LW(p) · GW(p)

Direct link: http://bloggingheads.tv/search/?participant1=Yudkowsky,%20Eliezer

Replies from: PeerInfinity
comment by PeerInfinity · 2010-05-16T16:35:07.337Z · LW(p) · GW(p)

Thanks! I had tried using the search tool before, but I guess I hadn't tried searching for "Yudkowsky, Eliezer"

... oh, and it turns out that there was a note right beside the search box saying "NAME FORMAT = last, first". oops...

anyway, now I know, thanks :)

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2010-05-22T19:39:21.411Z · LW(p) · GW(p)

In general, google's site: operator is great for websites that have missing or uncooperative search functionality:

site:bloggingheads.tv eliezer

comment by NancyLebovitz · 2010-05-16T10:27:33.225Z · LW(p) · GW(p)

Orange button called "search" in the upper right hand corner.

comment by gwern · 2010-05-05T01:47:57.250Z · LW(p) · GW(p)

You can tell he's alive and well because he's posted several chapters of his Harry Potter fanfiction in that time; his author's notes lead me to believe that, as he stated long ago, he's letting LW drift so he has time to write his book.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-05-05T02:22:07.034Z · LW(p) · GW(p)

Anyway, he can't be hurt; "Somebody would have noticed."

Replies from: gwern
comment by gwern · 2010-05-05T02:41:43.297Z · LW(p) · GW(p)

Well, he would've noticed, but he's not us...

comment by CarlShulman · 2010-05-04T21:25:35.718Z · LW(p) · GW(p)

He's writing his book.

Replies from: Jack
comment by Jack · 2010-05-04T21:26:34.846Z · LW(p) · GW(p)

and Harry Potter fanfiction. Unless that was what you meant by "his book".

comment by Jack · 2010-05-04T21:42:44.137Z · LW(p) · GW(p)

Question: Who is moderating if Eliezer isn't?

Replies from: ata
comment by ata · 2010-05-05T00:32:38.361Z · LW(p) · GW(p)

The other moderators appear to be Robin Hanson, matt, and wmoore. None of them have posted in the past few days, but maybe at least one of them has been popping in to moderate from time to time. And/or maybe Eliezer is too, just not posting.

comment by RobinZ · 2010-05-04T21:26:18.921Z · LW(p) · GW(p)

Harry Potter and the Methods of Rationality updated on Sunday; it could be that writing that story is filling much of his off time.

comment by JaapSuter · 2010-05-04T19:13:13.374Z · LW(p) · GW(p)

Recycling an email I wrote in a Existential Risk Reduction Career Network discussion. The topic looked at various career options, specifically with an eye towards accumulating wealth - the two major fields recognized being finance and software development.

Frank Adamek enquired as to my (flippant) vanilla latte comments, which revealed a personal blind-spot. Namely, that my default assumption for people with an interest in accumulating wealth is that they're motivated by an interest in improving the quality of their own life (e.g., expensive gadgets, etc.).

I should know -- especially in the X-Risk Network context -- that wealth accumulation is not necessarily predominantly selfish, and that instead wealth can be an effective multiplier to benefit positive futures. Thanks for mentioning this, Frank.

The motivation for copying this email here is two-fold.

  • One, what else can further rational critique of my own rants teach me?

  • Two, I've lurked in this community for a long time, but can't muster the gusto to contribute. The quality bar for top-level posts is well beyond my thinking and writing skills. Which is great, because it means I get to learn and grow. But there's a flip side, which is the gap between Less Wrong discourse and that of my day-to-day interaction with friends, family, and coworkers. I don't have a solution to this, but perhaps an increase in open-thread comment mediocrity helps close the gap.

Ugh, probably not. Alas, here goes - posted as a reply to myself, because of comment-length limits.

Replies from: JaapSuter
comment by JaapSuter · 2010-05-04T19:14:59.800Z · LW(p) · GW(p)

In a thread called Actuarial vs. Software Engineering - what pays best?, somebody wrote:

Do any of you know how much of an option it is to start a software engineering career with a math/science but not CS background?

My response...

I encourage most people to pursue a math or science degree, rather than comp.sci., even if their long-term goals are in the field of software engineering. My opinion is based on personal hindsight (having majored in computer science, I often wish my ability to absorb and apply fundamental math or hard physics were stronger) and on eleven years of industry experience (where I've noticed an inverse correlation between the amount of formal comp.sci. training a person has had and his or her strength as a software engineer).

In regard to my personal hindsight: it could well be that, had I studied math or physics, I'd feel my comp.sci. expertise needed brushing up. That's probably true to some extent, but there's another factor, namely that many comp.sci. programs are a less-than-ideal blend of theoretical math (better obtained through a dedicated program[1]) and practical engineering (most definitely useful[2], but by its nature easily accessible in your spare time). That last point is critical: anybody who can afford a university education has access to a computer and a compiler. So why not tinker at home - you're passionate, right? Compare with programs like mechanical engineering, chemistry, and most hard physics programs - you probably don't have access to a particle accelerator or a DNA extraction lab at home.

Not yet anyway... :-)

That brings me to my observation from industry experience, namely that the best programmers I've worked with often hadn't majored in comp.sci. The point of course is not that a comp.sci. education makes for worse programmers. Rather, it's that people with the audacity and discipline to pursue hard physics or math, who also have a passion for programming, have a leg up on those who are only passionate about programming.

I'm sure there's the occasional failed particle physicist applying for a hundred programming gigs without success, but that person would've been just as unskilled as a programmer had he or she majored in comp.sci.

Replies from: JaapSuter
comment by JaapSuter · 2010-05-04T19:15:41.319Z · LW(p) · GW(p)

Having shared my view on comp.sci. education, I do wish to throw in a recommendation for pursuing a career in software development (beyond the years of formal education). Specifically in contrast to one alternative discussed earlier in this thread, namely a career in finance.

Full disclaimer: my perspective on "jobs that involve working with money" stems mostly from how the mainstream portrays them and is likely to be extremely naive. Despite what I'm about to say, I actually have a great deal of respect for money-savvy people. Considering my personal financial situation is a constant source of akrasia, I'm often envious of people who are able to wield money itself as a tool to generate more of it.

I'm realistic enough to admit that income potential is a valid factor in deciding what kind of career to pursue - like most of us, I enjoy food, shelter, and expensive gadgets. Meanwhile, I also believe nobody treats money as the only factor in choosing a career - we'd all rather work in fields we're passionate about.

So really, we have a realistic assessment of various career options - all of which promise at least a decent living. Even agreeing with comments made earlier, that programming is prole and finance has a higher likelihood of fast-tracking prestige (and as a programmer, I actually must admit there's some truth to this sentiment), my gut says that your passion and interest far outweigh these observations. I mean, we're not talking about whether you'll become a high-school janitor versus United States president. If you like money and you have a knack for using it for growth and your benefit, go to Wall Street. If you like computers and have a knack for using them for innovation, go to Silicon Valley. In both cases you'll be able to afford a grande sugar-free vanilla low-fat soy latte every morning - if that's your cup of tea.

Now all of this is fairly generic advice, nothing you weren't told already by your parents. My reason for chiming in on this discussion has (obviously) to do with how the above is affected by accelerating change. That's something most parents or advisors haven't really clued into yet, and I felt it worth pointing out.

The question is, assuming the kind of consequences from accelerating change that are commonly accepted in singularity circles; what type of careers promise the most leverage in the future? In other words, what skill set guarantees you can maintain or expand the amount of control you have over the reality that surrounds and affects you?

Presumably there won't be much contention over why leverage is an important metric. Now imagine the world one, two, or three decades from now - and ask yourself: what can I offer that is of value? Value comes in many forms; we can roughly categorize these as money, ideas (and secrets), goods, and labor (and skill). Of these, money and ideas are the ones with the most long-term potential. The value of manual labor will disappear rapidly, even skilled labor (biological enhancement notwithstanding). The value of goods will diminish as life moves from its reliance on matter to information and our ability to transform and distribute matter improves. The value of secrets is likely to exist for eternity, but those who consider this a worthy pursuit should read Snow Crash, not this email.

It's my belief the only types of leverage with future potential are money and ideas, some conditions apply.

In the case of money, the assumption is that there'll exist a legal system to assure the continuous promise of value in tender. Considering the alternative is impractical barter - or, worse, all-out chaos - I believe money will stick around for a long time. In the case of ideas, the assumption is that you can turn them into reality. An idea stuck in your head is useless, so you'll need money, skill, or both to make things happen.

But wait, didn't I just say that skilled labor is a dead-end path? Yes, when speaking of the mechanical kind (i.e., the things you can do by moving your limbs around, such as playing the piano). But when it comes to ideas (and the direction our society is heading) - the kind of skill I'm referring to is of the information-theoretic kind. Future creativity will occur primarily in a universe of bits and bytes, and the more adept you are at wielding these bits and bytes, the more leverage your ideas will have.

There is one more assumption in this, namely that creative information-based skill is of a different nature than biological mechanical skill. It may be that strong AI will leapfrog well past our human ability to merge and enhance, in which case both creative skill and mechanical skill will be displaced. If that's the case, I don't expect money will be of much value to humans for very long either, and we'll be on a short-lived dead-end path.

I'm hoping for a more optimistic future, where intellectual enhancement permits us to remain competitively creative.

So unless you have money, and use it to make more money (e.g., pursue a financial career - a valid option), I recommend people become creative experts in a digital universe. That is, study theoretical computer science (through formal math education, in your spare time, or through a career), familiarize yourself breadth-first with the entire hardware and software stack that permits the digital universe (from primitive electronics to silicon to computer architectures to machine language to assembly to compilers to higher-level languages to creative tools for both art and process improvement), and pick two or three comp.sci. specialties in which you become a depth-first expert. Ideally, you do this alongside a grounding in a hard physical science, to keep you in touch with the universe you're currently embedded in (it'll be around for a while to come).

That's what you'll need to escape from the consumer end of information, and become a creative source of information - which in turn is your future leverage and source of income. Those with the ability to command, influence, and transform the growing stream of bits and bytes will have the most value to offer (and be able to afford two sugar-free vanilla soy lattes).

On a bit of a tangential note, this is why I advocate the introduction of a mandatory comp.sci. component from kindergarten all the way up to university - on par with traditional components like math or phys-ed. To verbalize this as: "...our society relies increasingly on computers" is to state the obvious, and the point is not that everybody should become a software developer. The critical point is to raise a generation that understands the notion of algorithmic computation well enough to believe they can (in principle) be in control of a computing device, rather than it controlling them. Computers are not magic, and one day present-day humans won't be either.

Then again, even basic schooling in math and physics fails to teach many people they can (in principle) be in control of their own life. But alas, I digress - lest this become political... :-)

Long post, little value - time to return to my computer and become a better programmer. Gotta make a living...

Two cents,

Jaap Suter - http://jaapsuter.com

[1] To be clear, I love the fundamentals of computer science. It's a great passion of mine. But I believe its place in education is by and large a sub-field of math. I suspect that'll change over time, but I'm not yet sure in which direction (math absorbing computer science, or theoretical computer science growing enough meat to justify recognition as being a field on its own.)

[2] With the additional remark that the fundamental habits of good engineering are timeless and emerge from developing your expertise in the humanities (both in your ability to interact and cooperate with other people to achieve your goals, and in the study of interactions between man, his environment, and the fruits of your labor). The tools we use along the way are fleeting - software and hardware are commonly outdated by the time you've become an expert - better to recognize the underlying patterns.

comment by gwern · 2010-05-03T16:55:05.832Z · LW(p) · GW(p)

Some people have curious ideas about what LW is; from http://www.fanfiction.net/r/5782108/18/1/ :

"HO-ley **! That was awesome! You might also be interested to know that my brother, my father and I all had a wonderful evening reading that wikipedia blog on rationality that you are named for. Thank you for this, most dearly and truly."

Replies from: thomblake
comment by thomblake · 2010-05-03T17:07:39.330Z · LW(p) · GW(p)

I'm not sure I even know how to parse "wikipedia blog on rationality". But at least in some sense, we apparently are Wikipedia. Congrats.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-05-03T19:23:11.568Z · LW(p) · GW(p)

I'm not sure I even know how to parse "wikipedia blog on rationality".

The link to Less Wrong in Eliezer's fanfiction.net profile takes you to the wiki page for the sequences. So they were in a wiki, which I guess they figured was part of Wikipedia.

Replies from: ata
comment by ata · 2010-05-03T20:14:50.504Z · LW(p) · GW(p)

So they were in a wiki, which I guess they figured was part of Wikipedia.

A lot of people think that every wiki is a part of Wikipedia or the same thing as Wikipedia, or that "Wikipedia" is a common noun meaning "wiki", or that every Wiki has to be a 'Pedia of some sort. And most people don't know that the first wiki predated Wikipedia by six years, so they assume Wikipedia invented the concept.

Replies from: Douglas_Knight, Sniffnoy
comment by Douglas_Knight · 2010-05-07T05:50:18.761Z · LW(p) · GW(p)

A lot of people think that every wiki is a part of Wikipedia or the same thing as Wikipedia, or that "Wikipedia" is a common noun meaning "wiki", or that every Wiki has to be a 'Pedia of some sort.

I'm not sure what that third hypothesis means, but the first two seem very different to me, and it seems worth knowing how popular those two beliefs are.

Replies from: ata, JoshuaZ
comment by ata · 2010-05-07T06:18:06.507Z · LW(p) · GW(p)

I'm not sure what that third hypothesis means

By "every Wiki has to be a 'Pedia of some sort", I was referring to this observation:

Over the first six years of wikis, most were informally-organized, and mixed discussion in with factual content. They gathered information, but focused on (or at least allowed) discussion and socializing; they did not resemble encyclopedias. (The original wiki, Ward's Wiki AKA WikiWikiWeb AKA the Portland Pattern Repository, is a good surviving example of this format, as is CocoaDev, the first wiki I encountered, plus the SL4 Wiki.)

But people often assume Wikipedia-like rules, norms, and structure apply to every wiki. I own and (now only occasionally) edit a wiki about a little-known band, documenting its albums, songs, concerts, etc., and its fan culture. Early on, a few people mistakenly assumed that rules like NPOV, notability, and encyclopedicness applied there. I've seen this elsewhere too, but it's admittedly getting harder to find incorrect examples of such assumptions, because most wikis these days (at least the big ones) are modeled after Wikipedia, just within fictional universes or within more specific domains than "all human knowledge".

(Also, to "...or that 'Wikipedia' is a common noun meaning 'wiki'", let me add "or that 'wiki' is an abbreviation for 'Wikipedia'". I'm not the sort who will cling to old definitions as the True Essence of normally evolving words, but given the wide historical and current use of "wiki" for sites unrelated to Wikipedia, I reserve the right to get mildly annoyed at people who say "wiki" as shorthand for "Wikipedia".)

Replies from: JoshuaZ, NancyLebovitz, NancyLebovitz
comment by JoshuaZ · 2010-05-07T06:22:33.214Z · LW(p) · GW(p)

For what it is worth, the Wikimedia foundation doesn't like people using "wiki" for Wikipedia. Most Wikipedians don't like it either. And neither does Wikipe-tan as she makes clear here

Edit: Apparently the software interprets a closing parenthesis in a URL as the end of the URL string. This is an unfun bug. Using a short url to avoid the problematic parsing.

Replies from: ata
comment by ata · 2010-05-07T06:28:21.790Z · LW(p) · GW(p)

I think you can backslash-escape parentheses in URLs to avoid that bug (or that unexpected-but-correct-according-to-the-spec behaviour, rather).

Testing it: blah.png)

comment by NancyLebovitz · 2010-05-07T14:32:08.348Z · LW(p) · GW(p)

Although it doesn't apply in this case, do you think the common use of WikiMedia (which defaults to the same dignified blue and gray look that Wikipedia has) contributes to the problem?

Do people expect TvTropes to be like Wikipedia?

Replies from: ata
comment by ata · 2010-05-07T14:38:33.102Z · LW(p) · GW(p)

It probably contributes to it. It's pretty easy to assume a site is identical or similar to Wikipedia when they look almost identical. (Nitpick: Wikimedia is the foundation that owns Wikipedia and its related sites. The wiki software that they develop and run is called MediaWiki.)

TvTropes probably doesn't suffer from this problem too much because 1) it doesn't have "wiki" in its name; 2) it doesn't run MediaWiki or look like it; and 3) the home page has a paragraph that starts "We are not Wikipedia. We're a buttload more informal. . . ."

comment by NancyLebovitz · 2010-05-07T10:48:56.845Z · LW(p) · GW(p)

I wonder if that sort of thing should be added to the list of biases-- it's being so influenced by the most prominent example that one no longer perceives the range of possibility. It seems something like framing, but not exactly it.

Replies from: ata
comment by ata · 2010-05-07T12:04:16.036Z · LW(p) · GW(p)

I don't know if that's quite what's happening here. It's probably more that Wikipedia (and maybe a few other heavily Wikipedia-inspired, MediaWiki-based sites) is the only exposure most people will have to the wiki concept. The range of possibility didn't exist in their minds in the first place.

I'm not sure if the effect whereby it skews people's expectations of other later-discovered wikis is something like a qualitative rather than numeric version of anchoring (is there any research on that? Does it have a name?), or if it's just an unsurprising and possibly rational result of people originally seeing "wiki" associated with a single site and not a larger category. If a person is casually familiar with Wikipedia, and they hear their friends call it "wiki", and they've never heard of the general wiki concept... and then they happen upon CocoaDev, see that it describes itself as a wiki (which, to them, was previously not even a one-element category but just a single website; it would seem analogous to Bing calling itself "a google"), and import their expectations about "wiki"... then is that really a bias if they find many aspects of CocoaDev's structure very surprising?

Maybe it's a bias specifically if they fail to update their understanding of the concept "wiki" and instead assume that CocoaDev is doing something wrong.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-05-07T12:17:49.559Z · LW(p) · GW(p)

I don't know if that's quite what's happening here. It's probably more that Wikipedia (and maybe a few other heavily Wikipedia-inspired, MediaWiki-based sites) is the only exposure most people will have to the wiki concept. The range of possibility didn't exist in their minds in the first place.

Fair enough.

It could be described as a sort of group bias. People would have been capable of seeing a range of possibility except that a strong example channels their minds.

comment by JoshuaZ · 2010-05-07T06:01:55.904Z · LW(p) · GW(p)

The first one is not at all uncommon. Although I don't have any citations off the top of my head, as a (not very active) admin for Wikinews I can say that very often news sources credit us as "Wikipedia."

comment by Sniffnoy · 2010-05-03T21:10:23.433Z · LW(p) · GW(p)

I have to wonder where they think Wikipedia got its name from...

Replies from: bogdanb
comment by bogdanb · 2010-05-05T11:37:40.460Z · LW(p) · GW(p)

With all the nonsensical “cool” prefixes (see iPod, XBox), “cool” etymologically-challenged names (Skype, Google), and “cool” weird-spelling-that-kind-of-suggests something (Syfy) going on, I don’t blame people for thinking any new name they encounter is simply made up for no reason.

comment by Kevin · 2010-05-14T04:18:53.149Z · LW(p) · GW(p)

"Effects of nutritional supplements on aggression, rule-breaking, and psychopathology among young adult prisoners"

Likely the effects were due to the fish oil. This study was replicating similar results seen in a UK youth prison.

http://www3.interscience.wiley.com/journal/123213582/abstract?CRETRY=1&SRETRY=0

Also see this other study of the use of fish oil to prevent the onset of schizophrenia in a population of youth that had had one psychotic episode or similar reason to seek treatment. The p-values they got are ridiculous -- fish oil appears to be way more effective in reality than I would have expected.

http://archpsyc.ama-assn.org/cgi/content/short/67/2/146

Take your fish oil, people.

Replies from: PhilGoetz, Jack
comment by PhilGoetz · 2010-05-15T04:08:08.090Z · LW(p) · GW(p)

What about snake oil?

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-05-15T06:24:31.966Z · LW(p) · GW(p)

I don't know if you're kidding or scoffing, but I will give a straight answer.

Richard Kunin, M.D., once analyzed snake oil and found that it is 20 or 25% omega-3 fatty acids.

comment by Jack · 2010-05-14T04:32:23.796Z · LW(p) · GW(p)

The link is giving me trouble. Can you paste the whole abstract?

Replies from: Kevin
comment by Kevin · 2010-05-14T04:36:10.570Z · LW(p) · GW(p)

Effects of nutritional supplements on aggression, rule-breaking, and psychopathology among young adult prisoners

ABSTRACT Objective: In an earlier study, improvement of dietary status with food supplements led to a reduction in antisocial behavior among prisoners. Based on these earlier findings, a study of the effects of food supplements on aggression, rule-breaking, and psychopathology was conducted among young Dutch prisoners.

Methods: Two hundred and twenty-one young adult prisoners (mean age=21.0, range 18-25 years) received nutritional supplements containing vitamins, minerals, and essential fatty acids or placebos, over a period of 1-3 months.

Results: As in the earlier (British) study, reported incidents were significantly reduced (P=.017, one-tailed) in the active condition (n=115), as compared with placebo (n=106). Other assessments, however, revealed no significant reductions in aggressiveness or psychiatric symptoms.

Conclusion: As the incidents reported concerned aggressive and rule-breaking behavior as observed by the prison staff, the results are considered to be promising. However, as no significant improvements were found in a number of other (self-reported) outcome measures, the results should be interpreted with caution. Aggr. Behav. 36:117-126, 2010. © 2009 Wiley-Liss, Inc.

comment by ata · 2010-05-05T08:47:44.803Z · LW(p) · GW(p)

Is it possible to change the time zone in which LW displays dates/times?

comment by clay · 2010-05-04T01:33:33.642Z · LW(p) · GW(p)

Would it be reasonable to request a LW open thread digest to accompany these posts? A simple bullet list of most of the topics covered would be nice.

Replies from: RobinZ
comment by RobinZ · 2010-05-04T02:38:54.524Z · LW(p) · GW(p)

On the Wiki, perhaps? A bit of a pain to update it, admittedly...

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-05-04T03:02:29.019Z · LW(p) · GW(p)

Would there need to be titles for open thread discussions? Open tagging?

Replies from: RobinZ
comment by RobinZ · 2010-05-04T03:09:05.206Z · LW(p) · GW(p)

"Open Thread Digest: May 2010" seems fine to me. And I think we're supposed to make top-level posts if we get anything tag-worthy out of a thread.

comment by CronoDAS · 2010-05-01T21:54:20.353Z · LW(p) · GW(p)

Hooray! Hooray! It's First of May!

Replies from: None
comment by [deleted] · 2010-05-01T23:26:50.669Z · LW(p) · GW(p)

It's too late for me. It's the Second of May over here. :' (

comment by Thomas · 2010-05-01T17:55:06.172Z · LW(p) · GW(p)

Question: How many of you, readers and contributers here on this site, actually do work on some (nontrivial) AI project?

Or have an intention to do that in the future?

Replies from: ata, Baughn, Daniel_Burfoot, kpreid, Kazuo_Thow
comment by ata · 2010-05-03T06:14:38.467Z · LW(p) · GW(p)

Yes, I have an intention to do so, because I'm convinced that it is very important to the future of humanity. I don't quite know how I'll be able to contribute yet, but I think I'm smart and creative enough that I'll be able to acquire the necessary knowledge and thinking habits (that's the part I'm working on these days) and eventually contribute something novel, if I can do all that soon enough for it to matter.

comment by Baughn · 2010-05-01T18:52:27.499Z · LW(p) · GW(p)

I'm working on one as part of a game, where I'm knocking off just about every concept I've run into - goal systems, eurisko-type self-modifying code, AIXI, etc. I'll claim it's nontrivial because the game is, and I very much intend to make it unusually smart by game standards.

But that's not really true AI. It's for fun, as much as anything else. I'm not going to claim it works very well, if at all; it's just interesting to see what kind of code is involved.

(I have, nevertheless, considered FAI. There's no room to implement it, which was an interesting thing to discover in itself. Clearly my design is insufficiently advanced.)

Replies from: Richard_Kennaway, kpreid, CronoDAS
comment by Richard_Kennaway · 2010-05-07T09:49:10.228Z · LW(p) · GW(p)

I happened to see this today, which you might find interesting. He's using genetic algorithms to make the creatures that the player has to kill evolve. At one point they evolved to exploit bugs in the game.

comment by kpreid · 2010-05-07T00:09:14.171Z · LW(p) · GW(p)

As a programmer, I'm curious exactly how there is no room to implement it. (I understand the “no room” concept, but want details.)

Replies from: Baughn
comment by Baughn · 2010-05-07T08:29:08.036Z · LW(p) · GW(p)

The most obvious problem, which I suspect most games would have in common, is that it has no notion that it's a game. As far as the AI is concerned, the game world is all there is.

It wants to win a war, and it has no idea that there's a player on the other side. Building up its understanding to the point where that is not the case would be, well, both way too much work and probably beyond my abilities.

comment by CronoDAS · 2010-05-04T21:34:41.125Z · LW(p) · GW(p)

May I ask what game?

Replies from: Baughn
comment by Baughn · 2010-05-06T15:24:08.501Z · LW(p) · GW(p)

You can ask, but at the moment it's more of a design document plus some proof-of-concept algorithms. 99% incomplete, in other words, and I don't really want to get people excited over something that might never come to fruition.

I can't really describe the game, because that'd be mostly wishful thinking, but perhaps some design criteria will satisfy your curiosity. So, some highlights I guess:

  • 4X space-based RTS. Realism is important: I want this to look somewhat like reality, with the rule of fun applied only where it has to be, not as an attempt to justify bad physics.

  • Therefore, using non-iterative equations where possible (and some places they really shouldn't be used) to allow for drastic changes in simulation speed - smoothly going from slower than realtime during battles to a million times faster for slower-than-light interstellar travel. That is to say, using equations that do a constant amount of work to return the state at time T, instead of doing work proportional to the amount of in-game time that has passed.

  • Therefore, having a lot of systems able to work (and translate between) multiple levels of abstraction. Things that require an iterative simulation to look good when inspected in real-time may be unnoticeably off as a cheaper abstraction if time moves a thousand times faster.

  • To support this, I'm using an explicit cause-effect dependency graph, which led me to...

  • Full support for general relativity. Obviously that makes it a single-player game, but the time compression pretty much requires that already.

Causality, FTL, Relativity - pick any two. I'm dropping causality. The cause-effect graph makes it relatively (ha ha, it is to laugh - theoretically it's just looking for cycles, but the details are many) simple to detect paradoxes. What happens if there are paradoxes, though... that, I don't know yet. Anything from gribbly lovecraftian horrors to wormholes spontaneously collapsing will do.
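(Not Baughn's actual code - just a minimal sketch of the "looking for cycles" part, assuming the cause-effect graph is stored as a plain mapping from each event to the events it causes; all names and the example graph are made up for illustration.)

```python
# Hypothetical sketch: detect causal paradoxes as cycles in a cause-effect graph.
# Keys are events; values are the events they cause.
def find_paradox(causes):
    """Return a causal cycle (list of events) if one exists, else None."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on current path / finished
    colour = {e: WHITE for e in causes}
    stack = []

    def visit(event):
        colour[event] = GREY
        stack.append(event)
        for effect in causes.get(event, ()):
            if colour.get(effect, WHITE) == GREY:          # back-edge: a paradox
                return stack[stack.index(effect):] + [effect]
            if colour.get(effect, WHITE) == WHITE:
                cycle = visit(effect)
                if cycle:
                    return cycle
        stack.pop()
        colour[event] = BLACK
        return None

    for event in list(causes):
        if colour[event] == WHITE:
            cycle = visit(event)
            if cycle:
                return cycle
    return None

# Made-up example: a ship's FTL jump lets it destroy the wormhole it arrived through.
graph = {"jump": ["arrive"], "arrive": ["destroy wormhole"], "destroy wormhole": ["jump"]}
print(find_paradox(graph))   # ['jump', 'arrive', 'destroy wormhole', 'jump']
```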

Hopefully, I'll find the time to work on it, because it sounds like an interesting game to play. :P

Replies from: Risto_Saarelma, Slider
comment by Risto_Saarelma · 2010-05-06T21:22:57.866Z · LW(p) · GW(p)

Ambitious time travel (or anomalous causality) game mechanics are fun.

There's the Achron RTS game which involves some kind of multiplayer time travel. As far as I can tell, they deal with paradoxes by cheating with a "real" time point that progresses normally as the players do stuff. There is only a window of a few minutes around the real time point to do time-travel stuff in, and things past the window get frozen into history. Changes in the past also propagate slowly into the rest of the timeline as the real time point advances. So paradoxes end up as oscillations of timewaves until some essential part moves out of the time travel window and gets frozen in an arbitrary state.

I'm not sure how well something similar could work with a relativistic space game where you end up displaced from time by just moving around instead of using gamedev-controllable magic timetravel juice.

Your concept also kinda reminds me of a very obscure Amiga game called Gravity. Based on the manual it had relativistic space-time, programmable drones, terraforming planets from gas giants and all sorts of hard SF spacegame craziness not really seen in games pretty much ever nowadays.

Replies from: Baughn
comment by Baughn · 2010-05-06T21:33:28.954Z · LW(p) · GW(p)

I've been playing Achron, but it's not really an inspiration. How should I put it..

My understanding of physics is weak enough without trying to alter it. If I stick as closely as possible to real-life physics, I know I won't run into any inconsistencies.

Therefore, there will be no time-travel. I might do something cute with paradoxes later, but the immediate solution for those is to blow the offending ship or wormhole up, as real-life wormholes have been theorized to do via virtual particles.

Replies from: aleksiL
comment by aleksiL · 2010-05-08T07:10:27.473Z · LW(p) · GW(p)

Blow up the paradox-causing FTL? Sounds like that could be weaponized.

I was about to go into detail about the implications of FTL and relativity but realized that my understanding is way too vague for that. Instead, I googled up a "Relativity and FTL travel" FAQ.

I love the idea of strategically manipulating FTL simultaneity landscape for offensive/defensive purposes. How are you planning to decide what breaks and how severely if a paradox is detected?

Replies from: Baughn
comment by Baughn · 2010-05-08T10:02:54.005Z · LW(p) · GW(p)

I think the only possible answer to that is "through play-testing".

As I understand it, real-life wormhole physics gives enormous advantages to a defender. However, this is a wargame, so I will have to limit that somewhat. Exactly how, and to what degree - well, that's something I will be confronting in a year or two.

(And yes, it could be weaponized. Doing so might not be a good idea, depending on the lovecraft parameter, but you can certainly try.)

comment by Slider · 2023-01-09T18:34:44.550Z · LW(p) · GW(p)

Did this ever get made? I have had (what feels like separate) intentions to make a game with most of the bullet points (minus relativity).

I have an (at least skill-implicit) understanding of how one would account for causality (essentially meta-time).

comment by Daniel_Burfoot · 2010-05-01T23:42:52.828Z · LW(p) · GW(p)

I am writing a book about a new approach to AI. The book is a roadmap; after I'm finished, I will follow the roadmap. That will take many years.

I have near-zero belief that AI can succeed without a major scientific revolution.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-05-02T00:34:42.717Z · LW(p) · GW(p)

I'm interested in what sort of scientific revolution you think is needed and why.

Replies from: Daniel_Burfoot, Thomas
comment by Daniel_Burfoot · 2010-05-02T00:56:50.337Z · LW(p) · GW(p)

Well... you'll have to read the book :-)

Here's a hint. Define a scientific method as any process by which reliable predictions can be obtained. Now observe that human children can learn to make very reliable predictions. So they must be doing some sort of science. But they don't make controlled experiments. So our current understanding of the scientific method must be incomplete: there is some way of obtaining reliable theories about the world other than the standard theorize/predict/test loop.

Replies from: SilasBarta, SilasBarta, kpreid, NancyLebovitz, Morendil, PeterS
comment by SilasBarta · 2010-05-04T22:02:13.748Z · LW(p) · GW(p)

Now observe that human children can learn to make very reliable predictions. So they must be doing some sort of science.

Danger! You're not looking at the whole system. Children's knowledge doesn't just come from their experience after birth (or even conception), but is implicitly encoded by the interplay between their DNA, the womb, and certain environmental invariants. That knowledge was accumulated through evolution.

So children are not, in their early development, using some really awesome learning algorithm (and certainly not a universally applicable one); rather, they are born with a huge "boost", and their post-natal experiences need to fill in relatively little information, as that initial, implicit knowledge heavily constrains how sensory data should be interpreted and gives useful assumptions that help in modeling the world.

If you were to look only at what sensory data children get, you would find that it is woefully insufficient to "train" them to the level they eventually reach, no matter what epistemology they're using. It's not that there's a problem with the scientific method (though there is), or that we have some powerful Bayesian algorithm to learn from childhood development, but rather, children are springboarding off of a larger body of knowledge.

You seem to be endorsing the discredited "blank slate" paradigm.

A better strategy would be to look at how evolution "learned" and "encoded" that data, and how to represent such assumptions about this environment, which is what I'm attempting to do with a model I'm working on: it will incorporate the constraints imposed by thermodynamics, life, and information theory, and see what "intelligence" means in such a model, and how to get it.

(By the way, I made essentially this same point way back when. I think the same point holds here.)

Replies from: timtyler, Daniel_Burfoot
comment by timtyler · 2010-05-22T12:43:29.893Z · LW(p) · GW(p)

Re: "If you were to look only at what sensory data children get, you would find that it is woefully insufficient to "train" them to the level they eventually reach, no matter what epistemology they're using."

That hasn't been demonstrated - AFAIK.

Children are not blank slates - but if they were highly intelligent agents with negligible a-priori knowledge, they might well wind up eventually being much smarter than adult humans. In fact that would be strongly expected - for a sufficiently smart agent.

comment by Daniel_Burfoot · 2010-05-05T01:38:13.773Z · LW(p) · GW(p)

That knowledge was accumulated through evolution.

If large amounts of our knowledge base was encoded through evolution, we would see people with weird, specific cognitive deficits - say, the inability to use nouns - as a result of genetic mutations. Or, more obviously, we would observe people who have functioning eyes but nevertheless can't see because they failed to learn how. Now, we do see people with strange specific deficits, but only as a result of stroke or other brain damage. The genetic deficits we do see are all much more general.

If you were to look only at what sensory data children get, you would find that it is woefully insufficient to "train" them to the level they eventually reach, no matter what epistemology they're using.

How do you know? Are you making some sort of Chomskian poverty of the stimulus argument?

Replies from: JoshuaZ, SilasBarta
comment by JoshuaZ · 2010-05-05T01:44:26.598Z · LW(p) · GW(p)

If large amounts of our knowledge base was encoded through evolution, we would see people with weird, specific cognitive deficits - say, the inability to use nouns - as a result of genetic mutations.

That's not necessarily the case. You are assuming a much more narrow encoding system than necessary. One doesn't need a direct encoding of specific genes going to nouns or the like. Remember, evolution is messy and doesn't encode data in the direct fashion that a human would. Moreover, some problems we see are in fact pretty close to this. For example, many autistic children have serious problems handling how pronouns work (such as some using "you" to refer to themselves and "I" to refer to anyone else). Similarly, there's a clear genetic distinction in language processing between humans and other primates in that many of the "sentences" constructed by apes which have been taught sign language completely lack verbs and almost never have any verb other than an imperative.

comment by SilasBarta · 2010-05-05T03:13:22.444Z · LW(p) · GW(p)

If large amounts of our knowledge base was encoded through evolution, we would see people with weird, specific cognitive deficits - say, the inability to use nouns - as a result of genetic mutations.

Well, people do have weird, specific cognitive deficits, but not of that kind. Remember, grammatical structures themselves are the result of human-specific tendencies to form tools into a shape our native hardware is already capable of handling (pardon the metaphor overload). Grammatical structures are not a list of arbitrary rules imposed by aliens, but conventions that already make sense to human brains.

In any case, keep in mind that I said the information accumulated by evolution is stored in the interplay of the genes and the womb, and invariant features of the environment. The way that e.g. noun use comes about is a result of a combination of these; like JoshuaZ notes, there needn't be a gene-to-noun mapping under this theory.

Or, more obviously, we would observe people who have functioning eyes but nevertheless can't see because they failed to learn how.

We do see this! It's possible to be blind despite having functioning eyes simply because the brain didn't receive sensory information from the eyes early on. It's called Amblyopia.

The problem is caused by either no transmission or poor transmission of the visual stimulation through the optic nerve to the brain for a sustained period of dysfunction or during early childhood thus resulting in poor or dim vision.

In other words, an expected environmental invariant -- light being regularly fed through the eye -- wasn't present, preventing the manifestation of the accumulated knowledge of evolution.

How do you know? Are you making some sort of Chomskian poverty of the stimulus argument?

I'm making a POTS argument, but more based on reading Pinker. There are patterns common to all languages, and there are kinds of grammatical errors children never make. This, along with similar phenomena in other areas, shows that children aren't blank slates that accept whatever they get, but are born with a kind of pre-formatting that has certain expectations of their stimuli that causes them to constrain the solution set to the point where they don't need the massive data that would be necessary to train a blank slate.

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2010-05-06T03:01:05.896Z · LW(p) · GW(p)

We seem to be arguing over a minor point. All knowledge comes from a combination of evolution and learning. We disagree about how much comes from one or the other.

I'll say one thing about the POTS argument, though. The basic idea is that people compare the amount of linguistic data absorbed by the child to his linguistic competence, find that the latter cannot be explained by the former, and conclude that there must be some sort of built-in language module. But they might be oversimplifying the data vs. competence comparison. What really happens is that the child absorbs a huge amount of visual and motor data, as well as a relatively smaller amount of linguistic data, and comes out with sophisticated competence in all three domains. So it may very well be that the linguistic competence is built on top of the visual and motor competences: the learning algorithm builds modules to understand visual reality, justified by the massive amount of visual data that is available, and then is able to reuse these modules to produce sophisticated linguistic competence in spite of the impoverished linguistic data source. Language, in this view, is a thin wrapper over the representations built by the learning algorithm for other purposes. This argument is supported by the existence of mirror neurons.

Replies from: SilasBarta
comment by SilasBarta · 2010-05-06T03:43:41.286Z · LW(p) · GW(p)

We seem to be arguing over a minor point. All knowledge comes from a combination of evolution and learning. We disagree about how much comes from one or the other.

Well, earlier, the way you had stated your position, it looked like you were saying that all knowledge acquisition (or nearly all) comes from sense data, and children use some method, superior to scientific experimentation, to maximally exploit that data. If you grant a role for evolution to be "passing correct answers" to human minds, then yes, our positions are much closer than I had thought.

But still, it's not enough to say "evolution did it". You would have to say how the process of evolution -- which works only via genes -- gains that knowledge and converts it into a belief on the part of the organism. Your research program, as you've described it, mentions nothing about this.

I'll say one thing about the POTS argument, though. ... they might be oversimplifying the data vs. competence comparison. What really happens is that the child absorbs a huge amount of visual and motor data, as well as a relatively smaller amount of linguistic data, and comes out with sophisticated competence in all three domains. So it may very well be that the linguistic competence is built on top of the visual and motor competences: the learning algorithm builds modules to understand visual reality, justified by the massive amount of visual data that is available, and then is able to reuse these modules to produce sophisticated linguistic competence in spite of the impoverished linguistic data source. Language, in this view, is a thin wrapper over the representations built by the learning algorithm for other purposes.

The problem of vision (inference of a 3-D scene from a 2-D image) is ill-posed and has an even more intractable search space. It doesn't seem like a child's brain (given the problem of local optima) even has enough time to reach the hypothesis that a 3-D scene is generating the sense data. But I'd be happy to be proven wrong by seeing an algorithm that would identify the right hypothesis without "cheating" (i.e. being told where to look, which is what I claim evolution does).

This argument is supported by the existence of mirror neurons.

How so? Mirror neurons still have to know what salient aspect of the sense data they're supposed to be mirroring. It's not like there's a one-to-one mapping between "monkey see" and "monkey do".

comment by SilasBarta · 2010-05-05T17:46:23.764Z · LW(p) · GW(p)

Okay, I think I should take a minute to clarify where exactly we disagree. Starting from your conclusion:

So our current understanding of the scientific method must be incomplete: there is some way of obtaining reliable theories about the world other than the standard theorize/predict/test loop.

This by itself isn't objectionable: of course you can move your probability distribution on your future observations closer to reality's true probability distribution without controlled experiments. And Bayesian inference is how you do it.

But you also say:

Now observe that human children can learn to make very reliable predictions. So they must be doing some sort of science. But they don't make controlled experiments

I agree that children learn how to solve AI-complete problems, including reliable prediction in this environment (and also face-recognition, character-recognition, bipedal traversal of a path barring obstacles, etc.). But you seem to have already concluded (too hastily, in my opinion) that the answer lies in a really good epistemology that children have that allows them to extract near-maximal knowledge from the data in their experiences.

I claim that this ignores other significant sources of the knowledge children have, which can explain how they gain (accurate) knowledge even when it's not entailed by their sense data. For example, if some other process feeds them knowledge -- itself gained through a reliable epistemology -- then they can have beliefs that reflect reality, even though they didn't personally perform the (Bayes-approximating) inference on the original data.

So that answers the question of how the person got the accurate belief without performing lots of controlled experiments, and the problem regresses to that of how the other process gained that knowledge and transmitted it to the person. And I say (based on my reading of Pinker's How the Mind Works) that the most likely possibility for the "other process" is that of evolution.

As for the transmission mechanism, it's most likely the interplay between the genes, the womb, and reliably present features of the environment. All of these can be exploited by evolution, in very roundabout ways, to increase fitness. For example, the DNA/womb system can interact in just the right way to give the brain a certain structure, favorable to some "rituals of cognition" but not others.

This is why I don't expect you to find a superior epistemology by looking at how children handle their experiences -- you'll be stuck wondering why they make one inference from the data rather than another that's just as well-grounded but wrong. Though I'm still interested in hearing why you think you've made progress and what insights your method has given you.

comment by kpreid · 2010-05-02T02:56:26.304Z · LW(p) · GW(p)

I am reminded of a phrase from Yudkowsky's An Intuitive Explanation of Bayes' Theorem, which I was rereading today for no particularly good reason:

What is the so-called Bayesian Revolution now sweeping through the sciences, which claims to subsume even the experimental method itself as a special case?

comment by NancyLebovitz · 2010-05-02T01:17:57.057Z · LW(p) · GW(p)

On the off-chance you haven't heard about this: Unconscious statistical processing in learning languages.

comment by Morendil · 2010-05-02T09:31:45.878Z · LW(p) · GW(p)

Define a scientific method as any process by which reliable predictions can be obtained

At the risk of being blunt, that sounds like a Humpty Dumpty move. There are many processes which yield reliable predictions that we don't call science, and many processes we identify as part of the scientific method which don't yield reliable predictions.

What you've said above can be re-expressed as "if we think theorize/predict/test is the only way to make reliable predictions about the world, then our current understanding of how to make reliable predictions is incomplete". Well, I agree. :)

comment by PeterS · 2010-05-17T22:16:49.379Z · LW(p) · GW(p)

It's been 30+ years since Paul Feyerabend wrote Against Method, and the idea that the "scientific method" is nonexistent is no longer even the heresy it once was. He wrote that science is founded on methodological diversity, the only necessary prerequisite of any method's inclusion being that it works. It sounds a bit like what you're getting at, and I'd recommend looking into it if you haven't already.

comment by Thomas · 2010-05-02T07:08:24.659Z · LW(p) · GW(p)

You apparently think that it isn't necessary. I am quite sure it isn't, too. We need some technical innovations, yes, but from the scientific point of view, it's done.

On the contrary! Any major scientific revolution could use some AI power. I am not sure that so-called Quantum Gravity (or a String Theory of a kind) can be constructed in a reasonable time without big AI involvement. Could be too hard for a naked human mind.

So yes, we probably need the AI for a big scientific revolution, but no more scientific revolutions to build AI.

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2010-05-03T01:04:33.151Z · LW(p) · GW(p)

You apparently think that it isn't necessary. I am quite sure it isn't, too. We need some technical innovations, yes, but from the scientific point of view, it's done.

Are you familiar with the current state of the art in AI? Can you point to a body of work that you think will scale up to AI with a few more "technical innovations"?

Replies from: Thomas
comment by Thomas · 2010-05-03T04:42:24.439Z · LW(p) · GW(p)

What art? What are you talking about? Every random action can go as art.

Art is definitively not an AI problem.

Replies from: RobinZ
comment by RobinZ · 2010-05-03T04:46:36.878Z · LW(p) · GW(p)

Are you a native English speaker? State of the art refers to the best developed techniques and knowledge in a field.

Replies from: Thomas
comment by Thomas · 2010-05-03T08:53:14.718Z · LW(p) · GW(p)

I am not a native English speaker. But I do know what "state of the art" means. However, instead of debating much about that, I would first like to see an older question answered: the one from NancyLebovitz. It is above, the same one I have emphasized a little in a reply.

What scientific breakthroughs do we need before we can build a decent AI?

Replies from: Clippy, NancyLebovitz
comment by Clippy · 2010-05-04T23:13:43.901Z · LW(p) · GW(p)

Sprichst du lieber Deutsch? Das ist eine Sprache, die ich auch kann. Willst du, dass ich manchmal für dich übersetze?

ETA: Wow, I knew humans were extremely bigoted toward those not like them, but I would never have guessed that they'd show such bigotry toward someone merely for helping a possibly-German-speaking poster to communicate. Bad apes! No sex for you!

Replies from: Thomas, gregconen
comment by Thomas · 2010-05-05T14:47:53.455Z · LW(p) · GW(p)

Unfortunately, my German is even worse than my English. A Google Translator chip into my head would be quite handy, already.

OTOH ... the spelling is already installed in my browser, so I can spell much less wrong. ;-)

comment by gregconen · 2010-05-04T23:38:25.833Z · LW(p) · GW(p)

It may have helped if you'd explained yourself to onlookers in English, or simply asked in English (given Thomas's apparent reasonable fluency).

I disagree with the downvotes, though.

Replies from: Clippy
comment by Clippy · 2010-05-05T13:19:48.518Z · LW(p) · GW(p)

All I said was, "Do you prefer to speak German? That's a language that I can also do [sic]. Do you want me to sometimes translate for you?"

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-05T13:35:09.923Z · LW(p) · GW(p)

The decision to start speaking in German where it's unnecessary for communicating what you've said, was stupid, and should be punished accordingly.

Replies from: Clippy
comment by Clippy · 2010-05-05T13:56:59.773Z · LW(p) · GW(p)

Sex is unnecessary and stupid too, ape. How about some tolerance for other's differences? Oh, right, I forgot.

comment by NancyLebovitz · 2010-05-03T09:36:09.739Z · LW(p) · GW(p)

I think it was answered-- a better understanding of how informal (non-scientific) learning works.

That might not be all that's needed for AI, but I'm sure it's a crucial piece.

Replies from: Thomas
comment by Thomas · 2010-05-03T10:34:41.922Z · LW(p) · GW(p)

And that would be a scientific revolution - a better understanding of how informal (non-scientific) learning works?

Let us use such potent labels carefully!

comment by kpreid · 2010-05-01T22:35:49.066Z · LW(p) · GW(p)

I used to be interested in working on AI, but my current general understanding of the field indicates that for me to do anything worthwhile in the field would require acquiring a lot of additional knowledge and skills — or possibly having a different sort of mind. I am spending my time on other projects where I more readily see how I can do something worthwhile.

comment by Kazuo_Thow · 2010-05-01T18:24:12.498Z · LW(p) · GW(p)

Count me as "having an intention to do that in the future". Although I'm currently just an undergraduate studying math and computer science, I hope to (within 5-10 years) start doing everything I can to help with the task of FAI design.

comment by PhilGoetz · 2010-05-01T15:40:23.605Z · LW(p) · GW(p)

I was going thru the rationality quotes, and noticed that I always glanced at the current point score before voting. I wasn't able to not do that.

It might be useful to have a setting under which the points on a comment, and maybe also on a post, would be hidden until after you voted on it.

Replies from: Rain, anonym, NancyLebovitz, RobinZ
comment by Rain · 2010-05-01T16:15:00.863Z · LW(p) · GW(p)

Marcello posted an anti-kibitzer Greasemonkey script which does that. It'd be nice to have it as core functionality of the site though, yeah.

Replies from: Morendil
comment by Morendil · 2010-05-01T16:46:14.261Z · LW(p) · GW(p)

Been working on it - it's actually committed to the LW codebase - but not released yet due to browser issues. Finding a design that avoids those is more work, not sure when I can commit to taking it on.

comment by anonym · 2010-05-02T05:51:52.154Z · LW(p) · GW(p)

If you just want to not see the scores, but still see the author names, then Marcello's script isn't appropriate, because I think that hides the author as well.

I just added a LessWrong Comment Score Tweaks script that can be used to hide/display scores or toggle between visible and hidden.

After you install greasemonkey and that script, the "user script commands" menu of greasemonkey on lesswrong pages will contain:

  • Toggle Comment Scores
  • Hide Comment Scores
  • Show Comment Scores

There are also some key bindings defined for these. They don't work for me, but they might work for others.

comment by NancyLebovitz · 2010-05-01T16:05:53.100Z · LW(p) · GW(p)

For that matter, it might be good to have the option of not automatically seeing one's karma score-- I think I give mine more attention than it's worth, and I can't not see it if it's at the top of the page.

comment by RobinZ · 2010-05-01T16:16:29.556Z · LW(p) · GW(p)

There's currently a Greasemonkey script - "no kibitz", I think (I'm browsing by mobile, hard to look things up) - that does this; someone said they were working on adding it to the codebase a while ago.

comment by MartinB · 2010-05-01T13:02:29.526Z · LW(p) · GW(p)

Question: How do you apply the rationalist ideas you learned on lesswrong in your own (professional and/or private) life?

Replies from: Bo102010, ShardPhoenix, None
comment by Bo102010 · 2010-05-01T14:52:08.946Z · LW(p) · GW(p)

I remind myself of Conservation of Expected Evidence most days I'm at work.

I'm an engineer, and it helps remind me that a data point can either support a hypothesis or that hypothesis's opposite, but not both at once. This is especially useful for explaining things to non-technical people.

comment by ShardPhoenix · 2010-05-01T15:40:01.916Z · LW(p) · GW(p)

I think learning more about rationalization, akrasia, and so forth, has made it easier for me to keep regularly going to the gym, by noticing when I'm just making excuses for being lazy, etc.

comment by [deleted] · 2010-05-03T19:17:27.516Z · LW(p) · GW(p)

I really should start working on applying akrasia techniques, but I just haven't the willpower :P

That aside, I regularly remind myself to not be like Bruce in games like Risk and Magic, and probably in some other activities where he might show up. And, as I posted before on this thread, making use of tabooing words and learning how an algorithm feels from the inside have helped clear up discussions when they'd otherwise be going back and forth with no understanding.

I always enjoy these kinds of questions, because when I ask myself, I have a harder time finding areas where I use what I learned here, than I do when someone else asks.

comment by vinayak · 2010-05-01T11:23:11.311Z · LW(p) · GW(p)

I want to understand Bayesian reasoning in detail, in the sense that I want to take up a statement that is relevant to our daily life and then try to find exactly how much I should believe in it based on the beliefs that I already have. I think this might be a good exercise for the LW community? If yes, then let's take up a statement, for example, "The whole world is going to be nuked before 2020." And now, based on whatever you know right now, you should form some percentage of belief in this statement. Can someone please show me exactly how to do that?

Replies from: Morendil, Jack, cupholder, Daniel_Burfoot
comment by Morendil · 2010-05-02T09:17:00.167Z · LW(p) · GW(p)

The interesting question isn't so much "how do I convert a degree of belief into a number", but "how do I reconcile my degrees of beliefs in various propositions so that they are more consistent and make me less vulnerable to Dutch books".
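(A toy illustration of the Dutch book point, with made-up numbers rather than anything from the nuke question: if your degrees of belief in a proposition and its negation sum to more than 1, a bookie who sells you both bets at your own prices takes your money no matter what happens.)

```python
# Toy Dutch book: incoherent beliefs P(nuked) = 0.3 and P(not nuked) = 0.8 sum to 1.1.
# The bookie sells you both bets at your own prices; each bet pays out 1 if it wins.
p_nuked, p_not_nuked = 0.3, 0.8

cost = p_nuked + p_not_nuked            # what you pay for the pair of bets
for world_nuked in (True, False):
    payout = 1                          # exactly one of the two bets pays off
    print(f"nuked={world_nuked}: net = {payout - cost:+.2f}")
# Prints net = -0.10 in both worlds: a guaranteed loss, whatever happens.
```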

One way to do that is to formalize what you take that statement to mean, so that its relationships to "other beliefs" becomes clearer. It's what, in the example you suggest, the Doomsday clock scientists have done. So you can look at whatever data has been used by the Doomsday Clock people, and if you have reason to believe they got the data wrong (say, about international agreements), then your estimate would have to be different from theirs. Or you could figure out they forgot to include some evidence that is relevant (say, about peak uranium), or that they included evidence you disagree is relevant. In each of these cases Bayes' theorem would probably tell you at the very least in what direction you should update your degree of belief, if not the exact amount.

Or, finally, you could disagree with them about the structural relationships between bits of evidence. That case pretty much amounts to making up your own causal model of the situation. As other commenters have noted it's fantastically hard to apply Bayes rigorously to even a moderately sophisticated causal model, especially one that involves such an intricately interconnected system as human society. But you can always simplify, and end up with something you know is strictly wrong, but has enough correspondence with reality to be less wrong than a more naive model.

In practice, it's worth noting that only very seldom does science tackle a statement like this one head-on; as a reductionist approach science generally tries to explicate causal relationships in much smaller portions of the whole situation, treating each such portion as a "black box" module, and hoping that once this module's workings are formalized it can be plugged back into a more general model without threatening the overall model's validity too much.

The word "complex" is appropriate to refer precisely to situations where this approach fails, IMHO.

comment by Jack · 2010-05-01T18:30:42.469Z · LW(p) · GW(p)

Well to begin with we need a prior. You can choose one of two wagers. In the first, 1,000,000 blue marbles and one red marble are put in a bag. You get to remove one marble; if it is the red one you win a million dollars. Blue, you get nothing. In the second wager, you win a million dollars if a nuclear weapon is detonated under non-testing and non-accidental conditions before 2020. Otherwise, nothing. In both cases you don't get the money until January 1st 2021. Which wager do you prefer?

If you prefer the nuke bet, repeat with 100,000 blue marbles; if you prefer the marbles, try 100,000,000. Repeat until you get wagers that are approximately equal in their estimated value to you.
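(One way to read this procedure is as a bisection search over the bag size; the sketch below is just an illustration of that reading, with an arbitrary upper bound and a made-up respondent, not anything Jack specified.)

```python
# Hypothetical sketch of the elicitation: narrow down N, the bag size at which
# "draw the one red marble out of N" feels exactly as good as the nuke bet.
def elicit_odds(prefers_nuke_bet, low=1, high=10**9):
    """prefers_nuke_bet(n): True if you'd rather bet on the nuke happening than on
    drawing the single red marble out of n. Returns the rough indifference point n,
    i.e. your implied probability is about 1 in n."""
    while high - low > 1:
        mid = (low + high) // 2
        if prefers_nuke_bet(mid):
            high = mid    # nuke beats a 1-in-mid shot: your p > 1/mid, so try fewer marbles
        else:
            low = mid     # you'd rather have the marbles: your p < 1/mid, so try more marbles
    return low

# A respondent whose true belief is "about 1 in 200" would answer like this:
n = elicit_odds(lambda n: 1 / 200 > 1 / n)
print(n, "-> implied probability of roughly", 1 / n)   # 200 -> 0.005
```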

Edit: Commenters other than vinayak should do this too so that he has someone to exchange information with. I think I stop at maybe 200:1 against nuking.

Replies from: vinayak
comment by vinayak · 2010-05-01T23:39:26.204Z · LW(p) · GW(p)

So 200:1 is your prior? Then where's the rest of the calculation? Also, how exactly did you come up with the prior? How did you decide that 200:1 is the right place to stop? Or in other words, can you claim that if a completely rational agent had the same information that you have right now, then that agent would also come up with a prior of 200:1? What you have described is just a way of measuring how much you believe in something. But what I am asking is how do you decide how strong your belief should be.

Replies from: Jack
comment by Jack · 2010-05-01T23:57:57.331Z · LW(p) · GW(p)

It's just the numerical expression of how likely I feel a nuclear attack is. (ETA: I didn't just pick it out of thin air. I can give reasons but they aren't mathematically exact. But we could work up to that by considering information about geopolitics, proliferation etc.)

Or in other words, can you claim that if a completely rational agent had the same information that you have right now, then that agent would also come up with a prior of 200:1?

No, I absolutely can't claim that.

What you have described is just a way of measuring how much you believe in something. But what I am asking is how do you decide how strong your belief should be.

By making a lot of predictions and hopefully getting good at it while paying attention to known biases and discussing the proposition with others to catch your errors and gather new information. If you were hoping there was a perfect method for relating information about extremely complex propositions to their probabilities... I don't have that. If anyone here does please share. I have missed this!

But theoretically, if we're even a little bit rational, the more updating we do the closer we should get to the right answer (though I'm not actually sure we're even this rational). So we pick priors and go from there.

comment by cupholder · 2010-05-05T16:23:51.058Z · LW(p) · GW(p)

Normally I would try and find systematic risk analyses by people who know more about this subject than me. However, Martin Hellman has written a preliminary risk analysis of nuclear deterrence as part of his Defusing the Nuclear Threat project, and he claims that there have been no formal studies of the failure rate of nuclear deterrence. Hellman himself estimates that failure rate as on the order of 1% a year, but I don't know how seriously to take that estimate.
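(Taking Hellman's order-of-magnitude figure at face value for a moment, a constant 1%-per-year failure rate compounds to roughly a 10% chance per decade. A quick sanity check, not part of his analysis:)

```python
# If deterrence independently fails with probability 0.01 in each year,
# the chance of at least one failure over a decade is:
annual = 0.01
years = 10
p_decade = 1 - (1 - annual) ** years
print(round(p_decade, 3))   # ~0.096, i.e. roughly a 10% chance per decade
```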

comment by Daniel_Burfoot · 2010-05-01T23:56:38.119Z · LW(p) · GW(p)

Can someone please show me exactly how to do that?

The problem with your question is that the event you described has never happened. Normally you would take a dataset and count the number of times an event occurs vs. the number of times it does not occur, and that gives you the probability.

So to get estimates here you need to be creative with the definition of events. You could count the number of times a global war started in a decade. Going back to say 1800 and counting the two world wars and the Napoleonic wars, that would give about 3/21. If you wanted to make yourself feel safe, you could count the number of nukes used compared to the number that have been built. You could count the number of people killed due to particular historical events, and fit a power law to the distribution.

But nothing is going to give you the exact answer. Probability is exact, but statistics (the inverse problem of probability) decidedly isn't.
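(For the counting approach, one standard hedge against treating a small count like 3-out-of-21 as an exact probability is Laplace's rule of succession. The sketch below just contrasts the two estimates, using the rough figures above; everything else is illustration.)

```python
# Naive frequency vs. Laplace-smoothed estimate for "a global war starts in a given decade",
# using the rough count of 3 such wars in the 21 decades since 1800.
wars, decades = 3, 21

naive = wars / decades                   # 0.143: treats the sample as the whole story
laplace = (wars + 1) / (decades + 2)     # 0.174: rule of succession, hedges small samples
print(round(naive, 3), round(laplace, 3))
```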

Replies from: vinayak
comment by vinayak · 2010-05-02T05:20:38.511Z · LW(p) · GW(p)

Consulting a dataset and counting the number of times the event occurred and so on would be a rather frequentist way of doing things. If you are a Bayesian, you are supposed to have a probability estimate for any arbitrary hypothesis that's presented to you. You cannot say that oh, I do not have the dataset with me right now, can I get back to you later?

What I was expecting as a reply to my question was something along the following lines. One would first come up with a prior for the hypothesis that the world will be nuked before 2020. Then, one would identify some facts that could be used as evidence in favour or against the hypothesis. And then one would do the necessary Bayesian updates.

I know how to do this for the simple cases of balls in a bin etc. But I get confused when it comes to forming beliefs about statements that are about the real world.

Replies from: Mass_Driver, Matt_Simpson
comment by Mass_Driver · 2010-05-02T06:07:34.640Z · LW(p) · GW(p)

If you haven't already, you might want to take a look at Bayes Theorem by Eliezer.

As sort of a quick tip about where you might be getting confused: you summarize the steps involved as (1) come up with a prior, (2) identify potential evidence, and (3) update on the evidence. You're missing one step. You also need to check to see whether the potential evidence is "true," and you need to do that before you update.

If you check out Conservation of Expected Evidence, linked above, you'll see why. You can't update just because you've thought of some facts that might bear on your hypothesis and guessed at their probability -- if your intuition is good enough, your guess about the probability of the facts that bear on the hypothesis should already be factored into your very first prior. What you need to do is go out and actually gather information about those facts, and then update on that new information.

For example: I feel hot. I bet I'm running a fever. I estimate my chance of having a bacterial infection that would show up on a microscope slide at 20%.

I think: if my temperature were above 103 degrees, I would be twice as likely to have a bacterial infection, and if my temperature were below 103 degrees, I would only be half as likely to have a bacterial infection. Considering how hot I feel, I guess there's a 50-50 chance my temperature is above 103 degrees. I STILL estimate my chance of having a bacterial infection at 20%, because I already accounted for all of this. This is just a longhand way of guessing.

Now, I take my temperature with a thermometer. The readout says 104 degrees. Now I update on the evidence; now I think the odds that I have a bacterial infection are 40%.

The math is fudged very heavily, but hopefully it clarifies the concepts. If you want accurate math, you can read Eliezer's post.
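
For the record, here is the exact odds-form arithmetic under one reading of those numbers, treating "twice as likely" as a 2:1 likelihood ratio for a reading of 103+ (that reading is my assumption, not part of the original example). It comes out to about 33% rather than 40%, which is presumably part of what "fudged very heavily" means.

```python
# Exact odds-form version of the fever example, assuming a thermometer reading
# of 103+ is twice as likely given a bacterial infection as given no infection.

prior_p = 0.20
prior_odds = prior_p / (1 - prior_p)    # 0.25, i.e. 1:4

likelihood_ratio = 2.0                  # P(reading >= 103 | infection) /
                                        # P(reading >= 103 | no infection)

posterior_odds = prior_odds * likelihood_ratio          # 0.5, i.e. 1:2
posterior_p = posterior_odds / (1 + posterior_odds)

print(posterior_p)   # ~0.33, not 0.40
```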

comment by Matt_Simpson · 2010-05-02T06:01:15.830Z · LW(p) · GW(p)

The answer is... it's complicated, so you approximate. A good way of approximating is getting a dataset together and putting together a good model that helps explain that dataset. Doing the perfect Bayesian update in the real world is usually worse than nontrivial - it's basically impossible.

comment by Matt_Duing · 2010-05-27T03:25:01.578Z · LW(p) · GW(p)

Has anyone read "Games and Decisions: Introduction and Critical Survey" by R. Duncan Luce and Howard Raiffa? Any thoughts on its quality?

comment by ata · 2010-05-13T08:47:02.741Z · LW(p) · GW(p)

Rationality comix!

Hover over the red button at the bottom (to the left of the RSS button and social bookmarking links) for a bonus panel.

Edit: "Whoever did the duplication" would be a better answer than "The guy who came first", admittedly. The duplicate and original would both believe themselves to be the original, or, if they are a rationalist, would probably withhold judgment.

Replies from: RobinZ, NihilCredo
comment by RobinZ · 2010-05-13T12:26:22.321Z · LW(p) · GW(p)

Speaking as an engineer, I'd think he wasn't talking about subjective aspects: "The guy who came first" is the one which was copied (perfectly) to make the clone, and therefore existed before the clone existed.

comment by NihilCredo · 2010-05-17T16:29:24.393Z · LW(p) · GW(p)

More importantly, the question is terribly phrased - or just terrible. The philosopher could have started with "If you met the 'twins' afterwards, could someone tell them apart without asking anyone?", which has an obvious response of "no", and then followed up with actually interesting questions about, for example, what "memory" exactly is.

That version is a lot funnier, though!

comment by Seth_Goldin · 2010-05-10T15:18:45.790Z · LW(p) · GW(p)

Cool paper: When Did Bayesian Inference Become “Bayesian”?

http://ba.stat.cmu.edu/journal/2006/vol01/issue01/fienberg.pdf

comment by Kevin · 2010-05-08T23:24:32.649Z · LW(p) · GW(p)

No-name terrorists now CIA drone targets

http://www.cnn.com/2010/TECH/05/07/wired.terrorist.drone.strikes/index.html?hpt=C1

Replies from: steven0461, Jack, mattnewport
comment by steven0461 · 2010-05-09T03:08:16.932Z · LW(p) · GW(p)

Is there a consensus on whether or not it's OK to discuss not-specifically-rationality-related politics on LW?

Replies from: Kevin
comment by Kevin · 2010-05-09T03:15:12.787Z · LW(p) · GW(p)

Doesn't bother me. I think the consensus is that we should probably try and stay at a meta-political level, looking at a much broader picture than that which is discussed on the nightly news. The community is now mature enough that political topics are not automatically taboo.

I posted this not to be political, but because people here are generally interested in killer robots and the escalation of their use.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-05-09T03:54:09.412Z · LW(p) · GW(p)

Unless you used to post under a different user name on LW, OB or SL4, it seems to me that you're a little new here to be writing things like "I think the consensus is ..." and "The community is ..."

Replies from: New_Here
comment by New_Here · 2010-05-09T04:28:23.354Z · LW(p) · GW(p)

No, I'm New Here.

Replies from: Clippy
comment by Clippy · 2010-05-09T04:58:48.566Z · LW(p) · GW(p)

No, you're New_Here.

I'm Clippy, by the way. I like paperclips.

Replies from: New_Here
comment by New_Here · 2010-05-09T09:58:21.776Z · LW(p) · GW(p)

Does not compute.

comment by Jack · 2010-05-09T00:28:57.863Z · LW(p) · GW(p)

This looks like a very expensive way to kill terrorists: something like $100k per militant, not counting sunk costs such as the $4.5 million price tag per drone, and without even trying to estimate the cost of civilian deaths.

comment by mattnewport · 2010-05-08T23:32:08.895Z · LW(p) · GW(p)

Related, Obama authorizes assassination of US citizen. I'm amazed how little anybody seems to care.

Replies from: Jack, PhilGoetz, JenniferRM, None
comment by Jack · 2010-05-10T21:58:48.378Z · LW(p) · GW(p)

Something problematic: if you're a cosmopolitan, as I assume most people here are, can you consistently object to assassinations of citizens if you don't object to assassinations of non-citizens?

Replies from: mattnewport
comment by mattnewport · 2010-05-10T22:49:52.920Z · LW(p) · GW(p)

Probably not, though you might be able to make a case that if a particular non-citizen is a significant perceived threat but there is no legal mechanism for prosecuting them then different rules apply. Most people are not cosmopolitan however and so I am more surprised at the lack of outrage over ordering the assassination of a US citizen than by the lack of outrage over the assassination of non-US citizens.

comment by PhilGoetz · 2010-05-15T04:21:10.454Z · LW(p) · GW(p)

I care, and approve, provided that Al-Awlaki can forestall it if he chooses by coming to the US to face charges.

I don't believe in treating everything with the slippery-slope argument. That way lies the madness I saw at the patent office, where every decision had to be made following precedent and procedure with syntactic regularity, without any contaminating element of human judgement.

comment by JenniferRM · 2010-05-09T00:42:50.271Z · LW(p) · GW(p)

The drone targeting is worrisome in the very big picture and long term sense of establishing certain kinds of precedents for robotic warfare that might be troubling. The fact that it is happening in Pakistan honestly seems more problematic to me in terms of the badness that comes with not having "clearly defined parties who can verifiably negotiate". Did the US declare war on Pakistan without me noticing? Is Pakistan happy that we're helping them "maintain brutal law and order" in their country by bombing people in their back country? Are there even functioning Westphalian nation states in this area? (These are honest questions - I generally don't watch push media, preferring instead to formulate hypotheses and then search for news or blogs that can answer the hypothesis.)

The assassination story, if true, seems much more worrisome because it would imply that the fuzziness from the so-called "war on terror" is causing an erosion of the rule of law within the US. Moreover, it seems like something I should take responsibility for doing something about because it is happening entirely within my own country.

Does anyone know of an existing political organization working to put an end to the imprisonment and/or killing of US citizens by the US government without formal legal proceedings that include the right to a trial by jury? I would rather coordinate with other people (especially competent experts) if such a thing is possible.

Replies from: Jack, JGWeissman, Kevin, mattnewport
comment by Jack · 2010-05-09T00:58:19.392Z · LW(p) · GW(p)

Are there even functioning Westphalian nation states in this area?

Pakistan does not have anything close to a force monopoly in the region we're attacking. They've as much as admitted that, I believe. I actually think I'm okay with the attacks as far as international law goes.

The drone targeting is worrisome in the very big picture and long term sense of establishing certain kinds of precedents for robotic warfare that might be troubling.

I always hear this but no one ever tells me just what precedents for robotic warfare they find troubling.

Replies from: mattnewport
comment by mattnewport · 2010-05-09T01:08:27.172Z · LW(p) · GW(p)

I always hear this but no one ever tells me just what precedents for robotic warfare they find troubling.

It is a further dehumanization of the process of killing and so tends to undermine any inbuilt human moral repugnance produced by violence. To the extent that you think that killing humans is a bad thing I suggest that is something that should be of concern. It is one more level of emotional detachment for the drone operators beyond what can be observed in the Apache pilots in the recent Wikileaks collateral murder video.

ETA: This Dylan Rattigan clip discusses some of the concerns raised by the Wikileaks video. The same concerns apply to drone attacks, only more so.

comment by JGWeissman · 2010-05-09T01:08:38.869Z · LW(p) · GW(p)

Does anyone know of an existing political organization working to put an end to the imprisonment and/or killing of US citizens by the US government without formal legal proceedings that include the right to a trial by jury? I would rather coordinate with other people (especially competent experts) if such a thing is possible.

I don't know if they have responded to this specific issue, but the ACLU is working against the breakdown of rule of law in the name of national defense.

Replies from: JenniferRM
comment by JenniferRM · 2010-05-09T02:43:21.162Z · LW(p) · GW(p)

Thanks for the link. I have sent them an email asking for advice as to whether this situation is as bad as it seems to be, and if so, what I can do to make things less bad. I have also added something to my tickler file so that on May 21 I will be reminded to respond here with a followup even if there is no response from the ACLU's National Security Project.

I think I have done my good deed for the day :-)

ETA: One thing to point out is that before sending the email I tried googling "Presidential Assassination Program" in Google News and the subject seems to have had little coverage since then. This was the best followup I could find in the last few days, and it spoke of general apathy on the subject. That led me to conclude that "not enough people had noticed" yet, so I followed through with my email.

Replies from: JenniferRM
comment by JenniferRM · 2010-05-24T21:52:40.313Z · LW(p) · GW(p)

Following up for the sake of reference...

I did not get a reply from the ACLU on this subject and just today sent a second email asking for another response. If the ACLU continues to blow me off by June 1st I may try forwarding my unanswered emails to several people at the ACLU (to see whether the blowoff was simply due to incompetence on the part of the one person monitoring the email).

If that doesn't work then I expect I'll try Amnesty International as suggested by Kevin. There will be at least one more comment with an update here, whatever happens, and possibly two or three :-)

Replies from: JenniferRM
comment by JenniferRM · 2010-05-27T01:20:11.418Z · LW(p) · GW(p)

This will be my final update on this subject. I received an email from a representative of the ACLU. He apologized for the delayed response and directed me to a series of links that I'm passing on here for the sake of completeness.

First, there is an April 7th ACLU press release about extra-judicial killings of US citizens. That press release notes that an FOIA request had already been filed which appears to ask for the details of the program, to see specifically how it works and find out whether it really violates any laws, preparatory to potential legal action.

Second, on April 19th the Washington Post published a letter from the ACLU's Executive Director on the subject. This confirms that the issue is getting institutional attention, recognition in the press, and will probably not "slip through the cracks".

Third, on April 28th the ACLU sent an open letter to President Barack Obama about extrajudicial killings, which is also the date that the ACLU's update page for "targeted killings" was last updated. So it seems clear that steps have been taken to open negotiations with an individual human being who has the personal authority to cancel the program.

This appears to provide a good summary of the institutional processes that have already been put in motion to fix the problems raised in the parent posts. The only things left to consider appear to be (1) whether violations of the constitution will be adequately prevented and (2) whether we are free riding on the public service of other people too egregiously.

In this vein, the ACLU has a letter writing campaign organized so that people can send messages to elected officials asking that they respect the rule of law and the text of treaties that the US has signed, in case the extra-judicial killings of US citizens are really being planned and accomplished by the executive branch without trial or oversight by the courts.

Sending letters like these may help solve the problem a little bit, is very unlikely to hurt anything, and may patch guilt over free riding :-)

In the meantime I think "joining the ACLU as a dues paying member" just bumped up my todo list a bit.

comment by Kevin · 2010-05-09T02:09:01.064Z · LW(p) · GW(p)

I would ask Amnesty International.

comment by mattnewport · 2010-05-09T01:01:20.428Z · LW(p) · GW(p)

Is Pakistan happy that we're helping them "maintain brutal law and order" in their country by bombing people in their back country?

No, in general I think they are about as unhappy as you might expect US citizens to be if the Chinese government was conducting drone attacks on targets in the US with heavy civilian casualties. This was part of the basis for my prediction last year that there will be a major terrorist attack in the US with a connection to Pakistan. Let's hope that all would be attackers are as incompetent as Faisal Shahzad.

The assassination story, if true, seems much more worrisome because it would imply that the fuzziness from the so-called "war on terror" is causing an erosion of the rule of law within the US.

I don't believe anyone has challenged the truth of the story; it has just not been widely reported or received the same level of scrutiny as the extra-judicial imprisonment and torture conducted by the last administration. The article I linked links to a New York Times piece on the decision. The erosion of the rule of law within the US in response to supposed terrorist threats has been going on ever since 9/11, and Obama has if anything accelerated rather than slowed that process.

Replies from: Jack
comment by Jack · 2010-05-09T01:39:32.722Z · LW(p) · GW(p)

I imagine the assassination story would be a bigger deal if the target was still in the US.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-16T05:53:15.657Z · LW(p) · GW(p)

It wouldn't happen. They'd arrest him.

Or, to put it another way - it would happen; it just wouldn't be called assassination, because it would be done using standard police procedure, and because other people would get killed. It would be like the standoffs with MOVE, or David Koresh's organization in Waco, or Ruby Ridge.

The word assassination is wrong for all these cases. These kinds of "assassination" are just the logical result of law enforcement. If you're enforcing the law, and you have police and courts and so on; and someone refuses to play along, eventually you have to use force. I don't see that the person being outside or inside America makes a big moral difference, when their actions are having effect inside America. A diplomatic difference, but not a moral difference.

I also think it's funny for people to have moral arguments in a forum where you get labeled an idiot if you admit you believe there are such things as morals.

Perhaps we should be grateful that technology hasn't advanced to the point where we can take these people out non-violently, because then we'd do it a lot more, for more trivial reasons.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-05-24T22:23:29.764Z · LW(p) · GW(p)

I also think it's funny for people to have moral arguments in a forum where you get labeled an idiot if you admit you believe there are such things as morals.

Why shouldn't people argue over morals? The mainstream view here is that each person is arguing about what the fully-informed, fully-reflected-upon output of the other person's moral-evaluating computation would be. The presumption is that all of our respective moral-evaluating computational mechanisms would reach the same conclusion on the issue at hand in the limit of information and reflection.

comment by [deleted] · 2010-05-15T04:25:39.941Z · LW(p) · GW(p)

The people who care are poorly represented by the news and by our political institutions. But they're out there.

comment by neq1 · 2010-05-07T12:45:32.741Z · LW(p) · GW(p)

I tried to post this discussion of the sleeping beauty problem to less wrong, but it didn't work

http://neq1.wordpress.com/2010/05/07/beauty-quips-id-shut-up-and-multiply/

So I just posted it on my blog.

Can there be a big lag time between when you submit something to lesswrong and to when it shows up in recent posts? (I waited two days before giving up)

Replies from: NancyLebovitz, ata
comment by NancyLebovitz · 2010-05-07T12:55:29.435Z · LW(p) · GW(p)

No idea about the time lag-- my posts show up quickly-- but my intuition says that a fair coin has a 1/2 probability of being heads, and nothing about the experiment changes that.

comment by ata · 2010-05-07T12:54:14.251Z · LW(p) · GW(p)

Nope, new posts should show up immediately (or maybe with a half hour delay or so; I seem to recall that the sidebars are cached, but for far less than two days). Did it appear to post successfully, just not showing up? The only thing I can think of is that you might not have switched the "Post to" menu from "Drafts for neq1" to "LessWrong".

Replies from: neq1
comment by neq1 · 2010-05-07T14:31:41.558Z · LW(p) · GW(p)

Ah, I think that's it (posted to drafts). Thanks. Not sure how I missed that.

comment by Morendil · 2010-05-06T10:10:01.579Z · LW(p) · GW(p)

The unrecognized death of speech recognition

Interesting thoughts about the limits encountered in the quest for better speech recognition, the implications for probabilistic approaches to AI, and "mispredictions of the future".

What do y'all think?

Replies from: thomblake
comment by thomblake · 2010-05-06T13:18:33.235Z · LW(p) · GW(p)

As noted in the comments, machine recognition of natural speech is in the 85-95% accuracy range, and human recognition of natural speech is also around 95%. I was skeptical of that article when I first read it because it did not even mention how good humans are at the task, for comparison with the machines.

comment by byrnema · 2010-05-05T21:29:07.074Z · LW(p) · GW(p)

Curiously, what happens when I refresh LW (or navigate to a particular LW page like the comments page) and I get the "error encountered" page with those little witticisms? Is the site 'busy' or being modified or something else ...? Also, does everyone experience the same thing at the same moment or is it a local phenomenon?

Thanks ... this will help me develop my 'reddit-page' worldview.

Replies from: RobinZ
comment by RobinZ · 2010-05-05T21:34:03.775Z · LW(p) · GW(p)

This has happened twice in the past two days - generally there is some specific comment which is broken and causes pages which would display it to crash. My analysis of the previous and current pattern here.

Replies from: byrnema
comment by byrnema · 2010-05-05T21:37:43.471Z · LW(p) · GW(p)

To test this hypothesis, the Recent Comments should work as soon as the bad comment moves to a new page.

Replies from: RobinZ
comment by RobinZ · 2010-05-05T21:39:31.637Z · LW(p) · GW(p)

I predict that it will with confidence - it has in previous instances.

comment by Jack · 2010-05-04T21:50:07.226Z · LW(p) · GW(p)

Geocities Less Wrong

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-05-04T22:10:41.289Z · LW(p) · GW(p)

Can you control the colors? Dark red on black is hard to read.

Replies from: Jack
comment by Jack · 2010-05-04T22:14:42.903Z · LW(p) · GW(p)

Nah. I just used this

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-05-04T22:30:40.862Z · LW(p) · GW(p)

Oh my dear God. Indeed, human values differ as much as values can differ.

If I hadn't started with sites that did quiet, geekish design, I would have fled the net and never come back.

comment by RobinZ · 2010-05-04T19:38:08.576Z · LW(p) · GW(p)

By the way: getting crashes on the comments page again. Prior to 1yp8 works and subsequent to 1yp8 works; I haven't found the thread with the broken comment.

Edit: It's not any of the posts after 23andme genome analysis - $99 today only in Recent Posts, I believe.

Edit 2: Recent Comments still broken for me, but ?before=t1_1yp8 is no longer showing the most recent comments to me - ?before=t1_1yqo continues where the other is leaving off.

Edit 3: Recent Comments has now recovered for me.

Replies from: RobinZ, cupholder
comment by RobinZ · 2010-05-05T20:55:19.078Z · LW(p) · GW(p)

Having Recent Comments problems again: after 1yyu and before 1yyu work. The sidebar "Recent Comments" circa 1yyw does not include 1yyu - skips straight from 1yyv to 1yyt.

No crashes are observed in the comment threads of "Antagonizing Opioid Receptors for (Prevention of) Fun and Profit" through "Possibilities for converting useless fun into utility in Online Gaming".

Edit: byrnema has discovered the guilty comment - it appears to have been on this post.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-05T21:06:23.356Z · LW(p) · GW(p)

Having similar problems. Getting error messages when I click "Recent comments."

Replies from: RobinZ
comment by RobinZ · 2010-05-05T21:30:53.128Z · LW(p) · GW(p)

Usually the way these work is that any page which would include a specific comment fails with an error message. The "before 1yyu" page should show more recent comments than the broken one - if the most recent comments in the sidebar don't appear on that page, replace the "1yyu" at the end of the string with the identifier of a more recent comment or see if the plain old "Recent Comments" page has fixed itself.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-05-05T21:37:54.716Z · LW(p) · GW(p)

What's the coding system for urls for the recent comments pages? Why "1yyu"?

Replies from: RobinZ
comment by RobinZ · 2010-05-05T21:39:38.782Z · LW(p) · GW(p)

I don't know.

Edit: Wait, I misread your comment as a question about the codebase. I'll tell you what I know:

  • "1yyu" is the serial number of a particular comment - the broken one. Every comment has a serial number in lowercase base 36. You can find these serial numbers by examining the Permalink URLs - for example, your question about the coding system is 1yzf.
  • The URLs for navigation in the "Recent Comments" page by the "next" and "prev" buttons work by placing a query string at the end of the URL with at least two possible parameters (separated by an ampersand - "&"): "count" and "after" or "before".
  • "count=50" specifies 50 comments per page.
  • "after=t1_1yyu" specifies that comments start with the comment after 1yyu in reverse chronological order - 1yyt. "before=t1_1yyu" specifies that the comments end with the comment before 1yyu in reverse chronological order - 1yyv.

That's most of what I know about the syntax - I'm not sure how much control you can exercise over the number of comment shown, for example. I know you get more with count=50 than without any such indicator.
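
A tiny illustration of that query-string pattern (in Python). The parameter names and the "t1_" prefix are exactly as observed above; treat this as a description of how the current pages happen to behave, not as a documented API.

```python
# Build "Recent Comments" pagination URLs of the kind described above.

from urllib.parse import urlencode

BASE = "http://lesswrong.com/comments"

def recent_comments_url(count=50, before=None, after=None):
    params = {"count": count}
    if before:
        params["before"] = "t1_" + before   # page ends with the comment before this one
    if after:
        params["after"] = "t1_" + after     # page starts with the comment after this one
    return BASE + "?" + urlencode(params)

print(recent_comments_url(count=50, before="1yyu"))
# http://lesswrong.com/comments?count=50&before=t1_1yyu
```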

Replies from: byrnema
comment by byrnema · 2010-05-05T21:49:18.347Z · LW(p) · GW(p)

This is a link that works. This is the broken link. This is a link that doesn't exist.

Replies from: RobinZ
comment by RobinZ · 2010-05-05T21:55:49.373Z · LW(p) · GW(p)

...!

That's got to be the comment! Apparently it doesn't break the comments page here, though - that's quite strange.

Replies from: byrnema
comment by byrnema · 2010-05-05T22:48:14.180Z · LW(p) · GW(p)

...!

I didn't find that comment -- you did! I was replying to your parent post in which you told us that was a broken link.

That's got to be the comment!

I don't think it's the comment. I think it's old. Since you discovered that 1yyu was broken, didn't the Recent Comments work for a while? If so, we have a new broken link.

Apparently it doesn't break the comments page here, though - that's quite strange.

A link to a broken link is OK. What you can't have is the broken link in Recent Comments, because the Recent Comments page wouldn't be able to bring it up and show it.

Replies from: RobinZ, RobinZ
comment by RobinZ · 2010-05-06T23:43:48.133Z · LW(p) · GW(p)

Oh - it gives the same error appended to any thread.

comment by RobinZ · 2010-05-06T00:00:54.289Z · LW(p) · GW(p)

It wouldn't be an old comment - not with that serial number on this thread. I don't know what's going on, now.

comment by cupholder · 2010-05-04T22:10:44.505Z · LW(p) · GW(p)

Yep, the URL http://lesswrong.com/comments?count=40&after=t1_1ypw is reliably breaking for me.

comment by thomblake · 2010-05-04T14:35:13.487Z · LW(p) · GW(p)

I'm going to be giving a lecture soon on rationality. I'm probably going to focus on human cognitive bias. Any thoughts on what I should absolutely not miss including?

Replies from: RobinZ, nerzhin
comment by RobinZ · 2010-05-04T14:41:02.274Z · LW(p) · GW(p)

I recall "Knowing About Biases Can Hurt People":

Whether I do it on paper, or in speech, I now try to never mention calibration and overconfidence unless I have first talked about disconfirmation bias, motivated skepticism, sophisticated arguers, and dysrationalia in the mentally agile. First, do no harm!

Replies from: thomblake
comment by thomblake · 2010-05-04T14:48:42.625Z · LW(p) · GW(p)

Thanks; that sort of thing is exactly why I asked.

Replies from: RobinZ
comment by RobinZ · 2010-05-04T14:49:55.249Z · LW(p) · GW(p)

Glad to be of service!

comment by nerzhin · 2010-05-04T18:29:33.826Z · LW(p) · GW(p)

Can you give us a basic outline of what you have? Then it's a lot easier to see if you're missing anything.

Replies from: thomblake
comment by thomblake · 2010-05-04T18:43:27.409Z · LW(p) · GW(p)

Sure, I'll probably post an outline once I've written it up. I just didn't want to start down a path neglecting something important.

comment by eugman · 2010-05-03T02:59:30.271Z · LW(p) · GW(p)

Has anyone read The Integral Trees by Larry Niven? Something I always wonder about people supporting cryonics is why do they assume that the future will be a good place to live in? Why do they assume they will have any rights? Or do they figure that if they are revived, FAI has most likely come to pass?

Replies from: JoshuaZ, NancyLebovitz, ata
comment by JoshuaZ · 2010-05-03T06:07:32.747Z · LW(p) · GW(p)

A dystopian society is unlikely to thaw out and revive people in cryostasis. Cryostasis revival makes sense for societies that are benevolent and have a lot of free resources. Also, be careful not to try to generalize from fictional examples. They are not evidence. That's all the more the case here because science fiction is in general a highly reactionary genre that, even as it uses advanced technology, either warns about its perils or uses it as an excuse to hearken back to a more romantic era. For example, look how many science fiction stories and universes have feudal systems of government.

Replies from: eugman, Mass_Driver, Jack
comment by eugman · 2010-05-03T14:13:24.969Z · LW(p) · GW(p)

Now that's a reasonable argument: benevolent, resource rich societies are more likely to thaw people. Thanks.

And yes, that's true, science fiction does often look at what could go really wrong.

comment by Mass_Driver · 2010-05-04T04:21:22.376Z · LW(p) · GW(p)

There certainly is a large chunk of science fiction that could be accurately described as medieval fantasy moved to a superficially futuristic setting.

There is also the legitimate question of how fragile our liberal norms and economy are -- do they depend on population density? on the ratio between the reach of weapons and the reach of communications? on the dominance of a particular set of subcultures that attained to industrial hegemony through what amounts to chance and might not be repeated?

If egalitarianism is not robust to changes in the sociological environment, then there might simply be many more possible futures with feudal regimes than with capitalist or democratic regimes.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-04T04:35:44.237Z · LW(p) · GW(p)

Yes, but how often do they bother to explain this rise other than in some very vague way? And it isn't just feudalism. Look for example at Dune where not only is there a feudal system but the technology conveniently makes sword fighting once again a reasonable melee tactic. Additional evidence for the romantic nature is that almost invariably the stories are about people who happen to be nobles. So there's less thinking and focusing on how unpleasant feudalism is for the lower classes.

The only individual I've ever seen give a plausible set of explanations for the presence of feudal cultures is Bujold in her Vorkosigan books. But it is important to note that in those books there are many different governmental systems, including dictatorships and anarcho-capitalist worlds and lots of other things. And she's very aware that feudalism absolutely sucks for the serfs.

I don't think that most of these writers are arriving at their societies by probabilistic extrapolation. Rather, they are just writing what they want their societies to have. (Incidentally, I suspect that many of these cultural and political norms are much more fragile than we like to think. There are likely large swaths of the space of political systems that we haven't even thought about. There might well be very stable systems that we haven't conceived of yet. Or there might be Markov chains of what systems are likely to transfer to other systems).

Replies from: Emile, Mass_Driver
comment by Emile · 2010-05-04T13:31:37.554Z · LW(p) · GW(p)

I don't think that most of these writers are arriving at their societies by probabilistic extrapolation. Rather, they are just writing what they want their societies to have.

Those aren't the only possibilities - much more likely is the Rule of Cool. Wielding a sword is cooler than wielding a gun, and swordfights are more interesting than gunfights.

comment by Mass_Driver · 2010-05-04T05:22:31.440Z · LW(p) · GW(p)

I don't think that most of these writers are arriving at their societies by probabilistic extrapolation.

Granted. Some are, though. Two more counter-examples, besides Bujold:

  • Asimov's Foundation, e.g. the planet of Anacreon. Feudalism is portrayed as the result of a security dilemma and the stagnation of science, as reducing the access of ordinary people to effective medicine and nuclear power, and as producing a variety of sham nobles who deserve mockery.

  • Brave New World. Feudalism is portrayed as a logical outgrowth of an endless drive toward bureaucratic/administrative efficiency in a world where personal freedom has been subordinated to personal pleasure. Regionally-based bureaucrat-lords with concentrically overlapping territories 'earn' their authority not by protecting ordinary serfs from the danger of death but from the danger of momentary boredom or discomfort. Huxley doesn't seem overly fond of this feudalism; the question of whether a romantic would prefer this sort of system is, at worst, left as an exercise for the reader.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-04T05:43:16.855Z · LW(p) · GW(p)

Huh. I had not really thought of Brave New World as using a feudal system, but that really is what it is. It might be more accurate, then, to make the point that the vast majority of the other cases have systems that aren't just feudal but are ones in which the positions are inherited.

I agree that some of these writers are extrapolating. Since Asimov is explicitly writing in a world where the running theme is the ability to reliably predict social changes, it shouldn't be that surprising that he'd actually try to do so. (Note also that Asimov avoids here the standard trap of having protagonists who are nobles.)

comment by Jack · 2010-05-04T07:05:04.166Z · LW(p) · GW(p)

That's all the more the case here because science fiction is in general a highly reactionary genre that even as it uses advance technology either warns about the perils or uses it as an excuse to hearken back to a more romantic era. For example look how many science fiction stories and universes have feudal systems of government.

This is a little too broad for me to be comfortable with. There are certainly subgenres and authors who are reactionary, but then there are those that are quite the opposite. Military SF and space opera (which, frankly, is just fantasy with lasers) are usually quite reactionary. Cyberpunk is cautionary, but not so much about technology as about capitalism. Post-apocalyptic sf is sometimes about technology getting too great for us to handle, but the jewel of the genre, A Canticle for Leibowitz, is about the tragedy of a nationwide book burning. Post-cyberpunk is characterized by its relative optimism. Hard sf varies in its political sensibilities (there seem to be a lot of libertarians) but it's almost always pro-tech for obvious reasons.

I'm having a hard time coming up with authors that fit the reactionary bill, but that might be because I read the wrong subgenres. And the libertarians are hard to classify. Michael Crichton is the obvious one that occurs to me. Larry Niven, I suppose. Card and Heinlein could be put there, though both are more complicated than that. Herbert. In the other camp: Brin, Kim Stanley Robinson, Le Guin, Dick, Neal Stephenson, Gibson, Vonnegut, Orwell, Doctorow, Bradbury. Asimov and Clarke probably fall in the second camp...

Am I just missing the reactionary stuff?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-04T08:06:21.578Z · LW(p) · GW(p)

I think it would be fair to say that the more famous authors in general are less reactionary. But if I had to list reactionaries I'd list Herbert, Crichton, Pournelle, Weber, Anderson, McCaffrey (arguable, but there are definite aspects in Pern), Koontz, Shelley, Lovecraft, and to some extent Niven and Card.

Also, there seems to be a lot more of a general reactionary bent in the less successful scifi. The major authors seem to have less of that (possibly because their views are so unique that they override anything as simple as being reactionary or not).

The example you give of A Canticle for Leibowitz is more complicated: while book burning and such is portrayed as bad, that's still a response to a nuclear apocalypse. Indeed, in that regard, almost any science fiction that's post nuclear war has a reactionary aspect.

If we move outside literature, say into movies and TV, the general pattern is pretty clear. While people often think of Star Trek as optimistic about technology, even in TOS many episodes dealt with the threat of new technologies (androids and intelligent computers both came up). The Outer Limits in both its original form and its reincarnation was generally anti-technology. It was a safe bet in any episode of the reincarnation that any new knowledge or new technology was going to fail or cause horribly disturbing side effects that would be summarized with a moralistic voice-over at the end that would make Leon Kass proud. Similarly, Doctor Who has had multiple incarnations of the Doctor lecture about how bad trying to be immortal is. Movies have a similar track record (The Terminator, Hollow Man, and The Sixth Day, to name just a few; many more examples could be given).

I agree that overall this was likely a hasty generalization. Science fiction has reactionary elements but it is by no means an intrinsically reactionary genre.

Replies from: Jack
comment by Jack · 2010-05-04T08:59:29.332Z · LW(p) · GW(p)

Shelley and Lovecraft are good calls; I had forgotten to think about the early stuff. We can put Verne in the progressive camp, I think.

There is sort of an interesting division among the "cautionary tales". There's the Crichton/Shelley/Romero zombie tradition, in which humans try to play God and get their asses kicked as punishment unless traditional values/folkways come to the rescue. And then there's the more leftist tradition: new technology has implications capitalism or statism isn't equipped to deal with; here we include H.G. Wells, Brave New World and other dystopias, cyberpunk, Gattaca, a lot of post-nuke war stuff, etc.

Are both groups reactionary under your definition or just the first?

I totally agree about Hollywood. There is also the whole alien invasion subgenre, which originally was really about Cold War anxiety. Cloverfield probably counts as a modern-day equivalent.

Replies from: ata, CronoDAS, JoshuaZ
comment by ata · 2010-05-04T09:12:07.081Z · LW(p) · GW(p)

There's the Crichton/Shelley/Romero zombie tradition of humans try to play God and get their asses kicked as punishment unless traditional values/folkways come to the rescue.

For anyone who hasn't already seen it — Caveman Science Fiction!

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-05-04T10:24:41.916Z · LW(p) · GW(p)

How do you classify Egan? Pretty pro-tech in his novels, iirc, but a pretty high proportion of his short stories are effectively horror about new tech.

Replies from: Richard_Kennaway, ata
comment by Richard_Kennaway · 2010-05-04T12:35:38.708Z · LW(p) · GW(p)

That isn't how his short stories have struck me. A handful that come to mind about near-future technology, not having the books in front of me, are Axiomatic, Silver Fire, The Moral Virologist, Worthless, and one whose name I forget about artificial nanomcguffins that let you gradually reprogram your own mind just by wishing the change you want. They're pretty dark, but I wouldn't classify them as horror. That is, I don't read them as saying "these are things that man should not know", but "after such knowledge, these are issues that must be faced".

Replies from: grendelkhan
comment by grendelkhan · 2011-03-16T17:55:10.847Z · LW(p) · GW(p)

one whose name I forget about artificial nanomcguffins that let you gradually reprogram your own mind just by wishing the change you want

I think those are the "Grey Knights" from "Chaff".

comment by ata · 2010-05-04T10:26:34.803Z · LW(p) · GW(p)

Was this intended to be a reply to Jack's post?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-05-04T10:31:09.462Z · LW(p) · GW(p)

Yes, sorry.

comment by CronoDAS · 2010-05-04T21:49:55.401Z · LW(p) · GW(p)

The original The War of the Worlds by H.G. Wells has many similarities to the era's "invasion stories" in which a hostile foreign power (usually Germany or France) launches a very successful surprise invasion of Great Britain. Wells just replaced Germany with Martians.

comment by JoshuaZ · 2010-05-04T12:50:05.546Z · LW(p) · GW(p)

The point about there being different categories is one I had not thought about. I agree that the first is unambiguously in the reactionary form. I'm not sure that the second is always reactionary: it might depend on the degree to which the technology is caricatured. Thus, for example, Brave New World and Gattaca both seem to be such extreme caricatures of what might happen with those technologies that they seem reactionary. That's in contrast with, say, "A Deepness in the Sky", which takes the same technologies and shows different societal responses to them (careful use, arguable abuse, and outright tyranny). Similarly, a lot of Bujold's works raise serious issues about the ethical and policy questions brought up by specific, plausible technologies, but she's generally careful to show both use and abuse, not just horrific dystopias.

Replies from: Emile
comment by Emile · 2010-05-04T15:02:37.916Z · LW(p) · GW(p)

agree that the first is unambiguously in the reactionary form. I'm not sure that the second is always reactionary: it might depend on the degree to which the technology is caricatured.

This sounds a lot like just debating definitions - is "reactionary" such a useful term here? It sounds to me like you're trying to shoehorn it into a context where it doesn't really fit. Wouldn't replacing it with a more precise, narrower term make the discussion clearer - such as "romantic about traditional societies" or something?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-05T01:38:47.791Z · LW(p) · GW(p)

That's a valid point. Maybe split into two forms: 1) Romantic attitudes towards traditional societies and 2) extreme caricatures of the potential negative ramifications of new technologies. 1 and 2 both seem to be highly correlated in science fiction. Many of the examples given show aspects of both.

comment by NancyLebovitz · 2010-05-03T06:03:35.190Z · LW(p) · GW(p)

Science fiction has a bias towards things going wrong.

In the particular case of cryonics, if there's a dystopian future where the majority of people have few or no rights, it's a disaster all around, but as ata says, you can presumably commit suicide. There's a chance that even that will be unfeasible-- for example if brains are used, while conscious, for their processing power. This doesn't seem likely, but I don't know how to evaluate it in detail.

The other case-- people in general have rights, but thawed people, or thawed people from before a certain point in time, do not-- requires that thawed people do not have a constituency. This doesn't seem terribly likely, though as I recall, Niven has it that it takes a very long time for thawing to be developed.

Normally, I would expect for there to be commercial and legal pressures for thawed people to be treated decently. (I've never seen an sf story in which thawed people are a political football, but it's an interesting premise.)

I think the trend is towards better futures (including richer, with less reason to enslave people), but there's no guarantee. I think it's much more likely that frozen people won't be revived than that they'll be revived into a bad situation.

Replies from: ata
comment by ata · 2010-05-03T06:26:15.687Z · LW(p) · GW(p)

Science fiction has a bias towards things going wrong.

All fiction has a bias towards things going wrong. Need some kind of conflict.

(Reality also has a bias towards things going wrong, but if Fun Theory is correct, then unlike with fiction, we can change that condition without reducing the demand for reality.)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-05-03T06:43:53.611Z · LW(p) · GW(p)

Science fiction has a stronger bias towards things going wrong on a grand scale than most fiction does.

Replies from: Larks
comment by Larks · 2010-05-07T01:57:44.134Z · LW(p) · GW(p)

Otherwise, the advanced technology would just make everything great. They need extra conflict to counter it.

comment by ata · 2010-05-03T03:17:42.252Z · LW(p) · GW(p)

Or do they figure that if they are revived, FAI has most likely come to pass?

Can't speak for any other cryonics advocates, but I find that to be likely. I see AI either destroying or saving the world once it's invented, if we haven't destroyed ourselves some other way first, and one of those could easily happen before the world has a chance to turn dystopian. But in any case, if you wake up and find yourself in a world that you couldn't possibly bear to live in, you can just kill yourself and be no worse off than if you hadn't tried cryonics in the first place.

Replies from: humpolec
comment by humpolec · 2010-05-03T12:35:30.950Z · LW(p) · GW(p)

Unless it's unFriendly AI that revives you and tortures you forever.

Replies from: gregconen, ata, NancyLebovitz
comment by gregconen · 2010-05-03T14:10:05.993Z · LW(p) · GW(p)

Strongly unFriendly AI (the kind that tortures you eternally, rather than kills you and uses your matter to make paperclips) would be about as difficult to create as Friendly AI. And since few people would try to create one, I don't think it's a likely future.

comment by ata · 2010-05-03T19:18:46.248Z · LW(p) · GW(p)

"unFriendly" doesn't mean "evil", just "not explicitly Friendly". Assuming you already have an AI capable of recursive self-improvement, it's easy to give it a goal system that will result in the world being destroyed (not because it hates us, but because it can think of better things to do with all this matter), but creating one that's actually evil or that hates humans (or has some other reason that torturing us would make sense in its goal system) would probably be nearly as hard as the problem of Friendliness itself, as gregconen pointed out.

comment by NancyLebovitz · 2010-05-03T12:49:11.736Z · LW(p) · GW(p)

Actually, it's quite possible to deny physical means of suicide to prisoners, and sufficiently good longevity tech could make torture for a very long time possible.

I think something like that (say, for actions which are not currently considered to be crimes) is possible, considering the observable cruelty of some fraction of the human race, but not very likely-- on the other hand, I don't know how to begin to quantify how unlikely it is.

comment by Lightwave · 2010-05-02T23:43:33.752Z · LW(p) · GW(p)

Here's my question to everyone:

What do you think are the benefits of reading fiction (all kinds, not just science fiction) apart from the entertainment value? Whatever you're learning about the real world from fiction, wouldn't it be more effective to read a textbook instead or something? Is fiction mostly about entertainment rather than learning and improvement? Any thoughts?

Replies from: None, Nisan, Morendil, Jack, sketerpot
comment by [deleted] · 2010-05-03T14:32:11.189Z · LW(p) · GW(p)

We are wired for individual rather than general insights. Stories are much more effective at communicating certain things than treatises are. I would never have believed, in theory, that a man who enjoyed killing could be worthy of respect; only a story could convince me. To use Robin Hanson's terminology, narrative can bring near mode and far mode together.

Why not true stories? I think there you get into Aristotle and why verisimilitude can be more effective than mere reality. True stories are good too, but life is disorderly and not necessarily narrative. It's a truism of writing workshops and creative writing classes that whenever you see a particularly unrealistic event in a story, the author will protest "But that really happened!" It doesn't matter; it's still unrealistic. Narrative is, I think, a particular kind of brain function that humans are good at, and it's a painting, not a photograph. To tap into our ability to understand each other through narrative, we usually need to fictionalize the world, apply some masks and filters.

comment by Nisan · 2010-05-03T06:50:11.237Z · LW(p) · GW(p)

It was not until I read Three Worlds Collide that I began to embrace moral consequentialism. I would not have found an essay or real-life case study nearly as convincing.

ETA: I didn't change my mind just because I liked the story. The story made me realize that in a particular situation, I would be a moral consequentialist.

Replies from: Academian
comment by Academian · 2010-05-03T07:09:14.252Z · LW(p) · GW(p)

My take on works of fiction, especially written fiction, is that they're thought experiments for your emotional intelligence. The best ones are the ones written for that purpose, since I think they tend to better optimize the net value of entertainment and personal growth.

Morality in particular usually stems from some sort of emotional intelligence, like empathy, so it makes sense to me that written fiction could help especially with that.

comment by Morendil · 2010-05-03T00:04:42.939Z · LW(p) · GW(p)

A possible benefit of fiction is that it leads you to experience emotions vicariously that it would be much more expensive to experience for real, yet the vicarious experience is realistic enough that it serves as useful practice, a way of "taming" the emotions. Textbooks don't convey emotions.

I seem to recall this argument from a review of Cloverfield, or possibly the director's commentary. Broadcast images such as from the 9/11 aftermath generated lots of anxiety, and seeing similar images - the amateurish, jerky camcorder type - reframed in a fictional setting which is "obviously" over the top helps you, the audience, come to terms with the reality.

comment by Jack · 2010-05-02T23:53:32.890Z · LW(p) · GW(p)

Fiction is good for teasing out possibilities and counterfactuals, experimenting with different attitudes toward the world (as opposed to learning facts about the world), and learning to be cool.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-05-03T00:28:36.129Z · LW(p) · GW(p)

On the other hand (and I speak as a person who really likes fiction), it's possible that you learn more about the human range by reading letters and diaries-- whatever is true in fiction may be distorted to make good stories.

comment by sketerpot · 2010-05-03T19:39:01.850Z · LW(p) · GW(p)

Whatever you're learning about the real world from fiction, wouldn't it be more effective to read a textbook instead or something?

Let's assume for the sake of argument that your time would be better spent reading a textbook. Getting yourself to read textbooks is harder than getting yourself to read entertaining fiction, so you may not get as much benefit from textbooks even if you get more benefit per hour.

Also, it sounds like you're putting too little importance on fun. Fun is important.

comment by eugman · 2010-05-25T18:19:52.912Z · LW(p) · GW(p)

I have a cognitive problem and I figured someone might be able to help with it.

I think I might have trouble filtering stimuli, or something similar. A dog barking, an ear ache, loud people, or a really long day can break me down. I start to have difficulty focusing. I can't hold complex concepts in my head. I'll often start a task, and quit in the middle because it feels too difficult and try to switch to something else, ultimately getting nothing done. I'll have difficulty deciding what to work on. I'll start to panic or get intimidated. It's really an issue.

I've found two things that help:

Music is good at filtering out noise and helping me focus. However, sometimes I can't listen to it or it is not enough.

The other thing is to make an extremely granular tasklist and then follow it without question. The tasks have to be really small and seem manageable.

Anyone have any suggestions? I'm not neurotypical in the broader sense, but I don't believe I fall on the autism spectrum.

Replies from: Alicorn
comment by Alicorn · 2010-05-25T18:26:29.815Z · LW(p) · GW(p)

I have similar sensory issues on occasion and believe them to be a component of my autism, but if you don't have other features of an ASD then this could just be a sensory integration disorder. When it's an auditory processing issue, I find that listening to loud techno or other music with a strong beat helps more than other types of music, and ear-covering headphones help filter out other noise. I'm more often assaulted by textures, which I have to deal with by avoiding contact with the item(s).

As for the long day, that sounds like a matter of running out of (metaphorical) spoons. Paying attention to what activities drain or replenish said spoons, and choosing spoon-neutral or spoon-positive activities whenever they're viable options, is the way to manage this.

Replies from: eugman
comment by eugman · 2010-05-25T18:39:21.555Z · LW(p) · GW(p)

Thanks for the advice. The only other symptom I have is some problems with my social coprocessor, but it doesn't feel like it fits an ASD.

comment by NancyLebovitz · 2010-05-15T23:59:25.739Z · LW(p) · GW(p)

Criminal profiling, good and bad

Article discusses the shift from impressive-looking guesswork to use of statistics. Also has an egregious example of the guesswork approach privileging the hypothesis.

comment by JoshuaZ · 2010-05-13T04:08:29.046Z · LW(p) · GW(p)

There's an article in this month's Nature examining the statistical evidence for universal common descent. This is the first time someone has taken the massive amounts of genetic data and applied a Bayesian analysis to determine whether the existence of a universal common ancestor is the best model. Most of what we generally think of as evidence for evolution and shared ancestry is evidence for shared ancestry of large collections, such as mammals or birds, or for smaller groups. Some of the evidence is for common ancestry for a phylum. There is prior evidence for their shared ancestry based on primitive fossils and on the shared genetic code and extreme similarity of genomes across very different species. This is the first paper to make that last argument mathematically rigorous. When taken in this fashion, the paper more or less concludes that a Bayesian analysis using just the known genetic and phylogenetic data puts the universal common ancestor model as overwhelmingly more likely than other models. (The article is behind a paywall, so until I get back to the university tomorrow I won't be able to comment on this in any substantial detail, but this looks pretty cool and is a good example of how careful Bayesianism can help make something more precise.)

Replies from: JoshuaZ, PhilGoetz
comment by JoshuaZ · 2010-05-13T20:13:48.678Z · LW(p) · GW(p)

Ok. Reading the paper now. Some aspects are a bit technical and so I don't follow all of the arguments or genetic claims other than at a broad level. However, the money quote is "Therefore, UCA is at least 10^2,860 times more probable than the closest competing hypothesis." (I've replaced the superscript with a ^ because I don't know how to format superscripts.) 10^2860 is a very big number.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2010-05-13T20:32:27.589Z · LW(p) · GW(p)

What were they using for prior probabilities for the various candidate hypotheses? Uniform? Some form of complexity weighting? Other?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-23T18:04:05.538Z · LW(p) · GW(p)

They have hypotheses concerning whether Eukarya, Archaea and Bacteria share a common ancestor or not, or possibly in pairs. All hypotheses were given equal prior likelihood.
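
If that's right, then the reported figure can be read either as a likelihood ratio or as posterior odds, since with equal priors the prior ratio drops out of Bayes' theorem (a standard identity, not anything specific to this paper):

```latex
\frac{P(\mathrm{UCA}\mid D)}{P(H_i\mid D)}
  = \frac{P(D\mid \mathrm{UCA})}{P(D\mid H_i)}\cdot\frac{P(\mathrm{UCA})}{P(H_i)}
  = \frac{P(D\mid \mathrm{UCA})}{P(D\mid H_i)}
  \qquad\text{when } P(\mathrm{UCA}) = P(H_i).
```

So given flat priors over the candidate hypotheses, posterior odds of 10^2,860 and a Bayes factor of 10^2,860 are the same statement.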

comment by PhilGoetz · 2010-05-13T05:03:47.919Z · LW(p) · GW(p)

I take it "a universal common ancestor" doesn't mean a universal common ancestor, but means a universal common ancestral group?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-13T08:51:07.854Z · LW(p) · GW(p)

As I said, I haven't had a chance to actually read the article itself, but as I understand it, this would indicate a universal common ancestor group of nearly genetically identical organisms. While there is suspicion that horizontal gene transfer was more common in the past than it is now, this supports the general narrative of all life arising from a single organism. These sorts of techniques won't distinguish between that and life arising from several genetically identical organisms.

comment by mattnewport · 2010-05-06T21:20:16.880Z · LW(p) · GW(p)

Don't know if anyone else was watching the stock market meltdown in real time today, but as the indices were plunging down the face of what looked a bit like an upside-down exponential curve, driven by HFT algorithms gone wild, and the financial news sites started going down under the traffic, I couldn't help thinking that this is probably what the singularity would look like to a human. Being invested in VXX made it particularly compelling viewing.

Replies from: SilasBarta
comment by SilasBarta · 2010-05-06T22:30:54.209Z · LW(p) · GW(p)

To save everyone the googling: VXX is an exchange traded fund (basically a stock) whose value tracks the level of the VIX index. The VIX index is a measure of the volatility of the markets, with higher values indicating higher volatility (volatility here generally implying lost market value). VIX stands at about 33 now, and was around 80 during the '08 crisis.

Replies from: bogdanb, mattnewport
comment by bogdanb · 2010-05-21T14:40:13.630Z · LW(p) · GW(p)

Does that mean VXX stock becomes more expensive/valuable when the volatility grows, or when it goes down?

Replies from: SilasBarta
comment by SilasBarta · 2010-05-21T14:43:06.631Z · LW(p) · GW(p)

VXX becomes more expensive when volatility grows.

comment by mattnewport · 2010-05-06T22:57:18.655Z · LW(p) · GW(p)

Thanks, I meant to include a link to that. I'll edit it.

comment by Mitchell_Porter · 2010-05-04T12:04:24.217Z · LW(p) · GW(p)

Continuing discussion with Daniel Varga:

It's difficult to discuss the behavioral dispositions of these imagined cosmic civilizations guided by a single utility function, without making a lot of assumptions about cosmology, physics, and their cosmic abundance. For example, the accelerating expansion of the universe implies that the universe will eventually separate into gravitationally bound systems (galactic superclusters, say) which will be causally isolated from each other; everything else will move beyond the cosmological horizon. The strategic behavior of such a civilization may be very different if it expects a rival or it expects no rivals to have formed independently in its supercluster. It's the difference between expecting a future of unimpeded expansion and expecting to have to negotiate or fight.

Regarding my multipronged flame attack on your supposed intellectual sins :-) ... OK, you're not a platonist. "Tegmark's multiverse" is just an inventory of possible formal structures for you, and you want to know which ones could describe a universe that contains time. That's still a hard question. ata says, quite reasonably, that there had better be some form of sequential structure, so you can have temporal succession and temporal dynamics. But we also regard relativity as providing a model of time, only then you don't have a universal time. Technically, you don't have a total order on the set of all events, only a partial order. So will we say that any partially ordered set can provide a model of time? Then I wonder about generalizations of relativity in which you have more than one timelike direction. Is that a formal generalization which exceeds the possibility of an interpretation in terms of time? I think that phenomenon - formal overgeneralization - exists and is hardly talked about, again because of our dereliction of integrated ontology in favor of our combination of rigorous formalism and fuzzy thinking about how the formalism relates to reality. You can see this in logic, I believe. Classical logic is formalized, and then the formal system is generalized, and the new formalism is treated as if it describes "a logic", but one may reasonably ask if it has become simply a set of rules of symbolic manipulation that no longer corresponds to any valid form of reasoning. I would not want to say that non-Euclidean geometry is not a geometry, so some forms of formal generalization will retain a connection to their alleged meaning, but the whole issue is hardly addressed.
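
To make "only a partial order" concrete, here is the standard special-relativistic definition written out; nothing original, and restricting to Minkowski space is my simplification. For events x and y, define

x \preceq y \iff x = y, or y - x is future-directed timelike or null.

This relation is reflexive, antisymmetric (Minkowski space has no closed causal curves), and transitive, but it is not total: spacelike-separated events are incomparable. So the set of events under \preceq really is a partially ordered set rather than a linearly ordered one, and the question is which such posets still deserve to be interpreted as time.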

As for "emergent time". You say you agree with Barbour, but you think time is real. Well, I don't know what you mean by time then. To me, time is about change. Becoming, not just being. Things aren't just sitting there inertly in static eternity; change is real. And I do not at all see how change can be "emergent". There may be parts of reality that don't change, and parts of reality that do, and maybe there's a definable boundary. But talking of emergent time makes it sound like you're trying to have it both ways at once: you don't have time and you do have time. You have a universe without change, and yet if you look at it differently it is changing.

I don't buy that. Either change is real or it isn't. You may have a static description of a changing reality. That is, it may be possible to talk about the set of all physical events in the history of the universe, and say things about their relationships and the patterns they form, without referring to time or change; but that doesn't mean you can start by postulating that reality consists of a set of physical conditions in timeless stasis, and then somehow get time back by magic. It's a matter of interpretation of the formalism. Either it refers to time or it doesn't.

Feel free to refute me by explaining what the emergence of time could possibly mean (even better, the emergence of time from memory).

Replies from: rwallace, thomblake
comment by rwallace · 2010-05-04T14:42:29.775Z · LW(p) · GW(p)

Time is just a dimension in which there is determinism, so that the slice of the universe at position t is a function of the slice at position t-1.

Replies from: wnoise
comment by wnoise · 2010-05-04T14:45:12.945Z · LW(p) · GW(p)

Uh, wow. In our universe, that makes all dimensions time.

Replies from: rwallace
comment by rwallace · 2010-05-04T16:51:48.181Z · LW(p) · GW(p)

No. The state of affairs along the slice of space passing through Earth's equator, for example, does not uniquely determine the state of affairs at 1° north latitude. But the state of affairs now, does determine the state of affairs one second in the future. (Relativistic motion can tilt the axes somewhat, but not enough to interchange space and time.)

Replies from: wnoise
comment by wnoise · 2010-05-04T18:13:13.514Z · LW(p) · GW(p)

All our physical models are described by local partial differential equations. Given the data on an (n-1) dimensional slice (including derivatives, of course), we can propagate that to cover the whole space. (there are complications once GR is in the picture making the notion of global slices questionable, but the same result holds "locally".)

If the data at the slice doesn't include derivatives, you can't propagate in time either.

Replies from: Douglas_Knight, rwallace
comment by Douglas_Knight · 2010-05-05T01:37:49.612Z · LW(p) · GW(p)

All our physical models are described by local partial differential equations. Given the data on an (n-1) dimensional slice (including derivatives, of course), we can propagate that to cover the whole space.

In that generality, this is false. Not all differential equations are causal in all directions. I doubt that it's true of most physical examples. In particular, everyone I've ever heard talk about reconstruction in GR mentioned space-like hypersurfaces.

UPDATE: Actually, it's true. (At least until I redefine "causal" below.)

Replies from: wnoise
comment by wnoise · 2010-05-06T11:24:50.593Z · LW(p) · GW(p)

In that generality, this is false. Not all differential equations are causal in all directions.

I don't doubt that pathological examples exist. I don't suppose you have any handy? I really would be interested. I do doubt that physical examples happen (except perhaps along null vectors).

The prototypical 4-d wave equation is f_tt - f_xx - f_yy - f_zz = 0. I don't see how rearranging that to f_tt = f_xx + f_yy + f_zz provides any more predictive power in the t direction than f_xx = f_tt - f_yy - f_zz provides in the x direction. (There are numerical stability issues, it's true.)

In particular, everyone I've ever heard talk about reconstruction in GR mentioned space-like hypersurfaces.

Well, that's partially an artifact of that being the sort of question we tend to be interested in: given this starting condition (e.g. two orbiting black holes), what happens? But this is only a partial answer. In GR we can only extend in space so long as we know the mass densities at those locations as well. Extending QFT solutions should do that. The problem is that we don't know how to combine QFT and GR, so we use classical mechanics, which is indeed only causal in the time direction. But for source-free (no mass density) solutions to GR, we really can extend a 2+1 dimensional slice in the remaining spatial direction.
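
To make the symmetry concrete, here is a minimal numerical sketch (my own toy setup, with made-up initial data, ignoring boundaries and stability): the leapfrog stencil for f_tt = f_xx in 1+1 dimensions is literally the same update whether you march along t or along x; only the roles of the two grid spacings swap.

import numpy as np

def leapfrog_step(prev_row, curr_row, r2):
    # One explicit step of f_aa = f_bb along direction a, given the two most
    # recent rows of constant a, each sampled along direction b.
    nxt = 2 * curr_row - prev_row
    nxt[1:-1] += r2 * (curr_row[2:] - 2 * curr_row[1:-1] + curr_row[:-2])
    return nxt

# Marching in t: rows are constant-t slices sampled in x, with r2 = (dt/dx)**2.
# Marching in x: rows are constant-x slices sampled in t, with r2 = (dx/dt)**2.
# Same function either way; only the stability (CFL) condition differs.
f_prev = np.exp(-np.linspace(-5, 5, 201) ** 2)  # made-up data on the first slice
f_curr = f_prev.copy()                          # zero derivative normal to the slice
f_next = leapfrog_step(f_prev, f_curr, r2=0.25)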

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-05-07T03:08:26.338Z · LW(p) · GW(p)

No, I think you're right that directions in which solutions to differential equations are determined by their boundary values are generic.

But I think it is reasonable (and maybe common) to identify causality with well-posedness. So I think that rwallace's definition is salvageable.

I'm confused about GR, though, since I remember one guy who put in caveats to make it clear he was talking about uniqueness, not constructive extension, yet still insisted on spacelike boundary.

comment by rwallace · 2010-05-04T21:48:22.684Z · LW(p) · GW(p)

How does that square with e.g. the fact that the gravity of a spherically symmetric object from the outside is the same as that of the same mass compressed into a point at the same center of gravity?

Replies from: wnoise
comment by wnoise · 2010-05-04T22:06:13.286Z · LW(p) · GW(p)

The short answer is that there's more there besides the gravitational field[1] (in the approximation that we can think of it as that). There are the various elementary particle fields. These will have their own values[2] and derivatives, which are part of a giant system of PDEs intertwining them. Two different spherically symmetric objects with the same gravitational field will have different particle fields.

  1. If there were only gravity, we would have something like the Schwarzschild solution -- which is uniquely determined by mass.
  2. These are not number-valued fields, but operator valued, and usually spinor-operator and vector-operator valued at that.
Replies from: rwallace, DanielVarga
comment by rwallace · 2010-05-05T00:04:47.358Z · LW(p) · GW(p)

Ah, so you're saying my initial statement would be true in Newtonian physics, but is false in quantum mechanics (for reasons which I'm struggling to grasp -- my knowledge of physics is strictly that of an interested layman)?

Now that is very interesting indeed.

Does this have anything to do with holographic theory and the fact that the Bekenstein bound is defined over area rather than volume?

Would it be correct to say this is still true for an uncharged black hole, based on the existence of frames of reference in which matter evaporated just before entering the event horizon?

Replies from: Douglas_Knight, wnoise
comment by Douglas_Knight · 2010-05-07T04:08:58.424Z · LW(p) · GW(p)

Ah, so you're saying my initial statement would be true in Newtonian physics, but is false in quantum mechanics...?

No, newtonian vs QM isn't making a difference here and wnoise's first answer makes perfect sense in the classical setting. But I'm not sure why he didn't say that in response to this question. Let's transpose your example into Maxwell's equations, for concreteness. Maxwell's equations in vacuum are determined, but once you put in the charged body, they're underdetermined. In potential form, they say that the wave operator applied to the four-vector-potential is equal to the four-current. In vacuum, this is four equations in four unknowns and one expects solutions to be determined by boundary values, but if you don't constrain the four-current, it says that you can make the vector-potential anything you like (which is some constraint on E&B). You can violate causality in any direction. You can have vacuum up to time 0, and then have charge appear.

You can pin down the point particle or the field of charge by fiat, but then that's a law of physics differing between the two versions. In real life, you need to do that by some other physical process, some other law that governs the charged particles and is coupled to Maxwell's equations. In particular, the charged particles should get pushed around by the electromagnetic field. That's not Maxwell's equation! E&B and the charge tell you how much force, but you need to know the mass to determine the effect of the force. In the point particle case, you just have one mass per particle and I think that's the end of the story. In the uniformly distributed case, you need a mass density field, plus probably other stuff to determine the motion of the charge. Plus you need other forces involved to prevent the soup from dispersing from Coulomb's law. It's that other force that, presumably, will be felt on the slice that is not space-like.
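
Spelling out the potential form being described, with my own choice of Lorenz gauge and SI units:

\Box A^\nu = \mu_0 J^\nu, with the gauge condition \partial_\nu A^\nu = 0.

If the four-current J^\nu on the right is left unconstrained, any sufficiently smooth A^\nu solves the system for some choice of sources, which is exactly the underdetermination described above.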

Replies from: rwallace
comment by rwallace · 2010-05-07T14:09:40.307Z · LW(p) · GW(p)

And again, I am surprised, and perhaps missing something.

In classical mechanics with Maxwell's equations, suppose you have a non-spacelike slice, and 1 meter to the north there is a spherical charged object with a given charge and mass. Is it not the case that the electromagnetic and gravitational effects of this object are just the same as they would be if the same charge and mass were in the form of a point particle? How could you tell the difference, even in principle, just by looking at what is happening at the slice?

Another question: if a 3d slice is enough to determine what happens in a 4d volume of space-time, even when the slice is not spacelike, does relative size matter? Suppose the slice is a spherical shell surrounding a small volume... say for example, a piece of fairy cake... presumably this is enough to determine what happens inside the volume. Is it also enough to determine what happens in the rest of the universe?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-05-08T04:45:37.130Z · LW(p) · GW(p)

In no direction is a 3d slice enough to determine what happens with, say, Maxwell's equations. This is true in a space-like direction, as in your example, and it is true in a time-like direction, as in the case of an electron and a positron appearing out of the vacuum. Throwing in Newton's laws isn't enough to change this. You need to know what governs the creation of particles or what holds together the uniform sphere of charge.

Replies from: rwallace
comment by rwallace · 2010-05-08T20:47:33.091Z · LW(p) · GW(p)

Certainly you need to know those things, but I'm not clear on how that relates to the 3dness of the slice; suppose you add rules like particles can neither be created nor destroyed and spherical charged particles hold together by fiat, doesn't that solve that problem?

comment by wnoise · 2010-05-06T11:42:48.567Z · LW(p) · GW(p)

It's not in general true in quantum mechanics. It is true for 1-particle quantum mechanics if and only if the potential and any other interactions (e.g. form of canonical momentum for external magnetic field) are specified.

The reason is that the arena of quantum mechanics is not space with 3 dimensions, but configuration space, with 3n dimensions, one copy of 3-space for each of the n particles (disregarding symmetries). Having time evolution be known lets us get rid of one spatial dimension, but we need to get rid of one spatial dimension for each of the n particles. The other thing that destroys any hope is that non-local interactions are often used to model systems.

Of course quantum mechanics is only an approximation to quantum field theory, which is nicely local in the spatial sense.

Does this have anything to do with holographic theory

You know, I asked that at a colloquium nearly a year ago, and got back the answer "no", but without a satisfactory explanation.

Would it be correct to say this is still true for an uncharged black hole, based on the existence of frames of reference in which matter evaporated just before entering the event horizon?

I can't quite figure out what you're asking here, and probably couldn't give an answer without a full theory of quantum gravity.

comment by DanielVarga · 2010-05-04T22:35:54.380Z · LW(p) · GW(p)

Can I have your input at the "marketplace of insights and issues"? Please? On either of the two questions I posed there for physicists.

comment by thomblake · 2010-05-04T13:27:50.077Z · LW(p) · GW(p)

For this entire discussion, I'm missing the part where the theories/models being discussed lead us to anticipate certain experiences.

Replies from: DanielVarga
comment by DanielVarga · 2010-05-04T14:36:27.927Z · LW(p) · GW(p)

I am not sure if this is a reasonable request. It is impossible to talk about experiences (not to mention anticipation :) ) without accessing higher levels of some reductionist hierarchy. I am interested in the emergence of the thermodynamic arrow of time from more basic notions. I leave to other reductionists the task to reduce the notion of experience to the more basic notion of information processing in a space-time continuum. People like Dennett and Minsky had spectacular successes in this other task.

comment by Liron · 2010-05-03T07:55:41.515Z · LW(p) · GW(p)

I recently heard a physics lecture claim that the luminiferous aether didn't really get kicked out of physics. We still have a mathematical structure, which we just call "the vacuum", through which electromagnetic waves propagate. So all we ever did was kill the aether's velocity-structure, right?

Replies from: ata
comment by ata · 2010-05-03T08:14:20.893Z · LW(p) · GW(p)

That reminds me of this discussion.

Of course if you define "luminiferous aether" as generally as "whatever mechanism results in the propagation of electromagnetic waves", then it exists, because electromagnetic waves do propagate. But when it was under serious scientific consideration, the luminiferous aether theory made testable predictions, and they failed. Just saying "they're different concepts" is easier than saying "it's the same basic concept except it has a different name and the structure of the theory is totally different".

I could sympathize with trying to revive the name "luminiferous aether" (or even better, "luminiferous æther"), though. It's a pretty awesome name. (I go by "Luminiferous Æther Bunny" on a few other forums.)

Replies from: Liron
comment by Liron · 2010-05-03T08:36:52.302Z · LW(p) · GW(p)

Nice link. It would be cool to see a similar discussion for all the classic rejected hypotheses.

comment by [deleted] · 2010-05-28T05:17:51.898Z · LW(p) · GW(p)

If the future of the universe is a 'heat death' in which no meaningful information can be stored, and in which no meaningful computation is possible, what will it matter if the singularity happens or not?

Ordinarily, we judge the success of a project by looking at how much positive utility has come of it.

We can view the universe we live in as such a project. Engineering a positive singularity looks like the only really good strategy for maximizing the expression of complex human values (simplified as 'utility') in the universe.

But if the universe reaches a final heat death, so that no intelligent life exists, and there is no memory and no record of anything, what do the contents of the antecedent eons count for? There is no way to tell if the-universe-which-resulted-in-heat-death saw the rise of marvelous intelligence and value or remained empty and unobserved.

What is the utility of a project after all of its participants, and all records and memory of it, are utterly destroyed?

The pragmatic answer is simply 'carpe diem': make the best of this finite existence. This is what people did for ages before the ideas of the singularity and transhumanism were formulated.

Transhumanist beliefs, including the prospect of 'immortality' or transcendence seem to be a way in which some cope with their fear of death. But I fail to see why death should be any less gloomy a prospect for a 3^^^3 year old being than it is for a 30 year old. By definition, one cannot 'reminisce' about one's accumulated positive experiences after death, so in one sense a 3^^^3 year old has actually lost more: vastly more information has been destroyed!

So, in short, I struggle to see a rationale for my intuitive belief that surviving into deep time is truly better than a natural human lifespan, for if heat death is inevitable, as seems to be the case, the end result--the final tally of utils accumulated--is exactly the same. 0.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-05-28T05:23:44.283Z · LW(p) · GW(p)

The problem with this is that it assumes we only care about the end state.

Replies from: None
comment by [deleted] · 2010-05-28T05:31:26.238Z · LW(p) · GW(p)

Is it rational for a decision procedure to place great value on the interim state, if the end state contains absolutely no utility?

Replies from: khafra, Sniffnoy
comment by khafra · 2010-05-28T15:29:31.218Z · LW(p) · GW(p)

This is a philosophical question, not a rational one. Terminal values are not generated by rational processes; that's why they're terminal values. The metaethics sequence, especially existential angst factory and the moral void, should expand on this sufficiently.

comment by Sniffnoy · 2010-05-28T20:53:18.335Z · LW(p) · GW(p)

Does caring about interim states leave you open to Dutch books?

comment by Jack · 2010-05-15T16:36:13.294Z · LW(p) · GW(p)

HELP NEEDED Today if at all possible.

So I'm working on a Bayesian approach to the Duhem-Quine problem. Basically, the problem is that an experiment never tests a hypothesis directly, but only the conjunction of the hypothesis and auxiliary assumptions. The standard method for dealing with this is to expand

P(h|e)=P(h & a|e) + P(h & -a|e) (so if e falsifies h&a you just use the h&-a)

So if e falsifies h&a you end up with:

P(h|e) = P(e|h&-a) * P(h&-a) / P(e)

This guy Strevens objects on the grounds that e can impact h without impacting a. His example:

Newstein, a brilliant but controversial scientist, has asserted both that h is true and that e will be observed. You do not know Newstein’s reasons for either assertion, but if one of her claims turns out to be correct, that will greatly increase your confidence that Newstein is putting her brilliance to good use and thus your confidence that the other claim will also turn out to be correct. Because of your knowledge of Newstein’s predictions, then, your P(h|e) will be higher than it would be otherwise.

Am I crazy or shouldn't that information already be contained in the above formula? Specifically, the term P(e|h&-a) should be higher than it would otherwise be.
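
A quick numeric sanity check of that intuition, with numbers I made up: hold the priors fixed and raise only P(e|h&-a), as the Newstein story suggests, and P(h|e) goes up.

# Made-up joint priors over (h, a); e is taken to falsify h&a, so P(e | h&a) = 0.
prior = {("h", "a"): 0.25, ("h", "-a"): 0.25, ("-h", "a"): 0.25, ("-h", "-a"): 0.25}

def posterior_h(likelihood):
    # likelihood: P(e | cell) for each of the four cells.
    p_e = sum(likelihood[c] * prior[c] for c in prior)
    return sum(likelihood[c] * prior[c] for c in prior if c[0] == "h") / p_e

base     = {("h", "a"): 0.0, ("h", "-a"): 0.3, ("-h", "a"): 0.3, ("-h", "-a"): 0.3}
newstein = {**base, ("h", "-a"): 0.6}   # Newstein's track record raises P(e | h&-a)

print(posterior_h(base))      # 0.333...
print(posterior_h(newstein))  # 0.5, higher, as expected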

comment by kpreid · 2010-05-14T17:22:36.623Z · LW(p) · GW(p)

It's a vicious cycle-- if you work on something that sounds crank-ish, you get defensive about being seen as a crank, and that defensiveness is also characteristic of cranks. Lather, rinse, repeat.

Chad Orzel

This seems possibly broadly applicable to me; e.g. replace “crank” with “fanboy”.

comment by Leafy · 2010-05-14T08:43:16.938Z · LW(p) · GW(p)

Science.

To me it is a process, a method, an outlook on life. But so often it is used as a proper noun: "Science says tomatoes are good for you".

It should be used to encourage rational thinking, clarity of argument and assumption, and rigorous unbiased testing; the pursuit of knowledge and truth. Instead it is often seen as a club, to which you either belong by working in a scientific profession, or you do not.

As a child of a mixed-religion household I felt like an outcast from religion from an early age - it didn't matter that I had beliefs of my own; if I didn't belong to a specific club then I didn't belong at all. Very few religious people I met encouraged me to have faith regardless of what that faith was.

I see a scientific approach to life and its mysteries as my way of forming my own "map of the territory", as others perhaps use religion, and I hope that as promoters of rationality we can encourage scientific principles in others rather than making them feel like outcasts for not being in our "club".

comment by PhilGoetz · 2010-05-13T00:17:22.259Z · LW(p) · GW(p)

I don't think that the math in Aumann's agreement theorem says what Aumann's paper says that it says. The math may be right, but the translation into English isn't.

Aumann's agreement theorem says:

Let N1 and N2 be partitions of Omega ... Ni is the information partition of i; that is, if the true state of the world is w [an element of Omega], then i is informed of that element Pi(w) of Ni that contains w.

Given w in Omega, an event E is called common knowledge at w if E includes that member of meet(N1, N2) that contains w.

Let A be an event, and let Qi denote the posterior probability p(A|Ni) of A given i's information; i.e., Qi(w) [ = p(A | Pi(w)) ] = p(A ^ Pi(w)) / p(Pi(w)).

Proposition: Let w be in Omega ... If it is common knowledge at w that Q1 = q1 and Q2 = q2, then q1 = q2.

Proof: Let P be the member of meet(N1, N2) that contains w. Write P = union over all j of Pj, where the Pj are disjoint members of N1. Since Q1 = q1 throughout P, we have p(A ^ Pj) / p(Pj) = q1 for all j; hence p(A ^ Pj) = q1p(Pj), and so by summing over j we get p(A^P) = q1p(P). Similarly p(A^P) = q2p(P), and so q1=q2.

meet(N1, N2) is not an intersection; it's a very aggressive union of the subsets in the partitions N1 and N2 of Omega. It's generated this way:

def meet_member(w, N1, N2):
    # N1, N2: the two partitions, given as lists of sets; w: an element of Omega.
    # Returns the member of meet(N1, N2) that contains w.
    M, used = {w}, set()
    while M != used:
        m = next(iter(M - used))             # take an element m from M \ used
        H1 = next(H for H in N1 if m in H)   # the cell of N1 containing m
        H2 = next(H for H in N2 if m in H)   # the cell of N2 containing m
        M = M | H1 | H2
        used.add(m)
    return M
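
A toy example of my own (not from Aumann's paper), just to show what the procedure computes:

N1 = [{1, 2}, {3, 4}, {5, 6}]
N2 = [{1}, {2, 3}, {4}, {5, 6}]
meet_member(1, N1, N2)   # returns {1, 2, 3, 4}; the other member of the meet is {5, 6}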

Note in particular that P is a member of meet(N1, N2) that contains elements of Omega taken from H2, that are not in H1. To say that Q1 = q1 throughout P means that, for every x in P, Q1(x) = p(A | P1(x)) = q1. This is used to infer that p(A ^ Pj) = p(Pj) q1 for every Pj in N1.

This is a very strange thing to believe, given the initial conditions. The justification (as Robin Hanson pointed out to me) is that "common knowledge at w that Q1=q1" is defined to mean just that: Q1(x) = q1 for all x in the member P of the meet(N1,N2) containing w.

Now comes the translation into English. Aumann says that this technical definition of "common knowledge at w of the posteriors" means the same as "agent 1 and agent 2 both know both of their posteriors". And the justification for that is this: "Suppose now that w is the true state of the world, P1 = P1(w), and E is an event. To say that 1 'knows' E means that E includes P1. To say that 1 knows that 2 knows E means that E includes all P2 in N2 that intersect P1. ..." et cetera, to closure.

And this, I think, is wrong. If 1 knows that 2 knows E, 1 knows that E includes P1 union some P2 that intersects with P1, not that E includes P1 union all P2 that intersect with P1. So the "common knowledge" used in the theorem doesn't mean the same thing at all that we mean in English when we say they "know each others' posteriors".

Also, Aumann adds after the proof that it implicitly assumes that the agents know each others' complete partition functions over all possible worlds. Which is several orders of magnitude of outlandish; so the theorem can never be applied to the real world.

comment by Richard_Kennaway · 2010-05-11T11:21:24.218Z · LW(p) · GW(p)

The moral life of babies. This is an article that also recently appeared in the New York Times Magazine.

It covers various scientific experiments to explore the mental life of babies, finding evidence of moral judgements, theory of mind, and theory of things (e.g. when two dolls are placed behind a screen, and the screen is removed, 5-month-old babies expect to see two dolls).

Unlike many psychological experiments which produce more noise than signal, "these results were not subtle; babies almost always showed this pattern of response."

It also discusses various responses to the existence of innate morality, and the existence of "higher" adult morality -- caring about people who cannot possibly be of any benefit to oneself.

comment by exapted · 2010-05-11T05:04:18.350Z · LW(p) · GW(p)

How should one reply to the argument that there is no prior probability for the outcome of some quantum event that already happened and split the world into two worlds, each with a different outcome to some test (say, a "quantum coin toss")? The idea is that if you merely sever the quantum event and consider different outcomes to the test (say, your quantum coin landed heads), and consider that the outcome could have been different (your quantum coin could have landed tails), there is no way to really determine who would be "you." Is it necessary to apply the SSA or some form of the SSSA? To me it seems that it should be permissible to rigidly maintain your identity while allowing the outcome of the quantum coin toss to vary across those two worlds. One could then base the prior probability of the coin landing heads in your world on the empirical evidence that quantum coin tosses of that type land heads with frequency 0.5 in any particular instance of a world history.

comment by Will_Newsome · 2010-05-08T20:53:44.268Z · LW(p) · GW(p)

I'm looking at the forecast for the next year on CNN Money for Google stock (which will likely be an outdated link very soon). But while it's relevant...

I don't know much economics, but this forecast looks absurd to me. What are the confidence intervals? According to this graph, am I pretty much guaranteed to make vast sums of money simply by investing all of what I have in Google stock? (I'm assuming that this is just an example of the world being mad. Unless I really should buy some stock?) What implications does this sort of thing have on very unsavvy investors who look at graphs like that and instantly invest thousands of dollars? Do they win at everything forever? What am I missing?

Replies from: mattnewport
comment by mattnewport · 2010-05-08T21:24:28.419Z · LW(p) · GW(p)

It's fairly well established that actively managed funds on average underperform their benchmarks. I'm not aware of specific research on investing based solely on analyst forecasts but I imagine performance would be even worse using such a strategy. Basically, you are right to be skeptical. All the evidence indicates that the best long term strategy for the average individual investor is to invest in a low cost index fund and avoid trying to pick stocks.

ETA: This recent paper appears relevant. They do indeed find that analysts' target prices are inaccurate and appear to suffer from consistent biases.

comment by xamdam · 2010-05-07T18:16:21.505Z · LW(p) · GW(p)

Pre-commitment Strategies in Behavioral Economics - PowerPoint by Russell James. Not deep, which is sometimes a good thing.

comment by Eneasz · 2010-05-07T15:11:58.879Z · LW(p) · GW(p)

First step in the AI take-over: gather funds. Yesterday's massive stock market spike took place in a matter of minutes, and it looks like it was in large part due to "glitches" in automatic trading programs. Accenture opened and closed at $41/share, but at one point was trading for $0.01/share. Anyone with $1000, lightning reflexes, and insider knowledge could've made $4.1M yesterday. For every $1000 they had.

http://www.npr.org/blogs/money/2010/05/the_market_just_flipped_out_ma.html

Next month: our new overlords reveal themselves?

Replies from: nhamann, mattnewport, mattnewport
comment by mattnewport · 2010-05-07T17:58:21.523Z · LW(p) · GW(p)

In the same vein, a note from the machines:

"Many traders said computer program trades accelerated the slide as market indexes fell through crucial levels." —A barely literate human assessment of yesterday's two-minute market panic.

We are Wall Street. It's our job to make money. We didn't hear you humans complaining when the Dow went up 3000 points in the last nine months.

Just like gambling, it's not a problem for you until we make some of the machines lose so that some of the other machines can win. Your market positions are merely a small casualty in yesterday's triumph of Fidessa's EMS Workstation over Automated Trading Desk in the larger Algorithm Battles during this Long War on Execution Services. Well, yesterday some machines crapped out, and even though the market has come back somewhat, the reporters, the regulators and the hyperactive business blogs are looking for a scapegoat. But what did you think was going to happen when you invented the Turing Test anyway?

comment by mattnewport · 2010-05-07T15:49:27.163Z · LW(p) · GW(p)

Anyone with $1000, lighting reflexes, and insider knowledge could've made $4.1M yesterday. For every $1000 they had.

Many of those trades will be cancelled.

comment by NancyLebovitz · 2010-05-05T23:13:03.761Z · LW(p) · GW(p)

Convergence: Threat or Menace? : How to Create the Ultimate TED Talk.

Replies from: Morendil
comment by Morendil · 2010-05-06T06:13:25.350Z · LW(p) · GW(p)

At 4:06 we see a slide that provides less than overwhelming support for the Red Bias hypothesis.

comment by XiXiDu · 2010-05-05T15:41:31.198Z · LW(p) · GW(p)

Does anybody know what happened to Roko Mijic's blog 'Transhuman Goodness'? It completely vanished. Looks like it has been deleted?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-05T15:46:06.458Z · LW(p) · GW(p)

And http://www.rokomijic.com/ doesn't work either...

comment by whpearson · 2010-05-04T13:10:54.667Z · LW(p) · GW(p)

I've created a mailing list for people interested in the future of computation. Not very SL4, but I think the topic is well worth exploring if RSI doesn't work out as people expect.

comment by Academian · 2010-05-02T16:51:05.428Z · LW(p) · GW(p)

Looking for help here... does anyone know a good model for cognitive dissonance resolution? I looked at this "constraint satisfaction" model, and I'm not pleased with it:

I did some simulations with the recursion they suggest, and it produces values outside the "activation range" of their "units" if the edge weights, which represent conflicts, aren't chosen carefully (a "unit" represents a belief or cognition, and its "activation" is how strongly present it is in the mind).

Lacking a decent model for this is really bothering me...

comment by zero_call · 2010-05-02T12:24:11.771Z · LW(p) · GW(p)

Here is a little story I've written. "When the sun goes down."

In the dawning light a boy steps out onto the shadowed plane. All around him lie the pits of entry into the other realms. When he reaches the edge of these pits he skirts his way around slowly and carefully until reaching the open space between them. He continues walking in the shadowed plane until the sun has waned completely, and all is dark. And then the shadowed plane becomes the night plane.

In the night the pits cannot be seen, and the boy can no longer walk his way among them to avoid the entrance into the other realms. He is faced with the decision of continuing the journey in danger, or stopping, and setting up a camp in one place. He knows he has a small safe area around him which would be free of obstacles and the dangers of the pits. But it is a place of stagnation in the night plane, and it lies stagnant as a swampy pool until the sun has come again. And it can be hard to guess the rising of the sun in the night plane, when there is nothing to judge the changing of the sky.

Experience has shown him that the pits could be both good and bad, but frequently bad. The pits had stress and nervousness and fear. There could be great rewards lying in the pits, waiting to be seized, but these rewards required the unknown descent. The descent into the other realms was like a deep pulsing fear inside the boy's mind. He sits down on the rough ground of the night plane, looking to the sky for distraction. He looks to escape the decision to enter the pits or to continue walking along the plane. Either decision holds fear, the commitment, the isolation of the pits.

"I seek distraction from my journey," says the boy, and in response a sprite appears on the horizon, a point of light. This light is dimmer than the sun, far dimmer. The light holds his gaze. From the direction of the light there comes a voice.

Sprite: Welcome to the other space.

Boy: Show me something new.

Sprite: What would you like to see?

Boy: I will see what lies in the pits. But I will not enter.

Sprite: I can show you the pits, but you cannot touch them, you can only look.

And the boy nods his head and the Sprite takes his hand, and his mind leaves his body sitting on the night plane. In the other space, his mind runs from pit to pit, leaning just over the edge, without fear of entry. The sprite stands beside him all the while, a comforting presence, an ethereal presence which takes no form. When it takes his hand, he feels the hand of air. When he sees its light, it is the light of a distant star, never close enough to be revealed for its source.

In the pits he sees many things. He sees all of his curiosities, and branching curiosities, points of humor, interest, and instinctual desire. Each view leads him to the next as though the night plane lay below him at a great distance, where the solid spaces between the pits could not be seen. In the other space, the boy could look down on the night plane, see into the pits, and move from one to the next with no more effort than the slight shift of his gaze.

The sun approaches rapidly now and the boy begins to feel a sense of weariness. He feels confused, lonely, fatigued by his removal from the solid earth of the shadowed plane. He calls out to the emptiness once more.

Boy: Take me back to the plane. The sun approaches.

Sprite: Let us go then.

And with his simple command he executes his path. Back on the plane the sun begins to rise -- the boy, tired, exhausted from his journeys in the other space. He feels safe, having avoided the treacheries of the pits for one more night. In the daytime he can navigate safely once again. In the back of his mind, he knows the pits will come once more, and the fear remains inside his mind, tucked away into a back corner. The fatigue of the experience has its own fear, but this fear he cannot understand, and so he does not feel it.

The sun rises, the boy sleeps to recover, wakes again, and looks out among the plane. There is only a short time now before the sun will dawn upon the shadowed plane. In the dim light the boy begins again.

comment by Jack · 2010-05-02T07:36:43.352Z · LW(p) · GW(p)

"Talking with the Planets"(1901) by Nikola Tesla

comment by exapted · 2010-05-11T04:33:15.796Z · LW(p) · GW(p)

I have read about the argument that the Self-Indication Assumption (SIA) cancels the Doomsday Argument (DA), and I think the argument fails to take into account the following fact: an observer of a particular birth rank is no more likely to have existed given the general hypothesis that there will have been many observers in her reference class than given the general hypothesis that there will have been few, as long as there will have been at least as many as are necessary for someone of her birth rank to exist at all.

Please ignore the following if it has been stated elsewhere.

The form of the argument that I am criticizing assumes both the Self Sampling Assumption (SSA) and the SIA. The SIA asserts that you would be more likely to ever exist given the general hypothesis that many observers will have existed. I could be wrong, but I think the argument is that each observer that will have existed can naively be thought of as a "slot," and that the more "slots," the more likely your "card" is to be dealt. For the argument to work, I think your "card" must include not only your (non rank dependent) physical properties (your position in time-space, physical composition, etc.), but also necessarily your birth rank. What this means is that, in order for the argument to work, we must ignore the fact that the proposition, "someone of birth rank R will have existed" is in fact not more likely to be true given the general hypothesis that there will have been X*Y observers than given the general hypothesis that there will have been only X observers (R, X and Y are positive integers and R is less than both X and Y). (I am ignoring the "no outsider" issue, and assuming that there are no outsiders, in order to focus on the issue of birth rank.)

We can separate birth rank from other properties and apply both the SSA and the SIA, and see that the DA still applies. Suppose there are n possible (non rank dependent) types of observers, and m types of observers that will actually have existed. The probability that your (non rank dependent, n.r.d.) type of observer will have existed is m/n, and the probability that your (n.r.d.) type of observer will have existed, given that the hypothesis "i (n.r.d.) types of observers will have existed" is true, is i/n. We are leaving the SIA intact except for the fact that we are separating rank out from it, and the SIA seems to implicitly rely on the principle of indifference - indifference as to the probability of any particular (n.r.d.) type of observer existing, or existing in any particular quantity. We can apply the SSA to say that, although your (n.r.d.) type of observer is a priori more likely to exist given the hypothesis that there will have been X observers than given the hypothesis that there will have been X*Y observers, the probability that an observer randomly selected from the set of all observers is of your type is equal on both hypotheses.

We can say that there are c possible birth ranks and z birth ranks that will have existed. Here we can use a fact about birth ranks that is not taken into account in the naive application of the SIA: The probability that a birth rank will have existed is 1 given that it is less than or equal to z, and zero otherwise. So the SIA doesn't apply to rank.

Finally, we can apply the SSA to rank and all of the normal implications of the DA apply. First, a (n.r.d.) type of observer was randomly selected from the set of observers that will have existed. Then a birth rank was randomly selected from the birth ranks that will have existed among those with the selected (n.r.d.) type. Since the SIA implicitly relies on the principle of indifference, selecting a (n.r.d.) type of observer first is equally likely to make it more/less likely that we would select some particular rank instead of another. And if there are some rank dependent properties other than birth rank, it is assumed that the number of permutations of those possible rank dependent properties are equally distributed among the ranks that will have existed.

comment by Maelin · 2010-05-17T10:11:01.335Z · LW(p) · GW(p)

A friend linked me to this rather ambitiously described paper: An Algorithm for Consciousness:

This document offers a complete explanation of the hard problems of consciousness and free will, in only 34 pages. The explanation is given as an algorithm, that can be implemented on a computer as a software program. (Open-)Source code will be released by Jan 2011. A solid background in psychology, computer science & artificial intelligence is useful, but if you're prepared to follow the hyperlinks in the document, it should be possible for most people to enjoy.

The author wishes to remain anonymous, but is not a crank. He/she has 10 years professional experience building cutting-edge artificial intelligence, computer vision and machine learning systems that are used on 4 continents, and has degrees in computer science, artificial intelligence, and robotics.

I haven't read past the first page yet (don't have time right now) but I thought that it might be something people here would be interested in.

Replies from: orthonormal, nhamann
comment by orthonormal · 2010-05-17T23:11:55.240Z · LW(p) · GW(p)

The author wishes to remain anonymous, but is not a crank.

Wow. How can I argue with that?

Replies from: nclsbrbki
comment by nclsbrbki · 2010-05-23T04:09:22.517Z · LW(p) · GW(p)

You can argue with it.. But you don't have to, because I wrote the article and I agree with you. It's cranky stuff. :)

The description is supposed to be taken lightly (hence the tongue-in-cheek comment "in only 34 pages"). It's not scientific content, and I wouldn't claim it as such. It is because it is unscientific (and partly because contractually, my employer owns all my ideas) that it's published anonymously. It's fun to develop outrageous ideas that might be impractical to evaluate scientifically: It's wrong to claim they're proven fact, without strong evidence. Which I don't have.

Not all good ideas make good or easy science, and not all bad ideas are unscientific.

To the commenter who thinks it stinks due to use of graph representation - there is a lot of evidence for the existence of a representational system within the brain, and the graph is simply one useful way of representing information. Agreed, by themselves graph-ontologies like Cyc ( http://www.cyc.com/ ) are not conscious. Unless you're challenging representationalism itself (as behaviourists would), rejecting it on the basis that it has graphs is no better than rejecting it for the choice of font. Have a proper read, if you can spare the time.

For what it's worth, I didn't choose to put the article on lesswrong, but word eventually got back to me that a friend [of a friend... etc] had posted it. Which is quite nice actually, because I didn't know about lesswrong before and I like it now I'm here.

Anyway, have a read if you want to and I'm happy to answer questions. In the meantime I'm going to continue reading some of the other articles here.

best regards,

comment by nhamann · 2010-05-21T03:16:11.100Z · LW(p) · GW(p)

From a quick skim, in this proposal "knowledge" is modeled as symbols connected in a graphical structure, i.e. a bunch of suggestively-named LISP tokens. I obviously stopped skimming after that. Unfortunately, AI theories are like assholes: everyone's got one, and they usually stink.