Posts

Comments

Comment by Hopefully_Anonymous2 on The Virtue of Narrowness · 2007-08-08T17:57:48.000Z · LW · GW

Well, I googled superintelligence and corporations, and this came up as the top result for an articulated claim that corporations are superintelligent:

http://roboticnation.blogspot.com/2005/07/understanding-coming-singularity.html#112232394069813120

The top result for an articulated claim that corporations are not superintelligent came from our own Nick Bostrom:

http://64.233.169.104/search?q=cache:4SF3hsyMvasJ:www.nickbostrom.com/ethics/ai.pdf+corporations+superintelligent&hl=en&ct=clnk&cd=4&gl=us

Nick Bostrom: "A superintelligence is any intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills. This definition leaves open how the superintelligence is implemented – it could be in a digital computer, an ensemble of networked computers, cultured cortical tissue, or something else."

If one defines superintelligent as able to beat any human in any field, then I think it's reasonable to say that no corporations currently behave in a superintelligent manner. But that doesn't mean that the smartest corporations aren't smarter than the smartest humans. It may just mean that it's not rational for them to engage in those specific tasks. Anyway, given the way corporations operate, a corporation wouldn't attempt, as a unit, to be more socially skilled than Bill Clinton. It would just pay to utilize Bill Clinton's social skills.

So Nick's point is interesting, but I don't think it's an ending point; it's a starting or midway point in the analysis of networked groups of humans (and nonhuman computers, etc.) as potentially distinct intelligences, in my opinion.

Here are some more personal thoughts on this in a recent blog post of mine:

http://hopeanon.typepad.com/my_weblog/2007/08/do-archetypes-e.html

Comment by Hopefully_Anonymous2 on The Virtue of Narrowness · 2007-08-07T20:34:07.000Z · LW · GW

Jeff Kottalam, I'd also like to be directed to such claims and claim justifications (there's a protean claim justification on my blog). I'll resist the temptation of the thread-jacking bait that constitutes your last sentence, and encourage you -and Eliezer- to join me on my blog to continue the conversation on this topic.

Comment by Hopefully_Anonymous2 on The Importance of Saying "Oops" · 2007-08-07T18:52:27.000Z · LW · GW

Eliezer, not bothering to go after a goal may fall into that category. For example, it's reasonable to choose to live an average life, because one is probably mistaken if one thinks one is likely to have strongly positively deviant outcomes in life, such as becoming a billionaire, procreating with a one-in-a-million beauty, winning a Nobel Prize for one's academic contributions, or becoming an A-list celebrity. So one may choose never to invest in going after these goals, and devote the balance of one's time and energy to optimizing one's odds of maintaining a median existence, in terms of achievements and experiences. I could name people who seem to be doing that, but you've never heard of them.

Comment by Hopefully_Anonymous2 on The Virtue of Narrowness · 2007-08-07T18:46:32.000Z · LW · GW

Eliezer, Actually, I'd like to read good critiques of descriptions of corporations as superintelligent (or more nuanced versions of that assertion/theory, such as that some corporations may be intelligent, and more intelligent than individual humans).

Where can I find such critiques?

Comment by Hopefully_Anonymous2 on The Proper Use of Doubt · 2007-08-06T23:40:05.000Z · LW · GW

Eliezer, I'm using transparency to mean people who wear lab coats, or who make great public displays of doubt, being open and honest with themselves and others about why they're doing so. I think it's a standard usage of the word.

Comment by Hopefully_Anonymous2 on The Proper Use of Doubt · 2007-08-06T23:12:46.000Z · LW · GW

Eliezer,

http://www.mja.com.au/public/issues/174_07_020401/mvdw/mvdw.html

Particularly scary sentence:

"And yet, the practice of medicine involves more than its subservience to evidence or science. It also involves issues such as the meaning of service and feelings of professional pride."

Comment by Hopefully_Anonymous2 on The Proper Use of Doubt · 2007-08-06T22:19:40.000Z · LW · GW

PS I love this line for the double scoop of transparency: "Making a great public display of doubt to convince yourself that you are a rationalist, will do around as much good as wearing a lab coat."

Comment by Hopefully_Anonymous2 on The Proper Use of Doubt · 2007-08-06T22:17:48.000Z · LW · GW

A great post (in a series of great recent posts from Eliezer), and so far the comments on this post are very strong too.

Comment by Hopefully_Anonymous2 on The Importance of Saying "Oops" · 2007-08-05T11:50:29.000Z · LW · GW

Great post.

Comment by Hopefully_Anonymous2 on Religion's Claim to be Non-Disprovable · 2007-08-04T19:23:01.000Z · LW · GW

Richard, I share your concerns, as expressed in past posts to this blog. Great to see someone else (non-anonymously?) expressing them. I have a longer response on my anonymous blog.

Comment by Hopefully_Anonymous2 on Religion's Claim to be Non-Disprovable · 2007-08-04T07:00:30.000Z · LW · GW

Eliezer, it's a good point, and hopefully writings like these will get the skeptic community (much larger than the existential-risk-reduction community) buzzing about "bayesian reasoning" as the proper contrast to religion. But it seems to me that religion has already been slain many, many times by public intellectuals. The cutting-edge areas to address, the "hard" areas, are things like universal adult enfranchisement to select policy makers, and juries as finders of fact.

Comment by Hopefully_Anonymous2 on Belief as Attire · 2007-08-03T17:09:15.000Z · LW · GW

Bob, I take it you're not the deceased Kiwi atmospheric scientist Robert "Bob" Unwin. But very high-quality commentary. I hope that you start a blog to consolidate your observations under this name/pseudonym (as I have done with mine).

Comment by Hopefully_Anonymous2 on Belief as Attire · 2007-08-03T15:17:30.000Z · LW · GW

Michael, how about the point that you're (rather explicitly now) picking a point upon which to manufacture in-groups and out-groups? In-group: those of us who get motivations for execution. Out-group: those who get honor killings.

The in-groups and out-groups change if the point to get is abrahamic monotheism, or if the point to get is state-sanctioned punitive killings. It seems to me that you're picking one that's particularly salient either to you or to what you imagine your audience to be. I think this gets to belief-as-attire/beliefs-as-cheers-for-teams: it's an attempt to pick teams, but I think the implied in-groups and out-groups are, at least in theory, contestable.

A bit tangentially, I think teams themselves can be an effective (the most effective?) way to construct hierarchical privilege: the people on the field vs. the people on the bench (or the people relegated to the audience) of the two teams.

In terms of overcoming bias, I think understanding and when necessary countering these phenomena is important primarily to the degree that they warp decision-making or increase economic waste/existential risk.

Comment by Hopefully_Anonymous2 on Belief as Attire · 2007-08-03T11:45:40.000Z · LW · GW

Michael, I think your example is interestingly rooted in an implied in-group/out-group construction that constructs Americans in a flattering way. Consider that you contrast honor killings with forcing kids to go to law school or day camp, which won't necessarily result in their death. It's a flattering contrast that I think constructs America as Western and honor killers as culturally Middle Eastern. But if we contrast cultures that approve of state-sanctioned killing of people for moral transgressions, America and the nations of the honor killers are now in the same group, with Western Europe (and much of the rest of the world) in the other group. Incidentally, I'm not opposed to state-sanctioned killing, but I think it would be more rational for the penalty to be applied to individuals to the extent it will prevent future great economic waste or increase in existential risk, rather than to punish premeditated murder of a small number of people or purported extramarital/premarital sex.

Comment by Hopefully_Anonymous2 on Belief as Attire · 2007-08-03T00:06:46.000Z · LW · GW

I think questioning the Alabama bar analogy is useful within the context of this post. Whose attire is a belief in the value of giving primacy to skepticism, critical thinking, etc.? According to Eliezer's performance in the OP, it's not the attire of either Alabama bar patrons or "muslim terrorist suicide bombers" -and both of those may signal, more generally, the losers of the American Civil War and non-white brown people. In short, I think there may be a gentrification of critical thinking: it's reserved for an in-group, perhaps in particular northeastern anglo-saxon and ashkenazi jewish male intellectuals, or an even more narrow archetypal definition that might be reducible to zero actual people. I'm interested in the degree to which our behavior might be governed by aligning with and contesting these archetypes, including which belief attire to wear (it's perhaps an archetype alignment for Stephen Hawking and Richard Dawkins to claim to be skeptical about religion; it would probably not be an archetype alignment for Oprah to publicly wear such belief attire, even if in fact she were a crypto-skeptic).

This post may meander a bit, but I think Eliezer's post (and some of the criticisms of it) are thought-provoking and may be getting us closer to a more real-world, real-time model of how bias and belief are operating in the world we live in.

Comment by Hopefully_Anonymous2 on Belief as Attire · 2007-08-02T21:09:00.000Z · LW · GW

Silas, My opinion: you seem invested in using "muslim terrorists" for in-group/out-group construction, and I think it's coloring (biasing?) your analysis.

Michael, great criticism of an element of Eliezer's post.

Comment by Hopefully_Anonymous2 on Belief as Attire · 2007-08-02T21:04:35.000Z · LW · GW

Eliezer, Brilliant post, in my opinion. Clarifying and edifying. I'm looking forward to where you're going to go with this analysis of how bias and belief operate.

Comment by Hopefully_Anonymous2 on Professing and Cheering · 2007-08-02T11:24:10.000Z · LW · GW

Eliezer, first, really great topic. I think it will help move this blog to new and fertile ground. Secondly, in this particular case, I think Cole has a very plausible theory. If this person wanted to rise above being just one person on a panel to being the person in the key dialectical exchange with the entire room, it might have been a good strategy for her to try to bait the room by professing, to the point of mass irritation, a contrarian stance.

It would be interesting to see how she would adjust strategies in a room filled with pagan scientists. If she's completely flexible in her external presentation of self, and attention-maximizing, might she then claim to be a fundamentalist christian?

Comment by Hopefully_Anonymous2 on Bayesian Judo · 2007-08-01T00:51:51.000Z · LW · GW

Good catch, Pseudonymous. Robin, my guess is that they're crypto-skeptics, performing for their self-perceived comparative economic/social advantage. Eliezer, please don't make something that will kill us all.

Comment by Hopefully_Anonymous2 on Belief in Belief · 2007-07-30T12:59:47.000Z · LW · GW

Eliezer, Very interesting post. I'll try to respond when I've had time to read it more closely and to digest it.

Comment by Hopefully_Anonymous2 on Belief in Belief · 2007-07-30T12:59:08.000Z · LW · GW

Anna, if you're talking about real dragons, the theory that made the most intuitive sense to me (I think I read it in an E.O. Wilson writing?) is that dragons are an amalgamation of things we've been naturally selected to biologically fear: snakes and birds of prey (I think rats may have also been part of the list). Dragons don't incorporate an element that looks like a handgun or a piping-hot electric stove, probably because those are too new as threats for us to have been naturally selected to fear things with their properties.

Comment by Hopefully_Anonymous2 on Open Thread · 2007-07-24T04:00:58.000Z · LW · GW

The statement the above post refers to:

http://www.singinst.org/overview/whyworktowardthesingularity

Comment by Hopefully_Anonymous2 on Open Thread · 2007-07-24T04:00:27.000Z · LW · GW

This statement seems to me to be extraordinarily (relative to the capabilities of the presumed authors) ungrounded in empiricism. All sorts of ideas in it are framed as declarative fact, when I think they should be more accurately presented as conjecture or aspirations of unknown certainty. I'm very interested in the Singularity Institute people at overcomingbias addressing these concerns directly.

Comment by Hopefully_Anonymous2 on Open Thread · 2007-07-18T14:06:56.000Z · LW · GW

Maybe the Brazilian Appeals Court was right?

http://apnews.myway.com/article/20070718/D8QEV3703.html

I'd like to lobby for a new open thread to be created weekly.

Comment by Hopefully_Anonymous2 on Two More Things to Unlearn from School · 2007-07-14T14:21:42.000Z · LW · GW

It may be a fair question whether better outcomes result when a substantial portion of the population is taught to follow directions rather than to think critically. Sort of like how the Straussians approach religion and how the armed forces approach the chain of command.

Comment by Hopefully_Anonymous2 on Two More Things to Unlearn from School · 2007-07-12T21:55:43.000Z · LW · GW

Robin, good point. At the same time, there might be a large functional vs. optimal gap in the degree to which school is fulfilling its real purposes. Although the best way to optimize it might not be to brainstorm about how to get it closer to its stated purposes -so point well-taken on that end.

Comment by Hopefully_Anonymous2 on Two More Things to Unlearn from School · 2007-07-12T18:57:25.000Z · LW · GW

Great post, Eliezer (you've earned my approval). I think tied for worst school-nurtured habit, along with parroting things back, is the emphasis on what we think we know, as opposed to what we don't know. I think school science and history subjects would be a lot more interesting, and more accurately presented, if at least equal time were given to all the problems and areas where we don't know what's going on, and for which there are various competing theories. Unfortunately one doesn't usually get this presentation of the state of things until one is working as a research assistant in college or grad school.

Comment by Hopefully_Anonymous2 on Open Thread · 2007-07-08T16:24:06.000Z · LW · GW

Nick and Eliezer, are you still Singularitarians?

http://en.wikipedia.org/wiki/Singularitarian

The idea that people are actively working to bring about self-improving, smarter-than-humanity intelligences scares me, because I think you're blind to your own ruthless selfishness (not meant pejoratively), and thus think that something smarter than us (and therefore you) can also attempt to be kind to us, just as you perceive yourselves to be attempting to be kind to people generally.

In contrast, I don't see either of you as Gandhi types (here I'm referring to the archetypal elements of Gandhi's self-cultivated image, not his actual life in practice). It may be a hubris-derived bias that makes you think otherwise. I don't see any Singularitarians making an attempt to engage in minimal pleasurable resource use to maximize their ability to save currently existing lives. Instead I see thousands or millions of people dying daily, permanently, while leading Singularitarians enjoy a variety of life's simple pleasures.

My prescriptive solution: more selfishness, fear, and paranoia on your end. Be thankful that you're apparently (big caveat) among the smartest entities in apparent reality, and that there's apparently nothing of much greater intelligence seeking resources in your shared environment. Rather than consciously trying to bring about a singularity, I think we should race against a naturally occurring singularity to understand the various existential threats to us and to minimize them.

At the same time, I think we should try to realistically assess more mundane existential threats and threats to our personal persistence, and try to minimize these too with what seems to be the best proportionate energy and effort.

But the rationalizations of why people are trying to intentionally create a self-improving intelligence smarter than humanity seem to me to be very, very weak, and could be unnecessarily catastrophic to our existence.

Comment by Hopefully_Anonymous2 on Open Thread · 2007-07-07T23:47:36.000Z · LW · GW

This makes notions of representative democracy, at least in the USA, seem a bit silly:

http://andrewsullivan.theatlantic.com/the_daily_dish/2007/07/one-problem-wit.html

The link details evidence that most Americans have very low knowledge levels of the basics of American government.

Comment by Hopefully_Anonymous2 on Open Thread · 2007-07-04T07:46:23.000Z · LW · GW

Mark, alarmingly high? I don't see how that probability can be calculated as any higher than the existential threat of quantum flux or other simple, random end to our apparent reality, but I'd be interested in seeing the paper.

Comment by Hopefully_Anonymous2 on Open Thread · 2007-07-03T20:29:38.000Z · LW · GW

Mark, until I read Kurzweil's interesting argument that we're most likely living in a simulation (within a simulation, etc. -almost all the way down), I thought it more likely than not that there was no intelligent creator of our apparent reality. Now the stronger argument seems to me to be that our apparent reality is a simulation of some other intelligence's reality, with some abstractions/reductions of their more complex reality, just as we've already filled the Earth with various (and increasingly better) simulations of the universe and of our own apparent reality.

Comment by Hopefully_Anonymous2 on Open Thread · 2007-07-02T21:10:49.000Z · LW · GW

forgot to include the link:

http://www.commentarymagazine.com/cm/main/viewArticle.html?id=10916&page=all

Comment by Hopefully_Anonymous2 on Open Thread · 2007-07-02T21:10:22.000Z · LW · GW

I thought Cochran and Harpending's letter was the most interesting. As for Murray, I think he tends to mythologize more than give primacy to empiricism. I find a Murray vs. Patricia Williams type dialectic to be annoying, performative, and mostly about manufacturing American cultural norms (while drowning out more interesting and critical voices). So I'm glad the discussion on the topics related to human intelligence is expanding, and expanding beyond some narrow Left/Right performance.

Comment by Hopefully_Anonymous2 on Open Thread · 2007-07-02T17:44:54.000Z · LW · GW

I'm interested in responses to these lines:

...

"But I think it's a bit arbitrary that freedom can be curtailed to forestall death from a threat in one hour's time, or one day's time, or one week's time, but not in a few decades' time (as would be attempted with the compulsory medical trial participation example)."

and

"I don't think randomly drafting people into medical experiments to benefit human health/medical knowledge would just help society. I think it helps all of us individuals at risk of being so drafted, provided it's structured in such a way that our risk of disease and death ends up net lower than if human medical experimentation wasn't being done in this way.

I'd think economists might look at our humoring of various "moral intuitions"/biases as a sort of luxury spending, or waste. There also might be a cost in terms of human life, health, etc. that could legitimately be described as morally horrific.

It goes to the problem of how people often think shooting and killing 3 people is much worse than fraud, corruption, or waste that wipes out hundreds of millions of dollars of wealth, although objectively that reduction in global wealth might mean a much greater negative impact on human life and health."

...

I think it's worth looking into whether waste from eww-bias-derived moral intuitions on topics such as freedom actually results in social waste such that the net freedom for all humans is lower. For example, we all may be more likely to die as a result of failing to have randomized compulsory medical trials at this stage of human history. Thus, by not engaging in this temporary fix, are we reducing a lot more freedom 50 years from now?

The valuing of freedom now more than freedom later (if that's what this is) parallels a classic bias: preferring less money now to more money later, beyond what the time value of money would justify.
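To make the money version of the parallel concrete, here is a minimal sketch of the time-value-of-money calculation. All the numbers (a 5% annual discount rate, $100 now vs. $150 in five years) are hypothetical, chosen only to illustrate the point:

```python
def present_value(amount, annual_rate, years):
    """Discount a future amount back to today at a given annual rate."""
    return amount / (1 + annual_rate) ** years

# Hypothetical choice: $100 today vs. $150 in 5 years, discounted at 5%/year.
take_now = 100.0
take_later = present_value(150.0, 0.05, 5)  # roughly 117.53 in today's dollars

# The time value of money alone already favors waiting here; still preferring
# the $100 now is the extra, unjustified discounting the comment points at.
print(round(take_later, 2))
```

Under these assumptions the later payment is worth more even after discounting, so a preference for the immediate payment can't be explained by the time value of money alone.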

Comment by Hopefully_Anonymous2 on Open Thread · 2007-07-02T12:17:12.000Z · LW · GW

What's the point of freedom? Is it god-given? An illusion? Is it utilitarian (for example, promoting innovation and economic growth through market participation) within certain threshold levels, to the degree that it helps maximize our mutual odds of persistence? Personally, I lean at least towards the latter justification for promoting certain amounts of free agency for people in society. But I think it's a bit arbitrary that freedom can be curtailed to forestall death from a threat in one hour's time, or one day's time, or one week's time, but not in a few decades' time (as would be attempted with the compulsory medical trial participation example).

Comment by Hopefully_Anonymous2 on Open Thread · 2007-07-02T11:20:40.000Z · LW · GW

I'm sure readers other than me have pet ideas that they'd like to see exposed to community scrutiny so I hope some other readers throw out some bombs, too.

Another interest is a better version of the "Nobel Prize Sperm Bank": a version individualists could support, structured around voluntary financial incentives, and incorporating donated (or purchased) eggs, sperm, surrogate wombs, and adoptive parents. The genetic material would be selected from those most talented at solving the existential threats humanity faces (not necessarily Nobel Prize winners), with the surrogates and adoptive parents probably being less talented, but still the best at some combination of nurturing and existential-threat-solving, and each offspring would have an endowed trust that gives them financial rewards for each stage of education and professional development they choose to complete, geared towards making them an expert at solving existential threats. I think all this could be done within current laws and social norms in the West. If the singularity is coming, this is all probably unnecessary (or, more ominously, useless), but if there are barriers to AI of which we're currently unaware, this could speed up solving the challenges of our current aging/SENS problem in particular, and various other difficult existential problems of which we're currently aware or which are unknown.

I think this relates to overcoming bias, because I'm not sure of objections to doing something like this other than a social-aesthetic bias that it would be yucky, or a belief that people smart at solving the difficult challenges humanity faces arise magically.

Comment by Hopefully_Anonymous2 on Open Thread · 2007-07-02T10:46:29.000Z · LW · GW

Eliezer, I don't think the approach I'm suggesting needs to be done through government. For example, it could be done extragovernmentally, and then it would require an exercise of government power to prevent extragovernmental agents from carrying it out.

TGGP, it sounds like you're saying that if the social arrangements that would optimize your personal odds of persistence become too yucky (and I understand maximizing general odds is different than maximizing personal odds), then you'd rather die (or at least accept increased odds of death)? I can't say I relate to that point of view, at all.

Nathan, I think the reason we don't have compulsory medical trials is probably explained more by "functional not optimal" than the possibility that it doesn't pass cost-benefit. Here I'm specifically making randomized compulsory medical trials contingent on the degree that they pass cost-benefit. It seems to me to be such a naturally beneficial idea (at least on some levels) that I'm curious if utilitarians like Singer have at least done the analysis.

Comment by Hopefully_Anonymous2 on Open Thread · 2007-07-02T01:45:08.000Z · LW · GW

Anders, Thanks for the really interesting response. Perhaps I should be pitching this idea to leading utilitarians and finding out the groundwork they've already laid in this area.

I do think many "moral intuitions" fall neatly with already articulated biases, such as Eww bias.

One thing I'm not sure if you picked up on from my post. I don't think randomly drafting people into medical experiments to benefit human health/medical knowledge would just help society. I think it helps all of us individuals at risk of being so drafted, provided it's structured in such a way that our risk of disease and death ends up net lower than if human medical experimentation wasn't being done in this way.

I'd think economists might look at our humoring of various "moral intuitions"/biases as a sort of luxury spending, or waste. There also might be a cost in terms of human life, health, etc. that could legitimately be described as morally horrific.

It goes to the problem of how people often think shooting and killing 3 people is much worse than fraud, corruption, or waste that wipes out hundreds of millions of dollars of wealth, although objectively that reduction in global wealth might mean a much greater negative impact on human life and health.

Comment by Hopefully_Anonymous2 on Open Thread · 2007-07-01T20:39:10.000Z · LW · GW

Adding Adam Crowe as another person who I'd like to hear from on this topic.

Comment by Hopefully_Anonymous2 on Open Thread · 2007-07-01T20:29:56.000Z · LW · GW

I agree with Nathan, but I think one or two per week would be ideal. What do people think about moving to a system of laws and social norms focused on rationally minimizing our odds of death or harm, rather than on maintaining certain principles?

To take an example that gets extreme negative reactions: human societies don't force random sets of people involuntarily into medical experiments that could adversely impact their health, even though every individual human might have his or her odds of good health outcomes improved if we did have such a policy. Does that make us currently irrational for not pursuing such a policy? I think it does. If each individual human would, odds-wise, be better off health-wise if we engaged in compulsory drafting into medical experiments than if we didn't, then I think it's irrational for human societies not to do this. And I think this general principle applies widely to other areas of rule-making and social policy.
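The "every individual might be better off ex ante" claim is an expected-value comparison, which can be sketched with entirely made-up numbers (a 30% baseline lifetime risk of a bad health outcome, a 1% chance of being drafted, a 5% chance of harm if drafted, and a 10% reduction in baseline risk from faster medical progress — none of these figures come from any real study):

```python
def expected_bad_outcome(base_risk, trial_harm_prob, draft_prob, risk_reduction):
    """Per-person expected probability of a bad health outcome under a draft policy."""
    reduced_base = base_risk * (1 - risk_reduction)  # faster medicine lowers everyone's baseline risk
    return reduced_base + draft_prob * trial_harm_prob  # plus the small expected harm of being drafted

without_policy = 0.30  # hypothetical baseline lifetime risk, no draft policy
with_policy = expected_bad_outcome(
    base_risk=0.30,        # same baseline
    trial_harm_prob=0.05,  # chance a drafted subject is harmed
    draft_prob=0.01,       # chance any given person is drafted
    risk_reduction=0.10,   # medical progress cuts baseline risk by 10%
)

# 0.30 * 0.9 + 0.01 * 0.05 = 0.2705, which is below 0.30: under these assumed
# numbers, every individual's ex-ante odds improve despite the risk of being drafted.
print(with_policy)
```

Whether the claim holds in reality turns entirely on whether the real-world analogues of these parameters line up this way, which is the cost-benefit question the comment raises.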

Is any expert in the fields of applied ethics and social policy studying this? Or has anyone done so in the past (no cheap throwaway lines about the Nazis or Tuskegee, please)? Directions to links and publications are welcome.

I'm especially interested in responses from Anders Sandberg and TGGP. Contributors are welcome to respond in this thread on this topic anonymously for obvious reasons.

Comment by Hopefully_Anonymous2 on Risk-Free Bonds Aren't · 2007-06-28T18:11:23.000Z · LW · GW

Zubon, I think that was much more true in 1950 (when I think the U.S. was 50% of the world economy) than it is today, post-Cold War, when the U.S. might be under 20% of the world economy, and when market economics and liberal government are widely appreciated. Thankfully, world economic product has heavily diversified away from the United States. With regions like the EU and East Asia, I think even a particularly disastrous U.S. collapse wouldn't take the rest of the world down with it.

Comment by Hopefully_Anonymous2 on Correspondence Bias · 2007-06-27T23:39:45.000Z · LW · GW

Michael, I think I understand what Nick and Matthew are saying, but if I don't, I hope they or you jump in with an explanation denuded of barrier aesthetics and hide-the-ball. I think they're claiming something like: oneself is always changing, or it's arbitrary where one's self ends and other phenomena in apparent reality begin, or any concept of self becomes absurdly messy under sustained scrutiny. That's all fine and dandy as far as analyses and descriptions go, but I'm a bit skeptical that they're right, since as best I can tell the analysis has been done by a couple of people with three-pound primate brains in a rather enormous and complex apparent reality. If they want to end their lives tonight and bequeath all their personal wealth to me (I'll come out of anonymity for that), I'll accept that as their decision, and give it a good college try to have their "selves" live on through a "shared awareness" that exists between my ears. But as for me, I'll still be trying to MMPOOP, rather conservatively, in something closer to its present form of organization. I understand my odds of success may be vanishingly low, but I'm happy to collaborate with similarly inclined folks on this blog or elsewhere.

Comment by Hopefully_Anonymous2 on Correspondence Bias · 2007-06-27T03:09:38.000Z · LW · GW

Matthew, well, I checked out the link on Ouroboros and it didn't spark any great epiphany or change my mind about wanting to MMPOOP first and foremost. That doesn't make me opposed to other people being altruistic, but I do think that goal should be subordinated to MMPOOP. However, I'm willing to compromise on policy -if that's what's necessary to ... MMPOOP.

Comment by Hopefully_Anonymous2 on Correspondence Bias · 2007-06-27T03:05:21.000Z · LW · GW

Matthew, I'm not sure I completely understand your last statement, but it hasn't altered my belief "that I enjoy (apparently) existing as a subjective conscious entity, and I want to persist existing as a subjective conscious entity -forever, and in a real-time sort of way." I won't object if you decide to end your life and donate your current possessions and wealth to the charitable organization of your choice (UNICEF, Gates Foundation, Soros Foundation, or something else). But if you decide to persist as an interactive personality in the world with me, it's going to seem to me like you're an egoist yourself, and that you're just not being as transparent about it as I am (although admittedly I would only be this transparent about it anonymously, because of the -irrational in my opinion- social costs that many people seem to want to burden transparent egoists with).

I'll check out your link but a more detailed explanation from you of that last sentence would probably be welcome, too.

ps. I think there is some irony in naming people as being notable for having ceased to identify with the self.

Comment by Hopefully_Anonymous2 on Are Your Enemies Innately Evil? · 2007-06-26T23:01:56.000Z · LW · GW

An interesting (and in my opinion daring) point, Eliezer, although I'm not sure if it's true or not, because I'm not sure about the degree to which genetics, etc., plays a role in creating "evil mutants". After all, people who commit 9/11-type acts ARE rare. The 9/11 participants, in my understanding, included people with master's degrees and people with long periods of exposure to the West, who even enjoyed Western comforts immediately prior to their act. I'm not sure if they're representative of "muslim males" as much as they're representative of people who belong to death cults. Just because they're widely admired in some parts of the world doesn't mean that they'd have many imitators. It defies most forms of "selfish gene" logic to kill oneself prior to procreating, particularly if one is a young healthy male. I do think it's possible that the actual 9/11 participants were deviant in all sorts of ways, rather than representative of people who grow up culturally non-western and muslim rather than culturally western (muslim or not). However, I think you still make great points about the not-always-utilitarian human bias of picking a side and then supporting all of its arguments, rather than focusing on what mix of policy is actually best.

Comment by Hopefully_Anonymous2 on Correspondence Bias · 2007-06-26T18:52:10.000Z · LW · GW

Nick, this is great, we have an interesting agreement. :) We may want to discuss this by email so we don't take over the thread, although I think it would be great if overcomingbias incorporated regular open threads and a sister message board. I don't care whether selfishness is more rationally justifiable than altruism or not. In fact, I'm not even sure what that means, because the first principles behind that statement don't seem clear to me, unless your point is that all first principles are arbitrary. I look at it from the perspective that I enjoy (apparently) existing as a subjective conscious entity, and I want to persist existing as a subjective conscious entity -forever, and in a real-time sort of way. I think that defines me as an egoist (a classic egoist sentence in itself?). As a consequentialist, altruists only bother me to the extent that they may adversely impact my odds of persistence by engaging in their altruistic behavior, more rationally justifiable or not. To the extent that they positively impact -or even better, optimize- my odds of persistence, they're a phenomenon that I want to encourage. You live in a universe with me in it, Nick. And you seem to me to be a bright person. So, given that you seem to want us both to do what's most rationally justifiable, and I want us to do what will maximize my personal odds of persistence, I'm hoping there's some common ground we can meet on, one that will in the process MMPOOP (maximize my personal odds of persistence) -please pardon the unsavory acronym.

Comment by Hopefully_Anonymous2 on Correspondence Bias · 2007-06-26T18:02:54.000Z · LW · GW

Nick, I don't think we should all intrinsically care about saving the world. I think you, me, and whoever would socially contract with us and could add value should care about saving ourselves. Since we can't currently survive without the world (the Earth, Sun, and moon in their current general states) we need to conserve it to the degree that we need it to survive. Going beyond that in my opinion is bias, arbitrary aesthetics, irrational, or some combination of the three, and could problematically interfere with our mutual persistence.

Comment by Hopefully_Anonymous2 on Correspondence Bias · 2007-06-26T15:12:46.000Z · LW · GW

Matthew, I agree. The flip side of Hanson's recent post on freethinkers is that we, as inhabitants of a system with undiscriminating freethinkers in it, would be rational not to reject their innovative good ideas simply because they're paired with a bunch of aesthetically off-putting contrarian ideas. I'm positing Kevembuangga to be such a freethinker in relation to many overcomingbias contributors.

Comment by Hopefully_Anonymous2 on Think Like Reality · 2007-06-26T09:51:44.000Z · LW · GW

I agree with James Somers. Best post on this blog I've read so far. Best short writing that I've read in a while anywhere, Eliezer.

Comment by Hopefully_Anonymous2 on Correspondence Bias · 2007-06-25T20:22:55.000Z · LW · GW

Eliezer, you comment, "And yet people sometimes ask me why I want to save the world." I think you have a rational reason to save the world: you and I both live here on planet Earth. If the two of us could persist without a saved, habitable Earth, then I do think it would become, to a degree, more disposable. But we seem to be a bit far from that point at present.