Comments

Comment by denis_bider on Lawrence Watt-Evans's Fiction · 2009-04-27T16:46:06.000Z · LW · GW

Well. I finally got around to reading The Unwilling Warlord, and I must say that, despite the world of Ethshar being mildly interesting, the book is a disappointment. It builds up nicely in the first 2/3 of the book, but in the last 1/3, when you expect it to unfold and flourish in some interesting, surprising, revealing manner, Watt-Evans instead decides to pursue the lamest, most boring plot possible, all the while insulting the reader's intelligence.

For the last 1/3 of the book, Watt-Evans attempts to make the eventual reasons for Vond's undoing a "mystery". He suggests that Sterren knows the answer, but the reader is not told what it is. When the end finally arrives, it is a disappointing anti-climax, as Watt-Evans chooses the most uneventful outcome possible, one that has been blatantly obvious all along.

He employs an exceedingly lame plot device where Vond is so stupid he just doesn't see it coming. The author neither takes the opportunity to explain what the Calling is, nor does he have Sterren take Vond down in a more interesting manner, such as by going to the Towers of Lumeth and turning them off, or something.

Yes, the writing has some of the positive traits Eliezer described, but overall it's much lamer and more amateurish than I expected. Given the recommendation, I would have expected this to be much better fiction than it turns out to be.

Comment by denis_bider on Epilogue: Atonement (8/8) · 2009-02-12T19:54:00.000Z · LW · GW

Neh. Eliezer, I'm kind of disappointed by how you write the tragic ending ("saving" humans) as if it's the happy one, and the happy ending (civilization melting pot) as if it's the tragic one. I'm not sure what to make of that.

Do you really, actually believe that, in this fictional scenario, the human race is better off sacrificing a part of itself in order to avoid blending with the super-happies?

It just blows my mind that you can write an intriguing story like this, and yet draw that kind of conclusion.

Comment by denis_bider on Three Worlds Collide (0/8) · 2009-01-31T20:50:14.000Z · LW · GW

Excellent. I was reluctant to start reading at first, but when I did, I found it entertaining. This should be a TV series. :)

Comment by denis_bider on Complex Novelty · 2008-12-22T01:14:07.000Z · LW · GW

Eliezer: This post is an example of how all your goals and everything you're doing are affected by your existing preferences and biases.

For some reason, you see Peer's existence as described by Greg Egan as horrible. You propose an insight-driven alternative, but this seems no more convincing to me than Peer's leg carving. I think Peer's existence is totally acceptable, and might even be delightful. If Peer wires himself to get ultimate satisfaction from leg carving, then by definition, he is getting ultimate satisfaction from leg carving. There's nothing wrong with that.

More importantly - no alternative you might propose is more meaningful!

There's also nothing wrong with being a blob lying down on a pillow having a permanent fantastic orgasm.

The one argument I do have against these preoccupations is that they provide no progress towards avoiding threats to one's existence. In this respect, the most sensible preoccupation to wire yourself for would be something that involves preserving life, and other creatures' lives as well, if, as the designer, you care about that.

Once that is satisfied, the options are open. What's really wrong with leg carving?

Comment by denis_bider on Engelbart: Insufficiently Recursive · 2008-11-26T14:24:29.000Z · LW · GW

Eliezer: all these posts seem to take an awful lot of your time as well as your readers', and they seem to be providing diminishing utility. It seems to me that talking at great length about what the AI might look like, instead of working on the AI, just postpones the eventual arrival of the AI. I think you already understand what design criteria are important, and a part of your audience understands as well. It is not at all apparent that spending your time to change the minds of others (about friendliness etc) is a good investment or that it has any impact on when and whether they will change their minds.

I think your time would be better spent actually working on, or writing about, the details of the problems that need to be solved. Alternatively, instead of adding to the already enormous cumulative volume of your posts, perhaps you might try writing something clearer and shorter.

But just piling more on top of what's already been written doesn't seem like it will have an influence.

Comment by denis_bider on Rationality Quotes 15 · 2008-09-06T20:18:12.000Z · LW · GW

I stumbled over the same quote. What "gift"? From whom? What "responsibility"? And just how is being "lucky" at odds with being "superior"?

To see the nonsense, let me paraphrase:

"Because giftedness is not to be talked about, no one tells human children explicitly, forcefully and repeatedly that their intellectual talent is a gift. That they are not superior animals, but lucky ones. That the gift brings with it obligations to other animals on Earth to be worthy of it."

The few people who honestly believe that are called a lunatic fringe. And yet, it is the same statement as Murray's, merely in a wider context.

Comment by denis_bider on The Truly Iterated Prisoner's Dilemma · 2008-09-04T18:58:01.000Z · LW · GW

What Kevin Dick said.

The benefit to each player from mutual cooperation in a majority of the rounds is much greater than the benefit from mutual defection in all rounds. Therefore it makes sense for both players to invest at the beginning, and cooperate, in order to establish each other's trustworthiness.

Tit-for-tat seems like it might be a good strategy in the very early rounds, but as the game goes on, the best reaction to a defection might become two defections in response; and in the last rounds, when the other party defects, the best response might be to defect all the way to the end.
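
For concreteness, here is a minimal sketch of such an escalating strategy. The payoff matrix, the 20-round game, and the two-round endgame cutoff are all illustrative assumptions of mine, not anything from Eliezer's post:

    # Sketch of the escalating strategy described above: cooperate by default,
    # punish each defection for a number of rounds equal to the opponent's
    # total defection count, and defect unconditionally in the final rounds.
    PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def escalating_tft(their_history, rounds_total):
        """Choose this round's move from the opponent's past moves."""
        played = len(their_history)
        if rounds_total - played <= 2:      # endgame: no future to protect
            return "D"
        defections = their_history.count("D")
        if defections == 0:                 # no offence yet: plain cooperation
            return "C"
        last_d = max(i for i, m in enumerate(their_history) if m == "D")
        rounds_since = played - 1 - last_d  # rounds since their last defection
        return "D" if rounds_since < defections else "C"

    # Self-play for 20 rounds: cooperation until the endgame, then defection.
    h1, h2 = [], []
    for _ in range(20):
        m1, m2 = escalating_tft(h2, 20), escalating_tft(h1, 20)
        h1.append(m1)
        h2.append(m2)
    print("".join(h1), sum(PAYOFFS[(a, b)] for a, b in zip(h1, h2)))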

Comment by denis_bider on The True Prisoner's Dilemma · 2008-09-03T22:33:44.000Z · LW · GW

An excellent way to pose the problem.

Obviously, if you know that the other party cares nothing about your outcome, then you know that they're more likely to defect.

And if you know that the other party knows that you care nothing about their outcome, then it's even more likely that they'll defect.

Since the way you posed the problem precludes an iteration of this dilemma, it follows that we must defect.

Comment by denis_bider on Dreams of Friendliness · 2008-09-02T18:02:55.000Z · LW · GW

Eliezer: what I proposed is not a superintelligence, it's a tool. Intelligence is composed of multiple factors, and what I'm proposing is stripping away the active, dynamic, live factor - the factor that has any motivations at all - and leaving just the computational part; that is, leaving the part which can navigate vast networks of data and help the user make sense of them, reaching conclusions that he would not be able to reach on his own. Effectively, what I'm proposing is an intelligence tool that can be used as a supplement to the brains of its users.

How is that different from Google, or data mining? It isn't. It's conceptually the same thing, just with better algorithms. Algorithms don't care how they're used.

This bit of technology is something that will have to be developed to put together the first iteration of an AI anyway. By definition, this "making sense of things" technology needs to be strong enough to allow a user to improve the technology itself; that is what an iterative, self-improving AI would be doing. So why let the AI improve itself - a process which, more likely than not, will run amok despite the designers' efforts and best intentions? Why not use the same technology that the AI would use to improve itself, to improve _your_self? Indeed, it seems ridiculous not to do so.

To build an AI, you need all the same skills that you would need to improve yourself. So why create an external entity, when you can be that entity?

Comment by denis_bider on Rationality Quotes 13 · 2008-09-02T17:44:48.000Z · LW · GW

Looks like the soldier quote is gonna be big in the comments. I think it's out of place too, and unlike most other quotes that Eliezer comes up with, it doesn't make a lot of sense. It's in the same vein as: "It is the scalpel, not the surgeon or the nurse, that fixed your wounds!"

Soldiers are tools wielded by the structure in power, and it is the structure in power that determines whether the soldiers are going to protect your rights or take them away.

Perhaps, "The One" might argue, it is a different kind of person who becomes a soldier in an army that "protects freedom" rather than an army that oppresses its countrymen. There are probably more such idealists among the soldiers in the US army, than among troops commanded by the Burmese generals.

Even so, though, the idealist soldier does what he's commanded to do, and whether what he does actually protects freedom is largely determined by the structure of power, not by the idealist soldier. He remains a tool, a hammer wielded by someone else's will.

Comment by denis_bider on Dreams of Friendliness · 2008-09-02T00:48:52.000Z · LW · GW

Kaj makes the efficiency argument in favor of full-fledged AI, but what good is efficiency when you have fully surrendered your power?

What good is being the president of a corporation any more, when you've just pressed a button that makes a full-fledged AI run it?

Forget any leadership role in a situation where an AI comes to life - except in the case that it is completely uninterested in us and manages to depart into outer space without totally destroying us in the process.

Comment by denis_bider on Dreams of Friendliness · 2008-09-02T00:39:02.000Z · LW · GW

Why build an AI at all?

That is, why build a self-optimizing process?

Why not build a process that accumulates data and helps us find relationships and answers that we would not have found ourselves? And if we want to use that same process to improve it, why not let us do that ourselves?

Why be locked out of the optimization loop, and then inevitably become subjects of a God, when we can make ourselves a critical component in that loop, and thus 'be' gods?

I find it perplexing why anyone would ever want to build an automatic self-optimizing AI and switch it to "on". No matter how well you planned things out, no matter how sure you are of yourself, by turning the thing on, you are basically relinquishing control over your future to... whatever genie it is that pops out.

Why would anyone want to do that?

Comment by denis_bider on Is Morality Given? · 2008-07-06T21:09:53.000Z · LW · GW

My earlier comment is not meant to imply that I think "maximization of human happiness" is the most preferred goal.

An obvious one, yes. But a faulty one: "human" is a severely underspecified term.

In fact, I think that putting in place a One True Global Goal would require ultimate knowledge about the nature of being, to which we do not have access currently.

Possibly, the best we can do is come up with a plausible global goal that suits us for the medium run, while we try to find out more.

That is, after all, what we have always done as human beings.

Comment by denis_bider on Is Morality Given? · 2008-07-06T20:58:13.000Z · LW · GW

Eliezer: You have perhaps already considered this, but I think it would be helpful to learn some lessons from E-Prime when discussing this topic. E-Prime is a subset of English that bans most varieties of the verb "to be".

I find sentences like "murder is wrong" particularly underspecified and confusing. Just what, exactly, is meant by "is", and "wrong"? It seems like agreeing on a definition for "murder" is the easy part.

It seems the ultimate confusion here is that we are talking about instrumental values (should I open the car door?) before agreeing on terminal values (am I going to the store?).

If we could agree on some well-defined goal, e.g. maximization of human happiness, we could much more easily theorize on whether a particular case of murder would benefit or harm that goal.

It is, however, much harder to talk about murders in general, and infeasible to discuss this unless we have agreed on a terminal value to work for.

Comment by denis_bider on Is Morality Preference? · 2008-07-06T20:18:04.000Z · LW · GW

frelkins: Should I apologize, then, for not yet having developed sufficient wit to provide pleasure with style to those readers who are not pleased by the thought?

Cynicism is warranted to the extent that it leads to a realistic assessment and a predictive model of the world.

Cynicism is exaggerated when it produces an unrealistic, usually too pessimistic, model of the world.

But to the extent that cynicism is a negative evaluation of "what is", I am not being a cynic on this topic.

I am not saying, bitterly, how sad it is that most people are really motivated by their selfishness, and how sad the world is because of this, etc.

What I am saying is that selfishness is okay. That recognizing your selfishness is the healthiest state. I am not saying that people who are selfish are corrupting the world. I am saying that people who are self-righteous are.

I understand people who want to reshape the world because they want it to be different, and are honest about this selfish preference and endeavor. I respect that.

What I don't respect is people who are self-righteous in thinking that they know how to reshape the world to make other people happy, and do not see how self-anchored their motivation is. They are trying to do the same thing as those people who want to reshape the world selfishly. But the self-righteous ones, they sell what they are doing as being "higher on a moral ladder", because, obviously, they know what is good for everyone.

I think that sort of behavior is just pompous, arrogant, and offensive.

Be honest. Do things because of you. Don't do things because of others. Then, we can all come together and agree sensibly on how to act so as not to step on each other's toes.

But don't be running around "healing" the world, pretending like you're doing it a favor.

Comment by denis_bider on Is Morality Preference? · 2008-07-06T19:45:40.000Z · LW · GW

Phillip Huggan - let me just say that I think you are an arrogant creature that does much less good for the world than he thinks. The morality you so highly praise only appears to provide you with a reason to smugly think of yourself as "higher developed" than others. Its benefit to you, and its selfish motivation, are plainly clear.

Comment by denis_bider on Is Morality Preference? · 2008-07-06T06:15:26.000Z · LW · GW

Phillip Huggan: "Denis, are you claiming there is no way to commit acts that make others happy?"

Why the obsession with making other people happy?

Phillip Huggan: "Or are you claiming such an act is always out of self-interest?"

Such acts just are. Stuff just is. Real reasons are often unknowable; and if known, would be trivial, technical, mundane.

In general, I wouldn't say self-interest. It is not in your self interest to cut off your penis and eat it, for example. But some people desire it and act on it.

Desire. Not necessarily logical. Does not necessarily make sense. But drives people's actions.

Reasons for desire? Unknowable.

And if known?

Trivial. Technical. Mundane.

Phillip Huggan: "The former position is absurd, the latter runs into the problem that people who jump on grenades, die."

I can write a program that will erase itself.

Doesn't mean that there's an overarching morality of what programs should and should not do.

People who jump on grenades do so due to an impulse. That impulse comes from cached emotions and thoughts. You prime yourself that it's romantic to jump on a grenade, you jump on a grenade. Poof.

Stuff is. Fitting stuff that happens into a moral framework? A hopeless endeavor for misguided individuals seeking to fulfil the romantic notion that things should make sense.

Phillip Huggan: "A middle class western individual, all else equal, is morally better by donating conspicious consumption income to charity, than by exercising the Libertarian market behaviour of buying luxury goods."

Give me a break. You gonna contribute to a charity to take care of all the squid in the ocean? The only justification not to is if you invent an excuse why they are not worth caring about. And if not the squid, how about gorillas, then? Baboons, and chimpanzees?

If we're going to intervene because a child in Africa is dying of malaria or hunger - both thoroughly natural causes of death - then should we not also intervene when a lion kills an antelope, or a tribe of chimpanzees is slaughtered by their neighbors?

You have to draw a line somewhere, or else your efforts are hopeless. Most people draw the line at homo sapiens. I say that line is arbitrary. I draw it where it makes sense. With people in my environment.

Comment by denis_bider on Is Morality Preference? · 2008-07-06T01:49:30.000Z · LW · GW

Thanks for the link to The People's Romance!

Comment by denis_bider on Is Morality Preference? · 2008-07-05T19:24:18.000Z · LW · GW

Disagreeing with Mr. Huggan, I'd say Obert is the one without a clue.

Obert seems to be trying to find some external justification for his wants, as if it's not sufficient that they are his wants; or as if his wants depend on there being an external justification, and his mental world would collapse if he were to acknowledge that there isn't an external justification.

I would compare morality to patriotism in the sense of the Onion article that Robin Hanson recently linked to. Much like patriotism, morality is something adopted by people who like to believe in Great Guiding Ideas. Their intellect drives them to recognize that the idea of a god is ridiculous, but the religious need remains, so they try to replace it with a principle. A self-generated principle which they try to think is independent and universal and not self-generated at all. They create their own illusion as a means of providing purpose for their existence.

Comment by denis_bider on What Would You Do Without Morality? · 2008-06-30T07:20:00.000Z · LW · GW

Unknown: "For all those who have said that morality makes no difference to them, I have another question: if you had the ring of Gyges (a ring of invisibility) would that make any difference to your behavior?"

What sort of stupid question is this? :-) But of course! If I gave you a billion dollars, would it make any difference to your behavior? :-)

Comment by denis_bider on What Would You Do Without Morality? · 2008-06-30T06:55:00.000Z · LW · GW

mtraven: "Psychopathy is not "not believing in morality": it entails certain kinds of behaviors, which naive analyses of attribute to "lack of morality", but which I would argue are a result of aberrant preferences that manifest as aberrant behavior and can be explained without recourse to the concept of morality."

Exactly. Logically, I can agree entirely with Marquis de Sade, and yet when reading Juliette, my stomach turns around about page 300, and I just can't read any more about the raping and the burning and the torture.

It is one thing to say that we are all just competing for our desires to be realized, and that no one's desires are above anyone else's. But it is another thing to actually desire the same things as the moralists, or the same things as the psychos.

I don't have to invent artificial reasons why psychos are somehow morally inferior to justify my dislike of, and disagreement with, them.

Comment by denis_bider on What Would You Do Without Morality? · 2008-06-30T06:46:00.000Z · LW · GW

Not having read the other comments, I'd say Eliezer is being tedious.

I'd do whatever the hell I want, which is what I am already doing.

Comment by denis_bider on Perpetual Motion Beliefs · 2008-02-27T23:32:55.000Z · LW · GW

Interesting stuff about the preservation of phase space volume, though. I appreciate it, I previously knew nothing about that.

Comment by denis_bider on Perpetual Motion Beliefs · 2008-02-27T23:31:07.000Z · LW · GW

Reading today's fare is a bit like eating unflavored oatmeal. :-)

It seems to me that the person who can read this and understand it, already knows it.

But the person who does not know it, cannot understand it and will be frustrated by reading it.

I'm not sure what your intention is with the whole series of posts, but if you'd like to enlighten the muggles, the trick is to explain it in a concise, striking, unusual, easily understood, entertaining manner.

Of course, that takes genius. :-)

But otherwise you are writing primarily for people who already know it.

In yet other words: some of your posts, I will forward to my wife. Others, I won't. This one is one of the latter.

Comment by denis_bider on Leave a Line of Retreat · 2008-02-26T03:55:17.000Z · LW · GW

I should however note that one of the last mathy posts (Mutual Information) struck a chord with me and caused an "Aha!" moment for which I am grateful.

Specifically, it was this:

I digress here to remark that the symmetry of the expression for the mutual information shows that Y must tell us as much about Z, on average, as Z tells us about Y. I leave it as an exercise to the reader to reconcile this with anything they were taught in logic class about how, if all ravens are black, being allowed to reason Raven(x)->Black(x) doesn't mean you're allowed to reason Black(x)->Raven(x). How different seem the symmetrical probability flows of the Bayesian, from the sharp lurches of logic - even though the latter is just a degenerate case of the former.

Insightful!
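
Spelling out why the symmetry holds (a standard identity, not part of the quoted post):

    I(Y;Z) = \sum_{y,z} p(y,z) \log \frac{p(y,z)}{p(y)\,p(z)}
           = H(Y) - H(Y|Z)
           = H(Z) - H(Z|Y)
           = I(Z;Y)

The summand is unchanged when y and z trade places, so Y tells us exactly as much about Z, on average, as Z tells us about Y, even though the logical rule Raven(x)->Black(x) has no such symmetric counterpart.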

Comment by denis_bider on Leave a Line of Retreat · 2008-02-26T03:47:28.000Z · LW · GW

I think you should go with the advice and post something fun. Especially so if you have "much important material" to cover in following months. No need for a big hurry to lose readers. ;)

Comment by denis_bider on Circular Altruism · 2008-01-22T23:49:38.000Z · LW · GW

Eliezer - the way question #1 is phrased, it is basically a choice between the following:

  1. Be perceived as a hero, with certainty.

  2. Be perceived as a hero with 90% probability, and continue not to be noticed with 10% probability.

This choice will be easy for most people. The expected 50 extra deaths are a reasonable sacrifice for the certainty of being perceived as a hero.
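
(Working the numbers, assuming the post's framing of saving 400 lives with certainty versus a 90% chance of saving 500: the gamble saves 0.9 × 500 = 450 lives in expectation, against 400 for the certain option, which is where the 50 expected extra deaths come from.)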

The way question #2 is phrased, it is similarly a choice between the following:

  1. Be perceived as a villain, with certainty.

  2. Not be noticed with 90% probability, and be perceived as a villain with 10% probability.

Again, the choice is obvious. Choose #2 to avoid being perceived as a villain.

If you argue that the above interpretations are then not altruistic, I think the "Repugnant Conclusion" link shows how futile it is to try to make actual "altruistic decisions".

Comment by denis_bider on The Allais Paradox · 2008-01-20T09:17:00.000Z · LW · GW

Not sure if anyone pointed this out, but in a situation where you don't trust the organizer, the proper execution of 1A is a lot easier to verify than the proper execution of 1B, 2A and 2B.

1A minimizes your risk of being fooled by some hidden cleverness or violation of the contract. In 1B, 2A and 2B, if you lose, you have to verify that the random number generator is truly random. This can be extremely costly.

In option 1A, verification consists of checking your bank account and seeing that you gained $24,000. Straightforward and simple. Hardly any risk of being deceived.

Comment by denis_bider on Absolute Authority · 2008-01-08T04:45:04.000Z · LW · GW

For all your talk about The One, I'm going to start to call you Morpheus.

Comment by denis_bider on A Failed Just-So Story · 2008-01-05T20:42:07.000Z · LW · GW

Eliezer - who is this "the one" you keep talking about? Do you mean Neo? ;)

Comment by denis_bider on The Two-Party Swindle · 2008-01-02T01:09:44.000Z · LW · GW

Joseph - well, people like you aren't the ones who need to be accompanied to the stadium by the police.

Comment by denis_bider on The Two-Party Swindle · 2008-01-01T18:04:17.000Z · LW · GW

I agree with Eliezer that it seems to be the in-group/out-group dynamic that drives the popularity of sports. The popularity in turn drives the ads, the ads provide a revenue opportunity, and the revenue opportunity drives the high salaries of popular players.

The dynamic seems ridiculous to those of us who find the in-group/out-group dynamic silly. Then again, those of us who find it silly, and so do not contribute to the salaries of football players, still support the high salaries of superstars in other roles. Jerry Seinfeld and Ray Romano probably made more money than most football players by delivering a one-to-many service based on humor rather than on identification with a group. Maybe someone who doesn't understand their humor finds it ridiculous that these guys make so much money pandering to an audience inept enough to enjoy their unfunniness?

And yet the audience laughs, enjoys it, and pays for a service they perceive as well performed.

I wonder whether people, at some level, might be aware of the silliness of their group identification, but enjoy it nevertheless, just like most of us enjoy sex even after taking actions to prevent it from leading to reproduction, which is its whole evolutionary point.

If that is the case, then those of us who cannot bring ourselves to identify with a group, might be handicapped in a similar sense as a person who doesn't see the humor in comedy, or a person who derives no joy from sex.

Politics, meanwhile, is tough. I think it's more productive to provide constructive arguments why a certain policy is sensible, and try to spread support for that policy, than to try a meta-analysis of why existing policies are ineffective, and trying to get people to understand that.

Nature abhors a vacuum: if you have a room filled with ineffective thoughts, concepts, ideas, and you try to take them out, this will create a vacuum which will cause more ineffective ideas to flood in through the cracks in the walls. But if, instead, you fill the room with effective ideas, they will displace the ineffective ones.

Telling people what does not work is much less useful than showing them what does.

Comment by denis_bider on Cultish Countercultishness · 2007-12-30T02:48:45.000Z · LW · GW

Eli: great posts, but you are continuously abusing "the one", "the one", "the one". That's not how the word "one" is used for the meaning you intend. Proper usage is "one", without "the".

Furthermore, when the pronoun needs to be repeated, the nicer and more traditional usage is "one ... one's ... to one", and not "one ... their ... to them".

See Usage Note here.

Comment by denis_bider on The Modesty Argument · 2007-12-27T05:25:08.000Z · LW · GW

In the Verizon case, George can apply the modesty argument and still come up with the conclusion that he is almost certainly right.

He needs to take into account two things: (1) what other people besides Verizon think about the distinction between .002 dollars and .002 cents, and (2) what is the likelihood that Verizon would admit the mistake even if they know there is one.
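
(To spell out the arithmetic at issue: a quote of 0.002 cents per kilobyte is $0.00002 per kilobyte, one hundred times less than a charge of 0.002 dollars per kilobyte.)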

Admitting the mistake and refunding one customer might well have the consequence of having to refund tens of thousands of customers and losing millions of dollars. Even if that's the upstanding and right thing to do for the company, each individual customer support rep will fear that admitting the facts will lead to them being fired. People whose jobs are at stake will do their best to bluff a belief that .002 == 0.00002, even if they don't actually believe that's the case.

Comment by denis_bider on When None Dare Urge Restraint · 2007-12-08T23:53:17.000Z · LW · GW

Despite your post being entirely correct, if for a moment we ignore the welfare of humanity and consider the welfare of the United States alone, there is a good chance that this irrational overreaction will be remembered, and that it will serve as a deterrent to any aspiring attackers for a hundred years to come.

Sometimes irrational wrath pays, especially if you can inflict pain much more effectively than you need to endure it.

The cost to humanity is probably dominated by some 1,000,000 deaths in Iraq, but the cost to the U.S., at least in terms of deaths, is comparatively small. The Iraq deaths are an externality.

Comment by denis_bider on Evaporative Cooling of Group Beliefs · 2007-12-08T00:49:00.000Z · LW · GW

Unquestionably, a lot more gets done by groups of people who are very much alike. Differences of opinion only tend to brake things.

The question is not whether you need people who are different in order to brake the group. The question is whether you're in the right group to begin with. As per Kuhn, things will get done faster and better if members of the group share a lot of commonalities.

If you're in the right group, excluding dissenters will allow you to progress faster. But if you're in the wrong group, then you're going to be making progress towards the wrong things.

Comment by denis_bider on Politics is the Mind-Killer · 2007-12-04T18:23:22.000Z · LW · GW

Arguing about politics is a way of helping people. If it makes sense that "a bad argument gets a counterargument, not a bullet," then it makes sense that frictions among people's political beliefs should be cooled by allowing everyone to state their case. Not necessarily on this site, but as a general matter, I don't think that talking about politics is either a mind-killer or a waste of time. For me personally it's a motivator to understand more about the facts, so that I can present arguments; to understand more about other people, so that I know why they disagree; and to understand more about myself, so that I can make sure that my convictions are solid. I actually believe that trying to find a way to influence politics to become more sensible is the most I can do to make a positive difference in the lives of other people.

Comment by denis_bider on Superhero Bias · 2007-12-03T06:53:16.000Z · LW · GW

Here's some heroism. :)

Comment by denis_bider on Stranger Than History · 2007-11-20T21:25:14.000Z · LW · GW

dearieme: "Given that WWII showed that race could be dynamite, it's surely astonishing that so many rich countries have permitted mass immigration by people who are not only of different race, but often of different religion. Even more astonishing that they've allowed some groups to keep immigrating even after the early arrivers from those groups have proved to be failures, economically or socially. Did anyone predict that 60 years ago?"

I thought that the excessive tolerance and the aversion to distinguishing groups of people based on factual differences are traits that developed as a result of oversensitization to the events of WWII. Hitler's people engaged in cruel and unjust discrimination, so all discrimination is now cruel and unjust. Hitler's people (and others before them) engaged in cruel and gruesome eugenics experiments, so all eugenics is cruel and gruesome.

If Hitler did cruel experiments using pasta, pasta would now be known to be bad for everyone.

Comment by denis_bider on The Logical Fallacy of Generalization from Fictional Evidence · 2007-11-20T19:48:16.000Z · LW · GW

Louis: "The more recent example is the TV series BattleStar Galactica. Of course it's unrealistic and biased, but it changed my views on the issues of AGI's rights. Can a robot be destroyed without a proper trial? Is it OK to torture it? to rape it? What about marrying one? or having children with it (or should I type 'her')?"

See this: http://denisbider.blogspot.com/2007/11/weak-versus-strong-law-of-strongest_15.html

You are confused because you misinterpret humanity's traditional behavior towards other apparently sentient entities in the first place. Humanity's traditional (and game-theoretically correct) behavior is to (1) be allies with creatures who can hurt us, (2) go to war with creatures who can hurt us and don't want to be our allies, (3) plunder and exploit creatures that cannot hurt us, regardless of how peaceful they are or how they feel towards us.

This remains true historically whether we are talking about other people, about other nations, or about other animals. There's no reason why it shouldn't be true for robots. We will ally with and "respect" robots that can hurt us; we will go to war with robots that can hurt us but do not want to be our allies; and we will abuse, mistreat and disrespect any creature that does not have the capacity to hurt us.

Conversely, if the robots reach or exceed human capacities, they will do the same. Whoever is the top animal will be the new "human". That will be the new "humanity", where there will be a reign of "law" among entities that have similar capacities. Entities with lower capacities, such as humans that continue to be mere humans, will be relegated to about the same level as capuchin monkeys today. Some will be left "in the wild" to do as they please, some will be used in experiments, some will be hunted, some will be eaten, and so forth.

There is no morality. It is an illusion. There will be no morality in the future. But the ruthlessness of game theory will continue to hold.

Comment by denis_bider on Feeling Rational · 2007-11-18T01:11:30.000Z · LW · GW

Think about it this way. The past 28 years of your life are already dead, as history is not something living. The future years of your life, meanwhile, have yet to come into existence. You have already lost all of your past; and as soon as you "gain" your future, you already lose it. All you are is but an ever-changing state; the "now" that you inhabit is but an instruction pointer in the registers of a machine that is continually executing instructions.

What do you care if the machine stops processing at any point? Do you think you will notice? Does a program's thread of execution notice when the OS swaps it out and resumes execution on another thread? Does a thread of execution notice if it is never resumed?

I'm not claiming that we are but software running on the hardware of the universe, but this is what you seem to posit; and if this is so, then death is no more terrible than it's terrible that the sky appears blue, or that the grasses appear green, or that the Sun appears yellow.

And yet, you seem to believe that death is somehow "horrible", so you are sad when it takes place; and you believe that other things are somehow "good", so you are happy when they happen. This seems to be at odds with the things-just-are view that you otherwise represent, and it tells me that these feelings of yours are based on something more fundamental, something more axiomatic than reason. Reason is a consistency vehicle; but these feelings of yours simply are. Reason may help provoke them, but they exist independently of reason. And indeed, such feelings are known to distort the reasoning process substantially, causing people to delay validation of critical basic assumptions and thus to reach, and stick with, invalid conclusions even though their reasoning process may be sound. Garbage in, garbage out.

This reason-distorting effect is why emotions are thought of as at odds with reason. And with good reasons. :)

Comment by denis_bider on Feeling Rational · 2007-11-18T00:54:14.000Z · LW · GW

Eliezer: It may be rational to (choose to) feel, but feelings are not rooted in reason. Reason is a consistency-based mechanism which we use to validate new information against previously validated information, or to derive new hypotheses from information we have already interned. One can reason with great skill, and one can know a great deal about the reasoning process in general, and yet one's conclusions may be false or irrelevant if one has not validated all of the basic assumptions one's reasoning ultimately depends on. But validating these most fundamental assumptions is difficult and time-consuming, and it is a task most of us do not tend to, as we instead scurry about to achieve our daily goals and objectives, which in turn we determine by applying reason to data and attitudes we have previously interned, which in turn are based ultimately upon basic premises we have never thoroughly investigated.

These are the thoughts that I get after reading your eulogy for your brother, Yehuda. I get the impression that you are too busy studying how to defeat death to stop and think about why death should be bad in the first place. Of course, to stop and think about it would mean opening yourself to the possibility that death might be acceptable after all, which in turn would threaten to annihilate your innermost motivations.