Comments

Comment by ChrisA on Fake Selfishness · 2007-11-12T09:53:26.000Z · LW · GW

Nick,

My response is: evolution! Let's say a genuinely (whatever that means) altruistic entity exists. He is then uploaded. He then observes that not all entities are fully altruistic; in other words, they will want to take resources from others. In any contest over resources this puts the altruistic entity at a disadvantage (he is spending resources helping others that he could use to defend himself). Among potentially mega-intelligent entities any weakness is serious. He realises that he will very quickly be eliminated if he doesn't fix this weakness. He either fixes the weakness (becomes selfish) or accepts his elimination. Note that uploaded entities are likely to be very paranoid: after all, when one is eliminated, a potentially immortal life is eliminated, so they should have very low discount rates. You might be a threat to me in a million years, so if I get the chance I should eliminate you now.

If your answer is that the altruistic entities will be able to use cooperation to defend themselves against the selfish ones, you must realise there is nothing to stop a genuinely selfish entity from pretending to be altruistic. And the altruistic entities will know this.

I don't think most people realise that the reason we can work as a society is that we have hardwired cooperation genes in us, and we know that. We are not altruistic by choice. Allow us to decide whether to be altruistic and the game theory becomes very different, as the toy example below illustrates.
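A minimal sketch of that shift in game theory (the payoff numbers are illustrative assumptions, not anything from the original discussion):

```python
# Toy model: once altruism is a free choice rather than hardwired,
# the interaction is a prisoner's dilemma. Payoffs are illustrative
# assumptions, written as (my payoff, their payoff).
payoffs = {
    ("altruist", "altruist"): (3, 3),  # mutual cooperation
    ("altruist", "selfish"):  (0, 5),  # I am exploited
    ("selfish",  "altruist"): (5, 0),  # I exploit
    ("selfish",  "selfish"):  (1, 1),  # mutual defection
}

for their_choice in ("altruist", "selfish"):
    best = max(("altruist", "selfish"),
               key=lambda mine: payoffs[(mine, their_choice)][0])
    print(f"If they play {their_choice!r}, my best response is {best!r}")
# Both lines print 'selfish': once the choice is unconstrained,
# selfishness dominates whatever the other entity does, which is
# why the hardwiring (and our knowledge of it) carries the weight.
```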

Comment by ChrisA on Fake Selfishness · 2007-11-10T09:52:17.000Z · LW · GW

The question Eliezer raises is the first problem any religious person has to face once he abandons the god thesis, i.e. why should I be good now? The answer, I believe, is that you cannot act contrary to your genetic nature. Our brains are wired (or have modules, in Pinker's terms) for various forms of altruism, probably for group-survival reasons. I therefore can't easily commit acts against my genetic nature, even if intellectually I can see they are in my best interests. (As Eliezer has already recognised, this is why AIs or uploaded personalities are so dangerous: they will be able to rewrite the brain code that prevents widespread selfishness. I say dangerous because, of course, the first uploaded person or AI will likely not be me, and so will be a threat to me.)

More simply, the reason I don't steal from people is not that stealing is wrong, but that my genetic programming (perhaps with an element of social conditioning) is such that I don't want to steal; or rather, that I have an active, non-intellectual aversion to stealing.

Why do I try to convince you of this point of view if I am intellectually convinced that I should be selfish? I agree with Robin: it is because I am genetically programmed to do so, probably something related to status seeking. Also, I genuinely would like to hear arguments against this point of view, in case I am wrong.

Eliezer, genetics as the source of our ethical actions means that it is unlikely we can ever develop a consistent ethical theory. If you accept this, does it not present a big problem for your attempt to create an ethical AI? Is it possible your rejection of this approach to ethics, and your attempt to prove a standalone moral system, is perhaps subconsciously driven by the impact this would have on your work?

Comment by ChrisA on Congratulations to Paris Hilton · 2007-10-20T08:31:52.000Z · LW · GW

Eliezer “This is a black hole that sucks up arguments. Beware. I'll deal with this on Overcoming Bias eventually, but I haven't done many of the preliminary posts that would be required. Meanwhile, I hope you've noticed your confusion about personal identity.”

I look forward to the posts on consciousness, and yes, I don't feel like I have a super coherent position on this. I struggle to understand how "me" is still me after I have died, my dead body has been frozen and mashed up, and I am then reconstituted some indefinite time in the future. Quarks are quarks, but a human is an emergent property of quarks, so interchangeability doesn't necessarily follow at the macro scale. (A copy of a painting is not equivalent to the original, no matter how good a copy.) This is why I don't invest in cryonics. To me there should be better continuity to qualify as transference of consciousness, though I can't be explicit about what I mean by "better".

Comment by ChrisA on Pascal's Mugging: Tiny Probabilities of Vast Utilities · 2007-10-20T07:55:09.000Z · LW · GW

Eliezer, sorry to say (because it makes me sound callous), but if someone can and is willing to create and then destroy 3^^^3 people for less than $5, then there is no value in life, and definitely no moral structure to the universe. The creation and destruction of 3^^^3 people (or more) is probably happening all the time. Therefore the AI is safe in declining the wager on purely selfish grounds.
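For reference, the naive expected-value arithmetic that generates the mugging, and which the reply above declines to follow (a sketch; $p$ and $U$ are illustrative placeholders, not anything from the original exchange):

```latex
% p: probability the mugger's claim is true (illustrative placeholder)
% U: utility assigned to 3^^^3 lives (3\uparrow\uparrow\uparrow 3 in
%    Knuth up-arrow notation)
\[
  \mathbb{E}[\text{paying}] \;=\; p \cdot U(3\uparrow\uparrow\uparrow 3) \;-\; \$5 ,
\]
% which is positive for any p > \$5 / U(3^^^3), a threshold too small
% for any evidence to distinguish from zero; hence the naive
% calculation always says "pay".
```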

Comment by ChrisA on Congratulations to Paris Hilton · 2007-10-19T10:59:04.000Z · LW · GW

“The problem with Pascal's Wager is not the large payoff but the tiny, unsupported probability”

Why is the unsupported probability a problem? As long as there is any nonzero probability, the positive expected value of the wager holds. My problem with Pascal's wager is that there are any number of equivalent bets, so why choose Pascal's bet over any of the others available? Better not to choose any, and to spend the resources on a sure bet, i.e. utility in today's life rather than a chance at a future one.
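The symmetry between the available bets can be written out explicitly (a sketch; the symbols are illustrative):

```latex
% Each rival wager B_i (Pascal's god, some other god, an anti-god
% who punishes believers, ...) has the same expected-value form:
\[
  \mathbb{E}[B_i] \;=\; p_i \cdot V_\infty \;-\; c_i .
\]
% With an unbounded payoff V_\infty, every bet with p_i > 0 looks
% positive, so the wager's own logic gives no grounds for preferring
% Pascal's bet over any of its rivals.
```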

On cryonics: while the technical nature of the process is clearly far more feasible than the existence of a god, it is not clear to me that the critical part can actually work, i.e. the transference of consciousness to a new body. The result of a successful cryonics experiment seems to me to be the creation of a very good copy of me. At least in this respect the god solution is better.

A better bet than cryonics seems to me to be quantum immortality (aka many-worlds immortality). At least the majority of people working in the relevant field reportedly believe in the MW hypothesis, so technically it is probably on par with cryonics. On this basis I should put any immortality investment into maximising the number of copies of me (with continuity of consciousness), say by sensible choices on diet, avoiding risky sports, etc. But no-one makes any money with this solution.

Comment by ChrisA on Congratulations to Paris Hilton · 2007-10-19T07:56:43.000Z · LW · GW

On cryonics: basically I understand the wager as follows. For a (reasonably) small sum I can obtain a small but finite possibility of immortality; the bet therefore has a very high expected return, so I should be prepared to make it. But this logic has, to me, the same flaw as Pascal's wager: there are many bets that appear to have similar payoffs. For instance, although I am an atheist, I cannot deny there is a small chance, probably smaller than that of cryonics, that belief in a god is correct. Could cryonics be like religion in this way, an example of exposure bias, resulting from the fact that someone has a business model called cryonics, so that this approach to immortality is given higher visibility (whether through traditional advertising or through the investors' motivation to raise its profile)?
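Putting rough numbers on the wager shows its structure (every figure below is a made-up assumption, purely for illustration):

```latex
% Illustrative assumptions only: fees C = \$5 \times 10^4,
% revival probability p = 10^{-3}, value of immortality V = \$10^{12}.
\[
  \mathbb{E}[\text{cryonics}] \;=\; pV - C
  \;=\; 10^{-3}\cdot 10^{12} - 5\times 10^{4}
  \;\approx\; \$10^{9} \;>\; 0 .
\]
% A rival bet of the same form (say, religious observance with an
% unbounded afterlife payoff) comes out "positive" by the same
% reasoning, so the expected-value form alone cannot pick between
% the available long shots.
```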

Comment by ChrisA on "Can't Say No" Spending · 2007-10-18T16:46:53.000Z · LW · GW

I suggest the common issue (between health care and foreign aid) is that of agents who benefit from the current status quo, and who provide arguments and generate memes to sustain it. I don't mean to say that the agents are generally doing this wilfully; almost no doctor believes "most health care is useless but I will continue the scam to maintain my income/status", just as no aid worker believes that what they are doing is useless. But human nature is very good at hiding motives and clothing them in altruistic-seeming morality, even from yourself: sort of the reverse of the Smithian invisible hand and the selfish baker.

Another way to look at this is through incentives. Which option has the better payoff (at the margin) for a doctor or aid worker: convincing people that a given proposed treatment is effective (generating more income for the agent), or creating more effective and efficient treatments (generating less income for the agent)?

Comment by ChrisA on Consolidated Nature of Morality Thread · 2007-04-16T13:31:25.000Z · LW · GW

I should clarify the above statement - I mean it is possible that moral rules can be derived logically, like the speed of light, from the structure of the universe.

Comment by ChrisA on Consolidated Nature of Morality Thread · 2007-04-16T13:15:33.000Z · LW · GW

It is possible that there are moral rules that apply universally; perhaps we just haven't discovered them yet. After all, who would have predicted that the universe had a natural speed limit? So 8 could be false. The rest of the statements assume the viewpoint that there is such a thing as universal morality, asking tricky questions about how to apply or calculate it. I don't have answers, as I predict morality is mostly a psychological device imposed by genetics to promote group genetic survival, mixed with some childhood conditioning. We will find out the true nature of morality when we invent AI: if AIs that have no initial moral constraints develop a moral sense that is the same as ours, we can suspect that it is universal. I think the reality is that AIs will be entirely goal-orientated, no matter how smart they are, and lying, cheating, stealing, killing, etc. will all be OK if that serves their purpose. Of course they may want us to think they are moral (for their own survival purposes), so we had better be careful.

In the meantime, it makes me happier to be a conventionally moral person, because of my conditioning or genetics or whatever, so that's what I do.

Comment by ChrisA on Tsuyoku Naritai! (I Want To Become Stronger) · 2007-03-28T08:58:44.000Z · LW · GW

There seem to be lots of parallels between majoritarianism and the efficient market hypothesis in finance. Under the efficient market hypothesis it is entirely possible that a liquidly traded asset is mispriced (similar to the possibility that the majority view is very wrong); however, on average, according to the efficient market view, I maximise my chances of being right by accepting the current price of the asset as the correct price. Therefore the fact that a stock halved in price over a year is not a valid criticism of efficient market theory, just as in majoritarianism the fact that the majority has been proven wrong is not a valid criticism of majoritarianism. The problem of freeloading is inherent in efficient market theory: if everyone accepts it, then the market is no longer efficient. But there are enough people who have justified reasons not to invest on an efficient-market basis to ensure that this does not happen, as discussed below.

Some examples of justified reasons for differing from the majority view, in the case of efficient markets, are:

1) Under efficient market theory, it is accepted that people with inside information can have successful trading strategies which deliver predictably above-average returns. In the case of majoritarianism, we would be justified in holding a different view to the majority if we had inside information that the majority did not have (for instance, we know the colour of someone's eyes when the majority do not).

2) A professional money manager of a non-index mutual fund is also justified in differing from the majority view, since this is what he is paid to do. The parallel here for majoritarianism would be scientists or academics, who are paid to advance new theories; they receive compensation for differing from the majority, at least in their own area of speciality.

3) The final area where you might differ from the efficient market approach is when you gain some entertainment utility from investing (i.e. as a form of gambling; if I am honest, this is why I invest in single stocks). In the case of majoritarianism, the parallel is that you can choose to hold a view that is different from the majority if it brings you entertainment utility which outweighs the costs of holding a non-efficient view (religious views might be in this category).
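The freeloading point can be made concrete with a toy simulation (a sketch under assumed parameters: a binary question, "informed" voters who are each right 60% of the time, and freeloaders who simply copy the informed majority and so add nothing):

```python
import random

def majority_accuracy(n_informed, p_correct=0.6, trials=10_000):
    """Chance that the majority of n_informed independent voters is
    right on a binary question. Freeloaders who copy that majority
    add no information, so only the informed count matters.
    All parameters are illustrative assumptions."""
    wins = 0
    for _ in range(trials):
        correct = sum(random.random() < p_correct
                      for _ in range(n_informed))
        if 2 * correct > n_informed:
            wins += 1
    return wins / trials

for n in (1001, 101, 11, 1):
    print(f"{n:5d} independent voters -> majority right "
          f"{majority_accuracy(n):.0%} of the time")
# Accuracy falls toward the single-voter rate (60%) as freeloading
# shrinks the pool of independent views: the reliability of the
# majority position depends on enough people not freeloading.
```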