Comments

Comment by wuwei on Hacking the CEV for Fun and Profit · 2010-06-03T23:18:33.717Z · LW · GW

CEV is not preference utilitarianism, or any other first-order ethical theory. Rather, preference utilitarianism is the sort of thing that might be CEV's output.

Comment by wuwei on Human values differ as much as values can differ · 2010-05-04T05:08:07.615Z · LW · GW

Matt Simpson was talking about people who have in fact reflected on their values a lot. Why did you switch to talking about people who think they have reflected a lot?

What "someone actually values" or what their "terminal values" are seems to be ambiguous in this discussion. On one reading, it just means what motivates someone the most. In that case, your claims are pretty plausible.

On the other reading, which seems more relevant in this thread and the original comment, it means the terminal values someone should act on, which we might approximate as what they would value at the end of reflection. Switching back to people who have actually reflected a lot (not merely think they have), it doesn't seem all that plausible that such people are often the most confused about their "terminal values".

For the record, I'm perfectly happy to concede that, in general, talk of what someone "actually values" or of their present "terminal values" should be reserved for what in fact most motivates people. It is tempting to use that kind of talk for what people should value, because it lets us point to existing mental structures that play a clear causal role in influencing action; but I think that is ultimately only confusing, because those are the wrong mental structures to point to when analyzing rightness or shouldness.

Comment by wuwei on Only humans can have human values · 2010-04-27T00:40:16.674Z · LW · GW

I suppose I might count as someone who favors "organismal" preferences over confusing the metaphorical "preferences" of our genes with those of the individual. I think your argument against this is pretty weak.

You claim that favoring the "organismal" over the "evolutionary" fails to accurately identify our values in four cases, but I fail to see any problem with these cases.

  • I find no problem with upholding the human preference for foods that taste fatty, sugary and salty. (Note that, consistently applied, the "organismal" preference would be for the fatty, sugary and salty taste, not for foods that are actually fatty, sugary and salty. E.g., we like drinking Diet Pepsi with Splenda almost as much as Pepsi, and in rough proportion to how well Splenda mimics the taste of sugar. We could even go one step further and drop the actual food part, valuing just the experience of [seemingly] eating fatty, sugary and salty foods.) This doesn't necessarily commit me to valuing an unhealthy diet all things considered, because we also have many other preferences, e.g. for our health, which may outweigh this true human value.
  • The next two cases (fear of snakes and enjoying violence) can be dealt with similarly.
  • The last one is a little trickier but I think it can be addressed by a similar principle in which one value gets outweighed by a different value. In this case, it would be some higher-order value such as treating like cases alike. The difference here is that rather than being a competing value that outweighs the initial value, it is more like a constitutive value which nullifies the initial value. (Technically, I would prefer to talk here of principles which govern our values rather than necessarily higher order values.)

I thought your arguments throughout this post were similarly shallow and uncharitable to the side you were arguing against. For instance, you go on at length about how disagreements about value exist and intuitions are not consistent across cultures and history, but I don't see how this is supposed to be any more convincing than pointing out how many people in history have believed that the Earth is flat.

Okay, you've defeated the view that ethics is about the values all humans throughout history unanimously agree on. Now what about views that extrapolate not from perfectly consistent, unanimous and foundational intuitions or preferences, but from dynamics in human psychology that tend to shape initially inconsistent and incoherent intuitions into more consistent and coherent ones -- dynamics whose end result can be hard to predict when they are iteratively applied, and which can be misapplied in any given instance, much as the belief-level dynamic of favoring the simplest hypothesis consistent with the evidence can be misapplied?

By the way, I don't mean to claim that your conclusion is obviously wrong. I think someone favoring my type of view about ethics has a heavy burden of proof that you hint at, perhaps even one that has been underappreciated here. I just don't think your arguments here provide any support for your conclusion.

It seems to me that when you try to provide illustrative examples of how opposing views fail, you end up merely attacking straw men. Perhaps you'd do better if you tried to establish that all opposing views must have some property in common, and that such a property dooms those views to failure. Or that opposing views must take one of two mutually exclusive and exhaustive routes in response to some central dilemma, and that both routes doom them to failure.

I really would like to see the most precise and cogent version of your argument here as I think it could prompt some important progress in filling in the gaps present in the sort of ethical view I favor.

Comment by wuwei on Attention Lurkers: Please say hi · 2010-04-17T05:02:53.458Z · LW · GW

Hi.

I've read nearly everything on Less Wrong, but except for a couple of months last summer I generally don't comment because: a) I feel I don't have time; b) my perfectionist standards make me anxious about meeting and maintaining the high standards of discussion here; and c) very often someone has either already said what I would have wanted to say, or I anticipate from experience that someone will very soon.

Comment by wuwei on That Magical Click · 2010-01-22T23:58:04.352Z · LW · GW

"There's the consequentialist/utilitarian click, and the intelligence explosion click, and the life-is-good/death-is-bad click, and the cryonics click."

I can find a number of blog posts from you clearly laying out the arguments in favor of each of those clicks except the consequentialism/utilitarianism one.

What do you mean by "consequentialism" and "utilitarianism" and why do you think they are not just right but obviously right?

Comment by wuwei on Open Thread: December 2009 · 2009-12-03T04:14:43.838Z · LW · GW

d) should be changed to the sparseness of intelligent aliens and limits to how fast even a superintelligence can extend its sphere of influence.

Comment by wuwei on A Less Wrong singularity article? · 2009-11-21T03:27:57.542Z · LW · GW

Interesting, what about either of the following:

A) If X should do A, then it is rational for X to do A.

B) If it is rational for X to do A, then X should do A.

Comment by wuwei on A Less Wrong singularity article? · 2009-11-21T01:29:15.369Z · LW · GW

I'm a moral cognitivist too but I'm becoming quite puzzled as to what truth-conditions you think "should" statements have. Maybe it would help if you said which of these you think are true statements.

1) Eliezer Yudkowsky should not kill babies.

2) Babyeating aliens should not kill babies.

3) Sharks should not kill babies.

4) Volcanoes should not kill babies.

5) Should not kill babies. (sic)

The meaning of "should not" in 2 through 5 is intended to be the same as the common usage of the words in 1.

Comment by wuwei on A Less Wrong singularity article? · 2009-11-18T04:20:42.311Z · LW · GW

"I don't think we anticipate different experimental results."

I find that quite surprising to hear. Wouldn't disagreements about meaning generally cash out in some sort of difference in experimental results?

Comment by wuwei on A Less Wrong singularity article? · 2009-11-18T04:14:08.665Z · LW · GW

On your analysis of should, paperclip maximizers should not maximize paperclips. Do you think this is a more useful characterization of 'should' than one in which we should be moral and rational, etc., and paperclip maximizers should maximize paperclips?

Comment by wuwei on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-12T04:24:48.875Z · LW · GW

Do you think that morality or rationality recommends placing no intrinsic weight or relevance on either a) backwards-looking considerations (e.g. having made a promise) as opposed to future consequences, or b) essentially indexical considerations (e.g. that I would be doing something wrong)?

Comment by wuwei on Pain · 2009-08-03T17:59:40.868Z · LW · GW

Its painfulness.

After certain medical procedures, some patients report that their pain is no longer painful. When asked whether the pain is still there, they say that the sensation is there just as it was before, but that they simply don't mind it anymore.

The feature that their pain now lacks is what I am calling its painfulness, and that is what is bad about pain.

Comment by wuwei on Joint Distributions and the Slow Spread of Good Ideas · 2009-07-21T03:40:51.412Z · LW · GW

Yes. And since being a maverick has a similar negative expectation for most working people, it seems well-placed to explain the slow spread of good ideas more generally as well.

Comment by wuwei on Outside Analysis and Blind Spots · 2009-07-21T02:45:34.862Z · LW · GW

Great post.

I agree that you've identified a very good reason to take care in the use of gender-specific pronouns or anything else likely to create in-group/out-group effects.

I also think there probably was a fair amount of attitude polarization on the question of how acceptable it was to make the statement in question.

Comment by wuwei on Sayeth the Girl · 2009-07-20T03:30:51.184Z · LW · GW

Under what conditions do you normally find it necessary to attempt to fully describe a goal?

Comment by wuwei on Sayeth the Girl · 2009-07-20T00:58:54.884Z · LW · GW

Upvoted because I appreciate Alicorn's efforts and would like to hear additional rational presentations of views in the same neighborhood as hers.

I would bet I also upvoted some of the comments Alicorn is referring to as comments that perpetuate the problem.

Comment by wuwei on Sayeth the Girl · 2009-07-20T00:40:14.664Z · LW · GW

disregard for the autonomy of people =/= thinking of someone in a way that doesn't include respect for his goals, interests, or personhood

I am reading the latter rather literally in much the same way RobinHanson seems to and as I think the author intended.

Comment by wuwei on Absolute denial for atheists · 2009-07-19T03:18:02.509Z · LW · GW

Nice. Tying the usage of words to inferences seems to be a generally useful strategy for moving semantic discussions forward.

Comment by wuwei on Absolute denial for atheists · 2009-07-19T02:55:46.808Z · LW · GW

I had negative associations attached to Roko's comment because I started imagining myself with my preferences adopting Roko's suggestions.

This sentence was meant to explain why I was momentarily off-put. I did not mean to imply that I have any ethical problems with the desires mentioned (I don't), though now that you mention it, I wouldn't be too surprised if I do retain some knee-jerk ethical intuitions against them.

Comment by wuwei on Absolute denial for atheists · 2009-07-19T02:34:22.383Z · LW · GW

Have you tried programming in a language with an interactive interpreter, extensive documentation and tutorials, and open source code?

Comment by wuwei on Absolute denial for atheists · 2009-07-19T01:35:00.509Z · LW · GW

I still have very little idea what you mean by 'objectification' and 'objectify people'.

I was momentarily off-put by Roko's comment on the desire to have sex with extremely attractive women that money and status would get. This was because of:

  • the focus on sex, whereas I would desire a relationship.
  • the connotation of 'attractive' which in my mind usually means physical attractiveness, whereas my preferences are dominated by other features of women.
  • the modifier 'extremely' which seems to imply a large difference in utility placed on sex with extremely attractive women vs. very attractive or moderately attractive women, especially when followed by identifying this desire as a generator for desiring high social status rather than vice versa or discussing both directions of causation. (The latter would have made more sense to me in the context of Roko saying we should value social influential power.)

I had negative associations attached to Roko's comment because I started imagining myself with my preferences adopting Roko's suggestions. However, I wouldn't have voiced these negative associations in any phrases along the lines of 'objectification' or 'objectifying', or in terms of any moral concerns. The use of the word 'get' by itself did not strike me as particularly out of place any more than talk of 'getting a girlfriend/boyfriend'.

Comment by wuwei on The Strangest Thing An AI Could Tell You · 2009-07-16T00:53:42.975Z · LW · GW

"The Fermi paradox is actually quite easily resolvable. There are zillions of aliens teeming all around us. They're just so technologically advanced that they have no trouble at all hiding all evidence of their existence from us."

Comment by wuwei on The Strangest Thing An AI Could Tell You · 2009-07-16T00:44:57.173Z · LW · GW

I thought Chalmers was an analytic functionalist about cognition and reserved his brand of dualism only for qualia.

Comment by wuwei on Can self-help be bad for you? · 2009-07-08T00:22:41.303Z · LW · GW

Yes, but not all self-help needs to involve positive affirmations.

I was going to ask whether repeating positive statements about oneself has actually been recommended on lesswrong. Then I remembered this post. Perhaps that post would have made a more suitable target than the claim that rationalists should win.

Wouldn't a rationalist looking to win simply welcome this study along with any other evidence about what does or does not work?

Comment by wuwei on Open Thread: July 2009 · 2009-07-06T23:52:35.446Z · LW · GW

Unless that changes, then, I wouldn't particularly recommend programming as a job. I quite like my programming job, but that's because I like programming and I don't work in a Dilbert cartoon.

Comment by wuwei on Rationality Quotes - July 2009 · 2009-07-05T22:33:17.830Z · LW · GW

According to an old story, a lord of ancient China once asked his physician, a member of a family of healers, which of them was the most skilled in the art.

The physician, whose reputation was such that his name became synonymous with medical science in China, replied, "My eldest brother sees the spirit of sickness and removes it before it takes shape and so his name does not get out of the house."

"My elder brother cures sickness when it is still extremely minute, so his name does not get out of the neighborhood."

"As for me, I puncture veins, prescribe potions, and massage skin, so from time to time my name gets out and is heard among the lords."

-- Thomas Cleary, Introduction to The Art of War

Comment by wuwei on Open Thread: July 2009 · 2009-07-05T21:23:31.363Z · LW · GW

Do you program for fun?

Comment by wuwei on Rationality Quotes - July 2009 · 2009-07-04T17:43:15.275Z · LW · GW

Take the thoughts of such an one, used for many years to one tract, out of that narrow compass he has been all his life confined to, you will find him no more capable of reasoning than almost a perfect natural. Some one or two rules on which their conclusions immediately depend you will find in most men have governed all their thoughts; these, true or false, have been the maxims they have been guided by. Take these from them, and they are perfectly at a loss, their compass and polestar then are gone and their understanding is perfectly at a nonplus; and therefore they either immediately return to their old maxims again as the foundations of all truth to them, notwithstanding all that can be said to show their weakness, or, if they give them up to their reasons, they with them give up all truth and further enquiry and think there is no such thing as certainty.

-- John Locke, Of the Conduct of Understanding

Comment by wuwei on Rationality Quotes - July 2009 · 2009-07-04T02:33:44.004Z · LW · GW

There is a mathematical style in which proofs are presented as strings of unmotivated tricks that miraculously do the job, but we found greater intellectual satisfaction in showing how each next step in the argument, if not actually forced, is at least something sweetly reasonable to try. Another reason for avoiding [pulling] rabbits [out of the magicians's hat] as much as possible was that we did not want to teach proofs, we wanted to teach proof design. Eventually, expelling rabbits became another joy of my professional life.

-- Edsger Dijkstra

Edit: Added context to "rabbits" in brackets.

Comment by wuwei on Rationality Quotes - July 2009 · 2009-07-04T01:03:33.226Z · LW · GW

Thanks for the explanations.

Comment by wuwei on Rationality Quotes - July 2009 · 2009-07-04T00:39:12.321Z · LW · GW

Testing shows the presence, not the absence of bugs.

-- Edsger Dijkstra

Comment by wuwei on What's In A Name? · 2009-06-30T00:39:02.234Z · LW · GW

Here's one way this could be explained: Susie realizes that her name could become a cheap and effective marketing tool if she sells seashells at the seashore. Since that's something she enjoys doing anyway, she does so.

If that's how things are, I wouldn't really call this a cognitive bias.

Comment by wuwei on Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation · 2009-06-19T00:27:49.674Z · LW · GW

That's a good point, but it would be more relevant if this were a policy proposal rather than an epistemic probe.

Comment by wuwei on Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation · 2009-06-19T00:00:27.632Z · LW · GW

To answer your second question: No, there aren't any historical examples I am thinking of. Do you find many historical examples of existential risks?

Edit: Global nuclear warfare and biological weapons would be the best candidates I can think of.

Comment by wuwei on Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation · 2009-06-18T23:56:09.156Z · LW · GW

If you decreased the intelligence of everyone to 100 IQ points or lower, I think overall quality of life would decrease but that it would also drastically decrease existential risks.

Edit: On second thought, now that I think about nuclear and biological weapons, I might want to take that back while pointing out that these large threats were predominantly created by quite intelligent, well-intentioned and rational people.

Comment by wuwei on Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation · 2009-06-18T19:08:55.317Z · LW · GW

You seem to be assuming that the relation between IQ and risk must be monotonic.

I think existential risk mitigation is better pursued by helping the most intelligent and rational efforts than by trying to raise the average intelligence or rationality.

Comment by wuwei on Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation · 2009-06-18T19:07:41.561Z · LW · GW

And I will suggest in turn that you are guilty of the catchy fallacy name fallacy. The giant cheesecake fallacy was originally introduced as applying to those who anthropomorphize minds in general, often slipping from capability to motivation because a given motivation is common in humans.

I'm talking about a certain class of humans and not suggesting that they are actually motivated to bring about bad effects. Rather, all it takes is for there to be problems where it is significantly easier to mess things up than to get them right.

Comment by wuwei on Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation · 2009-06-18T17:31:25.845Z · LW · GW

I think many of the most pressing existential risks (e.g. nanotech, biotech and AI accidents) come from the likely actions of moderately intelligent, well-intentioned, and rational humans (compared to the very low baseline). If that is right then increasing the number of such people will increase rather than decrease risk.

Comment by wuwei on Intelligence enhancement as existential risk mitigation · 2009-06-17T02:33:08.509Z · LW · GW

Increases in rationality can, with some regularity, lead to decreased knowledge or utility (hopefully only temporarily and in limited domains).

Comment by wuwei on Intelligence enhancement as existential risk mitigation · 2009-06-16T23:11:03.783Z · LW · GW

I suspect you aren't sufficiently taking into account the magnitude of people's irrationality and the non-monotonicity of rationality's rewards. I agree that intelligence enhancement would have greater overall effects than rationality enhancement, but rationality's effects will be more careful and targeted -- and therefore more likely to work as existential risk mitigation.

Comment by wuwei on Intelligence enhancement as existential risk mitigation · 2009-06-16T02:15:22.032Z · LW · GW

I'm not sure intelligence enhancement alone is sufficient. It'd be better to first do rationality enhancement and then intelligence enhancement. Of course that's also much harder to implement but who said it would be easy?

It sounds like you think intelligence enhancement would result in rationality enhancement. I'm inclined to agree that there is a modest correlation but doubt that it's enough to warrant your conclusion.

Comment by wuwei on Rationality Quotes - June 2009 · 2009-06-15T05:59:42.301Z · LW · GW

"One can measure the importance of a scientific work by the number of earlier publications rendered superfluous by it."

-- David Hilbert

Comment by wuwei on Rationality Quotes - June 2009 · 2009-06-15T04:39:02.963Z · LW · GW

"Muad'Dib learned rapidly because his first training was in how to learn. And the first lesson of all was the basic trust that he could learn. It's shocking to find how many people do not believe they can learn, and how many more believe learning to be difficult. Muad'Dib knew that every experience carries its lesson."

-- Frank Herbert, Dune

Comment by wuwei on Rationality Quotes - June 2009 · 2009-06-15T04:15:12.446Z · LW · GW

"Science is what we understand well enough to explain to a computer. Art is everything else we do. ... Science advances whenever an Art becomes a Science. And the state of the Art advances too because people always leap into new territory once they have understood more about the old."

-- Donald Knuth

Comment by wuwei on Rationality Quotes - June 2009 · 2009-06-15T03:50:24.763Z · LW · GW

I like some of the imagery, but I wouldn't say that whatever the outcome is, it is by definition good.

To continue with the analogy, sometimes our inner book of morals really says one thing while a momentary upset prevents what is written in that book from successfully governing.

Comment by wuwei on Why safety is not safe · 2009-06-14T23:18:19.581Z · LW · GW

You seem to think an FAI researcher is someone who does not engage in any AGI research. That would certainly be a rather foolish researcher.

Perhaps you are being fooled by the fact that a decent FAI researcher would tend not to publicly announce any advancements in AGI research.

Comment by wuwei on Why safety is not safe · 2009-06-14T18:16:32.924Z · LW · GW

Would you bet on resource depletion?

Comment by wuwei on The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It · 2009-06-12T04:41:17.456Z · LW · GW

Voted Down. Sorry, Roko.

I don't find Greene's arguments to be valuable or convincing. I won't defend those claims here but merely point out that this post makes it extremely inconvenient to do so properly.

I would prefer concise reconstructions of important arguments over a link to a 377 page document and some lengthy quotes, many of which simply presuppose that certain important conclusions have already been established elsewhere in the dissertation.

As an exercise for the reader demonstrating my complaint, consider what it would take to work out whether Joshua Greene has any argument against this analysis of morality.

I agree that this is an important discussion to have but I don't think this post helps us to engage in a productive discussion. Rather, it merely seems to handicap those who disagree with Greene on multiple points when they wish to participate in the discussion and does so without adequate justification.

Comment by wuwei on Mate selection for the men here · 2009-06-09T00:14:39.226Z · LW · GW

Thanks for clarifying what factors you think are relevant. I agree that those have not been tested.

Comment by wuwei on Mate selection for the men here · 2009-06-08T22:25:33.683Z · LW · GW

The correlations with independent ratings of attractiveness were still .44 and .39. Compared to .04 and -.06 for intelligence, that still supports the conclusion that "sheer physical attractiveness appears to be the overriding determinant of liking."

They also used various personality measures assessing such things as social skills, maturity, masculinity/femininity, introversion/extroversion and self-acceptance. They found predominantly negative correlations (from -.18 to -.08) and only two comparatively small positive correlations, .14 and .03.
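
For readers who want to see concretely what coefficients like .44 or -.06 mean, here is a minimal sketch (in Python, using made-up illustrative numbers, not data from the study) of computing a Pearson correlation between independent attractiveness ratings and liking scores.

```python
# Minimal sketch: Pearson correlation between two lists of ratings.
# The data below are hypothetical placeholders, not figures from the study.

def pearson_r(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

attractiveness = [3, 5, 2, 4, 5, 1, 3, 4]  # hypothetical independent ratings
liking = [2, 5, 2, 3, 4, 1, 2, 4]          # hypothetical liking scores
print(round(pearson_r(attractiveness, liking), 2))  # a value near 1 means strong positive association
```

A coefficient around .4 indicates a moderate positive relationship, while values near zero (like those reported for intelligence) indicate essentially no linear relationship.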