No
SIAI needed to improve as an organization, so they brought in people who they thought could run a successful non-profit. What they got was a better non-profit plus the whole accompanying spectrum of philanthropy status divas, professional beggars and related hangers-on.
Most of the original thinkers have left, replaced by those who believe in thinking, but only for fashionable thoughts.
Jargon separates the raw value systems I'm talking about from the tribes that cling to them. I figured this would be less mind-killing but still communicative to the sort of person who cares about this thread.
It's not much better in the US. I live in a fairly Townie area, but there is a University, which has a student body unanimous in its adoration of Brahmin values. All of my young coworkers chattered with glee this morning at the "humiliation" of the "enemy".
Konkvistador is a concerned Tutsi living in a politico-cultural regime which seems increasingly pleased at the prospect of watching Hutus eat Tutsis.
Isn't it as simple as the fact that eugenicist ideas, even obviously good ones, assume the reality of HBD and therefore violate Western Universalist taboos?
Assume guilt!
It is a genuine challenge for me to tell if this is a joke.
Is this post meant as satire?
Certainly my good sir.
http://www.mayoclinic.com/health/organic-food/NU00255/NSECTIONGROUP=2
All the reliable literature I have read says that:
a) Conventional produce and organic produce are nutritionally equivalent
b) Organic produce is more prone to rancidity as fewer preservatives are used
c) Organic produce will make you popular with people who wear glasses with no lenses
The immortal starfarers necessarily go somewhere; the status game players don't necessarily go anywhere. Hence "winning".
Deciding that going somewhere is "winning" comes from your existing utility function. Another person could judge that the civilization with the most rich and complex social hierarchy "wins".
Rationality can help you search the space of actions, policies, and outcomes for those which produce the highest value for you. It cannot help you pass objective judgment on your values, or discover "better" ones.
You're privileging your values when you judge which society - the status game players versus the immortal starfarers - is "winning".
By your lengthy apologetic introduction you're signaling to me that you know this doesn't belong here.
I'm receiving signals that people would rather I not comment.
Thanks for engaging, you've explained your position well.
I think there's a decent consensus on food:
"Eat food. Not too much. Mostly plants."
Sure, there are always scare stories about how (insert target food) might be trouble because one study found a mild correlation. However, I think there are many diet choices that people make (myself included) that are conclusively unhealthy.
Do you disagree?
I apologize for the leading questions. I didn't want to make outright accusations of tone when I wasn't sure how you had intended your comments. Your comments had seemed brief and chastising, and I wasn't sure what you were trying to communicate.
However, your answers make sense, and your rephrasing of my second question is fair.
I am still unsure, though, why you object to the use of the "poor diet choices as destructive behavior" analogy. It seems comparable to the drug-use analogy you propose as an alternative.
As a complete outsider to this conversation, it doesn't look like you're playing fair.
Can I ask you just to consider a few questions?
1) Do you think you are using a constructive tone?
2) If overeating were the primary cause of most obesity, would you want to know?
3) Is it your goal to shut down any discussion of this topic because of your personal sensibilities?
Tragically, no. Sorry I never got back to you last week; I didn't get your text until the next day.
We both have a super busy fall semester, so we've been missing the meetups.
It's not a written rule by any means, but in order to acclimate to the style and reduce inferential distances it's usually a good idea.
Your piece isn't hated, it's just not good.
If you are actually interested in participating in this community, read the sequences. Then read some of the current frontpage material. Then try engaging us again, with one (and only one) username.
Dude, if you're going to make a dozen accounts to talk up your post, you can't use the same deranged writing style in all of them.
Downvoted, sockpuppet.
Hey! That's great. Excited to meet you :)
I'm sorry I missed it. I'll check in regularly for info on the next one.
Austin LW'er here. I totally would have come to this, but somehow I missed the announcement in the meta-meetup thread. I will watch for the next one of these and absolutely be there.
Edit: Wait... why was this downvoted?
Upvoted.
I'm starting to think this will not end well. We've started down an all-too-familiar non-theist-versus-religionist conversation path.
I swear, if you can make an ironclad rational argument for Mormonism, I will personally convert.
Seconded. I am entirely open to models of the universe that better fit the evidence at hand than the ones I have. If you (calcsam) can present a convincing case for the accuracy and validity of your beliefs I will adopt them as well.
For most of my life I thought I didn't enjoy music. Then I realized that I just don't like the music everyone listens to. The stuff that comes out of my radio is extremely unpleasant, but with much searching I have found some music that I do enjoy.
You're signalling to me right now that you have no desire to have a productive conversation. I don't know if you're meaning to do that, but I'm not going to keep asking questions if it seems like you have no intent to answer them.
Let's break this all the way down. Can you give me your thesis?
I mean, I see there is a claim here:
The aliens do not want to be exterminated so they should not exterminate.
... of the format (X therefore Y). I can understand what the (X) part of it means: aliens with a preference not to be destroyed. Now the (Y) part is a little murky. You're saying that the truth of X implies that they "should not exterminate". What does the word should mean there?
What would it mean for a utility function to be objectively wrong? How would one determine that a utility function has the property of "wrongness"?
Please, do not answer "by reasoning about it" unless you are willing to provide that reasoning.
So, we're working with thomblake's definition of "wrong" as those actions which reduce utility for whatever function an agent happens to care about. The aliens care about themselves not being exterminated, but may actually assign very high utility to humans being wiped out.
Perhaps we would be viewed as pests, like rats or pigeons. Just as humans can assign utility to exterminating rats, the aliens could do so for us.
Exterminating humans has the objectively determinable outcome of reducing the utility in your subjectively privileged function.
Okay, we don't disagree at all.
There is an objective sense in which actions have consequences. I am always surprised when people seem to think I'm denying this. Science works, there is a concrete and objective reality, and we can with varying degrees of accuracy predict outcomes with empirical study. Zero disagreement from me on that point.
So, we judge consequences of actions with our preferences. One can be empirically incorrect about what consequences an action can have, and if you choose to define "wrong" as those actions which reduce the utility of whatever function you happen to care about, then sure, we can determine that objectively too. All I am saying is that there is no objective method for selecting the function to use, and it seems like we're in agreement on that.
Namely, we privilege utility functions which value human life only because of facts about our brains, as shaped by our genetics, evolution, and experiences. If an alien came along and saw humans as a pest to be eradicated, we could say:
"Exterminating us is wrong!"
... and the alien could say:
"LOL. No, silly humans. Exterminating you is right!"
And there is no sense in which either party has an objective "rightness" that the other lacks. They are each referring to the utility functions they care about.
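To make that concrete, here is a toy sketch (my own made-up example with invented numbers, Python only for concreteness): each agent evaluates the same outcome against its own utility function, and "wrong" just means "scores worse than the baseline on the function the speaker cares about".

```python
# Toy illustration: "wrong" is computed relative to whichever utility
# function the judging agent happens to care about. All values invented.

def human_utility(outcome):
    # Humans assign very low value to their own extermination.
    return {"humans_exterminated": -1000, "status_quo": 0}[outcome]

def alien_utility(outcome):
    # The aliens see humans as pests and assign positive value to wiping us out.
    return {"humans_exterminated": 50, "status_quo": 0}[outcome]

def judges_as_wrong(utility_fn, outcome, baseline="status_quo"):
    # An agent calls an outcome "wrong" if it scores worse than the baseline
    # on the utility function that agent happens to care about.
    return utility_fn(outcome) < utility_fn(baseline)

print(judges_as_wrong(human_utility, "humans_exterminated"))  # True  -> "Exterminating us is wrong!"
print(judges_as_wrong(alien_utility, "humans_exterminated"))  # False -> "Exterminating you is right!"
# Both evaluations are perfectly objective computations; what is not objective
# is the choice of which function to plug in.
```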
Is there a sense in which there are "two" rocks here, even if there were no agent to count the rocks? Is there a sense in which murder is wrong, even if there was never anyone to murder or observe murder?
I can understand what physical conditions you are describing when you say "two rocks". What does it mean, in a concrete and substantive sense, for murder to be "wrong"?
We are, near as I can tell, in perfect agreement on the substance of this issue. Aumann would be proud. :)
Unfortunately, I'm afraid I still don't understand your point.
I think we may have reached the somewhat common on LW point where we're arguing even though we have no disagreement.
sociological phenomenon ... still reasonable to disapprove of murder, etc.
Yup.
Could an agent with different preferences from ours reasonably approve of murder?
Yes to that too.
I very, very, strongly disapprove of terrorism. Terrorists, of course, would disagree. There is no objective sense in which one of us can be "right", unless you go out of your way to specifically define "right" as those actions which agree with one side or the other. The privileging of those actions as "right" still originates from the subjective values of whatever agent is judging.
Thanks, CuSithBell, I think you've done a good job of making the issue plain. It does indeed all add up to normality.
For that matter, I do not punish to transfer funds from healthy young males to impoverished old ladies who have not been stolen from
There are people who feel there is a moral imperative to do just that. Likewise, there is wide disagreement over what deserves punishment. An Orthodox Jew, a Muslim, a Catholic, a Lutheran, a Communist, and a Vulcan walk into a bar... I'm sure we can all see the potential for punchlines.
You may punish action X which violates your preferences because you want to see people punished for action X. You could simultaneously choose not to punish action Y which violates your preferences, because for whatever reason you would prefer people not be punished for it. Others could disagree, and people often do disagree on what deserves punishment and what doesn't.
Neither side in such a debate is objectively incorrect. Each would indeed prefer their position of punishment or non-punishment.
we punish those who steal from old ladies, because the stealing is wrong.
I would say we punish those who steal from old ladies because we would prefer the old ladies not be stolen from. It is that preference, the subjective value we all (except the thief of course) place on a society where the meek are not abused by criminals, that causes us to call that behavior "wrong".
The evolutionary origins of that preference seem pretty obvious. In any group of social animals there will be one or two top physical competitors, and the remainder would be subject to their will. Of those many weaker individuals, the ones who survived to procreate were those who banded together to make bully-free tribes.
So, there are people who disagree with what you posted, and may be inclined to argue about it. That, combined with the idea shared in the Paul Graham quote in this very thread (about politics frequently being used as a form of identity) leads to defensiveness, leads to rationalization, and leads to stupidity.
So, in order to avoid stupid arguments, people would prefer fewer posts like your quote on LW.
Well, duh! What is your point?
There are people who do not recognize this. It was, in fact, my point.
Edit: Hmm, did I say something rude, Perplexed?
We don't cringe at the thought of stealing from old ladies because it's wrong, but rather we call it wrong to steal from old ladies because we cringe at the thought
This is crisp, clear, and one of the best short explanations of the issue I've read.
So you're saying that there's one single set of behaviors, which, even though different agents will assign drastically different values to the same potential outcomes, balances their conflicting interests to provide the most net utility across the group. That could be true, although I'm not convinced.
Even if it is, though, what the optimal strategy is will change if the net values across the group changes. The only point I have ever tried to make in these threads is that the origin of any applicable moral value must be the subjective preferences of the agents involved.
The reason any agent would agree to follow such a rule set is if you could demonstrate convincingly that such behaviors maximize that agent's utility. It all comes down to subjective values. There exists no other motivating force.
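To illustrate what I mean by the "optimal" behavior set being contingent on the group's values, here is a toy sketch (invented agents and numbers, not anything you proposed): aggregate the agents' subjective utilities, pick the policy with the highest total, then change the mix of values and watch the winner change.

```python
# Toy model: the group-"optimal" policy is just the argmax of summed
# subjective utilities, so it shifts whenever the group's values shift.

policies = ["never_punish_theft", "always_punish_theft"]

def best_policy(agents):
    # "Optimal" here means: highest total subjective utility across the group.
    return max(policies, key=lambda policy: sum(agent[policy] for agent in agents))

# Each agent is a mapping from policy to how much that agent values living under it.
mostly_victims = [
    {"never_punish_theft": 0, "always_punish_theft": 10},
    {"never_punish_theft": 0, "always_punish_theft": 10},
    {"never_punish_theft": 5, "always_punish_theft": -8},   # the thief
]
print(best_policy(mostly_victims))   # always_punish_theft

mostly_thieves = [
    {"never_punish_theft": 5, "always_punish_theft": -8},
    {"never_punish_theft": 5, "always_punish_theft": -8},
    {"never_punish_theft": 0, "always_punish_theft": 10},
]
print(best_policy(mostly_thieves))   # never_punish_theft
# Same aggregation rule, different values in the group, different "optimal" behavior set.
```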
It's a long story, starting with Eugine publicly declaring that he was downvoting the comments I made that he disagreed with, which has seemingly escalated to downvoting every comment I make even where I'm just conducting meta-housekeeping and the like.
I'm not commenting to shame or accuse, I'm trying to understand his motivations.
On the topic of karma, why are you downvoting every post I make regardless of content?
imagine it as a doctrine teaching you how to judge the behavior of others (and to a lesser extent, yourself).
Which metrics do I use to judge others?
There has been some confusion over the word "preference" in the thread, so perhaps I should use "subjective value". Would you agree that the only tools I have for judging others are subjective values? (This includes me placing value on other people reaching a state of high subjective value.)
Or do you think there's a set of metrics for judging people which has some spooky, metaphysical property that makes it "better"?
I replied to Marius higher up in the thread with my efforts at preference-taboo.
Heh, I'm tempted to answer "yes" to your question because it makes me seem wittier than I was.
In reality, what I meant by "okay" was: Not contradictory or a crisis of rationality. It is indeed hard to avoid the language of objective judgments in English. :)
there aren't just people fulfilling their preferences.
You missed a word in my original. I said that there were agents trying to fulfill their preferences. Now, per my comment at the end of your subthread with Amanojack, I realize that the word "preferences" may be unhelpful. Let me try to taboo it:
There are intelligent agents who assign higher values to some futures than others. I observe them generally making an effort to actualize those futures, but sometimes failing due to various immediate circumstances, which we could call cognitive overrides. What I mean by that is that these agents have biases and heuristics which lead them to poorly evaluate the consequences of actions.
Even if a human sleeping on the edge of a cliff knows that the cliff edge is right next to him, he will jolt if startled by noise or movement. He may not want to fall off the cliff, but the jolt reaction occurs before he is able to analyze it. Similarly, under conditions of sufficient hunger, thirst, fear, or pain, the analytical parts of the agent's mind give way to evolved heuristics.
definition of morality (that doesn't involve the word preferences) is that set of habits that are most likely to bring long-term happiness to oneself and those around one.
If that's how you would like to define it, that's fine. Would you agree then, that the contents of that set of habits is contingent upon what makes you and those around you happy?
Oops, I totally missed this subthread.
Amanojack has, I think, explained my meaning well. It may be useful to reduce down to physical brains and talk about actual computational facts (i.e. utility function) that lead to behavior rather than use the slippery words "want" or "preference".