Suffering
post by Tiiba · 2009-08-03T16:02:38.270Z · LW · GW · Legacy · 96 comments
For a long time, I wanted to ask something. I was just thinking about it again when I saw that Alicorn has a post on a similar topic. So I decided to go ahead.
The question is: what is the difference between morally neutral stimulus responses and agony? What features must an animal, machine, program, alien, human fetus, molecule, or anime character have before you will say that if their utility meter is low, it needs to be raised? For example, if you wanted to know if lobsters suffer when they're cooked alive, what exactly are you asking?
On reflection, I'm actually asking two questions: what is a morally significant agent (MSA; is there an established term for this?) whose goals you would want to further; and having determined that, under what conditions would you consider it to be suffering, so that you would?
I think that an MSA would not be defined by one feature. So try to list several features, possibly assigning relative weights to each.
IIRC, I read a study that tried to determine if fish suffer by injecting them with toxins and observing whether their reactions are planned or entirely instinctive. (They found that there's a bit of planning among bony fish, but none among the cartilaginous.) I don't know why they had to actually hurt the fish, especially in a way that didn't leave much room for planning, if all they wanted to know was if the fish can plan. But that was their definition. You might also name introspection, remembering the pain after it's over...
This is the ultimate subjective question, so the only wrong answer is one that is never given. Speak, or be wrong. I will downvote any post you don't make.
BTW, I think the most important defining feature of an MSA is ability to kick people's asses. Very humanizing.
Comments sorted by top scores.
comment by HalFinney · 2009-08-04T06:06:58.772Z · LW(p) · GW(p)
Reading the comments here, there seem to be two issues entangled. One is which organisms are capable of suffering (which is probably roughly the same set that is capable of experiencing qualia; we might call this the set of sentient beings). The other is which entities we would care about and perhaps try to help.
I don't think the second question is really relevant here. It is not the issue Tiiba is trying to raise. If you're a selfish bastard, or a saintly altruist, fine. That doesn't matter. What matters is what constitutes a sentient being which can experience suffering and similar sensations.
Let us try to devote our attention to this question, and not the issue of what our personal policies are towards helping other people.
comment by CronoDAS · 2009-08-04T00:13:29.176Z · LW(p) · GW(p)
Why is this tagged "serasvictoriawouldyoumarryme"?
Anything to do with this fictional character?
comment by Scott Alexander (Yvain) · 2009-08-03T23:23:43.969Z · LW(p) · GW(p)
For me, a morally significant agent is one that has positive and negative qualia. Since I can't tell this by looking at an agent, I guess. I'm pretty sure that my brother does, and I'm pretty sure that a rock doesn't.
Cyan mentioned pain asymbolia on the last thread: the ability to feel pain, but not really find anything painful or problematic about it. If someone had asymbolia generalized across all mental functions, I would stop counting that person as a moral agent.
Replies from: DanielLC, Nominull
↑ comment by DanielLC · 2009-12-29T03:29:32.222Z · LW(p) · GW(p)
Wouldn't it be enough to have positive /or/ negative qualia?
Replies from: Yvain
↑ comment by Scott Alexander (Yvain) · 2009-12-29T13:02:32.561Z · LW(p) · GW(p)
...yes.
↑ comment by Nominull · 2009-08-09T02:34:04.439Z · LW(p) · GW(p)
Could you elaborate on your reasons for doubting that a rock has qualia?
Replies from: orthonormal, DanielLC, scroogemcduck1
↑ comment by orthonormal · 2009-08-09T19:20:01.580Z · LW(p) · GW(p)
Qualia appear to require complicated internal structure: knock out a certain brain area and you lose some aspect of it.
↑ comment by scroogemcduck1 · 2021-01-27T00:45:00.252Z · LW(p) · GW(p)
I cannot speak for Scott, but I can speculate. I am quite sure a rock doesn't have qualia, because it doesn't have any processing center, gives no sign of having any utility to maximize, and has no reaction to stimuli. It most probably doesn't have a mind.
comment by Nominull · 2009-08-03T18:13:21.440Z · LW(p) · GW(p)
If a "morally significant agent" is one whose goals I would want to further, then I am the only inherently morally significant agent. The moral significance of other agents shifts with my mood.
Replies from: Vladimir_Nesov, conchis
↑ comment by Vladimir_Nesov · 2009-08-04T12:14:03.990Z · LW(p) · GW(p)
What if the other agent is more "you" than you are? You are hiding the complexity of "moral significance" in the "I".
Replies from: Nominull
comment by HalFinney · 2009-08-04T06:18:15.011Z · LW(p) · GW(p)
We talk a lot here about creating Artificial Intelligence. What I think Tiiba is asking about is how we might create Artificial Consciousness, or Artificial Sentience. Could there be a being which is conscious and which can suffer and have other experiences, but which is not intelligent? Contrariwise, could there be a being which is intelligent and a great problem solver, able to act as a Bayesian agent very effectively and achieve goals, but which is not conscious, not sentient, has no qualia, cannot be said to suffer? Are these two properties, intelligence and consciousness, independent or intrinsically linked?
Acknowledging the limited value of introspection, I can nevertheless remember times when I was close to experiencing "pure consciousness", with no conscious problem-solving activity at all. Perhaps I was entranced by a beautiful sunset, or a haunting musical performance. My whole being seemed to be pure experience, pure consciousness, with no particular need for intelligence, Bayesian optimization, goal satisfaction, or any of the other paraphernalia we associate with intelligence. This suggests to me that it is at least plausible that consciousness does not require intelligence.
In the other direction, the idea of an intelligent problem solver devoid of consciousness is an element in many powerful fictional dystopias. Even Eliezer's paperclip maximizer partakes of this trope. It seems that we have little difficulty imagining intelligence without consciousness: without awareness, sentience, qualia, or the ability to suffer.
If we provisionally assume that the two qualities are independent, this raises the question of how we might program consciousness (even if we only want to know how in order to avoid doing it accidentally). Is it possible that even relatively simple programs may be conscious, may be capable of feeling real pain and suffering, as well as pleasure and joy? Is there any kind of research program that could shed light on these questions?
comment by Tom_Talbot · 2009-08-03T22:13:23.194Z · LW(p) · GW(p)
serasvictoriawouldyoumarryme?
Replies from: SilasBarta
↑ comment by SilasBarta · 2009-08-03T22:44:12.691Z · LW(p) · GW(p)
Yeah, would an editor please delete that OT tag? She's probably turned him down by now anyway. (Or her, I don't know Tiiba's gender, perhaps a Tiiba is something really feminine.)
Replies from: Tiiba
comment by Psychohistorian · 2009-08-03T21:48:29.274Z · LW(p) · GW(p)
The capacity to abide by morality carves out the right cluster in thingspace for me, though I'd hesitate to call it the determining factor. If a thing has this capacity, we care about its preferences proportionately.
People, save probably infants, are fully capable, in theory, of understanding and abiding by morality. Most animals are not. Those more capable of doing so, domestic pets, beasts of burden for example, receive some protection. Those who do not have this capacity are generally less protected and that which is done to them less morally relevant.
I don't fully endorse this view, but it feels like it explains a lot.
Replies from: pengvado
↑ comment by pengvado · 2009-08-04T20:58:37.837Z · LW(p) · GW(p)
Which cluster is that: Agents currently acknowledging a morality similar to yours (with "capacity" referring to their choice of whether or not to act according to those nominal beliefs at any given time)? Agents who would be moved by your moral arguments (even if those arguments haven't yet been presented to them)? Anything Turing-complete (even if not currently running an algorithm that has anything to do with morality)?
Replies from: Psychohistorian
↑ comment by Psychohistorian · 2009-08-05T00:23:32.788Z · LW(p) · GW(p)
"Agents capable of being moral" corresponds very closely with my intuitive set "agents whose desires we should have some degree of respect for." Thus, it captures my personal sense of what morality is quite well, though it doesn't really capture why that's my sense of it.
comment by dclayh · 2009-08-04T01:36:03.151Z · LW(p) · GW(p)
The following was originally going to be a top-level post, but I never posted it because I couldn't complete the proof of my assertion.
In his recent book I Am a Strange Loop, Douglas Hofstadter writes:
A spectacular evolutionary gulf opened up at some point as human beings were gradually separating from other primates: their category systems became arbitrarily extensible. Into our mental lives there entered a dramatic quality of open-endedness, an essentially unlimited extensibility, as compared with a very palpable limitedness in other species. Concepts in the brains of humans acquired the property that they could get rolled together with other concepts into larger packets, and any such larger packet could then become a new concept in its own right. In other words, concepts could nest inside each other hierarchically, and such nesting could go on to arbitrary degrees. This reminds me—and I do not think it is pure coincidence—of the huge difference, in video feedback, between an infinite corridor and a truncated one.
In other words, Hofstadter sees a phase transition, a discontinuity, a binary division between the mental processes of humans and other species. Yet curiously, when he discusses the moral consideration we ought to give to various species, he advocates a continuum approach based on something like "capacity for friendship", thereby privileging species with K-strategies and/or pack-hunting tendencies for no very good reason that I can see.
To me, the implication of Hofstadter's phase transition is obvious: beings with arbitrary category systems get moral consideration; those with bounded systems do not. By "moral consideration", incidentally, I don't mean some sort of Kantian treating-as-ends-not-means (oh wait, Kant is irrelevant); rather I mean that when you're making some nice utilitarian calculation, you must consider the feelings/opinions of all humans involved, but should not factor in the preferences of (say) a dog.
This is not to say that animal cruelty for the hell of it is a good idea (though I think it should be legal). Many of us anthropomorphize animals, especially pets, to a huge extent, and doing "evil" to animals could easily lead to actual evil. On the other hand, if you're deciding between torturing a human or a googol kittens, go for the kittens.
Replies from: PhilGoetz, CronoDAS
comment by cousin_it · 2009-08-03T19:24:03.910Z · LW(p) · GW(p)
I will help a suffering thing if it benefits me to help it, or if the social contract requires me to. Otherwise I will walk away.
I adopted this cruel position after going through one long relationship where I constantly demanded emotional "help" from the girl, then another relationship soon afterwards where the girl constantly demanded similar "help" from me. Both those situations felt so sick that I finally understood: participating in any guilt-trip scenario makes you a worse person, no matter whether you're tripping or being tripped. And it also makes the world worse off: being openly vulnerable to guilt-tripping encourages more guilt-tripping all around.
So relax and follow your own utility - this will incentivize others to incentivize you to help them, so everyone will treat you well, and you'll treat them well in advance for the same reason.
Replies from: contravariant, Vladimir_Nesov, Tem42, CronoDAS, Emily
↑ comment by contravariant · 2015-08-08T04:21:55.996Z · LW(p) · GW(p)
People who require help can be divided into those who are capable of helping themselves and those who are not. A position like yours expresses the value preference that sacrificing the good of the latter group is better than letting the first group get unpaid rewards, in all cases. For me it's not that simple: the choice depends on the proportions of the groups, the cost to me and society, and just how much good is being sacrificed. To take an extreme example, I would save someone's life even if this encourages other people to be less careful protecting theirs.
↑ comment by Vladimir_Nesov · 2009-08-04T12:28:23.891Z · LW(p) · GW(p)
I think it helps to distinguish moral injunctions from statements about human preference. The former are heuristics, while the latter are statements of truth. A "position" is a heuristic, but it isn't necessarily the right thing to do in some of the cases where it applies. Generalization from personal experience may be useful on average, but doesn't give certain knowledge about preference. When you "follow your own utility", you are merely following your imperfect understanding of your own utility, and there is always potential for bringing the map closer to reality.
Replies from: cousin_it
↑ comment by cousin_it · 2009-08-04T13:15:42.924Z · LW(p) · GW(p)
You're talking about preferences over outcomes and you're right that they don't change much. I interpreted Tiiba as asking about preferences over actions ("whose goals you would want to further"), those depend on heuristics.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2009-08-04T16:05:07.813Z · LW(p) · GW(p)
I don't understand what you're saying here...
↑ comment by Tem42 · 2015-08-15T02:49:16.161Z · LW(p) · GW(p)
This differs from what I had hypothesized was the standard model. I think I like my hypothesis of the standard model better than my understanding of your model, so I'll mention it here, on the off-chance that you might also like it.
I think that most people make (or intuit) the calculation "If it's not too much trouble, I should help this person one time. If they are appropriately thankful, and if they do not inconvenience me too much, I will consider helping them again; if they reciprocate appropriately, I will probably be friends with them, and engage in a long-term reciprocal relationship."
In this calculation, 'appropriately thankful', 'inconvenience me too much', and 'reciprocate appropriately' are highly subjective, but this model appears to account for most stable relationships. It also accounts for guilt-trip based relationships being unstable. The "I should help one time" clause may make the world a better place in general, although it is unclear if that's why most people hold it.
It is possible that when you say "social contract" and I say "reciprocal relationship", we mean exactly the same thing.
↑ comment by Emily · 2009-08-03T19:38:51.540Z · LW(p) · GW(p)
Are you saying that "help the helpless" is a bad idea?
Replies from: cousin_it
↑ comment by cousin_it · 2009-08-03T19:54:12.893Z · LW(p) · GW(p)
If you discovered their helplessness yourself, most likely a good idea; if it was advertised to you, almost certainly a bad idea.
Replies from: Emily
↑ comment by Emily · 2009-08-03T20:31:21.099Z · LW(p) · GW(p)
That seems like it could make sense. If you discover their helplessness, does that come under "it benefits me" or "the social contract requires me" to help them?
What about the helpless who would normally be discovered by no one in a position to help them, and don't have their helplessness advertised? Is it a good idea under this formula to go and actively seek them out, or not?
Replies from: cousin_it
↑ comment by cousin_it · 2009-08-04T09:57:57.699Z · LW(p) · GW(p)
If I discover their helplessness and expect a high enough degree of gratitude, I'll help for selfish reasons, otherwise move on. For example, I love helping old women on the metro with their heavy bags because they're always so surprised that someone decided to help them (Moscow's not a polite city), but I never give money to beggars. For an even more clear-cut example, I will yield my seat to an elderly person unless specifically demanded to.
Actively seeking out people to help might be warranted if the resulting warm fuzzies are high enough.
Replies from: Emily
↑ comment by Emily · 2009-08-04T14:10:20.635Z · LW(p) · GW(p)
This kinda bothers me, and I don't know whether it's just an emotional, illogical reaction or whether there are some good reasons to be bothered by it. In practice, I would imagine it's not a bad description of how most people behave most of the time. But if everyone used these criteria all the time, something is telling me the world would not be a better place. I could well be wrong.
ps. I assumed that was supposed to read "I will not yield my seat...", but I guess it's possible that it wasn't supposed to. ?
Replies from: cousin_it
↑ comment by cousin_it · 2009-08-04T14:51:48.580Z · LW(p) · GW(p)
Nah, it was supposed to read "I will". Someone who demands that I yield my seat isn't likely to show gratitude when I comply.
Can't speak about the whole world, but anyone who's very prone to manipulating and being manipulated (like I was before) will benefit from adopting this strategy, and everyone around them will benefit too.
Replies from: Emily
comment by djcb · 2009-08-03T17:55:58.423Z · LW(p) · GW(p)
Interesting question....
Could there be suffering in anything not considered an MSA? While I can imagine a hypothetical MSA that could not suffer, it's hard to think of a being that suffers yet could not be considered an MSA.
But do we have a good operational definition of 'suffering'? The study with the fish is a start, but is planning really a good criterion?
The discussion reminds me of that essay "On being a bat" (iirc) in Hofstadter and Dennett's highly recommended The Mind's I, on the impossibility of understanding at all what it is like to be something so different from us.
Replies from: Daniel_Lewis
↑ comment by Daniel_Lewis · 2009-08-03T20:41:17.057Z · LW(p) · GW(p)
The discussion reminds me of that essay "On being a bat" (iirc) in Hofstadter and Dennett's highly recommended The Mind's I, on the impossibility of understanding at all what it is like to be something so different from us.
Thomas Nagel's "What is it like to be a bat?" [PDF], indeed included in The Mind's I.
comment by prase · 2009-08-03T17:16:29.136Z · LW(p) · GW(p)
Most people do this intuitively, and most people then rationalise their intuitive judgements or construct neat logical moral theories in order to support them (and these theories usually fail to describe what they are intended to describe, because of their simplicity relative to the complexity of an average man's value system).
That said, for me an agent is more morally significant the more similar it is to a human, and I determine suffering by comparison with my own experiences and some necessary extrapolation. Not a very useful answer, perhaps, but I don't know of any better.
Replies from: Alicorn, Vladimir_Nesov, Dagon
↑ comment by Alicorn · 2009-08-03T17:27:07.909Z · LW(p) · GW(p)
for me an agent is more morally significant the more similar it is to a human
Similar to a human in what way? We're more closely related to the aforementioned cartilaginous fish than to any given sapient alien. We probably have psychology more similar to that of a border collie than that of at least some possible types of sapient alien.
Replies from: prase
↑ comment by prase · 2009-08-03T17:41:21.845Z · LW(p) · GW(p)
Similar in an intuitive way.
As for the fish, I don't know, it depends how the aliens are thinking and communicating. In this respect, I don't feel much similarity with fish anyway.
As for the collie, very probably we are more similar. And I would probably care more about border collies than about crystalline baby-eating aliens. If you have a dog, you can probably imagine that the relation between a man and a collie can be pretty strong.
↑ comment by Vladimir_Nesov · 2009-08-04T12:31:38.230Z · LW(p) · GW(p)
And you are hiding the complexity of "moral significance" in "similarity". Is a statue of a human more similar to a human than a horse? Is a human corpse? What if you take out the brain and replace it with a life-support system that keeps the rest of the body alive?
Replies from: prase
↑ comment by prase · 2009-08-04T17:06:53.636Z · LW(p) · GW(p)
Similarity of thinking, communication, and behaviour makes up a very important part. So statues and corpses don't rank high on my value list.
You may have a point, but similarity sounds a bit less vague to me than moral significance. At least it imposes some restrictions: if objects A and B differ only in one quality, and A is human-like in this quality while B is not, then A is clearly more similar to humans. If A is more human-like in certain respects while B is in others, a more precise description is needed, but I can't describe my preferences and their formation more precisely at the moment.
↑ comment by Dagon · 2009-08-03T19:53:08.633Z · LW(p) · GW(p)
more morally significant the more similar it is to a human
I'd expand this to "the more I empathize with it". Often, I feel more strongly about the suffering of some felines than some humans.
Of course, that's just a description, not a recommendation. The question of "what entities should one empathize with" remains difficult. Most answers which are self-consistent and match observed behaviors are pretty divergent from the signaling (including self-signaling) that you'd like to give out.
Replies from: prase
↑ comment by prase · 2009-08-03T22:59:45.877Z · LW(p) · GW(p)
Of course it's a description. I understood the original post as asking for description as much as recommendation.
The question "what entities should one empathize with" is as difficult as many similar questions about morality, since it's not absolutely clear what "should" means here. If your values form a system which can derive the answer, do it; but one can hardly expect wide consensus. My recommendation is: you don't need the answer, instead use your own intuition. I think the chances that our intuitions overlap significantly are higher than chances of discovering an answer satisfactory for all.
comment by teageegeepea · 2009-08-03T17:45:06.306Z · LW(p) · GW(p)
BTW, I think the most important defining feature of an MSA is ability to kick people's asses. Very humanizing.
I don't know if you meant that as a joke, but that's pretty much my take from a contractarian perspective (though I wouldn't use the phrase "morally significant agent"). Fish can't do much about us cooking and eating them, so they are not a party to any social contract. That's also the logic behind my tolerance of infanticide.
Replies from: Yvain, wedrifid, PhilGoetz
↑ comment by Scott Alexander (Yvain) · 2009-08-03T18:02:57.657Z · LW(p) · GW(p)
Was it okay to kill the Indians back in the 1700s, before they got guns? What were they going to do? Throw rocks at us?
Replies from: Baughn, teageegeepea, Eliezer_Yudkowsky, None
↑ comment by Baughn · 2010-08-05T13:39:18.777Z · LW(p) · GW(p)
I realize you're joking, but it bears mentioning in a general-knowledge kind of way:
Bows and arrows were, at the time, as dangerous if not more so than guns. The reason guns were superior back then was solely due to a lack of required training. (Bows take decades of practice, and it's been joked that you should start with the grandfather - but in practice, starting with the father is a good idea.)
Replies from: byrnema
↑ comment by byrnema · 2010-08-05T13:58:34.832Z · LW(p) · GW(p)
I guess it was meant that you start with the grandfather because he would be the most skilled... Has this been described in certain kinds of books? (Diaries, etc.)
Replies from: Baughn
↑ comment by Baughn · 2010-08-05T19:26:35.868Z · LW(p) · GW(p)
No, the grandfather would be the least skilled of the three.
The basic idea is that to make a good archer, you need to start when he's (women need not apply) practically a baby. In order to teach well, you must be an archer yourself; thus, the father should be an archer.
Adding in the grandfather was probably a case of exaggeration for effect, but - no, I haven't read any diaries about it, so I could be wrong. You'd probably get some benefit from it... I have no idea how much.
↑ comment by teageegeepea · 2009-08-03T22:43:19.568Z · LW(p) · GW(p)
I am an emotivist and do not believe anything is good or bad in an objective sense. I think some Indians may have had guns by the 1700s, but their bows and arrows weren't terribly outclassed by many of the old muskets back then either (I'm actually discussing that at my blog right now). The biggest advantage of the colonists was their ever-increasing numbers (while disease steadily drained those of the natives). The Indians frequently did respond in kind to killings, and the extent to which they could do so strikes me as the most significant factor to take into consideration when it comes to the decision to kill them.
There is also the factor of trade relations that could be disrupted, but most people engaged in prolonged voluntary trade are going to have significant ass-kicking ability or otherwise they would have been conquered and their goods seized by force already. I understand Peter Leeson has a paper "Trading with bandits" disputing that point, but the frequency with which dominance based resource extraction occurs makes me think the phenomena he discusses only occur under very limited conditions.
Replies from: Yvain, None, Aurini
↑ comment by Scott Alexander (Yvain) · 2009-08-03T23:29:10.367Z · LW(p) · GW(p)
Well, I can't accuse you of having any unwillingness to bite bullets. Nor of having any unwillingness to do lots of other questionable things with bullets besides.
Still, Less Wrong has got to be the only place where I can ask if it's okay to massacre Indians, and get one person who says it depends what the people living back then thought, and another who says it depends on the sophistication of musketry technology. I don't know if that's a good thing or a bad thing about this site.
Replies from: teageegeepea, RobinZ
↑ comment by teageegeepea · 2009-08-04T02:37:34.743Z · LW(p) · GW(p)
I suspect there are a higher-than-average number of bullet-biters here, and I number myself among them. I don't grant the intuitions which lead people to dodge bullets much credence.
Although I am a gun-owner, I don't think I am substantially more likely to shoot anyone (delicious animals are another story) than the others here. Though you may think my above-mentioned criteria (including the government as a source of ass-kicking and taking into account risk aversion) don't count, I'd say that constitutes a substantial unwillingness. Also, while this is pedantic, I'd like to again emphasize the importance of disease over guns. Note that North America and Australia have had nearly complete population replacement by Europeans, while Africa has been decolonized. The reason for that is not technology, but relative vulnerability to disease.
If it makes you feel any better about the inhabitants of Less Wrong, note that your reaction was voted up while my response (which was relevant and informative with links to more information, if I may judge my own case for a moment) was voted down. I do not say this to object to anyone's actions (I don't bother voting myself and have no plans to make a front-page post) but to indicate that this is evidence of what the community approves.
Although, as mentioned, I don't believe in objective normative truth, we can pretend for a little while in response to joeteicher. We believe we have a better understanding of many things than 1700s colonists did. If we could bring them in a time-machine to the present we could presumably convince them of many of those things. Do you think we could convince them of our moral superiority? From a Bayesian perspective (I think this is Aumann's specialty) do they have any less justification for dismissing our time period's (or country's) morality as being obviously wrong? Or would they be horrified and make a note to lock up anyone who promotes such crazy ideas in their own day?
G. K. Chesterton once said tradition is a democracy in which the dead get to vote (perhaps he didn't know much about Chicago), which would certainly not be a suitable mechanism for electing representatives but gets at an interesting point in majoritarian epistemology. There are simply huge numbers of people who lived in the past and held such beliefs. How much evidence does that lend ancient morality?
Replies from: Yvain
↑ comment by Scott Alexander (Yvain) · 2009-08-04T03:22:21.593Z · LW(p) · GW(p)
I don't doubt you're a nonviolent and non-aggressive guy in every day life, nor that in its proper historical context the history of colonists and Indians in the New World was really complicated. I wasn't asking you the question because of an interest in 18th century history, I was asking it as a simplified way to see how far you were taking this "Anyone who can't kick ass isn't a morally significant agent" thing.
Your willingness to take it as far as you do is...well, I'll be honest. To me it's weird, especially since you describe yourself as an emotivist and therefore willing to link morality to feeling. I can think of two interpretations. One, you literally wouldn't feel bad about killing people, as long as they're defenseless. This would make you a psychopath by the technical definition, the one where you simply lack the moral feelings the rest of us take for granted. Two, you have the same tendency to feel bad about actually killing an Indian or any other defenseless person as the rest of us, but you want to uncouple your feelings from "rationality" and make a theory of morality that ignores them (but then how are you an emotivist?!). I know you read all of the morality stuff on Overcoming Bias and that that stuff gave what I thought was a pretty good argument for not doing that. Do you have a counterargument?
(Or I could be completely misunderstanding what you're saying and taking your statement much further than you meant for it to go.)
By the way, I didn't downvote your response; you deserve points for consistency.
Replies from: teageegeepea
↑ comment by teageegeepea · 2009-08-04T06:20:01.761Z · LW(p) · GW(p)
Do I deserve points for consistency? I personally tend to respect bullet-biters more, but I am one. I'm not sure I have a very good reason for that. When I say that I think bullet-dodgers tend to be less sensible I could just be affirming something about myself. I don't know your (or other non-biters) reasons for giving points, other than over-confidence being more respected than hedged mealy-mouth wishy-washing. One might say that by following ideas to their logical conclusion we are more likely to determine which ideas are false (perhaps making bullet-biters epistemological kamikazes), but by accepting repugnant/absurd reductios we may just all end up very wrong rather than some incoherent mix of somewhat wrong. In the case of morality though I can't say what it really means to be wrong or what motivation there is to be right. Like fiction, I choose to evict these deadbeat mind-haunting geists and forestall risks to my epistemic hygiene.
I did read the morality stuff at Overcoming Bias (other than the stuff in Eliezer's sci-fi series, I don't read fiction), didn't find it convincing and made similar arguments there.
You're right that emotivism doesn't imply indifference to the suffering of others, it's really a meta-ethical theory which says that moral talk is "yay", "boo", "I approve and want you to as well" and so on rather than having objective (as in one-parameter-function) content. Going below the meta-level to my actual morals (if they can be called such), I am a Stirnerite egoist. I believe Vichy is as well, but I don't know if other LWers are. Even that doesn't preclude sympathy for others (though its hard to say what it does preclude). I think it meshes well with emotivism, and Bryan Caplan seems to as well since he deems Stirner an "emotivist anarchist". Let's ignore for now that he never called himself an anarchist and Sidney Parker said the egoist must be an archist!
At any rate, with no moral truth or God to punish me I have no reason to subject myself to any moral standard. To look out for oneself is what comes easiest and determines most of our behavior. That comes into tension with other impulses, but I am liberated from the tribal constraints which would force me to affirm the communal faith. I probably would not do that if I felt the conflicting emotions that others do (low in Agreeableness, presumably like most atheists but even more so). To the extent that I can determine how I feel, I choose to do so in a way that serves my purposes. Being an adaptation-executer, my purposes are linked to how I feel and so I'm quite open to Nozick's experience machine (in some sense there's a good chance I'm already in one) or wireheading. Hopefully Anonymous is also an egoist, but would seek perpetual subjective existence even if it means an eternity of torture.
One proto-emotivist book, though it doesn't embrace egoism, is The Theory of Moral Sentiments. I haven't actually read it in the original, but there's a passage about a disaster in China compared to the loss of your own finger. I think it aptly describes how most of us would react. The occurrence in China is distant, like something in a work of fiction. If the universe is infinite there may well be an infinite number of Chinas, or Earths, disappearing right now. And the past, with its Native Americans and colonists or peasants and proletariat that died for modernity, is similar. If we thought utilitarianism was true, it would be sheerest nonsense to care more about your own finger than all the Chinese, or even insects. But I do care more about my finger and am completely comfortable with that reflexive priority. If I was going to be in charge of making deals then the massive subjective harm from the perspective of the Chinese would be something to consider, and that leads us back to the ability to take part in a contract.
Aschwin de Wolf's Against Politics site used to have a lot more material on contractarianism and minimal ethics, but the re-launched version has less and I was asked to take down my mirror site. There is still some there to check out, and cryonics enthusiasts may be interested in the related Depressed Metabolism site.
↑ comment by RobinZ · 2009-08-04T02:20:21.620Z · LW(p) · GW(p)
Still, Less Wrong has got to be the only place where I can ask if it's okay to massacre Indians, and get one person who says it depends what the people living back then thought, and another who says it depends on the sophistication of musketry technology. I don't know if that's a good thing or a bad thing about this site.
It's not that unusual in my experience, to be perfectly frank. Once you get out of the YouTube-comment swamps to less-mainstream, more geeky sites, the GIFT-ratio starts to drop enough to allow intelligent provocative conversation. I could easily imagine this comment thread on a Making Light post, for example.
↑ comment by [deleted] · 2009-08-13T23:38:40.994Z · LW(p) · GW(p)
As an emotivist, you might be interested in reading After Virtue, particularly the first three or four chapters. MacIntyre presents a rather compelling argument against emotivism, and if you want to maintain your emotivism you probably ought to find some rationalization defending yourself from his argument.
Replies from: Psychohistorian↑ comment by Psychohistorian · 2009-08-14T00:13:01.490Z · LW(p) · GW(p)
One should generally seek reasons as a defense from argument, not rationalization.
{Edit: My mistake, he really did mean emotivism and this paragraph kind of misses the point. Not going to delete, as it may confuse later comments.} More to the point, though, a refutation of emotivism is not a refutation of moral relativism, and, based on the little bit I could get off Amazon previews, relativism seems to be his problem, even if he wants to straw-man it as emotivism. Similarly, TGGP (given that he redundantly conjoins "I do not believe anything is good or bad in an objective sense" with "emotivism") seems to be more about the relativism than the emotivism specifically.
If that author actually manages to put a decent dent in moral relativism, please explain so I can go buy this book immediately, because I would be literally stunned to see such an argument.
Replies from: None↑ comment by [deleted] · 2009-08-14T02:36:34.586Z · LW(p) · GW(p)
Actually, based on this comment, TGGP believes in emotivism as such.
He isolates three reasons in the second chapter:
- Moral approval is a magical category that hides what is meant by "moral."
"'Moral judgments express feelings or attitudes,' it is said. 'What kind of feelings or attitudes?' we ask. 'Feelings or attitudes of approval,' is the reply. 'What kind of approval?' we ask, perhaps remarking that approval is of many kinds. It is in answer to this question that every version of emotivism either remains silent or... becomes vacuously circular [by identifying the approval as moral approval]" (12, 13).
- Emotivism conflates 'expressions of personal preference' ("I like this!") with 'evaluative expressions' ("This is good!"), despite the fact that the first gets part of its meaning from the person saying it and the second doesn't.
- Emotivism attempts to assign meaning to the sentence, when the sentence itself might express different feelings or attitudes in different uses. (See Gandalf's take on "Good morning!" in The Hobbit.) This is probably where emotivism can be rehabilitated, as MacIntyre goes on to say:
"This suggests that we should not simply rely on these objections to reject the emotive theory, but that we should consider whether it ought not to have been proposed as a theory about the use -- understood as purpose or function -- of members of a certain class of expressions rather than about their meaning...." (13).
Note that I'm not defending MacIntyre's position, here; I'm only bringing it up because an emotivist should know what his or her response to it is, because it is a pretty large objection. My experience is that they go into absolute denial upon hearing the second and third objections, and that's just not cool.
Replies from: Douglas_Knight, Psychohistorian↑ comment by Douglas_Knight · 2009-08-14T11:13:24.414Z · LW(p) · GW(p)
What does "pretty large" mean of an objection other than "good"? But you say you're not defending MacIntyre.
I'd just like to know what the position is.
The second bullet point looks like the "point and gape" attack. It simply restates emotivism and replies by declaring the opposite to be fact. The whole point of emotivism is that the "I" is implicit in "this is good," that the syntax is deceptive. The defense seems to be that we should trust syntax.
Is "moral approval" any more magic than "moral"? It seems like a pretty straightforward category: when people express approval using moral language. This fails to predict when people will express moral approval rather than the ordinary type, but that hardly makes it magical.
Is there any moral theory to which the third bullet point does not apply? Surely, every moral theory has opponents who will apply it incorrectly to "good morning." The second bullet point says we should trust syntax, while the third that language is tricky.
The quoted part seems like a good response to virtually all of analytic philosophy; perhaps it can be rehabilitated. But surely emotivism is explicit about promoting performance over meaning? Isn't that the whole point of emotivism as opposed to other forms of moral relativism?
Replies from: None↑ comment by [deleted] · 2009-08-14T13:38:00.384Z · LW(p) · GW(p)
1) "pretty large" tends to mean the same thing as "fundamental", "general", "widely binding" -- at least in my experience. E.g., "Gödel's Theorem was a pretty large rejection of the Russell program."
And no, I'm not defending MacIntyre. All I'm trying to demonstrate is that his arguments against emotivism are worthy enough for emotivists to learn.
2) No. You've never heard someone say, "I may not like it, but it's still good"? For example, there are people who personally dislike gay marriage, but support it anyway because they feel it is good.
3) Defining "moral approval" as "when people express approval using moral language" says nothing about what the term "moral" means, and that's something any ethical system really ought to get to eventually.
4) Yes: deontological systems don't give one whit about the syntax of a statement; if your 'intention' was bad, your speech act was still bad. Utilitarianism is also more concerned with the actual weal or woe caused by a sentence, not its syntactic form.
And I'm done. If you want to learn more about MacIntyre, read the damn book. I'm a mathematician, not a philosopher.
Replies from: Douglas_Knight, thomblake↑ comment by Douglas_Knight · 2009-08-14T19:52:44.187Z · LW(p) · GW(p)
"I may not like it, but it's still good"? For example, there are people who personally dislike gay marriage, but support it anyway because they feel it is good.
You said that emotivists you know go into "absolute denial" at point 2; how do they react to an example like this?
I would expect them to say that the people are lying or feel constrained by social conventions. In Haidt terms, they feel both fairness and disgust or violation of tradition and feel that fairness trumps tradition/purity in this instance. Or they live in a liberal milieu where they're not allowed to treat tradition or purity morally. (I should give a lying example, but I'm not sure what I meant.)
ETA: if MacIntyre treated deontology the way he treats emotivism, he'd say that the morning is not an actor, therefore it cannot be "good" so "good morning" is incoherent. But I guess deontology is not a theory of language, so it's OK to just say that people are wrong.
↑ comment by thomblake · 2009-08-14T14:33:02.093Z · LW(p) · GW(p)
For reference, I think you've done MacIntyre sufficient justice here.
says nothing about what the term "moral" means, and that's something any ethical system really ought to get to eventually.
I think that's putting the cart before the horse. Figuring out what 'moral' means should be something you do before even starting to try to study morality.
↑ comment by Psychohistorian · 2009-08-14T06:03:46.393Z · LW(p) · GW(p)
Ah, I stand corrected; I got the impression from the intro of the book that the author was trying to slay relativism by slaying emotivism, which really doesn't work. I basically agree with the point against emotivism; it does not capture meaning well. I subscribe to projectivism myself, but it looks like I've mistaken where the original argument was going, so sorry for adding confusion.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2009-08-14T11:18:18.027Z · LW(p) · GW(p)
If you understand the point, could you spell it out? Weren't there supposed to be three points? I don't see anything in the above to distinguish emotivism from projectivism. I suspect that you just assumed an argument against something you rejected was the argument you use.
Replies from: None↑ comment by [deleted] · 2009-08-14T13:43:57.292Z · LW(p) · GW(p)
There are three points, marked with bullet points.
1) "Moral approval" is magical. 2) Reducing "This is good" to "I like this" misrepresents the way people actually speak. 3) Emotivism doesn't account for the use of sentences in a context -- which is the whole of actual ethical speech.
Emotivism is very different from projectivism. One is a theory of ethical language, and one is a theory of mind.
EDIT: Perhaps this wasn't so clear -- one consequence of projectivism is a theory of ethical language as well; see Psychohistorian below. My point was that it's a category error to consider them as indistinguishable, because projectivism proper has consequences in several other fields of philosophy, whereas emotivism proper is mostly about ethical language and doesn't say anything wrt how we think about things other than moral approval.
Replies from: Psychohistorian, Douglas_Knight↑ comment by Psychohistorian · 2009-08-14T17:29:05.450Z · LW(p) · GW(p)
Ethical projectivism isn't quite so much a theory of mind as it is a theory of ethical language. It's clear that in most cases where people say, "X is wrong" they ascribe the objective quality "wrongness" to X. Projectivism holds that there is no such objective quality, thus, the property of wrongness is in the mind, but it doesn't feel like it, much like how the concepts of beauty and disgust are in the mind, but don't feel like it. You can't smell something disgusting and say, "Well, that's just my opinion, that's not really a property of the smell;" it still smells disgusting. Thus, projectivism has the same rejection of objective morality as emotivism does, but it describes how we actually think and speak much better than emotivism does.
The attack on emotivism as not accurately expressing what we mean is largely orthogonal to realism vs. subjectivism. Just because we speak about objective moral principles as if they exist does not mean they actually exist, any more than speaking about the Flying Spaghetti Monster as if it existed conjures it into existence. But the view that moral statements actually express mere approval or disapproval seems clearly wrong; that's just not what people mean when they talk about morality.
Replies from: Douglas_Knight, thomblake↑ comment by Douglas_Knight · 2009-08-14T19:38:46.259Z · LW(p) · GW(p)
As I see it, you ignore the first and third bullet points and take the second bullet point to promote projectivism over emotivism. It's certainly true that projectivism takes speech more at face value than emotivism. But since emotivism is up-front about this, this is a pretty weak complaint. Maybe it means that emotivism has to do more work to fill in a psychological theory of morality, but producing a psychological theory of morality seems big enough that it's not obvious whether it makes it harder or easier.
What if I posited a part of the mind that tried to figure out what moral claims it could (socially) get away with making and chose the one it felt was most advantageous to impose on the conscious mind as a moral imperative? Would you call that emotivism or projectivism?
Replies from: Psychohistorian↑ comment by Psychohistorian · 2009-08-15T20:21:13.198Z · LW(p) · GW(p)
What if I posited a part of the mind that tried to figure out what moral claims it could (socially) get away with making and chose the one it felt was most advantageous to impose on the conscious mind as a moral imperative? Would you call that emotivism or projectivism?
I have trouble understanding this; mostly, I don't get if you think it exists or if you just want me to pretend it does. But, if I do understand the concept correctly, if something is being imposed on the conscious mind as a moral imperative, that would be projectivist, as it would feel real. If you had part of the unconscious mind that imposed the most socially acceptable, expedient concept of "disgust" on the rest of the mind, one would still feel genuinely disgusted by whatever it "thought" you should be disgusted by. The problem with emotivism is that most people who make moral statements genuinely believe them to be objective, so rendering them into emotive statements loses meaning. Projectivism retains this meaning without accepting the completely unsupported (and I believe unsupportable) claim that objective morals exist.
The magical category objection doesn't really make sense, even for emotivism. If "Murder is bad" means, "Boo murder!" no category is evoked and none need be. Furthermore, from any anti-realist perspective, any thing or act could potentially be viewed as immoral, so trying to describe a set of things or acts that count as valid subjects of "moral approval" makes no sense. "Perhaps remarking that approval is of many kinds" makes no (or at best, very ill-defined) sense. The author doesn't mention a single kind, and it is unclear what would distinguish kinds in a way that meets his own standards. Forcing the other side to navigate an ill-defined, context-free classification system and claiming their definition is defective when they fail to do so proves nothing.
As for the third point, it's a straw man. Claiming that emotivism must act as a mapping function such that any sentence XYZ -> a new sentence ABC irrespective of context is a caricature; English doesn't work like this, and no self-respecting theory of language would pretend it does. Unless emotivists consistently claim that context is irrelevant and can be ignored, this point shouldn't even be made. I could write a paper about how "Murder is wrong" can be replaced with, "Boo murder!" You can't then use '"Murder is wrong" contains the word "is"' as a legitimate counterexample, because it is quite obviously a different context.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2009-08-16T01:18:42.635Z · LW(p) · GW(p)
I don't remember why I asked that question. It sure reads as a trick question. It's certainly reasonable to treat things as a dichotomy if the overlap is not likely, but I think that's wrong here. I endorse this very broad projectivist view that includes this example, and I imagine most emotivists agree; I doubt that most emotivists are sociopaths projecting their abnormality onto the general population. But I also think emotivism is possible, such as along the lines of this example, or more broadly.
I do think you're treating projectivism as broad, and thus likely, and emotivism as narrow, and thus unlikely. In theory, that's fine, except for miscommunication, but in practice it's terrible. Either you give emotivism's neighbors names, greatly raising their salience, or you don't, greatly lowering their salience.
(Contrast this to the first bullet point, which seems to reject emotivism on the ground that it's broad. That's silly.)
Since projectivism is a theory of mind and emotivism a theory of language or social interaction, they are potentially compatible, though it seems tricky to merge their simple interpretations. But neither minds nor meaning are unitary. If projectivism says that there's a part of the mind that does something, that's broad theory, thus likely to be true, but it also doesn't seem to predict much. Emotivism is a claim about the overall meaning. That's narrower than a claim that there exists a part of the mind that takes a particular meaning and broader than the claim that the mind is unitary and takes a particular meaning. But the overall meaning is the most important.
↑ comment by thomblake · 2009-08-14T17:42:39.253Z · LW(p) · GW(p)
I think some confusion here might arise from missing the distinction between "projectivism" and "ethical projectivism". Projectivism is a family of theories in philosophy, one of which applies to ethics.
You might be talking past each other.
Replies from: None↑ comment by Douglas_Knight · 2009-08-14T18:48:57.294Z · LW(p) · GW(p)
My point was that it's a category error to consider them as indistinguishable,
I didn't say I can't distinguish them; I said the particular attack on emotivism applies just as well to projectivism.
Replies from: None↑ comment by [deleted] · 2009-08-14T20:18:37.510Z · LW(p) · GW(p)
My bad; I misread you.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2009-08-14T23:32:01.952Z · LW(p) · GW(p)
My bad
As much as I'd like to think so, I'll try to learn a lesson about pronouns and antecedents in high latency communications.
↑ comment by Aurini · 2009-08-13T22:06:06.875Z · LW(p) · GW(p)
Just an aside, you should look up some of the writings by my old (and favourite) Professor Dr. (James?) Weaver of McMaster University. He argues that it was the social technology of institutions, banking, land speculation, and established commerce that allowed whites to take over North America, not individual hostility. The key players he notes are the empire (who vacillated between expansionist and not-expansionist), the homesteaders, and the land speculators.
The Indians were harsh and intelligent bargainers, but they were playing by the rules of a game that white people wrote and created - the house always wins.
Fact: All Historians approach historical documents with their own set of contexts and biases - all Historians except Dr. Weaver, that is. Fact: Most Historians have to cite sources - Dr. Weaver is able to go back in time and create them.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-04T02:56:46.628Z · LW(p) · GW(p)
Was it okay to kill the Indians back in the 1700s, before they got guns?
No, cryonic suspension wasn't available back then, and a headshot would have prevented it in any case. In general, murder strikes me as a very dangerous activity - I can see why it's outlawed.
Replies from: bentarm↑ comment by bentarm · 2010-11-07T22:34:19.150Z · LW(p) · GW(p)
Was it okay to kill the Indians back in the 1700s, before they got guns?
No, cryonic suspension wasn't available back then
Erm... so it's ok to kill people as long as you cryonically suspend them afterwards? I've no idea if you actually believe this (I assume not, or you would probably have committed suicide), but even joking about it seems to be very bad politics if you're a cryonics advocate.
↑ comment by [deleted] · 2009-08-03T21:09:19.152Z · LW(p) · GW(p)
Did the majority of people living at the time feel like it was okay? Is it okay for you to second guess the judgement of thoughtful people who understood the context way better than anyone does now?
If at some point most people believe that killing mammals for food is monstrous, and it is banned, and children learn with horror about 21st century practices of murdering and devouring millions of cows and pigs each year, will that make it wrong to eat a hamburger now? Will eating a hamburger now be okay if that never happens? I certainly don't feel that the moral value of my actions should depend on the beliefs of people living hundreds of years in the future.
Replies from: cousin_it, Alicorn, thomblake↑ comment by cousin_it · 2009-08-04T16:39:39.851Z · LW(p) · GW(p)
Moral realism fallacy alert?
If at some point most people believe that killing mammals for food is monstrous, and it is banned, and children learn with horror about 21st century practices of murdering and devouring millions of cows and pigs each year, will that make it wrong to eat a hamburger now?
Yes, that will make it wrong in their view. There's no law of nature that says different people from different times should have identical moral judgements.
I certainly don't feel that the moral value of my actions should depend on the beliefs of people living hundreds of years in the future.
No, the moral value of your actions in your view doesn't have to depend on their beliefs. There's no law of nature that says different people from different times should have identical moral judgements.
Replies from: thomblake↑ comment by thomblake · 2009-08-04T16:23:07.062Z · LW(p) · GW(p)
I certainly don't feel that the moral value of my actions should depend on the beliefs of people living hundreds of years in the future.
Don't worry, you have it backwards. The moral value of your actions is not determined by the beliefs of any people, but rather the people's beliefs are an attempt to track the facts about the moral value of your actions (assuming there is such a thing at all).
↑ comment by wedrifid · 2010-08-05T15:11:36.391Z · LW(p) · GW(p)
So once I create a friendly-to-me AI I am the only morally significant agent in existence? I think not.
Relevant moral significance seems to be far more determined by the ability of any agent (not limited to just themselves) to kick ass on their behalf. So infants, fish or cows can have moral significance just because someone says so (and is willing to back that up).
Fortunately for you this means that if I happen to gain overwhelming power you will remain a morally significant agent based purely on my whim.
↑ comment by PhilGoetz · 2010-11-07T20:34:08.541Z · LW(p) · GW(p)
That's using the word "moral" to mean its opposite. Or, it's a claim that "morality" is a nonsensical concept, disguised as an alternate view of morality.
Replies from: Perplexed↑ comment by Perplexed · 2010-11-07T21:54:27.393Z · LW(p) · GW(p)
You need to read something by Gauthier or Binmore. The idea that morality is closely related to rational self-interest is hardly a crackpot idea. There are at least two lines of argument pointing in this direction.
One derives from Hume's point that a system of morality must not only inform us as to what actions are moral, but also show why we should perform only the moral actions.
The other observes that "moral facts" are simply our moral intuitions, and that those have been shaped by evolution into a pretty good caricature of rational self-interest.
A "morality" which takes into account the power of others may be un-Christian, but it is hardly inhuman.
comment by MendelSchmiedekamp · 2009-08-03T16:47:14.035Z · LW(p) · GW(p)
The potential to enhance the information complexity of another agent, where the degree of this potential and the degree of the complexity provided indicate the degree of moral significance.
Which reduces the problem to the somewhat less difficult one of estimating complexity and so estimating potential complexity influences among agents. By this, I mean something more nuanced than algorithmic or Kolmogorov complexity. We need something that takes into account fun theory and how both simple systems and random noise are innately less complex than systems with non-trivial structure and dynamics - or, to put it another way, systems that interest and enrich.
Also note, don't make the error of equating the presence of complexity with the potential to enhance complexity in other agents.
As for suffering, in this context, you can define suffering as the inverse of complexity enhancement, namely the sapping of the innate complexity of the agent.
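(A crude illustration of why plain Kolmogorov complexity won't serve as the measure gestured at above: compression-based estimates, the usual computable proxy, rank pure random noise as the *most* complex string, while the proposal needs noise and trivial repetition both near the bottom. The sketch below uses zlib as a stand-in compressor; the sample strings are illustrative assumptions, not part of the original comment.)

```python
import random
import string
import zlib

def compressed_len(s: str) -> int:
    # Compressed size is a rough, computable upper bound on Kolmogorov complexity.
    return len(zlib.compress(s.encode("utf-8")))

n = 2000
trivial = "a" * n                                                         # simple, repetitive system
noise = "".join(random.choice(string.ascii_lowercase) for _ in range(n))  # random noise
structured = ("the quick brown fox jumps over the lazy dog " * 50)[:n]    # non-trivial structure

for name, s in (("trivial", trivial), ("structured", structured), ("noise", noise)):
    print(name, compressed_len(s))
# The ordering comes out trivial < structured < noise: the compressor
# scores noise highest, the opposite of what the proposed measure wants.
```

So any workable version of the proposal needs a measure that peaks at structured, "interesting" configurations rather than growing monotonically with incompressibility.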
Replies from: jimmy↑ comment by jimmy · 2009-08-03T18:20:40.761Z · LW(p) · GW(p)
Can you explain why "the inverse of complexity enhancement" would be a good definition of "suffering" that would share the other features we mean by the word?
Replies from: MendelSchmiedekamp↑ comment by MendelSchmiedekamp · 2009-08-03T18:29:35.860Z · LW(p) · GW(p)
Possibly, could you list some of the features you had in mind?
Replies from: jimmy↑ comment by jimmy · 2009-08-03T22:20:51.932Z · LW(p) · GW(p)
Well, I just don't see any connection at all, and I assume that has something to do with the -1 karma status of the comment.
People usually use "suffering" to mean something along the lines of "experiencing subjectively unpleasant qualia" and having negative utility associated with it.
Where does complexity come in?
Replies from: MendelSchmiedekamp↑ comment by MendelSchmiedekamp · 2009-08-04T04:28:02.622Z · LW(p) · GW(p)
Building on some of the more non-trivial theories of fun - specifically, cognitive science research focusing on the human response to learning - there is a direct relationship between the human perception of subjectively unpleasant qualia and the complexity impact of those qualia on the human.
Admittedly, extending this concept of suffering beyond humanity is a bit questionable. But it's better than a tautological or innately subjective definition, because with this model it is possible to make estimates and compare them with more intuitive expectations.
One nice effect of having suffering be defined as the sapping of complexity is that it deals with the question of which pain is suffering fairly elegantly - "subjectively" interesting pain is not suffering, but "subjectively" uninteresting pain is suffering.
Of course, that is only a small part of the process of making these distinctions. It's important to estimate both the subject of the qualia, and the structure of the sequence of qualia as it relates to the current state of the entity in question before you can estimate whether the stream of qualia will induce suffering or not.
It is a very powerful approach. But it is by no means simple. So I don't begrudge some karma loss in trying to explain it to folks here. But it's at least some feedback from unclear explanations.
Replies from: jimmy↑ comment by jimmy · 2009-08-04T16:55:54.554Z · LW(p) · GW(p)
I don't mean to suggest that anything that subtracts a karma point isn't worth doing, just that it's evidence that you're not accomplishing what you'd like.
You've made some claims (in other comments too) which would be very interesting if true, but weren't backed up enough for me to make the inferential jump.
I'd like to see a full top level post on this idea, as it seems quite interesting if true, but it also seems to need more space to give the details and full supporting arguments.
Replies from: MendelSchmiedekamp↑ comment by MendelSchmiedekamp · 2009-08-04T22:19:11.647Z · LW(p) · GW(p)
You're right that I owe a top-level post on this, among other topics.
Although one worry I have with trying to lay out inferential steps is that some of these ideas (this one included) seem to encounter a sort of Zeno's paradox for full comprehension. It stops being enough to be willing to take the next step; it becomes necessary to take the inferential limit to get to the other side.
Which means that until I find a way to map people around that phenomenon I'm hesitant to give a large-scale treatment. Just because it was the route I took doesn't mean it's a good way to explain things generally - a la the Typical Mind Fallacy, borne out by evidence.
But in any case I will return to it when I have the time.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2009-08-09T05:54:32.625Z · LW(p) · GW(p)
Laying out the route you took might be a lot easier than looking for another route. Also, the feedback from comments might be a better way to look for another route than modeling other minds on your own.
I suspect that people are voting you down because you sound like you're attempting to show off, rather than attempting to communicate. Several of your posts seem to be simple assertions that you possess knowledge or a theory. I did vote down the comment at the top of this thread, but I don't remember if that's why. I was surprised that I didn't vote down other of your comments where I remember having that reaction, so this theory-from-introspection isn't even a good theory of me. But it might work better for people who vote more. (the simple theory of when I vote you up is 21 May and 6 June, which disturbs me.)