Whither Moral Progress?

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-16T05:04:42.000Z · LW · GW · Legacy · 101 comments

Followup to: Is Morality Preference?

In the dialogue "Is Morality Preference?", Obert argues for the existence of moral progress by pointing to free speech, democracy, mass street protests against wars, the end of slavery... and we could also cite female suffrage, or the fact that burning a cat alive was once a popular entertainment... and many other things that our ancestors believed were right, but which we have come to see as wrong, or vice versa.

But Subhan points out that if your only measure of progress is to take a difference against your current state, then you can follow a random walk, and still see the appearance of inevitable progress.
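(A quick illustration of Subhan's point, not from the original post: the sketch below assumes a one-dimensional random walk as a toy stand-in for "moral state". If "progress" is measured only as closeness to wherever the walk happens to sit now, the average difference from the present shrinks steadily as you approach the present, even though the walk itself is directionless.)

```python
# A minimal sketch of Subhan's argument: a pure random walk, when "progress" is
# measured as closeness to wherever the walk happens to be *now*, looks like a
# steady march toward the present.
import random

def random_walk(steps):
    x, path = 0, [0]
    for _ in range(steps):
        x += random.choice((-1, 1))
        path.append(x)
    return path

def mean_distance_to_present(runs=2000, steps=100):
    # Average |X_t - X_T| over many independent walks, for each t.
    totals = [0.0] * (steps + 1)
    for _ in range(runs):
        path = random_walk(steps)
        present = path[-1]
        for t, x in enumerate(path):
            totals[t] += abs(x - present)
    return [total / runs for total in totals]

if __name__ == "__main__":
    curve = mean_distance_to_present()
    # The average "difference from the present" shrinks as t approaches now,
    # even though each walk is directionless -- the appearance of progress.
    for t in (0, 25, 50, 75, 100):
        print(f"t={t:3d}  mean |X_t - X_now| = {curve[t]:.2f}")
```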

One way of refuting the simplest version of this argument, would be to say that we don't automatically think ourselves the very apex of possible morality; that we can imagine our descendants being more moral than us.

But can you concretely imagine a being morally wiser than yourself—one who knows that some particular thing is wrong, when you believe it to be right?

Certainly:  I am not sure of the moral status of chimpanzees, and hence I find it easy to imagine that a future civilization will label them definitely people, and castigate us for failing to cryopreserve the chimpanzees who died in human custody.

Yet this still doesn't prove the existence of moral progress.  Maybe I am simply mistaken about the nature of changes in morality that have previously occurred—like looking at a time chart of "differences between past and present", noting that the difference has been steadily decreasing, and saying, without being able to visualize it, "Extrapolating this chart into the future, we find that the future will be even less different from the present than the present."

So let me throw the question open to my readers:  Whither moral progress?

You might say, perhaps, "Over time, people have become more willing to help one another—that is the very substance and definition of moral progress."

But as John McCarthy put it:

"If everyone were to live for others all the time, life would be like a procession of ants following each other around in a circle."

Once you make "People helping each other more" the definition of moral progress, then people helping each other all the time, is by definition the apex of moral progress.

At the very least we have Moore's Open Question:  It is not clear that helping others all the time is automatically moral progress, whether or not you argue that it is; and so we apparently have some notion of what constitutes "moral progress" that goes beyond the direct identification with "helping others more often".

Or if you identify moral progress with "Democracy!", then at some point there was a first democratic civilization—at some point, people went from having no notion of democracy as a good thing, to inventing the idea of democracy as a good thing.  If increasing democracy is the very substance of moral progress, then how did this moral progress come about to exist in the world?  How did people invent, without knowing it, this very substance of moral progress?

It's easy to come up with concrete examples of moral progress.  Just point to a moral disagreement between past and present civilizations; or point to a disagreement between yourself and present civilization, and claim that future civilizations might agree with you.

It's harder to answer Subhan's challenge—to show directionality, rather than a random walk, on the meta-level.  And explain how this directionality is implemented, on the meta-level: how people go from not having a moral ideal, to having it.

(I have my own ideas about this, as some of you know.  And I'll thank you not to link to them in the comments, or quote them and attribute them to me, until at least 24 hours have passed from this post.)

 

Part of The Metaethics Sequence

Next post: "The Gift We Give To Tomorrow"

Previous post: "Probability is Subjectively Objective"

101 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Wiseman · 2008-07-16T05:32:36.000Z · LW(p) · GW(p)

Since the actual source of the meaning of "morality" is simply about achieving happiness, humans will eventually link political philosophies which increase happiness with "morality".

Subhan's challenge is easy to solve if you accept that morality is not epiphenomenal, and actually grounded in concrete, mechanically-driven happiness.

comment by Marshall · 2008-07-16T05:59:26.000Z · LW(p) · GW(p)

Why keep on about "morality"? Isn't this just a type of con-word used by ministers of religion, teachers and politicians to impress on us the need to be good and improve (in ways which they decide)? Can't we just abolish this word and tune it out? We all drive on the correct side of the road, because it is useful and it is seen to be useful by all. This is morality - small useful rules for getting along. There is no mystery about where they come from. We find ways to avoid bumping into things. We are brought up with this implicit usefulness and maintain it. In the luxury of our affluence it is no longer useful to boil the cat and we need our dogs to understand our language. Thus our circles of "utility" expand.

comment by Lowly_Undergrad2 · 2008-07-16T06:28:39.000Z · LW(p) · GW(p)

I don't think anyone can really dispute that a large-scale decrease in global violence and violent death is a sign of moral progress. So I must point to this Steven Pinker talk where he lays out some statistics showing the gradual decline of violence and violent death throughout our history: http://www.ted.com/index.php/talks/steven_pinker_on_the_myth_of_violence.html

Replies from: waveman
comment by waveman · 2016-06-28T09:54:07.170Z · LW(p) · GW(p)

This has actually been trenchantly criticized on statistical grounds. https://en.wikipedia.org/wiki/The_Better_Angels_of_Our_Nature

The basic idea is that if the Cuban Missile Crisis (or numerous other similar events) had ended badly, the conclusion would have been reversed. And according to people who were there, such as President John F. Kennedy, it very well could have ended badly.

Replies from: gjm
comment by gjm · 2016-06-28T10:09:50.511Z · LW(p) · GW(p)

I haven't looked at the original criticism, but the "basic idea" as you describe it seems to introduce a source of bias: we have more visibility of luckily avoided ways in which things could have gone badly for recent events than for older ones, so if you try to take those into account then you will skew the change over time in the direction opposite to the one Pinker claims.

(If you also look for unluckily avoided ways in which things could have gone well then maybe the bias goes away.)

comment by Oscredwin · 2008-07-16T06:45:00.000Z · LW(p) · GW(p)

You can only answer the question if you have some sort of answer to the question, "What is moral?" If democracy is moral, then the first democrats got there walking randomly when they accidentally stepped on the "Golden path" of moral progress. Luckily they were able to recognize it as such.

Also, assuming that society would settle on chimp rights correctly (however correctness is determined), either as human or not, is assuming your conclusion (or building an experiment that might not bear fruit until you're unfrozen).

comment by [deleted] · 2008-07-16T06:50:41.000Z · LW(p) · GW(p)

Readers Note:

Since there is a ‘directionality’ to physics (i.e. the universe moves from a simpler to a more complex state), and there is also an analogue to a ‘directionality’ in logic/mathematics (i.e. more complex ideas are built from simpler ideas), isn’t it a priori highly plausible that there’s also an analogue to a ‘directionality’ in the realm of values (i.e. moral progress)?

Let me remind all readers that years ago I speculated on multiple transhumanist lists that there may be three different ways to define causality. I don’t see a difference between ‘causality’ and ‘directionality’ in the abstract. So you could define (1) Physical causality – the growth of complexity, (2) Moral causality – the evolution of values, and (3) Logical/Mathematical causality – the growth of knowledge.

When will E. Yudkowsky realize that the whole notion of directionality or progress in some domain cannot be coherent without objectively existing entities in these domains? Directionality in physics is only coherent because objective physical properties do exist. Directionality in logic is only coherent because objective mathematical facts do exist. Finally, directionality in the realm of values can only be coherent if there are objectively existing moral archetypes.

So close…sigh

comment by Infotropism2 · 2008-07-16T06:59:50.000Z · LW(p) · GW(p)

" the future will be even less different from the present than the present."

instead of

" the future will be even less different from the present than the present from the past."

?

comment by Tim_Tyler · 2008-07-16T07:08:47.000Z · LW(p) · GW(p)

Why is there a direction to the shifting moral zeitgeist?

E.g. see the work of Robert Wright: How cooperation (eventually) trumps conflict

comment by Robin_Brandt · 2008-07-16T07:24:29.000Z · LW(p) · GW(p)

Technology is the single most important thing for morality. As technology allows better resources, communication, and documentation, safer paths for society emerge, as in the difference between bonobos and chimps, where resources make the species less aggressive. Also, when we become economically dependent on each other due to specialization and can be held responsible for our actions due to documentation, the threshold for cheating increases. Also we seem to want to generalize as many principles as we dare to: if we are healthy, feel safe and have plenty of resources we may think outsiders are okay, and may even provide a benefit, but when it is tough we may start a fight and defend our territory. Moral progress is possible because of technological and organizational progress, which means more resources, more surveillance and communication, more dependence, more general coherence, therefore equality and safety. Our inner moral modalities don't change, the environment does, and so we adapt; the moral modalities are rather flexible and general, sadly not general enough. With nanotech and quantum computing, we will have even more resources, and therefore possibly better morality, but I would rather not take that chance, therefore Friendly AI...

Replies from: buybuydandavis
comment by buybuydandavis · 2012-07-04T08:23:23.288Z · LW(p) · GW(p)

Technology is power. It could also enable a boot stomping human face forever.

comment by Paul_Crowley2 · 2008-07-16T07:53:05.000Z · LW(p) · GW(p)

I'm by no means sure that the idea of moral progress can be salvaged. But it might be interesting to try and make a case that we have fewer circular preferences now than we used to.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-16T08:16:23.000Z · LW(p) · GW(p)

Wiseman, if everyone were blissed-out by direct stimulation of their pleasure center all the time, would that by definition be moral progress?

Marshall, how is your "usefulness" not isomorphic to the word "good"? Useful for what?

Lowly Undergrad, early societies didn't have this idea of reducing violent death to zero - through what mechanism did they acquire this belief, given that they didn't start out with the idea that it was "moral progress"?

Robin Brandt, is whatever increasing technology does to a society, moral progress by definition, or does increasing technology only tend to cause moral progress?

Tim, if we all cooperated with each other all the time, would that by definition be moral progress?

Paul, do you think that your own morality is optimum or can you conceive of someone more moral than yourself - not just a being who better adheres to your current ideals, but a being with better ideals than you?

Replies from: MugaSofer
comment by MugaSofer · 2013-04-11T17:06:00.365Z · LW(p) · GW(p)

Lowly Undergrad, early societies didn't have this idea of reducing violent death to zero - through what mechanism did they acquire this belief, given that they didn't start out with the idea that it was "moral progress"?

I realize it's been years, but - didn't early societies want to encourage peace (in general, since the Hated Enemy obviously needs to be fought) and reduce violent crime? My model of early societies does, in fact, have something roughly isomorphic to "reduce violent death", even if they don't explicitly extrapolate this all the way to "someday, violent death should be nonexistent" - and, let's face it, most modern societies don't really do this either, it's just too far away. Do you have a source for asserting otherwise? Or, if you've changed your mind, do you remember why you claimed this?

comment by Ian_C. · 2008-07-16T08:33:39.000Z · LW(p) · GW(p)

If you take the list of things that were moral yesterday and the list that are moral today, and look for pairs between the lists that are kind of the same idea, but just in different quantity (e.g. like and love) then you could step back and see if there is an overall direction.

The key idea is to recognize when two things with different names are really different amounts of some higher more abstract idea.

comment by Wiseman · 2008-07-16T08:41:35.000Z · LW(p) · GW(p)

Eliezer: Wiseman, if everyone were blissed-out by direct stimulation of their pleasure center all the time, would that by definition be moral progress?

Compared to today's state of affairs in the world? Yes, I think that would be enormous moral progress compared to right now (so long as the bliss was not short term and would not burn out eventually and leave everyone dead. So long as the bliss was of an individual's choice. So long as it really was everyone in bliss, and others didn't have to suffer for it. Etc. etc.)

comment by billswift · 2008-07-16T08:59:07.000Z · LW(p) · GW(p)

The best discussion of moral progress I've seen yet is in Heinlein's Starship Troopers, where morality progresses by becoming more inclusive. Once, it was family, and everyone else was fair game; then tribe, race, religion, nation; now we recognize (at least officially; there are still many on lower "rungs") the human species as being deserving of our consideration. In Starship Troopers, Heinlein had one of his teachers in "History and Moral Philosophy" say that they were developing morality for dealing with intelligent aliens.

For the counter position, that morality is delusion and fraud, L. A. Rollins, "The Myth of Natural Rights" and the extended discussion by R. A. Wilson, "Natural Law", are well argued and entertaining.

comment by Paul_Crowley2 · 2008-07-16T09:01:57.000Z · LW(p) · GW(p)

Paul, do you think that your own morality is optimum or can you conceive of someone more moral than yourself - not just a being who better adheres to your current ideals, but a being with better ideals than you?

Yes I can.

If you take the view that ethics and aesthetics are one and the same, then in general it's hard to imagine how any ideals other than my own could be better than my own, for the obvious reason that I can only measure them against my own.

What interests me about the rule I propose (circular preferences are bad!) is that it is exclusively a meta-rule - it cannot measure my behaviour, only my ideals. It provides a meta-ethic that can show flaws in my current ethical thinking, but not how to correct them - it provides no guidance on which arrow in the circle needs to be reversed. And I think it covers the way in which I've been persuaded of moral positions in the past (very hard to account for otherwise) and better yet allows me to imagine that I might be persuaded of moral points in the future, though obviously I can't anticipate which ones.

If I can imagine that through this rule I could be persuaded to take a different moral stance in the future, and see that as good, then I'm definitely elevating a different set of ideals - my imagined future ideals - over my current ideals.

comment by Paul_Gowder · 2008-07-16T09:08:04.000Z · LW(p) · GW(p)

One possibility: we can see a connection between morality and certain empirical facts -- for example, if we believe that more moral societies will be more stable, we might think that we can see moral progress in the form of changes that are brought about by previous morally related instability. That's not very clear -- but a much clearer and more sophisticated variant on that idea can perhaps be seen in an old paper by Joshua Cohen, "The Arc of the Moral Universe" (google scholar will get it, and definitely read it, because a) it's brilliant, and b) I'm not representing it very well).

Or we might think that some of our morally relevant behaviors are consistently dependent on empirical facts, which we might progress in finding out. For example, we might have always thought that beings who are as intelligent as we are and have as complex social and emotional lives as we do deserve to be treated as equals. Suppose we think the above at year 1 and year 500, but at year 500, we discover that some group of entities X (which could include fellow humans, as with the slaves, or other species) is as intelligent, etc., and act accordingly. Then it seems like we've made clearly directional moral progress -- we've learned to more accurately make the empirical judgments on which our unchanged moral judgment depends.

comment by Venu · 2008-07-16T09:22:02.000Z · LW(p) · GW(p)

A few processes to explain moral progress (but probably not all of it): a) Acquiring new knowledge (e.g. the knowledge that chimps and humans are, on an evolutionary scale, close relatives), which leads us to throw away moral judgements that make assumptions which are inconsistent with such knowledge. b) Morality is only one of the many ends that we pursue, and as an end it becomes easier to pursue once you are amply fed, watered and clothed. In other words, improvements in material conditions enable improvements in morality. c) Conquest of one culture by another means the morals of the conquerors get transferred to the conquered (to some extent). Similarly, migration and higher levels of general exposure between cultures means practices that are viewed as immoral by much of the rest of the world are under much pressure to be abolished.

comment by billswift · 2008-07-16T10:11:52.000Z · LW(p) · GW(p)

"the knowledge that chimps and humans are, on an evolutionary scale, close relatives"

So what? The differences are so profound that humans should be considered a different class, maybe even a new phylum. The basic one is possession of language and culture. "Animal rights" is a stupid idea. I am against mistreatment of animals, but recognize that it is more an aesthetic than ethical position.

comment by Robin_Brandt · 2008-07-16T10:39:38.000Z · LW(p) · GW(p)

Eliezer: Robin Brandt, is whatever increasing technology does to a society, moral progress by definition, or does increasing technology only tend to cause moral progress?

I see, I answered quite a different question there, I had a funny feeling of that while writing that comment.

Increasing technology tends to cause moral progress, yes, by making moral choices economically and experientially (as in our experience of things) more strategic/optimal. It all boils down to satisfying our adapted pattern-recognizers that give us pleasure or a feeling of righteousness. And the human brain is calibrated to exercise an absolute optimal general morality in a much more limited way, because of limited mental and limited physical (food, mates, power) resources. But the "absolute" general morality is by itself just a set of strategies, a solution to a game-theoretic problem. It can never be in itself moral until some mind gives it that meaning. So morality without agents is just one mathematical structure among others. But when you are a mind you perceive your approximation (regulated by genes and learning) of morality as a strong emotion, parts of it close to what we call preference, parts of it very absolute.

comment by spindizzy2 · 2008-07-16T11:11:20.000Z · LW(p) · GW(p)

1) Supposing that moral progress is possible, why would I want to make such progress?

2) Psychological experiments such as the Stanford prison experiment suggest to me that people do not act morally when empowered not to do so. So if I were moral I would prefer to remain powerless, but I do not want to be powerless, therefore I perform my moral acts unwillingly.

3) Suppose that agents of type X act more morally than agents of type Y. Also suppose that the moral acts impact on fitness such that type Y agents out-reproduce type X agents. If the product of population size and moral utility is greater for Y than X then Y is the greater producer of moral good.

So is net morality important or morality per capita? How about a very moral population of size 0? What is the trade off between net and per capita moral output?

4) Predicting the long-term outcomes of our actions is very difficult. If the moral value of an act depends on the outcome, then our confidence in the morality of an act should be less than or equal to our confidence in the possible outcomes.

However, people's confidence in their morality is often much higher than their confidence in the outcome. Therefore, there must be a component of morality independent of outcome. What does the desirability of this component derive from?

comment by Dynamically_Linked · 2008-07-16T11:24:15.000Z · LW(p) · GW(p)

My view is similar to Robin Brandt's, but I would say that technological progress has caused the appearance of moral progress, because we responded to past technological progress by changing our moral perceptions in roughly the same direction. But different kinds of future technological progress may cause further changes in orthogonal or even opposite directions. It's easy to imagine for example that slavery may make a comeback if a perfect mind control technology was invented.

comment by Venu · 2008-07-16T11:41:09.000Z · LW(p) · GW(p)

@billswift: I do not want to divert the thread onto the topic of animal rights. It was only an example in any case. See Paul Gowder's comment previous to mine for a more detailed (and different) example of how empirical knowledge can affect our moral judgements.

comment by Marshall · 2008-07-16T11:41:45.000Z · LW(p) · GW(p)

Marshall, how is your "usefulness" not isomorphic to the word "good"? Useful for what?

I suppose I just want to avoid the preachiness of the word good. It is unfortunately coherent to die for goodness. It is not very useful to die for usefulness.

Useful for what? This doesn't seem like a useful question. Usefulness is obvious and thus no need to ask.

I do not wish to lose my way or be carried away by the bigness of the nominalisation "morality". Occam's Razor should also be applied here - in a pleasant and gentle way.

comment by Yvain2 · 2008-07-16T11:45:40.000Z · LW(p) · GW(p)

If one defines morality in a utilitarian way, in which a moral person is one who tries for the greatest possible utility of everyone in the world, that sidesteps McCarthy's complaint. In that case, the apex of moral progress is also, by definition, the world in which people are happiest on average.

It's easy to view moral progress up to this point as progress towards that ideal. Ending slavery increases ex-slaves' utility, hopefully more than it hurts ex-slaveowners. Ending cat-burning increases cats' utility, hopefully more than it hurts that of cat-burning fans.

I guess you could argue this has a hidden bias - that 19th century-ers claimed that keeping slavery was helping slaveowners more than it was hurting slaves, and that we really are in a random walk that we're justifying by fudging terms in the utility function in order to look good. But you could equally well argue that real moral progress means computing the utilities more accurately.

Since utility is by definition a Good Thing, it's less vulnerable to the Open Question argument than some other things, though I wouldn't know how to put that formally.

comment by Tim_Tyler · 2008-07-16T11:51:47.000Z · LW(p) · GW(p)

Re: if we all cooperated with each other all the time, would that by definition be moral progress?

If we all cooperated with each other all the time, that would be moral progress.

Moral progress simply means a systematic improvement of morals over time - so widespread cooperation would indeed represent an improvement over today's fighting and deceit.

comment by Sebastian_Hagen2 · 2008-07-16T11:55:10.000Z · LW(p) · GW(p)

It's harder to answer Subhan's challenge - to show directionality, rather than a random walk, on the meta-level.
Even if one is ignorant of what humans mean when they talk about morality, or what aspects of the environment influence it, it should be possible to determine whether morality-development over time follows a random walk empirically: a random walk would, on average, cause more repeated reversals of a given value judgement than a directional process.
For performing this test, one would take a number of moral judgements that have changed in the past, and compare their development from a particular point in human history (the earlier, the better; unreversed recent changes may have been a result of the random walk only becoming sufficiently extreme in the recent past) to now, counting how often those judgements flipped during historical development. I'm not quite sure about the conditional probabilities, but a true random walk should result in more such flips than a directional (even a noisy directional) process.
Does anyone have suggestions for moral values that changed early in human development?
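(A toy sketch of the flip-counting test described above — my own illustration, not Sebastian's. It models a "moral attitude" as a one-dimensional variable whose sign is the expressed judgement; the directional process is simply a drift term, which is an assumption for illustration only.)

```python
# Compare how often a judgement (the sign of an underlying attitude variable)
# flips under a pure random walk versus a noisy directional drift.
import random

def count_flips(drift, steps=200, noise=1.0, runs=2000):
    total_flips = 0
    for _ in range(runs):
        x, prev_sign, flips = 0.0, 0, 0
        for _ in range(steps):
            x += drift + random.gauss(0, noise)
            sign = 1 if x >= 0 else -1
            if prev_sign and sign != prev_sign:
                flips += 1
            prev_sign = sign
        total_flips += flips
    return total_flips / runs

if __name__ == "__main__":
    # The drift-free walk keeps recrossing zero; the drifting process settles
    # on one side early and reverses much less often.
    print("mean flips, pure random walk:  ", count_flips(drift=0.0))
    print("mean flips, noisy directional: ", count_flips(drift=0.1))
```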

comment by Ken_Sharpe2 · 2008-07-16T12:01:56.000Z · LW(p) · GW(p)

Gee, this seems awfully similar to Timeless Physics, doesn't it?

comment by michael_vassar3 · 2008-07-16T12:07:01.000Z · LW(p) · GW(p)

A possibility that I have mentioned here before has to do with positive feedback loops in an isolated society between economic growth and luxury spending on moral coherence. On this account, people always had qualms about slavery but considered it too impractical to seriously consider abandoning it. When feeling rich they abandoned it anyway, either as conspicuous consumption or as luxury spending on simplicity. Having done so, it turned out, made them richer, affirming this sort of apparent luxury spending or conspicuous consumption as actually being moral progress. Viewing them as an ecosystem of godshatter, increased power destabilized the balance of power between dissonant utility functions, allowing certain elements to largely erase others while still further increasing in power. One problem with this story is that it passes some of the buck to economic growth, though only some, as access to resources and population are surely part of the answer there. Another problem is that it doesn't add up to normality, but proposed moralities should only add up to normality when approximated crudely, not when approximated precisely.

Replies from: waveman
comment by waveman · 2016-06-28T09:58:57.559Z · LW(p) · GW(p)

positive feedback loops in an isolated society between economic growth and luxury spending on moral coherence

Or as Saul Alinsky put it “[C]oncern with ethics increases with the number of means available and vice versa.” It is easy to be ethical when you have little at stake.

comment by Tim_Tyler · 2008-07-16T12:15:23.000Z · LW(p) · GW(p)

Re: direct stimulation of their pleasure center

Morality is normally concerned with conduct, not feelings.

comment by Jay3 · 2008-07-16T12:39:32.000Z · LW(p) · GW(p)

"If everyone were to live for others all the time, life would be like a procession of ants following each other around in a circle."

Someone actually gets it right. Greed is moral. Greed is good.

comment by Ben_Jones · 2008-07-16T12:53:48.000Z · LW(p) · GW(p)

Imagine a country that abolishes capital punishment, then, a few years later, brings it back. Have they made moral progress? Have they regressed? More importantly, who's to say?

Imagine also an alien who arrives on Earth, hears of what we've done with laws and societies and says 'what the hell? They've been morally regressing all this time?!'

Looking forward to the next post. The moral valuation of sentient/conscious matter over 'dumb' matter is something I have trouble wrapping my head around.

comment by Robert4 · 2008-07-16T13:18:20.000Z · LW(p) · GW(p)

This has been mentioned many times, by Peter Singer, for instance, but one way towards moral progress is by expanding the domain over which we feel morally obligated. While we may have evolved to feel morally responsible in our dealings with close relatives and tribesmen, it is harder to hold ourselves to the same standards when dealing with whoever we consider to be not part of this group. Maybe we can attribute some of our moral progress to a widening of who we consider to be a part of our tribe, which would be driven by technology forcing us to live and interact with and identify with larger and more diverse groups of people. Clearly this doesn't solve all the problems of moral progress, but I think this idea could chip away at parts of the problem.

Replies from: Yosarian2
comment by Yosarian2 · 2013-01-01T01:15:16.250Z · LW(p) · GW(p)

Yes, this is what I was going to say.

As time goes on, we seem to add more and more people and groups of people to the category we treat morally. Most major changes in morality over time you could describe this way: the elimination of slavery, women's suffrage, laws of war, better treatment for mental illness, even the idea that it's bad to torture cats for your own amusement could all be called "expanding the group of beings who we feel we have to treat ethically".

Replies from: beoShaffer
comment by beoShaffer · 2013-01-01T02:07:10.540Z · LW(p) · GW(p)

Gwern has a rather interesting refutation of this idea.

Replies from: William_Quixote
comment by William_Quixote · 2013-01-01T04:00:46.318Z · LW(p) · GW(p)

It's interesting, but its claim may be flawed. The fact is, to the absolute best of our ability to judge, neither gods nor dead people actually exist. So it is not the case that there exist entities that have been pushed out of the circle. On the other hand women and minorities and cats do exist. And they have to various extents been brought into the moral circle. To the extent that the categories of existent entities and morally relevant entities have increased their overlap, that's progress. Or it's at least movement in a consistent direction.

comment by Caledonian2 · 2008-07-16T13:40:17.000Z · LW(p) · GW(p)

Still waiting for someone to take the necessary first step towards a rational understanding of the issue.

Any time now, folks.

comment by A_Madden · 2008-07-16T13:40:27.000Z · LW(p) · GW(p)

http://en.wikipedia.org/wiki/Nihilism

Maybe I had better not join the discussion; I just want to say that nearly everyone you will ever meet gets something and tries to hold onto it for as long as possible, and all their actions are defined by this.

Also everyone will argue what they are hardwired for: sex and eating, till they turn blue.

comment by Peter_Turney · 2008-07-16T13:51:37.000Z · LW(p) · GW(p)

If we all cooperated with each other all the time, that would be moral progress. -- Tim Tyler

I agree with Tim. Morality is all about cooperation.

If everyone were to live for others all the time, life would be like a procession of ants following each other around in a circle. -- John McCarthy, via Eliezer Yudkowsky

This is a reductio ad absurdum argument against the idea that morality is an end. I agree with what it implies: Morality is a means, not an end. Cooperation is a means we each use to achieve our personal goals.

comment by poke · 2008-07-16T14:24:40.000Z · LW(p) · GW(p)

As I said previously, I think "moral progress" is the heroic story we tell of social change, and I find it unlikely that these changes are really caused by moral deliberation. I'm not a cultural relativist but I think we need to be more attuned to the fact that people inside a culture are less harmed by its practices than outsiders feel they would be in that culture. You can't simply imagine how you would feel as, say, a woman in Islam. Baselines change, expectations change, and we need to keep track of these things.

As for democracy, I think there are many cases where democracy is an impediment to economic progress, and so causes standards of living to be lower. I doubt Singapore would have been better off had it been more democratic and I suspect it would have been much worse off (nowadays it probably wouldn't make a lot of difference either way). Likewise, I think Japan, Taiwan and South Korea probably benefited from relative authoritarianism during their respective periods of industrialization.

My own perspective on electoral democracy is that it's essentially symbolic and the only real benefit for developing countries is legitimacy in the eyes of the West; it's rather like a modern form of Christianization. Westerners tend to use "democracy" as a catch-all term for every good they perceive in their society and imagine having an election will somehow solve a country's problems. I think we'd be better off talking about openness, responsiveness, lawfulness and how to achieve institutional benevolence rather than elections and representation.

Now, you could argue that because I value things like economic progress, I have a moral system. I don't think it's that clear cut though. One of the distinctive features of moral philosophy is that it's tested against people's supposed moral intuitions. I value technological progress and growth in knowledge but, importantly, I would still value them if they were intuitively anti-moral. If technological progress and growth in knowledge were net harms for us as human beings I would still want to maximize them. I think many people here would agree (although perhaps they've never thought about it): if pursuing knowledge was somehow painful and depressing, I'd still want to do it, and I'd still encourage the whole of society to be ordered towards that goal.

comment by Silas · 2008-07-16T15:08:22.000Z · LW(p) · GW(p)

I think a lot of people are confusing a) improved ability to act morally, and b) improved moral wisdom.

Remember, things like "having fewer deaths, conflicts" do not mean moral progress. It's only moral progress if people in general change their evaluation of the merit of e.g. fewer deaths, conflicts.

So it really is a difficult question Eliezer is asking: can you imagine how you would have/achieve greater moral wisdom in the future, as evaluated with your present mental faculties?

My best answer is yes, in that I can imagine being better able to discern inherent conflict between certain moral principles. Haphazard example: today, I might believe that a) assaulting people out-of-the-blue is bad, and b) credibly demonstrating the ability to fend off assaulters is good. In the future, I might notice that these come into conflict: that if people value both of these, some people will inevitably have a utility function that encourages them to do a), and this is unavoidable. So then I find out more precisely how much of one comes at how much cost of the other, and that pursuing certain combinations of them is impossible.

I call that moral progress. Am I right, assuming the premises?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-11T14:34:22.678Z · LW(p) · GW(p)

I think a lot of people are confusing a) improved ability to act morally, and b) improved moral wisdom.

Agreed.

comment by Nick_Tarleton · 2008-07-16T15:34:13.000Z · LW(p) · GW(p)

Yvain: I think you're equivocating between two definitions of utility, "happiness" and "the quantity that's maximized". This dual meaning is really unfortunate.

Sebastian: moral progress might be random except that people (very plausibly) try not to return to a rejected past state. This would be directionless (or move in an arbitrary direction) but produce very few reversals.

poke: pursuing knowledge could be painful and depressing but still intuitively moral.

I see a bit of what looks like terminal/instrumental confusion in this thread. I don't think discovering better instrumental values toward the same terminal values you always had counts as moral progress, at least if those terminal values are consciously, explicitly held.

comment by prase · 2008-07-16T17:50:21.000Z · LW(p) · GW(p)

A particularly interesting question is, what would people of e.g. the Roman empire or mediaeval France think about today's society? We can compare the morality of the past with contemporary standards, but we can't see the future. I wonder whether mediaeval people would find our morality less despicable than we find theirs. If such a comparison were possible, one could define some sort of objective (or subjectively objective?) criterion - simply put together two societies with different moral codes and watch how many will convert from the first to the second and vice versa. Anyway, it is probable that different moral codes are not all equally well suited for human nature. If so, the apex can be defined as the moral code which is perfectly stable, i.e. does not evolve (given we stop human biological evolution) and in contact with a different moral code becomes dominant.

comment by Aleksei_Riikonen · 2008-07-16T18:22:51.000Z · LW(p) · GW(p)

Some changes in morality come about because people notice that their previous ideas contained incorrect probability assessments. These changes can be considered moral progress.

Example: people find a logical inconsistency in their moral thinking, and correct for it.

Example: people notice that they have been assuming it necessary to be Homo sapiens or to be of a specific gender or color in order to have conscious experience, and that they don't actually have any basis for such an assumption.

As long as our knowledge about the universe (including our own thought processes and the assumptions and mistakes we are making without realising them) continues to increase at a rapid pace, it is likely that every now and then we learn something such that it causes a correction in our moral thinking. Or at some point, we may still be learning rapidly, but it may have been a really long time since we last ran across something that changed our ideas about morality (such a time hasn't yet come during history).

When/if we have learned all that we possibly can (this includes thinking about stuff long enough and with sufficient quality to get all the insights that we can), there can be no more moral progress. If in such circumstances we find that we have identical ideas about morality when compared to our peers-in-knowledge (including knowledge about past life experiences of each other), and that these ideas don't change over time, it proves there was convergence in the change of our ideas about morality (the question of how much of the change was a random walk could be further studied by running ancestor simulations).

On the other hand, we might find that we still can't agree with each other about morality, not even when we are essentially omniscient. This would prove that much of the change in moral ideas is a random walk, and that possibly only a small fraction of the changes can be considered progress.

And, it does even currently seem to me, that it is logically possible to be essentially omniscient, and have really weird utility functions. But perhaps very few of us humans ever want to change ourselves into beings with very weird utility functions, and most of us will indeed converge to some specific ideas about morality.

(I guess I should confess that my thinking expressed here has been heavily influenced by Eliezer's previous writings.)

comment by Paul_Gowder · 2008-07-16T18:59:28.000Z · LW(p) · GW(p)

Nick:

I don't think discovering better instrumental values toward the same terminal values you always had counts as moral progress, at least if those terminal values are consciously, explicitly held.

Why on earth not? Aristotle thought some people were naturally suited for slavery. We now know that's not true. Why isn't that moral progress?

(Similarly, general improvements in reasoning, to the extent they allow us to reject bad moral arguments as well as more testable kinds of bad arguments, could count as moral progress.)

comment by Nominull3 · 2008-07-16T19:21:24.000Z · LW(p) · GW(p)

A moral state X represents progress from moral state Y if people in both moral state X and moral state Y agree that X is better after being presented with the arguments. That is, X represents progress from Y if all it takes is the right way of thinking about it to convince someone from Y to move to X.

comment by Nick_Tarleton · 2008-07-16T19:30:35.000Z · LW(p) · GW(p)

Paul, I think values and beliefs have both changed in that case - we (I hope I'm right to generalize!) don't judge that any facts about a person could make it right to enslave them. Most of us have scrapped the whole teleological framework Aristotle used to say that.

I probably should have said "...counts as the sort of moral progress Eliezer is talking about", the reason being that updating beliefs/instrumental values isn't a matter of metaethics, and is unproblematically directional.

Nominull, in your first sentence, does "people" mean everybody? In the second, what's the "right way of thinking"?

Unrelated: I think Michael makes an important point with "an ecosystem of godshatter".

comment by Paul_Gowder · 2008-07-16T19:53:02.000Z · LW(p) · GW(p)

Nick,

Fair enough, but consider the counterfactual case: suppose we believed that there were some fact about a person that would permit enslaving that person, but learned that the set of people to whom those facts applied was the null set. It seems like that would still represent moral progress in some sense.

Perhaps not the sort that Eliezer is talking about, though. But I'm not sure that the two can be cleanly separated. Consider slavery again, or the equality of humanity in general. Much of the moral movement there can be seen as changing interpretations of Christianity -- that is, people thought the Bible justified slavery, then they stopped thinking that. Is that a purely moral change? Or is that a better interpretation of a body of religious thought?

comment by Z._M._Davis · 2008-07-16T21:00:07.000Z · LW(p) · GW(p)

"[W]e (I hope I'm right to generalize!) don't judge that any facts about a person could make it right to enslave them."

I'm not so sure, Nick. Taboo person, and consider our treatment of certain nonhuman animals.

comment by TGGP4 · 2008-07-16T23:13:46.000Z · LW(p) · GW(p)

People will always consider their own beliefs moral and those of their predecessors who disagreed to be less so. People who believe in "moral progress" are adherents of a religion, whether they recognize it or not.

comment by Lowly_Undergrad2 · 2008-07-16T23:51:23.000Z · LW(p) · GW(p)

"Lowly Undergrad, early societies didn't have this idea of reducing violent death to zero - through what mechanism did they acquire this belief, given that they didn't start out with the idea that it was "moral progress"?"

While it is certainly difficult to imagine the mindset of people who existed tens of thousands of years before us, I think since they were still human beings, we can assume they were somewhat similar to you and me. From this basic assumption I think we can look toward Peter Singer's philosophy of the moral circle. The starting point that you ask for I think would be the immediate family in whom our genes are largely invested. We likely evolved to not want our immediate family members to die violently because those genes could easily be sustained. From this point we can extend it out to our non-immediate family members. From this point we can further extend it out to friends/trading partners and so forth, ever expanding our moral circle. Since we were invested in all these people either genetically, emotionally, or economically, we may have rationalized reasons why it was "WRONG" to kill them, and hence the foundation of morality is formed. Even though it pains me to consider that the foundation of morality is just a rationalization, this is the conclusion I am forced to accept given my assumptions.

comment by Caledonian2 · 2008-07-17T00:54:49.000Z · LW(p) · GW(p)

Why on earth not? Aristotle thought some people were naturally suited for slavery. We now know that's not true.
No, we don't. We know no such thing.

Replies from: Kenny
comment by Kenny · 2013-04-11T12:01:41.161Z · LW(p) · GW(p)

Morally, we do know such a thing.

Replies from: ArisKatsaris, MugaSofer
comment by ArisKatsaris · 2013-04-11T12:37:27.677Z · LW(p) · GW(p)

Morally, we do know such a thing.

This sounds like an is-ought confusion. "Some people would be happier as slaves." is an is-statement -- it's either right or wrong (true or false) as a matter of fact, regardless of morality. "Slavery oughtn't exist" is a moral statement -- it only has a truth value according to a particular ethical/moral set.

I don't know whether "naturally suited for slavery" is supposed to be a "is" or an "ought" statement (descriptive or prescriptive). If it's an is-statement then our moral sense is irrelevant to whether the statement is true or false as a matter of fact.

Replies from: TimS, PrawnOfFate
comment by TimS · 2013-04-11T13:44:04.635Z · LW(p) · GW(p)

"Some people would be happier as slaves." is an is-statement -- it's either right or wrong (true or false) as a matter of fact, regardless of morality.

I agree generally with your point, but this sentence assumes "happier" is an objective quality - which may not be true. If we were to taboo "happier" in that sentence, the new phrasing might include a moral claim. Consider:

"Everyone is happier if jocks can haze nerds without complaint" --> "Jocks by show virtue by hazing nerds, and nerds show virtue by accepting hazing without complaint."

The second sentence contains a number of explicit and implicit moral claims. Those moral claims are also present in the first sentence, just concealed by the applause light word "happy."

comment by PrawnOfFate · 2013-04-11T14:48:55.501Z · LW(p) · GW(p)

"Slavery oughtn't exist" is a moral statement -- it only has a truth value according to a particular ethical/moral set.

That's something we don't know. Moral statements might be uniformly false (error theory), neither true nor false (expressivism), have single non-relative truth values (moral realism), etc.

Replies from: ArisKatsaris, MugaSofer
comment by ArisKatsaris · 2013-04-11T15:25:50.789Z · LW(p) · GW(p)

Just to mention my own view on the subject in a single line (though I ought really illustrate it with examples): I'm guessing that the moral algorithm in our brains is executing an unconscious estimation of our preferences while attempting a depersonalization of context.

As such a moral statement is on the one hand dependent on individual preferences (moral values) but on the other hand it can also be true/false on an objective level, as some parts of the above algorithm are objective and some are subjective.

comment by MugaSofer · 2013-04-11T16:03:13.011Z · LW(p) · GW(p)

But you might believe it's moral to increase net happiness, or moral to enforce people's rights, one of which is a right to personal autonomy. So its truth value is still only determined "according to a particular ethical/moral set."

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-11T16:08:26.342Z · LW(p) · GW(p)

No, because your beliefs might be wrong. Aris was asserting relativism against error theory, non-cognitivism and realism. Relativism is the claim that some set of statements have truth values, and have truth values that are relative to something. Relativism is not proven simply by producing evidence of conflicting beliefs.

Replies from: ArisKatsaris, MugaSofer
comment by ArisKatsaris · 2013-04-11T19:34:26.326Z · LW(p) · GW(p)

A small note: I don't consider myself a moral relativist, though I understand why my statement was misunderstood as such, as I was vastly simplifying my position. My actual position on morality probably needs a full discussion post to be fully explained.

I do think that for any X, X can only be called "morally wrong" according to some moral/ethical system, but I also think that there may exist objective criteria which might invalidate some moral/ethical systems altogether, or even rank (at least somewhat) the validity of the various moral/ethical systems.

comment by MugaSofer · 2013-04-12T20:41:54.396Z · LW(p) · GW(p)

I'm well aware they could be wrong - in fact that's my whole point. The answer depends on which moral theory is true - even though only one theory is actually true. It's a counterfactual.

Since Aris says they're not a moral relativist, I suspect this is at least similar to what they intended. If it's not, I'd still endorse it.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-12T20:48:08.356Z · LW(p) · GW(p)

Why would an "ethical/moral set" be what makes a moral claim (realistically) true? Realists tend to think claims are rendered true by some sort of fact.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-12T21:22:02.621Z · LW(p) · GW(p)

The position of realism + belief in such a fact is one such "ethical/moral set", as I meant it. I think that may have come across as referring to different terminal values or something?

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-13T12:02:20.515Z · LW(p) · GW(p)

Realists think claims are made true by facts, not beliefs in facts.

Replies from: MugaSofer, TheOtherDave
comment by MugaSofer · 2013-04-13T14:00:13.172Z · LW(p) · GW(p)

Right, but it depends which facts are true. The answer is contingent on these disputed facts.

ETA: edited for wording.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-13T14:11:09.194Z · LW(p) · GW(p)

Sure. Which theory will be believed to be true depends on which facts are believed to be true, and which theory is actually true will depend on which facts are actually true. But beleiving one thing to be true becuae you believe another to be tirue is non argument for relativism, although careless wording can make it seem that way.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-13T14:31:38.385Z · LW(p) · GW(p)

But beleiving one thing to be true becuae you believe another to be tirue is non argument for relativism

Sorry, I'm having trouble parsing that :( Possibly because I'm misreading the typos?

comment by TheOtherDave · 2013-04-13T15:50:52.991Z · LW(p) · GW(p)

I believe we've established that shminux's position allows for some models to be made more accurate than others by events, not beliefs in events. I think these two positions are analogous.

Replies from: MugaSofer
comment by MugaSofer · 2013-04-13T16:23:24.919Z · LW(p) · GW(p)

Different conversation ;)

We're talking about moral realism here, sort of.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-04-13T16:25:18.217Z · LW(p) · GW(p)

Whoops! You're entirely right; I should have read the parent-tree before responding, but I was confident that I knew what the conversation was. Retracted.

comment by MugaSofer · 2013-04-11T16:48:13.134Z · LW(p) · GW(p)

Do we, though? I think most people would agree that it's acceptable to enslave sentient creatures that are actually happy under such a system - albeit reluctantly, due to the signalling implications - and this seems consistent with the fact that, historically, societies that kept slaves believed this (if not alieved it.)

comment by Leonid · 2008-07-17T02:15:40.000Z · LW(p) · GW(p)

There is a tendency for older generation to feel nostalgic for the time of their youth and for the younger generation to strive for changing the status quo. So I wonder whether the modern perception of moral progress (as opposed to perennial complaints of moral degradation popular among our ancestors) comes from the youth being more economically and politically empowered than ever before, which allows it to dominate public discourse.

comment by athmwiji · 2008-07-17T07:00:14.000Z · LW(p) · GW(p)

I also consider morality to be about cooperation... In this sense moral progress predates humanity; specifically, I consider the evolution of multicellular life to be an example of moral progress.

comment by Ben_Jones · 2008-07-17T23:34:50.000Z · LW(p) · GW(p)

@Paul Gowder

What Caledonian said.

Not true? Please, please post about that. Not about moral progress etc, but how you have come to hold that any moral belief can be an objectively true belief. This is surely what 'we now know that's not true' implies.

Aristotle would probably ask you for evidence that he is flatly wrong. He might also ask you why your judgment is true, and his is not.

While I might not agree that we should enslave anyone, I'd certainly have the courtesy to admit to Aristotle that a moral is only as true as a society and an era holds it to be. Is this what you meant?

For your second comment, I'd say that this is a case of a religious given succumbing to overwhelming pressure from society and society's conscience. The Bible was the problem, or at least a justification for the problem. By this rationale, yes, progress was made. However, to temper TGGP's comment a little, 'moral progress' can surely only be made from one set of beliefs towards another set. There is no objective 'morality meter' that we've been inching our way up (down? along?) since the year dot. How could there be? Moral development might be a better way to describe what this post talks about.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-11T14:55:39.889Z · LW(p) · GW(p)

Not true? Please, please post about that. Not about moral progress etc, but how you have come to hold that any moral belief can be an objectively true belief.

  • I wouldn't want "enslave him" to become a universal law (Kant I)

  • Enslaving people treats them as means, not ends (Kant II)

  • I wouldn't want to become a random member of society that permits slavery (Rawls)

etc

comment by Nick_Tarleton · 2008-07-18T05:37:00.000Z · LW(p) · GW(p)

Not everyone has the same intuition about the wrongness of slavery, though, and "they're not us and they're more use to us this way" is justification enough for some. People have divergent intuitions about empirical and logical propositions, too, but in those cases there's an obvious (if not always practical) way to settle things: go and look, or find a (dis)proof. You can trivially demonstrate that 1+1≠3, but it's hard to see how you could reject with nearly as much rigor even something as ridiculous as "it's good to enslave people born on a Tuesday." You could put the latter into a calculator if you defined "good" in minute detail, and if you copied the definition out of your own skull you could even get an output you're justified in caring about, but try to claim fully mind-independent truth and you fall right into the Open Question. It still adds up to normality, though, to the extent that definitions of "good" converge under increasing knowledge, reflection, and discussion - probably a large extent, excluding a few sociopaths and other oddballs.

But maybe, as Eliezer's been pointing out, I'm wrong to think 1+1=2 is mind-independently demonstrable either - really, in both cases, a mind needs to be running certain dynamics to appreciate the argument; it's just harder to imagine a mind (that deserves the term) with different arithmetic dynamics than different moral dynamics.

(BTW, like Ben, I think novel interpretations of the Bible re: slavery were mostly rationalizations of already-changing fundamental values.)

comment by Ben_Jones · 2008-07-18T13:07:00.000Z · LW(p) · GW(p)

Paul,

Thanks for the clarification.

I submit that's enough to constitute all the knowledge we need to say that kind of behavior is immoral.

So we're saying what we think is moral based on our knowledge. I'd say that's pretty watertight. We know what we feel is right, but the more we can tie it to objective facts about the world, the stronger our position. However, I'd still argue that we can never move beyond merely believing in our morals, by definition. (Yes, I said it!) The moment we state that we know that our morals are true for all time and space, we're setting ourselves up for a fall that we can't recover from.

comment by Ben_Jones · 2008-07-18T13:08:00.000Z · LW(p) · GW(p)

Sorry for repost, but note also that my earlier comment made no reference to slavery, and I of course agree that slavery isn't right. My beef was with the assertion of a true moral.

comment by John_David_Galt · 2008-07-20T23:54:00.000Z · LW(p) · GW(p)

Whether you agree with it or not, Obama's "moral progress" means a change in US law to comport more closely to (his present view of) morality, not a change in the moral views of Americans. It is quite possible to view oneself as the apex of possible morality and still believe in the possibility of moral progress on other people's part.

I disagree with Obama because I disagree with some of the goals of his morality, but I don't see that as any reason to attack his semantics.

comment by Rain · 2010-03-22T19:15:16.445Z · LW(p) · GW(p)

I see moral progress as 1) increased empathy, defined as increasingly satisfying, increasingly accurate mental models of sentient beings, including oneself, and 2) increased ability to predict the future, to map out the potential chains of causality for one's actions.

comment by lockeandkeynes · 2010-07-20T04:40:13.914Z · LW(p) · GW(p)

Inspired by this article http://www.thecherrycreeknews.com/news-mainmenu-2/1-latest/5517-higher-intelligence-associated-with-liberalism-atheism.html, I think one way of doing it might be to show directionality in terms of evolutionary novelty. That is, look at what parts of our evolutionary psychology we have rationally worked against as a culture, and why we came to those more intellectual conclusions. That way, the measure of our progress could be how we learn to fix the mistakes of stupid natural selection.

However, that sounds a lot to me like reversed stupidity, which I now know to be a false means of winning, but I do think it at least explains our perception of moral progress, if not progress as an absolute. If we somehow discover that, when cultures step away from their evolutionary psychology, it is always for the sake of positive rational morality, then the concept might hold more weight in terms of a holistic moral progress.

comment by kilobug · 2011-09-29T10:27:10.034Z · LW(p) · GW(p)

Part of the answer could lie in "what would someone teleported to another culture think?" I don't think it totally solves the question, but it's a hint, or a part of the answer.

If you take someone from now and teleport him to the dark ages, with absolute monarchy, serfdom, capital punishment using the most horrible ways of killing, torture, ... he will be horrified.

If you take someone from the dark ages and teleport him to now, he'll probably be very lost at first, but I don't think he would be horrified by the fact that we manage to make more-or-less reasonable decisions using democracy (at least as reasonable as what the kings used to make), or that society doesn't collapse into crime and chaos when we abolish the death penalty, serfdom, torture, ...

Many people who, in the past, advocated the use of what we now consider barbaric (torture, the death penalty, dictatorship, ...) did so saying "there is no alternative", "if we don't maintain order, it'll be chaos and everyone will murder each other", "if you don't have a king, no decision will be taken", ...

The same applies to points which are debated right now in western societies, like the "painless" death penalty or corporal punishment in education. People who are against them are horrified by them; people who are in favor argue more along the lines of "we need them".

And it also applies to things like prison, which is nearly uncontroversial right now. I find very few people around me who justify prison for its own sake; they justify it only because we need it to prevent or deter crime. So when we find a way to do without prison (or to use it much less) by finding alternatives (technology like electronic monitoring, societal evolution, a better understanding of psychology and sociology, ...), people in the future will be outraged at how long we locked people away, while we, if teleported into that future, would be confused about how they keep society from chaos without prison, but not outraged by it.

That gives a general direction of "ethical progress", which is (to a point) universal to all humans. But it's just a hint toward a real theory of moral progress (I haven't yet read the following posts, nor spent months formalizing it).
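A toy sketch of the teleportation heuristic described above (my own hedged formalization, not kilobug's; the function name and the coarse reaction labels are hypothetical): the asymmetry between "horrified" and "merely lost" is what supplies the direction.

```python
def progress_direction(a_visiting_b, b_visiting_a):
    """Apply the teleportation test to societies A and B.

    Each argument is a coarse label for how a visitor reacts to the other
    society: e.g. "horrified", "lost but not horrified", "indifferent".
    If A's visitor is horrified by B while B's visitor is merely lost in A,
    the heuristic says A is further along the direction being pointed at.
    """
    if a_visiting_b == "horrified" and b_visiting_a != "horrified":
        return "A looks further along than B"
    if b_visiting_a == "horrified" and a_visiting_b != "horrified":
        return "B looks further along than A"
    return "no clear direction"

# The comment's example: a present-day person visiting the dark ages is
# horrified; a dark-ages person visiting the present is lost but not horrified.
print(progress_direction("horrified", "lost but not horrified"))
# -> "A looks further along than B"
```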

Replies from: Multiheaded, thomblake, army1987
comment by Multiheaded · 2012-02-28T15:20:41.735Z · LW(p) · GW(p)

...the fact that we manage to make more-or-less reasonable decisions using democracy (at least as reasonable as what the kings used to make), or that society doesn't collapse into crime and chaos when we abolish the death penalty, serfdom, torture, ...

Recently, it has been quite fashionable on LW to profoundly disagree with all of those points. At the very least, someone's going to say that, when an attempt to abolish slavery was made, US society did for a while collapse into chaos unheard of before or since.

Speaking quite frankly (and in purple prose), though, there are few other things in the realm of the mind I'd desire right now than to be able to trust securely in all those points, and rest well, knowing that the job of SIAI and partly LW is simply to fight our way upwards before the sky comes crashing down - not also to run as fast as possible from the eldritch monster born of our own shadow!

Replies from: kilobug
comment by kilobug · 2012-02-28T16:26:40.062Z · LW(p) · GW(p)

Temporary chaos frequently happens when changes are made - but that's not what I was referring to. The question of "will chaos occur when moving from slavery to no slavery" is different from "would a society without slavery be more chaotic". The former can justify inertia (keeping things as they are), but is not in itself an argument for or against slavery (or anything else).

And the fact that, despite that inertia, we still see things like torture and slavery mostly disappearing is a good indicator of moral progress.

Replies from: Multiheaded
comment by Multiheaded · 2012-02-28T17:44:25.472Z · LW(p) · GW(p)

Eh, I'm just not the go-to guy here. You should try talking to people like:

  • sam0345 (low-level combat tutorial)

  • TGGP (online co-op mode)

  • Aurini (MEDIUM) - and he might end up just opening the gate and letting you pass if you look like enough of a bro - has recently been witnessed in a brawl against a pick-up raid. Pick-up, get it? Get it? Eh heh!

  • Konkvistador (HARD)

  • steven0461 (BONUS CONTENT; need the Meta^2-Contrarian Edition DLC to unlock - BUY NOW for only LW$ 5499)

  • Vladimir_M (VERY HARD)

  • ??? (IMPOSSIBLE)

Replies from: None, Multiheaded
comment by [deleted] · 2012-02-28T18:00:16.092Z · LW(p) · GW(p)

MORAL KOMBAT!

Edit: Lyrics need to be included obviously:

Test your mind, Test your mind,
Test your mind, Test your mind. 
MORAL KOMBAT!
FIGHT!
MORAL KOMBAT!
EXCELLENT!
Konkvistador, TGGP, Roko, Will_Newsome,
steven, cousin_it, Vladimir.
MORAL KOMBAT!
FIGHT!
MORAL KOMBAT!
Konkvistador, TGGP, Roko, Will_Newsome,
steven, cousin_it, Vladimir.
MORAL KOMBAT!
(Modus ponens!)
(Ceteris paribus)
(Aumann's agreement)
(Excellent!)
FIGHT!
Test your mind, Test your mind.
Konkvistador, TGGP, Roko, Will_Newsome,
steven, cousin_it, Vladimir.
MORAL KOMBAT!
FIGHT!
MORAL KOMBAT! [4x]

Since I'm apparently a stepping stone on the path to the Final Boss of the contrarian Internet, I wonder what my fatality is.

Replies from: Multiheaded, Multiheaded
comment by Multiheaded · 2012-02-28T18:06:29.993Z · LW(p) · GW(p)

So, we have an agreement that outright flattering each other in the future shall be reciprocated with positive karma loops, as long as it's done in a sufficiently nerdy manner? C'mon, bro, just say yeah!

Replies from: None
comment by [deleted] · 2012-02-28T18:09:10.876Z · LW(p) · GW(p)

Past behaviour is an excellent predictor of future behaviour. Nerdy flattery and humour seem to be consistently rewarded on LessWrong.

comment by Multiheaded · 2012-02-28T18:23:23.016Z · LW(p) · GW(p)

:reads the edit:

Now you're just adding insult to injury, except that "injury" is "awesomeness" and "insult" is "nostalgia".

comment by Multiheaded · 2012-02-28T18:43:58.529Z · LW(p) · GW(p)

We are glad to announce an upcoming full-fledged expansion pack: 'The Twisting Way'

Engage the enigmatic genius Will_Newsome and rescue Lady AspiringKnitter from his unspeakable experiments; survive the shamanistic Rites of Hanson (not for the sake of survival!); endure stigma and uproar as you optimize your threads for the gaze of the feared Outsiders; boldly embark upon the Doomed Quest for Mencius' Magnificent Monocle, and more!

comment by thomblake · 2012-02-28T16:35:05.265Z · LW(p) · GW(p)

I like the criterion above. If people on one side are arguing that x is "necessary" and people on the other are arguing that x is "horrible", then it should be clear that x is horrible and something should be done about it (make x less horrible, find an alternative to x, or remove whatever makes x necessary).

Applies well to things like medical testing on animals, prisons, and death.

Replies from: Kenny
comment by Kenny · 2013-04-11T13:55:56.752Z · LW(p) · GW(p)

This is interesting. I was going to offer what I thought were counterexamples, such as abortion, masturbation, and drug use, but now I'm not so sure.

For one thing, the "something" that should be done might simply be to prevent anyone from feeling horror (such as for masturbation), and for another, it seems that there should be ways to mitigate the negative consequences of 'horrible' things (such as for abortion, by transferring an unwanted fetus to an artificial womb for adoption, or by deliberately mitigating the unwanted side effects of drugs).

comment by A1987dM (army1987) · 2012-02-28T21:29:55.061Z · LW(p) · GW(p)

I find very few people around me who justify prison for its own sake; they justify it only because we need it to prevent or deter crime.

Seems like there are more such people than we'd expect. (Are you in Europe too?)

comment by Ben Pace (Benito) · 2012-07-14T12:15:42.245Z · LW(p) · GW(p)

The discussion in the comments has been interesting, but I believe I have a simple answer to Eliezer's question (please tell me if I am mistaken). Consider a society that has a moral idea, say valuing bodily autonomy, but doesn't give women that right. They often kill women for their organs to give to men and children, owing to an old tribal culture now mainly forgotten. Unfortunately, certain rituals and dogmas still continue. One day, a leading public intellectual points this out on TV, and the society changes its actions to fit its true moral beliefs and stops acting on the non-moral ones. Wouldn't this be an example of moral progress?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-07-14T15:48:02.120Z · LW(p) · GW(p)

Consider a different society that has a moral idea like valuing the bodily autonomy of non-women, but for various historical reasons this has historically been expressed as "valuing bodily autonomy" without specifying gender. Their behavior has been identical to the example you give, until one day someone points this out, and they start expressing it as "valuing bodily autonomy for non-women" instead, while continuing to do everything else the way they used to.

Is this also an example of moral progress?

If not, why not?

Replies from: Benito
comment by Ben Pace (Benito) · 2012-08-13T15:26:15.899Z · LW(p) · GW(p)

I see. I've said that if people become more aligned with their meta-morals in practice, then it is progress... And you've offered that their meta-morals might seem or be bad anyway, so it wouldn't seem to be progress to us. I suppose, to be able to show my progress to be directional and not arbitrary, I'd have to present a perfect, objective basis for morality. I won't be doing that in this post (sorry) so my point is redundant. Thanks for clearing that up with me.

comment by sjmp · 2013-05-15T14:03:36.753Z · LW(p) · GW(p)

I was going to say something about moral progress being changes in society that result in a global increase in happiness, but I ran into some problems pretty fast following that thought. Hell, if we could poll every single living being from the 11th century and the 21st century and ask them to rate their happiness from 1 to 10, why do I have a feeling we'd end up with the same average in both cases?

If you gave me an extensional definition of moral progress by listing free speech, the end of slavery, and democracy, and then asked me for an intensional definition, I'd say moral progress is a global and local increase in co-operation between humans. That does not necessarily mean an increase in global happiness.