Maximizing life universally

post by EliasHasle · 2014-02-07T12:32:23.843Z · LW · GW · Legacy · 23 comments

Pain and pleasure as moral standards do not appeal to me. They are easily manipulated by drugs, and taking them as the standard can lead to results such as killing sick people against their will.

To me, life and death are much more interesting. There are issues in defining which lives are to be saved, what it means for a life to be "maximized", what life actually is, and so on. I propose trying to remain as objective as possible, and defining life through physics and information theory (think negentropy, Schrödinger's "What is Life" and related works). I am not skilled in any of these sciences, so my chances of being less wrong on the details are slim. But what I envision is something like "Maximize (universally) the amount of computation energy causes before dissolving into high entropy", or "Maximize (universally) the number of orderly/non-chaotic events". Probably severely wrong, but I hope you get the idea.
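
To make this slightly more concrete: one rough way to bound "the amount of computation energy causes" is Landauer's principle, which says that erasing one bit of information costs at least k_B·T·ln(2) joules. The sketch below is only an illustration under that assumption; the room-temperature and one-kilowatt-hour figures are arbitrary placeholders, not part of the proposal.

```python
import math

# Landauer's principle: erasing one bit costs at least k_B * T * ln(2) joules.
# This gives a crude upper bound on "computation per unit of energy".
K_B = 1.380649e-23  # Boltzmann constant, J/K

def max_bit_erasures(free_energy_joules, temperature_kelvin=300.0):
    """Upper bound on irreversible bit operations from a given free-energy budget."""
    return free_energy_joules / (K_B * temperature_kelvin * math.log(2))

# Placeholder example: one kilowatt-hour (3.6e6 J) at room temperature.
print(f"{max_bit_erasures(3.6e6):.2e} bit erasures")  # about 1.25e+27
```

This only bounds irreversible bit operations; it says nothing about which computations would be worth causing, which is the harder part of the proposal.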

I suppose that some rules/actions that may contribute to this goal (not considering all consequences) are:

- Minimizing killing.

- Having lots of children.

- Sustainable agriculture.

- Utilizing solar energy in deserts.

- Building computers.

- Production rather than consumption.

- Colonizing space.

and, ultimately, creating superintelligence, even if it means the end of humanity.

This, to me, is the ultimate altruistic utilitarianism. I don't think I'm a utilitarian, though... But I wonder if some of you clever people have any insights to contribute that could help me become less wrong?

(Placing the following in parentheses is an invitation to discuss this part within the parentheses, while the main discussion happens in the open.

There are other ideas that appeal more to me personally:

- Some kind of justice utilitarianism. That is, justice is defined in terms of people's (or other decent entities') self-interest (survival, pleasure and pain, health, wealth, etc.) and the action relations between people (such as "He hit me first"). The universal goal is then maximizing justice (reward and punish) and minimizing injustice (protect innocents).

- Rational egoism based on maximizing learning.

- Less attention paid to particular principles and more to everyday responsibilities, subjective conscience, and natural and social conformity.

Last, but not least, focus on staying human and protecting humanity. Maybe extending it both upwards (think AGI) and downwards (to some other species and to human embryos), but protecting the weakness necessary for securing truthful social relations. Protecting weakness means suppressing potential power, most importantly unfriendly AI.

)

23 comments


comment by Creutzer · 2014-02-07T12:59:32.942Z · LW(p) · GW(p)

Preference utilitarianism has none of the problems that you mention in the first paragraph and is still about something that people actually care about - as opposed to arbitrarily fetishising life. To be honest, someone with the values you outline registers basically as a paperclip-maximiser to me.

Replies from: EliasHasle, EliasHasle
comment by EliasHasle · 2014-02-08T10:45:17.006Z · LW(p) · GW(p)
Replies from: Kawoomba
comment by Kawoomba · 2014-02-08T20:04:12.319Z · LW(p) · GW(p)

This would be a variant of the "utility monster" thought experiment. The sensible implementations of utilitarianism take care not to fall into such traps.

comment by EliasHasle · 2014-02-07T13:27:02.482Z · LW(p) · GW(p)
Replies from: DaFranker, Viliam_Bur
comment by DaFranker · 2014-02-07T14:15:16.570Z · LW(p) · GW(p)

This seems like it falls face-first, hands-tied-behind-back right in the giant pit of the Repugnant Conclusion and all of its corollaries, including sentience and intelligence and ability-to-enjoy and ability-to-value.

For instance, if I'm a life-maximizer and I don't care about whether the life I create even has the ability to care about anything, and just lives, but has no values or desires or anything even remotely like what humans think of (whatever they do think of) when they think about "values" or "utility"... does that still make me more altruistically ideal and worthy of destroying all humanity?

What about intelligence? If the universe is filled to the Planck with life, but not a single being is intelligent enough to even do anything more than be, is that simply not an issue? What about consciousness?

And, as so troubling in the repugnant conclusion, what if the number of lives is inversely proportional to the maximum quality of each?

Replies from: EliasHasle, EliasHasle
comment by EliasHasle · 2014-02-07T15:16:15.595Z · LW(p) · GW(p)
Replies from: Creutzer
comment by Creutzer · 2014-02-07T16:39:15.396Z · LW(p) · GW(p)

The point of the reference to paperclip-maximisers was that these values are just as alien to me as those of the paperclip-maximiser. "Putting up a fight against nature's descent from order to chaos" is a bizarre terminal value.

Replies from: EliasHasle, EliasHasle
comment by EliasHasle · 2014-02-07T15:02:52.540Z · LW(p) · GW(p)

Consciousness certainly is something it is possible to care about, and caring itself may be important. Some theories of consciousness imply a kind of panpsychism or panexperientialism, though.

I am not exactly talking about maximizing the number of lives, but about maximizing the utilization of free energy for the maximization of the utilization of energy (not for anything else)... I think.

comment by Viliam_Bur · 2014-02-07T15:56:47.189Z · LW(p) · GW(p)

Instead of paperclips, the life-maximizer would probably fill the universe with some simple thing that qualifies as life. Maybe it would be a bacteria-maximizer. Maybe fractal-shaped bacteria, or many different kinds of bacteria, depending on how exactly its goal is specified.

Is this the best "a real altruist" can hope for?

If UFAI were inevitable, I would hope for some approximation of an FAI. If no decent approximation were possible, I would wish for some UFAI that is likely to destroy itself. Among different kinds of smart paperclip-maximizers... uhm, I guess I prefer red paperclips aesthetically, but that's really not so important.

My legacy is important only if there is someone able to enjoy it.

Replies from: EliasHasle
comment by [deleted] · 2014-02-07T15:59:09.869Z · LW(p) · GW(p)

I propose trying to remain as objective as possible, and defining life through physics and information theory (think negentropy, Schrödinger's "What is Life" and related works). I am not skilled in any of these sciences, so my chances of being less wrong in details of this are slim.

Have you tried studying these subjects? I expect that would be vastly more productive than throwing your vague hint of an idea to an internet community. As it stands, it seems you yourself don't know enough to clearly articulate your ideas.

Replies from: EliasHasle
comment by EliasHasle · 2014-02-07T18:43:13.284Z · LW(p) · GW(p)
Replies from: None
comment by [deleted] · 2014-02-10T03:02:19.588Z · LW(p) · GW(p)

Yes, diversity is important, but you have to take into consideration just how much you add to that diversity. When it comes to physics and information theory, there are people here who know everything you know, and there is nothing you know that nobody here knows. Your advantage over other people comes from your experiences, which give you a unique and possibly enlightening perspective. Your ideas, no doubt, come from that perspective. What you should do now is study so that you can find out whether your ideas lead to anything useful, and if they do, so that you can tell us about them in a way we can understand. I recommend studying information theory first. You'll be able to tackle the interesting ideas quickly without having to slog through months of Newtonian mechanics. (Not that mechanics isn't interesting; it just doesn't seem very relevant to your ideas. It's also possible to skip most of the early undergraduate physics curriculum and go straight to your fields of interest, which I expect would include statistical mechanics and the like, but I really don't recommend it unless you have an extremely solid foundation in math. Which you should get anyway if you want to get anywhere, but studying years' worth of prerequisites before the stuff you're actually interested in is demotivating. Start with the stuff you like and fill in as you notice what you don't know. This is difficult but doable with information theory, and ridiculous for, say, quantum field theory.)

And most importantly, fix this:

I am not so good at reading, especially hard subjects in English.

You will get nowhere without being a proficient reader. But I'm sure you'll do fine on this one. The very fact that you're on Less Wrong means you enjoy reading about abstract ideas. Just read a lot and it'll come to you.

Replies from: EliasHasle
comment by EliasHasle · 2014-02-10T08:40:10.960Z · LW(p) · GW(p)
Replies from: None
comment by [deleted] · 2014-02-10T13:52:21.189Z · LW(p) · GW(p)

Usually, college courses are the way to go. Find out if you can sit in at courses at nearby colleges and use the textbooks they use. Failing that, I see Coursera has a course on information theory.

comment by EliasHasle · 2014-02-08T10:59:19.061Z · LW(p) · GW(p)
comment by EliasHasle · 2014-02-08T10:31:58.250Z · LW(p) · GW(p)
Replies from: drethelin
comment by drethelin · 2014-02-08T18:36:24.161Z · LW(p) · GW(p)

Don't do either. Think about it for a good long while, and ask questions and talk about these topics elsewhere.

Replies from: EliasHasle
comment by EliasHasle · 2014-02-08T19:47:36.937Z · LW(p) · GW(p)
Replies from: mbitton24, drethelin
comment by mbitton24 · 2014-02-10T16:31:38.852Z · LW(p) · GW(p)

I think the reason for the downvotes is that people on LW have generally already formulated their ethical views past the point of wanting to speculate about entirely new normative theories.

Your post probably would have received a better reaction had you framed it as a question ("What flaws can you guys find in a utilitarian theory that values the maximization of the amount of computation energy causes before dissolving into high entropy?") rather than as some great breakthrough in moral reasoning.

As for constructive feedback, I think Creutzer's response was pretty much spot on. There are already mainstream normative theories like preference utilitarianism that don't directly value pain and pleasure and yet seem to make more sense than the alternatives you offered.

Also, your post is specifically about ethics in the age of superintelligence, but doesn't mention CEV. If you're going to offer a completely new theory in a field as well-trod as normative ethics, you need to spend more time debunking alternative popular theories and explaining the advantages yours has over them.

comment by drethelin · 2014-02-09T21:56:06.687Z · LW(p) · GW(p)

You can discuss it here, just do it in comments. People don't like overambitious top-level posts. Read up on the relevant posts that already exist on Less Wrong, and comment there and in open threads.