post by [deleted] · GW

This is a link post for

Comments sorted by top scores.

comment by moridinamael · 2017-07-06T17:56:14.547Z · LW(p) · GW(p)

I suspect that the Utilitarian assumption which places happiness and suffering as two ends of the same axis is a mistake. Treating suffering as negative happiness doesn't feel consistent with my experience as a conscious entity.

Reducing suffering seems good. Increasing happiness seems good. Trading off suffering and happiness as if they are opposites, or even mutually fungible, seems highly questionable.

comment by JenniferRM · 2017-07-05T07:56:23.473Z · LW(p) · GW(p)

What if happiness is not our goal?

Replies from: pandrama_duplicate0.6248904072594477, ImmortalRationalist
comment by pandrama_duplicate0.6248904072594477 · 2017-07-12T15:09:51.335Z · LW(p) · GW(p)

Hey, all! An interesting discussion in this thread. Regarding terminal/end goals...

I've come up with a goal framework consisting of 3 parts: 1) TRUTH. Let's get to know as much as we can, basing our decisions on the best available knowledge, never closing our eyes to the truth. 2) KINDNESS. Let's be good to each other, for this is the only kind of life worth living. 3) BLISS. Let's enjoy this all, every moment of it.

(A prerequisite to them all is existence, survival. For me, the idea of infinite or near-infinite survival of me/humankind certainly has appeal, but I'd choose a somewhat shorter existence with more of the above-mentioned 3 things over a somewhat longer existence with less of them. But this is another, longer discussion; let's just say that IF existence already exists, for a shorter or longer time, then that's what it should be like.)

These 3 goals/values are axiomatic; they are what I consciously choose to want. What I want to want. Be they humans, transhumans, AI, whatever - a world that consists more of these things is a better direction to head towards, and a world that has less of them, a worse one. Yet another longer discussion is what the trade-offs between each of these would be, but let's just say for now that the goal is to find harmonious outcomes that have all three. (This way, wireheading-style happiness and harming-others-as-happiness can easily be excluded.)

If anyone wants to discuss something further from here, I'd be glad to.

comment by ImmortalRationalist · 2017-07-06T11:42:28.139Z · LW(p) · GW(p)

If you are a consequentialist, it's the exact same calculation you would use if happiness were your goal, just with different criteria to determine what constitutes a "good" or "bad" world state.
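
For illustration, here is a minimal sketch of that point (a hypothetical toy example, not anything proposed in this thread): the consequentialist decision procedure stays the same, and only the value function over predicted world states is swapped out.

```python
# A toy sketch of consequentialist choice. All names (choose_action,
# hedonic_value, survival_value, the example outcomes) are hypothetical
# illustrations, not an established framework.

def choose_action(actions, predict_outcome, value):
    """Pick the action whose predicted world state scores highest."""
    return max(actions, key=lambda action: value(predict_outcome(action)))

# Toy world model: each action maps to a predicted world state.
outcomes = {
    "wirehead": {"total_happiness": 100, "beings_alive": 1},
    "build_colony": {"total_happiness": 60, "beings_alive": 10_000},
}

def hedonic_value(world):
    # Criterion 1: score a world state by total happiness alone.
    return world["total_happiness"]

def survival_value(world):
    # Criterion 2: score a world state by how many beings exist in it.
    return world["beings_alive"]

# Same calculation either way; only the criterion differs.
print(choose_action(outcomes, outcomes.get, hedonic_value))   # -> "wirehead"
print(choose_action(outcomes, outcomes.get, survival_value))  # -> "build_colony"
```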

Replies from: JenniferRM
comment by JenniferRM · 2017-07-06T22:04:39.304Z · LW(p) · GW(p)

I think you're missing the thrust of my question.

I'm asking something more like "What if mental states are mostly a means of achieving worthwhile consequences, rather than being mostly the consequences that should be cared about in and for themselves?"

It is "consequences" either way.

But what might be called intrinsic hedonism would then be a consequentialism that puts the causal and moral stop sign at "how an action makes people feel" (mostly ignoring the results of the feelings (except to the degree that the feelings might cause other feelings via some series of second order side effects)).

An approach like this suggests that if people in general could reliably achieve an utterly passive and side effect free sort of bliss, that would be the end game... it would be an ideal stable outcome for people to collectively shoot for, and once it was attained the lack of side effects would keep it from being disrupted.

By contrast, hedonic instrumentalism (that I'm mostly advocating) would be a component of some larger consequentialism that is very concerned with what arises because of feelings (like what actions, with what results) and defers the core axiological question about the final value of various world states to a separate (likely independent) theory.

The position of hedonic instrumentalism is basically that happiness that causes behavior with bad results for the world is bad happiness. Happiness that causes behavior with good results in the world is good happiness. And happiness is arguably pointless if it is "sterile"... having no behavioral or world affecting consequences (though this depends on how much control we have over our actions and health via intermediaries other than by wireheading our affective subsystems). What does "good" mean here? That's a separate question.

Basically, the way I'm using the terms here: intrinsic hedonism is "an axiology", but hedonic instrumentalism treats affective states mostly as causal intermediates that lead to large-scale adjustments to the world (through behavior) that can then be judged by some external axiology that pays attention to the whole world and the causal processes that deserve credit for bringing about the good world states.

You might break this down further, where perhaps "strong hedonic instrumentalism" is a claim that in actual practice, humans can (and already have, to some degree) come up with ways to make plans, follow the plans with action, and thereby produce huge amounts of good in the world, all without the need for very much "passion" as a neural/cognitive intermediate.

Then "weak hedonic instrumentalism" would be a claim that maybe such practices exist somewhere, or could exist if we searched for them really hard, and probably we should do that.

Then perhaps "skeptical hedonic instrumentalism" would be a claim that even if such practices don't exist and might not even be worth discovering, still it is the case that intrinsic hedonism is pretty weaksauce as far as axiologies go.

I would not currently say that I'm a strong hedonic instrumentalist, because I am not certain that the relevant mental practices exist as a factual matter... But also I'm just not very impressed by a moral theory that points to a little bit of tissue inside one or more skulls and says that the whole world can go to hell, so long as that neural tissue is in a "happy state".

comment by ignoranceprior · 2017-07-05T00:59:44.711Z · LW(p) · GW(p)

Some people in the EA community have already written a bit about this.

I think this is the kind of thing Mike Johnson (/user/johnsonmx) and Andres Gomez Emilsson (/user/algekalipso) of the Qualia Research Institute are interested in, though they probably take a different approach. See:

Effective Altruism, and building a better QALY

Principia Qualia: blueprint for a new cause area, consciousness research with an eye toward ethics and x-risk

The Foundational Research Institute also takes an interest in the issue, but they tend to advocate an eliminativist, subjectivist view according to which there is no way to objectively determine which beings are conscious because consciousness itself is an essentially contested concept. (I don't know if everyone at FRI agrees with that, but at least a few including Brian Tomasik do.) FRI also has done some work on measuring happiness and suffering.

Animal Charity Evaluators announced in 2016 that they were starting a deep investigation of animal sentience. I don't know if they have done anything since then.

Luke Muehlhauser (/u/lukeprog) wrote an extensive report on consciousness for the Open Philanthropy Project. He has also indicated an interest in further exploring the area of sentience and moral weight. Since phenomenal consciousness is necessary to experience either happiness or suffering, this may fall under the same umbrella as the above research. Lukeprog's LW posts on affective neuroscience are relevant as well (as well as a couple by Yvain).

Replies from: None
comment by [deleted] · 2017-07-05T07:12:31.368Z · LW(p) · GW(p)

This is great info, but it's about a different angle from what I'd like to see.

(I now realise it is totally impossible to infer my angle from my post, so here goes)

I want to describe the causes of happiness with the intentional stance. That is, I want to explain them in terms of beliefs, feelings and intentions.

For example, it seems very relevant that (allegedly) suffering is a result of attachment to outcomes, but I haven't heard any rationalists talk about this.

Replies from: username2
comment by username2 · 2017-07-05T09:21:08.943Z · LW(p) · GW(p)

How is physical torture (or chronic back pain) the result of attachment to outcomes?

Replies from: None
comment by [deleted] · 2017-07-05T09:28:43.384Z · LW(p) · GW(p)

This is a rather extreme case, but people exist who don't suffer from physical damage because they don't identify with their physical body.

Granted, it would take a good 20 years of meditation/brainwashing to get to that state and it's probably not worth it for now.

Luckily, many forms of suffering are based on shallower beliefs.

Replies from: username2, gjm
comment by username2 · 2017-07-05T12:21:58.448Z · LW(p) · GW(p)

That seems a bit like arguing for wireheading as a solution to your problems.

Replies from: Viliam
comment by Viliam · 2017-07-06T13:03:15.857Z · LW(p) · GW(p)

A novice, upon observing a brain scan, said: "Two neural pathways bring signals to the limbic system. One of them is right, and the other one is wrong."

An old master listened to his words, and said: "What is the true nature of the neural pathway that brought you to this conclusion? Monkey, riding an elephant! Which one of them has more Buddha nature?"

The novice was not enlightened, but peer pressure made him pretend he was. The master then collected tuition fees from all bystanders.

Later in the evening, the master took a pain pill, opened Less Wrong on his smartphone, and wrote:

Chronic back pain tortures my body
I put an electrode in my brain
Sakura petals flowing in the breeze

comment by gjm · 2017-07-05T09:49:26.424Z · LW(p) · GW(p)

people exist who don't suffer from physical damage because they don't identify with their physical body

Just to clarify: you don't mean that they don't get physical damage, you mean they don't mind getting physical damage?

Do they, then, not bother doing anything to fix any physical damage they incur? That doesn't seem like it's obviously a good tradeoff.

It seems like what you actually want is, roughly, (1) not to feel pain, (2) to be aware of damage, (3) to prefer not to get damaged, and (4) for that preference not to lead to distress when damage occurs. It sounds as if the people you're talking about have managed 2 and 4 but not 1, and on the face of it their way of dealing with 4 seems like it would (if it actually works) break 3.

comment by sen · 2017-07-05T00:08:35.766Z · LW(p) · GW(p)

"But hold up", you say. "Maybe that's true for special cases involving competing subagents, ..."

I don't see how the existence of subagents complicates things in any substantial way. If the existence of competing subagents is a hindrance to optimality, then one should aim to align or eliminate subagents. (Isn't this one of the functions of meditation?) Obviously this isn't always easy, but the goal is at least clear in this case.

It is nonsensical to treat animal welfare as a special case of happiness and suffering. This is because animal happiness and suffering can only be understood through analogical reasoning, not through logical reasoning. A logical framework of welfare can only be derived from subjects capable of conveying results, since results are subjective. The vast majority of animals, at least so far, cannot convey results, so we need to infer results for animals based on similarities between animal observables and human observables. Such inference is analogical and necessarily based entirely on human welfare.

If you want a theory of happiness and suffering in the intellectual sense (where physical pleasure and suffering are ignored), I suspect what you want is a theory of the ideals towards which people strive. For such an endeavor, I recommend looking into category theory, in which ideals are easily recognizable, and whose ideals seem to very closely (if not perfectly) align with intuitive notions.

comment by turchin · 2017-07-04T22:43:24.178Z · LW(p) · GW(p)

I think that a theory of final goals should not be about happiness and suffering.

My final goals are about infinite evolution etc., and suffering is just a signal that I chose the wrong path or have to call 911. If we fight the signal, we forget to change reality and start to live in an illusion.

Moreover, I think that the value of being alive is more important than the value of happiness.

Replies from: akvadrako, oge
comment by akvadrako · 2017-07-06T20:04:50.939Z · LW(p) · GW(p)

+1

comment by oge · 2017-07-06T22:00:15.505Z · LW(p) · GW(p)

Hey turchin, do you mind explaining how you arrived at your final goals, i.e. infinite evolution?

I'm looking for a way to test which final goal is more right. My current best guess for my final goal is "avoiding pain and promoting play", and I've heard someone say, alternatively, "beauty in the universe and eyes to see it." It would be neat if these different goals were reconcilable in some way.

Replies from: turchin
comment by turchin · 2017-07-06T23:12:11.628Z · LW(p) · GW(p)

At the beginning, I should note that any goal which does not include immortality is stupid, as infinite existence will include the realisation of almost all other goals. So immortality seems to be a good proxy for the best goal. It is a better goal than pleasure. Pleasure is always temporary, and somewhat uninteresting.

However, there is something bigger than immortality. I call it "to become a God". But I can't just jump there, or become enlightened, or whatever; it would not be me. I want to go all the way from now to an infinitely complex, eternal, superintelligent and benevolent being. I think it is the most interesting way to live.

But it is not just fun. It is the meaning of life. And the "meaning" is what makes you work, even if there is no fun ahead. For example, if you care about the survival of your family, it gives you meaning. Or, to put it better, the meaning takes you.

The idea of infinite evolution is also a meaning, for the following reasons. There is a basic drive to evolve in every living being. When you choose a beautiful goal, you want to put your genes in the best possible place and create the best possible children, and this is a drive that moves evolution. (Not a very scientific claim, as sexual selection is not as well accepted as natural selection, so it is more a poetic expression of my feeling about the natural drive to evolve.) If one educates oneself, reads, travels, etc., it is all part of this desire for evolution. Even the first AI will immediately find it and start to self-improve.

The desire to evolve is something like the Nietzschean "will to power". But this will is oriented toward the infinite space of future possible mind states.

I would add that I spent years working on the theory of happiness. I abandoned it, and I feel much better. I don't need to be happy; I just need to be in working condition to pursue my mission: to evolve infinitely (which also includes saving humanity from x-risks and giving life extension to all, so my goal is altruistic).

It may look as though this goal has a smaller prior chance of success, but that is not so, for two reasons: one of them is connected with the appearance of superintelligence in the near term, and the other is some form of observation selection which will prevent me from seeing my failure. If I merge with a superintelligent AI, I could continue my evolution (as could other people).

There is another point of view that I have often heard from LessWrongers: that we should not dare to think about our final goals, as superintelligence will provide us with better goals via CEV. However, there is some circularity here, as the superintelligence has to extract our values from us, and if we do not invest in attempts to articulate them, it could assume that the most popular TV series are the best representation of the world we want to live in. That would be "Game of Thrones" and "The Walking Dead".

Replies from: akvadrako, username2
comment by akvadrako · 2017-07-07T14:26:41.156Z · LW(p) · GW(p)

Like username2, I'm also happy to hear of others with a view in this direction. A couple of years ago I made a brief attempt at starting a modern religion called noendism, with the sole moral of survival. Not necessarily individual survival; on that we may differ.

However, since then my core beliefs have evolved a bit and it's not so simple anymore. For one, after extensive research I've convinced myself that personal immortality is practically guaranteed. For another, one of my biggest worries is surviving but being imprisoned in a powerless situation.

Anyway, those details aren't practically relevant for my day-to-day life; these similar goals all head in the same direction.

comment by username2 · 2017-07-07T05:16:21.432Z · LW(p) · GW(p)

I just want to say you are not alone, as my own goals very closely align with yours (and Jennifer's as she expressed them in this thread as well.) It's nice to know that there are other people working towards "infinite evolution" and viewing mental qualia like pain, suffering, and happiness as merely the signals that they are. Ad astra, turchin.

(Also if you know a more focused sub-group to discuss actually implementing and proactively accomplishing such goals, I'd love to join.)

Replies from: turchin
comment by turchin · 2017-07-07T09:13:58.388Z · LW(p) · GW(p)

I think that all of us share the same subgoal for the next 100 years - preventing x-risks and personal short-term mortality from aging and accidental death.

Elon Musk, with his Neuralink, is looking in a similar direction. He also underlines the importance of "meaning" as something which connects you with others.

I don't know about any suitable sub-groups.

Replies from: Lumifer, username2
comment by Lumifer · 2017-07-07T14:37:15.872Z · LW(p) · GW(p)

share the same subgoal

That's a very weak claim. Humans have lots and lots of (sub)goals. What matters is how high that goal is in the hierarchy or ranking of all the goals.

comment by username2 · 2017-07-07T11:28:56.887Z · LW(p) · GW(p)

Although a disproportionate number of us share those goals, I think you'd be surprised at the diversity of opinion here. I've encountered EA people focused on reducing suffering over personal longevity, fundamentalist environmentalists who value ecological diversity over human life, and those who work on AI 'safety' with the dream of making an overpowering AI overlord that knows best (a dystopian outcome IMHO).

comment by siIver · 2017-07-04T20:23:10.281Z · LW(p) · GW(p)

I don't know what our terminal goals are (more precisely than "positive emotions"). I think it doesn't matter insofar as the answer to "what should we do" is "work on AI alignment" either way. Modulo that, yeah there are some open questions.

On the thesis that suffering requires higher-order cognition in particular, I have to say that sounds incredibly implausible (for what I think are fairly obvious reasons involving evolution).

comment by Dagon · 2017-07-04T21:11:06.201Z · LW(p) · GW(p)

Are you exploring your own goals and preferences, or hoping to understand/enforce "common" goals on others (including animals)?

I applaud research (including time spent at a Buddhist monastery, though you'll need to acknowledge that you'll perceive different emotions if you're exploring it for happiness than if it's your only option in life) and reporting on such. I've mostly accepted that there's no such thing as coherent terminal goals for humans - everything is relative to each of our imagined possible futures.

Replies from: None
comment by [deleted] · 2017-07-05T07:03:23.059Z · LW(p) · GW(p)

I have a strong contrarian hunch that human terminal goals converge as long as you go far enough up the goal chain. What you see in the wild is people having vastly different tastes in how to live life. One likes freedom, the next likes community, and the next is just trying to gain as much power as possible. But I call those subterminal goals, and I think what generated them is the same algorithm with different inputs (different perceived possibilities?). The algorithm itself, which I think optimizes for some proxies of genetic survival like sameness and self-preservation, is the terminal goal. And no, I'm not trying to enforce any values. This isn't about things-in-the-world that ought to make us happy. This is about inner game.

Replies from: Dagon, username2
comment by Dagon · 2017-07-05T21:50:47.414Z · LW(p) · GW(p)

terminal goals converge as long as you go far enough up the goal chain.

Wait. Terminal goals are, by definition, the head of the goal chain. You can't go any further.

My current thesis about human goals is that they contain loops and have no heads - not that terminal goals diverge, but that there is no such thing as a terminal human goal.

comment by username2 · 2017-07-05T09:23:37.300Z · LW(p) · GW(p)

My terminal goals as far as I can tell involve the state of the world and don't involve happiness at all. How does that fit into your framework?

Replies from: entirelyuseless
comment by entirelyuseless · 2017-07-05T13:59:33.604Z · LW(p) · GW(p)

I think you misunderstand your goals. What kind of unhappy life do you see as satisfying your goals?

Replies from: username2
comment by username2 · 2017-07-05T14:55:44.678Z · LW(p) · GW(p)

My own existence; that existence being subject to certain liberties and freedoms (NOT the same as happiness, despite what Thomas Jefferson says); understanding the structure, underlying rules, limits and complexities of the universe at its varying levels; and tiling the universe with a multitude of diverse forms of sentient life.

Edit: Maybe I should have stopped at the first one, though, since that's the most universal and illustrates the point quite nicely. In a game of "would you rather..." I would rather take any outcome that leaves me alive, no matter how hellish, over one where I am dead. No qualification. I don't see how that could be true if happiness were a terminal goal.

Edit2: If happiness were my terminal goal, why not put myself on a perpetual heroin drip? I think the answer is that happiness is just an instrumental goal, like hunger and thirst satisfaction, that lets us focus on the next layer of Maslow's hierarchy. Asking about terminal goals is asking about the top of the hierarchy, which is not happiness.

Replies from: entirelyuseless
comment by entirelyuseless · 2017-07-06T01:10:02.759Z · LW(p) · GW(p)

I would rather take any outcome that leaves me alive, no matter how hellish, over one where I am dead. No qualification. I don't see how that could be true if happiness were a terminal goal.

I don't consider goals to be what people say they would do, but what they would actually do. So I don't accept your idea of your terminal goal unless it is true that if you were in a hellish scenario indefinitely, with a button that would cause you to cease to exist, you would not press the button.

I think we have a factual disagreement here: I think you would press the button, and you think you would not. I think you are mistaken, but there does not seem any way to resolve the disagreement, since we cannot run the test.

Replies from: username2, username2
comment by username2 · 2017-07-07T05:25:48.242Z · LW(p) · GW(p)

This is the same username2 as the sibling.

After spending some time thinking about it, I think there is a constructive response I can make. I believe that brains and the goals they encode are fully malleable, given time and pressure. Everyone breaks under torture, and brainwashing can be used to rewire people to do or want anything at all. If I were actually in a hellish, eternal-suffering outcome, I'm sure that I would eventually break. I am absolutely certain of that. But that is because the person who breaks is no longer the same as the person who exists now. The person that exists now and is typing this response would still rather roll the dice on a hellish outcome than accept certain oblivion. Give me the option of a painless death or _, for literally anything in that blank, and I'll take that other outcome.

Does that make sense?

Replies from: entirelyuseless
comment by entirelyuseless · 2017-07-07T13:43:47.809Z · LW(p) · GW(p)

It makes sense as a description of possible future behavior. That is, if you are allowed to press a button now which will commit you to a hellish existence rather than non-existence, you might actually press it. But in this case I say you have a false belief, namely that a hellish existence is better than non-existence. What you call "breaking" would simply be accepting the truth of the matter.

comment by username2 · 2017-07-06T02:23:04.802Z · LW(p) · GW(p)

I take inspiration from the movie Touching the Void. Do you?

Beyond that I don't know what to say. I've stated my preferences and you've said "I don't believe you." I have no desire to respond to that.

comment by James_Miller · 2017-07-04T20:42:42.675Z · LW(p) · GW(p)

You might be interested in Sam Harris's book The Moral Landscape which argues that science can be used to answer moral questions and determine how we should behave.

Replies from: Elo
comment by Elo · 2017-07-04T21:26:12.651Z · LW(p) · GW(p)

Did he define science?

Replies from: James_Miller
comment by James_Miller · 2017-07-04T22:07:24.114Z · LW(p) · GW(p)

No, but he clearly would have the same definition as most of us. He thinks morality comes from the brain and by learning more about our brains we learn more about morality. He says things like scientists who think science can't answer "should" questions very often act as if should questions have objectively right answers, and our brains seem to store moral beliefs in the same way as they do factual beliefs.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2017-07-05T10:30:01.761Z · LW(p) · GW(p)

He says things like scientists who think science can't answer "should" questions very often act as if should questions have objectively right answers,

Is that supposed to be a bad thing? In any case, the more usual argument is that I can't take "what my brain does" as the last word on the subject.

and our brains seem to store moral beliefs in the same way as they do factual beliefs.

I'm struggling to see the relevance of that. Our brains probably store information about size in the same way that they store information about colour, but that doesn't mean you can infer anything about an object's colour from information about its size. The is-ought gap is one instance of a general rule about information falling into orthogonal categories, not special pleading.

ETA: Just stumbled on:

"Thesis 5 is the idea that one cannot logically derive a conclusion from a set of premises that have nothing to do with it. (The is-ought gap is an example of this)."

https://nintil.com/2017/04/18/still-not-a-zombie-replies-to-commenters/

Replies from: James_Miller
comment by James_Miller · 2017-07-06T07:16:33.681Z · LW(p) · GW(p)

" I can't take "what my brain does" as the last word on the subject."

But what if morality is all about the welfare of brains? I think Harris would say that once you accept that human welfare is the goal, you have crossed the is-ought gap and can use science to determine what is in the best interest of humans. Yes, this is hard and people will disagree, but the same is true of generally accepted scientific questions. Plus, Harris says, lots of people have moral beliefs based on falsifiable premises (God wants this), and so we can use science to evaluate these beliefs.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2017-07-06T10:14:32.588Z · LW(p) · GW(p)

But what if morality is all about the welfare of brains

That's irrelevant. Welfare being about brains doesn't make my brain omniscient about yours. I'm not omniscient about neuroscience, either.

I think Harris would say that once you accept human welfare is the goal you have crossed the is-ought gap and

For some value of "crossed". What does "accept" mean? Not proved, explained, or justified, anyway. If you accept "welfare is about brains" as an unproven axiom, you can derive oughts from ises... within that particular system.

The problem, of course, is that you can construct any number of other ethical systems with different but equally arbitrary premises. So you are not getting convergence on objective truth.