Posts

Arguments against the Orthogonality Thesis 2013-03-10T02:13:37.858Z

Comments

Comment by JonatasMueller on Questions for Moral Realists · 2013-03-16T00:48:59.009Z · LW · GW

Apart from that, what do you think of the other points? If you wish, we could continue a conversation on another online medium.

Comment by JonatasMueller on Questions for Moral Realists · 2013-03-14T13:22:18.181Z · LW · GW

I think that this is an important point: the previously argued normative badness of directly accessible bad conscious experiences is not absolute and definitive, nor does it by itself justify actions. It should weigh on the scale with all other factors involved, even indirect and instrumental ones that could affect intrinsic goodness or badness only in a distant and unclear way.

Comment by JonatasMueller on Questions for Moral Realists · 2013-03-13T23:36:37.832Z · LW · GW

"Bad", "negative", and "unpleasant" all possess partial semantic correspondence, which justifies their being a value.

The normative claims need not be definitive and overruling in this case. Perhaps that is where your resistance to accepting it comes from. In moral realism, a justified preference or instrumental / indirect value that weighs more can overpower a direct feeling as well. This justified preference will ultimately be reducible to direct feelings in the present or in the future, for oneself or for others, though.

Could you give me examples of any reasonable preferences that could not be reducible to good and bad feelings in that sense?

Anyway, there is also the argument from personal identity, which calls for an equalization of values taking into account all subjects (equally valued, ceteris paribus), and their reasoning, if contextually equivalent. This could be in itself a partial refutation of the orthogonality thesis: a refutation in theory and for autonomous and free general superintelligent agents, but not necessarily for imprisoned and tampered-with ones.

Comment by JonatasMueller on Questions for Moral Realists · 2013-03-13T15:24:48.929Z · LW · GW

A bad occurrence must be a bad ethical value.

Why? That's an assertion - it won't convince anyone who doesn't already agree with you. And you're using two meanings of the word "bad" - an unpleasant subjective experience, and badness according to a moral system.

If it is a bad occurrence, then the definition of ethics, at least as I see it (or as this dictionary has it, although dictionary meanings are not authoritative), is defining what is good and bad (values), as normative ethics, and bringing about good and avoiding bad, as applied ethics. It seems to be a matter of including something in a verbal definition, so it seems to be correct. Moral realism would follow. This is not undesirable but helpful, since anti-realism implies that our values are not really valuable, but just fiction.

Minds in general need not have moral systems, or conversely may lack hedonistic feelings, making the argument incomprehensible to them.

I agree; this would be a special case of incomplete knowledge about conscious animals. This would be possible, for instance, in some artificial intelligences, but they might learn about it indirectly by observing animals and humans, and by coming into contact with human culture in various forms. Otherwise, they might become morally anti-realist.

I have a personal moral system that isn't too far removed from the one you're espousing (a bit more emphasis on preference).

Could you explain a bit this emphasis on preference?

Comment by JonatasMueller on Questions for Moral Realists · 2013-03-12T23:51:34.916Z · LW · GW

This is a relevant discussion in another thread, by the way:

http://lesswrong.com/lw/gu1/decision_theory_faq/8lt9?context=3

Comment by JonatasMueller on Decision Theory FAQ · 2013-03-12T22:39:02.146Z · LW · GW

I thought it was relevant to this; if not, then what was meant by motivation?

The inherent-desirableness of happiness is your mind reifying the internal data describing its motivation to do something

Consciousness is that of which we can be most certain, and I would rather think that we are living in a virtual world within a universe with other, alien physical laws than that consciousness itself is not real. If it is not reducible to nonmental facts, then nonmental facts don't seem to account for everything that is relevant.

From my perspective, this is "supernatural" because your story inherently revolves around mental facts you're not allowed to reduce to nonmental facts - any reduction to nonmental facts will let us construct a mind that doesn't care once the qualia aren't mysteriously irreducibly compelling anymore.

Comment by JonatasMueller on Decision Theory FAQ · 2013-03-12T22:22:53.248Z · LW · GW

It's a reasonably good description, though wanting and liking seem to be neurologically separate, such that liking does not necessarily reflect a motivation, nor vice-versa (see Not for the sake of pleasure alone). Think of the pleasurable but non-motivating effect of opioids such as heroin. Even in cases in which wanting and liking occur together, this does not necessarily reduce the liking aspect to mere wanting.

Liking and disliking, good and bad feelings as qualia, especially in very intense amounts, seem to be intrinsically so to those who are immediately feeling them. Reasoning could extend and generalize this.

Comment by JonatasMueller on Decision Theory FAQ · 2013-03-12T21:22:35.945Z · LW · GW

Related discussion: http://lesswrong.com/lw/gnb/questions_for_moral_realists/8ls8?context=3

Comment by JonatasMueller on Questions for Moral Realists · 2013-03-12T20:43:59.526Z · LW · GW

I slightly disagree with that on a personal moral level, and entirely disagree with the assertion that it's a logical transition.

Could you explain more at length for me?

The feeling of badness is something bad (imagine yourself or someone being tortured and tell me it's not bad), and it is a real occurrence, because conscious contents are real occurrences. It is then a bad occurrence. A bad occurrence must be a bad ethical value. All this is data: since conscious perceptions have a directly accessible nature, they are "is", and the "ought" is part of the definition of ethical value, namely that what is good ought to be promoted, and what is bad ought to be avoided.

This does not mean that we should seek direct good and avoid direct bad only in the immediate present, such as partying without end; it means that we should seek it in the present and the future, pursuing indirect values such as working, learning, and promoting peace and equality, so that the future, even in the longest term, will have direct value.

(To the anonymous users who down-voted this: do me the favor of posting a comment saying why you disagree, if you are sure that you are right and I am wrong; otherwise it's just rudeness. The down-vote should be used as a censoring mechanism for inappropriate posts rather than to express disagreement with a reasonable point of view. I'm using my time to freely explain this as a favor to whoever is reading, and it's a bit insulting and bad-mannered to down-vote it.)

Comment by JonatasMueller on Questions for Moral Realists · 2013-03-12T17:39:19.300Z · LW · GW

I agree with what you agree with.

Did you read my article Arguments against the Orthogonality Thesis?

I think that the argument for the intrinsic value (goodness or badness) of conscious feelings goes like this:

  1. Conscious experiences are real, and are the most certain data about the world, because they are directly accessible and don't depend on inference, unlike the external world as we perceive it. It would not be possible to dismiss conscious experiences as unreal by inferring that they are not part of the external world, since they are more certain than the external world is. The external world could be an illusion, and we could be living inside a simulated virtual world, in an underlying universe that is alien and has different physical laws.

  2. Even though conscious experiences are representations (sometimes of external physical states, sometimes of abstract internal states), apart from what they represent they do exist in themselves as real phenomena (likely physical).

  3. Conscious experiences can be felt as intrinsically neutral, good, or bad in value, sometimes intensely so. For example, the bad value of having deep surgery without anesthesia is felt as intrinsically and intensely bad, and this badness is a real occurrence in the world. Likewise, an experience of extreme success or pleasure is intrinsically felt as good, and this goodness is a real occurrence in the world.

  4. Ethical value is, by definition, what is good and what is bad. We have directly accessible data of occurrences of intrinsic goodness and badness. They are ethical value.

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-12T17:20:50.015Z · LW · GW

Why it would not adopt paperclip (or random value) maximization as a goal is explained more at length in the article. There is more than one reason. We're considering a generally superintelligent agent, assuming above-human philosophical capacity. In terms of personal identity, there is a lack of personal identities, so it would be rational to take an objective, impersonal view, taking account of the values and reasonings of the relevant different beings. In terms of meta-ethics, there is moral realism, and values can be reduced to the quality of conscious experience, so it would have this as its goal. If one takes moral anti-realism to be true, then at least for the type of agent we are considering, a lack of real values would be understood as a lack of real goals, and could lead to the tentative goal of seeking more knowledge in order to find a real goal, or to having no reason to do anything in particular (this is still susceptible to the considerations from personal identity). I argue against moral anti-realism.

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-12T10:55:39.014Z · LW · GW

I see. I think that ethics could be taken as, even individually, the formal definition of one's goals and how to reach them, although in the orthogonality thesis ethics is taken at a collective level. Since personal identities cannot be sustained by logic, the distinction between individual goals and societal goals becomes trivial, and both are mutually inclusive.

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-12T10:44:31.914Z · LW · GW

What sort of cognitive and physical actions would make you think a robot is superintelligent?

For general superintelligence, proving performance in all cognitive areas that surpasses that of the highest-performing humans. This naturally includes philosophy, which is about the most essential type of reasoning.

What fails in the program when one tries to build a robot that takes both the paperclip-maximizing actions and superintelligent actions?

It could have a narrow superintelligence, like a calculating machine, surpassing human cognitive abilities in some areas but not in others. If it had general superintelligence, then it would not, of its own accord, pursue paperclip maximization as a goal, because this would be terribly stupid, philosophically.

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-11T19:12:07.142Z · LW · GW

I'm not sure I'm using rational in that sense; I could substitute "being rational" with "using reason", "thinking intelligently", "making sense", or "being logical", which seems to follow from being generally superintelligent. Ethics is the study of defining what ought to be done and how to achieve it, so it seems to follow from general superintelligence as well. The trickier part seems to be defining ethics. Humans often act on motivations which are not based on formal ethics, but ethics is like a formal elaboration of what one's (or everyone's) motivations and actions ought to be.

Comment by JonatasMueller on Questions for Moral Realists · 2013-03-11T19:01:36.506Z · LW · GW

I don't think that someone can disagree with it (good conscious feelings are intrinsically good; bad conscious feelings are intrinsically bad), because it would be akin to disagreeing that, for instance, the color green feels greenish. Do you disagree with it?

Because I have certain beliefs (broadly, but not universally, shared). But I don't see how any of those beliefs can be logically deduced.

Can you elaborate? I don't understand... Many valid wants or beliefs can ultimately be reduced to good and bad feelings, in the present or future, for oneself or for others, as instrumental values, such as peace, learning, curiosity, love, security, longevity, health, science...

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-11T18:42:56.523Z · LW · GW

What is defined as ethically good is by definition what ought to be done, at least rationally. Some agents, such as humans, often don't act rationally, due to a conflict of reason with evolutionarily selected motivations, which really serve their own evolutionary ends (e.g. have as many children as possible), not ours. This shouldn't happen for much more intelligent agents, with stronger rationality (and possibly a capability to self-modify).

Comment by JonatasMueller on Questions for Moral Realists · 2013-03-11T18:32:41.522Z · LW · GW

Sorry, I thought you already understood why wanting can be wrong.

Example 1: imagine a person named Eliezer walks to an ice cream stand, and picks a new flavor X. Eliezer wants to try the flavor X of ice cream. Eliezer buys it and eats it. The taste is awful and Eliezer vomits it. Eliezer concludes that wanting can be wrong and that it is different from liking in this sense.

Example 2: imagine Eliezer watched a movie in which some homophobic gangsters go about killing homosexuals. Eliezer gets inspired and wants to kill homosexuals too, so he picks a knife and finds a nice looking young man and prepares to torture and kill him. Eliezer looks at the muscular body of the young man, and starts to feel homosexual urges and desires, and instead he makes love with the homosexual young man. Eliezer concludes that he wanted something wrong and that he had been a bigot and homosexual all along, liking men, but not wanting to kill them.

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-11T18:15:42.623Z · LW · GW

Their motivation (or what they care about) should be in line with their rationality. This doesn't happen with humans, because we have evolutionarily selected and primitive motivations coupled with a weak rationality, but it should not happen with much more intelligent and designed (possibly self-modifying) agents. Logically, one should care about what one's rationality tells one.

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-11T18:05:40.456Z · LW · GW

We seem to be moving from personal identity to ethics. In ethics it is defined that good is what ought to be, and bad is what ought not to be. Ethics is about defining values (what is good and ought to be), and how to cause them.

Good and bad feelings are good and bad as direct data, being direct perceptions, and this quality they have is not an inference. Their good and bad quality is directly accessible by consciousness, as data with the highest epistemic certainty. Being data they are "is", and being good and bad, under the above definition of ethics, they are "ought" too. This is a special status that only good and bad feelings have, and no other values do.

Comment by JonatasMueller on Questions for Moral Realists · 2013-03-11T14:28:39.654Z · LW · GW

Indeed, but what separates wanting from liking is that preferences can be wrong, since they require no empirical basis, while liking in itself cannot be wrong, and it has an empirical basis.

When something is rightfully wanted, that something has a justification. Liking, understood as good feelings, is one justification; avoiding bad feelings is another; and this can be causally extended to include instrumental actions that will cause these in indirect ways.

Comment by JonatasMueller on Questions for Moral Realists · 2013-03-11T13:53:31.331Z · LW · GW

One argument is that from empiricism or verification. Wanting can be and often is wrong. Simple examples can show this, but I assume that they won't be needed, because you understand. Liking can be misleading in terms of motivation or in terms of the external object which is liked, but it cannot be misleading or wrong in itself, in that it is a good feeling. For instance, a person could like to use cocaine, and this might be misleading in terms of being a wrong motivation, one that in the long term would prove destructive and dislikeable. However, immediately, in terms of the sensation of liking itself, and all else being equal, it is certainly good, and this is directly verifiable by consciousness.

Taking this into account, some would argue for wanting values X, Y, or Z, but not values A, B, or C. This is another matter. I'm arguing that good and bad feelings are the direct values that have validity and should be wanted. Other valid values are those that are instrumentally reducible to these, which are very many, and cover most of what we do.

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-11T13:31:38.052Z · LW · GW

I should have explained things at much more length. The intelligence I use in that context is general superintelligence, defined as that which surpasses human intelligence in all domains. Why is a native capacity for sociability implied?

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-11T13:08:26.640Z · LW · GW

A "God's-eye view", as David Pearce says, is an impersonal view, an objective rather than subjective view, a view that does not privilege one personal perspective over another, but take the universe as a whole as its point of reference. This comes from the argued non-existence of personal identities. To check arguments on this, see this comment.

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-11T12:55:22.321Z · LW · GW

In practical terms, it's very hard to change people's intuitive opinions on this, even after many philosophical arguments. Those statements of mine don't touch the subject; for that, the literature should be read, for instance the essay I wrote about it. But if we consider general superintelligences, then they could easily understand it and put it coherently into practice. It seems that this can be naturally expected, except perhaps in practice under some specific cases of human intervention.

Comment by JonatasMueller on Questions for Moral Realists · 2013-03-11T12:45:43.534Z · LW · GW

Hi Stuart,

Why? This is the whole core of the disagreement, and you're zooming over it way too fast. Even for ourselves, our wanting systems and our liking systems are not well aligned - we want things we don't like, and vice-versa. A preference utilitarian would say our wants are the most important; you seem to disagree, focusing on the good/bad aspect instead. But what logical reason would there be to follow one or the other?

Indeed, wanting and liking do not always correspond, also from a neurological perspective. Wanting involves planning, and planning often involves error. We often want things mistakenly, be it for evolutionarily selected reasons, cultural reasons, or just bad planning. Liking is what matters, because it can be immediately and directly determined to be good, with the highest certainty. This is an empirical confirmation of its value, while wanting is like an empty promise.

We have good and bad feelings associated with certain evolutionarily or culturally determined things. Theoretically, good and bad feelings could be associated with any inputs. The inputs don't matter, nor does wanting necessarily matter, nor do innate intuitions of morality. The only thing that has direct value, empirically confirmed, is good and bad feelings.

if there was a single consciousness in the universe, then maybe your argument could get off the ground. But we have many current and potential consciousnesses, with competing values and conscious experiences.

Well noticed. That comment was not well elaborated and is not a complete explanation. For the point you mentioned, it is also necessary to consider the philosophy of personal identity, which is a point that I examine in my more complete essay on Less Wrong, and also in my essay Universal Identity.

But in a way, that's entirely a moot point. Your claim is that a certain ethics logically follows from our conscious reality. There I must ask you to prove it. State your assumptions, show your claims, present the deductions. You'll need to do that, before we can start critiquing your position properly.

I have a small essay written on ethics, but it's a detailed topic, and my article may be too concise, assuming much previous reading on the subject. It is here. I propose that we instead focus on questions as they come up.

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-11T12:24:17.067Z · LW · GW

Indeed, a robot could be built that makes paperclips or pretty much anything, for instance a paperclip-assembling machine. That's an issue of practical implementation and not what the essay has been about, as I mention in the first paragraph and concede in the last.

The issue I argued about is that generally superintelligent agents, of their own will, without certain outside pressures from non-superintelligent agents, would understand personal identity and meta-ethics, leading them to converge to the same values and ethics. This is for two reasons: (1) they would need to take a "God's eye view" and value all perspectives besides their own, and (2) they would settle on moral realism, with the same values, namely good and bad feelings, in the present or future.

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-11T12:11:00.935Z · LW · GW

I read that and similar articles. I deliberately didn't say pleasure or happiness, but "reduced to good and bad feelings", including other feelings that might be deemed good, such as love, curiosity, self-esteem, meaningfulness..., and including the present and the future. The part about the future includes any instrumental actions in the present which are taken with the intention of obtaining good feelings in the future, for oneself or for others.

This should cover visiting Costa Rica, having good sex, and helping loved ones succeed, which are the examples given in that essay against the simple example of Nozick's experience machine. The experience machine is intuitively deemed bad because it precludes acting in order to instrumentally increase good feelings in the future and prevent bad feelings of oneself or others, and because pleasure is not what good feelings are all about. It is a very narrow part of the whole spectrum of good experiences one can have, precluding many others mentioned, and this makes it aversive.

The part about wanting and liking has neurological interest and has been well researched. It is not relevant to this question, because values need not correspond with wanting; they can just correspond with liking. Immediate liking is value; wanting is often mistaken. We want things which are evolutionarily or culturally caused, but that are not good for us. Wanting is like an empty promise, while liking can be empirically and directly verified to be good.

Any valid values reduce to good and bad feelings, for oneself or for others, in the present or in the future. This can be said of survival, learning, working, loving, protecting, sight-seeing, etc.

I say it again: I dare Eliezer (or others) to defend and justify a value that cannot be reduced to good and bad feelings.

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-11T03:59:53.943Z · LW · GW

Indeed, epiphenomenalism can seemingly be easily disproved by its implication that, if it were true, we wouldn't be able to talk about our consciousness. As I said in the essay, though, consciousness is that of which we can be most certain, by its directly accessible nature, and I would rather think that we are living in a virtual world within a universe with other, alien physical laws than that consciousness itself is not real.

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-11T03:40:09.688Z · LW · GW

A certain machine could perhaps be programmed with a utility function over causal continuity, but a privileged stance for one's own values wouldn't be rational lacking a personal identity, in an objective "God's eye view", as David Pearce says. That would call at least for something like coherent extrapolated volition, at least including agents with contextually equivalent reasoning capacity. Note that I use "at least" twice, to accommodate your ethical views. More sensible would be to include not only humans, but all known sentient perspectives, because the ethical value of subjects arguably depends more on sentience than on reasoning capacity.

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-11T03:34:43.804Z · LW · GW

I argue (in this article) that the you (consciousness) in one second bears little resemblance to the you in the next second.

In the subatomic world, the smallest passage of time changes our composition and arrangement to a great degree, instantly. In physical terms, the frequency of discrete change at this level, even in just one second, is a number with 44 digits, so vast as to be unimaginable... In comparison, the amount of seconds that have passed since the start of the universe, estimated at 13.5 billion years ago, is a number with just 18 digits. At the most fundamental level, our structure, which seems outwardly stable, moves at a staggering pace, like furiously boiling water. Many of our particles are continually lost, and new ones are acquired, as blood frantically keeps matter flowing in and out of each cell.
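(As a sanity check on these magnitudes, here is a minimal arithmetic sketch in Python. It assumes, on my part, that the "44 digits" refers to the number of Planck-time intervals in one second; the quoted essay does not name the underlying physical constant.)

```python
import math

# Assumption: the "44-digit" figure is the number of Planck-time
# intervals in one second, i.e. the inverse of the Planck time.
PLANCK_TIME_S = 5.39e-44  # Planck time, in seconds
planck_ticks_per_second = 1 / PLANCK_TIME_S

# Seconds elapsed since the start of the universe, using the
# 13.5-billion-year figure from the quoted essay.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
seconds_since_big_bang = 13.5e9 * SECONDS_PER_YEAR

def digit_count(x: float) -> int:
    """Number of decimal digits in the integer part of x."""
    return math.floor(math.log10(x)) + 1

print(digit_count(planck_ticks_per_second))  # -> 44
print(digit_count(seconds_since_big_bang))   # -> 18
```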

I also explain why you can't have partial identity in that paper, and that argues against the position you took (which is similar to that explained by philosopher David Lewis in his paper Survival and Identity).

If we were to be defined as a precise set of particles or an arrangement thereof, its permanence in time would be implausibly short-lived; we would be born dead. If this were one's personal identity, it would have been set for the first time at one's baby state, with one's first sparkle of consciousness. At a subatomic level, each second is like many trillions of years in the macroscopic world, and our primordial state as babies would be incredibly short-lived. In the blink of an eye, our similarity to what that personal identity was would be reduced to a tiny fraction, if any, by the sheer magnitude of change. That we could survive, in a sense, as a tiny fraction of what we once were, would be a hypothesis that goes against our experience, because we always feel consciousness as an integrated whole, not as a vanishing separated fraction. We either exist completely or not at all.

I recommend reading it, whether you agree with the essay or not. The advanced and tenable philosophical positions on this subject are two: empty individualism, characterized by Derek Parfit in his book "Reasons and Persons", and open individualism, for which there are better arguments, explained in 4 pages in my essay and at more length in Daniel Kolak's book "I Am You: The Metaphysical Foundations for Global Ethics".

For another interesting take on the subject here on Less Wrong, check Kaj Sotala's An attempt to dissolve subjective expectation and personal identity.

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-11T03:15:25.870Z · LW · GW

Being open to criticism is very important, and the bias to devalue it should be resisted. Perhaps I defined the truth conditions later on (see below).

"There is a difference between valid and invalid human values, which is the ground of justification for moral realism: valid values have an epistemological justification, while invalid ones are based on arbitrary choice or intuition. The epistemological justification of valid values occurs by that part of our experiences which has a direct certainty, as opposed to indirect: conscious experiences in themselves."

I find your texts here on ethics incomplete and poor (for instance, this one; it shows a lack of understanding of the topic and is naive). I dare you to defend and justify a value that cannot be reduced to good and bad feelings.

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-10T13:41:40.538Z · LW · GW

Indeed, the orthogonality thesis in that practical sense is not what this essay is about, as I explain in the first paragraph and concede in the last. This article addresses the assumed orthogonality between ethics and intelligence, particularly general superintelligence, based on considerations from meta-ethics and personal identity, and argues for convergence.

There seems to be remarkably little argumentation in favor of this convergence, which is utterly surprising to me, given how clear and straightforward I take it to be, though it requires an understanding of meta-ethics and of personal identity which is rare. Eliezer has, at least in the past, stated that he had doubts regarding both philosophical topics, while I claim to understand them very well. These doubts should merit an examination of the matter I'm presenting.

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-10T08:37:38.844Z · LW · GW

What if I changed the causation chain in this example, and instead of having the antagonistic values caused by the identical agents themselves, I had myself inserted the antagonistic values into their memories while performing their replication? I could have picked the antagonistic value from the mind of a different person and put it into one of the replicas, complete with a small reasoning or justification in its memory.

They would both wake up, one with one value in their memory, and the other with an antagonistic value. What would make one of them correct and not the other? Could both values be correct? The issue here is questioning whether any values whatsoever can be validly held by similar beings, or whether a good justification is needed. In CEV, Eliezer proposed that we can make errors about our values, and that they should be extrapolated according to the reasoning we would make if we had higher intelligence.

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-10T06:02:38.094Z · LW · GW

OK, that is the interpretation I found less convincing. The bare axiomatic normative claim that all the desires and moral intuitions not concerned with pleasure as such are errors with respect to maximization of pleasure isn't an argument for adopting that standard.

The argument for adopting that standard was based on the epistemological prevalence of the goodness and badness of good and bad feelings, while other hypothetical intrinsic values could be so only by much less certain inference. But I'd also argue that the nature of how the world is perceived necessitates conscious subjects, and reason that, in their absence, or in a universe eternally without consciousness, nothing could possibly matter ethically. Consciousness is therefore given special status, and good and bad relate to it.

And given the admission that biological creatures can and do want things other than pleasure, have other moral intuitions and motivations, and the knowledge that we can and do make computer programs with preferences defined over some model of their environment that do not route through an equivalent of pleasure and pain, the connection from moral philosophy to empirical prediction is on shakier ground than the purely normative assertions.

Biological creatures indeed have other preferences, but I classify those in the error category, as Eliezer justifies in CEV. Their validity could be argued on a case-by-case basis, though. Machines could be made unconscious or without the capacity for good and bad feelings; they would then need to infer the existence of these by observing living organisms and their culture (in this case, their certainty would be similar to that of their world model), or possibly by being very intelligent and deducing it from scratch (if this is even possible); otherwise they might be morally anti-realist. In the absence of real values, I suppose, they would have no logical reason to act one way or another, considering meta-ethics.

Once one is valuing things in a model of the world, why stop at your particular axiom? And people do have reactions of approval to their mental models of an equal society, or a diversity of goods, or perfectionism, which are directly experienced.

You can say that you might pursue something vaguely like X, which people feel is morally good or obligatory as such, is instrumental in pursuit of Y. But that doesn't change the pursuit of X, even in conflict with Y.

I think that these values need to be justified somehow. I see them as instrumental values, for their tendency to lead to the direct values of good feelings, which take a special status by being directly verified as good. Decision theory and practical ethics are very complex, and sometimes one would take an instrumentally valuable action even to the detriment of a direct value, if the action is expected to yield even more direct value in the future. For instance, one might spend a lot of time learning philosophical topics, even to the detriment of direct pleasure, if one sees it as likely to be important to the world, causing good feelings or preventing bad feelings in an unclear but potentially significant way.

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-10T05:15:19.722Z · LW · GW

Hi Carl,

Thank you for a thoughtful comment. I am not used to writing didactically, so forgive my excessive conciseness.

You understood my argument well, in the 5 points, with the detail that I define value as good and bad feelings rather than pleasure, happiness, suffering and pain. The former definition allows for subjective variation and universality, while the latter utilitarian definition is too narrow and anthropocentric, and could be contested on these grounds.

What kind of value do you mean here? Impersonal ethical value? Impact on behavior? Different sorts of pleasurable and painful experience affect motivation and behavior differently, and motivation does not respond to pleasure or pain as such, but to some discounted transformation thereof. E.g. people will accept a pain 1 hour hence in exchange for a reward immediately when they would not take the reverse deal.

I mean ethical value, but not necessarily impact on behavior or motivation. Indeed, people do accept trades between good and bad feelings, and they can be biased in terms of motivation.

Does this apply to other directly felt moral intuitions, like anger or fairness? Later you say that our best theories show that personal identity is an illusion, despite our perception of continued existence over time, and so we would discard it. What distinguishes the two?

It does not apply in the same way to other moral intuitions, like anger or fairness. The latter are directly felt in some way, and in this sense they are real, but they also have a context related to the world that is indirectly felt and could be false. Anger, for instance, can be directly felt as a bad feeling, but its causation and subsequent behavioral motivation relate to the outside world, and are at another level of certainty (not as certain). Likewise, it could be said that whatever caused good or bad feelings (such as kissing a woman) is not universal and not as certain as the good feeling itself which it caused in a person, which was directly verified by them. This person doesn't know whether he is inside a Matrix virtual world and whether the woman was really a woman or just computer data, but he knows that the kiss led to directly felt good feelings. The distinction is that one relates to the outside world, and the other relates to the experience itself.

How are good and bad feelings physical occurrences in a way that knowledge or health or equality or the existence of other outcomes that people desire are not?

Good question. The goodness and badness of feelings is directly felt as such, and is a datum of the highest certainty about the world, while the goodness or badness of these other physical occurrences (which are indirectly felt) is not data but inference, which, though generally trustworthy, needs to be justified eventually by being connected to intrinsic values.

Earlier you privileged pleasure as a value because it is directly experienced. But an organism directly experiences, and is conditioned or reinforced by its own pain or pleasure.

Indeed. However, in acting on the world, an organism has to assume a model of the world which it is going to trust as true, in order to act ethically. In this model of the world, in the world as it appears to us, the organism would consider the nature of personal identity and not privilege its own viewpoint. However, you have a point that, strictly, one's own experiences are more certain than those of others. The difference in this certainty could be thought of as the difference between direct conscious feelings and physical theories. Let's say that the former get ascribed a certainty of 100%, while the latter get 95%. The organism might then put 5% more value on its own experiences, not fundamentally, but based on the solipsistic hypothesis that other people are zombies, or that they don't really exist.
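(A minimal illustrative sketch of this certainty-weighted valuation, assuming the 100%/95% figures above; the welfare numbers and function names are hypothetical placeholders, not anything from the original discussion.)

```python
# Sketch of valuing experiences discounted by their epistemic certainty.
# Certainty figures follow the comment above: one's own experiences are
# taken as fully certain, those of others slightly less so.
CERTAINTY_OWN = 1.00     # direct conscious feelings
CERTAINTY_OTHERS = 0.95  # inferred via a physical model of other minds

def weighted_value(own_welfare: float, others_welfare: list[float]) -> float:
    """Total value, with each experience discounted by its certainty."""
    return CERTAINTY_OWN * own_welfare + CERTAINTY_OTHERS * sum(others_welfare)

# An otherwise identical experience of another subject counts 5% less,
# reflecting the residual solipsistic doubt described above.
print(weighted_value(10.0, [10.0]))  # -> 19.5 rather than 20.0
```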

Error in what sense? If desires are mostly learned through reward and anticipations of reward, one can note when the resulting desires do not maximize some metric of personal pleasure or pain (e.g. to be remembered after one dies, or for equality). But why identify with the usual tendency of reinforcement learning rather than the actual attitudes and desires one has?

I meant, in that case, intrinsic values. But what you meant, for instance for equality, can be thought of as instrumental values. Instrumental values are taken, as heuristics or in decision theory, as patterns of behavior that usually lead to intrinsic values. Indeed, in order to achieve direct or intrinsic value, the best way tends to be following instrumental values, such as working, learning, increasing longevity... I argue that the validity of these can be examined by the extent to which they lead to direct value, namely good and bad feelings, in a non-personal way.

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-10T04:37:02.017Z · LW · GW

Where do you include environmental and cultural influences?

While these vary, I don't see legitimate values that could be affected by them. Could you provide examples of such values?

This does not follow. Maybe you need to give some examples. What do you mean by "correct" and "error" here?

Imagine that two exact replicas of a person exist in different locations, exactly the same except for an antagonism in one of their values. Both could not be correct at the same time about that value. I mean error in the sense, for example, that Eliezer employs in Coherent Extrapolated Volition: the error that comes from insufficient intelligence in thinking about our values.

This is a contentious attempt to convert everything to hedons. People have multiple contradictory impulses, desires and motives which shape their actions, often not by "maximizing good feelings".

Except in the aforementioned sense of error, could you provide examples of legitimate values that don't reduce to good and bad feelings?

Really? Been to the Youtube and other video sites lately?

I think that the literature on masochism is better evidence than YouTube videos, which could show isolated incidents involving people who are not regularly masochistic. If you have evidence from those sites, I'd like to see it.

This is wrong in so many ways, unless you define reality as "conscious experiences in themselves", which is rather non-standard. In any case, unless you are a dualist, you can probably agree that your conscious experiences can be virtual as much as anything else.

Even being virtual, or illusory, they would still be real occurrences, and real illusions, being directly felt. I mean that in the sense of Nick Bostrom's simulation argument.

Uhh, that post sucked as well.

Perhaps it was not sufficiently explained, but check this introduction on Less Wrong, then, or the comment I made below about it:

http://lesswrong.com/lw/19d/the_anthropic_trilemma/

I read many sequences and understand them well, and I assure you that, if this post seems not to make sense, it is because it was not explained at sufficient length.

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-10T03:53:46.453Z · LW · GW

For the question of personal identity, another essay, that was posted on Less Wrong by Eliezer, is here:

http://lesswrong.com/lw/19d/the_anthropic_trilemma/

However, while this essay presents the issue, it admittedly does not solve it, and expresses doubt that it would be solved in this forum. The solution exists in philosophy, though. For example, in the first essay I linked to, in Daniel Kolak's work "I Am You: The Metaphysical Foundations for Global Ethics", or also, in a partial form, in Derek Parfit's work "Reasons and Persons".

Comment by JonatasMueller on Arguments against the Orthogonality Thesis · 2013-03-10T03:22:05.171Z · LW · GW

I tend to be a very concise writer, assuming a quick understanding from the reader, and I don't perceive very well what is obvious and what isn't to people. Thank you for the advice. Please point to specific parts that you would like further explaining or expanding, and I will provide it.

Comment by JonatasMueller on General purpose intelligence: arguing the Orthogonality thesis · 2013-03-04T23:25:08.744Z · LW · GW

David, what are those multiple possible defeaters for convergence? As I see it, the practical defeaters that exist still don't affect the convergence thesis; they are just possible practical impediments, from unintelligent agents, to the realization of the goals of convergence.

Comment by JonatasMueller on General purpose intelligence: arguing the Orthogonality thesis · 2013-03-04T22:06:34.275Z · LW · GW

Another argument for moral realism:

  1. Let's imagine starting with a blank slate, the physical universe, and building ethical value in it. Hypothetically, in a meta-ethical scenario of error theory (which I assume is where you're coming from), or of possible variability of values, this kind of "bottom-up" reasoning would make sense for more intelligent agents that could alter their own values, so that they could find, from the bottom up, values that could be more optimally produced; this kind of reasoning would also make sense for them in order to fundamentally understand meta-ethics and the nature of value.

  2. In order to connect to the production of some genuine ethical value in this universe, arguably some things would have to be built the same way, with certain conditions, while hypothetically other things could vary in the value production chain. This is because ethical value could not be absolutely anything, otherwise those things could not be genuinely valuable. If everything could be fundamentally valuable, then nothing really would be, because value requires a discrimination in terms of better and worse. Somewhere in the value production chain, some things would have to be constant in order for there to be genuine value. Do you agree so far?

  3. If some things have to be constant in the value production chain, and some things could hypothetically vary, then the constant things would be the really important ones in creating value, and the variable things would be accessory, and could be randomly specified, with some degree of freedom, by those analyzing value production from a "bottom-up" perspective in a physical universe. It would seem, therefore, that the constant things are likely what is truly valuable, while the variable and accessory things could be mere triggers or engines in the value production chain.

  4. I argue that, in the case of humans and of this universe, the constant things are what really constitute value. There is some constant and universal value in the universe, or meta-ethical moral realism. The variable things, which are accessory triggers or engines in the value production chain, are preferences or tastes. Those preferences that are valid are those that ultimately connect to what is constant in producing value.

  5. Now, from an empirical perspective, what ethical value has in common in this universe is its relationship to consciousness. What happens in totally unconscious regions of the universe doesn't have any ethical relevance in itself, and only consciousness can ultimately have ethical value.

  6. Consciousness is a peculiar physical phenomenon. It is representational in its nature, and as a representation it can freely differ or vary from the objects it represents. This difference or variability could be, for example, representing a wavelength of light in the visual field as a phenomenal color, or dreaming of unicorns, both of which transcend the original sources of data in the physical universe. The existence of consciousness is the most epistemologically certain thing there is for conscious observers; this certainty is higher than that of any objects in this universe, because while objects could be illusions arising from the aforementioned variability in representation, consciousness itself is the most directly verifiable phenomenon. Therefore, the existence of conscious perceptions is more certain than the physical universe or than any physical theories, for example. Those could hypothetically be the product of false world simulations.

  7. Consciousness can produce ethical value due to the transcendental freedom afforded by its representational nature, which is the same freedom that allows the existence of phenomenal colors.

  8. Ethics is about defining value, what is good and bad, and how to produce it. If consciousness is what contains ethical value, then this ethical value lies in good and bad conscious experiences.

  9. Variability in the production chain of good and bad conscious experiences for humans is accessory, as preferences and tastes, and in their ethical dimension they ultimately connect to good and bad conscious experiences. From a physical perspective, it could be said that the direct production of good and bad conscious experiences by nerve cells in brains is what constitutes direct ethical value, and that preferences are accessory triggers or engines that lead to this ethical value production. From paragraph 8, it follows that preferences are only ethically valid insofar as they connect to good and bad conscious experiences, in the present or future. People's brains are like labyrinths with different paths ultimately leading to the production of good and bad feelings, but what matters is that production, not the initial triggers that pass through that labyrinth.

  10. By the previous paragraphs, we have moral realism and constant values, with variability only apparent or accessory. So greater intelligence would find this and not vary. Now, depending on the question of personal identity, you may ask: what about selfishness?

Comment by JonatasMueller on General purpose intelligence: arguing the Orthogonality thesis · 2013-03-04T21:58:38.480Z · LW · GW

Stuart, here is a defense of moral realism:

http://lesswrong.com/lw/gnb/questions_for_moral_realists/8g8l

My paper which you cited needs a bit of updating. Indeed, some cases might lead a superintelligence to collaborate with agents without the right ethical mindset (unethical ones), which constitutes an important existential risk (a reason why I was a bit reluctant to publish much about it).

However, isn't the orthogonality thesis basically about the orthogonality between ethics and intelligence? In that case, the convergence thesis would not be flawed if some unintelligent agents kidnap and force an intelligent agent to act unethically.

Comment by JonatasMueller on Questions for Moral Realists · 2013-03-04T21:44:10.542Z · LW · GW

Yes, that is correct. I'm glad a Less Wronger finally understood.

Comment by JonatasMueller on Questions for Moral Realists · 2013-02-22T22:17:16.268Z · LW · GW

Who cares about that silly game? Whether to play it or not is my choice.

You can only validly like ice cream by way of feelings, because all that you have direct access to in this universe is consciousness. The difference between Monday and Tuesday in your example lies only in the nature of the feelings involved. In the pain example, pain is liked by virtue of its association with other good feelings, not in itself. If a person somehow loses the associated good feelings, certain painful stimuli cease to be desirable.

Comment by JonatasMueller on Questions for Moral Realists · 2013-02-22T05:58:34.524Z · LW · GW

The idea that one can like pain in itself is not substantiated by evidence. Masochists or self-harmers seek some pleasure or relief they get from pain or humiliation, not pain for itself. They won't stick their hands in a pot of boiling water.

http://en.wikipedia.org/wiki/Sadomasochism http://en.wikipedia.org/wiki/Self-harm

To follow that line of reasoning, please provide evidence that there exists anyone who enjoys pain in itself. I find that unbelievable, as pain is aversive by nature.

Comment by JonatasMueller on Questions for Moral Realists · 2013-02-22T02:43:07.878Z · LW · GW

Liking pain seems impossible, as it is an aversive feeling. However, for some people, some types of pain or self-harm cause a distraction from underlying emotional pain, which is felt as good or relieving, or it may give them some thrill, but in these cases it seems that it is always pain + some associated good feeling, or some relief of an underlying bad feeling, and it is for the good feeling or relief that they want pain, rather than pain for itself.

Conscious perceptions in themselves seem to be what is most certain in terms of truth. The things they represent, such as the physical world, may be illusions, but one cannot doubt feeling the illusions themselves.

Comment by JonatasMueller on Questions for Moral Realists · 2013-02-21T00:23:35.348Z · LW · GW

I think that it is a worthy use of time, and I applaud your rational attitude of looking to refute one's theories. I also like to do that in order to evolve them and discard wrong parts.

Don't hesitate to bring up specific parts for debate.

Comment by JonatasMueller on Questions for Moral Realists · 2013-02-21T00:19:34.631Z · LW · GW

"Why is fostering good conscious feelings and prevent bad conscious feelings necessarily correct? It is intuitive for humans to say we should maximize conscious experience, and that falls under the success theory that Peter talks about, but why is this necessarily the one true moral system?"

If we agree that good and bad feelings are good and bad, and that only conscious experiences produce direct ethical value, which lies in their good or bad quality, then theories that contradict this should not be correct; they would need to justify their points, but it seems that they have trouble in that area.

"But valuable to who? If there were a person who valued others being in pain, why would this person's views matter less?"

:) That's a beauty of personal identities not existing. It doesn't matter who it is. In the case of valuing others being in pain, would it be generating pleasure from it? In that case, lots of things have to be considered, among which: the net balance of good and bad feelings caused by the actions; the societal effects of legalizing or not legalizing certain actions...

Comment by JonatasMueller on Questions for Moral Realists · 2013-02-21T00:11:43.938Z · LW · GW

Conscious perceptions are quite direct and simple. Do you feel, for example, a bad feeling like intense pain as being a bad occurrence (which, like all occurrences in the universe, is physical), and likewise, for example, a good feeling like a delicious taste as being a good occurrence?

I argue that these are perceived with the highest degree of certainty of all things and are the only things that can be ultimately linked to direct good and bad value.

Comment by JonatasMueller on Questions for Moral Realists · 2013-02-21T00:04:31.009Z · LW · GW

"Not quite anything, since the size and complexity of conscious thought is bounded by the human brain. But that is not relevant to this discussion of ethics."

Indeed, conscious experience may be bound by the size and complexity of brains or similar machinery, of humans, other animals, and cyborgs. Theoretically, conscious perceptions may be able to be anything (or nearly anything), as we could theorize about brains the size of Jupiter or much larger. You get the point.

"Should I interpret this as you defining ethics as good and bad feelings?"

Almost. Not ethics, but ethical value in a direct, ultimate sense. There is also indirect value, which consists of things that can lead to direct value, which are myriad; and ethics is much more than defining value: it comprises laws, decision theory, heuristics, empirical research, and many theoretical considerations. I'm aware that Eliezer has written a post on Less Wrong saying that ethical value does not rest on happiness alone. Although happiness alone is not my proposition, I find his post on the topic quite poorly developed, and really not an advisable read.

"So, do you endorse wireheading?"

This depends very much on the context. All else being equal, wireheading could be good for some people, depending on its implications. However, all else seems hardly equal in this case. People seem to have a diverse spectrum of good feelings that may not be covered by the wireheading (such as love, some types of physical pleasure, good smell and taste, and many others), and the wireheading might prevent people from being functional and acting in order to increase ethical value in the long term, possibly negating its benefits. I see wireheading, in the sense of artificial paradise simulations, as a possibly desirable condition in a rather distant future of ideal development and post-scarcity, though.

Comment by JonatasMueller on Questions for Moral Realists · 2013-02-13T06:01:37.753Z · LW · GW

I will answer by explaining my view of morally realist ethics.

Conscious experiences and their content are physical occurrences and real. They can vary from the world they represent, but they are still real occurrences. Their reality can be known with the highest possible certainty, above all else, including physics, because they are immediately and directly accessible, while the external world is accessible indirectly.

Unlike the physical world, it seems that physical conscious perceptions can theoretically be anything. The content of conscious perceptions could, with the right technology, be controlled, as in a virtual world, and made to be anything, even things that differ from the external physical world. While the physical world has no ethical value except from conscious perceptions, conscious perceptions can be ethical value, and only by being good or bad conscious perceptions, or feelings. This seems to be so by definition, because ethical value is being good or bad.

That a conscious experience can be a good or bad physical occurrence is also a reality which can be felt and known with the highest possible certainty. This makes it rational, and an imperative, to follow it and care about it, to act in order to foster good conscious feelings and to prevent bad conscious feelings, because it is logical that this will make the universe better. This is acting ethically. Not acting accordingly is irrational and mistaken. Ethics is about realizing valuable states.

Human beings have primitive emotional and instinctive motivations that are not guided by intelligence and rationality. These primitive motivations can take control of human minds and make them act in irrational and unintelligent ways. Although human beings may consider it good to act according to their primitive motivations in cases in which they conflict with acting ethically, this would be an irrational and mistaken decision.

When primitive motivations conflict with human intelligent reason, these two could be thought of as two different agents inside one mind, with differing motivations. Intelligent reason does not always prevail, because primitive motivations have strong control of behavior. However, it would be rational and intelligent for intelligent reason to always take the ultimate control of behavior if it could somehow suppress the power of primitive motivations. This might be done by somehow strengthening human intelligent reason and its control of motivations.

Actions which foster good conscious feelings and prevent bad conscious feelings need not do so in the short-term. Many effective actions tend to do so only in the long-term. Likewise, such actions need not do so directly; many effective actions only do so indirectly. Often it is rational to act if it is probable that it will be ethically positive eventually.

That people have personal identities is false; they are mere parts of the universe. This is clear upon advanced philosophical analysis, but can be hard to understand for those who haven't thought much about it. An objective and impersonal perspective is called for. For this reason it is rational for all beings to 'act ethically' not only for themselves but also for all other beings in the same universe. For an explanation of why personal identities don't exist, which is relevant to the question of why one should act ethically in a collective rather than selfish sense, see this brief essay:

https://www.facebook.com/notes/jonatas-müller/universal-identity/10151189314697917