Intrinsic properties and Eliezer's metaethics
post by Tyrrell_McAllister · 2017-08-29T23:26:53.144Z · LW · GW · Legacy
Abstract
I give an account of why some properties seem intrinsic while others seem extrinsic. In light of this account, the property of moral goodness seems intrinsic in one way and extrinsic in another. Most properties do not suffer from this ambiguity. I suggest that this is why many people find Eliezer's metaethics to be confusing.
Section 1: Intuitions of intrinsicness
What makes a particular property seem more or less intrinsic, as opposed to extrinsic?
Consider the following three properties that a physical object X might have:
- The property of having the shape of a regular triangle. (I'll call this property "∆-ness" or "being ∆-shaped", for short.)
- The property of being hard, in the sense of resisting deformation.
- The property of being a key that can open a particular lock L (or L-opening-ness).
To me, intuitively, ∆-ness seems entirely intrinsic, and hardness seems somewhat less intrinsic, but still very intrinsic. However, the property of opening a particular lock seems very extrinsic. (If the notion of "intrinsic" seems meaningless to you, please keep reading. I believe that I ground these intuitions in something meaningful below.)
When I query my intuition on these examples, it elaborates as follows:
(1) If an object X is ∆-shaped, then X is ∆-shaped independently of any consideration of anything else. Object X could manifest its ∆-ness even in perfect isolation, in a universe that contained no other objects. In that sense, being ∆-shaped is intrinsic to X.
(2) If an object X is hard, then that fact does have a whiff of extrinsicness about it. After all, X's being hard is typically apparent only in an interaction between X and some other object Y, such as in a forceful collision after which the parts of X are still in nearly the same arrangement.
Nonetheless, X's hardness still feels to me to be primarily "in" X. Yes, something else has to be brought onto the scene for X's hardness to do anything. That is, X's hardness can be detected only with the help of some "test object" Y (to bounce off of X, for example). Nonetheless, the hardness detected is intrinsic to X. It is not, for example, primarily a fact about the system consisting of X and the test object Y together.
(3) Being an L-opening key (where L is a particular lock), on the other hand, feels very extrinsic to me. A thought experiment that pumps this intuition for me is this: Imagine a molten blob K of metal shifting through a range of key-shapes. The vast majority of such shapes do not open L. Now suppose that, in the course of these metamorphoses, K happens to pass through a shape that does open L. Just for that instant, K takes on the property of L-opening-ness. Nonetheless, and here is the point, an observer without detailed knowledge of L in particular wouldn't notice anything special about that instant.
Contrast this with the other two properties: An observer of three dots moving in space might notice when those three dots happen to fall into the configuration of a regular triangle. And an observer of an object passing through different conditions of hardness might notice when the object has become particularly hard. The observer can use a generic test object Y to check the hardness of X. The observer doesn't need anything in particular to notice that X has become hard.
But all that is just an elaboration of my intuitions. What is really going on here? I think that the answer sheds light on how people understand Eliezer's metaethics.
Section 2: Is goodness intrinsic?
I was led to this line of thinking while trying to understand why Eliezer's metaethics is consistently confusing.
The notion of an L-opening key has been my personal go-to analogy for thinking about how goodness (of a state of affairs) can be objective, as opposed to subjective. The analogy works like this: We are like locks, and states of affairs are like keys. Roughly, a state is good when it engages our moral sensibilities so that, upon reflection, we favor that state. Speaking metaphorically, a state is good just when it has the right shape to "open" us. (Here, "us" means normal human beings as we are in the actual world.) Being of the right shape to open a particular lock is an objective fact about a key. Analogously, being good is an objective fact about a state of affairs.
Objective in what sense? In this important sense, at least: The property of being L-opening picks out a particular point in key-shape space.[1] This space contains a point for every possible key-shape, even if no existing key has that shape. So we can say that a hypothetical key is "of an L-opening shape" even if the key is assumed to exist in a world that has no locks of type L. Analogously, a state can still be called good even if it is in a counterfactual world containing no agents who share our moral sensibilities.
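To make the key-shape-space picture concrete, here is a minimal sketch in Python. The representation is entirely my own illustrative assumption: a key-shape is summarized as a tuple of tumbler heights, and lock L is specified by a target tuple plus a small tolerance (per the footnote). The point is only that the predicate is fixed by the specification of L, so it classifies hypothetical shapes even in a world containing no physical locks or keys.

```python
# Minimal sketch (illustrative assumptions: a key-shape is a tuple of tumbler heights,
# and lock L is specified by a target tuple plus a small tolerance).
from itertools import product

HEIGHTS = range(10)                            # each tumbler position can take heights 0-9
KEY_SHAPE_SPACE = product(HEIGHTS, repeat=5)   # a point for every possible 5-tumbler shape

L_SPEC = (3, 1, 4, 1, 5)    # hypothetical specification of lock L
TOLERANCE = 1               # per the footnote: a lock accepts a small region, not one point

def is_L_opening(shape):
    """True iff this key-shape falls in the small region of shapes that open L."""
    return all(abs(a - b) <= TOLERANCE for a, b in zip(shape, L_SPEC))

# The predicate classifies *hypothetical* shapes: no physical lock or key needs to exist
# for "is this shape L-opening?" to have a definite answer.
print(is_L_opening((3, 1, 4, 2, 5)))   # True  (within tolerance of L's specification)
print(is_L_opening((0, 8, 7, 1, 4)))   # False
```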
But the discussion in Section 1 made "being L-opening" seem, while objective, very extrinsic, and not primarily about the key K itself. The analogy between "L-opening-ness" and goodness seems to work against Eliezer's purposes. It suggests that goodness is extrinsic, rather than intrinsic. For, one cannot properly call a key "opening" in general. One can only say that a key "opens this or that particular lock". But the analogous claim about goodness sounds like relativism: "There's no objective fact of the matter about whether a state of affairs is good. There's just an objective fact of the matter about whether it is good to you."
This, I suppose, is why some people think that Eliezer's metaethics is just warmed-over relativism, despite his protestations.
Section 3: Seeing intrinsicness in simulations
I think that we can account for the intuitions of intrinsicness in Section 1 by looking at them from the perspective of simulations. Moreover, this account will explain why some of us (including perhaps Eliezer) judge goodness to be intrinsic.
The main idea is this: In our minds, a property P, among other things, "points to" the test for its presence. In particular, P evokes whatever would be involved in detecting the presence of P. Whether I consider a property P to be intrinsic depends on how I would test for the presence of P — NOT, however, on how I would test for P "in the real world", but rather on how I would test for P in a simulation that I'm observing from the outside.
Here is how this plays out in the cases above.
(1) In the case of being ∆-shaped, consider a simulation (on a computer, or in your mind's eye) consisting of three points connected by straight lines to make a triangle X floating in space. The points move around, and the straight lines stretch and change direction to keep the points connected. The simulation itself just keeps track of where the points and lines are. Nonetheless, when X becomes ∆-shaped, I notice this "directly", from outside the simulation. Nothing else within the simulation needs to react to the ∆-ness. Indeed, nothing else needs to be there at all, aside from the points and lines. The ∆-shape detector is in me, outside the simulation. To make the ∆-ness of an object X manifest, the simulation needs to contain only the object X itself.
In summary: A property will feel extremely intrinsic to X when my detecting the property requires only this: "Simulate just X."
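To make that concrete, here is a minimal sketch of the ∆-detector that lives in me, outside the simulation: its only input is X itself, i.e. the three simulated points. (The tolerance and the 2-D coordinates are arbitrary choices of mine.)

```python
# A toy "∆-ness detector" outside the simulation: its only input is X itself.
import math

def is_delta_shaped(p1, p2, p3, tol=1e-9):
    """True iff the three points form a regular (equilateral) triangle."""
    sides = sorted([math.dist(p1, p2), math.dist(p2, p3), math.dist(p3, p1)])
    # Non-degenerate, and the longest side equals the shortest up to a small tolerance.
    return sides[0] > tol and (sides[2] - sides[0]) <= tol * sides[2]

print(is_delta_shaped((0, 0), (1, 0), (0.5, math.sqrt(3) / 2)))   # True
print(is_delta_shaped((0, 0), (1, 0), (0, 1)))                    # False
```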
(2) For the case of hardness, imagine a computer simulation that models matter and its motions as they follow from the laws of physics and my exogenous manipulations. The simulation keeps track of only fundamental forces, individual molecules, and their positions and momenta. But I can see on the computer display what the resulting clumps of matter look like. In particular, there is a clump X of matter in the simulation, and I can ask myself whether X is hard.
Now, on the one hand, I am not myself a hardness detector that can just look at X and see its hardness. In that sense, hardness is different from ∆-ness, which I can just look at and see. In this case, I need to build a hardness detector. Moreover, I need to build the detector inside the simulation. I need some other thing Y in the simulation to bounce off of X to see whether X is hard. Then I, outside the simulation, can say, "Yup, the way Y bounced off of X indicates that X is hard." (The simulation itself isn't generating statements like "X is hard", any more than the 3-points-and-lines simulation above was generating statements about whether the configuration was a regular triangle.)
On the other hand, crucially, I can detect hardness with practically anything at all in addition to X in the simulation. I can take practically any old chunk of molecules and bounce it off of X with sufficient force.
A property of an object X still feels intrinsic when detecting the property requires only this: "Simulate just X + practically any other arbitrary thing."
Indeed, perhaps I need only an arbitrarily small "epsilon" chunk of additional stuff inside the simulation. Given such a chunk, I can run the simulation to knock the chunk against X, perhaps from various directions. Then I can assess the results to conclude whether X is hard. The sense of intrinsicness comes, perhaps, from "taking the limit as epsilon goes to 0", seeing the hardness there the whole time, and interpreting this as saying that the hardness is "within" X itself.
In summary: A property will feel very intrinsic to X when its detection requires only this: "Simulate just X + epsilon."
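Here is a sketch of what the "X + epsilon" test could look like, under a toy assumption of mine that X is modelled as a chain of point masses joined by springs. The probe is deliberately generic: practically any chunk that can deliver a standard push would serve.

```python
# Toy hardness test in the spirit of "simulate just X + epsilon".  Illustrative
# assumptions: X is a chain of masses joined by springs (stiffness values below),
# and the generic probe simply presses on one end with a standard force.

def deformation_under_probe(spring_stiffnesses, probe_force=1.0):
    """Total compression of the chain when a probe presses on its free end."""
    # Springs in series: each spring compresses by F / k, and the compressions add up.
    return sum(probe_force / k for k in spring_stiffnesses)

def is_hard(spring_stiffnesses, threshold=0.01, probe_force=1.0):
    """X counts as hard if it barely deforms under the standard probe."""
    return deformation_under_probe(spring_stiffnesses, probe_force) < threshold

steel_like = [1e4, 1e4, 1e4]    # stiff springs: total deformation ~ 3e-4
putty_like = [2.0, 2.0, 2.0]    # soft springs:  total deformation ~ 1.5

print(is_hard(steel_like))      # True
print(is_hard(putty_like))      # False
```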
(3) In this light, L-opening keys differ crucially from ∆-shaped things and from hard things.
An L-opening key differs from a ∆-shaped object because I myself do not encode lock L. Whereas I can look at a regular triangle and see its ∆-ness from outside the simulation, I cannot do the same (let's suppose) for keys of the right shape to open lock L. So I cannot simulate a key K alone and see its L-opening-ness.
Moreover, I cannot add something merely arbitrary to the simulation to check K for L-opening-ness. I need to build something very precise and complicated inside the simulation: an instance of the lock L. Then I can insert K in the lock and observe whether it opens.
I need, not just K, and not just K + epsilon: I need to simulate K + something complicated in particular.
Section 4: Back to goodness
So how does goodness as a property fit into this story?
There is an important sense in which goodness is more like being ∆-shaped than it is like being L-opening. Namely, goodness of a state of affairs is something that I can assess myself from outside a simulation of that state. I don't need to simulate anything else to see it. Putting it another way, goodness is like L-opening would be if I happened myself to encode lock L. If that were the case, then, as soon as I saw K take on the right shape inside the simulation, that shape could "click" with me outside of the simulation.
That is why goodness seems to have the same ultimate kind of intrinsicness that ∆-ness has and which being L-opening lacks. We don't encode locks, but we do encode morality.
Footnote
1. Or, rather, a small region in key-shape space, since a lock will accept keys that vary slightly in shape.
27 comments
Comments sorted by top scores.
comment by Erfeyah · 2017-08-30T10:19:12.446Z · LW(p) · GW(p)
It is extremely interesting to see the attempts of the community to justify values through, or extract values from, rationality. I have been pointing to the alternative perspective, based on the work of Jordan Peterson, in which morality is grounded on evolved behavioral patterns. It is rationally coherent and strongly supported by evidence. The only 'downside', if you can call it that, is that it turns out that morality is not based on rationality, and the "ought from an is" problem is an accurate portrayal of our current (and maybe general) situation.
I am not going to expand on this unless you are interested but I have a question. What does the rationalist community in general, and your article, try to get at? I can think of two possibilities:
[1] that morality is based on rational thought as expressed through language
[2] that morality has a computational basis implemented somewhere in the brain and accessed through the conscious mind as an intuition
I do not see how [1] can be true, since we can observe the emergence of moral values in cultures in which rationality is hardly developed. Furthermore, even today, as your article shows, we are struggling to extract values from rational argument, so our intuitions cannot be stemming from something we haven't even succeeded at. As for [2], it is a very interesting proposal, but I haven't seen any scientific evidence that links it to structures in the human brain.
I feel the rationalist community is resistant to entertaining the alternative because, if true, it would show that rationality is not the foundation of everything but a tool for assessing and manipulating. Maybe further resistance is caused because (in a slightly embarrassing turn of events) it brings stories, myth and religion into the picture again, albeit in a very different manner. But even if that proves to be the case, so what? What is our highest priority here? Rationality or Truth?
Replies from: dogiv, TheAncientGeek, Tyrrell_McAllister
↑ comment by dogiv · 2017-08-30T20:00:19.782Z · LW(p) · GW(p)
I think many of us "rationalists" here would agree that rationality is a tool for assessing and manipulating reality. I would say much the same about morality. There's not really a dichotomy between morality being "grounded on evolved behavioral patterns" and having "a computational basis implemented somewhere in the brain and accessed through the conscious mind as an intuition". Rather, the moral intuitions we have are computed in our brains, and the form of that computation is determined both by the selection pressures of evolution and the ways that our evolved brain structures interact with our various environments.
So what is our highest priority here? It's neither Rationality nor Truth, but Morality in the broad sense--the somewhat arbitrary and largely incoherent set of states of reality that our moral intuition prefers. I say arbitrary because our moral intuition does not aim entirely at the optimization target of the evolutionary process that generated it--propagating our genes. Call that moral relativism if you want to.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-08-30T22:13:09.635Z · LW(p) · GW(p)
There's not really a dichotomy between morality being "grounded on evolved behavioral patterns" and having "a computational basis implemented somewhere in the brain and accessed through the conscious mind as an intuition".
There is a difference. Computing a moral axiom is not the same as encoding it. With computation the moral value would be an intrinsic property of some kind of mathematical structure. An encoding on the other hand is an implementation of an environmental adaptation as behavior based on selection pressure. It does not contain an implicit rational justification but it is objective in the sense of it being adapted to an external reality.
Replies from: Manfred
↑ comment by Manfred · 2017-09-01T20:34:10.960Z · LW(p) · GW(p)
Moral value is not an "intrinsic property" of a mathematical structure - aliens couldn't look at this mathematical structure and tell that it was morally important. And yet, whenever we compute something, there is a corresponding abstract structure. And when we reason about morality, we say that what is right wouldn't change if you gave us brain surgery, so by morality we don't mean "whatever we happen to think," we mean that abstract structure.
Meanwhile, we are actual evolved mammals, and the reason we think what we do about morality is because of evolution, culture, and chance, in that order. I'm not sure what the point is of calling this objective or not, but it definitely has reasons for being how it is. But maybe you can see how this evolved morality can also be talked about as an abstract structure, and therefore both of these paragraphs can be true at the same time.
It seems like you were looking for things with "intrinsic properties" and "objective"-ness that we don't much care about, and maybe this is why the things you were thinking of were incompatible, but the things we're thinking of are compatible.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-09-01T22:46:59.697Z · LW(p) · GW(p)
Meanwhile, we are actual evolved mammals, and the reason we think what we do about morality is because of evolution, culture, and chance, in that order. I'm not sure what the point is of calling this objective or not, but it definitely has reasons for being how it is.
It seems like you were looking for things with "intrinsic properties" and "objective"-ness that we don't much care about..
I was making a comment on the specific points of dogiv, but the discussion is about trying to discover whether morality 1) has an objective basis or is completely relative, and 2) has a rational/computational basis or not. Is it that you don't care about approaching truth on this matter, or that you believe you already know the answer?
In any case my main point is that Jordan Peterson's perspective is (in my opinion) the most rational, cohesive, and well-supported-by-evidence account available, and I would love to see the community take the time to study it, understand it, and try to dispute it properly.
Nevertheless, I know not everyone has the time for that, so if you expand on your perspective on this 'abstract structure' and its basis we can debate :)
↑ comment by TheAncientGeek · 2017-09-05T13:13:06.303Z · LW(p) · GW(p)
I can think of two possibilities:
[1] that morality is based on rational thought as expressed through language
[2] that morality has a computational basis implemented somewhere in the brain and accessed through the conscious mind as an intuition
[3] Some mixture. Morality doesn't have to be one thing, or achieved in one way. In particular, novel technologies and social situations provoke novel moral quandaries that intuition is not well equipped to handle, and where people debate such things, they tend to use a broadly rationalist style, trying to find common principles and noting undesirable consequences.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-09-05T14:58:01.469Z · LW(p) · GW(p)
[3] Some mixture. Morality doesn't have to be one thing, or achieved in one way.
Sure, this is a valid hypothesis. But my assessment and the individual points I offered above can be applied to this possibility as well, uncovering the same issues with it.
In particular, novel technologies and social situations provoke novel moral quandaries that intuition is not well equipped to handle, and where people debate such things, they tend to use a broadly rationalist style, trying to find common principles and noting undesirable consequences.
Novel situations can be seen through the lens of certain stories because those stories act at such a level of abstraction that they are applicable to all human situations. The most universal and permanent levels of abstraction are considered archetypal. These would apply equally to a human living in a cave thousands of years ago and a Wall Street lawyer. Of course, it is also true that the stories always need to be revisited to avoid their dissolution into dogma as the environment changes. Interestingly, it turns out that there are stories that recognize this need for 'revisiting' and deal with the strategies and pitfalls of the process.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2017-09-05T17:39:50.389Z · LW(p) · GW(p)
That amounts to "I can make my theory work if I keep on adding epicycles".
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-09-05T18:27:51.553Z · LW(p) · GW(p)
Your comment seems to me an indication that you don't understand what I am talking about. It is a complex subject and in order to formulate a coherent rational argument you will need to study it in some depth.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2017-09-15T13:33:26.648Z · LW(p) · GW(p)
I am not familiar with Peterson specifically, but I recognise the underpinning in terms of Jung, monomyth theory, and so on.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-09-15T15:57:58.368Z · LW(p) · GW(p)
Cool. Peterson is much clearer than Jung (about whom I don't have a settled opinion). I am not claiming that everything Peterson says is correct or that I agree with it. I am pointing to his argument for the basis of morality in cultural transmission through imitation, rituals, myth, stories etc., and the grounding of these structures in the evolutionary process, as the best rational explanation of morality I have come across. I have studied it in depth and I believe it to be correct. I am inviting engagement with the argument instead of biased rejection.
Replies from: hairyfigment
↑ comment by hairyfigment · 2017-09-18T20:11:19.709Z · LW(p) · GW(p)
Without using terms such as "grounding" or "basis," what are you saying and why should I care?
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-09-18T20:35:14.586Z · LW(p) · GW(p)
Good idea, let me try that.
I am pointing to his argument on our [communication] of moral values as cultural transmission through imitation, rituals, myth, stories etc. and the [indication of their correspondence with actual characteristics of reality] due to their development through the evolutionary process as the best rational explanation of morality I have come across.
And you should care because... you care about truth, and also because, if true, you can pay some attention to the wisdom traditions and their systems of knowledge.
Replies from: hairyfigment
↑ comment by hairyfigment · 2017-09-18T20:58:15.656Z · LW(p) · GW(p)
The second set of brackets may be the disconnect. If "their" refers to moral values, that seems like a category error. If it refers to stories etc, that still seems like a tough sell. Nothing I see about Peterson or his work looks encouraging.
Rather than looking for value you can salvage from his work, or an 'interpretation consistent with modern science,' please imagine that you never liked his approach and ask why you should look at this viewpoint on morality in particular rather than any of the other viewpoints you could examine. Assume you don't have time for all of them.
If that still doesn't help you see where I'm coming from, consider that reality is constantly changing and "the evolutionary process" usually happened in environments which no longer exist.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-09-18T21:21:28.746Z · LW(p) · GW(p)
If "their" refers to moral values, that seems like a category error. If it refers to stories etc, that still seems like a tough sell.
Could you explain in a bit more detail please?
Rather than looking for value you can salvage from his work, or an 'interpretation consistent with modern science,' please imagine that you never liked his approach and ask why you should look at this viewpoint on morality in particular rather than any of the other viewpoints you could examine. Assume you don't have time for all of them.
No I do see where you are coming from and I don't blame you at all. But do see that you are not addressing the actual argument, in its proper depth. My problem becomes one of convincing you to give your attention to it. Even then it would be difficult to accept an approach that is based on a kind of lateral thinking that requires you to be exposed to multiple patterns before they connect. It is a big problem that I alluded to when I wrote my post Too Much Effort | Too Little Evidence. Peterson is trying to create a rational bridge towards the importance of narrative structures so that they are approached with seriousness.
If that still doesn't help you see where I'm coming from, consider that reality is constantly changing and "the evolutionary process" usually happened in environments which no longer exist.
This is addressed. The most archetypal stories are universal at all times and places. Other ones are modified according to time, place and people. Even the process and need of modification is encoded inside the stories themselves. These are extremely sophisticated systems.
↑ comment by Tyrrell_McAllister · 2017-08-30T22:27:54.190Z · LW(p) · GW(p)
I can think of two possibilities:
[1] that morality is based on rational thought as expressed through language
[2] that morality has a computational basis implemented somewhere in the brain and accessed through the conscious mind as an intuition
Closer to [2]. Does the analogy in Section 2 make sense to you? That would be my starting point for trying to explain further.
Replies from: Erfeyah
↑ comment by Erfeyah · 2017-08-31T00:18:01.235Z · LW(p) · GW(p)
As an analogy it does make sense, but it seems to me more like an attempt at a kind of mental sleight of hand. The fit of the key to the lock is better seen as a description of the pattern-matching mechanism that implements a value judgment. A judgment is more like having the option between two (or more) keys that open the same lock but have different consequences. The question is on what basis we choose the key, not how the key works once chosen.
I really don't see how the analogy gives any evidence for [2] but please tell me if I am missing something!
comment by Manfred · 2017-08-30T03:14:15.594Z · LW(p) · GW(p)
I totally agree. Perceived differences in kind here are largely due to the different methods we use to think about these things.
For the triangle, everybody knows what a triangle is; we don't even need to use conscious thought to recognize one. But for the key, I can't quite keep the entire shape in my memory at once: if I want to know whether something is shaped like my front door key, I have to compare it to my existing key, or try it in the lock.
So it naively seems that triangleness is something intrinsic (because I perceive it without need for thought), while front-door-key-shape is something that requires an external reference (because I need external reference). But just because I perceive the world a certain way doesn't mean those are particularly important distinctions. If one wanted to make a computer program recognize triangles and my front door key shape, one could just as well use the same code for both.
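For instance (a minimal sketch; the template values below are made up), a single matcher parameterized by the target shape serves for both:

```python
# One generic matcher, parameterized by whichever target shape you care about,
# recognizes triangles and front-door-key shapes with the same code.

def matches(shape, template, tol=1e-6):
    """Does `shape` agree with `template`, feature by feature, within tolerance?"""
    return len(shape) == len(template) and all(
        abs(a - b) <= tol for a, b in zip(shape, template))

TRIANGLE = (1.0, 1.0, 1.0)               # three equal side lengths
FRONT_DOOR_KEY = (0.3, 0.8, 0.7, 0.1)    # hypothetical notch depths

print(matches((1.0, 1.0, 1.0), TRIANGLE))              # True
print(matches((0.3, 0.8, 0.7, 0.2), FRONT_DOOR_KEY))   # False
```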
comment by Dues · 2017-09-11T03:28:18.102Z · LW(p) · GW(p)
To explain my perspective, let me turn your example around by using a fictional alien species called humans. Instead of spending their childhoods contemplating the holy key, these humans would spend some of their childhoods being taught to recognize a simple shape with three sides, called a 'triangle'.
To them, when the blob forms a holy key shape, it would mean nothing, but if it formed a triangle they would recognize it immediately!
Your theory becomes simpler when you have a triangle, a key, a lock that accepts a triangle, and lock that accepts a holy key. The key and the triangle are the theories and therefore feel intrinsic. The locks (our brains) are the reality and therefore feel extrinsic.
We want to have a true theory of morality (A true holy key ;)). But the only tool we have to deduce the shape of the key is our own moral intuitions. Alieneizer's theory of metaethics is that you need to look at the evidence until you have a theory that is satisfying on reflection and at no point requires you to eat babies or anything like that. Some people find the bottom-up approach to ethics unsatisfying, but we've been trying the top-down approach (propose a morality system, then see if it works perfectly in the real world) for a long time without success.
I think this should satisfy your intuitions: how our brains seem to accept a morality key because they are locks, how morality doesn't change even if your mind changes (because your mind will fit a new key), how our morality was shaped by evolution but is also somehow both abstract and real, and why I think that calling Alieneizer a relativist is silly. He made it his job to figure out if morality is triangle shaped, key shaped, or 4th dimensional croissant shaped so he could make an AI with that shape of ethics.
Replies from: Dues
comment by Luke_A_Somers · 2017-08-31T15:58:18.150Z · LW(p) · GW(p)
1) You could define the shape criteria required to open lock L, and then the object reference would fall away. And, indeed, this is how keys usually work. Suppose I have a key with tumbler heights 0, 8, 7, 1, 4, 9, 2, 4. This is an intrinsic property of the key. That is what it is.
Locks can have the same set of tumbler heights, and there is then a relationship between them. I wouldn't even consider it so much an extrinsic property of the key itself, as a relationship between the intrinsic properties of the key and lock.
2) Metaethics is a function from cultural situations and moral intuitions into a space of ethical systems. This function is not onto (i.e. not every coherent ethical system is the result of metaethical analysis on some cultural situation and moral intuitions), and it is not at all guaranteed to yield the same ethical system in use in that cultural situation. This is a very significant difference from moral relativism, not a mere slight increase in temperature.
comment by Slider · 2017-08-31T10:37:19.851Z · LW(p) · GW(p)
If I am given 3 location vectors and asked whether they fall on a plane, I can't do it at a glance; I need some kind of involved calculation. Make the space high-dimensional enough and I will need to build much more elaborate assisting structures to make it apparent whether a set of 3 points makes a regular triangle or not.
comment by alexey · 2017-09-23T12:10:29.102Z · LW(p) · GW(p)
Whereas I can look at a regular triangle and see its ∆-ness from outside the simulation, I cannot do the same (let's suppose) for keys of the right shape to open lock L.
Why suppose this and not the opposite? If you understand L well enough to see if a key opens it immediately, does this make L-openingness intrinsic, so intrinsicness/extrinsicness is relative to the observer?
And on the other hand, someone else needs to simulate a ruler to check for ∆-ness, so it is an extrinsic property to him.
Namely, goodness of a state of affairs is something that I can assess myself from outside a simulation of that state.
I certainly would consider this much more difficult than merely checking whether a key opens a lock. I could, after spending enough time, understand the lock well enough for this, but could I do the same for a complete state of affairs, e.g. on Earth?
comment by TheAncientGeek · 2017-09-15T13:24:57.302Z · LW(p) · GW(p)
a state is good when it engages our moral sensibilities
Individually, or collectively?
We don't encode locks, but we do encode morality.
Individually or collectively?
Namely, goodness of a state of affairs is something that I can assess myself from outside a simulation of that state. I don't need to simulate anything else to see it
The goodness-to-you or the objective goodness?
if you are going to say that morality "is" human value, you are faced with the fact that humans vary in their values... the fact that creates the suspicion of relativism.
This, I suppose, is why some people think that Eliezer's metaethics is just warmed-over relativism, despite his protestations.
It's not clearly relativism and it's not clearly not-relativism. Those of us who are confused by it are confused because we expect a metaethical theory to say something on the subject.
The opposite of Relative is Absolute or Objective. It isn't Intrinsic. You seem to be talking about something orthogonal to the absolute-relative axis.
comment by redlizard · 2017-09-10T17:28:56.928Z · LW(p) · GW(p)
My interpretation of this thesis immediately reminds me of Eliezer's post on locality and compactness of specifications, among others.
Under this framework, my analysis is that triangle-ness has a specification that is both compact and local; whereas L-opening-ness has a specification that is compact and nonlocal ("opens L"), and a specification that is local but noncompact (a full specification of what shapes it is and is not allowed to have), but no specification that is both local and compact. In other words, there is a short specification which refers to something external (L) and a long lookup-table specification that talks only about the key.
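Concretely, the contrast might look like this toy sketch (the tumbler values and the one-entry lookup table are made up; a realistic tolerance region would make the table enormous):

```python
# Two specifications of "opens L": one compact but nonlocal, one local but noncompact.

class Lock:
    def __init__(self, tumblers):
        self.tumblers = tumblers

lock_L = Lock((0, 8, 7, 1, 4))

# Compact but nonlocal: a short spec, but it refers to something outside the key.
def opens(key, lock):
    return key == lock.tumblers

# Local but noncompact: talks only about the key, by enumerating acceptable shapes.
ACCEPTABLE_SHAPES = {(0, 8, 7, 1, 4)}
def opens_L(key):
    return key in ACCEPTABLE_SHAPES

print(opens((0, 8, 7, 1, 4), lock_L))   # True
print(opens_L((0, 8, 7, 1, 4)))         # True
```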
I think this is the sensible way of cashing out your notion of intrinsicness. (Intrinsicity? Intrinsicitude?)
In this interpretation, I don't think human morality should be judged as non-intrinsic. The shortest local specification of our values is not particularly compact; but neither can you resort to a nonlocal specification to find something more compact. "Whatever makes me happy" is not the sum of my morality, nor do I think you will find a simple specification along those lines that refers to the inner workings of humans. In other words, the specification of human morality is not something you can easily shorten by referring to humans.
There is an important sense in which goodness is more like being ∆-shaped than it is like being L-opening. Namely, goodness of a state of affairs is something that I can assess myself from outside a simulation of that state.
That is of course only true because human brains have complex brainware for judging morality, and do not have complex brainware for judging L-opening. For that reason, I don't think "can humans simulate this in a local way" is a measure that is particularly relevant.
comment by Elo · 2017-08-29T23:51:14.946Z · LW(p) · GW(p)
Triangle-ness depends on the dimensional plane of observation. Therefore it is external to the configuration of matter in entropy being observed.
Replies from: Tyrrell_McAllister
↑ comment by Tyrrell_McAllister · 2017-08-30T18:24:40.362Z · LW(p) · GW(p)
∆-ness does not depend on the point of observation. If you like, just stipulate that you always view the configuration from a point outside the affine span of the configuration but on the line perpendicular to the affine span and passing through the configuration's barycenter. Then regular triangles, and only regular triangles, will project to regular triangles on your 2-dimensional display.