Posts

Comments

Comment by Joseph_Knecht on Moral Complexities · 2008-07-04T23:25:50.000Z · LW · GW
Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"? Why are the two propositions argued in different ways?

"I want the pie" is something that nobody else is affected by and thus nobody else has an interest in. "I should get the pie" is something that anybody else interested in the pie has an interest in. In this sense, the moral preferences are those that other moral beings have a stake in, those that affect other moral beings. I think some kind of a distinction like this explains the different ways we talk about and argue these two kinds of preferences. Additionally, evolution has most likely given us a pre-configured and optimized module for dealing with classes of problems involving other beings that were especially important in the environment of evolutionary adaptedness, which subjectively "feels" like an objective morality that is written into the fabric of the universe.

When and why do people change their terminal values? Do the concepts of "moral error" and "moral progress" have referents? Why would anyone want to change what they want?

I think of preferences and values as being part of something like a complex system (in the sense of http://en.wikipedia.org/wiki/Complex_system) in which all the various preferences are inter-related and in constant interaction. There may be something like a messy, tangled hierarchy where we have terminal preferences that are initially hardwired at a very low-level, on top of which are higher-level non-terminal preferences, with something akin to back-propagation allowing for non-terminal preferences to affect the low-level terminal preferences. Some preferences are so general that they are in constant interaction with a very large subset of all the preferences; these are experienced as things that are "core to our being", and we are much more likely to call these "values" rather than "preferences", although preferences and values are not different in kind.

I think of moral error as actions that go against the terminal (and closely associated non-terminal, which feed back to the terminal ones) and most general values (involving other moral beings) of a large class of human beings (either directly via this particular instance of the error affecting me or indirectly via contemplation of this type of moral error becoming widespread and affecting me in the future). I think of moral progress as changes to core values that result in more human beings having their fundamental values (like fairness, purpose, social harmony) flourish more frequently and more completely rather than be thwarted.

Why and how does anyone ever "do something they know they shouldn't", or "want something they know is wrong"?

Because the system of interdependent values is not a static system and it is not a consistent system either. We have some fundamental values that are in conflict with each other at certain times and in certain circumstances, like self-interest and social harmony. Depending on all the other values and their interdependencies, sometimes one will win out, and sometimes the other will win out. Guilt is a function of recognizing that something we have done has thwarted one of our own fundamental values (but satisfied the others that won out in this instance) and thwarted some fundamental values of other beings too (not thwarting the fundamental values of others is another of our fundamental values). The messiness of the system (and the fact that it is not consistent) dooms any attempt by philosophers to come up with a moral system that is logical and always "says what we want it to say".

Does the notion of morality-as-preference really add up to moral normality?

I think it does add up to moral normality in the sense that our actions and interactions will generally be in accordance with what we think of as moral normality, even if the (ultimate) justifications and the bedrock that underlies the system as a whole are wildly different. Fundamental to what I think of as "moral normality" is the idea that something other than human beings supplies the moral criterion, whereas under the morality-as-preference view as I described it above, all we can say is that IF you desire to have your most fundamental values flourish (and you are a statistically average human in terms of your fundamental values including things like social harmony), THEN a system that provides for the simultaneous flourishing of other beings' fundamental values is the most effective way of accomplishing that. It is a fact that most people DO have these similar fundamental values, but there is no objective criterion from the side of reality itself that says all beings MUST have the desire to have their most fundamental values flourish (or that the fundamental values we do have are the "officially sanctioned" ones). It's just an empirical fact of the way that human beings are (and probably many other classes of beings that were subject to similar pressures).

Comment by Joseph_Knecht on 2 of 10, not 3 total · 2008-07-04T02:10:05.000Z · LW · GW

I've voiced my annoyance with the commenting system in the past, in particular that it is non-threaded, so it is often very difficult to figure out what someone is responding to if they don't include context (which they often don't), so I won't give details again.

On the topic of the 2 of 10 rule, if it's to prevent one person dominating a thread, shouldn't the rule be "no more than 2 of last 10 should be by the same person in the same thread" (so 3 posts by the same person would be fine as long as they are in 3 different threads)?

Comment by Joseph_Knecht on The Bedrock of Fairness · 2008-07-03T17:23:32.000Z · LW · GW
Optimistically, I would say that if the murderer perfectly knew all the relevant facts, including the victim's experience, ve wouldn't do it

The murderer may have all the facts, understand exactly what ve is doing and what the experience of the other will be, and just decide that ve doesn't care. Which fact is ve not aware of? Ve may understand all the pain and suffering it will cause, ve may understand that ve is wiping out a future for the other person and doing something that ve would prefer not to be on the receiving end of, may realize that it is behavior that if universalized would destroy society, may realize that it lessens the sum total of happiness or whatever else, may even know that "ve should feel compelled not to murder" etc. But at the end of the day, ve still might say, "regardless of all that, I don't care, and this is what I want to do and what I will do".

There is a conflict of desire (and of values) here, not a difference of fact. Having all the facts is one thing. Caring about the facts is something altogether different.

--

On the question of the bedrock of fairness, at the end of the day it seems to me that one of the two scenarios will occur:

(1) all parties happen to agree on what the bedrock is, or they are able to come to an agreement.

(2) all parties cannot agree on what the bedrock is. The matter is resolved by force with some party or coalition of parties saying "this is our bedrock, and we will punish you if you do not obey it".

And the universe itself doesn't care one way or the other.

Comment by Joseph_Knecht on What Would You Do Without Morality? · 2008-07-01T17:11:00.000Z · LW · GW
But if I understand you, you are saying that human morality is human and does not apply to all sentient beings. However, as long as all we are talking about and all we really deal with is humans, then there is no difference in practice between a morality that is specific to humans and a universal morality applicable to all sentient beings, and so the argument about universality seems academic, of no import at least until First Contact is achieved.

What I am really saying is that the notion of "morality" is so hopelessly contaminated with notions of objective standards and criteria of morality above and beyond humanity that we would do well to find other ways to think and talk about it. But to answer you directly in terms of what I think about the two ways of thinking about morality, I think there is a key difference between (1) "our particular 'morality' is purely a function of our evolutionary history (as it expresses in culture)" and (2) "there is a universal morality applicable to all sentients (and we don't know of other similarly intelligent sentients yet)".

With 1, there is no justification for a particular moral system: "this is just the way we are" is as good as it gets (no matter how you try to build on it, that is the bedrock). With 2, there is something outside of humanity that justifies some moralities and forbids others; there is something like an objective criterion that we can apply, rather than the criterion being relative to human beings and the (not inevitable) events that have brought us to this point. In 1 the rules are in some sense arbitrary; in 2 they are not. I think that is a huge difference. In the course of making decisions in day-to-day existence -- should I steal this book? should I cheat on my partner? -- I agree with you that the difference is academic.

In particular, a lot of moral non-realists are wrong.

Yes, they're wrong, but I think the important point is "what are they wrong about"? Under 1, the claim that "it is merely a matter of [arbitrary] personal opinion" is wrong as an empirical matter because personal opinions in "moral" matters are not arbitrary: they are derived from hardwired tendencies to interpret certain things in a moralistic manner. Under 2, it is not so much an empirical matter of studying human beings and experimenting and determining what the basis for personal opinions about "moral" matters is; it is a matter of determining whether "it's merely a matter of personal opinion" is what the universal moral law says (and it does not, of course).

I concede that I was sloppy in speaking of "traditional notions", although I did not say that there were no philosophical traditions such that...; I was talking about the traditions that were most influential over historical times in western culture (based on my meager knowledge of ethics based on a university course and a little other reading). I had in mind thousands of years of Judeo-Christian morality that is rooted in what the Deity Said or Did, and deontological understandings of morality such as Kant's (in which species-independent reason compels us to recognize that ...), as well as utilitarianism (in the sense that the justification for believing that the moral worth of an action is strictly determined by the outcome is not based on our evolutionary quirks: it is supposed to be a rationally compelling system on its own, but perhaps a modern utilitarian might appeal to our evolutionary history as justification).

On the topic of natural law tradition, is it your understanding that it is compatible with the idea that moral judgments are just a subset of preferences that we are hardwired to have tendencies regarding, no different in kind to any other preference (like for sweet things)? That is the point I'm trying to make, and it's certainly not something I heard presented in my ethics class in university. The fact that we have a system that is optimized and pre-configured for making judgments about certain important matters is a far cry from saying that there is an objective moral law. It also doesn't support the notion that there are moral facts that are different in kind from any other type of fact.

It seems from skimming that natural law article you mentioned that Aquinas is central to understanding the tradition. The article quotes Aquinas as 'the natural law is the way that the human being “participates” in the eternal law' [of God]. It seems to me that again, we are talking about a system that sees an objective criterion for morality that is outside of humanity, and I think saying that "the way human beings happened to evolve to think about certain actions constitutes an objective natural law for human morality" is a rather tenuous position. Do you hold that position?

Comment by Joseph_Knecht on The Moral Void · 2008-07-01T16:35:49.000Z · LW · GW

Laura ABJ: To expand on the text you quoted, I think that killing babies is ugly, and therefore would not do it without sufficient reason, which I don't think the scenario provides. The ugliness of killing babies doesn't need a moral explanation, and the moral explanation just builds on (and adds nothing but a more convenient way of speaking about) the foundation of aversion, no matter how it's dressed up and made to look like something else.

The idea is not compelling to me and so would not haunt me forever, because like I said, I'm not yet convinced that some X number of refreshing breezes on a hot day is strictly equivalent in some non-arbitrary sense to murdering a baby, and X+1 breezes is "better" in some non-arbitrary sense.

However, the idea of being haunted forever would bother me now if I thought it likely that my future self would think I made the wrong decision, but that implies that I have more knowledge and perspective now than I actually have (in order to know enough to think it likely that I'll be haunted). All I can do is make what I think is the best decision given what I know and understand now, so I don't see that I could think it likely that I would be haunted by what I did. Of course, I could make a terrible mistake, not having understood something I will later think I should have understood, and I might regret that forever, but I wouldn't realize that at the time and I wouldn't think it likely.

Comment by Joseph_Knecht on The Moral Void · 2008-07-01T01:23:57.000Z · LW · GW

Hal: as an amoralist, I wouldn't do it. If there is not enough time to explain to me why it is necessary and convince me that it is necessary, no deal. Even if I thought it probably would substantially increase the future happiness of humanity, I still wouldn't do it without a complete explanation. Not because I think there is a moral fabric to the universe that says killing babies is wrong, but because I am hardwired to have an extremely strong aversion to things like killing babies. Even if I actually was convinced that it would increase happiness, I still might not do it, because I'm still undecided on the idea that some number of people experiencing a refreshing breeze on a hot day is worth more than some person being tortured -- ditto for killing babies.

It seems to me that if you want to find people who are willing to torture and kill babies because "it will increase happiness", you need to find some extremely moral utilitarians. I think you'd have much better luck in that community than among amoralists ;-).

Comment by Joseph_Knecht on What Would You Do Without Morality? · 2008-06-30T19:54:00.000Z · LW · GW

Traditional notions of morality are confused, and observation of the way people act does show that they are poor explanations, so I think we are in perfect agreement there. (I do mean "notion" among thinkers, not among average people who haven't given much thought to such things.) Your second paragraph isn't in conflict with my statement that morality is traditionally understood to be in some sense objectively true and objectively binding on us, and that it would be just as true and just as binding if we had evolved very differently.

It's a different topic altogether to consider to whom we have moral obligations (or who should be treated in ways constrained by our morality). And it's another topic again to consider what types of beings are able to participate in (or are obligated to participate in) the moral system. I wasn't touching on either of these last two topics.

All I'm saying is that I believe that what morality actually is for each of us in our daily lives is a result of what worked for our ancestors, and that is all it is. I.e., there is no objective morality and there is no ONE TRUE WAY. You can never say "reason demands that you must do ..." or "you are morally obligated by reality itself to ..." without first making some assumptions that are themselves not justifiable (the axioms that we have as a result of evolution). Anything you build on that foundational bedrock is contingent and not necessary.

Comment by Joseph_Knecht on What Would You Do Without Morality? · 2008-06-30T18:28:00.000Z · LW · GW

Constant: I basically agree with the gist of your rephrasing it in terms of being relative to the species rather than independent of the species, but I would emphasize that what you end up with is not a "moral system" in anything like the traditional sense, since it is fundamental to traditional notions of morality that THE ONE TRUE WAY does not depend on human beings and the quirks of our evolutionary history and that it is privileged from the point of view of reality (because its edicts were written in stone by God or because the one true species-independent reason proves it must be so).

btw, you mean partial application rather than currying.

Currying is converting a function like the following, which takes a single n-tuple arg (n > 1) ["::" means "has type"]

-- f takes a 2-tuple consisting of a value of type 'x' and a value of type 'y' and returns a value of type 'z'.
f :: (x, y) -> z

into a function like the following, which effectively takes the arguments separately (by returning a function that takes a single argument)

-- f takes a single argument of type 'x', and returns a function that accepts a single argument of type 'y' and returns a value of type 'z'.
f :: x -> y -> z

What you meant is going from

f :: x -> y -> z

to

g :: y -> z
g = f foo

where the 'foo' argument of type 'x' is "hardwired" into function g.
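The distinction can be made concrete with a small runnable sketch (illustrative names and Int types of my own choosing; the comment above gives only the type signatures):

```haskell
-- Uncurried: a single 2-tuple argument, i.e. f :: (x, y) -> z.
addPair :: (Int, Int) -> Int
addPair (x, y) = x + y

-- Currying converts it to the x -> y -> z form; Prelude's curry does exactly this.
addCurried :: Int -> Int -> Int
addCurried = curry addPair

-- Partial application: hardwire the first argument, leaving a function of type y -> z.
addTen :: Int -> Int
addTen = addCurried 10
```

Note that curry changes the shape of the function without fixing any argument, whereas partial application fixes an argument without changing the shape of what remains.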

Comment by Joseph_Knecht on What Would You Do Without Morality? · 2008-06-30T15:57:00.000Z · LW · GW

I agree with mtraven's last post that morality is an innate functionality of the human brain that can't be "disproved", and yet I have said again and again that I don't believe in morality, so let me explain.

Morality is just a certain innate functionality in our brains as it expresses itself based on our life experiences. This is entirely consistent with the assertion that what most people mean by morality -- an objective standard of conduct that is written into the fabric of reality itself -- does not exist: there is no such thing!

A lot of confusion in this thread is due to some people taking "there is no morality" to mean there is nothing in the brain that corresponds to morality (and nothing like a moral system that almost all of us intuitively know) -- which I believe is obviously false, i.e., that there is such a system -- and others taking it to mean there is no objective morality that exists independently of thinking beings with morality systems built in to their brains -- which I believe is obviously true, i.e., that there is no objective morality. And of course, others have taken "there is no morality" to mean other things, perhaps following on some of Eliezer's rather bizarre statements (which I hope he will clarify) in the post that conflated morality with motivation and implied that morality is what gets us out of bed in the morning or causes us to prefer tasty food to boring food.

Morality exists as something hardwired into us due to our evolutionary history, and there are sound reasons why we are better off having it. But that doesn't imply that there is some morality that is sanctioned from the side of reality itself or that our particular moral beliefs are in any way privileged.

As a matter of practice, we all privilege the system that is hardwired into us, but that is just a brute fact about how human beings happen to be. It could easily have turned out radically different. We have no objective basis for ranking and distinguishing between alternate possible moralities. Of course, we have strong feelings nevertheless.

Comment by Joseph_Knecht on What Would You Do Without Morality? · 2008-06-30T06:15:00.000Z · LW · GW

mtraven: many of the posters in this thread -- myself included -- have said that they don't believe in morality (meaning morality and not "values" or "motivation"), and yet I very highly doubt that many of us are clinically psychopaths.

Not believing in morality does not mean doing what those who believe in morality consider to be immoral. Psychopathy is not "not believing in morality": it entails certain kinds of behaviors, which naive analyses attribute to "lack of morality", but which I would argue are a result of aberrant preferences that manifest as aberrant behavior and can be explained without recourse to the concept of morality.

Comment by Joseph_Knecht on What Would You Do Without Morality? · 2008-06-29T23:22:00.000Z · LW · GW

Unknown: of course it would make a difference, just as my behavior would be different if I had billions of dollars rather than next to nothing or if I were immortal rather than mortal. It doesn't have anything to do with "morality" though.

For example, if I had the power of invisibility (and immateriality) and were able to plant a listening device in the oval office with no chance of getting caught, I would do it in order to publicly expose the lies and manipulations of the Bush administration and give proof of the willful stupidity and rampant dishonesty that many of his former administration have stated they witnessed daily -- not because I think there is some objective code of morality that they violate but because I think the world would be a better place if their lies were exposed and such people did not have such power. (Note: I don't think it would be a better place in anything like an objective sense: that is just my personal preference, and if I had the power to make it so, I would.)

(Hello, NSA: this is all purely fictional, of course.)

Comment by Joseph_Knecht on What Would You Do Without Morality? · 2008-06-29T22:42:00.000Z · LW · GW

On the topic of vegetarianism, I originally became a vegetarian 15 years ago because I thought it was "wrong" to cause unnecessary pain and suffering of conscious beings, but I am still a vegetarian even though I no longer think it is "wrong" (in anything like the ordinary sense).

Now that I no longer think that the concept of "morality" makes much sense at all (except as a fancy and unnecessary name for certain evolved tendencies that are purely a result of what worked for my ancestors in their environments (as they have expressed themselves and changed over the course of my lifetime)), I remain a vegetarian for the reason that I still prefer there to be less unnecessary pain and suffering rather than more. I don't think my preference is demanded or sanctioned by some objective moral law; it is merely my preference.

I recognize now that the reason I thought it was "wrong" is that I had the underlying preference all along and that I recognized that my behavior was inconsistent with my fundamental preferences (and that I desired to act more consistently with my fundamental beliefs).

Would I prefer that more people were vegetarians? Yes. Is it because I think unnecessary pain and suffering are "wrong"? No. I just don't like unnecessary pain and suffering and would prefer for there to be less rather than more. If you take the person who says it is "wrong", and keep probing them for more fundamental reasons that they have this feeling of "wrongness", asking them "why do you believe that?" again and again, eventually you come to a point where they say "I just believe this".

As Wittgenstein said:

If I have exhausted the justifications I have reached bedrock, and my spade is turned. Then I am inclined to say: “This is simply what I do.”

Believers in morality try to convince us that there is a bedrock that justifies everything else but needs no justification itself, but there is no uncaused cause and there can be no infinite regress. Our evolved tendencies as they express themselves as a result of our life experience are the bedrock, and nothing else is necessary. Morality is just a fairy tale that we build upon the bedrock in order to convince ourselves that reality or nature (or God) cares about what we do and that we are absolved of responsibility for our behavior as long as we were "trying to do the right thing" (which is a more subtle version of the "I was just following orders" defense).

One might argue that I believe in "morality" but have merely substituted "preferences" for "moral beliefs", but the difference is that I don't think any of my preferences are different in kind from any others, so there is no justification for picking a subset of them and calling that subset "the moral preferences" and arguing that they are fundamentally different from any other preference I have.

Ah, I'm rambling ... Too much coffee.

Comment by Joseph_Knecht on What Would You Do Without Morality? · 2008-06-29T06:47:19.000Z · LW · GW

Like many others here, I don't believe that there is anything like a moral truth that exists independently of thinking beings (or even dependently on thinking beings in anything like an objective sense), so I already live in something like that hypothetical. Thus my behavior would not be altered in the slightest.

Comment by Joseph_Knecht on Possibility and Could-ness · 2008-06-20T20:08:00.000Z · LW · GW

@Richard: I think that's a valid reduction. It explains non-negative integers reductively in terms of an isomorphism between two groups of things without appealing to numbers or number concepts.

@constant: regardless of the label, you still have 2 sets of things, those which it is possible to label fizzbin (following the rules) and those which it is not. Possibility is still there. So what does it mean that it is possible to label a node fizzbin? Does that mean that in order to understand the algorithm, which relies on the possibility of labelling nodes "fizzbin" or not, we now must set up a different search space, make a preliminary assumption about which nodes in that space we think it is possible to label fizzbin (following the rules), and then start searching and changing labels? How does this process terminate? It terminates in something other than the search algorithm, a primitive sense of possibility that is more fundamental than the concept of "can label fizzbin".

Comment by Joseph_Knecht on Possibility and Could-ness · 2008-06-20T07:46:00.000Z · LW · GW

To be clear, there are two different but related points that I've tried to make here in the last few posts.

Point 1 is a minor point about the Rationalist's Taboo game:

With regard to this point, as I've stated already, the task was to give a reductive explanation of the concept of possibility by explaining it in terms of more fundamental concepts (i.e., concepts which have nothing to do with possibility or associated concepts, even implicitly). I think that Eliezer failed that by sneaking the concept of "possible to be reached" (i.e., "reachable") into his explanation (note that what we call that concept is irrelevant, reachable or fizzbin or whatever).

Point 2 is a related but slightly different point about whether state-space searching is a helpful way of thinking about possibility and whether it actually explains anything.

I think I should summarize what I think Eliezer's thesis is first so that other people can correct me (Eliezer said he is "done explaining" anything to me, so perhaps others who think they understand him well would speak up if they understand his thesis differently).

Thesis: regarding some phenomenon as possible is nothing other than the inner perception a person experiences (and probably also the memory of such a perception in the past) after mentally running something like the search algorithm and determining that the phenomenon is reachable.

Is this substantially correct?

The problem I have with that position is that when we dig a little deeper and ask what it means for a state to be reachable, the answer circularly depends on the concept of possibility, which is what we are supposedly explaining. A reachable state is just a state that it is possible to reach. If you don't want to talk about reaching, call it transitioning or changing or whatever. The key point is that the algorithm divides the states (or whatever) into two disjoint sets: the ones it's possible to get to and the ones it is not possible to get to. What distinguishes these two sets except possibility?

You might say that there is no problem, this possibility is again to be explained in terms of a search (or the result of a search), recursively. But you can't have an infinite regress, so there must be a base case. The base case might be something hardwired into us, but then that would mean that the thing that is hardwired into us is what possibility really is, and that the sensation of possibility that arises from the search algorithm is just something that depends on our primitive notion of possibility. If the base case isn't something hardwired into us, it is still necessarily something other than the search algorithm, so again, the search algorithm is not what possibility is.
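To make the worry concrete, here is a toy sketch (a hypothetical graph of my own, not from the discussion) of a search that computes the "can get to" set. Note where possibility lives: not in the search itself, but in the given successor relation, which stipulates up front which transitions are available.

```haskell
import qualified Data.Set as Set

-- A toy successor relation: which states can be transitioned to directly.
-- The primitive notion of possibility is built into this relation.
successors :: Int -> [Int]
successors 1 = [2, 3]
successors 2 = [4]
successors _ = []

-- Depth-first search from a start state. The result partitions states into
-- the two disjoint sets discussed above: those visited and those never visited.
reachable :: Int -> Set.Set Int
reachable start = go Set.empty [start]
  where
    go seen []       = seen
    go seen (s:rest)
      | s `Set.member` seen = go seen rest
      | otherwise           = go (Set.insert s seen) (successors s ++ rest)
```

The search only propagates possibility that the successor relation already encodes; it does not explain what makes a single transition possible in the first place.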

Comment by Joseph_Knecht on Possibility and Could-ness · 2008-06-20T03:02:00.000Z · LW · GW

Cyan: I think your quibble misses the point. Eliezer's directions were to play the rationality taboo game and talk about possibility without using any of the forbidden concepts. His explanation failed that task, regardless of whether either he or Brandon were referring to the map or the territory. (Note: this point is completely unrelated to the specifics of the planning algorithm.)

I'll summarize my other points later. (But to reiterate the point that Eliezer doesn't get and pre-empt anybody else telling me yet again that the label is irrelevant, I realize the label is irrelevant, and I am not talking about character strings or labels at all.)

Comment by Joseph_Knecht on Possibility and Could-ness · 2008-06-20T00:40:00.000Z · LW · GW

Eliezer, my point was that you dedicated an entire follow-up post to chiding Brandon, in part for using realizable in his explanation since it implicitly refers to the same concept as could, and that you committed the same mistake in using reachable.

Anyway, I guess I misunderstood the purpose of this post. I thought you were trying to give a reductive explanation of possibility without using concepts such as "can", "could", and "able". If I've understood you correctly now, that wasn't the purpose at all: you were just trying to describe what people generally mean by possibility.

Comment by Joseph_Knecht on Possibility and Could-ness · 2008-06-19T22:39:00.000Z · LW · GW

Can you talk about "could" without using synonyms like "can" and "possible"? .... Can you describe the corresponding state of the world without "could", "possible", "choose", "free", "will", "decide", "can", "able", or "alternative"?

My point being that you set out to explain "could" without "able" and you do it by way of elaboration on a state being "able to be reached".

What you decide to label the concept does not change the fact that the concept you've decided upon is a composite concept that is made up of two more fundamental concepts: reaching (or transitioning to) and possibility.

You've provided one sense of possibility (in terms of reachability), but possibility is a more fundamental concept than "possible to be reached".

Comment by Joseph_Knecht on Possibility and Could-ness · 2008-06-19T21:19:00.000Z · LW · GW

bump

Comment by Joseph_Knecht on Possibility and Could-ness · 2008-06-18T17:35:00.000Z · LW · GW

I don't think reachable is permissible in the game, since reach-able means able to be reached, or possible to be reached.

Possibility is synonymous with to be able, so any term suffixed with -able should be forbidden.

The reachability explanation of possibility is also just one instance of possibility among many. to be able (without specifying in what manner) is the general type, and able to be reached is one particular (sub-) type of possibility. The more traditional understanding of possibility is able to exist, but others have been used too, such as able to be conceived, able to be done, able to be caused to exist, etc.

Comment by Joseph_Knecht on Possibility and Could-ness · 2008-06-16T22:14:00.000Z · LW · GW

Robin Z:

I think it is not only a clash of intuitions, since the success rates of pre-scientific theory and folk psychology are poor. This should urge caution in keeping concepts that seem to give rise to much confusion. I would argue that the default attitude towards pre-scientific concepts that have been shrouded in confusion for thousands of years, with still no clarity in sight, should be to avoid them when possible.

When you say that you haven't seen evidence that puts "soul" on shaky grounds, do you mean that assuming determinism and what we know of human physiology that you believe there are still good reasons for positing the existence of a soul? If so, please explain what you mean by the term and why you think it is still a valuable concept. I think the notion of "soul" arose out of ignorance about the nature of living things (and how human beings were different from non-humans in particular) and that it not only serves no positive purpose but causes confusion, and I suspect that "choice" would never have arisen in anything like the form it did if we were not also confused about things like "free will" and the nature of thought.

Regarding "fire" and that it persists as a term, the only aspects of fire that continue to exist from ancient times are descriptions of its visual appearance and its obvious effects (that is, just the phenomenology of fire). Everything else about the concept has been abandoned or re-explained. Since the core of the concept (that it has the distinctive visual appearance it does and "it burns stuff") remains, it is reasonable to keep the term and redefine the explanation for the core aspects of the concept.

The case with choice is quite different, as the phenomenology is much more complex, it is less direct, and it is "mental" and not visual (and thus much more likely to be confused/confusing). Things like "fire" and "thunder" and "rain" and "sun" can easily be re-explained since the phenomenology was reasonably accurate and that was the basis for the concept, but we don't all agree on what is meant by choice or what the concept is supposed to explain.


Eliezer:

If determinism is true, what is the difference between "I could reach state X by taking action Y, if I wanted" and "I could reach state X by taking action Y, if 1=2"?

If determinism is true, then I couldn't have wanted to do X, just as it couldn't have been the case that 1=2, so the conditional is vacuously true, since the antecedent is false and must be false.

This doesn't seem to me like a plausible explanation of possibility or could-ness, unless you can explain how to distinguish between "if I wanted" and "if 1=2" without mentioning possible worlds or possibility or even reachability (since reachability is what you are defining).
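The vacuous-truth point above can be sketched with the truth table for material implication. (A minimal illustration in Python, not part of the original argument; the `implies` helper is mine.)

```python
# Material implication: "if P then Q" is true whenever P is false,
# regardless of Q -- the conditional is vacuously true.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# An impossible antecedent (like "1 = 2", or a want that determinism
# rules out) makes the whole conditional true no matter the consequent.
assert implies(False, True)
assert implies(False, False)

# Only a true antecedent with a false consequent falsifies it.
assert not implies(True, False)
```

So "I could reach X, if I wanted" and "I could reach X, if 1=2" come out equally (vacuously) true once the antecedent is ruled impossible, which is the worry raised above.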

Comment by Joseph_Knecht on Possibility and Could-ness · 2008-06-16T04:34:15.000Z · LW · GW

Nick, your example confuses more than it clarifies. What exactly is the choice? Brain processes that occur in 0 < t < 1? Brain processes occurring in that slice that have certain causal relations with future actions? Conscious brain processes occurring such that...? Conscious brain processes occurring such that ... which are initiated by (certain) other brain processes?

You speak as if "choice" means something obvious that everybody understands, but it only has such a meaning in the sense that everybody knows what is meant by "soul" (which refers to a non-existent thing that means something different to practically everybody who uses it and usually results in more confusion than clarification).

Comment by Joseph_Knecht on Possibility and Could-ness · 2008-06-16T03:05:28.000Z · LW · GW

@Jagadul:

By "constraints", I meant that Eliezer specified only that certain processes happening in the brain are sufficient for a choice occurring, which my example refuted. You then added the ideas that it is not mere happening in the brain that matters, but also the additional constraints entailed by the concepts of Eliezer-the-person and body-shell-of-Eliezer, and that the former can be destroyed while the latter remains, which changes ownership of the choice, and so on.

Anyway, I understand what you're saying about choice as a higher-level convenience term, but I don't think it is helpful. I think it is a net negative and that we'd do better to drop it. You gave the thought, "given these options, what will he choose?", but I think the notion of choice adds nothing of value to the similar question, "given these options, what will he do?" You might say that it is different, since a choice can be made without an action occurring, but then I think we'd do better to say not "what will he choose?" but something more like "what will he think?", or perhaps something else depending on the specifics of the situation under consideration.

I believe there's always a way of rephrasing such things so as not to invoke choice, and all that we really give up is the ability to talk about totally generic hypothetical situations (where it isn't specified what the "choice" is about). Whenever you flesh out the scenario by specifying the details of the "choice", then you can easily talk about it more accurately by sidestepping the notion of choice altogether.

I don't think that "choice" is analogous to Newtonian mechanics before relativity. It's more akin to "soul", which we could have redefined and retrofitted in terms of deterministic physical processes in the brain. But just as it makes more sense to forget about the notion of a soul, I think it makes more sense to forget about that of "choice". Just as "soul" is too strongly associated with ideas such as dualism and various religious ideas, "choice" is too strongly associated with ideas such as non-determinism and moral responsibility (relative to some objective standard of morality). Instead of saying "I thought about whether to do X, Y, or Z, then chose to do X, and then did X", we can just say "I thought about whether to do X, Y, or Z, then did X."

@Constant:

I think "choice" is closer to "caloric" than "heat", because I don't believe there is any observable mundane phenomenon that it refers to. What do you have in mind that cannot be explained perfectly well without supposing that a "choice" must occur at some point in order to explain the observed phenomenon?

Comment by Joseph_Knecht on Possibility and Could-ness · 2008-06-15T08:11:49.000Z · LW · GW

Jadagul: I'm saying that Eliezer's explanation of what a choice is is not a sufficient condition. You suggested some additional constraints, which I would argue may be necessary but are still not sufficient conditions for a choice occurring.

My key point, though, as Schizo noted, was that I don't think the concept should be salvaged, any more than phlogiston or caloric should have been salvaged.

Comment by Joseph_Knecht on Possibility and Could-ness · 2008-06-15T00:45:52.000Z · LW · GW

Robin, I don't think "whose brain it is" is really a meaningful and coherent concept, but that is another topic.

My general point was that Eliezer seemed to be saying that certain things occurring in his brain are sufficient for us to say that he made a choice and is morally responsible for the choice. My example was intended to show that while that may be a necessary condition, it is not sufficient.

As for what I actually believe, I think that while the notions of choice and moral responsibility may have made sense in the context in which they arose (though I have strong doubts because that context is so thoroughly confused and riddled with inconsistencies), they don't make sense outside of that context. Free will and choice (in the west at least) mean what they mean by virtue of their place in a conceptual system that assumes a non-physical soul-mind is the ultimate agent that controls the body and that what the soul-mind does is not deterministic.

If we give up that notion of a non-physical, non-deterministic soul-mind as the agent, the controller of the body, the thing that ultimately makes choices and is morally responsible (here or in the afterlife), I think we must also give up concepts like choice and moral responsibility.

That is not to say that they won't be replaced with better concepts, but those concepts will relate to the traditional concepts of choice and moral responsibility as heat relates to caloric.

Comment by Joseph_Knecht on Possibility and Could-ness · 2008-06-14T23:41:32.000Z · LW · GW

Eliezer said:

I am not saying that choice is an illusion. I am pointing to something and saying: "There! Right there! You see that? That's a choice, just as much as a calculator is adding numbers! It doesn't matter if it's deterministic! It doesn't matter if someone else predicted you'd do it or designed you to do it! It doesn't matter if it's made of parts and caused by the dynamics of those parts! It doesn't matter if it's physically impossible for you to have finally arrived at any other decision after all your agonizing! It's still a choice!"

--

Does it not follow from this that if I installed hardware into your brain that completely took control of it and mapped my thoughts onto it (so that the deliberations in your brain would be ones that I initiated), you would still be choosing and still be morally responsible? If I decided to kill someone and then forced you to carry out the murder, you would, on this view, be morally responsible. I find this hard to accept.

Comment by Joseph_Knecht on Causality and Moral Responsibility · 2008-06-14T23:26:00.000Z · LW · GW

Nick: note that I said "conscious thoughts" and not "thoughts", and I specified that the individual is not aware of the inputs/outputs from/to the actuators/sensors and has no control over them.

Comment by Joseph_Knecht on Causality and Moral Responsibility · 2008-06-14T08:27:02.000Z · LW · GW

Nick, I don't understand what you mean by random. There is nothing in the slightest random (as I understand the term) in the scenario I gave. The primary difference between the two cases is that in one case you believe you are effecting action via your conscious thoughts (but you aren't) and in the other you do not believe you are effecting action via your conscious thoughts. In both cases, your actions are fully determined by what is going on in your brain; it's just that your conscious thoughts are irrelevant to what happens (and you are deluded about this in one of the two scenarios).

Comment by Joseph_Knecht on Causality and Moral Responsibility · 2008-06-14T07:17:13.000Z · LW · GW

Eliezer: why do you say John-1 (the "coward") is morally responsible if under your scenario it was physically impossible for him to act as John-2 did given his initial physical conditions? (If it were not impossible, then his actions wouldn't have been fully determined by his initial physical condition.)

To possibly confuse matters a little more, here's a thought experiment that occurred to me for some reason. I'd be curious to hear what anybody who says that determinism does not undermine moral responsibility, or who makes the even stronger claim that there is absolutely no conflict between determinism and moral responsibility, has to say about the following:

You wake up in front of a schoolhouse, where you've just taken a nap, and discover that your body has been encased in thick metal armor that has actuators at all the joints and is covered in sensors (nobody else did it; this just happens to be the one in a googol^^^googol chance of this spontaneously happening -- so there's nobody else to "blame"). You are not strong enough to move the armor or break free, but the sensors and actuators are wired up to your brain such that the sensors send their data to certain parts of your brain that you have no conscious awareness of, and the actuators respond to signals from some (perhaps the same) part of your brain whose happenings you are also not conscious of.

The schoolhouse is burning, cherubic youth are screaming, and you could probably save a child or two. But of course, you are not physically capable of doing anything except going along for the ride and doing whatever the armor does based on the firings in your brain that you have no control over or awareness of.

Let's say that the armor turns and runs. Are you -- the person inside -- morally responsible?

If under normal circumstances one's actions were totally predetermined, does one have any more ability to choose than the individual in the armor does? If not, how do you assert that John-1 would be morally responsible but armored John-A1 would not be morally responsible?

I'm not sure what I think about determinism and moral responsibility, but I have a difficult time understanding how these two topics could have no relation to each other, as some people in this thread seem to believe.

Comment by Joseph_Knecht on Bloggingheads: Yudkowsky and Horgan · 2008-06-09T06:08:33.000Z · LW · GW

AI researchers of previous eras made predictions that were wildly wrong. Therefore, human-level AI (since it is a goal of some current AI researchers) cannot happen in the foreseeable future. They were wrong before, so they must be wrong now. And dawg-garn it, it seems like some kind of strange weirdo religious faith-based thingamajiggy to me, so it must be wrong.

Thanks for a good laugh, Mr. Horgan! Keep up the good work.

Comment by Joseph_Knecht on The Rhythm of Disagreement · 2008-06-02T01:36:03.000Z · LW · GW

In the video link that komponisto gave, the relevant section of the video starts at 49:00 or so. He doesn't argue there though that consciousness -- he uses the term sentience -- will necessarily remain a mystery but only that this might turn out to be the case. He makes the analogy of trying to think about time before the big bang and that there is some kind of conceptual category-type error going on there that gives it its subjective feeling of being mysterious and unknowable, and states that this may be the case with sentience/consciousness, free-will, etc.

Comment by Joseph_Knecht on A Premature Word on AI · 2008-06-02T00:19:00.000Z · LW · GW

Robin, I agree that the charitable interpretation is possible. But when he said "It seemed a safe answer since no one could ever tell me I was wrong", I took him to be saying that whether he could be shown to be wrong or not was a major factor in saying what he did.

In that case, if he cared more about saying something that he couldn't be held responsible for if wrong than about having a justified and likely-to-be-correct position, I'd say that is at least as bad.

Comment by Joseph_Knecht on A Premature Word on AI · 2008-06-01T21:02:00.000Z · LW · GW

Robin, the issue is not "having made a mistake years ago", which we all have done; it is "knowingly being intellectually sloppy/dishonest because you know you can get away with it for a while".

Comment by Joseph_Knecht on Einstein's Superpowers · 2008-06-01T00:17:00.000Z · LW · GW

Caledonian, the childish "I have a secret that I'm not going to tell you, but here's a hint" bs is very annoying and discourages interacting with you. If you're not willing to spell it out, just don't say it in the first place. Nobody cares to play guessing games with you.

Comment by Joseph_Knecht on Einstein's Superpowers · 2008-05-31T20:08:00.000Z · LW · GW

Caledonian: were you trolling, or are you going to explain the "gaping hole" and "false equivalence" you mentioned?

Comment by Joseph_Knecht on Einstein's Superpowers · 2008-05-31T01:34:31.000Z · LW · GW

What "giant hole"? What "false equivalence"?

Comment by Joseph_Knecht on Einstein's Superpowers · 2008-05-31T00:21:23.000Z · LW · GW

Personally, I don't think there is a trick, and I don't think he's keeping it private for those reasons. I think his method, if something so obvious (which is not to say easy) can be called a method, is to discuss the issue and interact with the person long enough to build up a model of the person, what he values and fears most, and then probe for weaknesses & biases where that individual seems most susceptible, and follow those weaknesses -- again and again.

I think most, perhaps all, of us, unless we put our fingers in our ears and refuse to honestly engage, are capable of being convinced by a skilled interlocutor who has MUCH more experience thinking about the issue than we do.

Of course, I could be wrong, and there could be some argument that would convince me in minutes, or there could be some trick, but I'd be very surprised if so.

Comment by Joseph_Knecht on Einstein's Superpowers · 2008-05-30T21:46:00.000Z · LW · GW

@DaveInNYC: what you can and can't assume is not relevant to whether the transcripts should be private or not. If they were public, anybody predisposed to explanations like "they must have been more simple-minded than me" could just as easily find another equally "compelling" explanation, like "I didn't think of that 'trick', but now that I know it, I'm certain I couldn't be convinced!"

I personally think they should remain private, as frustrating as it is to not know how Eliezer convinced them. Not knowing how Eliezer did it nicely mirrors the reality of our not knowing how a much smarter AGI might go about it.

Comment by Joseph_Knecht on Einstein's Superpowers · 2008-05-30T21:14:45.000Z · LW · GW

Eliezer: if you're going to point to the AI Box page, shouldn't you update it to include more recent experiments (like the ones from 2005 where the gatekeeper did not let the AI out)?

Comment by Joseph_Knecht on Einstein's Superpowers · 2008-05-30T18:23:08.000Z · LW · GW

When did "genius" (as in "just another Jewish genius") as a term become acceptable to use in the sense of mere "exceptional ability" without regard to accomplishment/influence or after-the-fact eminence? I know it is commonly (mis-)used in this sense, but it seems to me that "unaccomplished genius" should be an oxymoron, and I'm somewhat surprised to see it used in this sense so much in this thread (and on this forum).

I have always considered the term to refer (after the fact) to those individuals who shaped the intellectual course of humanity -- e.g., Shakespeare, Newton, Darwin, Einstein -- and not just high-IQ individuals who may or may not actually do anything of consequence. It is what Newton and Mozart and Picasso actually did, the effect they had on intellectual history, that justifies our calling them geniuses, not the mere fact that they were exceptionally talented.

What do others think? Perhaps we misuse the word because there is no other single word that is appropriate? Or is there some word I'm not thinking of to describe exceptionally intelligent and creative people (without regard to what they do with their abilities)? "Brilliant" as an adjective, if pronounced emphatically enough, conveys the sense, but it's not a noun.

Comment by Joseph_Knecht on Reductive Reference · 2008-04-04T19:49:35.000Z · LW · GW

Eliezer: no comment on my point that 'single-levelness' is an attribute of your model of reality rather than of reality itself? And that saying "reality is single-level" is therefore misleading.

Comment by Joseph_Knecht on Reductive Reference · 2008-04-03T23:59:06.000Z · LW · GW

I prefer to think of the distinction between reality and our models of it not in terms of single-level versus multi-level but rather in terms of the thing itself versus descriptions of the thing: conceptual systems have levels, reality does not.

Thinking in terms of levels is applicable to conceptual systems, but not applicable to reality — applicable to conceptual systems about reality, but not to reality itself.

The notion of level is just a mental construct that is helpful in describing how we think and how we represent. It is a function of how we carve up reality conceptually, but reality itself doesn't contain bones and joints that are waiting to be carved — not even just one fundamental joint that we have no choice about how to carve.

Comment by Joseph_Knecht on Joy in Discovery · 2008-03-23T03:22:37.000Z · LW · GW

Caledonian, that's mere sophistry to say "mathematics is physics because it is performed by a brain or analogous physical device".

According to that definition, no matter what you study at university, you are really doing physics. Every single human being that has ever earned a university degree earned a physics degree (since English is Physics, Art History is Physics, etc.), and every individual whose work involves use of her brain (even if only for respiration and basic metabolic processes) is a physicist.

I think I'll stick with the understanding of physics that the rest of the world uses.

Comment by Joseph_Knecht on Joy in Discovery · 2008-03-23T01:37:24.000Z · LW · GW

Me: Physics could at some point be completely solved, which is to say that at some point, there would be no further knowledge that would ever allow us to do anything new, to make any better a prediction, to do anything more efficiently, etc.

Tobbic: It seems to me that ppl have a tendency to overstate their knowledge. What does a slug know about physics? Respectively, what does a human know about (possible) 101th dimension or travel through time or any of the stuff some posthuman might do "physics" about.

I didn't claim that it is a fact that physics could be completely solved. My point is that it is conceivable that it could be -- there is no apparent logical contradiction in believing this -- but it is absurd to say that every story could ever be told, every painting could ever be painted, or that we could run out of theorems to prove and new areas of mathematics to invent.

Reality does not constrain mathematics -- it doesn't have to have applications -- but it obviously does constrain physics, and sets an upper bound on what can be discovered.

Note: this argument says nothing about whether it is likely that we ever will completely solve it or not, as you seem to assume.

Comment by Joseph_Knecht on Joy in Discovery · 2008-03-23T01:27:39.000Z · LW · GW

Caledonian: Every mathematical statement is a claim about the behavior of the physical world.

Please interpret the following statements for me in terms of the behavior of the physical world, and tell me which branch of physics deals with the behavior of each:

  • The cardinality of the set of real numbers is greater than the cardinality of the set of natural numbers.

  • The continuum hypothesis is independent of ZF and ZFC set theory.

  • There are no solutions to the equation a^n + b^n = c^n for non-zero integers a, b, and c and integer n > 2.

Comment by Joseph_Knecht on Joy in Discovery · 2008-03-22T05:56:06.000Z · LW · GW

The mathematics of physics is just an infinitesimal part of all of mathematics.

Physics could at some point be completely solved, which is to say that at some point, there would be no further knowledge that would ever allow us to do anything new, to make any better a prediction, to do anything more efficiently, etc.

There is no such limit to mathematics though, because mathematics, unlike physics, is not constrained by reality. It only needs to be self-consistent (under perhaps limitless different conceptions of consistency) given a particular starting point, and there is no limit to the number of starting points or perspectives upon which new systems can be built. And there are concepts analogous to quantity, transformation, shape, etc., that have not yet been invented but will be fertile ground for new branches of mathematics someday.

I think it's possible that all useful mathematics could someday be discovered (if you consider all art useless), but that would still be just an infinitesimal part of all of mathematics. To say that mathematics could be exhausted is to say that all stories may one day have been told, and fiction may be exhausted. It just can't happen.

Comment by Joseph_Knecht on Joy in Discovery · 2008-03-21T19:55:13.000Z · LW · GW

I think the shot of adrenaline to the ego is what gives the sense of high in most cases, and what motivates most scientists. And it probably is almost entirely the source of the high of the non-world changing and minor discoveries.

Having said that, I do think that in some cases, very few, there is perhaps a stronger element of what Eliezer briefly touched on towards the end of the essay: that one has just added to the sum total of humanity's knowledge, and inched us toward the perfect understanding of the world around us that science constantly seeks.

To think that one has just discovered something that will affect all of humanity for the rest of time by adding to the knowledge we have and providing a foundation for all knowledge that builds upon it is a dizzying thought, and I think the high is not only that "I'm such a genius and I'll be remembered forever and be envied by all my contemporaries", but also consists in the realization of the incalculable consequences of what you have just discovered.

Of course, this applies to discoveries of the nature of Newton or Darwin, rather than lesser discoveries, and I'd attribute the high of lesser discoveries to more egocentric thoughts. (And perhaps in Newton's case as well, since he was quite a self-centered individual, but that's another subject.)

To summarize: while the ego-centered explanation of the high may be dominant in all minor or trivial discoveries, and present in all greater ones, in some cases the high may be even more strongly based on the sense of steering the future of mankind, or at least of science, of leading us into new territory. If it feels good to help an old lady across the street, how would it feel to give a gift to the trillions of human beings that do not yet exist? This also explains why the high is probably that much greater -- at least upon reflection -- for something one thinks might not have been discovered for a long time otherwise, as opposed to things that were in the air at the time and would certainly have been discovered in the very near future by somebody else (e.g., Archimedes' method of exhaustion, close to the modern use of limits in calculus and analysis, had he sensed the implications, versus Watson and Crick, who were racing to beat Linus Pauling).

Comment by Joseph_Knecht on Words as Mental Paintbrush Handles · 2008-03-04T01:01:08.000Z · LW · GW

Caledonian: you said "Your visualizations include such details? As the description didn't include such details, they're necessarily undefined - so why did you define them out of their uncertainty?"

I understood from your statement that you expressed surprise that the reported visualization contained such details as "which side of the street the person is walking down". This implied to me that you believe it is possible to visualize a man walking down a street, but not be either walking down the left or right side or in the street itself, etc.

Comment by Joseph_Knecht on Words as Mental Paintbrush Handles · 2008-03-04T00:23:39.000Z · LW · GW

I personally saw the man walking away from me on the left side of the street, and my perspective was just to the left of the curb on that side of the street and slightly higher than the man, who was a short distance from me. I saw him turn left into a drugstore for a split second, and then when I realized the joke briefly saw him morph into a drugstore on the sidewalk.

To the people who say that they visualized the scene but, for example, didn't see the person walking towards or away from you, or didn't see the man on one side of the street or the other: how can you visualize a man walking down a street and not at the same time have him be on one side of the street or the other (or in it), and was he not either facing you or facing away from you in the image? If not, was it that the man was vague and so didn't have features like a face that would indicate direction? I'm still puzzled how he could not have a position relative to the street in an image that includes both a person and a street.

Comment by Joseph_Knecht on The Second Law of Thermodynamics, and Engines of Cognition · 2008-03-01T03:11:51.000Z · LW · GW

anonymous:

I don't see how your response addresses my concern that saying accurate belief requires observation implies unacceptable consequences for the man on the street, such as that his correct belief that the Giants would win on Sunday is nevertheless not an accurate belief.