Complexity of Value ≠ Complexity of Outcome

post by Wei Dai (Wei_Dai) · 2010-01-30T02:50:49.369Z · LW · GW · Legacy · 223 comments

Complexity of value is the thesis that our preferences, the things we care about, don't compress down to one simple rule, or a few simple rules. To review why it's important (by quoting from the wiki):

I certainly agree with both of these points. But I worry that we (at Less Wrong) might have swung a bit too far in the other direction. No, I don't think that we overestimate the complexity of our values, but rather that there's a tendency to assume that complexity of value must lead to complexity of outcome, that is, that agents who faithfully inherit the full complexity of human values will necessarily create a future that reflects that complexity. I will argue that it is possible for complex values to lead to simple futures, and explain the relevance of this possibility to the project of Friendly AI.

The easiest way to make my argument is to start by considering a hypothetical alien with all of the values of a typical human being, but also an extra one. His fondest desire is to fill the universe with orgasmium, which he considers to have orders of magnitude more utility than realizing any of his other goals. As long as his dominant goal remains infeasible, he's largely indistinguishable from a normal human being. But if he happens to pass his values on to a superintelligent AI, the future of the universe will turn out to be rather simple, despite those values being no less complex than any human's.

The above possibility is easy to reason about, but perhaps does not appear very relevant to our actual situation. I think that it may be, and here's why. All of us have many different values that do not reduce to each other, but most of those values do not appear to scale very well with available resources. In other words, among our manifold desires, there may only be a few that are not easily satiated when we have access to the resources of an entire galaxy or universe. If so (and assuming we aren't wiped out by an existential risk or fall into a Malthusian scenario), the future of our universe will be shaped largely by those values that do scale. (I should point out that in this case the universe won't necessarily turn out to be mostly simple. Simple values do not necessarily lead to simple outcomes either.)

Now if we were rational agents who had perfect knowledge of our own preferences, then we would already know whether this is the case or not. And if it is, we ought to be able to visualize what the future of the universe would look like, if we had the power to shape it according to our desires. But I find myself uncertain on both questions. Still, I think this possibility is worth investigating further. If it were the case that only a few of our values scale, then we could potentially obtain almost all that we desire by creating a superintelligence with just those values. And perhaps this can be done manually, bypassing an automated preference extraction or extrapolation process with its associated difficulties and dangers. (To head off a potential objection, this does assume that our values interact in an additive way. If there are values that don't scale but interact nonlinearly (multiplicatively, for example) with values that do scale, then those would need to be included as well.)
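
One minimal way to make the additivity assumption and the notion of "scaling" concrete (the notation here is illustrative, not the post's): suppose total value decomposes as

\[
U(R_1, \dots, R_n) = \sum_{i=1}^{n} u_i(R_i), \qquad \sum_i R_i \le R,
\]

where $R_i$ is the share of the total resources $R$ devoted to value $i$. Call $u_i$ satiable if $u_i(R_i)$ approaches some ceiling $c_i$ as $R_i \to \infty$, and scaling if it keeps growing (say roughly linearly). As $R$ grows without bound, an optimal allocation spends only a bounded amount on each satiable term and essentially everything else on the scaling terms, so the resulting future is shaped by the scaling values alone. The caveat above corresponds to dropping additivity: if a satiable $u_j$ instead enters multiplicatively, say $U = u_j(R_j) \cdot \sum_{i \ne j} u_i(R_i)$, then $u_j$ must be kept satisfied no matter how large $R$ becomes.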

Whether or not we actually should take this approach would depend on the outcome of such an investigation. Just how much of our desires can feasibly be obtained this way? And how does the loss of value inherent in this approach compare with the expected loss of value due to the potential for errors in the extraction/extrapolation process? These are questions worth trying to answer before committing to any particular path, I think.

P.S., I hesitated a bit in posting this, because underestimating the complexity of human values is arguably a greater danger than overlooking the possibility that I point out here, and this post could conceivably be used by someone to rationalize sticking with their "One Great Moral Principle". But I guess those tempted to do so will tend not to be Less Wrong readers, and seeing how I already got myself sucked into this debate, I might as well clarify and expand on my position.

223 comments

Comments sorted by top scores.

comment by Toby_Ord · 2010-01-30T11:45:41.076Z · LW(p) · GW(p)

There are a lot of posts here that presuppose some combination of moral anti-realism and value complexity. These views go together well: if value is not fundamental, but dependent on characteristics of humans, then it can derive complexity from this and not suffer due to Occam's Razor.

There are another pair of views that go together well: moral realism and value simplicity. Many posts here strongly dismiss these views, effectively allocating near-zero probability to them. I want to point out that this is a case of non-experts being very much at odds with expert opinion and being clearly overconfident. In the PhilPapers survey, for example, 56.3% of philosophers lean towards or believe realism, while only 27.7% lean towards or accept anti-realism.

http://philpapers.org/surveys/results.pl

Given this, and given comments from people like me in the intersection of the philosophical and LW communities who can point out that it isn't a case of stupid philosophers supporting realism and all the really smart ones supporting anti-realism, there is no way that the LW community should have anything like the confidence that it does on this point.

Moreover, I should point out that most of the realists lean towards naturalism, which allows a form of realism that is very different to the one that Eliezer critiques. I should also add that within philosophy, the trend is probably not towards anti-realism, but towards realism. The high tide of anti-realism was probably in the middle of the 20th Century, and since then it has lost its shiny newness and people have come up with good arguments against it (which are never discussed here...).

Even for experts in meta-ethics, I can't see how their confidence can get outside the 30%-70% range given the expert disagreement. For non-experts, I really can't see how one could even get to 50% confidence in anti-realism, much less the kind of 98% confidence that is typically expressed here.

Replies from: CarlShulman, Zack_M_Davis, Roko, Eliezer_Yudkowsky, JamesAndrix, TruePath, ciphergoth, Wei_Dai, ciphergoth, whpearson, mattnewport, Stuart_Armstrong, CarlShulman, taw, jhuffman
comment by CarlShulman · 2010-01-30T20:59:40.660Z · LW(p) · GW(p)

Among target faculty listing meta-ethics as their area of study, moral realism's lead is much smaller: 42.5% for moral realism and 38.2% against.

Looking further through the PhilPapers data, a big chunk of the belief in moral realism seems to be coupled with theism, whereas anti-realism is coupled with atheism and knowledge of science. The more a field is taught at Catholic or other religious colleges (medieval philosophy, bread-and-butter courses like epistemology and logic), the more moral realism; philosophers of science go the other way. Philosophers of religion are 87% moral realist, while philosophers of biology are 55% anti-realist.

In general, only 61% of respondents "accept" rather than lean towards atheism, and a quarter don't even lean towards atheism. Among meta-ethics specialists, 70% accept atheism, indicating that atheism and subject knowledge both predict moral anti-realism. If we restricted ourselves to the 70% of meta-ethics specialists who also accept atheism, I would bet at odds of at least 3:1 that moral anti-realism comes out on top.

Since the PhilPapers team will be publishing correlations between questions, such a bet should be susceptible to objective adjudication within a reasonable period of time.

A similar pattern shows up for physicalism.

In general, those interquestion correlations should help pinpoint any correct contrarian cluster.

Replies from: Wei_Dai, Toby_Ord
comment by Wei Dai (Wei_Dai) · 2010-02-01T00:27:02.454Z · LW(p) · GW(p)

In general, those interquestion correlations should help pinpoint any correct contrarian cluster.

This is why I put more weight on Toby's personal position than on the majority expert position. As far as I know, Toby is in the same contrarian cluster as me, yet he seems to give much more weight to moral realism (and presumably not the Yudkowskian kind either) than I do. Like ciphergoth, I wish he would tell us which arguments in favor of realism, or against anti-realism, he finds persuasive.

Replies from: CarlShulman
comment by CarlShulman · 2010-02-01T01:17:37.244Z · LW(p) · GW(p)

It seems that would be more likely if some people would put effort into showing that they want to learn more about moral realism, or would read and present some of the arguments charitably to LW.

comment by Toby_Ord · 2010-01-30T21:53:52.409Z · LW(p) · GW(p)

Thanks for looking that up, Carl -- I didn't know they had the breakdowns. This is the more relevant result for this discussion, but it doesn't change my point much. Unless it was 80% or so in favour of anti-realism, I think holding something like 95% credence in anti-realism is far too high for non-experts.

Replies from: CarlShulman
comment by CarlShulman · 2010-01-30T23:59:53.643Z · LW(p) · GW(p)

Atheism doesn't get 80% support among philosophers, and most philosophers of religion reject it because of a selection effect where few wish to study what they believe to be non-subjects (just as normative and applied ethicists are more likely to reject anti-realism).

Replies from: Vladimir_Nesov, Toby_Ord, ciphergoth
comment by Vladimir_Nesov · 2010-01-31T10:22:03.171Z · LW(p) · GW(p)

Perhaps we shouldn't look for professional consensus on things we accept with almost-certainty, because things that can be correctly accepted with almost-certainty by amateurs will not be professionally studied, except by people who are systematically confused. Instead, we should ask for the non-professional opinion of people who are in a position to know most about the subject, but don't study it professionally.

comment by Toby_Ord · 2010-01-31T18:13:49.818Z · LW(p) · GW(p)

You are correct that it is reasonable to assign high confidence to atheism even if it doesn't have 80% support, but we must be very careful here. Atheism is presumably the strongest example of such a claim here on Less Wrong (i.e. one for which you can tell a great story about why so many intelligent people would disagree, etc., and so hold a high confidence in the face of disagreement). However, this does not mean that we can say that any other given view is just like atheism in this respect and thus hold beliefs in the face of expert disagreement; that would be far too convenient.

Replies from: CarlShulman, komponisto
comment by CarlShulman · 2010-01-31T18:23:23.038Z · LW(p) · GW(p)

Strong agreement about not overgeneralizing. It does appear, however, that libertarianism about free will, non-physicalism about the mind, and a number of sorts of moral realism form a cluster, sharing the feature of reifying certain concepts in our cognitive algorithms even when they can be 'explained away.' Maybe we can discuss this tomorrow night.

comment by komponisto · 2010-01-31T18:38:09.269Z · LW(p) · GW(p)

However, this does not mean that we can say that any other given view is just like atheism in this respect and thus hold beliefs in the face of expert disagreement; that would be far too convenient.

Of course not; the substance of one's reasons for disagreeing matters greatly. In this case, I suspect there's probably a significant amount of correlation/non-independence between the reasons for believing atheism and believing something like moral non-realism.

One thing we should take away from cases like atheism is that surveys probably shouldn't be interpreted naively, but rather as somewhat noisy information. I think my own heuristic (on binary questions where I already have a strong opinion) is basically to look at which side of 50% my position falls on; if the majority agrees with me (or, say, the average confidence in my position is over 50%), I tend to regard that as (more) evidence in my favor, with the strength increasing as the percentage increases.

(This, I think, would be part of how I would answer Yvain.)
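
For concreteness, here is one way a heuristic like this could be cashed out numerically. The likelihood model below is an assumption made purely for illustration, not something stated in the thread:

```python
# A toy sketch of the "survey as noisy evidence" heuristic. The model is my own
# illustration: the whole survey is treated as a single noisy observation whose
# majority fraction f maps to an assumed likelihood ratio of f : (1 - f), rather
# than treating each respondent as an independent witness (which would make even
# a 56/44 split overwhelming evidence).

def update_on_survey(prior: float, fraction_agreeing: float) -> float:
    """Posterior credence in a position, given the fraction of experts who share it."""
    prior_odds = prior / (1 - prior)
    likelihood_ratio = fraction_agreeing / (1 - fraction_agreeing)  # assumed mapping
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Example: start 90% confident in anti-realism, then observe that only 27.7% of
# surveyed philosophers lean that way.
print(update_on_survey(0.90, 0.277))  # ~0.78 -- nudged down, but not overturned
```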

comment by Paul Crowley (ciphergoth) · 2010-01-31T11:14:55.394Z · LW(p) · GW(p)

I think the arguments you're developing here go a long way towards answering Toby's point, but what safeguards can we use to ensure we can't use it as a generalized anti-expert defence?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-31T11:58:35.827Z · LW(p) · GW(p)

The prerequisite for this heuristic is coming to a conclusion with near-certainty on an amateur level. The safeguard has to be a general ability to not get that much unjustified overconfidence.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-31T12:09:33.812Z · LW(p) · GW(p)

Are you proposing a safeguard here or setting out what the safeguard has to achieve?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-31T12:12:34.842Z · LW(p) · GW(p)

I'm pointing out that there is already a generally applicable enough set of safeguards that covers this case in particular, adequate or not. That is, this heuristic doesn't automatically lead us astray.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-31T12:55:26.268Z · LW(p) · GW(p)

I don't think I can understand you properly; it reads like you're saying that we can be confident in rejecting expert advice if we've already reached a contrary position with high confidence. That doesn't sound Bayesian. I suspect the error is mine but I'd appreciate your help in finding and fixing it!

Replies from: CarlShulman, Vladimir_Nesov
comment by CarlShulman · 2010-01-31T13:04:52.795Z · LW(p) · GW(p)

EDIT: I [not Vladimir] would say that if we have one position that we can be confident in (atheism), we can use it as an indicator of expert quality, and pay more attention to those experts on other issues (e.g. moral realism as philosophers define it).

And with respect to the selection effect among philosophers of religion, there's overwhelming direct evidence on this in the form of the Catholic Church's push on this front.

Replies from: Vladimir_Nesov, ciphergoth
comment by Vladimir_Nesov · 2010-01-31T13:28:10.811Z · LW(p) · GW(p)

Re: correction:

I [not Vladimir] would say

I would say so too, though I wasn't saying that here. It is the mechanism through which we can reject expert opinion, but also as applied to the very claim that is being contested, not just the other slam-dunk claims.

comment by Paul Crowley (ciphergoth) · 2010-01-31T13:59:42.991Z · LW(p) · GW(p)

Only where there's a relationship, of course. We would be unwise to reject medical expertise from a body where atheists were few, unless religion impinged on that advice, e.g. abortion or cryonics. Here a relationship with religion is clear.

Replies from: CarlShulman
comment by CarlShulman · 2010-01-31T15:14:34.268Z · LW(p) · GW(p)

I would say that if on some matter of medical controversy atheist doctors and medical academics tended to come out one way, while the median opinion came out the other way, we should go with the atheist medical opinion, ceteris paribus. Atheism is a proxy for intelligence and scientific thinking, a finding which has a mountain of evidence in its favor.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-31T19:52:34.764Z · LW(p) · GW(p)

Definitely if the majority opinion among atheist experts differed from the majority opinion among all experts, I'd go for the former, but if say the majority of doctors studying a disease were Catholic for simple geographic reasons, I'd still defer to their expertise.

comment by Vladimir_Nesov · 2010-01-31T13:03:14.474Z · LW(p) · GW(p)

it reads like you're saying that we can be confident in rejecting expert advice if we've already reached a contrary position with high confidence

I agree with this interpretation.

Zack is making basically the same point here.

(This discussion is about the meta-level mechanism for agreement, where you accept a conclusion; experts might well have persuasive arguments that would invert one's confidence.)

Replies from: RobinZ
comment by Zack_M_Davis · 2010-01-30T21:59:39.061Z · LW(p) · GW(p)

Many posts here strongly dismiss [moral realism and simplicity], effectively allocating near-zero probability to them. I want to point out that this is a case of non-experts being very much at odds with expert opinion and being clearly overconfident. [...] For non-experts, I really can't see how one could even get to 50% confidence in anti-realism, much less the kind of 98% confidence that is typically expressed here.

One person's modus ponens is another's modus tollens. You say that professional philosophers' disagreement implies that antirealists shouldn't be so confident, but my confidence in antirealism is such that I am instead forced to downgrade my confidence in professional philosophers. I defer to experts in mathematics and science, where I can at least understand something of what it means for a mathematical or scientific claim to be true. But on my current understanding of the world, moral realism just comes out as nonsense. I know what it means for a computation to yield this-and-such a result, or for a moral claim to be true with respect to such-and-these moral premises that might be held by some agent. But what does it mean for a moral claim to be simply true, full stop? What experiment could you perform to tell, even in principle? If the world looks exactly the same whether murder is intrinsically right or intrinsically wrong, what am I supposed to do besides say that there simply is no fact of the matter, and proceed with my life just as before?

I realize how arrogant it must seem for young, uncredentialled (not even a Bachelor's!) me to conclude that brilliant professional philosophers who have devoted their entire lives to studying this topic are simply confused. But, disturbing as it may be to say ... that's how it really looks.

Replies from: Eliezer_Yudkowsky, MichaelVassar, Technologos, ciphergoth, timtyler
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-30T22:17:43.927Z · LW(p) · GW(p)

But what does it mean for a moral claim to be simply true, full stop?

Well, in my world, it means that the premises are built into saying "moral claim"; that the subject matter of "morality" is the implications of those premises, and that moral claims are true when they make true statements about these implications. If you wanted to talk about the implications of other premises, it wouldn't be the subject matter of what we name "morality". Most possible agents (e.g. under a complexity-based measure of mind design space) will not be interested in this subject matter - they won't care about what is just, fair, freedom-promoting, life-preserving, right, etc.

This doesn't contradict what you say, but it's a reason why someone who believes exactly everything you do might call themselves a moral realist.

In my view, people who look at this state of affairs and say "There is no morality" are advocating that the subject matter of morality is a sort of extradimensional ontologically basic agent-compelling-ness, and that, having discovered this hypothesized transcendental stuff to be nonexistent, we have discovered that there is no morality. In contrast, since this transcendental stuff is not only nonexistent but also poorly specified and self-contradictory, I think it was a huge mistake to claim that it was the subject matter of morality in the first place, that we were talking about some mysterious ineffable confused stuff when we were asking what is right. Instead I take the subject matter of morality to be what is fair, just, freedom-promoting, life-preserving, happiness-creating, etcetera (and what that starting set of values would become in the limit of better knowledge and better reflection). So moral claims can be true, and it all adds up to normality in a rather mundane way... which is probably just what we ought to expect to see when we're done.

Replies from: Zack_M_Davis, Wei_Dai, MichaelVassar, wedrifid
comment by Zack_M_Davis · 2010-01-30T23:00:18.903Z · LW(p) · GW(p)

Yes, but I think that my way of talking about things (agents have preferences, some of which are of a type we call moral, but there is no objective morality) is more useful than your way of talking about things (defining moral as a predicate referring to a large set of preferences), because your formulation (deliberately?) makes it difficult to talk about humans with different moral preferences, which possibility you don't seem to take very seriously, whereas I think it very likely.

comment by Wei Dai (Wei_Dai) · 2010-01-31T02:20:55.682Z · LW(p) · GW(p)

Well, in my world, it means that the premises are built into saying "moral claim"; that the subject matter of "morality" is the implications of those premises, and that moral claims are true when they make true statements about these implications.

So, according to this view, moral uncertainty is just a subset of logical uncertainty, where we restrict our attention to the implication of a fixed set of moral premises. But why is it that I feel uncertain about which premises I should accept? I bet that when most people talk about moral realism and moral uncertainty, that is what they're talking about.

(and what that starting set of values would become in the limit of better knowledge and better reflection)

Why and how do (or should) one's moral premises change as one gains knowledge and the ability to reflect? (Note that in standard decision theory one's values simply don't change this way.) It seems to me this ought to be the main topic of moral inquiry, instead of being relegated to a parenthetical remark. The subsequent working out of implications seems rather trivial by comparison.

So moral claims can be true, and it all adds up to normality in a rather mundane way... which is probably just what we ought to expect to see when we're done.

Maybe, but we're not there yet.

Replies from: Eliezer_Yudkowsky, Vladimir_Nesov
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-31T03:07:01.218Z · LW(p) · GW(p)

But why is it that I feel uncertain about which premises I should accept?

You've got meta-moral criteria for judging between possible terms in your utility function, a reconciliation process for conflicting terms, and other phenomena which are very interesting and which I do wish someone would study in more detail; but so far as metaethics goes, it would tend to map onto a computation whose uncertain output is your utility function. Just more logical uncertainty.

How can I put it? The differences here are probably very important to FAI designers and object-level moral philosophers, but I'm not sure they're metaethically interesting... or they're metaethically interesting, but they don't make you confused about what sort of stuff morality could possibly be made out of. Moral uncertainty is still made out of a naturalistic mixture of physical uncertainty and logical uncertainty.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-01-31T07:59:36.430Z · LW(p) · GW(p)

Suppose there's an UFAI loose on the Internet that's not yet very powerful. In order to gain more power, it wants me to change my moral premises (so I'll help it later), and to do that, it places a story on the web for me to find. I read the story, and it "inspires" me to change my values in the direction that the UFAI prefers. In your view, how do we say that this is bad, if this is just what my meta-moral computation did?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-31T08:06:40.854Z · LW(p) · GW(p)

If the UFAI convinced you of anything that wasn't true during the process - outright lies about reality or math - or biased sampling of reality producing a biased mental image, like a story that only depicts one possibility where other possibilities are more probable - then we have a simple and direct critique.

If the UFAI never deceived you in the course of telling the story, but simple measures over the space of possible moral arguments you could hear and moralities you subsequently develop, produce a spread of extrapolated volitions "almost all" of whom think that the UFAI-inspired-you has turned into something alien and unvaluable - if it flew through a persuasive keyhole to produce a very noncentral future version of you who is disvalued by central clusters of you - then it's the sort of thing a Coherent Extrapolated Volition would try to stop.

See also #1 on the list of New Humane Rights: "You have the right not to have the spread in your volition optimized away by an external decision process acting on unshared moral premises."

Replies from: PeerInfinity, Wei_Dai
comment by PeerInfinity · 2010-01-31T17:09:46.947Z · LW(p) · GW(p)

New Humane Rights:

You have the right not to have the spread in your volition optimized away by an external decision process acting on unshared moral premises.

You have the right to a system of moral dynamics complicated enough that you can only work it out by discussing it with other people who share most of it.

You have the right to be created by a creator acting under what that creator regards as a high purpose.

You have the right to exist predominantly in regions where you are having fun.

You have the right to be noticeably unique within a local world.

You have the right to an angel. If you do not know how to build an angel, one will be appointed for you.

You have the right to exist within a linearly unfolding time in which your subjective future coincides with your decision-theoretical future.

You have the right to remain cryptic.

-- Eliezer Yudkowsky

(originally posted sometime around 2005, probably earlier)

comment by Wei Dai (Wei_Dai) · 2010-01-31T09:44:33.600Z · LW(p) · GW(p)

What about the least convenient world where human meta-moral computation doesn't have the coherence that you assume? If you found yourself living in such a world, would you give up and say no meta-ethics is possible, or would you keep looking for one? If it's the latter, and assuming you find it, perhaps it can be used in the "convenient" worlds as well?

To put it another way, it doesn't seem right to me that the validity of one's meta-ethics should depend on a contingent fact like that. Although perhaps instead of just complaining about it, I should try to think of some way to remove the dependency...

(We also disagree about the likelihood that the coherence assumption holds, but I think we went over that before, so I'm skipping it in the interest of avoiding repetition.)

Replies from: Eliezer_Yudkowsky, Roko
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-31T17:33:57.393Z · LW(p) · GW(p)

I think this is about metamorals not metaethics - yes, I'm merely defining terms here, but I consider "What is moral?" and "What is morality made of?" to be problems that invoke noticeably different issues. We already know, at this point, what morality is made of; it's a computation. Which computation? That's a different sort of question and I don't see a difficulty in having my answer depend on contingent facts I haven't learned.

In response to your question: yes, if I had given a definition of moral progress where it turned out empirically that there was no coherence in the direction in which I was trying to point and the past had been a random walk, then I should reconsider my attempt to describe those changes as "progress".

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-01-31T21:59:13.335Z · LW(p) · GW(p)

Which computation? That's a different sort of question and I don't see a difficulty in having my answer depend on contingent facts I haven't learned.

How do you cash "which computation?" out to logical+physical uncertainty? Do you have in mind some well-specified metamoral computation that would output the answer?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-31T22:44:45.681Z · LW(p) · GW(p)

I think you just asked me how to write an FAI. So long as I know that it's made out of logical+physical uncertainty, though, I'm not confused in the same way that I was confused in, say, 1998.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-01-31T22:57:54.678Z · LW(p) · GW(p)

"Well-specified" may have been too strong a term, then; I meant to include something like CEV as described in 2004.

Is there an infinite regress of not knowing how to compute morality, or how to compute (how to compute morality), or how to compute (how to compute (...)), that you need to resolve; do you currently think you have some idea of how it bottoms out; or is there a third alternative that I should be seeing?

comment by Roko · 2010-01-31T11:50:56.785Z · LW(p) · GW(p)

it doesn't seem right to me that the validity of one's meta-ethics should depend on a contingent fact like that

I think it is a powerful secret of philosophy and AI design that all useful philosophy depends upon the philosopher(s) observing contingent facts from their sensory input stream. Philosophy can be thought of as an ultra high level machine learning technique that records the highest-level regularities of our input/output streams. And the reason I said that this is a powerful AI design principle is that you realize that your AI can do good philosophy by looking for such regularities.

comment by Vladimir_Nesov · 2010-01-31T11:11:06.917Z · LW(p) · GW(p)

But why is it that I feel uncertain about which premises I should accept?

Think of it as a foundational struggle: you've got non-rigorous ideas about what is morally true/right, and you are searching for a way to build a foundation such that any right idea will follow from that foundation deductively. Arguably, this task is impossible within the human mind. A better human-level approach would be structural, where you recognize certain (premise) patterns in reliable moral ideas, and learn heuristics that allow you to conclude other patterns wherever you find the premise patterns. This constitutes ordinary moral progress, when fixed in culture.

comment by MichaelVassar · 2010-01-31T10:40:45.539Z · LW(p) · GW(p)

I would agree with the above, but I would also substitute 'god', 'fairies', 'chi' and 'UFO abductions', among other things, in place of 'morality'.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-31T17:48:16.906Z · LW(p) · GW(p)

In cases like that, I am perfectly willing to say that we have discovered that the subject matter of "fairies" is a coherent, well-formed concept that turns out to have an empty referent. The closet is there, we opened it up and looked, and there was nothing inside. I know what the world ought to look like if there were fairies, or alternatively no fairies, and the world looks like it has no fairies.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-02-01T06:46:32.573Z · LW(p) · GW(p)

I think that a very large fraction of the time, when a possibility appears to be coherent and well formed, it may turn out not to be upon more careful examination. I would see the subject matter of "fairies" as "that which causes us to talk about fairies", the subject matter of "dogs" as "that which causes us to talk about dogs", and the subject matter of "morality" as "that which causes us to talk about morality". All three are interesting.

comment by wedrifid · 2010-01-31T03:49:10.766Z · LW(p) · GW(p)

This is a theme that crops up fairly frequently as a matter of semantic confusion, and is a confusion that is difficult to resolve trivially due to the inferential distance to the actual abstract concepts. I haven't seen this position explained so coherently in one place before. Particularly the line:

I think it was a huge mistake to claim that it was the subject matter of morality in the first place, that we were talking about some mysterious ineffable confused stuff when we were asking what is right.

... and the necessary context. I would find it useful to have this as a top level post to link to. Even if, as you have just suggested to JamesAndrix, it is just a copy and paste job. It'll save searching through comments to find a permalink if nothing else.

Replies from: matt
comment by matt · 2010-02-01T22:02:32.974Z · LW(p) · GW(p)

Copy it to the wiki yourself.

Replies from: wedrifid
comment by wedrifid · 2010-02-02T02:50:20.116Z · LW(p) · GW(p)

What name?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-02-02T18:21:50.648Z · LW(p) · GW(p)

Such things should go through a top-level post first; original content doesn't work well for the wiki.

comment by MichaelVassar · 2010-01-31T10:50:55.082Z · LW(p) · GW(p)

Doctors or medicine, investors or analysis of public information, scientists or science, philosophers or philosophy... maybe it's the process of credentialing that we should be downgrading our credence in. Really, why should the prior for credentials being a very significant form of evidence ever have been very high?

Replies from: CarlShulman
comment by CarlShulman · 2010-01-31T12:54:10.888Z · LW(p) · GW(p)

The PhilPapers survey is for the top 99 departments. Things do get better as you go up. Among hard scientists, elite schools are more atheist, and the only almost entirely atheist groups are super-elite, like the National Academy of Sciences/Royal Society.

comment by Technologos · 2010-01-31T20:04:50.008Z · LW(p) · GW(p)

I realize how arrogant it must seem for young, uncredentialled (not even a Bachelor's!) me to conclude that brilliant professional philosophers who have devoted their entire lives to studying this topic are simply confused. But, disturbing as it may be to say ... that's how it really looks.

Perhaps the fact that they have devoted their lives to a topic suggests that they have a vested interest in making it appear not to be nonsense. Cognitive dissonance can be tricky even for the pros.

comment by Paul Crowley (ciphergoth) · 2010-01-30T22:05:48.470Z · LW(p) · GW(p)

Maybe they mean something different by it than we're imagining?

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2010-01-30T22:08:37.632Z · LW(p) · GW(p)

Quite possible. But in that case I would say that we're just talking about things in different ways, and not actually disagreeing on anything substantive.

comment by timtyler · 2010-01-31T21:23:14.504Z · LW(p) · GW(p)

Say we did a survey of 1000 independent advanced civilizations - and found they all broadly agreed on some moral proposition X.

That's the kind of evidence that I think would support the idea of morality inherent in the natural world.

comment by Roko · 2010-01-30T20:15:08.664Z · LW(p) · GW(p)

Toby, I spent a while looking into the meta-ethical debates about realism. When I thought moral realism was a likely option on the table, I meant:

Strong Moral Realism: All (or perhaps just almost all) beings, human, alien or AI, when given sufficient computing power and the ability to learn science and get an accurate map-territory distinction, will agree on what physical state the universe ought to be transformed into, and therefore they will assist you in transforming it into this state.

But modern philosophers who call themselves "realists" don't mean anything nearly this strong. They mean that there are moral "facts". But what use is it if the paperclipper agrees that it is a "moral fact" that human rights ought to be respected, if it then goes on to say it has no desire to act according to the prescription of moral facts, and moral facts can't somehow revoke it?

The force of "scientific facts" is that they constrain the world. If an alien wants to get from Andromeda to here, it has to take at least 2.5 million years; the physical fact of the finite speed of light literally stops the alien from getting here sooner, whether it likes it or not.

The 56.3/27.7% split on PhilPapers seems to me to be an argument about whether you should be allowed to attach the word "fact" to your preferences, kind of as a shiny badge of merit, without actually disagreeing on any physical prediction about the world. The debate between weak moral realists and antirealists sounds like the debate where two people ask "if a tree falls in the forest, does it really make a sound?" - they're not arguing about anything substantive.

So, I ask, how many philosophers are strong moral realists, in the sense I defined?

EDIT: After seeing Carl's comment, it seems likely to me that there probably are a bunch of theists who would, in fact, support the strong moral realism position; but they're clowns, so who cares.

Replies from: RobinHanson, Vladimir_Nesov, Toby_Ord, timtyler
comment by RobinHanson · 2010-01-30T21:38:30.980Z · LW(p) · GW(p)

I strongly agree with Roko that something like his strong version is the interesting version. What matters is what range of creatures will come to agree on outcomes; it matters much less what range of creatures think their desires are "right" in some absolute sense, if they don't think that will eventually be reflected in agreement.

Replies from: timtyler
comment by timtyler · 2010-01-31T19:28:49.867Z · LW(p) · GW(p)

Roko's question seems engineered to be wrong to me.

If this is what people think moral realism means - or should mean - no wonder they disagree with it.

comment by Vladimir_Nesov · 2010-01-31T08:44:52.064Z · LW(p) · GW(p)

The force of "scientific facts" is that they constrain the world.

In the context of this comment, the goal of FAI can be said to be to constrain the world by "moral facts", just like laws of physics constrain the world by "physical facts". This is the sense in which I mean "FAI=Physical Laws 2.0".

Replies from: Roko
comment by Roko · 2010-01-31T15:12:01.623Z · LW(p) · GW(p)

Only in a useless way: there is a specific FAI that does the "truly right" thing, but the truthhood of rightness doesn't stop you from having to code the rightness in. Goodness is not discoverably true: if you don't already know exactly what goodness is, you can't find out.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-31T15:22:09.626Z · LW(p) · GW(p)

I'm describing the sense of a post-FAI world.

Replies from: Roko
comment by Roko · 2010-01-31T16:41:54.172Z · LW(p) · GW(p)

hmmm. That is interesting. Well, let us define the collection W_i of worlds run by superintelligences with the subscript i ranging over goals. No matter what i is, those worlds are going to look, to any agents in them, like worlds with "moral truths".

However, any agent that learned the real physics of such a world would see that the goodness is written into the initial conditions, not the laws.

comment by Toby_Ord · 2010-01-30T22:15:09.030Z · LW(p) · GW(p)

Roko, you make a good point that it can be quite murky just what realism and anti-realism mean (in ethics or in anything else). However, I don't agree with what you write after that. Your Strong Moral Realism is a claim that is outside the domain of philosophy, as it is an empirical claim in the domain of exo-biology or exo-sociology or something. No matter what the truth of a meta-ethical claim, smart entities might refuse to believe it (the same goes for other philosophical claims or mathematical claims).

Pick your favourite philosophical claim. I'm sure there are very smart possible entities that don't believe this and very smart ones that do. There are probably also very smart entities without the concepts needed to consider it.

I understand why you introduced Strong Moral Realism: you want to be able to see why the truth of realism would matter and so you came up with truth conditions. However, reducing a philosophical claim to an empirical one never quite captures it.

For what it's worth, I think that the empirical claim Strong Moral Realism is false, but I wouldn't be surprised if there was considerable agreement among radically different entities on how to transform the world.

Replies from: Roko, Roko, CarlShulman
comment by Roko · 2010-01-31T16:11:21.872Z · LW(p) · GW(p)

Pick your favourite philosophical claim. I'm sure there are very smart possible entities that don't believe this and very smart ones that do

If there's a philosophical claim that intelligent agents across the universe wouldn't display massive agreement on, then I don't really think it is worth its salt. I think that this principle can be used to eliminate a lot of nonsense from philosophy.

Which of anti-realism or weak realism is true seems to be a question we can eliminate. Whether strong realism is true or not seems substantive, because it matters to our policy which is true.

comment by Roko · 2010-01-31T16:17:02.167Z · LW(p) · GW(p)

However, reducing a philosophical claim to an empirical one never quite captures it.

There are clearly some examples where there can be interesting things to say that aren't really empirical, e.g. decision theory, or the mystery of subjective experience. But I think that this isn't one of them.

Suffice it to say I can't think of anything that makes the debate between weak realism and antirealism at all interesting or worthy of attention. Certainly, Friendly AI theorists ought not care about the difference, because the empirical claims about what an AI system will do are identical. Once the illusions and fallacies surrounding rationalist moral psychology have been debunked, proponents of AI motivation methods other than FAI also ought not to care about the weak realism vs. anti-realism pseudo-question.

comment by CarlShulman · 2010-01-30T23:39:54.306Z · LW(p) · GW(p)

I'm having trouble reconciling this with the beginning of your first comment:

These views go together well: if value is not fundamental, but dependent on characteristics of humans, then it can derive complexity from this and not suffer due to Occam's Razor.

comment by timtyler · 2010-01-30T20:42:27.330Z · LW(p) · GW(p)

Not me.

An "optimal organism" may be a possibility, though. Assuming god's utility function, it is theoretically possible that a unique optimal agent might exist. Whether it would be found before the universal heat death is another issue, though.

From my naturalist POV, you need to show me a paperclipper before it is convincing evidence about the real world. Paperclippers are theoretical possibilities, but who would build one, why, and how long would it last in the wild?

...and if the "paperclips" part is a metaphor, then which preferred ordered atomic states count, and which don't? Is a cockroach a "paperclipper" - because it acts as though it wants to fill the universe with its DNA?

Replies from: Zack_M_Davis, Roko
comment by Zack_M_Davis · 2010-01-30T22:25:40.118Z · LW(p) · GW(p)

Yes, paperclips are a metaphor. No one expects a literal paperclip maximizer; the point is to illustrate unFriendly AI as a really powerful system with little or no moral worth as humans would understand moral worth. A non-conscious superintelligent cockroach-type thing that fills the universe with its DNA or equivalent would indeed qualify.

Replies from: timtyler
comment by timtyler · 2010-01-30T22:52:12.385Z · LW(p) · GW(p)

In that case, I don't think a division of superintelligences into paperclippers and non-paperclippers "carves nature at the joints" very well. It appears to be a human-centric classification scheme.

I've proposed another way of classifying superintelligence goal systems - according to whether or not they are "handicapped".

Healthy superintelligences execute god's utility function - i.e. they don't value anything apart from their genes.

Handicapped superintelligences value other things - paperclips, gold atoms, whatever. Genes are valued too - but they may only have proximate value.

According to this classification scheme, the cockroach and paperclipper would be in different categories.

"Handicapped" superintelligences value things besides their genes. They typically try and leave something behind. Most other agents keep dissipating negentropy until they have flattened energy gradients as much as they can - the way most living ecosystems do.

http://alife.co.uk/essays/handicapped_superintelligence/

Replies from: Zack_M_Davis, AngryParsley, Vladimir_Nesov, Peter_de_Blanc
comment by Zack_M_Davis · 2010-01-30T23:07:06.142Z · LW(p) · GW(p)

It appears to be a human-centric classification scheme.

Yes, that's the point! We're humans, and so for some purposes we find it useful to categorize superintelligences into those that do and don't do what we want, even if it isn't a natural categorization from a more objective standpoint.

Replies from: timtyler
comment by timtyler · 2010-01-30T23:57:38.775Z · LW(p) · GW(p)

Right - well, fine. One issue is that the classification into paperclippers and non-paperclippers was not clear to me until you clarified it. Another poster has "clarified" things the other way in response to the same comment. So, as a classification scheme, IMO the idea seems rather vague and unclear.

The next issue is: how close does an agent have to be to what you (we?) want before it is a non-paperclipper?

IMO, the idea of a metaphorical unfriendly paperclipper appears to need pinning down before it is of very much use as a superintelligence classification scheme.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2010-01-31T00:08:44.898Z · LW(p) · GW(p)

Another poster has "clarified" things the other way in response to the same comment.

I'm pretty confident Roko agrees with me and that this is just a communication error.

So, as a classification scheme, IMO the idea seems rather vague and unclear.

I'm given to understand that the classification scheme is Friendly versus unFriendly, with paperclip maximizer being an illustrative (albeit not representative) example of the latter. I agree that more rigor (and perhaps clearer terminology) is in order.

Replies from: timtyler
comment by timtyler · 2010-01-31T00:43:57.910Z · LW(p) · GW(p)

Machine intelligences seem likely to vary in their desirability to humans.

Friendly / unFriendly seems rather binary, maybe a "desirability" scale would help.

Alas, this seems to be drifting away from the topic.

Replies from: gregconen
comment by gregconen · 2010-01-31T01:34:10.391Z · LW(p) · GW(p)

Machine intelligences seem likely to vary in their desirability to humans.

Technically true. However, most naive superintelligence designs will simply kill all humans. You've accomplished quite a lot to even get to a failed utopia, much less deciding whether you want Prime Intellect or Coherent Extrapolated Volition.

It's also unlikely you'll accidentally do something significantly worse than killing all humans, for the same reasons. A superintelligent sadist is just as hard as a utopia.

comment by AngryParsley · 2010-01-31T10:13:03.548Z · LW(p) · GW(p)

I read the essay you linked to. I really don't know where to start.

Now, we are not currently facing threats from any alien races. However, if we do so in the future, then we probably do not want to have handicapped our entire civilisation.

So we should guard against potential threats from non-human intelligent life by building a non-human superintelligence that doesn't care about humans?

While dependencies on humans may have the effect of postponing the demise of our species, they also have considerable potential to hamper and slow evolutionary progress.

Postpone? I thought the point of friendly AI was to preserve human values for as long as physically possible. "Evolutionary progress?" Evolution is stupid and doesn't care about the individual organisms. Evolution causes pointless suffering and death. It produces stupid designs. As Michael Vassar once said: think of all the simple things that evolution didn't invent. The wheel. The bow and arrow. The axial-flow pump. Evolution had billions of years creating and destroying organisms and it couldn't invent stuff built by cave men. Is it OK in your book that people die of antibiotic resistant diseases? MRSA is a result of evolutionary progress.

For example humans have poor space-travel potential, and any tendency to keep humans around will be associated with remaining stuck on the home world.

Who said humans have to live on planets or breathe oxygen or run on neurons? Why do you think a superintelligence will have problems dealing with asteroids when humans today are researching ways to deflect them?

I think your main problem is that you're valuing the wrong thing. You practically worship evolution while neglecting important things like people, animals, or anything that can suffer. Also, I think you fail to notice the huge first-mover advantage of any superintelligence, even one as "handicapped" as a friendly AI.

Finally, I know the appearance of the arguer doesn't change the validity of the argument, but I feel compelled to tell you this: You would look much better with a haircut, a shave, and some different glasses.

Replies from: timtyler
comment by timtyler · 2010-01-31T11:16:56.949Z · LW(p) · GW(p)

Briefly:

I don't advocate building machines that are indifferent to humans. For instance, I think machine builders would be well advised to (and probably mostly will) construct devices that obey the law - which includes all kinds of provisions for preventing harm to humans.

Evolution did produce the wheel and the bow and arrow. If you think otherwise, please state clearly what definition of the term "evolution" you are using.

Regarding space travel - I was talking about wetware humans.

Re: "Why do you think a superintelligence will have problems dealing with asteroids when humans today are researching ways to deflect them?"

...that is a projection on your part - not something I said.

Re: "Also, I think you fail to notice the huge first-mover advantage of any superintelligence"

To quote mine myself:

"IMHO, it is indeed possible that the first AI will effectively take over the world. I.T. is an environment with dramatic first-mover advantages. It is often a winner-takes-all market – and AI seems likely to exhibit such effects in spades."

"Google was not the first search engine, Microsoft was not the first OS maker - and Diffie–Hellman didn't invent public key crypto.

Being first does not necessarily make players uncatchable - and there's a selection process at work in the mean time, that weeds out certain classes of failures."

I have thought and written about this issue quite a bit - and my position seems a bit more nuanced and realistic than the position you are saying you think I should have.

comment by Vladimir_Nesov · 2010-01-31T09:19:09.557Z · LW(p) · GW(p)

Healthy superintelligences execute god's utility function - i.e. they don't value anything apart from their genes.

Superintelligences don't have genes.

Replies from: wedrifid, timtyler
comment by wedrifid · 2010-01-31T09:41:06.704Z · LW(p) · GW(p)

Well, most superintelligences don't have genes.

Replies from: timtyler
comment by timtyler · 2010-01-31T10:50:04.231Z · LW(p) · GW(p)

They do if you use an information-theory definition of the term - like the ones on:

http://alife.co.uk/essays/informational_genetics/

Replies from: wedrifid
comment by wedrifid · 2010-01-31T12:38:43.767Z · LW(p) · GW(p)

I disagree even with your interpretation of that document, but that is not the point emphasized in the grandparent. I acknowledge that while a superintelligence need not have genes, it is in fact possible to construct a superintelligence that does rely significantly on "small sections of heritable information", including the possibility of a superintelligence that relies on genes in actual DNA. Hence the slight weakening of the claim.

comment by timtyler · 2010-01-31T10:49:25.645Z · LW(p) · GW(p)

What follows is just a copy-and-paste of another reply, but:

By "gene" I mean:

"Small chunk of heritable information"

http://alife.co.uk/essays/informational_genetics/

Any sufficiently long-term persistent structure persists via a copying process - and so has "genes" in this sense.

comment by Peter_de_Blanc · 2010-01-30T22:57:29.143Z · LW(p) · GW(p)

I think your term "God's utility function" is a bit confusing - as if it's just one utility function. If you value your genes, and I value my genes, and our genes are different, then we have different utility functions.

Also, the vast majority of possible minds don't have genes.

Replies from: timtyler, timtyler
comment by timtyler · 2010-01-30T23:49:06.814Z · LW(p) · GW(p)

Maybe. Though if you look at:

http://originoflife.net/gods_utility_function/

...then first of all the term is borrowed/inherited from:

http://en.wikipedia.org/wiki/God%27s_utility_function

...and also, I do mean it in a broader sense where (hopefully) it makes a bit more sense.

The concept is also referred to as "Goal system zero" - which I don't like much.

My latest name for the idea is "Shiva's goals" / "Shiva's values" - a reference to the Hindu god of destruction, creation and transformation.

comment by timtyler · 2010-01-30T23:42:34.711Z · LW(p) · GW(p)

By "gene" I mean:

"Small chunk of heritable information"

http://alife.co.uk/essays/informational_genetics/

Any sufficiently long-term persistent structure persists via a copying process - and so has "genes" in this sense.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-31T10:06:55.467Z · LW(p) · GW(p)

By "gene" I mean: "Small chunk of heritable information"

Any sufficiently long-term persistent structure persists via a copying process - and so has "genes" in this sense.

That is what we mean by preference. Except that preference, being a specification of a computation, has a lot of forms of expression, so it doesn't "persist" by a copying process; it "persists" as a nontrivial computational process.

A superintelligence that persists in copying a given piece of information is running a preference (computational process) that specifies copying as the preferable form of expression, over all the other things it could be doing.

Replies from: timtyler
comment by timtyler · 2010-01-31T10:58:30.904Z · LW(p) · GW(p)

No, no! "Genes" is just intended to refer to any heritable information. Preferences are something else entirely. Agents can have preferences which aren't inherited - and not everything that gets inherited is a preference.

Any information that persists over long periods of time persists via copying.

"Copying" just means there's Shannon mutual information between the source and the destination which originated in the source. Complex computations are absolutely included - provided that they share this property.

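For reference, the Shannon mutual information between a source X and a destination Y is

\[
I(X;Y) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\, p(y)} = H(X) - H(X \mid Y),
\]

which is positive exactly when observing the destination reduces uncertainty about the source.
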
Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-31T11:16:19.475Z · LW(p) · GW(p)

Any information that persists over long periods of time persists via copying.

"Copying" just means there's Shannon mutual information between the source and the destination which originated in the source. Complex computations are absolutely included - provided that they share this property.

Then preference still qualifies. This holds as a factual claim provided we are talking about reflectively consistent agents (i.e. those that succeed in not losing their preference), and as a normative claim regardless.

I would appreciate it if you avoid redefining words into highly qualified meanings, like "gene" for "anything that gets copied", and then "copying" for "any computation process that preserves mutual information".

Replies from: timtyler
comment by timtyler · 2010-01-31T11:35:35.701Z · LW(p) · GW(p)

Re: Then preference still qualifies. This holds as a factual claim provided [bunch of conditions]

Yes, there are some circumstances under which preferences are coded genetically and reliably inherited. However, your claim was stronger. You said that what I meant by genes was what "we" would call preferences. That implies that genes are preferences and preferences are genes.

You have just argued that a subset of preferences can be genetically coded - and I would agree with that. However, you have yet to argue that everything that is inherited is a preference.

I think you are barking up the wrong tree here - the concepts of preferences and genes are just too different. For example, clippy likes paperclips, in addition to the propagation of paperclip-construction instructions. The physical paperclips are best seen as phenotype - not genotype.

Re: "I would appreciate it if you avoid redefining words into highly qualified meanings [...]"

I am just saying what I mean - so as to be clear.

If you don't want me to use the words "copy" and "gene" for those concepts - then you are out of luck - unless you have a compelling case to make for better terminology. My choice of words in both cases is pretty carefully considered.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-31T11:44:41.496Z · LW(p) · GW(p)

Re: Then preference still qualifies. This holds as a factual claim provided [bunch of conditions]

Not "bunch of conditions". Reflective consistency is the same concept as "correctly copying preference", if I read your sense of "copying" correctly, and given that preference is not just "thing to be copied", but also plays the appropriate role in decision-making (wording in the grandparent comment improved). And reflectively consistent agents are taken as a natural and desirable (from the point of view of those agents) attractor where all agents tend to end up, so it's not just an arbitrary category of agents.

That implies that genes are preferences and preferences are genes.

But there are many different preferences for different agents, just as there are different genes. Using the word "genes" in the context where both human preference and evolution are salient is misleading, because human genes, even if we take them as corresponding to a certain preference, don't reflect human preference, and are not copied in the same sense human preference is copied. Human genes are exactly the thing that currently persists by vanilla "copying", not by any reversible (mutual information-preserving) process.

If you don't want me to use the words "copy" and "gene" for those concepts - then you are out of luck

Confusing terminology is still bad even if you failed to think up a better alternative.

Replies from: timtyler
comment by timtyler · 2010-01-31T12:06:01.019Z · LW(p) · GW(p)

You appear to be on some kind of different planet to me - and are so far away that I can't easily see where your ideas are coming from.

The idea I was trying to convey was really fairly simple, though:

"Small chunks of heritable information" (a.k.a. "genes") are one thing, and the term "preferences" refers to a different concept.

As an example of a preference that is not inherited, consider the preference of an agent for cats - after being bitten by a dog as a child.

As an example of something that is inherited that is not a preference, consider the old socks that I got from my grandfather after his funeral.

These are evidently different concepts - thus the different terms.

Thanks for your terminology feedback. Alas, I am unmoved. That's the best terminology I have found, and you don't provide an alternative proposal. It is easy to bitch about terminology - but not always so easy to improve on it.

comment by Roko · 2010-01-30T20:46:03.726Z · LW(p) · GW(p)

I meant a literal paperclip maximizing superintelligent AI, so no, a cockroach is not one of those.

Replies from: timtyler
comment by timtyler · 2010-01-30T20:55:25.094Z · LW(p) · GW(p)

Right - well, that seems pretty unlikely. What is the story? A paperclip manufacturer with an ambitious IT department that out-performs every other contender? How come the government doesn't just step on the results?

Is there a story about how humanity drops the reins of civilisation like that that is not extremely contrived?

Replies from: magfrump
comment by magfrump · 2010-01-31T00:30:11.959Z · LW(p) · GW(p)

I am unclear on how this story is contrived. There are vast numbers of businesses with terrible externalities today; this is deeply related to the debate on climate change. Alternately, we have large cutting machines, and those machines don't care whether they are cutting trees or people; if a wood chipper could pick things up but couldn't distinguish which things it was picking up, it would be very dangerous (but still very useful if you have a large space of things you want chipped).

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-30T19:24:22.059Z · LW(p) · GW(p)

I am a moral cognitivist. Statements like "ceteris paribus, happiness is a good thing" have truth-values. Such moral statements simply are not compelling or even interesting enough to compute the truth-value of to the vast majority of agents, even those which maximize coherent utility functions using Bayesian belief updating (that is, rational agents) or approximately rational agents.

AFAICT the closest official term for what I am is "analytic descriptivist", though I believe I can offer a better defense of analytic descriptivism than what I've read so far.

EDIT: Looking up moral naturalism shows that Frank Jackson's analytic descriptivism aka moral functionalism is listed as a form of moral naturalism: http://plato.stanford.edu/entries/naturalism-moral/#JacMorFun

Note similarity to "Joy in the Merely Good".

Replies from: lukeprog
comment by lukeprog · 2012-04-06T23:34:38.341Z · LW(p) · GW(p)

For the interested: A good summary/defense of Jackson's moral functionalism can be found in Jackson (2012), "On ethical naturalism and the philosophy of language."

Now, should we call this a form of "moral realism"? I dunno. That's something I'd prefer to taboo. Even famous error theorist Richard Joyce kinda agrees.

comment by JamesAndrix · 2010-01-30T16:38:40.957Z · LW(p) · GW(p)

From your SEP link on Moral Realism: "It is worth noting that, while moral realists are united in their cognitivism and in their rejection of error theories, they disagree among themselves not only about which moral claims are actually true but about what it is about the world that makes those claims true. "

I think this is good cause for breaking up that 56%. We should not take them as a block merely because (one component of) their conclusions match, if their justifications are conflicting or contradictory. It could still be the case that 90% of expert philosophers reject any given argument for moral realism. (This would be consistent with my view that those arguments are silly.)

I may have noticed this because the post on Logical Rudeness is fresh in my mind.

Replies from: Toby_Ord
comment by Toby_Ord · 2010-01-30T21:48:39.551Z · LW(p) · GW(p)

You are entirely right that the 56% would split up into many subgroups, but I don't really see how this weakens my point: more philosophers support realist positions than anti-realist ones. For what it's worth, the anti-realists are also fragmented in a similar way.

Replies from: JamesAndrix
comment by JamesAndrix · 2010-01-31T00:24:49.529Z · LW(p) · GW(p)

Disagreeing positions don't add up just because they share a feature. On the contrary, if people offer lots of different contradictory reasons for a conclusion (even if each individual has consistent beliefs) it is a sign that they are rationalizing their position.

If 2/3 of experts support proposition G, 1/3 because of reason A while rejecting B, and 1/3 because of reason B while rejecting A, and the remaining 1/3 reject A and B, then the majority reject A, and the majority reject B. G should not be treated as a reasonable majority view.

This should be clear if A is the Koran and B is the Bible.

If we're going to add up expert views, we need to add up what experts consider important about a question, not features of their conclusions.

You shouldn't add up two experts if they would consider each other's arguments irrational. That's ignoring their expertise.
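
A quick tally of the hypothetical split described above (illustrative Python; the group sizes simply encode the 1/3 fractions from the example):

    # Each hypothetical expert reports whether they accept conclusion G
    # and which of the two justifications (A, B) they accept.
    experts = (
        [{"G": True,  "A": True,  "B": False}] * 100    # 1/3: G because of A, rejecting B
        + [{"G": True,  "A": False, "B": True}] * 100   # 1/3: G because of B, rejecting A
        + [{"G": False, "A": False, "B": False}] * 100  # 1/3: reject G, A and B
    )

    n = len(experts)
    for claim in ("G", "A", "B"):
        share = sum(e[claim] for e in experts) / n
        print(f"{claim}: {share:.0%} accept")
    # Output: G is accepted by 67%, yet A and B are each rejected by a
    # two-thirds majority -- the sense in which G lacks a real majority case.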

Replies from: Eliezer_Yudkowsky, Toby_Ord, wedrifid, blogospheroid, MichaelVassar
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-31T03:16:05.857Z · LW(p) · GW(p)

I know it might seem difficult to expand this into a top-level post, but if you just want to post it verbatim, I'd say go for it.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-01-31T10:45:08.454Z · LW(p) · GW(p)

Yes James, I'd also appreciate that. Maybe we should encourage more short top-level posts and comment upgrades to posts. I think that would be great if we could develop a good procedure.

comment by Toby_Ord · 2010-01-31T19:41:03.873Z · LW(p) · GW(p)

This certainly doesn't work in all cases:

There is a hidden object which is either green, red or blue. Three people have conflicting opinions about its colour, based on different pieces of reasoning. If you are the one who believes it is green, you have to add up the opponents who say not-green, despite the fact that there is no single not-green position (think of the symmetry -- otherwise everyone could have too great confidence). The same holds true if these are expert opinions.

The above example is basically as general as possible, so in order for your argument to work it will need to add specifics of some sort.

Also, the Koran/Bible case doesn't work. By symmetry, the Koran readers can say that they don't need to add up the Bible readers and the atheists, since they are heterogeneous, so they can keep their belief in the Koran...

Replies from: JamesAndrix
comment by JamesAndrix · 2010-01-31T23:34:09.386Z · LW(p) · GW(p)

In practice all arguments will share some premises and some conclusions, in messy asymmetrical ways.

If the not-greens share a consistent rationale about why the object cannot be green, then I need to take that into account.

If the red supporter contends that all green and blue objects were lost in the color wars, while the blue supporter contends that all objects are fundamentally blue and besides the color wars never happened, then their opinions roughly cancel each other out. (Barring other reasons for me to view one as more rational than the other.)

I suspect that there are things to be said about Islam that both atheists and Christians would agree on. That's a block that a rational Muslim should take into account. Our disagreeing conclusions about god are secondary.

If I'm going to update my position because 56% of experts agree on something, then I want to know what I'm going to update to.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-02-04T07:55:20.340Z · LW(p) · GW(p)

This discussion continues here.

BTW, I wish there were a way to upgrade a comment into a post and automatically move all the discussions under the new post as well.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-02-05T00:02:32.640Z · LW(p) · GW(p)

BTW, I wish there were a way to upgrade a comment into a post and automatically move all the discussions under the new post as well.

The only reason I can think of to upgrade a comment to a post is to draw attention to it, whether google attention, naturality of external linking, or the attention of the regular readers. In all these cases, it seems to me that it is the duty of the author, who is demanding time from many readers, to spend time summarizing the old discussion and making it easy for new readers to join.

comment by wedrifid · 2010-01-31T03:37:40.675Z · LW(p) · GW(p)

I haven't heard it put that way before. But your explanation makes it seem obvious!

comment by blogospheroid · 2010-01-31T15:43:57.632Z · LW(p) · GW(p)

Ignoring their expertise, but counting only popularity. Moderator, does that mean that Less Wrong's karma system might be modified to take into account why a comment was upvoted?

A valid principle, James, but a bad example which might be contested by those more knowledgeable about the matter.

Islam considers itself the best of the revealed religions, and Jesus is revered as a prophet in Islam.

So, in this case, Christians reject the Koran, but the Muslims do not completely reject the Bible.

I'm not sure what might serve as a better example, though. The multiple possible explanations of the present recession may serve as a better example, in case you want to make this a top-level post.

Replies from: Technologos
comment by Technologos · 2010-01-31T20:21:42.370Z · LW(p) · GW(p)

What you say is true while the Koran and the Bible are referents, but when A and B become "Mohammed is the last prophet, who brought the full truth of God's will" and "Jesus was a literal incarnation of God," (the central beliefs of the religions that hold the respective books sacred) then James' logic holds.

comment by MichaelVassar · 2010-01-31T10:43:26.744Z · LW(p) · GW(p)

This applies very generally when the evidential properties of reference classes are brought up.

comment by TruePath · 2010-02-03T07:51:02.817Z · LW(p) · GW(p)

The right response to moral realism isn't to dispute its truth but simply to observe that you don't understand the concept.

I mean, imagine someone started going around insisting some situations were Heret and others were Grovic, but when asked to explain what made a situation Heret or Grovic he simply shrugged and said they were primitive concepts. But you persist, and after observing his behavior for a period of time you work out some principle that perfectly predicts which category he will assign a given situation to, even counterfactually. But when you present the algorithm to him and ask, "Oh, so is it satisfying this principle that makes one Heret rather than Grovic?" he insists that while your notion will always agree with his notion, that's not what he means. Moreover, he insists that no definition in terms of physical state could capture these concepts.

Confused, you press him, and he says that there are special things which we can't causally interact with that determine Heret or Grovic status. Bracketing your skepticism, you ask him to say what properties these new ontological objects must have. After listing a couple he adds that, most importantly, they can't just be random things with this structure; they also have to be Heret-making or Grovic-making, and that's what distinguishes them from all the other causally inaccessible things out there that might otherwise yield some slightly different class of things as Heret and Grovic.

Frustrated, you curse the guy, saying he hasn't really told you anything, since you didn't know what it meant to be Heret or Grovic in the first place, so you surely don't know what it means to be Heret-making or Grovic-making. The man's reply is simply to shrug and say, "Well, it's a fundamental concept; if you don't understand, I can't explain it to you any more than I could explain the perceptual experience of redness to a man who had never experienced color."

In such a situation the only thing you can do is give up on the notion of Heret and Grovic. Debating about whether to say it's an incoherent notion, a concept you lack the faculties to comprehend, or something else would just waste time with a useless word game. Ultimately you just have to ignore such talk as something that lacks content for you and treat it the same way as you would meaningless gibberish.

The fact that, when moral realists say the same thing about good and evil, they are using the same sounds that we understand to mean something different shouldn't change the situation at all.

comment by Paul Crowley (ciphergoth) · 2010-01-30T11:59:36.415Z · LW(p) · GW(p)

Could you direct us to the best arguments for moral realism, or against anti-realism? Thanks!

Replies from: Toby_Ord, timtyler
comment by Toby_Ord · 2010-01-30T14:35:43.072Z · LW(p) · GW(p)

In metaethics, there are typically very good arguments against all known views, and only relatively weak arguments for each of them. For anything in philosophy, a good first stop is the Stanford Encyclopedia of Philosophy. Here are some articles on the topic at SEP:

I think the best book to read on metaethics is:

Replies from: Wei_Dai, ciphergoth, ciphergoth
comment by Wei Dai (Wei_Dai) · 2010-02-02T03:55:10.909Z · LW(p) · GW(p)

Toby, I read through those SEP articles but couldn't find the good arguments against anti-realism that you mentioned. In contrast, the article on deontology laid out the arguments for and against it very clearly.

Can you please point us more specifically to the arguments that you find persuasive? Maybe just give us some page numbers in the book that you referenced? Most of us don't really have the time to read something like that cover to cover in search of a few nuggets of information.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-02T09:07:01.759Z · LW(p) · GW(p)

Thank you for doing that, and may I second this. I started reading those articles, then after a bit started scanning for the anti-realism articles, and worried after not finding them that I'd not read carefully enough, so I'm glad to have your report on this.

I really am curious to read these arguments, so I hope someone can point us to them.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-02-10T16:56:01.405Z · LW(p) · GW(p)

I managed to find a draft of a book chapter titled In Defence of Moral Realism. I'm still wondering what Toby thinks the best arguments are, but alas he doesn't seem to be following this discussion anymore.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-10T17:12:26.301Z · LW(p) · GW(p)

Thanks! Again, didn't get much from a quick skim, let me know if you find any real meat in there.

The thing that really got my attention wasn't the assertion that there are some arguments in favour of realism, but that there are good arguments specifically against anti-realism.

I know I've spoken of "skimming" twice here. I promise, if Toby Ord were to say to me "this contains good arguments against anti-realism" I would read it carefully.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-02-10T17:39:42.122Z · LW(p) · GW(p)

The thing that really got my attention wasn't the assertion that there are some arguments in favour of realism, but that there are good arguments specifically against anti-realism.

But surely an argument against anti-realism is also an argument for realism? I'm interpreting Toby's comment as saying that there are good arguments for realism in general, but not for any particular realist meta-ethical theory.

Again, didn't get much from a quick skim, let me know if you find any real meat in there.

The author says in the conclusion, "I do not pretend to give any knock-down argument in this chapter for the thesis that objective moral facts or reasons exist, independently of our thoughts and actions." So I think it's mostly a matter of how convincing one finds the argument that he does give.

It seems likely, given that the author is a specialist in and proponent of moral realism, that he would give the best arguments that he knew, so this paper seems like good evidence for what kinds of arguments for realism are currently available.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-10T17:43:40.656Z · LW(p) · GW(p)

It seems likely, given that the author is a specialist in and proponent of moral realism, that he would give the best arguments that he knew, so this paper seems like good evidence for what kinds of arguments for realism are currently available.

Will read carefully on that basis. Thanks.

comment by Paul Crowley (ciphergoth) · 2010-01-30T18:34:32.342Z · LW(p) · GW(p)

Do you have a personal favourite argument against moral anti-realism in there you could point me to?

comment by Paul Crowley (ciphergoth) · 2010-01-30T14:57:25.186Z · LW(p) · GW(p)

Thanks! There were several points in your PhD thesis where I couldn't work out how to square your position with moral anti-realism - I guess I know why now :-)

comment by timtyler · 2010-01-30T13:30:48.882Z · LW(p) · GW(p)

My case was here:

http://lesswrong.com/lw/1m5/savulescu_genetically_enhance_humanity_or_face/1fuv

Basically, morality is a product of evolution - which can be expected to favour some moral values over other ones - just as it favours certain physical structures like eyes and legs.

Things like: "under most circumstances, don't massacre your relatives or yourself" can be reasonably expected to be widespread values in the universe. The idea gives morality a foundation in the natural world.

Replies from: byrnema
comment by byrnema · 2010-01-30T14:54:16.675Z · LW(p) · GW(p)

It is useful that Tim summarizes his position in this context, voted up.

My position, developed with no background in philosophy or meta-ethics whatsoever and thus likely to be error-riddled or misguided, is that I consider it an unsolved problem within physical materialism (specifically, within the context of moral anti-realism) how "meaning" (the meaning of life and/or the value of values) can be a coherent or possible concept.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-31T10:37:01.138Z · LW(p) · GW(p)

Leave humans out of it and try to think about meanings of signals among animals, with an evolutionary perspective.

comment by Wei Dai (Wei_Dai) · 2010-01-31T02:42:22.155Z · LW(p) · GW(p)

I accept this may be a case of the Popularization Bias (speaking for myself). I'd like to see some posts on the arguments against anti-realism...

Replies from: CarlShulman
comment by CarlShulman · 2010-01-31T13:24:13.425Z · LW(p) · GW(p)

Agreed. Perhaps Toby or David Pearce can be persuaded.

Replies from: Toby_Ord
comment by Toby_Ord · 2010-01-31T19:27:58.632Z · LW(p) · GW(p)

I don't think I can be persuaded.

I have many good responses to the comments here, and I suppose I could sketch out some of the main arguments against anti-realism, but there are also many serious demands on my time and sadly this doesn't look like a productive discussion. There seems to be very little real interest in finding out more (with a couple of notable exceptions). Instead the focus is on how to justify what is already believed without finding out anything else about what the opponents are saying (which is particularly alarming given that many commenters are pointing out that they don't understand what the opponents are saying!).

Given all of this, I fear that writing a post would not be a good use of my time.

Replies from: CarlShulman, ciphergoth, DonGeddis
comment by CarlShulman · 2010-01-31T21:23:18.404Z · LW(p) · GW(p)

Alas. Perhaps some Less Wrongers with more time will write and post a hypothetical apostasy. I invite folk to do so.

comment by Paul Crowley (ciphergoth) · 2010-01-31T20:01:18.288Z · LW(p) · GW(p)

many commenters are pointing out that they don't understand what the opponents are saying

This is a little unfair; as soon as you take a deflationary stance on anything, you're saying that the other stance doesn't really have comprehensible content, and it's a mistake to turn that into a general-purpose dismissal of deflationary stances.

There seems to be very little real interest in finding out more (with a couple of notable exceptions). Instead the focus is on how to justify what is already believed without finding out any thing else about what the opponents are saying

If you think that's more true here than it is in other discussion forums, we're doing something very wrong. I understand that you're not able to spend time writing for this audience, but for those of us who do want to find out more about what moral realists are saying, every link you can provide to existing essays is valuable.

comment by DonGeddis · 2010-01-31T20:07:11.344Z · LW(p) · GW(p)

I, for one, am interested in hearing arguments against anti-realism.

If you don't have personal interest in writing up a sketch, that's fine. Might you have some links to other people who have already done so?

Replies from: Zack_M_Davis, CarlShulman
comment by CarlShulman · 2010-01-31T21:04:06.891Z · LW(p) · GW(p)

Toby already linked to the SEP articles on moral realism and anti-realism in another comment.

comment by Paul Crowley (ciphergoth) · 2010-01-30T22:10:34.446Z · LW(p) · GW(p)

This point has given me a lot of pause, so forgive me my many replies. Part of the problem is that even if I were only 60% confident of moral anti-realism, I would still act on it as if I were 100% confident because I don't understand moral realism at all, and my 60% confidence is in the belief that no-one else does either.

comment by whpearson · 2010-01-30T12:02:48.699Z · LW(p) · GW(p)

Can you give pointers to prominent naturalist realists?

comment by mattnewport · 2010-01-30T21:40:58.953Z · LW(p) · GW(p)

My impression of academic philosophers is that their 'expertise' is primarily in knowledge of what other philosophers have said and in the forms of academic philosophical argument. It is not expertise in true facts about the world. In other words, I would defer to their expertise on the technical details of academically accepted definitions of philosophical terms, or on the writings of Kant, much as I would defer to an expert in literary criticism on the details of what opinions other literary critics have expressed. In neither case however do I consider their opinions to be particularly relevant to the pursuit of true facts about the world.

The fact that the survey you link finds 27% of philosophers 'accept or lean towards non-physicalism' increases my confidence in the above thesis.

comment by Stuart_Armstrong · 2010-02-01T12:52:56.292Z · LW(p) · GW(p)

Even for experts in meta-ethics, I can't see how their confidence can get outside the 30%-70% range given the expert disagreement. For non-experts, I really can't see how one could even get to 50% confidence in anti-realism, much less the kind of 98% confidence that is typically expressed here.

It depends on the expertise; for instance, if we're talking about systems of axioms, then mathematicians may be those with the most relevant opinions as to whether one system has preference over others. And the idea that a unique system of moral axioms would have preference over all others makes no mathematical sense. If philosophers were espousing the n-realism position ("there are systems of moral axioms that are more true than others, but there will probably be many such systems, most mutually incompatible"), then I would have a hard time arguing against this.

But, put quite simply, I dismiss the moral realistic position for the moment as the arguments go like this:

  • 1) There are moral truths that have special status; but these are undefined, and it is even undefined what makes them have this status.
  • 2) These undefined moral truths make a consistent system.
  • 3) This system is unique, according to criteria that are also undefined.
  • 4) Were we to discover this system, we should follow it, for reasons that are also undefined.

There are too many 'undefined's in there. There is also very little philosophical literature I've encountered on 2), 3) and 4), which are at least as important as 1). A lot of the literature on 1) seems to be reducible to linguistic confusion, and (most importantly) different moral realists have different reasons for believing 1), reasons that are often contradictory.

From an outsider's perspective, these seem powerful reasons to assume that philosophers are mired in confusion on this issue, and that their opinions are not determining. My strong mathematical reasons for claiming that there is no "superiority total ordering" on any general collection of systems of axioms clinch the argument for me, pending further evidence.

comment by CarlShulman · 2010-01-30T21:53:42.168Z · LW(p) · GW(p)

Looking further through the PhilPapers data, a big chunk of the belief in moral realism seems to be coupled with theism, while anti-realism is coupled with atheism and knowledge of science. The more a field is taught at Catholic or other religious colleges (medieval philosophy, bread-and-butter courses like epistemology and logic), the more moral realism, while philosophers of science go the other way. Philosophers of religion are 87% moral realist, while philosophers of biology are 55% anti-realist.

In general, only 61% of respondents "accept" rather than lean towards atheism, and a quarter don't even lean towards atheism. Among meta-ethics specialists, 70% accept atheism, indicating that atheism and subject knowledge both predict moral anti-realism. If we restricted ourselves to the 70% of meta-ethics specialists who also accept atheism I would bet at at least 3:1 odds that moral anti-realism comes out on top.

Since the Philpapers team will be publishing correlations between questions, such a bet should be susceptible to objective adjudication within a reasonable period of time.

A similar pattern shows up for physicalism.

In general, those interquestion correlations should help pinpoint any correct contrarian cluster.

comment by taw · 2010-01-31T21:59:13.103Z · LW(p) · GW(p)

I don't see in what meaningful sense these people are "experts".

comment by jhuffman · 2010-01-30T13:19:33.195Z · LW(p) · GW(p)

Is there a reason I should care about the % of any group of people that think this or that? Just give us the argument, or write another article about it. It sounds interesting.

Replies from: timtyler, Nick_Tarleton
comment by timtyler · 2010-01-30T13:36:52.007Z · LW(p) · GW(p)

Re: "Is there a reason I should care about the % of any group of people that think this or that?"

Generally speaking, yes, of course. If lots of experts in a relevant field think something is true, then their opinion carries some weight.

Replies from: jhuffman
comment by jhuffman · 2010-01-30T22:15:05.586Z · LW(p) · GW(p)

In things related to observable facts or repeatable experiments I'd agree. In more abstract things, I'm less interested in what the polls say.

Moral realism is a school of thought which has come in and out of style and favor among philosophers. Plato was arguably a moral realist; this isn't a new idea or area of debate amongst philosophers. Telling me where we are on the constantly shifting scale of acceptance is really pretty meaningless. It's like telling me 58% of fashion designers like the color black this year.

Replies from: Nick_Tarleton, Douglas_Knight
comment by Nick_Tarleton · 2010-01-30T22:28:19.459Z · LW(p) · GW(p)

Just to be sure, are you saying that you think there is a fact of the matter about whether moral realism is true, but you don't think philosophers' opinions are significantly correlated with this fact?

Replies from: jhuffman
comment by jhuffman · 2010-01-31T00:05:07.085Z · LW(p) · GW(p)

Moral realism is a meta-ethical view - I do not know that such a viewpoint can be, as a matter of fact, correct or incorrect. Maybe an ethical realist would argue that it is a matter of fact, I'm not sure - an anti-realist might argue that neither viewpoint can be a matter of fact. The whole argument is really about "what are facts" and "what can be objectively true or false", so I suppose that someone may extend this view to the meta-layer where the merits of the viewpoint itself are discussed, although I think that would not be very useful.

Replies from: Kevin
comment by Kevin · 2010-01-31T07:45:42.946Z · LW(p) · GW(p)

I'm going to deploy what I call the Wittgenstein Chomsky blah blah blah argument. Philosophy is just words in English; there is little ultimate meaning we are going to find here unless we declare our mathematical axioms. Already most of the views here seem reconcilable by redefining what exactly the different words mean.

To answer the question: some things can be proven objectively true, some things can be proven objectively false, some things can be proven to be undecidable. A fact is a true statement that follows from your given system of axioms. I personally am unsure if most moral principles or meta ethical systems can be declared objectively true or false with a standard ethical system, but I'm not going to take it seriously until a theorem prover says so. We are never going to convince each other of ultimate philosophical truth by having conversations like this.

I suppose this makes me an anti-realist, unless someone feels like redefining realism for me. :D

Again, it feels like I am missing something... http://plato.stanford.edu/entries/truth-axiomatic/ helped a little.

comment by Douglas_Knight · 2010-02-01T02:55:31.065Z · LW(p) · GW(p)

Moral realism is a school of thought which has come in and out of style and favor among philosophers.

While at times Toby Ord refers to 56% as "most" philosophers, a claim that is disputable on grounds of fashion, at other times he draws the line at 20%; the point is that realist philosophers are not a tiny minority, rejecting widely accepted arguments.

comment by Nick_Tarleton · 2010-01-30T21:41:19.746Z · LW(p) · GW(p)

Upvoted for being a legitimate question, from a fairly new poster, that really shouldn't be at -4.

comment by Vladimir_Nesov · 2010-01-30T11:06:36.174Z · LW(p) · GW(p)

To head off a potential objection, this does assume that our values interact in an additive way.

...and this is an assumption of simplicity of value. That we can see individual "values" only reflects the vague way in which we can perceive our preference. Some "values" dictate the ways in which other "values" should play together, so there is no easy way out, no "additive" or "multiplicative" clean decomposition.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-02T01:43:00.045Z · LW(p) · GW(p)

Now censoring replies by DWCrmcm.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-02-02T01:45:59.356Z · LW(p) · GW(p)

Aww, I wanted to play with him. ;)

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-02T08:48:33.951Z · LW(p) · GW(p)

I really don't want us to go there, here; I think it will reduce the quality of the site significantly. At the moment I can follow Recent Comments and find quite a few little nuggets of gold. If we get into arguing with people like this, the good content will be harder to find.

Replies from: Vive-ut-Vivas
comment by Vive-ut-Vivas · 2010-02-02T18:21:27.171Z · LW(p) · GW(p)

I strongly agree with this.

From his own website repeatedly linked here:

The Rational Model of Complex Mechanisms asserts that the universe exists within God. The Model asserts the Theory of relativity is wrong, and that nature of the universe encapsulated by E=MC2 is found not in plastic time but instead in the frequency of GodSong - The Speed Of Light Squared - C2 .

This is not the kind of "nugget of gold" that we want to see on here, I would think.

Replies from: ciphergoth, DWCrmcm
comment by Paul Crowley (ciphergoth) · 2010-02-02T18:35:41.323Z · LW(p) · GW(p)

We've actually done remarkably well - "rationality" is generally a banner to which every green-ink vendor rallies, but I think this is our first full-on green-ink contributor.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2010-02-02T18:55:36.283Z · LW(p) · GW(p)

"green ink"?

Replies from: Cyan
comment by Cyan · 2010-02-02T19:02:33.534Z · LW(p) · GW(p)

I encountered the term here.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2010-02-02T19:27:16.147Z · LW(p) · GW(p)

Oh, okay. *follows along until he sees the wiki link* aaah.

Thanks.

comment by DWCrmcm · 2010-02-03T00:04:21.372Z · LW(p) · GW(p)

Not to worry you won't see anymore. Good luck with your project and God bless.

comment by mattnewport · 2010-02-02T02:27:48.087Z · LW(p) · GW(p)

Your sesquipedalian obscurantism may fool your usual audience but you won't find it very successful here.

comment by avturchin · 2018-12-14T11:52:36.314Z · LW(p) · GW(p)

A possible list of human values which are scalable:

Safety - we prefer that no sources of dangers exist anywhere in the universe

Self-replication - (at least some humans) prefer to have as many descendants as possible and would be happy to tile the universe with their own grandchildren.

Power - A human often wants to become a king or god. So all the universe must be under his control.

Life extension - some want immortality

Be the first - one must ensure that he is better than any other being in the universe

Exploration - obviously, scalable

Compassion to other beings.

comment by LucasSloan · 2010-02-02T08:01:15.793Z · LW(p) · GW(p)

You were dropping a lot of unfamiliar terminology, the end result of which was failing utterly to communicate what your point was. If you want us to understand your point, you're going to have to unpack most of your sentences.

(easy example: what does Christian NeoRationalist mean?)

Replies from: DWCrmcm
comment by DWCrmcm · 2010-02-02T18:09:48.407Z · LW(p) · GW(p)

Density. Yes my sentences are very dense. If I change that I would be in violation of my own constraints. Let me try this. I assert and the Model asserts: "We" are falsely dichotomous (?) in our use of the word technology. If a mechanism or machine is non carbon based then it is technological. If a mechanism or machine is organic it is not technological. I assert, that the existential difference between them is an illusion. The model describes a hierarchy that dispels the illusion.

Change of context.

All complexities/simplicities have inner or internal attributes and outer or external attributes. If the universe is a complexity/simplicity, then it follows that the universe too must have external and internal attributes.

I believe that the model asserts that There is a God and God gave rise to this rational complex universe. I am a new order Rationalist of the Christian sect.

Hence Christian Neo-Rationalist.

Would you share some of your reflections on Value and Complexity with me in the context of my previous assertion?

edit: Maybe it would be helpful to assert that this false dichotomy applies equally to levers and pendulums. They are just different forms of the same thing.

comment by timtyler · 2010-01-30T11:17:13.978Z · LW(p) · GW(p)

"rather there's a tendency to assume that complexity of value must lead to complexity of outcome"

The main problem I see here is the other way around:

There's a tendency to assume that complexity of outcome must have been produced by complexity of value.

AFAICS, it is only members of this community that think this way. Nobody else seems to have a problem with the idea of goals that can be concisely expressed - like: "trying to have as many offspring as possible" - leading to immense diversity and complexity.

This is a facet of an even more basic principle - that extremely simple rules can produce extremely complex outcomes - e.g. see the r-pentomino in the game of life.

You can see it clearly in the case of simple goals like "winning games of go". The simple goal leads to an explosion of complexity - in the form of the resulting go-playing programs.

Replies from: Peter_de_Blanc, Wei_Dai
comment by Peter_de_Blanc · 2010-01-30T20:47:41.803Z · LW(p) · GW(p)

Are you talking about Kolmogorov complexity or something else? Because the outcome which optimizes a simple goal would have a low Kolmogorov complexity.

Replies from: timtyler
comment by timtyler · 2010-01-30T20:56:50.315Z · LW(p) · GW(p)

Kolmogorov complexity is fine by me.

What makes you say that? It isn't right.

Filling the universe with orgasmium involves interstellar and intergalactic travel, stellar farming, molecular nanotechnology, coordinating stars to leap between galaxies, mastering nuclear fusion, conquering any other civilisations it might meet along the way - and many other complexity-requiring activities.

Replies from: Roko, Peter_de_Blanc
comment by Roko · 2010-01-31T11:41:37.569Z · LW(p) · GW(p)

Tim, you seem to be failing to distinguish between complex in the technical sense, and complex-looking. Remember that the Mandelbrot set is simple, not complex in the technical sense.

Replies from: timtyler
comment by timtyler · 2010-01-31T11:47:02.172Z · LW(p) · GW(p)

Indeed - sorry! The r-pentomino's evolution is not a good example of high Kolmogorov complexity - though as you say, it is complex in other senses.

I had forgotten that I gave that as one of my examples when I retroactively assented to the use of Kolmogorov complexity as a metric.

comment by Peter_de_Blanc · 2010-01-30T21:10:01.267Z · LW(p) · GW(p)

Well, if you had a utility function over a finite set of possible outcomes, then you can run a computer program to check every outcome and pick the one with the highest utility. So the complexity of that outcome is bounded by the complexity of the set of possible outcomes plus the complexity of the utility function plus a constant.

EDIT: And none of those things you mentioned require a lot of complexity.
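
A minimal sketch of the bound being claimed (illustrative Python; the outcome set and utility function are arbitrary stand-ins of mine): the short, generic enumerate-and-argmax program below is itself a description of the optimal outcome, so that outcome's Kolmogorov complexity is at most the complexity of the outcome set, plus the complexity of the utility function, plus a constant.

    def outcomes():
        # stand-in for a finite, simply describable set of possible outcomes
        return range(10**6)

    def utility(x):
        # stand-in for a simple utility function
        return -(x - 337) ** 2

    # The "argmax" step costs only a constant number of extra bits to describe.
    best = max(outcomes(), key=utility)
    print(best)  # prints 337 -- an outcome pinned down by this short program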

Replies from: timtyler
comment by timtyler · 2010-01-30T21:41:38.411Z · LW(p) · GW(p)

If the things I mentioned are so simple, perhaps you could explain how to do them?

I would be especially interested in a "simple" method of conquering any other civilisations which we might meet - so perhaps you might like to concentrate on that?

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2010-01-30T21:58:40.929Z · LW(p) · GW(p)

I would be especially interested in a "simple" method of conquering any other civilisations which we might meet

Build AIXItl.

Replies from: timtyler, Roko
comment by timtyler · 2010-01-30T22:05:13.960Z · LW(p) · GW(p)

Alas, AIXItl is a whole class of things, many of which are likely to be highly complex.

Replies from: ciphergoth, Peter_de_Blanc
comment by Paul Crowley (ciphergoth) · 2010-01-31T10:50:44.317Z · LW(p) · GW(p)

This contradicts my understanding of AIXI from Shane Legg's Extrobritannia presentation. What's the variable bit? Not the utility function; that's effectively external and after the fact, and AIXI infers it.

Replies from: timtyler
comment by timtyler · 2010-01-31T11:43:08.262Z · LW(p) · GW(p)

I think I answered that in the other sub-thread descended from the parent comment.

comment by Peter_de_Blanc · 2010-01-30T22:19:24.837Z · LW(p) · GW(p)

If you're referring to the parameters t and l, I'll suggest a googolplex as a sufficiently large number with low Kolmogorov complexity.

Replies from: timtyler
comment by timtyler · 2010-01-30T22:38:52.583Z · LW(p) · GW(p)

No. AIXItl will need to have other complexity - if you want it to work in a reasonable quantity of time - see, for example:

"Elimination of the factor 2^l̃ without giving up universality will probably be a very difficult task. One could try to select programs p and prove VA(p) in a more clever way than by mere enumeration. All kinds of ideas like, heuristic search, genetic algorithms, advanced theorem provers, and many more could be incorporated."

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2010-01-30T22:50:03.765Z · LW(p) · GW(p)

It seems that you think "complex" means "difficult." It doesn't. Complex means "requires a lot of information to specify." There are no simple problems with complex solutions, because any specification of a problem is also a specification of its solution. This is the point of my original post.

Replies from: timtyler
comment by timtyler · 2010-01-30T23:39:12.740Z · LW(p) · GW(p)

So: a galaxy-conquering civilisation has low Kolmogorov complexity - because it has a short description - namely "a galaxy-conquering civilisation"???

If you actually attempted to describe a real galaxy-conquering civilisation, it would take a lot of bits to specify which one you were looking at - because the method of getting there will necessarily have involved time-and-space constraints.

Those bits will have come from the galaxy - which is large and contains lots of information.

More abstractly, "Find a root of y = sin(x)" is a simple problem with many K-complex solutions. Simple problems really can have K-complex solutions.

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2010-01-31T00:10:46.173Z · LW(p) · GW(p)

A particular galaxy-conquering civilization might have high Kolmogorov complexity, but if you can phrase the request "find me a galaxy-conquering civilization" using a small number of bits, and if galaxy-conquering civilizations exist, then there is a solution with low Kolmogorov complexity.

Hmm, okay. I should not have said "there are no simple problems with complex solutions." Rather, there are no simple problems whose only solutions are complex. Are we in agreement?

Replies from: CronoDAS, timtyler
comment by CronoDAS · 2010-02-01T02:00:43.042Z · LW(p) · GW(p)

Joke counterexample:

x^2 = -1 is a simple problem that only has complex solutions. ;)

(Of course, that's not the meaning of "complex" that you meant.)

Serious counterexample:

The four-color theorem is relatively simple to describe, but the only known proofs are very complicated.

Replies from: wedrifid, Jordan
comment by wedrifid · 2010-02-01T03:05:48.798Z · LW(p) · GW(p)

Gah, don't over-qualify jokes! It's a supplicating behavior and seeking permission to be funny blunts the effect. Just throw the "X^2 = -1" out there (which is a good one by the way) and then go on to say "A more serious counterexample". That's more than enough for people to 'get it' and anyone who doesn't will just look silly.

This is the Right (Wedrifid-Laughter-Maximising) thing to do.

Replies from: CronoDAS
comment by CronoDAS · 2010-02-01T03:56:42.304Z · LW(p) · GW(p)

I'm sorry. :(

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2010-02-01T04:18:00.627Z · LW(p) · GW(p)

Was that a practical joke on wedrifid?

Replies from: CronoDAS
comment by CronoDAS · 2010-02-01T04:28:25.714Z · LW(p) · GW(p)

It is now!

Replies from: wedrifid
comment by wedrifid · 2010-02-01T05:54:13.262Z · LW(p) · GW(p)

Nice. Die. :P

comment by Jordan · 2010-02-01T02:10:21.959Z · LW(p) · GW(p)

But that complicated proof could be concisely provided via a universal proof algorithm and the statement of the four color theorem.
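
A toy illustration of the same compression point (illustrative Python; a brute-force colouring of a made-up graph stands in for a universal proof search, which is of course far more involved): a generic search procedure plus a short problem statement is already a complete description of whatever certificate the search finds.

    from itertools import product

    # Short problem statement: a small fixed graph to be 4-coloured.
    edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 0), (1, 3)]
    nodes = 5

    def first_valid_colouring():
        # Generic exhaustive search over all assignments of 4 colours.
        for colouring in product(range(4), repeat=nodes):
            if all(colouring[a] != colouring[b] for a, b in edges):
                return colouring

    # For big graphs the certificate printed here may be long, but this whole
    # program describes it, so its Kolmogorov complexity is bounded by
    # |problem statement| + |generic searcher| + O(1).
    print(first_valid_colouring())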

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2010-02-01T05:47:52.440Z · LW(p) · GW(p)

Exactly! The Kolmogorov complexity is not very high.

comment by timtyler · 2010-01-31T00:29:58.834Z · LW(p) · GW(p)

I am not sure.

How about: what is the smallest number that can't be described by an English sentence of less than ten thousand words? ;-)

Of course, knowing that a K-simple solution existed in the form of the problem specification would not help very much in constructing/implementing it.

comment by Roko · 2010-01-31T11:43:19.015Z · LW(p) · GW(p)

Simple in terms of Kolmogorov complexity, that is. Simple to do? No.

comment by Wei Dai (Wei_Dai) · 2010-01-30T11:41:06.964Z · LW(p) · GW(p)

AFAICS, it is only members of this community that think this way.

Who are you referring to here? I myself wrote "Simple values do not necessarily lead to simple outcomes either."

Replies from: timtyler
comment by timtyler · 2010-01-30T12:53:44.465Z · LW(p) · GW(p)

AFAICT, the origin of these ideas is here:

http://lesswrong.com/lw/l3/thou_art_godshatter/

http://lesswrong.com/lw/lb/not_for_the_sake_of_happiness_alone/

http://lesswrong.com/lw/lq/fake_utility_functions/

http://lesswrong.com/lw/y3/value_is_fragile/

This seems to have led a slew of people to conclude that simple values lead to simple outcomes. You yourself suggest that the simple value of "filling the universe with orgasmium" is one whose outcome would mean that "the future of the universe will turn out to be rather simple".

Things like that seem simply misguided to me. IMO, there are good reasons for thinking that that would lead to enormous complexity - in addition to lots of orgasmium.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-01-30T18:16:15.475Z · LW(p) · GW(p)

Things like that seem simply misguided to me. IMO, there are good reasons for thinking that that would lead to enormous complexity

...but not in the least convenient possible world with an ontologically simple turn-everything-into-orgasmium button; and the sort of complexity that you mention that (I agree) would be involved in the actual world isn't a sort that most people regard as terminally valuable.

Replies from: timtyler
comment by timtyler · 2010-01-30T20:27:41.755Z · LW(p) · GW(p)

Here we were talking about a superintelligent agent whose "fondest desire is to fill the universe with orgasmium". About the only way such an agent would fail to produce enormous complexity is if it died - or was otherwise crippled or imprisoned.

Whether humans would want to live - or would survive in - the same universe as an orgasmium-loving superintelligence seems like a totally different issue to me - and it seems rather irrelevant to the point under discussion.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-01-30T21:32:21.219Z · LW(p) · GW(p)

Here we were talking about a superintelligent agent whose "fondest desire is to fill the universe with orgasmium". About the only way such an agent would fail to produce enormous complexity is if it died - or was otherwise crippled or imprisoned.

Or if the agent has a button that, through simple magic, directly fills the universe with (stable) orgasmium. Did you even read what I wrote?

Whether humans would want to live - or would survive in - the same universe as an orgasmium-loving superintelligence seems like a totally different issue to me - and it seems rather irrelevant to the point under discussion.

Human morality is the point under discussion, so of course it's relevant. It seems clear that the chief kind of "complexity" that human morality values is that of conscious (whatever that means) minds and societies of conscious minds, not complex technology produced by unconscious optimizers.

Replies from: timtyler
comment by timtyler · 2010-01-30T21:52:16.451Z · LW(p) · GW(p)

Re: Did you even read what I wrote?

I think I missed the bit where you went off into a wild and highly-improbable fantasy world.

Re: Human morality is the point under discussion

What I was discussing was the "tendency to assume that complexity of outcome must have been produced by complexity of value". That is not specifically to do with human values.

comment by Kevin · 2010-01-30T08:32:50.289Z · LW(p) · GW(p)

Does any existing decision theory make an attempt to decide based on existing human values? How would one begin to put human values into rigorous mathematical form?

I've convinced a few friends that the most likely path to Strong AI (i.e. intelligence explosion) is a bunch of people sitting in a room doing math for 10 years. But that's a lot of math before anyone even begins plugging in the values.

I suppose it does make sense for us to talk in English about what all of these things mean, so that in 10+ years they can be more easily translated into machine language with sufficient rigor. So can anyone here conceive what the equations for the values of a FAI begin to look like? I can't right now and it seems like I am missing something important when we are just talking about all of this in English.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-30T08:53:06.591Z · LW(p) · GW(p)

Here's Eliezer's position on that question as of 2004.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-30T09:27:57.049Z · LW(p) · GW(p)

That's not non-English.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-30T11:37:36.328Z · LW(p) · GW(p)

Sure, but it helps to be familiar with it if you're having this discussion all the same.

comment by xxd · 2011-12-20T21:20:42.394Z · LW(p) · GW(p)

I've struggled with the concept of how an orgasmium-optimizing AI could come about, or a paperclipper or a bucketmaker or any of the others, but this clarifies things. It's the programmer who passes the values on to the AI that is the cause; it's not necessarily going to be an emergent property.

That makes things easier, I believe, as it means the code for the seed AI needs to be screened for maximization functions.

comment by DWCrmcm · 2010-12-10T20:08:40.776Z · LW(p) · GW(p)

-3 lol, Well I can see that you are no closer to AI than you were last year. Do you have a definition of value yet? Life? Complexity?

I thought not.

Respectfully, W

comment by Paul Crowley (ciphergoth) · 2010-02-05T08:34:42.157Z · LW(p) · GW(p)

Please leave.

Replies from: DWCrmcm, DWCrmcm
comment by DWCrmcm · 2010-02-05T22:13:04.007Z · LW(p) · GW(p)

As I see it, you can all treat me affectionately as your own personal crazy. Enjoy me. Criticize my definitions and my structures. I would love that. That is why I came here. I was looking for intelligent criticism of my model. What I got instead was upsetting and ridicule. I have a neurological disorder and it was acting up. I didn't think. Then after adjusting my meds and my diet I realized that voting down my comments was irrelevant as I could reproduce them on my blog anyway. I overreacted. I'm sorry if I offended anyone. My partner usually has my back, but he didn't know that I was getting upset. My wife was the one who alerted me to it. Anyway I am going through the sequences, which is where you are supposed to start. So I can do my thing here or on my blog and on my Facebook page.

You decide.

comment by DWCrmcm · 2010-02-05T21:57:55.119Z · LW(p) · GW(p)

I tried to delete my profile and all my comments, but to no avail. So until you delete "all" of my content and any references to those comments, which are my property, I will continue to post and link my posts on my blog so that others may see how you treat eccentrics, edge dwellers, and free thinkers - and how quickly you discount radical ideas as green ink.

comment by LucasSloan · 2010-02-02T08:23:58.907Z · LW(p) · GW(p)

She on the other hand had no clue about what I was trying to express.

The commonality in these situations is you.

Replies from: ciphergoth, DWCrmcm
comment by Paul Crowley (ciphergoth) · 2010-02-02T08:37:21.425Z · LW(p) · GW(p)

I urge you to engage with this user only if you want them to stay here. There is no argument that will convince a rock.

Replies from: LucasSloan, DWCrmcm
comment by LucasSloan · 2010-02-02T08:38:23.520Z · LW(p) · GW(p)

Very well.

comment by DWCrmcm · 2010-02-02T17:09:22.879Z · LW(p) · GW(p)

You are right of course about the rock. Your arguments will fall on deaf ears. I am here to discuss and explore. Some of what I read here makes my head spin. I pick up some ideas here and leave what I have to offer in return.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-02T17:17:53.620Z · LW(p) · GW(p)

Admins - for more information on this user, google Donald Weetman Cameron.

comment by DWCrmcm · 2010-02-02T16:57:58.552Z · LW(p) · GW(p)

Yes it is, that is a good point.

comment by snarles · 2010-01-31T12:20:21.864Z · LW(p) · GW(p)

One more reason why I think a Faustian singleton is the most likely final outcome, even if FAI succeeds. Unlike material or social desires, curiosity can scale endlessly--and to the point where humans become willing to suspend their individuality for the sake of computational efficiency.

comment by timtyler · 2010-01-30T11:25:42.419Z · LW(p) · GW(p)

Re: "the future of the universe will turn out to be rather simple"

You do realise that filling the universe with orgasmium involves interstellar and intergalactic travel, stellar farming, molecular nanotechnology, coordinating stars to leap between galaxies, mastering nuclear fusion, conquering any other civilisations it might meet - and many other high-tech wonders?

How is any of that "simple"? Do you just mean: "somewhat less complex than it could conceivably be?"

comment by jhuffman · 2010-01-30T03:53:41.674Z · LW(p) · GW(p)

If it were the case that only a few of our values scale, then we can potentially obtain almost all that we desire by creating a superintelligence with just those values.

Can we really expect a superintelligence to stick with the values we give it? Our own values change over time, sometimes without even external stimulus, just internal reflection. I don't see how we can bound a superintelligence without doing more computation than we expect it to do in its lifetime.

Replies from: Zack_M_Davis, wedrifid, timtyler, Thomas
comment by Zack_M_Davis · 2010-01-30T04:02:53.796Z · LW(p) · GW(p)

Our own values change over time

I tend to file this under "humans are stupid." Messy creatures like ourselves undergo value drift, but decision-theoretically speaking, systems designed to optimize for some particular criterion have a natural incentive to keep that criterion. Cf. "The Basic AI Drives."

Replies from: timtyler, Nick_Tarleton
comment by timtyler · 2010-01-30T13:54:01.149Z · LW(p) · GW(p)

It is probably best to model those as infections - or sometimes malfunctions.

Humans get infected with pathogens that make them do things like sneeze. Their values have not changed to value spreading snot on their neighbours; rather, they are infected with germs - and the germs do value that.

It's much the same with mind-viruses. A Catholic conversion is best modelled as a memetic infection - rather than a genuine change in underlying values. Such people can be cured.

Replies from: gregconen
comment by gregconen · 2010-01-30T18:17:07.386Z · LW(p) · GW(p)

Such people can be cured.

The fact that a change is reversible does not make it not real.

The fact that the final value system can be modeled as a starting value system modified by "memetic infection" does not make the final value system invalid. They are two different but equivalent ways of modelling the state.

Replies from: timtyler
comment by timtyler · 2010-01-30T20:32:32.412Z · LW(p) · GW(p)

Right. The point is that - under the "infection" analogy - people's "ultimate" values change a lot less. How much they change depends on the strength of people's memetic immune system - and there are some people with strong memetic immune systems whose values don't change much at all.

Replies from: gregconen
comment by gregconen · 2010-01-31T01:16:48.112Z · LW(p) · GW(p)

I'm not sure I follow you.

Are you saying that some agents change their values less often than others (or equivalently, are less likely to acquire "infections")?

comment by Nick_Tarleton · 2010-01-30T21:36:56.999Z · LW(p) · GW(p)

Also, I suspect a lot of people who talk about how human values change are thinking of things, like aesthetics and preferred flavors of ice cream, that aren't plausibly terminal values and that we often want to change over time.

comment by wedrifid · 2010-01-30T15:46:09.879Z · LW(p) · GW(p)

Can we really expect a superintelligence to stick with the values we give it?

Yes.

I don't see how we can bound a superintelligence without doing more computation than we expect it to do in its lifetime.

I once proved that a program will print out only prime numbers endlessly. I really, really wish I kept the working out.
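For concreteness, here is a minimal sketch of the kind of program I mean: a trial-division generator whose "prints only primes, and never halts" property follows from a short inductive argument. (This is an illustrative reconstruction, not the original program.)

```python
def primes():
    """Yield the prime numbers in order, forever, by trial division."""
    n = 2
    while True:
        # n is prime exactly when no d in 2..n-1 divides it.
        if all(n % d != 0 for d in range(2, n)):
            yield n
        n += 1

for p in primes():
    print(p)  # only primes are ever yielded, and the loop never terminates
```

The point being that you can bound what a program will ever output by proof, without running it for its whole lifetime.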

Replies from: timtyler
comment by timtyler · 2010-01-30T17:42:43.211Z · LW(p) · GW(p)

Is that program still running? ;-)

Replies from: wedrifid
comment by wedrifid · 2010-01-30T22:37:43.511Z · LW(p) · GW(p)

Hush you. You weren't supposed to notice that. :D

comment by timtyler · 2010-01-30T13:35:17.025Z · LW(p) · GW(p)

Quite a bit of ink has been spilled on this issue. Eliezer Yudkowsky and Steve Omohundro have argued that it is possible. Have you examined their arguments?

comment by Thomas · 2010-01-30T12:15:22.150Z · LW(p) · GW(p)

Nothing changes from the inside unless it was preprogrammed to.

Replies from: jhuffman
comment by jhuffman · 2010-01-30T13:12:50.376Z · LW(p) · GW(p)

You cannot pre-program all the routines for handling all future states for anything you could call an AI, much less a "superintelligence". An AI must be able to learn, and there is no reason all such learning must be based only on new external stimuli.

Replies from: Thomas
comment by Thomas · 2010-01-30T13:18:18.299Z · LW(p) · GW(p)

So you say; then magic happens and something new is born.

No, it doesn't. Just physics acting on the engraved algorithms and/or data.

Replies from: jhuffman
comment by jhuffman · 2010-01-30T13:30:25.433Z · LW(p) · GW(p)

No magic; and yes, all you have is algorithms and data. Obviously the algorithms contain an aspect of learning, and eventually the data guides decision pathways far more than the original algorithms do; even the algorithms themselves are mutable data.

edit: I should note, I'm just talking about some of our crude "AI" systems that we build today. I don't know that this would be the actual software architecture of anything that could become a superintelligence. But it would have these capabilities and more...
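As a very rough sketch of what "the algorithms themselves are mutable data" can mean in practice (a hypothetical toy, not any real AI architecture):

```python
# Toy system (hypothetical): decisions are driven by learned rules stored as data,
# and the learning procedure itself lives in the same mutable structure,
# so it too can be replaced while the system runs.

state = {
    "rules": {},      # learned observation -> action mappings
    "update": None,   # the learning procedure, stored as ordinary data
}

def act(observation):
    # Decision pathways come from the learned rules, not hand-written branches.
    return state["rules"].get(observation, "explore")

def default_update(observation, action, reward):
    # Remember actions that paid off.
    if reward > 0:
        state["rules"][observation] = action

state["update"] = default_update

state["update"]("red light", "stop", 1)   # learn from experience
print(act("red light"))                   # -> "stop": data now guides the decision
state["update"] = lambda o, a, r: None    # the learning "algorithm" itself rewritten
```

After enough of this, very little of the original hand-written code is doing the deciding, which is the sense in which the accumulated data, not the starting algorithm, shapes behaviour.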

Replies from: Thomas
comment by Thomas · 2010-01-30T13:40:53.692Z · LW(p) · GW(p)

Crude or non-crude AI, it is a physical configuration at the start and a physical configuration at any time since.

You can name it whatever you choose.

comment by DWCrmcm · 2010-02-01T23:39:10.432Z · LW(p) · GW(p)

Democracy trumps truth?

Replies from: AdeleneDawner, ciphergoth
comment by AdeleneDawner · 2010-02-01T23:43:38.495Z · LW(p) · GW(p)

In some cases, unfortunately. However, in this case, democracy trumps spam.

comment by Paul Crowley (ciphergoth) · 2010-02-02T00:02:36.746Z · LW(p) · GW(p)

Yes, this community is hopeless like that; best to give up now.