John Danaher on 'The Superintelligent Will'

post by lukeprog · 2012-04-03T03:08:48.937Z · LW · GW · Legacy · 12 comments

Philosopher John Danaher has written an explication and critique of Bostrom's "orthogonality thesis" from "The Superintelligent Will." To quote the conclusion:

 

Summing up, in this post I’ve considered Bostrom’s discussion of the orthogonality thesis. According to this thesis, any level of intelligence is, within certain weak constraints, compatible with any type of final goal. If true, the thesis might provide support for those who think it possible to create a benign superintelligence. But, as I have pointed out, Bostrom’s defence of the orthogonality thesis is lacking in certain respects, particularly in his somewhat opaque and cavalier dismissal of normatively thick theories of rationality.

As it happens, none of this may affect what Bostrom has to say about unfriendly superintelligences. His defence of that argument relies on the convergence thesis, not the orthogonality thesis. If the orthogonality thesis turns out to be false, then all that happens is that the kind of convergence Bostrom alludes to simply occurs at a higher level in the AI’s goal architecture. 

What might, however, be significant is whether the higher-level convergence is a convergence towards certain moral beliefs or a convergence towards nihilistic beliefs. If it is the former, then friendliness might be necessitated, not simply possible. If it is the latter, then all bets are off. A nihilistic agent could do pretty much anything, since no goals would be rationally entailed.

 

12 comments

Comments sorted by top scores.

comment by Manfred · 2012-04-03T15:49:35.699Z · LW(p) · GW(p)

Hm, the Future Tuesday Indifference example is an interesting one. The reason it seems reflectively incoherent is that it violates an expected utility axiom when interpreted in the typical way. If you calculate the expected utility of an option but forget to add in the expected utility from future Tuesdays, you simply get the wrong answer.

However, interestingly, you can't self-modify into being a normal hedonist with only causal decision theory. If it's not Tuesday, then changing your utility function to include Tuesdays doesn't increase what you calculate as the expected utility. If it is Tuesday, then it's too late, unless you have a decision theory that allows you to treat a change to optimality as a good idea no matter when you make it.
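Here's a toy numerical sketch of that point (the numbers, the two plans, and the function names are just made up for illustration): the agent scores a proposed self-modification with the utility function it has right now, so the modification never looks like an improvement by its current lights.

```python
# Toy sketch: a "future-Tuesday-indifferent" (FTI) agent evaluating a proposed
# self-modification with its *current* utility function. All numbers are invented.

def fti_utility(outcomes):
    """Hedonic utility that ignores pleasure occurring on Tuesdays."""
    return sum(p for day, p in outcomes if day != "Tuesday")

def hedonist_utility(outcomes):
    """Ordinary hedonic utility: every day's pleasure counts."""
    return sum(p for day, p in outcomes)

# Plan A: what the agent does if it keeps its current (FTI) values.
# Plan B: what it would do after self-modifying into a normal hedonist
#         (trading a little Monday pleasure for a lot of Tuesday pleasure).
plan_keep_fti = [("Monday", 10), ("Tuesday", 0)]
plan_modify   = [("Monday", 8), ("Tuesday", 10)]

# A CDT-style agent scores both plans with the utility function it has *now*:
print(fti_utility(plan_keep_fti))  # 10
print(fti_utility(plan_modify))    # 8 -> the modification looks strictly worse,
                                   #      so the agent declines to self-modify,
                                   # even though hedonist_utility(plan_modify) == 18.
```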

Replies from: DuncanS
comment by DuncanS · 2012-04-03T23:36:32.088Z · LW(p) · GW(p)

The problem is that the utility function isn't constant. If you, today, are indifferent to what happens on future Tuesdays, then you will also think it's a bad thing that your future self will care what happens on the Tuesday it finds itself in. You will therefore replace your current self with a different self that is indifferent to all future Tuesdays, including the ones it is in, thus preserving the goal that you have today.
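A tiny sketch of the same idea, with made-up numbers and names: the current agent ranks candidate successors by its *current* utility, so the successor that stays Tuesday-indifferent always wins.

```python
# Made-up illustration: which successor would the current (Tuesday-indifferent)
# agent install in itself? It ranks successors by the *current* utility of the
# behaviour each successor would later produce.

def fti_utility(outcomes):
    return sum(p for day, p in outcomes if day != "Tuesday")

# Hypothetical behaviour of each successor once Tuesday actually arrives
# (caring about Tuesday costs it some Wednesday pleasure).
behaviour = {
    "successor stays Tuesday-indifferent": [("Tuesday", 0), ("Wednesday", 10)],
    "successor cares about Tuesdays":      [("Tuesday", 9), ("Wednesday", 7)],
}

best = max(behaviour, key=lambda s: fti_utility(behaviour[s]))
print(best)  # "successor stays Tuesday-indifferent" -- today's goal is preserved
```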

Replies from: Manfred
comment by Manfred · 2012-04-04T00:28:16.870Z · LW(p) · GW(p)

Good point. I have to remember not to confuse expected utility with future utility.

comment by Lapsed_Lurker · 2012-04-03T12:00:32.508Z · LW(p) · GW(p)

I am not sure what 'accurate moral beliefs' means. By analogy with 'accurate scientific beliefs', it seems as if Mr Danaher is saying there are true morals out there in reality, which I had not thought to be the case, so I am probably confused. Can anyone clarify my understanding with a brief explanation of what he means?

Replies from: JohnD, Jayson_Virissimo
comment by JohnD · 2012-04-04T10:20:49.009Z · LW(p) · GW(p)

Well, I suppose I had in mind the fact that any cognitivist metaethics holds that moral propositions have truth values, i.e. are capable of being true or false. And if cognitivism is correct, then it would be possible for one's moral beliefs to be more or less accurate (i.e. to be more or less representative of the actual truth values of sets of moral propositions).

While moral cognitivism is most at home with moral realism - the view that moral facts are observer-independent - it is also compatible with some versions of anti-realism, such as the constructivist views I occasionally endorse.

The majority of moral philosophers (a biased sample) are cognitivists, as are most philosophers outside moral philosophy that I speak to (purely anecdotal evidence). If one is not a moral cognitivist, then the discussion in my blog post will of course be unpersuasive. But in that case, one might incline towards moral nihilism, which could, as I pointed out, provide some support for the orthogonality thesis.

comment by Jayson_Virissimo · 2012-04-04T08:23:02.856Z · LW(p) · GW(p)

And yet, several high-status Less Wrongers continue to affirm utilitarianism (specifically, with equal weight for each person in the social welfare function). I have criticized these beliefs in the past (as not, in any way, constraining experience), but have not received a satisfactory response.

Replies from: Lapsed_Lurker
comment by Lapsed_Lurker · 2012-04-04T08:39:07.835Z · LW(p) · GW(p)

And yet, several high-status Less Wrongers continue to affirm utilitarianism with equal weight for each person in the social welfare function. I have criticized these beliefs in the past (as not, in any way, constraining experience), but have not received a satisfactory response.

I'm not sure how that answers my question, or follows from it. Can you clarify?

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-04-04T08:44:22.372Z · LW(p) · GW(p)

It wasn't meant as an attempt to answer your question. I was pointing out that this isn't only a problem for Danaher.

comment by CarlShulman · 2012-04-03T04:45:53.863Z · LW(p) · GW(p)

The post needs a direct link. The current version only links to Danaher's homepage and Bostrom's article.

Replies from: lukeprog
comment by lukeprog · 2012-04-03T08:34:14.102Z · LW(p) · GW(p)

Oops, lol. Fixed.

comment by Desrtopa · 2012-04-03T14:27:59.017Z · LW(p) · GW(p)

And here I was wondering if this was a paper from the esteemed Brazilian jiu-jitsu coach (who does in fact have a Master's degree in philosophy).

Rather than doing pretty much anything, a genuinely nihilistic agent seems to me more likely to default to doing nothing.

Replies from: JohnD
comment by JohnD · 2012-04-04T10:27:31.967Z · LW(p) · GW(p)

I think that's an interesting point. I suppose I was thinking that nihilism, at least in the way it's typically discussed, holds not that doing nothing is rational but, rather, that no goals are rational (a subtle difference, perhaps). This, in my opinion, might equate with all goals being equally possible. But, as you point out, if all goals are equally possible, the agent might default to doing nothing.

One might put it like this: the agent would be landed in the equivalent of a Buridan's Ass dilemma. As far as I recall, the possibility that a CPU could be landed in such a dilemma was a genuine problem in the early days of computer science. I believe some protocol was introduced to sidestep the problem.
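For illustration only (this isn't the historical protocol I'm thinking of, just the usual software-level fix): break exact ties arbitrarily, so the agent always picks something rather than stalling.

```python
# Minimal sketch of tie-breaking between equally ranked options; the function
# name and the toy "bale" example are invented for this illustration.
import random

def choose(options, value):
    """Return a highest-valued option, breaking exact ties at random."""
    best = max(value(o) for o in options)
    return random.choice([o for o in options if value(o) == best])

# A Buridan's Ass choice: two equally attractive bales of hay.
print(choose(["left bale", "right bale"], value=lambda o: 1.0))
```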