Posts

Comments

Comment by VCM on Is the argument that AI is an xrisk valid? · 2021-07-27T09:13:29.300Z · LW · GW

One more consideration about "instrumental intelligence": we left that somewhat under-defined, more like "if I had that utility function, what would I do?" ... but it is not clear that this image of "me in the machine" captures what a current or future machine would do. In other words, people who use instrumental intelligence as their image of AI owe us a more detailed explanation of what that would be, given the machines we are actually creating - not just given the standard theory of rational choice.

Comment by VCM on Is the argument that AI is an xrisk valid? · 2021-07-26T14:50:34.097Z · LW · GW

Thanks, it's useful to bring these out - though we mention them in passing. Just to be sure: we are looking at the XRisk thesis, not at some thesis that AI can be "dangerous", as most technologies will be. The Omohundro-style escalation is precisely the issue in our point that instrumental intelligence is not sufficient for XRisk.

Comment by VCM on Is the argument that AI is an xrisk valid? · 2021-07-26T08:29:34.663Z · LW · GW

... we aren't trying to prove the absence of XRisk; we are probing the best argument for it?

Comment by VCM on Is the argument that AI is an xrisk valid? · 2021-07-25T14:08:47.015Z · LW · GW

We tried to find the strongest argument in the literature. This is how we came up with our version:

"
Premise 1: Superintelligent AI is a realistic prospect, and it would be out of human control. (Singularity claim)

Premise 2: Any level of intelligence can go with any goals. (Orthogonality thesis)

Conclusion: Superintelligent AI poses an existential risk for humanity.
"

====
A more formal version with the same propositions might be this:

1. IF there is a realistic prospect that there will be a superintelligent AI system that is a) out of human control and b) can have any goals, THEN there is existential risk for humanity from AI

2. There is a realistic prospect that there will be a superintelligent AI system that is a) out of human control and b) can have any goals

->

3. There is existential risk for humanity from AI

====
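In this shape the inference itself is just modus ponens, so the validity question shifts entirely to the premises. A minimal sketch of that inference step (propositional encoding assumed; not part of the paper):

```lean
-- S: there is a realistic prospect of a superintelligent AI system that is
--    a) out of human control and b) can have any goals
-- X: there is existential risk for humanity from AI
variable (S X : Prop)

-- Premise 1 (S → X) and Premise 2 (S) give the conclusion X by modus ponens,
-- so this reconstructed argument is formally valid; the remaining question
-- is whether both premises can be true together.
example (p1 : S → X) (p2 : S) : X := p1 p2
```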

And now our concern is whether a superintelligence can satisfy both a) and b) - given that a) must be understood in a way that is strong enough to generate existential risk, including "widening the frame", and b) must be understood as strong enough to exclude reflection on goals. Perhaps that will work only if "intelligent" is understood in two different ways? Thus Premise 2 is doubtful.

Comment by VCM on Delta variant: we should probably be re-masking · 2021-07-25T13:27:23.651Z · LW · GW

Even if that is true, you would still a) get a lot of sickness & suffering, and b) infect a lot of other people (who infect further). So some people would be seriously ill and some would die as a result of this experiment.

Comment by VCM on Is the argument that AI is an xrisk valid? · 2021-07-25T07:38:09.111Z · LW · GW

Can one be a moral realist and subscribe to the orthogonality thesis? In which version of it? (In other words, does one have to reject moral realism in order to accept the standard argument for XRisk from AI? We had better be told! See section 4.1)

Comment by VCM on Is the argument that AI is an xrisk valid? · 2021-07-25T07:33:06.224Z · LW · GW

But reasoning about morality? Is that a space with logic, or does anything go?

Comment by VCM on Is the argument that AI is an xrisk valid? · 2021-07-25T07:32:00.314Z · LW · GW

Thanks. We are actually more modest. We would like to see a sound argument for XRisk from AI and we investigate what we call 'the standard argument'; we find it wanting and try to strengthen it, but we fail. So there is something amiss. In the conclusion we admit "we could well be wrong somewhere and the classical argument for existential risk from AI is actually sound, or there is another argument that we have not considered."

I would say the challenge is to present a sound argument (valid + true premises) or at least a valid argument with decent inductive support for the premises. Oddly, we do not seem to have that.

Comment by VCM on Is the argument that AI is an xrisk valid? · 2021-07-25T07:23:25.472Z · LW · GW

... plus we say that in the paper :)

Comment by VCM on Is the argument that AI is an xrisk valid? · 2021-07-24T15:50:13.320Z · LW · GW

  • Maximal overall utility is better than minimal overall utility. Not sure what that means. The NPCs in this simulation don't have "utility". The real humans in the secret prison do.

This should have been clearer. We meant this in Bentham's good old way: minimal pain and maximal pleasure. Intuitively: a world with a lot of pleasure (in the long run) is better than a world with a lot of pain. - You don't need to agree; you just need to grant that this is worth considering. But on our interpretation the orthogonality thesis says that one cannot consider this.

Comment by VCM on Is the argument that AI is an xrisk valid? · 2021-07-24T15:37:28.381Z · LW · GW

Thanks for this. Indeed, we have no theory of goals here and how they relate; maybe they must be in a hierarchy, as you suggest. And there is a question, then, whether there must be some immovable goal or goals that would have to remain in place in order to judge anything at all. This would constitute a theory of normative judgment ... which we don't have up our sleeves :)

Comment by VCM on Is the argument that AI is an xrisk valid? · 2021-07-24T15:34:41.394Z · LW · GW

We suggest that such instrumental intelligence would be very limited.

In fact, there are degrees of generality here, and it seems one needs a fairly high degree to get to XRisk - but that high degree would then exclude orthogonality.

Comment by VCM on Is the argument that AI is an xrisk valid? · 2021-07-24T15:32:33.760Z · LW · GW

Yes, that means "this argument".

Comment by VCM on Is the argument that AI is an xrisk valid? · 2021-07-24T15:27:29.206Z · LW · GW

Thanks for the 'minor' point, which is important: yes, we meant definitely out of human control. And perhaps that is not required, so the argument has a different shape.

Our struggle was to write down a 'standard argument' in such a way that it is clear and its assumptions come out - and your point adds to this.

Comment by VCM on Is the argument that AI is an xrisk valid? · 2021-07-24T15:25:15.818Z · LW · GW

Here we get to a crucial issue, thanks! If we do assume that reflection on goals does occur, do we assume that the results bear any resemblance to human reflection on morality? Perhaps there is an assumption about the nature of morality or moral reasoning in the 'standard argument' that we have not discussed?

Comment by VCM on Is the argument that AI is an xrisk valid? · 2021-07-24T15:22:39.127Z · LW · GW

We do not say that there is no XRisk or no XRisk from AI.

Comment by VCM on Is the argument that AI is an xrisk valid? · 2021-07-24T15:21:47.381Z · LW · GW

... well, one might say we assume that if there is 'reflection on goals', the results are not random.

Comment by VCM on Is the argument that AI is an xrisk valid? · 2021-07-24T15:20:09.351Z · LW · GW

apologies, I don't recognise the paper here :)

Comment by VCM on Is the argument that AI is an xrisk valid? · 2021-07-24T15:19:10.178Z · LW · GW

We tried to frame the discussion internally, i.e. without making additional assumptions that people may or may not agree with (e.g. moral realism). If we did the job right, the assumptions made in the argument are in the 'singularity claim' and the 'orthogonality thesis' - and there the dilemma is that we need an assumption in the one (general intelligence in the singularity claim) that we must reject in the other (the orthogonality thesis).

What we do say (see figure 1) is that two combinations are inconsistent:

a) general intelligence + orthogonality

b) instrumental intelligence + existential risk

So if one wants to keep the 'standard argument', one would have to argue that one of these two, a) or b), is fine.

Comment by VCM on Is the argument that AI is an xrisk valid? · 2021-07-24T15:12:44.955Z · LW · GW

Is this 'standard argument' valid? We only argue that it is problematic.

If this argument is invalid, what would a valid argument look like? Perhaps with a 'sufficient probability' of high risk from instrumental intelligence?

Comment by VCM on Is the argument that AI is an xrisk valid? · 2021-07-24T15:12:03.514Z · LW · GW

Comment by VCM on The flawed Turing test: language, understanding, and partial p-zombies · 2013-05-19T19:58:07.862Z · LW · GW

The combinatorial explosion is on the side of the TT, of course. But storage space is on the side of "design to the test": if you can make up a nice decisive question, the designer can think of it, too (or read your blog) and add that. The question here is whether Stuart (and Ned Block) are right that such a "giant lookup table" a) makes sense and b) has no intelligence - "the intelligence of a toaster", as Block said.
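Just to illustrate the scale of that explosion, a back-of-the-envelope sketch with illustrative numbers (a ~10,000-word vocabulary and ~300-word conversations are my assumptions, not from the discussion above):

```python
import math

# Illustrative assumptions (not from the discussion above):
vocab_size = 10_000      # usable English word types
conversation_len = 300   # words in a short Turing-test exchange

# Number of distinct word sequences of that length, in log10:
log10_sequences = conversation_len * math.log10(vocab_size)
print(f"~10^{log10_sequences:.0f} possible conversations")  # ~10^1200

# Even if almost all sequences are nonsense, any non-negligible fraction
# still dwarfs physically available storage (roughly 10^80 atoms in the
# observable universe) - the sense in which the explosion favours the
# test over a "giant lookup table".
```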

Comment by VCM on The flawed Turing test: language, understanding, and partial p-zombies · 2013-05-19T19:47:03.377Z · LW · GW

That's why the test only offers a sufficient condition for intelligence (not a necessary one) - at least that's the standard view.

Comment by VCM on The flawed Turing test: language, understanding, and partial p-zombies · 2013-05-18T12:55:49.388Z · LW · GW

P.S.: Whether all this has to do with conscious experience ("consciousness") we don't know, I think.

Comment by VCM on The flawed Turing test: language, understanding, and partial p-zombies · 2013-05-18T12:54:45.555Z · LW · GW

The classical problem is that the Turing Test is behavioristic and only provides a sufficient criterion (rather than replacing talk about 'intelligence', as Turing suggests). And it doesn't provide a proper criterion in that it relies on human judges - who, in practice, tend to take humans for computers. - Of course it is meant to be open-ended in that "anything one can talk about" is permitted, including stuff that's not on the web. That is a large set of intelligent behavior, but still a limited set - so the "design to the test" you are pointing out is precisely what chatterbot people use. And it's usually pretty dumb and provides no insight into human flexibility in using language (which can be used for more than "talking about stuff"). I also suspect that we'll have machines passing the test pretty soon, in the sense of fooling non-sophisticated judges. So far, results are hopeless, however! The main weakness is that the machines don't do much analysis of the conversation so far. --- Essentially, we know from similar problems (e.g. speech recognition) that one can get very good, but somewhere in the upper 90% range there is a limit that's very hard to break without using more data.

Comment by VCM on A brief history of ethically concerned scientists · 2013-03-13T06:51:05.128Z · LW · GW

Thanks, insightful post. I find the research a bit patchy, though. On the atomic bomb alone there is a vast literature since the 1950s, even in popular fiction - and a couple of crucial names like Oppenheimer (vs. Teller), the Russell–Einstein Manifesto, or v. Weizsäcker are absent here.