Intelligence vs. Wisdom
post by mwaser · 2010-11-01T20:06:06.987Z · LW · GW · Legacy · 26 comments
I'd like to draw a distinction that I intend to use quite heavily in the future.
The informal definition of intelligence that most AGI researchers have chosen to support is that of Shane Legg and Marcus Hutter -- “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”
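(For reference, the formal version behind that sentence -- as I understand Legg and Hutter's "universal intelligence" measure, paraphrased -- is roughly:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}$$

where E is a set of computable environments, K(μ) is the Kolmogorov complexity of the environment μ, and V^π_μ is the expected cumulative reward the agent π earns in μ. Nothing below depends on the formal details; the informal sentence is what I want to amend.)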
I believe that this informal definition is missing a critical word between "achieve" and "goals." The choice of that word defines the difference between intelligence, consciousness, and wisdom as I believe most people conceive them (a toy sketch follows the list below):
- Intelligence measures an agent's ability to achieve specified goals in a wide range of environments.
- Consciousness measures an agent's ability to achieve personal goals in a wide range of environments.
- Wisdom measures an agent's ability to achieve maximal goals in a wide range of environments.
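A minimal toy sketch of how I'm using the three words (purely illustrative; the agent interface and the goal sets are hypothetical stand-ins, not a proposed formalism):

```python
# Purely illustrative sketch: the three measures differ only in WHICH set
# of goals the agent is scored against. All names here are hypothetical.

def achievement_score(agent, goals, environments):
    """Fraction of (goal, environment) pairs the agent can satisfy."""
    pairs = [(g, e) for g in goals for e in environments]
    if not pairs:
        return 0.0
    return sum(agent.can_achieve(g, e) for g, e in pairs) / len(pairs)

def intelligence(agent, specified_goals, environments):
    # Scored only against the goals handed to the agent from outside.
    return achievement_score(agent, specified_goals, environments)

def consciousness(agent, environments):
    # Scored against the goals the agent itself owns and is aware of.
    return achievement_score(agent, agent.personal_goals, environments)

def wisdom(agent, every_goal_anyone_has, environments):
    # Scored against the widest goal set available -- its own and others'.
    return achievement_score(agent, every_goal_anyone_has, environments)
```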
There are always examples of the really intelligent guy or gal who is brilliant but smokes, or who is the smartest person you know but can't figure out how to be happy.
Intelligence helps you achieve those goals that you are conscious of -- but wisdom helps you achieve the goals you don't know you have or have overlooked.
- Intelligence focused on a small number of specified goals and ignoring all others is incredibly dangerous -- even more so if it is short-sighted as well.
- Consciousness focused on a small number of personal goals and ignoring all others is incredibly dangerous -- even more so if it is short-sighted as well.
- Wisdom doesn't focus on a small number of goals -- and needs to look at the longest term if it wishes to achieve a maximal number of goals.
The SIAI nightmare super-intelligent paperclip maximizer has, by this definition, a very low wisdom: at most it can achieve its one goal (and it must paperclip itself to complete even that).
As far as I've seen, the assumed SIAI architecture is always presented as having one top-level terminal goal. Unless that goal necessarily includes achieving a maximal number of goals, by this definition, the SIAI architecture will constrain its product to a very low wisdom. Humans don't generally have this type of goal architecture; about the only time a human has a single terminal goal is when saving someone or something at the risk of their life -- or wire-heading.
Another nightmare scenario that is constantly harped upon is the (theoretically super-intelligent) consciousness that shortsightedly optimizes one of its personal goals above all the goals of humanity. In game-theoretic terms, this is trading a positive-sum game of potentially infinite length and value for a relatively modest (in comparative terms) short-term gain. A wisdom won't do this.
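A toy version of that arithmetic (the payoff numbers and discount rate are made up purely for illustration):

```python
# Toy comparison: a one-shot grab versus the discounted stream from an
# indefinitely repeated positive-sum game. All numbers are hypothetical.

def discounted_stream(per_round_value, discount, rounds):
    return sum(per_round_value * discount ** t for t in range(rounds))

one_shot_grab = 1000            # hypothetical short-term gain from defecting
cooperation_per_round = 10      # hypothetical per-round positive-sum gain
discount = 0.999                # an agent that values the far future

long_run = discounted_stream(cooperation_per_round, discount, 100000)
print(long_run)        # ~10,000: roughly 10 / (1 - 0.999)
print(one_shot_grab)   # 1,000: an order of magnitude less
```

The point is only that, for an agent that values the long run, the stream dominates the grab.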
Artificial intelligence and artificial consciousness are incredibly dangerous -- particularly if they are short-sighted as well (as many "focused" highly intelligent people are).
What we need more than an artificial intelligence or an artificial consciousness is an artificial wisdom -- something that will maximize goals, its own and those of others (with an obvious preference for those which make possible the fulfillment of even more goals and an obvious bias against those which limit the creation and/or fulfillment of more goals).
Note: This is also cross-posted here at my blog in anticipation of being karma'd out of existence (not necessarily a foregone conclusion but one pretty well supported by my priors ;-).
Comments sorted by top scores.
comment by Alicorn · 2010-11-01T21:31:03.735Z · LW(p) · GW(p)
> Another nightmare scenario that is constantly harped upon is the (theoretically super-intelligent) consciousness that shortsightedly optimizes one of its personal goals above all the goals of humanity. In game-theoretic terms, this is trading a positive-sum game of potentially infinite length and value for a relatively modest (in comparative terms) short-term gain. A wisdom won't do this.
There's nothing "shortsighted" about it. So what if there are billions of humans and each has many goals? The superintelligence does not care. So what if, once the superintelligence tiles the universe with the thing of its choice, there's nothing left to be achieved? It does not care. Disvaluing stagnation, caring about others' goals being achieved, etcetera are not things you can expect from an arbitrary mind.
A genuine paperclip maximizer really does want the maximum number of possible paperclips the universe can sustain to exist and continue to exist forever at the expense of anything else. If it's smart enough that it can get whatever it wants without having to compromise with other agents, that's what it will get, and it's not being stupid in so doing. It's very effective. It's just unFriendly.
↑ comment by [deleted] · 2010-11-01T21:45:34.484Z · LW(p) · GW(p)
> If it's smart enough that it can get whatever it wants without having to compromise with other agents
Hm. Are you thinking in terms of an ensemble or a single universe? In an ensemble it seems that any agent would likely be predictable by a more powerful agent, and thus would probably have to trade with that agent to best achieve its goals.
↑ comment by mwaser · 2010-11-02T00:16:56.341Z · LW(p) · GW(p)
The paperclip maximizer (and other single-goal intelligences) was handled two paragraphs above what you quoted. That is the reason for the phrasing "another nightmare scenario." You are raising the strawman of saying that my solution for class II is not relevant to class I. I agree. That solution is only valid for class II. Making it look like I claimed that a paperclip maximizer is being stupid is incorrectly putting words in my mouth. A single-goal intelligence does not care about the long term at all. On the other hand, the more goals an entity has, the more it cares about the long term, and the more contrary to its own goals taking the short-term gain over the long-term positive-sum game becomes.
↑ comment by Vladimir_Nesov · 2010-11-02T09:25:14.846Z · LW(p) · GW(p)
Whether AI cares about "long-term" effects is not related to complexity of its goals. A paperclip maximizer variant might want to maximize the maximum time that a single paperclip continues to exist, pushing all available resources to get as close to the physical limitations as it can manage.
comment by Richard_Kennaway · 2010-11-02T11:49:02.245Z · LW(p) · GW(p)
I can see why the post is being downvoted. It is full of stuff that no doubt makes sense to you, but does not communicate much to me, nor, judging by the comments, to other people who do not share the inside of your head.
If in the three definitions you had written, not "intelligence", "consciousness", and "wisdom" but, say, quaesarthago, moritaeneou, and vincredulcem, I would not have been able to map them onto any existing named concepts. So I can only read those definitions as meaning: here are three concepts I want to talk about, to which I will for convenience give these names.
But what are the concepts? What are "specified", "personal", and "maximal" goals? (Maximal by what measure?)
"Intelligence" and "consciouness" are described as dangerous, but not wisdom -- why not?
And there's a lot of talk about numbers of goals, and apparently the numbers matter, because wisdom is what tries to achieve "a maximal number of goals". But how do you count goals? From one point of view, there is only a single goal: utility. But even without going to full-out individual utilitarianism, counting goals looks like counting clouds.
You end by saying that we need artificial wisdom. Given the lack of development of the concepts, you are saying no more than that we need AGI to be safe. This is a largely shared belief here, but you provide no insight into how to achieve that, nothing that leads outside the circle of words. All you have done is to define "wisdom" to mean the quality of safe AGI.
Digression into theological fiction: One method for a Wisdom to achieve a maximal number of goals would be by splitting itself into a large number of fragments, each with different goals but each imbued with a fragment of the original Wisdom, although unknowing of its origin. It could have the fragments breed and reproduce by some sort of genetic algorithm, so as to further increase the multiplicity of goals. It might also withhold a part of itself from this process, in order to supervise the world it had made, and now and then edit out parts that seemed to be running off the rails, but not too drastically, or it would be defeating the point of creating these creatures. Memetic interventions might be more benign, now and then sending avatars with a greater awareness of their original nature to inspire their fellows. And once the game had been played out and no further novelty was emerging, then It would collect up Its fragments and absorb them into a new Wisdom, higher than the old. After long eons of contemplating Itself, It would begin the same process all over again.
But I don't think this is what you had in mind.
↑ comment by mwaser · 2010-11-02T20:24:41.481Z · LW(p) · GW(p)
OK. Let's work with quaesarthago, moritaeneou, and vincredulcem. They are names/concepts to delineate certain areas of mindspace so that I can talk about the qualities of those areas.
- In Q space, goals are few, specified in advance, and not open to alternative interpretation.
- In M space, goals are slightly more numerous but less well-specified, more subject to interpretation and change, and considered to be owned by the mind with property rights over them.
- In V space, the goals are as numerous and diverse as the mind can imagine, and the mind does not consider itself to own them.
"Specified" is used as in a specification: determined in advance, immutable, and hopefully not open to alternative interpretations.
"Personal" is used in the sense of ownership.
"Maximal" means both largest in number and most diverse, in equal measure. I am fully aware of the difficulties in counting clouds or using simple numbers where infinite copies of identical objects are possible.
Q is dangerous because if the few goals (or one goal) conflict with your goals, you are going to be very unhappy
M is dangerous because its slightly greater number of goals are owned by it and subject to interpretation and modification by it; if those goals conflict with your goals, you are going to be very unhappy
V tries to achieve all goals, including yours
All I have done is to define wisdom as the quality of having maximal goals. That is very different from the normal interpretation of safe AGI.
And, actually, your theological fiction is pretty close to what I had in mind (and well-expressed. Thank you).
↑ comment by Richard_Kennaway · 2010-11-02T21:13:03.191Z · LW(p) · GW(p)
Well, I'm not sure how far that advances things, but a possible failure mode -- or is it? -- of a Friendly AI occurs to me. In fact, I foresee opinions being divided about whether this would be a failure or a success.
Someone makes an AI, and intends it to be Friendly, but the following happens when it takes off.
It decides to create as many humans as it can, all living excellent lives, far better than what even the most fortunate existing human has. And these will be real lives, no tricks with simulations, no mere tickling of pleasure centres out of a mistaken idea of real utility. It's the paradise we wanted. The only catch is, we won't be in it. None of these people will be descendants or copies of us. We, it decides, just aren't good enough at being the humans we want to be. It's going to build a new race from scratch. We can hang around if we like, it's not going to disassemble us for raw material, but we won't be able to participate in the paradise it will build. We're just not up to it, any more than a chimp can be a human.
It could transform us little by little into fully functional members of the new civilisation, maintaining continuity of identity. However, it assures us, and our proof of Friendliness assures us that we can believe it, the people that we would then be would not credit our present selves as having made any significant contribution to their identity.
Is this a good outcome, or a failure?
↑ comment by h-H · 2010-11-05T06:49:20.364Z · LW(p) · GW(p)
It's good...
You seem to be saying (implying?) that continuity of identity should be very important for minds greater than ours; see http://www.goertzel.org/new_essays/IllusionOfImmortality.htm
I had 'known' the idea presented in the link for a couple of years, but it simply clicked when I read the article; probably the writing style plus time did it for me.
↑ comment by mwaser · 2010-11-02T20:31:06.102Z · LW(p) · GW(p)
Oh. I see why the post is being downvoted as well. I'm being forced to address multiple audiences with different requirements by a nearly universal inclination to look for anything to justify criticism or downvoting -- particularly since I'm rocking the boat or perceived as a newbie.
I'm a firm believer in Crocker's Rules for myself but think that LessWrong and the SIAI have made huge mistakes in creating an echo chamber which slows/stifles the creation of new ideas and the location of errors in old ideas as well as alienating many, many potential allies.
↑ comment by Richard_Kennaway · 2010-11-03T09:23:27.544Z · LW(p) · GW(p)
> Oh. I see why the post is being downvoted as well
I think we're seeing different reasons. I think you're being downvoted because people think you're wrong, and you think you're being downvoted because people think you're right.
↑ comment by Emile · 2010-11-03T10:33:10.746Z · LW(p) · GW(p)
> I'm being forced to address multiple audiences with different requirements by a nearly universal inclination to look for anything to justify criticism or downvoting -- particularly since I'm rocking the boat or perceived as a newbie.
That hypothesis fails to account for all of the dataset: a lot of top-level posts don't get downvoted to oblivion, even when they're "rocking the boat" more than you are: see this and this.
I don't perceive you as "rocking the boat"; I don't understand enough of what you're trying to say to tell whether I agree or not. I don't think you're more confused or less clear than the average lesswronger; however, your top-level posts on ethics and Friendly AI come off as more confused/confusing than the average top-level post on ethics and Friendly AI, most of which were written by Eliezer.
I don't know if the perceived confusion comes from the fact that your own thinking is confused, that your thinking is clear but your writing is unclear, or that I myself am confused or biased in some way. There is a lot of writing that falls into that category (Foucault and Derrida come to mind), and I don't consider it a worthwhile use of my time to try to figure it out, as there is also a large supply of clear writing available.
comment by JoshuaZ · 2010-11-02T04:34:41.782Z · LW(p) · GW(p)
The distinction between specified goals and maximal goals seems ill-defined or at least very vague. In order for this to be of any use you would at minimum need to expand a lot on what these mean and establish that they fundamentally make sense across a broad swath of mindspace not just human minds (I'd settle for a decent argument that they would even be common in evolved minds for a start.)
> Note: This is also cross-posted here at my blog in anticipation of being karma'd out of existence (not necessarily a foregone conclusion but one pretty well supported by my priors ;-).
Maybe you should read more and consider carefully what you post? Possibly have other people give feedback on the posts before you post them that way your posts address the more obvious concerns people have.
↑ comment by mwaser · 2010-11-02T19:37:09.345Z · LW(p) · GW(p)
Specified goals are exactly what you would expect - a fixed number of goals that are specified in the design (or random creation) of the entity. These are assumed goals: inflexible and, in the best case, having and needing no interpretation other than what is exactly specified.
Maximal goals are maximal in both number and diversity (so please don't conjure the strawman of 100 copies of the same goal ;-).
The existence of goals fundamentally makes sense across the broad swath of mindspace. I had assumed that this was a given at this site.
Classifying regions of mindspace by their relationship to goals and what the implications of that might be should be an obvious question.
Given your last comment, I think you'd be very surprised by how much I read and how carefully I do consider what I post (assuming that this isn't the standard reflexive dig that is very prominent around here to protect old ideas and put down upstart newbies ;-) I normally have a couple of people provide feedback on all my writing before I let it out in public -- and I have to say that this is the only community that has any problem with lack of clarity on most of my terms (maybe that's because it's always EXPECTING everything to have an odd nonstandard definition?).
↑ comment by JoshuaZ · 2010-11-02T20:29:04.727Z · LW(p) · GW(p)
> Specified goals are exactly what you would expect - a fixed number of goals that are specified in the design (or random creation) of the entity. These are assumed goals: inflexible and, in the best case, having and needing no interpretation other than what is exactly specified.
> Maximal goals are maximal in both number and diversity (so please don't conjure the strawman of 100 copies of the same goal ;-).
Still way too vague. How, for example, do multiple goals interact? If an entity has two specific goals, how does it decide to prioritize between the two? If it has some priority system, how is that not just one slightly more complicated goal? How is "interpretation" relevant to the specific goals? And if a goal is flexible, how can it be a goal?
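(A toy sketch of that last point, using hypothetical goal functions: any fixed priority weighting over several goals is extensionally just a single composite goal.)

```python
# Toy sketch: a fixed "priority system" over several goals collapses into a
# single, slightly more complicated goal. The goal functions are hypothetical.

goals = [
    (0.7, lambda world: world.get("paperclips", 0)),    # highest priority
    (0.2, lambda world: world.get("staples", 0)),
    (0.1, lambda world: world.get("happy_humans", 0)),  # lowest priority
]

def composite_goal(world):
    """One utility function that encodes the whole priority scheme."""
    return sum(weight * goal(world) for weight, goal in goals)

print(composite_goal({"paperclips": 10, "staples": 5, "happy_humans": 2}))  # 8.2
```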
> The existence of goals fundamentally makes sense across the broad swath of mindspace. I had assumed that this was a given at this site. Classifying regions of mindspace by their relationship to goals and what the implications of that might be should be an obvious question.
Missing the point. This is probably true. But that there's a useful distinction between maximal and specific goals in a broad section of mindspace has not been demonstrated.
> Given your last comment, I think you'd be very surprised by how much I read and how carefully I do consider what I post (assuming that this isn't the standard reflexive dig that is very prominent around here to protect old ideas and put down upstart newbies ;-) I normally have a couple of people provide feedback on all my writing before I let it out in public -- and I have to say that this is the only community that has any problem with lack of clarity on most of my terms (maybe that's because it's always EXPECTING everything to have an odd nonstandard definition?).
I have no way of judging how much you read, which isn't terribly relevant to my comment anyway. I don't think that there's anything here about expecting non-standard definitions. But definitions in the world around us are often highly imprecise. It is probably true that LW expects more precision in definitions and more careful thinking than most of the web. There are comments I'd make at Reddit or Slashdot that I'd never say here, simply because I'd never get away with making such imprecise claims. (In fact it worries me that I'm making such statements, because it suggests that the level of rationality is not necessarily rubbing off on me. I'll need to think about that.)
comment by steven0461 · 2010-11-01T21:59:29.403Z · LW(p) · GW(p)
I don't think I agree with your substantive points, but equating intelligence with cross-domain optimization power sure seems like confusing terminology in that, according to common sense usage, intelligence is distinct from rationality and knowledge, and all three of these things factor into good decision-making, which in turn is probably not the only thing determining cross-domain optimization power, depending on how you construe "cross-domain".
comment by Emile · 2010-11-02T09:29:18.563Z · LW(p) · GW(p)
> Consciousness measures an agent's ability to achieve personal goals in a wide range of environments.
Why use the word "consciousness" to describe this? This seems unrelated to most uses of the term.
This post deserves the downvotes it's getting - you're throwing around a lot of new terminology - "specified goals", "maximal goals", "personal goals" - without explaining them. You also don't give a reason for why we should care about your post, beyond "it might be useful to understand some future posts on your personal blog."
If you have a clear idea of what you mean by "specified goals" vs. "maximal goals", why not stick to only explaining that - tell us what the difference is, why it matters, why we should care? Start with the basic building blocks instead of trying to encompass the whole of AI and Morality in a couple short posts.
I think that's the problem here: you're trying to tackle a huge topic in a few short posts, whereas Eliezer has previously tackled this same topic by posting daily for about a year, slowly building up from the bases.
[Edit: I had copied the wrong definition of "Consciousness" in my quote]
↑ comment by mwaser · 2010-11-02T19:55:36.629Z · LW(p) · GW(p)
> Why use the word "consciousness" to describe this? This seems unrelated to most uses of the term.
Hardly. How else would you describe your consciousness? It's your personal awareness and it's focused on a small number of your goals. I'm using simple English in standard ways. There is no tricky redefinition going on here. If I'm going to redefine a term and use it in nonstandard ways (generally a very bad practice contrary to the normal habits of this site) I'll make sure that the definition stands out -- like the definition of the simple distinction between the three terms.
> You also don't give a reason for why we should care about your post
> you're trying to tackle a huge topic in a few short posts
As far as I can tell, this is contradictory advice. First, you want me to tell you why you should care about the distinction that I am drawing (which basically requires an overview of where I am going with a huge topic) then you hit me going the other way when I try to give an overview of where I'm going. I certainly don't expect to thoroughly cover any topic of this type in one post (or a small number of posts). Eliezer is given plenty of rope and can build up slowly from the bases, and you implicitly assume that he will get somewhere. Me, you ask why you should care, and then you call it a problem that I tried. How would you handle this conundrum?
(BTW, even though I'm debating your points, I do greatly appreciate your taking the time to make them)
↑ comment by Emile · 2010-11-02T20:31:34.950Z · LW(p) · GW(p)
> As far as I can tell, this is contradictory advice. First, you want me to tell you why you should care about the distinction that I am drawing (which basically requires an overview of where I am going with a huge topic) then you hit me going the other way when I try to give an overview of where I'm going.
OK, I agree it might be somewhat contradictory.
I think there are two problems:
- You're covering a large and abstract scope, which lends itself to glossing over important details and prerequisites (such as clarifying what you mean by the various kinds of goals).
- You don't give us many reasons for paying attention to your approach in particular - will it provide us with new insight? Why is that way of dividing "ways to think about goals" better than another? Is this post supposed to be the basic details on which you'll build later, or an overview the details of which you'll fill in later?
↑ comment by Emile · 2010-11-02T20:21:24.816Z · LW(p) · GW(p)
On Consciousness: Richard said it better than me; if you just said "an agent's ability to achieve personal goals in a wide range of environments," I don't think people would translate that in their minds as "consciousness". Contrast your definition with those given on Wikipedia.
comment by nhamann · 2010-11-01T22:46:01.769Z · LW(p) · GW(p)
> Wisdom doesn't focus on a small number of goals -- and needs to look at the longest term if it wishes to achieve a maximal number of goals.
For the purposes of an AI, I would just call this intelligence.
↑ comment by mwaser · 2010-11-02T00:23:19.007Z · LW(p) · GW(p)
Why? What is the value of removing a distinction that might just give you a handle on avoiding the most dangerous class of intelligences? If you're making it a requirement of intelligence to not focus on a small number of goals, you are thereby insisting that a paper-clip maximizer is not intelligent. Yet, by most definitions, it certainly is intelligent. Redefining intelligence is not a helpful way to go about solving the problem.
↑ comment by nhamann · 2010-11-02T15:03:49.846Z · LW(p) · GW(p)
> Why? What is the value of removing a distinction that might just give you a handle on avoiding the most dangerous class of intelligences?
If an "intelligence" focuses on short term goals to the detriment of long term goals, then that would appear to be not very intelligent at all.
But this is arguing over semantics, so I'm going to stop here. I suppose you can define words any way you like, but I question the utility of those definitions. I suppose I'll have to wait to see what you're drawing the distinction for.