What is Wisdom?

post by ig0r · 2017-06-29T21:44:07.117Z · LW · GW · Legacy · 10 comments

Cross-posted on my blog: http://garybasin.com/what-is-wisdom/

What could go wrong if we develop technology to significantly amplify the intelligence of human minds? Intelligence is tricky to understand, and I get confused when comparing it to the related concepts of wisdom and rationality. I'd like to draw clearer distinctions between them. In a nutshell, rationality is the tendency to apply one's capacity for intelligence, whereas wisdom is embodied knowledge of human behavioral patterns, specifically their failure modes.
 
The relationship between rationality and intelligence seems better understood. My favorite exposition is in the excellent What Intelligence Tests Miss (good summary on LW). Of course, LessWrong itself is partially devoted to understanding this distinction, and CFAR was built to see if we can isolate and train rationality (as opposed to intelligence). Intelligence is typically viewed as the capacity to perform the relevant moves -- explicit reasoning, analogical application of past experiences, and avoiding biased heuristics of thought -- when presented with a well-formed problem. In practice, the hard part of taking advantage of intelligence is noticing that one is facing a situation where it can be explicitly applied. One can perform well when formally posed a problem, as on an IQ or SAT test, yet still behave foolishly in the real world, where problems are not clearly structured and labeled. A colloquialism that approximates this dynamic is the distinction between "book" and "street" smarts. Thus, to be rational requires not only some capacity for intelligence but, more importantly, the habit of identifying when and where to apply it in the wild.
 
How does wisdom fit into this? Informally, wisdom refers to the ability to think and act with sound judgment and common sense, often developed through a diversity of life experiences. We tend to look to the aged members of society as a font of wisdom, rather than to those with merely a large raw capacity for reasoning (intelligence). This corresponds with the heuristic of listening to your elders even when their advice doesn't obviously make sense. Wisdom is often associated with conservatism and functions as a regulatory mechanism for societal change. The young and clever upstart has the energy and open-mindedness to create new technology and push for change, while the old and wise have seen similar attempts fail enough times to raise a note of caution. The intelligent (and rational) are not more careless than the wise but seem to have more blind spots -- perhaps as a result of having seen fewer well-laid plans fail in unexpected ways. To anticipate failure -- to predict the future -- we rely on models. Ideally, we deduce from known laws; this is possible in the physical sciences. In messier and more complex systems, like human interactions, we are forced to rely primarily on experience from analogous situations (inductive and abductive reasoning). It is no surprise that the hardest failures to predict relate to how humans will act -- politics, not rocket science.
 
Looking through the literature on measuring wisdom (1, 2, 3), one major commonality is the emphasis on modeling psychological dynamics: intrapersonal (knowing thyself) and interpersonal (making sense of interactions with, and between, other humans). Proficiency in these domains seems to become possible only through experience (specifically, exposure to extremes) interacting with other humans, and through introspecting, or reflecting, on that experience. In contrast, a foundation in the physical sciences and mathematics seems learnable through text, thought, exercises, and experiments that involve no significant interpersonal dynamics. In a sense, we can say that proficiency in the "hard" sciences is intelligence-constrained, whereas proficiency in predicting and interacting with humans is constrained by a lack of diverse personal experience data and of the ability to act upon heuristics extracted from it.
 
This can be understood as a difference in modelability -- the extent to which we can formalize useful (predictive) models of the system. With mathematics and the physical sciences -- at least when applied to sufficiently simplified slices of reality -- we are able to constrain non-determinism into a probabilistic model with well-behaved errors. Modeling humans, on the other hand, presents us with uncertainty of a kind that we struggle to reduce (see: the struggle of the social sciences to successfully science). Even residing in a deterministic universe amenable to reductionism, and armed with excellent models of sub-atomic interactions, we are unable to build the machinery necessary to predict the behavior of human beings. There are too many moving parts for a supercomputer, let alone the highly constrained working memory of a human brain, to make useful predictions by analyzing the interactions of the component parts. Yet the human brain has evolved to be quite good at modeling itself and other humans -- we are social animals, after all. We perform this feat by observing behavior and automatically chunking it into categories and schemas to be recalled in future situations that appear similar enough. Unfortunately, we have not yet found a shortcut for developing this repository of experiences and the heuristics derived from it. This hard-to-replicate thing is what we tend to call wisdom.
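
One way to make the modelability contrast concrete (a toy illustration of my own, using the chaotic logistic map as a stand-in for a hard-to-model system): even when the governing rule is known exactly, a billionth-sized error in the initial state destroys long-run prediction -- errors here are anything but well-behaved.

```python
# Toy contrast with "well-behaved errors": in a chaotic system, a tiny
# measurement error grows roughly exponentially until it saturates at
# order 1, even though the rule itself is simple and fully known.

x, y = 0.2, 0.2 + 1e-9                         # two nearly identical states
for step in range(1, 41):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)    # chaotic logistic map, r = 4
    if step % 10 == 0:
        print(f"step {step:2d}: divergence = {abs(x - y):.3g}")
# By ~step 40 the two trajectories differ at order 1: long-run prediction
# fails despite a perfect model of the dynamics.
```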
 
The weak relationship between intelligence (or rationality) and wisdom should make us wary of the consequences of intelligence amplification. Increasing our capacity for intelligence and rationality without a corresponding increase in wisdom -- which appears constrained by experience and the associated reflection-based learning -- may be dangerous. Amplified intelligence allows us to make better predictions of the physical world, which can be leveraged to build more powerful systems and technologies: nukes in the 20th century, more powerful AI in the 21st. However, if we fail to simultaneously increase our wisdom, we risk unleashing capabilities onto humanity that are quite safe in theory but lead to disaster in practice when they come into contact with human society. We need more foresight into the disastrous failure modes of interactions between humans and their tools. How do we amplify wisdom?

10 comments


comment by Erfeyah · 2017-07-02T10:38:13.221Z · LW(p) · GW(p)

Unfortunately, we have not yet found a shortcut for developing this repository of experiences and the corresponding heuristics derived from it.

There is also the possibility that wisdom is achieved not through the intellect but through experience. Indeed, this is what all the wisdom texts have said for thousands and thousands of years. That is why they use phrases like "He who tastes knows", "To ask about experience is absurd, because experience is the annihilation of speech", or "The way is through knowledge and practice, not through intellect and talk". I am not making a rational argument for the reality or not of this perspective, but why would we use a different word (wisdom) if it is still rationality? And if you agree (and I think you do) that it is not rationality, then your question "How do we amplify wisdom?" has been answered by the wise. People first study for years the teaching stories and other writings of the wisdom traditions in their modern form. Then some may, presumably, find a teacher who guides them through experiences to go further.

Increasing our capacity for intelligence and rationality without a corresponding increase in wisdom [...] may be dangerous.

Exactly. It is my estimation as well that this is where we are. There are attempts to create a rational bridge to the wisdom traditions, but the rationalists would have to stop calling anything that is not rationality 'magic' in a derogatory manner and study the material.

comment by arisen · 2017-07-03T01:37:18.671Z · LW(p) · GW(p)

When you value science for its predictive ability, rationality automatically becomes a burden. If you have a machine that can predict the lottery, I don't really want to know how it works; it's enough that you know how it works and that I trust you. "Civilization advances by extending the number of important operations which we can perform without thinking about them." Wisdom is just the window dressing of science, working by abductive reasoning to inspire optimism in its users. Marketing of honest goods! :)

To be precise, wisdom is a regulator of the Landauer thermodynamic costs of biological computation. In non-equilibrium systems such as the human body, wisdom would raise the temperature of the body to favour winning a computational battle with a virus, producing more complexity than entropy in replication.
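
(For reference, the Landauer bound being invoked: erasing one bit of information at temperature $T$ dissipates at least

$$E_{\min} = k_B T \ln 2 \approx 3.0 \times 10^{-21}\,\mathrm{J} \quad \text{at body temperature, } T \approx 310\,\mathrm{K},$$

so the minimal per-bit cost of computation does rise linearly with temperature; the link from fever to "winning a computational battle" is the commenter's own speculation.)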

comment by turchin · 2017-06-30T13:15:55.067Z · LW(p) · GW(p)

I would agree that predictive ability consists of two parts. One is a set of rules, and the other we could call a "trained neural net". This is also similar to Kahneman's System 1 and System 2.

Most people outside LW tend to associate the word "rationality" with a set of rules and logical reasoning based on it.

This division also corresponds to the two approaches in AI creation: GOFAI and neural-net-based. Currently the NN approach is winning, partly because it allows gradual tweaking.

The only known source of wisdom is an old man, preferably a scientist, who has an enormous amount of experience (but doesn't yet have Alzheimer's).

Replies from: username2
comment by username2 · 2017-07-02T08:38:58.758Z · LW(p) · GW(p)

Currently the NN approach is getting a lot of attention because of the machine learning fad. There have yet to be NN-only architectures that approach the level of generality of GOFAI approaches like OpenCog, however.

Replies from: turchin
comment by turchin · 2017-07-02T11:39:15.721Z · LW(p) · GW(p)

I can imagine two ways to create a universal mind using NNs.

One is to use NNs at the low level, e.g. for image recognition, with something like a rule-based inference engine on the upper level, which uses the data from the NNs for decision making. I think that NN-based self-driving cars are built this way.
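
As a toy sketch of that layering (my own illustration; the names and the stubbed perception model are invented, not from any real self-driving stack):

```python
# Hypothetical sketch of the layered design described above: a low-level
# perceptual NN turns raw input into symbolic facts, and a rule-based
# engine on top maps those facts to a decision.

def perception_nn(camera_frame):
    """Stand-in for a trained network; a real system would run a CNN here."""
    return [{"label": "pedestrian", "distance_m": 7.0}]

# Rules are checked in order; the first whose condition fires wins.
RULES = [
    (lambda facts: any(f["label"] == "pedestrian" and f["distance_m"] < 10
                       for f in facts), "brake"),
    (lambda facts: True, "cruise"),   # fallback rule
]

def decide(camera_frame):
    facts = perception_nn(camera_frame)   # NN layer: raw input -> symbols
    for condition, action in RULES:       # rule layer: symbols -> action
        if condition(facts):
            return action

print(decide(camera_frame=None))          # -> brake
```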

Another is to create a large-scale functional model of the human brain, where each block consists of a NN, but the blocks are rather independent of each other and only exchange information. There are at least 50 Brodmann areas in the human brain, with different anatomy.
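
And a toy sketch of the blocks-exchanging-messages idea (block names and message format invented for illustration; each plain function stands in for a separate trained network):

```python
# Independent "blocks" that only communicate by exchanging messages,
# loosely analogous to the per-area decomposition described above.

def visual_block(msgs):
    return ["edge-map"] if "frame" in msgs else []

def language_block(msgs):
    return ["caption"] if "edge-map" in msgs else []

BLOCKS = {"V1": visual_block, "Broca": language_block}
WIRING = {"V1": ["Broca"], "Broca": []}   # which blocks hear which outputs

def tick(inboxes):
    """Run every block once on its inbox and route outputs to listeners."""
    new_inboxes = {name: [] for name in BLOCKS}
    for name, block in BLOCKS.items():
        for message in block(inboxes[name]):
            for listener in WIRING[name]:
                new_inboxes[listener].append(message)
    return new_inboxes

state = {"V1": ["frame"], "Broca": []}
print(tick(state))   # {'V1': [], 'Broca': ['edge-map']} -- info flowed V1 -> Broca
```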

I am sure we will see attempts to create universal robotic brains using similar approaches.

Added: The third way is to create a very large NN able to predict the whole behaviour of a person based on a complete recording of his previous activities. It would behave as if it were reasoning and as if it understood. As a result, we would have something like a rather primitive and noisy upload, very resource-hungry. It could be useful for creating self-driving cars that behave exactly like human drivers, but they would fail in non-standard situations. The size of the human learning dataset is around 90,000 hours, or something like 100 TB of video, which is 100,000 times the size of the ImageNet labelled database. https://www.fastcompany.com/1733627/mit-scientist-captures-90000-hours-video-his-sons-first-words-graphs-it The difficulty of training a NN grows non-linearly with dataset size.
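
A quick sanity check of those figures (the per-hour bitrate below is derived from the quoted totals rather than stated anywhere in the comment):

```python
# Back-of-envelope check of the dataset figures quoted above.
hours = 90_000               # recorded hours (per the linked MIT project)
total_gb = 100_000           # "something like 100 TB" of video

print(total_gb / hours)      # ~1.1 GB per hour of video
print(total_gb / 100_000)    # the claimed 100,000x ratio would put the
                             # ImageNet labelled database at ~1 GB
```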

Replies from: username2
comment by username2 · 2017-07-02T13:08:44.649Z · LW(p) · GW(p)

The first one you mention IS the OpenCog model ;) The second isn't really an architecture.

There are ideas for AGI based on pure NN primitives -- such as what DeepMind is working towards -- but so far they are just ideas and napkin sketches. The only working general intelligence codebases are GOFAI to varying degrees at this time.

Replies from: turchin
comment by turchin · 2017-07-02T13:18:50.907Z · LW(p) · GW(p)

Personal phenomenological observation: when I write a text, I feel as if some generative network creates a text stream similar to everything I have read before, so it works like the RNN described by Karpathy. But above it there is a reasoning engine, which checks whether there is some meaning in the generated stream.
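
A toy rendering of that generate-then-check loop (both parts are crude stubs of my own, not Karpathy's RNN or a real reasoning engine):

```python
import random

def generator(prompt):
    """Stub generative stream: proposes a candidate continuation."""
    return random.choice([
        "wisdom is learned from experience",
        "wisdom wisdom the the the",           # degenerate stream
        "experience teaches what rules cannot",
    ])

def makes_sense(text):
    """Stub reasoning engine: here, just reject repeated words."""
    words = text.split()
    return len(set(words)) == len(words)

def write_sentence(prompt, max_tries=10):
    for _ in range(max_tries):
        draft = generator(prompt)
        if makes_sense(draft):     # only drafts that pass the check emerge
            return draft
    return None

print(write_sentence("What is wisdom?"))
```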

comment by tadasdatys · 2017-06-30T06:51:02.507Z · LW(p) · GW(p)

Your division of predictive ability into intelligence and wisdom is very artificial. People are not magic, they're just chaotic. They are not fundamentally different from other complex and chaotic systems. There is no reason to expect that raising general predictive ability wouldn't help predict them.

Replies from: ig0r
comment by ig0r · 2017-06-30T15:49:43.113Z · LW(p) · GW(p)

I agree that raising general predictive ability would also tend to increase wisdom. I think my main point, which I probably didn't sufficiently highlight, is that wisdom is bottlenecked on data, more so than other knowledge we tend to collect, because of the underlying complexity of the thing we are trying to predict (human behavior). It also seems to require more abstraction and abduction than other learning.

Replies from: tadasdatys
comment by tadasdatys · 2017-06-30T20:17:43.663Z · LW(p) · GW(p)

If the superintelligent agent lacked data, he would realize this and go collect some. The situation is only dangerous if the agent takes drastic action without evaluating his own accuracy. But if the agent is too stupid to evaluate his own accuracy, he's probably too stupid to implement the drastic action in the first place. And if the agent is able to evaluate himself but ignores the result, that's more a problem of evil than a lack of wisdom.