Bounded rationality abounds in models, not explicitly defined

post by Stuart_Armstrong · 2018-12-11T19:34:17.476Z · LW · GW · 9 comments

Last night, I did not file a patent for a cure for all forms of cancer, even though it’s probably possible to figure such a cure out from basic physics and maybe a download of easily available biology research papers.

Can we then conclude that I don’t want cancer to be cured – or, alternatively, that I am pathologically modest and shy, and thus don’t want the money and fame that would accrue?

No. The correct and obvious answer is that I am boundedly rational. And though an unboundedly rational agent – and maybe a superintelligence – could figure out a cure for cancer from first principles, poor limited me certainly can’t.

Modelling bounded rationality is tricky, and it is often accomplished by artificially limiting the action set/action space. Many economic models use revealed preferences, and feature agents that are assumed to be fully rational, but who are restricted to choosing between a tiny set of possible goods or lotteries. They don’t have the options of developing new technologies, rousing the population to rebellion, going online and fishing around for functional substitutes, founding new political movements, begging, befriending people who already have the desired goods, setting up GoFundMe pages, and so on.
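As a sketch of how such a model works (the action names and utility numbers here are invented for illustration), a "fully rational" revealed-preference agent simply maximises utility over whatever action set the modeller wrote down, so the model cannot distinguish "irrational" from "never in the set":

```python
def rational_choice(action_set, utility):
    """Pick the utility-maximising action from the given set."""
    return max(action_set, key=utility)

# The modeller's restricted action set: choose one of three goods.
utilities = {"apples": 3.0, "bread": 5.0, "coffee": 4.0}
restricted = ["apples", "bread", "coffee"]

print(rational_choice(restricted, utilities.get))  # bread

# The same "rational" agent looks very different once the set grows:
# options like setting up a GoFundMe page never appear in the model,
# so the restricted model silently rules them out.
utilities.update({"gofundme_page": 9.0, "befriend_owner": 7.0})
expanded = restricted + ["gofundme_page", "befriend_owner"]

print(rational_choice(expanded, utilities.get))  # gofundme_page
```

The restriction does all the work: the "rationality" of the agent is trivial once the modeller has decided which actions exist.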

There’s nothing wrong with modelling bounded rationality via action set restriction, as long as we’re aware of what we’re doing. In particular, we can’t naively conclude that, because such a model fits with observation, humans actually are fully rational agents. Though economists are right that humans are more rational than we might naively suppose, thinking of us as rational, or “mostly rational”, is a colossally erroneous way of thinking. In terms of achieving our goals, compared with a rational agent, we are barely above agents acting randomly.

Another problem with using small action sets is that it may lead us to think that an AI might be similarly restricted. That is unlikely to be the case; an intelligent robot walking around would certainly have access to actions that no human would, and possibly ones we couldn’t easily imagine.

Finally, though action set reduction can work well in toy models, it is wrong about the world and about humans. So as we make more and more sophisticated models, there will come a time when we have to discard it and tackle head-on the difficult issue of defining bounded rationality properly. And it’s mainly for this last point that I’m writing this post: we’ll never see the necessity of better ways of defining bounded rationality unless we realise that modelling it via action set restriction is a) common, b) useful, and c) wrong.

9 comments

Comments sorted by top scores.

comment by DanielFilan · 2018-12-11T20:25:06.433Z · LW(p) · GW(p)

I think that I'm more optimistic about action set restriction than you are. In particular, I view the available action set as a fact about what actions the human is considering and choosing between, rather than a statement of what things are physically possible for the human to do. In this sense, action set restriction seems to me to be a vital part of the story of human bounded rationality, although clearly not the entire story (since we need to know why the action set is restricted in the way that it is).

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2018-12-11T22:42:13.442Z · LW(p) · GW(p)

I agree it's part of the story, but only a part. And real humans don't act as if there were a set of n actions, all of which they could consider with equal ease. Sometimes humans have much smaller action sets, sometimes they can produce completely unexpected actions, and most of the time we have a pretty small set of obvious actions and a much larger set of potential actions we might be able to think up at the cost of some effort.

Replies from: DanielFilan
comment by DanielFilan · 2018-12-12T21:34:28.413Z · LW(p) · GW(p)

I guess I like the hierarchical planning-type view that our 'available action sets' can vary in time, and that one of them can be 'try to think of more possible actions'. Of course, not only do you need to specify the hierarchical structure here, you also need to model the dynamics of action discovery, which is a pretty daunting task.
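A toy version of that idea (everything here, including the names, estimates, and costs, is made up for illustration) treats "try to think of more possible actions" as one more option, chosen whenever the agent's estimate of an undiscovered action's value beats the best visible one net of thinking cost:

```python
def choose(visible, utility, latent_estimate, think_cost):
    """One decision step for a boundedly rational agent: take the best
    currently-visible action, or pay think_cost to search for more
    possible actions when that looks more promising."""
    best_visible = max(visible, key=utility.get)
    if latent_estimate - think_cost > utility[best_visible]:
        return "search_for_more_actions"
    return best_visible

utility = {"ask_friend": 4.0, "do_nothing": 0.0}
visible = ["ask_friend", "do_nothing"]

# Cheap, promising search: worth expanding the action set first.
print(choose(visible, utility, latent_estimate=10.0, think_cost=2.0))  # search_for_more_actions

# Unpromising search: just take the best obvious action.
print(choose(visible, utility, latent_estimate=5.0, think_cost=2.0))   # ask_friend
```

The daunting part is exactly what this sketch assumes away: where `latent_estimate` comes from, i.e. the dynamics of action discovery.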

comment by avturchin · 2018-12-12T00:04:54.248Z · LW(p) · GW(p)

What could be a better measure of bounded rationality? The Kolmogorov complexity of the solution? Or the number of computations made to reach the answer?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2018-12-12T00:07:36.274Z · LW(p) · GW(p)

If we want to apply it to humans, we'd need something much more complicated than that: something that uses some measure of how complex actions seem to humans, and takes into account how and when we search for alternative solutions. There's a reason most models don't use bounded rationality; it ain't simple.

comment by Jan_Kulveit · 2018-12-13T00:06:30.041Z · LW(p) · GW(p)

A good way, I would almost say the right way, to do bounded rationality is information-theoretic bounded rationality. There is a post about it in the works...
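For readers unfamiliar with the idea: in the usual information-theoretic setup (e.g. Ortega and Braun's formulation), the agent maximises expected utility minus an information cost for deviating from a prior policy, E[U] - (1/beta) * KL(p || p0), and the optimal policy is the softmax p(a) proportional to p0(a) * exp(beta * U(a)). A minimal sketch, with made-up actions and utilities:

```python
import math

def boltzmann_policy(utilities, prior, beta):
    """Optimal policy under an information-processing bound: softmax of
    utility around the prior, with inverse temperature beta.
    beta -> 0 leaves the agent at its prior (no processing at all);
    beta -> infinity recovers the fully rational maximiser."""
    weights = {a: prior[a] * math.exp(beta * u) for a, u in utilities.items()}
    z = sum(weights.values())
    return {a: w / z for a, w in weights.items()}

utilities = {"cure_cancer": 100.0, "write_blog_post": 5.0}
prior = {"cure_cancer": 0.5, "write_blog_post": 0.5}

print(boltzmann_policy(utilities, prior, beta=0.0))  # uniform: too bounded to use the utilities
print(boltzmann_policy(utilities, prior, beta=1.0))  # almost all mass on "cure_cancer"
```

The single parameter `beta` is what makes this an explicit definition of bounded rationality rather than an ad hoc restriction of the action set.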

comment by Pattern · 2018-12-11T20:47:33.043Z · LW(p) · GW(p)

Replies from: TheWakalix
comment by TheWakalix · 2018-12-11T22:27:53.554Z · LW(p) · GW(p)

I don't think you've understood this article if that's your response. The point of the article is that real human beings can in fact set up GoFundMe pages, and many more things, but economic models rarely include all these options. It is only through restricting the options to be considered that we can model unboundedly rational agents. Stuart Armstrong is trying to raise awareness of the limitations of restricted-option models.

(I'm not saying that to be rude, but because I think people can benefit from considering the possibility "I have completely misunderstood what this person is trying to tell me", and responses like yours are mostly made only by people who have completely misunderstood. There's always the possibility that I'm the one who has completely misunderstood; if so, I'd be glad to understand the intended meaning of your post, which I am not seeing.)

Replies from: Pattern
comment by Pattern · 2018-12-25T18:15:31.772Z · LW(p) · GW(p)

That makes sense. Thank you for your brief summary.