What is control theory, and why do you need to know about it?

post by Richard_Kennaway · 2009-04-28T09:25:48.139Z · LW · GW · Legacy · 48 comments

Contents

  1. Alien Space Bats have abducted you.
  2. Two descriptions of the same thing that both make sense but don't fit together.
  3. Why it matters.
  4. Conclusion.
  5. Things I have not yet spoken of.
  6. WARNING: Autonomous device

This is long, but it's the shortest length I could cut from the material and have a complete thought.

1. Alien Space Bats have abducted you.

In the spirit of this posting, I shall describe a magical power that some devices have. They have an intention, and certain means available to achieve that intention. They succeed in doing so, despite knowing almost nothing about the world outside. If you push on them, they push back. Their magic is not invincible: if you push hard enough, you may overwhelm them. But within their limits, they will push back against anything that would deflect them from their goal. And yet, they are not even aware that anything is opposing them. Nor do they act passively, like a nail holding something down, but instead they draw upon energy sources to actively apply whatever force is required. They do not know you are there, but they will struggle against you with all of their strength, precisely countering whatever you do. It seems that they have a sliver of that Ultimate Power of shaping reality, despite their almost complete ignorance of that reality. Just a sliver, not a whole beam, for their goals are generally simple and limited ones. But they pursue them relentlessly, and they absolutely will not stop until they are dead.

You look inside one of these devices to see how it works, and imagine yourself doing the same task...

Alien Space Bats have abducted you. You find yourself in a sealed cell, featureless but for two devices on the wall. One seems to be some sort of meter with an unbreakable cover, the needle of which wanders over a scale marked off in units, but without any indication of what, if anything, it is measuring. There is a red blob at one point on the scale. The other device is a knob next to the meter, that you can turn. If you twiddle the knob at random, it seems to have some effect on the needle, but there is no fixed relationship. As you play with it, you realise that you very much want the needle to point to the red dot. Nothing else matters to you. Probably the ASBs' doing. But you do not know what moves the needle, and you do not know what turning the knob actually does. You know nothing of what lies outside the cell. There is only the needle, the red dot, and the knob. To make matters worse, the red dot also jumps along the scale from time to time, in no particular pattern, and nothing you do seems to have any effect on it. You don't know why, only that wherever it moves, you must keep the needle aligned with it.

Solve this problem.

That is what it is like, to be one of these magical devices. They are actually commonplace: you can find them everywhere.

They are the thermostat that keeps your home at a constant temperature, the cruise control that keeps your car at a constant speed, the power supply that provides a constant voltage to your computer's circuit boards. The magical thing is how little they need to know to perform their tasks. They have just the needle, the mark on the scale, the knob, and hardwired into them, a rule for how to turn the knob based only on what they see the needle and the red dot do. They do not need to sense the disturbing forces, or predict the effects of their actions, or learn. The thermostat does not know when the sun comes out. The cruise control does not know the gradient of the road. The power supply does not know why or when the mains voltage or the current demand will change. They model nothing, they predict nothing, they learn nothing. They do not know what they are doing. But they work.

These things are called control systems. A control system is a device for keeping a variable at a specified value, regardless of disturbing forces in its environment that would otherwise change it. It has two inputs, called the perception and the reference, and one output, called the output or the action. The output depends only on the perception and the reference (and possibly their past histories, integrals, or derivatives) and is such as to always tend to bring the perception closer to the reference.
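
For concreteness, here is a minimal sketch of such a device in code: a toy proportional controller driving a toy first-order plant. All the names and numbers are illustrative, not taken from any real system; the point is only that the rule sees the perception and the reference, and nothing else.

```python
import random

def control_step(perception, reference, gain=2.0):
    # The rule: output depends only on the error between reference and perception.
    return gain * (reference - perception)

variable = 0.0       # the controlled quantity (the "needle")
reference = 10.0     # where we want it (the "red dot")

for t in range(200):
    disturbance = random.uniform(-1.0, 1.0)      # unseen by the controller
    output = control_step(variable, reference)
    variable += 0.1 * (output + disturbance)     # toy first-order plant

print(round(variable, 2))   # ends up close to the reference despite the disturbance
```

The disturbance never appears anywhere in `control_step`; the controller simply pushes back against whatever error it currently sees.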

Why is this important for LW readers?

2. Two descriptions of the same thing that both make sense but don't fit together.

I shall come to that via an autobiographical detour. In the mid-90's, I came across William Powers' book, Behavior: the Control of Perception, in which he set out an analysis of human behaviour in terms of control theory. (Powers' profession was -- he is retired now -- control engineering.) It made sense to me, and it made nonsense of every other approach to psychology. He gave it the name of Perceptual Control Theory, or PCT, and the title of his book expresses the fundamental viewpoint: all of the behaviour of an organism is the output of control systems, and is performed with the purpose of controlling perceptions at desired reference values. Behaviour is the control of perception.

This is 180 degrees around from the behavioural stimulus-response view, in which you apply a stimulus (a perception) to the organism, and that causes it to emit a response (a behaviour). I shall come back to why this is wrong below. But there is no doubt that it is wrong. Completely, totally wrong. To this audience I can say, as wrong as theism. That wrong. Cognitive psychology just adds layers of processing between stimulus and response, and fares little better.

I made a simulation of a walking robot whose control systems were designed according to the principles of PCT, and it works. It stands up, walks over uneven terrain, and navigates to food particles. (My earliest simulation is still on the web in the form of this Java applet.) It resists a simulated wind, despite having no way to perceive it. It cannot see, sensing the direction of food only by the differential scent signals from its antennae. It walks on uneven terrain, despite having no perception of the ground other than the positions of its feet relative to its body.

And then, a year or two ago, I came upon Overcoming Bias, and before that, Eliezer's article on Bayes' theorem. (Anyone who has not read that article should do so: besides being essential background to OB and LW, it's a good read, and when you have studied it, you will intuitively know why a positive result on a screening test for a rare condition may not be telling you very much.) Bayes' theorem itself is a perfectly sound piece of mathematics, and has practical applications in those cases where you actually have the necessary numbers, such as in that example of screening tests.

But it was being put forward as something more than that, as a fundamental principle of reasoning, even when you don't have the numbers. Bayes' Theorem as the foundation of rationality, entangling one's brain with the real world, allowing the probability mass of one's beliefs to be pushed by the evidence, acting to funnel the world through a desired tunnel in configuration space. And it was presented as even more than a technique to be learned and applied well or badly, but as the essence of all successful action. Rationality not only wins, it wins by Bayescraft. Bayescraft is the single essence of any method of pushing probability mass into sharp peaks. This all made sense too.

But the two world-views did not seem to fit together. Consider the humble room thermostat, which keeps the temperature within a narrow range by turning the heating on and off (or in warmer climes, the air conditioning), and consider everything that it does not do while doing the single thing that it does:

  • It has no model of its surroundings.
  • It has no model of itself.
  • It makes no predictions.
  • It has no priors.
  • It has no utility function.

And yet despite that, it has a sliver of the Ultimate Power, the ability to funnel the world through its desired tunnel in configuration space. In short, control systems win while being entirely arational. How is this possible?

If you look up subjects such as "optimal control", "adaptive control", or "modern control theory", you will certainly find a lot of work on using Bayesian methods to design control systems. However, the fact remains that the majority of all installed control systems are nothing but manually tuned PID controllers. And I have never seen, although I have looked for it, any analysis of general control systems in Bayesian terms. (Except for one author, but despite having a mathematical background, I couldn't make head nor tail of what he was saying. I don't think it's me, because despite his being an eminent person in the field of "intelligent control", almost no-one cites his work.) So much for modern control theory. You can design things that way, but you usually don't have to, and it takes a lot more mathematics and computing power. I only mention it because anyone googling "Bayes" and "control theory" will find all that and may mistake it for the whole subject.
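
For readers who have not met the term: a PID controller is just the proportional idea above plus an integral and a derivative term computed from the same error signal. A minimal textbook-style sketch, with illustrative tuning constants that would in practice be set by hand:

```python
class PID:
    """Textbook PID: output from the error, its running integral, and its rate of change."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, perception, reference):
        error = reference - perception
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

The "manual tuning" mentioned above is largely the business of choosing kp, ki, and kd so that the loop settles quickly without overshooting or oscillating.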

3. Why it matters.

If this was only about cruise controls and room thermostats, it would just be a minor conundrum. But it is also about people, and all living organisms. The Alien Space Bat Prison Cell describes us just as much as it describes a thermostat. We have a large array of meter needles, red dots, and knobs on the walls of our cell, but it remains the case that we are held inside an unbreakable prison exactly the same shape as ourselves. We are brains in vats, the vat of our own body. No matter how we imagine we are reaching out into the world to perceive it directly, our perceptions are all just neural signals. We have reasons to think there is a world out there that causes these perceptions (and I am not seeking to cast doubt on that), but there is no direct access. All our perceptions enter us as neural signals. Our actions, too, are more neural signals, directed outwards -- we think -- to move our muscles. We can never dig our way out of the cell. All that does is make a bigger cell, perhaps with more meters and knobs.

We do pretty well at controlling some of those needles, without having received the grace of Bayes. When you steer your car, how do you keep it directed along the intended path? By seeing through the windscreen how it is positioned, and doing whatever is necessary with the steering wheel in order to see what you want to see. You cannot do it if the windows are blacked out (no perception), if the steering linkage is broken (no action), or if you do not care where the car goes (no reference). But you can do it even if you do not know about the cross-wind, or the misadjusted brake dragging on one of the wheels, or the changing balance of the car according to where passengers are sitting. It would not help if you did. All you need is to see the actual state of affairs, and know what you want to see, and know how to use the steering wheel to get the view closer to the view you want. You don't need to know much about that last. Most people pick it up at once in their first driving lesson, and practice merely refines their control.

Consider stimulus/response again. You can't sense the cross-wind from inside a car, yet the angle of the steering wheel will always be just enough to counteract the cross-wind. The correlation between the two will be very high. A simple, measurable analogue of the task is easily carried out on a computer. There is a mark on the screen that moves left and right, which the subject must keep close to a static mark. The position of the moving mark is simply the sum of the mouse position and a randomly drifting disturbance calculated by the program. So long as the disturbance is not too large and does not vary too rapidly, it is easy to keep the two marks fairly well aligned. The correlation between the mouse position (the subject's action) and the disturbance (which the subject cannot see) is typically around -0.99. (I just tried it and scored -0.987.) On the other hand, the correlation between mouse position and mark position (the subject's perception) will be close to zero.
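
A rough reconstruction of that demonstration in code (assumed details: the human subject is replaced by a simple proportional tracker, the static mark sits at zero, and the numbers are made up) shows the same pattern of correlations:

```python
import random

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

mouse, disturbance = 0.0, 0.0
mouse_log, dist_log, mark_log = [], [], []
for t in range(20000):
    disturbance += random.gauss(0, 0.02)   # slow random drift, invisible to the tracker
    mark = mouse + disturbance             # what the tracker actually sees
    mouse += -0.1 * mark                   # act so as to pull the mark back to the target at 0
    mouse_log.append(mouse)
    dist_log.append(disturbance)
    mark_log.append(mark)

print(correlation(mouse_log, dist_log))    # comes out close to -1
print(correlation(mouse_log, mark_log))    # comes out much closer to 0
```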

So in a control task, the "stimulus" -- the perception -- is uncorrelated with the "response" -- the behaviour. To put that in different terminology, the mutual information between them is close to zero. But the behaviour is highly correlated with something that the subject cannot perceive.

When driving a car, suppose you decide to change lanes. (Or, in the tracking task, suppose you decide to keep the moving mark one inch to the left of the static mark.) Suddenly you do something different with the steering wheel. Nothing about your perception changed, yet your actions changed, because a reference signal inside your head changed.

If you do not know that you are dealing with a control system, it will seem mysterious. You will apply stimuli and measure responses, and end up with statistical mush. Since everyone else does the same, you can excuse the situation by saying that people are terribly complicated and you can't expect more. A correlation of 0.6 is considered high in a psychology experiment, and 0.2 is considered publishable. Real answers go ping!! when you hit them, instead of slopping around like lumpy porridge. What is needed is to discover whether a control system is present, what it is controlling, and how.

There are ways of doing that, but this is enough for one posting.

4. Conclusion.

Conclusion of this posting, not my entire thoughts on the subject, not by a long way.

My questions to you are these.

Control systems win while being arational. Either explain this in terms of Bayescraft, or explain why there is no such explanation.

If, as is speculated, a living organism's brain is a collection of control systems, is Bayescraft no more related to its physical working than arithmetic is? Our brains can learn to do arithmetic, but arithmetic is not how our brains work. Likewise, we can learn Bayescraft, or some practical approximation to it, but do Bayesian processes have anything to do with the mechanism of brains?

Does Bayescraft necessarily have anything to do with the task of building a machine that ... can do something not to be discussed here yet?

5. Things I have not yet spoken of.

Whether the control system's designer, who put in the rule that tells it what output to emit given the perception and the reference, supplied the rationality that is the real source of its miraculous power.

How to discover the presence of a control system and discern its reference, even if its physical embodiment remains obscure.

How to control a perception even when you don't know how.

Hierarchical arrangements of control systems as a method of building more complex control systems.

Simple control systems win at their limited tasks while being arational. How much more is possible for arational systems built of control systems?

6. WARNING: Autonomous device

After those few thousand words of seriousness, a small dessert.

Exhibit A: A supposedly futuristic warning sign.

Exhibit B: A contemporary warning sign in an undergraduate control engineering lab: "WARNING: These devices may start moving without warning, even if they appear powered off, and can exert sudden and considerable forces. Exercise caution in their vicinity."

They say the same thing.

48 comments

comment by Steve_Rayhawk · 2009-04-29T03:27:18.383Z · LW(p) · GW(p)

I have not studied control theory, but I think a PID controller may be the Bayes-optimal controller if:

  • the system is a second-order linear system with constant coefficients,
  • the system is controllable,
  • all disturbances in the system are additive white noise forcing terms,
  • there is no noise in perception,
  • the cost functional is the integral of the square of the error,
  • the time horizon is infinite in both directions (no transients), and
  • the prior belief distribution over possible reference signals is the same as if the reference signal was a Brownian motion (which needs first-order control) plus an integral of a Brownian motion (which needs second-order control).

What makes it a full Bayesian decision problem is the prior belief distribution over possible reference signals. At each time, you don't know what the future reference signal is going to be, but you have a marginal posterior belief distribution over possible future reference signals given what the reference signal has been in the past. Part of this knowledge about possible future reference signals is represented in the state of the system you have been controlling, and part of it is represented in the state of the I element of the controller. You also don't know what the delayed effects of past disturbances will be, but you have a marginal posterior belief distribution over possible future delayed effects given what the perception signal has been in the past. Part of this knowledge is also represented in the system and in the I element. (Not all of your knowledge about possible future reference signals and possible future delayed effects of past disturbances is represented, only your knowledge about possible future differences between them.) This representation is related to sufficient statistics ("sufficiency is the property possessed by a statistic . . . when no other statistic which can be calculated from the same sample provides any additional information") and to updating of the parameter for a parametric family of belief distributions.

In a real engineering problem, the true belief about expected possible reference signals would be more specific than a belief of a random Brownian motion. But if a reference signal would not be improbable for Brownian motion, then a PID controller can still do well on that reference signal.

I think these conditions are sufficient but not necessary. If I knew control theory I would tell you more general conditions. If the cost functional has a term for the integral of the squared control signal, then a PID controller may not be optimal without added filters to keep the control signal from having infinite power.

Example 6.3-1 in Optimal Control and Estimation by Robert F. Stengel (1994 edition, pp. 540-541) is about PID controllers as optimal regulators in linear-quadratic-Gaussian control problems.

I see optimal control theory as the shared generalization of Bayesian decision networks and dynamic Bayesian networks in the continuous-time limit. (Dynamic Bayesian networks are Bayes nets which model how variables change over discretized time steps. When the time step size goes to zero and the variables are continuous, the limit is stochastic differential equations such as the equations of Brownian motion. When the time step size goes to zero and the variables are discrete, the limit is almost Uri Nodelman's continuous-time Bayesian networks. Bayesian decision networks are Bayes nets which represent a decision problem and contain decision nodes, utility nodes, and information arcs.)

Replies from: Steve_Rayhawk, Richard_Kennaway
comment by Steve_Rayhawk · 2009-04-30T02:00:28.460Z · LW(p) · GW(p)

(Not all of your knowledge about possible future reference signals and possible future delayed effects of past disturbances is represented, only your knowledge about possible future differences between them.)

So this isn't a sufficient statistic, it's only a sufficient-for-policy-implications statistic. Is there a name for that?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-06-27T17:32:26.836Z · LW(p) · GW(p)

All "sufficient" statistics are only "sufficient" for some particular set of policy or epistemic implications. You could always care about the number of 1 bits, if you're allowed to care about anything.

Replies from: Steve_Rayhawk
comment by Steve_Rayhawk · 2009-06-28T02:41:11.462Z · LW(p) · GW(p)

Then every "sufficient-for-policy-implications" statistic can become a "sufficient-for-implications-for-beliefs-about-the-future" statistic, under a coarsening of the sample space by some future-action-preserving and conditional-ratios-of-expected-payoff-differences-preserving equivalence relation?

(Would we expect deliberative thinking and memory to physically approximate such coarsenings, as linear controllers do?)

comment by Richard_Kennaway · 2009-04-29T23:30:37.624Z · LW(p) · GW(p)

Thank you for those references -- exactly the sort of thing I've been looking for.

comment by gjm · 2009-04-28T11:07:38.283Z · LW(p) · GW(p)

I like this perspective.

Control systems win while being arational. Either explain this in terms of Bayescraft, or explain why there is no such explanation.

This (from Richard's post) seems to me very much parallel to this (which I just made up):

Cricketers and baseball players win at ball-catching while knowing nothing about Newtonian mechanics, fluid dynamics, or solving differential equations. Either explain this in terms of physics, or explain why there is no such explanation.

Anyone who says anything close to "Cox's theorem; therefore you must make your decisions by making Bayesian calculations" is broken. But it could still be reasonable to say "However you make your decisions, the results should be as close as you can make them to those of an ideal reasoner doing Bayesian calculations on the information you have". I don't see any contradiction, or even any tension, here. As for an actual specific explanation that matches the facts, that would seem to need to be done afresh for every control system that works; for some cases (like our brains) the answers might be unmanageably complicated.

Do Bayesian processes have anything to do with the mechanism of brains?

In the same sense as differential equations have something to do with the mechanism of people catching balls: when brains function well at maintaining reasonable beliefs, on some level of abstraction they have to act at least a little bit like Bayesian systems. But there needn't be anything in the mechanisms that resembles the form (as opposed to the output) of the idealizations.

Does Bayescraft necessarily have anything to do with the task of building a machine that [...]

Since we might be able to do that by building a very big very low-level model of an entire human brain, without any understanding at all of what's going on, obviously in some sense the answer is no. But if you want to understand what you're doing -- well, how much physics do you need to know if you want to get a space probe to Neptune? My guess is that even if you do it by making something that you launch into space at random and that then goes looking heuristically for something that might be Neptune, the chances are you're going to want quite a lot of physics while you're designing it.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2009-04-28T13:14:17.850Z · LW(p) · GW(p)

The ball-catching example is interesting, as it's another control problem, and has been studied as such. The fielder must get to where the ball will land. The predictive method would be to look at the ball, estimate its trajectory, then go to where you predict it will come down. This will not be very effective, because you cannot estimate the trajectory well enough. Instead, one method that will work is to move so as to maintain the direction from yourself to the ball constant in both azimuth and elevation. This is a control task, akin to the cursor-tracking task I discussed in the posting. You just have to move faster or slower and vary your direction, in whatever way will keep the direction constant. (The reason this works is that if the direction is constant, the ball is moving directly towards you in the frame of reference that moves with you. Or directly away, but in that case you won't be able to run fast enough to catch it.)

Devise such a control model, put in some parameters, add the physics of flying balls, solve the differential equations, and compare the results to the performance of actual fielders, and you have explained it in terms of physics.
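
The geometric core of that strategy is easy to demonstrate in the simplest possible setting: a target moving at constant velocity in a plane, with gravity ignored, so this is only the principle behind the fielder's rule and not the full fly-ball problem. In the sketch below (all numbers illustrative), the pursuer never computes an interception point; it only turns so as to stop the perceived bearing from drifting, and ends up on a collision course:

```python
import math

dt = 0.01
tx, ty, tvx, tvy = 0.0, 50.0, 5.0, 0.0               # target: position and constant velocity
px, py, speed, heading = 0.0, 0.0, 8.0, math.pi / 2  # pursuer starts aimed straight at the target

prev_bearing = math.atan2(ty - py, tx - px)
closest = float("inf")
for _ in range(2000):
    tx += tvx * dt
    ty += tvy * dt
    bearing = math.atan2(ty - py, tx - px)            # perceived direction to the target
    bearing_rate = (bearing - prev_bearing) / dt
    prev_bearing = bearing
    heading += 3.0 * bearing_rate * dt                # turn so as to stop the bearing from drifting
    px += speed * math.cos(heading) * dt
    py += speed * math.sin(heading) * dt
    closest = min(closest, math.hypot(tx - px, ty - py))

print(round(closest, 2))   # closest approach: a tiny fraction of the initial 50 m separation
```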

How would Jeffreyssai analyse a PID loop?

Replies from: gjm, SilasBarta
comment by gjm · 2009-04-28T13:41:39.683Z · LW(p) · GW(p)

The ball-catching example is interesting, as it's another control problem [...]

That's why I chose it.

How would Jeffreyssai analyse a PID loop?

I decline to speculate on the internal workings of someone who is (1) fictional and (2) much cleverer, or at least better-trained in relevant areas, than me. But a generic Bayesian rationalist might say something like this:

"My goal is to have my beliefs track the range of possible futures. The mathematics of probabilistic inference founded by Bayes, Lagrange, Jaynes, etc., tells me that if I do this then the dynamics of my belief-updating must satisfy certain equations. Unfortunately, my brain is not fast enough, precise enough, or reliable enough to do that in real time, so I'd better look for some more tractable approximation that will produce results similar enough to those of updating according to the equations. Hmm, let's see ... scribble scribble ... hack hack ... scribble hack think scribble think scribble ... OK, it turns out that in this special case, I can do well enough by just keeping track of the expected value minus half the standard deviation, and (provided things change in roughly the way I expect them to) that quantity satisfies this nice simple differential equation, which I can approximate with a finite-difference equation; so, simplifying a bit, it turns out that I can do a decent job by updating my estimate like so. [At which point he has written down the guts of a PID controller.] Unfortunately, that only gives me a point estimate; fortunately, if all goes according to plan my optimal posteriors are all of roughly the same shape and if I really have to I can get a decent approximation to the other parameter I need by doing this... [He writes down the coefficients for another PID controller.] I've had to make some assumptions that amount to having a prior with zero probabilities all over the place, which is ugly. Perhaps there's some quantity I can keep track of that will stay close to zero as long as my model holds, but that has no reason to do so if the model's wrong. ... scribble scribble think scribble hack ... Well, it's not great, but if I also compute this and this, then while my underlying assumptions hold they should be very close to equal, so a lower bound on Pr(the model is broken) is such-and-such, so let's watch that as well."

Of course all the actual analysis is missing here. That would be because "a PID loop" can describe a vast range of different systems. And I'm assuming that our hypothetical rationalist knows enough about the relevant domain to be able to do the analysis, because otherwise your question seems a bit like asking how Jeffreyssai would do the biological research to know that he could take Pr(evolution) to be very close to 1. (Answer: He doesn't need to; other people have already done it.)

(I have the feeling that one of us is missing the other's point.)

comment by SilasBarta · 2009-04-28T21:13:36.058Z · LW(p) · GW(p)

RichardKennaway, very interesting post. I actually specialized in control theory in graduate school, but didn't finish the program. I must object to what you've said here, in that control theory most certainly does make extensive use of Bayesian inference, under the name of the Kalman filter.

The Kalman filter is a way of estimating the parameters of a system, given your observations and your knowledge of the system's dynamics. While it may not help you pick a good control input algorithm, and while the problems you listed there may not need such accurate estimation of the data, it is an integral part of finding out how much the system deviates from where you want it to be, and is used extensively in controls.
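
For concreteness, here is a minimal sketch of the scalar case (illustrative noise values, not from any particular application): the state drifts as a random walk, the measurements are noisy, and each update is literally a Bayesian combination of a Gaussian prior with a Gaussian likelihood:

```python
import random

q = 0.01    # process noise variance: how much the true state drifts each step
r = 0.25    # measurement noise variance

x_true = 0.0
x_est, p_est = 0.0, 1.0   # belief about the state: mean and variance

for t in range(200):
    # The world moves, and we take one noisy measurement.
    x_true += random.gauss(0, q ** 0.5)
    z = x_true + random.gauss(0, r ** 0.5)

    # Predict: last step's posterior, widened by the drift.
    p_pred = p_est + q
    # Update: Bayesian combination of the Gaussian prior with the Gaussian likelihood.
    k = p_pred / (p_pred + r)           # Kalman gain
    x_est = x_est + k * (z - x_est)
    p_est = (1 - k) * p_pred

print(round(x_true, 3), round(x_est, 3), round(p_est, 3))
```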

comment by Cyan · 2009-04-28T17:12:05.981Z · LW(p) · GW(p)

A couple of points. First, you've described only feedback control systems -- you've omitted control systems with feedforward components. Feedforward systems have another signal, the perturbation signal, in addition to the current state and target state signals. (Pure feedforward systems which have only the perturbation signal are also possible provided the process is extremely well-modelled -- which is to say, almost never.) Feedforward control systems see a lot of use in chemical engineering, where PID control may not be sufficient to satisfy the design specs. Information about a feedstream is extremely useful for keeping a chemical process at steady state.

Second, the ability of human eyes to track moving objects (while our own heads are also moving!) is a pure control problem with a solution implemented in neurons. Provided I understood correctly what Mimi Galiana taught me, I should point out that our object-tracking abilities aren't based on PID control -- they're based on a (consciously inaccessible but) explicit neural prediction circuit.

In short, the presented view is rather incomplete. There's a limit to how well you can do with PID control; beyond that, you need more information. That said, constraining the future is all about control (or vice versa?), and I think the connection between control theory and rationality is important.

Replies from: Tom_Talbot
comment by Tom_Talbot · 2009-04-29T19:12:46.891Z · LW(p) · GW(p)

Coincidentally, today I was reading an interesting paper about forward and inverse models in the cerebellum. Here's a quote:

Humans demonstrate a remarkable ability to generate accurate and appropriate motor behaviour under many different and often uncertain environmental conditions. Considering the number of objects and environments, and their possible combinations, that can influence the dynamics of the motor system, the controller must be capable of providing approximate motor commands for a multitude of distinct contexts, such as different tasks and interactions with objects, that are likely to be experienced. Given this multitude of contexts, there are two qualitatively distinct strategies to motor control and learning. The first is to use a single controller that uses all the contextual information in an attempt to produce an appropriate control signal. However, such a controller would require enormous complexity to allow for all possible scenarios. If this controller were unable to encapsulate all the contexts, it would need to adapt every time the context of the movement changed before it could produce approximate motor commands - this would produce transient and possibly large performance errors. Alternatively, a modular approach can be used in which multiple controllers co-exist, with each controller suitable for one or a small set of contexts. Depending on the current context, only those appropriate controllers would be active to generate the motor command. While forward and inverse models could be learned by a single module, there are three potential benefits to employing a modular approach. First, the world is essentially modular, in that we interact with multiple qualitatively different objects and environments. By using multiple inverse models, each of which might capture the motor commands necessary when acting with a particular object or within a particular environment, we could achieve an efficient coding of the world. In other words, the large set of environmental conditions in which we are required to generate movement requires multiple behaviours or sets of motor commands, each embodied within a module. Secondly, the use of a modular system allows individual modules to adapt through motor learning without affecting the motor behaviours already learned by other modules. Thirdly, many situations that we encounter are derived from combinations of previously experienced contexts, such as novel conjoints of previously manipulated objects and environments. By modulating the contribution to the final motor command of the outputs of the inverse modules, an enormous repertoire of behaviours can be generated. With as few as 32 inverse models, in which the output of each model either contributes or does not contribute to the final motor command, we have 2^32 or 10^10 behaviours - sufficient for a new behaviour for every second of one's life. Therefore, multiple internal models can be regarded conceptually as motor primitives, which are the building blocks used to construct intricate motor behaviours with an enormous vocabulary.

Replies from: Richard_Kennaway, Strange7
comment by Richard_Kennaway · 2009-05-02T18:17:43.951Z · LW(p) · GW(p)

Thanks for that reference. For anyone who doesn't have access to a library subscribing to Trends in Cognitive Sciences, here's a copy that's free to access.

comment by Strange7 · 2010-04-06T19:32:50.553Z · LW(p) · GW(p)

However, such a controller would require enormous complexity to allow for all possible scenarios. If this controller were unable to encapsulate all the contexts, it would need to adapt every time the context of the movement changed before it could produce approximate motor commands - this would produce transient and possibly large performance errors.

I've heard it said that when someone slips on a banana, the humor is closely connected to the way that normal walking movement continues into an inappropriate context. That sounds to me like a large performance error, and a brain is certainly complex.

comment by michael · 2009-04-28T12:09:25.038Z · LW(p) · GW(p)

Isn’t a model of the outside world built in – implicit – in the robot’s design? Surely it has no explicit knowledge of the outside world, yet it was built in a certain way so that it can counteract outside forces. Randomly throwing together a robot most certainly will not get you such a behaviour – but design (or evolution!) will give you a robot with an implicit model of the outside world (maybe at some point one who can formulate explicit models). I wouldn’t be so quick to just throw away the notion of a model.

I find the perspective very intriguing, but I think of it more as nature’s (or human designer’s) way of building quick and dirty, simple and efficient machines. To achieve that goal, implicit models are very important. There is no magic – you need a model, albeit one that is implicit.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2009-04-28T12:53:00.327Z · LW(p) · GW(p)

Certainly I, as the designer, had a model of the robot and its environment when I wrote that program, and the program implements those models. But the robot itself has no model of its environment. It calculates the positions of its feet, relative to itself, from its sensed joint angles and the known lengths of its limb segments, so it does have a fairly limited model of itself: it knows its own dimensions. However, it does not know its own mass, or the characteristics of its sensors and actuators.

The fact that it works does not mean that it has an "implicit" model of the environment: "implicit", in a context like this, means "not". What is a model? A model is a piece of mathematics in which certain quantities correspond to certain properties of the thing modelled, and certain mathematical relationships between these correspond to certain physical relationships. Maxwell's equations model electromagnetic phenomena. The Newton-Raphson (EDIT: I meant Navier-Stokes) equation models fluid flow. "Implicit model" is what one says, when one expects to find a model and finds none. The robot's environment contains a simulated wind pushing on the robot, and a simulated hand giving it a shove. The robot knows nothing of this: there is no variable in the part of the program that deals with the robot's sensors, actuators, and control algorithms that represents the forces acting on it. The robot no more models its environment than a thermostat models the room outside it.

Since it is possible to build systems that achieve goals without models, and also possible, but in general rather more complicated, to build such systems that do use models, I do not think that the blind god of evolution is likely to have put models anywhere. It has come up with something -- us, and probably the higher animals -- that can make models, but nothing currently persuades me that models are how brains must work. I see no need of that hypothesis.

I'd rather like to build that robot. If I did, I would very likely use an onboard computer just to have flexibility in reconfiguring its control algorithms, but the controllers themselves are just PID loops. If, having got it to work robustly, I were to hard-wire it, the control circuitry would consist of a handful of analogue components for each joint, and no computer required. I still find it remarkable, how much it can do with so little.

Replies from: gjm, MrShaggy, gjm, MrHen
comment by gjm · 2009-04-28T16:26:26.188Z · LW(p) · GW(p)

The Newton-Raphson equation models fluid flow.

Er, I think you mean Navier-Stokes.

"Implicit model" is what one says, when one expects to find a model and finds none.

I think that's unfair. The notion of an implicit model (meaning something like "a model such that a system making use of it would behave just like this one") is a useful one; for instance, suppose you are presented with a system designed by someone else that isn't working as it should; one way to diagnose its troubles is to work out what assumptions about the world are implicit in its design (they might not amount to anything quite so grand as a "model", I suppose) and how they fail to match reality, and then -- with the help of one's own better model of the world -- to adjust the system's behaviour.

Or, of course, you can just poke at it until it behaves better. But then I'd be inclined to say that you're still using a model of the world -- you're exploiting the world's ability to be used as a model of itself. If a system gets "poked at until it behaves better" often enough and in varied enough ways, it can end up with a whole lot of information about the world built into it. If you don't want to call that an "implicit model", fair enough; but what's wrong with doing so?

Replies from: Strange7
comment by Strange7 · 2010-04-06T19:51:22.277Z · LW(p) · GW(p)

Poking at it until it works isn't revising a model, in the same sense that walking toward the pole star when you want to go North isn't cartography.

Replies from: gjm
comment by gjm · 2010-04-06T23:12:43.118Z · LW(p) · GW(p)

I didn't say that poking at something until it works is revising a model, I said that it's using a model (in, doubtless, a rather trivial sense). And, if I'm understanding your analogy right, surely the analogous claim would be that walking (as nearly as possible given that one remains on the surface of the earth) towards the pole star isn't reading a map (even an "implicit" one), not that it isn't cartography; and I don't think that's quite so obvious. (Also: it seems to me that "maps" have more in common than "models", and I think that's relevant.)

comment by MrShaggy · 2009-04-28T16:32:55.507Z · LW(p) · GW(p)

Could one argue the tuning by the programmer incorporates the relevant aspects of the model? (Which is what I think the commenter meant by "implicit.") In my mom's old van, going down a steep hill would mess up the cruise control: as you say, if you push hard enough, you can overcome a control loop's programming. So a guess as to the relation to Bayescraft: certain real world scenarios operate within a narrow enough set of parameters enough of the time that one can design feedback loops that do not update based on all evidence and still work well enough.

comment by gjm · 2009-04-28T16:29:47.176Z · LW(p) · GW(p)

nothing currently persuades me that models are how brains must work.

Who's saying that they are?

(And: Is what you're expressing skeptical about the idea that brains usually use models, or the idea that they ever do? I know that I use models quite often -- any time I try to imagine how something I do will work out -- and if it isn't my brain doing that, I don't know what it is.)

comment by MrHen · 2009-04-28T13:59:42.629Z · LW(p) · GW(p)

I'd rather like to build that robot.

If you have not seen it yet, check out Ballbot. This video is it responding to a disturbance. I know nothing of its programming, but it acts as if it is using the same control systems you are describing.

Also, Beyond AI has a lot of discussion about how simple control structures may eventually work their way into building a general AI. I do not know if there is an online version hanging around, but if you are interested I can type up a summary article after the General AI topic ban is lifted.

In terms of your original post, another random example of simple control structures providing control over extremely complex systems would be video games. The controllers generally affect one thing, and after my mind understands the movements I can guide a little soldier to kill other soldiers. I find that learning these control systems makes me a better driver, better at operating small backhoes, and better at anything else that can be expressed in terms of simple control structures. An interesting side-topic to your article would be taking a look at how we control control structures and working to improve the feedback and response times. My talent for video games may be related to my intuitive ability to balance when walking on the curb, or to why I instinctively want to respond to an emotional tragedy with a soft push toward emotional safety. "Fixing it all at once" is likely to overcorrect.

I am rambling now, but this article connected a few unassociated behaviors in my head. Cool.

Replies from: derekz
comment by derekz · 2009-04-28T14:59:16.057Z · LW(p) · GW(p)

For a continuation of the ideas in Beyond AI, relevant to this LW topic, see:

http://agi-09.org/papers/paper_22.pdf

Replies from: MrHen
comment by MrHen · 2009-04-28T15:19:57.265Z · LW(p) · GW(p)

Thanks; added to reading list.

comment by JGWeissman · 2009-04-29T03:04:01.835Z · LW(p) · GW(p)

Control systems win while being arational. Either explain this in terms of Bayescraft, or explain why there is no such explanation.

The control system a person uses to steer a car would fail if it were not calibrated by processing evidence in a manner idealized by Bayescraft. Knowing the correct amount to turn the wheel to correct a deviation of the perceived direction from the desired direction depends on one's previous experience turning the wheel, the evidence of how the car reacts to turning the wheel a given amount.

I often help to teach sailing classes, and I observe that inexperienced students have the problems with steering that would be expected for one unfamiliar with steering their boat. They are either too timid on the helm, allowing the boat to stay off course, or too aggressive, overshooting the desired course and then overcorrecting the other way. As they gain experience, that is, as they process the evidence of how the boat reacts to their use of the tiller, their control improves to the point that they can maintain their desired course. This is one reason we like students to start with smaller, more responsive boats, which give the evidence more quickly and obviously than larger boats that take time to react.

Control systems are useful, but they are useful because we use evidence to select the particular control system that wins.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2009-04-29T13:54:07.007Z · LW(p) · GW(p)

Knowing the correct amount to turn the wheel to correct a deviation of the perceived direction from the desired direction depends on one's previous experience turning the wheel, the evidence of how the car reacts to turning the wheel a given amount.

That isn't the case with the control systems in the OP. A thermostat doesn't know how long it will need to stay on to reach the desired temperature from the current temperature. Even its designers didn't necessarily know that. It just

(1) turns on;

(2) checks the temperature;

(3) stays on if it still hasn't reached the desired temperature; else turns off.

Moreover, it doesn't even learn from this experience. The next time it finds itself with exactly the same disparity between current and desired temperature, it will go through exactly the same procedure, without benefiting from its previous experience at all.

All that matters is that the system responds in a way that (1) approaches the desired state, and (2) won't overshoot---i.e., won't reach the desired state so quickly that the system can't turn off the response in time. These seem to be what were missing with your sailing students.
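
For what it's worth, that whole procedure fits in a few lines of code. This is only a sketch, with a small hysteresis band added so the heater doesn't chatter around the set point, and with made-up numbers:

```python
def thermostat_step(temperature, setpoint, heater_on, band=0.5):
    """Return the heater state for the next minute, given only what is sensed right now."""
    if temperature < setpoint - band:
        return True           # too cold: turn on (or stay on)
    if temperature > setpoint + band:
        return False          # warm enough: turn off (or stay off)
    return heater_on          # inside the band: leave things as they are

temp, heater = 15.0, False
for minute in range(120):
    heater = thermostat_step(temp, setpoint=20.0, heater_on=heater)
    temp += (1.0 if heater else 0.0) - 0.3   # heating, minus a heat loss it knows nothing about

print(round(temp, 1), heater)   # hovers near the set point
```

It never learns, never predicts, and never senses the heat loss; it just keeps checking and switching.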

Edited to correct format

Replies from: JGWeissman, JamesAndrix
comment by JGWeissman · 2009-04-29T18:28:19.299Z · LW(p) · GW(p)

That isn't the case with the control systems in the OP.

From the OP

If this was only about cruise controls and room thermostats, it would just be a minor conundrum. But it is also about people, and all living organisms.

My point was that features of the thermostat that the OP attempted to generalize to control systems used by people do not actually generalize. A thermostat is a simple system to solve a simple problem (though it still takes some evidence, that a given device cools or heats a room). A more complex problem requires a more complex solution, and more evidence to calibrate.

All that matters is that the system responds in a way that (1) approaches the desired state, and (2) won't overshoot---i.e., won't reach the desired state so quickly that the system can't turn off the response in time. These seem to be what were missing with your sailing students.

While technically true at a certain level of abstraction, that is just not helpful. The reason why the students do not approach the desired state, or overshoot, is important. If I just told them "approach the desired course, but don't overshoot", it would not help. They already know they want to do that, but not how to do that. I need to tell them more precisely how to use the tiller to do that. I tell them, "pull the tiller towards you, a little more ... now back in the center", and get them to observe the effect this has on the boat. It is after going through this exercise a few times that they are able to implement the control system themselves, and process higher level instructions.

comment by JamesAndrix · 2009-04-29T16:29:49.760Z · LW(p) · GW(p)

(2) won't overshoot---i.e., won't reach the desired state so quickly that the system can't turn off the response in time. These seem to be what were missing with your sailing students.

But that's a result of the high responsiveness of the furnace vs. the low responsiveness of the boat. You couldn't blindly let a thermostat control a boat or a missile; you would have to tune it. In some situations it might need to turn itself back off before its input (heading) has noticeably changed.

Replies from: themightypuck
comment by themightypuck · 2009-06-29T23:42:47.208Z · LW(p) · GW(p)

Consciousness explained.

comment by Psychohistorian · 2009-04-28T14:23:49.504Z · LW(p) · GW(p)

But there is no doubt that it is wrong. Completely, totally wrong. To this audience I can say, as wrong as theism.

I did not really see this being backed up, certainly not to a "wrong as theism" level. Much more importantly, it being wrong has little to do with you being right, any more than Darwinian evolution being wrong would mean there must be an all-powerful father figure who cares deeply about what you do in the bedroom and whether you eat meat on Fridays.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2009-04-28T19:45:17.002Z · LW(p) · GW(p)

I was just trying to convey the degree of wrongness that I was claiming, not drawing any detailed connection between them.

comment by PhilGoetz · 2009-04-29T23:22:06.831Z · LW(p) · GW(p)

This is 180 degrees around from the behavioural stimulus-response view, in which you apply a stimulus (a perception) to the organism, and that causes it to emit a response (a behaviour). I shall come back to why this is wrong below. But there is no doubt that it is wrong. Completely, totally wrong. To this audience I can say, as wrong as theism.

No. Absolutely not. Stimulus-response works, has worked reliably for 70 years, and we now know how some specific brain circuits encode stimulus-response learning.

Servos also work. Both have their uses.

comment by SilasBarta · 2009-04-28T22:46:32.296Z · LW(p) · GW(p)

Richard's post is similar to something I was thinking about a few months ago. I tried to attack the problem of AI by looking at very simple systems that can be said to accomplish "goals" without all the fancy stuff that people typically think they have to put in AI, and asking how that works.

For example, a mass hanging by a spring: it moves the mass back to its equilibrium position without doing the things listed in 2). But here, Richard is asking an easier question in 4), since he's asking about systems that are specifically designed to track some reference, rather than systems that happen to do it as a consequence of their other properties.

In that case, the answer (about how an arational system accomplishes the goals of rationality) is pretty simple: the system has been physically set up in a way that exploits the laws of nature to create mutual information between the system and its environment. If you view Bayescraft as a way to increase the mutual information between yourself (hopefully meaning the brain part!) and your environment, then the system is in fact doing that, so it is not arational. Its design implements Bayesian inference.

In the case of the thermostat, the temperature sensor, via heat transfer, becomes entangled with its environment, a natural process that happens to have an isomorphism to Bayes' Theorem. Then, something else senses the reading, causing another set of effects that determines what temperature air to blow out.

The next question is why this mutual information is such that it keeps the temperature within a specific range, rather than making it spiral out of control. The answer to that part, as others have mentioned, is that the person who set up the system chose rules that happened to work. That required another kind of entanglement with the environment, which does not need to be done again during the operation of the thermostat.

Well, as long as the assumptions it's based on don't change too much...

comment by Richard_Kennaway · 2009-04-28T20:02:51.439Z · LW(p) · GW(p)

Thanks for the discussion. At this point I'll write another top-level posting rather than make a dozen point-for-point replies. Also, some of the comments, mine included, have pressed a little too hard on the embargo against a certain topic.

Besides, the topic we do not speak of yet is one I'm not much interested in talking about at all, and will be avoiding as far as I can. It has its own forums. I wonder if the original reason for the embargo might justify keeping it permanently? This is intended to be a forum about human rationality, and talk of any other sort should only be incidental.

comment by MrShaggy · 2009-04-28T12:51:02.777Z · LW(p) · GW(p)

I liked the Alien Space Bat description of a control system. The idea that our psychology is a collection of control systems, originated by a control engineer, sounds like the cliche "if you're holding a hammer, everything looks like a nail", and I don't know how the belief itself controls anticipation (http://www.overcomingbias.com/2007/07/making-beliefs-.html). So as of now, I still don't know why I need to know about control theory.

comment by i77 · 2009-04-28T21:33:01.600Z · LW(p) · GW(p)

Very interesting article. Yes, the controller is not intelligent but you have to factor in the designer. (I think this is something like a response to the Chinese Room argument). Just a few comments:

It has no model of its surroundings.

It has a very simple one: the sign of the gain of the plant (steady-state).

It has no model of itself.

No, but its maker does: the transfer function of the controller.

It makes no predictions.

As in the first point: implicit in the design of the system is that temperature goes up with +1 output. If you flip the sign you get positive feedback and the system does not work as intended.

It has no priors.

Its designer knows some a priori things, like the typical time constant of the temperature trajectory and its range.

It has no utility function.

Maybe not a formal one, but you could build one with things like integrated squared error.

Replies from: Nelson_Flood, Nelson_Flood
comment by Nelson_Flood · 2013-09-12T01:09:16.454Z · LW(p) · GW(p)

Concerning your first point, that the designer has to hand-insert that all-important sign bit. So how do humans come up with these sign bits? I imagine a trial-and-error process of interacting with the controlled system. During this, the person's brain is generating an error signal derived directly or indirectly from an evolutionarily-fixed set point. While trying to control the system manually using an initially random sign bit, I suppose the brain can analyze at a low level in the hardware that the error is 1) changing exponentially, and 2) has a positive or negative slope, as the case may be. If the situation is exponential and the slope is positive, you synaptically weld the cortical representation of the controlled variable to the antagonist muscle of the one currently energized, and if negative, to the energized muscle itself. Bayesian inference would enter as a Kalman filter used to calculate the controlled variable. I suppose the process of acquiring the sign bit of the slope could not be separated from acquiring the model needed by the Kalman filter, so some kind of bootstrapping process could be involved. In his book "Neural Engineering..." (2004), Chris Eliasmith makes a case that the brain contains Kalman filters.

Is the evolutionary process responsible for the original hard-wired set point itself a controller? I doubt it, because, to use Douglas Adams' analogy, control principles do not seem to be involved in getting the shape of a puddle to match that of the hole it's in.

comment by Nelson_Flood · 2013-09-09T04:21:57.408Z · LW(p) · GW(p)

Concerning your first point, the designer has to hand-insert that all-important sign bit. So how do humans come up with these sign bits? I imagine a trial-and-error process of interacting with the controlled system. During this, the person's brain is generating an error signal derived over learning time by classical conditioning from an evolutionarily-derived hypothalamic error signal. While trying to control the system manually using an initially random sign bit, I suppose the brain can analyze at a low level in the hardware that the error is 1) changing exponentially, and 2) has a positive or negative slope, as the case may be. If the slope is positive, you synaptically weld the cortical representation of the controlled variable to the antagonist muscle of the one currently moving, and if negative, to the moving muscle itself. Bayesian inference would enter as a Kalman filter used to calculate the controlled variable. I suppose the process of acquiring the sign bit of the slope could not be separated from acquiring the model needed by the Kalman filter. In his book "Neural Engineering..." (2004), Chris Eliasmith makes a case that the brain contains Kalman filters.

Is the evolutionary process responsible for the original hard-wired error signal itself a controller? I doubt it, because, to use Douglas Adams' analogy, control principles do not seem to be involved in getting the shape of a puddle to match that of the hole it's in.

comment by JulianMorrison · 2009-04-28T20:41:05.154Z · LW(p) · GW(p)

Isn't a control system using feedback basically analogous to a look-up table? Feedbacks by themselves aren't optimizers, they're happenstance. Feedbacks that usefully seek a goal constitute the output of an optimization process that ran beforehand.

Replies from: Cyan
comment by Cyan · 2009-04-28T21:00:46.590Z · LW(p) · GW(p)

Isn't a control system using feedback basically analogous to a look-up table?

Only in the sense that, say, the Lotka-Volterra equations are basically analogous to a look-up table. You'd be missing out if you thought that's all it was.

Replies from: JulianMorrison
comment by JulianMorrison · 2009-04-28T21:46:41.219Z · LW(p) · GW(p)

Those ones are happenstance. They just feed back, they're not going anywhere.

The analogy I mean is that like a LUT, the answer to any particular question is embodied in the pre-existing structure. And this correlation of response to result is optimized, it's not luck.

comment by Kaj_Sotala · 2009-04-28T19:02:20.831Z · LW(p) · GW(p)

Interesting. I'm immediately reminded of the set points for happiness and weight. Also, Eliezer described the phenomenon in "It's okay to be a little irrational" as "removal of pressure -> removal of counterpressure -> collapse of irrationality". The pressure -> counterpressure mechanism sounds like it may be related to this.

Extending it a bit further... often we seem to seek evidence that confirms our beliefs, not evidence that would actually challenge them. Wouldn't that, too, be an instance of trying to set one's perceptions at desired values?

comment by pjeby · 2009-04-28T16:36:19.110Z · LW(p) · GW(p)

I'm looking forward to seeing more from you on this. NLP has a couple of bits and bobs of control theory in it, most notably the foundational ideas that the way to get a person to change (or any other result) is to be more flexible in your behavior than any other part of the system, and that you need to be able to measure yourself relative to a well-defined outcome. Even Robert Fritz's "creative process" books emphasize a concept of structural tension, which is the distance between a goal state and reality. My thoughts-into-action video is based on initiating internal measurement of the distance between a clean desk and a messy one, then standing back and letting the control system do its job.

Btw, while it isn't necessary for a control system to predict, remember, or model anything, in humans predictive modeling is an important part of the control system nonetheless. (See e.g. the experiments that show humans can detect probability patterns without even having conscious awareness.)

Actually, risk homeostasis is another good example of a human control system that requires a predictive model in order to establish a set-point... heck, I imagine you can't even catch a ball unless you can predict where it's going to be.

Interesting anecdote: I recently read a Wired article about perception that mentions a professional pickpocket (entertainer/magician) who found that the way to have your hands be quicker than someone's eyes is to move your hands in a curve -- because if you move in a straight line, the person's eyes go to where your hands are going to be, rather than tracking where they are.

You could view all of these things as simply setting goals for a control system, but I find Hawkins' HTM model of the cortex more compelling from an evolutionary point of view. A design based on predictive memory control systems being "all the way down" is easier to evolve than one that has to have a bunch of collaborating components to produce the same behaviors, whereas an HTM-based cortex can just get bigger and add more layers. And at the early end of the evolutionary chain, incrementally adding memory/prediction to existing control systems is an equally incremental win -- i.e., "easy to evolve".

Replies from: MrHen
comment by MrHen · 2009-04-28T18:28:36.403Z · LW(p) · GW(p)

I imagine you can't even catch a ball unless you can predict where it's going to be.

Mmm... does "you" mean a person or does "you" mean anything? Catching a ball can easily be done without predicting its final location and was discussed in a different thread.

Replies from: pjeby
comment by pjeby · 2009-04-28T19:26:47.935Z · LW(p) · GW(p)

Mmm... does "you" mean a person or does "you" mean anything? Catching a ball can easily be done without predicting its final location and was discussed in a different thread.

That depends on what you mean by "predict". I don't mean a conscious prediction, I just mean a model that tells you how to get there. Even if that model is an algorithm, it's still a prediction.

Consider the ball player who runs to catch the ball, and then realizes he's not going to make it and stops trying. How is that not a prediction?

Replies from: MrHen
comment by MrHen · 2009-04-28T20:17:13.864Z · LW(p) · GW(p)

I just mean a model that tells you how to get there.

Oh, okay. I misunderstood what you meant.

Consider the ball player who runs to catch the ball, and then realizes he's not going to make it and stops trying. How is that not a prediction?

That has little to do with what I was talking about. Something that "predicts" by thinking "If I am not holding the ball, move closer" has no concept of being able to "make it" to the landing spot. It couldn't care less where the ball ends up. All it needs to know is if it is currently holding the ball and how to get closer. The "how to get closer" is the predictor.

Replies from: pjeby
comment by pjeby · 2009-04-28T21:37:47.069Z · LW(p) · GW(p)

That has little to do with what I was talking about. Something that "predicts" by thinking "If I am not holding the ball, move closer" has no concept of being able to "make it" to the landing spot. It couldn't care less where the ball ends up. All it needs to know is if it is currently holding the ball and how to get closer. The "how to get closer" is the predictor.

As I said, I understand you can make a control system that works that way. I'm just saying that humans don't appear to work that way, and possibly cortically-driven behaviors in general (across different species) don't work that way either.

Edit to add: see also the Memory-prediction Framework page on Wikipedia, for more info on feed-forward predictive modeling in the neocortex, e.g.:

The central concept of the memory-prediction framework is that bottom-up inputs are matched in a hierarchy of recognition, and evoke a series of top-down expectations encoded as potentiations. These expectations interact with the bottom-up signals to both analyse those inputs and generate predictions of subsequent expected inputs.

Replies from: MrHen
comment by MrHen · 2009-04-29T00:47:15.822Z · LW(p) · GW(p)

I'm just saying that humans don't appear to work that way, and possibly cortically-driven behaviors in general (across different species) don't work that way either.

Yeah, this makes sense and that is why I asked the question about who "you" was.

Mmm... does "you" mean a person or does "you" mean anything?

comment by scientism · 2009-04-28T20:18:51.918Z · LW(p) · GW(p)

If your approach isn't representationalist then why in the world would you maintain that we're "brains in vats" and have no direct access to the world? What do we have "access" to in lieu of the real world if there's no intervening model, representation, sense data, etc? Why not say it's the relative position of the car we're controlling rather than the "neural signals"? (How does one "see" a neural signal anyway?) It seems like your approach would be much more at home with direct realism.