Without models
post by Richard_Kennaway · 2009-05-04T11:31:38.399Z · LW · GW · Legacy · 55 comments
Followup to: What is control theory?
I mentioned in my post testing the water on this subject that control systems are not intuitive until one has learnt to understand them. The point I am going to talk about is one of those non-intuitive features of the subject. It is (a) basic to the very idea of a control system, and (b) something that almost everyone gets wrong when they first encounter control systems.
I'm going to address just this one point, not in order to ignore the rest, but because the discussion arising from my last post has shown that this is presently the most important thing.
There is a great temptation to think that to control a variable -- that is, to keep it at a desired value in spite of disturbing influences -- the controller must contain a model of the process to be controlled and use it to calculate what actions will have the desired effect. In addition, it must measure the disturbances, or better still predict them in advance, work out what effect they will have, and take that into account in deciding its actions.
In terms more familiar here, the temptation is to think that to bring about desired effects in the world, one must have a model of the relevant parts of the world and predict what actions will produce the desired results.
However, this is absolutely wrong. This is not a minor mistake or a small misunderstanding; it is the pons asinorum of the subject.
Note the word "must". It is not disputed that one can use models and predictions, only that one must, that the task inherently requires it.
A control system can work without having any model of what it is controlling.
The designer will have a model. For the room thermostat, he must know that the heating should turn on when the room is too cold and off when it is too hot, rather than the other way around, and he must arrange that the source of heat is powerful enough. The controller he designs does not know that; it merely does that. (Compare the similar relationship between evolution and evolved organisms. How evolution works is not how the evolved organism works, nor is how a designer works how the designed system works.) For a cruise control, he must choose the parameters of the controller, taking into account the engine's response to the accelerator pedal. The resulting control system, however, contains no representation of that. According to the HowStuffWorks article, they typically use nothing more complicated than proportional or PID control. The parameters are chosen by the designer according to his knowledge about the system; the parameters themselves are not something the controller knows about the system.
It is possible to design control systems that do contain models, but it is not inherent to the task of control. This is what model-based controllers look like. (Thanks to Tom Talbot for that reference.) Pick up any book on model-based control to see more examples. There are signals within the control system that are designed to relate to each other in the same way as do corresponding properties of the world outside. That is what a model is. There is nothing even slightly resembling that in a thermostat or a cruise control. Nor is there in the knee-jerk tendon reflex. Whether there are models elsewhere in the human body is an empirical matter, to be decided by investigations such as those in the linked paper. Merely being entangled with the outside world is not what it is to be a model.
Within the Alien Space Bat Prison Cell, the thermostat is flicking a switch one way when the needle is to the left of the mark, and the other when it is to the right. The cruise control is turning a knob by an amount proportional to the distance between the needle and the mark. Neither of them knows why. Neither of them knows what is outside the cell. Neither of them cares whether what they are doing is working. They just do it, and they work.
A control system can work without having any knowledge of the external disturbances.
The thermostat does not know that the sun is shining in through the window. It only knows the current temperature. The cruise control does not sense the gradient of the road, nor the head wind. It senses the speed of the car. It may be tuned for some broad characteristics of the vehicle, but it does not itself know those characteristics, or sense when they change, such as when passengers get in and out.
Again, it is possible to design controllers that do sense at least some of the disturbances, but it is not inherent to the task of control.
A control system can work without making any predictions about anything.
The room thermostat does not know that the sun is shining, nor the cruise control the gradient. A fortiori, they do not predict that the sun will come out in a few minutes, nor that there is a hill in the distance.
It is possible to design controllers that make predictions, but it is not an inherent requirement of the task of control. The fact that a controller works does not constitute a prediction, by the controller, that it will work. I am belabouring this point, because the error has already been belaboured.
But (it was maintained) doesn't the control system have an implicit model, implicit knowledge, and implicitly make predictions?
No. None of these things are true. The very concepts of implicit model, implicit knowledge, and implicit prediction are problematic. The phrases do have sensible meanings in some other contexts, but not here. An implicit model is one in which functional relationships are expressed not as explicit functions y=f(x), but as relations g(x,y)=k. Implicit knowledge is knowledge that one has but cannot express in words. Implicit prediction is an unarticulated belief about the effect of the actions one is taking.
In the present context, "implicit" is indistinguishable from "not". Just because a system was made a certain way in order to interact with some other system a certain way, it does not make the one a model of the other. As well say that a hammer is a model of a nail. The examples I am using, the thermostat and the cruise control, sense temperature and speed respectively, compare them with their set points, and apply a rule for determining their action. In the rule for a proportional controller:
output = constant × (reference - perception)
there is no model of anything. The gain constant is not a model. The perception, the reference, and the output are not models. The equation relating them is my model of the controller. It is not the controller's model of anything: it is what the controller is.
The only knowledge these systems have is their perceptions and their references, for temperature or speed. They contain no "implicit knowledge".
They do not "implicitly" make predictions. The designer can predict that they will work. The controllers themselves predict nothing. They do what they do whether it works or not. Sometimes, in fact, these systems do not work. The thermostat will fail to control if the outside temperature is above the set point. The cruise control will fail to control on a sufficiently steep downhill gradient. They will not notice that they are not working. They will not behave any differently as a result. They will just carry on doing o=c×(r-p), or whatever their output rule is.
I don't know if anyone tried my robot simulation applet that I linked to, but I've noticed that people I show it to readily anthropomorphise it. (BTW, if its interface appears scrambled, resize the browser window a little and it should sort itself out.) They see the robot apparently going around the side of a hill to get to a food particle and think it planned that, when in fact it knows absolutely nothing about the shape of the terrain ahead. They see it go to one food particle rather than another and think it made a decision, when in fact it does not know how many food particles there are or where. There is almost nothing inside the robot, compared to what people imagine: no planning, no adaptation, no prediction, no sensing of disturbances, and no model of anything but its own geometry. The 6-legged version contains 44 proportional controllers. The 44 gain constants are not a model, they merely work.
(A tangent: people look at other people and think they can see those other people's purposes, thoughts, and feelings. Are their projections any more accurate than they are when they look at that robot? If you think that they are, how do you know?)
Now, I am not explaining control systems merely to explain control systems. The relevance to rationality is that they funnel reality into a narrow path in configuration space by entirely arational means, and thus constitute a proof by example that this is possible. This must raise the question, how much of the neural functioning of a living organism, human or lesser, operates by similar means? And how much of the functioning of an artificial organism must be designed to use these means? It appears inescapable that all of what a brain does consists of control systems. To what extent these may be model-based is an empirical question, and is not implied merely by the fact of control. Likewise, the extent to which these methods are useful in the design of artificial systems embodying the Ultimate Art.
Evolution operates statistically; I would be entirely unsurprised by Bayesian analyses of evolution. But how evolution works is not how the evolved organism works. That must be studied separately.
I may post something more on the relationship between Bayesian reasoning and control systems, which need neither be designed by it nor perform it, when I've digested the material that Steve_Rayhawk pointed to. For the moment, though, I'll just remark that "Bayes!" is merely a mysterious answer, unless backed up by actual mathematical application to the specific case.
Exercises.
1. A room thermostat is set to turn the heating on at 20 degrees and off at 21. The ambient temperature outside is 10 degrees. You place a candle near the thermostat, whose effect is to raise its temperature 5 degrees relative to the body of the room. What will happen to (a) the temperature of the room and (b) the temperature of the thermostat?
2. A cruise control is set to maintain the speed at 50 mph. It is mechanically connected to the accelerator pedal -- it moves it up and down, operating the throttle just as you would be doing if you were controlling the speed yourself. It is designed to disengage the moment you depress the brake. Suppose that that switch fails: the cruise control continues to operate when you apply the brake. As you gently apply the brake, what will happen to (a) the accelerator pedal, and (b) the speed of the car? What will happen if you attempt to keep the speed down to 40 mph?
3. An employee is paid an hourly rate for however many hours he wishes to work. What will happen to the number of hours per week he works if the rate is increased?
4. A target is imposed on a doctor's practice, of never having a waiting list for appointments more than four weeks long. What effect will this have on (a) how long a patient must wait to see the doctor, and (b) the length of the appointments book?
5. What relates questions 3 and 4 to the subject of this article?
6. Controller: o = c×(r-p). Environment: dp/dt = k×o + d. o, r, and p as above; c and k are constants; d is an arbitrary function of time (the disturbance). How fast and how accurately does this controller reject the disturbance and track the reference?
Comments sorted by top scores.
comment by Vladimir_Nesov · 2009-05-04T15:29:35.070Z · LW(p) · GW(p)
This post is an example of how words can go wrong. Richard hasn't clearly specified what this 'model' or 'implicit model' stuff is, yet for the whole post he repeats again and again that it's not in control systems. What is the content of this assertion? If I accept it, or if I reject it, how is this belief going to pay its rent? What do I anticipate differently?
Can anything be a 'model'? How do I know that there is a model somewhere?
The word itself is so loaded that without additionally specifying what you mean, it can be used only to weakly suggest a property, not to strongly assert one.
Any property you see in a system is actually in your interpretation of the system, in its semantics (you see a map, not the territory; this is not a pipe). The interpretation, and the procedure of establishing it given a system, are sometimes called a 'model' of the system; this is a general theme in what is usually meant by a model. Interpretation doesn't need to happen in anyone's head: it may exist in another system, for example in a computer program, or it can be purely mathematical, arising formally from the procedure that specifies how to build it.
In this sense, to call something a model is to interpret it as an interpretation of something else. Even a rock may be said to be a model of the universe, under the right interpretation, albeit a very abstract model, not useful at all. Of course, you can narrow down this general theme to assert that rocks can't model the universe, in particular because they can't simulate certain properties, or because your interpretation procedure breaks down when you present it with a rock. But you actually have to state the meaning of your terms in cases like this, hopefully with a definition-independent goal to accomplish by finally getting the message through.
Replies from: SilasBarta, cousin_it, Cyan, kpreid, Tiiba, cousin_it
↑ comment by SilasBarta · 2009-05-05T04:07:19.968Z · LW(p) · GW(p)
This is exactly why I tried to restate the situation in terms of the more precise concept of "mutual information" in Richard's last topic, although I guess I was a bit vague at points as to how it works.
So in the context of Bayesian inference, and rationality in general, we should start with:
"A controller has a model (explicit or implicit) of it's environment iff there is mutual information between the controller and the environment."
This statement is equivalent to:
"A controller has a model (explicit or implicit) of its environment iff, given the controller, you require a shorter message to describe its environment (than if you could not reference the controller)."
From that starting point, the question is easier to answer. Take the case of the thermostat. If the temperature sensor is considered part of this controller, then yes, it has a model of its environment. Why? Because if you are given the sensor reading, you can more concisely describe the environment: "That reading, plus a time/amplitude shift."
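As a rough illustration of that definition, here is a sketch (with invented numbers) estimating the mutual information between a simulated room temperature and a noisy sensor reading of it, and watching it vanish when the link between them is broken:

```python
import numpy as np

rng = np.random.default_rng(0)
room = 15 + 10 * rng.random(100_000)             # simulated room temperature, degrees C
sensor = room + rng.normal(0, 0.5, room.size)    # sensor reading = room + noise

def mutual_information(x, y, bins=30):
    """Plug-in estimate of I(X;Y) in bits from samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

print(mutual_information(room, sensor))                    # clearly positive
print(mutual_information(room, rng.permutation(sensor)))   # near zero once the link is broken
```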
Richard puts a lot of emphasis on how cool it is that the thermostat doesn't need to know if the sun is shining. This point can be rephrased as:
"A controller does not need to have mutual information with all of its environment to work." Or,
"Learning a controller, and the fact that it works, does not suffice to tell you everything about its environment."
I think that statement sums up what Richard is trying to say here.
And of course you can take this method further and discuss the mutual information between a) the controller, b) the output, c) the environment. That is, do a) and b) together suffice to tell you c)?
EDIT: Some goofs
↑ comment by cousin_it · 2009-05-05T08:44:10.574Z · LW(p) · GW(p)
My belief will pay rent as follows: I no longer expect by default to find computers inside any mechanism that exhibits complex behavior. For clarity let me rephrase the discussion, substituting some other engineering concept in place of "model".
RichardKennaway: Hey guys, I found this nifty way of building robots without using random access memory!
Vladimir_Nesov: WTF is "random access memory"? Even a rock could be said to possess it if you squint hard enough. Your words are meaningless. Here, study this bucket of Eliezer's writings.
Replies from: kpreid
↑ comment by kpreid · 2009-05-05T12:00:30.084Z · LW(p) · GW(p)
The substitution is not equivalent; people are more likely to agree whether something contains "random access memory" than whether it contains "a model".
Replies from: cousin_it
↑ comment by cousin_it · 2009-05-05T23:34:59.070Z · LW(p) · GW(p)
I think philosophers could easily blur the definition of "random access memory", they just didn't get around to it yet. A competent engineer can peek inside a device and tell you whether it's running a model of its surroundings, so the word "model" does carry some meaning regardless of what philosophers say. If you want a formal definition, we could start with something like this: does the device contain independent correlata for independent external concepts of interest?
↑ comment by Cyan · 2009-05-04T17:16:02.513Z · LW(p) · GW(p)
He wrote at the top,
There are signals within the control system that are designed to relate to each other in the same way as do corresponding properties of the world outside. That is what a model is.
Is this definition inadequate? To me it seems to capture (up to English language precision) what it means to have a control system with a model in it.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2009-05-04T17:43:01.762Z · LW(p) · GW(p)
This is a very broad definition, the flexibility hiding in the word 'corresponding', and in the choice of properties to model. In a thermostat, for example, the state of the thermometer, together with the fact that its readings correspond to the temperature of the world outside, seems to satisfy this definition (one signal, no internal structure). This fact is explicitly denied in the article, but without clear explanation as to why. A stricter definition will of course be able to win this argument.
↑ comment by kpreid · 2009-05-04T20:16:37.144Z · LW(p) · GW(p)
I was going to say that any stateful controller has a model — the state constitutes the model — but reading this comment made me realize that that would be arguing about whether the tree makes a sound.
↑ comment by Tiiba · 2009-05-05T17:32:21.082Z · LW(p) · GW(p)
What I think Richard is denying is that a thermostat models certain things. It's not that it doesn't have a model, but it's a model in the same sense that a line is a conic section (degenerate). It does not predict the future, it does not remember the past, and there is nothing in it that resembles Bayesian probabilities. It knows what the temperature is, but the latter is wired directly into the next action. Thought tends to involve a few intermediate stages.
↑ comment by cousin_it · 2009-05-05T08:22:47.001Z · LW(p) · GW(p)
Scholastics strikes again!
The belief will pay its rent as follows: previously, whenever I saw a mechanism that exhibited complex and apparently goal-directed behavior, I expected to find a computer inside. Now I don't.
Also, I could dismantle the "rational reasoning" with the same ease as you dismantled "model". How do you tell if a system contains reasoning? Ooh, it's all-so-subjective.
comment by conchis · 2009-05-04T12:14:56.717Z · LW(p) · GW(p)
An employee is paid an hourly rate for however many hours he wishes to work. What will happen to the number of hours per week he works if the rate is increased?
It depends. The substitution effect says work more, the income effect says work less. The result depends on which one dominates.
comment by JGWeissman · 2009-05-04T18:04:08.213Z · LW(p) · GW(p)
If I use my rational model to give instructions to someone who blindly follows, lacking his own model, and he achieves his goal, I would not say he succeeded arationally. The fact that the agent did not contain the rationality but only received the conclusions does not mean the rationality was not important to the process.
If I extend the situation, so that I use my rational model to describe simple rules that produce success in a broad class of situations, and an agent lacking a model succeeds by following my rules, this still is not arational. The agent is still using my rationality indirectly.
If I take the simple rules I derived from my rational model, and design a device to mechanically follow the rules, the device will work, not arationally, but indirectly rationally. This is your control system. It does not contain a model, it just blindly follows rules that were produced by rationality.
comment by Richard_Kennaway · 2009-05-06T07:26:59.684Z · LW(p) · GW(p)
A collective reply to comments so far.
All the posted answers to the exercises so far are correct.
1. Warming the thermostat with a candle will depress the room temperature while leaving the thermostat temperature constant.
2. Pressing the brake when the cruise control does not disengage will leave the car speed constant while the accelerator pedal goes down -- until something breaks.
3. The effect of raising a piece-rate worker's hourly rate will depend on what the worker wants (and not on what the employer intended to happen).
4. The doctor's target will be met while patients will still have to wait just as long, they just won't be able to book more than four weeks ahead. (This is an actual example from the British National Health Service.)
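Answer 1, for instance, can be checked with a small simulation (a sketch with invented heating and heat-loss constants):

```python
def thermostat(candle_offset=0.0, steps=50_000, dt=0.01):
    """Bang-bang thermostat: heat on below 20, off above 21; the candle warms only the sensor."""
    room, outside, heater_on = 15.0, 10.0, False
    sensed = room
    for _ in range(steps):
        sensed = room + candle_offset             # what the thermostat actually feels
        if sensed < 20.0:
            heater_on = True
        elif sensed > 21.0:
            heater_on = False
        room += dt * ((2.0 if heater_on else 0.0) - 0.1 * (room - outside))
    return round(room, 1), round(sensed, 1)

print(thermostat(0.0))   # room and sensor both cycle in the 20-21 band
print(thermostat(5.0))   # sensor still held at 20-21, room left cycling around 15-16
```

The controlled variable is what the sensor feels, so that is what stays in the band; the room is left lower by roughly the size of the offset.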
Does no-one want to tackle 5 or 6? Anyone who knows the derivative of exp(a t) knows enough to do number 6.
Thank you, kpreid, for linking to the very article that I knew, even while writing the original post, I would be invoking in response to the comments. Anyone who has not come across it before, please read it, and then I will talk about the concept that (it turns out) we are all talking about, when we talk about models, except for the curious fit that comes over some of us when contemplating the simple thermostat.
i77: As you say, the Smith predictor contains a model, and the subsystem C does not. Likewise the MRAC. In the PID case, the engineer has a model. But don't slide from that to attributing a model to the PID system. There isn't one there.
Vladimir_Nesov, pretty much all the concepts listed in the first three sections of that article are special cases of what is here meant by the word. As for the rest, I think we can all agree that we are not talking about a professional clothes horse or a village in Gmina Pacyna. I don't believe I have committed any of these offences (another article I'd recommend to anyone who has only just now had the good fortune to encounter it), but let those call foul who see any.
So, what are we talking about, when we talk about models? What I am talking about -- I'll come to the "we" part -- I said in a comment of mine on my first post:
What is a model? A model is a piece of mathematics in which certain quantities correspond to certain properties of the thing modelled, and certain mathematical relationships between these correspond to certain physical relationships.
and more briefly in the current post:
signals ... that are designed to relate to each other in the same way as do corresponding properties of the world outside
This is exactly what is meant by the word in model-based control theory. I linked to one paper where models in precisely this sense appear, and I am sure Google Books or Amazon will show the first chapters of any number of books on the subject, all using the word in exactly the same way. There is a definite thing here, and that is the thing I am talking of when I talk of a model.
This is not merely a term of art from within some branch of engineering, in which no-one outside it need be interested. Overcoming Bias has an excellent feature, a Google search box specialised to OB. When I search for "model", I get 523 hits. The first five (as I write -- I daresay the ranking may change from time to time) all use it in the above sense, some with less mathematical content but still with the essential feature of one thing being similar in structure to another, especially for the purpose of predicting how that other thing will behave. Here they are:
"So rather than your model for cognitive bias being an alternative model to self-deception..." (The model here is an extended analogy of the brain to a political bureaucracy.)
"Data-based model checking is a powerful tool for overcoming bias" (The writer is talking about statistical models, i.e. "a set of mathematical equations which describe the behavior of an object of study in terms of random variables and their associated probability distributions.")
"the model predicts much lower turnout than actually occurs" (The model is "the Rational Choice Model of Voting Participation, which is that people will vote if p times B > C".)
"I don't think student reports are a very good model for this kind of cognitive bias." (I.e. a system that behaves enough like another system to provide insight about that other.)
The 5th is a duplicate of the 2nd.
Those are enough examples to quote, but I inspected the rest of the first ten and sampled a few other hits at random (nos. 314, 159, 265, and 358, in fact), and except for a mention of a "role model", which could be arguable but not in any useful way, found no other senses in use.
When I googlesearch LW, excluding my own articles and the comments on them, the first two hits are to this, and this. These are also using the word in the same sense. The models are not as mathematical as they would have to be for engineering use, but they are otherwise of the same form: stuff here (the model) which is similar in structure to stuff there (the thing modelled), such that the model can be used to predict properties of the modelled.
In other words, what I am talking about, when I talk about models, is exactly what we on OB and LW are all talking about, when we talk about models, every time we talk about models. There is a definite thing here that has an easily understood shape in thingspace, we all call it a model, and to a sufficiently good approximation we call nothing else a model.
Until, strangely, we contemplate some very simple devices that reliably produce certain results, yet contain nothing drawn from that region of thingspace. Suddenly, instead of saying, "well well, no models here, fancy that", the definition of "model" is immediately changed to mean nothing more than mere entanglement, most explicitly by SilasBarta:
"A controller has a model (explicit or implicit) of it's environment iff there is mutual information between the controller and the environment."
Or the model in the designer's head is pointed to, and some sort of contagion invoked to attribute it to the thing he designed. No, this is butter spread over too much bread. That is not what is called a model anywhere on OB or LW except in these comment threads; it is not what is called a model, period.
You can consider the curvature of a bimetallic strip a model of the temperature if you like. It's a trivial model with one variable and no other structure, but there it is. However, a thermometer and a thermostat both have that model of the temperature, but only the thermostat controls it. You can also consider the thermostat's reference input to be a model of the position of the control dial, and the signal to the relay a model of the state of the relay, and the relay state a model of the heater state, but none of these trivial models explain the thermostat's functioning. What does explain the thermostat's functioning is the relation "turn on if below T1, turn off if above T2". That relation is not a model of anything. It is what the thermostat does; it does not map to anything else.
Exercise 7. How can you discover someone's goals? Assume you either cannot ask them, or would not trust their answers.
Replies from: JGWeissman, Chris_Leong, dclayh, HA2, JGWeissman, SilasBarta
↑ comment by JGWeissman · 2009-05-06T18:08:26.470Z · LW(p) · GW(p)
A collective reply to comments so far.
This forum has a wonderful feature that allows us to respond to individual comments, generating threads within the discussion that focus on a particular aspect of the topic. Using this feature would be a much better alternative to a single long comment separated from the various comments it refers to.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2009-05-07T06:56:40.515Z · LW(p) · GW(p)
As I was making a single point in response to many comments, I made the judgement that to say it once in a single place was preferable to splitting it up into many fragments.
Replies from: JGWeissman
↑ comment by JGWeissman · 2009-05-07T19:17:47.678Z · LW(p) · GW(p)
No, you made several separate points: 4 responses to 4 solutions to 4 of your exercises, a specific response to i77, your view that some commenters are trying to define away the issues you point out (which could have been a response to Vladimir_Nesov's comment), and your straw man summary of the idea that the controllers are a reflection of the rational process that produced them, together with your unsubstantiated rejection of it. You don't even indicate which comments you are responding to. Threading is not just for the benefit of the commenters you respond to, but for those who are following, and may join, the discussion.
There are quite a few different objections to your assertion that control systems work arationally, and your attempt to make a blanket refutation for all of them is unconvincing. In particular, I think you are glossing over the argument that control systems are produced by rational processes by lumping it in with the attempts to redefine models.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2009-05-07T21:57:27.072Z · LW(p) · GW(p)
Perhaps my prolixity has obscured the substance. Here is a shorter version. The claims are:
1. The concept of a model is entirely unproblematic in this forum.
2. In that entirely unproblematic sense, neither a thermostat nor a cruise control contains a model.
3. The designer of a control system has a model. That model is located in the designer. He may or may not put a model into the system he designs. In the case of the thermostat and the cruise control, he does not.
I shall not repeat all of the evidence and argument, only summarise it:
1. Evidence was given in exhausting detail that I, and we on OB/LW, and the books all mean exactly the same thing by a model. Only in the threads on my two postings on control systems have some people tried to make it mean something different. But changing the definition is irrelevant to the truth-value of the original assertions. I think that no-one is disputing this now, although I shall not be surprised to see further expansions of the concept of a model. (I look forward to SilasBarta's promised article on the subject.)
2. Except for trivial models (one scalar "modelling" another) that leave out what the controller actually does (i.e. control something), there is nothing in either of these controllers but a simple rule generating its output from its inputs. That rule is not a model of something else. It acts upon the world, it does not model the world.
3. That the designer has a model is agreed by everyone. For some reason, though, when I say that the designer has a model, as I have done several times now, people protest that the designer has a model. We are in violent agreement. As for it being in his head, where else does he keep his thinking stuff? Well, "in his head" was not accurate, he might also make a computer simulation, or a physical mock-up. But when it comes time to build the actual system, what he builds is the actual system. The designer models the system; the system does not model the designer.
As for the appropriate form of my response, my judgement on that differs from yours. I shall stop at noting this meta-level disagreement.
Replies from: pjeby, JGWeissman
↑ comment by pjeby · 2009-05-07T22:58:51.022Z · LW(p) · GW(p)
The concept of a model is entirely unproblematic in this forum.
In that entirely unproblematic sense, neither a thermostat nor a cruise control contains a model.
From a computer programmer's perspective, a model is something that reflects the state of something else -- even a trivial single value like "the current temperature" or "the desired temperature".
If a thermostat only had a desired-temperature knob or only a "current temperature" indicator, I might agree that there's no model. A thermometer and a control knob don't "model" anything, in that there is nothing "reflecting" them. In the programming sense, there's no "view" or "controller".
But the moment you make something depend on these values (which in turn depend on the state of the world), it's pretty clear in programming terms that the values are models.
↑ comment by JGWeissman · 2009-05-07T22:47:13.310Z · LW(p) · GW(p)
For some reason, though, when I say that the designer has a model, as I have done several times now, people protest that the designer has a model.
What I have observed is that you say that it is not important that the designer has a model, because that model is not part of the control system, and we protest that it is important that the designer has a model, because without that designer and his model, the control system would not exist.
We are in violent agreement.
You claimed in your previous article that control systems succeed arationally, though you do not list that claim here. Do you now agree that by following rules produced rationally by an outside agent, the control system is using rationality (indirectly) to succeed?
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2009-05-08T08:03:23.012Z · LW(p) · GW(p)
Do you now agree that by following rules produced rationally by an outside agent, the control system is using rationality (indirectly) to succeed?
No. The control system exists because of someone's rational process. Once it exists, it does not work by means of that process. When completed and installed, its operation is screened off from that earlier process. It works only by means of what the designer put into it, not how the designer did that.
The distinction of levels is important. Faced with a control system, to understand how it works it is not necessary to know the designer's thinking, although it may be illuminating in a looking-up-the-answer-in-the-back-of-the-book sort of way. It is only necessary to examine the controller. It is easy to confuse the two, because both the designer and the controller are goal-seeking entities, and there is some overlap between their goals: what the controller controls, the designer designed it to control. But what each does to that end is different.
The distinction is especially important in the case of systems created by evolution, not by a Designer. It is the same distinction that was made between maximising fitness (what the evolutionary process does) and performing the resulting adaptations (what the individual organism does).
Replies from: JGWeissman
↑ comment by JGWeissman · 2009-05-09T02:14:55.877Z · LW(p) · GW(p)
It works only by means of what the designer put into it, not how the designer did that.
You might as well say then that a rationalist only succeeds by his actions, not by the process of choosing those actions, since performing those same actions for any reasons would result in the same success. However, the reasons for those actions are important. Systematic success requires systematically choosing good actions.
A rationalist will often encounter a familiar situation, and without rebuilding his model or recalculating expected utility for various actions, will simply repeat the action taken previously, executing a cached result. This is still rational. Despite the fact that the rationality occurred much earlier than the action it caused, it still caused that action and the resulting success. Notice, using rationality does not necessarily mean going through the rational process. In this case, it means using the results that were previously produced by rationality.
A thermostat follows rational rules, despite being incapable of generating rational rules or even evaluating the effectiveness of the rules it follows. If it were completely "screened off" from the rationality that produced those rules, it would lose access to those rules. You might consider it partially "screened off" in that the rational process does not update the thermostat with new rules, but the initial rules remain a persistent link in the causal chain between rationality and the thermostat's success. I will consider a thermostat's success to be arational when it is actually produced arationally.
As for evolved control systems, evolution is a crude approximation of evidence based updating. Granted, it does not update deep models that can be used to predict the results of proposed actions. It simply updates on propositions of the form that a given allele contributes more to reproductive fitness than alternatives, as represented by the allele frequencies in the population. The crudeness of the approximation and the lack of more advanced rationality features explain why the process is so slow, but the weak rationality of the approximation explains why it works eventually. And the success of evolved control systems owes the effective rules they follow to this weak rationality in the process of evolution.
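A toy illustration of that kind of frequency updating, a bare-bones Wright-Fisher model with selection and made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(2)
pop_size, generations = 1000, 200
freq = 0.05                     # starting frequency of an allele with higher fitness
fitness_ratio = 1.05            # it confers a 5% reproductive advantage
for _ in range(generations):
    # selection shifts the expected frequency toward the fitter allele...
    expected = freq * fitness_ratio / (freq * fitness_ratio + (1 - freq))
    # ...and drift resamples a finite population around that expectation
    freq = rng.binomial(pop_size, expected) / pop_size
print(round(freq, 2))           # almost always near 1.0: the frequency has been "updated"
```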
Replies from: Cyan
↑ comment by Cyan · 2009-05-09T02:59:28.053Z · LW(p) · GW(p)
I don't think the two of you disagree about any actual thing happening when a person designs a thermostat and sets it to run, or when a homeostatic biological system evolves. You only disagree about how to use a certain word.
Replies from: JGWeissman
↑ comment by JGWeissman · 2009-05-09T04:51:44.978Z · LW(p) · GW(p)
Well, then let me taboo the issue of whether we call the control systems arational and present my position that I have been arguing for.
Control systems are systematically correlated to features of their environment, particularly the variable they control and their mechanisms for manipulating it. This correlation is achieved by some sort of evidence processor, for example, evolution or a deliberative human designer. This explains why out of the space of possible control systems, the ones we actually observe tend to be effective, as well as why control systems can be effective without processing additional evidence to increase their correlation with their environment.
Perhaps RichardKennaway could follow the same taboo and explain his position that the success of control systems indicates a problem with the importance we place on Bayescraft.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2009-05-12T07:38:27.187Z · LW(p) · GW(p)
Control systems are systematically correlated to features of their environment, particularly the variable they control and their mechanisms for manipulating it. This correlation is achieved by some sort of evidence processor, for example, evolution or a deliberative human designer.
They work either because they were designed to by people or because evolution stumbled on something that happened to work. No disagreement there. What I've been at pains to emphasize is what is in the control system and what is not. Unless one is clear about what is actually present in the control system, it is impossible to understand how it operates -- see the recent confusion about the concept of a model.
In particular, the reasons for what is in a control system being in it are among the things that are not to be found in the control system. The mechanism by which it works is completely different from the mechanism by which it was created. To discover how it works, the primary source is the mechanism itself. It is not unknown for a designer to be mistaken about how his invention really works, and "reproductive fitness" will not predict any particular mechanism, nor illuminate its operation. We already know that mammals can regulate their body temperature: "reproductive fitness" is merely an allusion to a very general mechanism that happened to come up with the phenomenon, but tells nothing about how the mammals do it.
In the case of my running examples, there is no Bayescraft* being performed by the systems themselves. That it may have happened elsewhere does not illuminate their operation.
* I suspect this word may be being stretched as well. I have understood it to mean Bayesian reasoning as a self-conscious mental art, as practiced and taught by the fictional beisutsukai, but scarcely attained to in the real world, except fitfully by occasional geniuses, and certainly not performed at all by the blind idiot god. But sometimes it seems to be being used to mean any process describable in Bayesian terms.
Replies from: JGWeissman
↑ comment by JGWeissman · 2009-05-12T18:00:35.730Z · LW(p) · GW(p)
What I've been at pains to emphasize is what is in the control system and what is not. Unless one is clear about what is actually present in the control system, it is impossible to understand how it operates -- see the recent confusion about the concept of a model.
Seriously, you can stop belaboring that point. I am well aware that the control system does not itself process evidence into correlation between itself and its environment or contain a mechanism to do so. I have also explained that the reason it does not need to process evidence to be successful is that an outside evidence processor* has created the control system with sufficient correlation to accomplish its task. Yes, we can understand specifically how that correlation causes success in particular control systems independently of understanding the source of the correlation. So what? This explanation is not the one true cause. Why is it surprising to the theory that reality funneling power comes from Bayescraft, that there is an intermediate cause between Bayescraft and the successful reality funneling?
* I consider processing evidence into correlation of something with its environment to be the core feature of Bayescraft. Processing the evidence into correlation with models that can be extended by logical deduction is an advanced feature that explains the vast difference in effectiveness of deliberative human intelligence, which uses it, and evolution, which does not.
↑ comment by Chris_Leong · 2020-03-30T00:03:13.162Z · LW(p) · GW(p)
Could you explain the answer to 4?
↑ comment by dclayh · 2009-05-06T21:23:24.922Z · LW(p) · GW(p)
Re. exercise 3, Homo economicus will in general work more hours, as working has now become more valuable relative to other uses of her time. In reality, as you say, anything could happen, depending on the individual's utility function for money.
Re. number 6, I've taken too many DEs classes to be excited by this problem, but in general a critically damped system will recover optimally from perturbations.
Re. number 7, one good way is to perturb their behavior in various ways by making them offers (ideally orthogonal ones) and observing their reactions.
↑ comment by HA2 · 2009-05-06T20:34:19.748Z · LW(p) · GW(p)
"Exercise 7. How can you discover someone's goals? Assume you either cannot ask them, or would not trust their answers."
I'd guess that the best way is to observe what they actually do and figure out what goal they might be working towards from that.
That has the unfortunate consequence of automatically assuming that they're effective at reaching their goal, though. So you can't really use a goal that you've figured out in this way to estimate how good an agent is at getting to its goals.
And it has the unfortunate side effect of ascribing 'goals' to systems that are way too simple for that to be meaningful. You might as well say that the universe has a "goal" of maximizing its entropy. I'm not sure that it's meaningful to ascribe a "goal" to a thermostat - while it's a convenient way of describing what it does ("it wants to keep the temperature constant, that's all you need to know about it"), in a community of people who talk about AI I think it would require a bit more mental machinery before it could be said to have "goals".
↑ comment by JGWeissman · 2009-05-06T18:22:19.218Z · LW(p) · GW(p)
Or the model in the designer's head is pointed to, and some sort of contagion invoked to attribute it to the thing he designed. No, this is butter spread over too much bread. That is not what is called a model anywhere on OB or LW except in these comment threads; it is not what is called a model, period.
It is not about contagion. The point is, the reason that a particular control system even exists, as a opposed to a less effective control system or no control system at all, is that a process that implements some level of rationality produced it. The fact that a control system only needs the cached results of past rationality, and does not even have the capacity to execute additional rationality, does not change the fact that rationality plays a role in its effectiveness.
Replies from: Cyan
↑ comment by Cyan · 2009-05-06T20:19:13.697Z · LW(p) · GW(p)
Semantics check: I assert that evidence accumulation does not imply some (non-zero) level of rationality. For example, evolution by natural selection accumulates evidence without any rationality. Does my word use accord with yours?
Replies from: JGWeissman
↑ comment by JGWeissman · 2009-05-06T21:17:15.261Z · LW(p) · GW(p)
I think of the process of rationality as using evidence to (on average) improve behavior in the sense of using behaviors that produce better results. Evolution is a strange example, in that this process of improvement is not deliberative. It has no model, even metaphorically, that is deeper than "this gene contributes to genetic fitness". It is incapable of processing any evidence other than the actual level of reproductive success of a genetic organism, and even then it only manages to update gene frequencies in the right direction, not nearly the rationally optimal amount (or even as close as deliberative human rationality gets). It is this small improvement in response to evidence that I consider rational (at a very low level). The fact that we can trace the causal steps of the evidence (reproductive fitness) to the improvement at a deep physical level matters only as much as the fact that in principle we could do the same with the causal steps of evidence I observe influencing the neurons in my brain which implements my rationality.
Replies from: Cyan
↑ comment by Cyan · 2009-05-06T21:21:40.456Z · LW(p) · GW(p)
...so that's a "no," then? (I don't think we disagree about what is actually (thought to be) happening, only on the words we'd use to describe it.)
Replies from: JGWeissman
↑ comment by JGWeissman · 2009-05-06T21:40:47.745Z · LW(p) · GW(p)
That is correct. We are using the word differently. What do you mean by "rationality"?
Replies from: Cyan
↑ comment by SilasBarta · 2009-05-06T17:59:11.768Z · LW(p) · GW(p)
FYI: I'm working on a reply to this, which is becoming long enough and broad enough that I'm going to submit it for the front page.
comment by Steve_Rayhawk · 2009-05-06T23:51:02.123Z · LW(p) · GW(p)
It sounds like you think that, if someone thinks about control theory the way I do, they will make prediction mistakes or bad decisions. And you want to keep people from making prediction mistakes or bad decisions, but first you have to make them see that there's something wrong with their thinking, and you don't know a direct argument that will make them see that, so you have to use a lot of arguments from examples. Can you say more directly what kinds of prediction mistakes or bad decisions you think that people who think about control theory the way I do will make?
The cruise control does not sense the gradient of the road, nor the head wind. It senses the speed of the car. It may be tuned for some broad characteristics of the vehicle, but it does not itself know those characteristics, or sense when they change, such as when passengers get in and out.
I didn't expect that the cruise control would be able to do that (without another controller for its tuning). That's why one of my list of sufficient conditions for optimality of a PID controller was, "the system is a second-order linear system with constant coefficients". If the coefficients change, then the same PID controller may not be optimal. Did you expect that I would have expected the cruise control to be able to sense changes in the characteristics of the vehicle? Or are you trying to say that, if someone thought about control systems the way I did, they would have expected the cruise control to be able to know when the vehicle characteristics change, except if they were thinking carefully at the time the way I was? For example, are you trying to say that I might have had this mistaken expectation if I was only thinking about the cruise control as part of thinking about something else? And you want to make me see that there's something wrong with my thinking that makes me make prediction mistakes when I'm not thinking carefully?
An implicit model is one in which functional relationships are expressed not as explicit functions y=f(x), but as relations g(x,y)=k.
It sounds like you want to say that a controller doesn't have any implicit model unless it has a separate, identifiable physical part or software data structure that expresses a relationship and has no other function. If one controller is mathematically equivalent to another controller that does have a separate, identifiable part that expresses a relationship, but the controller itself doesn't have a separate, identifiable part that expresses a relationship, does it still not have an implicit model?
Linear controllers are optimal for many control problems that are natural limiting cases or approximations of real-world families of control problems. In a control problem where a linear controller is the Bayes-optimal controller, it is literally impossible for any controller with different outputs from the linear controller to have a lower average cost. Even if a controller was made of separate identifiable parts that implemented the separate identifiable parts of Bayesian sequential decision theory, and even if some of those parts expressed relations between past, present, or future perception signals, reference signals, control signals, or system states, the controller still couldn't do any better than the optimal linear controller.
And all the information that a Bayesian optimal controller could use is already in either the state of the optimal linear controller or the state of the system which the controller has been controlling. If a Bayesian decision-theoretic controller had to take over from an optimal linear controller, it would have no use for any more information than the state of the controller and the perception and reference signals at the controller, which is also the only information that the linear controller was able to use. If the Bayesian controller was given more information about past reference signals or perception signals, that would not help its performance.
At any time, the posterior belief distribution in a Bayesian controller can be set equal to a new posterior belief distribution defined using only the perception signal, the reference signal, and the state that the optimal linear controller would have had at that time, and the Bayesian controller will still have optimal performance. And the state in an optimal linear controller can be set equal to a new state defined using only the perception signal, the reference signal, and the posterior belief distribution that a Bayesian controller would have had at that time. This means that, whatever information processing a Bayesian optimal controller for a linear-quadratic-Gaussian control problem would be doing that would affect the control signal, the optimal linear controller (together with the system it is controlling) is already doing that information processing. They are mathematically equivalent.
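To make the equivalence concrete, here is a sketch of a scalar, steady-state linear-quadratic-Gaussian example (made-up constants): the Bayes-optimal controller reduces to a Kalman filter feeding a fixed linear gain, i.e. a linear controller acting on its running estimate.

```python
import numpy as np

a, b = 0.95, 1.0            # plant: x' = a*x + b*u + process noise
q_cost, r_cost = 1.0, 0.1   # stage cost q*x^2 + r*u^2
w_var, v_var = 0.05, 0.2    # process and measurement noise variances

# Steady-state Riccati iterations give the feedback gain L and Kalman gain K.
P, S = 1.0, 1.0
for _ in range(1000):
    L = (b * P * a) / (r_cost + b * P * b)
    P = q_cost + a * P * a - a * P * b * L
    S_pred = a * S * a + w_var
    K = S_pred / (S_pred + v_var)
    S = (1 - K) * S_pred

rng = np.random.default_rng(1)
x, xhat = 3.0, 0.0          # true state and the controller's estimate of it
for t in range(50):
    y = x + rng.normal(0, v_var ** 0.5)   # noisy measurement
    xhat = xhat + K * (y - xhat)          # Bayesian update of the estimate
    u = -L * xhat                         # the control signal is linear in the estimate
    x = a * x + b * u + rng.normal(0, w_var ** 0.5)
    xhat = a * xhat + b * u               # predict the estimate forward
print(round(x, 2), round(xhat, 2))        # both hover near zero, up to the noise floor
```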
A linear controller can be optimal for more than one control problem. To define a linear controller's implicit model of the system and disturbances, you need a model of the reference signal and the cost functional; to define a linear controller's implicit model of the reference signal, you need a model of the cost functional and the system and disturbances; and to define a linear controller's implicit model of the cost functional, you need a model of the reference signal and of the system and disturbances. Some of these implicit models are only defined up to a constant factor. And a linear controller that is optimal for some control problems can also perform well on other control problems that are near them.
(An optimal Bayesian controller for a linear-quadratic-Gaussian control problem isn't able to change its model when the system coefficients or the statistical properties of the disturbances or reference signal change. This is because the controller would have no prior belief that a change was possible. All of the controller's prior probability would be on the belief that the system and disturbances and reference signal would act like the problem the controller was designed to be optimal for. If the controller had any prior probability on any other belief, it would make decisions that wouldn't be optimal for the problem it was designed to be optimal for.)
Now, I am not explaining control systems merely to explain control systems. The relevance to rationality is that they funnel reality into a narrow path in configuration space by entirely arational means, and thus constitute a proof by example that this is possible.
It sounds like you are saying that the math for when a controller works doesn't leave any shadow at all in the math of what a good controller does. If you aren't saying that, then I disagree less with what you have said.
This must raise the question, how much of the neural functioning of a living organism, human or lesser, operates by similar means?
Agreed.
5. What relates questions 3 and 4 to the subject of this article?
Are questions 3 and 4 situations in which people who think about control theory the way I do might make prediction mistakes when they aren't thinking carefully? Are they situations in which the employer has a mistaken implicit model of how to increase (reference signal) the employee's hours (perception signal) by changing his wages (control signal) and the medical bureaucrats have a mistaken implicit model of how to decrease (reference signal) the doctor's time per patient (perception signal) by controlling his target (control signal)? Are they situations in which the employer has a mistaken belief about the control system inside the employee and the medical bureaucrats have a mistaken belief about the control system inside the doctor?
6. Controller: o = c×(r-p). Environment: dp/dt = k×o + d. o, r, and p as above; c and k are constants; d is an arbitrary function of time (the disturbance). How fast and how accurately does this controller reject the disturbance and track the reference?
Errors will decay exponentially with rate constant k×c, if k×c is positive.
If d is constant and r is constant, then p = r+d/(k×c).
If d is zero and r = m×t, then p = r-m/(k×c): p will lag by m/(k×c). The P controller implicitly predicts that the future changes of r will on average be equal to the integral of d, which is zero. Because the average future change of r is something other than zero, on average the P controller lags. A tuned PI controller could implicitly learn m and implicitly predict future changes in r and not lag on average.
If the controller had the control law o = -c×p, its implicit model would be that r(t) will on average be equal to the integral of e^(-k×c×s)×d(t-s) with respect to s for s from zero to infinity, and that there is no information about future values of r in the current value of r that's not also in the current value of that integral. If the controller had the implicit model that r was constant at zero, its control law would be o = -∞×p, because in our model of the environment that we are using to define the controller's implicit model, there are no measurement errors, no delayed effects, and no control costs.
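A quick numerical check of those claims, using simple Euler steps and arbitrary constants:

```python
c, k = 2.0, 1.5
dt = 0.001
p = 0.0
r, d = 5.0, 1.0                     # constant reference and constant disturbance
for _ in range(20_000):
    o = c * (r - p)                 # controller
    p += dt * (k * o + d)           # environment: dp/dt = k*o + d
print(round(p, 3), round(r + d / (k * c), 3))   # both 5.333: p settles at r + d/(k*c)
```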
comment by abigailgem · 2009-05-04T12:14:19.903Z · LW(p) · GW(p)
Exercise 4. When targets were imposed on British GPs, the effect at my practice was that only a few appointments were available. I had to sit by the phone from 8.30am when the practice opened, phoning, getting the engaged tone, repeatedly pressing the redial button. Then I got the appointment time I wanted. Phone later in the day and there were no appointments available.
A GP may decide that his amour propre (signalling?) is more important than conforming to targets.
Experience on targets appears to indicate that people find ways of meeting the target, which may or may not be by achieving what the target-setter wished to achieve.
comment by billswift · 2009-05-04T18:24:13.366Z · LW(p) · GW(p)
A control system doesn't model a system; to a large degree it is the designer's model of the system it controls.
Replies from: Vladimir_Nesov, Cyan
↑ comment by Vladimir_Nesov · 2009-05-04T18:35:38.065Z · LW(p) · GW(p)
This assertion suffers from the same problems of shuffling detached handles. You need to be more technical in expressing things like this, otherwise a curious reader is left with no choice other than to invent definitions that can make your assertion either true or false.
Replies from: billswift↑ comment by billswift · 2009-05-04T21:58:37.389Z · LW(p) · GW(p)
A model is a simplified, abstracted representation of an object or system that presents only the information needed by its user. For example, the plastic models of aircraft I built as a kid abstract away everything except the external appearance; a mathematical model of a system shows only those dimensions and relationships useful to the model's users; and a control system is a model of the relationships between the stimuli and the response desired by the designer and user of the larger system being controlled (evolution as designer and organism as user, in the biological analogy).
Replies from: Sebastian_Hagen↑ comment by Sebastian_Hagen · 2009-05-05T09:39:24.481Z · LW(p) · GW(p)
A model is a simplified, abstracted representation of an object or system that presents only the information needed by its user.
If I understand this definition correctly, then temperature, as used by a thermostat, is a model of a system: It abstracts away all the details about the energy of individual particles in the system, except for a single scalar value representing the average of all those energies.
Replies from: billswift↑ comment by Cyan · 2009-05-04T19:46:13.077Z · LW(p) · GW(p)
I was going to post something in a similar vein, but then I remembered that one of RichardKennaway's points was about thinking about living beings as control systems. Evolved control systems don't have designers per se. Whether they do interesting things like account for disturbances or have internal models of the external world depends on the strength of the selection pressure that generated the system.
Replies from: JGWeissman↑ comment by JGWeissman · 2009-05-04T19:57:02.138Z · LW(p) · GW(p)
Evolved systems, though they are not designed, represent an accumulation of evidence of what did and did not work. The process of evolution is a very poor approximation of rationality, and as would be expected, it takes far more evidence for it to produce results than it would take for an ideal Bayesian intelligence, or even an imperfect human designer.
Replies from: kim0↑ comment by kim0 · 2009-05-05T09:44:31.318Z · LW(p) · GW(p)
What is your evidence for this assertion?
In my analysis, evolution by sexual reproduction can be very good at rationality, collecting about 1 bit of information per generation per individual, because an individual can be naturally selected or die only once.
The factors limiting the learning speed of evolution are the high cost of this information, namely death, and the fact that this is the only kind of data going into the system. And the value to be optimized is avoidance of death, which also avoids data gathering. And this optimization function is almost impossible to change.
If the genes were ideal Bayesian intelligences, they would still be limited by this high cost of data gathering. It would be something like this:
Consider yourself in a room. On the wall there is a lot of random data. You can do whatever you want with it, but whatever you do, it will change your chance of dying or being copied with no memory. The problem is that you do not know when you die or are copied. Your task is to decrease your chance of dying. This is tractable mathematically, but I find it somewhat tricky.
Kim Øyhus
Replies from: JGWeissman↑ comment by JGWeissman · 2009-05-05T18:11:42.669Z · LW(p) · GW(p)
Well, perhaps you could reduce the effectiveness of even a Bayesian super intelligence to the level of evolution by restricting the evidence it observes to the evidence that evolution actually uses. But that is not the point.
Evolution ignores a lot of evidence, for example, it does not notice that a gene that confers a small advantage is slowly increasing in frequency and that it would save a lot of time to just give every member of the evolving population a copy of that gene. When a mutation occurs, evolution is incapable of copying that mutation in a hundred organisms to filter out noise from other factors in evaluating its contribution to fitness. An amazingly beneficial mutation could die with the first organism to have it, because of the dumb luck of being targeted by a predator when only a few days old.
For more on the limitations of evolution, and some examples of how human intelligence does much better, see Evolutions Are Stupid.
Replies from: kim0↑ comment by kim0 · 2009-05-05T21:08:37.512Z · LW(p) · GW(p)
Very interesting article that.
However, evolution is able to test and spread many genes at the same time, thus achieving higher efficiency than the article suggests. Sort of like spread spectrum radio.
I am quite certain its speed is lower than that of some statistical methods, but not by that much. I guess it is something like a constant factor slower: the time to double a gene's concentration, compared with the time to reach 1 standard deviation of certainty about the goodness of the gene by Gaussian statistics.
Random binary natural testing of a gene is less accurate than statistics, but it avoids putting counters in the cells for each gene, thereby shrinking the cellular machinery necessary for this sort of inference and increasing the statistical power per base pair. And I know there are more complicated methods in use for some genes, such as antibodies, methylation, etc.
And then there is sexual selection, where the organisms use their brains to choose partners. This is even closer to evolution assisted by a Bayesian superintelligence.
So I guess that evolution is not so slow after all.
Kim Øyhus
Replies from: timtyler↑ comment by timtyler · 2010-09-03T08:50:22.291Z · LW(p) · GW(p)
We can see that intelligent design beats random mutations by quite a stretch - by looking at the acceleration of change due to cultural evolution and technology.
Of course cultural evolution is still a kind of evolution - but intelligent mutations, multi-partner recombination and all the other differences do seem to add up to something pretty substantial.
comment by HA2 · 2009-05-06T20:23:10.854Z · LW(p) · GW(p)
"Now, I am not explaining control systems merely to explain control systems. The relevance to rationality is that they funnel reality into a narrow path in configuration space by entirely arational means, and thus constitute a proof by example that this is possible."
I don't think you needed control systems to show this. Gravity itself is as much of a 'control system' - it minimizes the potential energy of the system! Heck, if you're doing that, lots of laws of physics fit that definition - they narrow down the set of possible realities...
" This must raise the question, how much of the neural functioning of a living organism, human or lesser, operates by similar means? "
So, I'm still not sure what you mean by 'similar means'.
We know the broad overview of how brains work - sensory neurons get triggered, they trigger other neurons, and through some web of complex things motor neurons eventually get triggered to give outputs. The stuff in the middle is hard; some of it can be described as "memory" (patterns that somehow represent past inputs), and some can be represented by various other abstractions. Control systems are probably good ways of interpreting a lot of combinations of neurons, and some have been brought up here. It seems unlikely that they would capture all of them - but if you stretch the analogy enough, perhaps it can.
"And how much of the functioning of an artificial organism must be designed to use these means? "
Must? I'd guess absolutely none. The way you have described them, control systems are not basic - for 'future perception' to truly determine current actions would break causality. So it's certainly possible to describe/build an artificial organism without using control systems, though it seems like it would be pretty inconvenient, and would make an already impossible problem pointlessly harder, given how useful you're describing them to be.
comment by i77 · 2009-05-05T12:33:28.553Z · LW(p) · GW(p)
OK. So let's take a controller with an explicit (I hope you agree) model, the Smith predictor. The controller as a whole has a model, but the subsystem C(z) (in the wiki example) does not (in your terms).
Or better yet, a Model Reference Adaptive Control system. The system as a whole IS predictive, uses models, etc., but the "core" controller subsystem does "not".
Then I'd argue that in the simple PID case, the engineer does the job of the Model/Adjusting Mechanism, and that this is a fundamental part of the implementation process (you don't just buy a PID and install it without tuning it first!).
So, in every control system there is a model. It's only that when the plant is "simple enough" and invariant in the long term, the model/adjusting subsystem is implemented in wetware and used only during installation.
This is just arguing semantics, though.
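For concreteness, here is a minimal sketch (my own, offered in the spirit of the comment above, not taken from it) of a textbook discrete-time PID controller. Nothing in it represents the plant; whatever the engineer knows about the plant survives only as the three gains chosen during tuning.

```python
class PID:
    # Textbook discrete-time PID controller; it contains no model of the plant.

    def __init__(self, kp, ki, kd, dt):
        # kp, ki, kd are the tuned gains: the only residue of the designer's knowledge.
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def output(self, reference, perception):
        error = reference - perception
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```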
comment by kim0 · 2009-05-05T07:04:04.215Z · LW(p) · GW(p)
All control systems DO have models of what they are controlling. However, the models are typically VERY simple.
A good principle for constructing control systems is: Given that I have a very simple model, how do I optimize it?
The models I learned about in cybernetics were all linear, implemented as matrices, resistors and capacitors, or discrete time step filters. The most important thing was to show that the models and reality together did not result in amplification of oscillations. Then one made sure that the system actually did some controlling, and then one could fine tune it to reality to make it faster, more stable, etc.
One big advantage of linear models is that they can be inverted, and eigenvectors found. Doing equivalent stuff for other kinds of models is often very difficult, requiring lots of computation, or is simply impossible.
As someone has written here before: it is mathematically justified to consider linear control systems as having statistical models of reality, typically involving Gaussian distributions.
Kim Øyhus
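To make the eigenvector remark concrete, here is a small illustration (my own sketch; the matrices are arbitrary example values, not any system discussed in the post). For a discrete-time linear plant x' = Ax + Bu with state feedback u = -Kx, the closed loop avoids amplifying oscillations exactly when every eigenvalue of (A - BK) lies strictly inside the unit circle.

```python
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # plant dynamics (a discretised double integrator)
B = np.array([[0.0],
              [0.1]])        # how the control output enters the plant
K = np.array([[2.0, 3.0]])   # feedback gains chosen by the designer

closed_loop = A - B @ K
eigvals = np.linalg.eigvals(closed_loop)
print(eigvals, np.all(np.abs(eigvals) < 1.0))  # True means no amplified oscillations
```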
comment by abigailgem · 2009-05-04T12:06:54.493Z · LW(p) · GW(p)
Exercise 1. Assuming the truth of the statement that the candle has the effect of raising the temperature of the thermostat like that (I do not have the knowledge to say whether that is the case), the room temperature will oscillate between 15 and 16 degrees as the temperature of the thermostat continues to oscillate between 20 and 21, until the candle burns out.
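A toy simulation of this reading of the exercise (my own sketch; the 5-degree candle offset, the heater power, and the heat-loss constant are assumptions chosen only to illustrate the comment above):

```python
def simulate(candle_offset=5.0, outside=5.0, dt=0.01, t_end=200.0):
    room, heating, t = 15.0, False, 0.0
    band = []
    while t < t_end:
        sensed = room + candle_offset       # the thermostat perceives room + candle
        if sensed < 20.0:
            heating = True                  # switch heating on below 20 at the sensor
        elif sensed > 21.0:
            heating = False                 # switch heating off above 21 at the sensor
        heat_in = 2.0 if heating else 0.0   # heater power (arbitrary units)
        leak = 0.1 * (room - outside)       # heat loss to the outside
        room += (heat_in - leak) * dt
        if t > 100.0:
            band.append(room)               # record after transients die out
        t += dt
    return min(band), max(band)

print(simulate())  # approximately (15, 16): the room runs about 5 degrees below the sensor band
```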
comment by Jotaf · 2009-05-11T01:04:28.401Z · LW(p) · GW(p)
I agree with the thesis; the referenced paper is really interesting, but the article on LW is a bit long-winded in trying to convey the notion that "there is no internal model". Amusingly, the paper's title is "Internal models in the cerebellum"!
comment by abigailgem · 2009-05-04T12:17:52.883Z · LW(p) · GW(p)
Exercise 2. The accelerator will go to maximum, and the driver will have to brake maximally, until something burns out or the driver puts the car out of gear or turns off the cruise control.