Behavior: The Control of Perception

post by Vaniver · 2015-01-21T01:21:58.801Z · LW · GW · Legacy · 26 comments


This is the second of three posts dealing with control theory and Behavior: The Control of Perception by William Powers. The previous post gave an introduction to control theory, in the hopes that a shared language will help communicate the models the book is discussing. This post discusses the model introduced in the book. The next post will provide commentary on the model and what I see as its implications, for both LW and AI.

B:CP was published in 1973 by William Powers, who was a controls engineer before he turned his attention to psychology. Perhaps unsurprisingly, he thought the best lens for psychology was the one he had been trained in, and several sections of the book contrast his approach with the behaviorist approach. That debate predates me and I find it mostly uninteresting, so I'll focus on the meat of his model and bring up behaviorism only where the contrast clarifies a difference in methodology.

The first five chapters of B:CP introduce analog computers and feedback loops, and claim that the nervous system is better modeled as an analog computer with continuous neural currents (the strength of a current is set by the rate of the underlying impulses and the number of branches, since each impulse has the same strength) than as a digital computer. On the macroscale this seems unobjectionable, and while it makes the model clearer I'm not sure it's necessary. He also steps through how to physically instantiate a number of useful mathematical functions with a handful of neurons; in general I'll skip that detailed treatment, but you should trust that the book argues for the physical plausibility of this model in far more detail than I do here.
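
As a toy illustration of the analog picture (mine, not the book's), treat a "neural current" as a continuous firing rate and build a comparator out of one rate-coded unit; the weights and numbers below are invented:

```python
def rate_neuron(excitatory_rates, inhibitory_rates):
    """One rate-coded unit: sum excitation, subtract inhibition, clip at zero."""
    current = sum(excitatory_rates) - sum(inhibitory_rates)
    return max(0.0, current)  # a cell can't fire at a negative rate

# Used as a comparator, it emits (roughly) reference - perception when that is positive:
error_signal = rate_neuron(excitatory_rates=[12.0], inhibitory_rates=[9.0])
print(error_signal)  # 3.0
```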

The sixth chapter discusses the idea of hierarchical modeling. We saw a bit of that in the last post, with the example of a satellite that had two control systems: one used sense data about the thruster impulses and the rotation to update the inertia model, and the other used the rotation and the inertia model to determine the thruster impulses. The key point here is that the models are inherently local, and thus can be separated into units. The first model doesn't have to know that there's another feedback loop; it just puts the sense data it receives through one formula and uses another formula to update its memory, which happens to reduce the error of its model. Another way to look at this is that control systems are, in some sense, agnostic about what they're sensing and what they're doing, and their reference level comes from the environment like any other input. The two satellite systems aren't stacked, but when one control circuit has no outputs except the reference levels of other control circuits, it makes sense to see the reference-setting circuit as superior in the hierarchical organization of the system.
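
A minimal sketch of that kind of stacking (my own, with invented gains and toy dynamics): the higher loop's only output is the lower loop's reference.

```python
class Controller:
    """A proportional controller: act in proportion to (reference - perception)."""
    def __init__(self, gain):
        self.gain = gain
        self.reference = 0.0

    def output(self, perception):
        return self.gain * (self.reference - perception)

low = Controller(gain=2.0)    # say, a muscle-tension loop
high = Controller(gain=5.0)   # say, a joint-angle loop that knows nothing about muscles
high.reference = 1.0          # "hold the joint at angle 1.0"

tension, angle = 0.0, 0.0
for _ in range(200):
    low.reference = high.output(angle)    # the higher loop acts only by setting a reference
    tension += 0.1 * low.output(tension)  # toy actuator dynamics
    angle += 0.1 * (tension - angle)      # toy plant: the angle follows the tension
print(round(angle, 2))  # ~0.83: near the higher reference (proportional control leaves an offset)
```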

There's a key insight hidden in this hierarchical picture, which is probably best shown by example. The next five chapters of B:CP step through five levels in the hierarchy. Imagine this section as building a model of a human as a robot: there are output devices (muscles and glands and so on) and input devices (sensory nerve endings) that are connected to the environment. Powers discusses both output and input, but here I'll just discuss output for brevity's sake.

Powers calls the first level intensity, and it deals directly with those output and input devices. Consider a muscle: the control loop there might have a reference tension that it acts to maintain, and that tension is a physical quantity out in the environment. In terms of measured units, the control loops at this level convert between some physical quantity and neural current.

Powers calls the second level sensation, and it deals with combinations of first-level sensors. As we've put all of the actual muscular effort of the arm and hand into the first level, the second level is one level of abstraction up. Powers suggests that the arm and hand have about 27 independent movements, and each movement represents some vector in the many-dimensional space of however many first-order control loops there are. (Flexing the wrist, for example, might mean an increase in the effort intensity of muscles on one side of the forearm and a decrease on the other side.) Note that from this point on, the measured units are all the same--amperes of neural current--which means we can combine unlike physical things because they have some conceptual similarity. This is the level of clustering where it starts to make sense to talk about a 'wrist,' or at least particular joints in it, or something like a 'lemonade taste,' which exists as a mental combination of the levels of sugar and acid in a liquid. The value of a hierarchy also starts to become clear: when we want to flex the wrist, we don't have to calculate what command to send to each individual muscle; we simply set a new reference level for the wrist-controller, and it adjusts the reference levels for many different muscle-controllers, as sketched below.
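
Continuing the toy sketch above (the weights and baseline are invented), a single wrist-flex reference might fan out to opposite-signed changes in the references of two antagonistic tension loops:

```python
def wrist_to_muscle_references(wrist_flex_reference, baseline_tension=0.5):
    """Map one second-level reference onto references for two first-level loops."""
    return {
        "flexor_tension_ref":   baseline_tension + 0.5 * wrist_flex_reference,
        "extensor_tension_ref": baseline_tension - 0.5 * wrist_flex_reference,
    }

print(wrist_to_muscle_references(0.4))
# {'flexor_tension_ref': 0.7, 'extensor_tension_ref': 0.3}
```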

Powers calls the third level configuration, and it deals with combinations of second level loops. The example here would be the position of the joints in the arm or hand, such as positioning the hand to perform the Vulcan salute. 

Powers calls the fourth level transition, and it deals with combinations of third level loops, as well as integration or differentiation of them. A third order control loop might put the mouth and vocal cords where they need to be to make a particular note, and a fourth order control loop could vary the note that third order control loop is trying to hit in order to create a pitch that rises at a certain rate.

Powers calls the fifth level sequence, and it deals with combinations and patterns of the fourth level loops. Here we see patterns of behavior: a higher level can direct a loop at this level to 'walk' to a particular place, and then the orders go down until muscles contract at the lowest level.

The key insight is that we can, whenever we identify a layer, see that layer as part of the environment of the hierarchy above it. At the 0th layer we have the universe; at the 1st layer we have a body; at the 2nd layer we have a brain (and maybe a spine and other bits of the nervous system). Moving up layers in the organization is like peeling an onion: we consider smaller and smaller portions of the physical brain, and more and more abstract concepts.

I'm not a neuroscientist, but I believe that up to this point Powers's account would attract little controversy. The actual organization of the body is not as cleanly pyramidal as this brief summary makes it sound, but Powers acknowledges as much, and the view remains broadly accurate and useful. There's ample neurological evidence that parts of the brain perform the particular functions we would expect the various orders of control loops to perform; the interested reader should take a look at the book.

Where Powers's argument becomes more novel, speculative, and contentious is the claim that the levels keep going up, with the same basic architecture. Instead of a layered onion of a body wrapped around an opaque homunculus mind, it's an onion all the way to the center, which Powers speculates ends at around the 9th level. (More recent work, I believe, estimates closer to 11 levels.) The hierarchy isn't necessarily neat, with clearly identifiable levels, but there is some sort of hierarchical block diagram that stretches from terminal goals to the environment. He identifies the remaining levels as relationships, algorithms (which he calls program control), principles, and system concepts. As the abstractness of the concepts would suggest, his treatment of them is vaguer, and he combines them all into a single chapter with very little of the empirical justification that filled earlier chapters.

This seems inherently plausible to me:

  1. It's parsimonious to use the same approach to signal processing everywhere, and it seems easier to just add on another layer of signal processing (which allows more than a linear increase in potential complexity of organism behavior) than to create an entirely new kind of brain structure.
  2. Deep learning and similar approaches in machine learning can fit comparable architectures in an unsupervised fashion. My understanding of the crossover between machine learning and neuroscience is that we understand machine vision the best, and many good algorithms line up with what we see in human brains--pixels get aggregated into edges, which get aggregated into shapes, and so on up the line. So we see the neural hierarchies this model predicts, though this isn't too much of a surprise, because hierarchies are the easiest structures to detect and interpret.

What is meant by "terminal goals"? Well, control systems have to get their reference from somewhere, and the structure of the brain can't be "turtles all the way up." Eventually there should be a measured variable, like "hunger," which is compared to some reference, and any difference between the variable and the reference leads to action targeted at reducing the difference.

That reference could be genetic/instinctual, or determined by early experience, or modified by chemicals, and so on, but the point is that it isn't the feedback of a neural control loop above it. Chapter 14 discusses learning as the reorganization of the control system, and the processes described there seem potentially sufficient to explain where the reference levels and the terminal goals come from.
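
As a toy sketch of that idea (this is simple hill-climbing, not Powers's actual reorganization scheme, and the 'hunger' function is invented): random changes to a lower-level parameter are kept only when they reduce a persistent top-level error.

```python
import random
random.seed(0)

def hunger(gain):
    # invented stand-in for intrinsic error: smallest when the organization's gain is near 1.0
    return abs(1.0 - gain)

gain = 0.1                      # some parameter of a lower-level control system
error = hunger(gain)
while error > 0.01:             # persistent intrinsic error drives reorganization
    candidate = gain + random.gauss(0, 0.2)
    if hunger(candidate) < error:
        gain, error = candidate, hunger(candidate)
print(round(gain, 2))           # ends up near 1.0
```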

Indeed, the entire remainder of the book, discussing emotion, conflict, and so on, fleshes out this perspective more fully than I can touch on here, so I will simply recommend reading the book if you're interested in his model. Here's a sample on conflict:

Conflict is an encounter between two control systems, an encounter of a specific kind. In effect, the two control systems attempt to control the same quantity, but with respect to two different reference levels. For one system to correct an error, the other system must experience error. There is no way for both systems to experience zero error at the same time. Therefore the outputs of the system must act on the shared controlled quantity in opposite directions.
If both systems are reasonably sensitive to error, and the two reference levels are far apart, there will be a range of values of the controlled quantity (between the reference levels) throughout which each system will contain an error signal so large that the output of each system will be solidly at its maximum. These two outputs, if about equal, will cancel, leaving essentially no net output to affect the controlled quantity. Certainly the net output cannot change as the “controlled” quantity changes in this region between the reference levels, since both outputs remain at maximum.
This means there is a range of values over which the controlled quantity cannot be protected against disturbance any more. Any moderate disturbance will change the controlled quantity, and this will change the perceptual signals in the two control systems. As long as neither reference level is closely approached, there will be no reaction to these changes on the part of the conflicted systems.
When a disturbance forces the controlled quantity close enough to either reference level, however, there will be a reaction. The control system experiencing lessened error will relax, unbalancing the net output in the direction of the other reference level. As a result, the conflicted pair of systems will act like a single system having a “virtual reference level,” between the two actual ones. A large dead zone will exist around the virtual reference level, within which there is little or no control.
In terms of real behavior, this model of conflict seems to have the right properties. Consider a person who has two goals: one to be a nice guy, and the other to be a strong, self-sufficient person. If he perceives these two conditions in the “right” way (for conflict) he may find himself wanting to be deferential and pleasant, and at the same time wanting to speak up firmly for his rights. As a result, he does neither. He drifts in a state between, his attitude fluctuating with every change in external circumstances, undirected. When cajoled and coaxed enough he may find himself beginning to warm up, smile, and think of a pleasant remark, but immediately he realizes that he is being manipulated and resentfully breaks off communication or utters a cutting remark. On the other hand if circumstances lead him to begin defending himself against unfair treatment, his first strong words fill him with remorse and he blunts his defense with an apologetic giggle. He can react only when pushed to one extreme or the other, and his reaction takes him back to the uncontrolled middle ground.
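
A toy illustration of those dynamics (the references, gain, and output limit below are invented): two bounded proportional controllers act on the same quantity q, one wanting q = -5 and the other q = +5.

```python
def clipped_output(reference, q, gain=2.0, limit=1.0):
    """A proportional controller whose output saturates at +/- limit."""
    return max(-limit, min(limit, gain * (reference - q)))

for q in [-7, -5, -3, 0, 3, 5, 7]:
    net = clipped_output(-5.0, q) + clipped_output(+5.0, q)
    print(f"q = {q:+d}: net output = {net:+.1f}")
```

Between the two references both outputs sit at their limits and cancel, so disturbances go unopposed (the dead zone); only when q approaches either reference does the pair react, pushing it back toward a "virtual reference level" in the middle.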

So what was that about behaviorism?

According to Powers, most behaviorists thought in terms of 'stimulus->response,' where you could model a creature as a lookup table that responds in a particular way to a particular stimulus. This has some obvious problems--how do we cluster stimuli? Someone saying "I love you" means very different things depending on the context. And if the creature has a goal that depends on a relationship between entities, like wanting there to be no unblocked line between its eyes and the sun, then you need to know the position of the sun to best model its response to any stimulus. Otherwise, if you just record what happens when you move a shade to the left, you'll notice that sometimes the creature moves left and sometimes it moves right. (Consider the difference between 1-place functions and 2-place functions.)
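
A minimal sketch of the 2-place point (the scenario and numbers are invented): the same intervention produces different responses because the response depends on a variable the stimulus-response table never sees.

```python
def response(shade_position, sun_position):
    """The creature acts to keep the shade between its eyes and the sun."""
    error = sun_position - shade_position   # how far the shade is from covering the sun
    if error > 0:
        return "move shade right"
    if error < 0:
        return "move shade left"
    return "stay put"

# Identical stimulus (shade at 2.0), different responses:
print(response(shade_position=2.0, sun_position=5.0))   # move shade right
print(response(shade_position=2.0, sun_position=-1.0))  # move shade left
```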

Powers discusses an experiment on neural stimulation in cats in which the researchers couldn't easily interpret what some neurons were doing in behaviorist terms, because the cat would inconsistently move one way or the other. The control theory view parsimoniously explained those neurons as higher-order ones: the electrical stimulation adjusted a reference, so the cat's original position had to be taken into account to determine the error, and it's the error, rather than the reference alone, that determines the response.

If we want to have a lookup table in which the entire life history of the creature is the input, then figuring out what this table looks like is basically impossible. We want something that's complex enough to encode realistic behavior without being complex enough to encode unrealistic behavior--that is, we want the structure of our model to match the structure of the actual brain and behavior, and it looks like the control theory view is a strong candidate.

Unfortunately, I'm not an expert in this field, so I can't tell you what the state of the academic discussion looks like now. I get the impression that a number of psychologists have at least partly bought into the BCP paradigm (called Perceptual Control Theory) and have been working on their interests for decades, but it doesn't seem to have swept the field. As a general comment, controversies like this are often resolved by synthesis rather than the complete victory of one side over the other. If modern psychologists have learned a bit of the hierarchical control systems viewpoint and avoided the worst silliness of the past, then the historic criticisms are no longer appropriate and most of the low-hanging fruit from adopting this view has already been picked.

Next: a comparison with utility, discussion of previous discussion on LW, and some thoughts on how thinking about control systems can impact thinking about AI.

26 comments


comment by [deleted] · 2015-01-21T17:46:01.927Z · LW(p) · GW(p)

Excellent post. I've been enjoying your series so far. Control theory feels useful in a "this is the key to everything" sort of way.

comment by Kaj_Sotala · 2015-01-28T07:56:45.385Z · LW(p) · GW(p)

Unfortunately, I'm not an expert in this field, so I can't tell you what the state of the academic discussion looks like now. I get the impression that a number of psychologists have at least partly bought into the BCP paradigm (called Perceptual Control Theory) and have been working on their interests for decades, but it doesn't seem to have swept the field.

At least on a superficial level, the model reminds me somewhat of the hierarchical prediction model, in that both postulate the brain to be composed of nested layers of controllers, each acting on the errors of the earlier layer. (I put together a brief summary of the paper here, though it was mainly intended as notes for myself so it's not as clear as it could be.) Do you have a sense on how similar or different the models are?

Replies from: Vaniver
comment by Vaniver · 2015-01-28T14:46:09.100Z · LW(p) · GW(p)

Thanks for the paper! It was an interesting read and seems very relevant (and now I've got some reference chains to follow).

Do you have a sense on how similar or different the models are?

My impression is that if they describe someone as a cyberneticist, then they're operating on a model that's similar enough. First three sentences of the paper:

“The whole function of the brain is summed up in: error correction.” So wrote W. Ross Ashby, the British psychiatrist and cyberneticist, some half a century ago. Computational neuroscience has come a very long way since then. There is now increasing reason to believe that Ashby’s (admittedly somewhat vague) statement is correct, and that it captures something crucial about the way that spending metabolic money to build complex brains pays dividends in the search for adaptive success.

From my read of the rest of paper, the similarities go deep. Control theory is explicitly discussed in this section:

A closely related body of work in so-called optimal feedback control theory (e.g., Todorov 2009; Todorov & Jordan 2002) displays the motor control problem as mathematically equivalent to Bayesian inference. Very roughly – see Todorov (2009) for a detailed account – you treat the desired (goal) state as observed and perform Bayesian inference to find the actions that get you there. This mapping between perception and action emerges also in some recent work on planning (e.g., Toussaint 2009). The idea, closely related to these approaches to simple movement control, is that in planning we imagine a future goal state as actual, then use Bayesian inference to find the set of intermediate states (which can now themselves be whole actions) that get us there. There is thus emerging a fundamentally unified set of computational models which, as Toussaint (2009, p. 29) comments, “does not distinguish between the problems of sensor processing, motor control, or planning.” Toussaint’s bold claim is modified, however, by the important caveat (op. cit., p. 29) that we must, in practice, deploy approximations and representations that are specialized for different tasks. But at the very least, it now seems likely that perception and action are in some deep sense computational siblings and that:

The best ways of interpreting incoming information via perception, are deeply the same as the best ways of controlling outgoing information via motor action … so the notion that there are a few specifiable computational principles governing neural function seems plausible. (Eliasmith 2007, p. 380)

Action-oriented predictive processing goes further, however, in suggesting that motor intentions actively elicit, via their unfolding into detailed motor actions, the ongoing streams of sensory (especially proprioceptive) results that our brains predict. This deep unity between perception and action emerges most clearly in the context of so-called active inference, where the agent moves its sensors in ways that amount to actively seeking or generating the sensory consequences that they (or rather, their brains) expect (see Friston 2009; Friston et al. 2010). Perception, cognition, and action – if this unifying perspective proves correct – work closely together to minimize sensory prediction errors by selectively sampling, and actively sculpting, the stimulus array. They thus conspire to move a creature through time and space in ways that fulfil an ever-changing and deeply inter-animating set of (sub-personal) expectations. According to these accounts, then:

Perceptual learning and inference is necessary to induce prior expectations about how the sensorium unfolds. Action is engaged to resample the world to fulfil these expectations. This places perception and action in intimate relation and accounts for both with the same principle. (Friston et al. 2009, p. 12)

Basically, it looks like their view fits in with the hierarchical controls view and possibly adds burdensome details (in the sense that they believe the reference values take on a specific form that the hierarchical control theory view allows but does not require).

comment by majus · 2015-02-12T17:14:57.971Z · LW(p) · GW(p)

The quote on conflict reminds me of Jaak Panksepp's "Affective Neuroscience: The Foundations of Human and Animal Emotions", or a refracted view of it presented in John Gottman's book, "The Relationship Cure". Panksepp identifies mammalian emotional command systems he names FEAR, SEEKING, RAGE, LUST, CARE, PANIC/GRIEF, PLAY; Gottman characterizes these systems as competing cognitive modules: Commander-in-chief, Explorer, Sentry, Energy Czar, Sensualist, Jester or Nest Builder. It is tempting now to think of them as very high-level controllers in the hierarchy.

comment by dvasya · 2015-01-21T21:30:50.557Z · LW(p) · GW(p)

Here's another excellent book roughly from the same time: "The Phenomenon of Science" by Valentin F. Turchin (http://pespmc1.vub.ac.be/posbook.html). It starts from largely similar concepts and proceeds through the evolution of the nervous system to language to math to science. I suspect it may be even more AI-relevant than Powers.

Replies from: Vaniver
comment by Vaniver · 2015-01-22T01:02:19.980Z · LW(p) · GW(p)

Thanks for the link (which has the free pdf, for anyone else interested)! After a few months of being only a book or two long, my reading queue is up towards a dozen again, so I'm not sure when I'll get to reading it.

comment by msheehan · 2015-02-04T09:43:41.829Z · LW(p) · GW(p)

In terms of robotics, BCP or PCT seems a lot like Rodney Brooks' Subsumption Architecture: Eliezer has written a not particularly favourable post about it. It was such an important idea that it formed the basis for nearly all robots for quite a long time. It is an idea he had in the 1980s when he was bitten by a mosquito in Indonesia while on holiday there, I believe. At the time, all robots were programmed using rules, probably close to the lookup table approach you mention, and were very slow and not particularly useful. Brooks' idea was to mimic the behaviour of very simple animals and work up to humans (a bottom-up approach), rather than the other way around, which was to try to create robots from logic given certain situations and rules (a top-down approach). A summary of his ideas is his 1990 paper, "Elephants Don't Play Chess". I thought it would be interesting to introduce this idea here since subsumption, in my mind, links BCP (or PCT) to an important sub-set of the AI world, robotics, which shows clear examples of the theory in practice. Just to let you know, the main problem that has been found with implementing subsumption-style AI is that successive layers or hierarchies of control get quite difficult to implement. Its major criticism has been that although it has led to robots that handle real-world environments very well, they tend to 'think' on the same level as insects; there are difficulties in implementing higher-level thinking, i.e. logic and reasoning.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-02-04T11:03:02.713Z · LW(p) · GW(p)

There is an important difference between hierarchical PCT and subsumption.

In subsumption, higher-level controllers operate instead of lower-level controllers. When a higher-level controller needs to do something, it overrides lower-level controllers in order to do it. The robot senses an obstacle, so the "walk forwards" code is suspended while the "avoid obstacle" code takes over driving the legs.

In HPCT, higher-level controllers operate by means of lower-level controllers. When a higher-level controller needs to do something, it does so by setting reference levels for lower-level controllers. When the robot encounters an obstacle, the reference for desired direction of motion is changed, and the walk controllers continue to do their job with a different set of reference signals. The obstacle-avoidance controller does not even need to know whether the robot is on legs or wheels, only that it can send a signal "go in this direction" and it will happen. Each layer of controllers, in effect, implements a set of virtual actuators for the next level up to use.
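
A minimal sketch of the contrast (names and numbers invented):

```python
def subsumption_step(obstacle_ahead, walk_output):
    # The higher layer overrides the lower layer's output entirely.
    return "turn left" if obstacle_ahead else walk_output

def hpct_step(obstacle_ahead, heading_controller):
    # The higher layer only moves the lower layer's reference; the heading
    # controller keeps running and never knows why its reference changed.
    heading_controller["reference"] = 90.0 if obstacle_ahead else 0.0
    return heading_controller

print(subsumption_step(True, "walk forwards"))           # 'turn left'
print(hpct_step(True, {"reference": 0.0, "gain": 2.0}))  # {'reference': 90.0, 'gain': 2.0}
```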

comment by SarahNibs (GuySrinivasan) · 2015-01-21T15:32:31.318Z · LW(p) · GW(p)

Suppose I am in the presence of a bunch of data going this way and that into and out of a bunch of black boxes. What kind of math or statistics might tell me or suggest to me that boxes 2, 7, and 32 are probably simple control systems with properties x, y, and z? Seems I should be looking for a function of the inputs that is "surprisingly" approximately constant, and if there's a simple map from that function's output to states of some subset of the outputs then we've got a very strong clue, or if we find that some output strongly negatively correlates with a seemingly unrelated time series somewhere else that might be a clue... Anyone have a link to a good paper on this?

Replies from: Vaniver
comment by Vaniver · 2015-01-21T16:26:38.334Z · LW(p) · GW(p)

Seems I should be looking for a function of the inputs that is "surprisingly" approximately constant

I think in most situations where you don't have internal observations of the various actors, it's more likely that outputs will be constant than a function of the inputs. That is, a control system adjusts the relationship between an input and an output, often by counteracting it completely--thus we would see the absence of a relationship that we would normally expect to see. (But if we don't know what we would normally expect, then we have trouble.)

Anyone have a link to a good paper on this?

I'm leaning pretty heavily on a single professor/concept for this answer, but there's a phrase called "Milton Friedman's Thermostat," perhaps best explained here (which also has a few links for going further down the trail):

If a house has a good thermostat, we should observe a strong negative correlation between the amount of oil burned in the furnace (M), and the outside temperature (V). But we should observe no correlation between the amount of oil burned in the furnace (M) and the inside temperature (P). And we should observe no correlation between the outside temperature (V) and the inside temperature (P).

An econometrician, observing the data, concludes that the amount of oil burned had no effect on the inside temperature. Neither did the outside temperature. The only effect of burning oil seemed to be that it reduced the outside temperature. An increase in M will cause a decline in V, and have no effect on P.

A second econometrician, observing the same data, concludes that causality runs in the opposite direction. The only effect of an increase in outside temperature is to reduce the amount of oil burned. An increase in V will cause a decline in M, and have no effect on P.

But both agree that M and V are irrelevant for P. They switch off the furnace, and stop wasting their money on oil.

They also give another example with a driver adjusting how much to press the gas pedal based on hills here, along with a few ideas on how to discover the underlying relationships.


I feel like it's worth mentioning the general project of discovering causality (my review of Pearl, Eliezer's treatment), but that seems like it's going in the reverse direction. If a controller is deleting correlations from your sense data, that makes discovering causality harder, and it seems difficult to say "aha, causality is harder to discover than normal, therefore there are controllers!", but that might actually be effective.
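
To see the thermostat parable numerically, here's a toy simulation (all parameters invented): a PI controller burns oil (M) to hold the inside temperature (P) at its reference while the outside temperature (V) wanders.

```python
import numpy as np

rng = np.random.default_rng(0)
steps, ref = 5000, 20.0
V = 5.0 + 3.0 * np.sin(2 * np.pi * np.arange(steps) / 2000) + rng.normal(0, 0.2, steps)
P, M = np.empty(steps), np.empty(steps)

p, integral = ref, 0.0
for t in range(steps):
    error = ref - p
    integral += error
    m = max(0.0, 2.0 * error + 0.1 * integral)              # PI controller; can't burn negative oil
    p += 0.2 * m - 0.1 * (p - V[t]) + rng.normal(0, 0.05)   # toy house: furnace heat in, leak to outside
    P[t], M[t] = p, m

warm = slice(500, None)   # drop the start-up transient
for a, b, name in [(M, V, "corr(M, V)"), (M, P, "corr(M, P)"), (V, P, "corr(V, P)")]:
    print(name, round(float(np.corrcoef(a[warm], b[warm])[0, 1]), 2))
# Expect roughly: corr(M, V) strongly negative, the other two much closer to zero.
```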

Replies from: Richard_Kennaway, Richard_Kennaway
comment by Richard_Kennaway · 2015-01-21T22:37:55.035Z · LW(p) · GW(p)

If a controller is deleting correlations from your sense data, that makes discovering causality harder, and it seems difficult to say "aha, causality is harder to discover than normal, therefore there are controllers!", but that might actually be effective.

Yes, in the PCT field this is called the Test for the Controlled Variable. Push on a variable, and if it does not change, and it doesn't appear to be nailed down, there's probably a control system there.
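
A rough sketch of the Test in code (the threshold and interface are invented): apply a known disturbance repeatedly and check whether the variable shifts far less than the disturbance alone should have produced.

```python
def test_for_controlled_variable(observe, disturb, expected_shift, trials=20):
    """observe() reads the candidate variable; disturb() applies a known push."""
    shifts = []
    for _ in range(trials):
        before = observe()
        disturb()
        shifts.append(observe() - before)
    mean_shift = sum(shifts) / trials
    # If the push is almost entirely cancelled, suspect a controller is opposing it.
    return abs(mean_shift) < 0.1 * abs(expected_shift)
```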

I have an unpublished paper relating the phenomenon to causal analysis à la Pearl, but it's been turned down by two journals so far, and I'm not sure I can be bothered to rewrite it again.

Replies from: V_V, Vaniver
comment by V_V · 2015-02-04T14:18:26.473Z · LW(p) · GW(p)

I have an unpublished paper relating the phenomenon to causal analysis à la Pearl, but it's been turned down by two journals so far, and I'm not sure I can be bothered to rewrite it again.

arXiv?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-02-04T14:28:26.020Z · LW(p) · GW(p)

arXiv?

I looked at arXiv, but there's still a gateway process. It's less onerous than passing referee scrutiny, but still involves getting someone else with sufficient reputation on arXiv to ok it. As far as I know, no-one in my university department or in the research institute I work at has ever published anything there. I have accounts on researchgate and academia.edu, so I could stick it there.

Replies from: IlyaShpitser, Lumifer
comment by IlyaShpitser · 2015-02-04T15:38:17.375Z · LW(p) · GW(p)

I have never had any issues putting things up on the arXiv (just have to get through their latex process, which has some wrinkles). I think I have seen a draft of your paper, and I don't see how arXiv would have an issue with it. Did arXiv reject your draft somehow?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-02-04T21:13:46.159Z · LW(p) · GW(p)

I haven't sent it there. I created an account on arXiv a while back, and as far as I recall there was some process requiring a submission from someone new to be endorsed by someone else. This, I think, although on rereading I see that it only says that they "may" post facto require endorsement of submissions by authors new to arXiv, it's not a required part of the submission process. What happened the very first time you put something there?

Replies from: satt
comment by satt · 2015-02-09T04:05:09.264Z · LW(p) · GW(p)

(I know I'm not IlyaShpitser, but better my reply than no reply.) I have several papers on the arXiv, and the very first time I submitted one I remember it being automatically posted without needing endorsement (and searching my inbox confirms that; there's no extra email there asking me to find an endorser). If you submit a not-obviously-cranky-or-offtopic preprint from a university email address I expect it to sail right through.

Replies from: Richard_Kennaway, None, alienist
comment by Richard_Kennaway · 2015-02-11T07:46:30.425Z · LW(p) · GW(p)

Well, I've just managed to put a paper up on arXiv (a different one that's been in the file drawer for years), so that works.

comment by [deleted] · 2015-02-11T02:09:13.651Z · LW(p) · GW(p)

Because they're so small, I feel like their policies can be really inconsistent from circumstance to circumstance. I've got a couple papers on arXiv, but my third one has been mysteriously on hold for some months now for reasons that are entirely unclear to me.

comment by alienist · 2015-02-10T05:04:34.031Z · LW(p) · GW(p)

(I know I'm not IlyaShpitser, but better my reply than no reply.) I have several papers on the arXiv, and the very first time I submitted one I remember it being automatically posted without needing endorsement

How long ago was this? I believe the endorsement for new submitters requirement was added ~6 years ago.

Replies from: satt
comment by satt · 2015-02-11T00:26:53.082Z · LW(p) · GW(p)

My first submission was in 2012. I'm fairly sure I read about the potential endorsement-for-new-submitters condition at the time, too.

comment by Lumifer · 2015-02-04T15:26:02.604Z · LW(p) · GW(p)

SSRN?

comment by Vaniver · 2015-01-22T01:00:27.642Z · LW(p) · GW(p)

I have an unpublished paper relating the phenomenon to causal analysis à la Pearl, but it's been turned down by two journals so far, and I'm not sure I can be bothered to rewrite it again.

I'd be interested in seeing it, if you don't mind! (My email is my username at gmail, or you can contact me any of the normal ways.)

comment by Richard_Kennaway · 2015-01-22T00:11:19.031Z · LW(p) · GW(p)

That is, a control system adjusts the relationship between an input and an output, often by counteracting it completely--thus we would see the absence of a relationship that we would normally expect to see.

The words "input" and "output" are not right here. A controller has two signals coming into it and one coming out of it. What you above called the "output" is actually one of the input signals, the perception. This is fundamental to understanding control systems.

The two signals going into the controller are the reference and the perception. The reference is the value to which the control system is trying to bring the perception. The signal coming out of the controller is the output, action or behaviour of the controller. The action is emitted in order to bring the perception towards the reference. The controller is controlling the relationship between its two input signals, trying to make that relationship the identity. The italicised words are somewhere between definitions and descriptions. They are the usual words used to name these signals in PCT, but this usage is an instance of their everyday meanings.

In concrete terms, a thermostat's perception is (some measure of) the actual temperature. Its reference signal is the setting of the desired temperature on a dial. Its output or behaviour is the signal it sends to turn the heat source on and off. In a well-functioning control system, one observes that as the reference changes, the perception tracks it very closely, while the output signal has zero correlation with both of them. The purpose of the behaviour is to control the perception -- hence the title of William Powers' book, "Behavior: The Control of Perception". All of the behaviour of living organisms is undertaken for a purpose: to bring some perception close to some reference.

Replies from: Vaniver
comment by Vaniver · 2015-01-22T00:52:34.494Z · LW(p) · GW(p)

The words "input" and "output" are not right here.

Yeah, that paragraph was sloppy and the previous sentence didn't add much, so I deleted it and reworded the sentence you quoted. I'm used to flipping my perspective around a system, and thus 'output' and 'input' are more like 'left' and 'right' to me than invariant relationships like 'clockwise' and 'counterclockwise'-- with the result that I'll sometimes be looking at something from the opposite direction of someone else. "Left! No, house left!"

(In this particular case, the system output and the controller input are the same thing, and the system input is the disturbance that the controller counteracts, and I assumed you didn't have access to the controller's other input, the reference.)

comment by Arkanj3l · 2015-01-29T18:23:12.639Z · LW(p) · GW(p)

Similar in theme is "Vehicles: Experiments in Synthetic Psychology" by Valentino Braitenberg, which shows how simple systems that aren't goal-driven can nonetheless produce behavior that we characterize as emotional or thoughtful, somehow. It's more exploratory and illustrative than principled or conceptual, but should be a good read.

comment by Flextechmgmt · 2015-01-25T23:46:51.383Z · LW(p) · GW(p)

I agree with your post for the most part, with a few caveats. There are far more conflicts for gamers or athletes than this theory accounts for. Conflicts (& their resolutions) are the most important part of behaviorism. Every match online, say, on Battle.net playing Starcraft 2, is a fight, also known as a conflict. Therefore if a person plays several game matches a night, he is engaged in several complex conflict events that require later thought while lying in bed in order to coherently analyze & find points of behavioral improvement. Therefore I find that your theory is insufficiently general to be useful on a large scale.