Stigmergy and Pickering's Mangle

post by Johnicholas · 2010-01-02T19:14:21.432Z

Stigmergy is the notion that an agent's behavior is sometimes best understood as coordinated by the agent's environment. In particular, social insects build nests, which have a recognizable standard pattern (different patterns for different species). Does the wasp or termite have an idea of what the standard pattern is? Probably not. Instead, the computation inside the insect is a stateless stimulus/response rule set. The partially-constructed nest catalyzes the next construction step.

An unintelligent "insect" clambering energetically around a convoluted "nest", with the insect's local perceptions driving its local modifications, is recognizably something like a Turing machine. The system as a whole can be more intelligent than either the (stateless) insect or the (passive) nest; the important computation lies in the interaction between the agent and the environment.

Theraulaz and Bonabeau have simulated lattice swarms and obtained some surprisingly realistic wasp-nest-like constructions. A paper is on CiteSeer, but this summary gives a better rapid overview.
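
To make the stimulus/response picture concrete, here is a minimal lattice-swarm sketch in Python - my own toy, not Theraulaz and Bonabeau's actual model (theirs is three-dimensional, with several brick types). The agent is completely stateless: its entire "program" is a rule mapping the local neighborhood onto a build-or-don't decision, and all coordination happens through the grid. The grid size, rule set, and agent count are arbitrary illustrative choices.

```python
import random

# A stateless "insect" on a 2D lattice: it carries no memory of the nest.
# Its only stimulus is the local neighborhood; its only responses are to
# deposit material (or not) and take a random step.
SIZE = 21
grid = [[0] * SIZE for _ in range(SIZE)]
grid[SIZE // 2][SIZE // 2] = 1  # a seed "brick" catalyzes all later building

def occupied_neighbors(x, y):
    """Stimulus: how many of the 8 surrounding cells hold material."""
    return sum(grid[(y + dy) % SIZE][(x + dx) % SIZE]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dx, dy) != (0, 0))

# Response rule: build when exactly 1 or 2 neighbors are filled. Varying
# this set is the toy analogue of different species' rule sets.
BUILD_STIMULI = {1, 2}

agents = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(10)]

for step in range(2000):
    for i, (x, y) in enumerate(agents):
        if grid[y][x] == 0 and occupied_neighbors(x, y) in BUILD_STIMULI:
            grid[y][x] = 1  # the trace lives in the environment, not the agent
        dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        agents[i] = ((x + dx) % SIZE, (y + dy) % SIZE)

for row in grid:
    print("".join("#" if cell else "." for cell in row))
```

Different rule sets grow visibly different structures, even though no agent ever represents the overall pattern.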

Humans modify the environment (e.g. by writing books and storing them in libraries), and human behavior is affected by the environment (e.g. by reading books). Wikipedia is an excellent example of what human-human stigmergic coordination looks like. Rather than editors interacting directly with one another, each edit leaves a trace, and future edits respond to that trace (this impersonal interaction may avoid some biases toward face-saving and status-seeking).

Andrew Pickering is a sociologist who studies science and technology. He wrote a book called "The Mangle of Practice". He includes non-human "actors" in his sociology. For example, he would say that a bubble chamber acts on a human observer when the observer sees a tracery of bubbles after a particle-physics experiment. This makes his theory less society-centric and more recognizable to a non-sociologist.

As a programmer, the best way I can explain Pickering's mangle is by reference to programming. In trying to accomplish something using a computer, you start with a goal, a desired "capture of machinic agency". You interact with the computer, alternating between human-acts-on-computer (edit) phases, and computer-acts-on-human (run) phases. In this process, the computer may display "resistances" and, as a consequence, you might change your goals. Not all things are possible or feasible, and one way that we discover impossibilities and infeasibilities is via these resistances. Pickering would say that your goals have been "mangled". Symmetrically, the computer program gets mangled by your agency (mangled into existence, even).

Pickering says that all of science and technology can be described by a network including both human and non-human actors, mangling each other over time, and in his book he has some carefully worked-out examples: Donald Glaser's invention of the bubble chamber, Morpurgo's experiments measuring an upper bound on the number of free quarks, Hamilton's invention of quaternions, and a few more.

I hope you find these notions (stigmergy and the mangle) as provocative and intriguing as I do. The rest of this post is my own thoughts, far more speculative and probably not as valuable.

Around each individual is a shell of physical traces that they have made - books that they've chosen to keep nearby, mementos and art that they have collected, documents that they've written. At larger radii, those shells become sparser and intermingle more, but eventually those physical traces comprise a lot of what we call "civilization". Should a person's shell of traces be considered integral to their identity?

Most of the dramatic increases in our civilization's power and knowledge over the past few thousand years have been improvements in these stigmergic traces. Does this suggest that active, deliberate stigmergy is an appropriate self-improvement technique for rationality and other desirable traits? Maybe exoself software would be a good human rationality-improving project. I wrote a little seed called exomustard, but it doesn't do much of anything.

Might it be possible for some form of life to exist within the interaction between humans and their environment? Perhaps the network of roads, cars, and car-driving could be viewed as a form of life. If all physical roads and cars were erased, humans would remember them and build them again. If all memory and experience of roads and cars were erased, humans would discover the use of the physical objects quickly. But if both were erased simultaneously, it seems entirely plausible that some other form of transportation would become dominant. Nations are another example. These entities self-catalyze, maintaining their existence and some of their properties in the face of change - and those are lifelike properties.

What would conflict between a human and a stigmergic, mangled-into-existence "capture of machinic agency" look like? At first glance, this notion seems like some quixotic quest to defeat the idea and existence of automobiles (or even windmills). However, the mangle already includes the notion of conflict, as "resistances". Some resistances, like the speed of light or the semi-conservation of entropy, we're probably going to have to live with; those aren't the ones we're interested in. There are also accidental resistances, due to choices made earlier in the mangling process.

Bob Martin has a paper listing some symptoms of bad design - rigidity, fragility, immobility, viscosity. We might informally say "The system *wants* to do such-and-so.", often meaning some sort of inertia, a tendency to continue on a previous path. These are examples of accidental resistances that humans chose to mangle into existence, and then later regret. Every time you find yourself saying "well, it's not good, but it's what we have and it would be too expensive/risky/impractical to change", you're finding yourself in conflict with a stigmergic pattern.

Paul Grignon has a video, "Money as Debt", that describes a world where we have gradually built an institution over centuries which is powerful and which (homeostatically) defends its own existence, but which also (due to its size, power, and accidentally-built-in drive) steers the world toward disaster. The video trips a lot of my conspiracy-theory sensors, but Grignon's specific claims are not necessary for the general principle to be sound: we can build institutions into our civilization that subsequently have powerful steering effects on that civilization - steering it onto a collision course with a wall, maybe.

In conclusion, stigmergy and Pickering's mangle are interesting and provocative ideas and might be useful building blocks for techniques to increase human rationality and reduce existential risk.

9 comments

comment by Steve_Rayhawk · 2010-01-03T12:26:47.898Z

Sasha Chislenko, when he was still alive, took this idea in the opposite direction:

Traffic signs are a great invention embodying the ancient dream of humanity: they tell you where you should stop, when you can go, what is around the corner, and some of them - traffic lights - even predict what will happen in near future. That's about all I would like to know on my life path. People dreamed of knowing these things before traffic signs existed, and they looked at nature trying to interpret natural events as "omens" - signs put on their path by some intelligent agencies, with the purpose to tell them where they should go and what they can expect to happen there. Alas, nature wasn't very helpful here, even though eventually people learned to recognize some phenomena as precursors to others. So with time, people decided to take things into their own hands, and to augment natural locales with clear indicators of what should be happening here, what is around, where are other similar places, and provide any other information that may be of value to their visitor.

[. . .] A similar process once happened to primitive biological organisms, which used to suffer from lack of guidance and tended to "fall into the same pits" over and over again. Road signs would have been quite useful then as well, but putting them up was too hard. So the poor critters developed little attached signs that would see the situation and tell the bodies what is going on and what they should do. We call these "smart signs" sensors and brains[. . .]

(Sasha was deeply interested in technologies for collective epistemology. I wonder if we would be in as bad a position as we're in now if he hadn't died... Which also means I wonder if there's anyone else we—("we", who?—) should be taking unusual effort to keep alive.)

comment by Kaj_Sotala · 2010-01-03T08:38:32.605Z

It's not entirely the same thing, but this reminds me of the distributed cognition paradigm. Compare with the quote from this introductory article:

In several environments we found subjects using space to simplify choice by creating arrangements that served as heuristic cues. For instance, we saw them covering things, such as garbage disposal units or hot handles, thereby hiding certain affordances or signaling a warning and so constraining what would be seen as feasible. At other times they would highlight affordances by putting items needing immediate attention near to them, or creating piles that had to be dealt with. We saw them lay down items for assembly in a way that was unambiguously encoding the order in which they were to be put together or handed off. That is, they were using space to encode ordering information and so were off-loading memory. These are just a few of the techniques we saw them use to make their decision problems combinatorially less complex.

We also found subjects reorganizing their workspace to facilitate perception: to make it possible to notice properties or categories that were not noticed before, to make it easier to find relevant items, to make it easier for the visual system to track items. One subject explained how his father taught him to place the various pieces of his dismantled bicycle, many of which were small, on a sheet of newspaper. This made the small pieces easier to locate and less likely to be kicked about. In videos of cooking we found chefs distinguishing otherwise identical spoons by placing them beside key ingredients or on the lids of their respective saucepans, thereby using their positions to differentiate or mark them. We found jigsaw puzzlers grouping similar pieces together, thereby exploiting the capacity of the visual system to note finer differences between pieces when surrounded by similar pieces than when surrounded by different pieces.

Finally, we found a host of ways that embodied agents enlist the world to perform computation for them. Familiar examples of such off-loading show up in analog computations. When the tallest spaghetti noodle is singled out from its neighbors by striking the bundle on a table, a sort computation is performed by using the material and spatial properties of the world. But more prosaically we have found in laboratory studies of the computer game Tetris that players physically manipulate forms to save themselves computational effort [Kirsh 2001; Kirsh and Maglio 1995]. They modify the environment to cue recall, to speed up identification, and to generate mental images faster than they could if unaided. In short, they make changes to the world to save themselves costly and potentially error-prone computations.

All the work we have discussed above points to one fact: people form a tightly coupled system with their environments. The environment is one’s partner or cognitive ally in the struggle to control activity. Although most of us are unaware of it, we constantly create external scaffolding to simplify our cognitive tasks.

comment by Nominull · 2010-01-03T05:45:02.369Z

What does this predict?

Replies from: Morendil
comment by Morendil · 2010-01-03T07:03:28.498Z

Pickering's "mangle" isn't so much a theory, which would entail distinct predictions in some situations, as a change in perspective.

Think of it, to start with, as writing the same equations using a different notation. In math, this often has powerful effects - even though it shouldn't, since you're saying the same thing. But the new notation appeals to different intuitions, which may make it easier to think about the underlying situation.

The "mangle" term is a short word for "dialectic of resistance and accomodation", where "dialectic" is itself a term of art among philosophers, for ideas which have some kind of generative quality. The concrete constituents of Pickering's "mangle" are things like transposition, bridgding, filling, free moves and forced moves. These terms occur in an extended analysis of "thinking by analogy". A good illustration is Pickering's dissection of quaternions.

Very briefly, Hamilton starts out interested in extending complex numbers to three dimensions. This is a "free move", and in a certain sense every analogy is a free move of that same type. You take something in one domain and "transpose" it to a different domain. In most cases this transposition isn't direct and easy; for instance, Hamilton had to try several different ways of extending complex numbers before hitting on one that made sense.

You have to fiddle with your tentative first results quite a bit, in other words, and the "mangle" view is that this fiddling can be interpreted as your being acted on by reality, just as much as you're acting on reality.

In this sense the mangle view predicts something: that in the history of ideas, whether scientific, artistic or political, we shouldn't expect innovation to be direct and easy, but to be characterized by unexpected resistances and thinkers' accommodation to such resistances.

One practical application for rationalists is to more readily distinguish truth from fiction in accounts of how a given idea was discovered. Actual accounts most often involve false starts, interventions by unlikely characters, etc. If the account is too glib, too much a just-so story where an idealized Scientist comes along and thanks to an act of Original Seeing discerns a truth which had eluded everyone before... it's probably myth.

comment by CronoDAS · 2010-01-02T23:23:46.466Z

Every time you find yourself saying "well, it's not good, but it's what we have and it would be too expensive/risky/impractical to change", you're finding yourself in conflict with a stigmergic pattern.

Microsoft Windows, the QWERTY keyboard, the Electoral College, and the roads in Boston are things that tend to fall under this category - things that people often wish they could change but can't.

comment by CronoDAS · 2010-01-02T23:20:35.211Z

I've seen "Money as Debt". The description of the mechanics of money creation is accurate, but the "And this is bad" part is rather nuts. (The part where they say that bankers will own everything because they charge interest fails logic forever; money lenders spend the money they receive as interest, so their share of wealth isn't destined to increase to 100%.)

Replies from: Johnicholas
comment by Johnicholas · 2010-01-03T22:47:28.694Z

The detailed claims in the video are dubious, but the broad scenario - somehow installing a mechanism in our civilization that steers us toward exponential growth of a sort that the earth cannot support - seems like a legitimate existential risk to me.

comment by Morendil · 2010-01-02T20:05:26.168Z

Thanks for an interesting introduction to these ideas - quite different from how I might have chosen to present them, but all the more interesting for that.

Pickering and others' sociology of knowledge points to an important distinction between "pure reason" and "rationality". The distinction is under-appreciated and, it seems to me, one explanation for the "Spock" caricature of rationality.