Agency is bugs and uncertainty

post by Shmi (shminux) · 2015-06-06T04:53:19.307Z · LW · GW · Legacy · 30 comments

(Epistemic status: often discussed in bits and pieces, but I haven't seen it summarized in one place anywhere.)

Do you feel that your computer sometimes has a mind of its own? "I have no idea why it is doing that!" Do you feel that the more you understand and predict someone's actions, the less intelligent and more "mechanical" they appear?

My guess is that, in many cases, agency (as in, the capacity to act and make choices) is a manifestation of the observer's inability to explain and predict the agent's actions. To Omega in Newcomb's problem, humans are just automatons without a hint of agency. To a game player, some NPCs appear stupid and others smart, and the more you play and the more you can predict the NPCs, the less agenty they appear to you.

Note that randomness is not the same as uncertainty: predicting that someone or something behaves randomly is still a prediction. What I mean is more like Knightian uncertainty, where one fails to make a useful prediction at all. Even a tornado may appear to be going after you intentionally if you cannot predict where it will move next and you have trouble escaping it.
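
To make this concrete, here is a minimal sketch (the die, the models, and the numbers are purely illustrative): saying "it behaves randomly" amounts to proposing a distribution, and a proposed distribution can be scored against outcomes. Knightian uncertainty is the case where no distribution you can propose scores meaningfully better than ignorance.

```python
# Sketch: "it behaves randomly" is itself a testable prediction, because a
# known distribution can be scored against observed outcomes.
import math
import random

random.seed(0)

rolls = [random.randint(1, 6) for _ in range(10_000)]  # a fair die

# Prediction "each face is equally likely" -- average log-loss per roll.
uniform_logloss = -sum(math.log(1 / 6) for _ in rolls) / len(rolls)

# A confident but wrong model ("it's almost always a six") scores worse.
biased = {face: 0.02 for face in range(1, 6)}
biased[6] = 0.90
biased_logloss = -sum(math.log(biased[r]) for r in rolls) / len(rolls)

print(f"uniform model log-loss: {uniform_logloss:.3f}")  # ~1.792 = ln(6)
print(f"biased model log-loss:  {biased_logloss:.3f}")   # noticeably higher
```

Under Knightian uncertainty there is no distribution to score in the first place; that is the sense in which one "fails to make a useful prediction at all."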

If you are a user of a computer program and it does not behave as you expect, you often get the feeling of a hostile intelligence opposing you, which occasionally results in aggressive behavior toward it, usually verbal abuse, though sometimes physical, the way we would confront an actual enemy. On the other hand, if you are the programmer who wrote the code in question, you think of the misbehavior as bugs, not intentional hostility, and treat the code by debugging or documenting it. Mostly. Sometimes I personalize especially nasty bugs.

I was told by a nurse that this is also how they are taught to treat difficult patients: you don't get upset at someone's misbehavior; instead you treat them not as an agent but more like an algorithm in need of debugging. Parents of young children are also advised to take this approach.

This seems to apply to self-analysis as well, though to a lesser degree. If you know yourself well and can predict what you would do in a specific situation, you may feel that your response is mechanistic or automatic, not agenty or intelligent. Or maybe not. I am not sure. I think if I had the capacity for full introspection, not just a surface-level understanding of my thoughts and actions, I would ascribe much less agency to myself. Probably because it would cease to be a useful concept. I wonder if this generalizes to a superintelligence capable of perfect or near-perfect self-reflection.

This leads us to the issue of feelings, deliberate choices, free will, and the ability to consent and take responsibility. These seem to be useful, if illusory, concepts when you live among your intellectual peers and want to be treated as having at least as much agency as you ascribe to them. But this is a topic for a different post.

 

30 comments

Comments sorted by top scores.

comment by spxtr · 2015-06-06T05:37:32.310Z · LW(p) · GW(p)

Story time! Shortly after Brawl came out, I got pretty good at it. I could beat all my friends without much effort, so I decided to enter a local tournament. In my first round I went up against the best player in my state, and I managed to hit him once, lightly, over the course of two games. I later became pretty good friends with him and practiced with him regularly.

At some point I completely eclipsed my non-competitive friends, to the extent that playing with them felt like a chore. All I had to do was put them in certain situations where I knew how they would react and then punish. It became a simple algorithm. Get behind them in shield, wait for the roll, punish. Put them in the air, jump after them, wait for the airdodge, punish. Throw them off the ledge, wait for the jump, punish. It felt like I was playing a CPU.

Meanwhile, I still couldn't reliably beat the best player from my state. One day, after he took off a particularly gruesome stock, I paused and, exasperated, asked for advice. We watched a replay and he showed me how I responded to certain situations in the same way every time, leading to a punish. My habits were less obvious than those of my friends, but they were still habits. He said, "you play like a robot, in a bad way."

So yeah. In that context, I've downgraded friends to CPUs because of their predictability, and been downgraded to a CPU by Omega because of my predictability.

Replies from: MrMind, shminux
comment by MrMind · 2015-06-08T07:42:45.691Z · LW(p) · GW(p)

To a lesser degree, I feel the same thing happening to me in go.
I regularly play against GnuGo, and at lower levels (that is, when the CPU has more handicap) I can strongly feel where it's going and beat it pretty solidly. Without handicap, though, all I can manage is a tie, and it feels a lot more unpredictable.

comment by Shmi (shminux) · 2015-06-07T02:50:37.033Z · LW(p) · GW(p)

Did you learn from it? Improve your Brawl-agency?

Replies from: spxtr
comment by spxtr · 2015-06-08T04:20:46.113Z · LW(p) · GW(p)

It might be wishful thinking, but I feel like my smash experience improved my meatspace-agency as well.

comment by Houshalter · 2015-06-07T09:33:30.561Z · LW(p) · GW(p)

Ancient people, who didn't know any better, saw agency in everything around them. There was a weather god controlling the weather, a sea god controlling the sea, and so on. I wonder to what extent your thesis explains that.

EDIT: This could explain the AI effect too. Once we understand how something works, we don't see it as AI.

Replies from: shminux
comment by Shmi (shminux) · 2015-06-08T19:49:38.639Z · LW(p) · GW(p)

A good example re the AI effect.

comment by Unknowns · 2015-06-06T06:06:10.018Z · LW(p) · GW(p)

If you know yourself well enough to know for sure what you would do in a certain situation, but don't like what you would do, then you consider this mechanistic and not agenty. But you think it is agenty if you like it and think it's a great idea. So this leads to a bias in favor of thinking that whatever you are going to do must be a great idea. You want to think that so that you can think that you are agenty, even if you are not.

Replies from: shminux
comment by Shmi (shminux) · 2015-06-06T06:33:16.526Z · LW(p) · GW(p)

If you know yourself well enough to know for sure what you would do in a certain situation, but don't like what you would do, then you consider this mechanistic and not agenty.

Yes. Let me also note that being unable to change what you would do means a constraint on how well you do know yourself.

So this leads to a bias in favor of thinking that whatever you are going to do must be a great idea. You want to think that so that you can think that you are agenty, even if you are not.

It depends on whether you generally like yourself. If you were brought up to feel inferior or stupid or something else negative, which is rather common, then the bias might be the opposite.

comment by MotivationalAppeal · 2016-12-16T09:09:50.521Z · LW(p) · GW(p)

I love this post! I've been thinking about it a lot. I think it's mostly wrong, but it's been very stimulating. The main problem is that it wobbles between treating agency in the stated sense of "a capacity to act and make choices", and treating it as intelligence, and treating it as personhood. For one thing, a game player whose moves are predictable will seem dumb, but they still have agency in the stated sense.

Let me see if I can sketch out some slightly improved categories. A predictably moving pendulum is non-agentic, and therefore also not in the class of things that we see as stupid or intelligent: we don't use inverse planning to infer a pendulum's beliefs, preferences, goals, or behavioral intents. When we see an event that's harder to predict or retrodict, like the swaying of a weather vane, the wandering of a tornado, or the identity of a card drawn out from a deck, we reach out for explanations, and planning process explanations may occasionally momentarily be credible, and may be credible more than momentarily if someone is indeed pulling the strings.

As for the relationship between estimation of intelligence and the algorithmic comprehensibility of a goal-directed agent, I propose that AlphaGo seems intelligent even to the people who wrote it while they are playing against it, and that it's rather a judge's uncertainty about individual moves or actions which is a determining factor (alongside the success of the move) in judging intelligence. Or maybe uncertainty about the justification of the move? Either way, a person's ability to examine (or even edit) a mind in detail should not necessarily cause the person to see the mind as being inanimate or non-agentic or non-consequentialist or non-intelligent or non-personlike or unworthy of moral consideration. Whew! I'm glad that the moral status of a thing doesn't depend on an attribution that I can only make in ignorance. That would be devastating!

To recap in more direct terms, predictable things can have agency. Choice-processes are a kind of explanation which may be considered when examining an unpredictable thing, but that doesn't mean that prediction uncertainty always or typically leads to attributions of agency or intelligence (and I really think that a hypothesis space of "agenty or random" is very impoverished).

As for the cognitive antecedents of personification of inanimate things and the conditions that make useful an impersonal, debugging mindset in dealing with people, those are very interesting. From your examples, it seems like there's an implicit belief when we attribute personhood that the thing will respond predictably to displays of anger, and that an impersonal stance is useful or appropriate when a person won't alter their tendency toward a behavior when they hear expressions of negative moral judgement. This would mean that when a judge makes a well-founded attribution of personhood, what they're doing is recognizing that a thing's behavior is well explained by something like "a decision process that considers social norms and optimizes its behavior with an eye toward social consequences". As for what leads to unfounded attributions of personhood, like getting uselessly angry at a beeping smoke alarm, that's still an open question in my mind! Or maybe getting angry at a smoke alarm isn't a kind of personification, and hitting inanimate things that won't stop noising at you is actually a fine response? Hmmmm.

A while ago I asked myself, "If agency is bugs and uncertainty, does that tell us anything about akrasia?" Well, I no longer think that agency is bugs and uncertainty, but re-reading the comments in the thread gave me some new insight into the topic. Oh shit, it's 3:00 am. Maybe I'll sleep on it and write it up tomorrow.

comment by kilobug · 2015-06-08T14:31:05.427Z · LW(p) · GW(p)

I see your point, but I think you're confusing a partial overlap with an identity.

There are many bugs/uncertainties that appear as agency, but there are also many that don't (as you said about true randomness), and there are also behaviors that are actually smart and appear as agency because of that smartness. For example, I was delighted with Emacs the first time I realized that if I asked it to replace "blue" with "red", it would also replace "Blue" with "Red" and "BLUE" with "RED"; I got the same "feeling of agency" there that I could have from bugs.

So I wouldn't say that agency is bugs, but that we have evolved to mis-attribute agency to things that are dangerous/unpleasant (because it's safer to mis-attribute agency to something that doesn't have it than to fail to attribute it to something that does), the same way our ancestors used to see the sun, storms, volcanoes, ... as having agency.

Agency is something different, hard to pinpoint exactly (philosophers have been at it for centuries), but it involves the ability to have a representation of reality, to plan ahead toward a goal, a complexity of representation, and an ability to explore solution-space in a way that ends up surprising us, not because of bugs, but because of its inherent complexity. And we have evolved to mis-attribute agency to things which behave in unexpected ways. But that's a bug of our own ability to detect agency, not a feature of agency itself.

Replies from: shminux
comment by Shmi (shminux) · 2015-06-08T17:01:21.179Z · LW(p) · GW(p)

So I wouldn't say that agency is bugs, but that we have evolved to mis-attribute agency to things that are dangerous/unpleasant

I don't know if I would call it "mis-"attribute. My point, confirmed by spxtr and some other commenters, is that agency is relative to the observer, that there is no absolute difference between a "true" agency and an "apparent agency".

Agency is something different, hard to pinpoint exactly (philosophers have been at it for centuries), but it involves the ability to have a representation of reality, to plan ahead toward a goal, a complexity of representation, and an ability to explore solution-space in a way that ends up surprising us [...]

I think most of this statement follows from its last part, "ability to explore solution-space in a way that will end up surprising us". Once that happens, we assign the rest of the agenty attributes to whatever has surprised us.

But that's a bug of our own ability to detect agency, not a feature of agency itself.

I guess this is the crux of our disagreement. To a superintelligence, we are CPUs without agency.

Replies from: Lumifer
comment by Lumifer · 2015-06-08T17:25:09.247Z · LW(p) · GW(p)

As often happens, LW discusses theology without realizing it.

To a superintelligence, we are CPUs without agency.

aka "Do you really have free will if God knows everything you will decide?" X-)

Replies from: spriteless
comment by spriteless · 2015-06-08T18:55:12.421Z · LW(p) · GW(p)

I would argue that theologians have used the wide idea-space of their mythology to cover a lot of questions, some of which are also applicable outside of their theology.

I mean, it's not as though religion has a monopoly on that idea. The article mentions how it has applications in any care-taking role.

Now if you can find someone talking about the possibility of a virgin getting pregnant through her ear or nose, that I will grant you is pretty unique to Christianity in specific time periods when social mores say pregnancy is good, vaginas are bad, and virginity is good.

comment by [deleted] · 2015-06-08T19:50:31.562Z · LW(p) · GW(p)

To Omega in Newcomb's problem, humans are just automatons without a hint of agency.

I think there's such a thing as too much dissolving of useful concepts. Just because you know how humans run doesn't mean they cease to behave as agents. A human, especially when engaging in goal-directed behavior, is indeed acting as a causal component of the universe that takes in energy and turns it into optimization of events and waste-heat.

Replies from: shminux
comment by Shmi (shminux) · 2015-06-08T20:32:14.073Z · LW(p) · GW(p)

Just because you know how humans run doesn't mean they cease to behave as agents.

So the question is whether agency is a good abstraction when you know everything there is to know about a complex-enough system. My experience in software development suggests that, while you can encode some requirements as "goals", you rarely think about your code as having a human-like "capacity to act". On the other hand, "software agent" is a useful concept.

A human, especially when engaging in goal-directed behavior, is indeed acting as a causal component of the universe that takes in energy and turns it into optimization of events and waste-heat.

"is" seems too strong a term. "Can be usefully modeled as in some cases" seems more accurate.

comment by MrMind · 2015-06-08T07:44:59.126Z · LW(p) · GW(p)

My guess is that, in many cases, agency (as in, the capacity to act and make choices) is a manifestation of the observer's inability to explain and predict the agent's actions.

This is my guess too. Agency, intentionality, free will, etc. to me are all cases of the Kolmogorov complexity of the agent being higher than the Kolmogorov complexity of the goal, so we (meaning 'we primates, we humans') tend to assign a mind to the agent, just as we do with our peers.
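
One way to write down the condition I'm gesturing at (a hedged formalization; the notation is mine, not anything standard for this claim):

```latex
% Writing $K(\cdot)$ for Kolmogorov complexity, the observer is tempted to
% posit a mind behind a system $S$ that appears to pursue a goal $G$ whenever
% the shortest description of the system's behaviour is longer than the
% shortest description of the goal it serves:
\[
  K(S) > K(G)
\]
% Equivalently: the goal is easy to state, but no short program the observer
% can find reproduces how the system pursues it.
```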

comment by torekp · 2015-06-07T15:41:21.275Z · LW(p) · GW(p)

I think you're probably right about people's feelings regarding buggy software. But for self-reflection, it's different.

In a slogan: agency is self-fulfilling prophecy. What gives us the feeling of agency is the (usually correct) thought that affirming any of a range of thoughts about our own action would be self-fulfilling. Jenann Ismael -pdf- has a good discussion.

comment by TheAncientGeek · 2015-06-06T12:59:25.903Z · LW(p) · GW(p)

Is this the intentional stance?

Replies from: shminux
comment by Shmi (shminux) · 2015-06-06T17:04:03.083Z · LW(p) · GW(p)

Ah, I suppose it is similar. My point is that something like the intentional stance is forced on us when we fail to predict a behavior and end up, often subconsciously, explaining it in terms of agency, free will, and choices. So it is not so much a "stance" as an involuntary action.

comment by Pentashagon · 2015-06-08T22:59:20.661Z · LW(p) · GW(p)

So agentiness is having an uncomputable probability distribution?

Replies from: shminux
comment by Shmi (shminux) · 2015-06-09T04:42:36.599Z · LW(p) · GW(p)

I don't know if I would put it this way, just that if you cannot predict someone's or something's behavior with any degree of certainty, they seem more agenty to you.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-06-09T12:46:20.626Z · LW(p) · GW(p)

I don't know if I would put it this way, just that if you cannot predict someone's or something's behavior with any degree of certainty, they seem more agenty to you.

The weather does not seem at all agenty to me. (People in former times have so regarded it; but we are not talking about former times.)

Replies from: Pentashagon, shminux, gjm
comment by Pentashagon · 2015-06-12T07:55:43.590Z · LW(p) · GW(p)

We have probabilistic models of the weather: ensemble forecasts. They're fairly accurate. You can plan a picnic using them. You cannot use probabilistic models to predict the conversation at the picnic (beyond that it will be about "the weather", "the food", etc.).

What I mean by computable probability distribution is that it's tractable to build a probabilistic simulation that gives useful predictions. An uncomputable probability distribution is intractable to build such a simulation for. Knightian Uncertainty is a good name for the state of not being able to model something, but not a very quantitative one (and arguably I haven't really quantified what makes a probabilistic model "useful" either).
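
For concreteness, here is a toy sketch of what "tractable to build a probabilistic simulation" can look like in the weather case (the dynamics, member count, and numbers are invented purely for illustration; real ensemble forecasting uses physical models): perturb the initial condition, run a cheap model many times, and read the prediction off the resulting ensemble.

```python
# Toy ensemble forecast: many runs from perturbed initial conditions give an
# empirical distribution over outcomes, which is the prediction.
import random

random.seed(1)

def step(temp: float) -> float:
    """Toy dynamics: relax toward 20 degrees, plus noise."""
    return temp + 0.1 * (20.0 - temp) + random.gauss(0.0, 0.5)

def ensemble_forecast(initial: float, members: int = 500, horizon: int = 24):
    """Run `members` perturbed simulations and return the end states."""
    outcomes = []
    for _ in range(members):
        temp = initial + random.gauss(0.0, 1.0)  # uncertain initial condition
        for _ in range(horizon):
            temp = step(temp)
        outcomes.append(temp)
    return outcomes

ensemble = sorted(ensemble_forecast(initial=15.0))
k = len(ensemble) // 20
low, high = ensemble[k], ensemble[-k]
print(f"90% forecast interval after 24 steps: {low:.1f} to {high:.1f} degrees")
```

On this reading, the weather is "computable" because a loop like this is affordable and its output distribution is informative; the picnic conversation is not, because no comparably cheap simulation narrows it down.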

I think the computability of probability distributions is probably the right way to classify relative agency but we also tend to recognize agency through goal detection. We think actions are "purposeful" because they correspond to actions we're familiar with in our own goal-seeking behavior: searching, exploring, manipulating, energy-conserving motion, etc. We may even fail to recognize agency in systems that use actions we aren't familiar with or whose goals are alien (e.g. are trees agents? I'd argue yes, but most people don't treat them like agents compared to say, weeds). The weather's "goal" is to reach thermodynamic equilibrium using tornadoes and other gusts of wind as its actions. It would be exceedingly efficient at that if it weren't for the pesky sun. The sun's goal is to expand, shed some mass, then cool and shrink into its own final thermodynamic equilibrium. It will Win unless other agents interfere or a particularly unlikely collision with another star happens.

Before modern science, no one would have imagined those were the actual goals of the sun and the wind, and so the periodic, meaningful-seeming actions suggested agency toward an unknown goal. After physics, the goals and actions were so predictable that the agency was lost.

comment by Shmi (shminux) · 2015-06-09T15:15:50.830Z · LW(p) · GW(p)

I agree. As I mentioned in the post, expected randomness is not the same as unpredictability. Also as mentioned in the post, if you were trying to escape, say, a tornado, and repeatedly failing to predict where it would move, ending up in danger again and again, it would feel to you like this weather phenomenon "has a mind of its own".

Another example is the original Gaia hypothesis, which framed a local equilibrium of the Earth's environment in teleological terms.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-06-09T16:10:57.354Z · LW(p) · GW(p)

Also as mentioned in the post, if you were trying to escape, say, a tornado, and repeatedly failing to predict where it would move, ending up in danger again and again, it would feel to you like this weather phenomenon "has a mind of its own".

The tornado isn't going to follow you by chance. In fact, if it does follow you despite your efforts to evade it, that would be evidence of agentiness, of purpose. Something would have to be actively trying to steer it towards you.

For those going to LWCW in Berlin this weekend, this is one of the things I'll be talking about.

Replies from: shminux
comment by Shmi (shminux) · 2015-06-09T16:33:25.148Z · LW(p) · GW(p)

The tornado isn't going to follow you by chance. In fact, if it does follow you despite your efforts to evade it, that would be evidence of agentiness, of purpose. Something would have to be actively trying to steer it towards you.

Here is a counterexample: Suppose, unbeknownst to you, your movement creates a disturbance in the air that results in the tornado changing its path. Unless you can deduce this, you would assign agentiness to a weather phenomenon, whereas the only agentiness here (if any) is your own.

Oh, and if you have slides or a transcript of your talk, feel free to post it here, could be interesting.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-06-09T19:54:53.757Z · LW(p) · GW(p)

At this point we must play "follow the improbability". When you imagine the tornado following you around, however you try to get away from it, I ask, "what is the mechanism of this remarkably improbable phenomenon?" It seems that the agency is being supplied by your imagination, wrapped up in the word "suppose".

More illuminating are some real examples of something following something else around.

  1. A bumper sticker follows the car it is attached to. Wherever the car goes, there goes the bumper sticker.

  2. Iron filings follow a magnet around.

  3. Objects near a planet follow the planet around.

  4. A dog follows its master around.

Here, we're looking closely at the edge of the concept of purpose, and that it may be fuzzy is of little significance, since everything is fuzzy under a sufficiently strong magnifying glass. I draw the line between purpose and no purpose between 3 and 4. One recipe for drawing the line is that purpose requires the expenditure of some source of energy to accomplish the task. Anything less and it is like a ball in a bowl: the energy with which it tries to go to the centre was supplied by the disturbance that knocked it away. You expend no energy to remain near the Earth's surface; the dog does expend energy to stay with its master.

Oh, and if you have slides or a transcript of your talk, feel free to post it here, could be interesting.

I won't know what I'm going to say until I've said it, but I'll try to do a writeup afterwards.

comment by gjm · 2015-06-09T15:09:28.246Z · LW(p) · GW(p)

That's consistent with the following modified claim: in the absence of firm knowledge of how agenty a thing "really" is, you will tend to take its unpredictability as an indication of agentiness.

However, I am skeptical about that too; the results of die rolls and coin flips don't seem very agenty to most people (though to some gamblers I believe they do). Perhaps what it takes is a combination of pattern and unpredictability? If your predictions are distinctly better than chance but nothing you can think of makes them perfect, that feels like agency. Especially if the difference between your best predictions and reality isn't a stream of small random-looking errors but has big fat tails with occasional really large errors. Maybe.
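
To make the shape of that claim visible, here's a quick toy sketch (the error mixture and thresholds are made up purely for illustration): a predictor whose residuals are small and Gaussian versus one whose residuals are usually small but occasionally huge.

```python
# Toy comparison of two residual profiles: Gaussian errors versus a
# fat-tailed mixture (mostly small errors, occasionally very large ones).
import random

random.seed(2)
N = 10_000

gaussian_errors = [random.gauss(0.0, 1.0) for _ in range(N)]
# Fat-tailed mixture: 95% small errors, 5% errors ten times larger.
fat_tailed_errors = [
    random.gauss(0.0, 1.0) if random.random() < 0.95 else random.gauss(0.0, 10.0)
    for _ in range(N)
]

def share_beyond(errors, threshold=4.0):
    """Fraction of residuals more than `threshold` away from the prediction."""
    return sum(abs(e) > threshold for e in errors) / len(errors)

print(f"large surprises, Gaussian errors:   {share_beyond(gaussian_errors):.4f}")
print(f"large surprises, fat-tailed errors: {share_beyond(fat_tailed_errors):.4f}")
```

The second profile is the one that, on the suggestion above, would feel more like agency: mostly predictable, with occasional large surprises.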

Replies from: Lumifer, Richard_Kennaway
comment by Lumifer · 2015-06-09T16:08:43.915Z · LW(p) · GW(p)

the results of die rolls and coin flips don't seem very agenty to most people

I think the perception of agency is linked not to unpredictability, but rather to the feeling of "I don't understand".

Coin flips are unpredictable, but we understand them very well. Weather is (somewhat) unpredictable as well, but we all have a lot of experience with it and think we understand it. But some kind of complex behaviour and we have no idea what's behind it? Must be agency.

comment by Richard_Kennaway · 2015-06-09T15:36:03.373Z · LW(p) · GW(p)

That's consistent with the following modified claim: in the absence of firm knowledge of how agenty a thing "really" is, you will tend to take its unpredictability as an indication of agentiness.

I think unpredictability is a complete red herring here. What I notice about the original examples is that the perceived lack of agency was not merely because the game-player was predictable, but because they were predictably wrong. Had they been predictably right, in the sense that the expert player watching them could tell from their play how they were thinking and judged their strategy favourably, I doubt the expert would have said they were "playing like a robot".

I happen to have a simulation of a robot here. (Warning: it's a Java applet, so if you really want to run it you may have to jump through security hoops to convince your machine to do so.) In hunting mode, it predictably finds and eats the virtual food particles. I am quite willing to say it has agency, even though I wrote it and know exactly how it works. A limited agency, to be sure, compared with humans, but the same sort of thing.