To what degree do you model people as agents?
post by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-08-25T19:29:33.808Z · LW · GW · Legacy · 132 comments
The idea for this post came out of a conversation during one of the Less Wrong Ottawa events. A joke about being a solipsist turned into a genuine question–if you wanted to assume that people were figments of your imagination, how much of a problem would this be? (Being told "you would be problematic if I were a solipsist" is a surprising compliment.)
You can rephrase the question as "do you model people as agents versus complex systems?" or "do you model people as PCs versus NPCs?" (To me these seem like a reframing of the same question, with a different connotation/focus; to other people they might seem like different questions entirely). Almost everyone at the table immediately recognized what we were talking about and agreed that modelling some people as agents and some people as complex systems was a thing they did. However, pretty much everything else varied–how much they modelled people as agents overall, how much it varied in between different people they knew, and how much this impacted the moral value that they assigned to other people. I suspect that another variable is "how much you model yourself as an agent"; this probably varies between people and impacts how they model others.
What does it mean to model someone as an agent?
The conversation didn't go into this in huge amounts of detail, but I expect that, due to the typical mind fallacy, it's a fascinating discussion to have–the distinctions that seem clear and self-evident to me probably aren't what other people use at all. I'll explain mine here.
1. Reliability and responsibility. Agenty people are people I feel I can rely on, who I trust to take heroic responsibility. If I have an unsolved problem and no idea what to do, I can go to them in tears and say "fix this please!" And they will do it. They'll pull out a solution that surprises me and that works. If the first solution doesn't work, they will keep trying.
In this sense, I model my parents strongly as agents–I have close to 100% confidence that they will do whatever it takes to solve a problem for me. There are other people who I trust to execute a pre-defined solution for me, once I've thought of it, like "could you do me a huge favour and drive me to the bike shop tomorrow at noon?" but whom I wouldn't go to with "AAAAH my bike is broken, help!" There are other people who I wouldn't ask for help, period. Some of them are people I get along with well and like a lot, but they aren't reliable, and they're further down the mental gradient towards NPC.
The end result of this is that I'm more likely to model people as agents if I know them well and have some kind of relationship where I would expect them to want to help me. Of course, this is incomplete, because there are brilliant, original people who I respect hugely, but who I don't know well, and I wouldn't ask or expect them to solve a problem in my day-to-day life. So this isn't the only factor.
2. Intellectual formidability. To what extent someone comes up with ideas that surprise me and seem like things I would never have thought of on my own. This also includes people who have accomplished things that I can't imagine myself succeeding at, like startups. In this sense, there are a lot of bloggers, LW posters, and people on the CFAR mailing list who are major PCs in my mental classification system, but who I may not know personally at all.
3. Conventional "agentiness". The degree to which a person's behaviour can be described by "they wanted X, so they took action Y and got what they wanted", as opposed to "they did X kind of at random, and Y happened." When people seem highly agenty to me, I model their mental processes like this–my brother is one of them. I take the inside view, imagining that I wanted the thing they want and had their characteristics, e.g. relative intelligence, domain-specific expertise, social support, etc., and this gives better predictions than past behaviour. There are other people whose behaviour I predict based on how they've behaved in the past, using the outside view, while barely taking into account what they say they want in the future, and this is what gives useful predictions.
This category also includes the degree to which people have a growth mindset, which approximates how much they expect themselves to behave in an agenty way. My parents are a good example of people who are totally 100% reliable, but don't expect or want to change their attitudes or beliefs much in the next twenty years.
These three categories probably don't include all the subconscious criteria I use, but they're the main ones I can think of.
How does this affect relationships with people?
With people who I model as agents, I'm more likely to invoke phrases like "it was your fault that X happened" or "you said you would do Y, why didn't you?" The degree to which I feel blame or judgement towards people for not doing things they said they would do is almost directly proportional to how much I model them as agents. For people who I consider less agenty, whom I model more as complex systems, I'm more likely to skip the blaming step and jump right to "what are the things that made it hard for you to do Y? Can we fix them?"
On reflection, it seems like the latter is a healthier way to treat myself, and I know this (and consistently fail at doing this). However, I want to be treated like an agent by other people, not a complex system; I want people to give me the benefit of the doubt and assume that I know what I want and am capable of planning to get it. I'm not sure what this means for how I should treat other people.
How does this affect moral value judgements?
For me, not at all. My default, probably hammered in by years of nursing school, is to treat every human as worthy of dignity and respect. (On a gut level, it doesn't include animals, although it probably should. On an intellectual level, I don't think animals should be mistreated, but animal suffering doesn't upset me on the same visceral level that human suffering does. I think that on a gut level, my "circle of empathy" includes human dead bodies more than it includes animals).
One of my friends asked me recently if I got frustrated at work, taking care of people who had "brought their illness on themselves", e.g. by smoking, alcohol, drug use, eating junk food for 50 years, or whatever else people usually put in the category of "lifestyle choices." Honestly, I don't; it's not a distinction my brain makes. Some of my patients will recover, go home, and make heroic efforts to stay healthy; others won't, and will turn up back in the ICU at regular intervals. It doesn't affect how I feel about treating them; it feels meaningful either way. The one time I'm liable to get frustrated is when I have to spend hours of hard work on patients who are severely neurologically damaged and are, in a sense, dead already, or at least not people anymore. I hate this. But my default is still to talk to them, keep them looking tidy and comfortable, et cetera...
In that sense, I don't know if modelling different people differently is, for me, a morally right or wrong thing to do. However, I spoke to someone whose default is not to assign people moral value, unless he models them as agents. I can see this being problematic, since it's a high standard.
Conclusion
As usual for when I notice something new about my thinking, I expect to pay a lot of attention to this over the next few weeks, and probably notice some interesting things, and quite possibly change the way I think and behave. I think I've already succeeded in finding the source of some mysterious frustration with my roommate; I want to model her as an agent because of #1–she's my best friend and we've been through a lot together–but in the sense of #3, she's one of the least agenty people I know. So I consistently, predictably get mad at her for things like saying she'll do the dishes and then not doing them, and getting mad doesn't help either of us at all.
I'm curious to hear what other people think of this idea.
132 comments
Comments sorted by top scores.
comment by johnswentworth · 2013-08-25T03:55:37.171Z · LW(p) · GW(p)
In the past year or two, I've spent a lot of time explicitly trying to taboo "agenty" modelling of people from my thoughts. I didn't have a word for it before, and I'm still not sure agenty is the right word, but it's the right idea. One interesting consequence is that I very rarely get angry any more. It just doesn't make sense to be angry when you think of everyone (including yourself) mechanically. Frustration still happens, but it lacks the sense of blame that comes with anger, and it's much easier to control. In fact, I often find others' anger confusing now.
At this point, my efforts to taboo agenty thinking have been successful enough that I misinterpreted the first two paragraphs of this post. I thought it was about the distinction between people I model as full game-theoretic agents (I account for them accounting for my actions) versus people who will execute a fixed script without any reflective reasoning. To me, that's the difference between PCs and NPCs.
More recently, following this same trajectory, I've experimented with tabooing moral value assignments from my thoughts. Whenever I catch myself thinking of what one "should" do, I taboo "should" and replace it with something else. Originally, this amorality-via-taboo was just an experiment, but I was so pleased with it that I kept it around. It really helps you notice what you actually want, and things like "ugh" reactions become more obvious. I highly recommend it, at least as an experiment for a week or two.
Replies from: shminux, Swimmer963, Dabor, wallywalrus
↑ comment by Shmi (shminux) · 2013-08-25T06:26:44.477Z · LW(p) · GW(p)
Maybe you can write a post detailing your experiences? Sounds quite interesting.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-08-25T13:43:13.314Z · LW(p) · GW(p)
At this point, my efforts to taboo agenty thinking have been successful enough that I misinterpreted the first two paragraphs of this post. I thought it was about the distinction between people I model as full game-theoretic agents (I account for them accounting for my actions) versus people who will execute a fixed script without any reflective reasoning. To me, that's the difference between PCs and NPCs.
This is exactly the kind of other-people-thinking-differently-than-I-do interestingness that caused me to write this post!
The thing that was most interesting to me, on reflection, is that I do get angry less since I've started modelling most people "mechanically". It's just that my brain doesn't automatically extend that to people whom I respect a lot for whatever reason. For them, I will get angry. Which isn't helpful, but it is informative. I think it might just show that I'm more surprised when people who I think of as PCs let me down, and that when I get angry, it's because I was relying on them and hadn't made fallback plans, so the anger is more just my anxiety about my plans no longer working.
Replies from: Lumifer
↑ comment by Lumifer · 2013-08-27T17:47:24.440Z · LW(p) · GW(p)
I do get angry less since I've started modelling most people "mechanically". It's just that my brain doesn't automatically extend that to people whom I respect a lot for whatever reason.
It seems that once you assign specific people to the NPC category you think of them as belonging to a lesser, inferior kind. That's why you get less angry at them and that's why those you respect don't get assigned there.
↑ comment by Dabor · 2013-08-26T16:41:58.083Z · LW(p) · GW(p)
I've gone through a change much like this over the past couple of years, although not with explicit effort. I would tend to get easily annoyed by coming across inconsequential stupidity or spite somewhere on the internet (not directed at me), and then proceed to be disappointed in myself for having something like that hang on my thoughts for a few hours.
Switching to a model in which I'm responsible for my own reaction to other people does wonders for self-control and saves some needless frustration.
I can only think of one person (that I know personally) whom I treat as possessing as much agency as I expect of myself, and that results in offering and expecting full honesty. If I view somebody as at all agenty, I generally wouldn't try to spare their feelings or in any way emotionally manipulate them for my own benefit. I don't find that to be a sustainable way to act with strangers: I can't take the time to model why somebody flinging a poorly written insult over a meaningless topic that I happened to skim over is doing so, and I'd gain nothing (and very probably be wrong) in assuming they have a good reason.
As was mentioned with assigning non-agents negligible moral value, it does lead to higher standards, but those standards extend to oneself, potentially to one's benefit. Once you make a distinction of what the acts of a non-agent look like, you start more consistently trying to justify everything you say or do yourself. Reminds me a bit of "Would an idiot do that? And if they would, I do not do that thing."
I can still rather easily choose to view people as agents and assign moral value in any context where I have to make a decision, so I don't think having a significantly reduced moral value for others is to my detriment: it just removes the pressure to find a justification for their actions.
This will constitute my first comment on Less Wrong, so thank you for the interesting topic, and please inform me of any errors or inconveniences in my writing style.
Replies from: hairyfigment, pangel, johnswentworth, Document
↑ comment by hairyfigment · 2013-08-26T17:16:07.821Z · LW(p) · GW(p)
Welcome!
"Would an idiot do that?' And if they would, I do not do that thing."
Slightly wrong, but as you're still breathing I assume you know this.
Replies from: Dabor
↑ comment by Dabor · 2013-08-26T17:22:38.442Z · LW(p) · GW(p)
I was quoting. It would be more accurate to ask "Would this be done exclusively by idiots?", what with reversed stupidity. Alternatively, if the answer to the default version is yes, that just suggests that you require further consideration. Either way, it's pretty tautological–"Would only smart people do this? If not, am I doing it for a smart reason?"–but having an extra layer of flags for thinking doesn't hurt.
↑ comment by pangel · 2013-08-28T21:39:40.128Z · LW(p) · GW(p)
Being in a situation somewhat similar to yours, I've been worrying that my lowered expectations about others' level of agency (with elevated expectations as to what constitutes a "good" level of agency) have an influence on those I interact with: if I assume that people are somewhat influenced by what others expect of them, I must conclude that I should behave (as far as they can see) as if I believed them to be as capable of agency as myself, so that their actual level of agency will improve. This would work on me, for instance: I'd be more generally prone to take initiative if I saw trust in my peers' eyes.
↑ comment by johnswentworth · 2013-08-27T00:10:18.473Z · LW(p) · GW(p)
Well posted. I hope we will hear more from you in the future.
↑ comment by Document · 2013-08-26T19:34:08.055Z · LW(p) · GW(p)
As was mentioned with assigning non-agents negligible moral value, it does lead to higher standards, but those standards extend to oneself, potentially to one's benefit.
I can't parse this. Is it a reference to something someone else in the thread said?
Replies from: Dabor
↑ comment by Dabor · 2013-08-26T19:54:39.097Z · LW(p) · GW(p)
In that sense, I don't know if modelling different people differently is, for me, a morally right or wrong thing to do. However, I spoke to someone whose default is not to assign people moral value, unless he models them as agents. I can see this being problematic, since it's a high standard.
From the main post.
Replies from: Document
↑ comment by Document · 2013-08-26T20:48:17.882Z · LW(p) · GW(p)
Thanks. Given that you want to improve your rationality to begin with, though, is believing that your moral worth depends on it really beneficial? Pain and gain motivation seems relevant.
Later, you say:
I can still rather easily choose to view people as agents and assign moral value in any context where I have to make a decision, so I don't think having a significantly reduced moral value for others is to my detriment
Do you actually value us and temporarily convince yourself otherwise, or is it the other way around?
Replies from: Dabor
↑ comment by Dabor · 2013-08-27T00:08:55.503Z · LW(p) · GW(p)
Given that you want to improve your rationality to begin with, though, is believing that your moral worth depends on it really beneficial?
I'm not sure if you're asking about the moral worth I assign to myself or to others, so I'll answer both.
If you're referring to the moral worth I assign to myself, I'm assuming that the problem would be that, as I learn about biases, I would consider myself less of an agent, so I wouldn't be motivated to discover my mistakes. You'll have to take my word for it that I pat myself on the back whenever I discover an error in thinking and mark it down, but other than that, I don't have an issue with my self-image being (significantly, long term) tied to how I estimate my efficacy at rationality, one way or another. I just enjoy the process.
If you're referring to how I value others, then rationality seems inextricably tied to how I think of others. As I learn about how people get to certain views or actions, I consider them either more or less justified in doing so, and more or less "valuable" than others, if I may speak so bluntly of my fellow man. If I don't think there's a good reason to vandalize someone's property, and I think that there is a good reason to offer food to a homeless man, then if given that isolated knowledge, and a choice from Omega on who I wish to save (assuming that I can't save both), I'll save the person who commits more justified actions. Learning about difficult to lose biases that can lead one to do "bad things" or about misguided notions that can cause people to do right for the wrong reason inevitably changes how I view others (however incrementally), even if I don't offer them agency and see them as "merely" complex machines.
Do you actually value us and temporarily convince yourself otherwise, or is it the other way around?
Considering that I know that saying I value others is the ideal, and that if I don't believe so, I'd prefer to, it would be difficult to honestly say that I don't value others. I'm not an empathetic person and don't tend to find myself worrying about the future of humanity, but I try to think as if I do for the purpose of moral questions.
Seeing as I value valuing you, and am, from the outside, largely indistinguishable from somebody who values you, I think I can safely say that I do value others.
But, I didn't quite have the confidence to answer that flatly.
↑ comment by wallywalrus · 2013-08-28T19:34:02.001Z · LW(p) · GW(p)
My problem with this is that I want people to be agenty. For me the distinction between agent and complex system is about self-awareness and mindfulness. If you think about yourself and what you are and aren't capable of and how you interact with the world, you will have more agency and be a better person. I'm disgusted by people who just live like thoughtless animals.
I guess the obvious solution is to get over it. But I'm not sure I want to. It holds people to a higher standard.
Replies from: Lumifer
↑ comment by Lumifer · 2013-08-28T20:36:05.979Z · LW(p) · GW(p)
I think you're confusing being proactive with being a good person.
If a homicidal maniac acquires more agency that doesn't make him a better person, it just makes him more dangerous.
Replies from: tzok
↑ comment by tzok · 2013-09-01T17:05:10.707Z · LW(p) · GW(p)
I think what OP meant was the following. Given two people with the same positive aims (e.g. be a good parent, do your job well), the agency-driven one will achieve more with the same hard work as the other. Therefore, for people around you, you would wish them to be more agenty by default.
Replies from: Lumifer
↑ comment by Lumifer · 2013-09-03T16:16:55.464Z · LW(p) · GW(p)
will achieve more with the same ... work
That's generally described by words like "effective" and "high-productivity".
Therefore, for people around you
Why are you assuming that people around me have positive aims? Moreover, what's important is not just aims, but also the costs (and who pays them).
comment by Vaniver · 2013-08-25T05:02:48.098Z · LW(p) · GW(p)
One of my habits while driving is to attempt to model the minds of many of the drivers around me (in situations of light traffic). One result is that when someone does something unexpected, my first reaction is typically "what does he know that I don't?" rather than "what is that idiot doing?". From talking to other drivers, this part of my driving seems abnormal.
In this sense, I model my parents strongly as agents–I have close to 100% confidence that they will do whatever it takes to solve a problem for me.
One of the frequent complaints about the 'agent' concept space, and the "heroic responsibility" concept in particular, is that it rarely seems to take into account people's spheres of responsibility. Are your parents the sort of people who would be able to solve anyone's problem, or are they especially responsible for you? Are other people that seem to be NPCs to you just people that don't care enough about you to spend limited cognitive (and other) resources on you and your problems?
With people who I model as agents, I'm more likely to invoke phrases like "it was your fault that X happened" or "you said you would do Y, why didn't you?" The degree to which I feel blame or judgement towards people for not doing things they said they would do is almost directly proportional to how much I model them as agents. For people who I consider less agenty, whom I model more as complex systems, I'm more likely to skip the blaming step and jump right to "what are the things that made it hard for you to do Y? Can we fix them?"
Do you get more of what you want by blaming people or assigning fault?
Replies from: Swimmer963, Kaj_Sotala, Lumifer, skepsci
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-08-25T13:54:00.998Z · LW(p) · GW(p)
One of the frequent complaints about the 'agent' concept space, and the "heroic responsibility" concept in particular, is that it rarely seems to take into account people's spheres of responsibility. Are your parents the sort of people who would be able to solve anyone's problem, or are they especially responsible for you? Are other people that seem to be NPCs to you just people that don't care enough about you to spend limited cognitive (and other) resources on you and your problems?
My younger self didn't get this. I remember being surprised and upset that my parents, who would always help me with anything I needed, wouldn't automatically also help me help other people when I asked them. For example, my best friend needed somewhere to stay with her one-year-old, and I was living with my then-boyfriend, who didn't want to share an apartment with a toddler. I was baffled and hurt that my parents didn't want her staying in my old bedroom, even if she paid rent! I'd taken responsibility for helping her, and they had responsibility for helping me, so why not?
Now I know that that's not how most people behave, and that if it was, it might actually be quite dysfunctional.
Do you get more of what you want by blaming people or assigning fault?
I don't think so.
Replies from: Vaniver
↑ comment by Kaj_Sotala · 2013-08-25T08:40:46.517Z · LW(p) · GW(p)
One of the frequent complaints about the 'agent' concept space, and the "heroic responsibility" concept in particular, is that it rarely seems to take into account people's spheres of responsibility. Are your parents the sort of people who would be able to solve anyone's problem, or are they especially responsible for you? Are other people that seem to be NPCs to you just people that don't care enough about you to spend limited cognitive (and other) resources on you and your problems?
I agree with this. I keep being a little puzzled over the frequent use of the "agenty" term at LW, since I haven't really seen any arguments establishing why this would be a useful distinction to make in the first place. At least some of the explanations of the concept seem mostly like cases of correspondence bias (I was going to link an example here, but can't seem to find it anymore).
Replies from: Vaniver, Swimmer963
↑ comment by Vaniver · 2013-08-25T19:25:04.431Z · LW(p) · GW(p)
I keep being a little puzzled over the frequent use of the "agenty" term at LW, since I haven't really seen any arguments establishing why this would be a useful distinction to make in the first place.
Here is my brief impression of what the term "agenty" on LW means:
An "agent" is a person with surplus executive function.
"Executive function" is some combination of planning ability, willpower, and energy (only somewhat related to the concept in psychology). "Surplus" generally means "available to the labeler on the margin." Supposing that people have some relatively fixed replenishing supply of executive function, and relatively fixed consistent drains on executive function, then someone who has surplus executive function today will probably have surplus executive function tomorrow, or next week, or so on. They are likely to be continually starting and finishing side projects.*
The practical usefulness of this term seems obvious: this is someone you can delegate to with mission-type tactics (possibly better known as Auftragstaktik). This ability makes them good people to be friends with. Having this ability yourself both makes you better able to achieve your goals and makes you a more valuable friend, raising the quality of the people you can be friends with.
*Someone who has lots of executive function, but whose regular demands take all of it, will have lots of progress in their primary work but little progress on side projects. Someone whose demands outstrip their supply is likely to be dropping balls and burning out.
Replies from: Kaj_Sotala, Document, Lumifer, Document
↑ comment by Kaj_Sotala · 2013-08-25T21:12:24.484Z · LW(p) · GW(p)
Okay, now that does sound like a useful term.
Does anyone happen to know of reliable ways for increasing one's supply of executive function, by the way? I seem to run out of it very quickly in general.
Replies from: Vaniver, pscheyer, wedrifid, James_Miller
↑ comment by Vaniver · 2013-08-25T21:24:27.813Z · LW(p) · GW(p)
Does anyone happen to know of reliable ways for increasing one's supply of executive function, by the way? I seem to run out of it very quickly in general.
There are a handful of specific small fixes that seem to be helpful. For example, having a capture system (which many people are introduced to by Getting Things Done) helps decrease cognitive load, which helps with willpower and energy. Anti-akrasia methods tend to fall into clusters of increasing executive function or decreasing value uncertainty / confusion. A number of people have investigated various drugs (mostly stimulants) that boost some component.
I get the impression that, in general, there are not many low hanging fruit for people to pick, but it is worth putting effort into deliberate upgrades.
↑ comment by pscheyer · 2013-08-26T02:16:39.239Z · LW(p) · GW(p)
After joining the military, where executive function on demand is sort of the meta-goal of most training exercises, I found that having a set wardrobe actually saves a great deal of mental effort. You just don't realize how much time you spend worrying about clothes until you have a book which literally has all the answers and can't be deviated from. I know that this was also a thing that Steve Jobs did–one 'uniform' for life. President Obama apparently does it as well. http://www.forbes.com/sites/jacquelynsmith/2012/10/05/steve-jobs-always-dressed-exactly-the-same-heres-who-else-does/
There are a number of other things I've learned for this which are maybe worth writing up as a separate post. Not sure if that's within the purview of LW though.
Replies from: metastable, KnaveOfAllTrades
↑ comment by metastable · 2013-08-26T03:06:34.686Z · LW(p) · GW(p)
I agree, though it's always been interesting to me how the tiniest details of clothing become much clearer signals when everybody's almost the same. Other military practices that I think conserve your energy for what's important:
-Daily, routinized exercise. Done in a way that very few people are deciding what comes next.
-Maximum use of daylight hours
-Minimized high-risk projects outside of workplace (paternalistic health care, insurance, and in many cases, housing and continuing education.)
Replies from: KnaveOfAllTrades
↑ comment by KnaveOfAllTrades · 2013-08-26T03:41:04.579Z · LW(p) · GW(p)
It's plausible to me that a much higher proportion of peeps than is generally realized operate substantially better on different sleep schedules than what a 9-5 job forces, in which case enforced maximal (or at least, greater) use of daylight hours is possibly taking place on a societal (global?) level, though not as strongly as in militaries.
Replies from: metastable
↑ comment by metastable · 2013-08-26T04:22:11.392Z · LW(p) · GW(p)
This is plausible to me, too. I've had very productive friends with very different rhythms.
But I suspect far more people believe they operate best staying up late and sleeping late than actually do. There's a reason day shifts frequently outperform night shifts given the same equipment. And we know a lot of people suffer health-wise on night shift.
Replies from: Document
↑ comment by Document · 2013-08-26T19:36:33.961Z · LW(p) · GW(p)
I don't think one forced sleep schedule outperforming another is strong evidence that forced schedules are better than natural schedules.
Edit: Also, depending on geography, time of year and commute a 9-5 job may force one to get up some time before dawn and/or stay up some time after dark.
Replies from: Decius
↑ comment by KnaveOfAllTrades · 2013-08-26T03:30:50.264Z · LW(p) · GW(p)
I'd be interested to see this in Discussion.
I'm going the opposite way: Paying more attention to non-formulaic outfits, after years of {{varying only within one or two very circumscribed formulas, or even wearing one of exactly the same few set outfits for months--or more--at a time}}. So far it's interesting figuring things out, but it's increasing wardrobe load, and if I continue expanding my collection, it could become substantially more expensive than what I was doing before.
The dialectic outside view suggests I'll end up settling down a bit and going back to a more repetitive approach, but with a greater number of variables (e.g. introducing variables for level of formality, weather, audience, tone-fancied-on-given-day, etc.) and items from which to choose.
Replies from: pscheyer
↑ comment by pscheyer · 2013-09-09T23:49:04.288Z · LW(p) · GW(p)
As requested.
http://lesswrong.com/r/discussion/lw/il7/military_rationalities_and_irrationalities/
Replies from: KnaveOfAllTrades
↑ comment by KnaveOfAllTrades · 2013-09-12T05:49:50.499Z · LW(p) · GW(p)
Awesome!
↑ comment by wedrifid · 2013-08-26T09:14:01.891Z · LW(p) · GW(p)
Does anyone happen to know of reliable ways for increasing one's supply of executive function, by the way?
Stimulants, exercise and the removal of chronic stress.
Replies from: Decius
↑ comment by Decius · 2013-08-27T01:07:33.717Z · LW(p) · GW(p)
Those sound like ways of reducing the demand, not increasing the supply.
"Spending it better" is one option, but not the one that I want.
Replies from: wedrifid
↑ comment by James_Miller · 2013-09-09T23:57:31.886Z · LW(p) · GW(p)
Lumosity's new game "Train of Thought" might do it.
↑ comment by Document · 2013-08-29T03:04:04.613Z · LW(p) · GW(p)
someone who has surplus executive function today will probably have surplus executive function tomorrow, or next week, or so on. They are likely to be continually starting and finishing side projects.
Obligatory: http://wondermark.com/638/
Replies from: gwern
↑ comment by Lumifer · 2013-08-27T17:56:40.338Z · LW(p) · GW(p)
An "agent" is a person with surplus executive function.
Would that be a synonym of "has his shit together" and "gets stuff done"?
Replies from: Vaniver
↑ comment by Vaniver · 2013-08-27T20:08:14.698Z · LW(p) · GW(p)
Mostly. I'm trying to make the concept precise and transparent; saying "an agent is a person who gets stuff done" leaves the mechanism by which they get things done opaque, and most of the posts discussing agency seem to have a flavor of "an agent is a person who gets my stuff done" (notably including the possibility that the speaker is not an agent in that sense).
↑ comment by Document · 2013-08-26T00:38:25.104Z · LW(p) · GW(p)
From the Wikipedia article you link:
In today's German army, the Bundeswehr, the term "Auftragstaktik" is considered an incorrect characterization of the concept; instead "Führen mit Auftrag" ("leading by mission") is used.
This before the article actually defines "the concept" except by introducing it as "Auftragstaktik".
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-08-25T13:49:19.634Z · LW(p) · GW(p)
I don't know if it's a useful way to think, but it's the way I do think in practice, and not necessarily because of reading Less Wrong; I think that's just where I found words for it. And based on the conversation I mentioned, other people also think like this, but using different criteria than mine. Which is really interesting. And after reflecting on this a bit and trying to taboo the term "agenty" and figure out what characteristics my brain is looking at when it assigns that label, I probably will use it less to describe other people.
In terms of describing myself, I think it's a good shorthand for several characteristics that I want to have, including being proactive, which is the word I substitute in if I'm talking to someone outside Less Wrong about my efforts at self-improvement.
↑ comment by Lumifer · 2013-08-27T17:53:27.629Z · LW(p) · GW(p)
when someone does something unexpected, my first reaction is typically "what does he know that I don't?" rather than "what is that idiot doing?". From talking to other drivers, this part of my driving seems abnormal.
For me that depends on what this "unexpected" is. For example, if I see a car in the next lane and ahead of me start to slightly drift into my lane, my reaction is that I know what this idiot is doing -- he is about to switch lanes and he doesn't see me. On the other hand, if a car far ahead hits the brakes and I don't know why -- there my reaction is "he knows something I don't"...
↑ comment by skepsci · 2013-09-03T14:37:30.953Z · LW(p) · GW(p)
I do the same sort of thinking about the motivations of other drivers, but it seems strange to me to phrase the question as "what does he know that I don't?" More often than not, the cause of strange driving behaviors is lack of knowledge, confusion, or just being an asshole.
Some examples of this I saw recently include 1) a guy who immediately cut across two lanes of traffic to get in the exit lane, then just as quickly darted out of it at the beginning of the offramp; 2) a guy on the freeway who slowed to a crawl despite traffic moving quickly all around him; 3) that guy who constantly changes lanes in order to move just slightly faster than the flow of traffic.
I'm more likely to ask "what do they know that I don't?" when I see several people ahead of me act in the same way that I can't explain (e.g. many people changing lanes in the same direction).
comment by Manfred · 2013-08-25T16:21:10.109Z · LW(p) · GW(p)
I propose a theme song for this comment section.
One of my fond memories of high school is being a little snot and posing a math problem to a "dumb kid". I proceeded to think up the wrong answer, and he got the right one (order of operations :D ). This memory is a big roadblock to me modeling other people as different "types" - differences are mostly of degree, not kind. A smart person can do math? Well, a dumb person has math that they can do well. A smart person plans their life? Dumb people make plans too. A dumb person uses bad reasoning? Smart people use bad reasoning.
I consistently, predictably get mad at her for things like saying she'll do the dishes and then not doing them
This doesn't sound like a lack of agentiness. This sounds like a communication problem. Do you think that you're more likely to think of someone as "agenty" if their planning processes are (seemingly) transparent to you (e.g. "this person said they wanted a cookie, then they took actions to get a cookie") vs. non-transparent (e.g. "that person said they wanted a salad, then they took actions to get a cookie")?
↑ comment by ArisKatsaris · 2013-09-01T17:57:11.787Z · LW(p) · GW(p)
The above is a troll fake account (see "Eliezar" instead of "Eliezer", and "Yudkowky" instead of Yudkowsky), please delete and ban him.
comment by kalium · 2013-08-26T03:30:41.488Z · LW(p) · GW(p)
#1 grates for me. If a friend goes to me in tears more than a couple of times demanding that I fix their bicycle/grades/relationship/emotional problems, I will no longer consider them a friend. If you ask politely I'll try to get you on the right track (here's the tool you need and here's how to use it/this is how to sign up for tutoring/whatever), but doing much more than that is treating the asker as less than an agent themself. Going to your friend in tears before even trying to come up with a solution yourself is not a good behavior to encourage (I've been on both sides of this, and it's not good for anyone).
Don't confuse reliability and responsibility with being a sucker.
Replies from: derefr, Swimmer963, Decius, kalium, MugaSofer
↑ comment by derefr · 2013-08-30T08:06:13.127Z · LW(p) · GW(p)
There's a specific failure-mode related to this that I'm sure a lot of LW has encountered: for some reason, most people lose 10 "agency points" around their computers. This chart could basically be summarized as "just try being an agent for a minute sheesh."
I wonder if there's something about the way people initially encounter computers that biases them against trying to apply their natural level of agency? Maybe, to coin an isomorphism, an "NPC death spiral"? It doesn't quite seem to be learned helplessness, since they still know the problem can be solved, and work toward solving it; they just think solving the problem absolutely requires delegating it to a Real Agent.
Replies from: kalium
↑ comment by kalium · 2013-08-30T19:44:31.495Z · LW(p) · GW(p)
Many people vastly overestimate the likelihood of results like "computer rendered unbootable" or "all your data is lost forever." (My grandfather won't let anyone else touch the TV or remote because he thinks we could break it by trying to change the channel in the wrong way.) If I thought those were likely results of clicking on random menu items I'd want to delegate too.
When I notice myself acting this way around computers, though, the thought process goes something like this:
1. I have a problem, likely because I did something that my social circle would consider stupid.
2. Past attempts to solve computer-related problems myself have a low (30% or so) success rate, so I am likely to have to explain the whole situation to someone who will judge me less intelligent as a result.
3. Any attempts at solving the problem myself will lengthen this explanation and raise the chance that it includes something truly idiotic (this also makes the explanation more stressful, which makes me worse at explaining everything I've done, which makes the problem harder for an expert to solve).
4. Meanwhile, if I succeed it is unimpressive. "Oh, you're 25 and just figured out how to tie your own shoes?" Not exactly an accomplishment I can feel good about.
5. Just ask for help now before I make it any worse (or perhaps read for a while, try one or two methods based not on likelihood of working but on how easy they are to justify under stress, then ask for help).
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-08-26T03:50:31.711Z · LW(p) · GW(p)
#1 grates for me. If a friend goes to me in tears more than a couple of times demanding that I fix their bicycle/grades/relationship/emotional problems, I will no longer consider them a friend.
I guess being a PC in that sense sucks.
Going to your friend in tears before even trying to come up with a solution yourself is not a good behavior to encourage (I've been on both sides of this, and it's not good for anyone).
I try not to do this. When I go to my parents in tears, it's because I've tried all the usual solutions and they aren't working and I don't know why, and/or because everything else possible is going wrong at the same time and I don't have the mental energy to deal with my broken bike on top of disasters at work and my best friend having a meltdown.
Likewise, being the one who takes heroic responsibility for someone isn't necessarily a healthy role to take, as I've realized.
Replies from: Decius
↑ comment by Decius · 2013-08-27T01:04:30.866Z · LW(p) · GW(p)
Sometimes heroic responsibility requires metaphorically throwing a guide to fishing at someone.
Sometimes it requires the metametaphor (metwophor?) of telling them where the library is that contains that kind of book.
And sometimes it requires giving them a literal fish.
↑ comment by kalium · 2013-08-26T22:13:11.312Z · LW(p) · GW(p)
To clarify, there's a big difference between coming to me in tears asking for help and coming to me in tears asking for a complete solution handed to you on a platter; I've just seen enough of the latter that it really, really irritates me.
Also, "solves my problem immediately when asked, regardless of whether it's in his interest" seems to me like an attribute of an NPC and not that of a PC.
↑ comment by MugaSofer · 2013-08-26T17:45:44.464Z · LW(p) · GW(p)
I don't know, it's less annoying than coming to you in tears and then getting annoyed when provided with a solution.
(Although that may depend on whether you consider your friends agents ... hmm.)
Replies from: kalium, Vaniver
↑ comment by kalium · 2013-08-26T22:22:19.690Z · LW(p) · GW(p)
If you have a problem and don't ask for a solution, then I'd try not to be annoyed with you if you're annoyed at being offered one. Maybe you already know exactly what you're going to do but just want to get some complaining in first. Nothing wrong with that.
↑ comment by Vaniver · 2013-08-26T17:48:39.207Z · LW(p) · GW(p)
I don't know, it's less annoying than coming to you in tears and then getting annoyed when provided with a solution.
I suspect kalium would downgrade someone from friend status if that happened once, which does map on to the annoyance difference.
Replies from: MugaSofer
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-25T19:14:37.999Z · LW(p) · GW(p)
PCs are also systems; they're just systems with a stronger heroic responsibility drive. On the other hand, when you successfully do things and I couldn't predict exactly how you would do them, I have no choice but to model you as an 'intelligence'. But that's, well... really rare.
Replies from: Swimmer963, None, fowlertm, Halfwitz, Vaniver, derefr, MugaSofer
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-08-26T03:47:02.194Z · LW(p) · GW(p)
I guess for me it's not incredibly rare that people successfully do things and I can't predict exactly how they would do them. It doesn't seem to be the main distinction that my brain uses to model PC-ness versus NPC-ness, though.
↑ comment by [deleted] · 2013-09-01T06:47:11.351Z · LW(p) · GW(p)
I find this comment...very, very fun, and very, very provocative.
Are you up for -- in a spirit of fun -- putting it to the test? Like, people could suggest goals whose successful completion would potentially label them as "an Intelligence" according to Eliezer Yudkowsky -- and then you would outline how you would do it? And if you either couldn't predict the answer, or we did it in a way different enough from your predictions (as judged by you!), we'd get bragging rights thereafter? (So for instance, we could put in email sigs, "An intelligence, certified Eliezer Yudkowsky." That kind of thing.)
A few goals right off the top of my head:
- Raise $1000 for MIRI or CFAR
- Get a celebrity to rec HPMOR, MIRI or CFAR (the term "celebrity" would require definition)
- Convince Eliezer to change his mind on any one topic of significance (as judged by himself)
- Solve any "open question" that EY cares to list (again, as judged by himself -- I know that "how to lose weight" is such a question, and presumably there are others)
Basically the idea is that we get to posit things we think we know how to do and you don't... and you get to posit things that you don't know how to do but would like to...and then if we "win" we get bragging rights.
There's pretty obviously some twisted incentives here (mostly in your favor!) but we'll just have to assume that you're a man of honor. And by "a man of honor" I mean "a man whose reputation is worth enough that he won't casually throw a match."
I dunno, does that sound fun to anybody else?
↑ comment by fowlertm · 2013-08-26T06:26:57.497Z · LW(p) · GW(p)
Do you mean to say that you can generally predict not only what person A will do but precisely how they will do it? Or do you mean that if a person succeeds then you are unsurprised by how they did it, but if they fail or do something crazy you aren't any better than other people at prediction? Either way I would be interested in hearing more about how you do that.
Since I've been teaching I've gotten much better at modeling other people-- you might say I've gotten a hefty software patch to my Theory of Mind. Because I mostly interact with children that's what I am calibrated to, but adults have also gotten much less surprising. I attribute my earlier problems mostly to lack of experience and to simply not trying very hard to model people's motivations or predict their behavior.
Further, I've come to realize how important these skills are, and I aspire to reaching Quirrellesque heights of other-modeling. Some potential ways to improve theory of mind:
Study the relevant psychology/neuroscience.
Learn acting.
Carefully read fiction which explores psychology and behavior in an in-depth way (Henry James?) Plays might be even better for this, as you'd presumably have to fill in a lot of the underlying psychology on your own. In conjunction with acting this would probably be even more powerful. You could even go as far as to make bets on what characters will do so as to better calibrate your intuitions.
Write fiction which does the same.
Placing bets could be extended to real groups of people, though you might not want to let anyone know you were doing this because they might think it's creepy and it could create a kind of anti-induction.
Replies from: MugaSofer
↑ comment by Halfwitz · 2013-08-26T18:56:25.434Z · LW(p) · GW(p)
If you regularly associate with people of similar intelligence, how rare can that be? Even if you are the smartest person you know (unlikely considering the people you know, some of whom exceed your competence in mathematics and philosophy), anyone with more XP in certain areas would behave unpredictably in said areas, even if they had a smaller initial endowment. My guess is your means-prediction lobe is badly calibrated because after the fact you say to yourself, “I would have predicted that.” This could be easily tested.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-26T19:35:26.944Z · LW(p) · GW(p)
Intelligent people tend to only on rare occasions tackle problems where it stretches the limit of their cognitive abilities to (predict how to) solve them. Thus, most of my exposure to this comes by way of, e.g., watching mathematicians at decision theory workshops prove things in domains where I am unfamiliar - then they can exceed my prediction abilities even when they are not tackling a problem which appears to them spectacularly difficult.
Replies from: Decius, Halfwitz, Halfwitz
↑ comment by Decius · 2013-08-27T00:46:31.769Z · LW(p) · GW(p)
Where the task they are doing has a skill requirement that you do not meet, you cannot predict how they will solve the problem.
Does that sound right? It's more obvious that the prediction is hard when the decision is "fake-punt, run the clock down, and take the safety instead of giving them the football with so much time left" rather than physical feats. Purely mental feats are a different kind of different.
↑ comment by Halfwitz · 2013-08-27T23:09:13.438Z · LW(p) · GW(p)
My scepticism depends on how detailed your predictions are, though your fiction/rhetorical abilities likely stem in part from unusually good person-modelling abilities. Do you find yourself regularly and correctly predicting how creative friends will navigate difficult social situations or witty conversations, e.g., guessing punchlines to clever jokes, predicting the course of a status game?
↑ comment by Halfwitz · 2013-08-26T20:16:55.455Z · LW(p) · GW(p)
I may be confused about the "resolution" of your predictions. Suppose you were trying to predict how intelligent person X will seduce intelligent person Y. If you said, "X will appeal to Y's vanity and then demonstrate social status," I feel that kind of prediction is pretty trivial. But predicting more exactly how X would do this seems vastly more difficult. How would you rate your abilities in this situation, if 1 equals predictions at the resolution of the given example and 10 equals "I could draw you a flow chart which will more-or-less describe the whole of their interaction"?
↑ comment by Vaniver · 2014-03-28T17:04:22.848Z · LW(p) · GW(p)
Relevant: an article that explains the Failed Simulation Effect, by Cal Newport.
↑ comment by derefr · 2013-08-30T07:23:52.145Z · LW(p) · GW(p)
I note that this suggests that an AI that was as smart as an average human, but also as agenty as an average human, would still seem like a rather dumb computer program (it might be able to solve your problems, but it would suffer akrasia just like you would in doing so). The cyberpunk ideal of the mobile exoself AI-agent, Getting Things Done for you without supervision, would actually require something far beyond the average-human equivalent to be considered "competent" at its job.
comment by Moss_Piglet · 2013-08-25T02:49:16.734Z · LW(p) · GW(p)
The OP here raises a very interesting question, but I can't help but be distracted by the phrasing. Humans are both decision-making agents and complex biochemical systems, so my poor pedantic brain is spinning its wheels trying to turn that into a dichotomy. If it were me I would have said Subject v Object, especially since this ties into objectification, but that's a nitpick too minor for me not to upvote it. Anyway...
Personally I lean towards a "complex systems" model of other humans. People can surprise you, pleasantly or unpleasantly, in how they react to things, but then again you could say the same about any sufficiently complex process. And it's not very hard to learn patterns to predict how people (or groups of people) react to a given class of stimulus if you're empathetic and observant, just like with animals and computers. I consider it one of my better qualities that I can usually guess the inputs people are looking for when they talk to me and subtly work them into the conversation; it sounds somewhat artificial when put in those words but it's actually a really intuitive process once you get the hang of it.
comment by skepsci · 2013-09-03T14:45:46.064Z · LW(p) · GW(p)
I model basically everyone I interact with as an agent. This is useful when trying to get help from people who don't want to help you, such as customer service or bureaucrats. By giving the agent agency, it's easy to identify the problem: the agent in question wants to get rid of you with the least amount of effort so they can go back to chatting with their coworkers/browsing the internet/listening to the radio. The solution is generally to make it seem like less effort to get rid of you by helping you with your problem (which is their job after all) than something else. This can be done by simply insisting on being helped, making a ruckus, or asking for a manager, depending on the situation.
comment by simplicio · 2013-08-28T17:34:39.322Z · LW(p) · GW(p)
Good article!
One of my hobbyhorses is that you can gain a good deal of insight into someone's political worldview by observing whom they blame versus absolve for bad acts, since blame implies agency and absolution tends to minimize it. Often you find this pattern to be the reverse of stated sympathies. Examples left as an exercise to the reader.
comment by someonewrongonthenet · 2013-08-25T17:37:11.371Z · LW(p) · GW(p)
Almost everyone at the table immediately recognized what we were talking about and agreed that modelling some people as agents and some people as complex systems was a thing they did.
Wow. I did not realize that so many other people felt aware of this dichotomy.
So, usually when I'm in a good mood, there isn't any dichotomy. I model everyone in exactly the same way that I model myself - as individuals with certain strengths and weaknesses. You might say that I even model myself as a complex system, to a degree. The model is complete enough that the complex system feels like an agent - things like personal responsibility and ability to do stuff are parts of the model.
However, when I am in a depressed mood, things start to change. I start modelling everyone as complex systems and the models are pessimistic and I do it in such a way that somehow makes me perceive them like objects, or perhaps like non-human animals or the senile. As a result of not feeling like they are people, I am overcome by a feeling of intense loneliness.
As with your roommate, much of the frustration is that I expect them to behave the way that you describe "agents", but they keep failing to meet expectations - and some of this frustration is self-directed as well, when I see myself making the same mistakes despite knowing that they are mistakes and knowing the neurological reasons behind why I keep making the mistakes. The is-ought dichotomy is the cause of the frustration - it's the sort of frustration you might direct at a broken computer, tinged with a sense of disappointment, a sense that it could be so much more.
But when I'm in a good mood, this all fades away. I'm able to model people as agenty-complex systems while retaining normal emotional attachment and still feeling like they are people. When I'm in a good mood, awareness of my own limitations and other's limitations actually makes it easier to accept when we make mistakes.
I want to model her as an agent... she's one of the least agenty people I know...So I consistently, predictably get mad at her
So I think you may be like bad-mood-me in this respect: You might have difficulty simultaneously modeling people as complex systems while relating to them emotionally as people.
Even though there isn't actually an IQ-depression correlation (or at least, I haven't found one in the literature) I think this whole "most people are NPCs" complex might strike at the heart of why there is a perception that smart people feel lonely and depressed - or at least misanthropic.
Other than positive affect, one thing that somewhat cures this misanthropy problem for me is compassion meditation. I start with people who are closest to me, and think about the love that they feel for me. Then I imagine people who are further from me, and imagine the love that they feel for each other. Basically, I re-affirm that even if people don't always act the correct way, they share fundamental values with me. I try to extend the thoughts as far as I can, making sure to identify the people who frustrate me the most and identifying the points at which we share fundamental values. This practice doesn't necessarily increase positive affect, but it does make the sense of loneliness go away.
Basically, instead of modelling people as complex systems in a way that deagentizes them, it's better to model people as complex systems who are also agent-like and who are trying to be a certain way, but failing because of x,y, z complex system flaws.
comment by [deleted] · 2013-08-25T14:49:22.019Z · LW(p) · GW(p)
I aspire to model myself as the only "agent" in the system, kind of like Harry does in HPMOR (with the possible exception of Professor Quirrell). I'm the one whose behavior I can change most directly, so it is unhelpful (at least for me) to model circumstances (which can cause a dangerous victim mentality) or other people as agents. Even if I know I can make an argument to try to change another person's mind, and estimate I have a 50/50 chance of success, it is still me who is making the choice to use Argument A rather than Argument B.
In terms of an NPC/PC distinction, anyone whom I have the ability to reliably model accurately is an NPC to me, and the PCs are everyone else. Both have moral value to me, though people I consider PCs are obviously more interesting for me to hang out with. (Almost) everyone's a PC to someone, though, so I definitely don't hold it against the people I consider NPCs.
For example, I consider all three members of my immediate family (my younger sister, my father, and my mother) to be NPCs. They are all fervent evangelical Christians and that constrains their behaviors significantly. Nonetheless, spending time with them (well, sometimes) evokes positive emotions from me, and I happily do chores for them.
Replies from: scaphandre↑ comment by scaphandre · 2013-08-26T01:09:08.466Z · LW(p) · GW(p)
I imagine it is probably emotionally taxing and isolating for a human to model themselves as the only true agent in their world. That's a lot of responsibility, inefficient for big projects (where coordinating with other 'proper' agents might be particularly useful) and probably kinda lonely.
I am all for personal responsibility and recognise that acting to best improve the world is up to me. I am currently implemented in a great ape – a mammal with certain operating requirements. Part of my behaviour in the world has to include acting to keep that great ape working well.
To avoid exposing that silly ape to the emotional weight of being the only responsible agent in the system, and to allow more fun agent-agent interactions, it might make sense to lower the mental bar for those you would call PCs?
Replies from: Decius↑ comment by Decius · 2013-08-27T00:42:19.171Z · LW(p) · GW(p)
If I am the only agent in my circle of knowledge, I want to believe so.
Replies from: scaphandre↑ comment by scaphandre · 2013-08-27T01:56:29.419Z · LW(p) · GW(p)
Agreed. But I'd place more value on searching for other agents when I know none.
From this thread we can see there is not a fixed concept of what meets the agent criteria. If I knew zero other agents, I'd be more inclined to spend more effort searching or perhaps be a little more flexible with my interpretation of what an agent might be.
Of course tricking yourself into solipsism or Wilson worship is a conceivable failure mode, but I don't think it's likely here.
comment by Armok_GoB · 2013-08-25T02:00:41.589Z · LW(p) · GW(p)
Hmm. I seem to very much, very purely, model myself as an NPC by these definitions. By extension, since I can't use empathic modelling to differentiate the way you describe, I model exactly everyone as NPCs. It's also the case that I've never had to model a PC in detail; I know about some people who are, probably including you, but I've never really had the opportunity to interact with such a rare creature for long enough to develop a new way of modelling, and seem to be just winging it by assigning them a probability-bending magic black-box power called "rationality".
comment by Brillyant · 2013-08-27T21:43:38.673Z · LW(p) · GW(p)
I suspect all people, including me, are NPC meat-computers running firmware/software that provides the persistent, conscious illusion of PC-ness (or agenty-ness). Some people are more advanced computers and, therefore, seem more agenty... but all are computers nonetheless.
Modeling people this way (as very complex NPCs), as some have pointed out in the comments, seems to be a rather effective means of limiting the experience of anger and frustration... or at least making anger and frustration seem irrational, thereby causing them (at least in my experience) to lose their appeal (catharsis, or whatever) over time. It has worked that way for me.
...
I'm curious... and perhaps someone (smarter than I) can help enlighten me...
How is a discussion of free will different from (or similar to) the PC vs. NPC distinction?
Replies from: derefr↑ comment by derefr · 2013-08-30T07:39:16.870Z · LW(p) · GW(p)
This seems to suggest that modelling people (who may be agents) as non-agents has only positive consequences. I would point out one negative consequence, which I'm sure anyone who has watched some schlock sci-fi is familiar with: you will only believe someone when they tell you you are caught in a time-loop if you already model them as an agent. Substitute anything else sufficiently mind-blowing and urgent, of course.
Since only PCs can save the world (nobody else bothers trying, after all), nobody will believe you are currently carrying the world on your shoulders if they think you're an NPC. This seems dangerous somehow.
comment by Become_Stronger · 2013-09-30T03:05:16.460Z · LW(p) · GW(p)
Any mind that I can model sufficiently well to be accurate ceases to be an agent at that point.
If I can predict what you are going to do with 100% certainty, then it doesn't matter what internal processes lead you to take that action. I don't need to see into the black box to predict the action of the machine.
People I know well maintain their agenthood by virtue of the fact that they are sufficiently complex to think in ways I do not.
For these reasons, I rarely attempt to model the mental processes of minds I consider to be stronger than mine (in the rational sense.) Attempting to ask myself what a powerful rationalist would do is not a useful heuristic, as my model of a strong rationalist is not, in itself, stronger than my own understanding of rationalism.
comment by nicdevera · 2013-09-04T01:47:51.619Z · LW(p) · GW(p)
Schelling's The Strategy of Conflict says that in some cases, advertising non-agency can be useful: something like "If you cross this threshold, that will trigger punitive retaliation, regardless of cost-benefit; I have no choice in the matter."
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-09-07T12:46:03.875Z · LW(p) · GW(p)
Is there a situation where it would be strategic to live all your life, or large areas of your life, in non-agency?
Maybe life in a dictatorship is like this. Be agenty enough for someone to notice, and they may decide you are a potential risk, and your genes and memes get eliminated.
Later, even if the dictatorship is gone, the habits and the culture remain.
Is there a way to compare average citizens' agency in different nations, and correlate that with their history?
Replies from: nicdevera↑ comment by nicdevera · 2013-09-08T10:24:50.938Z · LW(p) · GW(p)
I guess signalling non-agency is tactical-level: protective camouflage, poker bluffing, etc. Agenty thinking as above is essentially strategic: winning with moves that are creative, devious, hard to predict or counter, going meta, gaming the system. Pretending to be a loyal citizen of Oceania is a good tactic while you covertly work towards other goals.
For cultural agency, the Wikipedia page on locus of control is one place to start. And there was the Power Distance Index in Gladwell's Outliers.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-09-08T11:30:36.754Z · LW(p) · GW(p)
Humans are not very good at pretending. If you pretend something, you start believing it, especially if you have to pretend it for years. And even if you succeeded, it would be very difficult to teach your children -- if they do it wrong, it may result in death for your whole family, but if you wait until they are reasonable enough, they may already strongly believe other things.
comment by Lumifer · 2013-08-27T17:45:20.395Z · LW(p) · GW(p)
Hm, interesting. I have some terminological confusion to battle through here.
My mind associates "agent" with either Bond/MiB creatures or game theory and economics. The distinction you're drawing I would describe as active and passive. "Agenty"/PC people are the active ones, they make things happen, they shape the narrative, they are internally driven to change their environment. By contrast the "complex-system"/NPC people are the passive ones, they react to events, they go with the flow, the circumstances around them drive their behavior.
I don't think of active and passive as two kinds of people. I think of them as two endpoints on an axis with most people being somewhere in the middle. It's a characteristic of a person, a dimension of her personality.
Replies from: derefr↑ comment by derefr · 2013-08-30T07:50:27.953Z · LW(p) · GW(p)
A continuum is still a somewhat-unclear metric for agency, since it suggests agency is a static property.
I'd suggest modelling a sentience as a colony of basic Agents, each striving toward a particular utility-function primitive. (Pop psychology sometimes calls these "drives" or "instincts.") These basic Agents sometimes work together, like people do, toward common goals; or override one-another for competing goals.
Agency, then, is a bit like magnetism--it's a property that arises from your Agent-colony when you've got them all pointing the same way; when "enough of you" wants some particular outcome that there's no confusion about what else you could/should be doing instead. In effect, it allows your collection of basic Agents to be abstracted as a single large Agent with its own clear (though necessarily more complex) goals.
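For what it's worth, a minimal toy sketch of that picture (the drive names, actions, and crude agreement score are all invented for illustration) might look like this:

```python
from collections import Counter

# Toy sketch of the "colony of basic Agents" picture: each drive votes for an
# action, and "agency" is read off as how strongly the colony agrees on a
# single outcome. All names and numbers are made up.
drives = {
    "hunger": "get food",
    "curiosity": "read",
    "ambition": "work on project",
    "social": "work on project",
    "comfort": "work on project",
}

votes = Counter(drives.values())
action, supporters = votes.most_common(1)[0]
agency = supporters / len(drives)  # 1.0 means the colony is fully aligned

print(action, agency)  # work on project 0.6
```

Here "agency" is just the fraction of drives backing the winning action; a real model would need competing strengths, overrides, and coalitions, but the point is only that the property lives in the alignment of the colony rather than in any single part.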
Replies from: Lumifer↑ comment by Lumifer · 2013-08-30T15:16:47.301Z · LW(p) · GW(p)
That's a bit different, I think. You're describing what I'd call the degree of internal conflict (which is only partially visible to the conscious mind, of course). However it seems that agency is very much tied to propensity to act and that is not a function solely of how much in agreement your "drives" are.
Crudely speaking, agency is the ability to get your ass off the couch and do stuff. Depressed people, for example, have very low agency and I don't think that's because they have a particularly powerful instinct which says "I want to sit here and mope".
comment by teageegeepea · 2013-08-25T03:18:57.060Z · LW(p) · GW(p)
I thought #3 was the definition of "agent", which I suppose is why it got that label. #1 sounds a little like birds confronted by cuckoo parasitism, which Eliezer might call "sphexish" rather than agenty.
comment by PhilGoetz · 2013-08-26T22:40:49.399Z · LW(p) · GW(p)
I've used "agentness" in at least one LessWrong post to mean the amount of information you need to predict someone's behavior, given their environment, though I don't think I defined it that way. A person whose actions can always be predicted from existing social conventions, or from the content of the Bible, is not a moral agent. You might call them a moral person, but they've surrendered their agency.
Perhaps I first got this notion of agency from the Foundation Trilogy: Agenthood is the degree to which you mess up Hari Seldon's equations.
My preference in cases like this is not to puzzle over what the word "agent" means, but to try to come up with a related concept that is useful and (theoretically) measurable. Here I'd suggest measuring the number of bits that a person requires you to add to your model of the world. This has the advantage that a complex person who you're able to predict through long association with them still has a large number, while a random process is impossible to predict yet adds few bits to your model.
This kind of distinction is one of the reasons I say that the difference between highly-intelligent people and people of average intelligence is greater than the difference between people of average intelligence and dogs.
(bits of surprise provided by a highly-intelligent person - bits of surprise provided by a human of average intelligence) > (bits of surprise provided by a human of average intelligence - bits of surprise provided by a dog).
Though my greater experience with humans, their greater homogeneity owing to language and culture, and whatever rationality they possess all bias each human to require adding fewer bits, so that the bits of surprise provided by a dog may on average be greater than the bits of surprise provided by an average person. There's something wrong with this measure if it penalizes rationality and language use.
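One hedged way to make "bits of surprise" concrete is to read it as total surprisal under your predictive model of the person; a minimal sketch, with invented actions and probabilities (this is only one possible operationalization of "bits added to your model"):

```python
import math

def bits_of_surprise(predicted_probs, observed_actions):
    """Total surprisal, in bits, of the actions someone actually took,
    under the probabilities your model of them assigned to those actions."""
    return sum(-math.log2(predicted_probs[a]) for a in observed_actions)

# Invented numbers: a person your model predicts well costs you few bits;
# a person it predicts poorly costs you many.
model_of_predictable_person = {"follow convention": 0.9, "deviate": 0.1}
model_of_surprising_person = {"follow convention": 0.5, "deviate": 0.5}

observed = ["follow convention"] * 9 + ["deviate"]
print(bits_of_surprise(model_of_predictable_person, observed))  # ~4.7 bits
print(bits_of_surprise(model_of_surprising_person, observed))   # 10.0 bits
```

Note that this surprisal reading charges you for randomness, whereas the "bits added to your model" reading above charges for model complexity; the two come apart exactly on the random-process case mentioned in the comment.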
comment by pwno · 2013-08-26T06:47:54.689Z · LW(p) · GW(p)
The degree to which I feel blame or judgement towards people for not doing things they said they would do is almost directly proportional to how much I model them as agents.
I've noticed that people are angrier at behaviors they can't explain. The anger subsides when they learn about the motives and circumstances that led to the behavior. If non-agents are supposed to be less predictable, I'd guess we're more inclined to judge/blame them.
comment by itaibn0 · 2013-08-25T16:30:07.544Z · LW(p) · GW(p)
Here's my answer to the title question, before reading the post*:
I understand the word "agent" to refer to a model I created specifically for modeling humans. The two agree to such a degree that any discrepancy is almost entirely due to the ambiguity of these words.
After reading the post: I don't notice myself making the distinction you describe. Under your distinction, the way I model people seems more like treating everyone (including myself) as a complex system than treating everyone (including myself) as an agent, but I'm not sure of this.
*Well, I peeked at the first few sentences.
comment by Shmi (shminux) · 2013-08-25T06:31:47.572Z · LW(p) · GW(p)
Upon reflection, I think I consider people whose behavior I have trouble modeling/predicting (roughly those smarter than I am) as PCs and the rest (including myself, unfortunately) as NPCs. However, sometimes I get surprised by NPCs behaving in an agenty way, and sometimes I get surprised by PCs behaving predictably, including predictably wrong.
Replies from: Document↑ comment by Document · 2013-08-25T07:36:17.970Z · LW(p) · GW(p)
sometimes I get surprised by PCs behaving predictably, including predictably wrong.
Is this a clever "paradoxical" description of what happens that I'm not quite parsing, or is it just a contradiction?
Replies from: Decius↑ comment by Decius · 2013-08-27T00:37:48.408Z · LW(p) · GW(p)
The expected action of someone more agenty than oneself, when confronted with certain situations, is to take an action which falls into the category "all others".
When they pick "repeat the same course of action which just failed", it is surprising that they picked a predictable response, rather than a response not contained in the predictions.
comment by Adam Zerner (adamzerner) · 2015-04-20T02:10:17.605Z · LW(p) · GW(p)
This seems to me to be a conversation about semantics. I.e.:
IF
You and I both view John to have the same:
1) Reliability and responsibility
2) Intellectual formidability
3) Conventional "agentiness"
BUT
You think that intellectual formidability is part of what makes someone "agenty" and I don't.
THEN
We agree about everything that's "real", and only are choosing to define the word "agent" differently.
I anticipate a reasonable chance that something in this conversation just went right over my head, and that it's about something much deeper than semantics.
For what it's worth, when I think of the word "agenty" I think almost exclusively about property 3. Another way to put it might be the extent to which your behavior is a result of your impulses.
I imagine some sort of spectrum from "I act based on my impulses and I believe things based on the immediate feelings they provoke in me" to "I stop and think about the costs and benefits before acting, I explore multiple hypotheses, and I do the same for my beliefs".
comment by [deleted] · 2013-11-03T03:15:27.308Z · LW(p) · GW(p)
There's no natural grouping to your examples. Some of them are just people who care about you. Others are people who do things you find impressive.
Frankly, this whole discussion comes across as arrogant and callous. I know we're ostensibly talking about "degree of models" or whatever, but there are clear implicit descriptive claims being made, based on value judgments.
Replies from: Swimmer963↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-11-03T17:30:42.274Z · LW(p) · GW(p)
I'm aware that my brain may group things in ways that aren't related to useful criteria or criteria I would endorse. My brain was doing this anyway before I wrote the post. Discussing it is an essential part of noticing it and self-modifying or compensating in some way.
How, specifically, do you think that having this discussion is arrogant and callous? What would have to be different about it for it not to be arrogant and callous?
Replies from: None↑ comment by [deleted] · 2013-11-03T21:25:25.479Z · LW(p) · GW(p)
Sure, I should have been more specific.
Here are two questions:
1) How do I model the minds of other people?
2) What are the minds of other people like?
My objection is that answers to (1) are being confused with answers to (2). In particular, a reductive (non-agenty) answer to (1) will tend to drift towards a reductive answer to (2).
The "arrogance" I see stems from the bias towards using non-reductive models when dealing with behaviors we approve of, and reductive models when dealing with behaviors we don't approve of.
For example, consider a devout Mormon, who spends two years traveling in a foreign country on a religious mission. Is this person an agent? Those already sympathetic to Mormon beliefs will be more likely to advance an agent-like explanation of this behavior than someone who doesn't believe in Mormon claims.
As another example, is Eliezer an agent? If you share his beliefs about UFAI, you probably think so. But if you think the whole AI/Singularity thing is nonsense, you're more likely to think of Eliezer as just another time-wasting blogger best known for fan-fiction. Why doesn't he get a real job? :P
Can a smoker be an agent? We tend to assume any unhealthy behavior has a non-agenty explanation, while healthy behaviors are agenty. We can't imagine the mind of an agent who really doesn't believe or care that smoking is unhealthy.
Your roommate doesn't wash the dishes. Have you tried imagining a model of her as an agent, in which she acts in accordance with her own values and decides not to wash the dishes? If she places very little internal value on clean dishes, she may not be able to relate to the mind of an agent who places any value on washed dishes. She may even be modeling you as a non-agent with a quirky response to the stimulus "dirty-dishes". (Did your parents never mistake a value they didn't understand for non-agenty behavior on your part?)
People of one political ideology utterly fail to model those with other political ideologies as agents, even while considering themselves to be agents.
Are athletes agents? Becoming an athlete requires as much goal-directed behavior as becoming a mathematician. But mathematicians often have terribly condescending opinions of the agenty-ness of athletes.
In all these examples, value judgments precede the models we make of agency-ness, which will slide towards evaluations of actual agency-ness.
It's very hard to create a non-reductive model of an agent with sufficiently alien values. Even defining agency in terms of "executive function" still relies on a comparison to your own mind (to decide what counts as goal-directed and how much it should count). Since a failure to build a model suggests that no model is possible, we will be biased towards only considering minds similar to our own as agents. The result is a strong in-group bias.
I hope this is a bit clearer. I don't mean to criticize you for reporting patterns in your own thought, and I understand that's not meant to make any normative claims. To report some patterns of my own, the "red flags" my brain noted reading the comments were the phrase "Heroic responsibility drive" and a bunch of really math-y types claiming they had everyone else figured out.
Replies from: Swimmer963↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-11-04T13:38:13.114Z · LW(p) · GW(p)
Agreed that you have to be very careful about letting your answers to (1) slide into your answers to (2). But I don't think you can do th
If she places very little internal value on clean dishes, she may not be able to relate to the mind of an agent who places any value on washed dishes. She may even be modeling you as a non-agent with a quirky response to the stimulus "dirty-dishes".
Oh, she likes clean dishes all right. She nagged me about them plenty. It was just that her usual response to dirty dishes was "it's too ughy to go in the kitchen so I just won't cook either." She actually verbalized this to me at some point. She also said (not in so many words) that she would prefer to be the sort of person who just washed dishes and got on with life. So there was more to "what she said" than saying to me that she would wash the dishes (which someone who didn't care about dishes might say anyway for social reasons).
Obviously all people are agents to some degree, and can be agents to different degrees on different days depending on, say, tiredness or whether they're around their parents. (I become noticeably less agenty around mine). But these distinctions aren't actually what my brain perceives; my brain latches onto some information that in retrospect is probably relevant, like my roommate saying she wants to be the sort of person who just washes dishes but not washing any dishes, and things that aren't relevant to agentiness, like impressiveness.
comment by LM7805 · 2013-09-24T15:46:21.646Z · LW(p) · GW(p)
I model people constantly, but agency and the "PC vs. NPC" distinction don't even come into it. There are classes of models, but they're more like classes of computational automata: less or more complex, roughly scaling with the scope of my interactions with a person. For instance, it's usually fine to model a grocery store cashier as a nondeterministic finite state machine; handing over groceries and paying are simple enough interactions that an NFSM suffices. Of course the cashier has just as much agency and free will as I do -- but there's a point of diminishing returns on how much effort I invest into forming a more comprehensive model, and since my time and willpower are limited, I prefer to spend that effort on people I spend more time with. Agency is always present in every model, but whether it affects the predictions a given model outputs depends on the complexity of the model.
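To make the automata analogy concrete, here is a minimal sketch of such a cheap model; the states and inputs are invented, and a real interaction model would of course be richer:

```python
# A toy nondeterministic finite-state model of a checkout interaction:
# each (state, my_input) pair maps to the set of next states the model allows.
# States and inputs are invented for illustration.
CASHIER_MODEL = {
    ("idle", "place groceries"): {"scanning"},
    ("scanning", "hand over coupon"): {"applies discount", "calls manager"},
    ("scanning", "wait"): {"asks about loyalty card", "states total"},
    ("states total", "pay"): {"hands receipt"},
}

def possible_next_states(state, my_action):
    """All next states the cheap model allows; an empty set just means
    the model doesn't cover this situation and needs upgrading."""
    return CASHIER_MODEL.get((state, my_action), set())

print(possible_next_states("scanning", "hand over coupon"))
# e.g. {'applies discount', 'calls manager'} (set order may vary)
```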
comment by FiftyTwo · 2013-09-24T12:24:08.296Z · LW(p) · GW(p)
I was thinking about this recently in the context of the game Diplomacy. One way to play is to model your opponents as rational, self-interested actors making the optimal move for them at a particular time. This can separate your attitudes to people in the game from your normal emotional reactions (e.g. move from "he stabbed me in the back" to "he acted in his self-interest, as I should have expected").
[An interesting exercise would be to write down each turn what you predict the other players will do and compare that to their actions.]
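A toy sketch of that bracketed exercise (player names and moves are invented): log one prediction per player per turn, then score it afterwards.

```python
# Record predicted vs. actual moves and score how well you anticipate a player.
predictions = []  # (turn, player, predicted_move, actual_move)

def record(turn, player, predicted_move, actual_move):
    predictions.append((turn, player, predicted_move, actual_move))

def accuracy(player):
    rows = [p for p in predictions if p[1] == player]
    return sum(p[2] == p[3] for p in rows) / len(rows) if rows else None

record(1, "England", "F London -> North Sea", "F London -> North Sea")
record(2, "England", "support Germany", "attack Germany")
print(accuracy("England"))  # 0.5
```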
comment by [deleted] · 2013-09-11T03:41:31.502Z · LW(p) · GW(p)
There are at best seven people in the world who are actually modelled as agents in my own head. My algorithm for predicting the behaviour of an individual generally follows: 1) find out what someone of their social class normally does; 2) assume they will continue to do that, plus or minus some hobbies and quirks; 3) if they deviate really strangely, check how they have reacted to past crises and whether they have any interests which make them likely to deviate again. If this fails, then begin modelling them to increase prediction accuracy.
This works reasonably well, but likely betrays a rather weak theory of mind. It would probably be easier to make close friends and communicate if I could at least temporarily assign agency to people around me, but doing so is difficult. Most social interactions can be spoofed with little or no personal attachment to an individual: model the mapping from someone's environment and goals to their actions, and then troubleshoot all kinds of errors, such as when someone's goals are self-defeating.
comment by [deleted] · 2013-09-07T08:03:31.876Z · LW(p) · GW(p)
I'd personally name the ability to change opinions and behaviour as the most important difference between a PC and an NPC.
comment by Sithlord_Bayesian · 2013-08-27T05:12:32.539Z · LW(p) · GW(p)
So, you (Swimmer963) think of agenty people as being those who:
- Are reliable
- Are skilled (in areas you are less familiar with)
- Act deliberately, especially for their own interest
It is interesting that all three of these behaviors seem to be high status behaviors. So, my question is this: does high status make someone seem more agenty to you? Could sufficiently high status be a sufficient condition for someone being "agenty"?
Replies from: pwno, metatroll
comment by [deleted] · 2013-08-25T06:24:52.996Z · LW(p) · GW(p)
After I read the question "do you model people as agents versus complex systems?", I started to wonder which of the two options is more "sophisticated". Is an agent more sophisticated than a complex system, or vice versa? I don't really have an opinion here.
Something I like to tell myself is that people are animals first and foremost. Whenever anyone does anything I find strange, unusual, or irrational, my instinct is to speculate about the cause of the behavior. If person A is rude towards person B, I don't think, "person A is being a bad person"; I think something like, "person A is frustrated with person B and believes person B is misbehaving, and believes that rudeness is justified in this situation".
So I guess that when I ask myself what's the difference between an agent and a complex system, my first thought is to say that an agent is not composed of parts, whereas a complex system is. Under this definition, it's a fact that humans are complex systems, not agents.
My older brother is the type of person who is "conventionally agenty": he has goals, and he attempts to achieve them by applying problem-solving skills, usually with great success. A certain other person I know (call him P), on the other hand, is the opposite: he has goals, but he makes no apparent effort to achieve them, and so he doesn't. (He's definitely getting better, though—just not very quickly.) The difference between my brother and P seems to come down to one single difference in attitude. My brother's attitude toward goals is to think about how they could be achieved, and to try to figure out how to achieve the achievable ones. P's attitude toward goals is to go ahead and achieve them if he already knows how to, and just ignore them otherwise.
"Agent-like" definitely isn't the way I'd describe my brother; I'd call him proactive.
For what it's worth, my attitude toward goals is less my-brother-like than seems ideal. Given a goal other than overcoming procrastination, I think about it and try to determine whether it's a good use of my time or not. If it is, I add it to my to-do list; otherwise, I forget about it. The goal of overcoming procrastination is something I think about carefully many times every day. This goal seems to be extremely difficult for me to achieve, which makes me wonder why everyone else seems to have it so easy.
Replies from: Viliam_Bur, Decius, Document↑ comment by Viliam_Bur · 2013-08-25T16:49:06.301Z · LW(p) · GW(p)
Is an agent more sophisticated than a complex system, or vice versa?
An agent is a specific kind of complex system.
Replies from: None↑ comment by [deleted] · 2013-08-27T14:48:08.504Z · LW(p) · GW(p)
I thought we were pretending that the two are mutually exclusive. Agents have magical free will, complex systems don't.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-08-31T09:55:15.196Z · LW(p) · GW(p)
Okay. I just don't like words defined as "X, except for Y" (specifically: complex systems, except for those who have magical free will). If we tried to avoid this "excepting", the question would be rephrased as:
Is a complex system with magical free will more sophisticated than a complex system without magical free will, or vice versa?
But I am not sure how exactly that helps, so... uhm, end of nitpicking.
↑ comment by Decius · 2013-08-27T01:01:19.789Z · LW(p) · GW(p)
I model my computer as a complex system; when it has undesired behavior, I give it a known set of conditions and it behaves consistently and often predictably.
I don't expect it to engage in goal-oriented behavior.
There are people who I model in a similar manner- I know what they do in certain conditions, and I don't ask what it is they are trying to accomplish. There are cases where I behave in a similar manner, performing sphexish behavior even while consciously aware of it. Noticing that I am doing that evokes cognitive dissonance, so I guess I don't actually model myself that way, even when it would be accurate to do so.
Replies from: None↑ comment by [deleted] · 2013-08-27T14:55:22.229Z · LW(p) · GW(p)
There are cases where I behave in a similar manner, performing sphexish behavior even while consciously aware of it. Noticing that I am doing that evokes cognitive dissonance, so I guess I don't actually model myself that way, even when it would be accurate to do so.
Huh. I frequently notice myself behaving in a seemingly robotic fashion, doing stuff "automatically" with no real conscious input (e.g. when doing simple, routine tasks like folding laundry), but it doesn't give me any feeling of cognitive dissonance.
Replies from: Decius↑ comment by Decius · 2013-08-27T20:28:39.721Z · LW(p) · GW(p)
What about when the behavior you are doing has counterproductive results?
Replies from: None↑ comment by [deleted] · 2013-09-01T21:14:38.014Z · LW(p) · GW(p)
What are you asking, exactly?
To try to answer your question: If I find myself behaving "automatically" in a counterproductive manner, that's an uncomfortable situation to be in, and to me, it emphasizes the fact that I'm not a "pure goal-oriented agent". I do feel a sort of cognitive dissonance in these cases, I think; I feel like the fact that I'm not behaving productively is "my fault" and that it would be easy for me to stop doing what I'm doing, while simultaneously feeling like it would be very difficult to stop doing what I'm doing.
Replies from: Decius↑ comment by Decius · 2013-09-02T00:07:31.866Z · LW(p) · GW(p)
What are you asking, exactly?
Because I described a situation in which I felt a certain way, and you expressed that you felt a different way in a situation which had certain similarities. I felt that I could identify a significant difference between those situations and wanted to confirm that we probably have similar subjective experiences when confronted with similar enough circumstances.
Had I discovered a difference, it would be worth further discussion. I'm unsure if this similarity is worth further discussion. Feeling like it would be trivial to do something else, believing that I want to do something else, but not doing something else is a common enough failure mode for me to be worrisome.
Replies from: None↑ comment by [deleted] · 2013-09-02T15:05:08.631Z · LW(p) · GW(p)
nod
Tangential question: why did you use "failure mode" there instead of "problem"?
Replies from: Decius↑ comment by Decius · 2013-09-02T18:25:46.280Z · LW(p) · GW(p)
I haven't codified the exact distinction that I make between those two concepts; in the case of materials science, a 'problem' would be a pressure vessel at a low temperature containing high pressure; the failure mode of such a problem would be brittle fracture.
In this case it might also have made sense to call it a class of problems; each instance is different enough that a general solution would be different in nature from a series of specific solutions which combined covered every individual exemplified case.
↑ comment by Document · 2013-08-25T07:34:29.006Z · LW(p) · GW(p)
If person A is rude towards person B, I don't think, "person A is being a bad person"; I think something like, "person A is frustrated with person B and believes person B is misbehaving, and believes that rudeness is justified in this situation".
You assume that when someone appears to be acting in anger, they're actually acting in the way they've decided was best after weighing the facts?
Replies from: None, Decius, None↑ comment by [deleted] · 2013-08-27T14:47:18.648Z · LW(p) · GW(p)
Well, no. In the particular case I had in mind, person A was being rude, and so I figured person A was frustrated with person B and believed person B was misbehaving. I asked person A if he thought rudeness was justified in this situation, and he said yes.
Replies from: Document↑ comment by Decius · 2013-08-27T00:41:05.326Z · LW(p) · GW(p)
What's the difference between someone who commonly believes that rudeness is appropriate, and a rude person?
Replies from: PeterisP, Document↑ comment by PeterisP · 2013-08-27T14:28:01.617Z · LW(p) · GW(p)
If you model X as a "rude person", then you expect him to be rude with high[er than average] probability in most cases, period.
However, if you model X as an agent who believes that rudeness is appropriate in common situations A, B, C, then you expect that he might behave less rudely (a) if he perceives that this instance of a common 'rude' situation is nuanced and that rudeness is not appropriate there; or (b) if he could be convinced that rudeness in situations like that is contrary to his goals, whatever those may be.
In essence, it's simpler and faster to evaluate expected reactions for people that you model as just complex systems, you can usually do that right away. But if you model goal-oriented behavior, "walk a mile in his shoes" and try to understand the intent of every [non]action and the causes of that, then it tends to be tricky but allows you more depth in both accurate expectations, and ability to affect the behavior.
However, if you do it poorly, or simply lack the data necessary to properly understand the reasons/motivations of that person, then you'll tend to get gross misunderstandings.
↑ comment by [deleted] · 2013-08-25T08:30:49.027Z · LW(p) · GW(p)
That's not what they said. They said that they believe that rudeness is justified in the situation. That belief could change (or could not) upon further reflection. Hence the concept of regret.
Replies from: Document↑ comment by Document · 2013-08-25T08:33:58.671Z · LW(p) · GW(p)
Not thinking about a question isn't a belief, or rocks have beliefs.
Replies from: None↑ comment by [deleted] · 2013-08-25T08:39:32.652Z · LW(p) · GW(p)
There's a difference between the slow, methodical, relatively inefficient (in terms of effort required for a decision) mode of thought, and the instant thoughts we all have (which we use for almost everything we do, and which are pretty good for many things but not all things).
Replies from: Document