Think in Terms of Actions, not Attributes

post by Collisteru · 2020-11-11T15:57:19.476Z


How do we model the actions of those around us? This problem has been crucial for human survival since before the Holocene, so the brain has developed clever tricks to solve it. Personal attributes emerged as one of these tricks, as reflected in language: Bob is reliable, while Sally's a perfectionist. When these attributes are extended to groups, they become stereotypes: politicians are liars, but priests are honest.  

This allows us to categorize people and then predict the actions of any person with a certain attribute. Rather than ask, "What would Bob do?" we ask, "What would a reliable person do?" It becomes unnecessary to examine individuals.

This type of reasoning creates problems. On a small scale, it can cause prejudice, rash judgement, and hurt feelings. On a large scale, it can lead to demographic conflict, systemic bias, and social ruin. We can do better! How can we reliably predict others' actions without relying on stereotypes?

The answer is to look directly at the data—previous actions. While we already use past actions to justify given attributes, I propose cutting out the middleman and using previous actions to directly predict future ones. Rather than trusting Bob to protect a keepsake because he's reliable, trust him because he has always returned borrowed items quickly. Instead of calling a public figure a liar, show that past claims have been false 15% of the time, and make a prediction.

When using past actions to predict future ones, use actions that are as closely related as possible to what you want to predict. For example, the fact that Bob returns books quickly doesn't necessarily mean he will come to your meeting on time, even though both would traditionally indicate reliability.
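The "use the data directly" idea can be made concrete as a frequency estimate over past actions. Here is a minimal sketch (the function name, priors, and all numbers are hypothetical, not from the post) that predicts a future action from a track record, with Laplace smoothing so a short record doesn't yield overconfident 0% or 100% predictions:

```python
def predict_from_record(successes, trials, prior_successes=1, prior_trials=2):
    """Estimate the probability of a future action from past actions.

    Laplace (add-one) smoothing keeps a short track record from
    producing an overconfident estimate of exactly 0 or 1.
    """
    return (successes + prior_successes) / (trials + prior_trials)

# Hypothetical track record: Bob returned 9 of 10 borrowed items promptly.
p_return = predict_from_record(9, 10)

# A public figure whose past claims were false 15 of 100 times:
p_truthful = predict_from_record(85, 100)
```

This replaces "Bob is reliable" with "given his record, I'd put roughly an 83% chance on Bob returning this item promptly" — a prediction you can score later.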

We can extend this beyond people to replace many more vague attributes with specific observations. One example is saying, "Mathematics involves extreme precision and abstraction" instead of "Mathematics is difficult." It's important to understand that attributes are nothing more than a shortcut: a more accurate option is always available. This is the same idea underpinning both E-Prime and the writing rule "show, don't tell."

Once thinking in this way becomes the default, mental impressions of both people and situations become more specific, more actionable, and simply more accurate.

5 comments


comment by jimv · 2020-11-12T01:07:19.488Z

This seems to be a close analogue of something I've seen in business communications settings: customer segmentation. In that context, I had the same reaction to it as you're expressing on an individual interpersonal basis: it seems better to make predictions about the individual directly, rather than binning people into segments and then making predictions about the segments.

If you have a bunch of data about a bunch of (prospective?) customers, can your algorithms perform better (say in terms of identifying people's preferred means of contact) by predicting for each individual customer, rather than going via some pencil-sketched segment and then declaring that for such-and-such a segment you're best off communicating with them via email because that's what that segment as a whole prefers?
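The segment-vs-individual contrast in this comment can be sketched in a few lines. In this toy example (all customer names, channels, and counts are hypothetical), the segment-level prediction picks one contact channel for everyone by pooling the data, while the individual-level prediction uses each customer's own history — and the two can disagree:

```python
from collections import Counter

# Hypothetical contact history: channels each customer actually responded on.
history = {
    "alice": ["email", "email", "sms"],
    "bob":   ["sms", "sms", "sms", "email"],
    "cara":  ["email", "email"],
}

# Segment-level prediction: pool everyone, pick one channel for the segment.
pooled = Counter(ch for chans in history.values() for ch in chans)
segment_pick = pooled.most_common(1)[0][0]  # "email" wins the pooled count

# Individual-level prediction: the channel each customer used most themselves.
individual_pick = {name: Counter(chans).most_common(1)[0][0]
                   for name, chans in history.items()}
```

Here the segment as a whole prefers email, so the segment model emails Bob too, even though his own record says he responds by SMS — the edge case the per-individual model catches.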

comment by Collisteru · 2020-11-12T21:00:24.686Z

Predicting on an individual basis would be more accurate, since it's much more likely to catch edge cases and follow preference changes over time. At the same time, it's more expensive and the price scales with the size of your customer base, unlike categorization. That's probably why most businesses don't practice it. Social media algorithms use both, maybe because technologies like neural networks push the price down.

comment by Rudi C (rudi-c) · 2020-11-12T10:31:52.670Z

I think your post recommendation is a combination of unbundling and relying less on heuristics. Both are expensive and g-loaded (probably), and a good portion of their benefits go to other people, not the agent itself. Still, I agree that most people should do more of these, as their genetic programming has been tuned for far simpler societies.

comment by adamShimi · 2020-11-12T00:20:59.532Z

When reading the first sentence, I thought about the intentional stance, but then you went in another interesting direction.

I think the main problem with your proposal is that it's way harder. So I don't see how most people will actually implement it, and keep at it. But that's a nice goal to strive towards.