# Most Prisoner's Dilemmas are Stag Hunts; Most Stag Hunts are Battle of the Sexes

post by abramdemski · 2020-09-14T22:13:01.236Z · LW · GW · 22 comments

## Contents

- Your PD Is Probably a Stag Hunt
- Battle of the Sexes
- Taking Pareto Improvements
- Moloch is the Folk Theorem
- Lessons in Slaying Moloch
- 22 comments

I previously claimed that most apparent Prisoner's Dilemmas are actually Stag Hunts. I now claim that they're Battle of the Sexes in practice. I conclude with some lessons for fighting Moloch.

*This post turned out especially dense with inferential leaps and unexplained terminology. If you're confused, try to ask in the comments and I'll try to clarify.*

*Some ideas here are due to Tsvi Benson-Tilsen.*

(Edited to add, based on comments:)

Here's a summary of the central argument which, despite the lack of pictures, may be easier to understand.

- Most Prisoner's Dilemmas are actually iterated.
- Iterated games are a whole different game with a different action space (because you can react to history), a different payoff matrix (because you care about future payoffs, not just the present), and a different set of equilibria.
- It is characteristic of PD that players are incentivised to play away from the Pareto frontier; i.e., no Pareto-optimal point is an equilibrium. *This is not the case with iterated PD.*
- It is characteristic of Stag Hunt that there is a Pareto-optimal equilibrium, but there is also another equilibrium which is far from optimal. *This is also the case with iterated PD.* So **iterated PD resembles Stag Hunt**.
- However, it is furthermore true of iterated PD that *there are multiple different Pareto-optimal equilibria, which benefit different players more or less.* Also, if players don't successfully coordinate on one of these equilibria, they can end up in a worse overall state (such as mutual defection forever, due to playing grim-trigger strategies with mutually incompatible demands). **This makes iterated PD resemble Battle of the Sexes.**

In fact, the Folk Theorem suggests that *many* iterated games will resemble Battle of the Sexes in this way.

In a comment [LW(p) · GW(p)] on The Schelling Choice is "Rabbit", not "Stag" [LW · GW] I said:

In the book *The Stag Hunt*, Skyrms similarly says that lots of people use Prisoner's Dilemma to talk about social coordination, and he thinks people should often use Stag Hunt instead.

I think this is right. Most problems which initially seem like Prisoner's Dilemma are actually Stag Hunt, because there are potential enforcement mechanisms available. The problems discussed in Meditations on Moloch are mostly Stag Hunt problems, not Prisoner's Dilemma problems -- Scott even talks about enforcement, when he describes the dystopia where everyone has to kill anyone who doesn't enforce the terrible social norms (including the norm of enforcing).

This might initially sound like good news. Defection in Prisoner's Dilemma is an inevitable conclusion under common decision-theoretic assumptions. Trying to escape multipolar traps with exotic decision theories might seem hopeless. On the other hand, rabbit in Stag Hunt is *not* an inevitable conclusion, by any means.

Unfortunately, in reality, hunting stag is actually quite difficult.

("The schelling choice is Rabbit, not Stag... and that really sucks!")

Inspired by Zvi's recent sequence on Moloch [? · GW], I wanted to expand on this. These issues are important, since they determine how we think about group action problems / tragedy of the commons / multipolar traps / Moloch / all the other synonyms for the same thing.

My current claim is that most Prisoner's Dilemmas are actually *battle of the sexes*. But let's first review the relevance of Stag Hunt.

# Your PD Is Probably a Stag Hunt

There are several reasons why an apparent Prisoner's Dilemma may be more of a Stag Hunt.

- The game is actually an iterated game.
- Reputation networks could punish defectors and reward cooperators.
- There are enforceable contracts.
- Players know quite a bit about how other players think (in the extreme case, players can view each other's source code).

Each of these formal models creates a situation where players *can* get into a cooperative equilibrium. The challenge is that you can't unilaterally decide everyone should be in the cooperative equilibrium. If you want good outcomes for yourself, you have to account for what everyone else probably does. If you think everyone is likely to be in a bad equilibrium where people punish each other for cooperating, then aligning with that equilibrium might be the best you can do! This is like hunting rabbit.

**Exercise**: is there a situation in your life, or within spitting distance, which seems like a Prisoner's Dilemma to you, where everyone is stuck hurting each other due to bad incentives? Is it an iterated situation? Could there be reputation networks which weed out bad actors? Could contracts or contract-like mechanisms be used to encourage good behavior?

So, why do we perceive so many situations to be Prisoner's Dilemma-like rather than Stag Hunt-like? Why does Moloch sound more like *each individual is incentivized to make it worse for everyone else* than *everyone is stuck in a bad equilibrium?*

Sarah Constantin writes:

> A friend of mine speculated that, in the decades that humanity has lived under the threat of nuclear war, we’ve developed the assumption that we’re living in a world of one-shot Prisoner’s Dilemmas rather than repeated games, and lost some of the social technology associated with repeated games. Game theorists do, of course, know about iterated games and there’s some fascinating research in evolutionary game theory, but the original formalization of game theory was for the application of nuclear war, and the 101-level framing that most educated laymen hear is often that one-shot is the prototypical case and repeated games are hard to reason about without computer simulations.

To use board-game terminology, the *game* may be a Prisoner's Dilemma, but the *metagame* can use enforcement techniques. Accounting for enforcement techniques, the game is more like a Stag Hunt, where defecting is "rabbit" and cooperating is "stag".

# Battle of the Sexes

But this is a bit informal. You don't separately choose how to metagame and how to game; really, your iterated strategy determines what you do in individual games.

So it's more accurate to just think of the iterated game. There are a bunch of iterated strategies which you can choose from.

The key difference between the single-shot game and the iterated game is that cooperative strategies, such as Tit for Tat (but including [LW · GW] others [LW · GW]), are available. These strategies have the property that (1) they are equilibria -- if you know the other player is playing Tit for Tat, there's no reason for you not to; (2) if both players use them, they end up cooperating.

A key feature of the Tit for Tat strategy is that if you do end up playing against a pure defector, you do almost as well as you could possibly do with them. This doesn't sound very much like a Stag Hunt. It begins to sound like a Stag Hunt in which you can change your mind and go hunt rabbit if the other person doesn't show up to hunt stag with you.
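These properties can be checked with a small simulation. This is a sketch, not from the post: the payoff numbers (mutual cooperation 2, mutual defection 1, sucker 0, temptation 3) are assumptions chosen to match the averages discussed in the diagrams below.

```python
# Assumed payoffs (not stated explicitly in the post): CC=(2,2), DD=(1,1),
# and the cooperator gets 0 against a defector, who gets 3.
PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
          ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's last move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa; score_b += pb
        hist_a.append(a); hist_b.append(b)
    return score_a / rounds, score_b / rounds

play(tit_for_tat, tit_for_tat)     # mutual cooperation: (2.0, 2.0)
play(tit_for_tat, always_defect)   # TFT loses only the first round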

Sounds great, right? We can just play one of these cooperative strategies.

The problem is, there are many possible self-enforcing equilibria. Each player can threaten the other player with a *Grim Trigger* strategy: they defect forever the moment some specified condition isn't met. This can be used to extort the other player for more than just the mutual-cooperation payoff. Here's an illustration of possible outcomes, with the enforceable frequencies in the white area:

Alice could be extorting Bob by cooperating 2/3rds of the time, with a grim-trigger threat of never cooperating at all. Alice would then get an average payoff of 2⅓, while Bob would get an average payoff of 1⅓.
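These averages can be checked directly. A sketch, assuming moves are drawn independently at the given frequencies, with payoffs CC=2, DD=1, sucker=0, temptation=3 (assumed numbers, consistent with the figures quoted here):

```python
from fractions import Fraction

def average_payoffs(p_a, p_b):
    # Expected per-round payoffs when Alice cooperates with frequency p_a
    # and Bob with frequency p_b, moves drawn independently.
    alice = 2*p_a*p_b + 0*p_a*(1-p_b) + 3*(1-p_a)*p_b + 1*(1-p_a)*(1-p_b)
    bob   = 2*p_a*p_b + 3*p_a*(1-p_b) + 0*(1-p_a)*p_b + 1*(1-p_a)*(1-p_b)
    return alice, bob

# Alice extorts by cooperating only 2/3 of the time while Bob always cooperates:
average_payoffs(Fraction(2, 3), Fraction(1))  # → (7/3, 4/3), i.e. 2⅓ and 1⅓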

In the artificial setting of Prisoner's Dilemma, it's easy to say that Cooperate, Cooperate is the "fair" solution, and an equilibrium like I just described is "Alice exploiting Bob". However, real games are not so symmetric, and so it will not be so obvious what "fair" is. The purple squiggle highlights the Pareto frontier -- the space of outcomes which are "efficient" in the sense that no alternative is purely better for everybody. These outcomes may not all be fair, but they all have the advantage that no "money is left on the table" -- any "improvement" we could propose for those outcomes makes things worse for at least one person.

Notice that I've also colored areas where Bob and Alice are doing worse than payoff 1. Bob can't enforce Alice's cooperation while defecting more than half the time; Alice would just defect. And vice versa. All of the points within the shaded regions have this property. So not *all* Pareto-optimal solutions can be enforced.

Any point in the white region can be enforced, however. Each player could be watching the statistics of the other player's cooperation, prepared to pull a grim-trigger if the statistics ever stray too far from the target point. This includes so-called *mutual blackmail* equilibria, in which both players cooperate with probability slightly better than zero (while threatening to never cooperate at all if the other player detectably diverges from that frequency). This idea -- that 'almost any' outcome can be enforced -- is known as the Folk Theorem in game theory.
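A minimal sketch of that enforceability condition, again using the assumed payoff numbers (2 for mutual cooperation, 1 for mutual defection): a pair of cooperation frequencies is grim-trigger-enforceable only if each player's resulting average beats the safe defect-forever payoff of 1.

```python
def avg_payoffs(p_a, p_b):
    # Same expected-payoff computation as before (assumed payoffs 2/1/0/3).
    alice = 2*p_a*p_b + 3*(1-p_a)*p_b + (1-p_a)*(1-p_b)
    bob   = 2*p_a*p_b + 3*p_a*(1-p_b) + (1-p_a)*(1-p_b)
    return alice, bob

def enforceable(p_a, p_b):
    # Folk-theorem-style check: both players must do at least as well as
    # their guaranteed mutual-defection payoff of 1.
    alice, bob = avg_payoffs(p_a, p_b)
    return alice >= 1 and bob >= 1

enforceable(1.0, 1.0)   # mutual cooperation: True
enforceable(1.0, 0.25)  # Bob defects 75% while Alice always cooperates: False
```

The second call matches the shaded-region observation: with Alice always cooperating, her average is 2·p_b, so Bob cannot defect more than half the time.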

The Battle of the Sexes part is that (particularly with grim-trigger enforcement) everyone has to choose the same equilibrium to enforce; otherwise everyone is stuck playing defect. You'd rather be in even a bad mutual-blackmail type equilibrium, as opposed to selecting incompatible points to enforce. Just like, in Battle of the Sexes, you'd prefer to meet together at any venue rather than end up at different places.

Furthermore, I would claim that *most* apparent Stag Hunts which you encounter in real life are actually battle-of-the-sexes, in the sense that there are many different stags to hunt and it isn't immediately clear which one should be hunted. Each stag will be differently appealing to different people, so it's difficult to establish common knowledge [? · GW] about which one is worth going after together.

**Exercise**: what stags aren't you hunting with the people around you?

# Taking Pareto Improvements

Fortunately, Grim Trigger is not the *only* enforcement mechanism which can be used to build an equilibrium. Grim Trigger creates a crisis in which you've got to guess which equilibrium you're in very quickly, to avoid angering the other player; and no experimentation is allowed. There are much more forgiving strategies (and contrite ones, too, which helps in a different way).

Actually, *even using Grim Trigger to enforce things*, why would you punish the other player for doing something *better for you?* There's no motive for punishing the other player for raising their cooperation frequency.

In a scenario where you don't know which Grim Trigger the other player is using, but you don't think they'll punish you for cooperating *more* than the target, a natural response is for both players to just cooperate a bunch.

So, it can be very valuable to **use enforcement mechanisms which allow for Pareto improvements.**

Taking Pareto improvements is about moving from the middle to the boundary:

(I've indicated the directions for Pareto improvements starting from the origin in yellow, as well as what happens in other directions; also, I drew a bunch of example Pareto improvements as black arrows to illustrate how Pareto improvements are awesome. Some of the black arrows might not be perfectly within the range of Pareto improvements, sorry about that.)

However, there's also an argument against taking Pareto improvements. If you accept *any* Pareto improvements, you can be exploited in the sense mentioned earlier -- you'll accept any situation, so long as it's not worse for you than where you started. So you will take some pretty poor deals. Notice that one Pareto improvement can prevent a different one -- for example, if you move to (1/2, 1), then you can't move to (1,1/2) via Pareto improvement. So you could always reject a Pareto improvement because you're holding out for a better deal. (This is the *Battle of the Sexes* aspect of the situation -- there are Pareto-optimal outcomes which are better or worse for different people, so, it's hard to agree on which improvement to take.)
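The notion of Pareto improvement used here is easy to pin down in code. This sketch shows why (1/2, 1) and (1, 1/2) block each other: neither dominates the other, so once you take one, the other is no longer an improvement.

```python
def pareto_improves(x, y):
    # y is a Pareto improvement over x if it is at least as good for
    # every player and strictly better for at least one.
    return (all(b >= a for a, b in zip(x, y))
            and any(b > a for a, b in zip(x, y)))

pareto_improves((0.5, 1.0), (1.0, 0.5))  # False: worse for the second player
pareto_improves((1.0, 0.5), (0.5, 1.0))  # False: the points are incomparable
pareto_improves((0.5, 0.5), (1.0, 1.0))  # True: better for both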

That's where Cooperation between Agents with Different Notions of Fairness [LW · GW] comes in. The idea in that post is that you don't take *just any* Pareto improvement -- you have standards of fairness -- but you don't just completely defect for less-than-perfectly-fair deals, either. What this means is that two such agents with incompatible notions of fairness can't get all the way to the Pareto frontier, but the closer their notions of fairness are to each other, the closer they can get. And, if the notions of fairness *are* compatible, they can get all the way.

# Moloch is the Folk Theorem

Because of the Folk Theorem, *most* iterated games will have the same properties I've been talking about (not just iterated PD). Specifically, most iterated games will have:

- **The stag-hunt-like property:** There is a Pareto-optimal equilibrium, but there is also an equilibrium far from Pareto-optimal.
- **The battle-of-the-sexes-like property:** There are multiple Pareto-optimal equilibria, so that even if you're trying to cooperate, you don't necessarily know which one to aim for; and, different options favor different people, making it a complex negotiation even if you can discuss the problem ahead of time.

There's a third important property which I've been assuming, but which doesn't follow so directly from the Folk Theorem: **the suboptimal equilibrium is "safe", in that you can unilaterally play that way to get some guaranteed utility.** The Pareto-optimal equilibria are not similarly safe; mistakenly playing one of them when other people don't can be worse than the "safe" guarantee from the poor equilibrium.

A game with all three properties is like Stag Hunt with multiple stags (where you all must hunt the same stag to win, but can hunt rabbit alone for a guaranteed mediocre payoff), or battle of the sexes where you can just stay home (you'd rather stay home than go out alone).
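A toy payoff function (hypothetical numbers, not from the post) for this "multiple stags" game makes the three properties concrete: rabbit is safe, and stags only pay off when both players pick the *same* one.

```python
def payoff(my_move, their_move):
    # Hypothetical payoffs: rabbit guarantees 1 no matter what; a stag
    # pays 3 if the other player hunts the same stag, else 0.
    if my_move == "Rabbit":
        return 1
    return 3 if my_move == their_move else 0

payoff("Rabbit", "StagA")  # 1: the safe guarantee
payoff("StagA", "StagA")   # 3: coordinated on the same stag
payoff("StagA", "StagB")   # 0: both tried to cooperate, on different stags
```

The last line is the battle-of-the-sexes failure mode: both players chose a Pareto-optimal target, but incompatible ones, and both do worse than the safe option.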

# Lessons in Slaying Moloch

0. I didn't even address this in this essay, but it's worth mentioning: *not all conflicts are zero-sum.* In the introduction to the 1980 edition of *The Strategy of Conflict*, Thomas Schelling discusses the reception of the book. He recalls that a prominent political theorist "exclaimed how much this book had done for his thinking, and as he talked with enthusiasm I tried to guess which of my sophisticated ideas in which chapters had made so much difference to him. It turned out it wasn't any particular idea in any particular chapter. Until he read this book, he had simply not comprehended that an inherently non-zero-sum conflict could exist."

1. In situations such as iterated games, *there's no in-principle pull toward defection.* Prisoner's Dilemma seems paradoxical when we first learn of it (at least, it seemed so to me) because we are not accustomed to such a harsh divide between individual incentives and the common good. But perhaps, as Sarah Constantin speculated in Don't Shoot the Messenger, modern game theory and economics have conditioned us to be used to this conflict due to their emphasis on single-shot interactions. As a result, Moloch comes to sound like an inevitable gravity, pulling everything downwards. This is not necessarily the case.

2. Instead, *most collective action problems are bargaining problems*. If a solution can be agreed upon, we can generally use weak enforcement mechanisms (social norms) or strong enforcement (centralized governmental enforcement) to carry it out. But, agreeing about the solution may not be easy. The more parties involved, the more difficult.

3. *Try to keep a path open toward better solutions.* Since wide adoption of a particular solution can be such an important problem, there's a tendency to treat alternative solutions as the enemy. This bars the way to further progress. (One could loosely characterize this as the difference between religious doctrine and democratic law; religious doctrine trades away the ability to improve in favor of the more powerful consensus-reaching technology of immutable universal law. But of course this oversimplifies things somewhat.) Keeping a path open for improvements is hard, partly because it can create exploitability. But it keeps us from getting stuck in a poor equilibrium.

## 22 comments

Comments sorted by top scores.

## comment by romeostevensit · 2020-09-15T00:10:16.462Z · LW(p) · GW(p)

I found these three papers highly useful, especially the first one

1. Statistical physics of human cooperation

2. Evolutionary dynamics of group interactions on structured populations: a review

3. Understanding and Addressing Cultural Variation in Costly Antisocial Punishment

> What this means is that two such agents with incompatible notions of fairness can't get all the way to the Pareto frontier, but the closer their notions of fairness are to each other, the closer they can get.

this is helpful for clarifying some thoughts. Thanks.

> most collective action problems are bargaining problems.

I came to this conclusion and the nice thing about it is that it collapses a bunch of problems into the signaling landscape problem. (I don't know if there's an academic term for the prevailing state of the signaling landscape at a given time). The frustrating thing about the signaling landscape is that there's a lot of security through obscurity (eg shibboleths).

## comment by lionhearted · 2020-09-16T12:50:56.167Z · LW(p) · GW(p)

Going through these now. I started with #3. It's astoundingly interesting. Thank you.

## comment by Sniffnoy · 2020-09-15T17:51:23.616Z · LW(p) · GW(p)

> So, why do we perceive so many situations to be Prisoner's Dilemma-like rather than Stag Hunt-like?

I don't think that we do, exactly. I think that most people only know the term "prisoners' dilemma" and haven't learned any more game theory than that; and then occasionally they go and actually attempt to map things onto the Prisoners' Dilemma as a result. :-/

## comment by Raemon · 2020-09-15T00:37:01.803Z · LW(p) · GW(p)

Only a third of the way through so far, but confused – you cite "iteration and enforcement mechanisms" as things that make PDs more like Stag Hunts. But, isn't iteration and enforcement a property that either PD or SH can have? My understanding of the difference between PD and Stag Hunt was about what the actual payoffs are, and that C/C can be a nash equilibrium because you wouldn't get more payoff by defecting (although D/D is also a nash equilibrium)

## comment by abramdemski · 2020-09-15T15:51:49.365Z · LW(p) · GW(p)

In game theory, iterated PD would be a different game than PD. PD as typically defined *is* a single-shot game. The same is true of stag hunt, battle of the sexes, and many other games: if I say "stag hunt" to a game theorist, they probably don't ask "single shot or iterated?". Rather, if I say "stag hunt" and then start talking about iterated strategies, they might be like "oh, you mean iterated stag hunt?"

Similarly with enforcement mechanisms and so on. None of these are assumed by default.

In (single-shot) PD, the "possible strategies" are the moves you can make (or mixtures of moves, if you have randomness available). In iterated PD, however, the strategy space is much more complex: it's the set of possible iterated strategies, including any possible function of the game history. This gives us a correspondingly much more complex set of equilibria to consider.

It is characteristic of PD that players are incentivised to play away from the Pareto frontier; IE, no Pareto-optimal point is an equilibrium. **This is not the case with iterated PD.**

It is characteristic of Stag Hunt that there is a Pareto-optimal equilibrium, but there is also another equilibrium which is far from optimal. **This is also the case with iterated PD.**

Hence my assertion that iterated PD is more like stag hunt.

However, it is furthermore true of iterated PD that *there are multiple different Pareto-optimal equilibria, which benefit different players more or less.* Also, if players don't successfully coordinate on one of these equilibria, they can end up in a worse overall state (such as mutual defection forever, due to playing grim-trigger strategies with mutually incompatible demands). This makes iterated PD resemble Battle of the Sexes.

## comment by TurnTrout · 2020-09-15T17:02:19.691Z · LW(p) · GW(p)

> However, it is furthermore true of iterated PD that *there are multiple different Pareto-optimal equilibria, which benefit different players more or less.* Also, if players don't successfully coordinate on one of these equilibria, they can end up in a worse overall state (such as mutual defection forever, due to playing grim-trigger strategies with mutually incompatible demands). This makes iterated PD resemble Battle of the Sexes.

I think this paragraph very clearly summarizes your argument. You might consider including it as a TL;DR at the beginning.

## comment by Raemon · 2020-09-15T20:36:10.327Z · LW(p) · GW(p)

Okay. I think I get what you are saying now, but it wasn't clear on my initial read through.

I *did* understand on the initial read-through (or, currently think I understand?) that when you say "most games turn out to be Battle of the Sexes in practice", you mean that there is an emergent property of the iterated game that turns it into Battle of the Sexes.

My current summary of what you are intending to say (correct me if I got it wrong) is:

1. Most Prisoner's Dilemma games are actually iterated.

2. Iterated Prisoner's Dilemma is actually a different game with a different payoff matrix that has a different set of Nash equilibria. Choosing which strategy to play in iterated Prisoner's Dilemma is similar to playing Stag Hunt.

3. Then there is a further step: the process of deciding how to coordinate (the meta-strategy?) in that Stag Hunt is more similar to Battle of the Sexes.

I think what I was missing the first time through was #2. I was interpreting you to mean "the thing about stag hunts is that they are iterated, and your PD is probably iterated", where what you actually meant was "your PD is iterated, and iterated PD is actually isomorphic to stag hunt."

## comment by Bucky · 2020-09-15T10:18:21.434Z · LW(p) · GW(p)

I had the same confusion.

One of the key things I think between the 3 games is whether communication beforehand helps (in single shot games).

In PD, communication doesn't really help much, as there is little reason to trust what the other person says.

In SH communication should be able to solve your problem as S-S is optimal for both players.

In BotS communication which results in agreement can at least be trusted as co-ordinating is optimal for both players. Choosing which option to co-ordinate on is another matter.

(assuming you've included the pleasure of spiting the other person etc. in the payoff matrix)

## comment by abramdemski · 2020-09-15T15:52:56.215Z · LW(p) · GW(p)

See my reply to Raemon for the aforementioned confusion.

## comment by magfrump · 2020-09-17T04:18:02.128Z · LW(p) · GW(p)

Nitpick:

While by the end of the article I feel like I understood what you mean by battle of the sexes, I didn't at the start and there is neither an explanation of the battle of the sexes game (even at the beginning of the section titled Battle of the Sexes!) nor is there a link to a post or article about it.

## comment by riceissa · 2020-09-16T04:52:31.429Z · LW(p) · GW(p)

In the Alice/Bob diagrams, I am confused why the strategies are parameterized by the frequency of cooperation. Don't these frequencies depend on what the other player does, so that the same strategy can have different frequencies of cooperation depending on who the other player is?

## comment by abramdemski · 2020-09-16T17:10:15.714Z · LW(p) · GW(p)

First off, I'm not trying to illustrate the many-player game here. So imagine there's just Alice and Bob. I agree that the many-player version is relevant, but I was just dealing with the complexities that arise from iteration.

Second, yeah, absolutely: strategies in iterated games can be *any function of the history*. But that's a really complicated strategy space to try and draw. Essentially I'm showing you just a very high-level summary, focusing on frequency of cooperation as a salient feature.

The idea is that frequency is something each player can observe about the other. Alice can implement a Grim Trigger strategy to enforce any given frequency of cooperation from Bob. It needs to have some wiggle room, to allow chance fluctuations in frequency without pulling the Grim Trigger; but Alice can include wiggle room while enforcing tight enough a guarantee that Bob is forced to cooperate with the desired frequency in the limit, and Alice runs only a small risk of spuriously Grim Triggering.
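A toy sketch of this wiggle-room construction (a hypothetical tolerance schedule; the 1/√n band is an illustrative choice, not a claim about the exact one described):

```python
import math

def trigger_pulled(bob_cooperations, rounds_played, target, slack=2.0):
    # Pull the Grim Trigger only if Bob's observed cooperation rate strays
    # outside a band around `target` that shrinks like 1/sqrt(n): wide
    # early on (tolerating chance fluctuations), tight in the limit
    # (pinning down Bob's long-run frequency).
    if rounds_played == 0:
        return False
    observed = bob_cooperations / rounds_played
    tolerance = slack / math.sqrt(rounds_played)
    return abs(observed - target) > tolerance

trigger_pulled(5, 10, 0.7)      # False: 0.5 is inside the early, wide band
trigger_pulled(500, 1000, 0.7)  # True: 0.5 is far outside the band by now
```

Under this schedule the band at n rounds is ±2/√n, so a Bob who actually cooperates at the target frequency is unlikely to trip the trigger by chance, while any persistent deviation is eventually caught.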

## comment by AllAmericanBreakfast · 2020-09-23T22:14:49.626Z · LW(p) · GW(p)

Here's my summary of this post. Is this getting at the point you're trying to make?

The essential difference between a one-off Prisoner's Dilemma and a Stag Hunt is that in the one-shot PD, the prisoners cannot punish or reward each other for cooperating. In a Stag Hunt, the hunters can punish defection and reward cooperation. In both cases, the best outcome is equally good for all players.

The essential difference between a Stag Hunt and a Battle of the Sexes is that in the Stag Hunt, the best outcome is equally successful for all. In Battle of the Sexes, the best one-off outcomes are always unfair to at least one of the participants.

In most real-world situations, we can enforce cooperation. Generally, the outcomes won't be perfectly fair. They'll resemble a Battle of the Sexes more than a Stag Hunt or one-off Prisoner's Dilemma. So the problem is negotiating which unfair outcome the participants will choose. But because the Prisoner's Dilemma is so well-known, people often resort to it as their first game-theoretic analysis of any given situation.

## comment by Pongo · 2020-09-24T01:14:12.076Z · LW(p) · GW(p)

This summary was helpful for me, thanks! I was sad cos I could tell there was something I wanted to know from the post but couldn't quite get it

In a Stag Hunt, the hunters can punish defection and reward cooperation

This seems wrong. I think the argument goes "the essential difference between a one-off Prisoner's Dilemma and an IPD is that players can punish and reward each other in-band (by future behavior). In the real world, they can also reward and punish out-of-band (in other games). Both these forces help create another equilibrium where people cooperate and punishment makes defecting a bad idea (though an equilibrium of constant defection still exists). This payoff matrix is like that of a Stag Hunt rather than a one-off Prisoner's Dilemma"

## comment by Dagon · 2020-09-15T16:15:01.151Z · LW(p) · GW(p)

not all conflicts are zero-sum.

This should be the lede. Most real-world interactions lose a lot of options, and a lot of potential value, by being simplified to an (iterated) PD, SH, or BotS.

In reality, there's almost always un-modeled transfer and payouts - just being able to say "good job, thanks!" after a result is FREE UTILITY! Also, non-pathological humans have terms for the other player(s) in their utility function. Most importantly, there are far too many future games in the iteration set of a human lifetime for anyone to model, so reputation and self-image effects very often will dominate the modeled payouts.

## comment by FactorialCode · 2020-09-15T14:49:11.270Z · LW(p) · GW(p)

I'm OOTL, can someone send me a couple links that explain the game theory that's being referenced when talking about a "battle of the sexes"? I have a vague intuition from the name alone, but I feel this is referencing a post I haven't read.

Edit: https://en.wikipedia.org/wiki/Battle_of_the_sexes_(game_theory)

## comment by Jacobian · 2020-10-01T15:25:46.541Z · LW(p) · GW(p)

The longer (i.e., more iterations) you spend in the shaded triangles of defection, the more you'll be pulled to the defect-defect equilibrium as a natural reaction to what the other person is doing and the outcome you're getting. The longer you spend in the middle "wedge of cooperation", the more you'll end up moving up and to the right in Pareto improvements. So we want to make that wedge bigger.

The size of that wedge is determined by the ratio of a player's outcome from C-C to their outcome in D-D. In this case the ratio is 2:1, so the wedge is between the slopes of 2 and 1/2. If C-C only guaranteed 1.1-1.1 to each player while a defection got them at least 1, the wedge would be a tiny sliver. Conversely, if the payoff for C-C was 999-999 almost the entire square would be the wedge.
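A quick sketch of how the wedge's angular size depends on that ratio (just the angle between the two bounding slopes r and 1/r, where r is the C-C payoff over the D-D payoff):

```python
import math

def wedge_angle_degrees(cc_payoff, dd_payoff):
    # The wedge lies between the lines of slope r and 1/r through the
    # D-D point, where r = cc_payoff / dd_payoff.
    r = cc_payoff / dd_payoff
    return math.degrees(math.atan(r) - math.atan(1 / r))

wedge_angle_degrees(2, 1)    # ~36.9°: the post's 2:1 payoffs
wedge_angle_degrees(1.1, 1)  # ~5.5°: a tiny sliver
wedge_angle_degrees(999, 1)  # ~89.9°: almost the entire quadrant
```

This matches the comment's examples: at a ratio of 1.1 the wedge is a sliver, and at 999 it fills nearly the whole square.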

But the bigger the wedge, the more difference there is between outcomes on the Pareto frontier, so the outcome of 100% C-C is a lot less stable than if any deviation from it immediately led to non-equilibrium points that degenerate to D-D.

## comment by habryka (habryka4) · 2020-09-22T23:09:53.152Z · LW(p) · GW(p)

Promoted to curated: This post made a point that I have been grasping at for a while, and made it quite well. For better or for worse, I use Prisoner's Dilemma analogies at least 5 times a week, and so understanding the dynamics around those dilemmas is quite important to me. This post felt like it connected a number of ideas in this space in a way that I expect to refer back to in the future at least a few times.

## comment by ChristianKl · 2020-09-15T10:40:31.776Z · LW(p) · GW(p)

I would prefer headlines to spell out the terms they use to make it clearer to a reader that scans the headlines what a post is about.

## comment by abramdemski · 2020-09-15T15:54:14.019Z · LW(p) · GW(p)

Fixed.

## comment by ChristianKl · 2020-09-15T16:39:05.025Z · LW(p) · GW(p)

Thanks.