Parasitic Language Games: maintaining ambiguity to hide conflict while burning the commons

post by Hazard · 2023-03-12T05:25:26.496Z · LW · GW · 17 comments

Contents

  Case Study: "Are Founders Allowed to Lie?"
    Context & Language Games
    Who benefits from pre-truth?
    Sticky equilibrium?
    Hyperstitioning Coordination for Stag Hunts
  Generalizing: Parasitic Language Games
17 comments

“They are playing a game. They are playing at not playing a game. If I show them I see they are, I shall break the rules and they will punish me. I must play their game, of not seeing I see the game”
- R. D. Laing

"It's not lying if everyone knows it's lying."

I see this sentiment in a lot of places. It pops up in corporate managerial contexts. It's been used as a legal defense, and worked. It's a claim that communication that looks adversarial isn't: it's just high-context communication between people "in the know"; there's no deception happening, no conflict, you just don't get how we do things here.

I don't buy it. My claim in a nutshell:

In situations where people insist "it's not lying because everyone knows it's lying," the people in the know aren't deceiving each other. But the reason this game is being played is to fool people not in the know, and insisting that it's just "high context communication" is part of an effort to obscure the fact that a conflict is going on.

If that makes perfect sense to you, dope, you already get my main point. The rest of this post is adding nuance, actually arguing the case, and providing more language for talking about these sorts of dynamics.

Case Study: "Are Founders Allowed to Lie?"

This essay by Alex Danco talks about how "it's not lying because everybody knows it's lying" works in the Silicon Valley startup scene. It's short enough that it's worth reading now so you can decide for yourself if I'm misrepresenting him. If you don't feel like reading it I still quote enough of it for my post to make sense.

Some snippets.

It's really hard to start a business without lying:

If you are only allowed to tell the literal, complete truth, and you’re compelled to tell that truth at all times, it is very difficult to create something out of nothing. You probably don’t call it “lying”, but founders have to will an unlikely future into existence. To build confidence in everyone around you – investors, customers, employees, partners – sometimes you have to paint a picture of how unstoppable you are, or how your duct tape and Mechanical Turk tech stack is scaling beautifully, or tell a few “pre-truths” about your progress. Hey, it will be true, we’re almost there, let’s just say it’s done, it will be soon enough.

It's not lying because everyone's in on it.

You’re not misleading investors; your investors get it: they’re optimizing for authenticity over ‘fact-fulness’. It’s not fraud. It’s just jump starting a battery, that’s all.

Some abstracted examples of what this "pre-truth" looks like:

You’ve all seen this. It doesn’t look like much; the overly optimistic promises, the “our tech is scaling nicely” head fakes, the logo pages of enterprise customers (whose actual contract status might be somewhat questionable), maybe some slightly fudged licenses to sell insurance in the state of California. It’s not so different from Gates and Allen starting Microsoft with a bit of misdirection. It comes true in time; by the next round, for sure.

Why it's important and also why you can't talk about it:

Founders will present you with something pre-true, under the total insistence that it’s really true; and in exchange, everyone around them will experience the genuine emotion necessary to make the project real. Neither party acknowledges the bargain, or else the magic is ruined.

Before investigating whether Danco's story checks out, I'm going to introduce some frames for talking about communication, to make it easier to clarify what's going on here.

Context & Language Games

All communication relies on context, and context has a nested structure that operates on multiple levels of communication. Some context operates piece-wise at the level of words and phrases; "me", "now", and "this" all offload their meaning to the surrounding context. Besides resolving the referents of particular words and phrases that are ambiguous without context, there's a broader way in which all communication is contextual to your understanding of the world, of your interlocutor, and of what you think you're both trying to do by saying words.

There’s a sort of figure-ground dynamic where the shared understanding of the pre-linguistic “Situation” is the ground that allows explicit communication to even be a figure. The explicit components of communication provide a set of structured references that pare down the possible things you could be talking about and doing via talking about them, and it’s the ground, the Situation, that allows your utterances to “snap-to-grid” and form a coherent unit of communication that another person can interact with. Without the “so what?”, no communication really makes sense. The only reason this isn’t always obvious is that the “so what?” can be so simple, general, or obvious that it barely feels worth mentioning. If I shout “Fire!”, the “so what?” is so patently obvious to any person who’s flammable, the snap-to-grid happens so instantaneously, that it barely feels like a “step” in the process of understanding. Non-sequiturs can only be a thing because people are always attending to “what is The Situation and what are we both doing here?” in a very general sense.

"Situations" can vary from incredibly stereotyped and well trodden like "gas station clerk and customer", to idiosyncratic and bespoke like the communication between a couple that’s been married for 50 years. In addition to the relationship aspects of context connect, people have lots of shared "modes" of context. Story-telling, jokes, sarcasm, earnestly explaining your beliefs, these are all high level contextual modes that have different answers to “what are we doing with words here?”

I use the term Language Game to gesture at the whole package-deal process a person is using for meaning-making at any given moment. The term comes from Wittgenstein, who introduced it largely to shed light on the vast diversity of things people do when communicating. This pushed back against his philosophical peers, people like the Logical Positivists and Bertrand Russell, who claimed that the basis of language was asserting propositions. Wittgenstein claimed that things like:

giving orders, describing an object, reporting an event, speculating, making up a story, play-acting, guessing riddles, making a joke, thanking, cursing, greeting, praying

are all very different uses of language, uses which incorporate propositions but can’t be accounted for in terms of propositions alone. When I talk about language games I’m also trying to highlight the interlocking nested layers of context that language is processed through in order for an utterance to be "snapped-to-grid" and mean something to you.

In plenty of cases people can notice the discrepancies between the language games they're playing and sync up to create a more coordinated understanding of The Situation. In plenty of cases this doesn't happen easily. Weird stuff happens when people don't understand or don't care that they're using the same words to play very different language games.


All that being said, here's how I'd phrase Danco's claim in my own language:

Instead of playing the "truth-claim" language game, where one interprets others as making actual truth claims about the way things are, and expects to be interpreted the same way when speaking, they are synced up on playing the pre-truth language game. In this game it's understood that when a founder tells a VC that they've got a demo ready right now, the VC knows that a demo may or may not actually be ready and that they aren’t expected to believe it definitely is. The utterance was made under the shared context that the specifics are less important than enacting a role, conveying "authenticity", and "evoking in people the genuine emotion necessary to make the project real". Pre-truths aren't part of an effort to deceive, but are a way to engage in a social ritual.

I think Danco describes a lot of the pre-truth game pretty well. Where I disagree is on the effects of the pre-truth game and the function it serves in the startup ecosystem. Specifically, he claims that 1) no one's getting deceived / there's no conflict happening / this is a cooperative game for all involved, and 2) the pre-truth game provides positive-sum benefits to the ecosystem that have nothing to do with deception. But before arguing those points I’m going to contrast the pre-truth game with a similar-looking situation, to further highlight how it’s supposed to work.

Imagine an environment where everyone is playing the truth-claim game but there's a high frequency of liars. Remember, the truth-claim game is characterized not by everyone being honest, but by how people expect to be interpreted. We'll call this a low-trust truth-claim situation. Here, if a VC has a way to verify the claims of a founder, or is plugged into a functional reputation network, they can make decisions about the honesty of their interlocutor. When they don't have that, VCs will likely resort to using base rates about lying in the ecosystem and "round down" the claims of founders when guessing at what's actually the case.

This is not the situation Danco is describing. The pre-truth game is a high-trust, cooperative dynamic among those playing it, where founders "rounding up" and VCs "rounding down" happen as a matter of convention. Keep that in mind as we poke at the rest of his story.

Who benefits from pre-truth?

The pre-truth game is cooperative among those in the know who are playing it, but what about everyone else? Danco makes clear that not everyone is in on it. The rules and limits of the pre-truth game are nebulous and implicit, part of a "nudge-wink fraternal understanding", and even the existence of the game is taboo to talk about. People outside Silicon Valley aren’t synced up on playing pre-truth with founders. He also notes that there are plenty of founders who don't pre-truth, not necessarily for principled reasons, but because they don't have social access to "the rules". Given all the obfuscation and the nature of the pre-truth game, I'm very confident that not everybody knows, and there are going to be plenty of collisions between pre-truthers and truth-claimers. When these mismatches occur, they will "fail silently" and not be immediately obvious to either party.

In these mismatches, are the truth-claimers being lied to? A better question: "when these mismatches in language games occur, who reliably benefits?" Since pre-truthing founders are always "rounding up", truth-claimers will see the investment opportunity as better than it actually is. Whether this counterfactually changes the investor's decision is situational (you could decide not to invest even with the rounded-up assessment, and both the rounded-up and actual assessments could indicate good bets), but either way they’ve been reliably misinformed. The pre-truthers frequently get more capital than they would have otherwise, and the truth-claimers frequently make less informed decisions than they would have otherwise. The directional gains from this are clear.

Sticky equilibrium?

A defender of pre-truth might say:

"Okay fine, there are negative externalities, but those are accidental! Sure we benefit when someone mistakes pre-truthing for truth-claiming, but that's not why we pre-truth. It's a crucial dynamic that has this 'je ne sais quoi' which helps both founders and VC's in a non-extractive, non-deceptive way. The negative externalities are lamentable, but it's a trade-off to get the secret sauce, and this only works if we don't talk about it so there's not really a way to get rid of the negative externalities."

The emphasis on how important it is to not talk about the game is very suspicious to me. It's clear why the existence of the pre-truthing game needs to be obscured if it's primarily an extractive play; potential marks need to not know about the game in order to be fooled. It's a lot less clear what sort of non-extractive secret sauce this game could have that requires it to not be talked about openly.

Remember, for this game to not be deceptive between VCs and founders, it needs to be the case that when founders claim to have made certain progress, the VCs know that there's a decent spread on what the actual situation could be. And if they're still deciding to invest, that means their risk profile is okay with that spread of possibilities. Which means that if they'd both been operating from a truth-claim orientation, the founder could have just told them how things are and nothing would have been lost.

"But since most people are already playing the pre-truth game, if you don't 'round up' then VC's will 'round down' your pitch and your startup will look less promising than it is! You have to pre-truth just to maintain equal footing."

This sounds like a benefit of the pre-truth game, but it's not. This is claiming that the pre-truth game is merely an equilibrium point in a pure coordination game, and that you'll be misunderstood if you don't play. If the merit of syncing to the pre-truth game were just that it's the prevailing coordination point, people should have no preference for the pre-truth game over the truth-claim game except for the costs of switching. And this isn't the sort of coordination equilibrium that's hard to switch out of. Consider a canonically difficult coordination problem: getting all your friends to switch to a different social media platform. Different platforms have different merits, and the biggest benefit of a platform is its network effects. If you hate Twitter and want to switch, you could make a platform that's in many ways superior, but it won't get anyone more of what they want until a critical mass switches to the new coordination point.

Switching from the pre-truth game, which we've identified as having negative externalities, to the truth-claim game has none of the qualities that make for hard coordination problems. There are no network effects; the only people you need to coordinate with for an interaction to work are the people you're currently talking to. This also isn't a Schelling point problem, where you have to pick the same coordination point without talking to each other and hope you picked the same thing. You can simply clarify with each other what language game y'all are gonna be playing. And remember, VCs aren't rounding down because it's a low-trust truth-claim environment; they're rounding down as part of a cooperative understanding, which means you don't have to deal with the difficulty of reliably communicating honesty. In fact, the only thing that makes it hard to switch out of the pre-truth game equilibrium is the taboo on acknowledging the game. The taboo is an active force that props up both the negative externalities of the game and the "you take a hit if you don't play" incentive.
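To make the coordination-point framing concrete, here's a minimal sketch with invented payoffs (the numbers and the two-player framing are illustrative assumptions, nothing from Danco): if the two games really were interchangeable conventions, both synced profiles would be equally good equilibria, and nothing but switching costs could favor pre-truth.

```python
# A minimal sketch of the "pure coordination point" claim.
# Payoffs are invented for illustration: players only communicate
# successfully when they're playing the same language game, and the
# two conventions are assumed to work equally well when matched.

GAMES = ("pre-truth", "truth-claim")

PAYOFFS = {
    ("pre-truth", "pre-truth"):     (1, 1),  # synced: communication works
    ("truth-claim", "truth-claim"): (1, 1),  # synced: works just as well
    ("pre-truth", "truth-claim"):   (0, 0),  # mismatch: fails silently
    ("truth-claim", "pre-truth"):   (0, 0),
}

def is_nash(profile):
    """Neither player gains by unilaterally switching language games."""
    for player in (0, 1):
        current = PAYOFFS[profile][player]
        for alternative in GAMES:
            deviation = list(profile)
            deviation[player] = alternative
            if PAYOFFS[tuple(deviation)][player] > current:
                return False
    return True

for profile in PAYOFFS:
    print(profile, "-> equilibrium" if is_nash(profile) else "-> not an equilibrium")

# Both matched profiles come out as equilibria with identical payoffs.
# If that were the whole story, only switching costs would favor
# pre-truth -- and unlike a social media platform, you only need to
# coordinate with the person you're currently talking to.
```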

Hyperstitioning Coordination for Stag Hunts

The last vaguely plausible defense I can imagine for the pre-truth game:

"Fine, I'll spill the beans on the secret sauce. We do it as part of a stag hunt coordination strategy [LW · GW]. For a startup to succeed, you don't just need to convince people you have a good idea, you need to convince them that others will buy in as well."

This is the only angle that can give even a little bit of an explanation for all the obfuscation. It... almost makes sense. Something like the pre-truth game is by no means the only or best strategy for coordinating people in a stag hunt, but it can get the job done. This is what I think Danco is gesturing at when he talks about getting everyone to "experience the genuine emotion necessary to make the project real".

I do agree that there's a non-trivial stag-hunt aspect to the task of making a startup. A huge part of convincing any given person to participate, whether it's first hires or investors, is convincing them that you can convince others. As a prospective employee, I could think the idea is great but see that you'd need a lot of capital, and if I don't expect you to be able to raise enough money, I won't join. As a VC, I could believe in your business, but if I'm not willing or able to fund you the whole way through, funding you now means making a bet that you can convince others to fund you down the road.
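Here's the same point as a stylized stag hunt; all the numbers (the safe outside option, the payoff to a successful hunt, the critical mass of three) are invented for illustration:

```python
# A stylized stag hunt over startup buy-in. Numbers are invented:
# committing to the startup only pays off if enough *other* people
# (hires, investors) also commit.

def payoff(i_commit: bool, others_committing: int, critical_mass: int = 3) -> float:
    if not i_commit:
        return 1.0  # the safe outside option ("hunt rabbit")
    # Committing pays well only if the hunt reaches critical mass.
    return 5.0 if others_committing >= critical_mass else 0.0

for others in range(5):
    print(f"{others} others commit: "
          f"commit={payoff(True, others)}, stay out={payoff(False, others)}")

# Committing only beats staying out once you expect at least 3 other
# committers, so the pitch is less "the idea is good" and more
# "everyone else is going to believe the idea is good".
```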

The problem is that there's way more that goes into starting a successful company than just getting people to believe in you. The glowing cloud of endorsement is needed in addition to the tech being possible, having the talent, having a good product, the market conditions being right, and basically everything else about making a business work. For someone to think it’s a good idea to join you, they need to be convinced that all of those details check out. How are you going to legitimately convince people of that? The pre-truth game doesn't come with any way of clarifying which parts of your pitch are coordinative pre-truth hype and which parts are actual, real aspects of your plan that are important for others to understand. The nature of the game and the obfuscation around it work against such clarification. It actively degrades everyone's ability to clearly communicate about the very real underlying reality which someone needs to be keeping track of for this whole thing to work.

This whole game actively makes it easier for grifters to succeed in your ecosystem. Sure, maybe you happen to be the heart-of-gold, god-tier-competence, noble-lie champion who can make pre-truth work in a way that benefits everybody, but you did it at the cost of reinforcing a game that breaks the error-correction capacity of your community. The taboo on talking about the game pushes against people openly trying to keep track of who’s playing which language game and sharing that information, and if you try to update the mainline reputation network on people’s honesty, you’ll face resistance because “it’s not lying, stop trying to ruin people’s reputation!” It’s not impossible, but it becomes much harder. All this means grifters can flood in with little resistance. You get all these negative consequences, and it's not even the case that pre-truth is the only way to get coordinated group buy-in!

The pre-truth game, combined with the taboo on talking about it, produces minimal positive gains, ones that can be achieved by other means, while creating a context that reliably extracts from those not in the know and gums up the error-correction capacity of the ecosystem in a way that will lead to increasingly extractive behavior over time. In other words, it's fucked.

Generalizing: Parasitic Language Games

I'm not particularly entangled with the SF startup scene, so while I care about trying to set the record straight about what's going on there as a matter of principle, I mostly care about it as a way to illustrate the general dynamic.

I call the pre-truth game a parasitic language game with respect to the host language game of truth-claims. Its existence is powered by gains it extracts from those who mistake it for the host. Sometimes these gains are had through straightforward deception, but they can also be had when obfuscating the playing field creates enough plausible deniability that third parties can be prevented from intervening on the underlying conflict.

A parasitic language game does damage to the host language game, and to the entire discursive ecosystem it inhabits. Those playing the host game who are more trusting wind up the marks who are deceived and extracted from. Those playing the host game who catch on to the mismatch between what is said and what is functionally going on find themselves in a low-trust environment that, for mysterious reasons, fights against the typical verification and reputation-network methods that can ease the burden of having to sift through the untrustworthy. When players of the host game try to confront parasitic players, they're first met with insistence that the parasitic players are in fact playing the host game, plus expressions of offense at being attacked. If pressed further, the parasitic player will admit to playing a different game and quickly pivot to lines like "that's just how things are done here..." and "don't be so naive, everybody knows...", which serve the dual purposes of avoiding reputational blow-back among host players and instilling the taboo on open communication in the interrogator, so that they can keep pretending the parasitic game doesn't exist.

Importantly, engaging in a parasitic language game is part of a coalitional strategy. If you're pre-truthing all on your own, you're simply a liar. You need a sufficient number of people playing the game together in order to have the cover of "things work differently here, you just don't get it." I don't have well-formed thoughts on how parasitic language games come to dominate a given ecosystem, but I'd guess it's somewhat related to what Ben Hoffman talks about in his "Guilt, Shame, and Depravity" post.

While many biological parasites have control systems to create a "controlled burn" that ensures the host stays alive long enough for the parasite’s needs, parasitic language games don't have similar steering mechanisms. The constraint of operating strictly in the implicit imposes a huge loss of organizational capacity. If the players in the know were also engaged in overt conspiracy, frequently talking clearly with each other to keep track of the underlying reality while staying out of sight of those not in the know, they'd retain a lot of their ability to strategize and steer the situation. But it seems like parasitic strategies are most often powered by compartmentalization, motivated ignorance, and self-deception. Such parasitic language games have a short life span, since the players are harming their collective ability to keep track of and communicate clearly about the underlying reality of their situation. One way a parasitic language game can survive longer is if it reaches the point of becoming a "too big to fail" dynamic.


A closing thought: the communication distortions I’ve been exploring are ones that are actively being maintained, both as a way to hide the presence of a conflict and as a tool to engage in said conflict. They’re very different from the conflict-language entanglements where problems in communication generate a conflict, one that could be resolved if only people could work through their mutual misunderstandings. When it comes to parasitic language games, the typical pro-social "resolve misunderstandings and ambiguity" tool-box isn't going to help because the misunderstandings and ambiguity are serving a purpose.

I’d like to be able to share thoughts on combating these dynamics, but I don’t have too many. At a general level, acting more directly on the underlying conflict, instead of on the communicative symptoms, seems like a start. Even just being aware of the games people play is useful for staying oriented and not getting mystified by all the obfuscation. If anyone has more worked-out strategies, I’d love to hear them.

17 comments

Comments sorted by top scores.

comment by Chris_Leong · 2023-03-13T02:26:02.653Z · LW(p) · GW(p)

I'm wondering if the parasitism goes both ways in that founders exaggerate in order to gain benefits from investors and suppliers, but that these parties also gain from any founder who wants to play being forced to have "skin in the game" such that they can always be scapegoated if things explode too badly.

Replies from: Hazard
comment by Hazard · 2023-03-13T06:37:43.727Z · LW(p) · GW(p)

Yeah, the parasitic dynamic seems to set up the field for the scapegoating backup such that I'd expect to often find the scapegoating move in parasitic ecosystems that have been running their course for a while.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-03-14T18:44:26.901Z · LW(p) · GW(p)

So it's not parasitic but symbiotic instead?

Replies from: Hazard
comment by Hazard · 2023-03-15T04:37:08.594Z · LW(p) · GW(p)

Symbiotic would be a mutually beneficial relationship. What I described is very clearly not that.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-03-15T14:26:01.323Z · LW(p) · GW(p)

I'm wondering if the parasitism goes both ways in that founders exaggerate in order to gain benefits from investors and suppliers, but that these parties also gain from any founder who wants to play being forced to have "skin in the game" such that they can always be scapegoated if things explode too badly.

This seems to indicate mutual benefits for both founders and investors/suppliers?

Which you acknowledged could be possible in the previous comment?

Although individual cases may still be net negative, it's certainly possible that in aggregate it's symbiotic.

comment by SomeoneYouOnceKnew · 2023-03-12T08:16:58.855Z · LW(p) · GW(p)

I once met an ex-Oracle sales guy turned medium bigwig at other companies.

He justified it by calling it "selling ahead", and said it started because the reality is that if you tell customers no, you don't get the deal. They told the customers they would have the requested features. The devs would only find out once the deal was signed; no one in management ever complained, and everyone else on his "team" was doing it.

comment by DirectedEvolution (AllAmericanBreakfast) · 2023-03-13T17:28:11.258Z · LW(p) · GW(p)

I think the way VCs tend to deal with this in biotech is by employing scientific advisors to judge the viability of a technology and the quality of results. They build semi-trusting relationships with each other, so that a VC ally who's asking you to invest in a promising biotech they're already invested in becomes trustworthy, despite the apparent conflict of interest, because they're hoping to do repeat business with you in the future. Excessive exaggerations put companies at risk of losing their credibility or even of legal action (e.g., Theranos). These aspects seem to put some limits on the exaggerations and lies that take place in the startup world.

My sense is that Tyler Cowen and Daniel Gross are trying to deal with this problem in the context of hiring in their newish book "Talent." They look for job-specific objective metrics (e.g., IQ for a job requiring high intelligence, but not for jobs that don't demand it, since that would be wasting resources on an overqualified person). They also look for surprising questions that shift the conversational dynamic during an interview to one that is more authentic and harder for the candidate to BS their way through. Lately, Tyler is trying "what is a commonplace opinion that you absolutely agree with?", since he feels people are overprepared for "what is a commonplace opinion that you know to be false?"

Another way to frame your question is that it's fundamentally about how to extract accurate, valuable investing information, at lower cost, in a competitive context, when the relevant information is often highly nonstandard, the founders are often doing what they're doing for the first time, and their ability to attract new funding is life and death for their company. When you put it that way, it makes perfect sense that this is so difficult!

comment by Self (CuriousMeta) · 2024-12-05T12:46:36.887Z · LW(p) · GW(p)

I find it important for rationalists to think and talk more about deception. 

While honestly the post is a bit long for my taste, I like the way it approaches the Overton window with this kind of dark-artsy, borderline-political topic and presents a plainly insightful case study.

comment by Ninety-Three · 2023-03-13T02:34:57.418Z · LW(p) · GW(p)

Here is a possible defense of pre-truth. I'm not sure if I believe it, but it seems like one of several theories that fit the available evidence.

Willingness to lie is a generally useful business skill. Businesses that lie to regulators will spend less time on regulatory compliance, businesses that lie to customers will get more sales, etc. The optimal amount of lying is not zero.

The purpose of the pre-truth game is to allow investors to assess the founder's skill at lying, because you wouldn't want to fund some chump who can't or won't lie to regulators. Think of it as an initiation ritual: if you run a criminal gang it might be useful to make sure all your new members are able to kill a man, and if you run a venture capital firm it might be useful to make sure all the businessmen you invest in are skilled liars. The process generates value in the same way as any other skill-assessing job interview. There's a conflict which features lying, but it's a coalition of founders and investors against regulators and customers.

So why keep the game secret? Well it would probably be bad for the startup scene if it became widely known that everyone's hoping startups will lie to regulators and customers. Also, by keeping the game secret you make "figure out what game we're playing" a part of the interview process, and you'd probably prefer to invest in people savvy enough to figure that out on their own.

Replies from: Hazard
comment by Hazard · 2023-03-13T06:28:37.015Z · LW(p) · GW(p)

Your comment seems like an expansion on who the party being fooled is, and it also points out another purpose for the obfuscation. A defense of pre-truth would be a theory that shows how it's not deceptive and not a way to cover up a conflict. That being said, I agree that an investor who plays pre-truth does want founders to lie, and it seems very plausible that they orient to their language game as a "figure it out" initiation ritual.

Replies from: Ninety-Three
comment by Ninety-Three · 2023-03-13T14:25:18.440Z · LW(p) · GW(p)

Depending on exactly where the boundaries of the pre-truth game are, I think I could argue no one is being deceived (I mean realistically there will be at least a couple naive investors who think founders are speaking literal truth, but there could be few enough that hoodwinking them isn't the point).

When founders present a slide deck full of pre-truths about how great their product is, that slide deck is aimed solely at investors. The founder usually doesn't publish the slide deck, and if they did they wouldn't expect Joe Average to care much. The purpose of the pre-truths isn't to make anyone believe that their product is great (because all the investors know that this is an audition for lying, so none of them are going to take the claims literally), rather it is to demonstrate to investors that the founder is good at exaggerating the greatness of their product. This establishes that a few years later when they go to market, they will be good at telling different lies to regulators, customers, etc.

The pre-truth game could be a trial run for deceiving people, rather than itself being deceptive.

Replies from: RamblinDash
comment by RamblinDash · 2023-03-13T15:21:14.418Z · LW(p) · GW(p)

This still creates another problem, for founders who aren't part of the good-ol'-boys network and don't know they are supposed to "pre-truth." Their companies will be evaluated worse than they "should" be by VCs, because VCs downgrade their actually-currently-true claims as if they were pre-truths. Nobody is being "deceived" per se, but these founders are being harmed.

Replies from: Ninety-Three
comment by Ninety-Three · 2023-03-13T15:37:23.711Z · LW(p) · GW(p)

If you believe strongly enough in the Great Man theory of startups then it's actually working as intended. If startups are more about selling the founder rather than the product, if the pitch is "I am the kind of guy who can do cool business stuff" rather than "Look at this cool stuff I made", then penalizing founders who don't pre-truth is correctly downranking them for being some kind of chump. A better founder would have figured out that he was supposed to pre-truth and it is significant information about his competence that he did not.

Realistically it is surely at least a little bit about the product itself, and honest founders must be "unfairly" losing points on the perceived merits of their product, but one could argue that identifying people savvy enough to play the game creates more value than is lost by underestimating the merits of honest product pitches.

Replies from: RamblinDash
comment by RamblinDash · 2023-03-15T14:38:27.862Z · LW(p) · GW(p)

Although to the extent that the founder's savvy is used to take the company public even though their product is not as good as they "pre-truthed" it, the ultimate effect is that the founder and the early investors are essentially colluding to defraud less-savvy IPO investors. Seems not great.

 

SQZ Biotech is possibly a good example of this (I live nearby so have followed the story but don't have any inside info about this company).

comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2023-03-13T18:06:48.475Z · LW(p) · GW(p)

Correlated equilibria are a more general, more realistic equilibrium concept than Nash equilibria. They can arise in situations where, by making use of a public signal, the players can improve on the Nash equilibria. Providing that public signal is a huge value vacuum begging to be Pareto-optimised.
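For concreteness, here's the classic "traffic light" construction in a game of Chicken; the payoff numbers are made up for illustration:

```python
# The classic "traffic light" correlated equilibrium in Chicken.
# Payoffs are invented for illustration. D = dare, C = chicken.

PAYOFFS = {
    ("D", "D"): (0, 0),
    ("D", "C"): (7, 2),
    ("C", "D"): (2, 7),
    ("C", "C"): (6, 6),
}

# Public signal: recommend (D,C), (C,D), (C,C) with probability 1/3 each.
# Neither player wants to deviate from their recommendation:
#   told D -> opponent was told C, so daring earns 7 > 6;
#   told C -> opponent dares or yields with prob 1/2 each,
#             so yielding earns (2+6)/2 = 4 > (0+7)/2 = 3.5.
signal = [("D", "C"), ("C", "D"), ("C", "C")]

correlated_payoff = sum(PAYOFFS[s][0] for s in signal) / len(signal)
print(f"following the public signal: {correlated_payoff:.2f}")  # 5.00

# The symmetric mixed Nash (each player dares with prob 1/3) only
# yields 14/3 ~ 4.67, so the public signal makes everyone better off.
print(f"symmetric mixed Nash: {14 / 3:.2f}")
```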

comment by Dagon · 2023-03-12T17:04:53.810Z · LW(p) · GW(p)

Multi-party game theory is COMPLEX! Add in that payoff matrices, utility curves, and player capabilities in human social situations are highly variable and not well known (and in many games/conflicts/situations, actively obfuscated). Then add in that all of these games are somewhat connected to each other, so there's no isolation to make the calculations feasible.

Recognizing that every interaction is part of multiple games, and that the players are insanely complex neural networks (literally) which have evolved over millennia for this kind of game, is a good first step. You CANNOT calculate any of the results; you can only observe and intuit, and be aware (to some extent) of your own preferences and capabilities (aka your "payoff matrix").