Towards optimal play as Villager in a mixed game
post by Benquo · 2019-05-07T05:29:50.826Z · LW · GW · 40 comments
This is a link post for http://benjaminrosshoffman.com/towards-optimal-play-as-villager-in-a-mixed-game/
On Twitter, Freyja wrote:
Things capitalism is trash at:
Valuing preferences of anything other than adults who earn money (i.e. future people, non-humans)
Pricing non-standardisable goods (i.e. information)
Playing nicely with non-quantifiable values + objectives (i.e. love, ritual)
Things capitalism is good at:
Incentivising the production of novel goods and services
Coordinating large groups of people to produce complex bundles of goods
The obvious: making value fungible
Anyone know of work on -
a) integrating the former into existing economic systems, or
b) developing new systems to provide those things while including capitalism's existing benefits?
This intersected well enough with my current interests and those of the people I've been discoursing with most closely that I figured I'd try my hand at a quick explanation of what we're doing, which I've lightly edited into blog post form below. This is only a loose sketch; I think it outlines the argument reasonably precisely, but many readers may find that there are substantial inferential leaps. Questions in the comments are strongly encouraged.
Any serious attempt at (b) will first have to unwind the disinformation that claims that the thing we have now is capitalism, or remotely efficient.
The short version of the project: learning to talk honestly within a small group about how power works, both systemically and as it applies to us, without trying to hold onto information asymmetries. (There's a pervasive temptation to withhold political information as part of a zero-sum privilege game, like Plato's philosopher-kings.)
Some background: post-WWII elite institutions (e.g. large corporations) are competitive to enter, but not under performance pressure, because of US government policy. This strongly selects for zero-sum games, which mimic but wreck discourse. (See Moral Mazes for more, especially the case studies that make up most of the book, starting around chapter 3.)
This creates opportunity in two ways.
First, institutions are mostly too stupid to model their environment beyond the zero-sum games they specialize in, so a small group that's able to maintain information hygiene and not turn on each other should be able to take & hold territory. "And not turn on each other" turns out to be really hard, because all our role models and intuitions for how to survive in this world involve doing that all the time. But we're learning!
(A mundane example of a decisive advantage due to information hygiene: Paul Graham writes about how his startup did better because it used an elegant programming language. That's only information hygiene on the purely technical level, but that was enough to outmaneuver huge corporations with a strong perceived incentive to ruin them, for quite a while. For a less mundane example, the story of how Elisha outmaneuvered multiple ruling dynasties is a personal favorite - 2 Kings 5-10. The narrative distorts the "miracles" a bit but it's not hard to reconstruct how he actually did it.)
Second, because most supposed productive activity is done in the context of huge stable corporations, people are trying to maximize the number of jobs and complexity per unit of output. This implies that many things can be done much more easily.
So that implies that if we can have good enough information hygiene and group cohesion not to fall victim to the perverse impulse to engage in the kind of make-work or artificial scarcity that creates much of cost disease, we can learn how to build a nearly full-stack civilization in a small city-state. Obviously there are many steps between here and there, but since lots of them involve getting collectively smarter, a detailed plan would be inappropriate.
What does good information hygiene and group cohesion look like? The game Werewolf is a good example. Players are secretly assigned the identity of Villager (initially the majority) or Werewolf (minority). Each round, all players publicly vote one player out, and the Werewolves secretly eliminate one as well. There are other details that allow Villagers to make some inferences about who the Werewolves are. But they have to play the first few rounds right or they lose.
Optimal play for Werewolves involves (a) targeting for exclusion whichever Villagers are most helpful to public deliberation, and (b) during public deliberation, being as unhelpful as they can get away with while appearing to try to help at other times. I realized a lot of things about how social skills feel from the inside when I finally figured out how to play correctly as a Werewolf.
Optimal play for Villagers involves creating as much clarity as possible, as soon as possible, and being willing to assume that people who seem to be foolishly gumming up the works are Werewolves if there's no other clear target.
With optimal play, Villagers usually win, but in practice, at best one or two people try to create clarity and are picked off in the first round by the Werewolves. The other Villagers are resigned to trying to die last, so they lose.
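(As a rough illustration of how much early clarity matters, here is a minimal toy simulation; it is not from the original post, and the player counts, the single "clarity" parameter, and the otherwise uniform-random behavior are all simplifying assumptions.)

```python
# Toy Werewolf simulation: 10 players, 2 Werewolves (assumed numbers).
# "clarity" is the probability that the day vote correctly removes a Werewolf;
# everything else is uniform random.
import random

def play_game(n_players=10, n_wolves=2, clarity=0.5):
    wolves = set(random.sample(range(n_players), n_wolves))
    alive = set(range(n_players))
    while True:
        # Day: the village votes one player out.
        if (alive & wolves) and random.random() < clarity:
            alive.remove(random.choice(sorted(alive & wolves)))   # clarity finds a Werewolf
        else:
            alive.remove(random.choice(sorted(alive - wolves)))   # a Villager is scapegoated
        if not alive & wolves:
            return "villagers"
        # Night: the Werewolves secretly remove a Villager.
        alive.remove(random.choice(sorted(alive - wolves)))
        if len(alive & wolves) >= len(alive - wolves):
            return "werewolves"

def villager_win_rate(clarity, trials=10000):
    return sum(play_game(clarity=clarity) == "villagers" for _ in range(trials)) / trials

for c in (0.2, 0.5, 0.8):
    print(f"clarity={c}: villager win rate ~ {villager_win_rate(c):.2f}")
```

With these toy numbers, Villagers win most games when clarity is high and lose most of them when it is low, which is the gap between optimal and typical play described above.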
The thing I said about elite culture favoring zero-sum games can be recast as: the social environment favors playing Werewolf over playing Villager. In case it's not obvious, optimal real-world play for Villagers can often involve leaving the Werewolves alone. In real life there are better things to do than murder your enemies, like hang out. Villagers just need to defend themselves if and when they're actually threatened.
We're trying to learn how to play the Villager strategy successfully, in a context where we've mostly been acculturated to play as Werewolves, especially among elites. This has to involve figuring out how to do interpersonal fault analysis (identify when people are being Werewolfy) without scapegoating (assuming that fault -> blame -> exclusion).
In other words, justice seeks truth, but intends to leave no one behind; people who can't contribute need to feel safe admitting that, and people who hurt the group need the option to repent & heal the breach.
We don't have great finesse yet, but optimal play in our world seems to be some fluid integration of talking about politics, healing personal trauma, and intersubjective openness.
Havel's The Power of the Powerless describes a similar (but less self-aware) strategy, which he calls "dissidence." He (accurately, I think) predicts that the situation in Capitalist countries will be more difficult than in Communist ones, because Capitalist ideology is more persuasive, being more plausibly true.
40 comments
Comments sorted by top scores.
comment by Raemon · 2019-05-07T19:41:13.901Z · LW(p) · GW(p)
My best guess is that the sort of strategy hinted at here can scale up to a hundred people, but not much farther. It's relatively common wisdom that a small startup can outperform a big company by being more nimble and coordinated. But, small startups still eventually (often) scale up into big companies – for the simple reason that there's still a limit to the power and profitability a small startup can achieve.
This brings to mind a somewhat different lens into the institutional failure modes here, that a friend described to me recently. (I'm not 100% sure I grok or agree with this view, but will try to explain my version of it here)
tl;dr: Good Organizations scale to the point where competent people are spread thin juuuuust to the point where werewolves can rise in power, without affecting the innermost circle of power.
Organizations that scale less than that are outcompeted by those that do. Organizations that scale more than that self destruct.
...
Competent People are Rare and the World is Big
The post How to Use Bureaucracies [LW · GW] notes:
A bureaucracy is an automated system of people created to accomplish a goal. It’s a mech suit composed of people. The owner of a bureaucracy, if an owner exists, is the person who can effectively shape the bureaucracy. Bureaucrats are people who are part of a bureaucracy (excluding the owner).
Not all organizations are bureaucracies. Most organizations are mixed — they have both bureaucratic and non-bureaucratic elements.
The Purpose of Bureaucracies
The purpose of a bureaucracy is to save the time of a competent person. Put another way: to save time, some competent people will create a system that is meant to do exactly what they want — nothing more and nothing less. In particular, it’s necessary to create a bureaucracy when you are both (a) trying to do something that you do not have the capacity to do on your own, and (b) unable to find a competent, aligned person to handle the project for you. Bureaucracies ameliorate the problem of talent and alignment scarcity.
I think this can often generalize somewhat to non-bureaucracy institutions – a competent person (say, the founder(s) of a startup) will eventually want to scale their org so it can accomplish more things. One of the ways to scale it involves lots of bureaucracy. Others involve distributed teams that mostly pursue their own agenda while contributing to a common good, sometimes through non-bureaucratic hierarchy, where there's a series of managers: at each rung of the hierarchy, managers ostensibly coordinate on high-level projects, but the environment that allows promotion within the hierarchy creates incentives for werewolves.
Scaling the company inevitably produces wasted motion, in some form. And you might look at most of the stuff happening, and think "man, this is so dystopian and sad. Can't we just coordinate better?" and the answer is yes, you can, but not at this scale.
Somewhere at the top of the company is a small team of founders, who (maybe) actually trust each other, have good plans, preserve absolute power, and carefully vet the people they allow into the innermost circle. They have worldscale goals, so they have scaled the company to function on the worldscale. They know that this means, underneath them, Game-of-Thrones dynamics and tons of wasted bureaucratic motion. But this is a necessary byproduct of extending their own power.
Biological metaphor
In biological evolution, there's some reason to think humans are the stupidest example of a smart-person-architecture you could come up with – as soon as evolution produced something smart that could reasonably scale, it straightforwardly scaled it without spending a lot of time fine-tuning the architecture.
When humans have goals they can't accomplish on their own, they build orgs. When the orgs aren't big enough, they expand the orgs. A lot of human goals are "be able to compete on the worldscale," where at least some people are competing infinitely – they want to be the best, most profitable company or country. Because at least some orgs have unbounded goals, even if your goal is bounded, if it impacts the worldscale you must contend with cancerous, unbounded agents.
This means every competitive org must be the largest, most rickety version of itself that can reasonably function.
If there were an org that figured out how to be less rickety, with less wasted motion... it would spend the free energy on expansion, gaining more resources and turning those resources into more actions.
Kingdom metaphor
If you're a king with 5 districts, and you have 20 competent managers who trust each other... one thing you can do is assign 4 competent managers to each fortress, to ensure the fortress has redundancy and resilience and to handle all of its business without any backstabbing or relying on inflexible bureaucracies. But another thing you can do is send 10 (or 15!) of the managers to conquer and reign over *another* 5 (or 15!) districts.
Each district produces grain, and more population. At least some of the population is... at least a mediocre or moderately good manager. A given district performs better if it has a competent manager running the fields, the schools, the guilds, and the military. But the kingdom as a whole performs better if each district has one competent manager, and relies on mediocre managers for each of its subtasks.
Mediocre managers are not skilled enough to use very good game theory – they're just skilled enough to add friction to the game of thrones, to keep it sluggish and predictable enough that the Competent Managers can see what's coming and step in when it's about to get out of hand.
i.e. werewolves get to flourish up till the point where they could affect the Inner Council of the King. By then, they have revealed themselves, and the Inner Council of the King beheads them.
This is bad if you're one of the millions of people who live in the kingdom, who have to contend with werewolves.
It's an acceptable price to pay if you're actually the king. Because if you didn't pay the price, you'd be outcompeted by an empire who did. And meanwhile it doesn't actually really affect your plans that much.
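(A quick numerical sketch of that tradeoff, with made-up numbers rather than anything argued for above: it assumes diminishing returns to extra competent managers within a district, plus a per-district risk of being quietly captured by werewolves that falls with more oversight.)

```python
# Toy model of expansion vs. consolidation; every number here is an
# illustrative assumption, not a claim from the comment above.
def expected_output(total_managers=20, per_district=1,
                    marginal_return=0.3, capture_risk=0.5):
    districts = total_managers // per_district
    # Output of one district, with diminishing returns to extra competent managers.
    district_output = 1.0 + marginal_return * (per_district - 1)
    # Probability a district avoids being quietly captured by werewolves.
    survival = 1.0 - capture_risk ** per_district
    return districts * district_output * survival

for k in (1, 2, 4):
    print(f"{k} competent manager(s) per district: expected output ~ {expected_output(per_district=k):.1f}")
```

Under these particular made-up numbers, spreading competent managers one per district narrowly beats consolidating them; with a higher capture risk or larger returns to redundancy, the answer flips.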
Ramifications / Takeaways
This is all described in oversimplified and idealized terms. I think in the real world, there often isn't an inner circle of competent people who detect werewolves before it's too late. And I do think you can outcompete other orgs by actually having an inner circle of people with high trust. And figuring out tools to expand that circle and scale their coordination is quite an important goal – I have reason to think there's low (or mid) hanging fruit here.
But I do currently think that at the end of the day, this doesn't result in a world that looks all that different. Behind the veil of anthropics, you should mostly expect to be located in one of the kingdoms sacrificed to the werewolves. And as you explore different kingdoms and hierarchies, you should mostly find more of the same. And this would be the case even in a world where the leaders of orgs are mostly good people (but at least some have opposing unbounded goals)
↑ comment by Raemon · 2019-05-07T19:49:58.656Z · LW(p) · GW(p)
(in terms of ameliorating missing moods, I want to clarify that while I roughly believe the above, I think it's a sub-point of the single saddest and scariest thing in the entire multiverse, and I have a post upcoming about how awful it makes me feel)
↑ comment by Benquo · 2019-05-08T14:20:53.503Z · LW(p) · GW(p)
I think I have less despair than you because some of what we see around us that you're interpreting as the universal way of things, I'm interpreting as the result of specific policy decisions within the current global imperium, which are different from the way things have been at other times and places. In particular, the Post-WWII corporate environment is one in which scale doesn't just allow organizations to field large armies (which still need tactical intelligence, effective command structures, mobilization capacity, and competently administered logistics, all of which are corroded by too much of the sort of internal decay Werewolfism is one variety of). It also lets them leverage a superstructure of state authority to defend themselves against potential rivals via regulatory capture.
↑ comment by Raemon · 2019-05-08T18:52:27.802Z · LW(p) · GW(p)
Hmm. I would be interested in a sketch of a situation which you think would output a different strategic equilibrium.
It seems to me that "competent people are rare, and the world is big" is a very general problem that applies across most domains.
(Your other comment about how the Jewish people were able to survive as a culture does point to some very interesting things, but I'm not sure they resolve the things I'm most worried about. It's good enough to create pockets of goodness, but the actual problem is that Hell Exists and Must Be Destroyed and you actually need to be powerful enough to destroy hell. A longterm strategy only works if it eventually allows you to grow faster than hell)
↑ comment by Benquo · 2019-05-08T19:37:24.732Z · LW(p) · GW(p)
I would be interested in a sketch of a situation which you think would output a different strategic equilibrium.
I don't think I fully understand what you're asking, but here's a partial answer.
I think that the US North before the Civil War but decreasingly between then and WWII had a different equilibrium, in which people could viably just be landholders who improved their lot through, well, improving their lot. The Puritan states especially tended towards this. Merchants and artisans were also a thing, but a single whaling expedition was kind of a big venture compared with most things, and most business didn't occur at scale. The rise of major government expenditures (direct wartime expenses and indirect mobilization-related ones like railroads) changed that, as did the central controls put in place between the World Wars to manage the disruptions this caused, which naturally were designed to accommodate powerful incumbent stakeholders.
If you read accounts of economic life from that time, it really doesn't look like "competent people are rare, and the world is big," it looks like "ultra-competent people are gold, most people are good enough at something to get by, there are lots of different types of people with their own weirdness going on, and the world is very messy and diverse." Moby Dick definitely gives me this sense.
↑ comment by Dagon · 2019-05-08T20:20:27.414Z · LW(p) · GW(p)
I'm very torn. I fully agree with Raemon's concerns, and might even go further: competent people are rare, and fully goal-aligned people are nonexistent. Looking at accounts from previous times is an existence proof of different equilibria, but does not imply that they're available today.
And if you look closer, those previous equilibria were missing some features that we hold dear today, such as fairly long periods at the beginning and end of life where economic production isn't a driving need, some amount of respect for people very different from ourselves, and a knowledge that the current equilibrium isn't permanent.
The part I'm torn on is that I deeply support experimenting and thinking on these topics, and I very much hope that my predictions are incorrect. This is a case where investing mental energy on a low-probability high-payoff topic seems justified.
↑ comment by Benquo · 2019-05-08T19:41:17.242Z · LW(p) · GW(p)
More generally there have been lots of times and places where some people have been playing the game of scale-to-maximum, and people not playing that game have often had to adjust for its existence, but can often do quite a lot anyway, including weird bank-shot projects like Christianity and Buddhism that end up scaling a bunch, across borders, without having any sort of army or centralized infrastructure for quite a while. Things that don't scale can engineer things that do. Obviously none of the past attempts have been good enough to permanently solve the problem, but you can look at how much deliberation went into them vs level of alignment and impact. To me, it looks like a Dunbar group that is modeling this situation explicitly has a pretty decent chance of building something much better, which in turn should improve the rate at which we get chances to try things.
↑ comment by Benquo · 2019-05-08T13:38:37.697Z · LW(p) · GW(p)
I agree very strongly that the strategy requires some sort of qualitative change when it has to scale beyond a Dunbar / hunter-gatherer-band sized group. I expect that group to have a much easier time solving that problem than we currently do, though, and I'd consider it fantastically good news to know for certain that the current things we're trying resulted in the formation of such a group, either collocated in space or with some reasonable information technology substitute, with a means of support that provides enough leisure to think about the problem part-time for a year together.
↑ comment by Pattern · 2019-05-12T23:39:39.795Z · LW(p) · GW(p)
One of the issues that was brought up in a post about organizations is life/having a good leader. What makes this tricky is succession (if the organization outlasts the leader), and a culture where 'disrupting industry' is seen as good is one where it is recognized that succession isn't being handled well. So maybe there not being an anti-werewolf circle is a result of failed succession?
↑ comment by jessicata (jessica.liu.taylor) · 2019-05-07T20:48:36.102Z · LW(p) · GW(p)
I agree with the general picture that scaling organizations results in wasted motion due to internal competitive dynamics. Some clarifications:
Because at least some orgs have unbounded goals, even if your goal is bounded, if it impacts the worldscale you must contend with cancerous, unbounded agents.
This means every competitive org must be the largest, most rickety version of itself that can reasonably function.
This assumes orgs with bounded goals that have big leads can't use their leads to suppress competition, by e.g. implementing law or coordinating with each other. Monopolies/oligopolies are common, as are governments. A critical function of law is to suppress illegitimate power grabs.
werewolves get to flourish up till the point where they could affect the Inner Council of the King. By then, they have revealed themselves, and the Inner Council of the King beheads them.
This assumes that implementing law/bureaucracy internally at lower levels than the inner council is insufficient for detecting effective werewolf behavior. Certainly, it's harder, but it doesn't follow that it isn't possible.
Behind the veil of anthropics, you should mostly expect to be located in one of the kingdoms sacrificed to the werewolves.
This is like saying you should anthropically expect to already have cancer. Kingdoms sacrificed to the werewolves have lower capacity and can collapse or be conquered.
↑ comment by Raemon · 2019-05-07T21:10:56.308Z · LW(p) · GW(p)
This assumes that implementing law/bureaucracy internally at lower levels than the inner council is insufficient for detecting effective werewolf behavior. Certainly, it's harder, but it doesn't follow that it isn't possible.
My core thesis here is that if you have lower-level managers who are as competent at detecting werewolves, you will be more powerful if you instead promote those people to a higher level, so that you can expand and gain more territory. You can choose not to do that, but then another kingdom which does promote its competent mid-level agents to expansionary goals will overtake you and probably swallow you.
I do think that the eventual longterm victory for the good guys looks something like "an inner council with enough global power, who won't fall to whatever forces eventually seem to kill empires by default, who then have so much free energy they can slowly improve the quality of their mid-level systems."
But you don't actually get to succeed fully at that until you are actually in a position to win. (And winning is [at the very least] on the universal scale, if not the multiverse scale, so you probably don't get to win in a domain that humans feel comfortable in).
[acknowledging that that paragraph has loads of complex unstated assumptions, which I think are beyond the scope of the thread for me to argue for. I have some plans to write up a post about my current understanding and confusions about the Long Game]
That said, because the world is a mixed game, and we currently live in the Dream Time, and some people just actually want to be mid-level managers despite it being strategically better for them to help conquer new territory, and some people have skills to detect werewolves but lack other skills, which makes them inappropriate to promote, we do have a situation where "utopia has started to arrive early, but is unevenly distributed."
This sort of maps to "you can do it, but it's hard." But actually, it's only possible-and-strategically-advisable if you have managers who are limited in their growth potential (i.e. people who for some reason you can't use for expansionary purposes). The good guys would win more quickly if their mid-level-people-who-are-perfectly-good-at-werewolf-spotting also gained other skills that allowed them to expand.
(You obviously need at least some werewolf-spotting skills at the mid levels; it's just that it's not strategically advisable to put your best people there.)
↑ comment by Raemon · 2019-05-07T21:31:42.174Z · LW(p) · GW(p)
I should note that I'm not very confident about the strategy here. (I'm speaking in the stereotypical more-confident-than-I-have-right-to-be rationalist voice that gets criticized sometimes. But I'm, like, less than 50% confident of any of my claims here. This whole model has the plurality, but not the majority, of my confidence)
I think there are probably ways to build a pocket of power and trustworthiness, which does get absorbed by powerful empires that rise and fall around it, but doesn't lose its soul in the absorption process. Rather than try and compete with empires on their own terms, make sure you either look uninteresting or illegible to empires, or build good relationships with them so you get to keep surviving. (Hmm. The illegible strategy maps to the Roma, the good-relationship strategy maps to the Amish?)
Then, slowly expand. Optimize for lasting longer than empires at the expense of power. Maybe you incrementally gain illegible power and eventually get to win on the global scale. I think this would work fine if you don't have important time-sensitive goals on the global scale.
↑ comment by Benquo · 2019-05-08T13:50:11.748Z · LW(p) · GW(p)
I think there are probably ways to build a pocket of power and trustworthiness, which does get absorbed by powerful empires that rise and fall around it, but doesn't lose its soul in the absorption process. Rather than try and compete with empires on their own terms, make sure you either look uninteresting or illegible to empires, or build good relationships with them so you get to keep surviving. (Hmm. The illegible strategy maps to the Roma, the good-relationship strategy maps to the Amish?)
I'll add Israelites / Jews to the list here. Not the same kinds of good relations as the Amish - we're more of a target - but we seem to be able to survive in many more, varied political environments, and with a different kind of long-run ambition - and we've been around for longer.
Getting conquered and exiled by the Babylonians precipitated a rapid change in strategies from territorial integrity around a central physical cultic site, to memetic integrity oriented around a set of texts. This hasty transition worked well enough that when the Persians conquered the Babylonian empire, Jews were able to play court politics well enough to (a) avoid getting murdered for retaining a group identity, (b) return to their original territory, and (c) get permission to rebuild their physical cultic site, all without having to fight at a scale that could take on the empire.
By the time of the Roman exile, the portable cultural tech had been improved enough to sustain something recognizable for multiple millennia, though the pace of progress also slowed by quite a lot.
There was also a prior transition from having almost exclusively nonhierarchical distributed governance, to having a king, also partially in response to external pressure.
↑ comment by jessicata (jessica.liu.taylor) · 2019-05-07T22:00:37.038Z · LW(p) · GW(p)
I agree with the strategy in this comment, for some notions of "absorbed"; being absorbed territorially or economically might be fine, but being absorbed culturally/intellectually probably isn't. Illegibility and good relationships seem like the most useful approaches.
↑ comment by Raemon · 2019-05-07T22:10:32.256Z · LW(p) · GW(p)
Nod. In modern times, I'd consider a relevant strategy is "how to get your company purchased by a larger company in a way that lets your company mostly get to keep doing what it's doing."
↑ comment by habryka (habryka4) · 2019-05-07T22:21:27.787Z · LW(p) · GW(p)
Example: Deepmind?
↑ comment by ryan_b · 2019-05-17T15:03:00.600Z · LW(p) · GW(p)
Then, slowly expand. Optimize for lasting longer than empires at the expense of power. Maybe you incrementally gain illegible power and eventually get to win on the global scale. I think this would work fine if you don't have important time-sensitive goals on the global scale.
I have a stub post about this in drafts, but the sources are directly relevant to this section and talk about underlying mechanisms, so I'll reproduce it here:
~~~
The blog post is: Francisco Franco, Robust Action, and the Power of Non-Commitment
The paper is: Robust Action and the Rise of the Medici
- Accumulation of power, and longevity in power, are largely a matter of keeping options open
- In order to keep options as open as possible, commit to as few explicit goals as possible
- This conflicts with our goal-orientation
- Sacrifice longevity in exchange for explicit goal achievement: be expendable
- Longevity is therefore only a condition of accumulation - survive long enough to be able to strike, and then strike
- Explicit goal achievement does not inherently conflict with robust action or multivocality, but probably does put even more onus on calculating the goal well beforehand
~~~
Robust action and multivocality are sociological terms. In a nutshell, the former means 'actions which are very difficult to interfere with' and the latter means 'communication which can be interpreted different ways by different audiences'. Also, it's a pretty good paper in its own right.
↑ comment by jessicata (jessica.liu.taylor) · 2019-05-07T21:36:22.328Z · LW(p) · GW(p)
My core thesis here is that if you have lower-level managers who are as competent at detecting werewolves, you will be more powerful if you instead promote those people to a higher level, so that you can expand and gain more territory.
Either it's possible to produce people/systems that detect werewolves at scale, or it isn't. If it isn't, we have problems. If it is, you have a choice of how many of these people to use as lower-level managers versus how many to use for expansion. It definitely isn't the case that you should use all of them for expansion, otherwise your existing territories become less useful/productive, and you lose control of them. The most competitive empire will create werewolf detectors at scale and use them for lower management in addition to expansion.
Part of my thesis is that, if you live in a civilization dominated by werewolves and you're the first to implement anti-werewolf systems, you get a big lead, and you don't have to worry about direct competitors (who also have anti-werewolf systems but who want to expand indefinitely/unsustainably) for a while; by then, you have a large lead.
↑ comment by Raemon · 2019-05-07T21:47:50.409Z · LW(p) · GW(p)
My current guess is that many organizations already have anti-werewolf systems, but it's a continuous, anti-inductive process, and everyone is deploying their most powerful anti-werewolf social tech at the highest levels (which consist of a small number of people). So the typical observer notices "jeez, there are lots of werewolves around, why isn't anyone doing anything about this?" but actually yes, people are doing that, and they're keeping their anti-werewolf tech illegible so that it's hard for werewolves to adapt to it. This just isn't much benefit to the average person.
I also assume anti-werewolf tech is only one of many things you need to succeed. If you were to develop a dramatic advance in anti-werewolf tech, it'd give you an edge, if you are also competent at other things. And the problem is that most of the things you need are anti-inductive – at least the anti-werewolf tech and having a core product that is better than the competition. And many other good business practices are, if not anti-inductive, at least quite hard.
If it were possible to develop anti-werewolf tech that is robust to being fully public about how it works, I agree that'd be a huge advance. I'm personally skeptical that this will ever work as a silver bullet* though.
*(lol at accidental werewolf metaphor)
To be clear, I'm very glad you're working on anti-werewolf tech, I think it's one of the necessary things to have good guys working on, I just don't expect it to translate into decisive strategic advantage.
↑ comment by Raemon · 2019-05-07T21:48:51.185Z · LW(p) · GW(p)
Or to put another way:
Either it's possible to produce people/systems that detect werewolves at scale, or it isn't. If it isn't, we have problems.
My prediction is that "we have problems", and that the solutions will necessarily involve dealing with those problems for a very long time, the hard way.
(I'd also reword to "it's either possible to produce werewolf detection at a scale and reliability that outpaces werewolf evolution, or it isn't." Which I think maps pretty cleanly to medicine – we discovered antibiotics, which were real powerful for a while, but eventually run the risk of stopping working)
↑ comment by jessicata (jessica.liu.taylor) · 2019-05-07T21:57:53.187Z · LW(p) · GW(p)
To be clear, I’m very glad you’re working on anti-werewolf tech, I think it’s one of the necessary things to have good guys working on, I just don’t expect it to translate into decisive strategic advantage.
I agree, it's necessary to reach at least the standard of mediocrity on other aspects of e.g. running a business, and often higher standards than that. My belief isn't that anti-werewolf tech immediately causes you to win, so much as that it expands your computational ability to the point where you are in a much better position to compute and implement the path to victory, which itself has many object-level parts to it, and requires adjustment over time.
↑ comment by Raemon · 2019-05-07T22:09:50.054Z · LW(p) · GW(p)
Nod. I think we may disagree upon some of the nuances of the ways reality-is-likely-to-turn out, but are in rough agreement on the broad strokes (and in any case I think I've run out of cached beliefs that are relevant to the conversation, and don't expect to make progress in the immediate future on figuring out the details of those nuances).
↑ comment by Raemon · 2019-05-07T21:02:17.270Z · LW(p) · GW(p)
Upvoted for the motion of checking-for-assumptions, although I think my assumptions are a bit different.
This assumes orgs with bounded goals that have big leads can't use their leads to suppress competition
My assumption is more like 'that's already part of the game and factored into the calculus'
I think, when examining global scale actors, most of the actors are there because they have runaway, goodharty goals. Romeostevens summarizes "why do people become ambitious [LW(p) · GW(p)]?" as:
redirection of sex or survival drives get caught up in some sort of stable configuration where they can never be satisfied yet the person doesn't notice that aspect of the loop and thus keeps Doing the Thing far past the time normal people notice.
While there are also people with bounded goals on the global scale, the runaway unbounded-goal-actors got there first, and are already using their leads to squash competition. So while a bounded actor can do the same, they still have to behave as if they were an unbounded actor in order to get the chance.
I notice that empires, despite their lead, keep declining and falling. This implies that something weird is going on at the global scale. My guess is that empires rise when they have a good inner circle, but that inner circle is eventually lost to entropy – either the King dies and the successor king is slightly less good, or the king dies and there isn't even a successor king to consider.
This does imply that final victory for the "good guys" has to look like:
1. build an empire that can compete with unbounded powerhungry agents.
2. build a succession tool with no loss to entropy, where the Inner Council always consists of trustworthy people.
3. repeat for literally eternity.
And this actually seems possible... but important to note that success here looks very weird to naive human values, because you don't get to cash in your victory chips until you are perfectly safe.
↑ comment by Davidmanheim · 2019-11-20T19:57:06.054Z · LW(p) · GW(p)
On your points about scaling, I mostly agree, but want to note that there are fundamental issues with scaling that I explained in a post here: https://www.ribbonfarm.com/2016/03/17/go-corporate-or-go-home/
The post is rather long. In short, however, I don't think that your Kingdom metaphor works, because large bureaucracies are big *not* because they have many mini-kingdoms doing similar things in parallel, but because they need to specialize and allow cross-functional collaboration, which requires lots of management.
↑ comment by Raemon · 2019-11-20T21:19:08.758Z · LW(p) · GW(p)
So I've updated a bit about the kingdom metaphor, but I don't think that particular critique feels right to me – competent people are rare, and this is just as important whether you need someone to oversee a new department (e.g. your startup now needs to hire an actual legal team and you need someone you can trust to make sure they're aligned with you, i.e. specialization), or if you're opening a new store in another city (parallelization). Either way, alignment/competence are limited and bottlenecking.
↑ comment by Davidmanheim · 2019-11-24T09:57:45.294Z · LW(p) · GW(p)
My claim is that *competence* isn't the critical limiting factor in most cases because structure doesn't usually allow decoupling, not that it's not limited. When it IS the limiting factor, I agree with you, but it rarely is. And I think alignment is a different argument.
In EA orgs, alignment can solve the delegation-without-management problem because it can mitigate principal-agent issues. Once we agree on goals, we're working towards them, and we can do so in parallel and coordinate only when needed. In most orgs, alignment cannot accomplish this, because it's hard to get people to personally buy into your goals when those goals are profit maximization for a company. (Instead, you use incentive structures like bonuses to align them. But then you need to monitor them, etc.)
↑ comment by Raemon · 2019-05-07T19:54:17.328Z · LW(p) · GW(p)
This might actually bear on why YCombinator suggests startups stay small as long as they can. My impression is there's basically two phases of play – the stage where you're building up your high trust network of competent people, and the stage where you scale.
Scale too quickly, and you easily risk not maintaining the networks that keep the Game of Thrones in check, and then the king finds himself beheaded because the werewolves were too competent and his network spread too thin. (In the fictional '20 competent advisors, 5 districts' example, probably the best thing to do is aim for 2 competent people per district rather than 1. A Kingdom that naively scales from 5 to 20 districts will have a shortterm advantage but probably stretch itself too thin and fail in the long game.)
↑ comment by Davidmanheim · 2019-11-24T10:03:04.624Z · LW(p) · GW(p)
There is also overhead to scaling and difficulty aligning goals that they want to avoid. (As above, I think my Ribbonfarm post makes this clear.) Once you get bigger, the only way to ensure alignment is to monitor - trust, but verify. And verification is a large part of why management is so costly - it takes time away from actually doing work, it is pure overhead for the manager, and even then, it's not foolproof.
When you're small, on the other hand, high-trust is almost unnecessary, because the entire org is legible, and you can see that everyone is (or isn't) buying in to the goals. In typical startups, they are also well aligned because they all have similar levels of payoff if things go really well.
comment by romeostevensit · 2019-05-08T20:38:47.567Z · LW(p) · GW(p)
I think the hard reification of villagers and werewolves winds up stopping curiosity at the wrong places in the abstraction stack. Seeing agents as following mixed strategies determined by local incentives which tend to be set by super-cooperators and super-defectors seems better to me. It's also a much more tractable problem and matches what I see on the ground in orgs.
↑ comment by Benquo · 2019-05-08T22:02:14.433Z · LW(p) · GW(p)
Incentives matter, but trauma matters too. And learning what it's like to "play Werewolf" or "play Villager" on purpose in a stereotyped environment with divided roles is helpful for learning the discernment to notice these processes, which are often quite subtle.
comment by ryan_b · 2019-05-09T17:35:16.125Z · LW(p) · GW(p)
Not directly relevant to this post, but following through the what social skills feel like from the inside link:
He responded that she should really be persuaded by what he'd already done – that she should do things his way rather than the other way around because he has better social skills.
I'm not fully aware of the context, but in every one I have experience of this is considered a hideous faux pas. I strongly expect that this fellow has the worse social skills of the two of them.
This is an example of a rule that is pretty consistent across domains: if someone feels the need to state their status, we should infer that is not their real status. Few songs about being #1 are written by the people who are actually #1, any man who must say 'I am the King' is no true king, etc. Consider how ridiculous it would sound if in programming someone were to say "This is not a bug and we shouldn't change it because I am a better programmer."
↑ comment by Benquo · 2019-05-09T17:40:11.256Z · LW(p) · GW(p)
This may be true in the particular case mentioned - I think you only get that sort of maladaptive level of transparency from people to whom the paradigm doesn't feel native and they have to consciously learn it. (Similarly, part of why the case of GiveWell [LW · GW] is so valuable is that GiveWell doesn't lie or bullshit about what it's doing to the extent that more conventional orgs do - its commitment to transparency is in some tension with its actual strategy, but executed well enough that it tells the truth about some of the adversarial games it's playing.)
But there's a transformation of "I have more social skills, so you should do what I say" that does work, where multiple people within a group will coordinate to invalidate bids for clarity as socially unskilled. This tends to work to silence people when it accumulates enough social proof. To go to the king example, a lot of royal pomp is directly about creating common knowledge that one person is the king and everyone else accepts this fact.
↑ comment by Dagon · 2019-05-09T19:10:29.439Z · LW(p) · GW(p)
The coordination case is not directly comparable to the direct claim of authority. Getting many people to perform the ceremonies and to publicly proclaim that one is king is direct evidence of deep influence over at least those people. Claiming to be king is unnecessary if there is already such evidence, and ineffective if there is not.
Multiple people within a group saying it is not a transformation of the claim; it's direct evidence for the claim.
↑ comment by Benquo · 2019-05-09T19:54:46.568Z · LW(p) · GW(p)
Claiming to be king is unnecessary if there is already such evidence, and ineffective if there is not.
Actual kings thought otherwise strongly enough to have others who claimed to be king of their realm killed if at all possible. Repetition of royal pomp within a king's lifetime implies that claiming to be king is not redundant for an already-acknowledged king either. Often there were annual or even more frequent reaffirmations, plus constant reinforcement via local protocol among whoever was physically near the king.
↑ comment by Dagon · 2019-05-09T22:24:49.400Z · LW(p) · GW(p)
Sorry, I was making a distinction between a lone individual making a claim ("I'm the king, listen to me" or "I have social skills, listen to me") vs enough OTHER people making the claim ("he's the king, listen to him") to give evidence that it's already accepted. The first is useless, the second is powerful enough to obviate the first.
↑ comment by ryan_b · 2019-05-17T14:36:17.895Z · LW(p) · GW(p)
Actual kings thought otherwise strongly enough to have others who claimed to be king of their realm killed if at all possible.
My model for this: other claims to being king say nothing about the claimant, but send signals about the current king they need to quash.
1. There was always a population of people who are opposed to the king, or think they could get a better deal from a different one. This makes any other person who claims to be king a Schelling Point for the current king's enemies, foreign and domestic. Consider Mary, Queen of Scots and Elizabeth, where Mary garnered support from domestic Catholics, and also the French.
2. In light of 1, making a public claim to the throne implicitly claims that the current monarch is too weak to hold the throne. I expect this to be a problem because the weaker the monarch seems, the safer gambling on a new one seems, and so more people who are purely opportunistic are willing to throw in their lot with the monarch's enemies.
comment by Dagon · 2019-05-07T16:16:51.990Z · LW(p) · GW(p)
Also, one of the keys to winning as a villager is to recognize that there ARE werewolves, and they will interfere in two ways: they'll gum up your communication and trust, and they will kill you if you are the biggest threat to them.
That said, I'm not sure the game metaphor applies well to real interactions. In reality, everyone is simultaneously a villager and a werewolf (a value producer and a competitor for resources/attention), and we shift between the roles pretty fluidly on different topics in different situations. And there are very real differences in underlying personality and capability dimensions, so there's always the unknown distinction between "confused villager" and "werewolf". Combined with the fact that at least some scarcity is not artificial (and even artificial scarcity is a constraint that can last for many years), I have to admit I'm not super hopeful of a solution.
And now that that's said, I look forward to hearing about the experiments - finding ways to cooperate in the face of scarcity and distinct individual wants is the single most difficult problem in the future of intelligence and flourishing that I know of. I'll gladly participate if it'll help, but I highly recommend you start in small groups with more than just internet contact among them. (note: I'm not sure how strongly I recommend this - the scaling problem is real, and what works for a small group with only a few dozen potential coalitions may be near-irrelevant to larger structures with trillions of edges in the graph).
↑ comment by Benquo · 2019-05-08T13:41:42.798Z · LW(p) · GW(p)
I strongly agree that in real life people aren't assigned a "Villager" or "Werewolf" identity, rather they respond to perceived incentives and acculturation to sometimes present a character that implements Werewolf tactics, and other times they do things more characteristic of a Villager. My sense is that the social component of the problem is so strong that creating a pure-Villager feedback loop is going to be much more effective than trying to diffusely increase the quantity of Villagerness in the world (Cf. "raising the sanity waterline"), even though the former has to ultimately pay off in the latter.