Create a Full Alternative Stack
post by Zvi · 2020-01-31T17:10:00.543Z · LW · GW · 14 comments
Last time I proposed nine strategies for fighting mazes. All of them were either systemic solutions requiring coordinated and/or political action, or cultural shifts that happen one person at a time.
Now for the tenth and final proposal for how to fight mazes: a strategy that one dedicated person with sufficient resources could implement on their own.
If you are in a position where you have the resources to implement this, please make an effort to take this proposal seriously. And please contact me if you are potentially interested and wish to discuss it further.
Solution 10: Create a Full Alternative Stack
In some ways this is the most ambitious solution here. It may seem Utopian.
In other ways, it is the least ambitious, and most practical. It could be implemented by a single sufficiently wealthy and committed individual or organization. All other known solutions can be implemented locally, and would help locally, but need general adoption to succeed in general.
The full alternative stack offers a contract.
Disengage entirely from mazes and traditional distortionary incentives, competitions and signals of all kinds, and discard all zero-sum activity, in favor of doing the thing. Whatever the thing in question may be. Make no compromises to make oneself legible or attractive to outside sources of funding. Tolerate no maze behaviors of any kind. Hire and fire with this deeply in mind.
In exchange, if you keep your end of the bargain, the stack will fully fund you and your operations, at fair prices that do not take advantage of the hold-up opportunity presented by your giving up of other opportunities. Evaluation will be fully on the object-level merits, and the generative processes involved.
This is a form of tenure for the people. If they continue to act with integrity and work to accomplish positive-sum things relevant to the stack’s interests, and spend responsibly, they and their family will have strong financial security.
Think of this as similar to tenure at a university, or to the system of universal employment for partisan hacks. If you are promising, the stack gives you the opportunity to prove yourself. Once you have proven yourself, we take care of you, even if you don’t bear as much fruit as we would like, provided you continue to play by the rules of the stack and honor its values. Unlike many tenured professors nowadays, we would not then force you to seek grants, outside investments, or other outside funding for your work. On the contrary, you would be agreeing not to seek outside funding, so as to protect your incentives from corruption.
This is also a form of secured financing for corporations and other organizations. While they need funding to reach maturity, they will be evaluated on whether they are succeeding at doing the thing. Traditional signals, and anticipation of future traditional signals, will be not only disregarded but punished – it’s fine to look good naturally, but if you are doing things in order to look good or successful to outsiders rather than be good or successful, then that breaks the compact.
We call this a full alternative stack because the ideal version is literally a full alternative stack. It recreates civilization. Those involved would not need or depend on outside goods or services. There would be a local area fully owned by and given over to the project.
That is the full version. The full version is ambitious and difficult, but likely far less ambitious and difficult, and far less expensive, than it appears. We would soon find out how much of current activity is rent extraction or otherwise unproductive, and how much is necessary to keep things running.
A lesser version, built around a particular cause or goal, or to give this freedom to select individuals and corporations, would still be extremely valuable.
The MacArthur grant is a template for what this looks like on a personal level, with a shift in focus from creativity to integrity, and a bump in compensation – $625,000 is a lot of money, but that money is designed to be seed money for an activity rather than financial security. Those getting a MacArthur grant still face the specter of future financial needs. One needs an order of magnitude more than that over a lifetime to be secure while not compromising one’s interactions with society.
For startup corporations, this can be similar to the standard method of funding a biotechnology company to pursue a new drug of unknown efficacy. Milestones are set. If they are met, funding is unlocked at pre-negotiated levels, locked in for both sides in advance. There is no reason to worry about signaling in other ways unless the company is about to fail. We would add the condition of not then selling out to a maze (in the biotech example, a big pharma company) or taking the company public when successful, instead keeping the operation privately owned by its founders to prevent it from being eaten and transformed or killed. Public markets exert strong pressure towards maze behaviors, so such companies would need to commit to staying away from them.
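To make the pre-negotiated structure concrete, here is a minimal sketch of such a milestone schedule (the milestone names and dollar amounts are hypothetical illustrations, not from the post): the entire funding path is agreed before work starts, so neither side can later renegotiate based on appearances.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Milestone:
    description: str
    tranche: int  # funding unlocked when this milestone is met, in dollars

# Agreed in advance and immutable: meeting a milestone unlocks its tranche,
# and no other signal affects funding either way.
SCHEDULE = [
    Milestone("Candidate compound identified", 2_000_000),
    Milestone("Preclinical safety data complete", 5_000_000),
    Milestone("Phase I trial fully enrolled", 12_000_000),
]

def funding_unlocked(milestones_met: int) -> int:
    """Total funding released after the first `milestones_met` milestones."""
    return sum(m.tranche for m in SCHEDULE[:milestones_met])
```

The design point is that `funding_unlocked` depends only on milestones met, never on how the company looks to outsiders in the meantime.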
I believe there is a strong opportunity for a venture capital fund that promises committed, full funding to projects in this way in fields outside biotechnology. Projects that are freed from having to gain strong negotiating positions regarding raising capital could be much better at pursuing actual success and production. To succeed, such a fund would need to honor its commitments carefully and be credible at every stage. This includes its commitments not to respond positively to things that would on the outside be viewed as good news, if they are not in fact relevantly good news. Its word would be its bond. It would also need to be highly skilled at choosing superior evaluation techniques. There are many terrible things about current systems of venture funding, but naive replacement models threaten to be easily gameable or otherwise create new and perhaps much worse versions of the same problems.
Most people nowadays are forced, both within an enterprise and overall in their lives, to structure and censor everything they do in light of their potential future need to look legible, comfortable and successful or valuable to mazes. The prospect of having this option cut off fills them with terror, whether or not this should be the case. Even when they do not fear it, those around them who rely on them fear it, which has a similar effect.
I am unusually immune to these pressures. I have skills that can earn money on demand, without getting a formal job or the approval of a maze, if I need that. I also have robust savings and a family and community that would save me if I fell upon hard times. This terror is still one of the things I have often struggled with.
Freeing a select group to do things without regard to such concerns, and people knowing they have the option to join this group, would be a major cultural change. Ideally this would then become the ‘city on a hill’ that shows what is possible, and gets emulated elsewhere. Regulatory and other legal issues would still have to be navigated, which would often be most of the difficulty of any worthwhile operation. This is why typical versions of this type of proposal go to places like seasteading. Mazes will instinctively attempt to crush whatever is being built.
If one is sitting on a large pile of money and wishes to do good, or simply wishes to increase production, deploying that money effectively has proven a very hard problem. This is to be expected, under even the best/worst conditions, as such problems are anti-inductive. Any easy answers get utilized until they stop being easy answers. Once others find out your criteria for spending or granting money, some of them will Goodhart and/or commit fraud to extract those funds.
The closer you attempt to stick to specified metrics and use criteria you can explain and justify that look consistent, the more you are optimizing over time for those who Goodhart and commit fraud in order to create the appearance of attempting to help in the approved ways, rather than optimizing for actually helping. This is certainly a danger to the full stack operation as well, and the best reason to keep the operation relatively small.
The less you stick to such methods, the more illegible you become, the more blameworthy you appear, and the more likely you are to be using 'what makes you feel good' as your actual metric. Which, in turn, is even easier to Goodhart or commit fraud on.
Sticking to ‘do the right thing,’ as this solution suggests, and rewarding those who do right things is a rather crazy ask without rich contextual knowledge. The larger you scale, the more universal you attempt to get, the crazier it gets. Goodharting or committing fraud on ‘right thingness’ is as much a threat as Goodharting or committing fraud on anything else, if you’re not staying a step ahead. That very freedom from mazes, Goodharting and fraud is the precious thing you’re trying to get in the first place.
The project has to cash itself out purely on its own terms. It has to care more about doing things its own way than getting things done or looking effective, where that own way is a ruthless focus on what will actually work. Everyone’s instinct, even that of the best possible additions, will be to abandon this at every step. Everyone will face constant pressure to do so.
But without sufficient scale to complete the stack, how do you break free from, and securely break the right people away from the need to worry about, mazes and other outside forces?
Threading that needle is going to be very difficult, even if the other impossible problems are solved. I do not think any one person or formal group can be the head of the entire stack without it getting too large. One must form a distinct subset, and hope others form the required other parts, and until that happens purchase what one needs from the outside using capital, and trust those in the project to continue interacting economically in some ways outside of the stack.
Ideally one does not need to literally go to Mars to be allowed to complete the project. However, if one does need to literally go to Mars, then there is a fair argument that literally going to Mars is a reasonable price to pay to be allowed to complete the project.
The next post asks what we should do when we have a project that would benefit from a large organization.
14 comments
comment by johnswentworth · 2020-01-31T18:01:05.525Z · LW(p) · GW(p)
My interpretation of the previous several posts is: alignment of organizations is hard, and if you're even a little bit misaligned, the mazeys will exploit that misalignment to the hilt. Allow any divergence between measures of performance and actual performance, and a whole bureaucracy will soon arise, living off of that divergence and expanding it whenever possible.
My interpretation of this post is: let's solve it by making a fund which pays people to always be aligned! The only hard part is figuring out how to verify that they are, in fact, aligned.
... Which was the whole problem to begin with.
The underlying problem is that alignment is hard. If we had a better way to align organizations, then organizations which use that method would already be outperforming everyone else. The technique would already be used. Invent that technology (possibly a social technology), and it will spread. The mazes will fight it, and the mazes will die. But absent some sort of alignment technology, there is not much else which will help.
This is a problem which fundamentally cannot be fixed by throwing money at it. Setting up a fund to pay people for being aligned will result in people trying to look aligned. Without some way of making one's measure of alignment match actual alignment, this will not do any good at all.
↑ comment by Zvi · 2020-02-01T19:43:38.112Z · LW(p) · GW(p)
I was in no way trying to disguise that the problem of people faking alignment with the stack in order to extract resources is the biggest problem with the project, if someone were to actually try to implement it. If I get feedback that this wasn't clear enough I will edit to make it more clear. And certainly one does not simply throw money at the problem.
So that far, fair enough.
However, this also incorporates a number of assumptions, and a general view of how things function, that I do not share.
First, the idea that alignment is a singular problem, or that it either does or does not have a solution. That seems very wrong to me. Alignment has varying known solutions depending on the situation, which prices you are willing to pay, and how much you care, and varies based on what alignment you are trying to verify. You can also attempt to structure the deal such that people who are non-aligned (e.g. with the maze nature, or even not very into being opposed to it) do not want what you are offering.
I don't think there are cheap solutions. And yes, eventually you will fail and have to start over, but I do think this is tractable for long enough to make a big difference.
Second, the idea that if there was a solution then it would be implemented because it outcompetes others just doesn't match my model on multiple levels. I don't think it would be worth paying the kind of prices the stack would be willing to pay, in order to align a generic corporation. It's not even clear that this level of anti-maze would be an advantage in that spot, given the general reaction to such a thing on many levels and the need for deep interaction with mazes. And it's often the case that there are big wins, and people just don't know about them, or they know about them but for some reason don't take them. I've stopped finding such things weird.
You can also do it backwards-only if you're too scared of this - award it to people who you already are confident in now, and don't extend it later to avoid corruption. It would be a good start on many goals.
In any case, yes, I have thought a lot about the practical problems, most of which such people already face much worse in other forms, and have many many thoughts about them, and the problem is hard. But not 'give up this doesn't actually help' kinds of hard.
Not going to go deeper than that here. If I decide to expand on the problem I'll do it with more posts (which are not currently planned).
↑ comment by senguidev (niark) · 2022-05-11T06:34:52.170Z · LW(p) · GW(p)
If you want to go further, "Reinventing Organizations" by Frederic Laloux is basically a book on "creating full alternative stacks".
He tried to compile examples of organizations working with this mindset. He tries to build intuition on why these are successful and how to reproduce their success. It goes into practical details of internal processes / tools adapted to this new way of working.
My take is that it's hard. But probably worth trying because there aren't any better alternatives.
If you're seriously interested in finding a way out of bad equilibria, this book surprisingly clarifies a lot of unsuspected options. I highly recommend it.
comment by Vaniver · 2020-01-31T20:11:17.467Z · LW(p) · GW(p)
One needs an order of magnitude more than that over a lifetime to be secure while not compromising one’s interactions with society.
I'm confused by this. Isn't compromising one's interactions with society the point?
That is, suppose most costs are rent-seeking. If you want your startup to do something in the Bay, you have to pay Bay Area landlords about half of your investment capital (through rent and increased salaries). But why does your startup have to do something in the Bay? Because marketing, both to customers and future investors, and employees who want to be able to jump between companies, and various other things.
If you instead want to deliberately avoid marketing / mazes, why not do it in rural Pennsylvania? The rents are low, there.
Like, it seems to me the thing you're suggesting is something like an Amish community, but with something more healthy than God at the center. And that suggests you should do something more like what the Amish do, and less like staying in NYC, where if you don't have ~$10M in the bank or in expected future compensation you're not going to be financially secure while purchasing all the markers of being in the professional class. Like, why pay more for a house in a 'good' school district when you're just going to unschool your kids anyway?
Freeing a select group to do things without regard to such concerns, and people knowing they have the option to join this group, would be a major cultural change. Ideally this would then become the ‘city on a hill’ that shows what is possible, and gets emulated elsewhere.
I think the history of the Thiel Fellowship should be interesting, in this regard. My sense is that it tried to do this, couldn't find the people for it, and then pivoted to just be another way to perform well in the broader maze.
↑ comment by Zvi · 2020-02-01T18:27:23.546Z · LW(p) · GW(p)
The point is to compromise one's interactions with society in the sense that you want to change what they are. But in this frame, the idea is that your interactions were previously being compromised by the worry that some day you may need to extract money from society / mazes, and this seeks to prevent that.
Consider the Thiel fellowship. Yes, it helps people get their start, but their orders are to go out into the world and start a normal business and raise money the normal way. It's better than letting those people go to college, so yay fellowship, but it's totally not this thing. It was a way to let kids who knew that college was a trap skip college. Or at least, that's my understanding.
Thiel literally proposed funding me in the full stack way at a meeting - not personally for life, but for a proposed company, which was going to be biotech-related so it was much closer to normal procedure. He got the logic. But when he came back to his social situation he couldn't follow through. Biotech has to work this way for companies because of hold-up problems and dependencies, you agree on the later rounds in advance with criteria for unlocking them. It's not the full full stack, but it's the core idea that you need to be secure from concerns that would bury the real operation if you had to worry about them.
Creating a new entire community in a new location makes perfect sense, and is one good way to consider implementation.
comment by Ben Pace (Benito) · 2021-12-27T18:31:32.988Z · LW(p) · GW(p)
Create a Full Alternative Stack [LW · GW] is probably in the top 15 ideas I got from LW in 2020. Thinking through this as an option has helped me decide when and where to engage with "the establishment" in many areas (e.g. academia). Some parts of my life I work with the mazes whilst trying not to get too much of it on me, and some parts of my life I try to build alternative stacks. (Not the full version, I don't have the time to fix all of civilization.) I give it +4.
Broader comment on the Mazes sequence as a whole:
The sequence is an extended meditation on a theme, exploring it from lots of perspectives, about how large projects and large coordination efforts end up being eaten by Moloch. The specific perspective reminds me a bit of The Screwtape Letters. In The Screwtape Letters, the two devils are focused on causing people to be immoral. The explicit optimization for vices and personal flaws helps highlight (to me) what it looks like when I'm doing something really stupid or harmful within myself.
Similarly, this sequence explores the perspective of large groups of people who live to game a large company, not to actually achieve the goals of the company. What that culture looks like, what is rewarded, what it feels like to be in it.
I've executed some of these strategies in my life. I don't think I've ever lived the life of the soulless middle-manager stereotyped by the sequence, but I see elements of it in myself, and I'm grateful to the sequence for helping me identify those cognitive patterns.
Something the sequence really conveys, is not just that individuals can try to game a company, but that a whole company's culture can change such that gaming-behavior is expected and rewarded. It contains a lot of detail about what that culture looks and feels like.
The sequence (including the essay "Motive Ambiguity") has led me to see how, in such an environment, groups of people can end up optimizing for the opposite of their stated purpose.
The sequence doesn't hold together as a whole to me. I don't get the perfect or superperfect competition idea at the top. Some of the claims seem like a stretch or not really argued for, just completing the pattern when riffing on a theme. But I'm not going to review the weaknesses here, my goal is mostly to advocate for the best parts of it that I'd like to see score more highly in the book.
This post is one of my three picks from the sequence, along with The Road to Mazedom, and Protecting Large Projects Against Mazedom. (Also Motive Ambiguity which is not technically part of the sequence.)
(This review is taken from my post Ben Pace's Controversial Picks for the 2020 Review [LW · GW].)
comment by Decius · 2020-04-18T00:09:12.305Z · LW(p) · GW(p)
What would such a full-stack organization do?
They couldn't try to optimize for any thing.
So... don't optimize? Not even for longevity. But also don't signal anti-optimization, or whatever the opposite of optimizing is. Don't reward signalling, and don't do any of the opposites of rewarding signalling.
I try to imagine such an entity ex nihilo, and I keep getting a novel experience that I'm going to call 'paradigm error'. But when I try to apply those qualities to various organizations, there is simply a mismatch, and different organizations mismatch different things.
Don't try to optimize anything, including meta to this description. Instead, try to do a good job of that thing. Including doing a good job at this meta-thing. My intuition says that doing a good job at avoiding Goodhart while doing a good job of doing things is often going to mean using fuzzy metrics. I can't describe what I actually mean by 'fuzzy metrics', but using fuzzy metrics to evaluate a thing is adjacent to, and not, having someone observe the thing and then rate how good they think it was on a scale (that method is a badly done hard metric). It might look like a narrative evaluation by an expert observer, but I think a core feature of what I'm calling a 'fuzzy metric' is that there is no way to generate a fuzzy metric by following a written or unwritten formal procedure.
When checking to see if you're doing a good job, look at some things that can be measured objectively, and combine those measurements with a thing that it is impossible for me to tell you how to get. Maybe "Don't not go with your gut." (double negative intended) might be a good job of explaining the non-measurement part of evaluation.
Such a full-stack organization can of course not optimize for maintaining an ideal culture, because that would be sacrificing literally all value. But they can try to do a good job of maintaining a good culture, identifying people who make the culture worse and humanely moving them to locations where they stop influencing the organizational culture.
In large organizations, hierarchy is impossible to avoid. I think a good tool to reduce that is to say that each level of hierarchy should have a unique object-level thing that they do, beyond bookkeeping or managerial tasks for the other levels. If in a corporate context, anyone who successfully replaces their own job with a few spreadsheet formulae should not by default be punished for/with Redundancy.
comment by Dagon · 2020-01-31T20:19:50.545Z · LW(p) · GW(p)
Hmm. This seems to ignore the underlying principal-agent problem that is a partial cause of mazes: most (perhaps all, perhaps including you, certainly including me) people aren't willing to truly dedicate their entire life to a thing. There just aren't enough people who are willing/able/whatever to ignore all interpersonal competition for some of the slack (whether that be money or time or other non-shared-goal-directed value).
comment by Gunnar_Zarncke · 2022-02-14T01:25:17.201Z · LW(p) · GW(p)
I think I am missing one possible solution in your list of ten solutions though the tenth one seems closest to what I called "General Public Contract" in this LW question from 2020 [LW(p) · GW(p)] (though the idea is older):
General Public Contract - A growing cooperative society - by some called a cult - that applies a positive-sum mechanism to build a better society on top of - or embedded in the existing society at large. It does so by using a valid legal contract among all parties. A contract like the "General Public Virus", that requires participants to contribute to it to gain its benefits. One extreme example would be a contract that requires you to effectively give up your private property except for your immediate belongings in exchange for access to the net utility of the property managed under the contract.
comment by ChristianKl · 2020-02-01T20:08:21.291Z · LW(p) · GW(p)
Milestones are set. If they are met, funding is unlocked at pre-negotiated levels, locked in for both sides in advance.
It seems to me like the pressure to Goodhart is higher when you agree to be funded only by a single entity with specific pre-negotiated milestones than if you are in a state where you are going to seek capital from a bunch of people with at least slightly different evaluation criteria.
↑ comment by Dagon · 2020-02-01T22:00:24.756Z · LW(p) · GW(p)
Goodhart applies to any use of an alignment indicator, not just funding.
↑ comment by ChristianKl · 2020-02-02T17:21:49.875Z · LW(p) · GW(p)
Yes, but I don't see how that changes the fact that this setup creates stronger incentives to Goodhart the milestones, given that they decide whether the company dies or can continue to function.
comment by TeaTieAndHat (Augustin Portier) · 2021-12-16T12:53:44.164Z · LW(p) · GW(p)
I may be oversimplifying here, but if I wanted to sum up what being a maze is about in a few words, I would say it has to do with a kind of Goodhart's law problem: in large organizations, it is hard for the top to get information, so everything is reduced to simple metrics, and we end up optimizing for them, eventually destroying everything else. If that problem of information really is the bulk of it, the fact that your plan does not really remove the need for indirect measurements of things is the big issue, and I am not sure what could be done to solve that. In fact, I'm not sure we could do it completely in any context. You want to reward people who "Disengage entirely with mazes and traditional distortionary incentives, competitions and signals of all kinds". For mazes, something might be tried, but disengaging from status-seeking, attractive as it sounds, looks like saying "just disengage from cognitive biases": a good goal if you know you won't achieve it.
More broadly, the most reliable thing I can think of to reduce these communication problems, indeed the one thing we replaced with mazes, is a lot of social capital, which probably implies limiting oneself to small communities and small businesses. Also, much more social pressure, with all its problems. To an extent, that's the point, but it's also something we really would want to avoid doing too much. Or it may even be that we are mainly complaining about mazes because that's what they are already doing. I wonder to what extent the need for social capital, if we don't want to have mazes, might be compensated by the fact that, compared to a hypothetical pre-mazes era, communication costs are down by a massive amount nowadays.