How Real Are Moral Mazes (in Bay Area startups)?

post by Gordon Seidoh Worley (gworley) · 2022-04-03T18:08:54.220Z · LW · GW · 9 comments

How real are moral mazes? That is, how much is it the case that middle management is all about internal status competitions rather than about doing real, useful things for the business?

In general I don't know. I'm not going to do a survey of businesses to figure this out. But I can say a bit about what I've seen myself and what I've heard from friends working in software startups in the SF Bay Area.

Context: I'm a senior staff software engineer at a growing startup. This is a job I roughly describe to people as engineering middle management: I'm not a director or VP, so not literally middle management, but my level is director-equivalent and I have a similar scope of responsibility within the org (multiple teams that are part of a group; I'm responsible for the group rather than individual teams). I've also talked to a lot of people in similar roles, past and present, through work relationships, friends, and interviewing candidates.

So, the natural question is: how real do I find moral mazes to be?

Short answer: I mostly don't see them. I don't think this is some naivety on my part, either. I've seen and heard of cases where they were starting to grow, and the people responsible were transitioned to other roles. My theory: startups don't have a lot of time for bullshit, even as they are growing fast and have the most opportunity for bullshit to creep in, because bullshit can be the difference between a successful exit with large upside and making nothing (cf. WeWork, Theranos, etc.). So if you're not doing something that directly drives impact for the business, people will notice and not let you get away with it because it's a waste of precious resources needed to make money.

This leads me to a key theory: moral mazes won't exist in places where the actions of individuals have a lot of impact on the success of the business. There's simply too much at stake for everyone to let anyone get away with moral maziness.

There's a bit more moral maziness at established tech companies that are no longer startups. But even there it's not strongly of the type Zvi describes, including at companies that suffer from having more engineers than problems to solve. There's just not enough cultural tolerance for bullshit to let moral mazes flourish. I think this is thanks to the orgs being staffed with engineers and former engineers who will walk if the bullshit factor gets too high.

However, there's a related thing going on in middle management that is real but not strictly mazes, and that's Goodharting.

Here's what I tend to see happen. The business puts in place processes to ensure everyone is incentivized to do what is best for the org: prioritizing work by its measurable impact on business metrics, evaluating and promoting employees against standard rubrics, scoring interview candidates against shared criteria, and so on.

These all have the best of intentions, and I think they generally produce better results than would otherwise be achieved. If every team just did whatever it wanted, rather than work that can easily be measured, I suspect it would be much easier for things like moral mazes to spring up: teams would work on fun things that help them grow their org and keep their people happy but don't actually help the business. Similarly, if we yanked out the employee evaluation and interview rubrics, we'd create a lot more opportunity for nepotism and bias to enter the process, since the rubrics let others double-check your work and push back if it doesn't make sense.

The bad side of these mechanisms, though, is that at the margins they make it hard to do valuable work, promote valuable employees, or hire great candidates who don't fit sufficiently well within the evaluation model. And this is where I see the great trouble in middle management: you sometimes know in your gut that something is the right call, but the only way you can make it happen is by finding some way to "game" the processes in order to make it palatable.

This means that I spend a lot of time crafting and telling stories about the work that I think needs to be done. These aren't false stories, but they are stories structured to play on the expectations generated by the org's processes and culture. For example, maybe the reason I think we should do a project is that it will enable us to scale in 18 months. How do I sell this? Talking about scaling is a good start, but that's often not enough. So I look at additional angles. Are there stories I can tell about how this will impact business metrics? Is the work something that will help people get promoted? Does it create opportunities to grow a team? Does it align with a decision we've already made for another team and create "synergy"? I'm looking for how this project fits into the ways of evaluating work that we've already all agreed on, and then telling a story based on that.

Sometimes this means we don't do the work. Collectively we've chosen to prioritize different things, and any particular project might not fit. If the story isn't good enough, the work doesn't rise to the top of the stack. That's not the end of the world; it's just how the sausage gets made.

This is, however, the part where I think middle management looks inhuman to people, even if there are no moral mazes, so it's worth pointing out. People get attached to their ideas. They tend to derive some self-worth from putting forward ideas and having those be the ideas that other people like and act on. To succeed in middle management you've got to give that up! The machine is going to move forward based on how it's designed, not how you feel, although your feelings are input to the design (if you work at a relatively humane company). If it kills your pet project because it's not going to maximize the right metrics, so be it; that's how the system is designed. Sometimes this will be the wrong call, and that will feel bad. On the other hand, sometimes you'll be asked to work on a project that you know is going to fail, but you can't convince anyone of this until after it's already failed. Such is the cost of a system that can't do the impossible and eliminate all false positives and all false negatives.

So in general I don't see a lot of moral mazes in tech startups, but there is a lot of working within systems that are designed to maximize money rather than human happiness.

There are two ways to be okay with this. One is the bad way: thinking of yourself as "selling out". You suppress and surrender your humanness to the needs of the system, but deep down you still care about this project or that, and still want praise for your ideas because you derive your self-worth from them. This will probably make you miserable. If this is the only way you can join middle management or stay there, you're probably making yourself miserable for no good reason. Do something else. Many companies now offer growth paths outside middle management (specialist paths for engineers, senior team management paths for managers) where managers and processes will insulate you from the pain of how orgs work.

The other way, what I would call the good way, is to honestly stop finding self-worth in your work. This is harder and requires extensive personal psychological work, but if you can find a source of self-esteem that is intrinsic rather than extrinsic, you won't be threatened when your idea doesn't go forward. Yes, you still have to make enough good decisions and do enough good work to remain valuable to the business and keep your job, but you don't have to feel like shit because one of your decisions or bits of work failed. If you're not emotionally mature enough to do that, middle management is probably not for you (yet!).


You might have stories about why I got this all wrong. Great! Share them in the comments! What I've written above is based on the evidence I've personally gathered, but there are likely some filtering effects. For example, maybe I'm really good at staying away from the worst places, maybe the worst places produce people I never talk to or interview, or maybe the worst places fail so quickly I never get much chance to hear about them. I see this post as a step in collectively figuring out what's going on in the world.

9 comments


comment by Davidmanheim · 2022-04-03T19:56:39.762Z · LW(p) · GW(p)

I think there's a big gap here between what Zvi discusses and what you're looking at: mazes don't usually happen in smaller, more legible startups; they happen in large bureaucracies. And as I've argued in the past, the structural changes which startups undergo to turn into large corporations are significant and inevitable. You're talking about moral mazes not existing while you're doing things in order to scale, but the key changes happen once you have scaled up.

As I said in that post, legibility and flexibility are exactly what go away with scale - and once it gets big enough, middle management changes from "these are the compromises needed to align people with business goals" to "this is how the system works," regardless of purpose. And that's when the mazes start taking over.

Replies from: SDM
comment by Sammy Martin (SDM) · 2022-04-05T12:10:39.181Z · LW(p) · GW(p)

From reading your article, it seems like one of the major differences between your and Zvi's understandings of 'Mazes' is that you're much more inclined to describe the loss of legibility and flexibility as necessary features of big organizations that have to solve complex problems, rather than as something that can be turned up or down quite a bit with the right 'culture', without losing size and complexity.

Holden Karnofsky argued for something similar, i.e. that there's a very deep and necessary link between 'bureaucratic stagnation'/'mazes' and taking the interests of lots of people into account: https://www.cold-takes.com/empowerment-and-stakeholder-management/

Replies from: Davidmanheim, gworley
comment by Davidmanheim · 2022-04-05T14:55:00.043Z · LW(p) · GW(p)

I think I disagree less than you're assuming. Yes, a large degree of the problem is inevitable due to the nature of people and organizational dynamics, and despite that, of course it can differ between organizations. But I do think culture is only ever a partial solution, because of the nature of scaling and communication in organizations. And re: #2, I had a long-delayed, incomplete draft post on "stakeholder paralysis" that was making many of the points Holden did, until I saw he did it much better and got to abandon it.

comment by Gordon Seidoh Worley (gworley) · 2022-04-05T14:42:39.398Z · LW(p) · GW(p)

I think that sounds right. Even in a totally humane system there's going to be more indirection as you scale, and that leads to more opportunities for error, principal-agent problems, etc.

Replies from: Davidmanheim
comment by Davidmanheim · 2022-04-05T14:58:52.240Z · LW(p) · GW(p)

Yes, as I pointed out in the linked piece - and the relationship with principal-agent problems is why the post I linked above led me to thinking about Goodhart's law.

comment by Dagon · 2022-04-03T23:53:31.487Z · LW(p) · GW(p)

Have been in a comparable role (Principal Engineer in multiple parts of a FAANG, now DE of a late-stage startup) for a long time. I've seen some areas where things are maze-like, but overall agree with your assessment - a lot of Goodharting, a lot of short-term thinking, and some amount of "empire building" that is less severe than the Moral Mazes descriptions. Mixed with a lot of effective software development and successful business units that improve customers' lives and make money doing so.

Replies from: Davidmanheim
comment by Davidmanheim · 2022-04-04T07:41:21.318Z · LW(p) · GW(p)

Interesting. I'm wondering how large the company you were in was, and how long it had been large, since this doesn't match what others seem to report.

Facebook has 60,000 employees, Apple has 132,000, and Alphabet has 100,000 - but Netflix has 12,000, and Amazon has 1.6m. On the other hand, both Amazon and Apple have weird structures where most employees work in distribution centers or stores, and their core business is far smaller. I also think that mazes take time to develop - Facebook was under 5,000 employees a decade ago, while Apple was already above 70,000, so I'd expect very different experiences.

Replies from: Dagon
comment by Dagon · 2022-04-04T18:26:49.002Z · LW(p) · GW(p)

I keep my work identity(-ies) somewhat separate from my online social discussions, so I won't go into specifics, but it was on the larger end of the scale, and the company did have a noticeable split between the  corporate/retail, software dev, and operations/logistics parts of the business.  I can only really speak to the software dev world, which was by itself extremely large.  

The thing that's not easy to see/remember from outside is that there is a very large amount of variance in structure and culture across the overall company, and the variance occurs at multiple levels.  Some SVP-level orgs seem more maze-y than others, and some 50-person orgs within a division within an org seem culturally different from others.  I suspect the competent engineers self-select to the better-functioning areas, which makes the overall company seem better functioning to them (because their peers are pretty reasonable).  

I think my main concern with the Maze framing is that it assumes more homogeneity than is justified. There are absolutely parts of every large company I know of (I have close friends working at many different ones) that sound horrible. But the ones I know well also have pretty reasonable parts (not perfect by any means - there are lots of frustrating hindrances that get in the way - but nowhere near as one-dimensionally horrific as described in Moral Mazes). As an employee, it's best to consider the group you're working in (say, 2-3 levels of management above your job) as somewhat independent, in terms of work style, from the overall company average or its typical outside reputation.

Replies from: Davidmanheim
comment by Davidmanheim · 2022-04-04T19:33:57.019Z · LW(p) · GW(p)

Thanks - I agree, that all seems correct. I'm not sure if Zvi intended the maze framing to imply every part of every large org was that way, but to the extent he did, yes, that's going too far.