Dan Luu on "You can only communicate one top priority"
post by Raemon · 2023-03-18T18:55:09.998Z · 18 comments
This is a link post for https://twitter.com/danluu/status/1487228574608211969
h/t to rpglover64, who pointed me towards this twitter thread in this comment.
Here's Dan Luu's take on what happens when orgs try to communicate nuanced priorities. (Related to my You Get About Five Words post)
One thing it took me quite a while to understand is how few bits of information it's possible to reliably convey to a large number of people. When I was at MS, I remember initially being surprised at how unnuanced their communication was, but it really makes sense in hindsight.
For example, when I joined Azure, I asked people what the biggest risk to Azure was and the dominant answer was that if we had more global outages, major customers would lose trust in us and we'd lose them forever, permanently crippling the business.
Meanwhile, the only message VPs communicated was the need for high velocity. When I asked why there was no communication about the thing considered the highest risk to the business, the answer was if they sent out a mixed message that included reliability, nothing would get done.
The fear was that if they said that they needed to ship fast and improve reliability, reliability would be used as an excuse to not ship quickly and needing to ship quickly would be used as an excuse for poor reliability and they'd achieve none of their goals.
When I first heard this, I thought it was odd, but having since paid attention to what happens when VPs and directors attempt to communicate information downwards, I have to concede that it seems like the MS VPs were right and nuanced communication usually doesn't work at scale.
I've seen quite a few people in upper management attempt to convey a mixed/nuanced message since my time at MS and I have yet to observe a case of this working in a major org at a large company (I have seen this work at a startup, but that's a very different environment).
I've noticed this problem with my blog as well. E.g., I have some posts saying BigCo $ is better than startup $ for p50 and maybe even p90 outcomes and that you should work at startups for reasons other than pay. People often read those posts as "you shouldn't work at startups".
I see this for every post, e.g., when I talked about how latency hadn't improved, one of the most common responses I got was about how I don't understand the good reasons for complexity. I literally said there are good reasons for complexity in the post!
As noted previously, most internet commenters can't follow constructions as simple as an AND, and I don't want to be in the business of trying to convey what I'd like to convey to people who won't bother to understand an AND, since I'd rather convey nuance.
But that's because, if I write a blog post and 5% of HN readers get it and 95% miss the point, I view that as a good outcome since it was useful for 5% of people. If you want to convey nuanced information to everyone, I think that's impossible, and I don't want to lose the nuance.
If people won't read a simple AND, there's no way to simplify a nuanced position (which is necessarily far more complex than an AND) enough that people in general will follow it. So it's a choice between conveying nuance to the people who will read and dropping nuance because most people don't read.
But it's different if you run a large org. If you send out a nuanced message and 5% of people get it and 95% of people do contradictory things because they understood different parts of the message, that's a disaster. I see this all the time when VPs try to convey nuance.
BTW, this is why, despite being widely mocked, "move fast & break things" can be a good value. It conveys which side of the trade-off people should choose. A number of companies I know of have put velocity & reliability/safety/etc. into their values and it's failed every time.
MS leadership eventually changed the message from velocity to reliability. First one message, then the next, not both at once. When I checked a while ago, measured by a third party, Azure reliability was above GCP's and close enough to AWS's that it stopped being an existential threat.
Azure has, of course, also lapped Google on enterprise features & sales and is a solid #2 in cloud despite starting with infrastructure that was a decade behind Google's, technically. I can't say that I enjoyed working for Azure, but I respect the leadership and learned a lot.
18 comments
Comments sorted by top scores.
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-03-18T23:20:59.191Z · LW(p) · GW(p)
This analysis is relevant to understanding political fights online. Most activist groups are smart enough to realize that they need to handle tradeoffs in their personal lives. But when it comes to shaping society's priorities, activists push for one clear-cut top goal. That's why calls for nuance, or pointing out that a priority is being overdone, don't sit well with them: such suggestions risk sending a mixed message and creating a permissive atmosphere in which neglecting one value is acceptable as long as you're focusing on another. This whole stance rests on the belief that political debates like these are mostly about setting the rules for everyday people who can't handle nuanced thinking.
If this analysis really nails down what each side in these fights is objecting to from the other's point of view, I'm curious if it'd be helpful for someone to address this issue directly. A lot of times, when I get pulled into these debates, it's like deep down, I feel the other side is totally overlooking a genuine tradeoff, and making bad decisions because they're not thinking at the margin. But if they're actually assuming that society can only focus on one priority at a time and arguing for what they believe is the most important, then I'm getting worked up over a totalizing worldview they don't really have. Instead, I should talk about why I think some other value matters more.
It'd be awesome if people could and would prioritize handling nuance and tradeoffs in political talks. Maybe we should make that society's top priority?
Replies from: gwern, JBlack, deluks917
↑ comment by gwern · 2023-03-19T18:34:51.574Z · LW(p) · GW(p)
This is also relevant to understanding why the genre of off-the-cuff tossoffs like "what if corporations are the real superintelligence" or "why can't we solve AGI alignment the same way we solved 'aligning corporations'?" is so wrong.
Corporations are not superintelligences. They are, in fact, extremely stupid, much stupider than the sum of their parts (a million corporate employees add up to far less than one human a million times smarter), suffer from severe diseconomies of scale, and are subject to only the weakest forms of natural selection, because their inability to replicate themselves reliably leads to a permanently large dispersion in efficiency/quality between corporations. (You will never see a single especially well-run corporation take over most of the business world the way you repeatedly saw more-fit COVID variants drive lesser variants to extinction.) They are so stupid that they cannot walk and chew bubblegum at the same time, and must choose, because they can only have 1 top priority at a time - and CEOs exist mostly to repeat the top priority that "we do X".
Why then do we have corporations and they have any real-world power at all? Because they are simply very large and parallel and potentially-immortal, and are the least-bad organizations human minds can reliably form at present given the blackbox of human minds & inability to copy them. Not because they are optimal or intelligent.
Replies from: lc
↑ comment by JBlack · 2023-03-19T00:49:38.161Z · LW(p) · GW(p)
If you have a thousand organisations each pushing in a different cardinal direction in some high-dimensional space, getting backing and making progress based on how important their issue is to varying numbers of people, that looks a lot like some sort of gradient descent. Maybe this sort of single-issue focus isn't as inefficient as it might appear?
There are plenty of ways this analogy can break down, and also plenty of ways it can go wrong even within the analogy. A major victory in one direction can easily "overshoot" into a highly sub-optimal state (e.g. revolution), or various factors can consolidate a lot of update power into just two opposed directions (e.g. polarized two-party states).
Plus of course gradient descent is generally based on some error function that can be evaluated precisely and doesn't change while you're trying to optimize, neither of which is true in politics, so the analogy is far from perfect.
Replies from: AllAmericanBreakfast, Davidmanheim
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2023-03-19T01:09:22.020Z · LW(p) · GW(p)
This is going to be a fun idea to think about! Thanks.
↑ comment by Davidmanheim · 2023-03-19T12:55:23.949Z · LW(p) · GW(p)
It's a reasonable model. One problem with it as a predictive model, however, is that log-rolling happens across issues; a politician might give up on their budget-cutting to kill an anti-business provision, or give up an environmental rule to increase healthcare spending. So the gradients aren't actually single-valued; there's a complex correlation/tradeoff matrix between them.
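To make the analogy concrete, here is a minimal toy sketch (purely illustrative, with made-up numbers; the `tradeoff` matrix below is hypothetical) of single-issue pushes whose effects are coupled through cross-issue tradeoffs, so no issue's "gradient" acts on its own axis alone:

```python
import numpy as np

rng = np.random.default_rng(0)

n_issues = 5
state = np.zeros(n_issues)                    # current "policy position"
importance = rng.uniform(0.1, 1.0, n_issues)  # how much backing each issue has

# Hypothetical tradeoff matrix: the diagonal is direct progress on an issue,
# off-diagonal entries are spillovers (log-rolling, side effects) between issues.
tradeoff = np.eye(n_issues) + 0.2 * rng.standard_normal((n_issues, n_issues))

for _ in range(100):
    # Each single-issue group pushes only along its own axis,
    # with step size proportional to how much people care right now.
    push = importance * rng.uniform(0.5, 1.5, n_issues)
    # The actual movement is the pushes filtered through the tradeoffs,
    # so the effective gradient per issue is not single-valued.
    state += 0.05 * tradeoff @ push

print(np.round(state, 2))
```

With the off-diagonal terms set to zero this reduces to the clean "thousand groups doing gradient descent" picture; with them nonzero, progress on one axis partially helps or undoes progress on others, which is the complication being pointed at.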
Replies from: AllAmericanBreakfast
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2023-03-19T16:29:58.416Z · LW(p) · GW(p)
It seems like large organizations achieve structure through a combination of legislation and value-setting. They use policies and rules to legislate nuance, but rely on a single value to steer daily decision-making. This whole analysis really needs to be understood as being about the daily decision-making piece of the puzzle.
Replies from: Davidmanheim
↑ comment by Davidmanheim · 2023-03-20T08:03:29.430Z · LW(p) · GW(p)
I think this ignores how decisions actually get made, but I think we're operating at too high a level of abstraction to actually disagree productively.
↑ comment by sapphire (deluks917) · 2023-03-19T21:51:31.480Z · LW(p) · GW(p)
Arguably EA/Rationality needed much simpler and less nuanced messaging on how to deal with AI capabilities companies. We really should have gone with 'absolutely do not help or work for companies increasing AI capabilities. Only work directly on safety.' Nuance is cool and all, but the nuanced messaging arguably just ended up enabling Anthropic and OpenAI.
comment by tailcalled · 2023-03-20T13:42:08.573Z · LW(p) · GW(p)
I think internal communication in organizations like MS can be pretty bad. There are almost no attempts to build shared frames or to write documentation on object-level questions, let alone to build frames for understanding the interactions within the organization.
I think that forms an obstacle to communication. If people don't have a shared set of concepts, it seems inherently hard to communicate things. I would be interested in whether more nuance can be achieved if more philosophical legwork is done.
comment by Review Bot · 2024-04-02T02:45:36.245Z · LW(p) · GW(p)
The LessWrong Review runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?
comment by M. Y. Zuo · 2023-03-20T14:14:16.550Z · LW(p) · GW(p)
Isn't this a bit tautological? After all, by definition 'top priority' implies a singular 'top'...
Replies from: sharmake-farah
↑ comment by Noosphere89 (sharmake-farah) · 2023-03-20T18:41:22.050Z · LW(p) · GW(p)
I do think there's a bit more lurking here. The basic implication of Dan Luu's tweets is that you can really only have one priority at all: two is already a mess where nothing gets done, and it gets worse as the number of priorities grows.
Replies from: M. Y. Zuo
↑ comment by M. Y. Zuo · 2023-03-22T19:15:06.207Z · LW(p) · GW(p)
If the implication is that people can't have secondary priorities of lower importance, then that seems just false?
Replies from: AllAmericanBreakfast
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2023-03-22T19:24:37.464Z · LW(p) · GW(p)
Have you read the post? It specifically says this is for big organizations, and not relevant to small ones (or by extension individuals).
Replies from: M. Y. Zuo
↑ comment by M. Y. Zuo · 2023-03-22T22:20:29.832Z · LW(p) · GW(p)
The post lays out a decent argument for why organizations can't maintain two priorities of roughly the same importance.
But I don't see why that bars the possibility of secondary priorities that are clearly stated to be much less important?
Replies from: AllAmericanBreakfast
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2023-03-22T22:58:43.872Z · LW(p) · GW(p)
I see what you’re saying - I thought you were referring to individual people. I’m pretty sure we all agree here and this is just a semantics thing.