In Search of Strategic Clarity

post by james.lucassen · 2022-07-08T00:52:02.794Z · LW · GW · 1 comments

This is a link post for https://jlucassen.com/strategic-clarity/

Contents

  What is Strategic Clarity?
  And How Do I Get It?

Context: quickly written up, less original than I expected it to be, but hey that's a good sign. It all adds up to normality [? · GW].

The concept of "strategic clarity" has recently become increasingly important to how I think. It doesn't really have a precise definition that I've seen - as far as I can tell it's mostly just used to point to something roughly like "knowing what the fuck is going on". Personally, the strongest association I have with the term is a pointer away from a certain undesirable state. When I feel like the problem I'm thinking about is a big muddle, and I don't have a clear way to make progress because there are no good starting points with solid footing, I call that "lack of strategic clarity".

Anecdotally, it seems like I rarely produce thinking that I consider good and useful when I try to just charge through a lack of strategic clarity. I often get frustrated, or wind up on claims whose truthfulness I don't know how to assess, or just freeze up and find myself staring blankly out the window all of a sudden. It's gotten to the point where I pretty much never do that anymore - when I feel muddled, I almost instinctively flee back up the abstraction ladder, and think about ways I could approach the problem that wouldn't feel so muddled.

But there's a hole in this that bothers me. I don't have a crisp definition for what "strategic clarity" actually is! I'm fine with using it as a loose felt-sense label when it's not very load-bearing, but if I'm frequently going out and searching for strategic clarity, I'd better know what exactly I'm looking for!

So if you're willing to tolerate a frankly heinous degree of abstraction, join me as I attempt to get strategic clarity on how to get strategic clarity. Ugh just looking at that sentence makes me want to vo-

What is Strategic Clarity?

The number one concrete example I have for a topic where I lack strategic clarity is AI strategy/governance. When I try to think about AI strategy, the thing that jumps out at me first is the massive empirical uncertainty. How will the first AGI be developed? Who will develop it? When? How fast? What will it be used for?

But if I imagine I had a magic box that could give me answers to these questions, I don't think the lack-of-strategic-clarity feeling would go away. Combining all those answers into a strategy still feels very muddled. Lottery tickets have tons of empirical uncertainty, but I feel pretty darn clear about whether I should buy lottery tickets or not. Besides, I can think of examples where I lack strategic clarity with very little empirical uncertainty. I never had a Rubik's cube phase, and to this day don't know how to solve one. If I imagine sitting down with a Rubik's cube, I know exactly what every move does, and exactly where I'm trying to go, but it still feels like a big unclear mess when I try to figure out what to do. I can't identify any sub-problems, or even moves that would make my situation less scrambled - the task still feels monolithic and unresponsive. Before making any real attempt at a solution, I would have to spend a while figuring out what's even going on, until that monolithic, unresponsive feeling goes away.

"Monolithic and unresponsive" feels like it's getting at the heart of the problem, but it also feels like a redundant description. Sitting with a Rubik's cube does feel both monolithic and unresponsive, but I think it's unresponsive because it's monolithic. If I could break down the problem into sub-problems, that would make it much easier to solve - each sub-problem would have fewer degrees of freedom to deal with, and would be easier to think through. Maybe I could even break it down further [? · GW], until the sub-problems are easy enough to solve by inspection.

Now that I think about it, this is usually how naive attempts to solve a Rubik's cube go! You try and solve one color first. Sometimes you even succeed. But then it's much harder to do anything to the other colors without screwing up your one solved color. You can technically decompose the problem into six sub-problems if you want, but this doesn't actually make anything easier because you can't solve the problems independently. It doesn't save you any interactions to think about - you still need to consider the global situation to solve one side at a time.

This also matches my impression of why AI strategy is hard. The problem is obviously too big for me to solve by inspection. But when I try to think about any particular sub-problem, it doesn't simplify things by much. Suppose I try to factor the problem into legislative and non-legislative measures - this doesn't save me any interactions, because the ideal legislation depends in large part on the non-legislative situation, and vice versa. You can't solve for them independently. One factorization that is pretty good is the technical/strategy distinction - solving the technical alignment problem is a clearly necessary sub-goal, and one that's mostly separate from the strategy problem. That said, the two sub-problems still have some interactions: the "alignment tax [? · GW]" is one important example.

This is my current theory of strategic clarity - it's the art of finding sub-problems whose interactions matter little or not at all. I'm also hesitantly partial to a variety of pithier but more figurative summaries: factoring problems, or cleaving problems at the joints.
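To make this "few interactions" criterion a bit more concrete, here's a hypothetical sketch (my own toy model, not anything from the original post): represent a problem as variables plus pairwise interactions, treat a factorization as a partition of the variables, and score it by how many interactions cross sub-problem boundaries. Under this framing, the Rubik's cube's six-faces factorization scores terribly, while a problem with a genuine joint scores well.

```python
from itertools import combinations

def cross_interactions(partition, interactions):
    """Count interactions whose endpoints land in different sub-problems."""
    group_of = {v: i for i, group in enumerate(partition) for v in group}
    return sum(1 for a, b in interactions if group_of[a] != group_of[b])

# Toy Rubik's-cube-style problem: every "face" interacts with every other
# face, so splitting the problem by face saves you nothing.
faces = ["U", "D", "L", "R", "F", "B"]
dense = list(combinations(faces, 2))   # all 15 pairwise interactions
by_face = [[f] for f in faces]
print(cross_interactions(by_face, dense))       # 15: every interaction crosses

# A problem with a clean joint: two clusters, one interaction between them.
clustered = [("a1", "a2"), ("a2", "a3"), ("b1", "b2"), ("a1", "b1")]
good_split = [["a1", "a2", "a3"], ["b1", "b2"]]
print(cross_interactions(good_split, clustered))  # 1: nearly independent
```

On this toy view, searching for strategic clarity looks like a min-cut problem: you're hunting for the partition that leaves the fewest interactions to juggle across sub-problems.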

And How Do I Get It?

There are a bunch of things I've been doing to try and get strategic clarity, which now make sense in this more explicit framing. Maybe I'll even be able to come up with some new ones, or get better at using these tools.

1 comment

Comments sorted by top scores.

comment by Emrik (Emrik North) · 2022-09-01T14:43:03.682Z · LW(p) · GW(p)

This is excellent and I'm dismayed that it only has two votes. It clarified something important for me.