Strategic Thinking: Paradigm Selection
post by katydee · 2017-01-24T06:24:42.727Z · LW · GW · Legacy
Perhaps the most important concept in strategy is operating within the right paradigm. Orient towards the right style or doctrine before you begin implementing your plan - that could be "no known doctrine, we'll have to improvise", but if it is, you need to know that! If you choose the wrong basic procedure or style, you will end up refining a plan or method that ultimately can't get you where you want to go, and you will likely find it difficult to escape.
This is one of the Big Deep Concepts that seem to crop up all over the place. A few examples:
- In software development, one form of this error is known as "premature optimization": focusing on optimizing existing processes before considering whether those processes are really what the final version of your system needs. If those processes end up getting cut, you've wasted a bunch of time; if you end up keeping them to avoid "wasting work," the sunk cost fallacy may have blocked you from implementing a superior architecture. (A minimal code sketch of this failure mode follows this list.)
- In the military, a common mistake of this type leads to "fighting the last war" - the tendency of military planners and weapons designers to create strategies and weapon systems that would be optimal for fighting a repeat of the previous big war, only to find that paradigm shifts have rendered these methods obsolete. For instance, many tanks used early in World War II had been designed based on the trench warfare conditions of World War I and proved extremely ineffective in the more mobile style of warfare that actually developed.
- In competitive gaming, this explains what David Sirlin calls "scrubs" - players who play by their own made-up rules rather than the true ones, and thus find themselves unprepared to play against people without the same constraints. It isn't that the scrub is a fundamentally bad or incompetent player - it's just that they've chosen the wrong paradigm, one that greatly limits their ability when they come into contact with the real world.
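To make the software example concrete, here is a minimal, hypothetical sketch (the loader functions and the CSV-to-JSON switch are invented for illustration, not drawn from any real project): effort spent tuning a data-loading path is wasted once the paradigm shifts and that path gets cut.

```python
import csv
import io
import json

def load_records_csv_optimized(raw: str) -> list:
    """Carefully tuned CSV loader: precomputes field indices to avoid per-row lookups."""
    reader = csv.reader(io.StringIO(raw))
    header = next(reader)
    index = {name: i for i, name in enumerate(header)}  # computed once, reused per row
    return [{name: row[i] for name, i in index.items()} for row in reader]

def load_records_json(raw: str) -> list:
    """What the final system actually needs once the upstream format changes."""
    return json.loads(raw)

if __name__ == "__main__":
    # The tuned CSV path works fine in isolation...
    print(load_records_csv_optimized("id,name\n1,alpha\n2,beta"))
    # ...but the paradigm shifts: the data source now emits JSON, and the
    # optimized loader (plus the time spent tuning it) simply gets cut.
    print(load_records_json('[{"id": 1, "name": "alpha"}]'))
```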
This same limitation is present in almost every field that I have seen, and considering it is critical. Before you begin investing heavily in a project, you should ask yourself whether this is really the right paradigm to accomplish your goals. Overinvesting in the wrong paradigm has a doubly pernicious effect - not only are your immediate efforts less effective than they could be, but you also become especially vulnerable to the sunk cost fallacy. Keep in mind that even those who are aware of the sunk cost fallacy are not immune to it!
Therefore, when making big decisions, don't just jump into the first paradigm that presents itself, or even the one that seems to make the most sense on initial reflection. Instead, really, truly consider whether this approach is the best one to get you what you want. Look at the goal that you're aiming for, and consider whether there are other ways to achieve it that might be more effective, less expensive, or both.
Here are some sample situations that can be considered paradigm-selection problems:
- Do you really need to go and get a CS degree in order to become a computer programmer, or will a bootcamp get you started faster and cheaper?
- Does your organization's restructuring plan really hit the core problems, or is it merely addressing the most obvious surface-level issues?
- Will aircraft carrier-centric naval tactics be effective in a future large-scale conventional war, or is the aircraft carrier the modern equivalent of the battleship in WW2?
I don't necessarily know the answers to all these questions - note that only one is even framed as a clear choice between two options, and there are obviously other options available even in that case - but I do know that they're questions worth asking! When it comes time to make big decisions, evaluating what paradigms are available and whether the one you've chosen is the right one for the job can be critical.
16 comments
Comments sorted by top scores.
comment by JenniferRM · 2017-01-25T07:42:50.364Z · LW(p) · GW(p)
Elon Musk is sort of obsessed with thinking about things "from first principles" rather than "by analogy". Arguably, this is a generic solution to the paradigm selection problem.
Replies from: katydee
↑ comment by katydee · 2017-01-25T20:45:14.449Z · LW(p) · GW(p)
In some cases it can be - and I will discuss this further in a later post. However, there are many situations where the problems you're encountering are cleanly solved by existing paradigms, and looking at things from first principles leads only to reinventing the wheel. For instance, the appropriate paradigm for running a McDonald's franchise is extremely well understood, and there is little need (or room) for innovation in such a context.
Replies from: Viliam, Connor_Flexman
↑ comment by Viliam · 2017-01-26T10:06:38.820Z · LW(p) · GW(p)
My quite simplistic understanding is that (a) yes, there are already existing solutions, but (b) the people providing those solutions will charge you a lot of money, especially if you later become dependent on them; and you still need to check their work, and they may disagree with you on some details because at the end of the day they are optimizing for themselves, not for you.
Doing things yourself requires extra time and energy, but the money which would otherwise become someone else's profit now stays in your pockets. Essentially, as soon as you feel reasonably sure that the project will be successful, getting rid of each subcontractor means increasing your profit. You don't need to become an expert on everything, you can still hire the experts, but now they are your employees working for a salary, instead of a separate company optimizing for their own profit.
Not sure how realistic this is, but if you imagine that even a typical successful company somewhat resembles the Dilbert comic, then - if you can build your own company better - you can simply take over the people who do the actual work, and stop feeding the rest.
EDIT: I don't have experience running a company, but I am thinking about a friend who recently reconstructed his house. His original thought was "I am a software developer, this is my competitive advantage, so I will just pay the people who are experts on house construction", but it turned out that the real world doesn't work this way. Most of the so-called experts were quite incompetent, and he had to do a lot of research in their field of expertise just to be able to tell the difference. When the reconstruction was over, he already felt like he could start a new profession and do better than most of these experts. In this case, however, those experts were typically sole proprietors. If instead they had been companies renting out the experts, and if my friend were in some kind of business of repeatedly reconstructing houses, it would make sense for him to start seriously optimizing the details and build replacements for what exists out there.
↑ comment by Connor_Flexman · 2017-01-26T02:24:41.178Z · LW(p) · GW(p)
I agree with this response; using first principles is a heuristic, and heuristics always have pros and cons. Just in terms of performance, the benefit is that you can re-assess assumptions, but the cost is that you ignore a great amount of information gathered by those before you. Depending on the value of this information, you should frequently seek it out, at least as a supplement to your derivation.
comment by Viliam · 2017-01-24T17:55:41.023Z · LW(p) · GW(p)
In competitive gaming, this explains what David Sirlin calls "scrubs" - players who play by their own made-up rules rather than the true ones, and thus find themselves unprepared to play against people without the same constraints. It isn't that the scrub is a fundamentally bad or incompetent player - it's just that they've chosen the wrong paradigm, one that greatly limits their ability when they come into contact with the real world.
I suspect that in software development, trying to develop a good, bug-free program makes you a "scrub". A more reliable path to victory is to quickly make something that can be sold to the customer, and fix it later when necessary. While your competitor develops a bug-free solution, you already own the market. Furthermore, you can spend the money you made to create an improved version 2.0 of your program, so now you have both money and quality. (But maybe even this makes you a "scrub", and you should be developing another application and taking over yet another market instead.)
Replies from: niceguyanon, Lumifer, ChristianKl
↑ comment by niceguyanon · 2017-01-24T19:28:19.583Z · LW(p) · GW(p)
This is a really good example of the organization getting it right on the big picture, even when it seems like they didn't pick the right paradigm. An observation of mine is that organizations often seem dysfunctional to many participants because those participants aren't part of the profit center or privy to the overall strategy. A company can be fully aware of internal dysfunction or inefficiencies and still find them acceptable, because fixing them or making someone happy isn't worth the resources.
Replies from: Viliam
↑ comment by Viliam · 2017-01-25T09:10:23.052Z · LW(p) · GW(p)
Unfortunately, for people who are not members of the inner circle, this kind of optimization may be indistinguishable from mere incompetence, or malice. Do we produce sloppy code? Maybe delivering the code fast is more important than code quality. Do we have an incompetent person on the team? Maybe he or she is a relative of someone important, and gaining favor with that person matters a great deal. Did we actually deliver the sloppy code late? Maybe the delay was strategic somehow; maybe the company is paid by the hour, so delivering the product late was used as an excuse to extract more money from the customer; or maybe it made the customer more dependent on us; or maybe it was somehow strategically important to deliver it on Thursday. Is the company operating at a loss? Maybe the key people are actually transferring company money to their private accounts, so everything is going according to plan.
I don't know where the balance is between understanding that there may be some higher strategy I am not aware of, and simply blindly trusting the authorities (it is easy to rationalize the latter as the former). I guess it is important to notice that the "higher strategy" is not necessarily optimizing in my favor, so from my point of view there sometimes need be no difference between "it is all going to hell" and "it is all going according to plan, but part of the plan is sacrificing me". That means that unless I trust the secret wisdom and benevolence of the people behind the wheel, I should treat all apparent dysfunction as potentially bad news.
Replies from: Connor_Flexman
↑ comment by Connor_Flexman · 2017-01-26T02:49:51.222Z · LW(p) · GW(p)
As you say, the inner circle certainly may have reason to do non-obvious things. But while withholding information from people can occasionally be politically helpful, it is usually best for the company to have the employees on the same page and working toward a goal they see reason for. Because of this, I would usually assume that seemingly poor decisions in upper management are the result of actual incompetence or of a deceitful actor in the information flow on the way down.
Replies from: katydee
↑ comment by katydee · 2017-01-26T07:49:01.265Z · LW(p) · GW(p)
Broadly agreed - this is one of the main reasons I consider internal transparency to be so important in building effective organizations. In some cases, secrets must exist - but when they do, their existence should itself be common knowledge unless even that must be secret.
In other words, it is usually best to tell your teammates the true reason for something, and failing that you should ideally be able to tell them that you can't tell them. Giving fake reasons is poisonous.
↑ comment by ChristianKl · 2017-01-26T12:19:21.151Z · LW(p) · GW(p)
I suspect that in software development, trying to develop a good, bug-free program makes you a "scrub". A more reliable path to victory is to quickly make something that can be sold to the customer, and fix it later when necessary.
That's what the lean startup movement is about: rush for the minimum viable product.
comment by turchin · 2017-01-24T10:50:17.841Z · LW(p) · GW(p)
I think that in Friendly AI creation the main strategy is "How could we create Friendly AI?", but the real question is: "The Chinese are going to create AI soon. What can we do to make it safe?" It need not be the Chinese - it could be any other entity not in our control - and that implies a different strategy.
Replies from: Connor_Flexman
↑ comment by Connor_Flexman · 2017-01-26T02:39:07.505Z · LW(p) · GW(p)
I think people have already considered this, but the strategies converge. If someone else is going to make it first, you have only two possibilities: seize control by exerting a strategic advantage, or let them keep control but convince them to make it safe.
To do the former is very difficult, and the little bit of thinking that has been done about it has mostly exhausted the possibilities. To do the latter requires something like 1) giving them the tools to make it safe, 2) doing enough research to convince them to use your tools or fear catastrophe, and 3) opening communications with them. So far, MIRI and other organizations are focusing on 1 and 2, whereas you'd expect them to primarily do 1 if they expected to get it first. We aren't doing 3 with respect to China, but that is a step that isn't easy at the moment and will probably get easier as time goes on.
Replies from: turchin
↑ comment by turchin · 2017-01-26T11:40:41.486Z · LW(p) · GW(p)
I am now writing an article where I explore this type of solution.
One, similar to the ones you listed, is to sell AI safety as a service, so that any other team could hire AI safety engineers to help align their AI (basically a way to combine the tool with the means of delivering it).
Another (I don't say it is the best, but it is possible) is to create as many AI teams in the world as possible, so that a hard takeoff will always happen in several teams at once, and the world will be divided into several domains. A simple calculation shows that we need around 1000 AI teams running simultaneously to get many fooms. In fact, the actual number of AI startups, research groups, and powerful individuals is around 1000 now and growing.
There are also some other ideas; I hope to publish a draft here on LW next month.
comment by tukabel · 2017-01-29T08:18:26.744Z · LW(p) · GW(p)
And the most obvious and most costly example is the way our "advanced" society (in reality a bunch of humanimals that got too much power/tech/science from the memetic supercivilization of Intelligence) is governed, called politics.
A politician will defend any stupid decision to the death (usually of others) - a shining example is Merkel and her crimmigrants (result: merkelterrorism and NO GO zones => Europe is basically a failed state right now, one that does not have control of its own borders, parts of its land, or its security in general)... and no doubt we will see many examples from Trump as well.
This is especially effective in the current partocratic demogarchy - demos, the people, vote for mafias called political parties, but candidates are selected by the oligarchy anyway... so there are not many consequences for defending a bad decision; it is more important "not to lose face".
comment by ChristianKl · 2017-01-26T12:16:32.730Z · LW(p) · GW(p)
Will aircraft carrier-centric naval tactics be effective in a future large-scale conventional war, or is the aircraft carrier the modern equivalent of the battleship in WW2?
Aircraft carriers can be useful outside of large-scale conventional war because they allow planes to be deployed to countries like Somalia even when there's no drone base nearby.