What are the most common and important trade-offs that decision makers face?
post by Andy_McKenzie · 2014-11-03T05:03:16.968Z · LW · GW · Legacy · 30 comments
Comments sorted by top scores.
comment by Sarunas · 2014-11-03T11:13:35.800Z · LW(p) · GW(p)
A few quick thoughts.
You should look into TRIZ, which seems to be related to your idea, although it is mostly applied to engineering (proponents of TRIZ do try to apply it to other domains). Basically, TRIZ is a collection of common engineering trade-offs arranged into a "contradiction matrix", together with a set of advice on how to come up with workarounds, called the 40 Inventive Principles. (How do you use it? You formulate your problem as a trade-off, find the relevant cell in the contradiction matrix, and look the numbers up in the list of 40 principles.) However, taking the outside view, you should note that TRIZ is still relatively unknown outside the former Soviet Union. If it were as useful as it is sometimes claimed to be, wouldn't it be very popular by now? Or were there cultural barriers that prevented it from spreading? Frankly, I don't know.
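For concreteness, here is a minimal sketch of how such a contradiction-matrix lookup works. The two matrix cells below are illustrative placeholders, not the real matrix (which has 39×39 cells); the principle names are from the canonical list of 40.

```python
# Illustrative fragment of the 40 Inventive Principles (real names, tiny subset).
INVENTIVE_PRINCIPLES = {
    1: "Segmentation",
    15: "Dynamism",
    35: "Parameter changes",
    40: "Composite materials",
}

# Toy contradiction matrix: (improving parameter, worsening parameter)
# -> suggested principle numbers. These cells are made up for illustration.
CONTRADICTION_MATRIX = {
    ("weight of moving object", "strength"): [1, 40],
    ("speed", "reliability"): [15, 35],
}

def suggest(improving, worsening):
    """Look up the trade-off cell and return the suggested principles."""
    numbers = CONTRADICTION_MATRIX.get((improving, worsening), [])
    return [INVENTIVE_PRINCIPLES[n] for n in numbers]

print(suggest("speed", "reliability"))  # -> ['Dynamism', 'Parameter changes']
```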
In statistics, there is the well-known bias–variance tradeoff, which affects a wide range of situations. The trade-off between a model's goodness of fit and its complexity (the 4th item in your list) is somewhat related (though not identical) to it.
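A quick simulation makes the trade-off concrete (my own sketch; the target function, noise level, and degrees are arbitrary choices): fitting noisy samples of a fixed curve with polynomials, low-degree fits have high bias and low variance, and high-degree fits the reverse.

```python
import numpy as np

rng = np.random.default_rng(0)

def bias_variance(degree, n_trials=200, n_points=20, noise=0.3):
    """Fit many noisy samples of sin(2*pi*x) with a polynomial of the given
    degree; measure the average squared bias and variance of the fits."""
    x = np.linspace(0, 1, n_points)
    truth = np.sin(2 * np.pi * x)
    preds = np.empty((n_trials, n_points))
    for i in range(n_trials):
        y = truth + rng.normal(scale=noise, size=n_points)
        preds[i] = np.polyval(np.polyfit(x, y, degree), x)
    bias2 = ((preds.mean(axis=0) - truth) ** 2).mean()
    return bias2, preds.var(axis=0).mean()

for degree in (1, 3, 9):
    b2, v = bias_variance(degree)
    print(f"degree {degree}: bias^2 = {b2:.4f}, variance = {v:.4f}")
```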
A general pattern: some trade-offs are due to Berkson's paradox, because the situation we face is sometimes "prefiltered" by the past, so we can observe negative correlations between variables even though they are not intrinsically negatively correlated. A special case: if we can do both X and Y, but both require resources and we have a fixed amount of resources, then deciding how much X and how much Y to do becomes a trade-off.
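The Berkson effect is easy to see in a quick simulation (my own sketch; the traits and threshold are arbitrary): two independent variables become negatively correlated once the population is filtered on their sum.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
talent = rng.normal(size=n)
looks = rng.normal(size=n)   # independent of talent by construction

print(np.corrcoef(talent, looks)[0, 1])   # ~0: no intrinsic correlation

# "Prefilter" the population: we only observe people above a combined bar
# (say, those who became famous).
famous = talent + looks > 2.0
print(np.corrcoef(talent[famous], looks[famous])[0, 1])  # clearly negative
```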
A closely related example is the trade-off between quantity and quality, e.g. r/K selection theory.
Replies from: snarles, Gunnar_Zarncke, Andy_McKenzie
↑ comment by snarles · 2014-11-03T16:33:22.770Z · LW(p) · GW(p)
A different kind of "bias-variance" tradeoff occurs in policy-making. Take college applications. One school might admit students based only on the SAT score. Another admits students based on scores, activities, essays, etc. The first school might reject a lot of exceptional people who just happen to be bad at test-taking. The second school tries to make sure it accepts those kinds of exceptional people, but in the process it will admit more unexceptional people with bad test scores who somehow manage to impress the admissions committee. The first school is "biased" against exceptional students with bad test scores; the second school has more "variance" because, in attempting to capture the students the first school would wrongly reject, it admits more low-quality students as well. You might interpret this particular example as "sensitivity vs specificity."
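A small simulation of this (my own sketch; the noise levels and cutoffs are arbitrary assumptions): school A admits on a single noisy test score, school B also admits anyone who clears the bar on a second, equally noisy "holistic" signal. School B catches more exceptional students but admits a weaker pool overall.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
ability = rng.normal(size=n)                # true quality (unobserved)
score = ability + rng.normal(size=n)        # noisy SAT-like signal
holistic = ability + rng.normal(size=n)     # noisy essays/activities signal

admit_a = score > 1.0                       # school A: scores only
admit_b = (score > 1.0) | (holistic > 1.0)  # school B: either signal suffices

exceptional = ability > 1.0
for name, admit in [("scores only", admit_a), ("scores+holistic", admit_b)]:
    caught = (admit & exceptional).sum() / exceptional.sum()  # "sensitivity"
    print(f"{name}: admits {admit.mean():.1%} of applicants, "
          f"catches {caught:.1%} of exceptional students, "
          f"mean admitted ability {ability[admit].mean():.2f}")
```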
Another example would be a policy for splitting tips at a restaurant. One policy would be to have all the staff split the tips equally. Another would be to have no splitting of tips at all. Splitting tips incurs bias; not splitting incurs variance. An intermediate policy would be to have each staff member keep half of their own tips and contribute the other half to be redistributed.
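Sticking with the bias-variance framing, a toy simulation (my own sketch; staff size and earning rates are made up): pooling tips pulls each person's income away from what they individually earned (bias) but smooths night-to-night fluctuations (variance), and partial pooling sits in between.

```python
import numpy as np

rng = np.random.default_rng(0)
n_staff, n_nights = 8, 1000
rates = rng.uniform(50, 150, size=n_staff)   # each waiter's true nightly rate
tips = rng.poisson(lam=rates, size=(n_nights, n_staff)).astype(float)

def nightly_income(tips, pool_fraction):
    """Each waiter keeps (1 - pool_fraction) of their own tips; the rest is
    pooled and redistributed equally among the staff."""
    kept = (1 - pool_fraction) * tips
    pooled = pool_fraction * tips.sum(axis=1, keepdims=True) / tips.shape[1]
    return kept + pooled

for f in (0.0, 0.5, 1.0):
    inc = nightly_income(tips, f)
    bias = np.abs(inc.mean(axis=0) - rates).mean()  # drift from own earnings
    var = inc.var(axis=0).mean()                    # night-to-night noise
    print(f"pool {f:.0%}: mean |bias| = {bias:6.2f}, mean variance = {var:7.2f}")
```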
Replies from: Andy_McKenzie
↑ comment by Andy_McKenzie · 2014-11-04T02:11:42.790Z · LW(p) · GW(p)
Thanks, these are great examples. How did you come up with them?
↑ comment by Gunnar_Zarncke · 2014-11-03T15:52:50.896Z · LW(p) · GW(p)
I think there are multiple reasons TRIZ is not adopted as much as its benefits might imply, its Russian origin being only one of them. It is also heavily paywalled, and most of the low-hanging fruit in technological innovation has already been taken, so the value of picking what remains must be balanced against all the other efforts of a technological business.
↑ comment by Andy_McKenzie · 2014-11-04T02:10:11.549Z · LW(p) · GW(p)
You should look into TRIZ, which seems to be related to your idea
I hadn't heard of this and it's interesting. Thanks. In what context did you find it?
In statistics, there is the well-known bias–variance tradeoff, which affects a wide range of situations
I have it subsumed under Precision vs Simplicity; I'll make this explicit in the next iteration.
A general pattern: some trade-offs are due to Berkson's paradox, because the situation we face is sometimes "prefiltered" by the past, so we can observe negative correlations between variables even though they are not intrinsically negatively correlated
Thanks, this is an interesting point, and one that I have thought about, see here.
r/K selection theory and quality vs quantity
I have these subsumed under Surely Some vs Maybe More; I'll make this explicit in the next iteration.
comment by Gavin · 2014-11-03T21:06:16.569Z · LW(p) · GW(p)
The most standard business tradeoff is Cheap vs Fast vs Good, of which you're typically only supposed to be able to get two.
Replies from: Andy_McKenzie
↑ comment by Andy_McKenzie · 2014-11-04T02:19:13.134Z · LW(p) · GW(p)
Yeah, I find these three-pronged trade-offs fairly interesting. I think it's wrong to say "choose two"; for example, you could always choose to be somewhere in the middle if you consider the space to be a triangle.
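A trivial way to picture "somewhere in the middle" (my own sketch): treat a position in the Cheap/Fast/Good triangle as three weights summing to 1, so the corners and edges are just special cases.

```python
def mix(cheap, fast, good):
    """Normalize three priorities into barycentric coordinates in the triangle."""
    total = cheap + fast + good
    return {"cheap": cheap / total, "fast": fast / total, "good": good / total}

print(mix(1, 1, 1))  # the centre: a bit of each, rather than "choose two"
print(mix(1, 1, 0))  # an edge of the triangle: the classic "cheap and fast"
```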
Do you know the word for a three-pronged trade-off?
Replies from: Gavin
comment by Alejandro1 · 2014-11-03T05:29:46.908Z · LW(p) · GW(p)
Replies from: ChristianKl
↑ comment by ChristianKl · 2014-11-03T10:26:48.206Z · LW(p) · GW(p)
I think that's Sensitivity vs Specificity
Replies from: Froolow
↑ comment by Froolow · 2014-11-03T12:26:17.087Z · LW(p) · GW(p)
They are slightly different, but in practical terms they describe the same error: sensitivity and specificity are properties of a test, while Type I and II errors are properties of a system. Both are basically saying, "Our test is not perfectly accurate, so if we want to catch more people with a disease, we need to misdiagnose more people."
To illustrate the distinction, consider a test which is 90% sensitive and 90% specific in a population of 100 where a disease has a 50% prevalence. This means 50 people have the disease, of whom the test will identify 45 as having the problem (90% sensitivity). 50 people are free of the disease, of whom the test will correctly clear 45 (90% specificity). So if you are diagnosed, the probability of the diagnosis being a Type I error is 5/50 = 10% (if given the all-clear, the same logic applies for a Type II error). You derive this from the number of people who were incorrectly told they have the disease, divided by the total number of people who were told they have the disease (rightly or wrongly).
But if the disease prevalence changes due to demographic pressure to 10%, then 10 people have the disease, of whom 9 are diagnosed, and 90 people are disease-free, of whom 81 are given the all-clear. This means the probabilities of the different 'Type' errors change dramatically: now 9/18 = 50% for a Type I error and 1/82 ≈ 1.2% for a Type II error. But the sensitivity and specificity of the test are completely unchanged.
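This arithmetic generalizes directly; a short sketch reproducing the numbers above (keeping the comment's loose "Type I/II" labels for the post-test error probabilities):

```python
def post_test_error_rates(sensitivity, specificity, prevalence):
    """P(healthy | positive result) and P(diseased | negative result),
    computed per unit population."""
    sick, healthy = prevalence, 1 - prevalence
    true_pos = sensitivity * sick
    false_pos = (1 - specificity) * healthy
    true_neg = specificity * healthy
    false_neg = (1 - sensitivity) * sick
    return (false_pos / (true_pos + false_pos),
            false_neg / (true_neg + false_neg))

for prev in (0.5, 0.1):
    t1, t2 = post_test_error_rates(0.9, 0.9, prev)
    print(f"prevalence {prev:.0%}: P(Type I | positive) = {t1:.1%}, "
          f"P(Type II | negative) = {t2:.1%}")
# prevalence 50%: 10.0% and 10.0%; prevalence 10%: 50.0% and 1.2%
```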
comment by Bundle_Gerbe · 2014-11-05T10:35:05.647Z · LW(p) · GW(p)
How about:
Specialization of Labor vs. Transaction/Communication Costs: a trade-off between having a task split between multiple people/organizations vs. done by a single person. Generalism vs. Specialization might be a more succinct way to put it.
Also, another pair that has a close connection is 3 and 7. Exploration is a flexible strategy, since it leaves open resources to exploit better opportunities that turn up, while exploitation gains from commitment.
Replies from: MaximumLiberty
↑ comment by MaximumLiberty · 2014-11-06T19:51:31.227Z · LW(p) · GW(p)
I was thinking of build vs buy, or insource vs outsource, as being much like the first point.
comment by [deleted] · 2014-11-03T14:45:42.227Z · LW(p) · GW(p)
I believe Signalling vs Actually getting your job done is extremely pervasive and not accounted for by your list.
Replies from: Andy_McKenzie
↑ comment by Andy_McKenzie · 2014-11-04T02:13:13.122Z · LW(p) · GW(p)
I think that Signaling vs Actually doing things would map to Saving vs Savoring, in the typical case where signaling gets you status in the short run ("savoring") and signaling less shifts the world more towards your values ("saving").
Replies from: Lumifer, Luke_A_Somers
↑ comment by Luke_A_Somers · 2014-11-04T18:53:36.344Z · LW(p) · GW(p)
Those two really need a better name. Profit-Taking vs Investment?
Replies from: Lumifer
↑ comment by Lumifer · 2014-11-04T19:04:16.304Z · LW(p) · GW(p)
One marshmallow vs. Two marshmallows :-)
Economists would call it consumption vs. investment.
The trade-off is basically about delayed gratification.
comment by DanielLC · 2014-11-04T01:40:24.987Z · LW(p) · GW(p)
1) Efficiency vs Unpredictability
Why is unpredictability a good thing? Is this something where you're playing a game with another agent and you don't want him to predict what you'll do?
2) Speed vs Accuracy
I don't think those are opposites. They'll occur in tradeoffs, but only in the sense that any two things will. There's also speed vs cost, or speed vs documentation, etc.
5) Surely Some vs Maybe More
I believe this is normally stated as "Risk vs Reward".
Replies from: Andy_McKenzie
↑ comment by Andy_McKenzie · 2014-11-04T02:14:54.687Z · LW(p) · GW(p)
Is this something where you're playing a game with another agent and you don't want him to predict what you'll do?
Yes, exactly. Kind of narrow, but I just find it really cool. And, lots of things decision makers have to do can be formulated as games with other agents.
I don't think those are opposites. They'll occur in tradeoffs, but only in the sense that any two things will. There's also speed vs cost, or speed vs documentation, etc.
I agree that they might not be "intrinsic", but if they're very commonly inversely correlated in situations we care about (which I think they are), then considering it as a trade-off can still be useful. See here for more.
"Risk vs Reward"
Thanks. I'll make this explicit next time. I like to have an upside connoted by both words of the trade-off, which is why I didn't like risk vs reward.
Replies from: DanielLC
↑ comment by DanielLC · 2014-11-04T02:51:24.926Z · LW(p) · GW(p)
Come to think of it, "Risk vs Reward" is oddly-worded. It's both vs neither. If you had to choose between the two, you'd pick reward every time.
Replies from: Lumifer
↑ comment by Lumifer · 2014-11-04T02:55:31.775Z · LW(p) · GW(p)
Come to think of it, "Risk vs Reward" is oddly-worded.
That's because there ain't no such thing :-) there is no "vs" there.
People usually talk about risk/reward ratios. The trade-off is between a larger reward with more uncertainty and a lower reward with less uncertainty. Your preferences in this trade-off are often described as risk aversion.
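A minimal sketch of risk aversion (assuming a square-root utility function, which is my choice, not anything from the thread): an agent with concave utility can prefer a certain $50 to a 50/50 shot at $120, even though the gamble has the higher expected value.

```python
import math

def expected_utility(lottery, utility=math.sqrt):
    """Expected utility of a lottery given as (probability, payoff) pairs."""
    return sum(p * utility(x) for p, x in lottery)

safe = [(1.0, 50)]              # guaranteed $50
risky = [(0.5, 120), (0.5, 0)]  # expected value $60, but more uncertainty

for name, lottery in [("safe", safe), ("risky", risky)]:
    ev = sum(p * x for p, x in lottery)
    eu = expected_utility(lottery)
    print(f"{name}: expected value {ev:.0f}, expected utility {eu:.2f}")
# The risk-averse agent takes the safe option despite its lower expected value.
```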
comment by JosephY · 2014-11-04T00:08:38.385Z · LW(p) · GW(p)
Constant, predictable gains vs. Black Swans
Replies from: ChristianKl, Andy_McKenzie
↑ comment by ChristianKl · 2014-11-05T19:40:00.961Z · LW(p) · GW(p)
A lot of gains that aren't constant but variable still have nothing to do with Black Swans.
↑ comment by Andy_McKenzie · 2014-11-04T02:16:34.311Z · LW(p) · GW(p)
I think this is Surely Some vs Maybe More -- right? If so, it's helpful to recall the black swan meme and map it to this, thanks.
comment by Vaniver · 2014-11-03T21:20:05.647Z · LW(p) · GW(p)
It seems to me that 6 and 11 are pretty close; what's the difference between them that you wanted to highlight?
Replies from: Andy_McKenzie
↑ comment by Andy_McKenzie · 2014-11-04T02:17:49.283Z · LW(p) · GW(p)
Good find and I agree that they are pretty similar. As I write in the text,
This trade-off [#11] can be “constructed” by considering the interaction of Some Now vs More Later [#6] and Loyalty vs Universality, where “savoring” is loyalty to oneself in the short run and “saving” is universality over the long run. However, this is a common enough interaction that it seems canon-worthy.
Do you agree?
Replies from: Vaniver
↑ comment by Vaniver · 2014-11-04T14:32:10.851Z · LW(p) · GW(p)
I interpreted "saving" as "saving for yourself," like putting money into a bank account so you'll have it later rather than spending it now. Universality over the long run seems to me more like 'donating' or 'investing' or... in Triumphs of Experience, this gets called "guardianship" vs. "hoarding," but I think that's still framing both in long-term terms, and it's not obvious that "guardianship" means "developing a community using your resources" (the intended meaning) rather than "protecting things."
Replies from: Andy_McKenzie
↑ comment by Andy_McKenzie · 2014-11-05T05:06:03.175Z · LW(p) · GW(p)
Interesting, thanks for explaining how you interpreted it. I interpreted it as "saving the world." I'll try to see what others think as well.