Why do all-out attacks actually work?

post by lc · 2020-06-12T20:33:53.138Z · LW · GW · 9 comments


A surprising number of rationalists agree that people can often do what otherwise seems impossible if they try really, really hard. It's something I wouldn't have expected a community of rationalists to believe at the outset - on the surface it seems like pure magical thinking - but I think it's nevertheless true, because I see a lot of evidence for it. Much startup advice is a repetition of this point: if you want to succeed you can't give up, you have to look for solutions where you don't initially think there are any, optimists do better than pessimists, etc. And anecdotally, I've found that I have a lot of trouble accurately assessing the limits of my own abilities. I'm usually wrong - not only wrong in the sense that I'm generally inaccurate, but wrong in a specific direction, i.e. I consistently underestimate what I will accomplish when I invoke my inner anime character. 

That this is true of most people should be pretty jarring. It's not obvious why humans should have a bias toward underestimating themselves in this way. On the other hand, you could also say that this entire phenomenon is just confirmation and sampling bias. After all, I only ever really notice errors in one direction - when I think I can't do something and then find out I can. Most startups do fail, and the ones that succeed are subject to survivorship bias. Intuitively, however, it feels like this is a more common experience for me than it should be, or at least that my errors in this vein are fairly conspicuous examples of faulty reasoning. It's not that I can't think of things I can't do, per se. It would be difficult for me to take a sudden tumble into the sun and survive with no prep time. It's that when I am wrong, the errors seem particularly egregious. I don't know that everyone has this experience, but if it's shared by most people, I think I have an explanation for it. 

One insight I have is that when doing the impossible there aren't really just two modes, all-out attack and normal attempts. What I've found is that there's actually a continuum of impossibility goggles. When I was in middle school and my parents wanted me to get to baseball practice, I would, in full honesty, propose the most trivial obstacles as evidence for why that was not going to happen. It was raining, for instance. Or our coach had cancelled and the team wasn't going to be there. I don't think there was a single circumstance, really, where I could not have gone out to the baseball field and just hit balls off the tee by myself, or balls pitched by my dad. I just didn't want to do it. The thing is, sometimes in the moment I really did think these things made it "impossible". If you had prompted me then and asked whether these things were physically impossible, I might have said no, but that didn't stop me from thinking in the moment that they were entirely impractical. 

The setting of these exceptionally bad assessments suggests a motivation - I had a third party, my parents, who needed to be convinced that they shouldn't try to send me out to the park to hit baseballs. A common theme of Robin Hanson's The Elephant in the Brain is that we sometimes lie to ourselves to make it easier to lie to others, and I think that goes a long way toward explaining why this occurs. When someone asks you to do something you don't want to do, saying "I might if my daughter's life depended on it, but I won't do it as things stand" is a much more frustrating objection than "I can't do it because of x, y, and z obstacles." Socially, it's more agreeable to say "I can't get this done for you because of these bureaucratic checks" than "I could break some laws and call a dozen different offices until it happened, but that's a lot of effort for a random favor." Before we choose to solve problems, we scan the solution space to see if solving them is plausible. A lack of motivation makes us scan a narrower space and compels us to give up more quickly than we otherwise might, so that we can honestly claim to others that we can't see how it could be done. Obviously, sometimes we do consciously lie to other people about these things, but sometimes we do not, and just give up.

Even when there isn't a third party who has to be convinced, or who has a personal interest in you completing the task you think is impossible, not completing it may reflect badly on you. Which sounds better: "my startup could have succeeded, but I didn't have Elon Musk-tier drive", or "my startup would have succeeded if not for those meddling bureaucrats"? If you take on a public undertaking and lose, explaining that it was impossible to accomplish saves some face. It preserves your image as a respectable, formidable person who just happened to attempt something no other person could have done in your shoes, or under the resource constraints you faced. Explaining to people that maximum effort might have done the job is like explaining to people during an argument that you're not sure they're wrong, you're just almost positive. The content of your words is true, but what you signal to the other person is different. Non-rationalists will take your "almost positive" as a sign of confusion or internal doubt rather than pedantry, just as they will take your "maximum effort might have worked" as a sign that you couldn't really muster the effort. And this goes for things you haven't attempted as well as things you have; people judge you on your ability to accomplish things you haven't actually tried, too. If you convincingly mark a large part of the possibility space off as "impossible", then you can explain your inability to move forward as a product of the problem, rather than a shortage of internal willpower.

In the worst cases, this can be an unconscious effort to convince other people not to try. The only thing worse than publicly failing to solve a technical problem at your engineering company is failing and then having someone else, or another team, come in and clean house. I wonder if the oft-cited quote about grey-haired scientists declaring something possible or impossible has something to do with this. If you are an eminent scientist who has attempted a problem, or worked within a closely related field, for your entire life, perhaps there are some emotional reasons to suggest at the end of your career that the target is unassailable. If you, the prestigious Nobelist, couldn't do it, who is any future researcher or engineer to say they can?

9 comments

Comments sorted by top scores.

comment by johnswentworth · 2020-06-12T22:21:15.103Z · LW(p) · GW(p)

My current model of this centers on status, similar to your last paragraph. I'll flesh it out a bit more.

Suppose I build a net-power-generating fusion reactor in my garage. In terms of status, this reflects very badly on an awful lot of high-status physicists and engineers who've sunk massive amounts of effort and resources into the same goal and completely failed to achieve it. This applies even without actually building the thing: if I claim that I can build a net-power-generating fusion reactor in my garage, then that's a status grab (whether I intend it that way or not); I'm claiming that I can run circles around all those people who've tried and failed. People react to that the way people usually react to status grabs: they slap it down. After all, if they don't slap it down, then the status-grab "succeeds" - it becomes more plausible in the eyes of third parties that I actually could build the thing, which in turn lowers the status of all the people who failed to do so.

Now, flip this back around: if I want to avoid being perceived as making a status grab (and therefore being slapped down), then I need to avoid being perceived as claiming to be able to do anything really big-sounding. And, as you mention, the easiest way to avoid the perception of a grand claim is to honestly believe that I can't do the grand thing.

From the inside, this means that we try to predict what we'll be able to do via an algorithm like:

  • How much social status would I have if I did X?
  • How much social status do I have?
  • If the first number is much larger than the second, then I probably can't do the thing.

Presumably this is not an especially accurate algorithm, but it is a great algorithm for avoiding conflict. It avoids making claims (even unintentionally) for which we will be slapped down.
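
A minimal toy sketch of that heuristic in Python - the tasks, status numbers, and slack threshold below are invented purely for illustration:

```python
# Toy status scores - the numbers are made up for illustration only.
STATUS_IF_ACCOMPLISHED = {
    "hit balls off a tee": 1,
    "ship a small side project": 3,
    "build a net-power fusion reactor in my garage": 1000,
}

def can_probably_do(task, my_current_status, slack=2.0):
    """Naive model of the heuristic above: predict "I can't" whenever
    success would confer far more status than I currently hold,
    regardless of how hard the task actually is."""
    status_if_done = STATUS_IF_ACCOMPLISHED[task]
    return status_if_done <= my_current_status * slack

# With a current status of 5, the model calls the mundane tasks doable
# and the fusion reactor "impossible" - purely on status, not physics.
for task in STATUS_IF_ACCOMPLISHED:
    print(task, "->", can_probably_do(task, my_current_status=5))
```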

I'm pretty sure Yudkowsky sketched a model like this in Inadequate Equilibria [? · GW], which is probably where I got it from.

Replies from: jmh, rudi-c
comment by jmh · 2020-06-13T12:45:33.526Z · LW(p) · GW(p)

But why doesn't the all-out attack work against status?

This model, when we're only talking about status, seems like another reflection of the "I can't" view, so no commitment to make the effort is made.

I assume your "slap down" is not merely those with status ridiculing the idea or attempting to point out flaws in the theory or design, but rather applying economic, political, and perhaps even raw force to stop you. In that case the issue doesn't seem to be status (though clearly status might indicate a level or location of risk). The issue is the ability of others with an interest in stopping you from achieving that goal. It seems to me that the decision process there would be performing a calculation on a different set of inputs than status.

Replies from: johnswentworth
comment by johnswentworth · 2020-06-13T17:10:44.706Z · LW(p) · GW(p)

But why doesn't the all out attack work against status?

I think it often does. All-out attacks do actually work quite often.

comment by Rudi C (rudi-c) · 2020-06-13T09:50:42.849Z · LW(p) · GW(p)

What are some examples of this algorithm being inaccurate? It seems awfully like the efficient market hypothesis to me. (I don’t particularly believe in EMH, but it’s an accurate enough heuristic.)

Replies from: johnswentworth, Viliam
comment by johnswentworth · 2020-06-13T17:08:41.698Z · LW(p) · GW(p)

In principle I agree with Viliam, though often these situations are sufficiently unlike markets that thinking of them in EMH terms will lead intuitions astray. So I'll emphasize some other aspects (though it's still useful to consider how the aspects below generalize to other EMH arguments).

Situations where all-out attacks work are usually situations where people nominally trying to do the thing are not actually trying to do the thing. This is often for typical Inadequate Equilibria reasons - i.e. people are rewarded for looking like they're making effort, rather than for success, because it's often a lot easier to verify that people look-like-they're-making-effort than that they're actually making progress.

I think this happens a lot more in everyday life than people realize or care to admit: employers in many areas will continue to employ people without complaint as long as it looks like they're trying to do The Thing, even if The Thing doesn't get done very quickly or very well - there just needs to be a plausible-sounding argument that The Thing is more difficult than it looks. (I've worked at several tech startups, and this incentive structure applied to basically everyone.) Whether consciously or unconsciously, a natural result is that employees don't really put forth their full effort to finish things as quickly and as well as possible; there's no way for the employer to know that The Thing could have been done faster or better.

(Thought experiment: would you do your job differently if you were going to capture the value from the product for yourself, and wouldn't get paid anything besides that?)

The whole status-attack problem slots neatly into this sort of scenario: if I come along and say that I can do The Thing in half the time and do a better job of it too, then obviously that's going to come across as an attack on whoever's busy looking-like-they're-doing The Thing.

comment by Viliam · 2020-06-13T14:29:39.830Z · LW(p) · GW(p)

It seems awfully like the efficient market hypothesis to me.

Then the reasoning wouldn't apply when the "market" is not efficient. For example, when something cannot be bought or sold, when the information necessary to determine the price is not publicly available, when the opportunity to buy or sell is limited to a few people (so the people with superior knowledge of the market situation cannot participate), or when the people who buy or sell have other priorities stronger than being right (for example, a tiny financial profit from being right would be outweighed by a greater status loss).

comment by Viliam · 2020-06-13T15:14:09.505Z · LW(p) · GW(p)

I think there are multiple factors behind people systematically not trying hard enough:

Status / power. People who spend extraordinary amounts of work and achieve extraordinary results can be perceived as trying to get power, and can be punished to teach them their place.

Incentives are different than in our evolutionary past. Spending 100% of your energy on task A is more risky than spending 20% of your energy on tasks A, B, C, D, and E. The former is "all or nothing", the latter is "likely partial success in some tasks". The former is usually a bad strategy if "nothing" means that you die, and "all" will probably be taken from you unless you have the power to defend it. On the other hand, in science or startups, giving it only your 20% is almost a guaranteed failure, but 100% has a tiny chance of huge success, and hopefully you have some safety net if you fail.

The world is big, and although it contains many people with more skills and resources than you have, there are even more different things they could work on. Choose something that is not everyone's top priority, and there is a chance the competitors you fear will not even show up. (Whatever you do, Elon Musk could probably do it a hundred times better, but he is already busy doing other things, so simply ignore him.) This is counter-intuitive, because in a less sophisticated society there were fewer things to do, and therefore great competition at most things you would think about. (Don't start a company if your only idea is: "Facebook, but owned by me".)

Even freedom and self-ownership seem new from the evolutionary perspective. If you are a slave and show the ability to work hard and achieve great things, your master will try to squeeze more out of you. "From each according to his ability, to each according to his needs" also makes it strategically important to hide your abilities. Whether harder work -- even when it bears fruit -- will lead to greater rewards is far from obvious. Even in capitalism, the person who succeeds in inventing something is not necessarily the one who will profit from it.

This feels a bit repetitive, and could be reduced to two things:

1) Whether the situation is such that spending 100% of energy on one task will on average create more utility than splitting the energy among multiple tasks. Assume you choose a task that is important (has a chance to generate lots of utility), but not in everyone's focus.

2) The hard work is going to be all yours, but how much of the created utility will you capture? Will it at least pay for your work better than doing the default thing?

To use an example from Inadequate Equilibria, even if we assume that Eliezer's story about solving the problem with seasonal depression is a correct and complete description of the situation, I would still assume that someone else will get the scientific credit for solving the problem, and if it becomes a standard solution, someone else will make money from it. Which would explain why most people were not trying so hard to solve this problem -- there was nothing in it for them. Eliezer had the right skills to solve the problem, and the personal reward made it worthwhile for him; but for most people this is not the case.

comment by gbear605 · 2020-06-21T19:42:47.587Z · LW(p) · GW(p)

I've found that I can often overcome this by asking myself "if this were possible, how would that have happened?" With your baseball example, if you had a time machine, looked into the future, and saw that you were practicing baseball, you could probably come up with ways to accomplish that once you believed it was possible.

comment by lisperati · 2020-06-14T21:43:21.463Z · LW(p) · GW(p)

I think two completely different hypotheses for this phenomenon are more likely:

Hypothesis #1: It can be a dog whistle between investors that an entrepreneur will "stop at nothing to succeed", which can include borderline unethical behavior. Investors may prefer to invest in companies that act this way, but they don't want to overtly condone unethical behavior, and instead use coded language to avoid personal liability. The poster child for this is probably Travis Kalanick, the CEO of Uber, who reportedly had no problem booking fake rides on competitors' ride-sharing platforms in order to gain an advantage. I bet early Uber investors said stuff like "I had lunch with Travis and we should invest in that guy! He's so driven, he has such a singular focus to succeed!"

Hypothesis #2: Solving valuable business problems is nowadays extremely difficult, because the low-hanging fruit has already been picked. Therefore, it's very easy to run out of money before such problems are solved. This means that the payoff for any effort is extremely non-linear, and an all-out attack is far more likely to succeed before the funding dries up. According to this hypothesis, it may paradoxically NOT be beneficial to do an all-out attack if you have enough funding (but typically people don't have this luxury). If this hypothesis is true, a person with enough $$ and time may want to "put their eggs in several baskets" and have a greater chance of success through diversification (though it would likely take longer "clock time" for any project to succeed). Certain types of artists/creators may fall into this zone, and hence many such creative people would probably not benefit from the "all in" approach - I actually developed a productivity system for such people (http://www.lisperati.com/#!A_Productivity_System_For_Creators) which is the antithesis of the "all in" startup mentality.