Leading The Parade
post by johnswentworth · 2024-01-31T22:39:56.499Z · LW · GW · 31 comments
Background
Terminology: Counterfactual Impact vs “Leading The Parade”
Y’know how a parade or marching band has a person who walks in front waving a fancy-looking stick up and down? Like this guy:
The classic 1978 comedy Animal House features a great scene in which a prankster steals the stick, and then leads the marching band off the main road and down a dead-end alley.
In the context of the movie, it’s hilarious. It’s also, presumably, not at all how parades actually work these days. If you happen to be “leading” a parade, and you go wandering off down a side alley, then (I claim) those following behind will be briefly confused, then ignore you and continue along the parade route. The parade leader may appear to be “leading”, but they do not have any counterfactual impact on the route taken by everyone else; the “leader” is just walking slightly ahead.
(Note that I have not personally tested this claim, and I am eager for empirical evidence from anyone who has, preferably with video.)
A lot of questions about how to influence the world, or how to allocate credit/blame to produce useful incentives, hinge on whether people in various positions have counterfactual impact or are “just leading the parade”.
Examples
Research
I’m a researcher. Even assuming my research is “successful” (i.e. I solve the problems I’m trying to solve and/or discover and solve even better problems), even assuming my work ends up adopted and deployed in practice, to what extent is my impact counterfactual? Am I just doing things which other people would have done anyway, but maybe slightly ahead of them? And for historical researchers, how can I tell whether their impact was counterfactual, in order to build my priors?
Looking at historical examples, there are at least some cases where very famous work done by researchers was clearly not counterfactual. Newton’s development of calculus is one such example: there was simultaneous discovery by Leibniz, therefore calculus clearly would have been figured out around the same time even without Newton.
On the other end of the spectrum, Shannon’s development of information theory is my go-to example of research which was probably not just leading the parade. There was no simultaneous discovery, as far as I know. The main prior research was by Nyquist and Hartley about 20 years earlier - so for at least two decades the foundations Shannon built on were there, yet nobody else made significant progress toward the core ideas of information theory in those 20 years. There wasn’t any qualitatively new demand for Shannon’s results, or any key new data or tool which unlocked the work, compared to 20 years earlier. And qualitatively, the gap between Shannon’s discoveries and Nyquist/Hartley seems quite wide: Shannon’s theorems on the fungibility of information both pose and answer a whole new challenge compared to the earlier work. So that all suggests Shannon was not just leading the parade; it would likely have taken decades for someone else to figure out the core ideas of information theory in his absence.
Politics and Activism
Imagine I’m a politician or activist pushing some policy or social change. Even assuming my preferred changes come to pass, to what extent is my impact counterfactual?
Looking at historical examples, there are at least some cases where political/activist work was probably not very counterfactual. For instance, as I understand it the abolition of slavery in the late 18th/early 19th century happened in many countries in parallel around broadly the same time, with relatively little coordination between the various efforts. That’s roughly analogous to “simultaneous discovery” in science: mostly-independent simultaneous passing of similar laws in different polities suggests that the impact of particular politicians or activists was not very counterfactual, and the change would likely have happened regardless. Another type of evidence to look for here is policy changes being made at earlier times, but having relatively little real impact - for instance, Wikipedia’s timeline of the abolition of slavery mentions in 1542 “The New Laws ban slave raiding in the Americas and abolish the slavery of natives, but replace it with other systems of forced labor like the repartimiento.”
On the other hand, there are probably cases where political/activist work was counterfactually impactful. While I’m less familiar with the history here, one natural place to look would be polities which had very unusual outcomes relative to other similar polities - think e.g. the East Asian growth miracle, especially Japan and South Korea.
(Aside: at least some economists have been thinking properly about causality and counterfactual impact for a while, so this is something you can probably find literature on especially in the context of the economic effects of policy.)
Business
Business “success” does seem-to-me to typically track counterfactual impact at least somewhat better than scientific “success”, insofar as business success requires outperforming one’s competition. Plenty of people built home computers in the 80’s; Jobs and Wozniak got famous for building better computers. And that continues to Apple later on: Apple’s high status among businesses is mostly about its products being better than their competitors’. Now, we can certainly debate the extent to which that reputation is accurate (I personally am no fan of most Apple products since the early iPod shuffle), but it is at least “about the right thing” - i.e. Apple’s high status among businesses is largely about how well their products compare to the next best. That’s a reasonable proxy for counterfactual impact.
… but there’s still lots of loopholes. For instance, in any “natural monopoly” industry, a business can easily become disproportionately “successful” merely by being first (and thereby grabbing the monopoly). Then we have the same issue as in science: the first mover ends up with high status merely for leading the parade, even if someone else would have done the same thing soon after. Think Facebook or Bell Telephone.
As another example, consider the sort of businesses which produce mediocre products but survive through strong sales teams. Big defense contractors, most large B2B businesses, that sort of thing. There again, business success comes decoupled from counterfactual impact.
(We could also make an economics-101-style argument here. Insofar as markets are efficient, business success should track (positive) counterfactual impact. And we have some standard economic conditions under which markets are efficient: most notably competition and informed consumers. When those conditions fail, business success doesn’t track counterfactual impact so well.)
At a personal level, there’s also the contentious question of how counterfactually impactful the business founder is specifically. The obvious types of evidence relevant here are:
- Does the founder found other comparably-successful businesses?
- Does the company perform much less well after the founder leaves?
Picking some contemporary examples: Elon Musk is a go-to example of someone who has founded multiple ridiculously-successful businesses, and is therefore probably not just leading the parade. (And as an added bonus, those businesses are usually not natural monopolies - e.g. Tesla and SpaceX both had to outcompete entrenched competitors.) Another example: Steve Jobs is arguably someone whose company performed much less well in their absence, and was therefore probably not just a parade-leader.
Status is a Terrible Proxy for Counterfactual Impact
Consider our example above of the invention of calculus by Newton and Leibniz. There was simultaneous discovery, so clearly neither of them had very much counterfactual impact. Even if both of them had done other things instead, calculus would very likely have been discovered shortly after - after all, if two people discovered calculus around the same time, it would be rather surprising if nobody else was on the brink of discovery around that time.
… and yet, Newton and Leibniz are among the most famous, highest-status mathematicians in history. In Leibniz’ case, he’s known almost exclusively for the invention of calculus. So even though Leibniz’ work was very clearly “just leading the parade”, our society has still assigned him very high status for leading that parade.
More generally… the part of history I’m most familiar with is the history of science and invention. And it sure seems like leading the parade is the default for high-status historical scientists. Not the universal default - e.g. there’s still Shannon, as a probable counterexample. But at least the majority of historical “major scientific discoveries” involve some strong evidence that the discovery was not counterfactual. Most often, that evidence is one or both of:
- Prior or simultaneous discovery (often ignored due to lack of demand)
- The discovery coming very shortly after some prerequisite is available, like e.g. some new data or instrument
Of course, there’s still a continuum in “how counterfactual” research was - really the question is not “counterfactual: yes or no?” but rather “how many years would the discovery have been delayed?”. But the main point still stands: it seems to me that a majority (though importantly not all!) scientific discoveries would likely have occurred within a relatively short time, even if their original discoverers had done something else.
I expect this generalizes to politics. In business, I expect that counterfactual impact is instead the norm, especially among relatively high status small-to-medium businesses, but parade-leaders are still a large minority - especially among the very large businesses which tend to be natural monopolies.
A Missing Mood
I’ve been writing this in a relatively neutral tone so far, but this all implies a mood. A mood which is extremely skeptical and unimpressed by mainstream status by default… but conversely more impressed with evidence of counterfactual impact.
If someone is like “Bell Labs used <management technique>, and they were super successful, we should emulate them!” then I’m like… yeah sure, if your goal is to achieve high status by frontrunning the parade. Now, if Shannon specifically were counterfactually downstream of that management technique, then you’d maybe have an argument. But most of Bell Labs’ supposed accomplishments, as far as I can tell, were not very counterfactual.
Look, man, I just don’t really give a shit about who’s leading the parade, except insofar as they’re the noise I’m trying to sort through when looking for signal in history books. I’m aiming to play a different game than that.
On the flip side, insofar as most supposedly-high-impact people were leading the parade, that means high counterfactual impact is even rarer than our high-status-focused instincts make it seem. Therefore, we should pay proportionately more attention to those rare cases when we do find them. It also means that we should give proportionately more credit to relatively-small-seeming counterfactual impact - e.g. making a discovery ten years earlier than it would otherwise have occurred may not sound like much, but if that’s the high end of what’s been achieved historically, then those are key data points to pay attention to!
A Project I’d Like To See
Note again the last sentence of the previous section:
e.g. making a discovery ten years earlier than it would otherwise have occurred may not sound like much, but if that’s the high end of what’s been achieved historically, then those are key data points to pay attention to!
Is that the high end of what’s been achieved historically? I don’t know, because I’ve never seen someone systematically study the question over a wide range of historical cases.
That’s just one question among many which could all be answered with the same project. Here’s what the project would look like:
- Take a big list of discoveries and inventions
- For each of them, look for some standard types of evidence of counterfactual impact or its absence - like e.g. independent simultaneous discovery or a wide/short time-gap between the prerequisites becoming available and the discovery/invention occurring
- Ideally, make an order-of-magnitude estimate of how long each would have been delayed had the discoverer/inventor worked on something else. (Months? Years? Decades?)
- Put it all in a big table and sort it.
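For concreteness, here’s a minimal sketch of what that table-and-sort step might look like. The rows below are illustrative only, loosely based on the two examples from this post; the evidence strings and delay estimates are placeholders, not researched values:

```python
# Illustrative rows only - evidence and delay estimates are placeholders
# based on the examples in this post, not the output of actual research.
discoveries = [
    {"name": "calculus", "who": "Newton & Leibniz",
     "evidence": "simultaneous discovery", "delay_years": 1},
    {"name": "information theory", "who": "Shannon",
     "evidence": "~20-year gap after Nyquist/Hartley", "delay_years": 20},
]

# Sort by estimated counterfactual delay, most-counterfactual first.
for row in sorted(discoveries, key=lambda r: r["delay_years"], reverse=True):
    print(f"{row['name']:20} {row['who']:18} ~{row['delay_years']:>2} yrs ({row['evidence']})")
```

The interesting work, of course, is in filling in hundreds of rows and defending each delay estimate - the table itself is trivial.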
People often say “there’s no obvious recipe for high impact research/invention”, but I’ve never actually seen someone do the work to sort by counterfactual impact. So… I have no idea. It’s the sort of exercise which I’d expect to reveal surprising patterns.
(An aside: I’m sure some of you are thinking “let’s use a large language model to do this!”. Not a bad idea. I did experiment a little, and found that for basically every discovery, GPT-4 says the discovery was not very counterfactual, and usually gives a fairly generic argument with very similar structure - “Simultaneous discovery, or the idea of ‘multiple discovery,’ suggests that scientific breakthroughs often occur independently and almost concurrently by different people”, blah blah blah. It does give different estimates for quantitatively how long discoveries would be delayed if the original discoverers/inventors hadn’t worked on it, and it is a good source of potential prior/simultaneous discoveries to check. But it does very much require checking.)
What If Leading The Parade Grants Counterfactual Impact Later?
People occasionally come up with plans like "I'll lead the parade for a while, thereby accumulating high status. Then, I'll use that high status to counterfactually influence things!". This is one subcategory of a more general class of plans: "I'll chase status for a while, then use that status to counterfactually influence things!". Various versions of this often come from EAs who are planning to get a machine learning PhD or work at one of the big three AI labs.
A proper analysis of such plans would take a whole post on its own, and distract more than I want from the rest of this post. But since it's relatively common among my social circles and adjacent to the post topic, I will drop a few quick takes on this sort of plan.
When someone tells me they're going to optimize for status and then use that status for counterfactual influence, here are thoughts which go through my head:
- "Reeeeeaaaaalllly? You're chasing status for counterfactual influence and not because, say, chasing status is inherently rewarding and familiar and a lot easier than directly tackling hard research problems? I have some True Rejection-style questions for you..."
- "Your stated plan of <getting PhD/working at big three lab> will not actually get you much status. You're following a local status gradient, but it's a local gradient which maxes out with you being a peon. If you want to have nontrivial counterfactual impact via status, then stop being such a wuss and tackle something ambitious enough to get you a lot of status. Go found a startup or something."
- <Mental picture of someone marching along at the front of a parade, then veering off onto a side street looking very official, and the rest of the parade totally ignoring them.>
Summary
Status is a shit proxy for counterfactual impact, especially in science. High-status historical figures largely seem to be leading the parade: if they hadn’t made the discovery or invention, someone else likely would have shortly after. That said, at least some cases were probably counterfactually impactful… but high social status isn’t a very good proxy for identifying those cases.
There are simple kinds of historical evidence we can look for, to tell whether someone was counterfactually impactful or just leading the parade. Simultaneous discovery is the clearest - e.g. since Newton and Leibniz discovered calculus at basically the same time, clearly neither invention was counterfactual, and likely someone else would have figured it out soon after even if both Newton and Leibniz hadn’t. On the other side, when there's a wide gap in time between prerequisites and the discovery/invention, that does suggest high counterfactual impact.
This all adds up to a mood which is very skeptical of mainstream social status: “look, man, I just don’t really give a shit about who’s leading the parade, except insofar as they’re the noise I’m trying to sort through when looking for signal in history books”. Until someone does the work to check which historical discoveries or inventions were counterfactual, and which were just leading the parade, that’s my default attitude.
Comments sorted by top scores.
comment by nim · 2024-01-31T23:32:05.125Z · LW(p) · GW(p)
If you're talking about literal parades -- I lead them annually at a smallish renaissance fair. Turns out that people with the combination of willingness to run around in front of a group looking silly, and enough time anxiety to actually show up to the morning ones, are in short supply.
That parade goes where I put it. There are several possible paths through the faire and I choose which one the group takes, and make the appropriate exaggerated gestures to steer the front of the crowd in that direction, and then the rest follow.
I also play a conspicuous looking instrument in the parade at a small annual local event that we convene a "band" for, as well. Since the instrument is large and obvious, I'm typically shoved to the front of the group as we line up. I'm pretty sure that if I went off script and took the parade out of the gathering's area, they'd probably follow me, because nobody else is quite sure where we're supposed to be going. If I conspired with the other musicians to take the group out of the event, we could almost certainly make that happen. I'm curious how far down the road we could get the dancers following the parade before they realize something is amiss, but also really don't want to be the individual to instigate that sort of experiment.
Back in high school, I did marching band. I think if our leader had been misinformed about where we should go, we would have followed them anyway. That's mostly because marching band has an almost paramilitary obedience theme going on, and can get a bit culty about directors or leaders in my experience. Marching as a group also confers a certain immunity to individual responsibility as long as you're following your orders. There's this confidence that if the leader takes the group off course, that leader will be the only individual who's personally in trouble for the error. The group might get yelled at collectively for having followed, but no one person in the group is any more responsible for the error than any other, except for the leader.
From these experiences, I'd speculate that the reason we don't see literal parades being counterfactually led off course like that on a regular basis is because the dynamic of leading it disincentivizes abusing that power. Being chosen and trusted by a group to lead them in a public setting where any errors you make will be instantly obvious to all onlookers confers a powerful desire to not mess up.
↑ comment by the gears to ascension (lahwran) · 2024-02-04T21:29:17.966Z · LW(p) · GW(p)
perhaps "front of a wave" or something then
comment by aysja · 2024-02-02T21:39:22.152Z · LW(p) · GW(p)
I do think that counterfactual impact is an important thing to track, although two people discovering something at the same time doesn’t seem like especially strong evidence that they were just "leading the parade." It matters how large the set is. I.e., I doubt there were more than ~5 people around Newton’s time who could have come up with calculus. Creating things is just really hard, and I think often a pretty conjunctive set of factors needs to come together to make it happen (some of those are dispositional (ambition, intelligence, etc.), others are more like “was the groundwater there,” and others are like “did they even notice there was something worth doing here in the first place” etc).
Another way to say it is that there’s a reason only two people discovered calculus at the same time, and not tens, or hundreds. Why just two? A similar thing happened with Darwin, where Wallace came up with natural selection around the same time (they actually initially published it together). But having read a bunch about Darwin and that time period I feel fairly confident that they were the only two people “on the scent,” so to speak. Malthus influenced them both, as did living in England when the industrial revolution really took off (capitalism has a “survival of the fittest” vibe), so there was some groundwater there. But it was only these two who took that groundwater and did something powerful with it, and I don’t think there were that many other people around who could have. (One small piece of evidence of that effect: Origin of Species was published a year and a half after their initial publication, and no one else published anything on natural selection within that timespan, even after the initial idea was out there.)
Also, I mostly agree about Shannon being more independent, although I do think that Turing was “on the scent” of information theory as well. E.g., from The Information: “Turing cared about the data that changed the probability: a probability factor, something like the weight of the evidence. He invented a unit he named a ‘ban.’ He found it convenient to use a logarithmic scale, so that bans would be added rather than multiplied. With a base of ten, a ban was the weight of evidence needed to make a fact ten times as likely.” This seems, to me, to veer pretty close to information theory and I think this is fairly common: a few people “on the scent,” i.e., noticing that there’s something interesting to discover somewhere, having the right questions in the first place, etc.—but only one or two who actually put in the right kind of effort to complete the idea.
There’s also something important to me about the opposite problem, which is how to assign blame when “someone else would have done it anyway.” E.g., as far as I can tell, much of Anthropic’s reasoning for why they’re not directly responsible for AI risk is because scaling is inevitable, i.e., that other labs would do it anyway. I don’t agree with them on the object-level claim (i.e., it seems possible to cause regulation to institute a pause), but even if I did, I still want to assign them blame for in fact being the ones taking the risky actions. This feels more true for me the fewer actors there are, i.e., at the point when there are only three big labs I think each of them is significantly contributing to risk, whereas if there were hundreds of leading labs I’d be less upset by any individual one. But there’s still a part of me that feels deontological about it, too—a sense that you’re just really not supposed to take actions that risky, no matter how inculpable you are counterfactually speaking.
Likewise, I have similar feelings about scientific discoveries. The people who did them are in fact the ones who did the work, and that matters to me. It matters more the smaller the set of possible people is, of course, but there’s some level upon which I want to be like “look they did an awesome thing here; it in fact wasn’t other people, and I want to assign them credit for that.” It’s related to a sense I have that doing great work is just really hard and that people perpetually underestimate this difficulty. For instance, people sometimes write off any good Musk has done (e.g., the good for climate change by creating Tesla, etc.) by saying “someone else would have made Tesla anyway” and I have to wonder, “really?” I certainly don’t look at the world and expect to see Teslas popping up everywhere. Likewise, I don’t look at the world and expect to see tons of leading AI labs, nor do I expect to see hundreds of people pushing the envelope on understanding what minds are. Few people try to do great things, and I think the set of people who might have done any particular great thing is often quite small.
↑ comment by kave · 2024-02-03T04:19:14.713Z · LW(p) · GW(p)
Don't forget Edward Blyth for something in the vicinity of groundwater or on the scent.
↑ comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2024-03-22T12:41:34.574Z · LW(p) · GW(p)
Great comment aysja, you hit the mark. Two small comments:
(i) The 'correct', 'mathematically righteous' way to calculate credit is through an elaboration of counterfactual impact: the Shapley value. I believe it captures the things you want from credit that you write here.
(ii) On Turing being on the scent of information theory - I find this quote not that compelling. The idea of information as a logarithmic quantity was important but only a fraction of what Shannon did. In general, I agree with Schmidhuber's assessment that Turing's scientific stature is a little overrated.
A better comparison would probably be Ralph Hartley, who pioneered information-theoretic ideas (see e.g. the Shannon-Hartley theorem). I'm sure you know more about the history here than I do.
I'm certain one could write an entire book about the depth, significance and subtlety of Claude Shannon's work. Perennially underrated.
↑ comment by ryan_greenblatt · 2024-03-24T18:14:22.165Z · LW(p) · GW(p)
Shapley seems like quite an arbitrary choice (why uniform over all coalitions?).
I think the actually mathematically right thing is just EDT/UDT, though this doesn't imply a clear notion of credit. (Maximizing Shapley yields crazy results.)
Unfortunately, I don't think there is a correct notion of credit.
↑ comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2024-03-25T14:00:25.801Z · LW(p) · GW(p)
Averaging over all coalitions seems quite natural to me; it averages out the "incidental, contingent, unfair" factor of who got in what coalition first. But tastes may differ.
Shapley value has many other good properties nailing it down as a canonical way to allocate credit.
Quoting from nunoSempere's article:
The Shapley value is uniquely determined by simple properties.
These properties:
- Property 1: Sum of the values adds up to the total value (Efficiency)
- Property 2: Equal agents have equal value (Symmetry)
- Property 3: Order indifference: it doesn't matter which order you go in (Linearity). Or, in other words, if there are two steps, Value(Step1 + Step2) = Value(Step1) + Value(Step2).
And an extra property:
- Property 4: Null-player (if in every world, adding a person to the world has no impact, the person has no impact). You can either take this as an axiom, or derive it from the first three properties.
In the context of scientific contributions, one might argue that properties 1 & 2 are very natural, even axiomatic, while property 3 is merely very reasonable.
I agree Shapley value per se isn't the answer to all questions of credit. For instance, the Shapley value is not compositional: merging players into a single player doesn't preserve Shapley values.
Nevertheless, I feel it is a very good idea that has many or all properties people want when they talk about a right notion of credit.
- I don't know what you mean by UDT/EDT in this context - I would be super curious if you could elucidate! :)
- What do you mean by maximizing Shapley value gives crazy results? (as I point out above, Shapley value isn't the be all and end all of all questions of credit, and e.g. isn't well-behaved under hierarchical composition of agency)
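For concreteness, the Shapley value being debated in this thread can be computed in a few lines. Here's a minimal sketch; the Newton/Leibniz "simultaneous discovery" game at the bottom is my own illustrative example, not from the thread:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Compute each player's Shapley value, given a characteristic function v
    mapping a frozenset of players to the value that coalition creates.
    Each player's value is their marginal contribution, weighted over all
    coalitions (equivalently, averaged over all orderings of the players)."""
    n = len(players)
    result = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(len(others) + 1):
            for combo in combinations(others, k):
                S = frozenset(combo)
                # Weight = probability that exactly S precedes i in a random ordering.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))
        result[i] = total
    return result

# Toy "simultaneous discovery" game: calculus gets invented (value 1)
# as long as at least one of Newton or Leibniz works on it.
v = lambda S: 1.0 if S else 0.0
print(shapley_values(["Newton", "Leibniz"], v))  # each gets 0.5
```

Note how the toy game captures the post's point: each man's *counterfactual* impact is zero (the other would have discovered calculus anyway), yet the Shapley value still splits the full credit between them.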
comment by Jozdien · 2024-02-01T00:34:01.357Z · LW(p) · GW(p)
… and yet, Newton and Leibniz are among the most famous, highest-status mathematicians in history. In Leibniz’ case, he’s known almost exclusively for the invention of calculus. So even though Leibniz’ work was very clearly “just leading the parade”, our society has still assigned him very high status for leading that parade.
If I am to live in a society that assigns status at all, I would like it to assign status to people who try to solve the hard and important problems that aren't obviously going to be solved otherwise. I want people to, on the margin, do the things that seem like they wouldn't get done otherwise. But it seems plausible that sometimes, when someone tries to do this - and I do mean really tries to do this, and actually puts a heroic effort toward solving something new and important, and actually succeeds...
... someone else solves it too, because you weren't working on something that hard to identify, even if it was very hard to identify in any other sense of the word. Reality doesn't seem that chaotic and humans that diverse for this to not have been the case a few times (though I don't have any examples here).
I wouldn't want those people to not have high status in this world where we're trying at all to assign high status to things for the right reasons. I think they probably chose the right things to work on, and the fact that there were other people who did as well through no way they could have easily known shouldn't count against them. Would Shannon's methodology have been any less meaningful if there were more highly intelligent people with the right mindset in the 20th century? What I want to reward is the approach, the mindset, the actually best reasonable effort to identify counterfactual impact, not the noisier signal. The opposite side of this incentive mechanism is people optimizing too hard for novelty where impact was more obvious. I don't know if Newton and Leibniz are in this category or not, but I sure feel uncertain about it.
I agree with everything else in this post very strongly, thank you for writing it.
↑ comment by johnswentworth · 2024-02-01T00:57:35.318Z · LW(p) · GW(p)
The obvious but mediocre way to assign credit in the Newton-Leibniz case would be to give them each half credit.
The better but complicated version of that would be to estimate how many people would have counterfactually figured out calculus (within some timeframe) and then divide credit by that number. Which would still potentially give a fair bit of credit to Newton and Leibniz, since calculus was very impactful.
That sounds sensible, though admittedly complicated.
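A minimal sketch of that "better but complicated" scheme (the numbers below are illustrative guesses, not estimates from the comment):

```python
def split_credit(total_impact, n_counterfactual_discoverers):
    """Divide the credit for a discovery equally among everyone who would
    plausibly have made it within the relevant timeframe."""
    return total_impact / n_counterfactual_discoverers

# If calculus is worth 100 units of credit and roughly 4 people were on the
# brink of discovering it, Newton and Leibniz each still collect a real share.
print(split_credit(100, 4))  # 25.0
```

The hard part is obviously estimating the denominator, not the arithmetic.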
↑ comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2024-02-02T16:39:23.788Z · LW(p) · GW(p)
This is obviously the correct way to interpret what's happening. At some point the per-person Shapley value becomes small, but I'd guess that the Shapley impact of Newton & Leibniz is substantial for quite a long time.
comment by Donald Hobson (donald-hobson) · 2024-02-13T17:46:13.719Z · LW(p) · GW(p)
I think you are completely overlooking a significant chunk of impact. Suppose that technologies A and B are similar. The techs act as substitutes - say, several different designs of engine or something. And if everyone is using tech X, the accumulated experience makes X the better choice. This gives long-term control of which path tech goes down to whoever got there first. Could electric cars have taken off before petrol if someone else had led that parade?
There are plenty of substances that increase fuel octane, so if someone else had led the parade around a substance that didn't contain lead, a lot of brain damage could have been prevented.
If some non-military group had led nuclear energy, would reactors use thorium instead of uranium?
Replies from: johnswentworth↑ comment by johnswentworth · 2024-02-13T17:54:56.245Z · LW(p) · GW(p)
Yeah, that sure does seem like a way it should be possible to have a lot of counterfactual impact. I'd be curious for any historical examples of people doing that both successfully and intentionally.
comment by Paradiddle (barnaby-crook) · 2024-02-02T09:37:21.311Z · LW(p) · GW(p)
In Leibniz’ case, he’s known almost exclusively for the invention of calculus.
Was this supposed to be a joke (if so, consider me well and truly whooshed)? At any rate, it is most certainly not the case. Leibniz is known for a great many things (both within and without mathematics) as can be seen from a cursory glance at his Wikipedia page.
Replies from: ryan_b
comment by abramdemski · 2024-02-01T18:03:30.053Z · LW(p) · GW(p)
I am thinking of this as a noise-reducing modification to the loss function, similar to using model-based rather than model-free learning (which, if done well, rewards/punishes a policy based on the average reward/punishment it would have gotten over many steps).
If science were incentivized via prediction market (and assuming scientists can make sizable bets by taking out loans), then the first person to predict a thing wins most of the money related to it. In other words, prediction markets are approximately parade-leader-incentivizing.
But if there's a race to be the first to bet, then this reward is high-variance; Newton could get priority over Leibniz by getting his ideas to the market a little faster.
You recommend dividing credit more to all the people who could have gotten information to the market, with some kind of time-discount for when they could have done it. If we conceive of "who won the race" as introducing some noise into the credit-assignment, this is a way to de-noise things.
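That de-noised scheme can be sketched in a few lines. The 5% annual discount rate, the arrival dates, and the "third_inventor" entry below are all illustrative assumptions of mine, not anything established in the thread:

```python
def discounted_credit(total_value, arrival_years, discount_rate=0.05):
    """Split credit among everyone who would counterfactually have
    made the discovery, down-weighting later arrivals exponentially
    by how many years behind the first arrival they would have been."""
    first = min(arrival_years.values())
    weights = {
        name: (1.0 - discount_rate) ** (year - first)
        for name, year in arrival_years.items()
    }
    total_weight = sum(weights.values())
    return {name: total_value * w / total_weight
            for name, w in weights.items()}

# Hypothetical arrival dates; only the ordering and gaps matter here.
shares = discounted_credit(
    100.0, {"Newton": 1666, "Leibniz": 1675, "third_inventor": 1700}
)
```

Under these made-up numbers Newton still gets the largest share, but far less than the full 100 units a pure winner-take-all market would award him, which is exactly the de-noising effect.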
This has the consequence of taking away a lot of credit from race-winners when the race was big, which is the part you focus on; based on this idea, you want to be part of smaller races (ideally size 1). But, outside-view, you should have wanted this all along anyway: if you are racing for status and are part of a big race, only a small number of people can win, so your outside-view probability of personally winning status should already be divided by the number of racers. To think you have a good chance of winning such a race you must have personal reasons for confidence, and (since being in the race selects, in part, for people who think they can win) those reasons probably reflect overconfidence.
So for the most part your advice has no benefit for calibrated people, since being a parade-leader is hard.
There are for sure cases where your metric comes apart from expected-parade-leading by a lot more, though. A few years ago I heard accusations that one of the big names behind Deep Learning earned their status by visiting lots of research groups and keeping an eye out for what big things were going to happen next, and managing to publish papers on these big things just a bit ahead of everyone else. This strategy creates the appearance of being a fountain of information, when in fact the service provided is just a small speed boost to pre-existing trends. (I do not recall who exactly was being accused, and I don't have a lot of info on the reliability of this assessment anyway, it was just a rumor.)
Replies from: johnswentworth↑ comment by johnswentworth · 2024-02-01T18:52:19.157Z · LW(p) · GW(p)
Here's an example use-case where it matters a lot.
Suppose I'm trying to do a typical Progress Studies thing; I look at a bunch of historical examples of major discoveries and inventions, in hopes of finding useful patterns to emulate. For purposes of my data-gathering, I want to do the sort of credit assignment the post talks about: I want to filter out the parade-leaders, because I expect they're mostly dominated by noise.
I do think you're mostly-right for purposes of a researcher rationally optimizing for their own status. But that's a quite different use-case from an external observer trying to understand what factors drive progress, or even a researcher trying to understand key factors in order to boost their own work.
Replies from: ryan_b↑ comment by ryan_b · 2024-02-01T22:29:19.872Z · LW(p) · GW(p)
A sports analogy is Moneyball.
The counterfactual impact of a researcher is analogous to the Moneyball insight that professional baseball players are largely interchangeable, because they are all already selected from the extreme tail of baseball-playing ability; which is to say, the counterfactual impact of adding a given player to the team is also low.
Of course in Moneyball they used this to get good-enough talent within budget, which is not the same as the researcher case. All of fantasy sports is exactly a giant counterfactual exercise; I wonder how far we could get with 'fantasy labs' or something.
Replies from: N1X↑ comment by N1X · 2024-02-18T06:53:21.147Z · LW(p) · GW(p)
One way to identify counterfactually-excellent researchers would be to compare the magnitude of their "greatest achievement" against their secondary discoveries, because the credit parade leaders get often propagates their future success, and the people who do more with that boost are the ones who should be given extra credit for originality (their idea) as opposed to novelty (their idea first). Newton and Leibniz both had remarkably successful and diverse achievements, which suggests they were relatively high in counterfactual impact in most (if not all) of those fields.

Another approach would consider how many people or approaches had tried and failed to solve a problem: crediting the zeitgeist rather than Newton and/or Leibniz specifically seems to miss a critical question, namely, if neither of them had solved it, would it have taken an additional year, or more like 10 to 50? In their case we have a proxy for an answer: ideas took months or years to spread at all beyond the "centers of discovery" at the time, so although the two clearly took only a few months or years to compete for the prize of being first (and a few decades to argue over it), we can fairly safely conjecture that whichever anonymous contender was third in the running was behind on at least that timescale. Contrast this with Andrew Wiles, whose proof of Fermat's Last Theorem was efficiently and immediately published (and patched as needed). This also matters because other, and in particular later, luminaries of the field (e.g. Mengoli, Mercator, various Bernoullis, Euler, etc.) might not have had the vocabulary necessary to make as many discoveries as quickly as they did, or to communicate them as effectively, if not for Newton and Leibniz's timely contributions.
comment by trevor (TrevorWiesinger) · 2024-02-01T01:29:03.330Z · LW(p) · GW(p)
I would say this is the kind of unconventional angle that I was originally hoping to see [LW(p) · GW(p)] pumped out of leading AI alignment researchers when they take a break to look at AI policy instead. I'm not sure how much more can be gained from first principles and publicly available information, as opposed to spending time in DC and talking with people like Akash [LW · GW] to get into the nitty gritty of the situation on the ground.
This reminds me of a pretty serious issue with history: beyond lists of events and the dates when they happened, any further detail gets sucked into disputes as people develop their own theories and get territorial when someone challenges them. Unlike the natural sciences, these hypotheses cannot really be tested, since empiricism depends on historical documents (and, in recent history, e.g. WW2 and the Cold War, on testimony from retired government officials and intelligence-agency leaks, which gradually became about as unreliable as news reporting). Worst of all, there are incentives to forge written records pointing towards a giant revelation that will get the discoverer lots of citations.
People who went to school will remember that, even at their most nuanced, history classes and textbooks would just give lists of causes for major events, e.g. 4 major causes of WW1, or 6 major causes of the corruption and collapse of Rome or a major Chinese dynasty, with no weight placed on how much each factor contributed, just more lists of reasons how and why (e.g. was corruption one of the causes that was 40% responsible, or one of the 5% ones?).
This is why books like Guns, Germs, and Steel became so popular; it was just way above par, not adequate (and it also had to be easy for a wide audience to understand in order to get enough sales/citations to be famous, balancing that against actually being good). There's no reference to bell curves or Bayesian reasoning, and you can never even tell whether the scholar thinks humans are neurologically uniform, aside from a Benjamin Franklin or Carnegie or Sun Tzu who were just badass™.
Afaik personal recommendations are required, but then you have to trust the people doing the recommending; will they select things maximally similar to Inadequate Equilibria, e.g. only reading economic history because those are the only writers who sometimes even attempt to think original thoughts about incentive structures?
comment by NicholasKees (nick_kees) · 2024-02-04T22:37:31.833Z · LW(p) · GW(p)
It's been a while since I read about this, but I think your slavery example might be a bit misleading. If I'm not mistaken, the movement to abolish slavery initially only gained serious steam in the United Kingdom. Adam Hochschild tells a story in Bury the Chains that makes the abolition of slavery look extremely contingent on the role activists played in shaping the UK political climate. A big piece of this story is how the UK used their might as a global superpower to help force an end to the transatlantic slave trade (as well as precedent setting).
comment by Elizabeth (pktechgirl) · 2024-06-30T19:30:22.231Z · LW(p) · GW(p)
re: accumulating status in hope of future counterfactual impact.
I model status-qua-status (as opposed to status as a side effect of something real) as something like a score for "how good are you at cooperating with this particular machine?". The more you demonstrate cooperation, the more the machine will trust you. But you can't leverage that into getting the machine to do something different; that would immediately zero out your status/cooperation score.
There are exceptions. If you're exceptionally strategic you might make good use of that status by e.g. changing what the machine thinks it wants, or co-opting the resources and splintering. But this isn't the vibe I get from people I talk to with the 'status then impact' plan, or from any of 80K's advice. They sound like they think status is a fungible resource that can be spent anywhere, like money[1].
So unless you start with a goal and authentically backchain into a plan where a set amount of a specific form of status is a key resource, you probably shouldn't accumulate status.
I think money-then-impact plans risk being nonterminating, but are great if they are responsive and will terminate.
I also think getting a few years of normal work under your belt between college and crazy independent work can be a real asset, as long as you avoid the just-one-more-year trap.
comment by Morpheus · 2024-01-31T23:03:37.823Z · LW(p) · GW(p)
People occasionally come up with plans like "I'll lead the parade for a while, thereby accumulating high status. Then, I'll use that high status to counterfactually influence things!". This is one subcategory of a more general class of plans: "I'll chase status for a while, then use that status to counterfactually influence things!". Various versions of this often come from EAs who are planning to get a machine learning PhD or work at one of the big three AI labs.
Even if you need status, it might be easier to just be or become friends with the people who already have status and credentials, and they can lend you their status if they think your plan/idea is good. For example, when you are writing a letter to a politician or founding a new org/startup.
comment by Ben (ben-lang) · 2024-02-02T11:24:18.045Z · LW(p) · GW(p)
I like this leading-the-parade analogy, but I feel there are two concepts here: leading the parade as a powerless figurehead, and a distinct idea. One is more science, the other more politics.
Whether or not your research has counterfactual impact depends not just on what you do, but also on what everyone else in the world does. If there are 100 problems to work on, ranked in "interesting-ness" from top to bottom, and you are one of 100 researchers, how do you maximise your counterfactual impact? Obviously not by working on the biggest problems; the "most interesting" problem will probably have 5 other people already on it. (The game-theory thing where "we all go for the blonde": https://plus.maths.org/content/if-we-all-go-blonde )
The theory does tell us that if we happen to know problem X is way more interesting than most people give it credit for we should prioritise that, but that's not much of an insight.
I think "leading the parade" has a much more interesting application to politics. Imagine the leader of a political party. Depending on the leader and the party, the party leader might have genuine power to change the party's policies or political aims, but (usually?) the party already has its "thing" and the leader can only adjust within that. Donald Trump is an example of someone who clearly has real power to change the sorts of things the Republicans stand for, because he already has. In contrast, most party leaders just do what their party would do anyway. That seems like a really important thing to know. Imagine the prime minister has some policy you really don't like; you manage to get a meeting with them and explain why the policy is bad. Then, in a weird moment of perfect honesty, they look you in the eye and say "yeah, on a personal level I completely agree this policy is bad. But my party is behind it, and it's happening with or without my backing. Why are you even talking to me? I am just the prime minister; I lead this party but I don't control it."
(Unrelated, but a few years ago I published a paper that I really thought was like, some crazy thing that would only be seen by someone with my weird perspective on the issue - proper counterfactual impact. It only took 2 years for someone else to independently come up with the same idea (and the same figures!) and put up a preprint of their own.)
comment by KanizsaBoundary · 2024-02-01T20:19:13.241Z · LW(p) · GW(p)
William Thurston seems like a mathematician who was not just leading the parade, but rather made fundamental contributions to his fields that no one else would have made in his time. To quote, "He wanted to avoid in hyperbolic geometry what had happened when his basic papers on foliations 'tsunamied' the field in the early 1970s.", meaning that he made so many deep and varied contributions so fast that no one could keep up. More in https://www.ams.org/notices/201511/rnoti-p1318.pdf such as "The huge and daunting advances he made in foliation theory were off-putting, and students stopped going into the area, resulting in an unfortunate premature arrest in the development of the subject while it was still in its prime. (If someone writes a book incorporating Bill's advances, it will take off again.)"
comment by tailcalled · 2024-02-01T08:09:33.100Z · LW(p) · GW(p)
I feel like it's gotta be pretty tricky to fully eliminate the possibility that someone had counterfactual impact when there was simultaneous invention, if the person who had counterfactual impact had been talking about their partially developed ideas beforehand. It could lead to others developing them further.
comment by mattmacdermott · 2024-02-01T16:39:14.899Z · LW(p) · GW(p)
People occasionally come up with plans like "I'll lead the parade for a while, thereby accumulating high status. Then, I'll use that high status to counterfactually influence things!". This is one subcategory of a more general class of plans: "I'll chase status for a while, then use that status to counterfactually influence things!". Various versions of this often come from EAs who are planning to get a machine learning PhD or work at one of the big three AI labs.
This but skill instead of status?
Replies from: D0TheMath↑ comment by Garrett Baker (D0TheMath) · 2024-02-01T18:55:56.097Z · LW(p) · GW(p)
The people who I know who currently seem most impressive spent a lot of time earlier on gaining a bunch of skill. I don't know anyone who seems impressive by virtue of having gained lots of status which they are now free to divert to good ends. Perhaps I just don't hang around such people, but for this reason I'm much less convinced of John's arguments when you replace status with skill.
Replies from: mattmacdermott, johnswentworth↑ comment by mattmacdermott · 2024-02-01T20:26:14.498Z · LW(p) · GW(p)
Sorry, yeah, my comment was quite ambiguous.
I meant that while gaining status might be a questionable first step in a plan to have impact, gaining skill is pretty much an essential one, and in particular getting an ML PhD or working at a big lab seem like quite solid plans for gaining skill.
i.e. if you replace status with skill I agree with the quotes instead of John.
↑ comment by johnswentworth · 2024-02-01T19:02:16.936Z · LW(p) · GW(p)
+1, "gain a bunch of skill and then use it to counterfactually influence things" seems very sensible. If the plan is to gain a bunch of skill by leading a parade, then I'm somewhat more skeptical about whether that's really the best strategy for skill gain, but I could imagine situations where that's plausible.
comment by Review Bot · 2024-02-16T18:26:03.442Z · LW(p) · GW(p)
The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?