Epistemic standards for “Why did it take so long to invent X?”
post by jasoncrawford · 2020-03-02T18:58:42.783Z · LW · GW · 16 comments
This is a link post for https://rootsofprogress.org/epistemic-standards-for-why-it-took-so-long
In seeking to understand the history of progress, I keep running across intriguing cases of “ideas behind their time”—inventions that seem to have come along much later than they could have, such as the cotton gin or the bicycle. I’ve started collecting a list here, and will update that page with new analyses as I find them.
Debates on these questions sometimes devolve into arguments, with people fruitlessly talking past each other (although if people on the Internet can argue over how many days are in a week, I suppose we can argue about anything). So I want to comment on how we should think about such cases and what the standards of evidence are.
To start, there is a need for precision. For example, take the steam engine. There was a device in antiquity that used steam to produce motion: “Hero’s engine”, also known as the aeolipile. Some people see this and conclude that “steam engines existed in the 1st century,” or that there’s no reason ancient Rome couldn’t have put them to widespread industrial use.
This is a mistake. The aeolipile is nothing like the steam engines of the 18th century and later: it’s a turbine, which means it is rotary, rather than using the reciprocating (back-and-forth) motion of a piston, as in Newcomen’s engine. Why does this matter? Because the aeolipile doesn’t generate enough torque for practical applications—one analysis says that Watt’s engine generated a quarter of a million times more torque.
Even before finding that analysis, I had a hunch this would be the case—indeed, that’s how I knew to search for “aeolipile torque”, which led me to that link on the first page of Google results. My intuition was based on a few things. First, if a simple, primitive turbine like the aeolipile could be put to practical use, why didn’t anyone reinvent it in the 18th or even 17th century? Why did Newcomen, Watt, and others focus on much more complicated piston engines? They were smart people working hard on the problem; it seems impossible that such a simple solution would have escaped all of them. Second, the aeolipile is small—surviving sketches of Hero’s device show it sitting on a table—but Newcomen’s engine was very large, to the point where a separate shed would be built to house one. Why did the engine have to be that large if a tiny one would do?
Again, a precise understanding of each invention will uncover relevant details like this. A concept, such as “an engine (of any type) that uses steam (in any way)”, is not enough.
This example also illustrates a second principle: practicality matters. A device that works in theory, but is too underpowered, inefficient, expensive, or unreliable, might as well not exist for practical purposes. It must work not only for a demonstration, but for real, human, economic needs, in the context of consumers’ lives, industrial processes, or business operations. Because of this, a difference in degree can become a difference in kind, when an invention crosses a threshold of practicality.
As a side note, this is why it’s perfectly accurate to say that Edison’s lab invented the light bulb, even though there were other light bulbs before it: they were too expensive (e.g., using platinum filaments), or they burned out quickly and thus needed to be replaced too often. In my opinion, it’s redundant to say that someone invented “the first practical X”—this is the same as saying they invented X. To invent something is to invent a practical version of that thing. If your “invention” is impractical, it’s just a demo or prototype. This can be useful to test ideas or to communicate possibilities, but it’s the practical inventions—the ones that actually remove all the obstacles to widespread use—that move history.
Another example of this is the computer. The computer was invented by J. Presper Eckert and John Mauchly at the University of Pennsylvania; their first model was the ENIAC, completed in 1945. It was a breakthrough because it was the first fully electronic computer, and this made it much faster than previous attempts, such as the IBM Automatic Sequence Controlled Calculator (ASCC, aka the Harvard Mark I), which was electromechanical, built from electromagnetic relays. Based on the speeds for these machines given by Wikipedia, the ENIAC was about 600 times faster than the ASCC at division and over 2,000 times faster at multiplication. (The ENIAC was also over 2,000 times faster than a human using a mechanical calculator at calculating a ballistic trajectory, implying that the ASCC was probably not much faster than a human.) The ASCC was an interesting demo that got some press; the ENIAC was the machine that ignited the computer revolution. Again, a difference in degree becomes a difference in kind. (Going even further back, other predecessors such as the Atanasoff–Berry computer or Konrad Zuse’s Z3 were also much slower than the ENIAC, and had other practical limitations. And Babbage’s “computer” was only a concept with an unfinished design that could never have been built with the technology of the day—which is why, despite my respect for his genius, I cannot regard Babbage as the inventor of the computer, any more than da Vinci was the inventor of the helicopter.)
If you want to argue that something could not have been invented before a particular “gate”, I regard this kind of history as the epistemic gold standard: strong economic motivation (and in the case of computers, military motivation in the context of WW2); multiple prior attempts, including some completed projects that were working and reasonably well-publicized; a measurable difference in a key practical dimension (in this case, speed); and an enabling technology that made a significant difference along that dimension (in this case, approximately three orders of magnitude). For this reason I confidently say that the computer—again, it’s redundant to say “practical computer”—could not have existed before the invention of the vacuum tube amplifier in 1907. (It’s plausible, but less certain to me, that it could not have existed until even later, when improved, more reliable vacuum tubes were invented. Plausible, because the ENIAC used over 17,000 tubes, and reliability was a concern among the engineers; less certain, because I don’t know of any failed attempt at building a fully electronic computer with less-reliable tubes, and because some statements from engineers indicate that reliability was less of a problem than feared, particularly if the tubes were operated continuously, to avoid thermal stress. So it might be that, after 1907, all that was holding back the engineering was a key insight.)
On the other hand, if you want to argue that something could have been invented much earlier than it was, you have to do better than glancing at its high-level concepts or components. You need to rigorously examine every part, material, and manufacturing process, and rule them all out as gating technologies. Any detail, even a minor one, can become crucial—especially when we remember that inventions need not only to work but to be practical, which includes performance, reliability and cost. As an example, in my analysis of the bicycle, I described the first proto-bicycle, known as the “draisine” or “Laufmaschine”, as being made of wood with iron tires—both ancient technologies. However, Nick Szabo pointed out to me that it reportedly used brass ball bearings, a much newer technology, which might have been essential to reduce friction.
A related question: how surprised should we be that it took X years for invention Y after enabling technology Z? Inventions do not spring forth immediately upon becoming possible: ideas and information take time to spread, experiments are required, funding must be secured, laboratories organized, materials obtained; and at the end of the day all this is performed not by automata or some clockwork mechanism, but by unpredictable individuals with their own vision, inspiration, hopes and fears, operating in a complex network of teams, contracts, partnerships, and other social structures. Even in the best of circumstances, a gap of a decade or more from a key enabling technology to the commercial release of an invention is not surprising; if the enabler is a scientific discovery, two or three decades does not surprise me. And chance can intervene—the path to an invention can be derailed by a sudden disease, a financial panic, a war.
In general, I think we should be more surprised at long gaps for inventions that have obvious, predictable impact on major industries. For this reason, the cotton gin and the flying shuttle are more compelling gaps to me than the wheeled suitcase, role-playing games, or the bicycle, which merely offer convenience or entertainment. I think we should also expect longer gaps in places and times that had lower population, less education, less economic surplus (to fund R&D), fewer or less effective financing mechanisms (such as venture capital), less political stability, etc.
My model for this is that innovation, at a societal level, is a stochastic process, with some parameters set by the environment and others by the particular invention in question. The more “pressure” there is to solve a problem (economic motivation), and the more opportunities there are for it to get solved (educated, inventive individuals or organizations, with time, space, materials, and funding, in a context of good legal institutions and political stability), the sooner you expect the leap to be made and the shorter the expected gap. In the limit, you get simultaneous invention, of which there are many stories (although some are overplayed, in part for the reasons discussed above regarding what counts as an “invention”). Some economics grad student could probably get a PhD thesis out of formalizing this model and fitting the parameters to data—both to quantify the “inventiveness” of a given place and time, and to identify outlier inventions that were truly, measurably, “behind their time.”
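To make the intuition concrete, here is a minimal Monte Carlo sketch of this kind of model—my own illustration, not a formalization the post provides. It treats each year as producing some number of independent invention attempts (the “opportunities”), each succeeding with some probability (standing in for “pressure” and difficulty); the invention arrives with the first success, and multiple successes in the same year correspond to simultaneous invention. All parameter values are made up for illustration.

```python
import random

def invention_gap(opportunities_per_year, success_prob, years=500, seed=0):
    """Simulate the gap (in years) between an invention becoming possible
    and it actually being made. Each year, `opportunities_per_year`
    independent attempts occur, each succeeding with `success_prob`.
    Returns (year of first success, number of successes that year);
    more than one success in the same year models simultaneous invention."""
    rng = random.Random(seed)
    for year in range(1, years + 1):
        successes = sum(rng.random() < success_prob
                        for _ in range(opportunities_per_year))
        if successes:
            return year, successes
    return None, 0  # never invented within the simulated horizon

def mean_gap(opportunities_per_year, success_prob, trials=200):
    """Average gap over many simulated worlds."""
    gaps = [invention_gap(opportunities_per_year, success_prob, seed=s)[0]
            for s in range(trials)]
    return sum(gaps) / len(gaps)

# A society with many well-resourced inventors (high opportunity count)
# closes the gap much sooner, on average, than one with few:
rich = mean_gap(opportunities_per_year=20, success_prob=0.05)
poor = mean_gap(opportunities_per_year=2, success_prob=0.05)
```

In this toy version the gap is geometrically distributed, so doubling the number of opportunities roughly halves the expected wait; a real formalization would also need the parameters to drift over time as population, education, and financing institutions grow.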
Comments sorted by top scores.
comment by Dead Hour Canoe · 2020-03-02T22:06:15.303Z · LW(p) · GW(p)
One angle to look at invention from is the curious fact that so many things are invented by different people in different countries; and that if you look into it you generally find that most of these multiple inventors have a point (rather than, as in Star Trek, Russians just being adorable idiots).
Just from your list, and from a British perspective/quick wiki'ing, Swan invented an electric light bulb that worked well enough to make him a lot of money before Edison - and Turing built the first computer, as opposed to calculator. And I'm sure there are French/German/etc equivalents that are just as accurate, and just as partial. Even though I rationally know this, and have no conscious desire to defend my nation's scientific honour, here I am writing this comment.
So invention as an idea (and as it's normally thought of) is suspiciously connected to tribalism and identity. It may not be much use for describing or investigating how discovery works.
comment by Raemon · 2020-03-06T02:57:35.470Z · LW(p) · GW(p)
Curated.
I'd enjoyed your previous posts on similar subjects – the subject of "how did we invent things" is centrally pretty relevant to my interests. I liked how this tied them together more directly into the central LessWrong question of "what do you think you know and how do you think you know it?", with lots of good contrasting examples.
↑ comment by jasoncrawford · 2020-03-06T04:16:00.769Z · LW(p) · GW(p)
Thanks Raemon!
comment by TurnTrout · 2020-03-03T04:32:54.560Z · LW(p) · GW(p)
Strong upvote – I really enjoyed and appreciated your use of specific examples.
How did you format the captions and center the images?
↑ comment by jasoncrawford · 2020-03-03T04:51:25.800Z · LW(p) · GW(p)
Thanks! Re formatting, I had help from Oliver Habryka who knows special formatting magic
comment by Douglas_Knight · 2021-06-07T21:05:18.120Z · LW(p) · GW(p)
The example of the machined ball bearing is great!
But both your other examples are false. Ctesibius did not just make a tabletop science demo, but also used steam engines to do useful work, namely opening temple doors. Babbage designed a working computer, which we know because people built it. He correctly computed the necessary tolerances and they were within the tolerances available at the time. The only problem was that he was defrauded on tolerances, a failure of social technology, or perhaps, a success of cartel social technology.
↑ comment by jasoncrawford · 2021-06-07T21:13:18.966Z · LW(p) · GW(p)
Thanks.
Re Hero's Engine, that's an interesting reference. Is there any evidence that this was ever built? (Old inventors drew up a lot of plans that were never implemented and may not even have worked.)
Re Babbage: The Difference Engine was not a computer. It was a calculating machine, but it was not programmable or general-purpose. (The Analytic Engine would have been a computer, but Babbage never even finished designing it.)
↑ comment by Douglas_Knight · 2021-06-07T21:29:18.319Z · LW(p) · GW(p)
Sure, Babbage didn't finish the design, but how do you justify
could never have been built with the technology of the day
Do you make that claim in order to distinguish the technology necessary for the two machines?
↑ comment by jasoncrawford · 2021-06-07T21:34:39.851Z · LW(p) · GW(p)
Well, people could barely get computers working with electromechanical parts in the 1930s, and those machines weren't very practical. Just seems impossible on the face of it that you could get something serious working 100 years earlier.
The Difference Engine, as you correctly point out, was much more feasible, and Babbage probably could have finished building it, if he hadn't fumbled the project.
↑ comment by Douglas_Knight · 2021-06-07T21:55:17.696Z · LW(p) · GW(p)
That sounds like an outside-view argument, which makes the use of the example in the general argument purely circular.
I don't point out that the Difference Engine was more feasible. I specifically asked you for such an argument and you sidestepped. I don't think anyone has ever made such an argument.
I only point out that the Difference Engine was feasible, which is an independent claim. For a century people claimed that Babbage's designs were infeasible. This proves too much. Would you have made that mistake? If the construction disproved the conventional wisdom, it is not enough to minimally adjust your conclusions to avoid the falsehoods, but to adjust your methods.
comment by bfinn · 2020-04-13T10:08:41.226Z · LW(p) · GW(p)
Your point about the difference between a prototype and a practical, useful model is of course a matter of degree. A good illustration of this is Apple: various inventions of theirs - the Apple Mac, iPod, iPhone, iPad - had little innovation as such. Desktop computers, GUIs, mice; digital music players; mobile phones, personal digital assistants, touch screens; tablet computers - these all already existed. But in each case Apple's breakthrough was to take them from being commercial but mediocre, to very good implementations. And only when that happened did mass adoption occur, which is a crucial step in the impact of the invention. At that point many people assumed Apple had invented a whole new type of device; whereas in fact they had merely passed a key threshold - a step-change in usefulness (often in Apple's case relating to user interface).
So I reckon there are three key stages: prototype; mediocre implementation with a limited market (maybe there's a snappy term for that); good implementation with a mass market. I assume these stages occur with many inventions.
And presumably the reason an undramatic improvement in implementation can produce a dramatic change in takeup is when the technology passes a crucial threshold, viz. the standard of existing technologies. E.g. when computers got better in every way than a typewriter, why would you buy a typewriter? Price is the only remaining reason, and maybe even that is only a key consideration during a transition period. I remember the transition from typewriters to computers in the 1980s. I suspect the reason computers weren't used for routine word processing before then was not just the price, but that computers & printers were too hard to use, unreliable, and bulky. I.e. typewriters really were more practical for most tasks.
comment by MoritzG · 2020-03-08T18:52:20.598Z · LW(p) · GW(p)
Please elaborate on why you think the bicycle "merely offer[s] convenience or entertainment". I understand that people did not understand its potential and thought of it as a toy for crazy people, but wasn't the same true for the gasoline automobile? To me the bicycle is of great importance, not just for leisure and sport. I understand that the bicycle's value depends on the distance traveled, flatness, wind, and road quality, but compared to a horse (which most did not have) it is so much better.
↑ comment by jasoncrawford · 2020-03-08T20:26:26.655Z · LW(p) · GW(p)
Good point. It did evolve into more than just a convenience for many people. In the beginning, though, it was seen as a leisure activity with no real practical value. And even today its economic and social impact is not as great as, say, textile mechanization. Almost everyone on Earth wears mass-manufactured clothes; only a minority of people use a bicycle for anything other than recreation.
↑ comment by MoritzG · 2020-03-09T10:59:04.373Z · LW(p) · GW(p)
"only a minority of people use a bicycle for anything other than recreation"
I guess my upbringing and surroundings are special (densely populated area in northern Europe), but I know plenty of people who move in no other way (shopping, vacation, commute, everything). Before the gasoline motor scooter became widespread, and poisoned the air in Asia, people used bikes all the time.
comment by Laurence Perkins (laurence-perkins) · 2020-03-06T23:09:44.612Z · LW(p) · GW(p)
Note also that "practical" is extremely subjective. The tungsten filament won out over the carbon arc for home lighting with better light quality and better durability. But it's not what we use to light up the IMAX.
Eli Whitney didn't invent the first cotton gin, he invented the first cotton gin that would work with the American short-fiber cotton without jamming up. That was only needed because better agricultural production technology made it so that there was more cotton harvested than could have the seeds combed out of it by hand.
A lot of it comes down to cost vs benefit. Babbage's difference engine was within the reach of the craftsmen of the time. But the price tag was hideously expensive. And for what? Yes, a completely error-free set of navigation tables would have been useful. But the rate of error was already low enough that the additional benefit of eliminating those errors wouldn't have paid for itself. An aeolipile could become a workable steam-turbine with sufficient investment in nozzle design, but when 80+% of your population has to be full-time farmers to grow enough food for everyone, who has time to work on that?
It's not just what you have to have for precursor technology to build something, it's also what you have to give up when you spend your resources on it. For much of human history that second factor has been quite high.
↑ comment by jasoncrawford · 2020-03-06T23:01:47.976Z · LW(p) · GW(p)
I would say “context-dependent” perhaps rather than “subjective”.
Re the cotton gin, any good reference on that? The story I read made it sound like a fairly de novo invention.