I'm not sure that's really different from the polio story. The world knew that polio vaccines were under development. They knew a big clinical trial was underway starting in 1954. There was a date announced ahead of time when the results of the trial would be announced (April 12, 1955). This seems similar to there being an announcement in the news of the first results of a covid vaccine trial.
Good question. I think what happened instead is that farmers with threshing machines would rent them out to those without, or people would bring portable machines around to farms—see my reply to @ChristianKl.
Why did it happen that way? Not sure. Maybe transportation costs, which were high. Grain is much more compact and high value-density than unthreshed bundles of wheat. Makes more sense to thresh it on-location before transporting it anywhere.
Oh, also—farmers used the straw! For animal bedding, to mix with manure, etc. It really doesn't make sense to transport stalks to a central location, and then send the straw back.
Yeah, it was a pithy tweet-length opener. To be precise, it's unnecessary to have a perfect model of the future / predict it with anything near 100% accuracy, or to have any appreciable degree of accuracy on the long-term future.
Historically, something somewhat different happened: if one farmer owned a threshing machine, other farmers might bring their grain to him and rent time on the machine.
Or, when portable threshing machines were built, someone would travel around to different farms and thresh there for a fee, then move on.
(But a portable machine was nontrivial, especially when the machines were horse-powered. Check out this diagram [from this source]: a horse was hooked up to the harness at letter H, which gives a sense of how big the thing was. That model would have been stationary.)
In practice, I don't think it worked that way. If the machine broke, it was not at all easy to repair; you couldn't just factor in a maintenance cost. And if the machine damaged or lost grain, it was worse than useless.
Good question about looms/mills; I don't know. Before the 1800s or so, I think looms were mostly owned by weavers who worked from home, so there was no “specialist on site”. But I don't think they broke much, because there weren't high forces involved. (In the 1800s, when large power looms were set up in factories such as those at Lowell, Mass., I imagine they would have had an engineer on staff.) Re mills, I would guess that a broken mill would be repaired by the local millwright, but I doubt millwrights were on-site.
Your model of costs vs. benefits is logical, but in practice there is uncertainty (about machine reliability / breakdowns) and people tend to avoid tail risk by seeking reliable machines. Also, previous standards of quality (that can be achieved by manual labor) tend to set a quality bar that machines have to meet before they are adopted. People don't like reducing quality, even if the efficiency gain theoretically makes up for it. At least, that's how it seemed to be in the early days of mechanization.
See my reply to @johnswentworth re other benefits of the threshing machine, beyond labor-saving, and evidence that farmers were keenly interested in it.
For looms, rather than comparing loom to no loom, compare the frame looms available in 1700 to the weighted vertical looms or back strap looms from long before. For the printing press, I don't see how this is different. Books were not impossible to make; scribes made them by hand. Again, it was an efficiency gain.
You might be right that something like the spinning wheel, for example, had a stronger economic incentive than a threshing machine. I just don't think that's the main explanation for why it took ~50 years for the technology to diffuse rather than say 20–30.
This doesn't match with what I've read by and about farmers, at least not in the place and time of interest (Britain and US, late 18C on). Farmers were quite interested in improving yields and efficiency. There were many journals devoted to it and many experimental machines and techniques being tried. Capital was a limitation, but many farmers had nonzero capital.
Good thoughts. You're right that a threshing machine only applies to a portion of agricultural labor, for part of the year. (Then again, weaving is only a portion of the textile manufacturing process, and printing is only a portion of the book-making process.)
My first reaction is that there is lots of evidence of farmers being actively interested in threshing machines. See that block quote from McClelland about how many things George Washington tried. Farmers' journals have lots of stories about them. They got exhibited at fairs. When the compilation of The Commercial, Agricultural & Manufacturer’s Magazine came out, the preface to the whole volume mentioned the threshing machine, and then it's the very first article in the first issue. So, there was a lot of interest from farmers.
Another point is that a good threshing machine did the job better than a human, so it not only saved labor, it saved grain. So it was increasing the farmer's harvest, in addition to decreasing costs. And this was something explicitly anticipated / hoped for, even as early as the 1636 “patent” I referenced.
There could be other motivations, too: one article I read mentioned that a threshing machine could help farmers get their harvest ready for market before the rivers froze for the winter.
And in fact, when a good threshing machine was invented in Scotland in 1786, it was adopted locally. It's just that adoption was slow to spread—even to England, let alone to the US (or the rest of the world).
So, although I didn't explicitly consider it, when I add it all up, I don't think lack of interest from farmers was an issue.
Cost was an issue. Some models were just too expensive, especially those imported from overseas. But there were also cheap models—they just didn't work reliably. It seems that an increase in reliability, rather than a decrease in cost, was the key to adoption.
Well, people could barely get computers working with electromechanical parts in the 1930s, and those machines weren't very practical. Just seems impossible on the face of it that you could get something serious working 100 years earlier.
The Difference Engine, as you correctly point out, was much more feasible, and Babbage probably could have finished building it, if he hadn't fumbled the project.
Re Hero's Engine, that's an interesting reference. Is there any evidence that this was ever built? (Old inventors drew up a lot of plans that were never implemented and may not even have worked.)
Re Babbage: The Difference Engine was not a computer. It was a calculating machine, but it was not programmable or general-purpose. (The Analytical Engine would have been a computer, but Babbage never even finished designing it.)
This story of innovation you tell about the industrial revolution is very different from the one told by Clayton Christensen.
… wool is generally better given that it doesn't get dirty as easily as cotton.
Well, that's one axis of value, but not the main one people care about. Wool is heavy and hot and can be scratchy; cotton is light and soft, very comfortable and good for summers and hot climates.
… mass-produced clothing fits less well than clothing that's tailored for individual people.
Sure. A tradeoff between cost and quality. It's better for most people to buy standard sizes off the rack. The rich can afford a more labor-intensive process. Maybe someday we'll have some sort of computerized tailor that can custom-fit clothes for everyone without human labor.
Low-hanging fruit alone doesn't explain stagnation, because our ability to pick the fruit has also been improving. To explain stagnation, you have to explain why the former is happening faster than the latter, and why this only started happening in the last ~50 years.
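As a toy illustration of this point (my framing and rate parameters, not from the comment above): if both the difficulty of remaining ideas and our ability to find them grow exponentially, the net rate of progress depends only on the difference between the two growth rates, so stagnation requires difficulty to be outgrowing ability.

```python
import math

# Toy model (illustrative assumption): remaining ideas get harder at
# exponential rate g_d, while our fruit-picking ability improves at
# exponential rate g_a. Net progress then scales as exp((g_a - g_d) * t).
def progress_rate(t: float, g_a: float, g_d: float) -> float:
    return math.exp((g_a - g_d) * t)

# If ability keeps pace with rising difficulty, progress never slows:
print(progress_rate(50, g_a=0.03, g_d=0.03))  # 1.0, i.e. no change

# Stagnation requires difficulty to outgrow ability (g_d > g_a),
# and, to fit the observation, only starting ~50 years ago:
print(progress_rate(50, g_a=0.02, g_d=0.03))  # < 1.0, i.e. slowing
```

The point of the sketch is just that "fruit is getting harder to pick" is one rate, not an explanation by itself; the explanandum is the sign of the difference.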
All (most?) invention is engineering, but a lot of engineering is not invention.
Boeing employs many airplane engineers, but they don't really invent new planes. Facebook employs many software engineers but isn't inventing much in software. Both are doing product development engineering—which is fine and something the world certainly needs a lot of, but it's not the same thing.
I think anyone who wanted to be an inventor would train as an engineer. So the education/training part of the inventor career path is there. But it falls apart after university.
Our bodies are equipped with damage repair systems that are pretty darn effective at low dose rates. If this were not the case, then life would never have evolved as it has. Life started about 3 billion years ago when average background radiation was about 10 mSv/y, about 4 times the current average. Life without repair mechanisms would be impossible. But these repair mechanisms can be overwhelmed by high dose rate damage.
The repair mechanisms take a bewildering number of forms, all of which seem to have names requiring a dictionary. And the strategies are remarkably clever. At doses below 3 mSv, a damaged cell attempts no repair but triggers its premature death. However, at higher doses, it triggers the repair process.[23] This scheme avoids an unnecessary and possibly erroneous repair process when cell damage rate is so low that the cell can be sacrificed. But if the damage rate is high enough that the loss of the cell would cause its own problems, then the repair process is initiated. This magic is accomplished by activating/repressing a different set of genes for high and low doses.[page 15] LNT denies this is possible.
Even at the cell level, the repair process is fascinating. In terms of cancer, we are most interested in how the cell repairs breaks in its DNA. Single-strand breaks are astonishingly frequent: tens of thousands per cell per day. Almost all of these breaks are caused by ionized oxygen molecules from metabolism within the cell. MIT researchers observed that 100 mSv/y dose rates increased this number by about 12 per day. Breaks that snap only one side of the chain are repaired almost automatically by the clever chemistry of the double helix itself.
The interesting question is: what happens if both sides of the double helix are broken? Double strand breaks (DSB) also occur naturally. Endogenous, non-radiogenic causes generate a DSB about once every ten days per cell. Average natural background radiation creates a DSB about every 10,000 days per cell. However the break was caused, the DNA molecule is split in two.
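A quick back-of-the-envelope check on the rates quoted above shows how small the radiation contribution is at these dose rates. (Taking "tens of thousands" of single-strand breaks as 20,000/day is my assumption; the other figures are from the text.)

```python
# Single-strand breaks (SSBs): ~tens of thousands per cell per day from
# metabolism; 100 mSv/y adds ~12/day (MIT figure quoted above).
ssb_endogenous_per_day = 20_000   # assumption within "tens of thousands"
ssb_extra_at_100msv_y = 12

ssb_fraction = ssb_extra_at_100msv_y / ssb_endogenous_per_day
print(f"SSBs from 100 mSv/y as share of total: {ssb_fraction:.3%}")

# Double-strand breaks (DSBs): endogenous causes ~1 per 10 days per cell;
# average background radiation ~1 per 10,000 days per cell.
dsb_endogenous_per_day = 1 / 10
dsb_background_per_day = 1 / 10_000

dsb_ratio = dsb_background_per_day / dsb_endogenous_per_day
print(f"Background-radiation DSBs relative to endogenous: {dsb_ratio:.1%}")
```

On these numbers, background radiation accounts for roughly 0.1% of double-strand breaks; the repair machinery is overwhelmingly busy with self-inflicted damage.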
Clever experiments at Berkeley show that the two halves migrate to “repair centers”, areas within the cell that are specialized in putting the DNA back together. Berkeley actually has pictures of this process (Figure 4.15). Repair is largely complete in about 2 hours for acute doses below 100 mSv, and in about 10 hours for doses around 1000 mSv. These experiments show that if a “repair center” is faced with only one DSB, the repair process rarely makes a mistake in reconstructing the DNA. But if there are multiple breaks per repair center, the error rate goes up drastically. A few of these errors will survive, and a few of those will result in a viable mutation that will eventually cause cancer. The key feature of this process is that it is non-linear. And it is critically dose-rate dependent. If the damage rate is less than the repair rate, we are in good shape. If the damage rate is greater than the repair rate, we have a problem.
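To see why "multiple breaks per repair center" produces a non-linear response, here is a toy calculation (my simplification, not the Berkeley model): if DSBs arriving at a repair center during one repair window are Poisson-distributed, then the chance of facing two or more simultaneous breaks scales roughly with the square of the mean load at low loads, so halving the dose rate roughly quarters the error opportunity.

```python
import math

# Toy model: DSBs hit a repair center as a Poisson process with mean
# load lam during one repair window. Errors mostly occur when a center
# faces 2+ simultaneous breaks.
def p_multiple_breaks(lam: float) -> float:
    # P(N >= 2) for N ~ Poisson(lam) = 1 - P(0) - P(1)
    return 1 - math.exp(-lam) * (1 + lam)

# At low loads this is ~ lam**2 / 2: quadratic, not linear, in dose rate,
# which is the kind of response the linear no-threshold (LNT) model rules out.
for lam in (0.2, 0.1, 0.05):
    print(f"mean load {lam}: P(2+ breaks) = {p_multiple_breaks(lam):.5f}")
```

Running it shows the probability dropping by roughly 4x each time the load is halved, illustrating the dose-rate dependence described above.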
The Berkeley work was part of the DOE-funded Low Dose Radiation Research Program. Despite the progress at Berkeley and other labs and bipartisan congressional support, DOE shut the program down in 2015. When the DOE administrator of the program, Dr. Noelle Metting, attempted to defend her program, she was fired and denied access to her office. The program records were not properly archived as required by DOE procedures.
Footnote 23 says:
To be a bit more precise, some repairs can only take place in the G2 phase, just before cell division. Radiation to the cell above 3 mSv activates the ATM gene, which arrests the cell in the G2 phase. This allows time for the repair process to take place.
Yes, I remember that too; can't remember where I read about it, maybe Yergin's The Prize. The analogy that occurred to me was web/app analytics, especially the social media apps that learned to measure their “viral coefficient” around the late '00s.
I agree that the bar keeps getting raised, and therefore progress gets more difficult. I don't see why that implies any asymptote. (I wrote in a previous post why exponential growth should be our baseline, even as we pick off low-hanging fruit.)
Interesting, but I think you're underestimating the impact of other general-purpose technologies, such as in energy or manufacturing. New energy sources can be applied broadly across many areas, for instance.
Ah, you are from Eastern Europe? To clarify, the stagnation hypothesis is about the frontier of technological development in the wealthiest countries. I don't think there has necessarily been stagnation in global development.
This analysis, and the stagnation debate in general, is really about the technological frontier. Global development overall has not necessarily been stagnating—India and China have seen huge growth in the last 50 years.
There was far more progress in aviation from 1920–1970 than from 1970–2020. In 1920, planes were still mostly made of wood and fabric. By 1970, most planes had jet engines and flew at ~600 mph. Today planes actually fly a bit slower than they did in 1970. Yes, there has been progress in safety and cost, but it doesn't compare to the previous 50-year period.
Similar pattern for automobiles and even highways.
I'm not convinced by the optimists, either, and ADS made some good points. This post was laying the foundation for my response. With this framework I think you can analyze things in at least a slightly more rigorous way.
OP here. I will recuse myself from the conversation about whether this deserves to be in any list or collection. However, on the topic of whether it belongs on LW at all, I'll just note that I was specifically invited by LW admins to cross-post my blog here.
I'm not a comp bio expert, but the core of @johnswentworth's argument seems to be that “protein shape tells us very little about [protein reactions] without extensive additional simulation”, and “the simulation is expensive in much the same way as the folding problem itself.”
Both true as far as I understand, but that doesn't mean those problems are intractable, any more than protein folding itself was intractable.
So I think you can argue “this doesn't immediately lead to massive practical applications, there are more hard problems to solve”, but not “this isn't a big deal and doesn't really matter” in the long run.
Good question, I don't know. Someone pointed me to this technical description of mRNA technology which I haven't read yet, might see if it answers your question though: https://www.nature.com/articles/nrd.2017.243