True, but it's not that hard to imagine that a cast-iron stove could still be working a century later. It's pretty simple as far as I understand it… pretty much just a metal box with doors and a stovepipe.
I think the point is not that interdependence is inherently safer, but rather that, all things considered, industrial civilization is both safer and more interdependent than the pre-industrial world. The electric grid, for instance, makes us much more interdependent than tallow candles or kerosene lamps, but it's also much safer than using flames for lighting inside the home. The added risk from interdependence is more than compensated for by other factors.
I haven't researched planned obsolescence; are there any good examples of this?
If extra durability/lifespan (beyond the ~15 years that things already last) were possible with a small increase in cost, why wouldn't manufacturers compete on this axis? I imagine that individual homebuyers don't care that much, but, say, a landlord of a large apartment complex who was making a major purchase of stoves would probably want to optimize for 15 vs. 20 or 25-year lifespans. They would have someone doing the calculation.
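To make that concrete, here's roughly the calculation I'd expect that landlord to run. The stove prices, lifespans, and discount rate below are all invented for illustration, not real appliance figures:

```python
# Back-of-the-envelope: is a longer-lasting stove worth a price premium?
# All numbers are made up for illustration.

def equivalent_annual_cost(price, lifespan_years, discount_rate=0.05):
    """Spread the purchase price over the stove's life using the standard
    equivalent-annual-cost formula (accounts for the time value of money)."""
    r = discount_rate
    annuity_factor = (1 - (1 + r) ** -lifespan_years) / r
    return price / annuity_factor

for price, years in [(800, 15), (900, 20), (1000, 25)]:
    print(f"${price} stove, {years}-year life: "
          f"${equivalent_annual_cost(price, years):.2f}/year")

# With these invented numbers the 25-year stove is cheapest per year, but only
# by a few dollars; that thin margin may be why nobody competes on this axis.
```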
Problems do have to be solved case-by-case, but your basic premises and values—your philosophy—guide what kinds of solutions you will seek, how you evaluate them, and what you will accept.
For instance, to address climate change, how do you feel about seeking abundant, cheap, clean energy via nuclear/solar/geothermal? Carbon capture? Geoengineering? Degrowth? Those are very different approaches.
I think the relationship between the philosophy of progress and actual progress is reciprocal. When people believe in progress, they do more of it; and when they see it working, they believe in it.
Note that the idea of progress arguably began around the time of Bacon, which was more than a century before the Industrial Revolution.
Didn't the philosophy of progress fade when technological innovation started producing dangers and destructions that were more obvious and dramatic and story-friendly, and hit people where they live on a daily basis?
Yes, but. Historical events like this pose a challenge to existing ideas—they don't determine how people will interpret them or what new ideas will come along to answer the challenge. Every challenge is a crossroads. I think we took the wrong fork in the mid-20th century, and I want us to get back on track.
Can we really ignite a new philosophy of progress without a concomitant explosion in dramatic, everyday technical innovations that impact ordinary people's daily lives with the same force that the positive innovations of the 19th century produced?
Again, I think this will be reciprocal. If the coming decades see Mars settlements and affordable supersonic passenger travel and CRISPR gene therapies and an mRNA cure for cancer and fusion energy and effective longevity treatments… that will help people believe in progress again. But also, helping spread the idea that progress is real and we can make it happen could help inspire people to build the future.
Rather than increasing the rate at which we create innovations on par with AlphaFold, might a new philosophy of progress just cause more people to be excited about AlphaFold?
Excitement about things translates into money and talent going into them, which causes more of them to happen.
To be clear, you're quoting a sentence from a paragraph that I described as “one possible narrative”, in a section where I described two opposing narratives and then explored which aspects of each seemed to be supported by this story.
I do think that safety measures could have begun earlier. See my reply to jpsmith: https://www.lesswrong.com/posts/DQKgYhEYP86PLW7tZ/how-factories-were-made-safe?commentId=wAPgdiJNHsYewmrzi
I think of “externality” as roughly equivalent to “you hurt someone and you didn't have to pay for it.” If a workplace is neglectful of safety, and the worker gets hurt, and the employer doesn't have to pay, that seems like an externality?
I don't think it would have significantly slowed things down. I think the costs to employers went from like a fraction of a percent of payroll to a few percent. It was a big relative increase, but still a fairly small cost overall. But it was just big enough to make them say “we should have a safety department.”
I'm not sure if there was a specific more-radical proposal on the table, or if that was just a general concern of the businesses. If there was one, I haven't encountered it.
Again, the labor unions actually were originally for the less-radical proposal of simply reforming the tort system (taking away some employer protections) without going all the way to a no-fault system.
The workers themselves seem mostly focused on pay, hours, and other more tangible things.
Did you read all the way to the end? I feel this was addressed in the last few paragraphs.
(Incidentally, many if not most businesses were in favor of the workers' comp law, even though they knew it would raise their costs. In part that was because they thought it would improve labor relations, although it also may have been because they thought some reform was inevitable and they wanted something moderate to pass before something radical came along. Re labor unions, many of them were actually against workers' comp at first, although Samuel Gompers of the AFL then changed his mind and became for it.)
I think there is a clear line. If you take the job at 20 hours/day, you know what you've signed up for. You didn't sign up to be injured in an accident. The liability for that needs to fall somewhere.
(You could argue that workers signed up for certain risks, and this is exactly what employers used to argue in many cases. And I'm not 100% sure that's wrong. So I am still slightly ambivalent about this.)
I don't think the data is cherry-picked, but you could argue with some of his statistical analysis. He lays it all out pretty clearly though, so the book is valuable to read even if you disagree in the end.
He covers violent crime (which would include bar fights) as well.
Sure, a crucial question is whether (and to what extent, and in what way) progress in science, technology, industry, and the economy leads to human well-being. That is at the core of what any philosophy of progress should address.
I think the starting point is to examine the moral progress that's been made so far in history, and try to figure out how it happens. The best stuff I've read on this so far is from Pinker (The Better Angels of Our Nature and parts of Enlightenment Now).
Descriptive optimism is contingent: it's an assessment about the world (or a part of it), and so it's only warranted when it's true. There are many aspects of the world that I am descriptively pessimistic about.
But prescriptive optimism is an attitude, a choice. It says that we're going to work hard to solve problems and make the world and our lives better, no matter what, whether the prospects seem rosy or bleak. And I think we need more prescriptive optimism.
Today there's a set of institutions to support science and a whole career path based on them. What remaining important work is there that's not being rewarded? I don't know off the top of my head. My guess is that it's something that most people don't think about and that doesn't have a prominent role in society—like science itself in the 17th/18th centuries.
On your second point, I agree that improving intellectual efficiency is an important part of progress. But I think that pretty much all of information technology, from the first writing system to the Internet, has been part of that effort.
First, I started the research back in ~2017. I'm not writing from a position of total ignorance here.
Look, there are some times where a tough situation means that the rational choice is to accept hardship in order to avoid a worse outcome. Covid is a good example: isolation/“lockdown” measures made sense at least in the early part of the pandemic, despite the hit to the economy and to people's lives.
But (to continue the analogy) the harm to human life from permanent lockdown would be so vast that it doesn't make sense to entertain until you've tried everything else. If in ~Q2 of 2020 someone had proposed a forever-lockdown as the new normal, what would your reaction be? Mine would be: wait, what about the vaccines that are in development? What about the possibility of finding a cure? If nothing else, could we develop cheap rapid testing? Etc. Perma-lockdown would essentially be giving up and admitting defeat—accepting a permanently reduced quality of life, because we just weren't smart or competent enough to come up with an actual solution to the problem and move forward.
That's how I see “degrowth.” Like, let's accept for the sake of argument that degrowth would provide temporary relief for certain problems. Maybe you could even make an argument that it's needed as some stopgap measure, analogous to lockdowns (although I'm skeptical). But the degrowth movement wants to end growth as a permanent measure in response to environmental problems. The missed opportunity to make life better for everyone is so mind-bogglingly vast that it requires extreme justification—there really has to be no other way. And the degrowth movement is extremely far from providing that justification.
I'm not sure that's really different from the polio story. The world knew that polio vaccines were under development. They knew a big clinical trial was underway starting in 1954. There was a date announced ahead of time when the results of the trial would be announced (April 12, 1955). This seems similar to there being an announcement in the news of the first results of a covid vaccine trial.
Good question. I think what happened instead is that farmers with threshing machines would rent them out to those without, or people would bring portable machines around to farms—see my reply to @ChristianKl.
Why did it happen that way? Not sure. Maybe transportation costs, which were high. Grain is much more compact and high value-density than unthreshed bundles of wheat. Makes more sense to thresh it on-location before transporting it anywhere.
Oh, also—farmers used the straw! For animal bedding, to mix with manure, etc. It really doesn't make sense to transport stalks to a central location, and then send the straw back.
Yeah, it was a pithy tweet-length opener. To be precise: it's unnecessary to have a perfect model of the future, to predict it with anything near 100% accuracy, or to have any appreciable accuracy about the long-term future.
Historically, something somewhat different happened: if one farmer owned a threshing machine, other farmers might bring their grain to him and rent time on the machine.
Or, when portable threshing machines were built, someone would travel around to different farms and thresh there for a fee, then move on.
(But a portable machine was nontrivial, especially when the machines were horse-powered. Check out this diagram [from this source]: a horse was hooked up to that harness at letter H, so you can get a sense of how big that thing was. That model would have been stationary.)
In practice, I don't think it worked that way. If the machine broke, it was not at all easy to repair; you couldn't just factor in a maintenance cost. And if the machine damaged or lost grain, it was worse than useless.
Good question about looms/mills, I don't know. Before the 1800s or so, I think looms were mostly owned by weavers who worked from home. There was no “specialist on site”. But I don't think they broke much, because there weren't high forces involved. (In the 1800s, when large power looms were set up in factories such as those at Lowell, Mass., I imagine they would have had an engineer on staff.) Re mills, I would guess that a broken mill would be repaired by the local millwright. But I doubt they were on-site.
Your model of costs vs. benefits is logical, but in practice there is uncertainty (about machine reliability / breakdowns) and people tend to avoid tail risk by seeking reliable machines. Also, previous standards of quality (that can be achieved by manual labor) tend to set a quality bar that machines have to meet before they are adopted. People don't like reducing quality, even if the efficiency gain theoretically makes up for it. At least, that's how it seemed to be in the early days of mechanization.
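Here's a toy sketch of that tail-risk point, with purely invented numbers: a machine can beat hand labor on expected value and still lose once you account for the chance of a ruinous breakdown mid-harvest.

```python
# Toy model: positive expected savings vs. tail risk of a mid-harvest breakdown.
# All values are invented for illustration.
import math

HARVEST = 100      # value of the crop if processed in time
MANUAL_COST = 20   # cost of threshing by hand
MACHINE_COST = 5   # operating cost of the machine when it works
BREAK_LOSS = 90    # grain lost/spoiled if the machine fails mid-season

def machine_ev(p_break):
    good, bad = HARVEST - MACHINE_COST, HARVEST - MACHINE_COST - BREAK_LOSS
    return (1 - p_break) * good + p_break * bad

def machine_log_utility(p_break):
    # log utility as a crude stand-in for a farmer's risk aversion
    good, bad = HARVEST - MACHINE_COST, HARVEST - MACHINE_COST - BREAK_LOSS
    return (1 - p_break) * math.log(good) + p_break * math.log(bad)

manual_ev, manual_u = HARVEST - MANUAL_COST, math.log(HARVEST - MANUAL_COST)
for p in (0.05, 0.15, 0.30):
    print(f"p(break)={p:.2f}  EV {machine_ev(p):5.1f} vs {manual_ev}   "
          f"logU {machine_log_utility(p):.3f} vs {manual_u:.3f}")

# At p(break)=0.15 the machine wins on expected value (81.5 vs 80) but loses on
# log utility; a risk-averse farmer sticks with the flail until reliability
# improves (here, until p(break) drops to roughly 0.05).
```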
See my reply to @johnswentworth re other benefits of the threshing machine, beyond labor-saving, and evidence that farmers were keenly interested in it.
For looms, rather than comparing loom to no loom, compare the frame looms available in 1700 to the weighted vertical looms or back strap looms from long before. For the printing press, I don't see how this is different. Books were not impossible to make; scribes made them by hand. Again, it was an efficiency gain.
You might be right that something like the spinning wheel, for example, had a stronger economic incentive than a threshing machine. I just don't think that's the main explanation for why it took ~50 years for the technology to diffuse rather than say 20–30.
This doesn't match with what I've read by and about farmers, at least not in the place and time of interest (Britain and US, late 18C on). Farmers were quite interested in improving yields and efficiency. There were many journals devoted to it and many experimental machines and techniques being tried. Capital was a limitation, but many farmers had nonzero capital.
Good thoughts. You're right that a threshing machine only applies to a portion of agricultural labor, for part of the year. (Then again, weaving is only a portion of the textile manufacturing process, and printing is only a portion of the book-making process.)
My first reaction is that there is lots of evidence of farmers being actively interested in threshing machines. See that block quote from McClelland about how many things George Washington tried. Farmers' journals have lots of stories about them. They got exhibited at fairs. When the compilation of The Commercial, Agricultural & Manufacturer’s Magazine came out, the preface to the whole volume mentioned the threshing machine, and then it's the very first article in the first issue. So there was a lot of interest from farmers.
Another point is that a good threshing machine did the job better than a human, so it not only saved labor, it saved grain. So it was increasing the farmer's harvest, in addition to decreasing costs. And this was something explicitly anticipated / hoped for, even as early as the 1636 “patent” I referenced.
There could be other motivations, too: one article I read mentioned that a threshing machine could help farmers get their harvest ready for market before the rivers froze for the winter.
And in fact, when a good threshing machine was invented in Scotland in 1786, it was adopted locally. It's just that adoption was slow to spread—even to England, let alone to the US (or the rest of the world).
So, although I didn't explicitly consider it, when I add it all up, I don't think lack of interest from farmers was an issue.
Cost was an issue. Some models were just too expensive, especially those imported from overseas. But there were also cheap models—they just didn't work reliably. It seems that an increase in reliability, rather than a decrease in cost, was the key to adoption.
Well, people could barely get computers working with electromechanical parts in the 1930s, and those machines weren't very practical. Just seems impossible on the face of it that you could get something serious working 100 years earlier.
The Difference Engine, as you correctly point out, was much more feasible, and Babbage probably could have finished building it, if he hadn't fumbled the project.
Re Hero's Engine, that's an interesting reference. Is there any evidence that this was ever built? (Old inventors drew up a lot of plans that were never implemented and may not even have worked.)
Re Babbage: The Difference Engine was not a computer. It was a calculating machine, but it was not programmable or general-purpose. (The Analytical Engine would have been a computer, but Babbage never even finished designing it.)
This story of innovation you tell about the industrial revolution is very different from the one told by Clayton Christensen.
… wool is generally better given that it doesn't get dirty as easily as cotton.
Well, that's one axis of value, but not the main one people care about. Wool is heavy and hot and can be scratchy; cotton is light and soft, very comfortable and good for summers and hot climates.
… mass-produced clothing fits less well than clothing that's tailored for individual people.
Sure. A tradeoff between cost and quality. It's better for most people to buy standard sizes off the rack. The rich can afford a more labor-intensive process. Maybe someday we'll have some sort of computerized tailor that can custom-fit clothes for everyone without human labor.
Low-hanging fruit alone doesn't explain stagnation, because our ability to pick the fruit has also been improving. To explain stagnation, you have to explain why the former is happening faster than the latter, and why this only started happening in the last ~50 years.
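One way to make that explicit (my framing, with purely illustrative growth rates): model progress per year as capability divided by the difficulty of the next discovery. Stagnation then requires the difficulty term to start outgrowing capability, and to have started only ~50 years ago.

```python
# Toy model of the low-hanging-fruit argument. Both capability and the
# difficulty of the next discovery grow; stagnation requires difficulty to
# grow *faster*, starting only recently. Growth rates here are invented.

def progress_rate(year, cap_g=0.03, diff_g=0.05, switch_year=1970):
    capability = (1 + cap_g) ** (year - 1900)
    if year <= switch_year:
        difficulty = (1 + cap_g) ** (year - 1900)        # difficulty keeps pace
    else:
        difficulty = ((1 + cap_g) ** (switch_year - 1900)
                      * (1 + diff_g) ** (year - switch_year))  # it pulls ahead
    return capability / difficulty

for y in (1920, 1950, 1970, 1990, 2020):
    print(y, round(progress_rate(y), 3))

# Output: flat at 1.0 through 1970, then declining. That's the shape a
# low-hanging-fruit story has to produce; the unexplained part is why the
# difficulty term would have accelerated right around then.
```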
All (most?) invention is engineering, but a lot of engineering is not invention.
Boeing employs many airplane engineers, but they don't really invent new planes. Facebook employs many software engineers but isn't inventing much in software. Both are doing product development engineering—which is fine and something the world certainly needs a lot of, but it's not the same thing.
I think anyone who wanted to be an inventor would train as an engineer. So the education/training part of the inventor career path is there. But it falls apart after university.
Our bodies are equipped with damage repair systems that are pretty darn effective at low dose rates. If this were not the case, then life would never have evolved as it has. Life started about 3 billion years ago when average background radiation was about 10 mSv/y, about 4 times the current average. Life without repair mechanisms would be impossible. But these repair mechanisms can be overwhelmed by high dose rate damage.
The repair mechanisms take a bewildering number of forms, all of which seem to have names requiring a dictionary. And the strategies are remarkably clever. At doses below 3 mSv, a damaged cell attempts no repair but triggers its premature death. However, at higher doses, it triggers the repair process.[23] This scheme avoids an unnecessary and possibly erroneous repair process when cell damage rate is so low that the cell can be sacrificed. But if the damage rate is high enough that the loss of the cell would cause its own problems, then the repair process is initiated. This magic is accomplished by activating/repressing a different set of genes for high and low doses. [page 15] LNT denies this is possible.
Even at the cell level, the repair process is fascinating. In terms of cancer, we are most interested in how the cell repairs breaks in its DNA. Single strand breaks are astonishingly frequent, tens of thousands per cell per day. Almost all these breaks are caused by ionized oxygen molecules from metabolism within the cell. MIT researchers observed that 100 mSv/y dose rates increased this number by about 12 per day. Breaks that snap only one side of the chain are repaired almost automatically by the clever chemistry of the double helix itself.
The interesting question is: what happens if both sides of the double helix are broken? Double strand breaks (DSB) also occur naturally. Endogenous, non-radiogenic causes generate a DSB about once every ten days per cell. Average natural background radiation creates a DSB about every 10,000 days per cell. However the break was caused, the DNA molecule is split in two.
Clever experiments at Berkeley show that the two halves migrate to “repair centers”, areas within the cell that are specialized in putting the DNA back together. Berkeley actually has pictures of this process (Figure 4.15); it is largely complete in about 2 hours for acute doses below 100 mSv and 10 hours for doses around 1000 mSv. These experiments show that if a “repair center” is only faced with one DSB, the repair process rarely makes a mistake in reconstructing the DNA. But if there are multiple breaks per repair center, then the error rate goes up drastically. A few of these errors will survive, and a few of those will result in a viable mutation that will eventually cause cancer. The key feature of this process is that it is non-linear. And it is critically dose-rate dependent. If the damage rate is less than the repair rate, we are in good shape. If the damage rate is greater than the repair rate, we have a problem.
The Berkeley work was part of the DOE funded Low Dose Radiation Research Program. Despite the progress at Berkeley and other labs and bipartisan congressional support, DOE shut the program down in 2015. When the DOE administrator of the program, Dr. Noelle Metting, attempted to defend her program, she was fired and denied access to her office. The program records were not properly archived as required by DOE procedures.
Footnote 23 says:
To be a bit more precise, some repairs can only take place in the G2 phase just before cell division. Radiation to the cell above 3 mSv activates the ATM gene, which arrests the cell in the G2 phase. This allows time for the repair process to take place.
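To illustrate the non-linearity described in that excerpt, here's a toy Poisson sketch (mine, not the book's model): if the double-strand breaks arriving at a repair center during one repair window are Poisson-distributed with a mean proportional to dose rate, then at low dose rates the chance that a center faces two or more breaks at once grows roughly with the square of the dose rate, while at high dose rates overload becomes near-certain.

```python
# Toy illustration (not the book's model) of repair-center overload.
# Assume breaks arriving at a repair center within one repair window are
# Poisson-distributed, with mean proportional to dose rate; misrepair is
# assumed to happen mostly when a center must handle 2+ breaks at once.
import math

def p_overloaded(mean_breaks_per_window):
    """P(2 or more breaks in one repair window) under a Poisson arrival model."""
    lam = mean_breaks_per_window
    return 1 - math.exp(-lam) * (1 + lam)

for lam in (0.01, 0.02, 0.1, 0.2, 1.0, 2.0):
    print(f"mean breaks/window = {lam:<4}: P(overload) = {p_overloaded(lam):.5f}")

# At low dose rates, doubling the mean roughly quadruples P(overload)
# (it behaves like lam^2 / 2); at high dose rates overload is near-certain.
# That is the qualitative "fine below repair capacity, bad above it" behavior.
```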
Yes, I remember that too—can't remember where I read about it, maybe Yergin's The Prize. The analogy that occurred to me was web/app analytics, especially the social media apps that learned to measure their “viral coefficient” around the late '00s.