Technology path dependence and evaluating expertise
post by bhauth, Muireall · 2024-01-05T19:21:23.302Z · LW · GW · 2 comments
I often wonder whether I'd be able to tell if a technology is uncompetitive for fundamental viability reasons or for reasons of path dependence. If a technology would ultimately be a winner once we'd worked out a few trillion dollars of kinks, but we never go down that road because we get higher returns in already-mature industries, isn't that kind of sad from a civilizational point of view?
I suspect this happens a lot in computing—the "bitter lesson" or "long arm of Moore's law"—your best bet is just to hitch yourself to the engine of the general purpose technology. This is becoming less true at least on the hardware side, which makes for an exciting time in the semiconductor industry. Specialization is increasingly worthwhile even if it means you miss out on gains from front-end-of-line improvements at leading-edge process nodes. It could also mean it's worth revisiting older ideas that got left by the wayside while Moore's law was at full steam, which is partly why I'm interested in how you might think about questions like this.
You spend a lot of time thinking about the viability of different technologies. Maybe to be concrete, we can talk about nuclear energy. You've mentioned you're less optimistic about the economics of unregulated nuclear power than a lot of people here. I don't have a strong view on it, but I've absorbed the impression that the viability question here is more on the path-dependent side—it might have been a better idea in the past, but even if you price in CO2 we've been hard on the learning curve for gas and coal while we've been relatively stagnant or even regressing on nuclear. (I'm not particularly attached to this view, just trying to start with a lightweight frame for discussion.) What's your picture of the economics here? Could it have come out otherwise in some alternate history?
I tend to divide tech path-dependence into 3 types:
- X is a standard and switching is hard (eg electrical sockets)
- more money was spent on developing X so it's currently better (eg silicon power electronics vs SiC and GaN)
- powerful groups support X because it exists (eg corn ethanol subsidies)
Here's an example of path dependence I've encountered:
Once, I had a job where I suggested methyl benzoate as a solvent for an application. Its properties were suitable, and I figured that, since methanol and benzoic acid are both cheap and produced on a large scale, and making methyl benzoate from them is fairly easy, it shouldn't be too expensive. But while it's available in small quantities (at high prices) for fragrances, it was hard to find a bulk source for, and it was much more expensive than chemicals that are harder to make.
Making it doesn't even require a giant scale. What keeps it from being used as an industrial solvent is "standardization" on one level, but at a lower level, the issues include:
- awareness of unused options from potential buyers
- the low-volume high-price market being more profitable than a lower-price higher-volume market
- the tacit knowledge involved in building chemical plants, even for well-understood reactions, which makes it harder for new groups to enter
If you think of society as a machine with some parts that are stuck due to friction, you might wonder where something could be lubricated or hit with a hammer to unstick it. But while friction is a macroscopic phenomenon, it consists of the collective effect of many microscopic interactions.
There's also an example that comes to mind of the opposite of path-dependence: gas turbines. Early engines used pistons and were driven by steam, but decades before (non-steam) gas turbines became practical, it was already a mainstream view among engineers that they'd become important. People were already building them when turbine design and available materials were such that gas turbines could barely make net power.
About nuclear power economics, I conveniently already wrote a post. But some fans of nuclear power don't find it very convincing.
When I say something like "of course the heat exchangers are a major cost, and corrosive molten salts would make them expensive" it relies on a whole pyramid of assumptions that I've built up. When people have conversations, there are usually several underlying layers, and the effectiveness of communication depends on the compatibility of those pyramids. Even if you try to go down to lower layers when communication fails, nobody has the time to do that really extensively, and some of the blocks in that pyramid are inevitably from social trust.
Thanks, I'm reading your nuclear post now. I wouldn't be surprised to find myself agreeing about heat exchangers. (Just about every thermodynamic cycle I've had to work with turned out to be bottlenecked by heat exchanger performance/cost in practice.)
[OK, done reading.] Makes sense to me. I suppose I have some broad questions about the nature of this pyramid. [May pick up some other threads later.]
- How confident are you about this sort of analysis? You do lead with "In my opinion" but otherwise sound very matter-of-fact.
- When you say something like a heat exchanger for molten salts is expensive, where is that information coming from? (For example, is it more from the direction of "when you try to procure one today, you get quoted about this much" or "if you break down the engineering requirements, it adds up"?) Or maybe my underlying questions are more like—how hard have people tried to bring the cost down or circumvent expensive requirements? (If your only customers are building nuclear subs, probably not that hard, relatively speaking?) To what extent does that figure into your thinking? (Maybe you only need to know that it's always going to be relatively expensive compared to a less demanding design? Or maybe you're only concerned with choosing a nuclear plant design today, because if I had an idea for a better heat exchanger, I should start by just selling that.) When you say something is too expensive for a purpose, is that more an elided calculation or a guess? (Or—how much room is there for you to change your mind if you thought about it harder?)
- The only "in my opinion" is:
"In my opinion, if you want to make an economically competitive nuclear power plant, steam is out."
When I use that phrase, it generally means that I would only be wrong if there's a factor I'm overlooking, but that's possible. An obvious possibility here is a broadening of what's considered "economically competitive" - perhaps due to geopolitical concerns about natural gas supply. Another possibility is somebody developing cheaper manufacturing methods for well-understood components like steam turbines. In general, it's less likely that I'm wrong than that a startup like TerraPower or NuScale is wrong, and my blog posts are usually as reliable as an above-average techno-economic analysis or survey paper, but it's the reader's responsibility to evaluate that sort of thing.
- Every estimate people make is based on extrapolation or interpolation from something. When I make cost estimates, I often start with the cost of something that exists and adjust it by relative production cost. It's hard to make an exact estimate of how much heat exchangers made with an unusual alloy would cost, but it's easy to say whether they'd be more expensive than standard heat exchangers. You can also compare the costs of different types of existing heat exchangers to estimate how much costs increase with material processing difficulty. The better you understand the materials and manufacturing processes, the more accurate an extrapolation you can make.
Another approach I often take is estimating the cost of components or materials that go into things. For example, you can say that a product using carbon fiber will cost at least as much as that amount of carbon fiber, and you can add some approximation for labor costs. (Some bloggers seem to take this way too far, doing cost estimates for stuff they're unfamiliar with based on the cost of some component, but stuff can be far more expensive than the raw materials.) In order to do this you have to know what materials and manufacturing approaches would be used, but I know most of their basic types well enough. Sometimes I underestimate costs because my guess at what a company is doing is better than their actual approach, but I've gotten better at adding a cost multiplier range for that.
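The component-cost-floor reasoning above can be sketched as a few lines of arithmetic. All the numbers here are made up for illustration (the post doesn't give any), and the 2x-5x multiplier range is just a hypothetical stand-in for labor, overhead, and uncertainty about the manufacturer's actual process:

```python
# Sketch of a bottom-up cost floor: a product can't cost less than its
# materials, so materials cost plus a labor/overhead multiplier range
# gives a rough bracket. All numbers are hypothetical.
carbon_fiber_kg = 40   # kg of carbon fiber in the hypothetical product
fiber_price = 25.0     # $/kg, assumed

material_floor = carbon_fiber_kg * fiber_price   # absolute lower bound

# Multiplier range standing in for labor, overhead, and process uncertainty.
low, high = material_floor * 2, material_floor * 5

print(f"floor ${material_floor:,.0f}, likely ${low:,.0f}-${high:,.0f}")
```

The point of the bracket, rather than a single number, is the caveat in the parenthetical: stuff can be far more expensive than its raw materials, so the floor alone is weak evidence.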
I'll often use multiple approaches and combine the resulting estimates, sometimes going back if they seem to disagree. Each step in each estimation involves a lot of implicit assumptions; you might say "Y costs about 2x as much as X" but if you start getting into the details, sometimes people want to know the why of Y or talk about some irrelevant exceptions, and it takes a lot of time. If the logic is simple enough I might explain it in a post, but I often just put estimates out there for people to collect and combine with other estimates however they want.
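One simple way to mechanize "combine the resulting estimates, going back if they seem to disagree" is a geometric mean with a disagreement check. This is my own sketch of the idea, not bhauth's stated procedure, and the numbers and the 2x disagreement threshold are arbitrary:

```python
from statistics import geometric_mean

# Hypothetical per-unit cost estimates from three independent approaches
# (e.g., scaling an existing product, a materials floor, a vendor quote).
estimates = [900.0, 1200.0, 1500.0]   # $/unit, made up

combined = geometric_mean(estimates)  # multiplicative errors -> geometric mean
spread = max(estimates) / min(estimates)

# A large spread means the implicit assumptions conflict; revisit them
# rather than trusting the average.
if spread > 2:
    print("estimates disagree; revisit assumptions")
print(f"combined ~ ${combined:,.0f} (spread {spread:.2f}x)")
```

The geometric mean is a natural choice when estimation errors are multiplicative ("Y costs about 2x as much as X"), which is how the relative-cost reasoning in this dialogue is framed.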
Got it. That sounds reasonable and reasonably modest.
(I hope I'm not being tedious. I think I had a concern like—there are people who will talk about technology very confidently with very little transparency about their reasoning. Sometimes it turns out that their reasoning, or at least parts of it that they don't flag as more speculative, won't withstand scrutiny. Aspects of your writing pattern-match onto that.)
When you write
> In general, it's less likely that I'm wrong than that a startup like TerraPower or NuScale is wrong, and my blog posts are usually as reliable as an above-average techno-economic analysis or survey paper
I wonder how you calibrate that. I generally expect startups to be optimizing their stories for attracting investment, not for being right, so I'm not troubled by the idea that some outside analyst would outperform them. But it takes a long time for things to shake out, and it's often not clear from outside why something really failed. (How much can I read into TerraPower's apparent pivot away from the traveling-wave reactor?)
Coming back around to the question of nuclear power and path dependence, it sounds like there's another sort of contingency that comes from decisionmakers just being unable to evaluate technical arguments. (That is, in the context of KS 150's problems, you write, "I'd argue that those incidents weren't the fault of the basic design, and if anything demonstrate decent resilience, but the problems at KS 150 were a significant reason why HWGCRs were not developed more. Administrators often see only that a technology was tried and had problems, because people often lie about the reasons for problems.")
Maybe this is more of a short-run problem than your other classes of path-dependence—do you expect that, if it really is a good idea, someone will eventually pick it back up? Or does it get locked out "by default" in some sense? Is writing publicly about technology a potential lubricant/hammer here?
> there are people who will talk about technology very confidently with very little transparency about their reasoning. Sometimes it turns out that their reasoning, or at least parts of it that they don't flag as more speculative, won't withstand scrutiny. Aspects of your writing pattern-match onto that.
Could you elaborate on what aspects those are? I've found that simple explanations are often useless and people with a lot of expertise often have a confident prediction that would be difficult to explain the reasoning for.
When I see things with obvious scientific problems like this post by Bill Gates (since we were talking about nuclear power startups) and look at comments on them, I don't generally see people saying "this is crackpottery". And apparently Bill Gates doesn't have anybody telling him about the problems, either. If I compare my blog posts to PR releases from MIT, it seems like they're the ones with no scientific/engineering rigor. But most people don't see things that way. Largely it's a matter of credentials and institutional backing, but are you saying there's something in particular about my writing style that could be improved in this regard?
> I wonder how you calibrate that.
It's a legitimate question, and good calibration of how confident people should be in their own evaluations is valuable in general. There are 3 basic ways I calibrate my estimation of how well I understand things: predictions, conversations, and literature searches.
My predictions about how things will go with various technologies have been fairly accurate. Sometimes I've been wrong, but usually it was because I was given bad information.
I've talked to a lot of postdocs and a decent number of professors about their research. If you have a good conversation, you can generally tell approximately how you compare to people in terms of how well you understand the topic and how smart you are in general.
As for literature searches, let me tell you how I start to look up a topic. There's usually some engineering problem I want solved, or some question I want answered. I start by thinking about how I would solve the problem or answer the question, and then I look up that approach to see if it's been used. Usually, my search results from guessing are close enough that I can fix some field-specific terminology and find what I'm looking for.
It used to be the case that I'd usually find that my guess was impractical and rejected. Over time, that changed to it usually being an obsolete approach used in the past, then to it usually being an approach sometimes used now. These days, my guesses are often what an active research program is pursuing. So, that provides some information for calibrating how well I understand topics.
About HWGCRs, people just aren't building many nuclear plants in general, and developing a new type isn't commercially worthwhile for companies now. The US government spent lots of money researching new reactor types, but that research was mostly about breeder reactors, because relying mostly on nuclear for power would eventually have made uranium supply a problem. I think they figured they'd get practical breeder reactors working first and work on cost reduction second.
> Could you elaborate on what aspects those are?
I think it's a combination of
- willingness to write about a broad range of technical topics
- low density of engagement with other work (supporting citations or specific opposing arguments)
I most often see that from people who think being a generalist is about being smart and reasoning from first principles, who often have a few correct insights but fail in their broader arguments for lack of context, and who essentially are not operating in a way where they can get feedback about how well they're doing. (It's clear enough to me at this point that you're coming at things differently.)
> are you saying there's something in particular about my writing style that could be improved in this regard?
Tough question. It seems like an important problem. I think I agree that adding simple explanations isn't necessarily much help and can be a distraction. Open Philanthropy gives one way of looking at this in terms of "reasoning transparency"—rather than trying to produce a version of an argument for a claim that the non-expert reader can evaluate, an author can give metadata that helps the reader judge the process that the claim comes from. I think that makes sense in some contexts, but I can also see arguments that in places it's the reader's job to make their own sense of things.
In theory, the advantage of being a generalist is that you have more context, because you can use context from other fields. But a lot of people haven't integrated their understanding of different fields even if they've studied several fields.
That sounds plausible to me. I'm a pretty thoroughgoing specialist, but within my field I might be relatively generalist (of the sort common enough coming out of small labs)—for most people I interact with on a project, I had to do a version of their job at some point. That's proven valuable in helping me talk to people, but I think it's been especially useful as far as I've integrated my experience into a big picture where I can reason about an entire iterative project loop (e.g., theory-design-fab-test).
At the same time there's a temptation to see everything in some domain I've brushed up against through the lens of the particular challenges that I encountered—a sort of tunnel vision or indexing too strongly on my own experiences. Tricky to guard against that. Talking to lots of different people helps, probably.
Individually indexing too strongly on experiences as a metaphor for societal path dependence? Interaction between different cultures/societies as a metaphor for talking to various people?
Telling what things could have gone differently, and how, is pretty difficult. I'm reminded of David Chapman saying:
> My answer to "If not Bayesianism, then what?" is: all of human intellectual effort. Figuring out how things work, what's true or false, what's effective or useless, is "human complete." In other words, it's unboundedly difficult, and every human intellectual faculty must be brought to bear.
Having seen my blog, do you think there's something in particular I should've focused on, ignoring other areas?
Yeah, I think there's some analogy between scales here. I'd guess in that picture inefficiencies of communicating or integrating information would be a lot more punishing at a societal scale than at the individual/lab scale. That's part of why I still feel like there's so much promise in thinking in public.
I like that Chapman line. Same deal with figuring out how much to listen to which experts, or from the other side how to write correct things so they're correctly persuasive. There's no one weird trick, nor could there have been. That's the whole problem!
For your blog, I'm not sure. I suppose it depends on your goals. Being prolific and wide-ranging can be its own virtue. I think I mainly got stuck orienting myself—what are the common threads here, where is your experience deepest, where are you just putting ideas out there, etc. I can relate to not wanting to put personal information online, and I'm not interested in a resume or credentials, but I bet more of an "about" page or even just "here are some of my favorite posts on this blog and why" would mean fewer people bounce off.
> what are the common threads here, where is your experience deepest, where are you just putting ideas out there
So, I want to ask: how much have you found that to matter? When you control for, say, IQ and interest in a topic, do credentials mean as much as they're taken to? And how skeptical should we be of John Carmack or Elon Musk doing a new technical startup in an area they didn't have experience with? (If you look at Carmack's twitter, btw, you can see he tweets a lot of flawed ideas about various areas of engineering he hasn't worked on.)
I think these things are useful as cues for where to pay attention on the margin. Maybe a pure version shows up in forecasting. If someone writes down a number that's different from mine, what do I do with it? I think it's easy to do better than "if they have credentials, move my number towards theirs". If the credential is a forecasting track record, maybe one could argue for that move, but I tend to be more interested in figuring out what, if anything, might be going into that forecast that didn't make it into mine. A credential is barely a cue to look closer, but if they say they're drawing on some specific experience that's a little better. A note that they didn't really think too hard about it and are just aggregating general impressions would be another kind of cue, mainly for me to move on and avoid double counting. If they say they're using some base rate and adjusting, or breaking the question down in a certain way, that can tell me more about what they know that I don't than the specific numbers they plugged in.
The more general version, I think, similarly isn't specifically indexing on credentials so much as on "how does this person conceptualize what they're doing?". There are some technical startup people who mouth off on Twitter, but they make it clear they're just having fun/spitballing/baiting people to poke holes or whatever. That's not really going to affect how closely I look at their other ideas. Then (exaggerating for contrast) there are people who post like it might as well be their primary contribution to the public sphere, like they don't really think of the process that drives their posting any differently from the process that drives their startup. If those ideas are flawed, I'll probably pay less attention to their real work.
I don't know if that really answers the question. If someone can explain how their credential bears on their credibility, or if they otherwise make it clear how they're drawing on their experience, then I can think about whether I'll buy it or guess that they're over-indexing. If they have some other story about what they're doing, I'll consider that in the same way. That's what I'm looking for more than the bare credential.
Does this actually work? In forecasting, I feel pretty good about it. (I've been experimenting on the Metaculus quarterly tournaments with going "by my own lights" more, trying to counter my historical underconfidence, and doing... quite poorly, mostly from a modest number of big misses where I might have noticed I was missing something.) For more ambiguous questions, I don't put a lot of hope in "meta" methods, and it's probably more that I have to filter somehow and tell myself a story about it.
How does that compare to how you see MBA executives and big investment firms making decisions? Or to hiring policies at big companies?
I don't know if this is what you were asking, but my blog posts are typically a byproduct of me trying to design something. For example, the shoes with springs post came from me thinking about potential solutions to problems that high heels have. The micro-fulfillment post came from me thinking: "Micro-fulfillment could be important, and I should probably learn a bit more about robotics, so I guess I'll try designing some micro-fulfillment systems". In the former case I posted most of my conclusions, while in the latter case I didn't.
I don't have any insight into investment firms. It's easy to see executives making decisions I think are bad, but I mostly chalk that up to their goals being different from mine rather than to the decisionmaking process.
My impression is that big companies filter on credentials early but try not to advertise it. A lot of policy is constrained by the need for standardization (in the sense of a consistent methodology/standard/etc that can be communicated at scale to interchangeable recruiters, hiring managers, and interviewers) and a semblance of objectivity, and looks pretty stupid if you ignore that. Certainly a lot of postings list requirements that don't make perfect sense but are things a recruiter can be trusted to check. Everyone loves the idea of work samples and pretends their interview process accomplishes the same thing. Standard banks of hypotheticals and puzzles are pretty common but seem doomed to miss strengths (like taking forecasts into account based on forecaster track record but never asking "what do they know that I don't"). Some places focus on getting equivalent information by asking about your past experience, which I like better and think illuminates CV info in the way I describe above more effectively, but this also seems trickier to do in an unbiased way across candidates.
> my blog posts are typically a byproduct of me trying to design something
I think that is the sort of thing I'm interested in hearing, although at this point maybe it doesn't matter so much to how I understand those posts.
> at this point maybe it doesn't matter so much to how I understand those posts.
Could you explain more about how you do that kind of evaluation?
> I think I had a concern like—there are people who will talk about technology very confidently with very little transparency about their reasoning. Sometimes it turns out that their reasoning, or at least parts of it that they don't flag as more speculative, won't withstand scrutiny. Aspects of your writing pattern-match onto that.
I think that's another tower of signalling and counter-signalling:
- executive and investor types who really want to see very high confidence
- hyper-confident people trying to appeal to them
- you seeing those people and pattern matching
But while I interact with that system/culture, I exist outside of it, if you know what I mean.
I've had some interactions with MBA types where I had a main plan and then some fallbacks and contingency plans, and when I started talking about contingency plans they were absolutely disgusted at me making them.
That could be what I'm seeing. If someone's trying to impress investors on Twitter by putting on a certain persona, that's obnoxious but I'll mostly want to ignore it. If it seems like they buy their own bullshit, that's when I'll tend to write them off entirely. (To me, this is plausibly another case of investors/executives having different goals rather than using what I think are just bad proxies for technical merit.)
I'm not doing anything particularly complicated. If someone's posting the first thought that comes into their head on a subject, or if they indicate the particular angle they're coming from, that's useful to know because I can be more confident a critique from some other angle will land and I can look over there. If it's the partial output of a more thorough research or design process, or if something else actually depended on them being right about this part, I can probably assume that they've already thought of things that seem obvious to me.
Once I've read and thought about it enough, my evaluation can screen off what the author thought they were doing, so it's mostly a way of saving myself time. (That and the contrast between "what they thought they were doing" and "what they were really doing" feeds back into my evaluation of how much attention to pay the author in the future.)
Double-counting information is certainly common. Newspapers will copy each other, social media sites will copy each other, and then people will update separately on each story they see. That's a simple example that you can partly compensate for, but in general it seems difficult to do much about that problem; tracking the pedigree of information isn't usually practical for people.
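The cost of double counting can be made concrete with a toy Bayesian update: treating a copied story as a second independent report inflates confidence. The numbers below are arbitrary, chosen only to show the effect:

```python
# Toy illustration of double counting: updating on the "same" evidence
# twice overstates confidence. Likelihood ratio of 3 is arbitrary.
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability after an update with the given likelihood ratio."""
    odds = prior / (1 - prior)
    post_odds = odds * likelihood_ratio
    return post_odds / (1 + post_odds)

p = 0.5
once = bayes_update(p, 3.0)      # one independent report
twice = bayes_update(once, 3.0)  # same story seen again via a second outlet
print(once, twice)               # prints 0.75 0.9
```

Correctly tracking that the second story has the same pedigree would leave the posterior at the first value, which is exactly the bookkeeping that isn't practical for people at scale.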
2 comments
Comments sorted by top scores.
comment by df fd (df-fd) · 2024-01-05T22:59:47.009Z · LW(p) · GW(p)
I feel like the dialogue has drifted from its original question, so if I may ask a question.
What I'm hearing is that bhauth formed his opinions by extrapolating from current projects, reading papers, and talking to experts in the field. And while I certainly can't demand that he declare his sources and present his whole chain of thought from start to finish, it certainly makes it hard to verify those claims even if there is a will to verify them.
E.g. bhauth stated that heat exchangers are expensive, yet I have no grounding for what that means: is $1,000/unit expensive? Is $1,000,000,000/unit expensive? A quick google search finds people talking about the cost of heat exchangers but not what it means.
bhauth stated that the cost of lab-grown meat is too high because contamination is a huge problem and the required inputs are much too expensive, but I've talked to a guy who said he worked on commercial lab-grown meat, and he was not particularly concerned about those things compared to other concerns.
I mean, the guy could be uninformed or incentivised to misinform me. But again, I have no way to verify who is more trustworthy.
Maybe it would be easier for people like me if bhauth put up something like 100 prediction markets that would resolve in the next 1-3 years; then, when the markets resolved, we would be able to form our beliefs regarding his expertise.
[This part is only relevant to me: I come from a culture with heavy social punishment for people who seem arrogant, and bhauth's writing sometimes comes off as such (e.g., all those startups are chasing dead-end paths), so I may have subconsciously applied a negative modifier to his writing.]
comment by bhauth · 2024-01-05T23:46:48.585Z · LW(p) · GW(p)
Heat exchanger costs are typically quoted in $ per kW per degree of temperature difference.
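That unit makes the cost question concrete: a heat exchanger's size is roughly set by its UA (heat transfer per degree of driving temperature difference, in kW/K), so a $-per-(kW/K) figure converts directly into a price. Every number below is hypothetical, purely to show how the unit is used:

```python
# Illustrative only: required UA follows from the heat duty and the
# available temperature difference; cost then scales with UA.
duty_kw = 50_000       # heat to transfer (kW) -- made-up plant scale
delta_t_k = 20.0       # mean temperature difference across the exchanger (K)

ua = duty_kw / delta_t_k   # required UA in kW/K

cost_per_ua = 300.0        # $ per (kW/K), hypothetical figure
cost = ua * cost_per_ua

print(f"UA = {ua:.0f} kW/K, cost ~ ${cost:,.0f}")
```

This also shows why corrosive molten salts hurt twice: an exotic alloy raises the $/(kW/K) figure, and a design constrained to lower temperature differences raises the required UA.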
Maybe you got a different impression, but I don't care that much about correcting people who are wrong on the internet. I care much more about understanding things myself and designing useful new technology.
In terms of incentives, prediction markets are a losing proposition on average, plus they take time and effort, plus they tie up money and increase risk unless you're hedging something else. If they would get me jobs then I'd care more, but the people with real power don't particularly care about past performance on them, and if I'm going to bet money on things I'd rather do it with stocks, which is more normal for a reason.