I spent eighteen months working for a quantitative hedge fund, so we were using financial data -- that is, accounts, stock prices, things that are inherently numerical. (Not like, say, trying to define employee satisfaction.) And we got the data from dedicated financial data vendors, the majority from a single large company that had already spent a lot of effort standardising it and making it usable. We still spent a lot of time on data cleaning.
The education system also tells students which topics they should care about and think about. Designing a curriculum is a task all by itself, and if done well it can be exceptionally helpful. (As far as I can tell, most universities don't do it well, but there are probably exceptions.)
A student who has never heard of, say, a Nash equilibrium isn't spontaneously going to Google for it, but if it's listed as a major topic in the game theory module of their economics course, then they will. And yes, it's entirely plausible that, once students know what to google for, then they find that YouTube or Wikipedia are more helpful than their official lecture content. Telling people they need to Google for Nash equilibria is still a valuable function.
As Richard Kennaway said, there are no essences of words. In addition to the points others have already made, I would add: Alice learns what the university tells her to. She follows a curriculum that someone else sets. Bob chooses his own curriculum. He decides for himself what he wants to learn. In practice, that points to a big difference in their personalities, and it probably means that they end up learning different things.
While it's certainly possible that Bob will choose a curriculum similar to a standard university course, most autodidacts end up picking a wildly different one. Maybe the university's standard chemistry course includes an introduction to medical drugs and biochemistry, and Bob already knows he doesn't care about that, so he skips that part. Maybe the university's standard course hardly mentions superconducting materials, but Bob unilaterally decides to read everything about them and make them his main focus of study.
The argument given relies on a potted history of the US. It doesn't address the relative success of UK democracy, which even British constitutional scholars sometimes describe as an elective dictatorship and which notoriously doesn't give a veto to minorities. It doesn't address the history of France, Germany, Italy, Canada, or any other large successful democracy, none of which use the US system, and most of which aren't even presidential.
If you want to make a point about US history, fine. If you want to talk about democracy, please try drawing from a sample size larger than one.
I second GeneSmith’s suggestion to ask readers for feedback. Be aware that this is something of an imposition and that you’re asking people to spend time and energy critiquing what is currently not great writing. If possible, offer to trade - find some other people with similar problems and offer to critique their writing. For fiction, you can do this on CritiqueCircle but I don’t know of an organised equivalent for non-fiction.
The other thing you can do is to iterate. When you write something, say to yourself that you are writing the first draft of X. Then go away and do something else, come back to your writing later, and ask how you can edit it to make it better. You already described problems like using too many long sentences. So edit your work to remove them. If possible, aim to edit the day after writing - it helps if you can sleep on it. If you have time constraints, at least go away and get a cup of coffee or something in order to separate writing time from editing time.
First, I just wanted to say that this is an important question and thank you for getting people to produce concrete suggestions.
Disclaimer, I’m not a computer scientist so I’m approaching the question from the point of view of an economist. As such, I found it easier to come up with examples of bad regulation than good regulation.
Some possible categories of bad regulation:
1. It misses the point
- Example: a regulation only focused on making sure that the AI can’t be made to say racist things, without doing anything to address extinction risk.
- Example: a regulation that requires AI-developers to employ ethics officers or risk management or similar without any requirement that they be effective. (Something similar to cyber-security today: the demand is that companies devote legible resources to addressing the problem, so they can’t be sued for negligence. The demand is not that the resources are used effectively to reduce societal risk.)
NB: I am implicitly assuming that a government which misses the point will pass bad regulation and then stop because they feel that they have now addressed ‘AI safety’. That is, passing bad legislation makes it less likely that good legislation is passed.
2. It creates bad incentives
- Example: from 2027 the government will cap maximum permissible compute for training at whatever the maximum used by that date was. Companies are incentivised to race to do the biggest training runs they can before that date.
- Example: restrictions or taxes on compute apply to all AI companies unless they’re working on military or national security projects. Companies are incentivised to classify as much of their research as possible as military, meaning the research still goes ahead, but it’s now much harder for independent researchers to assess safety, because now it’s a military system with a security classification.
- Example: the regulation makes AI developers liable for harms caused by AI but makes an exception for open-source projects. There is now a financial incentive to make models open-source.
3. It is intentionally accelerationist, without addressing safety
- A government that wants to encourage a Silicon Valley type cluster in its territory offers tax breaks for AI research over and above existing tax credits. Result: they are now paying people to go into capabilities research, so there is a lot more capabilities research.
- Industrial policy, or supply chain friendshoring, that results in a lot of new semiconductor fabs being built (this is an explicit aim of America's CHIPS Act). The result is a global glut of chip capacity, and training AI ends up a lot cheaper than in a free-market situation.
Although clown attacks may seem mundane on their own, they are a case study proving that powerful human thought steering technologies have probably already been invented, deployed, and tested at scale by AI companies, and are reasonably likely to end up being weaponized against the entire AI safety community at some point in the next 10 years.
I agree that clown attacks seem to be possible. I accept a reasonably high probability (c. 70%) that someone has already done this deliberately - the wilful denigration of the Covid lab leak seems like a good candidate, as you describe. But I don't see evidence that deliberate clown attacks are widespread. And specifically, I don't see evidence that they are being used by AI companies. (I suspect that most current uses are by governments.)
I think it's fair to warn against the risk that clown attacks might be used against the AI-not-kill-everyone community, and that this might have already happened, but you need a lot more evidence before asserting that it has already happened. If anything, the opposite has occurred, as the CEOs of all major AI companies signed onto the declaration stating that AGI is a potential existential risk. I don't have quantitative proof, but from reading a wide range of media across the last couple of years, I get the impression that the media and general public are increasingly persuaded that AGI is a real risk, and are mostly no longer deriding the AGI-concerned as being low-status crazy sci-fi people.
Do you still think your communication was better than the people who thought the line was being towed, and if so then what's your evidence for that?
We are way off topic, but I am actually going to say yes. If someone understands that English uses standing-on-the-right-side-of-a-line as a standard image for obeying rules, then they are also going to understand variants of the same idea. For example, "crossing a line" means breaking rules/norms to a degree that will not be tolerated, as does "stepping out of line". A person who doesn't grok that these are all referring to the same basic metaphor of do-not-cross-line=rule is either not going to understand the other expressions or is going to have to rote-learn them all separately. (And even after rote-learning, they will get confused by less common variants, like "setting foot over the line".) And a person who uses tow not toe the line has obviously not grokked the basic metaphor.
To recap:
- original poster johnswentworth wrote a piece about people LARPing their jobs rather than attempting to build deeper understanding or models-with-gears
- aysja added some discussion about people failing to notice that words have referents, as a further psychological exploration of the LARPing idea, and added tow/toe the line as a related phenomenon. They say "LARPing jobs is a bit eerie to me, too, in a similar way. It's like people are towing the line instead of toeing it. Like they're modeling what they're "supposed" to be doing, or something, rather than doing it for reasons."
- You asked for further clarification
- I tried using null pointers as an alternative metaphor to get at the same concept.
No one is debating whether learning the etymology of words is important, and I'm not sure how you got hung up on that idea. Toe/tow the line is just an example of the problem of people failing to load the intended image/concept, while LARPing (and believing?) that they are in fact communicating in the same way as people who do.
Does that help?
Not sure I understand what you're saying with the "toe the line" thing.
The initial metaphor was ‘toe the line’, meaning to obey the rules, often reluctantly. Imagine a do-not-cross line drawn on the ground and a person coming so close to the line that their toe touches it, without actually crossing it. Substituting “tow the line”, which has a completely different literal meaning, shows that the person has failed to comprehend the metaphor and has simply adopted the view that this random phrase has this specific meaning.
I don’t think aysja adopts the view that it’s terrible to put idiomatic phrases whole into your dictionary. But a person who replaces a meaningful specific metaphor with a similar but meaningless one is in some sense making less meaningful communication. (Note that this also holds if the person has correctly retained the phrase as ‘toe the line’ but has failed to comprehend the metaphor.)
aysja calls this failing to notice that words have referents, and I think that gets at the nature of the problem. These words are meant to point at a specific image, and in some people's minds they point at a null instead. It's not a big deal in this specific example, but a) some people seem to have an awful lot of null pointers and b) sometimes the words pointing at a null are actually important. For example, think of a scientist who can parrot that results should be ‘statistically significant’ but literally doesn't understand the difference between doing one experiment and reporting the significance of the results, and doing 20 experiments and only reporting the one ‘significant’ result.
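To make that last example concrete, here is a toy simulation (not anyone's actual methodology; the 5% threshold is just the conventional p < 0.05 false-positive rate): a lab that runs twenty experiments on a true-null effect and reports only the one that comes up 'significant' will 'find' something roughly two times out of three.

```python
import random

def null_experiment() -> bool:
    """One experiment where the true effect is zero.
    By construction it looks 'significant' about 5% of the time (p < 0.05)."""
    return random.random() < 0.05

def fraction_with_a_hit(num_experiments: int, trials: int = 100_000) -> float:
    """Fraction of simulated labs that get at least one 'significant' result
    when they run num_experiments null experiments and report only the best one."""
    hits = sum(
        any(null_experiment() for _ in range(num_experiments))
        for _ in range(trials)
    )
    return hits / trials

print(fraction_with_a_hit(1))   # ~0.05: one experiment, reported honestly
print(fraction_with_a_hit(20))  # ~0.64: twenty experiments, only the 'hit' reported
```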
NB: the link to the original blog on the Copenhagen Interpretation of Ethics is now broken and redirects to a shopping page.
Yes. But I think most of us would agree that coercively-breeding or -sterilising people is a lot worse than doing the same to animals. The point here is that intelligent parrots could be people who get treated like animals, because they would have the legal status of animals, which is obviously a very bad thing.
And if the breeding program resulted in gradual increases in intelligence with each generation, there would be no bright line where the parrots at t-minus-1 were still animals but the parrots at time t were obviously people. There would be no fire alarm to make the researchers switch over to treating them like people, getting informed consent etc. Human nature being what it is, I would expect the typical research project staff to keep rationalising why they could keep treating the parrots as animals long after the parrots had achieved sapience.
(There is separate non-trivial debate about what sapience is and where that line should be drawn and how you could tell if a given creature was sapient or not, but I’m not going down that rabbit hole right now.)
You ask if we could breed intelligent parrots without any explanation of why we would want to. In short, the fact that we can doesn't mean we should. I’m not 100% against the idea, but anyone trying this seriously needs to think about questions like:
- At what point do the parrots get legal rights? If a private effort succeeds in breeding intelligent parrots without government buy-in, it will in effect be creating sapient people who will be legally non-persons and property. There are a lot of ways for that to go wrong.
- ETA: presumably the researchers will want to keep controlling the parrots’ reproduction, even as the parrots become more intelligent. What happens if the parrots have their own ideas about who to breed with? Or the rejected parrots don’t want to be sterilised? Will the parrot-breeders end up repeating some of the atrocities of the 20th century eugenics movement because they act like they’re breeding animals even once they are breeding people?
- Is there a halfway state where they bred semi-intelligent parrots that are smarter than normal parrots but not as smart as people? (Could also be the result of a failed project.) What happens to them? At what stage does an animal become intelligent enough that keeping it as a pet is wrong? What consequences will there be if you just release the semi-intelligent parrots into the wild?
- What protections are there if the parrot-breeding project runs out of funds or otherwise fails? Will it end up doing the equivalent of releasing a bunch of small children or mentally handicapped people into the wild where they’re ill-equipped to survive, because young intelligent parrots don’t get the legal protections granted to human children?
If there was a really compelling reason to breed intelligent parrots, then these objections could be overcome. But I don’t get any sense from you of what that compelling reason is. “Somebody thinks it sounds cool” is a good reason to do a lot of things, but not when the consequences involve something as ethically complex as creating a sapient species.
Metaculus lets you write private questions. Once you have an account, it’s as simple as selecting ‘write a question’ from the menu bar and then setting the question to private rather than public, via a droplist in the settings when you write it. You can resolve your own questions, ie mark them as yes/no or whatever, and then it’s easy to use Metaculus’ tools for examining your track record, including your Brier score.
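If you want to sanity-check the number Metaculus gives you, the Brier score is just the mean squared error between your forecast probabilities and the 0/1 outcomes. A minimal sketch, with made-up questions and numbers:

```python
def brier_score(forecasts):
    """Mean squared error between forecast probability and outcome (1 = yes, 0 = no).
    0.0 is perfect; always forecasting 50% scores 0.25; lower is better."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical track record of three resolved private questions
my_record = [
    (0.9, 1),  # forecast 90%, resolved yes
    (0.3, 0),  # forecast 30%, resolved no
    (0.7, 0),  # forecast 70%, resolved no
]
print(brier_score(my_record))  # ~0.197
```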
@andeslodes, congratulations on a very good first post. You clearly explained your point of view, and went through the text of the proposed Act and the background of the relevant Senators in enough detail to understand why this is important new information. I was already taking the prospect of aliens somewhat-seriously, but I updated higher after this post.
I notice that Metaculus is at just a 1.1% probability of confirmed alien tech by 2030, which seems low.
I had not, and I still don't know about it. Can you post a link?
Thank you for taking the time to highlight this. I hope that some LessWrongers with suitable credentials will sign up and try to get a major government interested in x-risk.
I see a lot of people commenting here and in related posts on the likelihood of aliens deliberately screwing with us and/or how improbable it is that advanced aliens would have bad stealth technology or ships that crash. So I wanted to add a few other possible scenarios into the discussion:
- Earth is a safari park where people go to see the pristine native wildlife (us). Occasionally some idiot tourist gets too close and disturbs that wildlife despite the many very clear warnings telling them not to. (Anyone who has ever worked with human tourists will probably have some sympathy for this explanation.)
- Observation of Earth is baby's first science project. High-schoolers and college undergrads can practice their anthropology/xenology/whatever on us. Yes, they're supposed to keep their existence secret from the natives, otherwise it wouldn't be good science, but they're kids with cheap disposable drones and sometimes they screw up.
- There is a Galactic Federation with serious rules about not disturbing primitive civilisations (us), and only accredited scientists can go observe them, but even scientists get bored and drunk and sometimes do dumb stuff when they're on a low-prestige project a long way from any supervisors.
Obviously, these are all human-inspired examples, aliens could have other motivations incomprehensible to us. (Imagine trying to explain a safari park to a stone-age hunter-gatherer. Then remember that aliens potentially have a comparable tech gap to us and non-human psychology.)
Some takeaways from the scenarios above:
- Aliens aren't necessarily monolithic. There may be rule-setting entities (whoever says 'don't go near the wildlife') which are separate from rule-following entities, and rules about not bothering Earthlings may not be maximally enforced.
- We shouldn't assume we're seeing their most advanced tech. Human beings can manufacture super-safe jet planes that approximately never crash. We still build cheap consumer drones that sometimes fall out of the sky. Saying "how come an ultra-advanced alien civilisation can't build undetectable craft?" is like looking at a kid whose drone just crashed in a park and saying "you'd think humanity could build planes that stay in the air". We can. We just don't always do it, nor should we.
- We shouldn't assume they care that much. Whoever is in charge of enforcing the rules about "don't bother earthlings" might be the equivalent of a bored bureaucrat whose budget just got cut and who gets paid the same whether or not they actually do their job. Or a schoolteacher who's more interested in getting the kids to complete their project than in complying with every pettifogging rule that no one ever checks anyway.
I realise all the above makes it sound like I believe in aliens, so for the record, I think that Chinese drones or other mundane causes are the most likely explanations for American UAP reports, and that hoax/disinformation-op/mental-breakdown are the most likely explanations for the Grusch whistleblower claims. But I would put about a 10% probability on actual aliens, which I realise is a lot higher than most LessWrongers.
Possible yes, but if all advanced civs are highly prioritising stealth, that implies some version of the Dark Forest theory, which is terrifying.
I can come up with a hypothesis about the behaviour of the sources: the drones you send to observe and explore a planet might be disposable. (Eg we’ve left rovers behind on Mars because it’s not worth the effort to retrieve them from the gravity well.) Although if the even-wilder rumours about bio-alien corpses are true, that one fails too.
But the broader picture: that there are high-tech aliens out there who we haven’t observed doing things like building Dyson spheres or tiling the universe with computronium? They’re millions of years ahead of us and somehow didn’t either progress to building mega-tech or to AI apocalypse? They’re not millions of years ahead of us and there’s some insane coincidence where two intelligent species emerged on different planets at the same time but also there are no older civs that already grabbed their lightcone? I’m as boggled as you.
I’m kind of hoping this whole thing is a hoax or deliberate disinformation operation or something because I have absolutely no idea what to think about the alternative. But after the amount of leaks about UAPs over the last few years, I’m at at least 10% that there are literal alien spacecraft visiting our planet.
I don’t think the hyperloop matters one way or the other to your original argument (which I agree with). Someone can be a genius and still make mistakes and fail to succeed at every single goal. (For another example, consider Isaac Newton who a) wasted a lot of time studying alchemy and still failed to transform lead into gold and b) screwed up his day job at the Royal Mint so badly that England ended up with a de facto gold standard even though it was supposed to have both silver and gold currency. He’s still a world-historic genius for inventing calculus.)
OP discusses CFCs in the main post. But yes, that’s the most hopeful precedent. The problem being that CFCs could be replaced by alternatives that were reasonably profitable for the manufacturers, whereas AI can’t be.
The child labour example seems potentially hopeful for AI given that fears of AI taking jobs are very real and salient, even if not everyone groks the existential risks. Possible takeaway: rationalists should be a lot more willing to amplify, encourage and give resources to protectionist campaigns to ban AI from taking jobs, even though we are really worried about x-risk not jobs.
Related point: I notice that the human race has not banned gain-of-function research even though it seems to have high and theoretically even existential risks. I am trying to think of something that's banned purely for having existential risk and coming up blank[^1].
Also related: are there religious people who could be persuaded to object to AI in the same way they object to eg human gene editing? Can we persuade religious influencers that building AI is 'playing God' in some way? (Our very atheist community are probably the wrong people to reach out to the religious - do we know any intermediaries who could be persuaded?)
Or to summarise: if we can't get AGI banned/regulated for the right reasons (and we should keep trying), can we support or encourage those who want to ban AGI for the wrong reasons? Or at minimum, not stand in their way? (I don't like advocating Dark Arts, but my p(doom) is high enough that I would encourage any peaceful effort to ban, restrict, or slow AI development, even if it means working with people I disagree with on practically everything else.)
[^1]: European quasi-bans on genetic modification of just about anything are one possibility. But those seem more like reflexive anti-corporatism, plus religious fear of playing God, plus a pre-existing precautionary attitude applied to food items.
One more question for your list: what industries have not been subject to this regulatory ratchet and why not?
I’m thinking of insecure software, although others may be able to come up with more examples. Right now software vendors have no real incentive to ship secure code. If someone sells a connected fridge which any thirteen-year-old can recruit into their botnet, there’s no consequence for the vendor. If Microsoft ships code with bugs and Windows gets hacked worldwide, all that they suffer is embarrassment[1]. And this situation has stayed stable since the invention of software. Even after high-publicity incidents like Heartbleed or NotPetya, the industry hasn’t suffered the usual regulatory response of ‘something must be done, this is something, so we’re going to do it’.
You don’t have to start with a pre-approval model. You could write a law requiring all software to be ‘reasonably secure’ and create a 1906-FDA-type policing agency that would punish insecure software after the fact. That would be the first step of the regulatory ratchet, and it seems feasible, but no one (in any country?) has done it, and I don’t know why.
We’ve also had demonstrations in principle of the ability to hack highly-regulated objects like medical devices and autos, but still no ratchet even in regulated domains and I don’t understand why not.
My best explanation for this non-regulation is that nothing bad enough has happened. We haven’t had any high-profile safety incidents where someone has died, and that is what starts the ratchet. But we have had high-profile hacks, and at least one has taken down hospitals and killed someone in a fairly direct way, and I don’t even remember any journalists pushing for regulation. I notice that I’m confused.
Software is an example of an industry where the first step of the ratchet never happened. Are there any industries which got a first step like a policing-agency and then didn’t ratchet from there even after high-publicity disasters? Can we learn how to prevent or interrupt the regulatory ratchet?
[1] Bruce Schneier has been pushing a liability model as a possible solution for at least ten years, but nothing has changed.
There are already jurisdictions where prostitution is legal, including the Netherlands, the UK and the US state of Nevada. (Not a complete list, just the first three I thought of off the top of my head.) None of them require people to prostitute themselves rather than accessing public benefits.
Likewise, there are countries, including the USA, where it's legal to pay people for donating human eggs, and probably other body parts. So far as I know, no state in the US requires women to attempt that before accessing welfare, and the US welfare system is less generous than European ones.
Empirically, your concern seems not to have any basis in fact.
Thanks, that’s a good example. I’ll think about it.
I think I overstated slightly. And I’m focusing on the rationale for taking away options as much as the taking away itself. I’d restate to something like: taking people’s options away for their own good, because you think they will make the wrong decisions for themselves, is almost always bad.
There’s a discussion further down the thread about arms race dynamics, where you take away options in order to solve a coordination problem; I accept that this is sometimes a good idea. Note that the arms race example recognises that everyone involved is behaving in a way that is individually rational. But the idea that politicians and regulators, living generally comfortable lives, know better than poor people what is good for them is something I really object to. It reminds me of the Victorian reply to the women’s rights movement: that male relatives should be able to control women’s lives because they could make better decisions than women would make for themselves. Ugh.
To the specific sex example, yes it’s unpleasant to be in that situation, everyone agrees. The problem is that banning payment in sex forces people into situations they find even worse, like homelessness. I would prefer governments to solve these problems constructively, like by building more housing, and said so in a footnote to the main post, but in the meantime we should stop banning poor people from doing the best they can to cope with the world that actually exists.
The game theory example ignores the principal-agent effect. We are not talking about you rationally choosing to give up some of your options. We are talking about someone else, who is not well-aligned with you, taking away your options, generally without input from you.
I’m also introverted and nerdy bordering on autistic, so I can’t make a claim that my experiences are different from yours in that sense. I think some of my perspective comes from growing up in developing countries and knowing what real poverty looks like, even though I haven’t experienced it myself. And some of my perspective is that I value my own personal autonomy very highly, so I oppose people who want to take autonomy away from others, and that feeling seems to be stronger than it is for most people.
This strikes me as a fully general argument against making any form of moral progress. Some examples:
- An average guy in the 1950s notices that the main argument against permitting homosexuality seems to be "God disapproves of it". But he doesn't believe in God. Should he note that there is a strong cultural guardrail against "sexual deviancy" according to the local cultural definition, and oppose the gay rights movement anyway?
- Does the answer to this question change by the 1990s when the cultural environment is shifting? Or by the 2020s? If so, is it right that the answer to an ethical question should change based on other people's attitudes? (Obviously the answer to the pragmatic question of "how much trouble will I get into for speaking out?" changes, but that's not what we're debating.)
- A mother from a culture where coercive arranged marriages are normal notices that the culturally-endorsed reason for this practice is that young adults are immature and parents are better at understanding what is good for them, so parents should arrange marriages to secure their offspring's happiness. She notices that many parents actually make marriage decisions based on what is economically best for the parents, and that even those trying to ensure the young person's happiness often get it wrong. Should the mother think "this is a guardrail preventing breakdown of family structure, or filial respect, or something important, I will arrange marriages for my own sons and daughters anyway?"
- NB: I'm trying to be clear I'm talking about arranged marriage of adults, not child marriage, although that is also a practice that has been endorsed by many cultures, who would presumably be able to name "guardrails" that banning child marriage would cross.
I get that you said "respect" not "obey" guardrails presumably for reasons like these, but without more discussion about when you "respect" the guardrail but bypass it anyway, this seems roughly equivalent to saying that there is always a very heavy burden of proof to change moral norms, even where the existing norms seem to be hurting people. (In the two examples above, gay people, and everyone who gets married to someone they don't like).
"...an important thing to reiterate is that creating a world where people have good options is good, but banning a bad option isn't the way to do it." This is very well-phrased and I strongly agree. In fact, I think you have managed to summarise my view better than I did myself!
But is the free tuberculosis treatment in India because kidney selling was banned? Or because countries which get to a certain development level try to give at least some basic free healthcare to their people? In a counterfactual where India had legalised kidney selling for the last twenty years, do you think they would not have free treatment for tuberculosis?
Just so you know, there are a lot of people disagreeing with me on this page, and you are the only one I have downvoted. I'm surprised that someone who has been on LessWrong as long as you would engage in such blatant strawmanning. Slavery? Really?
Agree, which makes it even more heinous that governments prevent people from doing it.
I actually agree that there are situations where preventing an arms race is a good idea. (And I wish there were a realistic proposal for a government to do something about the education credentials arms race.) But look at the different justifications:
- There is an arms race where each individual is doing what is in their own rational best interest, but the result is collectively damaging, we need a government to solve this coordination failure
- Those poor people are too dumb to make their own decisions, we should ban them from doing X for their own good.
What I'm really strongly arguing against is anything which proceeds from argument 2. I think all the examples I gave are non-arms-race dynamics where most of the people arguing to take a bad option away are giving a version of the "too dumb to make their own decisions" argument, usually described in the language of exploitation. And like I said, I find those arguments offensively infantilising and wrong in principle as well as empirically causing avoidable harm.
Thanks for a steelman. Can you give any real life example of where taking away bad options has led to the creation of better options? Or conversely, can you think of any real life examples where a government said something like "we've allowed sex for rent, now we can ignore the housing crisis"?
I notice that the large majority of the bad options I can think of are ultimately the result of poverty. But even in the current world there are few governments strongly focused on reducing poverty among their own citizens and none I know of focused on reducing poverty internationally. So the existing long list of people not being allowed bad options isn't really leading to good options.
Internationally, what help there is often comes from charities - do you think it likely that MSF or Oxfam would say "OK, Indian people can sell kidneys now, I guess they don't need us?" I doubt it.
Thanks for the comment. I think tenants are still better off with a legal contract than not. Analogously, a money-paying tenant with a legal contract has some protections against a landlord raising rents, and gets a notice period and the option to refuse and go elsewhere; a money-paying tenant who pays cash in hand to an illegal landlord probably has less leverage to negotiate. (Although there will be exceptions.) Likewise, a sex-paying tenant is better off with a legal contract.
I realise that the law won’t protect everyone and that some people will have bad outcomes no matter what - I deliberately picked this example to make people think about uncomfortable trade-offs - but I still think the general approach of trying to give people more choice rather than less is preferable.
It is weird and it’s extra-weird that everywhere from Carthage to Greece to China failed to use an efficient system for writing numbers. It’s not like there was just one outlier which kept a traditional system.
And I wonder if the use of traditional systems for writing delayed the development of calculus and advanced mathematics too.
Epistemic status: thinking out loud
My most-puzzling why-did-this-take-so-long example is the base-ten system for writing numbers, using zero*. Wikipedia tells me this was invented in India in the 7th century AD and spread gradually into Europe after that. But this seems to be millennia late. There were plenty of highly organised empires trying to administer everything from military logistics to tax systems to pyramid-building with Roman numerals or worse. See here for the Babylonian version, for example.
So far as I can tell, once you have writing and some basic concept of writing-down-numbers (I can't describe Roman numerals as mathematical notation), there are no further pre-requisites for the invention of zero. And the existence of the abacus, possibly invented as far back as c. 2,700 BC, presumably helped. And yet, we have circa 3,400 years from inventing the abacus to figuring out how to write down a numerical system that actually made sense.
Why not?! Looking at your list of factors:
1. Total number of researchers: 3,400 years times every civilisation across Eurasia that needed to administer a large polity or project. The number of person-hours spent calculating must have been astronomical.
2. Speed of research: OK, this is before the printing press, but still, 3,400 years is an excessive delay.
3. Size of opportunity: just huge.
4. Social barriers: I don't think many civilisations treated math as a controversial topic.
*It doesn't have to be base-10, a base-12 or -20 or whatever system would work fine too. Just not freaking Roman numerals!
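As a small illustration of why the base is incidental: the same positional algorithm writes a number compactly in base 10, 12, or 20, which is exactly the property Roman numerals lack (1998 comes out as MCMXCVIII). The function and digit set below are just my own illustrative choices.

```python
def to_positional(n: int, base: int = 10, digits: str = "0123456789ABCDEFGHIJ") -> str:
    """Write a non-negative integer in positional notation for any base up to 20."""
    if n == 0:
        return digits[0]
    out = []
    while n > 0:
        n, remainder = divmod(n, base)
        out.append(digits[remainder])
    return "".join(reversed(out))

print(to_positional(1998, 10))  # 1998
print(to_positional(1998, 12))  # 11A6
print(to_positional(1998, 20))  # 4JI
```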
WEIRD = Western, educated, industrialised, rich and democratic
Typo: “A counter-terrorism analyst is prlike it'ivy to a lot of secret information throughout the course of their job.”
Someone has actually written up a scientific paper discussing the hypothesis that the PETM or other events in the geologic record were caused by a prior industrial civilisation. (If you're one of the authors, I apologise for telling you something you already know, but if you're not, I thought you might be interested.) The short version is that there's no smoking gun, but they can't rule it out either.
One item the authors don't go into, which I think is relevant, is the question of whether there are missing fossil fuels. Google tells me that pretty much all existing fossil fuels were formed at least 65 million years ago, which I think makes it unlikely that the PETM 55mya was a previous industrial civilisation, because they'd have burned those fuels instead of leaving them for us to find. But I have zero geological expertise, so someone who knows better than me might be able to pick holes in that argument.
This seems like a good situation to try re-writing some incentives. Are there any lawyers who can comment on whether the FDA could be sued for wrongful death if any baby did starve? Are any rationalists members of parents’ groups who could be persuaded to attempt such a lawsuit? This seems like the sort of situation where loudly and publicly threatening to sue the FDA and cause them massive bad publicity might actually cause a change in policy - the FDA probably prefers changing policy to being sued, even if the lawsuit’s odds of success are only 50:50.
I’d second Peter McCluskey’s suggestion of fertile soil. So far as I know, the clearest case is the Chaco Canyon civilisation, where pollen studies have shown that what is now an inhospitable desert in New Mexico used to be a green and pleasant land before the civilisation destroyed itself through deforestation that left it unable to keep its topsoil. (And wow, they destroyed it so thoroughly that the place is still desert centuries later.)
I’m also leaning towards the idea that at least some other ancient civilisations destroyed themselves in a similar way [1], including the Indus Valley civilisation. Not exactly the same thing, but Easter Island cutting down all its trees is a similar case of self-inflicted environmental damage causing permanent harm to a civilisation. (Disclaimer, I’m not a historian or archaeologist.)
And like him, I’m not very reassured by the recent record. There are non-civilisation-collapsing examples of similar phenomena from the 1930s Dust Bowl in the US prairies to the current ongoing desertification of the Sahel ie expansion of the Sahara caused mostly by over-grazing. And the inadequate response to climate change [2] suggests that even the most developed countries haven’t become a lot wiser with modern tech.
Having said all that, I agree with the point that so far everyone has been wrong to worry that we will run out of guano / whale oil / peat / coal /oil / potash / insert resource here. But we seem to be a lot better at finding a technological replacement for [specific valuable resource] than we are at mitigating complex externalities with long-term effects.
[1] Basically any civ where the current explanation for their collapse is given as ‘climate change’— that’s the archaeologist equivalent of shrugging and saying ‘they weren’t destroyed by invasion so we dunno’.
[2] with a partial exception for Europe
This is an interesting question. Thank you for asking it.
Thanks, I hadn't seen that before, and now I have a new concept to play with :-)
I think the UK and other Western European countries have relatively little direct rent-seeking behaviour, but I agree with your hypothesis for any country that doesn’t have a strong anti-corruption culture. (Here, the rent-seeking goes more through political parties rather than non-political bureaucracies.) And I think the analogy with education is a very good one.
ETA: I should have said this before diving into an object-level response. Congratulations on writing an interesting first post with an important question on a neglected topic.
I’ve been asking myself the same question about bureaucracies, and the depressing conclusion I came up with is that bureaucracies are often so lacking in incentives that their actions are either based on inertia or simply unpredictable. I’m working from a UK perspective, but I think it generalises. In a typical civil service job, once hired, you get your salary. You don’t get performance pay or any particular incentive to outperform.[1] You also don’t get fired for anything less than the most egregious misconduct. (I think the US has strong enough public sector unions that the typical civil servant also can’t be fired, despite your different employment laws.) So basically the individual has no incentive to do anything.
As far as I can see, the default state is to continue half-assing your job indefinitely, putting in the minimum effort to stay employed, possibly plus some moral-maze stuff doing office politics if you want promotion. (I’m assuming promotion is not based on accomplishment of object-level metrics.) The moral maze stuff probably accounts for tendencies toward blame minimisation.
Some individuals may care altruistically about doing the bureaucracy’s mission better, eg getting medicines approved faster, but unless they are the boss of the whole organisation, they need to persuade other people to cooperate in order to achieve that. And most of the other people will be enjoying their comfortable low-effort existence and will just get annoyed at that weirdo who’s trying to make them do extra work in order to achieve a change that doesn’t benefit them. So the end result is strong inertia where the bureaucracy keeps doing whatever it was doing already.
You get occasional and unpredictable exceptions to this dynamic if 1) there’s some exceptional cause that produces a ‘war effort’ mentality such that lots of people will voluntarily put in effort to achieve the same goal eg fast approval of Covid vaccine or 2) someone very senior wants to accomplish real change and puts in effort toward that. And in the case of 2) it probably still needs to be change that can be accomplished by something like writing new rules, because if the change requires large numbers of employees to actually change the way they work, they may successfully resist it. (See every large government IT project ever.)
I don’t like my conclusions. And I haven’t ever worked in an actual government bureaucracy, although I have been part of corporate ones, so this is very much a case of the outsider looking in. I hope that my speculation is wrong. But this is the best model I have of how government bureaucracies work.
[1] If there were such incentives you could at least start talking about Goodharting but I don’t think there are.
I think you’re too harsh on the shareholders who don’t want the oil companies to drill more. The problem is that drilling new oil fields is a really capital-intensive process. Even drilling infill wells in existing fields is expensive. You are talking about multi-year payback periods.

The logic goes like this: we need an oil price greater than X for the next N years (exact values of X and N will vary by location, but say $90 and 5 years as a ballpark) in order to earn a return on new investment. The oil price is above X today, but there is no guarantee that it stays there, and actually the futures market is signalling that it won’t. So hedging the output of the new wells before you start drilling isn’t a viable option.

And the shareholders all remember clearly that last time there was a high oil price, the oil companies acted like that price would stay high, so they all drilled too many wells and then lost a ton of money when the oil price fell. And the shareholders didn’t get the benefits of the high oil price applied to existing production, because the oil companies spent their windfall drilling new and very expensive holes in the ground instead. So when the shareholders see a temporary spike in the oil price that could reverse in a month if there’s a ceasefire in Ukraine, I can’t really blame them for saying “let’s not make that mistake again!”
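As a toy sketch of that logic (all numbers below are hypothetical, chosen only to show the shape of the calculation, not anyone's real well economics): a new well has positive net present value only if the price stays well above operating cost for the whole multi-year payback period.

```python
def npv_of_new_well(price: float, barrels_per_year: float, capex: float,
                    opex_per_barrel: float, years: int, discount_rate: float = 0.10) -> float:
    """Toy NPV of a new well: pay the capex up front, then discount each year's cash flow."""
    npv = -capex
    for t in range(1, years + 1):
        annual_cash_flow = barrels_per_year * (price - opex_per_barrel)
        npv += annual_cash_flow / (1 + discount_rate) ** t
    return npv

# Hypothetical figures, purely for illustration
common = dict(barrels_per_year=100_000, capex=20_000_000, opex_per_barrel=30, years=5)
print(npv_of_new_well(price=90, **common))  # ~ +2.7m: worth drilling if $90 holds for 5 years
print(npv_of_new_well(price=60, **common))  # ~ -8.6m: the same well loses money if prices slip
```

The asymmetry is the point: the upside depends on the price holding up for years, while the capex is sunk on day one.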
There is also some element of genuine concern for ESG and desire to prevent new sources of carbon emissions. I very much doubt that the people citing climate change concerns have done a decent cost-benefit analysis of short-term improvements in energy security vs long-term harm from global warming, but I’m also not convinced that people like you who come down in favour of energy security have done the analysis.
You may be right that deterring the next war is harder than it appears. I’m pretty sure you’re right that escalation is more likely in reality than in theory. That doesn’t change the fact that the world still needs to deter the next war. And taking the problem seriously and thinking about how to find a solution is a necessary first step.