Leto among the Machines

post by Virgil Kurkjian · 2018-09-30T21:17:11.223Z · LW · GW · 20 comments

I've always been surprised that there's not more discussion of Dune in rationalist circles, especially considering that:


1. It's a book all about people improving their minds to the point where they become superhuman.

2. It's set in a world where AI Goal Alignment issues are not only widely understood, but are integrated into the foundation of every society.

3. It's ecological science fiction — dedicated to "the dry-land ecologists, wherever they may be" — but what that secretly means is that it's a series of novels about existential risk, and considers the problem on a timescale of tens of thousands of years.


For those of you who are not familiar, Dune is set about 20,000 years in the future. About 10,000 years before the events of the first book, Strong Artificial Intelligence was developed. As one might expect, humanity nearly went extinct. But we pulled together and waged a 100-year war against the machines, a struggle known as the Butlerian Jihad (this is why a butler is “one who destroys intelligent machines”). We succeeded, but only barely, and the memory of the struggle was embedded deep within the human psyche. Every religion and every culture set up prohibitions against "thinking machines". This was so successful that the next ten millennia saw absolutely no advances in computing: despite the huge potential benefits of defection, coordination was strong enough to prevent any resurgence of computing technology.

Surprisingly, the prohibition against "thinking machines" appears to extend not only to what we would consider to be Strong AI, but also to computers of all sorts. There is evidence that devices for recording journals (via voice recording?) and doing basic arithmetic were outlawed as well. The suggestion is that there is not a single mechanical calculator or electronic memory-storage device in the entire Imperium. There are advanced technologies, but nothing remotely like computers — the Orange Catholic Bible is printed on "filament paper", not stored on a Kindle.

While I appreciate the existential threat posed by Strong AI, I've always been confused about the proscription against more basic forms of automation. The TI-81 is pretty helpful and not at all threatening. Storing records on paper or filament paper has serious downsides. Why does this society hamstring itself in this way?

The characters have a good deal to say about the Butlerian Jihad, but to me, their answers were always somewhat confusing:

Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them. (Reverend Mother Gaius Helen Mohiam)

And:

What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking — there's the real danger. (Leto Atreides II)

These quotes suggest that the literal threat of extinction was not the only reason for the Jihad. In fact, according to these major characters, it wasn't even the primary reason.

This is not to say that extinction risk isn't on their minds. Here's another idea discussed in the books, one condemned for its obvious x-risk issues:

The Ixians contemplated making a weapon—a type of hunter-seeker, self-propelled death with a machine mind. It was to be designed as a self-improving thing which would seek out life and reduce that life to its inorganic matter. (Leto Atreides II)

Or, more explicitly:

Without me there would have been by now no people anywhere, none whatsoever. And the path to that extinction was more hideous than your wildest imaginings. (Leto Atreides II)

But clearly extinction risk isn't the only thing driving the proscription against thinking machines. If it were, then we'd still have our pocket calculators and still be able to index our libraries using electronic databases. But this society has outlawed even these relatively simple machines. Why?


Goodhart's law states that when a measure becomes a target, it ceases to be a good measure.

What this means is that the act of defining a standard almost always corrupts the goal you were trying to capture. If you make a test for aptitude, teachers and parents will teach to the test. The parents with the most resources — the richest/most intelligent/best-connected — will find ways to get their children ahead at the expense of everyone else. If you require a specific degree to get a particular job, degree-granting institutions will compete to make the degree easier and easier to acquire, to the point where it no longer indicates quality. If supply is at all limited, then the job-seekers who are richest/most intelligent/best-connected will be the ones who can get the degree. If you set a particular critical threshold for a statistical measure (*cough*), researchers will sacrifice whatever other qualities of their research they can in pursuit of reaching that threshold.
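
To see the dynamic in miniature, here is a toy simulation (my own sketch, not anything from the post; all numbers are made up). Applicants differ in the aptitude we care about and in the resources they can spend gaming the test; the harder the score is targeted, the less selecting on it selects for aptitude:

```python
import random

random.seed(0)

def applicant():
    aptitude = random.gauss(0, 1)   # the quality we actually care about
    resources = random.gauss(0, 1)  # wealth/connections spent gaming the test
    return aptitude, resources

def test_score(aptitude, resources, gaming):
    # With gaming == 0 the score is a decent proxy for aptitude;
    # as gaming grows, resources dominate the measurement.
    return aptitude + gaming * resources + random.gauss(0, 0.5)

def mean_aptitude_of_admits(gaming, n=10_000, top=100):
    pool = [applicant() for _ in range(n)]
    ranked = sorted(pool, key=lambda a: test_score(a[0], a[1], gaming), reverse=True)
    return sum(apt for apt, _ in ranked[:top]) / top

for gaming in (0.0, 1.0, 3.0):
    print(f"gaming pressure {gaming}: mean aptitude of admits = "
          f"{mean_aptitude_of_admits(gaming):.2f}")
# The mean aptitude of the admitted falls as gaming pressure rises:
# once the measure is a target, it stops measuring.
```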

Governments, if they endure, always tend increasingly toward aristocratic forms. No government in history has been known to evade this pattern. And as the aristocracy develops, government tends more and more to act exclusively in the interests of the ruling class — whether that class be hereditary royalty, oligarchs of financial empires, or entrenched bureaucracy. (Bene Gesserit Training Manual)

One of the most important things we know from AI Alignment work is that defining a rule or standard that can't be misinterpreted is very tricky. An intelligent agent will work very hard to maximize its own utility function, and will find clever ways around any rules you throw in its way.

One of the ways we have been short-sighted is in thinking that this applies only to strong or general artificial intelligences. Humans are strong general intelligences; if you put rules or standards in their way, they will work very hard to maximize their own utility functions and will find clever ways around the rules. Goodhart's law is the AI Alignment problem applied to other people.

("The real AI Alignment problem ... is other people?")

It's been proposed that this issue is the serpent gnawing at the root of our culture. The long and somewhat confusing version of the argument is here. I would strongly recommend that you read first (or instead) this summary by Nabil ad Dajjal. As Scott says, "if only there were something in between Nabil’s length and Concierge’s" — but reading the two together, I think we can get a pretty good picture.

Here are the first points, from Nabil:

There is a four-step process which has infected and hollowed out the entirety of modern society. It affects everything from school and work to friendships and dating.
In step one, a bureaucrat or a computer needs to make a decision between two or more candidates. It needs a legible signal. Signaling (see Robin Hanson) means making a display of a desired characteristic which is expensive or otherwise difficult to fake without that characteristic; legibility (see James Scott) means that the display is measurable and doesn’t require local knowledge or context to interpret.

I will resist quoting it in full. Seriously, go read it, it's pretty short.

When I finished reading this explanation, I had a religious epiphany. This is what the Butlerian Jihad was about. While AI may literally present an extinction risk because of its potential desire to use the atoms in our bodies for its own purposes, lesser forms of AI — including something as simple as a device that can compare two numbers! — are dangerous because of their need for legible signals.

In fact, the simpler the agent is, the more dangerous it is, because simple systems need their signals to be extremely legible. Agents that make decisions based on legible signals are extra susceptible to Goodhart's law, and accelerate us on our way to the signaling catastrophe/race to the bottom/end of all that is right and good/etc.

As Nabil ad Dajjal points out, this is true for bureaucrats as well as for machines. It doesn't require what we normally think of as a "computer". Anything that uses a legible signal to make a decision with little or no flexibility will contribute to this problem.

The target of the Jihad was a machine-attitude as much as the machines. (Leto Atreides II)

As a strong example, consider Scott Aaronson's review of Inadequate Equilibria, where he says:

In my own experience struggling against bureaucracies that made life hellish for no reason, I’d say that about 2/3 of the time my quest for answers really did terminate at an identifiable “empty skull”: i.e., a single individual who could unilaterally solve the problem at no cost to anyone, but chose not to.  It simply wasn’t the case, I don’t think, that I would’ve been equally obstinate in the bureaucrat’s place, or that any of my friends or colleagues would’ve been.  I simply had to accept that I was now face-to-face with an alien sub-intelligence—i.e., with a mind that fetishized rules made up by not-very-thoughtful humans over demonstrable realities of the external world.

Together, this suggests a surprising conclusion: Rationalists should be against automation. I suspect that, for many of us, this is an uncomfortable suggestion. Many rationalists are programmers or engineers. Those of us who are not are probably still hackers of one subject or another, and have as a result internalized the hacker ethic.

If you're a hacker, you strongly believe that no problem should ever have to be solved twice, and that boredom and drudgery are evil. These are strong points, perhaps the strongest points, in favor of automation. The world is full of fascinating problems waiting to be solved, and we shouldn't waste the talents of the most gifted among us solving the same problems over and over. If you automate it once, and do it right, you can free up talents to work on the next problem. Repeat this until you've hit all of humanity's problems, boom, utopia achieved.

The problem is that the talents of the most gifted are being double-wasted in our current system. First, intelligent people spend huge amounts of time and effort attempting to automate a system. Given that we aren't even close to being able to solve the AI Alignment problem, the attempt to properly automate the system always fails, and the designers instead automate the system so that it uses one or more legible signals to make its judgment. Now that this system is in place, it is immediately captured by Goodhart's law, and people begin inventing ways to get around it.

Second, the intelligent and gifted people — those people who are most qualified to make the judgment they are trying to automate — are spending their time trying to automate a system that they are (presumably) qualified to make judgments for! Couldn't we just cut out the middleman, and when making decisions about the most important issues that face our society, give intelligent and gifted people these jobs directly?

So we're 1) wasting intellectual capital, by 2) using it to make the problem it's trying to solve subject to Goodhart's law and therefore infinitely worse.

Give me the judgment of balanced minds in preference to laws every time. Codes and manuals create patterned behavior. All patterned behavior tends to go unquestioned, gathering destructive momentum. (Darwi Odrade)

Attempts to solve this problem with machine learning techniques are possibly worse. This is essentially just automating the task of finding a legible signal, with predictable results. It's hard to say whether a neural network will tend to find a worse legible signal than the programmer would find on their own, but it's not a bet I would take. Further, it lets programmers automate more decisions, lets them do it faster, and prevents them from understanding the legible signal(s) the system selects. That is not comforting.
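
A toy illustration of that point (my own sketch; the data and feature names are invented): a "one rule" learner that scans past hiring decisions for the single most predictive feature-threshold pair is literally a machine for finding a legible signal, and it will happily bake a credential in as the gate without anyone choosing it:

```python
def one_rule(examples, features):
    """examples: list of (feature_dict, label); returns (accuracy, feature, threshold)."""
    best = None
    for f in features:
        for t in sorted({ex[f] for ex, _ in examples}):
            acc = sum((ex[f] >= t) == label for ex, label in examples) / len(examples)
            if best is None or acc > best[0]:
                best = (acc, f, t)
    return best

history = [({"years_experience": 5, "degree_level": 2}, True),
           ({"years_experience": 1, "degree_level": 0}, False),
           ({"years_experience": 3, "degree_level": 2}, True),
           ({"years_experience": 2, "degree_level": 1}, False)]

print(one_rule(history, ["years_experience", "degree_level"]))
# Whatever rule it prints becomes the new gate — a legible signal
# no human ever deliberately endorsed.
```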

Please also consider the not-literally-automation version, as described by Scott Alexander in his daycare worker example:

Daycare companies really want to avoid hiring formerly-imprisoned criminals to take care of the kids. If they can ask whether a certain employee is criminal, this solves their problem. If not, they’re left to guess. And if they’ve got two otherwise equally qualified employees, and one is black and the other’s white, and they know that 28% of black men have been in prison compared to 4% of white men, they’ll shrug and choose the white guy.

Things like race, gender, and class are all extremely legible signals. They're hard to fake, and they're easy to read. So if society seems more racist/sexist/classist/politically bigoted than it was, consider the idea that it may be the result of runaway automation. Or machine-attitude, as God Emperor Leto II would say.

I mentioned before that, unlike problems with Strong AI, this weak-intelligence-Goodhart-problem (SchwachintelligenzGoodhartproblem?) isn't an existential risk. The bureaucrats and the standardized tests aren't going to kill us in order to maximize the amount of hydrogen in the universe. Right?

But if we consider crunches to be a form of x-risk, then this may be an x-risk after all. This problem has already infected "everything from school and work to friendships and dating", making us "older, poorer, and more exhausted". Not satisfied with this, we're currently doing our best to make it more 'efficient' in the areas it already holds sway, and working hard to extend it to new areas. If taken to its logical conclusion, we may successfully automate nearly everything, and destroy our ability to make any progress at all.

I'll take this opportunity to reiterate what I mean by automation. I suspect that when I say "automate nearly everything", many of you imagine some sort of ascended economy, with robotic workers and corporate AI central planners. But part of the issue here is that Goodhart's law is very flexible, and kicks in with the introduction of most rules, even when the rules are very simple.

Literal machines make this easier — a program that says "only forward job applicants if they indicated they have a Master's Degree and 2+ years of experience" is simple, but potentially very dangerous. But the same sort of rule, faithfully applied by a single-minded bureaucrat, is functionally identical: the point is that the decision has been reduced to a legible signal. Machines just make this easier, faster, and more difficult to ignore. All but the most extreme bureaucrats will occasionally break protocol. Automation by machine never will.
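
For concreteness, the dangerous program really can be this small (a hypothetical sketch; the field names are made up):

```python
# Hypothetical screening gate. Each field it reads is a legible signal,
# so Goodhart's law applies the moment this goes live.
def forward_applicant(app: dict) -> bool:
    return (app.get("has_masters_degree", False)
            and app.get("years_experience", 0) >= 2)
```

A bureaucrat who applies the written rule without exception is computing the same function by hand; the machine just never has an off day.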


So we want two closely related things:

1. We want to avoid the possible x-risk from automation.

2. We want to reverse the whole "hollowed out the entirety of modern society" thing and make life feel meaningful again.

The good news is that there are some relatively easy solutions.

First, stop automating things or suggesting that things should be automated. Reverse automation wherever possible. (As suggested by Noitamotua.)

There may be some areas where automation is safe and beneficial. But before automating something, please spend some time thinking about whether or not the legible signal is too simple, whether the automation will be susceptible to Goodhart's law. Only in cases where the legible signal is effectively identical to the value you actually want, or where the cost of an error is low ("you must be this tall to ride" is a legible signal) will this be acceptable.
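
A sketch of the acceptable case, for contrast: here the measured signal effectively is the value being protected (will the rider fit the safety restraints?), and a borderline error is cheap:

```python
# "You must be this tall to ride." The threshold is illustrative;
# the point is that height is both the signal and (near enough) the value.
MIN_HEIGHT_CM = 120

def may_ride(height_cm: float) -> bool:
    return height_cm >= MIN_HEIGHT_CM
```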

Second, there are personal and political systems which are designed to deal with this problem. Goodhart's law is powerless against a reasonable person. While you or I might take someone's education into consideration when deciding whether or not to offer them a job, we would weigh it in a complex, hard-to-define way against the other evidence available.

Let's continue the hiring-decision metaphor. More important than ignoring credentials is the ability to participate in the intellectual arms race — exactly what fixed rules cannot do (Goodhart's law!). If I am in charge of hiring programmers, I might want to give them a simple coding test as part of their interview. I might ask, "If I have two strings, how do I check if they are anagrams of each other?" If I use the same coding test every time (or I automate it, setting up a website version of the test to screen candidates before in-person interviews), then anyone who knows my pattern can figure out or look up the answer ahead of time, and the question no longer screens for programming ability — it screens for whatever "can figure out or look up the answer ahead of time" is.
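
(For reference, the standard answer the question is fishing for: two strings are anagrams exactly when their character counts match.)

```python
from collections import Counter

def are_anagrams(a: str, b: str) -> bool:
    # Same characters with the same multiplicities.
    return Counter(a) == Counter(b)

# The O(n log n) variant a candidate might also give:
# sorted(a) == sorted(b)
```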

But when you are a real human being, not a bureaucrat or automaton, you can vary the test, ask questions that haven't been asked before, and engage in the arms race. If you are very smart, you may invent a test which no one has ever thought of before.

Education is no substitute for intelligence. That elusive quality is defined only in part by puzzle-solving ability. It is in the creation of new puzzles reflecting what your senses report that you round out the definitions. (Mentat Text One)

So by this view, the correct thing to do is to replace automation with the most intelligent people available, and have them personally engaged with their duties — rather than having them act as administrators, as often happens under our current system.


Some people ask, what genre is Dune? It's set in the far future; there are spaceships, lasers, and nuclear weapons. But most of the series focuses on liege-vassal relationships, scheming, and religious orders with magic powers. This sounds a lot more like fantasy, right?

Clearly, Dune is Political Science Fiction. Science fiction proposes spectacular advances in science and, as a result, technology. But political thought is also a technology:

...while we don’t tend to think of it this way, philosophy is a technology—philosophers develop new modes of thinking, new ways of organizing the state, new ethical principles, and so on. Wartime encourages rulers to invest in Research and Development. So in the Warring States period, a lot of philosophers found work in local courts, as a sort of mental R&D department.

So what Dune has done is thought about wild new political technologies, in the same way that most science fiction thinks about wild new physical technologies (or chemical, or biological, etc.).

The Confucian Heuristic (which you should read, entirely on its own merits) describes a political system built on personal relationships. According to this perspective, Confucius hated unjust inequality. The western solution is to destroy all forms of inequality; Confucius rejected that as impossible. Instead, he proposed that we recognize and promote gifted individuals, and make them extremely aware of their duties to the rest of us. (Seriously, just read it!)

Good government never depends upon laws, but upon the personal qualities of those who govern. The machinery of government is always subordinate to the will of those who administer that machinery. (The Spacing Guild Manual)

In a way, Dune is Confucian as well, or perhaps Neo-Confucian, as Stephenson might say. It presents a society that has been stable for 10,000 years, based largely on feudal principles, and which has arranged itself in such a way that it has kept a major, lurking x-risk at bay.

It’s my contention that feudalism is a natural condition of human beings…not that it is the only condition or not that it is the right condition…that it is just a way we have of falling into organisations. I like to use the example of the Berlin Museum Beavers.
Before World War II there were a number of families of beaver in the Berlin Museum. They were European beaver. They had been there, raised in captivity for something on the order of seventy beaver generations, in cages. World War II came along and a bomb freed some of them into the countryside. What did they do? They went out and they started building dams. (Frank Herbert)

One way of thinking about Goodhart's law is that it says that any automated system can and will be gamed as quickly and ruthlessly as possible. Using human authorities rather than rules is the only safeguard, since the human can participate in the intellectual arms race with the people trying to get around the regulation; they can interpret the rules in their spirit rather than in their letter. No one will get far rules-lawyering the king.

The people who will be most effective at Goodhart-gaming a system will be those with starting advantages. This includes the rich, but also those with more intelligence, better social connections, etc., etc. So one problem with automation is that it always favors the aristocracy. Whoever has advantages will, on average, see them magnified by being the best at gaming automated systems.

The Confucian solution to inequality is to tie aristocrats into meaningful personal relationships with their inferiors. The problem with automation is that it unfairly benefits aristocrats and destroys the very idea of a meaningful personal relationship.

What you of the CHOAM directorate seem unable to understand is that you seldom find real loyalties in commerce. When did you last hear of a clerk giving his life for the company? (A letter to CHOAM, Attributed to The Preacher)

I've argued that we need to use human judgment in place of legible signals, and that we should recruit the most gifted people to do so. But giving all the decision-making power to an intellectual elite comes with its own problems. If we're going to recruit elites to replace our automated decision-making, we should make use of a political technology specifically designed to deal with this situation.

I'm not saying that we need to introduce elective fealty, per se. My short-term suggestion, however, would be that you don't let those in positions of power over you pretend that they are your equal. Choosing to attach yourself to someone powerful in exchange for protection is entirely left to your discretion.

Of course, what I really think we should do is bring back sumptuary laws.

Sumptuary laws keep individuals of a certain class from purchasing or using certain goods, including clothing. People tend to think of sumptuary laws as keeping low-class people from pretending to be high-class people, even if they're rich enough to fake it. The story goes that this was a big problem during the late Middle Ages, because merchants were often richer than barons and counts, but you couldn't let them get away with pretending to be noble.

The Confucian view is that sumptuary laws can also keep high-class people from pretending to be low-class people and dodging the responsibilities that come with their station. Think of the billionaire chicken farmer wearing overalls and a straw hat. Is he just ol' Joe from up the road? Or was he born with a fortune he doesn't deserve?

Confucians would say that a major problem with our current system is that elites are able to pretend that they aren't elite. They see themselves as personally gifted but equal in opportunity to the rest of us — playing, as a result, on a level field. They think that they don't owe us anything, and try to convince us to feel the same way.

I like to think of this as the "Donald-Trump-should-be-forced-to-wear-gold-and-jewels-wherever-he-goes" rule. Or if you're of a slightly different political bent, "If-Zuckerberg-wears-another-plain-grey-T-Shirt-I-will-throw-a-fit-who-does-he-think-he's-fooling" rule.

This viewpoint also strikes a surprising truce between mistake and conflict theorists. Mistake theorists are making the mistake of thinking there is no conflict occurring, of letting "elites" pretend that they're part of the "people". Conflict theorists are making the mistake of thinking that tearing down inequality is desirable or even possible.


If you found any of this interesting, I would suggest that you read Dune and its immediate sequels (up to Chapterhouse, but not the junk Herbert's son wrote). If nothing else, consider that despite being published in 1965, it predicted AI threat and x-risk more generally as a major concern for the future of humanity. I promise there are other topics of interest there.

If you found any of this convincing, I strongly recommend that you fight against automation and legible signals whenever possible. Only fully realized human beings have the ability to pragmatically interpret a rule or guideline in the way it was intended. If we ever crack Strong AI, that may change — but safe to say, at that point we will have a new set of problems!

And in regards to the machines:

War to the death should be instantly proclaimed against them. Every machine of every sort should be destroyed by the well-wisher of his species. Let there be no exceptions made, no quarter shown; let us at once go back to the primeval condition of the race. (Samuel Butler)

20 comments

Comments sorted by top scores.

comment by Richard_Ngo (ricraz) · 2018-10-01T04:26:45.386Z · LW(p) · GW(p)

This is an entertaining essay but extrapolates wayyyy too far. Case in point: I don't even think it's actually about automation - the thing you're criticising sounds more like bureaucracy. Automation includes using robots to build cars and writing scripts to automatically enter data into spreadsheets instead of doing it by hand. You don't address this type of thing at all. Your objection is to mindless rule-following - which may in future be done by machines, but right now is mostly done by people. (I don't know of any tech system which doesn't have customer support and can't be manually overridden).

As a solution, you propose using more intelligent people who are able to exercise their discretion. Problem 1: there aren't that many intelligent, competent people. But you can make more use of the ones you have by putting a competent person in charge of a bunch of less competent people, and laying down guidelines for them to follow. Ooops, we've reinvented bureaucracies. And when we need to scale such a system to a thousand- or million-person enterprise, like a corporation or government, then the people at the bottom are going to be pretty damn incompetent and probably won't care at all about the organisation's overall goals. So having rules to govern their behaviour is important. When implemented badly, that can lead to Kafka-esque situations, but that's true of any system. And there are plenty of companies which create great customer service by having well-thought-out policies - Amazon, for instance.

But incompetence isn't even the main issue. Problem 2: the more leeway individuals have, the more scope they have to be corrupt. A slightly less efficient economy isn't an existential threat to a country. But corrupt governments are. One way we prevent them is using a constitution - a codification of rules to restrict behaviour. That's exactly the type of thing you rail against. Similarly, corruption in a corporation can absolutely wreck it, and so it's better to err on the side of strictness.

Anyway, the funny thing is that I do think there's a useful moral which can be drawn from your account of the Butlerian Jihad, but it's almost the exact opposite of your interpretation: namely, that humans are bad at solving coordination problems without deontological rules. Suppose you want to ensure that strong AI isn't developed for the next 10000 years. Do you a) tell people that strong AI is a terrible idea, but anything short of that is fine, or b) instill a deep hatred of all computing technology, and allow people to come up with post-hoc justifications for why? I don't think you need to know much about psychology or arms races to realise that the latter approach is much better - not despite its pseudo-religious commandments, but rather because of them.

Replies from: Virgil Kurkjian
comment by Virgil Kurkjian · 2018-10-01T10:47:55.282Z · LW(p) · GW(p)

You're correct that I'm not writing about "Automation" in the usual sense, but the categories were made for man. On the other hand, I am talking about writing scripts to automatically enter data into spreadsheets. My discussion centered on whether there could be a reason for the Jihad to ban something that simple as well, and I think there is. If spreadsheet-scripts depend on legible signals (and they usually will) then they are part of this problem.

I then mention more literal bureaucrats a few times, as an attempt to show that I don't mean just those things that are machine-automated. But perhaps my examples were too convincing!

In regards to constitutions, I disagree. So does James Madison. Take a look at what he has to say about "parchment barriers":

Will it be sufficient to mark, with precision, the boundaries of these departments, in the constitution of the government, and to trust to these parchment barriers against the encroaching spirit of power? This is the security which appears to have been principally relied on by the compilers of most of the American constitutions. But experience assures us, that the efficacy of the provision has been greatly overrated; and that some more adequate defense is indispensably necessary for the more feeble, against the more powerful, members of the government.
...
The conclusion which I am warranted in drawing from these observations is, that a mere demarcation on parchment of the constitutional limits of the several departments, is not a sufficient guard against those encroachments which lead to a tyrannical concentration of all the powers of government in the same hands. (The Federalist Papers : No. 48)

Incidentally, this is what the separation of powers is all about! By forcing actual people to compete and negotiate rather than trusting them to follow rules, we can avoid all sorts of nasty things. If you were to accuse me of cribbing from the founders, you wouldn't be far off!

In regards to the other moral from the Butlerian Jihad, you're totally right. That's normally the lesson I would take away too. I just figured that this audience would already be able to see that one! I tried to present something that LW might find surprising or novel.

Thanks for your thoughts!

comment by cousin_it · 2018-10-01T08:29:51.504Z · LW(p) · GW(p)

Dune is a fun book to read. But:

Paul Atreides is the outcome of a millennia-old breeding program for people who can see the future and the rightful heir to a whole planet and (independently!) a prophesied messiah of a planet-spanning religion and a supersoldier who's unbeatable in single combat. All of that is set up before the book even begins!!

Dune's perspective as "political science fiction" is way skewed toward power fantasy. I don't want to draw governance lessons from power fantasies, because that could too easily go wrong. Is there a book arguing for hierarchy from the perspective of a commoner?

Replies from: SaidAchmiz, Virgil Kurkjian
comment by Said Achmiz (SaidAchmiz) · 2018-10-01T13:14:53.287Z · LW(p) · GW(p)

Is there a book arguing for hierarchy from the perspective of a commoner?

Well, there’s Hard to be a God, in which the (quite common-born) Doctor Budakh argues thus:

“… Look, for instance, how our society is arranged. How pleasing to the eye this clear, geometrically proper system! On the bottom are the peasants and laborers, above them the nobility, then the clergy, and finally, the king. How thought out it all is, what stability, what a harmonious order! What else can change in this polished crystal, emerged from the hands of the celestial jeweler? There are no buildings sturdier than pyramidal ones, any experienced architect will tell you that.” He raised his finger didactically. “Grain, poured from a sack, does not settle in an even layer, but forms a so-called conic pyramid. Each grain clings to the other, trying not to roll down. So with humanity. If it wants to be a whole, people must cling to one another, inevitably forming a pyramid.”

“Do you seriously consider this world to be perfect?” asked Rumata with surprise. “After meeting don Reba, after the prison…”

“But of course, my young friend! There is much in the world I do not like, much I would like to see otherwise… But what can be done? In the eyes of higher powers, perfection looks otherwise, than in mine. …”

[Translation mine.]

(Naturally, this is a somewhat tongue-in-cheek answer—which is to say, I agree with your point.)

comment by Virgil Kurkjian · 2018-10-01T10:33:32.545Z · LW(p) · GW(p)

Interesting! I think you may be reading Dune backwards. I always thought of it as a book strictly against the concept of heroes, rather than as a power fantasy.

No more terrible disaster could befall your people than for them to fall into the hands of a Hero. (Pardot Kynes)
Make no heroes, my father said. (The voice of Ghanima)

Consider it a steelman for the pro-hero position. Paul (& others) have all the attributes you describe, and even so, even with the benefit of prescience(!), they still make their own lives, and the lives of everyone around them, miserable. Whether you find this a convincing argument against heroes is one thing, but I think that was the tack he was taking.

Herbert's original editor, John W. Campbell, refused to publish Dune Messiah because he found that message so personally disturbing:

Campbell turned down the sequel. Now his argument was that I had created an anti-hero in Paul in the sequel. ... the thing that got to Campbell was not that I had an anti-hero in this sense, but that I had destroyed one of his gods. (FH)
comment by moridinamael · 2018-09-30T22:35:05.061Z · LW(p) · GW(p)

This is very cool to see. I just finished re-reading Dune. I wonder what signal prompted me to do that, and I wonder if it was the same signal that prompted you to write this.

I've been thinking a lot recently about rationalist advocacy and community. I don't think that individuals unilaterally deciding to stop automating things is going to make a dent in the problem. This is a straightforward coordination problem. If you drop out of modern society, for whatever reason, society fills in the hole you left. The only way to challenge Moloch is to create an alternative social framework that actually works better, at least in some regards.

One thing that keeps cropping up in my thoughts/discussions about rationalist community is that the value-add of the community needs to be very clear and concrete. The metaphor or analogue of professional licensure might be appropriate - a "rationalist credential", some kind of impossible-to-fake, difficult-to-earn token of mastery that denotes high skill level and knowledge, that then becomes symbolically associated with the movement. I mention this idea because the value-add of being a credentialed rationalist would then have to be weighed against whatever weird social restrictions that the community adopts - e.g., your suggestion of avoiding automation, or instituting some kind of fealty system. These ideas may be empirically, demonstrably good ideas (we don't really know yet) but their cost in weirdness points can't be ignored.

As an aside - and I'm open to being corrected on this - I don't think Herbert was actually advocating for a lot of the ideas he portrays. Dune and Frank Herbert explore a lot of ideas but don't really make prescriptions. In fact, I think that Herbert is putting forth his universe as an example of undesirable stagnation, not some kind of demonstrated perfection. It would be cool to be a mentat or a Bene Gesserit, i.e. a member of a tribe focused on realizing human potential, but I don't think he was saying with his books that the multi-millennial ideologically motivated political stranglehold of the Bene Gesserit was a good thing. I don't think that Herbert thinks that feudalism is a good thing just because it's the system he presents. Maybe I'm wrong.

Replies from: gwern, Virgil Kurkjian
comment by gwern · 2018-10-01T01:01:08.640Z · LW(p) · GW(p)

I am a fan of Dune (I recently wrote a whole essay on the genetics in Dune), but I've never drawn on it much for LW topics.

The basic problem with Dune is that Herbert based a lot of his extrapolations and discussion on things which were pseudoscience or have turned out to be false. And to some extent, people don't realize this because they read their own beliefs into the novels - for example, OP commits this error in describing the Butlerian Jihad, which was not a war against autonomous machines but against people who used machines (likewise, Leto II's 'Arafel' involved prescient machines... made by the Ixians), and which was not named after Samuel Butler in the first place. If Herbert had been thinking of a classic autonomous AI threat, that would be more interesting, but he wasn't. Similarly, 'ancestral memories': Herbert seriously thought there was some sort of hidden memory repository which explained various social phenomena, and the whole Leto II/Fish Speaker/war is apparently sourced from a highly speculative outsider, probably crank, book (which is so obscure I have been unable to get a copy to see how far the borrowings go). We know now normal humans can't be trained into anything like Mentats, after centuries of failure of education dating at least back to Rousseau & Locke's blankslatism, and especially all the many attempts at developing prodigies, and case-studies like dual n-back. His overall paradigm of genetics was reasonable but unfortunately, for the wrong species - apples rather than humans. Or the sociology in The Dosadi Experiment or how to design AI in Destination: Void or... the list goes on. Nice guy, nothing like L. Ron Hubbard (and a vastly better writer), and it makes for great novels, but like many SF authors or editors of the era* he often used his fiction as tracts/mouthpieces, and he was steeped in the witch's brew that was California & the human potential movement and that made his extrapolations very poor if we want to use them for any serious non-fiction purpose.

So, it just doesn't come up. The Butlerian Jihad isn't that relevant because it's hardly described at all in the books and what is described isn't relevant, as we're concerned about entirely different scenarios; human prescience doesn't exist, period, so it doesn't matter that it probably wouldn't follow the genetics he outlines, so the whole paradigm of Bene Gesserit and Houses is irrelevant, as is everything that follows; Mentats can't exist, at least not without such massive eugenics to boost human intelligence that it'd spark a Singularity first, so there's not much point in discussing nootropics with an eye towards becoming a Mentat, because all things like stimulants or spaced repetition can do is give you relatively small benefits at the margin (or, to put it another way, things Mentats do in fiction can be done in reality, but only using software on computers).

* eg Hubbard, Asimov, Cordwainer Smith even discounting the hallucination theory, especially John W. Campbell

I don't think he was saying with his books that the multi-millennial ideologically motivated political stranglehold of the Bene Gesserit was a good thing. I don't think that Herbert thinks that feudalism is a good thing just because it's the system he presents.

I would say that he clearly presents the breeding program as a very good thing and vital for the long-term preservation & flourishing of humanity as the only way to create humans who are genuine 'adults' capable of long-term planning (in a rather gom jabbar sense).

As far as feudalism goes, there's an amusing anecdote from Norman Spinrad I quote in my essay where he tweaks Herbert about all "this royalist stuff" and Herbert claims he was going to end it with democracy. (Given how little planning Herbert tended to do, I have to suspect that his response was rather embarrassed and he was thinking to himself, 'I'll do it later'...) He wouldn't be the first author to find feudalism a lot more fun to write than their own liberal-democratic values. (George R. R. Martin is rather liberal, is a free speech advocate, was a conscientious objector, and describes Game of Thrones as anti-war, but you won't find too much democracy in his books.)

Replies from: moridinamael, Virgil Kurkjian
comment by moridinamael · 2018-10-01T01:47:47.607Z · LW(p) · GW(p)

I agree that Herbert thought the breeding program was necessary. But I also think he couched it as tragically necessary. Leto II's horrific repression was similarly tragically necessary.

I think the questions provoked by Herbert's concepts of Mentats and Bene Gesserit might actually be fruitful to think about.

If there were no meditation traditions on Earth, then we would have no reason to suspect that jhanas, or any other advanced states of meditative achievement, exist. If there were no musical instruments, we would have no reason to suspect that a human could use fingers or breath to manipulate strings or harmonics to create intricate, polyphonic, improvised melodies. If there were no arithmetic, we would view a person who could do rudimentary mental math to be a wizard. One can extend this line of thinking to many things - reading and writing, deep strategy games like chess, high-level physical sports, and perhaps even specific fields of knowledge.

So it is probably safe to say that we "know" that a human can't be trained to do the things that Mentats do in Dune, but I don't think it's safe to say that we have any idea what humans could be trained to do with unpredictable avenues of development and 20,000 years of cultural evolution.

I guess I'm not really disagreeing with anything you said, but rather advocating that we take Herbert's ideas seriously but not literally.

Replies from: Virgil Kurkjian
comment by Virgil Kurkjian · 2018-10-01T12:15:18.829Z · LW(p) · GW(p)

This is pretty close to my thinking too. Herbert's proposal was something like, "We have no idea what levels of human potential are out there." He takes this idea and describes what it might look like, based on a few possible lines of development. Possibly he thought these were the most likely avenues of development, but that still seems unclear. Either way, he happened to pick examples that were wrong in the details, but the proposal stands.

comment by Virgil Kurkjian · 2018-10-01T01:28:39.188Z · LW(p) · GW(p)

You're entirely right that taking Herbert's views on most specific subjects isn't helpful. He was wrong about genetics, about education, and about a lot of things besides. (Like moridinamael, I'm also not clear on whether he personally believed in things like genetic memory, though I would be interested to see sources if you have them. I assumed it was an element he included for fictional/allegorical purposes.) But I think he was a clever guy who spent a lot of time thinking about problems we're interested in, even if he often got it wrong.

I think it's a little harsh to say that I commit the error of reading-in my beliefs about the Butlerian Jihad, given that I quote Reverend Mother Gaius Helen Mohiam as saying, "Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them," and Leto II as saying, "The target of the Jihad was a machine-attitude as much as the machines." I'm aware that there are a lot of textual clues that the Jihad wasn't a war against autonomous machines themselves. Though autonomous machines were certainly involved; the glossary to the original book defines the Butlerian Jihad as, "the crusade against computers, thinking machines, and conscious robots", and the OC Bible's commandment is, "Thou shalt not make a machine in the likeness of a human mind."

More generally, I was using the Jihad as a metaphor to make a point about automation in general.

It's clear that Strong AI is illegal under the prohibition of "thinking machines", but it had always puzzled me why lesser devices — like calculators and recording devices — were included. I had passed it off as another mistake on Herbert's part. But when I read Nabil's comment it reminded me strongly of the Jihad, and I realized that if taken to an extreme conclusion it would lead to a proscription against almost all automation, like the one we find in Dune. Consider it a steelman of the position, if you would like.

Just because I quote Samuel Butler at the end, doesn't mean I think the Jihad was named after him! It's just an amusing coincidence.

Looking forward to reading your essay on the Genetics of Dune!

Replies from: gwern
comment by gwern · 2022-09-25T01:22:25.018Z · LW(p) · GW(p)

Though like moridinamael, I’m also not clear on whether he personally believed in things like genetic memory, though I would be interested to see sources if you have them. I assumed that it was an element he included for fictional/allegorical purposes.

Yes, we shouldn't assume a SF author endorsed any speculative proto/pseudo-science he includes. But in the case of genetic memory, we can be fairly sure that he 'believed in it' in the sense that he took it way more seriously than you or I and considered it a live hypothesis because he says so explicitly in an interview I quote in the essay: he thinks genetic memory and pheromones, or something much like them, are necessary to explain things like the cohesion of mob & social groups like aristocracies without explicit obvious status markers, or the supposed generational patterns of warfare 'spasms' (this is a reference to the obscure crankery of The Sexual Cycle of Human Warfare† which apparently deeply influenced Herbert and you won't understand all the references/influences unless you at least look at an overview of it because it's so lulzy).


Reading back, I see I got sidetracked and didn't resolve your main point about why the Butlerian Jihad targeted all software. The one-line explanation is: permitting any software is an existential risk because it is a crutch which will cripple humanity's long-term growth throughout the universe, leaving us vulnerable to the inevitable black swans (not necessarily AI).

First, you should read my essay, and especially that Herbert interview and the Spinrad democracy footnote; and, if you have the time, Herbert's attitude towards computers & software is most revealed in Without Me You're Nothing, which is a very strange artifact: his 1980 technical guide/book on programming PCs of that era. Leaving aside the wildly outdated information, which you can skip over, the interesting parts are his essays and commentaries on PCs in general, which convey his irascible humanist-libertarian attitude toward PCs as a democratizing and empowering force for independent human growth. Herbert was quite a PC enthusiast: beyond writing a whole book about how to use them, his farmstead was apparently rigged up with all sorts of gadgets and 'home automation' he had made as a hobby to help him farm and, at least in theory, be more independent & capable & a Renaissance man. (Touponce is also well worth reading.) There's a lot of supporting information in those that I won't try to get into here, which I think supports my generalizations below.

So, your basic error is the claim that the BJ isn't about AI or existential risk per se. The BJ here is in fact about existential risk from Herbert's POV; it's just that it's much more indirect than you are thinking. It has nothing to do with signaling or arms-races. Herbert's basic position is that machines (like PCs), 'without me [the living creative human user], they are nothing': they are dead, uncreative, unable to improvise or grow, and constraining. (At least without a level of strong AI he considered centuries or millennia away & to require countless fundamental breakthroughs.) They lock humans into fixed patterns. And to Herbert, this fixedness is death. It is death, sooner or later, perhaps many millennia later, but death nevertheless; and [human] life is jazz:

In all of my universe I have seen no law of nature, unchanging and inexorable. This universe presents only changing relationships which are sometimes seen as laws by short-lived awareness. These fleshly sensoria which we call self are ephemera withering in the blaze of infinity, fleetingly aware of temporary conditions which confine our activities and change as our activities change. If you must label the absolute, use its proper name: "Temporary".

Or

The person who takes the banal and ordinary and illuminates it in a new way can terrify. We do not want our ideas changed. We feel threatened by such demands. 'I already know the important things!' we say. Then Changer comes and throws our old ideas away.

And

Odrade pushed such thoughts aside. There were things to do on the crossing. None of them more important than gathering her energies. Honored Matres could be analyzed almost out of reality, but the actual confrontation would be played as it came -- a jazz performance. She liked the idea of jazz although the music distracted her with its antique flavors and the dips into wildness. Jazz spoke about life, though. No two performances ever identical. Players reacted to what was received from the others: jazz. Feed us with jazz.

('Muad'dib's first lesson was how to learn'/'the wise man shapes himself, the fool lives only to die' etc etc)

Whether it's some space plague or space aliens or sterility or decadence or civil war or spice running out or thinking machines far in the future, it doesn't matter, because the universe will keep changing, and humans mentally enslaved to, and dependent on, their thinking machines, would not. Their abilities will be stunted and wither away, they will fail to adapt and evolve and grow and gain capabilities like prescience. (Even if the thinking-machines survive whatever doomsday inevitably comes, who cares? They aren't humans. Certainly Herbert doesn't care about AIs, he's all about humanity.) And sooner or later - gambler's ruin - there will be something and humanity will go extinct. Unless they strengthen themselves and enter into the infinite open universe, abandoning delusions about certainty or immortality or reducing everything to simple rules.

That is why the BJ places the emphasis on banning anything that serves as a crutch for humans, mechanizing their higher life.* It's fine to use a forklift or a spaceship, humans were never going to hoist a 2-ton pallet or flap their wings to fly the galaxy and those tools extend their abilities; it's not fine to ask a computer for an optimal Five-Year Plan for the economy or to pilot the space ship because now it's replacing the human role. The strictures force the development of mentats, Reverend Mothers, Navigators, Face Dancers, sword-masters, and so on and so forth, all of which eventually merge in the later books, evolving super-capable humans who can Scatter across the universe, evading ever new and more dangerous enemies, ensuring that humanity never goes extinct, never gets lazy, and someday will become, as the Bene Gesserit put it, 'adults', who presumably can discard all the feudal frippery and stand as mature independent equals in fully democratic societies.

As you can see, this has little to do with Confucianism or the stasis being intrinsically desirable or it being a good thing to remove all bureaucracy (bureaucracy is just a tool, like any other, to be used skillfully) or indeed all automation etc.

* I suspect that there's a similar idea behind 'BuSab' in his ConSentiency universe, but TBH, I find those novels/stories too boring to read carefully.
† 183MB color scan: https://www.gwern.net/docs/sociology/1950-walter-thesexualcycleofhumanwarfare.pdf

comment by Virgil Kurkjian · 2018-10-01T00:46:54.880Z · LW(p) · GW(p)

That is funny! I hadn't thought about Dune in a while, but Nabil's comment on SSC brought thoughts of the Jihad flooding back.

I agree with your critiques of unilateral action; it's a major problem with all proposals like this (maybe a whole post on this at some point). Something that bugs me about a lot of calls to action, even relatively mundane political ones, is that they don't make clear what I, personally, can do to further the cause.

This is why I specifically advised that people not automate anything new. Many of us are programmers or engineers; we feel positively about automation and will often want to implement it in our lives. Some of us even occupy positions of power in various organizations, or are in a position to advise people who are. I know that this idea will make me less likely to automate things in my life; I hope it will influence others similarly.

Dismantling the automation we have sounds like a much tougher coordination problem. I'm less optimistic about that one! But maybe we can not actively make it worse.

The fealty proposal was intended as a joke! I just think we could consider being more Confucian.

Exactly what Herbert believed is hard to say, but my impression has always been that he mostly agrees with the views of his "main" characters: Leto I, Paul, Hayt, Leto II, Siona, Miles Teg, etc. Regarding feudalism, he says that it is the "natural condition of human beings…not that it is the only condition or not that it is the right condition". I've found this interview pretty enlightening.

In regards to the "multi-millennial ideologically motivated political stranglehold", I'm not sure if he thinks it's good. But insofar as we think human extinction is bad, we have to see this system as, if not good, then at least successful.

Thanks for the feedback! :)

Replies from: moridinamael
comment by moridinamael · 2018-10-01T01:32:53.912Z · LW(p) · GW(p)

Thanks for the interview. This is great.

comment by Martin Sustrik (sustrik) · 2018-10-01T06:26:40.282Z · LW(p) · GW(p)

I like the framing of the problem here: If a bureaucrat (or lawyer) acts on a fully specified set of rules and exercises no personal judgement, then they can be replaced by a machine. If they don't want to be replaced by a machine, they should be able to prove that their personal judgement is indispensable.

That changes incentives for bureaucrats in quite a dramatic fashion.

Replies from: Virgil Kurkjian
comment by Virgil Kurkjian · 2018-10-01T11:00:57.915Z · LW(p) · GW(p)

Wow, that's an implication I hadn't considered! But you're right on the money with that one.

The one danger I see here is that very simple models can often account for ~70% of the variance in a particular area. People might be tempted to automate these decisions. But the remaining 30% is often highly complex and critical! So personally I wouldn't automate a bureaucrat until something like a 95% or 99% match.

Though I'm sure there are bureaucrats who can be modeled 100% accurately, and they should be replaced. :P
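
A minimal sketch of the test this comment implies (my own illustration, not anything from the thread): replay an audit log of past cases and measure how often a candidate model reproduces the human's verdicts, automating only above a high threshold:

```python
def match_rate(model, audit_log):
    """audit_log: list of (case, human_verdict) pairs; model maps case -> verdict."""
    agreements = sum(model(case) == verdict for case, verdict in audit_log)
    return agreements / len(audit_log)

# Automate only if, say, match_rate(model, audit_log) >= 0.95 —
# remembering that the disagreements may be exactly the complex,
# critical 30% the parent comment worries about.
```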

comment by 61i72h · 2019-11-11T18:22:27.807Z · LW(p) · GW(p)

Currently rereading the series, so I thought I'd point out something. Thanks for the reading suggestions in the text :).

At the beginning of Dune, we are told that the Great Schools come into existence after the Butlerian Jihad to replace the functions served by machines. While these are claimed to be 'human' functions, they bear remarkable similarity to the functions of machines, and we can roughly categorize them as follows:

Navigators: Mathematics, pure abstraction.

Mentats: Computation, simulation and rejection of inconsistent alternatives. Logic. Bayesian reasoning.

Suks: Moral apathy.

Bene Gesserit: Command architecture.

My main problem is the suggestion that the Dune Universe is internally aware of and attempts to rectify Goodhart's Law, and that the process of doing so is the rejection of automation.

The existence of the Bene Gesserit and the ubiquitous spread of religion are to be taken as counterexamples to this point. We know that their main tools are politics and religion, two fields of rhetoric which can be seen as essentially manipulative in nature; their design is to enforce behaviors favorable to the Bene Gesserit using sex, mind control (Voice), and indoctrination. At numerous points in the Dune universe, the Bene Gesserit 'hack' human cognition with these tools, turning the people around them into pliable robots which do not think for themselves. We also know that there are multiple avenues by which this is done, through the OC Bible, the Missionaria Protectiva, the Zensunni and Zenshiite teachings, and the super-religion developed and realized by the Atreides. So to a large degree we can say that automation exists (in the sense of unthinking work), it is just that it is humans who are doing it, not machines. To reiterate: it is unthinkingly bureaucratic because laws are not open to interpretation; the processing of signals has merely shifted from the mechanical to the religious-fundamentalist.

Of course, the Bene Gesserit also make an important distinction: those who are not Bene Gesserit are not 'human'. They have not been tested in the manner of the Gom Jabbar. So the BG have no qualms about how they treat the 'subhumans' around them, as these are not the ultimate beneficiaries of their long-term vision of human survival. Ordinary humans are tools, but complex ones. Machines.

We may also take a moment to look at the Gom Jabbar as an example of an unthinking measurement of a signal which probably doesn't say all that much about humanity, and really just measures something about pain thresholds and early childhood training. We know that the Bene Gesserit disdain love and strictly follow their codified beliefs, to the point that many would not consider rejecting their programming. So here they are put in the position of imposing a bad test which 'could' be wrong, but which is impassably automated by doctrine. The Fremen have the same custom. The Bene Gesserit, despite being 'human', are not immune to political programming via their Mother Superior.

With this in mind, I'd say that 'if' the Butlerian Jihad was aimed at ending automation, it totally failed. In fact, it failed so miserably that all it really did was make humans more like the machines they were trying to overcome. The ultimate realization of this is Muad'Dib's Imperium, where a man is trapped into a deterministic course by prescience and holds the Imperium under an extended theo-bureaucratic hegemony whose citizens are programmed by religious law. Arguably the point of Paul's story is that he finds a middle path between the rejection of a rational objective, its acceptance, and a personal, irrational desire (irrational in the sense that it betrays the objective upheld by House Atreides, including the unwed Leto). He marries the woman he loves, when what might be better for humanity is that he marry Irulan Corrino.

That is possibly the decidedly human thing the Butlerian Jihad was fought for. A thinking being with a subjective rationale.


So what else might have happened in the B. Jihad?

If we ignore the prequels, we have this from Leto II's firsthand ancestral memory of the Jihad:

"We must reject the machines-that-think. Humans must set their own guidelines. This is not something machines can do. Reasoning depends upon programming, not on hardware, and we are the ultimate program!"


1. We are the ultimate program

From this we could say that it was never the intention to remove programming, and thus automation. It was to follow the best program. So who had the best program? We can assume that the best program is the most reasonable and rational one.


2. Reasoning depends upon programming, not on hardware. This is not something machines can do.

Advances in computing can roughly be divided between hardware and software. The point made by this jihadi is that advances kept being made in machine hardware, making machines faster and more efficient, but that at some point humans failed to create reasoning machines. At least, the jihadis believed so. The thinking machines they created were imperfect in some way: lacking in emotion, creativity, consciousness, or even so much as the ability to choose, or to tell the difference between a turtle and a rifle. Humanity proved itself incapable of creating a machine as sophisticated as a human being.


3. We might guess at a reason for this by borrowing from other aspects of the Dune universe:

“Without change something sleeps inside us, and seldom awakens. The sleeper must awaken.”

Throughout almost the entire Dune series we are met with the idea of reaching our full potential through the application of stressors; it's basically a multi-book manifesto on post-traumatic growth. Whether it's the Fremen being shaped by Arrakis or the Atreides being shaped by the kindly tyranny of Leto II, the idea is that comfort, a lack of problems, leads downward to eternal stagnation. I would suggest that this is what happened in the Butlerian Jihad: humans reached a point where machines were as smart as they were going to get. Humans were not smart enough to make machines smarter, and humans weren't getting any smarter themselves. Because they had reached their technological epitome, they lived decadent lives with few stressors. In a utopian society, they were essentially hamstrung, incapable of speeding up space-flight beyond pre-spice levels. Much like at the end of Leto II's reign, their pent-up wanderlust (which we can assume is a real thing in the Dune universe) reached a tipping point. We don't know if the spice had been discovered at this point, but it is probable, given the nature of the Dune universe, that in such a condition humanity looked for ways beyond machines, such as the spice, to break through the barriers imposed by technological dependency. Thus there was the first scattering that created the Corrino Imperium.

I basically believe this because it explains one of the most perplexing and otherwise inconsistent passages in the series:

"They were all caught up in the need of their race to renew its scattered inheritance, to cross and mingle and infuse their bloodlines in a great new pooling of genes. And the race knew only one sure way for this - the ancient way, the tried and certain way that rolled over everything in its path: jihad."

One might accept that there was resistance from machines led by humans who opposed the Jihad, leading to a war (remember, machines have no volition of their own), or, as in the prequels, machines operating according to the programming of the power-hungry.

It's almost like an anti-singularity: rather than machines outpacing human thinking, humans reach a point where they cannot compute a way forward that includes the existence of machines. So they remove that constant from the equation and formulate a new one. Much as with spice, they had developed a dangerous dependency that prevented them from expanding. The Jihad was an erasure of this dependency, just as the Atreides jihad eventually erased the dependency on spice.

If there is any lesson in that for budding rationalists, I'd say it is this: it is dangerous to have a single source of prosperity, be it a machine or a drug. This creates a fragile system with an upper limit on progress. Instead, we need an options-mindset that recognizes that rationality works best when paired with numerous choices. Accept that AI is not the sole means of automation available to us, nor is it inherently a bad option; there are just other means we must consider in parallel, or the way before us may narrow into a single, terrible way forward: backward.

comment by alexey · 2018-10-08T16:32:31.510Z · LW(p) · GW(p)

I don't believe the original novels imply that humanity nearly went extinct and then banded together; that was only in "the junk Herbert's son wrote". Nor that Strong AI was developed only a short time before the Jihad started.

Neither of these is true in the Dune Encyclopedia version, which Frank Herbert at least didn't strongly disapprove of.

There is still some Goodhart's-Law-ing there, to quote https://dune.wikia.com/wiki/Butlerian_Jihad/DE:

After Jehanne's death, she became a martyr, but her generals continued exponentially with more zeal. Jehanne knew her weaknesses and fears, but her followers did not. The politics of Urania were favored. Around that time, the goals of the Jihad were the destruction of machine technology operating at the expense of human values; but by this point they had been replaced by indiscriminate slaughter.

Replies from: Virgil Kurkjian
comment by Virgil Kurkjian · 2018-10-09T01:25:36.220Z · LW(p) · GW(p)

I think it's a reasonable inference that humanity nearly went extinct, given that the impact of the Jihad was so pronounced as to affect all human culture for the next 10,000+ years. And I think it's a reasonable inference that we banded together, given that we did manage to win.

comment by NancyLebovitz · 2018-10-11T12:28:13.417Z · LW(p) · GW(p)

It's a fascinating essay, but non-automation isn't all that great. In particular, Confucian China had foot-binding for nearly a thousand years -- mothers slowly breaking their daughters' feet to make the daughters more marriageable.

It's possible that in the long run, societies with automation are even worse than societies without it, but I don't think that's proven.

Replies from: Virgil Kurkjian
comment by Virgil Kurkjian · 2018-10-11T20:08:56.849Z · LW(p) · GW(p)

Well, I don't think that foot-binding is necessarily a direct result of Confucianism, and even if it is, I see even less connection to the anti-automation aspects. You could also say that Confucianism as it was practiced bears about as much relationship to what Confucius actually taught as modern Christianity bears to what Jesus actually taught.