Rationalism before the Sequences
post by Eric Raymond (eric-raymond) · 2021-03-30T14:04:15.254Z · LW · GW · 83 comments
I'm here to tell you a story about what it was like to be a rationalist decades before the Sequences and the formation of the modern rationalist community. It is not the only story that could be told, but it is one that runs parallel to and has important connections to Eliezer Yudkowsky's and how his ideas developed.
My goal in writing this essay is to give the LW community a sense of the prehistory of their movement. It is not intended to be "where Eliezer got his ideas"; that would be stupidly reductive. I aim more to exhibit where the drive and spirit of the Yudkowskian reform came from, and the interesting ways in which Eliezer's formative experiences were not unique.
My standing to write this essay begins with the fact that I am roughly 20 years older than Eliezer and read many of his sources before he was old enough to read. I was acquainted with him over an email list before he wrote the Sequences, though I somehow managed to forget those interactions afterwards and only rediscovered them while researching this essay. In 2005 he even sent me a book manuscript to review that covered some of the Sequences topics.
My reaction on reading "The Twelve Virtues of Rationality" a few years later was dual. It was a different kind of writing than the book manuscript - stronger, more individual, taking some serious risks. On the one hand, I was deeply impressed by its clarity and courage. On the other hand, much of it seemed very familiar, full of hints and callbacks and allusions to books I knew very well.
Today it is probably more difficult to back-read Eliezer's sources than it was in 2006, because the body of more recent work within his reformation of rationalism tends to get in the way. I'm going to attempt to draw aside that veil by talking about four specific topics: General Semantics, analytic philosophy, science fiction, and Zen Buddhism.
Before I get to those specifics, I want to try to convey that sense of what it was like. I was a bright geeky kid in the 1960s and 1970s, immersed in a lot of obscure topics often with an implicit common theme: intelligence can save us! Learning how to think more clearly can make us better! But at the beginning I was groping as if in a dense fog, unclear about how to turn that belief into actionable advice.
Sometimes I would get a flash of light through the fog, or at least a sense that there were other people on the same lonely quest. A bit of that sense sometimes drifted over USENET, an early precursor of today's Internet fora. More often than not, though, the clue would be fictional; somebody's imagination about what it would be like to increase intelligence, to burn away error and think more clearly.
When I found non-fiction sources on rationality and intelligence increase I devoured them. Alas, most were useless junk. But in a few places I found gold. Not by coincidence, the places I found real value were sources Eliezer would later draw on. I'm not guessing about this, I was able to confirm it first from Eliezer's explicit reports of what influenced him and then via an email conversation.
Eliezer and I were not unique. We know directly of a few others with experiences like ours. There were likely dozens of others we didn't know - possibly hundreds - on parallel paths, all hungrily seeking clarity of thought, all finding largely overlapping subsets of clues and techniques because there simply wasn't that much out there to be mined.
One piece of evidence for this parallelism besides Eliezer's reports is that I bounced a draft of this essay off Nancy Lebovitz, a former LW moderator who I've known personally since the 1970s. Her instant reaction? "Full of stuff I knew already."
Around the time Nancy and I first met, some years before Eliezer Yudkowsky was born, my maternal grandfather gave me a book called "People In Quandaries". It was an introduction to General Semantics. I can't be sure, because I didn't know enough to ask the question while he was alive, but I strongly suspect that granddad was a member of one of the early GS study groups, probably the same one that included Robert Heinlein (they were near neighbors in Southern California in the early 1940s).
General Semantics is going to be a big part of my story. Twelve Virtues speaks of "carrying your map through to reflecting the territory"; this is a clear, obviously intentional callback to a central GS maxim that runs "The map is not the territory; the word is not the thing defined."
I'm not going to give a primer on GS here. I am going to affirm that it rocked my world, and if the clue in Twelve Virtues weren't enough Eliezer has reported in no uncertain terms that it rocked his too. It was the first time I encountered really actionable advice on the practice of rationality.
Core GS formulations like cultivating consciousness of abstracting, remembering the map/territory distinction, avoiding the verb "to be" and the is-of-identity, that the geometry of the real world is non-Euclidean, that the logic of the real world is non-Aristotelian; these were useful. They helped. They reduced the inefficiency of my thinking.
For the pre-Sequences rationalist, those of us stumbling around in that fog, GS was typically the most powerful single non-fictional piece of the available toolkit. After the millennium I would find many reflections of it in the Sequences.
This is not, however, meant to imply that GS is some kind of supernal lost wisdom that all rationalists should go back and study. Alfred Korzybski, the founder of General Semantics, was a man of his time, and some of the ideas he formulated in the 1930s have not aged well. Sadly, he was an absolutely terrible writer; reading "Science and Sanity", his magnum opus, is like an endless slog through mud with occasional flashes of world-upending brilliance.
If Eliezer had done nothing else but give GS concepts a better presentation, that would have been a great deal. Indeed, before I read the Sequences I thought giving GS a better finish for the modern reader was something I might have to do myself someday - but Eliezer did most of that, and a good deal more besides, folding in a lot of sound thinking that was unavailable in Korzybski's day.
When I said that Eliezer's sources are probably more difficult to back-read today than they were in 2006, I had GS specifically in mind. Yudkowskian-reform rationalism has since developed a very different language for the large areas where it overlaps GS's concerns. I sometimes find myself in the position of a native Greek speaker hunting for equivalents in that new-fangled Latin; the equivalents are usually there, but it can take some effort to bridge the gap.
Next I'm going to talk about some more nonfiction that might have had that kind of importance if a larger subset of aspiring rationalists had known enough about it. And that is the analytic tradition in philosophy.
I asked Eliezer about this and learned that he himself never read any of what I would consider core texts: C.S. Peirce's epoch-making 1878 paper "How To Make Our Ideas Clear", for example, or W.V. Quine's "Two Dogmas of Empiricism". Eliezer got their ideas through secondary sources. How deeply pre-Sequences rationalists drew directly from this well seems to be much more variable than the more consistent theme of early General Semantics exposure.
However: even if filtered through secondary sources, tropes originating in analytic philosophy have ended up being central in every formulated version of rationalism since 1900, including General Semantics and Yudkowskian-reform rationalism. A notable one is the program of reducing philosophical questions to problems in language analysis, seeking some kind of flaw in the map rather than mysterianizing the territory. Another is the definition of "truth" as predictive power over some range of future observables.
But here I want to focus on a subtler point about origins rather than ends: these ideas were in the air around every aspiring rationalist of the last century, certainly including both myself and the younger Eliezer. Glimpses of light through the fog...
This is where I must insert a grumble, one that I hope is instructive about what it was like before the Sequences. I'm using the term "rationalist" retrospectively, but those among us who were seeking a way forward and literate in formal philosophy didn't tend to use that term of ourselves at the time. In fact, I specifically avoided it, and I don't believe I was alone in this.
Here's why. In the history of philosophy, a "rationalist" is one who asserts the superiority of a-priori deductive reasoning over grubby induction from mere material facts. The opposing term is "empiricist", and in fact Yudkowskian-reform "rationalists" are, in strictly correct terminology, skeptical empiricists.
Alas, that ship has long since sailed. We're stuck with "rationalist" as a social label now; the success of the Yudkowskian reform has nailed that down. But it's worth remembering that in this case not only is our map not the territory, it's not even immediately consistent with other equally valid maps.
Now we get to the fun part, where I talk about science fiction.
SF author Greg Bear probably closed the book on attempts to define science fiction as a genre in 1994 when he said "the branch of fantastic literature which affirms the rational knowability of the universe". It shouldn't be surprising, then, that ever since the Campbellian Revolution in 1939 invented modern science fiction there has been an important strain in it of fascination with rationalist self-improvement.
I'm not talking about transhumanism here. The idea that we might, say, upload to machines with vastly greater computational capacity is not one that fed pre-Yudkowskian rationalism, because it wasn't actionable. No; I'm pointing at more attainable fictions about learning to think better, or discovering a key that unlocks a higher level of intelligence and rationality in ourselves. "Ultrahumanist" would be a better term for this, and I'll use it in the rest of this essay.
I'm going to describe one such work in some detail, because (a) wearing my SF-historian hat I consider it a central exemplar of the ultrahumanist subgenre, and (b) I know it had a large personal impact on me.
"Gulf", by Robert A. Heinlein, published in the October–November 1949 Astounding Science Fiction. A spy on a mission to thwart an evil conspiracy stumbles over a benign one - people who call themselves "Homo Novis" and have cultivated techniques of rationality and intelligence increase, including an invented language that promotes speed and precision of thought. He is recruited by them, and a key part of his training involves learning the language.
At the end of the story he dies while saving the world, but the ostensible plot is not really the point. It's an excuse for Heinlein to play with some ideas, clearly derived in part from General Semantics, about what a "better" human being might look and act like - including, crucially, the moral and ethical dimension. One of the tests the protagonist doesn't know he's passing is when he successfully cooperates in gentling a horse.
The most important traits of the new humans are that (a) they prize rationality under all circumstances - to be accepted by them you have to retain clear thinking and problem-solving capability even when you're stressed, hungry, tired, cold, or in combat; and (b) they're not some kind of mutation or artificial superrace. They are human beings who have chosen to pool their efforts to make themselves more reliably intelligent.
There was a lot of this sort of GS-inspired ultrahumanism going around in Golden Age SF between 1940 and 1960. Other proto-rationalists may have been more energized by other stories in that current. Eliezer remembers and acknowledges "Gulf" as an influence but reports having been more excited by "The World of Null-A" (1946). Isaac Asimov's "Foundation" novels (1942-1953) were important to him as well even though there was not much actionable in them about rationality at the individual level.
As for me, "Gulf" changed the direction of my life when I read it sometime around 1971. Perhaps I would have found that direction anyway, but...teenage me wanted to be homo novis. More, I wanted to deserve to be homo novis. When my grandfather gave me that General Semantics book later in the same decade, I was ready.
That kind of imaginative fuel was tremendously important, because we didn't have a community. We didn't have a shared system. We didn't have hubs like Less Wrong and Slate Star Codex. Each of us had to bootstrap our own rationality technique out of pieces like General Semantics, philosophical pragmatism, the earliest most primitive research on cognitive biases, microeconomics, and the first stirrings of what became evolutionary psych.
Those things gave us the materials. Science fiction gave us the dream, the desire that it took to support the effort of putting it together and finding rational discipline in ourselves.
Last I'm going to touch on Zen Buddhism. Eliezer likes to play with the devices of Zen rhetoric; this has been a feature of his writing since Twelve Virtues. I understood why immediately, because that attraction was obviously driven by something I myself had discovered decades before in trying to construct my own rationalist technique.
Buddhism is a huge, complex cluster of religions. One of its core aims is the rejection of illusions about how the universe is. This has led to a rediscovery, at several points in its development, of systematic theories aimed at stripping away attachments and illusions. And not just that; also meditative practices intended to shift the practitioner into a mental stance that supports less wrongness.
If you pursue this sort of thing for more than three thousand years, as Buddhists have been doing, you're likely to find some techniques that actually do help you pay better attention to reality - even if it is difficult to dig them out of the surrounding religious encrustations afterwards.
One of the most recent periods of such rediscovery followed the 18th-century revival of Japanese Buddhism by Hakuin Ekaku. There's a fascinating story to be told about how Euro-American culture imported Zen in the early 20th century and refined it even further in the direction Hakuin had taken it, a direction scholars of Buddhism call "ultimatism". I'm not going to reprise that story here, just indicate one important result of it that can inform a rationalist practice.
Here's the thing that Eliezer and I and other 20th-century rationalists noticed: Zen rhetoric and meditation program the brain for epistemic skepticism, for a rejection of language-driven attachments, for not just knowing that the map is not the territory but feeling that disjunction.
Somehow, Zen rhetoric's ability to program brains for epistemic skepticism survives not just disconnection from Japanese culture and Buddhist religious claims, but translation out of its original language into English. This is remarkable - and, if you're seeking tools to loosen the grip of preconceptions and biases on your thinking, very useful.
Alfred Korzybski himself noticed this almost as soon as good primary sources on Zen were available in the West, back in the 1930s; early General Semantics speaks of "silence on the objective level" in a very Zen-like way.
No, I'm not saying we all need to become students of Zen, any more than I think we all need to go back and immerse ourselves in GS. But co-opting some of Zen's language and techniques is something that Eliezer definitely did. So did I, and so did other rationalists before the Yudkowskian reformation, who tended to find their way to it as well.
If you think about all these things in combination - GS, analytic philosophy, Golden Age SF, Zen Buddhism - I think the roots of the Yudkowskian reformation become much easier to understand. Eliezer's quest and the materials he assembled were not unique. His special gift was the same ambition as Alfred Korzybski's: to form from what he had learned a teachable system for becoming less wrong. And, of course, the intellectual firepower to carry that through - if not perfectly, at least well enough to make a huge difference.
If nothing else, I hope this essay will leave you feeling grateful that you no longer have to do a decades-long bootstrapping process the way Eliezer and Nancy and I and others like us had to in the before times. I doubt any of us are sorry we put in the effort, but being able to shortcut a lot of it is a good thing.
Some of you, recognizing my name, will know that I ended up changing the world in my own way a few years before Eliezer began to write the Sequences. That this ensued after long struggle to develop a rationalist practice is not coincidence; if you improve your thinking hard enough over enough time I suspect it's difficult to avoid eventually getting out in front of people who aren't doing that.
That's what Eliezer did, too. In the long run, I rather hope that his reform movement will turn out to have been more important than mine.
Selected sources follow. The fiction list could have been a lot longer, but I filtered pretty strongly for works that somehow addressed useful models of individual rationality training. Marked with * are those Eliezer explicitly reports he has read.
Huikai, Wumen: "The Gateless Barrier" (1228)
Peirce, Charles Sanders: "How To Make Our Ideas Clear" (1878)
Korzybski, Alfred: "Science and Sanity" (1933)
Chase, Stuart: "The Tyranny of Words" (1938)
Hayakawa, S. I.: "Language in Thought and Action" (1939) *
Russell, Bertrand: "A History of Western Philosophy" (1945)
Orwell, George: "Politics and the English Language" (1946) *
Johnson, Wendell: "People in Quandaries: The Semantics of Personal Adjustment" (1946)
Van Vogt, A. E.: "The World of Null-A" (1946) *
Heinlein, Robert Anson: "Gulf" (1949) *
Quine, Willard Van Orman: "Two Dogmas of Empiricism" (1951)
Heinlein, Robert Anson: "The Moon Is A Harsh Mistress" (1966) *
Williams, George: "Adaptation and Natural Selection" (1966) *
Pirsig, Robert M.: "Zen and the Art of Motorcycle Maintenance" (1974) *
Benares, Camden: "Zen Without Zen Masters" (1977)
Smullyan, Raymond: "The Tao is Silent" (1977) *
Hill, Gregory & Thornley, Kerry W.: "Principia Discordia (5th ed.)" (1979) *
Hofstadter, Douglas: "Gödel, Escher, Bach: An Eternal Golden Braid" (1979) *
Feynman, Richard: "Surely You're Joking, Mr. Feynman!" (1985) *
Pearl, Judea: "Probabilistic Reasoning in Intelligent Systems" (1988) *
Stiegler, Marc: "David's Sling" (1988) *
Zindell, David: "Neverness" (1988) *
Williams, Walter John: "Aristoi" (1992) *
Tooby & Cosmides: "The Adapted Mind: Evolutionary Psychology and the Generation of Culture" (1992) *
Wright, Robert: "The Moral Animal" (1994) *
Jaynes, E.T.: "Probability Theory: The Logic of Science" (1995) *
The assistance of Nancy Lebovitz, Eliezer Yudkowsky, Jason Azze, and Ben Pace is gratefully acknowledged. Any errors or inadvertent misrepresentations remain entirely the author's responsibility.
83 comments
Comments sorted by top scores.
comment by Ben Pace (Benito) · 2021-03-31T02:29:25.340Z · LW(p) · GW(p)
The most important traits of the new humans are that... they prize rationality under all circumstances - to be accepted by them you have to retain clear thinking and problem-solving capability even when you're stressed, hungry, tired, cold, or in combat
Interestingly, as a LessWronger, I don't think of myself in quite this way. I think there's a key skill a rationalist should attain, which is knowing in which environments you will fail to be rational, and avoiding those environments. Knowing your limits, and using that knowledge when making plans.
One that I've dealt with, that I think is pertinent for a lot of people, is being aware of how social media can destroy my attention and leave me feeling quite socially self-conscious. Bringing them into my environment damages my ability to think.
On the one hand, becoming able to think clearly and make good decisions while using social media is valuable and for many necessary. Here are some of the ways I try to do that, in the style of the Homo Novis:
- I notice when I'm being encouraged to use the wrong concepts (e.g. PR rather than honor [LW · GW]) or believe deeply bad theories of ethics (e.g. the Copenhagen theory of ethics [LW · GW])
- I keep my identity small / use my identity carefully [LW · GW]
- I build a better model of my social environment, how knowledge propagates [LW · GW], and the narrative forces pushing on me [LW · GW] (especially the forces of blandness [LW · GW]) so I can see threats coming
But one of the important tools I have is avoiding being in those environments. I respond with very strict rules around Sabbath/Rest Days [? · GW] so I can clear my head. I also don't carry a phone in general, and install content blockers on my laptop. I think these approaches are more like "avoiding situations where I cannot think clearly" than "learning to think clearly in difficult situations".
There's a balance between the two strategies. "Learn to think clearly in more environments" and "shape your environment [LW · GW] to help you think clearly / not hinder your ability to think clearly". In response to a situation where I can't think clearly, sometimes I pick the one, and sometimes the other.
All that said, Gulf is totally added to my reading list. I read both The Moon is a Harsh Mistress and Stranger in a Strange Land for the first time this year and that was a thrill.
Replies from: robert-miles, adamzerner, Kaj_Sotala, Alex_Altair, ryan_b
↑ comment by Robert Miles (robert-miles) · 2021-04-08T11:07:45.063Z · LW(p) · GW(p)
It would certainly be a mistake to interpret your martial art's principle of "A warrior should be able to fight well even in unfavourable combat situations" as "A warrior should always immediately charge into combat, even when that would lead to an unfavourable situation", or "There's no point in trying to manoeuvre into a favourable situation"
↑ comment by Adam Zerner (adamzerner) · 2021-03-31T06:58:34.839Z · LW(p) · GW(p)
Great point. A few (related) examples come to mind:
- Paul Graham's essay The Top Idea in Your Mind. "I realized recently that what one thinks about in the shower in the morning is more important than I'd thought. I knew it was a good time to have ideas. Now I'd go further: now I'd say it's hard to do a really good job on anything you don't think about in the shower."
- Trying to figure out dinner is the worst when I'm already hungry. I still haven't reached a level of success where I'm satisfied, but I've had some success with 1) planning out meals for the next ~2 weeks, that way instead of deciding what to make for dinner, I just pick something off the list, 2) meal prepping, 3) having Meal Squares as a backup.
- Grooming meetings vs. (I guess you can call it) asynchronous grooming. In scrum, you have meetings where ~15 people get in a room ("room"), look at the tasks that need to be done, go through each of them, and try to plan each task out + address any questions about the task. With so many people + a fast pace, things can get a little chaotic, and I find it difficult to contribute much value. However, we're trying something new where tickets are assigned to people before the grooming meeting, and developers have a little "homework assignment" to groom their ticket beforehand. And then during the grooming meeting you present your ticket and give others a chance to comment or ask questions. We're starting it this week so I'm not sure if it will be more effective, but I have a strong sense that it will be.
- Arguments. It's hard to be productive when things get heated. Probably better to take a breather and come back to it.
↑ comment by Kaj_Sotala · 2021-03-31T05:45:48.642Z · LW(p) · GW(p)
I think this comment would make for a good top-level post almost as it is.
Replies from: habryka4, eric-raymond
↑ comment by habryka (habryka4) · 2021-03-31T22:07:57.341Z · LW(p) · GW(p)
This post of mine feels closely related: https://www.lesswrong.com/posts/xhE4TriBSPywGuhqi/integrity-and-accountability-are-core-parts-of-rationality [LW · GW]
- I have come to believe that people's ability to come to correct opinions about important questions is in large part a result of whether their social and monetary incentives reward them when they have accurate models in a specific domain. This means a person can have extremely good opinions in one domain of reality, because they are subject to good incentives, while having highly inaccurate models in a large variety of other domains in which their incentives are not well optimized.
- People's rationality is much more defined by their ability to maneuver themselves into environments in which their external incentives align with their goals, than by their ability to have correct opinions while being subject to incentives they don't endorse. This is a tractable intervention and so the best people will be able to have vastly more accurate beliefs than the average person, but it means that "having accurate beliefs in one domain" doesn't straightforwardly generalize to "will have accurate beliefs in other domains". One is strongly predictive of the other, and that's in part due to general thinking skills and broad cognitive ability. But another major piece of the puzzle is the person's ability to build and seek out environments with good incentive structures.
- Everyone is highly irrational in their beliefs about at least some aspects of reality, and positions of power in particular tend to encourage strong incentives that don't tend to be optimally aligned with the truth. This means that highly competent people in positions of power often have less accurate beliefs than competent people who are not in positions of power.
- The design of systems that hold people who have power and influence accountable in a way that aligns their interests with both forming accurate beliefs and the interests of humanity at large is a really important problem, and is a major determinant of the overall quality of the decision-making ability of a community. General rationality training helps, but for collective decision making the creation of accountability systems, the tracking of outcome metrics and the design of incentives is at least as big of a factor as the degree to which the individual members of the community are able to come to accurate beliefs on their own.
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2021-03-31T22:14:02.029Z · LW(p) · GW(p)
Hah, I was thinking of replying to say I was largely just repeating things you said in that post.
Nonetheless, thanks both Kaj and Eric, I might turn it into a little post. It's not bad to have two posts saying the same thing (slightly differently).
↑ comment by Eric Raymond (eric-raymond) · 2021-03-31T06:13:31.895Z · LW(p) · GW(p)
Agreed.
↑ comment by Alex_Altair · 2021-04-03T05:41:29.819Z · LW(p) · GW(p)
Similarly, for instrumental rationality, I've been trying to lean harder on putting myself in environments that induce me to be more productive, rather than working on strategies to stay productive when my environment is making that difficult.
↑ comment by ryan_b · 2021-04-06T20:09:31.700Z · LW(p) · GW(p)
I agree with this comment. There is one point that I think we can extend usefully, which may dissolve the distinction with Homo Novis:
I think there's a key skill a rationalist should attain, which is knowing in which environments you will fail to be rational, and avoiding those environments.
While I agree, I also fully expect the list of environments in which we are able to think clearly should expand over time as the art advances. There are two areas where I think shaping the environment will fail as an alternative strategy: first is that we cannot advance the art's power over a new environment without testing ourselves in that environment; second is that there are tail risks to consider, which is to say we inevitably will have such environments imposed on us at some point. Consider events like car accidents, weather like tornadoes, malevolent action like a robbery, or medical issues like someone else choking or having a seizure.
I strongly expect that the ability to think clearly in extreme environments would have payoffs in less extreme environments. For example, a lot of the stress in a bad situation comes from the worry that it will turn into a worse situation; if we are confident of the ability to make good decisions in the worse situation, we should be less worried in the merely bad one, which should allow for better decisions in the merely bad one, thus making the worse situation less likely, and so on.
Also, consider the case of tail opportunities rather than tail risks; it seems like a clearly good idea to work on extending rationality to extremely good situations that also compromise clear thought. Things like: winning the lottery; getting hit on by someone you thought was out of your league; landing an interview with a much sought after investor. In fact I feel like all of the discussion around entrepreneurship falls into this category - the whole pitch is seeking out high-risk/high-reward opportunities. The idea that basic execution becomes harder when the rewards get huge is a common trope, but if we apply the test from the quote it comes back as "avoid environments with huge upside", which clearly doesn't scan (but is itself also a trope).
As a final note - and I emphasize up front I don't know how to square this exactly - I feel like there should be some correspondence between bad environments and bad problems. Consider that one of the motivating problems for our community is X-risk, which is a suite of problems that are by default too huge to wrap our minds around, too horrible to emotionally grapple with, etc. In short, they also meet the criteria for reliably causing rationality to fail, but this motivates us to improve our arts to deal with it. Why should problems be treated in the opposite way as environments?
So I think the Homo Novis distinction comes down to them being in possession of a fully developed art already; we are having to make do with an incomplete one.
For now.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2021-04-07T09:24:24.921Z · LW(p) · GW(p)
Tl;dr for last two comments:
- Know your limits.
- Expand your limits.
comment by Alex_Altair · 2021-04-03T00:03:28.145Z · LW(p) · GW(p)
As a note on terminology, I don't think that (Yudkowskian) rationalists use the word "rationalism" to describe our worldview/practice. It's a natural modification of "rationalist", and I've seen a few people outside the rationalist community use it to refer to our worldview, but e.g. no one ever comes up to me at a party and says, "Have any thoughts about rationalism lately?" We tend to just say "rationality" or "the art of rationality".
I'd also strongly advocate that we not start using the word "rationalism" for it. Mostly this is because I share your grumble about how the word "rationalist" already has a well-defined meaning to the rest of the world, and I don't want to extend that overloading and inevitable confusion by using the word "rationalism" alongside it.
I'm tempted to try to come up with better names for our worldview, but there are actually some advantages to not having a clear proper-noun-type name. One is that everyone immediately gets the gist of what "rationalists" are about. Stereotypes aside, it's an advantage over being called "the Frobnitzists" or something else inscrutable. Another is that, as described in the virtue of the void, we don't know exactly what the name is for what we want; we're trying to move toward that which cannot be named. If we give our current best-guess a proper noun like the Debiasers or the Bayesian Conspiracy, then we might be stuck with that even after we shift to a better understanding, or worse yet, we might think we've found the ultimate answer and become stuck to it through the name.
Replies from: RobbBB
↑ comment by Rob Bensinger (RobbBB) · 2021-04-03T05:42:39.454Z · LW(p) · GW(p)
I ~agree with this comment. If we do ever want a noun, I've proposed error-reductionism [LW(p) · GW(p)]. Or maybe we want something more Anglophone... lessening-of-mistake-ism, or something......
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2024-08-26T15:56:32.900Z · LW(p) · GW(p)
I note that I haven't said out loud, and should say out loud, that I endorse this history. Not every single line of it (see my other comment on why I reject verificationism) but on the whole, this is well-informed and well-applied.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-04-05T21:41:32.742Z · LW(p) · GW(p)
One minor note is that, among the reasons I haven't looked especially hard into the origins of "verificationism"(?) as a theory of meaning, is that I do in fact - as I understand it - explicitly deny this theory. The meaning of a statement is not the future experimental predictions that it brings about, nor isomorphic up to those predictions; all meaning about the causal universe derives from causal interactions with us, but you can have meaningful statements with no experimental consequences, for example: "Galaxies continue to exist after the expanding universe carries them over the horizon of observation from us." For my actual theory of meaning see the "Physics and Causality" subsequence of Highly Advanced Epistemology 101 For Beginners [? · GW].
That is: among the reasons why I am not more fascinated with the antecedents of my verificationist theory of meaning is that I explicitly reject a verificationist account of meaning.
Replies from: eric-raymond
↑ comment by Eric Raymond (eric-raymond) · 2021-04-08T03:55:30.257Z · LW(p) · GW(p)
"Galaxies continue to exist after the expanding universe carries them over the horizon of observation from us" trivially unpacks to "If we had methods to make observations outside our light cone, we would pick up the signatures that galaxies after the expanding universe has carried them over the horizon of observation from us defined by c."
You say "Any meaningful belief has a truth-condition". This is exactly Peirce's 1878 insight about the meaning of truth claims, expressed in slightly different language - after all, your "truth-condition" unpacks to a bundle of observables, does it not?
The standard term of art you are missing when you say "verificationist" is "predictivist".
I can grasp no way in which you are not a predictivist other than terminological quibbles, Eliezer. You can refute me by uttering a claim that you consider meaningful, e.g. having a "truth-condition", where the truth condition does not implicitly cash out as hypothetical-future observables - or, in your personal terminology, "anticipated experiences".
Amusingly, your "anticipated experiences" terminology is actually closer to the language of Peirce 1878 than the way I would normally express it, which is influenced by later philosophers in the predictivist line, notably Reichenbach.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-04-13T08:39:35.333Z · LW(p) · GW(p)
I reiterate the galaxy example; saying that you could counterfactually make an observation by violating physical law is not the same as saying that something's meaning cashes out to anticipated experiences. Consider the (exact) analogy between believing that galaxies exist after they go over the horizon, and that other quantum worlds go on existing after we decohere them away from us by observing ourselves being inside only one of them. Predictivism is exactly the sort of ground on which some people have tried to claim that MWI isn't meaningful, and they're correct in that predictivism renders MWI meaningless just as it renders the claims "galaxies go on existing after we can no longer see them" meaningless. To reply "If we had methods to make observations outside our quantum world, we could see the other quantum worlds" would be correctly rejected by them as an argument from within predictivism; it is an argument from outside predictivism, and presumes that correspondence theories of truth can be defined meaningfully by imagining an account from outside the universe of how the things that we've observed have their own causal processes generating those observations, such that having thus identified the causal processes through observation, we may speak of unobservable but fully identified variables with no observable-to-us consequences such as the continued existence of distant galaxies and other quantum worlds.
Replies from: eric-raymond
↑ comment by Eric Raymond (eric-raymond) · 2021-04-15T03:12:36.358Z · LW(p) · GW(p)
It seems to me that you've been taking your model of predictivism from people who need to read some Kripke. In Peirce's predictivism, to assert that a statement is meaningful is precisely to assert that you have a truth condition for it, but that doesn't mean you necessarily have the capability to test the condition.
Consider Russell's teapot. "A teapot orbits between Earth and Mars" is a truth claim that must unambiguously have a true or false value. There is a truth condition on it; if you build sufficiently powerful telescopes and perform a whole-sky survey you will find it. It would be entirely silly to claim that the claim is meaningless because the telescopes don't exist.
The claim "Galaxies continue to exist when they exit our light-cone" has exactly the same status. The fact that you happen to to believe the right sort of telescope not only does not exist but cannot exist is irrelevant - you could after all be mistaken in believing that sort of observation is impossible. I think it is quite likely you are mistaken, as nonlocal realism seems the most likely escape from the bind Bell's inequalities put us in.
MWI presents a subtler problem, not like Russell's Teapot, because we haven't the faintest idea what observing another quantum world would be like. In the case of the overly-distant galaxies, I can sketch a test condition for the claim that involves taking a superluminal jaunt 13 billion light-years thataway and checking all around me to see if the distribution of galaxies has a huge NOT THERE on the side away from Earth. I think a predictivist would be right to ask that you supply an analogous counterfactual before the claim "other quantum worlds exist" can be said to have a meaning.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-04-15T23:02:04.594Z · LW(p) · GW(p)
Just jaunt superquantumly to another quantum world instead of superluminally to an unobservable galaxy. What about these two physically impossible counterfactuals is less than perfectly isomorphic? Except for some mere ease of false-to-fact visualization inside a human imagination that finds it easier to track nonexistent imaginary Newtonian billiard balls than existent quantum clouds of amplitude, with the latter case, in reality, covering both unobservable galaxies distant in space and unobservable galaxies distant in phase space.
Replies from: eric-raymond
↑ comment by Eric Raymond (eric-raymond) · 2021-04-29T13:33:35.832Z · LW(p) · GW(p)
One big difference is that there are theoretical cracks in the lightspeed wall that don't have any go-to-another-quantum-world analog. The Alcubierre solution to the field equations is a thing, after all. More importantly for this discussion, we can construct thought experiments about superluminal travel that have truth conditions because we know what a starfield would look like from N lightyears thataway. Quantumporting doesn't have analogues of either of those things.
But that's kind of a distraction. The interesting question for this discussion is how, if at all, the two claims "galaxies receding outside our light cone continue to exist" and "Russell's teapot exists" are different. I think we agree that there is a predictivist account of "teapot".
You assert that a predictivist definition of meaning and truth value cannot sustain an account of the "galaxies" claim, and that predictivism is therefore insufficient. I, a predictivist, deny your assertion - you have smuggled in an assumption that predictivists somehow aren't allowed to assign meaning to counterfactuals that violate physical law, which I (a predictivist) am quite willing to do as long as hypothetically violating that physical law would not bar us from being able to cash out a truth claim in expected experiences.
I believe I am a predictivist who understands predictivism correctly and consistently. I believe you are a predictivist in practice who has failed to understand predictivism in theory.
How can we investigate, confirm, or refute these claims?
Replies from: ChristianKl
↑ comment by ChristianKl · 2021-04-29T15:16:50.963Z · LW(p) · GW(p)
One big difference is that there are theoretical cracks in the lightspeed wall that don't have any go-to-another-quantum-world analog.
In that case the conclusion would be that we don't know whether or not galaxies outside of the light cone exist, and whether or not they exist depends on whether the theoretical cracks actually allow faster-than-light travel.
Eliezer's position seems to be that they exist whether or not faster-than-light travel is possible.
Or are you saying that in a world where a person is certain about all physical laws that exist and there's no faster-than-light travel, the other galaxies don't exist for that person while they do exist for people with less knowledge about physics?
comment by jdp · 2021-04-01T04:14:45.352Z · LW(p) · GW(p)
As a fellow "back reader" of Yudkowsky, I have a handful of books to add to your recommendations:
Engines Of Creation by K. Eric Drexler
Great Mambo Chicken and The Transhuman Condition by Ed Regis
EY has cited both at one time or another [LW · GW] as the books that 'made him a transhumanist'. His early concept of future shock levels is probably based in no small part on the structure of these two books. The Sequences themselves borrow a ton from Drexler, and you could argue that the entire 'AI risk' vs. nanotech split from the extropians represented an argument about whether AI causes nanotech or nanotech causes AI.
I'd also like to recommend a few more books that postdate The Sequences but as works of history help fill in a lot of context:
Korzybski: A Biography by Bruce Kodish
A History Of Transhumanism by Elise Bohan
Both of these are thoroughly well researched works of history that help make it clearer where LessWrong 'came from' in terms of precursors. Kodish's biography in particular is interesting because Korzybski gets astonishingly close to stating the X-Risk thesis in Manhood of Humanity:
At present I am chiefly concerned to drive home the fact that it is the great disparity between the rapid progress of the natural and technological sciences on the one hand and the slow progress of the metaphysical, so-called social “sciences” on the other hand, that sooner or later so disturbs the equilibrium of human affairs as to result periodically in those social cataclysms which we call insurrections, revolutions and wars.
… And I would have him see clearly that, because the disparity which produces them increases as we pass from generation to generation—from term to term of our progressions—the “jumps” in question occur not only with increasing violence but with increasing frequency.
And in fact Korzybski's philosophy came directly out of the intellectual scene that, after the First World War, was dedicated to preventing a second one; in that sense there's a clear unbroken line from the first modern concerns about existential risk to Yudkowsky.
Replies from: eric-raymond
↑ comment by Eric Raymond (eric-raymond) · 2021-04-08T03:58:50.973Z · LW(p) · GW(p)
Great Mambo Chicken and Engines of Creation were in my reference list for a while, until I decided to cull the list for more direct relevance to systems of training for rationality. It was threatening to get unmanageably long otherwise.
I didn't know there was a biography of Korzybski. Thanks!
comment by Richard_Kennaway · 2021-03-30T16:02:54.823Z · LW(p) · GW(p)
Thank you for writing this. Having read both your writings and Eliezer's, and many of the books listed, the story is as I expected it to be, but it is good to see the history laid out.
comment by habryka (habryka4) · 2021-03-30T22:51:38.052Z · LW(p) · GW(p)
Mod note: I moved this to frontpage despite it being a bit similar to things we've historically left on people's personal blog. Usually there are three checks I run for deciding whether to put something on the frontpage:
- Is it not timeless?
- Is it trying to sell you something, or persuade you, or leverage a bunch of social connections to get you to do something? (e.g. eliciting donations usually falls in this category)
- Is it about community inside-baseball that makes it hard to participate in if you aren't part of the social network?
For this essay, I think the answer is "No" for basically all three (with the last one maybe being a bit true, but not really), so overall I decided to move this to the frontpage.
Replies from: eric-raymond
↑ comment by Eric Raymond (eric-raymond) · 2021-03-30T23:46:11.909Z · LW(p) · GW(p)
Heh. Come to think of it from that angle, "a bit true, but not really" would have been exactly my assessment if I were in your shoes. Thanks, I appreciate the nuanced judgment.
comment by romeostevensit · 2021-04-02T03:55:46.389Z · LW(p) · GW(p)
This was not just informationally useful but also just plain well-written and enjoyable. I think you succeeded in communicating some of the feel. Thank you.
comment by Chris_Leong · 2021-03-31T06:43:58.862Z · LW(p) · GW(p)
Thanks Eric for writing this post, I found it fascinating.
I imagine that there are a lot of lessons from General Semantics or analytic philosophy that might not have made it into the rational-sphere, so if you ever find time to share some of that with us, I imagine it would be well-received.
comment by lincolnquirk · 2021-03-30T19:18:37.167Z · LW(p) · GW(p)
This is great, strong upvoted!
Offtopic but I've really enjoyed your work over the years (CATB & Hacker's Dictionary from before I was a Less Wronger; Dancing With the Gods since). Glad to see you on LW, and thanks for the pointer to Heinlein's Gulf which I hadn't read, but was a solid read (though very clearly from the 1950s in its attitude - feels very outdated now).
Replies from: FeepingCreature, eric-raymond, Kaj_Sotala
↑ comment by FeepingCreature · 2021-03-31T16:11:02.492Z · LW(p) · GW(p)
As a teenager totally unattached to the larger software community (and open source, until years later), the New Hacker's Dictionary and the appended stories, along with Stoll's Cuckoo's Egg, were formative for me. I had absolutely no contact with this culture, but I knew I wanted in. Finding that it overlaps with LessWrong, which I found independently later on, honestly feels bizarre.
Now I'm wondering if it's less that hacker culture as presented in those stories was attractive to me in itself, than if there was a common factor shining through. Interesting people, reasonable people...!
Replies from: eric-raymond
↑ comment by Eric Raymond (eric-raymond) · 2021-03-31T19:03:59.175Z · LW(p) · GW(p)
Probably, but there is something else more subtle.
Both the cultures you're pointing at are, essentially, engines to support achieving right mindset. It's not quite the same right mindset, but in either case you have to detach from "normal" thinking and its unquestioned assumptions in order to be efficient at the task around which the culture is focused.
Thus, in both cultures there's a kind of implicit mysticism. If you recoil from that word because you associate it with anti-rationality I can't really blame you, but I ask you to consider the idea of mysticism as "techniques for consciousness alteration" detached from any particular beliefs about the universe.
This is why both cultures have a use for Zen. It is a very well developed school of mystical technique whose connection to religious belief has become tenuous. You can take the Buddhism out of it and the rest is still coherent and interesting.
Perhaps this implicit mysticism is part of the draw for you. It is for me.
↑ comment by Eric Raymond (eric-raymond) · 2021-03-30T19:43:27.858Z · LW(p) · GW(p)
You have an outside view of my writing, so I'm curious. On a scale of 0 = "But of course" to 5 = "Wow, that was out of left field", how surprising did you find it that I would write this essay?
If you can find anything more specific to say along these lines (why it's surprising/unsurprising) I would find that interesting.
Replies from: Kaj_Sotala, Alexei, dominicq, madasario, Zian, philh, lincolnquirk, mruwnik
↑ comment by Kaj_Sotala · 2021-03-30T19:48:56.252Z · LW(p) · GW(p)
I was slightly surprised, mostly because I would have expected that if you'd known about LW for a while, you'd end up contributing either early or not at all. Curious what caused it to happen in 2021 in particular.
Replies from: eric-raymond
↑ comment by Eric Raymond (eric-raymond) · 2021-03-30T20:25:49.283Z · LW(p) · GW(p)
I don't really have an interesting answer, I'm afraid. Busy life, lots of other things to pay attention to, never got around to it before.
Now that I've got the idea, I may re-post some rationality-adjacent stuff from my personal blog here so the LW crowd can know it exists.
Replies from: Benito, David Hornbein
↑ comment by Ben Pace (Benito) · 2021-03-30T22:55:52.853Z · LW(p) · GW(p)
The way I have set this up for writers in the past has been to set up crossposting from an RSS feed under a tag (e.g. crossposting all posts tagged 'lesswrong').
I spent a minute trying and failed to figure out how to make an RSS feed from your blog under a single category. But if you have such an RSS feed, and you make a category like 'lesswrong', then I'll set up a simple crosspost and hopefully save you a little time in expectation. This will work for old posts as well as new ones if you add the category to them.
Replies from: eric-raymond, localdeity
↑ comment by Eric Raymond (eric-raymond) · 2021-03-30T23:49:11.015Z · LW(p) · GW(p)
There's a technical problem. My blog is currently frozen due to a stuck database server; I'm trying to rehost it. But I agree to your plan in principle and will discuss it with you when the blog is back up.
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2021-03-31T00:33:09.545Z · LW(p) · GW(p)
Sounds good.
↑ comment by localdeity · 2021-03-31T05:50:07.161Z · LW(p) · GW(p)
I recently learned of a free (donation-funded) service, siftrss.com, wherein you can take an RSS feed and do text-based filtering on any of its fields to produce a new RSS feed. (I've made a few feeds with it and it seems to work well.) I suspect you could filter based on the "category" field.
↑ comment by David Hornbein · 2021-03-30T22:36:01.399Z · LW(p) · GW(p)
Please do.
↑ comment by dominicq · 2021-04-18T08:59:10.208Z · LW(p) · GW(p)
For me, probably 2. I read "How to become a hacker" several years ago and it shaped many of my career-related choices. The writing/reasoning style is very similar to the ratsphere, so I was not too surprised that I would also find you here.
↑ comment by madasario · 2021-04-08T22:34:12.001Z · LW(p) · GW(p)
0 or 1. I saw this post and thought "finally!" I've been a fan since the early '90s. I'm most surprised that it took you this long, and excited that you finally got around to it. :)
The ratsphere is ripe for some of the same treatment you gave the fossphere back in the day. (It's under attack by forces of darkness; its adherents tend to be timid and poorly funded while its attackers are loud, charismatic, and throw a lot of money around; it revolves around a few centers of gravity ("projects") that are fundamental building blocks of the future - the Big Problems; etc.)
I haven't thought this through a ton, but if I squint a bit I can see Jaynes &etc filling the role of, like, Knuth and K&R and etc - hard engineering; and The Sequences/LW/SSC filling the role of, say, GNU and Lions and etc - a way for the masses to participate and contribute and absorb knowledge and gel into a tribe and a movement. I paint that vague hand-wavy picture for you, hoping you'll understand when I say that this post feels like it should be expanded and become TAOUP but for the ratsphere.
↑ comment by Zian · 2021-04-05T19:33:12.476Z · LW(p) · GW(p)
3
My knowledge before reading the article and comments could be summarized as :
- These are some really great articles by ESR. I wonder why no one had taken them super seriously yet...
- somewhat of an outsider perspective as FeepingCreature described
- I wonder why some people have such strong opinions about this person
↑ comment by gjm · 2021-04-05T23:36:43.779Z · LW(p) · GW(p)
I think the main reason some people have strong opinions about ESR is that he has some strong opinions, some of which are highly controversial, and he states some of those controversial opinions openly. In particular, much in US politics is super-divisive, and in five minutes on Eric's blog you can readily find five things that some (otherwise?) reasonable people will get very angry about.
... I thought I should actually test that, so I went over to have a look. His blog has been a bit less political lately than at some other times. But in exactly five minutes I found the following assertions (all the following are my paraphrases; I have no intent to distort but error is always possible, especially when reading quickly, so if you are minded to be angry at Eric you should first go and check what he actually wrote): the US has a problem with Communist oppression, Kyle Rittenhouse is a hero, white people at BLM protests should be assumed to be communists and shot at will [EDITED to add: as habryka points out in a reply, this paraphrase is potentially misleading; more below], an armed storming of the Michigan State House was an appropriate response to stay-at-home orders. (That's April 2020, not the thing a few months later where they tried to kidnap the Governor.) Plus this ceremony for gun users, which doesn't make any particular assertions but I bet some people will find enraging.
For the avoidance of doubt, I am not saying anything about whether it is right or reasonable to get angry about any or all of those things. Only that, given their existence, it should not be a surprise that some people have strong feelings. Also for the avoidance of doubt, I don't think arguments about Eric's opinions about guns or communism or whatever have any place here on LW and I hope everyone will completely ignore those opinions when reading the present article.
(There are probably also people who have strong opinions about his strong opinions on, say, the things sometimes called "free software" and "open source software", or about the quality of the software he's written, or other things in that general vicinity. But I don't think those are what people who get angry about ESR mostly get angry about.)
[EDITED to add:] Well, as I warned, "error is always possible", and indeed one of my paraphrases makes what Eric wrote sound more inflammatory than what he actually wrote was. My apologies for that. Specifically: I said he said "white people at BLM protests should be assumed to be communists and shot at will". He actually said specifically "rioters". As it happens, someone in the ensuing discussion asked him to clarify the distinction, and this is what he said: "A protester holds up a sign and yells a slogan. A rioter intends to commit crime against persons or property, and expresses that intent in behavior; e.g. persons equipped with incendiaries or street weapons are rioters, not protesters." So, not literally all white people at BLM protests, but any white person who "intends to commit crime against persons or property", which may be judged e.g. by what the person is equipped with rather than by actions already committed.
Replies from: habryka4
↑ comment by habryka (habryka4) · 2021-04-06T03:54:52.953Z · LW(p) · GW(p)
Woah, at least one of those summaries seems really quite inaccurate. Bad enough that like, I feel like I should step in as a moderator and be like "wait, this doesn't seem OK".
I am not very familiar with ESR's opinions, but your summary of "white people at BLM protests should be assumed to be communists and shot at will" is really misrepresenting the thing he actually said. What he actually said was "White rioters, on the other hand, will be presumed to be Antifa Communists attempting to manipulate this tragedy for Communist political ends;", with the key difference being "white rioters" instead of "white people". While there is still plenty to criticize in that sentence, this seems like a really crucial distinction that makes that sentence drastically less bad.
Topics like this tend to get really politicized and emotional, which I think means it's reasonable to apply some extra scrutiny and care to not misrepresent what other people said, and generally err on the side of quoting verbatim (ideally while giving substantial additional context).
Replies from: gjm↑ comment by gjm · 2021-04-06T13:13:03.413Z · LW(p) · GW(p)
Yeah, "rioters" would have been more accurate than "people", though I don't know exactly what Eric considers the boundary between protesting and rioting. My apologies. As I said, mistakes get made when doing things quickly, and doing it quickly was much of the point.
[EDITED to add:] I have edited my original comment to point out the mistake; I also found a comment from Eric on the original blogpost that clarifies where he draws the line between "rioters" and mere protestors, and have quoted that there too.
Replies from: gjm, ChristianKl↑ comment by gjm · 2021-04-06T16:46:46.322Z · LW(p) · GW(p)
Looking at voting patterns in this subthread, I get the impression that readers generally think I'm attempting to mount some sort of attack on Eric. Obviously I can't prove anything about my intentions here, but I promise that that was not in any way my purpose; I was answering Zian's puzzlement about how ESR could possibly be controversial by pointing out some controversial things. I don't think Eric would disagree with my identification of those things as things some people might get angry about.
If my purpose had been an unscrupulous political attack, I wouldn't have provided links to let everyone check whether my brief summaries were accurate, and I wouldn't have gone out of my way to point out that I might have made errors and explain why they were particularly likely in this instance.
(I don't object to being downvoted; if you think something I write is of low quality then you should downvote it. But it looks to me as if some wrong assumptions may be being made about my motives here.)
[EDITED to add:] Things look more "normal" now; dunno whether that means that the earlier state was some sort of statistical anomaly, or that some people read the above and agreed, or what. I mention this just in case anyone's reading this and wonders why in this comment I'm expressing concern about something that's not there :-).
↑ comment by ChristianKl · 2021-04-06T14:24:14.142Z · LW(p) · GW(p)
I would expect the bar to be pretty clear and, as habryka said, "intent to commit crimes against persons or property". I would expect Eric to set the bar somewhere he thinks the law allowing private citizens to use force to prevent crimes would protect him.
Replies from: gjm↑ comment by gjm · 2021-04-06T14:37:22.329Z · LW(p) · GW(p)
As you'll see from the edit to my original comment, I found something Eric said in the discussion on his blog that drew a fairly explicit boundary between rioters and mere protestors. My impression is that if Eric actually acts strictly according to the principles stated there, the law will not protect him and he will end up in jail (thinking that someone has intent to commit crimes is not generally sufficient justification in law for shooting them); several commenters on his blog expressed the same concern.
I worry that we may be getting into arguing about Eric's opinions themselves, rather than merely answering the question "why do some people have such strong opinions about him", and I think that's not a useful topic for discussion here. Of course that's mostly my fault for not getting my summaries perfectly accurate, for which once again I apologize.
↑ comment by philh · 2021-04-02T11:25:57.656Z · LW(p) · GW(p)
For me, like 1 maybe 2? (That you would write it; it's a little more surprising that you did.) I knew you'd read at least some of the sequences because I think I first found them through you, and I think you've called yourself a "fellow traveler". Oh, and I remember you liked HPMOR. But I didn't know if you were particularly aware of the community here.
↑ comment by lincolnquirk · 2021-03-30T22:00:34.685Z · LW(p) · GW(p)
Hmm, maybe a 2. I didn’t know you had read the Sequences, but it seems like the sort of thing that would appeal to you based on the writing in Dancing, etc.
↑ comment by mruwnik · 2022-04-18T11:03:00.489Z · LW(p) · GW(p)
For me the main surprise was to think "Eric Raymond. Huh. Just like the CatB author. Wait - really?! Here?", after which came an "of course! Now it all makes sense!".
I'd previously noticed the similarities between the hacker ethos and rationality, to a large extent because they were what attracted me in the first place. The GS part was new info for me, but both the SF and Zen influences are obvious (though it's nice to see it so explicitly explained). It feels like, in a certain sense, the hacker ethos is a special case of rationality. Hackers seemed from the outside to be these mystical creatures that used logic and intuition to get closer to a better understanding of computer systems in order to get them to do interesting things, with a focus on clarity, elegance, practicality, etc. My understanding of a beisutsukai is someone who does just that, but in all matters, not just computery things. So rationality is a natural extension of being a hacker. Ditto with the mystical aspects which you mentioned earlier. I get the impression that both your writing and the Sequences have the same feel to them, for lack of a better expression.
p.s. - I'd like to thank you for the hacker howto. The "formative" in the earlier comment is spot on. Apart from the general hacker stuff, I also started to learn LISP. For which I'm eternally grateful.
↑ comment by Kaj_Sotala · 2021-03-30T19:36:58.845Z · LW(p) · GW(p)
I also quite liked both the Jargon File (which I found before or around the same time as LW) and Dancing With the Gods (which I found through LW [LW · GW]).
comment by Ben Pace (Benito) · 2021-04-08T03:51:19.430Z · LW(p) · GW(p)
I've curated this essay[1].
Getting a sense of one's own history can be really great for perspective. The primary reason I've curated this is that the post really helped give me perspective on the history of this intellectual community, and I imagine it will do the same for many other LWers.
I wouldn't have been able to split it into "General Semantics, analytic philosophy, science fiction, and Zen Buddhism" as directly as you did, nor would I know which details to pick out. (I would've been able to talk about sci-fi, but I wouldn't quite know how to relate the rest of it.)
That said, while I might be wrong, I do think there's one strand missing here, which is something like "lawful reasoning in physics and mathematics". I think E. T. Jaynes's mastery of probability theory drives a lot of Eliezer's approach to rationality and AI, as does Feynman's first-principles approach to reasoning, and neither of those authors is discussed except in the books at the end. (I guess they were more part of Eliezer's path than yours.)
(I would be interested in people writing posts that address their historical relevance in a similar way to how Eric has written about other schools of thought here.)
The essay is very readable and somehow isn't 10x the length filled with extraneous detail, which is a common failure mode with histories. I think that's because you've written this from a personal perspective, which helps a lot. You know which details mattered to you because you lived through it, and I really appreciated reading this history from your perspective. I had never even heard of the book "Gulf", and now I know I'm going to read it. (The books list at the end is also great.)
Overall I'm delighted to have read this essay, thank you for writing it.
--
[1] Curated posts are emailed to the 3000-4000 readers who are subscribed to the twice-weekly curated posts.
Replies from: eric-raymond↑ comment by Eric Raymond (eric-raymond) · 2021-04-08T04:12:49.951Z · LW(p) · GW(p)
Eliezer was more influenced by probability theory, I by analytic philosophy, yes. These variations are to be expected. I'm reading Jaynes now and finding him quite wonderful. I was a mathematician at one time, so that book is almost comfort food for me - part of the fun is running across old friends expressed in his slightly eccentric language.
I already had a pretty firm grasp on Feynman's "first-principles approach to reasoning" by the time I read his autobiographical stuff. So I enjoyed the books a lot, but more along the lines of "Great physicist and I think alike! Cool!" than being influenced by him. If I'd been able to read them 15 years earlier I probably would have been influenced.
One of the reasons I chose a personal, heavily narrativized mode to write the essay in was exactly so I could use that to organize what would otherwise have been a dry and forbidding mass of detail. Glad to know that worked - and, from what you don't say, that I appear to have avoided the common "it's all about my feelings" failure mode of such writing.
comment by Dale Udall · 2021-03-30T17:27:05.228Z · LW(p) · GW(p)
> If nothing else, I hope this essay will leave you feeling grateful that you no longer have to do a decades-long bootstrapping process the way Eliezer and Nancy and I and others like us had to in the before times. I doubt any of us are sorry we put in the effort, but being able to shortcut a lot of it is a good thing.
Thank you for introducing us to those who built this basilica. Just in looking up General Semantics, I've learned more about the culture wars that preceded the ones we now fight, and I learned who a few of the generals were on both sides.
comment by Vaniver · 2021-03-30T16:00:15.251Z · LW(p) · GW(p)
> If you pursue this sort of thing for more than three thousand years, as Buddhists have been doing, you're likely to find some techniques that actually do help you pay better attention to reality - even if it is difficult to dig them out of the surrounding religious encrustations afterwards.
Interestingly, this is how I often feel about western philosophy; my early experience of philosophy classes and books was very much about 'who said what', and a sort of intellectual territorialism that seemed disconnected from any ultrahumanist project to think better. [Thinking about it now, it feels like the difference between sports commentary / watching tape and playing sports.]
But, of course, philosophy actually contains a bunch of insights about how to pay better attention to reality! And yet, even lukeprog when talking about Less Wrong and Mainstream Philosophy [LW · GW] doesn't argue that Eliezer and others should read more Quinean naturalists (in the same way that in this post, you don't argue that we should read more Korzybski). One of the things that makes me excited about things like this lecture club [? · GW] is that I think it succeeds somewhat at the 'digging out insights' work.
Replies from: eric-raymond, romeostevensit↑ comment by Eric Raymond (eric-raymond) · 2021-03-30T18:35:28.777Z · LW(p) · GW(p)
Ironically, I disagree a bit with lukeprog here - one of the few flaws I think I detect in the Sequences is due to Eliezer not having read enough philosophy. He does arrive at a predictivist theory of confirmation eventually, but it takes more effort and gear-grinding than it would have if he had understood Peirce's 1878 demonstration and expressed it in clearer language.
Ah well. It's a minor flaw.
↑ comment by romeostevensit · 2021-04-02T03:58:28.578Z · LW(p) · GW(p)
I really wish there were a techniques-focused history of European philosophy. I suspect anyone capable of taking a decent shot at one is busy doing more important things.
comment by peak.singularity · 2021-04-11T11:19:53.109Z · LW(p) · GW(p)
Wow, this was quite a surprise, seeing your post here and finding out that you've been reading Less Wrong for all these years!
(On the other hand, probably it isn't that surprising: an English speaker with similar intellectual tendencies and a Silicon Valley tropism would probably have found out about it quickly; my case is not very typical.)
I hope that you are well?
Replies from: eric-raymond↑ comment by Eric Raymond (eric-raymond) · 2021-04-15T10:36:36.032Z · LW(p) · GW(p)
To be fair, I haven't followed Less Wrong all that closely over the years. It's more that I've known some of the key people for a while, notably Eliezer himself and Scott Alexander.
comment by Ben Pace (Benito) · 2021-04-08T03:11:29.921Z · LW(p) · GW(p)
(Here are some of my thoughts, reading through.)
Sometimes I would get a flash of light through the fog, or at least a sense that there were other people on the same lonely quest. A bit of that sense sometimes drifted over USENET, an early precursor of today's Internet fora.
It's strange, I don't feel the fog much in my life. I wonder if this is a problem. It doesn't seem like I should feel like "I and everyone around me basically know what's going on".
I can imagine certain people for whom talking to them would feel like a flash of light in the fog. I probably want to pursue talking to them.
> my paternal grandfather gave me a book called "People In Quandaries".
That's an awesome name for a book. I want to write a book of "People In Quandaries" and how to get out of them. Just lots of short stories of people in various parts of life and civilization, and showing how better rationality can save them. I reckon that'd be really fun.
> If nothing else, I hope this essay will leave you feeling grateful that you no longer have to do a decades-long bootstrapping process the way Eliezer and Nancy and I and others like us had to in the before times.
I remember being about 14, and walking home from school, with so many deep and philosophical questions about what the world was and how I related to it. That year I read The Sequences. I remember taking the same walk a year later, and realizing that I felt I had a pretty coherent worldview and had answered a lot of my fundamental questions. I had then a sense it was time to 'get to work'.
I am deeply grateful that I got to read this then, and didn't have to figure it out myself.
comment by MSRayne · 2021-04-08T21:13:30.742Z · LW(p) · GW(p)
I'm only 23 - probably younger than most people here - but I imagine my father must have read many of the same books, as he raised me to think in a way which I now understand to be very much like Yudkowsky's version of rationality. As with what you quoted from Nancy, it all seemed really obvious to me when I read the Sequences, except for the mathematical components (Bayesianism still confuses me, but I'll get there eventually).
The main way I differ here, though, is that I have had lots of "mystical experiences", probably due to schizotypal or dissociative tendencies when I was a teenager, and so my perspective on the world is not quite that of a typical atheist. I don't know of any other LessWrongers with roots in the occult and New Age worlds who retain thought patterns from those perspectives, but rationality-ized, though.
Example: I think religion has at least one extremely important function other than building community, namely promoting the experience of transcendence (at least in some people with brains shaped in such a way as to be able to experience that - note that I'm not claiming this to involve actual "supernatural" phenomena, only psychological ones). This experience matters a lot; I've had it myself many times. But explaining that would require an entire essay, and I can't guarantee I'd be able to express it clearly, as it is a fundamentally experiential thing rather than an easily verbalized one, sort of like Kensho.
↑ comment by gjm · 2021-04-09T12:54:29.999Z · LW(p) · GW(p)
In this you differ from the average rationalist but maybe not so much from Eric; see e.g. his essay "Dancing with the Gods".
Replies from: MSRayne↑ comment by MSRayne · 2021-04-09T21:40:28.014Z · LW(p) · GW(p)
Yes, yes, yes! This is it, this is exactly it!
> Rituals are programs written in the symbolic language of the unconscious mind. Religions are program libraries that share critical subroutines. And the Gods represent subsystems in the wetware being programmed. All humans have potential access to pretty much the same major gods because our wetware design is 99% shared.
I've come to the same conclusion in the past. Meme theory plus multiagent models of mind, plus the shared structure of the human unconscious (though another layer of what is shared, which is often overlooked, is mountains of cultural context), equals spirits as AIs on a distributed operating system run with human brains as the substrate. Failing to recognize their existence is a mistake. Being enslaved to the fragmented, defiled forms of them which arise when direct theophanic contact is lost (such as faith based religions are ruled by) is another mistake. The middle way is the best. I'm glad to know I'm not the only person here who strives both for rationalism and for gnosis.
↑ comment by peak.singularity · 2021-04-11T11:14:40.247Z · LW(p) · GW(p)
Heh, this reminds me of last week's jab from John Michael Greer:
https://www.ecosophia.net/a-sense-of-deja-vu/
(And if it seems paradoxical to you that a Druid who prays to pagan deities and practices ceremonial magic should be saying [that the universe doesn't care about your feelings] in response to the behavior of people who by and large consider themselves practical-minded rationalists, trust me, the irony has not escaped my attention either. Thank you, and we now return to this week’s regularly scheduled post.)
As for me, I was really into transhumanism in the noughties:
mostly I'd say that the interest came from Anglophone science fiction (Foundation, Accelerando, Diamond Age...), but also from Soviet science fiction -
it's interesting to look at the parallels between that "Homo Novis", the official "New Soviet Man", its representation in the early works of the Strugatsky brothers, and then their later slow slide from progressive utopia to progressive dystopia starting with the novels about their "Institute of experimental history" - which I now realize parallels my own intellectual path -
circa 2010 I switched from transhumanism to "peak oilism" - hence this nickname:
Energy Bulletin (now Resilience.org), Peak Oil Barrel, Archdruid Report (now Ecosophia), Tom Murphy's Do the Math, Cassandra's Legacy...
So I completely missed Less Wrong at its peak - I only discovered it (and SSC) in the mid-2010s - though since I was animated by a similar quest, in parallel I took some (current, skeptical) Zetetic classes.
Also, despite liking the mandatory philosophy classes in high school, I was so put off by having to study Condillac's Le Traité des animaux in higher education that my interest in philosophy pretty much disappeared... and only started growing back through the epistemology of physics.
And, having finally decided that my grasp of the English language was good enough (and having been dismissed enough times for my amateurish knowledge of philosophy), I've recently been reading Russell's History of Western Philosophy - though I kind of hit a hard wall with Spinoza's and Leibniz's metaphysics...
In parallel, through Greer I've stopped completely dismissing occultism (though astrology is still a hard pass), but I haven't really kept following once he started getting into the very specific details of the USA's history of occultism - it's just too foreign to hold my interest.
(Thank you for reading through my ramblings.)
comment by Adam Zerner (adamzerner) · 2021-03-31T06:41:32.203Z · LW(p) · GW(p)
Thanks for making that connection to Zen Buddhism. I never thought of it as a central theme of The Sequences before this.
I'm still not sure if I'm convinced that it actually is a central theme. In the preface [? · GW] to Rationality From AI to Zombies, Eliezer writes:
> It ties in to the first-largest mistake in my writing, which was that I didn’t realize that the big problem in learning this valuable way of thinking was figuring out how to practice it, not knowing the theory. I didn’t realize that part was the priority; and regarding this I can only say “Oops” and “Duh.”
The Zen Buddhism stuff you're referring to seems like it fits into practice rather than theory, and as Eliezer says, practice isn't emphasized much. More specifically, The Ritual [? · GW] seems like a good example of a post that paints a picture of how you could apply Zen Buddhist ideas to enhance your ability to practice rationality, and at least in my recollection, posts like that weren't very frequent.
It's not only Eliezer's writing; I don't see these ideas talked about much on LessWrong by other users either, both historically and recently. It seems like a very promising concept, though, so I'd like to see more posts about it. I agree that learning how to actually practice the ideas is crucial.
Replies from: eric-raymond↑ comment by Eric Raymond (eric-raymond) · 2021-03-31T13:39:07.704Z · LW(p) · GW(p)
I actually wouldn't call Zen a "central theme". More "a recurring rhetorical device". It's not Zen Buddhist content that the Sequences use, it's the emulation of Zen rhetoric as a device to subtly shift the reader's mental stance.
Replies from: gilch, adamzerner↑ comment by gilch · 2021-03-31T17:46:37.151Z · LW(p) · GW(p)
Not being an expert in Zen, I'm not sure what "Zen rhetoric" means. Could you provide examples quoted from the Sequences of what you are talking about and what makes it "Zen"?
Replies from: eric-raymond, RobbBB↑ comment by Eric Raymond (eric-raymond) · 2021-03-31T18:39:51.667Z · LW(p) · GW(p)
I think a collection of examples and analysis would be a post in itself.
But I can give you one suggestive example from Twelve Virtues itself: "If you speak overmuch of the Way you will not attain it."
It is a Zen idea that the essence of enlightenment cannot be discovered by talking about enlightenment; rather one must put one's mind in the state where enlightenment is. Moreover, talk and chatter - even about Zen itself - drives that state away.
Eliezer is trying to say here that the center of rationalist practice is not in what you know about rationality or how much cleverness you can demonstrate to others, but in achieving a mental stance that processes evidence correctly and efficiently.
He is borrowing the rhetoric of Zen to say that because, as with Zen, the center of our Way is found in silence and non-attachment. The Way of Zen wants you to lose your attachment to desires; the Way of rationality wants you to lose your attachment to beliefs.
↑ comment by Rob Bensinger (RobbBB) · 2021-03-31T18:16:36.663Z · LW(p) · GW(p)
- Twelve Virtues of Rationality [LW · GW]
- Two Cult Koans [LW · GW]
- Something to Protect [LW · GW]
- Beisutsukai [? · GW] stories
↑ comment by Adam Zerner (adamzerner) · 2021-03-31T16:21:38.310Z · LW(p) · GW(p)
I see. Thanks for clarifying.
comment by A Ray (alex-ray) · 2023-01-15T00:00:54.876Z · LW(p) · GW(p)
This post was personally meaningful to me, and I'll try to cover that in my review while still analyzing it in the context of lesswrong articles.
I don't have much to add about the 'history of rationality' or the description of interactions of specific people.
Most of my value from this post wasn't directly from the content, but from how the content connected to things outside of rationality and LessWrong. So, basically, I loved the citations.
LessWrong is very dense in self-links and self-citations, and to a lesser degree it also has a good number of links to other websites.
However, it has a dearth of connections to things that aren't blog posts -- books, essays from before the internet, etc., especially older writings.
I found this post's citation section to be a treasure trove of things I might not have found otherwise.
I have picked up and skimmed/started at least a dozen of the books on the list.
I still come back to this list sometimes when I'm looking for older books to read.
I really want more things like this on lesswrong.
comment by Ruby · 2023-01-07T00:59:51.210Z · LW(p) · GW(p)
I like this post for reinforcing a point that I consider important about intellectual progress, and for pushing against a failure mode of the Sequences-style rationalists.
As far as I can tell, intellectual progress is made bit by bit, with later work building on earlier work. Francis Bacon gets credit for a landmark evolution of the scientific method, but it didn't spring from nowhere; he was building on ideas that had built on ideas, etc.
This says the same is true for our flavor of rationality. It's built on many things, and not just probability theory.
The failure mode I think this helps with is thinking that "we are the only sane people". There is much insanity and we are saner than most, but we are descended from people who are not us, and we probably have relatives we don't know. I think that's worth remembering; thanks to this post for the reminder.
comment by koroviev · 2021-04-03T02:28:05.024Z · LW(p) · GW(p)
Fascinating and enjoyable read. I put a few of the recommended books onto my to-read list. Thank you.
In your journey, I wonder if you've come across Buckminster Fuller and, if yes, what's your opinion on his ideas?
I ask this because I found Fuller's works at the same time I found Korzybski's. And while vastly different in theme and scope, they seemed to be underpinned by the same spirit--positive, human-centered, problem-solving--one I would label as "humanism."
Replies from: eric-raymond↑ comment by Eric Raymond (eric-raymond) · 2021-04-08T04:01:50.519Z · LW(p) · GW(p)
I have run across Bucky Fuller, of course. Often brilliant, occasionally cranky. Geodesic domes turned out to suck because you can't seal all those joints well enough. We could use more like him.
comment by DavidFriedman · 2023-03-29T21:44:32.594Z · LW(p) · GW(p)
I also was a rationalist before Eliezer, but of Eric's four sources of information the only one I shared is science fiction. I had the advantage of growing up in a family where the relevance of reason to the world was taken for granted.
At one point, long after I had become an adult, my parents asked me whether it would have been better if they had brought me up in their parents' (Jewish) religion. I replied that I preferred having been brought up in the one they believed in — 18th century rationalism, the ideology of Adam Smith and David Hume.
comment by Mitchell_Porter · 2021-04-07T04:11:39.231Z · LW(p) · GW(p)
The real question is, is there a historical precursor to /r/SneerClub? Perhaps an SF zine run by someone who didn't like Korzybski and Van Vogt...
Replies from: degsy↑ comment by degsy · 2021-04-13T09:09:36.132Z · LW(p) · GW(p)
A lot of the New Wave stuff feels like a SneerClub sensibility w.r.t. Golden Age SF.
Replies from: peak.singularity↑ comment by peak.singularity · 2021-04-13T09:53:50.508Z · LW(p) · GW(p)
Well, Pulp & Golden Age sci-fi was "discredited" by us actually landing a probe on Venus and realizing that it was not a likely place to find a lush jungle...
https://www.ecosophia.net/the-worlds-that-never-were/
Meanwhile, SneerClub is a bit too contemporary with LessWrong for that parallel to work?
The above author has followed through on his project of resurrecting classic science fiction; "Vintage Worlds" is already on its third volume:
https://www.solarsystemheritage.com/anthology-project-2017.html
comment by George3d6 · 2021-03-31T09:40:49.644Z · LW(p) · GW(p)
I consider myself a skeptical empiricist, to the extent that I can be, for it's a difficult view to hold.
I don't think this community or Eliezer's ideas are so; they are fundamentally rationalist:
- Timeless decision theory
- Assumptions about experimental perfection that lead to EZ's incoherent rambling on physics
- Everything that's part of the AI doomsday cult views
These are highly rationalist things, I suspect stemming from a pre-school "intelligence is useful" prior that most people have failed to introspect on, and that is pretty correct unless taken to an extreme. But it's reasoning from that uncommon a prior (after all, empiricists also start from something; it's just that their starting point is one that's commonly shared by all or most humans, e.g. obvious observed features), and others like it, that leads to the Sequences and to most discussion on LW.
Which is not to say that it's bad; I've personally come to believe it's as OK as any religion, but it shouldn't be confused with empiricism or empiricist methods.
Replies from: peak.singularity↑ comment by peak.singularity · 2021-04-11T11:22:20.034Z · LW(p) · GW(p)
Well, when reading this:
https://plato.stanford.edu/entries/rationalism-empiricism/
Less Wrong & SSC => ACX clearly seem to me to be much closer to the empiricist side than the rationalist one?