No Safe Defense, Not Even Science

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-05-18T05:19:24.000Z · LW · GW · Legacy · 71 comments

I don't ask my friends about their childhoods—I lack social curiosity—and so I don't know how much of a trend this really is:

Of the people I know who are reaching upward as rationalists, who volunteer information about their childhoods, there is a surprising tendency to hear things like:  "My family joined a cult and I had to break out," or "One of my parents was clinically insane and I had to learn to filter out reality from their madness."

My own experience with growing up in an Orthodox Jewish family seems tame by comparison... but it accomplished the same outcome:  It broke my core emotional trust in the sanity of the people around me.

Until this core emotional trust is broken, you don't start growing as a rationalist.  I have trouble putting into words why this is so.  Maybe any unusual skills you acquire—anything that makes you unusually rational—requires you to zig when other people zag.  Maybe that's just too scary, if the world still seems like a sane place unto you.

Or maybe you don't bother putting in the hard work to be extra bonus sane, if normality doesn't scare the hell out of you.

I know that many aspiring rationalists seem to run into roadblocks around things like cryonics or many-worlds.  Not that they don't see the logic; they see the logic and wonder, "Can this really be true, when it seems so obvious now, and yet none of the people around me believe it?"

Yes.  Welcome to the Earth where ethanol is made from corn and environmentalists oppose nuclear power.  I'm sorry.

(See also:  Cultish Countercultishness.  If you end up in the frame of mind of nervously seeking reassurance, this is never a good thing—even if it's because you're about to believe something that sounds logical but could cause other people to look at you funny.)

People who've had their trust broken in the sanity of the people around them, seem to be able to evaluate strange ideas on their merits, without feeling nervous about their strangeness.  The glue that binds them to their current place has dissolved, and they can walk in some direction, hopefully forward.

Lonely dissent, I called it.  True dissent doesn't feel like going to school wearing black; it feels like going to school wearing a clown suit.

That's what it takes to be the lone voice who says, "If you really think you know who's going to win the election, why aren't you picking up the free money on the Intrade prediction market?" while all the people around you are thinking, "It is good to be an individual and form your own opinions, the shoe commercials told me so."

Maybe in some other world, some alternate Everett branch with a saner human population, things would be different... but in this world, I've never seen anyone begin to grow as a rationalist until they make a deep emotional break with the wisdom of their pack.

Maybe in another world, things would be different.  And maybe not.  I'm not sure that human beings realistically can trust and think at the same time.

Once upon a time, there was something I trusted.

Eliezer18 trusted Science.

Eliezer18 dutifully acknowledged that the social process of science was flawed.  Eliezer18 dutifully acknowledged that academia was slow, and misallocated resources, and played favorites, and mistreated its precious heretics.

That's the convenient thing about acknowledging flaws in people who failed to live up to your ideal; you don't have to question the ideal itself.

But who could possibly be foolish enough to question, "The experimental method shall decide which hypothesis wins"?

Part of what fooled Eliezer18 was a general problem he had, with an aversion to ideas that resembled things idiots had said.  Eliezer18 had seen plenty of people questioning the ideals of Science Itself, and without exception they were all on the Dark Side.  People who questioned the ideal of Science were invariably trying to sell you snake oil, or trying to safeguard their favorite form of stupidity from criticism, or trying to disguise their personal resignation as a Deeply Wise acceptance of futility.

If there'd been any other ideal that was a few centuries old, the young Eliezer would have looked at it and said, "I wonder if this is really right, and whether there's a way to do better."  But not the ideal of Science.  Science was the master idea, the idea that let you change ideas.  You could question it, but you were meant to question it and then accept it, not actually say, "Wait!  This is wrong!"

Thus, when once upon a time I came up with a stupid idea, I thought I was behaving virtuously if I made sure there was a Novel Prediction, and professed that I wished to test my idea experimentally.  I thought I had done everything I was obliged to do.

So I thought I was safe—not safe from any particular external threat, but safe on some deeper level, like a child who trusts their parent and has obeyed all the parent's rules.

I'd long since been broken of trust in the sanity of my family or my teachers at school.  And the other children weren't intelligent enough to compete with the conversations I could have with books.  But I trusted the books, you see.  I trusted that if I did what Richard Feynman told me to do, I would be safe.  I never thought those words aloud, but it was how I felt.

When Eliezer23 realized exactly how stupid the stupid theory had been—and that Traditional Rationality had not saved him from it—and that Science would have been perfectly okay with his wasting ten years testing the stupid idea, so long as afterward he admitted it was wrong...

...well, I'm not going to say it was a huge emotional convulsion.  I don't really go in for that kind of drama.  It simply became obvious that I'd been stupid.

That's the trust I'm trying to break in you.  You are not safe.  Ever.

Not even Science can save you.  The ideals of Science were born centuries ago, in a time when no one knew anything about probability theory or cognitive biases.  Science demands too little of you, it blesses your good intentions too easily, it is not strict enough, it only makes those injunctions that an average scientist can follow, it accepts slowness as a fact of life.

So don't think that if you only follow the rules of Science, that makes your reasoning defensible.

There is no known procedure you can follow that makes your reasoning defensible.

There is no known set of injunctions which you can satisfy, and know that you will not have been a fool.

There is no known morality-of-reasoning that you can do your best to obey, and know that you are thereby shielded from criticism.

No, not even if you turn to Bayescraft.  It's much harder to use and you'll never be sure that you're doing it right.

The discipline of Bayescraft is younger by far than the discipline of Science.  You will find no textbooks, no elderly mentors, no histories written of success and failure, no hard-and-fast rules laid down.  You will have to study cognitive biases, and probability theory, and evolutionary psychology, and social psychology, and other cognitive sciences, and Artificial Intelligence—and think through for yourself how to apply all this knowledge to the case of correcting yourself, since that isn't yet in the textbooks.

You don't know what your own mind is really doing. They find a new cognitive bias every week and you're never sure if you've corrected for it, or overcorrected.

The formal math is impossible to apply.  It doesn't break down as easily as John Q. Unbeliever thinks, but you're never really sure where the foundations come from.  You don't know why the universe is simple enough to understand, or why any prior works for it.  You don't know what your own priors are, let alone if they're any good.

One of the problems with Science is that it's too vague to really scare you.  "Ideas should be tested by experiment."  How can you go wrong with that?

On the other hand, if you have some math of probability theory laid out in front of you, and worse, you know you can't actually use it, then it becomes clear that you are trying to do something difficult, and that you might well be doing it wrong.

So you cannot trust.

And all this that I have said, will not be sufficient to break your trust.  That won't happen until you get into your first real disaster from following The Rules, not from breaking them.

Eliezer18 already had the notion that you were allowed to question Science.  Why, of course the scientific method was not itself immune to questioning!  For are we not all good rationalists?  Are we not allowed to question everything?

It was the notion that you could actually in real life follow Science and fail miserably, that Eliezer18  didn't really, emotionally believe was possible.

Oh, of course he said it was possible.  Eliezer18 dutifully acknowledged the possibility of error, saying, "I could be wrong, but..."

But he didn't think failure could happen in, you know, real life.  You were supposed to look for flaws, not actually find them.

And this emotional difference is a terribly difficult thing to convey in words, and I fear there's no way I can really warn you.

Your trust will not break, until you apply all that you have learned here and from other books, and take it as far as you can go, and find that this too fails you—that you have still been a fool, and no one warned you against it—that all the most important parts were left out of the guidance you received—that some of the most precious ideals you followed, steered you in the wrong direction—

—and if you still have something to protect, so that you must keep going, and cannot resign and wisely acknowledge the limitations of rationality—

then you will be ready to start your journey as a rationalist.  To take sole responsibility, to live without any trustworthy defenses, and to forge a higher Art than the one you were once taught.

No one begins to truly search for the Way until their parents have failed them, their gods are dead, and their tools have shattered in their hand.


Post Scriptum:  On reviewing a draft of this essay, I discovered a fairly inexcusable flaw in reasoning, which actually affects one of the conclusions drawn.  I am leaving it in.  Just in case you thought that taking my advice made you safe; or that you were supposed to look for flaws, but not find any.

And of course, if you look too hard for a flaw, and find a flaw that is not a real flaw, and cling to it to reassure yourself of how critical you are, you will only be worse off than before...

It is living with uncertainty—knowing on a gut level that there are flaws, they are serious and you have not found them—that is the difficult thing.

71 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Mycroft · 2008-05-18T07:18:17.000Z · LW(p) · GW(p)

I can't help but remember you talking about a teacher who always had an error in his lectures, and over the course of the semester made them harder and harder to find. The last lecture, which was the most complex, didn't have a flaw.

Replies from: None, Document
comment by [deleted] · 2012-12-16T14:32:44.398Z · LW(p) · GW(p)

I can't help but remember you talking about a teacher who always had an error in his lectures, and over the course of the semester made them harder and harder to find. The last lecture, which was the most complex, didn't have a flaw.

I was in the same class, but you're mistaken: The last three lectures didn't have a flaw. :)

comment by Grant · 2008-05-18T07:29:13.000Z · LW(p) · GW(p)

I was more or less surrounded by people of average sanity when I grew up, but they still seemed pretty nuts to me. (Completely off-topic, but I really wonder why people tell children known fantasies such as Santa Claus and the Easter Bunny.)

I don't think it's really accurate to say most people are insane. Clearly they need to be sane for the world to keep on running. IMO, they are insane when they can afford to be - which is pretty common in politics, religion and untestable hypotheses, but a LOT less common in the workplace. Most people just aren't interested in truth because truth doesn't pay out in a lot of circumstances. I wonder if science might change in your direction (and how quickly?) if betting markets were more commonly accepted?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-05-01T23:04:35.162Z · LW(p) · GW(p)

Never mind the Easter Bunny, what about Harry Potter?

Replies from: gjm
comment by gjm · 2014-12-10T22:55:01.950Z · LW(p) · GW(p)

Harry Potter is overtly fictional and no one tells their children about him as if he's real.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-05-18T07:29:20.000Z · LW(p) · GW(p)

You're thinking of Kai Chang's My Favorite Liar. It's linked in the Post Scriptum.

comment by Brian_Jaress2 · 2008-05-18T09:07:00.000Z · LW(p) · GW(p)

When they taught me about the scientific method in high school, the last step was "go back to the beginning and repeat." There was also a lot about theories replacing other theories and then being replaced later, new technologies leading to new measurements, and new ideas leading to big debates.

I don't remember if they explicitly said, "You can do science right and still get the wrong answer," but it was very strongly (and logically) implied.

I don't know what you were taught, but I expect it was something similar.

All this "emotional understanding" stuff sounds like your personal problem. I don't mean that it isn't important or that I don't have sympathy for any pain you suffered. I just think it's an emotion issue, not a science issue.

Replies from: neuromancer92
comment by neuromancer92 · 2012-04-18T18:42:42.632Z · LW(p) · GW(p)

I understand the point you're raising, because it caught me for a while, but I think I also see the remaining downfall of science. It's not that science leads you to the wrong thing, but that it cannot lead you to the right one. You never know if your experiments actually brought you to the right conclusion - it is entirely possible to be utterly wrong, and completely scientific, for generations and centuries.

Not only this, but you can be obviously wrong. We look at people trusting in spontaneous generation, or a spirit theory of disease, and mock them - rightfully. They took "reasonable" explanations of ideas, tested them as best they could, and ended up with unreasonable confidence in utterly illogical ideas. Science has no step in which you say "and is this idea logically reasonable", and that step is unattainable even if you add it. Science offers two things - gradual improvement, and safety from being wrong with certainty. The first is a weak reward - there is no schedule to science, and by practicing it there's a reasonable chance that you'll go your entire life with major problems with your worldview. The second is hollow - you are defended from taking a wrong idea and saying "this is true" only inasmuch as science deprives you of any certainty. You are offered a qualifier to say, not a change in your ideas.

Replies from: Hul-Gil, Jakeness
comment by Hul-Gil · 2012-04-18T19:21:41.960Z · LW(p) · GW(p)

Not only this, but you can be obviously wrong. We look at people trusting in spontaneous generation, or a spirit theory of disease, and mock them - rightfully. They took "reasonable" explanations of ideas, tested them as best they could, and ended up with unreasonable confidence in utterly illogical ideas.

I don't believe most of the old "obviously wrong" beliefs, like a spirit theory of disease, were ever actually systematically tested. Experimentation doesn't prevent you from coming to silly conclusions, but it can throw out a lot of them.

(A nitpick: Either these things are only obviously wrong in retrospect, or they did not start with reasonable explanations. That is, either we cannot rightfully mock them, or the ideas were ridiculous from the beginning.)

As for the rest, I don't disagree with your assertions - only the (implied) view we should take of them. It is certainly true that science can be slow, and true that you can't ever really know if your explanation is the right one. But I think that emphasis on knowing "the real truth", the really right explanation, is missing the point a little; or, in fact, the idea of the One True Explanation itself is unproductive at best and incoherent at worst. After all, even if we eventually have such an understanding of the universe that we can predict the future in its entirety to the finest level of detail theoretically possible, our understanding could still be totally wrong as to what is "actually" happening. Think of Descartes' Evil Genius, for example. We could be very, very confident we had it right... but not totally sure.

But - once you are at this point, does it matter? The power of science and rationality lies in their predictive ability. Whether our understanding is the real deal or simply an "[apparently] perfect model" becomes immaterial. So I think yes, science can lead you to the right conclusion, if by "right" we mean "applicable to the observed world" and not The Undoubtable Truth. No such thing exists, after all.

The slowness is a disappointment, though. But it's accelerating!

comment by Jakeness · 2013-02-22T01:34:51.528Z · LW(p) · GW(p)

I don't see how what you have said necessitates the "downfall" of science. It seems to me that it only suggests scientists should look at their theories as "the best possible explanation at the current time, which will likely be altered or proven incorrect in the future," rather than the usual "this is right, everything else is wrong." But we already know that this is an improvement everyone should be making to their thought-processes; here scientists are being singled out.

It would be appreciated if someone pointed out flaws in what I have said.

comment by billswift · 2008-05-18T10:04:45.000Z · LW(p) · GW(p)

That is why I am a rationalist and a libertarian. Everyone is totally and completely responsible for every choice they make and everything they do. Everyone, from toddler to parent - no one can protect you from your responsibility. That is the difference between a child and a real adult - the adult knows and accepts their responsibility, while the less-than-mature try to deny or hide from it.

@Brian: Emotion is the driver of everything, even rationality.

comment by MichaelAnissimov · 2008-05-18T10:05:11.000Z · LW(p) · GW(p)

Why do you lack social curiosity? Do you think it's a neuro-quirk, or just a normal quirk?

Replies from: Jakeness
comment by Jakeness · 2013-02-22T01:22:35.922Z · LW(p) · GW(p)

I can't speak for him, but I developed below-average social curiosity after I realized that people usually talk about things that aren't really interesting.

comment by Roland7 · 2008-05-18T10:53:53.000Z · LW(p) · GW(p)

Eliezer,

this article is GOLD!

comment by RobinHanson · 2008-05-18T11:22:31.000Z · LW(p) · GW(p)

People who've had their trust broken in the sanity of the people around them, seem to be able to evaluate strange ideas on their merits.

I'd say instead that this prod produces a high-variance response. Some rise to the challenge and become more rational, while others fall even deeper into delusion. Yes, the most rational people tend to have experienced this, but so have the most irrational people.

comment by Tim_Tyler · 2008-05-18T11:58:50.000Z · LW(p) · GW(p)

Science is capitalised, suggesting an abnormal definition of the term. Can this definition be found somewhere? What is "Science" - if it is different from science - and if it is not different, then why capitalise it?

comment by Vladimir_Nesov2 · 2008-05-18T12:34:23.000Z · LW(p) · GW(p)

I started to seriously think about rationality only when I started to think about AI, trying to understand grounding. When I saw that meaning, communication, correctness and understanding are just particular ways to characterize probabilistic relations between "representation" and "represented", it all started to come together, and later was transferred to human reasoning and beyond. So, it was the enigma of AI that acted as a catalyst in my case, not a particular delusion (or misplaced trust). Most of the things I read on the subject were outright confused or in a state of paralyzed curiosity, not deluded in a particular technical way. But so is "Science". The problem is in settling for the status quo, walking along the trodden track where it's possible to do better.

Thus, I see this post as a demonstration by example of how important it is to break the trust in all of your cherished curiosity stoppers.

comment by Roland2 · 2008-05-18T13:51:05.000Z · LW(p) · GW(p)

I tried, but didn't find a flaw, anyone else?

Replies from: MarkusRamikin
comment by MarkusRamikin · 2014-12-10T21:28:27.536Z · LW(p) · GW(p)

No, and I would be surprised if a lot of others have but are mysteriously refusing to show off and tell.

On the whole, this is easily one of my favourite posts on LW, together with "Something to Protect". But that trollish postsciptum doesn't work.

comment by Caledonian2 · 2008-05-18T14:11:01.000Z · LW(p) · GW(p)

The idea that flaws need to be added - and that the final lecture will be flawless - is both silly and presumptuous. There will almost certainly be flaws, whether they are added or not, and our judgment is not adequate to determine whether our own work has them or not.

Eliezer, all of your problems with "Science" seem to stem not from any problems with the method itself, but from your personal tendency to treat the method as a revelation that people have an emotional investment in: in other words, a religion.

There are a variety of ways people can fail to put science into practice. One of the most pernicious is failing to apply it in situations where it is clearly called for, because we have an emotional investment in holding positions that we don't want to disturb. One especially dangerous subtype of this error is when the important subject is our 'scientific' reasoning and the conclusions we derived from it. It is even more dangerous than the general case because it doesn't just involve a corruption of our ability to deal with one specific set of problems, but a corruption of the general method we must use to rationally investigate the world. Instead of merely having a blind spot, we lose our sight completely, while at the same time losing our ability to detect that we're blind.

You are guilty of this error. That doesn't mean that you've gained a unique insight that must be shared with the world at all costs. This is a very old and trivial insight that most people worth listening to have already produced independently.

Replies from: Kenny
comment by Kenny · 2013-03-21T15:36:15.321Z · LW(p) · GW(p)

Without the second and last paragraphs, this would be a wonderful comment.

comment by Peter_Turney · 2008-05-18T14:30:46.000Z · LW(p) · GW(p)

I agree with your general view, but I came to the same view by a more conventional route: I got a PhD in philosophy of science. If you study philosophy of science, you soon find that nobody really knows what science is. The "Science" you describe is essentially Popper's view of science, which has been extensively criticized and revised by later philosophers. For example, how can you falsify a theory? You need a fact (an "observation") that conflicts with the theory. But what is a fact, if not a true mini-theory? And how can you know that it is true, if theories can be falsified, but not proven? I studied philosophy because I was looking for a rational foundation for understanding the world; something like what Descartes promised with "cogito ergo sum". I soon learned that there is no such foundation. Making a rational model of the world is not like making a home, where the first step is to build a solid foundation. It is more like trying to patch a hole in a sinking ship, where you don't have the luxury of starting from scratch. I view science as an evolutionary process. Changes must be made in small increments: "Natura non facit saltus".

One flaw I see in your post is that the rule "You cannot trust any rule" applies recursively to itself. (Anything you can do, I can do meta.) I would say "Doubt everything, but one at a time, not all at once."

comment by Günther_Greindl · 2008-05-18T14:50:01.000Z · LW(p) · GW(p)

@Caledonian: If it is an old and trivial insight, why do most scientists and nearly all non-scientists ignore it?

As Eli said in his post, there is a difference between saying the words and knowing, on a gut level, what it means - only then have you truly incorporated the knowledge and it will aid you in your quest to understand the world.

Also, you say: "but from your personal tendency to treat the method as a revelation that people have an emotional investment in"

Of course people have an emotional investment in this stuff!! Do not make the old mistake of confusing rationality with not being emotional (I guess Star Trek, with Mr. Spock, is guilty of that, at least for our generation).

And what could be more emotional than dumping the legends of your tribe/parents/priests/elders?

For rationality and emotion in science, read for instance: Paul Thagard, "The Passionate Scientist: Emotion in Scientific Cognition" - http://cogsci.uwaterloo.ca/Articles/Pages/passionate.html

comment by Recovering_irrationalist · 2008-05-18T15:41:50.000Z · LW(p) · GW(p)

You will have to study [...] and social psychology [...]

Please could you recommend some social psychology material?

comment by Jared · 2008-05-18T16:35:47.000Z · LW(p) · GW(p)

As you explain so clearly here, the point is to think for ourselves instead of trusting in any person or system. This valuable insight can be reached by many idiosyncratic paths through life. Your personal path to it, trusting too much in Science itself, is an ironically interesting one, unlikely to be trod by most. That's why your line "Science Isn't Strict Enough" fails to resonate with some readers.

comment by Unknown · 2008-05-18T16:40:44.000Z · LW(p) · GW(p)

Jared, why should you trust yourself more than someone else? And if there is someone more worthy of trust than you, wouldn't it be a more rational strategy to let him think for you instead of thinking for yourself?

comment by Jared · 2008-05-18T16:55:26.000Z · LW(p) · GW(p)

If my own judgment is so faulty that I choose to let somebody else do all my thinking for me, then how can I even trust the thinking behind my choice?

comment by bambi · 2008-05-18T16:56:10.000Z · LW(p) · GW(p)

If you think that Science rewards coming up with stupid theories and disproving them just as much as more productive results, I can hardly even understand what you mean by Science beyond the "observe, hypothesize, test, repeat" overview given to small children as an introduction to the scientific method. Was Eliezer-18 blind to anything beyond such simple rote formulas?

Negative results are forgiven but hardly ever rewarded (unless the theory disproven is widely believed).

If you'd put aside the rather bizarre bitterness and just say: "Bayesian rationality is a good way to pick which theories to test. Here's some non-toy examples worked through to demonstrate how" that would be much more useful than these weird parables and goofy "I am an outcast" rants.

comment by William_Tanksley · 2008-05-18T17:13:44.000Z · LW(p) · GW(p)

"@Caledonian: If it is an old and trivial insight, why do most scientists and near all non-scientists ignore it?"

They don't. The mismatch between you and them is that they're busy thinking about something else at the moment. I like the rule Turney gave above: "Doubt everything, but one at a time, not all at once." Of course, a single person can't follow that rule completely (there's not enough time in a lifespan to doubt EVERYTHING), and most people pick the wrong things to doubt or are lazy in applying the rule.

Of course, that rule's going to get in the way of reaching truth in some cases (some falsehoods come in self-reinforcing pairs both of which must be doubted in order to falsify either, and some things can't profitably be denied even for the sake of argument), but that's the case with any process, and this is something we've known since Goedel.

This kind of confuses me about this series... If all he was telling us was that Science is a powerful set of rules, and that therefore it can't eliminate all contradictions nor state all facts, I'd simply agree with him. But he seems to be saying that Bayesianism is different from Science, that somehow applying it instead of Science will have better results. It seems to me that both are processes, and both have blind spots.

comment by Caledonian2 · 2008-05-18T17:40:48.000Z · LW(p) · GW(p)

I find it difficult to be sympathetic towards someone who complains he wasn't warned that the rule "do not take things on faith" wasn't supposed to be taken on faith.

We could provide a warning, of course. But how would we then ensure that people understood and applied the warning? Warn them about the warning, perhaps? And then give them a warning about the warning warning?

We could talk until we're blue in the face, but the simple truth is that you cannot force people to apply a method consistently, rigorously, or intelligently. No amount of adding onto the lesson will make people apply it properly, it merely offers them more things to misunderstand, ignore, and apply inconsistently.

comment by Vladimir_Nesov2 · 2008-05-18T17:47:51.000Z · LW(p) · GW(p)

We could provide a warning, of course. But how would we then ensure that people understood and applied the warning? Warn them about the warning, perhaps? And then give them a warning about the warning warning?

That's the problem with discrete reasoning. When you have probabilities, this problem disappears. See http://www.ditext.com/carroll/tortoise.html

comment by Brian_Jaress2 · 2008-05-18T17:59:40.000Z · LW(p) · GW(p)

@billswift: Emotion might drive every human action (or not). That's beside the point. If an emotion drives you into a dead end, there's something wrong with that emotion.

My point was that if someone tells you the truth and you don't believe them, it's not fair to say they've led you astray. Eliezer said he didn't "emotionally believe" a truth he was told, even though he knew it was true. I'm not sure what that means, but it sounds like a problem with Eliezer, involving his emotions, not a problem with what he was told.

comment by Unknown · 2008-05-18T18:19:17.000Z · LW(p) · GW(p)

Jared, it is possible to see that someone is more intelligent and trustworthy than you, without therefore being yourself more intelligent and trustworthy than him.

comment by Caledonian2 · 2008-05-18T18:35:31.000Z · LW(p) · GW(p)

Eliezer didn't trust science too much. He didn't trust it enough. Instead of taking the duties and requirements of skepticism seriously, he treated the scientific method as another faith system.

I'm sure that was a very comforting and familiar approach to take, but it was still wrong. Completely, fundamentally wrong. It's utterly incompatible with the skepticism, open-mindedness, and radical doubt that is essential to the scientific method. And it seems to have had long-lasting implications for the sorts of positions Eliezer takes.

comment by Roland2 · 2008-05-18T19:18:36.000Z · LW(p) · GW(p)

One suggestion for the flaw:

Conclusions from this article: a) you are never safe; b) you must understand a) on an emotional basis; c) the only way to achieve b) is through an experience of failure after following the rules you trusted.

The flaw is that the article actually does the opposite of what it wants to accomplish: by giving the warning (a) it makes people feel safer. In order to convey the necessary emotion of "not feeling safe" (b) Eliezer had to make the PS regarding the flaw.

In a certain sense this also negates c). I think Eliezer doesn't really want us to fail (c) in order to recognize (a); the whole point of overcomingbias.com is to prevent humans from failing. So if Eliezer did a good job in conveying the necessary insecurity through his PS then hopefully c) won't happen to you.

Roland

Replies from: wafflepudding
comment by wafflepudding · 2017-04-04T21:44:13.506Z · LW(p) · GW(p)

That second paragraph was hard for me. Seeing "a)" and "b)" repeated made me parse it as a jigsaw puzzle where the second "a)" was a subpoint of the first "b)", but then "c)" got back to the main sequence only to jump back to the "b)", the second subpoint of the first "b)". That didn't make any sense, so I tried to read each clause separately, and came up with "1. You are never safe. 2. You must understand. 3. On an emotional basis..." before becoming utterly lost. Only after coming back to it later did I get that repeated letters were references to previous letters.

comment by Nick_Tarleton · 2008-05-18T19:34:36.000Z · LW(p) · GW(p)

Roland, agreed.

Does anyone disagree that science does not have nearly as strict quantitative constraints as Bayescraft on what you may believe?

comment by Roland2 · 2008-05-18T19:40:22.000Z · LW(p) · GW(p)

@Vladimir Nesov:

why do you say that the problem disappears when you have probabilities?

I guess you still have the same basic problem which is: what are your priors? You cannot bootstrap from nothing and I think that is what the tortoise was hinting at, that there are hidden assumptions in our reasoning that we are not aware of and that you can't think without using those hidden assumptions.

comment by Vladimir_Nesov2 · 2008-05-18T20:13:28.000Z · LW(p) · GW(p)

Roland,

Probabilities allow grades of belief, and just as Achilles's pursuit of the tortoise can be considered as consisting of an infinite number of steps, if you note that the steps actually get infinitely short, you can sum them up to a finite quantity. Likewise, you can join infinitely many increasingly unlikely events into a compound event of finite probability. It is a way to avoid the regress Caledonian was talking about. Evidence can shift probabilities on all meta-levels, even if in some hapless formalism there are infinitely many of them, and still lead to reasonable finite conclusions (decisions).
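A minimal sketch of how that convergence works, assuming the doubt assigned at meta-level $n$ shrinks geometrically to $\varepsilon \cdot 2^{-n}$ (a hypothetical rate and bound $\varepsilon$, chosen only for illustration):

```latex
% Illustration only: geometrically shrinking meta-level doubts sum to a finite total.
% \varepsilon is a hypothetical bound on the first-level uncertainty.
\sum_{n=1}^{\infty} \varepsilon \, 2^{-n}
  = \varepsilon \left( \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots \right)
  = \varepsilon
```

So an infinite regress of warnings-about-warnings need not blow up, provided each successive level carries correspondingly less weight.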

comment by Caledonian2 · 2008-05-18T20:28:38.000Z · LW(p) · GW(p)

Likewise, you can join infinitely many increasingly unlikely events into a compound event of finite probability. It is a way to avoid the regress Caledonian was talking about.

No, Mr. Nesov, it is not. You and I are talking at cross purposes.

comment by Vladimir_Nesov2 · 2008-05-18T20:43:56.000Z · LW(p) · GW(p)

Caledonian, you are not helping by disagreeing without clarification. You don't need to be certain about anything, including estimation of how much you are uncertain about something and estimation of how much you are uncertain about the estimation, etc.

comment by Doug_S. · 2008-05-18T22:12:46.000Z · LW(p) · GW(p)

"The experimental method shall decide which hypothesis wins"?

When there are experiments that can reasonably be done, or have already been done, then this works, right?

comment by Z._M._Davis · 2008-05-19T06:07:52.000Z · LW(p) · GW(p)

"Do you think it's a neuro-quirk, or just a normal quirk?"

Wait, there's another kind?

comment by Zubon · 2008-05-19T12:44:29.000Z · LW(p) · GW(p)

"Doubt everything, but one at a time, not all at once."

Interestingly, Robin Hanson has an existing post on this subject:

To have the best chance of succeeding in a radical project, you should instead choose just a few related dimensions on which to make radical choices, and then make conservative conventional choices on all the other dimensions. This strategy minimizes the chance that some other project dimension will go badly wrong and take down your central radical idea with it.

comment by Phillip_Huggan · 2008-05-22T18:43:48.000Z · LW(p) · GW(p)

About flaws in the post: the idea that environmentalists shouldn't oppose a scaling up of nuclear power is a flaw. This paper: http://www.stormsmith.nl/ was lucky enough to be written (I was lucky enough to find it, since every nuclear utility on the planet ignored the basic economic analysis contained within it). Basically, scaling up nuclear fails to account for the complete life-cycle cost of decommissioning a power plant. And, additionally, going to all nukes ensures the nuclear lobby (the process of lobbying is something Libertarians don't understand) becomes a nearly permanent chunk of the world's economy. My point is that Bayesian reasoning here only unearths the nuclear facts and lies the nuclear industry forwards. You need reliable information sources in addition to Bayes to make correct judgement calls on economics/energy here.

Cryonics is another flaw. IDK if it works or not. But the expensive process certainly shouldn't be a part of Universal Healthcare coverage at present. The best research (not 1970's deductive reasoning) about brains I've read to date suggests thought is a substrate-specific (IDK how much; semiconductors no way, but maybe more inclusive than CNS proteins) process that functions as temperature-dependent solitons. The fact that the brain survives whatever temperature it reaches in the cold water of ice-slip hypothermia survivors does not necessarily mean brain processes will survive liquid nitrogen or helium temperatures. Under the mathematical model of reality most transhumanists have, temperature (requires physics) doesn't even exist!! I hope cryonics works, and if I were rich enough I might sign up or fund suspension research, but to suggest those who don't believe in cryonics are fools is to suggest brains work identically at 25C and -273C. Then the "rationalist" rebuttal is always to invoke "uploading for immortality" (where Transhumanism dies to me, despite all its progressive memes). If rationalists can't understand mathematics isn't physics, I don't want to be labelled a rationalist, and I will pepper posts such as this to keep unsuspecting readers from mindlessly believing a mindless belief system; I am trying to prevent H+ from functioning as a cult.

Intrade free money?! Surely you must know it takes $5000-$15000/yr in basic costs alone to live in most of the Western world. Surely you must know the average savings rate is very low. Please qualify statements like this with "rich/middle classes can make investments on liquid markets" (can Intrade really handle trillions of $$ as suggested, and wouldn't it then be subject to manipulation? I'll bet the 65 cents in my pocket Phillip will splash coffee on himself). Bayes may be important, but if it misses very basic facts (like 9/10ths of the world can't presently afford to live off investment income), why would the world want to incorporate more Bayes, a small subset of probability theory already in math, logic and computer science curriculums (I think)? Sweet. I did splash coffee on myself and double up to $1.30. Now I can donate to the political party most likely to teach (for the purpose here of reasoning skills, not ethics) probability theory instead of religious dogma, in public schools. That is the conclusion of the post: better public education at the expense of pop-culture. Or more funding for public education commercials?

Replies from: Lotusmegami
comment by Lotusmegami · 2010-06-25T23:33:56.485Z · LW(p) · GW(p)

I think Phillip has completely misunderstood the purpose of cryonics. Transhumanists don't believe that the brain continues to "function" after a person has been vitrified. Before someone can live again, scientists of the future must find a way to revive them.

comment by xelxebar · 2011-07-15T06:52:16.173Z · LW(p) · GW(p)

I don't know about you guys, but being wrong scares the crap out of me. Or to say it another way, I'll do whatever it takes to get it right. It's a recursive sort of doubt.

comment by kilobug · 2011-09-05T15:59:01.710Z · LW(p) · GW(p)

Hi,

Once again interesting post, but it doesn't apply to my personal case. I've access to no statistics on the issue so I can't claim how exceptional I am, but my own parents are "more rational than average" (they are atheist maths teachers), and I don't think they are insane. Or not more than anyone else, at least.

I did realize that my parents were not perfect, and that while I could trust them to love me and care for me and want the best for me, I couldn't blindly trust everything they would say. But that didn't require, for me, such a massive emotional break.

Maybe you'll say I'm not "unusually rational"... but at least, I'm trying to better myself, or I wouldn't spend hours reading LW ;)

Before reading LW, I was not really doubting Science, but neither was I considering it to be an utter sacrilege to claim it was flawed (or that it could be superseded by something better). I knew Bayes' Theorem, without realizing it was much more important than just understanding a mammography result (which was the way I was using it). That's a shortcoming on my part, sure.

I can say that even before reading LW, I had a gut feeling that the Copenhagen interpretation was just... not right. I was in doubt, torn between that gut feeling and my own humility saying "well, you're not a physicist, how can you be more right than they are?" MWI seems much cleaner to me (even if there are a few things that still bother me), but that gut feeling drove me into reading more about QM and rationality. That, and having something to protect.

Well, that's just my own personal experience, hoping it can help understand things better, but not claiming any generalization from it.

comment by [deleted] · 2012-01-28T19:36:33.110Z · LW(p) · GW(p)

Now that we have once again established that 1 and 0 are not probabilities, we have to remember that probabilities are still a strictly ordered set. How do we make it less dangerous?

comment by neuromancer92 · 2012-02-14T05:52:06.082Z · LW(p) · GW(p)

For me, the discovery that science is too slow was bound up with the realization that science is not safe. My private discovery of the slowness of science didn't come from looking at the process of scientific discovery and reflecting on the time it took - rather, it arose from realizing that the things I learned or discovered via science were slower and more painful than those I learned from other methods. "Other methods" encompasses everything from pure mathematics to That Magical Click, the first inescapable and the second, initially, unsupported. Realizing that science was a fairly low-quality set of tools carried with it the realization that its inefficiency was a function of its precautions. Not trusting science as the ideal method for discovery, I ceased to trust it as ideal for reliability.

New to this site, Bayescraft, and rationalism as a whole, I still have a mentor left to distrust. Consciously, I know that these techniques are imperfect, but I have yet to understand them well enough to be failed by them.

comment by beberly37 · 2012-06-20T20:55:56.553Z · LW(p) · GW(p)

Welcome to the Earth where ethanol is made from corn and environmentalists oppose nuclear power.

I find this to be a very attention-grabbing comparison, so much so that I had to re-read this post 5+ times before I could see the forest for the trees (or tree as the case may be).

The reason these two examples strike me so is that I once held both of the underlying beliefs (i.e., that corn ethanol is bad and so is nuclear power). While I reversed both of these beliefs many years ago (prior to discovering HPMOR and LessWrong), I now see them as "belief as attire" (tree huggers think nuclear is bad, I'm a tree hugger, therefore I think nuclear is bad) and "password guessing" (why is corn ethanol a bad idea?... thermodynamics... Gold Star!)

After gathering more information about these two "controversies" than can be gathered from Mother Jones or Popular Mechanics, I firmly support nuclear power expansion and think it is quite insane that we don't make more ethanol from corn. I would be happy to support my positions; the former would be rather concise, the latter would be considerably longer, so I'll save it until asked.

Perhaps this would have been less distracting:

Welcome to the Earth where 46% of Americans believe in creationist origins of humans and only 15% believe in evolutionary origins of humans.

Replies from: MarkusRamikin
comment by MarkusRamikin · 2014-07-02T10:40:15.632Z · LW(p) · GW(p)

I'll save it until asked.

Consider yourself asked!

Replies from: beberly37
comment by beberly37 · 2015-09-27T02:24:10.742Z · LW(p) · GW(p)

I obviously haven't logged into LessWrong in a long time. Do you still want the answer?

Replies from: Good_Burning_Plastic
comment by Good_Burning_Plastic · 2015-09-27T09:19:41.483Z · LW(p) · GW(p)

Dunno about MarkusRamikin, but I'd sure be interested in hearing why you "think it is quite insane that we don't make more ethanol from corn".

Replies from: MarkusRamikin
comment by MarkusRamikin · 2015-09-27T10:02:50.796Z · LW(p) · GW(p)

Yep, me too.

Replies from: beberly37
comment by beberly37 · 2015-09-28T16:28:10.319Z · LW(p) · GW(p)

disclaimer This defense of corn ethanol is by no means "publish ready"; it is simply a gathering of data and concepts obtained during my work that has been sufficient to change my mind on the merits of a seemingly insane practice. It could use more work, however I don't really care enough either way to put much more effort into this particular topic.

The primary data-driven argument against corn ethanol is that it takes more energy to make than the fuel contains. A statement that is generally true, which I don't really care about. The whole point of getting away from fossil fuels is to reduce the emissions of greenhouse gases (GHG) and slow/stop/reverse climate change. My grizzled, old, super-conservative thermo professor in undergrad often complained about hippies wanting to conserve energy. "Energy is always conserved," he would suggest, "what we need to conserve is exergy." Likewise, I (and I believe the collective "we" should feel the same) don't care about energy balance, I care about carbon balance.

To find the "best" data on the carbon balance of fuels, I turn to the California Air Resources Board, which limits carbon intensities (CI) for fuel sold in California; they have lists of every producer of fuel sold in the state and list the CIs of the fuels. The unit they use is gCO2e/MJ (grams of CO2 equivalent per megajoule). These lists can be found here. They also have published pathways for CI, which are documents describing how they arrived at the CI numbers. The one for corn ethanol is here. Reading through the pathway for corn ethanol, the biggest takeaway is that there is wide variation in production practices that have a major impact on the CI of ethanol; for example, the highest CI for corn ethanol listed as of 5/20/15 is 120 (1) gCO2e/MJ while the lowest is 63 (1) gCO2e/MJ. That's nearly a factor of 2. For comparison, the CI of standard CA gasoline is 95 (1). The difference between the high ethanol CI and the low is primarily the production energy (i.e., heat for boilers): the former is coal and the latter is natural gas with some landfill gas and waste wood.
If you look at the breakdown for "average" corn ethanol, there are three major sources of carbon emissions: ag chemical production, ethanol production, and land use, each being approximately 30 g/MJ. The total number listed for "average" dry mill is 97 (1) gCO2e/MJ. I should note that there is a -11 g/MJ credit for "co-products", which is the leftover solids used as animal feed, called dried distillers grains.

So here is my general belief: making corn ethanol is not inherently bad (insane), however the way we do it is slightly insane. We get a marginally lower-CI fuel, which gets blended into gasoline and reduces non-GHG emissions (at least that's why it's mandated in CA). However, by shifting the process (which I might outline sometime if folks are interested, but would turn this comment into more of a TLDNR) to one that is more sustainable, and more cost effective, corn ethanol production becomes perfectly sane.

So why does this mean we should have more corn ethanol? Well, more corn ethanol means more corn ethanol plants (building out the infrastructure is costly and time consuming and a large barrier to expansion). Eventually, there will be a revenue incentive for ethanol plants to care about the CI of their fuel (since in the US, as a whole, it's only mandated as being non-fossil, caring little about GHGs). California is a good example of this. Gasoline blenders have to buy ethanol because gas in CA has to be 10% ethanol. There is also a limit to the CI of the gas/ethanol blend, which right now is higher than the CI of most ethanol. However, this "low carbon fuel standard" CI drops every year until 2020, where it stays at 89 (1) gCO2e/MJ. This means that if the ethanol a company is trying to sell in CA has a CI above 89, the customer would have to purchase carbon credits as well. So companies would then have an incentive to change their production practices to lower their CI, because they could sell their ethanol for higher prices. If/when a US carbon tax (or something akin to the CA Low Carbon Fuel Standard) is adopted, having an existing ethanol infrastructure will make the scale-up and spread of low-carbon liquid fuels able to happen much faster.
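A minimal numerical sketch of that incentive, using only the CI figures quoted above and the 89 gCO2e/MJ benchmark mentioned there (the real LCFS credit accounting is more involved; the names and the simple per-MJ formula here are illustrative assumptions, not the regulation itself):

```python
# Minimal sketch of a low-carbon-fuel-standard incentive, using the carbon
# intensities (CI, in gCO2e/MJ) quoted in the comment above. Real LCFS
# accounting is more detailed; this only illustrates the direction of the incentive.

CI_STANDARD = 89.0  # 2020 benchmark mentioned above, gCO2e/MJ

fuels = {
    "coal-fired corn ethanol (highest listed CI)": 120.0,
    "average dry-mill corn ethanol": 97.0,
    "natural gas / landfill gas / waste wood ethanol (lowest listed CI)": 63.0,
    "standard CA gasoline": 95.0,
}

def credit_per_mj(ci_fuel: float, ci_standard: float = CI_STANDARD) -> float:
    """Grams of CO2e per MJ below (+) or above (-) the standard."""
    return ci_standard - ci_fuel

for name, ci in fuels.items():
    balance = credit_per_mj(ci)
    status = "earns credits" if balance > 0 else "must buy credits"
    print(f"{name}: CI = {ci:.0f} gCO2e/MJ -> {balance:+.0f} g/MJ ({status})")
```

On these numbers, a producer who switches its process heat from coal to natural gas or waste biomass flips from owing credits to selling them, which is the revenue incentive described above.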

There are a few other sides to the corn ethanol argument; growing crops for fuel instead of food, for example. An argument I find full of holes, since the increase in the corn crop has not been on the same scale as the increase in ethanol production (2). This is due to the aforementioned animal feed co-product and the fact that before widespread ethanol production most of the corn grown in the US was used as animal feed (2). So making ethanol doesn't displace another crop, only the starch portion of cattle feed. If you have a moral problem with growing a fuel while people in third world countries starve, you should have the same moral problem with growing a crop to raise meat while people starve. Also, if you have a moral problem with displacing food from American mouths, we have an obesity problem, which means we produce and consume too many calories per capita already; we don't need more corn in our diet. There is also the notion that ethanol is bad for engines, while I believe that the higher anti-knock characteristics of ethanol combined with the higher heat of vaporization mean ethanol-only vehicles could have diesel-like compression ratios with Otto-cycle performance, resulting in a higher-efficiency, lower non-GHG-emissions vehicle. There are a few other minor facets, but I think they are immaterial, though I did not want to give the impression that I did not consider them.

1) I’ve truncated these numbers as they are reported to the hundredths place.
2) I really need to dig up some good reference for these, because they are based on me looking at old ag reports, which is less than ideal

As for nuclear power, we know that it is a near 100% probability that burning fossil fuels is bad for the planet (and us too) and that can't really be mitigated with existing technology. However, catastrophe from nuclear power has a probability less than 1 and there is technology that can decrease that probability.

edited to fix hyperlinks and to fix unintended text formatting.

Replies from: Good_Burning_Plastic
comment by Good_Burning_Plastic · 2015-09-29T07:16:22.709Z · LW(p) · GW(p)

Who the heck downvotes comments for answering a question?

Replies from: entirelyuseless
comment by entirelyuseless · 2015-09-29T13:29:21.445Z · LW(p) · GW(p)

I wasn't the one who downvoted, but the comment is not a very good defense of the claim that "it is quite insane that we don't make more ethanol from corn." At most it defends the claim that it would not be unreasonable to make more.

comment by redlizard · 2013-02-21T22:59:49.309Z · LW(p) · GW(p)

No one begins to truly search for the Way until their parents have failed them, their gods are dead, and their tools have shattered in their hand.

Nihil supernum.

comment by Entraya · 2014-02-17T09:59:34.832Z · LW(p) · GW(p)

That post scriptum... It's just so amusing to have someone write out your exact thoughts and worries like that. It is very rare that I get tears in my eyes from giggling, and I can't stop smirking about that. It is quite bothersome indeed that I am so unskilled in this art of reasoning that the best I can do is follow your words and hope they lead me to somewhere where I can eventually realize said flaws. I feel my journey will be riddled with such flaws no matter how hard I try to avoid it. It's the nagging feeling I have almost constantly, and which I have trouble explaining.

comment by [deleted] · 2014-05-01T22:13:27.609Z · LW(p) · GW(p)

What's trust?

Forget it. It's an honest question but it'll just appear as attention grabbing.

comment by AshwinV · 2014-05-02T04:41:12.217Z · LW(p) · GW(p)

This is probably my favorite entry ever.

comment by timujin · 2017-08-17T14:42:58.351Z · LW(p) · GW(p)

What's wrong with ethanol made from corn, anyway?

Replies from: robirahman
comment by Robi Rahman (robirahman) · 2017-08-18T01:11:01.705Z · LW(p) · GW(p)

It's another subsidy to agribusiness conglomerates, which leech huge sums of money from taxpayers already.

And it uses up the corn so it can't be sold to hungry poor people, which is bad because starvation is bad.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2017-08-18T12:48:40.975Z · LW(p) · GW(p)

Those sound like fixable problems.

comment by Emiya (andrea-mulazzani) · 2020-03-04T01:59:35.066Z · LW(p) · GW(p)

The only flaw I can see is that this reasoning seems to put a lot of weight on the probability of meeting disaster by sticking with a well-thought-out, well-tested but imperfect set of rules, and not enough weight on the probability of meeting disaster by trying to be clever and do better than the rules, either out of genuine fear of their limitations, or to follow a course of action that you favour for reasons that aren't really part of your goals but that you are then free to endorse by invoking the fear of the rules' limits.

I get that the main point of the post is that you can't just relax, get comfortable and think you've finally got the perfect way of thinking if you want to actually get the right answers often enough, and I also agree with it. But I still feel that, when considering whether to depart from the rules, even those of lowly common sense, the possibility that you are just about to shoot yourself in the foot and meet some interesting new disaster, for reasons you'll see only afterward and that could have been avoided by sticking to them, has a significant probability compared to the possibility that you are actually in a situation where the rules aren't enough and you'll have to improve on them to succeed; that this probability should be carefully weighed before choosing how to act; and that this warning should have been included in the reasoning above, as it has usually been included in other posts.

A practical example would be making this comment as a newcomer, ignoring the common sense consideration that as a flaw it feels kinda obvious and that someone else would have pointed it out by now if I weren't simply missing the point of the post, but I still think that "don't trust the rules" is a rule that requires an awful lot of caution before being mistrusted too much.

Replies from: dranorter
comment by dranorter · 2020-03-10T20:24:10.111Z · LW(p) · GW(p)

It's easy to list flaws; for example the first paragraph admits a major flaw; and technically, if trust itself is a big part of what you value, then it could be crucially important to learn to "trust and think at the same time".

Are either of those the flaw he found?

What we have to go on are "fairly inexcusable" and "affects one of the conclusions". I'm not sure how to filter the claims into a set of more than one conclusion, since they circle around an idea which is supposed to be hard to put into words. Here's an attempt.

  • Tentative observation: the impressive (actively growing) rationalists have early experiences which fall into a cluster.
  • The core of the cluster may be a breaking of "core emotional trust".
  • We can spell out a vivid model where "core emotional trust" is blocking some people from advancing, and "core emotional trust" prevents a skill/activity called "lonely dissent", and "lonely dissent" is crucial.
  • We can have (harmful, limiting) "core emotional trust" in science (and this example enriches our picture of it, and our picture of how much pretty obvious good "lonely dissent" can do).
  • There is no (known) human or mathematical system which is good (excusable, okay, safe) to put "core emotional trust" in.
  • "Core Emotional Trust" can only really be eliminated when we make our best synthesis of available external advice, then faithfully follow that synthesis, and then finally fail; face the failure squarely and recognize its source; and then continue trying by making our own methods.

More proposed flaws I thought of while spelling out the above:

  • An Orthodox Jewish background "seems to have had the same effect", but then the later narrative attributes the effect to a break with Science. Similarly, the beginning of the post talks about childhood experiences, but the rest talks about Science and Bayescraft. In some ways this seems like a justifiable extrapolation, trying to use an observation to take the craft further. However, it is an extrapolation.
  • The post uses details to make possibilities seem more real. "Core emotional trust" is a complex model which is probably wrong somewhere. But, that doesn't mean it's entirely useless, and I don't feel that's the flaw.
  • The argument that Bayesianism can't receive our core trust is slightly complex. Its points are good so far as they go, but to jump from there to "So you cannot trust" period is a bit abrupt.
  • It occurs to me that the entire post presupposes something like epistemic monism. Someone who is open to criticism, has a rich pool of critique, a rich pool of critique-generating habits, and constant motivation to examine such critiques and improve, could potentially have deep trust in Science or Bayescraft and still improve. Deep trust of the social web is a bit different - it prevents "lonely dissent".
  • "Core emotional trust" can possibly be eliminated via other methods than the single, vividly described one at the end of the article. Following the initial example, seeing through a cult can be brought on when other members of the cult make huge errors, rather than oneself.

I suppose that's given me plenty to think about, and I won't try to guess the "real" flaw for now. I agree with, and have violated, the addendum: I had a scattered cloud of critical thoughts in order to feel more critical. (Also: I didn't read all the existing comments first.)

comment by dov · 2023-03-09T11:40:43.006Z · LW(p) · GW(p)

Does your experience support the claim that rationalists had their trust in their milieu (e.g. parents, cult, etc.) broken?

P.S. That was my personal experience, but I'd really like to hear from others.

Replies from: redheadbros
comment by ProofBySonnet (redheadbros) · 2023-06-16T18:13:33.919Z · LW(p) · GW(p)

I had a similar experience, for sure. See the post immediately above yours, when sorted by "newest." Do you want to share your story here?

comment by ProofBySonnet (redheadbros) · 2023-06-16T18:11:50.141Z · LW(p) · GW(p)

This wasn't precisely the reason I got here, but I think the biggest reason that I was open to the idea of rationality when I finally stumbled upon LessWrong and the whole Effective Altruism idea was my experience of becoming gay, after diverging from my Christian upbringing.

During my time going to bible school (yes, I was in that deep), there was a lot of theology where I was confused and felt like I didn't understand why something was true, but just accepted that "I'm sure some high-level theologian out there has a reasonable answer for this, I just don't know it." However, once I began living on my own, naturally drifted away from the church, and then discovered that gay stuff wasn't as bad as I'd been told, I realized (this is how I think of it now) that there was a very large black mark on the source of knowledge internally labelled as "religious authority." That was the first frayed thread in that anachronistic tapestry I had properly seen unraveled for myself, and I suspect that there are many more of such threads that I overlooked previously.