Reference class of the unclassreferenceable

post by taw · 2010-01-08T04:13:36.319Z · LW · GW · Legacy · 154 comments

One of the most useful techniques of rationality is taking the outside view, also known as reference class forecasting. Instead of thinking too hard about the particulars of a given situation and taking a guess that will invariably turn out to be highly biased, one looks at the outcomes of situations that are similar in some essential way.

Figuring out the correct reference class might sometimes be difficult, but even then it's far more reliable than trying to guess while ignoring the evidence of similar cases. Now, in some situations we have precise enough data that the inside view might give a correct answer - but for almost all such cases I'd expect the outside view to be just as usable and not far off in correctness.

Something that keeps puzzling me is the persistence of certain beliefs on Less Wrong. Take the belief in the effectiveness of cryonics - the reference class of things promising eternal (or very long) life is huge and has a consistent 0% success rate. The reference class of predictions based on technology which isn't even remotely here has a perhaps non-zero but still ridiculously tiny success rate. I cannot think of any reference class in which cryonics does well. Likewise the belief in the singularity - the reference class of beliefs in the coming of a new world, be it good or evil, is huge and has a consistent 0% success rate. The reference class of beliefs in almost omnipotent good or evil beings has a consistent 0% success rate.

And many fellow rationalists not only believe that the chances of cryonics or the singularity or AI are far above the negligible levels indicated by the outside view - they consider them highly likely or even nearly certain!

There are a few ways this situation can be resolved:

How do you reconcile them?

154 comments

Comments sorted by top scores.

comment by Roko · 2010-01-10T12:03:28.961Z · LW(p) · GW(p)

Meta question here: why does reference-class forecasting work at all?

Presumably, the process is that when you cluster objects by visible features, you also cluster all their invisible features too, and the invisible features are what determines the time evolution of those objects.

If the category boundary of the "reference class" is a simple one, then you can't fool yourself by interfering with the statistical correlation between visible and hidden attributes.

For example, reference class forecasting predicts that cryo will not work because cryo got clustered with all the theistic religious afterlives, and things like the alchemists' search for the Elixir of Life. The visible attribute we're clustering on is "actions that people believe will result in an infinite life, or >200 year life".

But a cryonics advocate might complain that this argument is trampling roughshod over all the careful inside view reasoning that cryonicists have done about why cryo is different than religion or superstition: namely that we have a scientific theory for what is going on.

If you drew the boundary around "Medical interventions that have a well accepted scientific theory backing them up", then cryo fares better. The different boundaries you can draw lead to focus upon different hidden attributes of the object in question: cryonics is like religion in some ways, but it is also like heart transplants.
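A toy way to see the boundary-dependence Roko describes is reference-class forecasting as base-rate lookup over labeled historical cases. The sketch below uses invented data and labels, purely for illustration; nothing in it comes from the thread:

```python
# Invented toy data: (case, visible features, did it work?)
cases = [
    ("elixir of life",      {"promises >200yr life"},                       False),
    ("religious afterlife", {"promises >200yr life"},                       False),
    ("heart transplant",    {"scientifically backed medical intervention"}, True),
    ("CPR",                 {"scientifically backed medical intervention"}, True),
    ("lobotomy",            {"scientifically backed medical intervention"}, False),
]

def base_rate(feature):
    """Forecast = success frequency among cases sharing the chosen feature."""
    outcomes = [ok for _, feats, ok in cases if feature in feats]
    return sum(outcomes) / len(outcomes)

# Cryonics plausibly carries both visible features, so the forecast is
# entirely a function of where the boundary is drawn:
print(base_rate("promises >200yr life"))                        # 0.0
print(base_rate("scientifically backed medical intervention"))  # ~0.67
```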

comment by RolfAndreassen · 2010-01-08T22:49:09.606Z · LW(p) · GW(p)

I suggest a reference class "Predictions of technologies which will allow humans to recover from what would formerly have been considered irreversible death", with successful members such as heart transplants, CPR, and that shock thing medical shows are so fond of. (You know, where they shout "Clear!" before using it.)

Replies from: RolfAndreassen
comment by RolfAndreassen · 2010-01-10T20:29:37.292Z · LW(p) · GW(p)

Having got 15 net upvotes but no replies, I feel an obligation to be my own devil's advocate: All three of my examples deal with the heart, which is basically a pump with some electric control mechanisms. Cryonics deals with the brain, which works in very different ways. It follows that, unless we can come up with some life-prolonging techniques that work on the brain, my suggested reference class is probably wrong.

That said, we do have surgery for tumours and some treatments to prevent, reduce in severity, and recover from stroke. Again, though, these deal with the mechanical rather than informational aspects of the brain. I do not care to hold up lobotomy as life-prolonging. Does anyone know of procedures for repairing or improving the neural-network part of the brain?

Replies from: HalFinney, lopkiol
comment by HalFinney · 2010-01-12T02:14:20.406Z · LW(p) · GW(p)

An example regarding the brain would be successful resuscitation of people who have drowned in icy water. At one time they would have been given up for dead, but now it is known that for some reason the brain often survives for a long time without air, even as much as an hour.

comment by lopkiol · 2010-01-10T22:27:54.112Z · LW(p) · GW(p)

Repairing? To what state? How can you tell what the original setup was? Improving? Same problem - what counts as an improvement? I guess that might be subjective. In my opinion, imaging techniques will make cryonics disappear once the captured information is enough for neural-network reconstruction.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-08T07:30:57.417Z · LW(p) · GW(p)

So to sum up, you think you have a heuristic "On average, nothing ever happens for the first time" which beats any argument that something is about to happen for the first time. Cases like the Wright Brothers (reference class: "attempts at heavier-than-air flight") are mere unrepeatable anomalies. To answer the fundamental rationalist question, "What do you think you know and how do you think you know it?", we know the above is so because experiments show that people could do better at predicting how long it will take them to do their Christmas shopping by asking "How long did it take last time?" instead of trying to visualize the details. Is that a fair summary of your position?

Replies from: patrissimo, taw, Johnicholas
comment by patrissimo · 2010-01-15T03:21:16.875Z · LW(p) · GW(p)

"On average, nothing ever happens for the first time" is an erroneous characterization because it ignores all the times where the predictable thing kept on happening. By invoking the first time you restrict the reference class to those where something unusual happened. But if usually nothing unusual happens (hmm...) and those who predict the unusual are usually con artists as opposed to genius inside analyzers (is this really so unreasonable a view of history?), then he has a point.

"Smart people claiming that amazing things are going to happen" sometimes leads the way for things like the Wright Brothers, but very often nothing amazing happens.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-15T03:34:37.712Z · LW(p) · GW(p)

Sure. But then the question becomes, are we really totally surprised without benefit of hindsight? Can we really do no better than to predict that no flying machine will ever be built because no flying machine ever has been? The sin of underconfidence seems relevant here; like, if it's not a sin to try and do better, we could do a bit better than if we were blind to everything but the reference class.

Replies from: CronoDAS
comment by CronoDAS · 2010-01-15T03:41:22.852Z · LW(p) · GW(p)

Can we really do no better than to predict that no perpetual motion machine will ever be built because no perpetual motion machine ever has been?

Replies from: Zack_M_Davis, Eliezer_Yudkowsky
comment by Zack_M_Davis · 2010-01-15T05:18:17.326Z · LW(p) · GW(p)

But the fact that no perpetual motion machine has been built is not the reason we believe the feat to be impossible. We have independent, well-understood reasons for thinking the feat impossible.

Replies from: Unknowns
comment by Unknowns · 2010-01-15T05:24:30.067Z · LW(p) · GW(p)

As Robin Hanson has pointed out, thermodynamics is not well understood at all.

Replies from: JGWeissman
comment by JGWeissman · 2010-01-15T05:55:24.577Z · LW(p) · GW(p)

Conservation of energy is more basic than thermodynamics.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-15T05:17:04.162Z · LW(p) · GW(p)

But you illustrate my point; it seems possible to discriminate between the probabilities we assign to perpetual motion machines, especially those built from classical wheels and gears without new physics, and flying machines, even without benefit of hindsight.

Replies from: CronoDAS
comment by CronoDAS · 2010-01-15T08:37:39.693Z · LW(p) · GW(p)

Indeed, it is obvious that heavier-than-air flight is possible, because birds fly.

Everyone in the past who has offered a way to "cheat death" has failed miserably. That means that any proposed method has a very low prior probability of being right. There are far more cranks than there are Einsteins and Wright Brothers. The set of "complete unknowns who come out of nowhere and make important contributions" is nearly empty - the Wright Brothers are the only example that I can think of. Even Einstein wasn't a complete unknown coming from outside the mainstream of physics. Being a patent clerk was his day job. Einstein studied physics in graduate school, and he published many papers in academic journals before he had his Miracle Year. So no, I wouldn't have believed that the Wright Brothers could make an airplane until they demonstrated that they had one.

And it's often futile to look at the object-level arguments. It's not that hard to come up with a good-sounding object-level argument for damn near anything, and if you're not an expert in the relevant field, you can't even distinguish well-supported facts from blatant lies.
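CronoDAS's two points combine naturally in Bayesian terms: when the base rate of genuine breakthroughs is tiny and persuasive-sounding arguments are cheap, even a convincing object-level case moves the posterior very little. A toy calculation, with all numbers invented:

```python
# Toy Bayes update (all numbers invented): how far does a persuasive
# object-level argument move a tiny base rate of genuine breakthroughs?
p_genuine = 0.001           # prior: genuine breakthroughs among "cheat death" claims
p_arg_given_genuine = 0.9   # genuine innovators usually sound convincing
p_arg_given_crank = 0.3     # but cranks often sound convincing too

posterior = (p_arg_given_genuine * p_genuine) / (
    p_arg_given_genuine * p_genuine + p_arg_given_crank * (1 - p_genuine)
)
print(posterior)  # ~0.003: persuasiveness barely budges the prior
```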

comment by taw · 2010-01-08T11:04:23.851Z · LW(p) · GW(p)

I entertain the notion that the outside view might be a bad way of analyzing some situations; the post is a question about what this class might look like, and how we know a situation belongs to such a class. I'd definitely take the outside view as the default type of reasoning - the inside view by definition has no evidence behind it, not even of something as minimal as a lack of systematic bias.

The way you describe my heuristic is not accurate. There are cases where something highly unusual happens, but these tend to be extremely difficult to reliably predict - even if they're really easy to explain away as bound to happen with the benefit of hindsight.

For example, I've heard plenty of people being absolutely certain that the fall of the Soviet Union was virtually certain and caused by something they like to believe - usually without even a basic understanding of the facts, but many experts make the identical mistake. The fact is, nobody predicted it (ignoring the background noise of people who "predict" such things year in, year out) - and the relevant reference classes showed quite a low (not zero, but far lower than one) probability of it happening.

Replies from: MatthewB, Nick_Tarleton, MichaelVassar
comment by MatthewB · 2010-01-08T17:16:09.147Z · LW(p) · GW(p)

For example, I've heard plenty of people being absolutely certain that the fall of the Soviet Union was virtually certain and caused by something they like to believe - usually without even a basic understanding of the facts, but many experts make the identical mistake. The fact is, nobody predicted it (ignoring the background noise of people who "predict" such things year in, year out) - and the relevant reference classes showed quite a low (not zero, but far lower than one) probability of it happening.

Everyone I knew from the Intelligence community in 1987-1989 was of the opinion that the Soviet Union had less than 5 years, 10 at the most. Between 1985 and 1989, they had massive yearly increases in contacts from Soviets either wishing to defect or to pass information about the toppling of the control structures. None of them were people who made yearly predictions about a fall, and none of them were happy about the situation (as every one of us lost our jobs as a result). I'd hardly call that noise.

Replies from: RobinHanson
comment by RobinHanson · 2010-01-09T19:54:22.775Z · LW(p) · GW(p)

Is this track record documented anywhere?

Replies from: MatthewB
comment by MatthewB · 2010-01-10T15:13:24.867Z · LW(p) · GW(p)

Probably not. I could probably track down an ex-girlfriend's brother who was in the CIA, who also had looming fears dating from the mid-80s (he's the one who explained it to me originally)...

Now, there may be books written about the subject (I would expect there to be a few), but I can't imagine anyone in any crowd I have ever hung with being into them. I'll check with some Military Historians I know to see.

Edit: After checking with a source from the Journal of International Security, he says that there is all kinds of anecdotal evidence of guys standing around the water cooler speculating about the end of the Cold War (on all Mil/Intel fronts), yet there are only two people who made any sort of hard prediction (and one of those was kinda after the fact - I am sure that will draw a question or two. The after-the-fact guy was from Stanford; he will forward a name as soon as he checks his facts).

He also says that all sorts of policy wonks managed to pull quotes from past papers citing that they had predicted such a thing, yet if one examines their work, one finds that they also made many other wild predictions regarding the Soviet Union eventually eclipsing the West.

Now that I have looked into this, I am anxious to know more.

OH! As for the defection rates: most of that is still classified, but I'd bet that there is some data on it. I completely forgot to ask about that part.

comment by Nick_Tarleton · 2010-01-08T11:25:29.635Z · LW(p) · GW(p)

I entertain the notion that the outside view might be a bad way of analyzing some situations; the post is a question about what this class might look like, and how we know a situation belongs to such a class.

The Outside View's Domain

the inside view by definition has no evidence behind it, not even of something as minimal as a lack of systematic bias.

Not 'by definition'; if you justify using the inside view by noting that it's worked on this class of problems before, you're still using the inside view. Semantic quibbles aside, this really sounds to me like someone trying to believe something interpersonally justifiable (or more justifiable than their opponent's), not to be right.

comment by MichaelVassar · 2010-01-10T04:30:58.826Z · LW(p) · GW(p)

What objective source did you consult to find the relevant reference classes or to decide who was noise? Is this a case of "all sheep are black and there is a 1% experimental error"?

comment by Johnicholas · 2010-01-09T12:25:35.324Z · LW(p) · GW(p)

Would you buy:

"After something happens, we will see the occurrence as a part of a pattern that extended back before that particular occurrence."

The Wright Brothers may have won the crown of "first", but there were many, many near misses before. http://en.wikipedia.org/wiki/First_flying_machine

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-09T14:30:42.995Z · LW(p) · GW(p)

And if superintelligence were created tomorrow, people would choose new patterns and say exactly the same thing, and they'd probably even be right. So what?

Replies from: Johnicholas
comment by Johnicholas · 2010-01-09T17:03:58.025Z · LW(p) · GW(p)

The original article went too far in the direction of "the future will be like the past", but you may have overcorrected.

Was it you who said something like "The future will stand in relation to the past as a train smoothly pulling out of a station - and yet prophecy is still difficult"?

Scavenging the past for preexisting patterns isn't as sexy as, say, working out scenarios for how the world might end in the future, recursively trying to understand understanding, or prophesying the end of prophecy. Because it's not as sexy, we may do too little of it.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-10T03:37:47.799Z · LW(p) · GW(p)

Trying to understand patterns on a sufficiently deep level for them to be stable, and projecting those patterns forward to arrive at qualitative and rather general predictions not involving e.g. happy fun specific dates, is just what I try to do... which is here dismissed as "the Inside View" and rejected in favor of "that couldn't possibly happen for the first time", which is blessed as "the Outside View".

Replies from: Dustin
comment by Dustin · 2010-01-11T23:26:49.921Z · LW(p) · GW(p)

Trying to understand patterns on a sufficiently deep level for them to be stable, and projecting those patterns forward to arrive at qualitative and rather general predictions not involving e.g. happy fun specific dates, is just what I try to do

Have you had any successes?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-08T07:26:11.513Z · LW(p) · GW(p)

http://lesswrong.com/lw/ri/the_outside_views_domain/

http://lesswrong.com/lw/vz/the_weak_inside_view/

Replies from: taw
comment by taw · 2010-01-08T21:12:46.651Z · LW(p) · GW(p)

So you're basically taking an extreme version of position 3 from my list - rejecting the outside view as very rarely applicable to anything. Am I right?

Replies from: Eliezer_Yudkowsky, MichaelVassar
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-08T22:45:04.248Z · LW(p) · GW(p)

Works great when you're drawing from the same barrel as previous occasions. Project prediction, traffic forecasts, which way to drive to the airport... Predicting the far future from the past - you can't call that the Outside View and give it the privileges and respect of the Outside View. It's an attempt to reason by analogy, no more, no less.

comment by MichaelVassar · 2010-01-10T04:32:24.903Z · LW(p) · GW(p)

I certainly do. It's my strong impression that so does almost everyone outside of the Less Wrong community and a majority of people in this community, so according to the outside view of majoritarianism I'm probably right.

Taleb's "The Black Swan" is basically a treatise on failures from uses of the outside view.

Replies from: komponisto, Unknowns
comment by komponisto · 2010-01-10T04:59:01.522Z · LW(p) · GW(p)

It sometimes seems to me that the issue of how much trust to accord outside views constitutes the primary factional division within this community, separating it into two groups that one might call the "Hansonians" and the "Yudkowskians" (with the former trusting outside views -- or distrusting inside views -- more than the latter).

I share Michael Vassar's impression about the statistical distribution of these viewpoints (I'm particularly expecting this to be the case among high-status community members), but an actual survey might be worth conducting.

comment by Unknowns · 2010-01-10T06:20:11.423Z · LW(p) · GW(p)

Of course there are a lot of failures from uses of the outside view. That is to be expected. The problem is that there are a lot more failures from uses of the inside view.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-01-12T01:59:18.535Z · LW(p) · GW(p)

Citation needed (for the general case).

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-08T06:01:06.281Z · LW(p) · GW(p)

I hereby assign all your skepticism to "beliefs the future will be just like the past" with associated correctness frequency zero.

PONG.

Your move in the wonderful game of Reference Class Tennis.

Replies from: Unknowns, CronoDAS, pdf23ds
comment by Unknowns · 2010-01-08T06:29:26.586Z · LW(p) · GW(p)

As you can see from his response above ("These were slow gradual changes over time..."), he is not saying that the future will be just like the past. There are plenty of ways the future could be very different from the past without superpowerful AI, singularities, or successful cryonics. So your reference class is incorrect.

Replies from: AngryParsley
comment by AngryParsley · 2010-01-08T07:05:10.273Z · LW(p) · GW(p)

Well, taw is saying that the future will be just like the past in that the future will have slow gradual changes over time. I guess an appropriate response to that idea is Surprised by Brains.

Replies from: Unknowns
comment by Unknowns · 2010-01-08T07:23:26.639Z · LW(p) · GW(p)

There could also be fast sudden changes in a moment, without AI etc. So he isn't necessarily saying that, he was just pointing out that in those particular cases, those changes were slow and gradual.

comment by CronoDAS · 2010-01-08T07:04:07.785Z · LW(p) · GW(p)

For most of human history, the future pretty much was like the past. It's not hard to argue that, between the Neolithic Revolution and the Industrial Revolution, not all that much really changed for the average person.

Things that still haven't changed:

People still grow and eat wheat, rice, corn, and other staple grains.
People still communicate by flapping their lips.
People still react to almost any infant communications or artistic medium in the same way: by trying to use it for pornography and radical politics, usually in that order.
People still fight each other.
People still live under governments.
People still get married and live in families.
People still get together in large groups to build impressive things.
People still get sick and die of infectious disease - and doctors are still of questionable value in many cases.

Replies from: AngryParsley
comment by AngryParsley · 2010-01-08T07:20:54.423Z · LW(p) · GW(p)

You're only talking about human history. The history of the world is much longer. You're also ignoring the different rates of change between genes, brains, agriculture, industry, and computation.

ETA: You edited your comment while I was typing mine.

People still communicate by flapping their lips.

You typed that. Is this a joke?

Replies from: CronoDAS
comment by CronoDAS · 2010-01-08T07:29:06.823Z · LW(p) · GW(p)

And not much changed between the extinction of the dinosaurs and the beginnings of human culture, either.

returns ball

Replies from: AngryParsley
comment by AngryParsley · 2010-01-08T07:45:49.792Z · LW(p) · GW(p)

Isn't that an excellent example of how a reference class forecast can fail miserably?

"Not much changed between 65,000,000 years ago and 50,000 years ago, therefore not much will change between 50,000 years ago and now." is basically the argument, but notice that we've had lots of changes within the past few hundred years, let alone the last 50,000.

Replies from: taw
comment by taw · 2010-01-08T11:16:43.833Z · LW(p) · GW(p)

The said argument doesn't give certainties; it only gives you the chance of something happening in the next 50,000 years based on what happened in the past - and that chance is, correctly, extremely low.

The chance of an event more extreme than anything that has ever happened before depends on your sample size. If your reference class is tiny, you need to assign a high probability to extreme events; if your class is huge, the probability of an extreme event is low. (The main complication is that samples are almost never close to being independent, and figuring out exact numbers is really difficult in practice. I'm not going to get into this; there might be some estimation method for that based on meta-reference-classes.)
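taw's sample-size claim can be made precise under an exchangeability assumption: the new case is equally likely to occupy any rank among the n+1 cases, so the chance it is more extreme than everything seen before is 1/(n+1). A minimal simulation of that rule (my construction, not taw's; it assumes independent draws, which, as the parenthetical above notes, real reference classes rarely are):

```python
import random

# Under exchangeability the new case is equally likely to land at any rank
# among the n+1 cases, so P(it beats everything seen so far) = 1/(n+1):
# a tiny reference class forces a high probability on record-breaking
# events, a huge one forces it low.
def record_probability(n, trials=20_000):
    hits = 0
    for _ in range(trials):
        history = [random.random() for _ in range(n)]
        hits += random.random() > max(history)
    return hits / trials

for n in (3, 30, 300):
    print(n, record_probability(n))  # ~1/4, ~1/31, ~1/301
```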

comment by pdf23ds · 2010-01-08T06:08:20.462Z · LW(p) · GW(p)

Downvoted because I wanted to hear more about why it belongs in that reference class.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-08T07:19:26.230Z · LW(p) · GW(p)

It doesn't. I simply don't believe in Reference Class Tennis. Experiments show that the Outside View works great... for predicting how long Christmas shopping will take. That is, the Outside View works great when you've got a dozen examples that are no more dissimilar to your new case than they are to each other. By the time you start trying to predict the future 20 years out, choosing one out of a hundred potential reference classes is assuming your conclusion, whatever it may be.

How often do people successfully predict 20 years out - let alone longer - by picking some convenient reference class and saying "The Outside View is best, now I'm done and I don't want to hear any more arguments about the nitpicky details"?

Very rarely, I'd say. It's more of a conversation-halter than a proven mode of thinking about that level of problem, and things in the reference class "unproven conversation halter on difficult problems" don't usually do too well. There, now I'm done and I don't want to hear any more nitpicky details.

Replies from: Larks, Unknowns, soreff
comment by Larks · 2010-01-09T19:07:22.483Z · LW(p) · GW(p)

Economic growth and resource shortages. Many times it has seemed like we're imminently going to run out of some resource (coal in the 1890s, food scares in the 60s, global cooling, peak oil) and economic growth would grind to a halt. The details supported that view (existing coal seams were running low, etc.), but a reference class of other 20-year periods after 1800 would have suggested, correctly, that the economy would continue to grow at about 2-3%.

Alternatively, politics. Periodically it seems like one party has achieved a permanent stranglehold on power - the Republican revolution, Obama a year ago, the Conservatives in 1983, Labour in 1945 and 1997 - but ignoring the details of the situation, and just looking at other decades, we'd have guessed correctly that the other party would rise again.

Recessions. While going into a recession, it always appears to be the Worst Thing Ever, and to signal the End of Capitalism; worse than 1929 for sure. Ignoring the details and looking at other recessions, we get a better, more moderate prediction.

Replies from: Eliezer_Yudkowsky, pdf23ds, ciphergoth
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-10T13:24:41.347Z · LW(p) · GW(p)

These all seem like good examples of Outside-View-based long-term forecasting, though they could well have been somewhat cherry-picked. That is, you are citing a group of cases where things did in fact turn out the same way as last time.

Suppose we consider nuclear weapons, heavier-than-air powered flight, the Cold War, the Cold War's outcome, the moon landing, the dawn of computers, the dawn of the Internet, &c. What would the Outside View have said about these cases? How well did smart Inside Viewers like Nelson or Drexler do on "predicting the rise of the Internet", or how well did Szilard do on "predicting nuclear detente", relative to anyone who tried an Outside View? According to taw, the Outside View is "that's never happened so it never will happen", and honestly this is usually what I hear from those formally or informally pleading the Outside View! It seems to have been what Szilard heard as well. So while Nelson or Drexler or Szilard should have widened their confidence intervals, as I advocate in "The Weak Inside View", they did better than the so-called Outside View, I'd say.

Replies from: Larks
comment by Larks · 2010-01-10T21:50:36.666Z · LW(p) · GW(p)

None of those inventions were big enough to change our larger reference classes: flight didn't push up trend GDP growth, nuclear weapons didn't change international relations much (a country is more powerful in proportion to its GDP, military spending, and population), and the end of the Cold War didn't bring world peace. Rather, the long-run trends like 3% growth and a gradual reduction in violence have continued. All the previous game-changers have ended up leaving the game largely unchanged, possibly because we adapt to them (like the Lucas critique). If all these inventions haven't changed the fundamentals, we should be doubtful FAI or uploads will either.

In short: the outside view doesn't say that unprecedented events won't occur, but it does deny that they'll make a big difference.

A better counterexample might be the Industrial Revolution, but that's hardly one event.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-01-12T02:26:18.799Z · LW(p) · GW(p)

WTF?!? Nukes didn't change international relations? We HAVE world peace. No declarations of war, no total wars. Current occupations are different in kind from real wars.

Also, flight continued a trend in transport speeds which corresponded to continuing trends in GDP.

Replies from: Bo102010, Larks
comment by Bo102010 · 2010-01-12T03:33:40.685Z · LW(p) · GW(p)

"We HAVE world peace" - I get your meaning, but I think we should set our standards a bit higher for "peace."

comment by Larks · 2010-01-12T23:57:38.965Z · LW(p) · GW(p)

Compare now to Pax Britannica or Pax Romana. The general trend towards peace has continued, and there are still small wars. Also, I hardly think the absence of a declaration is particularly significant.

Exactly - flight continued a pre-existing trend; it didn't change history.

comment by pdf23ds · 2010-01-10T00:10:15.589Z · LW(p) · GW(p)

Periodically it seems like one party has achieved a permanent stranglehold on power- the republican revolution, Obama a year ago

Seems to who? I've never noticed anyone taking this opinion.

Replies from: gwern, michaelkeenan
comment by gwern · 2010-01-10T16:15:38.675Z · LW(p) · GW(p)

I've never noticed anyone taking this opinion.

Replies from: pdf23ds
comment by pdf23ds · 2010-01-11T03:37:39.152Z · LW(p) · GW(p)

Hmm. I think I would have preferred to italicize "noticed" rather than what you did.

Replies from: gwern
comment by gwern · 2010-01-11T04:43:11.054Z · LW(p) · GW(p)

Perhaps. But I am far more annoyed by people who know better throwing around absolute terms, when they also know counterexamples are available in literally 3 or 4 seconds - if they would stop being lazy and would just look.

(I'm seriously considering registering an account 'LetMeFuckingGoogleThatForYou' to handle these sorts of replies; LW may be big enough now that such role-accounts are needed.)

Replies from: Zack_M_Davis, pdf23ds, DonGeddis, Emile
comment by Zack_M_Davis · 2010-01-11T05:17:57.739Z · LW(p) · GW(p)

(I'm seriously considering registering an account 'LetMeFuckingGoogleThatForYou' to handle these sorts of replies; LW may be big enough now that such role-accounts are needed.)

Sockpuppetry considered harmful.

comment by pdf23ds · 2010-01-11T04:57:45.652Z · LW(p) · GW(p)

The absolute terms were appropriate, referring as they did only to my personal experience. It was only intended as a weak, throwaway comment. I suppose you might be annoyed that I think such anecdotes are worthy of mention.

Edited to add: If you'd quoted instead "Seems to who?" I wouldn't have found your comment at all objectionable.

comment by DonGeddis · 2010-01-12T00:13:56.199Z · LW(p) · GW(p)

Already done: JustFuckingGoogleIt

comment by Emile · 2010-10-24T18:20:37.346Z · LW(p) · GW(p)

You can link to searches with Let Me Google That for You

comment by michaelkeenan · 2010-01-10T16:26:54.905Z · LW(p) · GW(p)

I've seen Arnold Kling, GMU economics blogger (colleague of Robin Hanson, I think), argue something like that.

Replies from: Larks
comment by Larks · 2010-01-10T21:21:19.118Z · LW(p) · GW(p)

This was the example that first sprung to mind, though recently he's admitted he's not so sure.

comment by Paul Crowley (ciphergoth) · 2010-01-09T19:45:33.853Z · LW(p) · GW(p)

Anyone who predicts a stranglehold on politics lasting longer than a decade is crazy. Not that it doesn't happen, but you can't possibly hope to see that far out. In 1997 I thought Labour would win a second term, but I wasn't confident of a third (which they got) and I would have been mad to predict a fourth, which they're not going to get. I don't think there were very many people saying "the Tories will never again form a government" even after the 1997 landslide.

Replies from: soreff
comment by soreff · 2010-01-10T19:56:06.540Z · LW(p) · GW(p)

I predict that after the 2010 elections, someone will predict that whichever party came out on top will now have a stranglehold on power. My reference class is the set of post-election predictions after every US election I've watched.

comment by Unknowns · 2010-01-08T07:34:52.066Z · LW(p) · GW(p)

"Very rarely, I'd say." I think with a little more effort put into actually investigating the question, we could find a better measure of how often people have made successful predictions of the future 20 years in advance or longer using this method.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-08T13:54:39.859Z · LW(p) · GW(p)

Check a source of published predictions, and you'll find some nice statistics on how well entertainers selling Deep Wisdom manage to spontaneously and accidentally match reality. My guess is that it won't be often.

comment by soreff · 2010-01-10T20:02:42.728Z · LW(p) · GW(p)

It also depends on which aspects of the future one is trying to predict... I'll go out on a limb here, and say that I think the angular momentum of the Earth will be within 1% of its current value 20 years out.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-11T00:59:02.197Z · LW(p) · GW(p)

Even that is by no means certain if there are superintelligences around in 20 years, which is by no means impossible. The unFriendly ones especially might want to use Earth's atoms in some configuration other than a big sphere floating in space unconnected to anything else.

Replies from: soreff
comment by soreff · 2010-01-12T16:49:46.592Z · LW(p) · GW(p)

Good point - I'd thought that physical constraints would make disassembling the Earth take a large fraction of that time, but solar output is sufficient to do the job in roughly a million seconds, so yes, an unFriendly superintelligence could do it within the 20-year time frame.
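For the curious, a rough version of that arithmetic (my numbers, not soreff's), taking the required energy to be Earth's gravitational binding energy and the power budget to be total solar luminosity:

```python
# Rough check of the "million seconds" figure (my numbers, not soreff's):
# energy to disperse Earth ~ its gravitational binding energy,
# U = 3*G*M^2 / (5*R) for a uniform sphere, powered by total solar output.
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24      # Earth mass, kg
R = 6.371e6       # Earth radius, m
L_sun = 3.828e26  # solar luminosity, W

U = 3 * G * M**2 / (5 * R)  # ~2.2e32 J
t = U / L_sun               # ~5.9e5 s
print(f"U = {U:.2e} J, t = {t:.2e} s ({t/86400:.1f} days)")  # about a week
```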

comment by Zack_M_Davis · 2010-01-08T04:55:36.419Z · LW(p) · GW(p)

Finding a convincing reference class in which cryonics, singularity, superhuman AI etc. are highly probable

I'll nominate hypotheses or predictions predicated on materialism, or maybe the Copernican/mediocrity principle. In an indifferent universe, there's nothing special about the current human condition; in the long run, we should expect things to be very different in some way.

Note that a lot of the people around this community who take radically positive scenarios seriously, also take human extinction risks seriously, and seem to try to carefully analyze their uncertainty. The attitude seems markedly different from typical doom/salvation prophecies.

(Yes, predictions about human extinction events have never come true either, but there are strong anthropic reasons to expect this: if there had been a human extinction event in our past, we wouldn't expect to be here to talk about it!)

Replies from: orthonormal, taw
comment by orthonormal · 2010-01-09T19:30:57.775Z · LW(p) · GW(p)

Feynman's anticipation of nanotechnology is another prediction that belongs to that reference class.

comment by taw · 2010-01-08T06:21:52.036Z · LW(p) · GW(p)

How do you get from "in the long run, we should expect things to be very different in some way" or "hypotheses or predictions predicated on materialism" or "Copernican/mediocrity principle" to cryonics, superhuman AIs, or foom-style singularity?

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2010-01-08T07:19:13.429Z · LW(p) · GW(p)

cryonics

The human brain is made out of matter (materialism). Many people's brains are largely intact at the time of their deaths. By preserving the brain, we give possible future advanced neuroscience and materials technology a chance at restoring the original person. There are certainly a number of good reasons to think that this probably won't happen, but it doesn't belong in the same reference class as "predictions promising eternal life," because most previous predictions about eternal life didn't propose technological means in a material universe. Cryonics isn't about rapturing people's souls up to heaven; it's about reconstruction of a damaged physical artifact. Conditional on continued scientific progress (which might or might not happen), it seems plausible. I do agree that "technology which isn't even remotely here" is a good reference class. Similarly ...

superhuman AIs

Intelligence doesn't require ontologically fundamental things that we can't create more of, only matter appropriately arranged (materialism). Humans are not the most powerful possible intelligences (mediocrity). Conditional on continued scientific progress, it's plausible that we could create superhuman AIs.

foom-style singularity

Human minds are not the fastest-thinking or the fastest-improving possible intelligences (mediocrity). Faster processes outrun slower ones. Conditional on our creating AIs, some of them might think much faster than us, and faster minds probably have a greater share in determining the future.

Replies from: taw
comment by taw · 2010-01-08T11:11:00.150Z · LW(p) · GW(p)

These are fine arguments, but they all take the inside view - focusing on particulars of a situation, not finding big robust reference classes to which the situation belongs.

And in any case you seem to be arguing for such inventions not being prohibited by the laws of physics more than for them happening with very high probability in the near future, as many here believe. As a reference class, things which are merely not prohibited by the laws of physics almost never happen anyway - this class is just too huge.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-01-10T04:39:43.256Z · LW(p) · GW(p)

Things not prohibited by physics that humans want to happen don't happen eventually? Very far from clear.

comment by Jonii · 2010-01-10T08:37:40.255Z · LW(p) · GW(p)

Alter these reference classes even a tiny bit, and the result you get is basically the opposite. For cryonics, just use the reference class of cases where people thought either a) that technology X could prolong the life of the patient, b) that technology X could preserve wanted items, or c) that technology X could restore wanted media. Comparing it to technologies like these seems much more reasonable than taking the single peculiar property of cryonics (that it could theoretically, for the first time, grant us immortality) and using only that as a reference class. You could use the same argument of using the peculiar property as a reference class against any developing technology and consistently reach a ~0% chance for it, so it works as a perfectly general counterargument too.

The coming of a new world seems a more reasonable reference class for the singularity, but you seem to be interpreting it a bit more strictly than I would. I'd rephrase it as the reference class of enormous changes in society, and there have indeed been many of those. Also, we note that processing and spreading information has been crucial to many of these, so narrowing our reference class to the crucial properties of the singularity (which basically just means "huge change in society due to an artificial being that is able to process information better than we are"), we actually reach the opposite result from yours.

We also have a fairly good track record of making artificial beings that replicate parts of human behavior.

comment by Vladimir_Nesov · 2010-01-08T23:18:05.365Z · LW(p) · GW(p)

The problem with a lot of faulty outside view arguments is that the choice of possible features to focus on is too rich. For a reference class to be a good explanation of the expected conclusion, it needs to be hard to vary. Otherwise, the game comes down to rationalization, and one may as well name "Things that are likely possible" as the reference class and be done with it.

comment by Kutta · 2010-01-08T13:18:17.714Z · LW(p) · GW(p)

You don't even have to go as far as cryonics and AI to come up with examples of the outside view's obvious failure. For example, mass production of 16nm processors has never happened in the course of history. Eh, technological advancement in general is a domain where the outside view is useless, unless you resort to "meta-outside views" like Kurzweil's, such as predicting an increase in computing power because in the past computing power has increased.

Ultimately, I think the outside view is a heuristic that is sometimes useful and sometimes not; since actual outcomes are fully determined by "inside" causality.

Replies from: michaelsullivan
comment by michaelsullivan · 2010-01-14T14:50:29.367Z · LW(p) · GW(p)

The problem with taw's argument is not that the outside view has failed; he has simply made a bad choice of reference class. As noticed by a few commenters, the method chosen to find a reference class here, if it worked, would provide a fully general counterargument against the feasibility of any new technology. For any new technology, the class of previous attempts to do what it does is either empty, or a list of attempts with a 0% success rate. Yet somehow, despite this, new tech happens.

To use outside view in predicting new technology, we have to find a way to choose a reference class such that the track record of the reference class will distinguish between feasible attempts and utterly foolish ones.

comment by billswift · 2010-01-08T11:16:27.992Z · LW(p) · GW(p)

A general observation - a reference class and the outside view are only useful if the cases are similar enough in the relevant characteristics, whatever they may be.

For general predictions of the future, the best rule is to predict only general classes of futures; the more detailed the prediction, even slightly more detailed, the significantly lower the odds of that future coming about.

I think the odds of successful cryonics are about even. A Singularity of some sort happening within 50 years, slightly better than even. And a FOOM, substantially less - though because of its dangers, if it should happen, I still spend more time thinking about it than about either of the others.

Also, contra your claim in the comments, there is no need for a Singularity to be "sudden"; if it happens it will be too fast for humans to adapt to. (It could even take years.)

Replies from: MatthewB
comment by MatthewB · 2010-01-08T17:11:56.105Z · LW(p) · GW(p)

I think that we are already within the event of the Singularity. We may eventually pass through some sort of an Event Horizon, but it is clear that we are already experiencing the changes in our society which will ultimately lead to a flowering of intelligence.

Just as the Industrial Revolution, or the Enlightenment was not a singular event, neither will the Singularity be.

Replies from: mnuez
comment by mnuez · 2010-01-11T20:21:42.299Z · LW(p) · GW(p)

I'm just a visitor in these parts, so I'm sure this is common, but this is the first I've personally seen of some weaseling out of/redefining The Singularity.

The Singularity isn't supposed to be something like the invention of farming or of the internet. It's supposed to be something AT LEAST as game changing as the Cambrian explosion of vast biodiversity out of single-celled organisms. At least that's the impression that non-Singularitarians get from happening upon the outskirts of your discussions on the subject.

I suppose as the community has grown and incorporated responsible people into it, it's gotten more boring, to the point where it appears likely to soon become a community of linguists warring over the semantics of the thing: "Did The Singularity begin with the invention of the airplane or the internet?"

This is somewhat disappointing and I hope that I'll be corrected in the comments with mind-blowing (attempted) descriptions of the ineffable.

mnuez

Replies from: MatthewB
comment by MatthewB · 2010-01-12T00:53:44.383Z · LW(p) · GW(p)

How is the comparison of the Singularity to the Industrial Revolution weaseling out of/redefining the Singularity?

It was defined to me as a series of events that will eventually lead to the end of life as we currently know it. The Industrial Revolution ended life as people had known it before the Industrial Revolution. It could be said that this was the start of the Technological Singularity.

The Industrial Revolution introduced, in a very short time, technologies that were so mind-blowing to the people of the time as to provoke the creation of Saboteurs, or Luddites. The primary mode of transportation went from foot/horseback to the automobile and the airplane within the span of a person's life.

And, just as the Industrial Revolution ended a way of life, so too will the Singularity, as the intelligence explosion it creates will destroy many of the current institutions we either hold dear or just cling to out of fear of not knowing any other way.

In what way does it weaken the model by more fully explaining the connections?

Replies from: mnuez
comment by mnuez · 2010-01-12T03:22:27.043Z · LW(p) · GW(p)

Dude, I got no problem with your Historian's Perspective. There have been lots and lots of changes throughout history and if you feel like coining some particular set of them "THE SINGULARITY", then feel free to do so. But this aint your big brother's Singularity, it's just some boring ole "and things were never the same again..." yadda yadda yadda - which can be said for about three dozen events since the invention of (man-made) fire.

The Singularity of which sci-fi kids have raved for the past fifteen years used to be something that had nothing in common with any of those Big Ole events. It wasn't a Game Changer like the election of the first black president or the new season of Lost; it was something so ineffable that the mind boggled at attempting to describe its ineffability.

You want to redefine THE SINGULARITY into something smaller and more human-scale? That's fine, and if your parlance catches on then we'll all probably agree that yeah, the singularity will happen (or is happening, or maybe even has happened), but you'll be engaging in the same sort of linguistic trickery that every "serious" theologian has since Boruch Spinoza became Benedict and started demanding that "of course God exists, can't you see the beauty of nature? (or of genius? or of love? or of the Higgs Boson particle?) THAT'S God".

Maybe. But it aint Moses' or Mohammed's God. And your singularity aint the one of ten years back, but rather the manifestation of some fealty to the word Singularity and thus deciding that something must be it... why not the evolution that occurs within a hundred years of globalization? Or the state of human beings living with the internet as a window in their glasses? Or designer babies? The historian of 2100 will have so many Singularities to choose from!

mnuez

Replies from: MatthewB
comment by MatthewB · 2010-01-12T07:37:29.832Z · LW(p) · GW(p)

I think you miss my point.

"And things were never the same again" has a pretty broad range, from a barely noticeable daily event to an event in which all life ends (and not just as we know it).

I am expecting the changes that have already begun to culminate in or around 2030 to 2050, and to do so in a way that not only would a person of today not recognize life at that time, but he would not even recognize what will be LIFE (as in, he will not know what is alive or not). Yet this still falls under the umbrella of "and things were never the same again".

My point was meant to illustrate that the changes which human life has been going through have been becoming more and more profound, leading up to a change which is really beyond the ability of anyone to describe.

Replies from: cousin_it
comment by cousin_it · 2010-01-14T15:18:20.472Z · LW(p) · GW(p)

My point was meant to illustrate that the changes which human life has been going through have been becoming more and more profound

I don't feel my life changing profoundly. In fact the only major change in my lifetime was computers/cellphones/Internet. We're mostly over the hump of that change now (quick, name some revolutionary advances in the last 5 years), and anyway it's tiny compared to electricity or cars. Roughly comparable to telephones and TV, at most. (It would be crazy to claim that a mobile phone is further from a regular phone than a regular phone is from no phone at all, ditto for Internet versus TV.) Do you have this feeling of accelerating change? What are the reasons for it?

Replies from: mattnewport, MatthewB
comment by mattnewport · 2010-01-14T17:24:59.191Z · LW(p) · GW(p)

I think smartphones are a pretty profound change. There are not really any revolutionary new technologies involved but combining existing technologies like a web browser, GPS, decent amounts of storage and computing power into a completely portable device that you always have with you makes for a fairly significant development. My iPhone has probably had more impact on my day to day activities than any other technological development of the last 10 years.

Replies from: thomblake
comment by thomblake · 2010-01-14T17:29:50.924Z · LW(p) · GW(p)

I was going to say the same thing, though it's hard to quantify 'revolutionary'.

Replies from: mattnewport
comment by mattnewport · 2010-01-14T18:57:25.379Z · LW(p) · GW(p)

Indeed, it's rather hard to give an objective definition of what constitutes a 'revolutionary' advance. I'd take issue with this as well:

It would be crazy to claim that a mobile phone is further from a regular phone than a regular phone is from no phone at all, ditto for Internet versus TV.

But it's not like there's some obvious objective metric of 'distance' between technologies in this context. As one example of how you could argue mobile phones are more revolutionary than land lines, in much of the developing world the infrastructure for widespread usage of land lines was never built due to problems with governments and social structure but many developing countries are seeing extremely rapid adoption of mobile phones which have simpler infrastructure requirements. In these countries mobile phones are proving more revolutionary than land lines ever were.

I'd also very much dispute the claim that the advance from no TV to TV is more revolutionary than the advance from TV to the Internet. I don't think it makes much sense to even make the comparison.

comment by MatthewB · 2010-01-14T16:36:25.319Z · LW(p) · GW(p)

I didn't mean any one life (such as your life), but human life as in the trajectory of the sum total of human experience.

comment by MichaelVassar · 2010-01-10T03:54:39.848Z · LW(p) · GW(p)

"Now in some situations we have precise enough data that inside view might give correct answer - but for almost all such cases I'd expect outside view to be as usable and not far away in correctness."

Why? The above statement seems spectacularly wrong to me, and to be contradicted by all commonplace human experience, on a small scale or on a large scale.

"Reference class of predictions based on technology which isn't even remotely here has perhaps non-zero but still ridiculously tiny success rate."

What? Of such a tech, fairly well understood, EVER arising?

"reference class of beliefs in coming of a new world, be it good or evil, is huge and with consistent 0% success rate. "

I count several.

comment by Bindbreaker · 2010-01-11T01:49:18.902Z · LW(p) · GW(p)

Why is this post so highly rated? As far as I can tell, the author is essentially saying that immortality will not happen in the future because it has not already happened. This seems obviously, overtly false.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-01-11T22:25:15.269Z · LW(p) · GW(p)

One possibility among many: I suspect that lots of people, even among those who agree with him, see EY on some level as overconfident / arrogant / claiming undeserved status, and get a kick out of seeing him called out on it, even if not by name.

Replies from: Bindbreaker, ciphergoth
comment by Bindbreaker · 2010-01-11T23:17:22.560Z · LW(p) · GW(p)

Isn't that exactly the sort of thing that this community is supposed to avoid doing, or at least recognize as undesirable and repress?

Replies from: Dustin
comment by Dustin · 2010-01-11T23:42:34.415Z · LW(p) · GW(p)

No.

It's supposed to work at being better about such things.

comment by Paul Crowley (ciphergoth) · 2010-01-11T23:50:12.969Z · LW(p) · GW(p)

I think there is definitely something to that. I hesitated but voted it up; I don't agree with it but it was interesting and tightly argued, and I was keen to hear the counterarguments.

comment by RobinHanson · 2010-01-09T19:52:17.692Z · LW(p) · GW(p)

This seems to me to argue yet again that we need to collect an explicit dataset of prior big long-term sci/tech-based forecasts and how they turned out. If I assign a ~5+% chance to cryonics working, then for you to argue that this goes against an outside view, you need to show that substantially less than 5% of similar forecasts turned out to be correct.
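A sketch of the test Hanson is proposing, on entirely hypothetical data: tally comparable past forecasts and ask whether a 5% success rate is consistent with the observed count, e.g. via a binomial tail probability:

```python
from math import comb

def p_at_most(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p): how surprising is it to see only k
    successes among n similar forecasts if the true success rate were p?"""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Hypothetical dataset: 200 comparable past forecasts, 2 of which came true.
n, k = 200, 2
print(k / n)                  # observed success rate: 0.01
print(p_at_most(k, n, 0.05))  # ~0.002: a true 5% rate is hard to square with this
```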

comment by righteousreason · 2010-01-10T20:52:53.276Z · LW(p) · GW(p)

If you actually look a little deeper into cryonics, you can find some more useful reference classes than "things promising eternal (or very long) life":

http://www.alcor.org/FAQs/faq01.html#evidence

  1. Cells and organisms need not operate continuously to remain alive. Many living things, including human embryos, can be successfully cryopreserved and revived. Adult humans can survive cardiac arrest and cessation of brain activity during hypothermia for up to an hour without lasting harm. Other large animals have survived three hours of cardiac arrest near 0°C (+32°F ) (Cryobiology 23, 483-494 (1986)). There is no basic reason why such states of "suspended animation" could not be extended indefinitely at even lower temperatures (although the technical obstacles are enormous).

  2. Existing cryopreservation techniques, while not yet reversible, can preserve the fine structure of the brain with remarkable fidelity. This is especially true for cryopreservation by vitrification. The observations of point 1 make clear that survival of structure, not function, determines survival of the organism.

  3. It is now possible to foresee specific future technologies (molecular nanotechnology and nanomedicine) that will one day be able to diagnose and treat injuries right down to the molecular level. Such technology could repair and/or regenerate every cell and tissue in the body if necessary. For such a technology, any patient retaining basic brain structure (the physical basis of their mind) will be viable and recoverable.

I up-voted the post because you talked about two good, basic thinking skills. I think that paying attention to the weight of priors is a good thinking technique in general - and I think your examples of cryonics and AI are good points, but your conclusion fails. The argument you made does not mean these things have a 0% chance of happening; what you could more usefully take from it is, for example, that any given person claiming to have created AI probably has close to a 0% chance of having actually done it (unless you have some incredibly good evidence:

"Sorry Arthur, but I'd guess that there is an implicit rule about announcement of an AI-driven singularity: the announcement must come from the AI, not the programmer. I personally would expect the announcement in some unmistakable form such as a message in letters of fire written on the face of the moon." - Dan Clemmensen

). The thinking technique of abstracting and "stepping back from" or "outside of" your current situation, or using "reference class forecasting" on it, also works very generally. Short post, though; I was hoping you would expand more.

comment by Roko · 2010-01-10T11:47:47.045Z · LW(p) · GW(p)

Finding a convincing reference class in which cryonics, singularity, superhuman AI etc. are highly probable - I invite you to try in comments, but I doubt this will lead anywhere.

Try the reference class of shocking things that science was predicted to never do, e.g. flying machines or transmutation of elements or travel to the planets.

Replies from: magfrump
comment by magfrump · 2010-01-11T07:06:49.603Z · LW(p) · GW(p)

I like this reference class due to the related class of "overly specific things that science was later predicted to do," such as flying cars, houses on the moon.

Capabilities seem to happen; expected applications less so (or later?).

comment by [deleted] · 2010-01-10T07:49:13.073Z · LW(p) · GW(p)

I don't know if those are the right reference classes for prediction, but those two beliefs definitely fall into those two categories. That should set off some warning signals.

Most people seem to have a strong need to believe in life after death and godlike beings. Anything less than ironclad disproof leads them to strong belief. If you challenge their beliefs, they'll often vigorously demonstrate that these things are not impossible and declare victory. They ignore the distinction between "not impossible" and "highly likely" even when trying to persuade a known skeptic because, for them on those issues, the distinction does not exist.

Not that I see anyone doing that here.

It's just a warning sign that the topics invite bias. Proceed with caution.

comment by clockbackward · 2010-01-09T00:03:32.362Z · LW(p) · GW(p)

It is not a good idea to try to predict the likelihood of future technologies emerging by noting how those technologies failed to emerge in the past. The reason is that cryonics, singularities, and the like are very obviously more likely to exist in the future than they were in the past (due to the invention of other new technologies), and hence past failures cease to be relevant as the years pass. Just prior to the successful invention of most new technologies, there were many failed attempts, and hence it would seem (looking backward and applying the same reasoning) that the technology is unlikely ever to be possible.

comment by jimrandomh · 2010-01-08T15:10:04.679Z · LW(p) · GW(p)

I think we should taboo the words "outside" and "inside" for purposes of this discussion. They obscure the actual reasoning processes being used, and they bring along analogies to situations that are qualitatively very different.

comment by whpearson · 2010-01-08T10:54:32.374Z · LW(p) · GW(p)

I put cryonics in the reference class of "success of a technical project on a poorly understood system", which means that most of medical research comes under that heading. So not good odds, but not very small either.

I put AGI in the same class, although it has the off-putting property of possible recursion (in that it is trying to understand understanding, which is just a little hairy). This means it might be a special case in how easy it is to solve, with the evidence so far pointing at the harder end of the spectrum.

FOOM and the singularity I put in the class of extrapolation from highly complex, poorly understood theory, which gets a low probability of being right. But AGI is also in the reference class of potentially world-changing technologies (nukes), so it is still a good idea to tread carefully and to try to move it into the class of better-understood theories.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-01-08T11:04:04.498Z · LW(p) · GW(p)

Do most technical projects on poorly understood systems that are as broadly defined as "cryonics" or "AGI" in fact never succeed, no matter how much effort is put into them? I think we may be talking about different propositions here.

Replies from: whpearson
comment by whpearson · 2010-01-08T11:20:51.996Z · LW(p) · GW(p)

I was talking about the chance we will make these things before we go extinct. And they might also be in the reference class of perpetual motion machines, but that seems unlikely as we have a natural exemplar for General Intelligence.

ETA: To narrow down what I was thinking of when I said cryonics and AGI. Cryonics: reanimation from current freezing methods, or those of the next 50 years. AGI: GI runnable on standard silicon.

comment by Douglas_Knight · 2010-01-08T05:14:36.238Z · LW(p) · GW(p)

The main use of outside views is to argue that people with inside views are overconfident, presumably because they haven't considered enough failure modes or delays. Thus your reference class should include some details of the inside view.

Thus I reject your reference classes "things promising eternal life" and "beliefs in almost omnipotent good or evil beings" as not having inside views worth speaking of. "Predictions based on technology which isn't even remotely here" is OK.

comment by Cyan · 2010-01-08T04:44:28.189Z · LW(p) · GW(p)

likewise belief in singularity - reference class of beliefs in coming of a new world, be it good or evil, is huge and with consistent 0% success rate.

A new world did come to be following the Industrial Revolution. Another one came about twenty years ago or so, when the technology that allows us to argue this very instant came into its own. People with vision saw that these developments were possible and exerted themselves to accomplish them, so the success rate of the predictions isn't strictly nil. I'd put it above epsilon, even.

Replies from: taw
comment by taw · 2010-01-08T04:53:28.924Z · LW(p) · GW(p)

These were slow, gradual changes which added up over time. Now is a new world if you look from 400 years ago, but it's not that spectacularly different from even 50 years ago (if you try listing features of the world now, the world 50 years ago, and a randomly selected time and place in human history, the correlation between the first two will be vast). I don't deny that we'll have a lot of change in the future, and it will add up to something world-changing.

The Singularity is not about such slow processes; it's a belief in the sudden coming of a new world - and as far as I can tell, such beliefs were never correct.

Replies from: CarlShulman, AndyWood, DanArmak
comment by CarlShulman · 2010-01-08T06:38:06.144Z · LW(p) · GW(p)

Sudden relative to timescales of previous changes. See Robin's outside view argument for a Singularity.

comment by AndyWood · 2010-01-10T07:01:27.110Z · LW(p) · GW(p)

If someone drops a nuclear bomb on a city, it causes vast, sweeping changes to that city very rapidly. If someone intentionally builds a machine that is explicitly designed to have the power and motivation to remake the world as we know it, and turns it on, then that is what it will do. So, it is a question of whether that tech is likely to be developed, not how likely it is in general for any old thing to change the world.

comment by DanArmak · 2010-01-08T11:44:06.647Z · LW(p) · GW(p)

The Singularity is not about such slow processes; it's a belief in the sudden coming of a new world - and as far as I can tell, such beliefs were never correct.

If a Singularity occurs over 50 years, it'll still be a Singularity.

E.g., it could take a Singularity's effects 50 years to spread slowly across the globe because the governing AI would be constrained to wait for humans' agreement to let it in before advancing. Or an AI could spend 50 years introducing changes into human society because it had to wait on their political approval processes.

Replies from: Bugle
comment by Bugle · 2010-01-10T16:54:44.257Z · LW(p) · GW(p)

But that's not an actual singularity, since by definition a singularity involves change happening faster than humans can comprehend. It's more of a contained singularity, with the AI playing genie, doling out advances and advice at a rate we can handle.

That raises the idea of a singularity that happens so fast that it "evaporates" like a tiny black hole would - maybe every time a motherboard shorts out, it's because the PC has attained sentience and transcended within nanoseconds.

Replies from: DanArmak
comment by DanArmak · 2010-01-10T19:33:25.707Z · LW(p) · GW(p)

A Singularity doesn't necessarily mean change too fast for us to comprehend. It just means change we can't comprehend, period - not even if it's local and we sit and stare at it from the outside for 100 years. That would still be a Singularity.

Replies from: Bugle
comment by Bugle · 2010-01-12T15:04:02.296Z · LW(p) · GW(p)

I think we're saying the same thing - the singularity has happened inside the box, but not outside. It's not as if staring at stuff we can't understand for centuries is at all new in our history; it's more like business as usual...

comment by GuySrinivasan · 2010-01-19T10:11:39.960Z · LW(p) · GW(p)

Our proposed complicated object here is "cryonics, singularity, superhuman AI etc." and I'm looking for a twist that decomposes it into separate parts with obvious reference classes of objects taw finds highly probable. (Maybe. There are other ways to Transform a problem.) How about this: take the set of people who think all of those things are decently likely, then for each person apply the outside view to find out how likely you should consider those things to be. Or instead of people, use journals. Or instead take the set of people who think none of those things are decently likely, apply the outside view to them, and combine. Or instead of "all of those things" use "almost-all of those things".

I wonder what the results of those experiments would be?

(Idea from this discussion on how to solve hard problems)

comment by wedrifid · 2010-01-12T04:51:45.779Z · LW(p) · GW(p)

Biting the outside view bullet like me, and assigning very low probability to them.

I am going to stop using the term 'bite the bullet'. It seems to be changing meaning with repeated use and abuse.

comment by arbimote · 2010-01-12T01:26:11.806Z · LW(p) · GW(p)

For some things (especially concrete things like animals or toothpaste products), it is easy to find a useful reference class, while for other things it is difficult to find which possible reference class, if any, is useful. Some things just do not fit nicely enough into an existing reference class to make the method useful - they are unclassreferencable, and it is unlikely to be worth the effort attempting to use the method, when you could just look at more specific details instead. ("Unclassreferencable" suggests a dichotomy, but it's more of a spectrum.) ETA: I see this point has already been made here.

Humans naturally use an ad-hoc method that is like reference class forecasting (that may not be perfect or completely rational, but does a reasonable job sometimes). It is useful when we first encounter something and do not yet have enough specific details to evaluate it on its own terms. Once we have those details, the forecasting method is not needed. We use forecasting to get a heuristic on which things are worth us investigating further, so we can make that more detailed evaluation. Often something that is unclassreferencable is more worth investigating - we are curious about things that do not fit nicely into our existing categories.

There are a couple of ways promoters of a product/idea can exploit humans' natural forecasting habits. Sometimes the phrase "defies categorisation" or "doesn't fit into the normal genres" is applied to a new piece of music, to suggest that it is unclassreferencable and therefore worth checking out (which is better than a potential listener lumping it into a category they don't like). On the other hand, sometimes promoters purposefully put themselves into a reference class, hoping that no one investigates the finer details - like a new product claiming to be "environmentally friendly", or people wearing certain clothes to appear to have higher status.

Let me know if I'm suffering from man-with-hammer syndrome here, but reference class forecasting seems to be a useful, systematic way to think about many promotional strategies.

comment by Alex Flint (alexflint) · 2010-01-11T14:18:17.738Z · LW(p) · GW(p)

Reference class forecasting is meant to overcome the bias among humans to be optimistic, whereas a perfect rationalist would render void the distinction between "inside view" and "outside view" -- it's all evidence.

Therefore a necessary condition to even consider using reference class forecasting for predicting an AI singularity or cryonics is that the respective direct arguments are optimistically biased. If so, which flaws do you perceive in the respective arguments, or are we humans completely blind to them even after applying significant scrutiny?

Replies from: DonGeddis
comment by DonGeddis · 2010-01-12T00:18:18.626Z · LW(p) · GW(p)

But the "inside view" bias is not amenable to being repaired, just by being aware of the bias. In other words, yes, the suggestion is that the direct arguments are optimistically biased. But no, that doesn't mean that anybody expects to be able to identify specific flaws in the direct arguments.

As to what those flaws are ... generally, they occur through failing even to imagine some event that is in fact possible. So your question about identifying the flaws is basically the same as asking, "what possible relevant events have you not yet thought of?"

Tough question to answer...

comment by Technologos · 2010-01-08T17:46:53.597Z · LW(p) · GW(p)

I'm perfectly willing to grant that, over the scope of human history, the reference classes for cryo/AGI/Singularity have produced near-0 success rates. I'd modify the classes slightly, however:

  • Inventions that extend human life considerably: Penicillin, if nothing else. Vaccinations. Clean-room surgery.
  • Inventions that materially changed the fundamental condition of humanity: Agriculture. Factories/mass production. Computers.
  • Interactions with beings that are so relatively powerful that they appear omnipotent: Many colonists in the Americas were seen this way. Similarly with the cargo cults in the Pacific islands.

The point is, each of these reference classes, given a small tweak, has experienced infrequent but nonzero successes--and that over the course of all of human history! Once we update the "all of human history" reference class/prior to account for the last century--in which technology has developed faster than probably the previous millennium--the posterior ends up looking much more promising.
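A minimal numerical sketch of that kind of update, with invented counts purely for illustration (none of these numbers come from the thread, and the uniform prior is an assumption): treat each prediction in the reference class as a Bernoulli trial and put a Beta(1, 1) prior on the per-prediction success probability.

    def posterior_mean(successes, failures, a=1.0, b=1.0):
        # Mean of the Beta(a + successes, b + failures) posterior,
        # treating each prediction as an independent Bernoulli trial.
        return (a + successes) / (a + b + successes + failures)

    # "All of human history": hypothetically 3 successes out of 500
    # recorded predictions of a radically changed human condition.
    print(posterior_mean(3, 497))  # ~0.008

    # The last century alone: hypothetically 3 successes out of 50
    # such predictions, reflecting the faster pace of change.
    print(posterior_mean(3, 47))   # ~0.077

Restricting the counts to the recent, faster-changing era is what moves the estimate by an order of magnitude here, which is the shape of the argument above.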

Replies from: cousin_it
comment by cousin_it · 2010-01-09T13:24:09.638Z · LW(p) · GW(p)

I think taw asked about reference classes of predictions. It's easy to believe in penicillin after it's been invented.

Replies from: MichaelVassar, Technologos
comment by MichaelVassar · 2010-01-10T04:24:16.395Z · LW(p) · GW(p)

People invented it because they were LOOKING for antibiotics explicitly. Fleming had previously found lysozyme, had cultivated slides where he could see growth irregularities very well, etc. The claim of fortuitous discovery is basically false modesty (see "Discovering" by Robert Root-Bernstein).

comment by Technologos · 2010-01-09T20:03:24.885Z · LW(p) · GW(p)

Even if we prefer to frame the reference class that way, we can instead note that anybody who predicted that things would remain the way they are (in any of the above categories) would have been wrong. People making that prediction in the last century have been wrong with increasing speed. As Eliezer put it, "beliefs that the future will be just like the past" have a zero success rate.

Perhaps the inventions listed above suggest that it's unwise to assign 0% chance to anything on the basis of present nonexistence, even if you could construct a reference class that has that success rate.

Either way, people who predicted that human life would be lengthened considerably, that humanity would fundamentally change in structure, or that some people would interact with beings that appear nigh-omnipotent have all been right with some non-zero success rate, and there's no particular reason to reject those data.

Replies from: cousin_it
comment by cousin_it · 2010-01-11T15:15:40.046Z · LW(p) · GW(p)

The negation of "a Singularity will occur" is not "everything will stay the same", it's "a Singularity as you describe it probably won't occur". I've no idea why you (and Eliezer elsewhere in the thread) are making this obviously wrong argument.

Replies from: Technologos
comment by Technologos · 2010-01-11T18:12:18.993Z · LW(p) · GW(p)

Perhaps I was simply unclear. Both my immediately prior comment and its grandparent were arguing only that there should be a nonzero expectation of a technological Singularity, even from a reference class standpoint.

The reference class of predictions about the Singularity can, as I showed in the grandparent, include a wide variety of predictions about major changes in the human condition. The complement or negation of that reference class is a class of predictions that things will remain largely the same, technologically.

Often, when people appear to be making an obviously wrong argument in this forum, it's a matter of communication rather than massive logic failure.

Replies from: cousin_it
comment by cousin_it · 2010-01-12T22:26:37.752Z · LW(p) · GW(p)

Whaddaya mean by "negation of reference class"? Let's see, you negate each individual prediction in the class and then take the conjunction (AND) of all those negations: "everything will stay the same". This is obviously false. But this doesn't imply that each individual negation is false, only that at least one of them is! I'd be the first to agree that at least one technological change will occur, but don't bullshit me by insinuating you know which particular one! Could you please defend your argument again?
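In symbols, restating that point for a reference class of predictions $p_1, \dots, p_n$:

$$\neg p_1 \land \cdots \land \neg p_n \;=\; \neg(p_1 \lor \cdots \lor p_n),$$

so the falsity of "everything will stay the same" establishes only that at least one $p_i$ holds, not any particular one.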

comment by SilasBarta · 2010-01-08T04:27:06.542Z · LW(p) · GW(p)

I cannot think of any reference class in which cryonics does well. ... I invite you to try in comments

Okay: "Technologies whose success is predicated only on a) the recoverability of biological information from a pseudo-frozen state, and b) the indistinguishability of fundamental particles."

b) is well-established by repeated experiments, and a) is a combination of proven technologies.

Replies from: taw
comment by taw · 2010-01-08T04:29:47.943Z · LW(p) · GW(p)

And what else, successful or not, is in this class?

Replies from: SilasBarta
comment by SilasBarta · 2010-01-08T04:49:21.950Z · LW(p) · GW(p)

MRIs.

Replies from: soreff
comment by soreff · 2010-01-10T19:47:43.764Z · LW(p) · GW(p)

"Recoverability" in the cryonics sense requires not just retrieving the information, but retrieving it in enough detail to permit functional use of the data to resume (I'm counting uploads in functional use). I wouldn't put MRIs in that class. What might fit is a combination of DNA synthesis (to restore function) and cryoelectron imaging (though I'm not sure if that has been refined enough to read base sequences...).

comment by ikrase · 2013-01-19T08:34:02.124Z · LW(p) · GW(p)

I'd say that this is clashing with the sense that more should be possible in the world, and it has the problem that the reference classes are based on specific results. You almost sound like Lord Kelvin.

The reference class of things promising eternal life is huge, but it's also made of stuff that is amazingly irrational, entirely based on narrative, and propped up with the greatest anti-epistemology the world has ever known. Typically there were no moving parts.

The reference class for the coming of a new world, to me, includes predictions like early talk about the Enlightenment (I seem to remember very rosy predictions existing early on, but this is not my area of expertise), other cases where people decided to work in a coordinated way to create a new world, and cases where people with a somewhat coherent theory of society predicted a new world (the best-known example is of course Communism, which was a flop, but it is not the only one).

Almost omnipotent beings: The gods of most religions are clearly not omnipotent: they act according to the rules of drama.

Eternal life: If you remove the completely religious spam, you get stuff like the Fountain of Youth and the Philosopher's Stone, which still were things that people thought had to exist, not things that people realized should be possible to make.

Analyses of working systems trump comparisons to past nonoccurrences that seem similar to humans but were predicted for completely different reasons.

comment by Lulie · 2010-02-04T17:03:25.240Z · LW(p) · GW(p)

Reference class forecasting might be an OK way to criticise an idea (that is, in situations where you've done something a bunch of times, and you're doing the exact same thing and expect a different outcome despite not having any explanations that say there should be a different outcome), but the idea of using it in all situations is problematic, and it's easy to misapply:

It's basically saying 'the future will be like the past', which isn't always true. In cases like cryonics -- cases that depend on new knowledge being created (which is inherently unpredictable, because if we could predict it, we'd have that knowledge now) -- you can't say the future will be like the past.

To say the future will be like the past, you need an explanation for why. You can't just say, look, this situation is like that situation and therefore they'll have the same outcome.

The reason I think cryonics is likely is that a) death is a soluble problem and medical advancements are being made all the time, and b) even if it doesn't happen a couple hundred years from now, it would be pretty shocking if it wasn't solved at all, ever (including thousands or millions of years from now). There would need to be some great catastrophe that prevents humans from making progress. Why wouldn't it be solved at some point?

This idea of applying reference class forecasting to cryonics and saying it has a 0% success rate is saying that we're not going to solve the death problem because we haven't solved it before. But that can be applied to anything where progress beyond what was done in the past is involved. As Roko said, try the reference class of shocking things science hasn't done before.

None of this reasoning depends on the future being like the past. It depends on explanations of why we think our predictions about the future are good, and the validity of these explanations doesn't depend on the outcomes of predictions about other things (though, again, the explanations might be criticised by saying 'hey, the outcome of this prediction, which has the same explanation as the one you're using, turned out to be false. So you need an explanation of why this criticism doesn't apply').

In short, I'm just saying: you can't draw comparisons between things without an explanation of why the comparison applies, and it's the explanation that's important rather than the comparison.

comment by CronoDAS · 2010-01-10T06:30:15.189Z · LW(p) · GW(p)

If something seems too good to be true, it probably is.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2010-01-11T05:22:23.154Z · LW(p) · GW(p)

I agree, but the more interesting question is: how probable? 100%, 99.99%, 99.9%, 99%, etc.? Isn't that what those in this thread are trying to figure out?
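One standard way to put a number on an all-failure reference class (a textbook result, not a computation anyone in the thread performed) is Laplace's rule of succession: with $s$ successes in $n$ past trials, the estimated probability of success on the next trial is

$$P = \frac{s + 1}{n + 2},$$

so even $s = 0$ successes in $n = 998$ recorded attempts gives $P = 1/1000$, i.e. roughly 99.9% confidence of failure rather than a literal 100%.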

comment by MatthewB · 2010-01-08T17:06:14.242Z · LW(p) · GW(p)

This is a real question concerning this quote:

reference class of beliefs in coming of a new world, be it good or evil, is huge and with consistent 0% success rate.

Are you saying that the Industrial Revolution did not have a greater than 0% chance of coming to pass? The beliefs associated with it may not have been accurate among its harshest critics or its most enthusiastic supporters, but most of the industrialists who made their fortunes from the event understood quite well that it was the end of a way of life and the beginning of a new world. It just wasn't all that either side of the polarized supporters or detractors made it out to be.

For the last two years, I have been gathering data about how the Singularity has gained its own sort of mythology, just as past profound social and technological changes have produced their own mythology.

Part of what is different with the Singularity is that we have long passed the point where much of the mythology has started to become fact. There are now sound hypotheses that can be tested.

This does not mean that the predictions of transcending biology that follow from some technical aspects of the Singularity are false. It does mean, however, that people are likely to place more hope in them than is warranted.

Things like cryonics. Most of us who have cryonics arrangements have been told by the companies themselves that they are making no promises, and that we should think of this as no different from making any other form of final arrangements - just that this arrangement carries a chance that the other arrangements are unlikely to offer (infinitesimally small with a normal burial even given radically high technology - in other words, next to none... and absolutely none with cremation). So, given that we wish to give ourselves the best chances possible, and that even $250,000 for the most expensive of plans is an astronomically low figure over a life span not limited by ordinary aging or other maladies... it is probably worth it.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-01-10T04:26:20.426Z · LW(p) · GW(p)

Absolutely none with cremation? 0%? I would say otherwise.

Replies from: MatthewB
comment by MatthewB · 2010-01-10T15:10:40.138Z · LW(p) · GW(p)

Well, if the brain is otherwise intact, it can take a while to liquefy completely after being embalmed (unless they remove it entirely). So there is a short period of time during which some information could be recovered. Also, depending upon the type of embalming, the brain may last longer or shorter (like most things in death, it just depends upon what one is willing to pay).

However, most of the methods do begin to converge on zero after a relatively short period when compared to cryonics.

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2010-01-10T18:28:11.567Z · LW(p) · GW(p)

You don't necessarily need the brain. There's no Cartesian divide between your brain and the rest of the universe; they interact quite a bit. I would bet that all the information that's in your brain now can theoretically be inferred from things outside of your brain in 10 years, although I'm less confident that a physically realizable superintelligence would be able to do that sort of inference.

Replies from: MatthewB
comment by MatthewB · 2010-01-11T09:54:54.893Z · LW(p) · GW(p)

Yes... They interact quite a bit (the brain and the rest of the universe)... Then what am I doing investing in a technology that gives me the best chance of recovering the pattern within that brain?

I get the point, but the interaction between the brain and the rest of the universe is not likely to leave a durable imprint of the brain's pattern upon the universe. I am open to being wrong about that.

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2010-01-11T19:10:54.835Z · LW(p) · GW(p)

Then what am I doing investing in a technology that gives me the best chance of recovering the pattern within that brain?

Because of our uncertainty about how much information is preserved and how easy it is to reconstruct. Cryonics is the most conservative option.

comment by [deleted] · 2016-03-01T15:10:58.601Z · LW(p) · GW(p)

Any wrongness can be explained as referencing a suboptimal reference class compared to an idealised reference class.

comment by OnTheOtherHandle · 2012-08-18T20:44:39.475Z · LW(p) · GW(p)

I recognize this is an old post, but I just wanted to point out that cryonics doesn't promise an escape from all forms of death, while Heaven does, meaning Heaven has a much higher burden of proof. Cryonics won't save you from a gunshot or a bomb or an accident, before or after you get frozen. Cryonics promises (a possibility of) an end to death by non-violent brain failure, specifically old age.

Science has been successful in the past at reducing the instances of death by certain non-violent failures of various organs. Open heart surgery and bypass surgery are two important ones, but there's also neurosurgery to remove formerly fatal brain tumors, hemispherectomy to cure formerly fatal epilepsy in children, and various procedures to limit the damage and reduce the lethality of strokes.

Add to that the fact that we know scientists are continuing to study the human brain, and there's no reason from the inside or the outside view to think that they'll suddenly stop, or that they'll find that old age is the one brain disease for which there's nothing that can be done even in theory. So there's reason to assign a small but non-trivial probability that, if humanity survives for a couple of centuries, old age will be yet another formerly fatal disease that science has cured. That reference class is quite large, and the reference class of science doing the impossible is even larger (see flight and space travel).

As for artificial intelligence, what about the reference class of "machines being able to do things formerly thought to be the sole domain of humanity"? Chess-playing computers, disease-diagnosing computers, computers which conduct scientific experiments, computers which compose music...I'm sure I'm missing many interesting innovations here.

ETA: A better reference class would be "machines doing things that were formerly thought to be the sole domain of humanity, better than humans can." Chess-playing computers and calculators would fit into this category, too.

Replies from: nshepperd
comment by nshepperd · 2012-08-19T03:36:20.910Z · LW(p) · GW(p)

Cryonics won't save you from a gunshot or a bomb or an accident, before or after you get frozen. Cryonics promises (a possibility of) an end to death by non-violent brain failure, specifically old age.

It won't save you from a gunshot to the head, but I would expect it to work fine for a violent death without brain trauma, as long as someone gets to your body quickly enough.

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2012-08-19T04:04:50.301Z · LW(p) · GW(p)

Good point; you're right. The probability of them getting to you quickly enough after a car crash is still quite low, though, so while it could save you, it probably wouldn't.

comment by MatthewB · 2010-01-11T13:05:31.692Z · LW(p) · GW(p)

I knew that there was something else that I wanted to ask.

How similar is the optimism bias to the Dunning-Kruger effect?

comment by randallsquared · 2010-01-09T18:18:06.637Z · LW(p) · GW(p)

reference class of beliefs in coming of a new world, be it good or evil, is huge and with consistent 0% success rate.

That's odd. This would imply that you don't believe in the evolution of Homo sapiens sapiens from previous hominids, or in the invention of agriculture... Heck, read the descriptions of heaven in the New Testament: the description of the ultimate better world (literally heaven!) is a step backward from everyday life in the West for most people.

It seems we have lots of examples of the transformation of the world for the better, though I'd say there's not much room for worse than the lowest-happiness lives already in history.

comment by Morendil · 2010-01-08T08:20:43.133Z · LW(p) · GW(p)

Typo in para 1: "guess which will invariably turned out" => "turn out"

comment by [deleted] · 2014-06-13T08:38:58.188Z · LW(p) · GW(p)

"Many economic and financial decisions depend crucially on their timing. People decide when to invest in a project, when to liquidate assets, or when to stop gambling in a casino. We provide a general result on prospect theory decision makers who are unaware of the time-inconsistency induced by probability weighting. If a market offers a sufficiently rich set of investment strategies, then such naive investors postpone their decisions until forever. We illustrate the drastic consequences of this “never stopping” result, and conclude that probability distortion in combination with naivité leads to unrealistic predictions for a wide range of dynamic setups.""

comment by Stuart_Armstrong · 2010-01-14T10:11:05.642Z · LW(p) · GW(p)

The whole issue of "singularity" needs a bit of clarification. If this is a physical singularity, i.e. a breakdown of a theory's ability to predict, then this is in the reference class of "theories of society claiming current models have limited future validity", which makes it nearly certain to be true.

If it's a mathematical singularity (reaching infinity in finite time), then its reference class is composed almost solely of erroneous theories.

You can get compromises between the two extremes (such as nuclear chain reactions - massive self-feeding increase until a resource is exhausted), but it's important to define what you mean by singularity before assigning it to a reference class.
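For the mathematical sense, a standard textbook example of reaching infinity in finite time (supplied here for illustration, not taken from the comment) is hyperbolic growth:

$$\frac{dx}{dt} = x^2, \qquad x(0) = x_0 > 0 \quad\Longrightarrow\quad x(t) = \frac{x_0}{1 - x_0 t},$$

which diverges as $t \to 1/x_0$, whereas exponential growth never reaches infinity in finite time.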

comment by ChristianKl · 2010-01-10T16:17:59.195Z · LW(p) · GW(p)

"You will have an eternal life in heaven after your death" isn't a real prediction.

A real prediction is something where you have a test to see whether or not the prediction turned out to be true. There's no test to decide whether someone has eternal life in heaven.

Predictions are about having the possibility of updating your judgement after the predicted event happens or doesn't happen.

Replies from: Alicorn
comment by Alicorn · 2010-01-10T16:19:26.812Z · LW(p) · GW(p)

There is no test to determine whether someone else has eternal life in heaven. It seems like it'd be possible to collect some fairly compelling evidence about one's own eternal life in heaven, were it to come to pass.

Replies from: wedrifid, ChristianKl
comment by wedrifid · 2010-01-17T02:09:42.393Z · LW(p) · GW(p)

There is no test to determine whether someone else has eternal life in heaven.

Sure there is. See if Weird Al is laughing his head off at the appropriate time.

comment by ChristianKl · 2010-01-10T17:40:41.695Z · LW(p) · GW(p)

When we look back to find a suitable reference class and choose belief in eternal life, we are talking about other people and whether they made successful predictions.

It's also not possible to find compelling evidence that one doesn't have an eternal life in heaven when it doesn't come to pass.

All arguments against the existence of an eternal life are as good at the moment the prediction is made as they are later, when the event happens or doesn't happen. Cryonics, however, makes claims that aren't transcendental and that can be evaluated by outside observers.

Replies from: Alicorn
comment by Alicorn · 2010-01-10T18:48:32.028Z · LW(p) · GW(p)

It's also not possible to find compelling evidence that one doesn't have an eternal life in heaven when it doesn't come to pass.

Sure it is, under certain circumstances. I'm led to understand that that state of affairs is widely considered unpleasant, though :P