"Outside View!" as Conversation-Halter
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-24T05:53:34.133Z · LW · GW · Legacy · 103 comments
Followup to: The Outside View's Domain, Conversation Halters
Reply to: Reference class of the unclassreferenceable
In "conversation halters", I pointed out a number of arguments which are particularly pernicious, not just because of their inherent flaws, but because they attempt to chop off further debate - an "argument stops here!" traffic sign, with some implicit penalty (at least in the mind of the speaker) for trying to continue further.
This is not the right traffic signal to send, unless the state of knowledge is such as to make an actual halt a good idea. Maybe if you've got a replicable, replicated series of experiments that squarely target the issue and settle it with strong significance and large effect sizes (or great power and null effects), you could say, "Now we know." Or if the other person is blatantly privileging the hypothesis - starting with something improbable, and offering no positive evidence to believe it - then it may be time to throw up your hands and walk away. (Privileging the hypothesis is the state people tend to be driven to, when they start with a bad idea and then witness the defeat of all the positive arguments they thought they had.) Or you could simply run out of time, but then you just say, "I'm out of time", not "here the gathering of arguments should end."
But there's also another justification for ending argument-gathering that has recently seen some advocacy on Less Wrong.
An experimental group of subjects was asked to describe highly specific plans for their Christmas shopping: where, when, and how. On average, this group expected to finish shopping more than a week before Christmas. Another group was simply asked when they expected to finish their Christmas shopping, with an average response of 4 days before Christmas. Both groups finished an average of 3 days before Christmas. Similarly, Japanese students who expected to finish their essays 10 days before deadline actually finished 1 day before deadline; and when asked when they had previously completed similar tasks, replied, "1 day before deadline." (See this post.)
Those and similar experiments seem to show us a class of cases where you can do better by asking a certain specific question and then halting: Namely, the students could have produced better estimates by asking themselves "When did I finish last time?" and then ceasing to consider further arguments, without trying to take into account the specifics of where, when, and how they expected to do better than last time.
From this we learn, allegedly, that "the 'outside view' is better than the 'inside view'"; from which it follows that when you're faced with a difficult problem, you should find a reference class of similar cases, use that as your estimate, and deliberately not take into account any arguments about specifics. But this generalization, I fear, is somewhat more questionable...
For example, taw alleged upon this very blog that belief in the 'Singularity' (a term I usually take to refer to the intelligence explosion) ought to be dismissed out of hand, because it is part of the reference class "beliefs in coming of a new world, be it good or evil", with a historical success rate of (allegedly) 0%.
Of course Robin Hanson has a different idea of what constitutes the reference class and so makes a rather different prediction - a problem I refer to as "reference class tennis":
Taking a long historical view, we see steady total growth rates punctuated by rare transitions when new faster growth modes appeared with little warning. We know of perhaps four such "singularities": animal brains (~600MYA), humans (~2MYA), farming (~10KYA), and industry (~0.2KYA)...
Excess inside viewing usually continues even after folks are warned that outside viewing works better; after all, inside viewing better shows off inside knowledge and abilities. People usually justify this via reasons why the current case is exceptional. (Remember how all the old rules didn’t apply to the new dotcom economy?) So expect to hear excuses why the next singularity is also an exception where outside view estimates are misleading. Let’s keep an open mind, but a wary open mind.
If I were to play the game of reference class tennis, I'd put recursively self-improving AI in the reference class "huge mother#$%@ing changes in the nature of the optimization game" whose other two instances are the divide between life and nonlife and the divide between human design and evolutionary design; and I'd draw the lesson "If you try to predict that things will just go on sorta the way they did before, you are going to end up looking pathetically overconservative".
And if we do have a local hard takeoff, as I predict, then there will be nothing to say afterward except "This was similar to the origin of life and dissimilar to the invention of agriculture". And if there is a nonlocal economic acceleration, as Robin Hanson predicts, we just say "This was similar to the invention of agriculture and dissimilar to the origin of life". And if nothing happens, as taw seems to predict, then we must say "The whole foofaraw was similar to the apocalypse of Daniel, and dissimilar to the origin of life or the invention of agriculture". This is why I don't like reference class tennis.
But mostly I would simply decline to reason by analogy, preferring to drop back into causal reasoning in order to make weak, vague predictions. In the end, the dawn of recursive self-improvement is not the dawn of life and it is not the dawn of human intelligence, it is the dawn of recursive self-improvement. And it's not the invention of agriculture either, and I am not the prophet Daniel. Point out a "similarity" with this many differences, and reality is liable to respond "So what?"
I sometimes say that the fundamental question of rationality is "Why do you believe what you believe?" or "What do you think you know and how do you think you know it?"
And when you're asking a question like that, one of the most useful tools is zooming in on the map by replacing summary-phrases with the concepts and chains of inferences that they stand for.
Consider what inference we're actually carrying out, when we cry "Outside view!" on a case of a student turning in homework. How do we think we know what we believe?
Our information looks something like this:
- In January 2009, student X1 predicted they would finish their homework 10 days before deadline, and actually finished 1 day before deadline.
- In February 2009, student X1 predicted they would finish their homework 9 days before deadline, and actually finished 2 days before deadline.
- In March 2009, student X1 predicted they would finish their homework 9 days before deadline, and actually finished 1 day before deadline.
- In January 2009, student X2 predicted they would finish their homework 8 days before deadline, and actually finished 2 days before deadline.
- And so on through 157 other cases.
- Furthermore, in another 121 cases, asking students to visualize specifics actually made them more optimistic.
Therefore, when new student X279 comes along, even though we've never actually tested them before, we ask:
"How long before deadline did you plan to complete your last three assignments?"
They say: "10 days, 9 days, and 10 days."
We ask: "How long before did you actually complete them?"
They reply: "1 day, 1 day, and 2 days".
We ask: "How long before deadline do you plan to complete this assignment?"
They say: "8 days."
Having gathered this information, we now think we know enough to make this prediction:
"You'll probably finish 1 day before deadline."
They say: "No, this time will be different because -"
We say: "Would you care to make a side bet on that?"
We now believe that previous cases have given us strong, veridical information about how this student functions - how long before deadline they tend to complete assignments - and about the unreliability of the student's planning attempts, as well. The chain of "What do you think you know and how do you think you know it?" is clear and strong, both with respect to the prediction, and with respect to ceasing to gather information. We have historical cases aplenty, and they are all as similar to each other as they are similar to this new case. We might not know all the details of how the inner forces work, but we suspect that it's pretty much the same inner forces inside the black box each time, or the same rough group of inner forces, varying no more in this new case than has been observed on the previous cases that are as similar to each other as they are to this new case, selected by no different a criterion than we used to select this new case. And so we think it'll be the same outcome all over again.
You're just drawing another ball, at random, from the same barrel that produced a lot of similar balls in previous random draws, and those previous balls told you a lot about the barrel. Even if your estimate is a probability distribution rather than a point mass, it's a solid, stable probability distribution based on plenty of samples from a process that is, if not independent and identically distributed, still pretty much blind draws from the same big barrel.
You've got strong information, and it's not that strange to think of stopping and making a prediction.
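To make the inference concrete, here is a minimal sketch (in Python, with made-up numbers standing in for the survey data above) of what "taking the outside view" amounts to in the homework case: ignore the student's stated plan and predict from the empirical distribution of past finishing times.

```python
import statistics

# Hypothetical history: how many days before deadline past assignments
# were actually finished (the student's *plans* are deliberately ignored).
past_actual_finishes = [1, 2, 1, 2, 1, 1, 1, 2]  # made-up numbers

# The outside-view point estimate is just a summary of the past draws...
outside_view_estimate = statistics.median(past_actual_finishes)

# ...and the spread of those draws gives a rough error bar, with no
# reference to the specifics of the current assignment.
spread = statistics.stdev(past_actual_finishes)

print(f"Predicted finish: ~{outside_view_estimate} day(s) before deadline "
      f"(+/- {spread:.1f})")
```

Note that the student's claim of "8 days" never enters the calculation; that is exactly the "stop gathering arguments" step.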
But now consider the analogous chain of inferences, the what do you think you know and how do you think you know it, of trying to take an outside view on self-improving AI.
What is our data? Well, according to Robin Hanson:
- Animal brains showed up in 550M BC and doubled in size every 34M years
- Human hunters showed up in 2M BC, doubled in population every 230Ky
- Farmers, showing up in 4700BC, doubled every 860 years
- Starting in 1730 or so, the economy started doubling faster, from 58 years in the beginning to a 15-year approximate doubling time now.
From this, Robin extrapolates, the next big growth mode will have a doubling time of 1-2 weeks.
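As a back-of-the-envelope illustration only - this is not Hanson's actual calculation, just the same kind of track-record extrapolation with the doubling times quoted above - one can look at how much each transition shortened the doubling time and project one more jump of typical size:

```python
# Doubling times from the figures quoted above, in years (approximate).
doubling_times = [34e6, 230e3, 860, 15]  # brains, hunters, farmers, industry

# How much each transition shortened the doubling time.
shrink_factors = [a / b for a, b in zip(doubling_times, doubling_times[1:])]
# -> roughly [148, 267, 57]

# Geometric mean of the shrink factors, projected one step forward.
geo_mean = 1.0
for f in shrink_factors:
    geo_mean *= f
geo_mean **= 1.0 / len(shrink_factors)

next_doubling_years = doubling_times[-1] / geo_mean
print(f"Projected next doubling time: ~{next_doubling_years * 52:.0f} weeks")
```

This crude version lands at around six weeks; Hanson's own treatment of the jumps gives the 1-2 week figure, but the shape of the argument - extrapolate the track record, ignore the internals - is the same.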
So far we have an interesting argument, though I wouldn't really buy it myself, because the distances of difference are too large... but in any case, Robin then goes on to say: We should accept this estimate flat, we have probably just gathered all the evidence we should use. Taking into account other arguments... well, there's something to be said for considering them, keeping an open mind and all that; but if, foolishly, we actually accept those arguments, our estimates will probably get worse. We might be tempted to try and adjust the estimate Robin has given us, but we should resist that temptation, since it comes from a desire to show off insider knowledge and abilities.
And how do we know that? How do we know this much more interesting proposition that it is now time to stop and make an estimate - that Robin's facts were the relevant arguments, and that other arguments, especially attempts to think about the interior of an AI undergoing recursive self-improvement, are not relevant?
Well... because...
- In January 2009, student X1 predicted they would finish their homework 10 days before deadline, and actually finished 1 day before deadline.
- In February 2009, student X1 predicted they would finish their homework 9 days before deadline, and actually finished 2 days before deadline.
- In March 2009, student X1 predicted they would finish their homework 9 days before deadline, and actually finished 1 day before deadline.
- In January 2009, student X2 predicted they would finish their homework 8 days before deadline, and actually finished 2 days before deadline...
It seems to me that once you subtract out the scary labels "inside view" and "outside view" and look at what is actually being inferred from what - ask "What do you think you know and how do you think you know it?" - it doesn't really follow very well. The Outside View that experiment has shown works better than the Inside View is pretty far removed from the "Outside View!" that taw cites in support of predicting against any epoch. My own similarity metric puts the latter closer to the analogies of Greek philosophers, actually. And I'd also say that trying to use causal reasoning to produce weak, vague, qualitative predictions like "Eventually, some AI will go FOOM, locally self-improvingly rather than global-economically" is a bit different from "I will complete this homework assignment 10 days before deadline". (The Weak Inside View.)
I don't think that "Outside View! Stop here!" is a good cognitive traffic signal to use so far beyond the realm of homework - or other cases of many draws from the same barrel, no more dissimilar to the next case than to each other, and with similarly structured forces at work in each case.
After all, the wider reference class of cases of telling people to stop gathering arguments, is one of which we should all be wary...
103 comments
Comments sorted by top scores.
comment by RobinHanson · 2010-02-25T01:17:47.025Z · LW(p) · GW(p)
I've put far more time than most into engaging your singularity arguments, my responses have consisted of a lot more than just projecting a new growth jump from stats on the last three jumps, and I've continued to engage the topic long after that June 2008 post. So it seems to me unfair to describe me as someone arguing "for ending argument-gathering" on the basis that this one projection says all that can be said.
I agree that inside vs. outside viewing is a continuum, not a dichotomy. I'd describe the key parameter as the sort of abstractions used, and the key issue as how well grounded those abstractions are. Outside views tend to pretty directly use abstractions based on more "surface" features of widely known value. Inside views tend to use more "internal" abstractions, and inferences with longer chains.
Responding most directly to your arguments, my main critiques have been about the appropriateness of your abstractions. You may disagree with them, and think I try to take the conversation in the wrong direction, but I don't see how I can be described as trying to halt the conversation.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-25T02:21:04.163Z · LW(p) · GW(p)
I don't see how I can be described as trying to halt the conversation.
Allow me to disclaim that you usually don't. But that particular referenced post did, and taw tried it even more blatantly - to label further conversation as suspect, ill-advised, and evidence of morally nonvirtuous (giving in to pride and the temptation to show off) "inside viewing". I was frustrated with this at the time, but had too many higher-priority things to say before I could get around to describing exactly what frustrated me about it.
It's also not clear to me how you think someone should be allowed to proceed from the point where you say "My abstractions are closer to the surface than yours, so my reference class is better", or if you think you just win outright at that point. I tend to think that it's still a pretty good idea to list out the underlying events being used as alleged evidence, stripped of labels and presented as naked facts, and see how much they seem to tell us about the future event at hand, once the covering labels are gone. I think that under these circumstances the force of implication from agriculture to self-improving AI tends to sound pretty weak.
Replies from: Tyrrell_McAllister, RobinHanson↑ comment by Tyrrell_McAllister · 2010-02-25T03:01:09.314Z · LW(p) · GW(p)
I think that we should distinguish
trying to halt the conversation, from
predicting that your evidence will probably be of low quality if it takes a certain form.
Robin seems to think that some of your evidence is a causal analysis of mechanisms based on poorly-grounded abstractions. Given that it's not logically rude for him to think that your abstractions are poorly grounded, it's not logically rude for him to predict that they will probably offer poor evidence, and so to predict that they will probably not change his beliefs significantly.
I'm not commenting here on whose predictions are higher-quality. I just don't think that Robin was being logically rude. If anything, he was helpfully reporting which arguments are most likely to sway him. Furthermore, he seems to welcome your trying to persuade him to give other arguments more weight. He probably expects that you won't succeed, but, so long as he welcomes the attempt, I don't think that he can be accused of trying to halt the conversation.
Replies from: xamdam↑ comment by xamdam · 2010-02-25T04:13:00.797Z · LW(p) · GW(p)
Can someone please link to the posts in question for the latecomers?
Replies from: Eliezer_Yudkowsky, Cyan↑ comment by Cyan · 2010-02-25T04:34:49.779Z · LW(p) · GW(p)
Thanks to the OB/LW split, it's pretty awkward to try to find all the posts in sequence. I think Total Nano Domination is the first one*, and Total Tech Wars was Robin's reply. They went back and forth after that for a few days (you can follow along in the archives), and then restored the congenial atmosphere by jointly advocating cryonics. In fall 2009 they got into it again in a comment thread on OB.
* maybe it was prompted by Abstract/Distant Future Bias.
Replies from: wedrifid↑ comment by wedrifid · 2010-02-25T05:32:48.983Z · LW(p) · GW(p)
Don't neglect the surrounding context. The underlying disagreements have been echoing about all over the place in the form of "Contrarians boo vs Correct Contrarians yay!" and "here is a stupid view that can be classed as an inside view therefore inside view sucks!" vs "high status makes you stupid" and "let's play reference class tennis".
Replies from: Cyan↑ comment by RobinHanson · 2010-02-25T04:59:04.810Z · LW(p) · GW(p)
There are many sorts of arguments that tend to be weak, and weak-tending arguments deserve to be treated warily, especially if their weakness tends not to be noticed. But pointing that out is not the same as trying to end a conversation.
It seems to me the way to proceed is to talk frankly about various possible abstractions, including their reliability, ambiguity, and track records of use. You favor the abstractions "intelligence" and "self-improving" - be clear about what sort of detail those summaries neglect, why that neglect seems to you reasonable in this case, and look at the track record of others trying to use those abstractions. Consider other abstractions one might use instead.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-25T05:33:01.056Z · LW(p) · GW(p)
I've got no problem with it phrased that way. To be clear, the part that struck me as unfair was this:
Excess inside viewing usually continues even after folks are warned that outside viewing works better; after all, inside viewing better shows off inside knowledge and abilities. People usually justify this via reasons why the current case is exceptional. (Remember how all the old rules didn’t apply to the new dotcom economy?) So expect to hear excuses why the next singularity is also an exception where outside view estimates are misleading. Let’s keep an open mind, but a wary open mind.
Replies from: wedrifid, RobinHanson
↑ comment by wedrifid · 2010-02-25T05:56:40.905Z · LW(p) · GW(p)
Another example that made me die a little inside when I read it was this:
As Brin notes, many would-be broadcasters come from an academic area where for decades the standard assumption has been that aliens are peaceful zero-population-growth no-nuke greens, since we all know that any other sort quickly destroy themselves. This seems to me an instructive example of how badly a supposed “deep theory” inside-view of the future can fail, relative to closest-related-track-record outside-view.
The inside-view tells me that is an idiotic assumption to make.
Replies from: Eliezer_Yudkowsky, Tyrrell_McAllister↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-25T13:03:49.579Z · LW(p) · GW(p)
Agreed that this is the conflict of two inside views, not an inside view versus an outside view. You could as easily argue that most stars don't seem to have been eaten, therefore, the outside view suggests that any aliens within radio range are environmentalists. And certainly Robin is judging one view right and the other wrong using an inside view, not an outside view.
I simply don't see the justification for claiming the power and glory of the Outside View at all in cases like this, let alone claiming that there exists a unique obvious reference class and you have it.
Replies from: RobinHanson↑ comment by RobinHanson · 2010-02-25T15:03:00.341Z · LW(p) · GW(p)
It seems to me the obvious outside view of future contact is previous examples of contact. Yes uneaten stars is also an outside stat, which does (weakly) suggest aliens don't eat stars. I certainly don't mean to imply there is always a unique inside view.
Replies from: Eliezer_Yudkowsky, JGWeissman↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-25T19:19:32.901Z · LW(p) · GW(p)
Why isn't the obvious outside view to draw a line showing the increased peacefulness of contacts with the increasing technological development of the parties involved, and extrapolate to super-peaceful aliens? Isn't this more or less exactly why you argue that AIs will inevitably trade with us? Why extrapolate for AIs but not for aliens?
To be clear on this, I don't simply distrust the advice of an obvious outside view, I think that in cases like these, people perform a selective search for a reference class that supports a foregone conclusion (and then cry "Outside View!"). This foregone conclusion is based on inside viewing in the best case; in the worst case it is based entirely on motivated cognition or wishful thinking. Thus, to cry "Outside View!" is just to conceal the potentially very flawed thinking that went into the choice of reference class.
↑ comment by JGWeissman · 2010-02-25T20:19:29.722Z · LW(p) · GW(p)
Yes uneaten stars is also an outside stat, which does (weakly) suggest aliens don't eat stars.
Weakly? What are your conditional probabilities that we would observe stars being eaten, given that there exist star-eating aliens (within range of our attempts at communication), and given that such aliens do not exist? Or, if you prefer, what is your likelihood ratio?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-25T21:51:29.472Z · LW(p) · GW(p)
This is an excellently put objection - putting it this way makes it clear just how strong the objection is. The likelihood ratio to me sounds like it should be more or less T/F, where for the sake of conservatism T might equal .99 and F might equal .01. If we knew for a fact that there were aliens in our radio range, wouldn't this item of evidence wash out any priors we had about them eating stars? We don't see the stars being eaten!
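To spell out the arithmetic behind "wash out any priors" - using the illustrative 0.99/0.01 figures above, which are assumptions rather than measurements - even a prior strongly favoring star-eating aliens gets driven down by a roughly 99:1 likelihood ratio:

```python
def posterior_star_eaters(prior: float,
                          p_uneaten_given_eaters: float = 0.01,
                          p_uneaten_given_no_eaters: float = 0.99) -> float:
    """Posterior P(star-eating aliens in range | we observe uneaten stars)."""
    joint_eaters = prior * p_uneaten_given_eaters
    joint_no_eaters = (1 - prior) * p_uneaten_given_no_eaters
    return joint_eaters / (joint_eaters + joint_no_eaters)

for prior in (0.5, 0.9):
    print(prior, "->", round(posterior_star_eaters(prior), 3))
# 0.5 -> 0.01, 0.9 -> 0.083 (roughly): the observation dominates the prior.
```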
↑ comment by Tyrrell_McAllister · 2010-02-25T06:20:02.865Z · LW(p) · GW(p)
You think that it is idiotic to believe that many (not all) "broadcasters" expect that any aliens advanced enough to harm us will be peaceful?
Replies from: wedrifid↑ comment by RobinHanson · 2010-02-25T15:04:37.782Z · LW(p) · GW(p)
To clarify, I meant an excess of reliance on the view, not of exploration of the view.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-25T19:09:04.885Z · LW(p) · GW(p)
Forbidding "reliance" on pain of loss of prestige isn't all that much better than forbidding "exploration" on pain of loss of prestige. People are allowed to talk about my arguments but of course not take them seriously? Whereas it's perfectly okay to rely on your "outside view" estimate? I don't think the quoted paragraph is one I can let stand no matter how you reframe it...
Replies from: wedrifid, Tyrrell_McAllister↑ comment by wedrifid · 2010-02-25T22:55:59.842Z · LW(p) · GW(p)
I simply don't see the justification for claiming the power and glory of the Outside View at all in cases like this, let alone claiming that there exists a unique obvious reference class and you have it.
It could even be somewhat worse. Forbidden things seem to be higher status 'bad' than things that can be casually dismissed.
↑ comment by Tyrrell_McAllister · 2010-02-26T18:53:01.128Z · LW(p) · GW(p)
Forbidding "reliance" on pain of loss of prestige isn't all that much better than forbidding "exploration" on pain of loss of prestige.
The "on pain of loss of prestige" was implicit, if it was there at all. All that was explicit was that Robin considered your evidence to be of lower quality than you thought. Insofar as there was an implicit threat to lower status, such a threat would be implicit in any assertion that your evidence is low-quality. You seem to be saying that it is logically rude for Robin to say that he has considered your evidence and to explain why he found it wanting.
comment by Alex Flint (alexflint) · 2010-02-25T09:55:50.753Z · LW(p) · GW(p)
"inside view" and "outside view" seem misleading labels for things that are actually "bayesian reasoning" and "bayesian reasoning deliberately ignoring some evidence to account for flawed cognitive machinery". The only reason for applying the "outside view" is to compensate for our flawed machinery, so to attack an "inside view", one needs to actually give a reasonable argument that the inside view has fallen prey to bias. This argument should come first, it should not be assumed.
Replies from: RobinHanson↑ comment by RobinHanson · 2010-02-25T15:12:19.656Z · LW(p) · GW(p)
Obviously the distinction depends on being able to distinguish inside from outside considerations in any particular context. But given such a distinction there is no asymmetry - both views are not full views, but instead focus on their respective considerations.
Replies from: alexflint, jimmy↑ comment by Alex Flint (alexflint) · 2010-02-25T23:09:48.975Z · LW(p) · GW(p)
Well an ideal Bayesian would unashamedly use all available evidence. It's only our flawed cognitive machinery that suggests ignoring some evidence might sometimes be beneficial. But the burden of proof should be on the one who suggests that a particular situation warrants throwing away some evidence, rather than on the one who reasons earnestly from all evidence.
Replies from: wedrifid, RobinHanson↑ comment by wedrifid · 2010-02-25T23:31:13.980Z · LW(p) · GW(p)
I don't think ideal Bayesians use burden of proof either. Who has the burden of proof in demonstrating that burden of proof is required in a particular instance?
Replies from: alexflint↑ comment by Alex Flint (alexflint) · 2010-02-26T08:35:19.176Z · LW(p) · GW(p)
Occam's razor: the more complicated hypothesis acquires a burden of proof.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-26T22:35:00.865Z · LW(p) · GW(p)
In which case there's some specific amount of distinguishing evidence that promotes the hypothesis over the less complicated one, in which case, I suppose, the other would acquire this "burden of proof" of which you speak?
Replies from: alexflint↑ comment by Alex Flint (alexflint) · 2010-02-27T10:35:53.941Z · LW(p) · GW(p)
Not sure that I understand (I'm not being insolent, I just haven't had my coffee this morning). Claiming that "humans are likely to over-estimate the chance of a hard-takeoff singularity in the next 50 years and should therefore discount inside view arguments on this topic" requires evidence, and I'm not convinced that the standard optimism bias literature applies here. In the absence of such evidence one should accept all arguments on their merits and just do Bayesian updating.
↑ comment by RobinHanson · 2010-02-26T19:22:17.165Z · LW(p) · GW(p)
If we are going to have any heuristics that say that some kinds of evidence tend to be overused or underused, we have to be able to talk about sets of evidence that are less than the total set. The whole point here is to warn people about our evidence that suggests people tend to over-rely on inside evidence relative to outside evidence.
Replies from: alexflint↑ comment by Alex Flint (alexflint) · 2010-02-27T10:31:58.268Z · LW(p) · GW(p)
Agreed. My objection is to cases where inside view arguments are discounted completely on the basis of experiments that have shown optimism bias among humans, but where it isn't clear that optimism bias actually applies to the subject matter at hand. So my disagreement is about degrees rather than absolutes: How widely can the empirical support for optimism bias be generalized? How much should inside view arguments be discounted? My answers would be, roughly, "not very widely" and "not much outside traditional forecasting situations". I think these are tangible (even empirical) questions and I will try to write a top-level post on this topic.
↑ comment by jimmy · 2010-02-25T21:30:18.784Z · LW(p) · GW(p)
What would you call the classic drug testing example where you use the outside view as a prior and update based on the test results?
If the test is sufficiently powerful, it seems like you'd call it using the "inside view" for sure, even though it really uses both, and is a full view.
I think the issue is not that one ignores the outside view when using the inside view - I think it's that in many cases the outside view only makes very weak predictions that are easily dwarfed by the amount of information one has at hand for using the inside view.
In these cases, it only makes sense to believe something close to the outside view if you don't trust your ability to use more information without shooting yourself in the foot- which is alexflint's point.
Replies from: RobinHanson↑ comment by RobinHanson · 2010-02-26T19:24:20.320Z · LW(p) · GW(p)
I really can't see why a prior would correspond more to an outside view. The issue is not when the evidence arrived, it is more about whether the evidence is based on a track record or reasoning about process details.
Replies from: jimmy↑ comment by jimmy · 2010-02-27T20:08:55.985Z · LW(p) · GW(p)
Well, you can switch around the order in which you update anyway, so that's not really the important part.
My point was that in most cases, the outside view gives a much weaker prediction than the inside view taken at face value. In these cases using both views is pretty much the same as using the inside view by itself, so advocating "use the outside view!" would be better translated as "don't trust yourself to use the inside view!"
Replies from: RobinHanson↑ comment by RobinHanson · 2010-03-02T02:23:52.365Z · LW(p) · GW(p)
I can't imagine what evidence you think there is for your claim "in most cases, the outside view gives a much weaker prediction."
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-03-02T02:28:13.928Z · LW(p) · GW(p)
Weakness as in the force of the claim, not how well-supported the claim may be.
Replies from: JGWeissman↑ comment by JGWeissman · 2010-03-02T02:56:25.680Z · LW(p) · GW(p)
This confuses me. What force of a claim should I feel, that does not come from it being well-supported?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-03-02T03:32:43.250Z · LW(p) · GW(p)
Okay, rephrase: Suppose I pull a crazy idea out of my hat and scream "I am 100% confident that every human being on earth will grow a tail in the next five minutes!" Then I am making a very forceful claim, which is not well-supported by the evidence.
The idea is that the outside view generally makes less forceful claims than the inside view - allowing for a wider range of possible outcomes, not being very detailed or precise or claiming a great deal of confidence. If we were to take both outside view and inside view perfectly at face value, giving them equal credence, the sum of the outside view and the inside view would be mostly the inside view. So saying that the sum of the outside view and the inside view equals mostly the outside view must imply that we think the inside view is not to be trusted in the strength it says its claims should have, which is indeed the argument being made.
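To put toy numbers on that last point (made-up distributions, just to illustrate the "mostly the inside view" claim): take the two views at face value and average them with equal credence, and the peaked inside view still dominates where the combined estimate puts its mass.

```python
days = list(range(0, 11))  # days before deadline: 0..10

# Made-up outside view: broad, based on the track record (mass on 0..4 days).
outside = {d: (0.2 if d <= 4 else 0.0) for d in days}

# Made-up inside view: confident, detailed plan ("10 days early").
inside = {d: (0.9 if d == 10 else 0.01) for d in days}

# Equal-credence mixture of the two views.
mixture = {d: 0.5 * outside[d] + 0.5 * inside[d] for d in days}

print(max(mixture, key=mixture.get))  # -> 10: the inside view's pick wins
```

So advising people to land near the outside view is, in effect, advising them to discount the inside view's stated confidence, which is the claim actually in dispute.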
Replies from: JGWeissman↑ comment by JGWeissman · 2010-03-02T03:43:24.919Z · LW(p) · GW(p)
Thank you, I understand that much better.
comment by mwaser · 2010-02-24T12:33:36.981Z · LW(p) · GW(p)
We say: "Would you care to make a side bet on that?"
And I'd say . . . . "Sure! I recognize that I normally plan to finish 9 to 10 days early to ensure that I finish before the deadline, and that I normally "fail" and only finish a day or two early (but still succeed at the real deadline) . . . . but now, you've changed the incentive structure (i.e. the entire problem), so I will now plan to finish 9 or 10 days before my new deadline (necessary to take your money) of 9 or 10 days before the real deadline. Are you sure that you really want to make that side bet?"
I note also that "Would you care to make a side bet on that?" is interesting as a potential conversation-filter but can also, unfortunately, act as a conversation-diverter.
Replies from: wedrifid
comment by CarlShulman · 2010-02-24T18:55:35.145Z · LW(p) · GW(p)
Eliezer, the 'outside view' concept can also naturally be used to describe the work of Philip Tetlock, who found that political/foreign affairs experts were generally beaten by what Robyn Dawes calls "the robust beauty of simple linear models." Experts relying on coherent ideologies (EDIT: hedgehogs) did particularly badly.
Those political events were affected by big systemic pressures that someone could have predicted using inside view considerations, e.g. understanding the instability of the Soviet Union, but in practice acknowledged experts were not good enough at making use of such insights to generate net improvements on average.
Now, we still need to assign probabilities over different models, not all of which should be so simple, but I think it's something of a caricature to focus so much on the homework/curriculum-planning problems.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-24T19:13:38.588Z · LW(p) · GW(p)
(It's foxes who know many things and do better; the hedgehog knows one big thing.)
I haven't read Tetlock's book yet. I'm certainly not surprised to hear that foreign affairs "experts" are full of crap on average; their incentives are dreadful. I'm much more surprised to hear that situations like the instability of the Soviet Union could be described and successfully predicted by simple linear models, and I'm extremely suspicious if the linear models were constructed in retrospect. Wasn't this more like the kind of model-based forecasting that was actually done in advance?
Conversely if the result is just that hedgehogs did worse than foxes, I'm not surprised because hedgehogs have worse incentives - internal incentives, that is, there are no external incentives AFAICT.
I have read Dawes on medical experts being beaten by improper linear models (i.e., linear models with made-up -1-or-1 weights and normalized inputs, if I understand correctly) whose factors are the judgments of the same experts on the facets of the problem. This ought to count as the triumph or failure of something but it's not quite isomorphic to outside view versus inside view.
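For readers unfamiliar with the Dawes result being referenced, here is a minimal, hypothetical sketch of the kind of "improper" linear model described above: normalize the expert's facet judgments and add them with fixed ±1 weights, rather than fitting the weights to outcome data. The ratings and sign choices below are invented for illustration.

```python
import statistics

def improper_linear_score(facet_judgments, signs):
    """Unit-weighted sum of standardized facet judgments (Dawes-style).

    facet_judgments: list of lists, one row of facet ratings per case.
    signs: +1 or -1 per facet, chosen only from the assumed direction of
    the effect -- no weights are fitted to outcome data.
    """
    columns = list(zip(*facet_judgments))
    means = [statistics.mean(c) for c in columns]
    stdevs = [statistics.stdev(c) or 1.0 for c in columns]
    scores = []
    for row in facet_judgments:
        z = [(x - m) / s for x, m, s in zip(row, means, stdevs)]
        scores.append(sum(sign * zi for sign, zi in zip(signs, z)))
    return scores

# Hypothetical expert ratings of three cases on two facets.
ratings = [[7, 2], [4, 5], [9, 1]]
print(improper_linear_score(ratings, signs=[+1, -1]))
```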
comment by Roko · 2010-02-24T14:42:14.321Z · LW(p) · GW(p)
I think there probably is a good reference class for predictions surrounding the singularity. When you posted on "What is wrong with our thoughts?" you identified it: the class of instances of the human mind attempting to think and act outside of its epistemologically nurturing environment of clear feedback from everyday activities.
See, e.g. how smart humans like Stephen Hawking, Ray Kurzweil, Kevin Warwick, Kevin Kelly, Eric Horowitz, etc have all managed to say patently absurd things about the issue, and hold mutually contradictory positions, with massive overconfidence in some cases. I do not exclude myself from the group of people who have said absurd things about the Singularity, and I think we shouldn't exclude Eliezer either. At least Eliezer has put in massive amounts of work for what may well be the greater good of humanity, which is morally commendable.
To escape from this reference class, and therefore from the default prediction of insanity, I think that bringing in better feedback and a large diverse community of researchers might work. Of course, more feedback and more researchers = more risk according to our understanding of AI motivations. But ultimately, that's an unavoidable trade-off; the lone madman versus the global tragedy of the commons.
Replies from: Eliezer_Yudkowsky, whpearson↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-24T18:33:37.714Z · LW(p) · GW(p)
Large communities don't constitute help or progress on the "beyond the realm of feedback" problem. In the absence of feedback, how is a community supposed to know when one of its members has made progress? Even with feedback we have cases like psychotherapy and dietary science where experimental results are simply ignored. Look at the case of physics and many-worlds. What has "diversity" done for the Singularity so far? Kurzweil has gotten more people talking about "the Singularity" - and lo, the average wit of the majority hath fallen. If anything, trying to throw a large community at the problem just guarantees that you get the average result of failure, rather than being able to notice one of the rare individuals or minority communities that can make progress using lower amounts of evidence.
I may even go so far as to call "applause light" or "unrelated charge of positive affect" on the invocation of a "diverse community" here, because of the degree to which the solution fails to address the problem.
Replies from: Roko, CarlShulman↑ comment by Roko · 2010-02-24T19:18:35.849Z · LW(p) · GW(p)
In the absence of feedback, how is a community supposed to know when one of its members has made progress?
Good question. It seems that academic philosophy does, to an extent, achieve this. The mechanism seems to be that it is easier to check an argument for correctness than to generate it. And it is easier to check whether a claimed flaw in an argument really is a flaw, and so on.
In this case, a mechanism where everyone in the community tries to think of arguments, and tries to think of flaws in others' arguments, and tries to think of flaws in the criticisms of arguments, etc, means that as the community size --> infinity, the field converges on the truth.
Replies from: wedrifid↑ comment by wedrifid · 2010-02-24T20:24:47.200Z · LW(p) · GW(p)
Good question. It seems that academic philosophy does, to an extent, achieve this.
With some of my engagements with academic philosophers in mind, I have at times been tempted to lament that that 'extent' wasn't rather a lot greater. Of course, that may be 'the glass is half empty' thinking. I intuit that there is potential for a larger body of contributors to have even more of a correcting influence of the kind that you mention than what we see in practice!
Replies from: Roko↑ comment by Roko · 2010-02-25T15:32:49.467Z · LW(p) · GW(p)
Philosophy has made some pretty significant progress in many areas. However, sometimes disciplines of that form can get "stuck" in an inescapable pit of nonsense, e.g. postmodernism or theology. In a sense, the philosophy community is trying to re-do what the theologians have failed at: answering questions such as "how should I live", etc.
↑ comment by CarlShulman · 2010-02-24T23:36:34.105Z · LW(p) · GW(p)
Many-worlds has made steady progress since it was invented. Especially early on, trying to bring in diversity would get you some many-worlds proponents rather than none, and their views would tend to spread.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-25T02:10:54.217Z · LW(p) · GW(p)
Think of how much more progress could have been made if the early many-worlds proponents had gotten together and formed a private colloquium of the sane, providing only that they had access to the same amount of per capita grant funding (this latter point being not about a need for diversity but a need to pander to gatekeepers).
Replies from: Roko↑ comment by whpearson · 2010-02-24T23:05:13.234Z · LW(p) · GW(p)
Logged in to vote this up...
However I wouldn't go the lots of people route either. At least not until decent research norms had been created.
The research methodology that has been mouldering away in my brain for the past few years is the following:
We can agree that computational systems might be dangerous (in the FOOM sense).
So let us start from the basics and prove that bits of computer space aren't dangerous either by experiments we have already done (or have been done by nature) or by formal proof.
Humanity has played around with basic computers and networked computers in a variety of configurations; if our theories say that they are dangerous, then our theories are probably wrong.
Nature has created, and is still in the process of creating, many computational systems. The gene networks I mentioned earlier are one example; and if you want to look at the air around you as a giant quantum billiard-ball computer of sorts, then giant ephemeral networks of "if molecule A collides with molecule B then molecule A will collide with molecule C" type calculations are being performed all around you without danger. *
The proof section is more controversial. There are certain mathematical properties I would expect powerful systems to have. The ability to accept recursive languages and also modify internal state (especially state that controls how they accept languages) based on them seems crucial to me. If we can build up a list of properties like this, we can prove that certain systems don't have them and aren't going to be dangerous.
You can also correlate the dangerousness of parts of computational space with other parts of computational space.
One way of looking at self-modifying systems is that they are equivalent to non-self-modifying systems with infinite program memory and a bit of optimisation. For if you can write a program that changes function X to Y when it sees input Z, you can write a program that chooses to perform function Y rather than X if input Z has been seen, using a simple if condition. Of course, as you add more and more branches you increase the memory needed to store all the different programs. However, I haven't seen anyone put bounds on how large the memory would have to be for the system to be considered dangerous. It might well be larger than the universe, but that would be good to know.
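As a toy illustration of the equivalence claimed in the paragraph above (a sketch, not anything from the original comment): a "self-modifying" dispatcher that swaps in function Y after seeing input Z behaves the same as a fixed program that just checks a "seen Z" flag.

```python
def x(v): return f"X({v})"
def y(v): return f"Y({v})"

# "Self-modifying" version: rebinds its own handler when it sees "Z".
class SelfModifying:
    def __init__(self):
        self.handler = x
    def step(self, v):
        if v == "Z":
            self.handler = y  # modifies its own behavior
        return self.handler(v)

# Fixed-program version: same behavior, just an extra bit of state.
class Fixed:
    def __init__(self):
        self.seen_z = False
    def step(self, v):
        if v == "Z":
            self.seen_z = True
        return y(v) if self.seen_z else x(v)

inputs = ["a", "Z", "b"]
a, b = SelfModifying(), Fixed()
assert [a.step(i) for i in inputs] == [b.step(i) for i in inputs]
```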
In this way we can whittle down where is dangerous until we either prove that
a) There is a FOOMy part of computer space.
b) There is a non-FOOMy part of AI space, so we can build that and use it to try to figure out how to avoid Malthusian scenarios and whittle down the space more.
c) We have a space which we can't prove much about. We should have a respectable science to say "here might be dragons" by this point.
d) We cover all of computer space and none of it is FOOMy.
My personal bet is on b. But I am just a monkey...
Edit: Re-lurk. I have to be doing other things to do with boring surviving so don't expect much out of me for a good few months.
*An interesting question is do any of these network implement the calculations for pain/pleasure or other qualia.
Replies from: Roko↑ comment by Roko · 2010-02-25T15:27:47.401Z · LW(p) · GW(p)
Thanks!
I think you may be overestimating how much work formal proof can do here. For example, could formal proof have proved that early hominids would cause the human explosion?
Replies from: whpearson↑ comment by whpearson · 2010-02-25T16:32:56.490Z · LW(p) · GW(p)
Data about the world is very important in my view of intelligence.
Hominid brains were collecting lots of information about the world, then losing it all when they were dying, because they couldn't pass it all on. They could only pass on what they could demonstrate directly. (Lots of other species were doing so as well, so this argument applies to them as well.)
You could probably prove that a species that managed to keep hold of this lost information and spread it far and wide would have a different learning pattern from the "start from scratch - learn/mimic - die" model of most animals, and could potentially explode as "things with brains" had before.
Could you have proven it would be hominids? Possibly; you would need to know more about how the systems could realistically spread information between them, including protection from lying and manipulation. And whether hominids had the properties that made them more likely to explode.
comment by ShardPhoenix · 2010-02-25T09:08:11.566Z · LW(p) · GW(p)
Hanson's argument was interesting but ultimately I think it's just numerology - there's no real physical reason to expect that pattern to continue, especially given how different/loosely-related the 3 previous changes were.
comment by Paul Crowley (ciphergoth) · 2010-02-24T12:07:37.488Z · LW(p) · GW(p)
This is an excellent post, thank you.
An earlier comment of yours pointed out that one compensates for overconfidence not by adjusting one's probability towards 50%, but by adjusting it towards the probability that a broader reference class would give. In this instance, the game of reference class tennis seems harder to avoid.
Replies from: RobinZ, arundelo, Eliezer_Yudkowsky↑ comment by RobinZ · 2010-02-24T13:44:40.530Z · LW(p) · GW(p)
Speaking of that post: It didn't occur to me when I was replying to your comment there, but if you're arguing about reference classes, you're arguing about the term in your equation representing ignorance.
I think that is very nearly the canonical case for dropping the argument until better data comes in.
↑ comment by arundelo · 2010-02-24T12:33:33.649Z · LW(p) · GW(p)
My introduction to that idea was RobinZ's "The Prediction Hierarchy".
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-24T18:29:06.154Z · LW(p) · GW(p)
Agreed.
Replies from: komponisto↑ comment by komponisto · 2010-02-24T18:42:10.643Z · LW(p) · GW(p)
So what do we do about it?
comment by Matt_Stevenson · 2010-02-24T09:33:02.024Z · LW(p) · GW(p)
It seems like the Outside View should only be considered in situations which have repeatedly provided consistent results. The procrastinating student is an example: the event has been repeated numerous times with closely similar outcomes.
If the data is so insufficient that you have a hard time casting it to a reference class, that would imply that you don't have enough examples to make a reference and that you should find some other line of argument.
This whole idea of the outside view is analogous to instance-based learning or case-based reasoning. You are not trying to infer some underlying causal structure to give you insight in estimating. You are using an unknown distance and clustering heuristic to do a quick comparison. Just like in machine learning, it will be fast, but it is only as accurate as your examples.
If you're using something like Eigenfaces for face recognition, and you get a new face in, if it falls right in the middle of a large cluster of Jack's faces, you can safely assume you are looking at Jack. If you get a face that is equally close to Susan, Laura, and Katherine, you wouldn't want to just roll the dice with that guess. The best thing to do would be to recognize that you need to fill in this area of this map a little more if you want to use it here. Otherwise switch to a better map.
Edit: spelling
Replies from: taw↑ comment by taw · 2010-02-24T22:40:51.881Z · LW(p) · GW(p)
If the data is so insufficient that you have a hard time casting it to a reference class [...]
... then the data is most likely insufficient for reasoning in any other way. The reference class of smart people's predictions of the future performs extremely badly, even though they all had some really good inside-view reasons for them.
Replies from: Matt_Stevenson, alexflint↑ comment by Matt_Stevenson · 2010-02-25T07:05:43.213Z · LW(p) · GW(p)
I'm not sure what you are trying to argue here? I am saying that trying to use a reference class prediction in a situation where you don't have many examples of what you are referencing is a bad idea and will likely result in a flawed prediction.
You should only try and use the Outside View if you are in a situation that you have been in over and over and over again, with the same concrete results.
... then the data is most likely insufficient for reasoning in any other way
If you are using an Outside View to do reasoning and inference, then I don't know what to say other than: you're doing it wrong.
If you are presented with a question about a post-singularity world, and the only admissible evidence (reference class) is
the class of instances of the human mind attempting to think and act outside of its epistemologically nurturing environment of clear feedback from everyday activities.
I'm sorry, but I am not going to trust any conclusion you draw. That is a really small class to draw from, small enough that we could probably name each instance individually. I don't care how smart the person is. If they are assigning probabilities from sparse data, it is just guessing. And if they are smart, they should know better than to call it anything else.
There have been no repeated trials of singularities with consistent unquestionable results. This is not like procrastinating students and shoppers, or estimations in software. Without enough data, you are more likely to invent a reference class than anything else.
I think the Outside View is only useful when your predictions for a specific event have been repeatedly wrong, and the actual outcome is consistent. The point of the technique is to correct for a bias. I would like to know that I actually have a bias before correcting it. And I'd like to know which way to correct.
Edit: formatting
↑ comment by Alex Flint (alexflint) · 2010-02-25T09:47:12.267Z · LW(p) · GW(p)
I don't think they all had "good inside view reasons" if they were all, in fact, wrong!
Perhaps they thought they had good reasons, but you can't conclude from this all future "good-sounding" arguments are incorrect.
comment by haig · 2010-02-24T22:30:54.683Z · LW(p) · GW(p)
I may be overlooking something, but I'd certainly consider Robin's estimate of 1-2 week doublings a FOOM. Is that really a big difference compared with Eliezer's estimates? Maybe the point in contention is not the time it takes for super-intelligence to surpass human ability, but the local vs. global nature of the singularity event; the local event taking place in some lab, and the global event taking place in a distributed fashion among different corporations, hobbyists, and/or governments through market mediated participation. Even this difference isn't that great, since there will be some participants in the global scenario with much greater contributions and may seem very similar to the local scenario, and vice versa where a lab may get help from a diffuse network of contributors over the internet. If the differences really are that marginal, then Robin's 'outside view' seems to approximately agree with Eliezer's 'inside view'.
Replies from: wedrifid↑ comment by wedrifid · 2010-02-24T23:37:24.612Z · LW(p) · GW(p)
I may be overlooking something, but I'd certainly consider Robin's estimate of 1-2 week doublings a FOOM. Is that really a big difference compared with Eliezer's estimates?
I think Eliezer estimates 1-2 weeks until game over: an intelligence that has undeniable, unassailable dominance over the planet. This makes measures of economic output almost meaningless.
Maybe the point in contention is not the time it takes for super-intelligence to surpass human ability, but the local vs. global nature of the singularity event; the local event taking place in some lab, and the global event taking place in a distributed fashion among different corporations, hobbyists, and/or governments through market mediated participation.
I think you're right on the mark with this one.
Even this difference isn't that great, since there will be some participants in the global scenario with much greater contributions and may seem very similar to the local scenario
My thinking diverges from yours here. The global scenario gives a fundamentally different outcome than a local event. If participation is market-mediated then influence is determined by typical competitive forces. Whereas a local foom gives a singularity and full control to whatever effective utility function is embedded in the machine, as opposed to a rapid degeneration into a hardscrapple hell. More directly, in the local scenario that Eliezer predicts, outside contributions stop once 'foom' starts. Nobody else's help is needed. Except, of course, as cat's paws while bootstrapping.
comment by Alex Flint (alexflint) · 2010-02-25T09:56:09.706Z · LW(p) · GW(p)
If the Japanese students had put as much effort into their predictions as Eliezer has put into thinking about the singularity then I dare say they would have been rather more accurate, perhaps even more so than the "outside view" prediction.
comment by pscheyer · 2010-02-26T06:46:05.849Z · LW(p) · GW(p)
I prefer the outside view when speaking with good friends, because they know me well enough to gather that what I'm really saying isn't 'Stop Here!' but rather 'Explain to me why I shouldn't stop here?'
Perhaps this isn't really the outside view but the trappings of the outside view used rhetorically to test whether the other party is willing to put some effort into explaining their views. The Outside View as a test of your discussion partner.
The Inside View can be a conversation halter as well; going 'farther inside' or 'farther outside' than your partner can deal with halts the conversation, not the fact that you've taken an inside/outside view by itself.
Also, am I the only one who sees clear links between Outside/inside Views and methods of avoiding the tragedies of the Anticommons/Commons? Seems like the Outside view does a good job of saying 'By continuing to gather evidence you are hurting your ability to remain rational!' while the Inside view says 'Your Grand Idea is irrational considering this evidence!'
comment by Jack · 2010-02-25T02:21:04.911Z · LW(p) · GW(p)
Someone should point out that "That is a conversation-halter" is very often a conversation halter.
Replies from: grouchymusicologist, Liron, Eliezer_Yudkowsky, wedrifid↑ comment by grouchymusicologist · 2010-02-25T02:40:30.818Z · LW(p) · GW(p)
Very often? Really? Any examples you could cite for us?
↑ comment by Liron · 2010-02-25T09:52:01.384Z · LW(p) · GW(p)
An even bigger conversation halter is pointing out meta-inconsistency.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-25T13:16:33.538Z · LW(p) · GW(p)
Um... no they're not? Refuting a specific point is not the same as trying to halt a debate and stop at the current answer.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-25T13:15:50.292Z · LW(p) · GW(p)
Um... no it's not?
↑ comment by wedrifid · 2010-02-25T02:48:45.062Z · LW(p) · GW(p)
By 'often' do you mean 'since Eliezer introduced the term he has used it as a conversation halter in every post he has made'? Although even then it didn't so much halt the conversation as it did preface an extensive argument with reasoning.
comment by SarahNibs (GuySrinivasan) · 2010-02-24T20:48:15.948Z · LW(p) · GW(p)
The outside view technique is as follows:
You are given an estimation problem f(x)=?. x is noisy and you don't know all of the internals of f. First choose any set of functions F containing f. Then find a huge subset G of F such that g in G has that for all y in Y, g(y) is (say) bounded to some nice range R. Now find your probability p that x is in Y and your probability q that f is in G. Then with probability p*q f(x) is in R and this particular technique says nothing about f(x) in the remaining 1-p*q of your distribution.
Sometimes this is extremely helpful. Suppose you have the opportunity to bet $1 and you win $10 if a < f(x) < b. Then if you can find G,Y,R with R within (a,b) and with p*q > (1/10), you know the bet's good without having to bother with "well what does f look like exactly?".
Obvious pitfalls:
- Depending on f and G, sometimes p is not much easier to estimate than f(x). If you do a bad job estimating p and don't realize it, your bounds will be artificially confident. This IMO is what is happening when we pick any-reference-class-we-want F and then say the probability f is in G is equal to the number of things in G divided by the number of things in F.
- Estimate the wrong quantity. For example, instead of estimating the EV of a bet, estimate the probability you lose. You will establish a very nice bound on the probability, but the quantity you care about has an extra component which is WinPayoff*WinProbability, and if WinPayoff is greater order of magnitude than the other quantities, your bound tells you little.
- Mis-estimate q. It feels to me like this rarely happens, and that often it's the clarity of q's estimation that makes outside view arguments feel so persuasive.
What does this mean for continued investigation of the structure of f? It crucially depends on how we estimate p. If further knowledge about the structure of f does not affect how we should estimate p, then changing our estimate of the R*(p*q) component of f(x) based on our inside view is a bad plan and makes our estimate worse. If further knowledge about the structure of f does affect how we should estimate p, then to keep our R*(p*q) component around is also invalid.
So I see 3 necessary criteria to show that investigating the structure of f or the specifics of x won't help our estimate of f(x) much based on an outside view G,Y,R:
- An argument that 1-p*q is small.
- An argument that we cannot estimate the probability that f is in G much better given further knowledge of the structure of f.
- An argument that we cannot estimate the probability that x is in Y much better given further knowledge of the specifics of x.
comment by Unknowns · 2010-02-24T07:02:15.727Z · LW(p) · GW(p)
Again, even if I don't have time for it myself, I think it would be useful to gather data on particular disagreements based on such "outside views" with vague similarities, and then see how often each side happened to be right.
Of course, even if the "outside view" happened to be right in most such cases, you might just respond with the same argument that all of these particular disagreements are still radically different from your case. But it might not be a very good response, in that situation.
Replies from: Eliezer_Yudkowsky, Nick_Tarleton
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-24T11:16:59.822Z · LW(p) · GW(p)
But the original problem was that, in order to carry an argument about the "outside view" in the case of what was (arguendo) so drastic a break with the past as to have only two possible analogues, if that, data was being presented about students guessing their homework times. If instead you gather data about predictions made in cases of, say, the industrial revolution or the invention of the printing press, you're still building a certain amount of your conclusion into your choice of data.
What I would expect to find is that predictive accuracy falls off with dissimilarity and with attempted jumps across causal structural gaps, and that the outside view wielded with reasonable competence becomes steadily less competitive with the Weak Inside View wielded with reasonable competence. And this could perhaps be extrapolated across larger putative differences, so as to say, "if the difference is this large, this is what will happen".
But I would also worry that outside-view advocates would take the best predictions and reinterpret them as predictions the outside view "could" have made, either by post-hoc choice of reference class or because the successful predictor cited historical cases in making their argument (which they could easily have done while actually being driven by a particular inside view), and would then compare this performance to a lot of wacky prophets treated as "the average performance of the inside view", when a rationalist of the times would have been skeptical of those prophets in advance, without benefit of hindsight. (And the corresponding wackos who cited historical cases in support would not be counted as average outside viewers - though it is true that if you're going to go crazy anyway, it's a bit easier to go crazy with new ideas than with historical precedents.)
And outside-view advocates could justly worry that by selecting famous historical cases, we are likely to be selecting anomalous breaks with the past that (arguendo) could not have been predicted in advance. Or that by filtering on "reasonably competent" predictions with the benefit of hindsight, inside-view advocates would simply pick one person out of a crowd who happened to get it right, while competent outside-view advocates (so the argument goes) would all tend to give similar predictions, and so would not be allowed a similar post-hoc selection bias.
Note: I predict that in historical disagreements, people appealing to historical similarities will not all give mostly similar predictions, any more than taw and Robin Hanson constructed the same reference class for the Singularity.
But then by looking for historical disagreements, you're already filtering out a very large class of cases in which the answer is too obvious for there to be famous disagreements about it.
I should like to have the methodology available for inspection, and both sides comparing what they say they expect in advance, so that it can be known what part of the evidential flow is agreed-upon divergence of predictions, and what part of the alleged evidence is based on one side saying that the other side isn't being honest about what their theory predicts.
It's not like we're going to find a large historical database of people explicitly self-identifying as outside view advocates or weak inside view advocates. Not in major historical disputes, as opposed to homework.
↑ comment by Nick_Tarleton · 2010-02-24T07:32:40.767Z · LW(p) · GW(p)
Which outside view do you propose to use to evaluate the intelligence explosion, if you find that "the" outside view is usually right in such cases?
comment by MatthewB · 2010-02-26T09:27:50.901Z · LW(p) · GW(p)
I always love reading Less Wrong. I am just sometimes confused, for many days, about what exactly I have read - until something pertinent comes along and reveals the salience of what I had read, and then I say, "OH! Now I get it!"
At present, I am between those two states... waiting for the "Now I get it" moment.
Replies from: gwern
comment by cousin_it · 2010-02-24T11:19:12.374Z · LW(p) · GW(p)
"Eventually, some AI will go FOOM, locally self-improvingly rather than global-economically"
Ouch. This statement smells to me like backtracking from your actual position. If you honestly have no time estimate beyond "eventually", why does the SIAI exist? Don't you have any more urgent good things to do?
(edited to remove unrelated arguments, there will be time for them later)
Replies from: Eliezer_Yudkowsky, Vladimir_Nesov
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-24T11:56:37.294Z · LW(p) · GW(p)
I honestly don't understand your objection. "Eventually" means "sometime between tomorrow and X years from now" where my probability distribution over X peaks in the 20-40 range and then starts dropping off but with a long tail because hey, gotta widen those confidence intervals.
If I knew for an absolute fact that nothing was going to happen for the next 100 years, it would still be a pretty damned urgent problem; you wouldn't want to just let things slide until we ended up in a position as awful as the one we probably occupy in real life.
I still feel shocked when I read something like this and remember how short people's time horizons are, how they live in a world that is so much tinier than known space and time, a world without a history or a future or an intergalactic civilization that bottlenecks through it. Human civilization has been around for thousands of years. Anything within the next century constitutes the last minutes of the endgame.
Replies from: cousin_it
↑ comment by Vladimir_Nesov · 2010-02-24T19:04:15.196Z · LW(p) · GW(p)
There is a path of retreat from belief in sudden FOOM that still calls for working on FAI (no matter what is feasible, we still need to preserve human value as effectively as possible, and FAI is pretty much this project, FOOM or not):
comment by David_J_Balan · 2010-03-01T01:03:56.443Z · LW(p) · GW(p)
It's a bit off topic, but I've been meaning to ask Eliezer this for a while. I think I get the basic logic behind "FOOM." If a brain as smart as ours could evolve from pretty much nothing, then it seems likely that sooner or later (and I have not the slightest idea whether it will be sooner or later) we should be able to use the smarts we have to design a mind that is smarter. And if we can make a mind smarter than ours, it seems likely that that mind should be able to make one smarter than it, and so on. And this process should be pretty explosive, at least for a while, so that in pretty short order the smartest minds around will be way more than a match for us, which is why it's so important that it be baked into the process from the beginning that it proceed in a way that we will regard as friendly.
But it seems to me that this qualitative argument works equally well whether "explosive" means "a box in someone's basement goes to Unchallenged Lord and Master of the Universe Forever and Ever" before anyone else knows about it or can do anything about it, or it means "different people/groups will innovate and borrow/steal each others' innovations over a period of many years, at the end of which where we end up will depend only a little on the contribution of the people who started the ball rolling." And if that's right, doesn't it follow that what really matters is not the correctness of the FOOM argument (which seems correct to me), but rather the estimate of how big the exponent in the exponential growth is likely to be? Is this (and much of your disagreement with Robin Hanson) just a disagreement over an estimate of a number? Does that disagreement stand any chance of being anywhere near resolved with the available evidence?
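A toy illustration of why the size of that number dominates everything else: with simple exponential growth, the time to go from baseline capability to, say, 1000x baseline is ln(1000)/k, so the same qualitative argument yields wildly different pictures depending on the growth constant. The doubling times below are invented for illustration, not anyone's actual estimate.

import math

# Time to reach 1000x baseline under exponential growth, for a few
# hypothetical doubling times (purely illustrative numbers).
for label, doubling_time_days in [("basement FOOM", 2),
                                  ("fast distributed progress", 365),
                                  ("slow distributed progress", 2 * 365)]:
    k = math.log(2) / doubling_time_days          # growth constant per day
    days_to_1000x = math.log(1000) / k
    print(f"{label}: doubling every {doubling_time_days} days "
          f"-> 1000x in about {days_to_1000x / 365:.1f} years")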
comment by LongInTheTooth · 2010-02-24T17:27:51.124Z · LW(p) · GW(p)
Once again, Bayesian reasoning comes to the rescue. The assertion to stop updating based on new data (ignore the inside view!) is just plain wrong.
However, a reminder to be careful and objective about the probability one might assign to a new bit of data (inside-view data is not privileged over outside-view data! And it might be really bad!) is helpful.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-24T18:36:54.449Z · LW(p) · GW(p)
The assertion to stop updating based on new data (ignore the inside view!) is just plain wrong.
I'd like to be able to say that, but there actually is research showing how human beings get more optimistic about their Christmas shopping estimates as they try to visualize the details of when, where, and how.
Your statement is certainly true of an ideal rational agent, but it may not carry over into human practice.
Replies from: Cyan
↑ comment by Cyan · 2010-02-24T18:59:07.041Z · LW(p) · GW(p)
...updating based on new data (ignore the inside view!)...
...human beings get more optimistic... as they try to visualize the details of when, where, and how.
Are updating based on new data and updating based on introspection equivalent? If not, then LongInTheTooth equivocated by calling ignoring the inside view a failure to update based on new data. But maybe they are equivalent under a non-logical-omniscience view of updating, and it's necessary to factor in meta-information about the quality and reliability of the introspection.
Replies from: LongInTheTooth
↑ comment by LongInTheTooth · 2010-02-24T22:57:04.776Z · LW(p) · GW(p)
"But maybe they are equivalent under a non-logical-omniscience view of updating, and it's necessary to factor in meta-information about the quality and reliability of the introspection."
Yes, that is what I was thinking in a wishy-washy intuitive way, rather than an explicit and clearly stated way, as you have helpfully provided.
The act of visualizing the future and planning how long a task will take based on guesses about how long the subtasks will take, I would call generating new data which one might use to update a probability of finishing the task on a specific date. (FogBugz Evidence Based Scheduling does exactly this, although with Monte Carlo simulation, rather than Bayesian math)
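A minimal Python sketch of that kind of Monte Carlo schedule estimation, in the spirit of what is described above rather than FogBugz's actual implementation; the velocity ratios and subtask estimates are invented.

import random

# Historical ratios of (actual time / estimated time) on past tasks.
# These numbers are made up for illustration.
past_velocity_ratios = [1.1, 1.4, 0.9, 2.0, 1.3, 1.8, 1.0, 1.6]

# Inside-view estimates (hours) for the subtasks of the current project.
subtask_estimates = [4, 8, 2, 16, 6]

def simulate_total_hours(estimates, ratios, trials=10000):
    """Monte Carlo: scale each subtask estimate by a randomly drawn
    historical ratio and sum them, once per trial."""
    totals = []
    for _ in range(trials):
        totals.append(sum(est * random.choice(ratios) for est in estimates))
    return sorted(totals)

totals = simulate_total_hours(subtask_estimates, past_velocity_ratios)
print("naive inside-view sum:", sum(subtask_estimates), "hours")
print("80th-percentile estimate:", round(totals[int(0.8 * len(totals))], 1), "hours")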
But research shows that when doing this exercise for homework assignments and Christmas shopping (and, incidentally, software projects), the data is terrible. Good point! Don't lend much weight to this data for those projects.
I see Eliezer saying that sometimes the internally generated data isn't bad after all.
So, applying a Bayesian perspective, the answer is: Be aware of your biases for internally generated data (inside view), and update accordingly.
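Concretely, "update accordingly" might look something like the following sketch, which treats the outside-view and inside-view estimates as Gaussians and debiases and widens the inside view before combining them; the correction factors are invented, not taken from the research mentioned above.

# Outside-view estimate (e.g. days to finish, from past similar projects)
# and an optimistic inside-view estimate built from subtask planning.
# All numbers are invented for illustration.
outside_mean, outside_sd = 14.0, 4.0
inside_mean, inside_sd = 7.0, 2.0

bias_correction = 1.5   # assumed multiplicative optimism correction
inflation = 2.0         # assumed widening of inside-view confidence

adj_mean = inside_mean * bias_correction
adj_sd = inside_sd * inflation

# Precision-weighted (Gaussian) combination of the two estimates.
w_out, w_in = 1 / outside_sd ** 2, 1 / adj_sd ** 2
posterior_mean = (w_out * outside_mean + w_in * adj_mean) / (w_out + w_in)
posterior_sd = (w_out + w_in) ** -0.5
print(round(posterior_mean, 1), "+/-", round(posterior_sd, 1), "days")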
And generalizing from my own experience, I would say, "Good luck with that!"
comment by timtyler · 2010-02-24T11:54:24.314Z · LW(p) · GW(p)
I see a couple of problems with classifying "intelligent design by machines" as a big evolutionary leap away from "intelligent design by humans".
The main difference is one of performance. Performance has been increasing gradually anyway - what is happening now is that it is increasing faster.
Also, humans routinely augment their intelligence by using machine tools - blurring any proposed line between the machine-augmented humans that we have today and machine intelligence.
My favoured evolutionary transition classification scheme is to bundle all the modern changes together - and describe them as being the result of the rise of the new replicators. Machine intelligence is just the new replicators making brains for themselves. Nanotechnology and robotics are just the new replicators making bodies for themselves.
These phenomena had a single historical source. Future historians will see them as intimately linked. They happen in quick succession on a geological timescale.
If you want to subdivide further, fine - but then I think you have to consider the origin of language, and the origin of writing and the origin of the computer to be the next-most important recent events.
comment by PhilGoetz · 2010-02-25T04:18:56.645Z · LW(p) · GW(p)
You seem to be phrasing this as an either/or decision.
Remembering that all decisions are best formalized as functions, the outside view asks, "What other phenomena are like this one, and what did their functions look like?" without looking at the equation. The inside view tries to analyze the function. The hardcore inside view tries to actually plot points on the function; the not-quite-so-ambitious inside view just tries to find its zeros, inflection points, regions where it must be positive or negative, etc.
For complex issues, you should go back and forth between these views. If your outside view gives you 5 other functions you think are comparable, and they are all monotonic increasing, see if your function is monotonic increasing. If your inside view suggests that your function is everywhere positive, and 4 of your 5 outside-view functions are everywhere positive, throw out the 5th. And so on.
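A rough Python sketch of that back-and-forth, with coarse properties standing in for the functions; the phenomena and property values are placeholders, not a real reference class.

# Outside view: reference cases judged comparable, summarized by coarse
# properties of their "functions". All names and values are illustrative.
reference_class = {
    "phenomenon_A": {"monotonic_increasing": True, "everywhere_positive": True},
    "phenomenon_B": {"monotonic_increasing": True, "everywhere_positive": True},
    "phenomenon_C": {"monotonic_increasing": True, "everywhere_positive": False},
    "phenomenon_D": {"monotonic_increasing": True, "everywhere_positive": True},
    "phenomenon_E": {"monotonic_increasing": True, "everywhere_positive": True},
}

# Inside view: coarse properties we believe our own function has,
# derived from analyzing its internal structure.
inside_view = {"monotonic_increasing": True, "everywhere_positive": True}

def cross_check(inside, references):
    """For each property, compare the inside-view claim with the fraction of
    reference cases that share it, and flag reference cases that disagree
    with a strongly supported inside-view property."""
    for prop, claim in inside.items():
        agree = [name for name, props in references.items() if props.get(prop) == claim]
        print(f"{prop}: inside view says {claim}; "
              f"{len(agree)}/{len(references)} reference cases agree")
        if len(agree) >= 0.8 * len(references):
            outliers = set(references) - set(agree)
            if outliers:
                print(f"  consider dropping outliers: {sorted(outliers)}")

cross_check(inside_view, reference_class)

In a real analysis the properties would be richer than two booleans, but the back-and-forth structure is the same.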
For example, the Singularity is unusual enough that looking past it calls for some inside-view thinking. But there are a few strong outside-view constraints that you can invoke that turn out to have strong implications, such as the speed of light.
The one that's on my mind at present is my view that there will be scarcity of energy and physical resources after the Singularity - an outside view based on general properties of complex systems such as markets and ecosystems, plus the relative effects I expect the Singularity to have on the speed of computation and development of new information vs. the speed of development of energy and physical resources. A lot of people on LW make the contrary assumption that there will be abundance after the Singularity (often, I think, as a necessary article of faith for the Church of the Friendly AI Santa). The resulting diverging opinions on what is likely to happen (such as "Am I likely to be revived from cryonic suspension after the Singularity?") are an example of similar inside-view reasoning operating within different outside-view constraints. The argument isn't inside view vs. outside view; the argument in this case is between two outside views.
(Well, actually there's more to our differences than that. I think that asking "Will I be revived after the Singularity?" is like a little kid asking "Will there be Bugs Bunny cartoons in heaven?" Asking the question is at best humorous, regardless of whether the answer is yes or no. But for purposes of illustrating my point, the differing outside views is what matters.)
Replies from: wedrifid, MichaelVassar
↑ comment by wedrifid · 2010-02-25T05:38:30.450Z · LW(p) · GW(p)
(Well, actually there's more to our differences than that. I think that asking "Will I be revived after the Singularity?" is like a little kid asking "Will there be Bugs Bunny cartoons in heaven?" Asking the question is at best humorous, regardless of whether the answer is yes or no. But for purposes of illustrating my point, the differing outside views is what matters.)
That is the real conversation halter. "Appeal to the outside view" is usually just a bad argument; declaring a question silly regardless of what the answer turns out to be - that's a conversation halter and a mind-killer.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2010-02-25T22:28:24.328Z · LW(p) · GW(p)
If something really is silly, then saying so is a mind-freer, not a mind-killer. If we were actually having that conversation, and I said "That's silly" at the start, rather than at the end, you might accuse me of halting the conversation. This is a brief summary of a complicated position, not an argument.
↑ comment by MichaelVassar · 2010-02-26T19:17:57.758Z · LW(p) · GW(p)
I don't think that many people expect the elimination of resource constraints.
Regarding the issue of FAI as Santa, wouldn't the same statement apply to the industrial revolution?
Regarding reviving the cryonically suspended: yes, probably a confusion, but not clearly so; and working within the best model we have, the answer is "plausibly", which is all that anyone claims.
↑ comment by PhilGoetz · 2010-02-27T04:42:52.068Z · LW(p) · GW(p)
Regarding the issue of FAI as Santa, wouldn't the same statement apply to the industrial revolution?
The industrial revolution gives you stuff. Santa gives you what you want. When I read people's dreams of a future in which an all-knowing benevolent Friendly AI provides them with everything they want forever, it weirds me out to think these are the same people who ridicule Christians. I've read interpretations of Friendly AI that suggest a Friendly AI is so smart, it can provide people with things that are logically inconsistent. But can a Friendly AI make a rock so big that it can't move it?
Replies from: MichaelVassar
↑ comment by MichaelVassar · 2010-03-01T14:46:46.299Z · LW(p) · GW(p)
Citation needed.
The economy gives you what you want. Cultural snobbishness gives you what you should want. Next...
comment by diegocaleiro · 2010-02-24T07:56:20.172Z · LW(p) · GW(p)
The "Outside View" is the best predictor of timeframes and happiness (Dan Gilbert 2007)
The reason people have chosen to study students and what they think about homework is probably that they supposed there were mistakes in the average student's declarations. The same goes for studying affective forecasting (the technical term for "predicting future happiness given X").
I suspect this kind of mistake does not happen in engineering companies when they evaluate next year's profits given their new technical environment... Therefore no one bothers to study engineers' inside views.
comment by taw · 2010-02-24T22:47:07.198Z · LW(p) · GW(p)
You are using inside view arguments to argue against outside view.
I think the only thing that could convince me that the outside view of predictions of the future works worse than the inside view is a big database of such predictions showing how the outside view did worse - that is, outside view arguments against the outside view.
Outside view arguments against the inside view - countless failed expert predictions - are easy to find.
Replies from: RobinZ, wedrifid
↑ comment by wedrifid · 2010-02-24T23:18:36.687Z · LW(p) · GW(p)
After all, the wider reference class of cases of telling people to stop gathering arguments, is one of which we should all be wary...
You are using inside view arguments to argue against outside view.
The subquote is an outside view argument. In fact, the post title, introduction and conclusion all take the form of an outside view argument.
comment by bigbad · 2010-02-26T03:43:11.276Z · LW(p) · GW(p)
I tend to think that the most relevant piece of information about the singularity is Moore's Second Law, "the cost of a semiconductor fab rises exponentially with each generation of chips."
Corollary 1: At some point, the rate of increase in computing power will be limited by the rate of growth in the overall global economy.
Corollary 2: AI will not go FOOM (at least, not by a process of serially improved computer designs).
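A rough numeric sketch of Corollary 1, under invented assumptions (a leading-edge fab starting at about $3 billion and roughly doubling in cost every two-year generation, against a $60 trillion world economy growing 3% per year):

# Illustrative numbers only; none of these figures come from the comment above.
fab_cost = 3e9            # dollars per leading-edge fab, assumed starting point
world_gdp = 60e12         # dollars per year, assumed
gdp_growth = 0.03         # assumed 3% annual growth
years_per_generation = 2
cost_growth_per_generation = 2.0   # assumed doubling per chip generation

year = 0
while fab_cost < 0.01 * world_gdp:   # stop when one fab costs 1% of world GDP
    year += years_per_generation
    fab_cost *= cost_growth_per_generation
    world_gdp *= (1 + gdp_growth) ** years_per_generation

print(f"Under these assumptions, a single fab reaches 1% of world GDP "
      f"after about {year} years.")

Once a single fab is a noticeable fraction of world GDP, fab spending can only keep growing about as fast as the economy does, which is the sense in which the growth of computing power becomes economy-limited.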