Is my view contrarian?

post by lukeprog · 2014-03-11T17:42:49.788Z · LW · GW · Legacy · 96 comments

Contents

  Is my view contrarian?
  My approach
  When do I have good reason to think I’m correct?

Previously: Contrarian Excuses, The Correct Contrarian Cluster, What is bunk?, Common Sense as a Prior, Trusting Expert Consensus, Prefer Contrarian Questions.

Robin Hanson once wrote:

On average, contrarian views are less accurate than standard views. Honest contrarians should admit this, that neutral outsiders should assign most contrarian views a lower probability than standard views, though perhaps a high enough probability to warrant further investigation. Honest contrarians who expect reasonable outsiders to give their contrarian view more than normal credence should point to strong outside indicators that correlate enough with contrarians tending more to be right.

I tend to think through the issue in three stages:

  1. When should I consider myself to be holding a contrarian[1] view? What is the relevant expert community?
  2. If I seem to hold a contrarian view, when do I have enough reason to think I’m correct?
  3. If I seem to hold a correct contrarian view, what can I do to give other people good reasons to accept my view, or at least to take it seriously enough to examine it at length?

I don’t yet feel that I have “answers” to these questions, but in this post (and hopefully some future posts) I’d like to organize some of what has been said before,[2] and push things a bit further along, in the hope that further discussion and inquiry will contribute toward significant progress in social epistemology.[3] Basically, I hope to say a bunch of obvious things, in a relatively well-organized fashion, so that less obvious things can be said from there.[4]

In this post, I’ll just address stage 1. Hopefully I’ll have time to revisit stages 2 and 3 in future posts.

 

Is my view contrarian?

World model differences vs. value differences

Is my effective altruism a contrarian view? It seems to be more of a contrarian value judgment than a contrarian world model,[5] and by “contrarian view” I tend to mean “contrarian world model.” Some apparently contrarian views are probably actually contrarian values.

 

Expert consensus

Is my atheism a contrarian view? It’s definitely a world model, not a value judgment, and only 2% of people are atheists.

But what’s the relevant expert population, here? Suppose it’s “academics who specialize in the arguments and evidence concerning whether a god or gods exist.” If so, then the expert population is probably dominated by academic theologians and religious philosophers, and my atheism is a contrarian view.

We need some heuristics for evaluating the soundness of the academic consensus in different fields.[6]

For example, we should consider the selection effects operating on communities of experts. If someone doesn’t believe in God, they’re unlikely to spend their career studying arcane arguments for and against God’s existence. So most people who specialize in this topic are theists, but nearly all of them were theists before they knew the arguments.

Perhaps instead the relevant expert community is “scholars who study the fundamental nature of the universe” — maybe philosophers and physicists? They’re mostly atheists.[7] This is starting to get pretty ad hoc, but maybe that’s unavoidable.

What about my view that the overall long-term impact of AGI will be, most likely, extremely bad? A recent survey of the top 100 authors in artificial intelligence (by citation index)[8] suggests that my view is somewhat out of sync with the views of those researchers.[9] But is that the relevant expert population? My impression is that AI experts know a lot about contemporary AI methods, especially within their subfield, but usually haven’t thought much about, or read much about, long-term AI impacts.

Instead, perhaps I’d need to survey “AGI impact experts” to tell whether my view is contrarian. But who is that, exactly? There’s no standard credential.

Moreover, the most plausible candidates around today for “AGI impact experts” are — like the “experts” of many other fields — mere “scholastic experts,” in that they[10] know a lot about the arguments and evidence typically brought to bear on questions of long-term AI outcomes.[11] They generally are not experts in the sense of “Reliably superior performance on representative tasks” — they don’t have uniquely good track records on predicting long-term AI outcomes, for example. As far as I know, they don’t even have uniquely good track records on predicting short-term geopolitical or sci-tech outcomes — e.g. they aren’t among the “super forecasters” discovered in IARPA’s forecasting tournaments.

Furthermore, we might start to worry about selection effects, again. E.g. if we ask AGI experts when they think AGI will be built, they may be overly optimistic about the timeline: after all, if they didn’t think AGI was feasible soon, they probably wouldn’t be focusing their careers on it.

Perhaps we can salvage this approach for determining whether one has a contrarian view, but for now, let’s consider another proposal.

 

Mildly extrapolated elite opinion

Nick Beckstead instead suggests that, at least as a strong prior, one should believe what one thinks “a broad coalition of trustworthy people would believe if they were trying to have accurate views and they had access to [one’s own] evidence.”[12] Below, I’ll propose a modification of Beckstead’s approach which aims to address the “Is my view contrarian?” question, and I’ll call it the “mildly extrapolated elite opinion” (MEEO) method for determining the relevant expert population.[13]

First: which people are “trustworthy”? With Beckstead, I favor “giving more weight to the opinions of people who can be shown to be trustworthy by clear indicators that many people would accept, rather than people that seem trustworthy to you personally.” (This guideline aims to avoid parochialism and self-serving cognitive biases.)

What are some “clear indicators that many people would accept”? Beckstead suggests:

IQ, business success, academic success, generally respected scientific or other intellectual achievements, wide acceptance as an intellectual authority by certain groups of people, or success in any area where there is intense competition and success is a function of ability to make accurate predictions and good decisions…

Of course, trustworthiness can also be domain-specific. Very often, elite common sense would recommend deferring to the opinions of experts (e.g., listening to what physicists say about physics, what biologists say about biology, and what doctors say about medicine). In other cases, elite common sense may give partial weight to what putative experts say without accepting it all (e.g. economics and psychology). In other cases, they may give less weight to what putative experts say (e.g. sociology and philosophy).

Hence MEEO outsources the challenge of evaluating academic consensus in different fields to the “generally trustworthy people.” But in doing so, it raises several new challenges. How do we determine which people are trustworthy? How do we “mildly extrapolate” their opinions? How do we weight those mildly extrapolated opinions in combination?

This approach might also be promising, or it might be even harder to use than the “expert consensus” method.

 

My approach

In practice, I tend to do something like this:

[Flowchart omitted. As quoted in the comments below, it begins: “To determine whether my view is contrarian, I ask whether there’s a fairly obvious, relatively trustworthy expert population on the issue.”]

When do I have good reason to think I’m correct?

Suppose I conclude I have a contrarian view, as I plausibly have about long-term AGI outcomes,[14] and as I might have about the technological feasibility of preserving myself via cryonics.[15] How much evidence do I need to conclude that my view is justified despite the informed disagreement of others?

I’ll try to tackle that question in a future post. Not surprisingly, my approach is a kind of model combination and adjustment.

 

 


  1. I don’t have a concise definition for what counts as a “contrarian view.” In any case, I don’t think that searching for an exact definition of “contrarian view” is what matters. In an email conversation with me, Holden Karnofsky concurred, making the point this way: “I agree with you that the idea of ‘contrarianism’ is tricky to define. I think things get a bit easier when you start looking for patterns that should worry you rather than trying to Platonically define contrarianism… I find ‘Most smart people think I’m bonkers about X’ and ‘Most people who have studied X more than I have plus seem to generally think like I do think I’m wrong about X’ both worrying; I find ‘Most smart people think I’m wrong about X’ and ‘Most people who spend their lives studying X within a system that seems to be clearly dysfunctional and to have a bad track record think I’m bonkers about X’ to be less worrying.”  ↩

  2. For a diverse set of perspectives on the social epistemology of disagreement and contrarianism not influenced (as far as I know) by the Overcoming Bias and Less Wrong conversations about the topic, see Christensen (2009); Ericsson et al. (2006); Kuchar (forthcoming); Miller (2013); Gelman (2009); Martin & Richards (1995); Schwed & Bearman (2010); Intemann & de Melo-Martin (2013). Also see Wikipedia’s article on scientific consensus.  ↩

  3. I suppose I should mention that my entire inquiry here is, à la Goldman (1998), premised on the assumptions that (1) the point of epistemology is the pursuit of correspondence-theory truth, and (2) the point of social epistemology is to evaluate which social institutions and practices have instrumental value for producing true or well-calibrated beliefs.  ↩

  4. I borrow this line from Chalmers (2014): “For much of the paper I am largely saying the obvious, but sometimes the obvious is worth saying so that less obvious things can be said from there.”  ↩

  5. Holden Karnofsky seems to agree: “I think effective altruism falls somewhere on the spectrum between ‘contrarian view’ and ‘unusual taste.’ My commitment to effective altruism is probably better characterized as ‘wanting/choosing to be an effective altruist’ than as ‘believing that effective altruism is correct.’”  ↩

  6. Without such heuristics, we can also rather quickly arrive at contradictions. For example, the majority of scholars who specialize in Allah’s existence believe that Allah is the One True God, and the majority of scholars who specialize in Yahweh’s existence believe that Yahweh is the One True God. Consistency isn’t everything, but contradictions like this should still be a warning sign.  ↩

  7. According to the PhilPapers Surveys, 72.8% of philosophers are atheists, 14.6% are theists, and 12.6% categorized themselves as “other.” If we look only at metaphysicians, atheism remains dominant at 73.7%. If we look only at analytic philosophers, we again see atheism at 76.3%. As for physicists: Larson & Witham (1997) found that 77.9% of physicists and astronomers are disbelievers, and Pew Research Center (2009) found that 71% of physicists and astronomers did not believe in a god.  ↩

  8. Müller & Bostrom (forthcoming). “Future Progress in Artificial Intelligence: A Poll Among Experts.”  ↩

  9. But, this is unclear. First, I haven’t read the forthcoming paper, so I don’t yet have the full results of the survey, along with all its important caveats. Second, distributions of expert opinion can vary widely between polls. For example, Schlosshauer et al. (2013) reports the results of a poll given to participants in a 2011 quantum foundations conference (mostly physicists). When asked “When will we have a working and useful quantum computer?”, 9% said “within 10 years,” 42% said “10–25 years,” 30% said “25–50 years,” 0% said “50–100 years,” and 15% said “never.” But when the exact same questions were asked of participants at another quantum foundations conference just two years later, Norsen & Nelson (2013) report, the distribution of opinion was substantially different: 9% said “within 10 years,” 22% said “10–25 years,” 20% said “25–50 years,” 21% said “50–100 years,” and 12% said “never.”  ↩

  10. I say “they” in this paragraph, but I consider myself to be a plausible candidate for an “AGI impact expert,” in that I’m unusually familiar with the arguments and evidence typically brought to bear on questions of long-term AI outcomes. I also don’t have a uniquely good track record on predicting long-term AI outcomes, nor am I among the discovered “super forecasters.” I haven’t participated in IARPA’s forecasting tournaments myself because it would just be too time consuming. I would, however, very much like to see these super forecasters grouped into teams and tasked with forecasting longer-term outcomes, so that we can begin to gather scientific data on which psychological and computational methods result in the best predictive outcomes when considering long-term questions. Given how long it takes to acquire these data, we should start as soon as possible.  ↩

  11. Weiss & Shanteau (2012) would call them “privileged experts.”  ↩

  12. Beckstead’s “elite common sense” prior and my “mildly extrapolated elite opinion” method are epistemic notions that involve some kind of idealization or extrapolation of opinion. One earlier such proposal in social epistemology was Habermas’ “ideal speech situation,” a situation of unlimited discussion between free and equal humans. See Habermas’ “Wahrheitstheorien” in Schulz & Fahrenbach (1973) or, for an English description, Geuss (1981), pp. 65–66. See also the discussion in Tucker (2003), pp. 502–504.  ↩

  13. Beckstead calls his method the “elite common sense” prior. I’ve named my method differently for two reasons. First, I want to distinguish MEEO from Beckstead’s prior, since I’m using the method for a slightly different purpose. Second, I think “elite common sense” is a confusing term even for Beckstead’s prior, since there’s some extrapolation of views going on. But also, it’s only a “mild” extrapolation — e.g. we aren’t asking what elites would think if they knew everything, or if they could rewrite their cognitive software for better reasoning accuracy.  ↩

  14. My rough impression is that among the people who seem to have thought long and hard about AGI outcomes, and seem to me to exhibit fairly good epistemic practices on most issues, my view on AGI outcomes is still an outlier in its pessimism about the likelihood of desirable outcomes. But it’s hard to tell: there haven’t been systematic surveys of the important-to-me experts on the issue. I also wonder whether my views about long-term AGI outcomes are more a matter of seriously tackling a contrarian question rather than being a matter of having a particularly contrarian view. On this latter point, see this Facebook discussion.  ↩

  15. I haven’t seen a poll of cryobiologists on the likely future technological feasibility of cryonics. Even if there were such polls, I’d wonder whether cryobiologists also had the relevant philosophical and neuroscientific expertise. I should mention that I’m not personally signed up for cryonics, for these reasons.  ↩

96 comments

Comments sorted by top scores.

comment by fezziwig · 2014-03-11T18:04:31.552Z · LW(p) · GW(p)

I've not read all your references yet, so perhaps you can just give me a link: why is it useful to classify your beliefs as contrarian or not? If you already know that e.g. most philosophers of religion believe in God but most physicists do not, then it seems like you already know enough to start drawing useful conclusions about your own correctness.

In other words, I guess I don't see how the "contrarianism" concept, as you've defined it, helps you believe only true things. It seems...incidental.

Replies from: katydee, lukeprog
comment by katydee · 2014-03-11T20:54:17.303Z · LW(p) · GW(p)

One reason to classify beliefs as contrarian is that it helps you discuss them more effectively. My presentation of an idea that I expect will seem contrarian is going to look very different from my presentation of an idea that I expect will seem mainstream, and it's useful to know what will and won't be surprising or jarring to people.

This seems most relevant to stage 3-- if you hold what you believe to be a correct contrarian view (as opposed to a correct mainstream view), this has important ramifications as to how to proceed in conversation. Thus, knowing which of your views are contrarian has instrumental value.

comment by lukeprog · 2014-03-11T18:36:17.010Z · LW(p) · GW(p)

I agree: see footnote 1.

The 'My Approach' summary was supposed to make clear that in the end it always comes down to a "model combination and adjustment" anyway, but maybe I didn't make that clear enough.

Replies from: fezziwig
comment by fezziwig · 2014-03-12T13:41:47.177Z · LW(p) · GW(p)

Mm, fair enough. Maybe I'm just getting distracted by the word "contrarian".

Would another reasonable title for this sequence be "How to Correctly Update on Expert Opinion", or would that miss some nuance?

comment by elharo · 2014-03-11T23:48:18.648Z · LW(p) · GW(p)

Not only

If someone doesn’t believe in God, they’re unlikely to spend their career studying arcane arguments for and against God’s existence. So most people who specialize in this topic are theists, but nearly all of them were theists before they knew the arguments.

but also

If someone doesn’t believe in UFAI, they’re unlikely to spend their career studying arcane arguments about AGI impact. So most people who specialize in this topic are UFAI believers, but nearly all of them were UFAI believers before they knew the arguments.

Thus I do not think you should rule out the opinions of the large community of AI experts who do not specialize in AGI impact.

Replies from: brainoil
comment by brainoil · 2014-03-31T02:52:28.165Z · LW(p) · GW(p)

This is a false analogy. You can be a believer in God when you're five years old and haven't read any relevant arguments due to childhood indoctrination that happens in every home. You might even believe in income redistribution when you're five years old if your parents tell you that it's the right thing to do. I'm pretty sure nobody teaches their children about UFAI that way. You'd have to know the arguments for or against UFAI to even know what that means.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-03-31T19:29:46.777Z · LW(p) · GW(p)

You'd have to know the arguments for or against UFAI to even know what that means.

You just have to watch the Terminator movies, or the Matrix, or read almost any science fiction with robots in it. The UFness of AI is a default assumption in popular culture.

Replies from: christopherj, Lumifer
comment by christopherj · 2014-04-01T04:18:35.140Z · LW(p) · GW(p)

It's more complicated than that. We use (relatively incompetent) AIs all over the place, and there is no public outcry, even as we develop combat AI for our UAVs and ground-based combat robots, most likely because everyone thinks of AIs as merely idiot-savant servants or computer programs. Few people think much about the distinction between specialized AIs and general AI, probably because we don't actually have any general AI, though no doubt they understand that the simpler AI "can't become self-aware".

People dangerously anthropomorphize AI, expecting it by default to assign huge values to human life (huge negative values in the case of "rogue AI"), with a common failure mode of immediate and incompetent homicidal rampage while being plagued by various human failings. Even general AIs are viewed as being inferior to humans in several aspects.

Overall, there is not a general awareness that a non-friendly general AI might cause a total extinction of human life due to apathy.

comment by Lumifer · 2014-03-31T19:51:02.643Z · LW(p) · GW(p)

The UFness of AI is a default assumption in popular culture.

This is true. On the other hand, the default is for the AI to be both unfriendly and stupid. Notice, for example, the complete inability of the Matrix overlords to make their agents hit anything they're shooting at :-D

comment by ThrustVectoring · 2014-03-11T19:20:59.006Z · LW(p) · GW(p)

Standard beliefs are only more likely to be correct when the cause of their standard-ness is causally linked to their correctness.

That takes care of things like, say, pro-American patriotism and pro-Christian religious fervor. Specifically, these ideas are standard not because contrary views are wrong, but because expressing contrary views makes you lose status in the eyes of a powerful in-group. Furthermore, it does not exclude beliefs like "classical physics is an almost entirely accurate description of the world at a macro scale" - inaccurate models would contradict observations of the world and get replaced with more accurate ones.

Granted, standard opinions often are standard because they are right. But, the more you can separate out the standard beliefs into ones with stronger and weaker links to correctness, the more this effect shows up in the former and not the latter.

To determine whether my view is contrarian, I ask whether there’s a fairly obvious, relatively trustworthy expert population on the issue.

I think that's on the same page as my initial thoughts on the matter. At least, it is a useful heuristic that applies more to correct standard beliefs than incorrect ones.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2014-03-14T10:57:34.410Z · LW(p) · GW(p)

I'm sympathetic to your position. Note, however, that the causal origin of a belief is itself a question about which there can be disagreement. So the same sort of considerations that make you give epistemic weight to majoritarian opinion should sometimes make you revise your decision to dismiss some of those majorities on the grounds that their beliefs do not reliably track the truth. For example, do most people agree with your causal explanation of pro-Christian religious fervor? If not, that may itself give you a reason to distrust those explanations, and consequently increase the evidential value you give to the beliefs of Christians. Of course, you can try to debunk the beliefs of the majority of people who disagree with your preferred causal explanation, but that just shifts the dispute to another level, rather than resolving it conclusively. (I'm not saying that, in the end, you can't be justified in dismissing the opinions of some people; rather, I'm saying that doing this may be trickier than it might at first appear. And, for the record, I do think that pro-Christian religious fervor is crazy.)

comment by Error · 2014-03-11T21:33:38.947Z · LW(p) · GW(p)

Some apparently contrarian views are probably actually contrarian values.

It doesn't help that a lot of people conflate beliefs and values. That issue has come up enough that I now almost instinctively respond to "how in hell can you believe that??!" by double-checking whether we're even talking about the same thing.

It also does not help that "believing" and "believing in" are syntactically similar but have radically different meanings.

Is my atheism a contrarian view? It’s definitely a world model, not a value judgment

Is it, though? I suspect that to most people discussing the subject, it's actually a disguised value judgement.

Replies from: Protagoras
comment by Protagoras · 2014-03-11T21:45:19.707Z · LW(p) · GW(p)

Most people discussing the subject from which side? If you think those who profess atheism are mostly expressing disguised value judgments, I'd like to know which ones you suspect they're expressing; the theory doesn't sound very plausible to me. I do grant that I find it more plausible that religious claims are mostly disguised value judgments, no doubt including religious objections to atheism, but I don't think it's the case that atheists are commonly taking the other side of those value questions; rather, I'm inclined to take them at face value as rejecting the factual disguise. Do you have reasons for thinking otherwise? Or did you just mean to comment on the value judgment aspect of the religious majority side of the dispute?

Replies from: christopherj
comment by christopherj · 2014-03-29T18:57:59.737Z · LW(p) · GW(p)

I'd say atheism correlates strongly with various value judgements. For example, almost everyone who doesn't believe in a god, also doesn't approve of that god's morality. Few people believe that a given god has excellent morals but does not exist. And many people lose their religion when they get upset at their god/gods. Part of this is likely to be because said god is used as the source or justification for a morality, so that rejecting one will result in rejecting the other. I believe there was also research indicating that whatever a person believes, they're likely to believe their god believes the same.

comment by XiXiDu · 2014-03-11T18:32:39.035Z · LW(p) · GW(p)

If you are already unable to determine the relevant expert community, you should perhaps ask how accurate the people who started a new research field have been, compared to people decades after the field has been established.

If it turns out that most people who founded a research field should expect their models to be radically revised at some point, then you should probably focus on verifying your models rather than prematurely drawing action relevant conclusions.

Replies from: Will_Sawin
comment by Will_Sawin · 2014-03-11T21:33:23.973Z · LW(p) · GW(p)

No, you should focus on founding a research field, which mainly requires getting other people interested in the research field.

comment by Jonathan Paulson (jpaulson) · 2014-03-13T05:04:43.508Z · LW(p) · GW(p)

Doesn't "contrarian" just mean "disagrees with the majority"? Any further logic-chopping seems pointless and defensive.

The fact that 98% of people are theists is evidence against atheism. I'm perfectly happy to admit this. I think there is other, stronger evidence for atheism, but the contrarian heuristic definitely argues for belief in God.

Similarly, believing that cryonics is a good investment is obviously contrarian. AGI is harder to say; most people probably haven't thought about it.

It seems like the question you're really trying to answer is "what is a good prior belief for things I am not an expert on?"

(I'm sorry about arguing over terminology, which is usually stupid, but this case seems egregious to me).

Replies from: elharo, JQuinton
comment by elharo · 2014-03-13T12:06:28.999Z · LW(p) · GW(p)

I wonder. Perhaps the fact that 98% of people are theists is better evidence that theism is useful than that it's correct. In fact, I think the 98%, or even an 80% figure, is pretty damn strong evidence that theism is useful; i.e. instrumentally rational. It's basic microeconomics: if people didn't derive value from religion, they'd stop doing it. To cite just one example, lukeprog has written previously about joining Scientology because they had the best Toastmasters group. There are many other benefits to be had by professing theism.

However, I'm not sure that this strong majority belief is particularly strong evidence that theism is correct, or epistemically rational. In particular, if it were epistemically rational, I'd expect religions would be more similar than they are. To say that 98% of people believe in God requires that one accept Allah, the Holy Trinity, and Hanuman as instances of "God". However, adherents of various religions routinely claim that others are not worshipping God at all (though admittedly this is less common than it used to be). Is there some common core nature of "God" that most theists believe in? Possibly, but it's a lot hazier. I've even heard some professed "theists" define God in such a way that it's no more than the physical universe, or even one small group of actual, currently living, not-believed-to-be-supernatural people. (This happens on occasion in Alcoholics Anonymous, for members who don't like accepting the "Higher Power".)

At the least, majority beliefs and practice are stronger evidence of instrumental rationality than epistemic rationality.

Are there other cases where we have evidence that epistemic and instrumental rationality diverge? Perhaps the various instances of Illusory Superiority; for instance where the vast majority of people think they're an above-average driver, or the Dunning-Kruger effect. Such beliefs may persist in the face of reality because they're useful to people who hold these beliefs.

Replies from: Articulator
comment by Articulator · 2014-03-25T02:25:31.909Z · LW(p) · GW(p)

I don't think it is so much that it suggests Theism is useful - rather that Theism is a concept which tends to propagate itself effectively, of which usefulness is one example. Effectively brainwashing participants at an early age is another. There are almost certainly several factors, only some of which are good.

comment by JQuinton · 2014-03-14T19:38:16.169Z · LW(p) · GW(p)

On the face of it, I also think that the fact that the majority believes something is evidence for that something. But then what about how consensus belief is also a function of time period?

How many times over the course of all human history has the consensus of average people been wrong about some fact about the universe? The consensus of say, what causes disease back in 1400 BCE is different than the consensus about the same today. What's to say that this same consensus won't point to something different 3400 years in the future?

It seems that looking at how many times the consensus has been wrong over the course of human history is actually evidence that "consensus" -- without qualification (e.g. consensus of doctors, etc.) -- is more likely to be wrong than right; the consensus seems to be weak evidence against said position.

Replies from: Nornagest
comment by Nornagest · 2014-03-14T21:15:47.752Z · LW(p) · GW(p)

Seems to me that we're more likely to remember instances where the expert consensus was wrong than instances where it wasn't. The consensus among Classical Greek natural philosophers in 300 BC was that the earth was round, and it turns out they were absolutely right.

And I can only pick that out as an example because of the later myth regarding Christopher Columbus et al. There are probably hundreds of cases where consensus, rather than being overturned by some new paradigm, withstood all challenges and slowly fossilized into common knowledge.

comment by Articulator · 2014-03-25T03:27:36.327Z · LW(p) · GW(p)

With all due respect, I feel like this subject is somewhat superfluous. It seems to be trying to chop part of a general concept off into its own discrete category.

This can all be simplified into accepting that Expert and Common majority opinion are both types of a posteriori evidence that can support an argument, but can be overturned by better a posteriori or a priori evidence.

In other words, they are pretty good heuristics, but like any heuristics, can fail. Making anything more out of it seems to just be artificial, and only necessary if the basic concept proves too difficult to understand.

comment by jimrandomh · 2014-03-12T17:22:23.388Z · LW(p) · GW(p)

If relevant experts seem to disagree with a position, this is evidence against it. But this evidence is easily screened off, if:

  • The standard position is fully explained by a known bias
  • The position is new enough that newness alone explains why it's not widespread (eg nutrition)
  • The nonstandard position is more complicated than the standard one, and exceeds the typical expert's complexity limit (AGI)
  • The nonstandard position would destroy concepts in which experts have substantial sunk costs (Bayesianism)
Replies from: V_V, elharo
comment by V_V · 2014-03-13T13:50:54.714Z · LW(p) · GW(p)

The position is new enough that newness alone explains why it's not widespread (eg nutrition)

Nutrition what?

The nonstandard position is more complicated than the standard one, and exceeds the typical expert's complexity limit (AGI)

And the non-experts arguing the non-standard position are supposed to be smarter than typical experts?

The nonstandard position would destroy concepts in which experts have substantial sunk costs (Bayesianism)

What do you mean by Bayesianism? Bayesian statistics or Bayesian epistemology? How would they destroy concepts in which experts have substantial sunk costs?

Replies from: jimrandomh
comment by jimrandomh · 2014-03-13T18:55:43.407Z · LW(p) · GW(p)

Nutrition what?

This was shorthand for "I hold several contrarian beliefs about nutrition which seem to fit this pattern but don't really belong in this comment".

And the non-experts arguing the non-standard position are supposed to be smarter than typical experts?

Sometimes. To make a good decision about whether to copy a contrarian position, you generally have to either be smarter (but perhaps less domain-knowledgeable) than typical experts, or else have a good estimate of some other contrarian's intelligence and rationality and judge them to be high. (If you can't do either of these things, then you have little hope of choosing correct contrarian beliefs.)

What do you mean by Bayesianism? Bayesian statistics or Bayesian epistemology? How would them destroy concepts in which experts have substantial sunk costs?

I mean Bayesian statistical methods, as opposed to Frequentist ones. (This isn't a great example because there's not actually such a clean divide, and the topic is tainted by prior use as a Less Wrong shibboleth. Luke's original example - theology - illustrates the point pretty well).

Replies from: Lumifer
comment by Lumifer · 2014-03-13T19:01:05.913Z · LW(p) · GW(p)

If you can't do either of these things, then you have little hope of choosing correct contrarian beliefs.

Notably, even if you can't do either of these things, sometimes you can rationally reject the mainstream position if you can conclude that the incentive structure for the "typical experts" makes them hopelessly biased in a particular direction.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2014-03-14T02:21:45.483Z · LW(p) · GW(p)

This shouldn't lead to rejection of the mainstream position, exactly, but rejection of the evidential value of mainstream belief, and reversion to your prior belief / agnosticism about the object-level question.

comment by elharo · 2014-03-13T12:17:28.363Z · LW(p) · GW(p)

I doubt that in the particular case of AGI, the nonstandard position's complexity exceeds the typical AI expert's complexity limit. I know a few AI experts, and they can handle extreme complexity. For that matter, I think AGI is well within the complexity limits of general computer science/mathematics/physics/science experts and at least some social science experts (e.g. Robin Hanson).

In fact, the number of such experts that have looked seriously at AGI, and come to different conclusions, strongly suggests to me that the jury is still out on this one. The answers, whatever they are, are not obvious or self-evident.

Repeat the same but s/AGI/Bayesianism. Bayesianism is routinely and quickly adopted within the community of mathematicians/scientists/software developers when it is useful and produces better answers. The conflict between Bayesianism and frequentism that is sometimes alluded to here is simply not an issue in every day practical work.

comment by itaibn0 · 2014-03-11T21:11:06.844Z · LW(p) · GW(p)

Warning: Reference class tennis below.

I think you're neglecting something when trying to determine the right group of experts for judging AGI risk. You consider experts on AI, but the AGI risk thesis is not just a belief about the behavior of AI; it is also a belief about our long-term future, and it is incompatible with many other beliefs intelligent people hold about our long-term future. Therefore I think you should also consider experts on humanity's long-term future as relevant. As an analogy, if the question you want to answer is "Is the Bible a correct account of the nature of God?", you should not just ask experts on the Bible. Instead, you should broaden your question to "What religion is true?" and ask experts on religion in general.

A difficult aspect is that it's not clear who is an expert on the long-term future, seeing as almost everybody is interested in the issue. Politicians, historians, economists, and philosophers are all candidates. Perhaps the best candidates are the super forecasters you mentioned.

Replies from: fubarobfusco
comment by fubarobfusco · 2014-03-12T01:53:28.757Z · LW(p) · GW(p)

As an analogy, if the question you want to answer is "Is the Bible a correct account of the nature of God?", you should not just ask experts on the Bible. Instead, you should broaden your question to "What religion is true?" and ask experts on religion in general.

One thing we notice when we look at differences among religious views in the population is that there is a substantial geographic component. People in Germany are a lot more likely than people in Japan to be Roman Catholics. People in Bangkok, Thailand are a lot more likely than people in Little Rock, Arkansas to be Buddhists.

Replies from: Brian_Tomasik, Jayson_Virissimo
comment by Brian_Tomasik · 2014-03-12T09:03:22.007Z · LW(p) · GW(p)

There are also clusters of opinion in many other fields based on location (continental philosophy, Chicago school of economics, Washington consensus, etc.).

comment by Jayson_Virissimo · 2014-03-12T18:01:25.960Z · LW(p) · GW(p)

One thing we notice when we look at differences among religious views in the population is that there is a substantial geographic component. People in Germany are a lot more likely than people in Japan to be Roman Catholics. People in Bangkok, Thailand are a lot more likely than people in Little Rock, Arkansas to be Buddhists.

If I am not mistaken, Newtonian physics was accepted almost instantly in the Anglosphere, while Cartesian physics was dominant on the Continent for an extended period of time thereafter. Also, interpretations of probability (like Frequentism and Bayesianism) have often clustered by university (or even particular faculties within an individual university). So, physics and statistics seem to suffer from the same problem.

Replies from: fubarobfusco, Jiro
comment by fubarobfusco · 2014-03-13T06:56:54.641Z · LW(p) · GW(p)

For religion, the difference seems to persist over millennia, even under reasonably close contact, except in specific circumstances of authority and conquest — whereas for science and technology, adoption spreads over years or decades whenever there's close contact and comparison.

Religious conquests such as the Catholic conquest of South America don't seem more common worldwide than persistent religious differences such as in India, where Christians have remained a 2% minority despite almost 2000 years of contact including trade, missionary work, and even occasional conquest (the British Raj).

With religion, whenever there aren't authorities with the political power to expel heretics, syncretism and folk-religion develop — the blending of religious traditions, rather than the inexorable disproof and overturning of one by another.

This suggests that differences in religious practice do not reliably bring the socioeconomic and geopolitical advantages that differences in scientific and technological practice bring. If one of the major religions brought substantial advantages, we would not expect to see the persistence of religious differences over millennia that we do.

(IIRC, Newtonian physics spread to the Continent beginning with du Châtelet's translation of the Principia into French, some sixty years after its original publication.)

comment by Jiro · 2014-03-12T18:45:45.778Z · LW(p) · GW(p)

First of all, it seems like the mechanisms for this happening are different for science. One rarely sees scientific ideas spread by conquest, or by the kind of social pressure brought to bear by religions. Nobody gets told by their mother to believe Newton's Laws every week at college in the way they might be told to go to Mass.

Second, everyone is (according to religion) supposed to believe in and live by religion, whether they are educated or not. I'd have greater expectations that beliefs associated with education are geographically limited.

Replies from: Lumifer, Eugine_Nier
comment by Lumifer · 2014-03-12T20:00:36.879Z · LW(p) · GW(p)

One rarely sees scientific ideas spread by conquest

Except when a more-scientific society kicks the hell out of the less-scientific society and takes it over (or, maybe, just takes over its land and women). Science is very helpful for building weapons.

Replies from: Strange7, TheAncientGeek
comment by Strange7 · 2014-03-13T05:26:45.177Z · LW(p) · GW(p)

In those situations, though, the conquering force usually makes a point to avoid letting their core scientific insights spread, lest the tables turn.

Replies from: Lumifer, Eugine_Nier
comment by Lumifer · 2014-03-13T15:07:05.512Z · LW(p) · GW(p)

the conquering force usually makes a point to avoid letting their core scientific insights spread, lest the tables turn.

Only if the conquered society survives. Often it doesn't -- look at what happened in the Americas.

Replies from: Strange7
comment by Strange7 · 2014-03-15T08:23:42.389Z · LW(p) · GW(p)

That seems like a subset, not an exception. A conquered society which doesn't survive is certainly not going to be adopting higher-tech weapons.

comment by Eugine_Nier · 2014-03-18T03:16:28.414Z · LW(p) · GW(p)

Well, that didn't stop Japan from getting Western advisers to help with its modernization after Perry opened it.

comment by TheAncientGeek · 2014-03-12T20:35:16.958Z · LW(p) · GW(p)

Science =/= technology. Profoundly anti-scientific movements can make their point using tech bought on the open market.

comment by Eugine_Nier · 2014-03-13T03:35:07.532Z · LW(p) · GW(p)

One rarely sees scientific ideas spread by conquest,

The spread of scientific ideas from Europe was almost entirely by conquest.

or by the kind of social pressure brought to bear by religions.

Try being openly creationist at a major university, you'll quickly discover the kind of social pressure science can bring to bear.

Nobody gets told by their mother to believe Newton's Laws every week at college in the way they might be told to go to Mass.

Although they might get told to go to the doctor and not the New Age healer.

Replies from: EHeller, Jiro, Bugmaster
comment by EHeller · 2014-03-13T03:54:28.441Z · LW(p) · GW(p)

Try being openly creationist at a major university, you'll quickly discover the kind of social pressure science can bring to bear.

There were several people in my physics PhD program who were openly creationist, and they were politely left alone. I don't know of an environment more science-filled, and honestly I've never known a higher density of creationists.

comment by Jiro · 2014-03-13T08:19:23.684Z · LW(p) · GW(p)

The spread of scientific ideas from Europe was almost entirely by conquest.

Only in a very literal sense. Nobody said "let's conquer everyone in order to spread the atomic theory of matter," and once they conquered, they didn't execute people who didn't believe in atoms or decide that people who don't believe in atoms are not permitted to testify in court.

Try being openly creationist at a major university, you'll quickly discover the kind of social pressure science can bring to bear.

That's not the same kind of social pressure. I'm referring to one's personal life, not one's professional life.

comment by Bugmaster · 2014-03-13T04:58:23.407Z · LW(p) · GW(p)

Although they might get told to go to the doctor and not the New Age healer.

Where do you draw the line, though ? Kids also get told to brush their teeth and to never play in traffic...

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-03-18T03:17:15.565Z · LW(p) · GW(p)

Ask Jiro, I'm not convinced a line exists.

Replies from: Jiro
comment by Jiro · 2014-03-18T20:39:28.152Z · LW(p) · GW(p)

The question dealt with beliefs. Brushing one's teeth (or going to a doctor instead of a healer) is an action. Going to Mass is technically an action, but its primary effect is to instill a belief system in the child.

comment by TheAncientGeek · 2014-03-11T19:59:04.864Z · LW(p) · GW(p)

Maybe the easy answer is to turn "contrarian" into a two-place predicate.

Replies from: JoshuaZ
comment by JoshuaZ · 2014-03-17T01:13:58.870Z · LW(p) · GW(p)

What would the two places be?

comment by christopherj · 2014-04-01T06:47:21.936Z · LW(p) · GW(p)

It seems to me that having some contrarian views is a necessity, despite the fact that most contrarian views are wrong. "Not every change is an improvement, but every improvement is a change." As such I'd recommend going meta, teaching other people the skills to recognize correct contrarian arguments. This of course will synergize with recognizing whether your own views are probable or suspect, as well as with convincing others to accept your contrarian views.

  1. Determine levels of expertise in the subject. Not a binary distinction between "expert" and "non-expert" that would put nutritionists, theologians, and futurists in the same category as physicists, materials scientists, and engineers.
     1a. The main determinants would be how easy it is to test things, and how much testing has been done.
     1b. What's the level of consensus? I'd say less than 90% consensus is suspicious; probably indicative of a difficult profession (the experts cannot give definitive well-tested answers).

  2. What's the experts' reaction to the contrarian view? Do the experts have good reason for rejecting the view, or do they become converts upon hearing it?

  3. What's the epistemic basis of the views? Are we talking about empirical tests, logical deduction, educated guesses, or wild speculation?

  4. Look for conflicts of interest. Don't exclude your own. Look for monetary interests, political interests, moral/values interests, emotional interests, aesthetic interests. Subjects like climate change and economic policy are so interest-laden that besides the difficulties in testing it becomes difficult to find honest, actual experts. Conversely, some ideas are accepted despite interests; dentists advise you against sugar despite their monetary interests, and quantum mechanics is accepted despite being unintuitive.

  5. Consider how you'd convince an honest, intelligent, well-educated expert to accept your contrarian view. If you don't think you can, odds are you don't have cause to believe it yourself.

  6. Test point 5. Remember, you're making a difference in the world, so don't make excuses.

comment by Filipe · 2014-03-12T17:58:16.292Z · LW(p) · GW(p)

Garth Zietsman, who, according to himself, "Scored an IQ of 185 on the Mega27 and has a degree in psychology and statistics and 25 years experience in psychometrics and statistics", proposed the statistical concept of The Smart Vote, which seems to resemble your "mildly extrapolated elite opinion". There are many applications of his idea to relevant topics on his blog.

It's not a matter of choosing the most popular answer among the smart people in any (aggregation of) poll(s). Rather, for each answer you compare the proportion of the most intelligent respondents choosing it to the proportion of the least intelligent choosing it, and The Smart Vote is the answer with the largest such ratio, after controlling for possible interests.
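For illustration, here is a minimal sketch of that ratio computation. The poll data, IQ cutoffs, and function name are hypothetical, and controlling for "possible interests" is omitted; this is not Zietsman's actual data or code.

```python
# Hypothetical sketch of the "Smart Vote" ratio described above.
# For each answer, compare the fraction of high-IQ respondents choosing it
# to the fraction of low-IQ respondents choosing it; the answer with the
# largest ratio wins the Smart Vote.

def smart_vote(answers, iq_scores, high_cutoff=120, low_cutoff=90):
    high = [a for a, iq in zip(answers, iq_scores) if iq >= high_cutoff]
    low = [a for a, iq in zip(answers, iq_scores) if iq <= low_cutoff]
    ratios = {}
    for ans in set(answers):
        p_high = high.count(ans) / len(high) if high else 0.0
        p_low = low.count(ans) / len(low) if low else 0.0
        if p_low > 0:  # skip answers no one in the low group chose
            ratios[ans] = p_high / p_low
    return max(ratios, key=ratios.get), ratios

# Made-up poll: each respondent's answer paired with their IQ score.
answers = ["yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes", "no"]
iqs = [135, 95, 128, 88, 82, 140, 90, 125, 85, 130]
print(smart_vote(answers, iqs))  # -> ('yes', ...): the high-IQ group favors "yes" disproportionately
```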

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2014-03-12T18:10:31.494Z · LW(p) · GW(p)

J. S. Mill had a similar idea:

...one might also mention his acceptance of the principle of multiple votes, in which educated and more responsible persons would be made more influential by giving them more votes than the uneducated.

-- Wilson, Fred, "John Stuart Mill", The Stanford Encyclopedia of Philosophy

Replies from: Jiro, jpaulson, Filipe
comment by Jiro · 2014-03-12T18:16:38.693Z · LW(p) · GW(p)

There is nobody who I'd trust to decide who is considered educated and more responsible and therefore who would get more votes. And the historical record on similar subjects is pretty bad.

I would also expect a feedback loop where people who can vote more vote to give more voting power to people like themselves.

(And I also find it odd that most people who contemplate such things assume they would be considered educated and more responsible. Imagine a world where, say, the socially responsible get more voting power, and that (for instance) thinking that there are innate differences between races or sexes disqualifies one from being socially responsible.)

comment by Jonathan Paulson (jpaulson) · 2014-03-13T05:09:59.150Z · LW(p) · GW(p)

The pervasive influence of money in politics sort of functions as a proxy of this. YMMV for whether it's a good thing...

comment by Filipe · 2014-03-12T18:18:34.992Z · LW(p) · GW(p)

Even though he calls it "The Smart Vote", the concept is a way to figure out the truth, not to challenge current democratic notions (I think), and is quite a bit more sophisticated than merely giving greater weight to smarter people's opinions.

comment by [deleted] · 2014-03-12T12:55:22.591Z · LW(p) · GW(p)

But what’s the relevant expert population, here? Suppose it’s “academics who specialize in the arguments and evidence concerning whether a god or gods exist.” If so, then the expert population is probably dominated by academic theologians and religious philosophers, and my atheism is a contrarian view.

Have you actually checked whether most theologians and philosophers of religion believe in God? Have you picked out which God they believe in?

A priori, academics usually believe in God less than the general population.

Replies from: Alejandro1, pragmatist
comment by Alejandro1 · 2014-03-12T23:31:38.204Z · LW(p) · GW(p)

Here are the results of a survey of philosophers of religion specifically. It has lots of interesting data, among them:

  • Most philosophers of religion are committed Christians.

  • The most common reasons given for specializing in philosophy of religion presupposed a previous belief in religion. (E.g. "Faith seeking understanding"; "Find arguments in order to witness", etc.)

  • Most belief revisions due to studying philosophy of religion were in the direction of greater atheism rather than the opposite. However, this seems to be explained by a combination of two separate facts: most philosophers of religion begin as theists as mentioned above, and most (from both sides) become less dogmatic and more appreciative of arguments for the opposing view with time.

comment by pragmatist · 2014-03-12T16:52:45.601Z · LW(p) · GW(p)

According to the PhilPapers survey, around 70% of philosophers of religion are theists.

Replies from: None
comment by [deleted] · 2014-03-12T17:21:54.895Z · LW(p) · GW(p)

Has someone asked them why? Perhaps we should be theists.

Replies from: pragmatist, Protagoras, Jayson_Virissimo, chaosmage
comment by pragmatist · 2014-03-12T22:05:34.889Z · LW(p) · GW(p)

I will admit to not being all that familiar with contemporary arguments in the philosophy of religion. However, there are other areas of philosophy with which I am quite familiar, and where I regard the debates as basically settled. According to the PhilPapers survey, pluralities of philosophers of religion line up on the wrong side of those debates. For example, philosophers of religion are much more likely (than philosophers in general) to believe in libertarian free will, non-physicalism about the mind, and the A-theory of time (a position that has, for all intents and purposes, been refuted by the theory of relativity). These are not, by the way, issues that are incidental to a philosopher of religion's area of expertise. I imagine views about the mind, the will and time are integral to most intellectual theistic frameworks.

The fact that these philosophers get things so wrong on these issues considerably reduces my credence that I will find their arguments for God convincing. And this is not just a facile "They're wrong about these things, so they're probably wrong about that other thing too" kind of argument. Their views on those issues are indicative of a general philosophical approach -- one that takes our common-sense conceptual scheme and our pre-theoretic intuitions as much stronger evidence than I think they actually are, and correspondingly takes the deliverances of our best scientific theories much less seriously than I do. I very strongly suspect that their arguments for theism will fit this pattern (reliance on a priori "common-sense" principles like the Principle of Sufficient Reason, for example).

I am familiar with the kinds of arguments made by people who adopt this philosophical outlook -- not in the case of theism specificially, but in other domains of philosophy -- and I don't find them all that compelling. In fact, I think they represent much of what is pathological about contemporary philosophy. So I think there is sound evidence that philosophers of religion tend to practice a mode of philosophy which, although quite sophisticated and intellectually challenging, is not particularly truth-conducive.

Replies from: None, Protagoras
comment by [deleted] · 2014-03-13T23:33:06.723Z · LW(p) · GW(p)

Their views on those issues are indicative of a general philosophical approach -- one that takes our common-sense conceptual scheme and our pre-theoretic intuitions as much stronger evidence than I think they actually are, and correspondingly takes the deliverances of our best scientific theories much less seriously than I do. I very strongly suspect that their arguments for theism will fit this pattern (reliance on a priori "common-sense" principles like the Principle of Sufficient Reason, for example).

So you're saying they practice non-naturalized philosophy? Are you sure these are philosophers of religion we're dealing with and not AIXI instances incentivized by professorships?

comment by Protagoras · 2014-03-12T22:44:41.058Z · LW(p) · GW(p)

Very good diagnosis. While I don't recall where the philosophers of religion came out in this area on the survey, some of the popular arguments for the existence of God seem to rely on a very strong version of mathematical Platonism, a belief that there is a one true logic/mathematics and that strong conclusions about the world can be drawn by the proper employment of that logic (the use of PSR that you mention is a common, but not the only, example of this). Since I reject that kind of One True Logic (I'm a Carnapian, "in logic there are no morals!" guy), I tend to think that any logical proof of the existence of God (or anything else) serves only to establish that you are using a logic which has a built-in "God exists" (or whatever) assumption. For example, the simple modal ontological argument which says God possibly exists, God is a necessary being, so by a few simple steps in S5, God exists; if you're using S5, then once you've made one of the assumptions about God, making the other one just amounts to assuming God exists, and so in effect committing yourself to reasoning only about God worlds. Such a logic may have its uses, but it is incapable of reasoning about the possibility that God might or might not exist; for such purposes a neutral logic would be required.
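For reference, one standard compressed reconstruction of the S5 argument being alluded to (a textbook form, not necessarily the exact version under discussion):

```latex
\begin{align*}
1.\quad & \Diamond \Box G && \text{premise: possibly, God exists necessarily} \\
2.\quad & \Diamond \Box G \rightarrow \Box G && \text{theorem of S5} \\
3.\quad & \Box G && \text{from 1 and 2} \\
4.\quad & \Box G \rightarrow G && \text{axiom T: what is necessary is actual} \\
5.\quad & G && \text{from 3 and 4}
\end{align*}
```

Once step 1 is granted along with the stipulation that God is a necessary being, S5 delivers the conclusion almost immediately, which is the sense in which the assumptions build the existence of God into the logic.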

comment by Protagoras · 2014-03-12T18:24:56.404Z · LW(p) · GW(p)

I read some of the discussion on philosophy of religion blogs after the survey came out. One of the noteworthy results of the survey was that philosophers who don't specialize in philosophy of religion were about 3/4 atheists. One or two of the philosophy of religion bloggers claimed that their non-specialist colleagues weren't familiar with some of the recent literature presenting new arguments for the existence of God. As a philosopher who doesn't specialize in philosophy of religion, I thought they underestimated how familiar people like me are with the arguments concerning theism. However, I admit for people like me it comes up especially in history, so I followed up on that and looked at some of the recommended recent papers. I was unable to find anything that looked at all compelling, or really even new, but perhaps I didn't try hard enough.

Replies from: None
comment by [deleted] · 2014-03-12T19:48:40.965Z · LW(p) · GW(p)

So, sum total, you're saying that philosophers of religion believe because they engage in special pleading to get separate epistemic standards for God?

(Please note that my actual current position is strong agnosticism: "God may exist, but if so, He's plainly hiding, and no God worthy of the name would be incapable of hiding from me, so I cannot know if such a God exists or not.")

Replies from: Protagoras, Strange7
comment by Protagoras · 2014-03-12T20:28:34.686Z · LW(p) · GW(p)

Well, I didn't want to go into detail, because I don't remember all the details and didn't feel like wasting time looking it up again, but yes, essentially. The usual form is "if you make these seemingly reasonable assumptions, you get to God, so God!" Usually the assumptions didn't actually look that reasonable to me to begin with, and of course an alternative response to the arguments is always that they provide evidence that the assumptions are much more powerful than they seem and so need much closer examination.

comment by Strange7 · 2014-03-13T05:45:33.686Z · LW(p) · GW(p)

I've got a variant of that. "Assuming God exists, He seems to be going to some trouble to hide. Either He doesn't want to be found, in which case the polite thing is to respect that, or He's doing some screwy reverse-psychology thing, in which case I have better things to do with my time than engage an omnipotent troll."

comment by Jayson_Virissimo · 2014-03-12T23:05:10.641Z · LW(p) · GW(p)

Has someone asked them why? Perhaps we should be theists.

Yes, they say something like this, this, or this. Whether that is their True Reason for Believing...well, I don't think so in most cases, but that is just my intuition.

BTW, I'm also a theist (some might say on a technicality) for the fairly boring and straightforward reason that I affirm Bostrom's trilemma, deny its first two disjuncts, and accept having created our universe as a sufficient (but not necessary) condition for godhood.

Replies from: TheAncientGeek, None
comment by TheAncientGeek · 2014-03-14T11:09:10.861Z · LW(p) · GW(p)

The trilemma should be a tetralemma: you also need to deny that consciousness, qualia and all, is unsimulable.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2015-07-10T21:41:05.992Z · LW(p) · GW(p)

You are correct and I deny that consciousness is unsimulable. The addition of this disjunct does reduce my subjective probability of simulation-theism, but not by much.

Interestingly, unsimulable consciousness would increase the probability of other, more common, types of theism. Don't you think?

comment by [deleted] · 2014-03-14T07:05:14.357Z · LW(p) · GW(p)

BTW, I'm also a theist (some might say on a technicality) for the fairly boring and straightforward reason that I affirm Bostrom's trilemma, deny its first two disjuncts, and accept having created our universe as a sufficient (but not necessary) condition for godhood.

So you believe there exists a Matrix Lord? Basically, computational Deism?

comment by chaosmage · 2014-03-12T17:32:01.986Z · LW(p) · GW(p)

Would you believe their self-reported answer to be the actual reason?

Replies from: None
comment by [deleted] · 2014-03-12T17:47:57.147Z · LW(p) · GW(p)

I would believe that their self-reported answer is at least partially the real reason. I would also want to know about any mitigating factors, of course, but saying anything about the reasons for their beliefs implies having some evidence.

comment by tom_cr · 2014-03-11T21:55:11.401Z · LW(p) · GW(p)

A minor point in relation to this topic, but an important one generally:

It seems to be more of a contrarian value judgment than a contrarian world model

Correct me if I'm wrong, but isn't a value judgement necessarily part of a world model? You are a physical object, and your values necessarily derive from the arrangement of the matter that composes you.

Many tell me (effectively) that what I've just expressed is a contrarian view. Certainly, for many years I would have happily agreed with the non-overlapping-ness of value judgements and world views. But then I started to think about it. I thought about it all the more carefully because it seemed the conclusion I was reaching was a contrarian position. I thought about it so much, in fact, that it's now quite obvious to me that I'm right, regardless how large the majority who profess to disagree with me.

Perhaps this illustrates the utility of recognizing an idea's contrarian nature (and conversely, the danger of not pursuing ideas simply because consensus is deemed to have been already reached).

Replies from: nshepperd, Lumifer
comment by nshepperd · 2014-03-12T00:30:01.510Z · LW(p) · GW(p)

Correct me if I'm wrong, but isn't a value judgement necessarily part of a world model? You are a physical object, and your values necessarily derive from the arrangement of the matter that composes you.

That's confusing levels. A world model that makes some factual assertions, some of which imply "my values are X", is a distinct thing from your values actually being X. To begin with, it's entirely possible for your world model to imply that "my values are X" when your values are actually Y, in which case your world model is wrong.

Replies from: tom_cr
comment by tom_cr · 2014-03-12T03:47:11.877Z · LW(p) · GW(p)

What levels am I confusing? Are you sure it's not you that is confused?

Your comment bears some resemblance to that of Lumifer. See my reply above.

Replies from: nshepperd
comment by nshepperd · 2014-03-12T04:22:34.353Z · LW(p) · GW(p)

To put it simply, what I am saying is that a value judgement is about whatever it is you are in fact judging, while a factual assertion such as you would find in a "model of the world" is about the physical configuration of your brain. This is similar to the use/mention distinction in linguistics. When you make a value judgement you use your values. A model of your brain mentions them.

An argument like this

You are a physical object, and your values necessarily derive from the arrangement of the matter that composes you [therefore a value judgement is necessarily part of a world model].

could be equally well applied to claim that the act of throwing a ball is necessarily part of a world model, because your arm is physical. In fact, they are completely different things (for one thing, simply applying a model will never result in the ball moving), even though a world model may well describe the throwing of a ball.

Replies from: tom_cr
comment by tom_cr · 2014-03-12T15:52:32.602Z · LW(p) · GW(p)

A value judgement both uses and mentions values.

The judgement is an inference about values. The inference derives from the fact that some value exists. (The existing value exerts a causal influence on one's inferences.)

This is how it is with all forms of inference.

Throwing a ball is not an inference (note that 'inference' and 'judgement' are synonyms), thus throwing a ball is in no way necessarily part of a world model, and, for our purposes, in no way analogous to making a value judgement.

Replies from: nshepperd
comment by nshepperd · 2014-03-12T23:27:08.433Z · LW(p) · GW(p)

Here is a quote from the article:

Is my effective altruism a contrarian view? It seems to be more of a contrarian value judgment than a contrarian world model, and by “contrarian view” I tend to mean “contrarian world model.”

Lukeprog thinks that effective altruism is good, and this is a value judgement. Obviously, most of mainstream society doesn't agree—people prefer to give money to warm fuzzy causes, like "adopt an endangered panda". So that value judgement is certainly contrarian.

Presumably, lukeprog also believes that "lukeprog thinks effective altruism is good". This is a fact in his world model. However, most people would agree with him when asked if that is true. We can see that lukeprog likes effective altruism. There's no reason for anyone to claim "no, he doesn't think that" when he obviously does. So this element of his world model is not contrarian.

Replies from: tom_cr
comment by tom_cr · 2014-03-13T02:40:24.348Z · LW(p) · GW(p)

I guess Lukeprog also believes that Lukeprog exists, and that this element of his world view is also not contrarian. So what?

One thing I see repeatedly in others is a deep-rooted reluctance to view themselves as blobs of perfectly standard physical matter. One of the many ways this manifests itself is a failure to consider inferences about one's own mind as fundamentally similar to any other form of inference. There seems to be an assumption of some kind of non-inferable magic when many people think about their own motivations. I'm sure you appreciate how fundamentally silly this is, but maybe you could take a little time to meditate on it some more.

Sorry if my tone is a little condescending, but understand that you have totally failed to support your initial claim that I was confused.

Replies from: nshepperd, Strange7
comment by nshepperd · 2014-03-13T12:05:03.299Z · LW(p) · GW(p)

That's not at all what I meant. Obviously minds and brains are just blobs of matter.

You are conflating the claims "lukeprog thinks X is good" and "X is good". One is an empirical claim, one is a value judgement. More to the point, when someone says "P is a contrarian value judgement, not a contrarian world model", they obviously intend "world model" to encompass empirical claims and not value judgements.

Replies from: tom_cr
comment by tom_cr · 2014-03-13T16:45:47.364Z · LW(p) · GW(p)

I'm not conflating anything. Those are different statements, and I've never implied otherwise.

The statement "X is good," which is a value judgement, is also an empirical claim, as was my initial point. Simply restating your denial of that point does not constitute an argument.

"X is good" is a claim about the true state of X, and its relationship to the values of the person making the claim. Since you agree that values derive from physical matter, you must (if you wish to be coherent) also accept that "X is good" is a claim about physical matter, and therefore part of the world model of anybody who believes it.

If there is some particular point or question I can help with, don't hesitate to ask.

Replies from: nshepperd
comment by nshepperd · 2014-03-13T21:23:43.993Z · LW(p) · GW(p)

If "X is good" was simply an empirical claim about whether an object conforms to a person's values, people would frequently say things like "if my values approved of X, then X would be good" and would not say things like "taking a murder pill doesn't affect the fact that murder is bad".

Alternative: what if "X is good" was a mathematical claim about the value of a thing according to whatever values the speaker actually holds?

Replies from: tom_cr
comment by tom_cr · 2014-03-13T22:03:10.051Z · LW(p) · GW(p)

If "X is good" was simply an empirical claim about whether an object conforms to a person's values, people would frequently say things like "if my values approved of X, then X would be good"....

If that is your basis for a scientific standard, then I'm afraid I must withdraw from this discussion.

Ditto, if this is your idea of humor.

what if "X is good" was a mathematical claim about the value of a thing according to whatever values the speaker actually holds?

That's just silly. What if c = 299,792,458 m/s is a mathematical claim about the speed of light, according to what the speed of light actually is? May I suggest that you don't invent unnecessary complexity to disguise the demise of a long deceased argument.

No further comment from me.

comment by Strange7 · 2014-03-13T05:35:27.584Z · LW(p) · GW(p)

My theory is that the dualistic theory of mind is an artifact of the lossy compression algorithm which, conveniently, prevents introspection from turning into infinite recursion. Lack of neurosurgery in the environment of ancestral adaptation made that an acceptable compromise.

Replies from: tom_cr
comment by tom_cr · 2014-03-13T17:11:29.586Z · LW(p) · GW(p)

I quite like Bob Trivers' self-deception theory, though I only have tangential acquaintance with it. We might anticipate that self-deception is harder if we are inclined to recognize the bit we call "me" as caused by some inner mechanism; hence, if Trivers is on to something, it may be profitable to suppress that recognition.

Wild speculation on my part, of course. There may simply be no good reason, from the point of view of historic genetic fitness, to be good at self-analysis, and you're quite possibly on to something: that the computational overhead just doesn't pay off.

comment by Lumifer · 2014-03-12T01:06:05.682Z · LW(p) · GW(p)

but isn't a value judgement necessarily part of a world model? You are a physical object, and your values necessarily derive from the arrangement of the matter that composes you.

The issue is, whose world model? Your world model does not necessarily include values even if they were to be deterministically derived from "the arrangement of the matter".

The map is not the territory. Models are imperfect, and many different models can be built on the basis of the same reality.

Replies from: tom_cr
comment by tom_cr · 2014-03-12T02:19:03.362Z · LW(p) · GW(p)

whose world model?

Trivially, it is the world model of the person making the value judgement I'm talking about. I'm trying hard, but I'm afraid I really don't understand the point of your comment.

If I make a judgement of value, I'm making an inference about an arrangement of matter (mostly in my brain), which (inference) is therefore part of my world model. This can't be otherwise.

Furthermore, any entity capable of modeling some aspect of reality must be, by definition, capable of isolating salient phenomena, which amounts to making value judgements. Thus, I'm forced to disagree when you say "your world model does not necessarily include values..."

Your final sentence is trivially correct, but its relevance is beyond me. Sorry. If you mean that my world model may not include values I actually possess, this is correct of course, but nobody stipulated that a world model must be correct.

Replies from: Lumifer
comment by Lumifer · 2014-03-12T04:16:12.285Z · LW(p) · GW(p)

I don't think we understand each other. Let me try to unroll.

A model (of the kind we are talking about) is some representation of reality. It exists in a mind.

Let's take Alice. Alice holds an apple in her hand. Alice believes that if she lets go of the apple it will fall to the ground. This is an example of a simple world model that exists inside Alice's mind: basically, that there is such a thing as gravity and that it pulls objects towards the ground.

You said "isn't a value judgement necessarily part of a world model?" I don't see a value judgement in this particular world model inside Alice's mind.

You also said "You are a physical object, and your values necessarily derive from the arrangement of the matter that composes you." That is a claim about how Alice's values came to be. But I don't see why Alice's values must necessarily be part of all world models that exists inside Alice's mind.

Replies from: tom_cr
comment by tom_cr · 2014-03-12T15:42:29.643Z · LW(p) · GW(p)

I never said anything of the sort -- that Alice's values must necessarily be part of all world models that exist inside Alice's mind. (Note, though, that if we are talking about 'world model,' singular, as I was, then that world model necessarily includes perception of some values.)

When I say that a value judgement is necessarily part of a world model, I mean that if I make a value judgement, then that judgement is necessarily part of my world model.

Replies from: Lumifer
comment by Lumifer · 2014-03-12T15:55:33.633Z · LW(p) · GW(p)

When I say that a value judgement is necessarily part of a world model, I mean that if I make a value judgement, then that judgement is necessarily part of my world model.

So, Alice likes cabernet and dislikes merlot. Alice says "I value cabernet more than merlot". This is a value judgement. How is it a part of Alice's world model and which world model?

By any chance, are you calling "a world model" the totality of a person's ideas, perceptions, representations, etc. of external reality?

Replies from: tom_cr
comment by tom_cr · 2014-03-12T16:14:27.217Z · LW(p) · GW(p)

Alice is part of the world, right? So any belief about Alice is part of a world model. Any belief about Alice's preference for cabernet is part of a world model - specifically, the world model of whoever holds that belief.

By any chance....?

Yes. (The phrase "the totality of" could, without any impact on our current discussion, be replaced with "elements of".)

Is there something wrong with that? I inferred that to also be the meaning of the original poster.

Replies from: Lumifer
comment by Lumifer · 2014-03-12T16:20:39.841Z · LW(p) · GW(p)

specifically, the world model of whoever holds that belief

Not "whoever", we are talking specifically about Alice. Is Alice's preference for cabernet part of Alice's world model?

I have a feeling we're getting into snake-eating-its-own-tail loops. If Alice's preferences are part of Alice's world model, then Alice's world model is part of Alice's world model as well. Recurse until you are reduced to praying to the Holy Trinity of Godel, Escher, and Bach :-)

The phrase "the totality of" could, without any impact on our current discussion, be replaced with "elements of".

Could it? You are saying that value judgments must be a part of the world model. Are there "elements of" which do not contain value judgements?

Replies from: tom_cr
comment by tom_cr · 2014-03-12T16:41:20.249Z · LW(p) · GW(p)

Are there "elements of" which don't contain value judgements?

That strikes me as a question for dictionary writers. If we agree that Newton's laws of motion constitute such an element, then clearly, there are such elements that do not contain value judgements.

Is Alice's preference for cabernet part of Alice's world model?

iff she perceives that preference.

If Alice's preferences are part of Alice's world model, then Alice's world model is part of Alice's world model as well.

I'm not sure this follows by logical necessity, but how is this unusual? When I mention Newton's laws, am I not implicitly aware that I have this world model? Does my world model, therefore, not include some description of my world model? How is this relevant?

comment by Dagon · 2014-03-11T20:11:25.686Z · LW(p) · GW(p)

More recent (last week) Hanson writing on contrarianism: http://www.overcomingbias.com/2014/03/prefer-contrarian-questions-vs-answers.html He takes a tack similar to your "value contrarianism" - you and he both think that on these topics (values for you, important topics for him) the consensus (whichever one you're contrary-ing) is less likely to be correct.

I wonder if some topics, especially far-mode ones, don't have truth, or whether truth is less important to actions. Those topics would be the ones to choose for contrarian signaling.

comment by Shmi (shminux) · 2014-03-11T18:35:19.628Z · LW(p) · GW(p)

Hanson:

your being tempted to be contrarian on questions suggests that you are the sort of person who is also tempted to be contrarian on answers

This is a "bad contrarian", and if you suspect that as one of your reasons (I am fairly sure it is not), then the thing to do is not to worry about whether your view is contrarian, but to work on avoiding skewing your priors.

On the other hand, if, after a lot of research, you happen to find yourself in opposition to what appears to be the mainstream view, i.e. being a "good contrarian", then you should figure out whether you started with the same priors, and if so, carefully examine the steps that led you to diverge from the mainstream, and look for an independent assessment of your logic and evidence for each step. What is probably not productive is to look back from the final divergent point of view, wonder whether holding a contrarian view is evidence against it being "correct", and then discontinuously switch your belief to that of a relevant group of experts, which is what Beckstead appears to be doing.

comment by common_law · 2014-03-31T18:48:00.072Z · LW(p) · GW(p)

You're mistaken in applying the same standards to personal and deliberative decisions. The decision to enroll in cryonics is different in kind from the decision to promote safe AI for the public good. The first should be based on the belief that cryonics claims are true; the second should be based (ultimately) on the marginal value of advocacy in advancing the discussion. The failure to understand this distinction is a major failing in public rationality. For elaboration, see The distinct functions of belief and opinion.