Common sense as a prior
post by Nick_Beckstead · 2013-08-11T18:18:11.494Z · LW · GW · Legacy · 215 comments
Introduction
[I have edited the introduction of this post for increased clarity.]
This post is my attempt to answer the question, "How should we take account of the distribution of opinion and epistemic standards in the world?" By “epistemic standards,” I roughly mean a person’s way of processing evidence to arrive at conclusions. If people were good Bayesians, their epistemic standards would correspond to their fundamental prior probability distributions. At a first pass, my answer to this question is:
Main Recommendation: Believe what you think a broad coalition of trustworthy people would believe if they were trying to have accurate views and they had access to your evidence.
The rest of the post can be seen as an attempt to spell this out more precisely and to explain, in practical terms, how to follow the recommendation. Note that there are therefore two broad ways to disagree with the post: you might disagree with the main recommendation, or with the guidelines for following the main recommendation.
I am aware of two relatively close intellectual relatives to my framework: what philosophers call “equal weight” or “conciliatory” views about disagreement and what people on LessWrong may know as “philosophical majoritarianism.” Equal weight views roughly hold that when two people who are expected to be roughly equally competent at answering a certain question have different subjective probability distributions over answers to that question, those people should adopt some impartial combination of their subjective probability distributions. Unlike equal weight views in philosophy, my position is meant as a set of rough practical guidelines rather than a set of exceptionless and fundamental rules. I accordingly focus on practical issues for applying the framework effectively and am open to limiting the framework’s scope of application. Philosophical majoritarianism is the idea that on most issues, the average opinion of humanity as a whole will be a better guide to the truth than one’s own personal judgment. My perspective differs from both equal weight views and philosophical majoritarianism in that it emphasizes an elite subset of the population rather than humanity as a whole and that it emphasizes epistemic standards more than individual opinions. My perspective differs from what you might call "elite majoritarianism" in that, according to me, you can disagree with what very trustworthy people think on average if you think that those people would accept your views if they had access to your evidence and were trying to have accurate opinions.
I am very grateful to Holden Karnofsky and Jonah Sinick for thought-provoking conversations on this topic which led to this post. Many of the ideas ultimately derive from Holden’s thinking, but I've developed them, made them somewhat more precise and systematic, discussed additional considerations for and against adopting them, and put everything in my own words. I am also grateful to Luke Muehlhauser and Pablo Stafforini for feedback on this post.
In the rest of this post I will:
- Outline the framework and offer guidelines for applying it effectively. I explain why I favor relying on the epistemic standards of people who are trustworthy by clear indicators that many people would accept, why I favor paying more attention to what people think than why they say they think it (on the margin), and why I favor stress-testing critical assumptions by attempting to convince a broad coalition of trustworthy people to accept them.
- Offer some considerations in favor of using the framework.
- Respond to the objection that common sense is often wrong, the objection that the most successful people are very unconventional, and objections of the form “elite common sense is wrong about X and can’t be talked out of it.”
- Discuss some limitations of the framework and some areas where it might be further developed. I suspect it is weakest in cases where there is a large upside to disregarding elite common sense, there is little downside, and you’ll find out whether your bet against conventional wisdom was right within a tolerable time limit, and cases where people are unwilling to carefully consider arguments with the goal of having accurate beliefs.
An outline of the framework and some guidelines for applying it effectively
My suggestion is to use elite common sense as a prior rather than the standards of reasoning that come most naturally to you personally. The three main steps for doing this are:
- Try to find out what people who are trustworthy by clear indicators that many people would accept believe about the issue.
- Identify the information and analysis you can bring to bear on the issue.
- Try to find out what elite common sense would make of this information and analysis, and adopt a similar perspective.
On the first step, people often have an instinctive sense of what others think, though you should beware the false consensus effect. If you don’t know what other opinions are out there, you can ask some friends or search the internet. In my experience, regular people often have similar opinions to very smart people on many issues, but are much worse at articulating considerations for and against their views. This may be because many people copy the opinions of the most trustworthy people.
I favor giving more weight to the opinions of people who can be shown to be trustworthy by clear indicators that many people would accept, rather than people that seem trustworthy to you personally. This guideline is intended to help avoid parochialism and increase self-skepticism. Individual people have a variety of biases and blind spots that are hard for them to recognize. Some of these biases and blind spots—like the ones studied in cognitive science—may affect almost everyone, but others are idiosyncratic—like biases and blind spots we inherit from our families, friends, business networks, schools, political groups, and religious communities. It is plausible that combining independent perspectives can help idiosyncratic errors wash out.
In order for the errors to wash out, it is important to rely on the standards of people who are trustworthy by clear indicators that many people would accept rather than the standards of people that seem trustworthy to you personally. Why? The people who seem most impressive to us personally are often people who have similar strengths and weaknesses to ourselves, and similar biases and blind spots. For example, I suspect that academics and people who specialize in using a lot of explicit reasoning have a different set of strengths and weaknesses from people who rely more on implicit reasoning, and people who rely primarily on many weak arguments have a different set of strengths and weaknesses from people who rely more on one relatively strong line of argument.
Some good indicators of general trustworthiness might include: IQ, business success, academic success, generally respected scientific or other intellectual achievements, wide acceptance as an intellectual authority by certain groups of people, or success in any area where there is intense competition and success is a function of ability to make accurate predictions and good decisions. I am less committed to any particular list of indicators than the general idea.
Of course, trustworthiness can also be domain-specific. Very often, elite common sense would recommend deferring to the opinions of experts (e.g., listening to what physicists say about physics, what biologists say about biology, and what doctors say about medicine). In other cases, elite common sense may give partial weight to what putative experts say without accepting it all (e.g. economics and psychology). In other cases, they may give less weight to what putative experts say (e.g. sociology and philosophy). Or there may be no putative experts on a question. In cases where elite common sense gives less weight to the opinions of putative experts or there are no plausible candidates for expertise, it becomes more relevant to think about what elite common sense would say about a question.
How should we assign weight to different groups of people? Other things being equal, a larger number of people is better, more trustworthy people are better, people who are trustworthy by clearer indicators that more people would accept are better, and a set of criteria which allows you to have some grip on what the people in question think is better, but you have to make trade-offs. If I only included, say, the 20 smartest people I had ever met as judged by me personally, that would probably be too small a number of people, the people would probably have biases and blind spots very similar to mine, and I would miss out on some of the most trustworthy people, but it would be a pretty trustworthy collection of people and I’d have some reasonable sense of what they would say about various issues. If I went with, say, the 10 most-cited people in 10 of the most intellectually credible academic disciplines, 100 of the most generally respected people in business, and the 100 heads of different states, I would have a pretty large number of people and a broad set of people who were very trustworthy by clear standards that many people would accept, but I would have a hard time knowing what they would think about various issues because I haven’t interacted with them enough. How these factors can be traded-off against each other in a way that is practically most helpful probably varies substantially from person to person.
I can’t give any very precise answer to the question about whose opinions should be given significant weight, even in my own case. Luckily, I think the output of this framework is usually not very sensitive to how we answer this question, partly because most people would typically defer to other, more trustworthy people. If you want a rough guideline that I think many people who read this post could apply, I would recommend focusing on, say, the opinions of the top 10% of people who got Ivy-League-equivalent educations (note that I didn’t get such an education, at least as an undergrad, though I think you should give weight to my opinion; I’m just giving a rough guideline that I think works reasonably well in practice). You might give some additional weight to more accomplished people in cases where you have a grip on how they think.
I don’t have a settled opinion about how to aggregate the opinions of elite common sense. I suspect that taking straight averages gives too much weight to the opinions of cranks and crackpots, so that you may want to remove some outliers or give less weight to them. For the purpose of making decisions, I think that sophisticated voting methods (such as the Condorcet method) and analogues of the parliamentary approaches outlined by Nick Bostrom and Toby Ord seem fairly promising as rough guidelines in the short run. I don’t do calculations with this framework—as I said, it’s mostly conceptual—so uncertainty about an aggregation procedure hasn’t been a major issue for me.
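Purely to make the outlier-trimming idea concrete (I don’t actually run calculations like this), here is a minimal sketch; the numeric opinions and the trim fraction are arbitrary illustrative choices, and nothing in the framework depends on them:

```python
import numpy as np

def trimmed_mean(opinions, trim_fraction=0.1):
    """Average numeric opinions after dropping the most extreme values on
    each side, so a few cranks or crackpots can't drag the aggregate far."""
    x = np.sort(np.asarray(opinions, dtype=float))
    k = int(len(x) * trim_fraction)
    return x[k:len(x) - k].mean() if len(x) > 2 * k else x.mean()

# A single extreme opinion of 100 barely moves the trimmed aggregate.
print(trimmed_mean([2, 3, 3, 4, 4, 5, 100], trim_fraction=0.15))  # ~3.8 (untrimmed mean is ~17.3)
```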
On the margin, I favor paying more attention to people’s opinions than their explicitly stated reasons for their opinions. Why? One reason is that I believe people can have highly adaptive opinions and patterns of reasoning without being able to articulate good defenses of those opinions and/or patterns of reasoning. (Luke Muehlhauser has discussed some related points here.) This can happen because people can adopt practices that are successful without knowing why they are successful, others who interact with them can adopt those practices, still others can copy the practices in turn, and so forth. I heard an extreme example of this from Spencer Greenberg, who had read it in Scientists Greater than Einstein. The story involved a folk remedy for visual impairment:
There were folk remedies worthy of study as well. One widely used in Java on children with either night blindness or Bitot’s spots consisted of dropping the juices of lightly roasted lamb’s liver into the eyes of affected children. Sommer relates, “We were bemused at the appropriateness of this technique and wondered how it could possibly be effective. We, therefore, attended several treatment sessions, which were conducted exactly as the villagers had described, except for one small addition—rather than discarding the remaining organ, they fed it to the affected child. For some unknown reason this was never considered part of the therapy itself.” Sommer and his associates were bemused, but now understood why the folk remedy had persisted through the centuries. Liver, being the organ where vitamin A is stored in a lamb or any other animal, is the best food to eat to obtain vitamin A. (p. 14)
Another striking example is bedtime prayer. In many Christian traditions I am aware of, it is common to pray before going to sleep. And in the tradition I was raised in, the main components of prayer were listing things you were grateful for, asking for forgiveness for all the mistakes you made that day and thinking about what you would do to avoid similar mistakes in the future, and asking God for things. Christians might say the point of this is that it is a duty to God, that repentance is a requirement for entry to heaven, or that asking God for things makes God more likely to intervene and create miracles. However, I think these activities are reasonable for different reasons: gratitude journals are great, reflecting on mistakes is a great way to learn and overcome weaknesses, and it is a good idea to get clear about what you really want out of life in the short-term and the long-term.
Another reason I have this view is that if someone has an effective but different intellectual style from you, it’s possible that your biases and blind spots will prevent you from appreciating their points that have significant merit. If you partly give weight to opinions independently of how good the arguments seem to you personally, this can be less of an issue for you. Jonah Sinick described a striking reason this might happen in Many Weak Arguments and the Typical Mind:
We should pay more attention to people’s bottom line than to their stated reasons — If most high functioning people aren’t relying heavily on any one of the arguments that they give, if a typical high functioning person responds to a query of the type “Why do you think X?” by saying “I believe X because of argument Y” we shouldn’t conclude that the person believes argument Y with high probability. Rather, we should assume that argument Y is one of many arguments that they believe with low confidence, most of which they’re not expressing, and we should focus on their belief in X instead of argument Y. [emphasis his]
This idea interacts in a complementary way with Luke Muehlhauser’s claim that some people who are not skilled at explicit rationality may be skilled in tacit rationality, allowing them to be successful at making many types of important decisions. If we are interacting with such people, we should give significant weight to their opinions independently of their stated reasons.
A counterpoint to my claim that, on the margin, we should give more weight to others’ conclusions and less to their reasoning is that some very impressive people disagree. For example, Ray Dalio is the founder of Bridgewater, which, at least as of 2011, was the world’s largest hedge fund. He explicitly disagrees with my claim:
“I stress-tested my opinions by having the smartest people I could find challenge them so I could find out where I was wrong. I never cared much about others’ conclusions—only for the reasoning that led to these conclusions. That reasoning had to make sense to me. Through this process, I improved my chances of being right, and I learned a lot from a lot of great people.” (p. 7 of Principles by Ray Dalio)
I suspect that getting the reasoning to make sense to him was important because it helped him to get better in touch with elite common sense, and also because reasoning is more important when dealing with very formidable people, as I suspect Dalio did and does. I also think that for some of the highest functioning people who are most in touch with elite common sense, it may make more sense to give more weight to reasoning than conclusions.
The elite common sense framework favors testing unconventional views by seeing if you can convince a broad coalition of impressive people that your views are true. If you can do this, it is often good evidence that your views are supported by elite common sense standards. If you can’t, it’s often good evidence that your views can’t be so supported. Obviously, these are rules of thumb and we should restrict our attention to cases where you are persuading people by rational means, in contrast with using rhetorical techniques that exploit human biases. There are also some interesting cases where, for one reason or another, people are unwilling to hear your case or think about your case rationally, and applying this guideline to these cases is tricky.
Importantly, I don’t think cases where elite common sense is biased are typically an exception to this rule. In my experience, I have very little difficulty convincing people that some genuine bias, such as scope insensitivity, really is biasing their judgment. And if the bias really is critical to the disagreement, I think it will be a case where you can convince elite common sense of your position. Other cases, such as deeply entrenched religious and political views, may be more of an exception, and I will discuss the case of religious views more in a later section.
The distinction between convincing and “beating in an argument” is important for applying this principle. It is much easier to tell whether you convinced someone than it is to tell whether you beat them in an argument. Often, both parties think they won. In addition, sometimes it is rational not to update much in favor of a view if an advocate for that view beats you in an argument.
In support of this claim, consider what would happen if the world’s smartest creationist debated some fairly ordinary evolution-believing high school student. The student would be destroyed in argument, but the student should not reject evolution, and I suspect he should hardly update at all. Why not? The student should know that there are people out there in the world who could destroy him on either side of this argument, and his personal ability to respond to arguments is not very relevant. What should be most relevant to this student is the distribution of opinion among people who are most trustworthy, not his personal response to a small sample of the available evidence. Even if you genuinely are beating people in arguments, there is a risk that you will be like this creationist debater.
An additional consideration is that certain beliefs and practices may be reasonable and adopted for reasons that are not accessible to people who have adopted those beliefs and practices, as illustrated with the examples of the liver ritual and bedtime prayer. You might be able to “beat” some Christian in an argument about the merits of bedtime prayer, but praying may still be better than not praying. (I think it would be better still to introduce a different routine that serves similar functions—this is something I have done in my own life—but the Christian may be doing better than you on this issue if you don’t have a replacement routine yourself.)
Under the elite common sense framework, the question is not “how reliable is elite common sense?” but “how reliable is elite common sense compared to me?” Suppose I learn that, actually, people are much worse at pricing derivatives than I previously believed. For the sake of argument suppose this was a lesson of the 2008 financial crisis (for the purposes of this argument, it doesn't matter whether this is actually a correct lesson of the crisis). This information does not favor relying more on my own judgment unless I have reason to think that the bias applies less to me than the rest of the derivatives market. By analogy, it is not acceptable to say, “People are really bad at thinking about philosophy. So I am going to give less weight to their judgments about philosophy (psst…and more weight to my personal hunches and the hunches of people I personally find impressive).” This is only OK if you have evidence that your personal hunches and the hunches of the people you personally find impressive are better than elite common sense, with respect to philosophy. In contrast, it might be acceptable to say, “People are very bad at thinking about the consequences of agricultural subsidies in comparison with economists, and most trustworthy people would agree with this if they had my evidence. And I have an unusual amount of information about what economists think. So my opinion gets more weight than elite common sense in this case.” Whether this ultimately is acceptable to say would depend on how good elites are at thinking about the consequences of agricultural subsidies—I suspect they are actually pretty good at it—but this isn’t relevant to the general point that I’m making. The general point is that this is one potentially correct form of an argument that your opinion is better than the current stance of elite common sense.
This is partly a semantic issue, but I count the above example as a case where “you are more reliable than elite common sense,” even though, in some sense, you are relying on expert opinion rather than your own. But you have different beliefs about who is a relevant expert or what experts say than common sense does, and in this sense you are relying on your own opinion.
I favor giving more weight to common sense judgments in cases where people are trying to have accurate views. For example, I think people don’t try very hard to have correct political, religious, and philosophical views, but they do try to have correct views about how to do their job properly, how to keep their families happy, and how to impress their friends. In general, I expect people to try to have more accurate views in cases where it is in their present interests to have more accurate views. (A quick reference for this point is here.) This means that I expect them to strive more for accuracy in decision-relevant cases, cases where the cost of being wrong is high, and cases where striving for more accuracy can be expected to yield more accuracy, though not necessarily in cases where the risks and rewards won’t come for a very long time. I suspect this is part of what explains why people can be skilled in tacit rationality but not explicit rationality.
As I said above, what’s critical is not how reliable elite common sense is but how reliable you are in comparison with elite common sense. So it only makes sense to give more weight to your views when learning that others aren’t trying to be correct if you have compelling evidence that you are trying to be correct. Ideally, this evidence would be compelling to a broad class of trustworthy people and not just compelling to you personally.
Some further reasons to think that the framework is likely to be helpful
In explaining the framework and outlining guidelines for applying it, I have given some reasons to expect this framework to be helpful. Here are some more weak arguments in favor of my view:
- Some studies I haven’t personally reviewed closely claim that combinations of expert forecasts are hard to beat. For instance, a review (Clemen 1989) found that: "Considerable literature has accumulated over the years regarding the combination of forecasts. The primary conclusion of this line of research is that forecast accuracy can be substantially improved through the combination of multiple individual forecasts." (abstract) And recent work by the Good Judgment Project found that taking an average of individual forecasts and transforming it away from .5 credence gave the lowest errors of a variety of different methods of aggregating the judgments of forecasters (p. 42; see the first sketch after this list).
- There are plausible philosophical considerations suggesting that, absent special evidence, there is no compelling reason to favor your own epistemic standards over the epistemic standards that others use.
- In practice, we are extremely reliant on conventional wisdom for almost everything we believe that isn’t very closely related to our personal experience, and single individuals working in isolation have extremely limited ability to manipulate their environment in comparison with individuals who can build on the insights of others. To see this point, consider that a small group of very intelligent humans detached from all cultures wouldn’t have much of an advantage at all over other animal species in competition for resources, but humans are increasingly dominating the biosphere. A great deal of this must be chalked up to cultural accumulation of highly adaptive concepts, ideas, and procedures that no individual could develop on their own. I see trying to rely on elite common sense as highly continuous with this successful endeavor.
- Highly adaptive practices and assumptions are more likely to get copied and spread, and these practices and assumptions often work because they help you to be right. If you use elite common sense as a prior, you’ll be more likely to be working with more adaptive practices and assumptions.
- Some successful processes for finding valuable information, such as PageRank and Quora, seem analogous to the framework I have outlined. PageRank is one algorithm that Google uses to decide how high different pages should be in searches, which is implicitly a way of ranking high-quality information. I’m speaking about something I don’t know very well, but my rough understanding is that PageRank gives pages more votes when more pages link to them, and votes from a page get more weight if that page itself has a lot of votes. This seems analogous to relying on elite common sense because information sources are favored when they are regarded as high quality by a broad coalition of other information sources. Quora seems analogous because it favors answers to questions that many people regard as good.
- I’m going to go look at the first three questions I can find on Quora. I predict that I would prefer the answers that elite common sense would give to these questions to what ordinary common sense would say, and also that I would prefer elite common sense’s answers to these questions to my own except in cases where I have strong inside information/analysis. Results: 1st question: weakly prefer elite common sense, don’t have much special information. 2nd question: prefer elite common sense, don’t have much special information. 3rd question: prefer elite common sense, don’t have much special information. Note that I skipped a question because it was a matter of taste. This went essentially the way I predicted it to go.
- The type of mathematical considerations underlying Condorcet’s Jury Theorem gives us some reason to think that combined opinions are often more reliable than individual opinions, even though the assumptions underlying this theorem are far from totally correct (see the second sketch after this list).
- There’s a general cluster of social science findings that goes under the heading “wisdom of crowds” and suggests that aggregating opinions across people outperforms individual opinions in many contexts.
- Some rough “marketplace of ideas” arguments suggest that the best ideas will often become part of elite common sense. When claims are decision-relevant, people pay if they have dumb beliefs and benefit if they have smart beliefs. When claims aren’t decision-relevant, people sometimes pay a social cost for saying dumb things and get social benefits for saying things that are smarter, and the people with more information have more incentive to speak. For analogous reasons, people pay costs when they use and promote epistemic standards that are dumb and gain benefits when they use and promote epistemic standards that are smart. Obviously there are many other factors, including ones that point in different directions, but there is some kind of positive force here.
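To make the first bullet’s aggregation method concrete, here is a minimal sketch of “average the forecasts, then transform the average away from .5”; the exponent used here is an arbitrary illustrative value, not the one the Good Judgment Project actually used:

```python
import numpy as np

def extremized_mean(probabilities, a=2.5):
    """Average probability forecasts, then push the result away from 0.5
    by scaling it in log-odds space (a > 1 means 'extremize')."""
    p = float(np.mean(probabilities))
    log_odds = np.log(p / (1 - p))
    return 1 / (1 + np.exp(-a * log_odds))

# Five forecasters who each lean mildly toward "yes":
print(extremized_mean([0.6, 0.65, 0.7, 0.55, 0.6]))  # ~0.77, versus a raw mean of 0.62
```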
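And to illustrate the Condorcet bullet, a short calculation shows how quickly a majority verdict becomes reliable when voters are independent and each right with probability above .5 (these are, of course, the idealized assumptions noted in that bullet):

```python
from math import comb

def majority_correct(n, p):
    """Probability that a majority of n independent voters, each correct
    with probability p, reaches the right answer (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101):
    print(n, round(majority_correct(n, 0.6), 3))  # 0.6, ~0.75, ~0.98
```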
Cases where people often don’t follow the framework but I think they should
I have seen a variety of cases where I believe people don’t follow the principles I advocate. There are certain types of errors that I think many ordinary people make and others that are more common for sophisticated people to make. Most of these boil down to giving too much weight to personal judgments, giving too much weight to people who are impressive to you personally but not impressive by clear and uncontroversial standards, or not putting enough weight on what elite common sense has to say.
Giving too much weight to the opinions of people like you: People tend to hold religious views and political views that are similar to the views of their parents. Many of these people probably aren’t trying to have accurate views. And the situation would be much better if people gave more weight to the aggregated opinion of a broader coalition of perspectives.
I think a different problem arises in the LessWrong and effective altruism communities. In this case, people are much more reflectively choosing which sets of people to get their beliefs from, and I believe they are getting beliefs from some pretty good people. However, taking an outside perspective, it seems overwhelmingly likely that these communities are subject to their own biases and blind spots, and the people who are most attracted to these communities are most likely to suffer from the same biases and blind spots. I suspect elite common sense would take these communities more seriously than it currently does if it had access to more information about the communities, but I don’t think it would take us sufficiently seriously to justify having high confidence in many of our more unusual views.
Being overconfident on open questions where we don’t have a lot of evidence to work with: In my experience, it is common to give little weight to common sense takes on questions about which there is no generally accepted answer, even when it is impossible to use commonsense reasoning to arrive at conclusions that get broad support. Some less sophisticated people seem to see this as a license to think whatever they want, as Paul Graham has commented in the case of politics and religion. I meet many more sophisticated people with unusual views about big picture philosophical, political, and economic questions in areas where they have very limited inside information and very limited information about the distribution of expert opinion. For example, I have now met a reasonably large number of non-experts who have very confident, detailed, unusual opinions about meta-ethics, libertarianism, and optimal methods of taxation. When I challenge people about this, I usually get some version of “people are not good at thinking about this question” but rarely a detailed explanation of why this person in particular is an exception to this generalization (more on this problem below).
There’s an inverse version of this problem where people try to “suspend judgment” on questions where they don’t have high-quality evidence, but actually end up taking very unusual stances without adequate justification. For example, I sometimes talk with people who say that improving the very long-term future would be overwhelmingly important if we could do it, but are skeptical about whether we can. In response, I sometimes run arguments of the form:
- In expectation, it is possible to improve broad feature X of the world (education, governance quality, effectiveness of the scientific community, economic prosperity).
- If we improve feature X, it will help future people deal with various big challenges and opportunities better in expectation.
- If people deal with these challenges and opportunities better in expectation, the future will be better in expectation.
- Therefore, it is possible to make the future better in expectation.
I’ve presented some preliminary thoughts on related issues here. Some people try to resist this argument on grounds of general skepticism about attempts at improving the world that haven’t been documented with high-quality evidence. Peter Hurford’s post on “speculative causes” is the closest example that I can point to online, though I’m not sure whether he still disagrees with me on this point. I believe that there can be some adjustment in the direction of skepticism in light of arguments that GiveWell has articulated here under “we are relatively skeptical,” but I consider rejecting the second premise on these grounds a significant departure from elite common sense. I would have a similar view about anyone who rejected any of the other premises—at least if they rejected them for all values of X—for such reasons. It’s not that I think the presumption in favor of elite common sense can’t be overcome—I strongly favor thinking about such questions more carefully and am open to changing my mind—it’s just that I don’t think it can be overcome by these types of skeptical considerations. Why not? These types of considerations seem like they could make the probability distribution over impact on the very long-term narrower, but I don’t see how they could put it tightly around zero. And in any case, GiveWell articulates other considerations in that post and other posts which point in favor of less skepticism about the second premise.
Part of the issue may be confusion about “rejecting” a premise and “suspending judgment.” In my view, the question is “What are the expected long-term effects of improving factor X?” You can try not to think about this question or say “I don’t know,” but when you make decisions you are implicitly committed to certain ranges of expected values on these questions. To justifiably ignore very long-term considerations, I think you probably need your implicit range to be close to zero. I often see people who say they are “suspending judgment” about these issues or who say they “don’t know” acting as if this range were very close to zero. I see this as a very strong, precise claim which is contrary to elite common sense, rather than an open-minded, “we’ll wait until the evidence comes in” type of view to have. Another way to put it is that my claim that improving some broad factor X has good long-run consequences is much more of an anti-prediction than the claim that its expected effects are close to zero. (Independent point: I think that a more compelling argument than the argument that we can’t affect the far future is the argument that lots of ordinary actions have flow-through effects with astronomical expected impacts if anything does, so that people aiming explicitly at reducing astronomical waste are less privileged than one might think at first glance. I hope to write more about this issue in the future.)
Putting too much weight on your own opinions because you have better arguments on topics that interest you than other people, or the people you typically talk to: As mentioned above, I believe that some smart people, especially smart people who rely a lot on explicit reasoning, can become very good at developing strong arguments for their opinions without being very good at finding true beliefs. I think that in such instances, these people will generally not be very successful at getting a broad coalition of impressive people to accept their views (except perhaps by relying on non-rational methods of persuasion). Stress-testing your views by trying to actually convince others of your opinions, rather than just out-arguing them, can help you avoid this trap.
Putting too much weight on the opinions of single individuals who seem trustworthy to you personally but not to people in general, and have very unusual views: I have seen some people update significantly in favor of very unusual philosophical, scientific, and sociological claims when they encounter very intelligent advocates of these views. These people are often familiar with Aumann’s agreement theorem and arguments for splitting the difference with epistemic peers, and they are rightly troubled by the fact that someone fairly similar to them disagrees with them on an issue, so they try to correct for their own potential failures of rationality by giving additional weight to the advocates of these very unusual views.
However, I believe that taking disagreement seriously favors giving these very unusual views less weight, not more. The problem partly arises because philosophical discussion of disagreement often focuses on the simple case of two people sharing their evidence and opinions with each other. But what’s more relevant is the distribution of quality-weighted opinion around the world in general, not the distribution of quality-weighted opinion of the people that you have had discussions with, and not the distribution of quality-weighted opinion of the people that seem trustworthy to you personally. The epistemically modest move here is to try to stay closer to elite common sense, not to split the difference.
Objections to this approach
Objection: elite common sense is often wrong
One objection I often hear is that elite common sense is often wrong. I believe this is true, but not a problem for my framework. I make the comparative claim that elite common sense is more trustworthy than the idiosyncratic standards of the vast majority of individual people, not the claim that elite common sense is almost always right. A further consideration is that analogous objections to analogous views fail. For instance, “markets are often wrong in their valuation of assets” is not a good objection to the efficient markets hypothesis. As explained above, the argument that “markets are often wrong” needs to point to a specific way in which one can do better than the market in order for it to make sense to place less weight on what the market says than on one’s own judgments.
Objection: the best people are highly unconventional
Another objection I sometimes hear is that the most successful people often pay the least attention to conventional wisdom. I think this is true, but not a problem for my framework. One reason I believe this is that, according to my framework, when you go against elite common sense, what matters is whether elite common sense reasoning standards would justify your opinion if someone following those standards knew about your background, information, and analysis. Though I can’t prove it, I suspect that the most successful people often depart from elite common sense in ways that elite common sense would endorse if it had access to more information. I also believe that the most successful people tend to pay attention to elite common sense in many areas, and specifically bet against elite common sense in areas where they are most likely to be right.
A second consideration is that going against elite common sense may be a high-risk strategy, so that it is unsurprising if we see the most successful people pursuing it. People who give less weight to elite common sense are more likely to spend their time on pointless activities, join cults, and become crackpots, though they are also more likely to have revolutionary positive impacts. Consider an analogy: it may be that the gamblers who earned the most used the riskiest strategies, but this is not good evidence that you should use a risky strategy when gambling because the people who lost the most also played risky strategies.
A third consideration is that while it may be unreasonable to be too much of an independent thinker in a particular case, being an independent thinker helps you develop good epistemic habits. I think this point has a lot of merit, and could help explain why independent thinking is more common among the most successful people. This might seem like a good reason not to pay much attention to elite common sense. However, it seems to me that you can get the best of both worlds by being an independent thinker and keeping separate track of your own impressions and what elite common sense would make of your evidence. Where conflicts come up, you can try to use elite common sense to guide your decisions.
I feel my view is weakest in cases where there is a strong upside to disregarding elite common sense, there is little downside, and you’ll find out whether your bet against conventional wisdom was right within a tolerable time limit. Perhaps many crazy-sounding entrepreneurial ideas and scientific hypotheses fit this description. I believe it may make sense to pick a relatively small number of these to bet on, even in cases where you can’t convince elite common sense that you are on the right track. But I also believe that in cases where you really do have a great but unconventional idea, it will be possible to convince a reasonable chunk of elite common sense that your idea is worth trying out.
Objection: elite common sense is wrong about X, and can’t be talked out of it, so your framework should be rejected in general
Another common objection takes the form: view X is true, but X is not a view which elite common sense would give much weight to. Eliezer makes a related argument here, though he is addressing a different kind of deference to common sense. He points to religious beliefs, beliefs about diet, and the rejection of cryonics as evidence that you shouldn’t just follow what the majority believes. My position is closer to “follow the majority’s epistemic standards” than “believe what the majority believes,” and closer still to “follow the best people’s epistemic standards without cherry-picking ‘best’ to suit your biases,” but objections of this form could have some force against the framework I have defended.
A first response is that unless one thinks there are many values of X in different areas where my framework fails, providing a few counterexamples is not very strong evidence that the framework isn’t helpful in many cases. This is a general issue in philosophy which I think is underappreciated, and I’ve made related arguments in chapter 2 of my dissertation. I think the most likely outcome of a careful version of this attack on my framework is that we identify some areas where the framework doesn’t apply or has to be qualified.
But let’s delve into the question about religion in greater detail. Yes, having some religious beliefs is generally more popular than being an atheist, and it would be hard to convince intelligent religious people to become atheists. However, my impression is that my framework does not recommend believing in God. Here are a number of weak arguments for this claim:
- My impression is that the people who are most trustworthy by clear and generally accepted standards are significantly more likely to be atheists than the general population. One illustration of my perspective is that in a 1998 survey of the National Academy of Sciences, only 7% of respondents reported that they believed in God. However, there is a flame war and people have pushed many arguments on this issue, and scientists are probably unrepresentative of many trustworthy people in this respect.
- While the world at large has broad agreement that some kind of higher power exists, there is very substantial disagreement about what this means, to the point where it isn’t clear that these people are talking about the same thing.
- In my experience, people generally do not try very hard to have accurate beliefs about religious questions and have little patience for people who want to carefully discuss arguments about religious questions at length. This makes it hard to stress-test one’s views about religion by trying to get a broad coalition of impressive people to accept atheism, and makes it possible to give more weight to one’s personal take if one has thought unusually carefully about religious questions.
- People are generally raised in religious families, and there are substantial social incentives to remain religious. Social incentives for atheists to remain non-religious generally seem weaker, though they can also be substantial. For example, given my current social network, I believe I would pay a significant cost if I wanted to become religious.
- Despite the above point, in my experience, it is much more common for religious people to become atheists than it is for atheists to become religious.
- In my experience, among people who try very hard to have accurate beliefs about whether God exists, atheism is significantly more common than belief in God.
- In my experience, the most impressive people who are religious tend not to behave much differently from atheists or have different takes on scientific questions/questions about the future.
These points rely a lot on my personal experience, could stand to be researched more carefully, and feel uncomfortably close to lousy contrarian excuses, but I think they are nevertheless suggestive. In light of these points, I think my framework recommends that the vast majority of people with religious beliefs should be substantially less confident in their views, recommends modesty for atheists who haven’t tried very hard to be right, and I suspect it allows reasonably high confidence that God doesn’t exist for people who have strong indicators that they have thought carefully about the issue. I think it would be better if I saw a clear and principled way for the framework to push more strongly in the direction of atheism, but the case has enough unusual features that I don’t see this as a major argument against the general helpfulness of the framework.
As a more general point, the framework seems less helpful in the case of religion and politics because people are generally unwilling to carefully consider arguments with the goal of having accurate beliefs. By and large, when people are unwilling to carefully consider arguments with the goal of having accurate beliefs, this is evidence that it is not useful to try to think carefully about this area. This follows from the idea mentioned above that people tend to try to have accurate views when it is in their present interests to have accurate views. So if this is the main way the framework breaks down, then the framework is mostly breaking down in cases where good epistemology is relatively unimportant.
Conclusion
I’ve outlined a framework for taking account of the distribution of opinions and epistemic standards in the world and discussed some of its strengths and weaknesses. I think the largest strengths of the framework are that it can help you avoid falling prey to idiosyncratic personal biases, and that using it captures some of the benefits of “wisdom of crowds” effects. The framework is less helpful in:
- cases where there is a large upside to disregarding elite common sense, there is little downside, and you’ll find out whether your bet against conventional wisdom was right within a tolerable time limit, and
- cases where people are unwilling to carefully consider arguments with the goal of having accurate beliefs.
Some questions for people who want to further develop the framework include:
- How sensitive is the framework to other reasonable choices of standards for selecting trustworthy people? Are there more helpful standards to use?
- How sensitive is the framework to reasonable choices of standards for aggregating opinions of trustworthy people?
- What are the best ways of getting a better grip on elite common sense?
- What other areas are there where the framework is particularly weak or particularly strong?
- Can the framework be developed in ways that make it more helpful in cases where it is weakest?
215 comments
comment by Wei Dai (Wei_Dai) · 2013-08-10T21:07:29.612Z · LW(p) · GW(p)
One problem with this is that you often can't access the actual epistemic standards of other people because they have no incentives to reveal them to you. Consider the case of the Blu-ray copy protection system BD+ (which is fresh in my mind because I just used it recently as an example elsewhere). I'm not personally involved with this case, but my understanding based on what I've read is that the Blu-ray consortium bought the rights to the system from a reputable cryptography consulting firm for several million dollars (presumably after checking with other independent consultants), and many studios chose Blu-ray over HD DVD because of it. (From Wikipedia: Several studios cited Blu-ray Disc's adoption of the BD+ anti-copying system as the reason they supported Blu-ray Disc over HD DVD. The copy protection scheme was to take "10 years" to crack, according to Richard Doherty, an analyst with Envisioneering Group.) And yet one month after Blu-ray discs using the system were released, it was broken and those discs became copyable by anyone with a commercially available piece of software.
I think the actual majority opinion in the professional cryptography community, when they talk about this privately among themselves, was that such copy protection systems are pretty hopeless (i.e., likely to be easily broken by people with the right skills), but the elite decision makers had no access to this information. The consultants they bought the system from had no reason to tell them this, or were just overconfident in their own ideas. The other consultants they checked with couldn't personally break the system, probably due to not having quite the right sub-specialization (which perhaps only a handful of people in the world had), and since it would have been embarrassing to say "this is probably easily broken, but I can't break it", they just stuck with "I can't break it". (Or again, they may have been overconfident and translated "I can't break it" to "it can't be broken", even if they previously agreed with the majority opinion before personally studying the system.)
In order to correct for things like this, you have to take the elite opinion that you have access to and use it as evidence to update some other prior, instead of using it directly as a prior. In other words, ask questions like "If BD+ were likely to be easily breakable by people with the right skills, would I learn this from these consultants?" (But of course doing that exposes you to your own biases, which perhaps make elite opinion too easy to "explain away".)
Replies from: Nick_Beckstead, Lumifer↑ comment by Nick_Beckstead · 2013-08-11T07:37:18.911Z · LW(p) · GW(p)
If I understand this objection properly, the objection is:
(1) The executives making decisions didn't have access to what the cryptographers thought.
(2) In order for the executives to apply the elite common sense framework, they would need to have access to what the cryptographers thought.
(3) Therefore, the executives could not apply the elite common sense framework in this case.
I would agree with the first premise but reject the second. If this all happened as you say--which seems plausible--then I would frame this as a case where the elite decision makers didn't have access to the opinions of some relevant subject matter experts rather than a case where the elite decision makers didn't have access to elite common sense. In my framework, you can have access to elite common sense without having access to what relevant subject matter experts think, though in this kind of situation you should be extremely modest in your opinions. The elite decision makers still had reasonable access to elite common sense insofar as they were able to stress-test their views about what to expect if they bought this copy protection system by presenting their opinions to a broad coalition of smart people and seeing what others thought.
I agree that you have to start from your own personal standards in order to get a grip on elite common sense. But note that this point generally applies to anyone recommending that you use any reasoning standards at all other than the ones you happen to presently have. And my sense is that people can get reasonably well in touch with elite common sense by trying to understand how other trustworthy people think and applying the framework that I have advocated here. I acknowledge that it is not easy to know about the epistemic standards that others use; what I advocate here is doing your best to follow the epistemic standards of the most trustworthy people.
Replies from: Wei_Dai, Eliezer_Yudkowsky↑ comment by Wei Dai (Wei_Dai) · 2013-08-12T13:58:35.851Z · LW(p) · GW(p)
Ok, I think I misunderstood you earlier and thought "elite common sense" referred to the common sense of elite experts, rather than of elites in general. (I don't share Eliezer's "No True Elite" objection since that's probably what you originally intended.)
In view of my new understanding I would revise my criticism a bit. If the Blu-ray and studio executives had asked the opinions of a broad coalition of smart people, they likely would have gotten back the same answer that they already had: "hire some expert consultants and ask them to evaluate the system". An alternative would be to instead learn about Bayesian updating and the heuristics-and-biases literature (in other words learn LW-style rationality), which could have enabled the executives to realize that they'd probably be reading the same reports from their consultants even if BD+ was actually easily breakable by a handful of people with the right skills. At that point maybe they could have come up with some unconventional, outside-the-box ideas about how to confirm or rule out this possibility.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-11T18:19:44.494Z · LW(p) · GW(p)
I worry a bit that this has a flavor of 'No True Elite' or informal respecification of the procedure - suddenly, instead of consulting the best-trained subject matter experts, we are to poll a broad coalition of smart people. Why? Well, because that's what might have delivered the best answer in this case post-facto. But how are we to know in advance which to do?
(One possible algorithm is to first arrive at the correct answer, then pick an elite group which delivers that answer. But in this case the algorithm has an extra step. And of course you don't advocate this explicitly, but it looks to me like that's what you just did.)
Replies from: Nick_Beckstead, Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-11T19:02:09.664Z · LW(p) · GW(p)
I'm not sure I understand the objection/question, but I'll respond to the objection/question I think it is.
Am I changing the procedure to avoid a counterexample from Wei Dai?
I think the answer is No. If you look at the section titled "An outline of the framework and some guidelines for applying it effectively" you'll see that I say you should try to use a prior that corresponds to an impartial combination of what the people who are most trustworthy in general think. I say a practical approximation of being an "expert" is being someone elite common sense would defer to. If the experts won't tell elite common sense what they think, then what the experts think isn't yet part of elite common sense. I think this is a case where elite common sense just gets it wrong, not that they clearly could have done anything about it. But I do think it's a case where you can apply elite common sense, even if it gives you the wrong answer ex post. (Maybe it doesn't give you the wrong answer though; maybe some better investigation would have been possible and they didn't do it. This is hard to say from our perspective.)
Why go with what generally trustworthy people think as your definition of elite common sense? It's precisely because I think it is easier to get in touch with what generally trustworthy people think, rather than what all subject matter experts in the world think. As I say in the essay:
How should we assign weight to different groups of people? Other things being equal, a larger number of people is better, more trustworthy people are better, people who are trustworthy by clearer indicators that more people would accept are better, and a set of criteria which allows you to have some grip on what the people in question think is better, but you have to make trade-offs....If I went with, say, the 10 most-cited people in 10 of the most intellectually credible academic disciplines, 100 of the most generally respected people in business, and the 100 heads of different states, I would have a pretty large number of people and a broad set of people who were very trustworthy by clear standards that many people would accept, but I would have a hard time knowing what they would think about various issues because I haven’t interacted with them enough. How these factors can be traded-off against each other in a way that is practically most helpful probably varies substantially from person to person.
In principle, if you could get a sense for what all subject matter experts thought about every issue, that would be a great place to start for your prior. But I think that's not possible in practice. So I recommend using a more general group that you can use as your starting point.
Does this answer your question?
↑ comment by Nick_Beckstead · 2013-08-11T21:17:35.264Z · LW(p) · GW(p)
It seems the "No True Elite" fallacy would involve:
(1) Elite common sense seeming to say that I should believe X because, on my definition of "elites," elites generally believe X.
(2) X being an embarrassing thing to believe.
(3) Me replying that someone who believed X wouldn't count as an "elite," but doing so in a way that couldn't be justified by my framework.
In this example I am actually saying we should defer to the cryptographers if we know their opinions, but that they don't get to count as part of elite common sense immediately because their opinions are too hard to access. And I'm actually saying that elite common sense supports a claim which it is embarrassing to believe.
So I don't understand how this is supposed to be an instance of the "No True Scotsman" fallacy.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-11T22:00:36.477Z · LW(p) · GW(p)
There are always reasons why the Scotsman isn't a true Scotsman. What I'm worried about is more the case where these types of considerations are selected post facto and seem perfectly reasonable since they produce the correct answer there, but then in a new case, someone cries 'cherry-picking' when similar reasoning is applied.
Suppose I selected from among all physicists who accept MWI and asked them what they thought about FAI arguments. To me that's just an obvious sort of reweighting you might try, though anyone who's had experience with machine learning knows that most clever reweightings you try don't work. To someone else it might be cherry-picking of gullible physicists, and they might say, "You have violated Beckstead's rules!"
To me it might be obvious that AI 'elites' are exceedingly poorly motivated to come up with good answers about FAI. Someone else might think that the world being at stake would make them more motivated. (Though here it seems to me that this crosses the line into blatant empirical falsity about how human beings actually think, and brief acquaintance with AI people talking about the problem ought to confirm this, except that most such evidence seems to be discarded because 'Oh, they're not true elites' or 'Even though it's completely predictable that we're going to run into this problem later, it's not a warning sign for them to drop their epistemic trousers right now because they have arrived at the judgment that AI is far away via some line of reasoning which is itself reliable and will update accordingly as doom approaches, suddenly causing them to raise their epistemic standards again'. But now I'm diverging into a separate issue.)
I'd be happy with advice along the lines of, "First take your best guess as to who the elites really are and how much they ought to be trusted in this case, then take their opinion as a prior with an appropriate degree of concentrated probability density, then update." I'm much more worried about alleged rules for deciding who the elites are that are supposed to substitute for "Eh, take your best guess" and if you're applying complex reasoning to say, "Well, but that rule didn't really fail for cryptographers" then it becomes more legitimate for me to reply, "Maybe just 'take your best guess' would better summarize the rule?" In turn, I'm espousing this because I think people will have a more productive conversation if they understand that the rule is just 'best guess' and itself something subject to dispute rather than hard rules, as opposed to someone thinking that someone else violated a hard rule that is clearly visible to everyone in targeting a certain 'elite'.
Replies from: Nick_Beckstead, Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-11T22:16:19.441Z · LW(p) · GW(p)
Suppose I selected from among all physicists who accept MWI and asked them what they thought about FAI arguments. To me that's just an obvious sort of reweighting you might try, though anyone who's had experience with machine learning knows that most clever reweightings you try don't work. To someone else it might be cherry-picking of gullible physicists, and they might say, "You have violated Beckstead's rules!"
Just to be clear: I would count this as violating my rules because you haven't used a clear indicator of trustworthiness that many people would accept.
ETA: I'd add that people should generally pick their indicators in advance and stick with them, and not add them in to tune the system to their desired bottom lines.
↑ comment by Nick_Beckstead · 2013-08-11T22:12:43.523Z · LW(p) · GW(p)
Could you maybe just tell me what you think my framework is supposed to imply about Wei Dai's case, if not what I said it implies? To be clear: I say it implies that the executives should have used an impartial combination of the epistemic standards used by the upper crust of Ivy League graduates, and that this gives little weight to the cryptographers because, though the cryptographers are included, they are a relatively small portion of all people included. So I think my framework straightforwardly doesn't say that people should be relying on info they can't use, which is how I understood Wei Dai's objection. (I think that if they were able to know what the cryptographers' opinions are, then elite common sense would recommend deferring to the cryptographers, but I'm just guessing about that.) What is it you think my framework implies--with no funny business and no instance of the fallacy you think I'm committing--and why do you find it objectionable?
ETA:
I'd be happy with advice along the lines of, "First take your best guess as to who the elites really are and how much they ought to be trusted in this case, then take their opinion as a prior with an appropriate degree of concentrated probability density, then update."
This is what I think I am doing and am intending to do.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-11T22:25:20.877Z · LW(p) · GW(p)
So in my case I would consider elite common sense about cryptography to be "Ask Bruce Schneier", who might or might not have declined to talk to those companies or consult with them. That's much narrower than trying to poll an upper crust of Ivy League graduates, from whom I would not expect a particularly good answer. If Bruce Schneier didn't answer I would email Dad and ask him for the name of a trusted cryptographer who was friends with the Yudkowsky family, and separately I would email Jolly and ask him what he thought or who to talk to.
But then if Scott Aaronson, who isn't a cryptographer, blogged about the issue saying the cryptographers were being silly and even he could see that, I would either mark it as unknown or use my own judgment to try and figure out who to trust. If I couldn't follow the object-level arguments and there was no blatantly obvious meta-level difference, I'd mark it unresolvable-for-now (and plan as if both alternatives had substantial probability). If I could follow the object-level arguments and there was a substantial difference of strength which I perceived, I wouldn't hesitate to pick sides based on it, regardless of the eliteness of the people who'd taken the opposite side, so long as there were some elites on my own side who seemed to think that yes, it was that obvious. I've been in that epistemic position lots of times.
I'm honestly not sure about what your version is. I certainly don't get the impression that one can grind well-specified rules to get to the answer about polling the upper 10% of Ivy League graduates in this case. If anything I think your rules would endorse my 'Bruce Schneier' output more strongly than the 10%, at least as I briefly read them.
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-12T11:13:50.171Z · LW(p) · GW(p)
I think we don't disagree about whether elite common sense should defer to cryptography experts (I assume this is what Bruce Schneier is a stand-in for). Simplifying a bit, we are disagreeing about the much more subtle question of whether, given that elite common sense should defer to cryptography experts, elite common sense recommends adopting the current views of cryptographers in a situation where those views are unknown. I say elite common sense recommends adopting their views if you know them; if you don't know the cryptographers' opinions, it recommends going with what, e.g., the upper crust of Ivy League graduates would say if they had access to your information. I also suspect elite common sense recommends finding out about the opinions of elite cryptographers if you can. But Wei Dai's example was one in which you didn't know and maybe couldn't find out, so that's why I said what I said. Frankly, I'm pretty flummoxed about why you think this is the "No True Scotsman" fallacy. I feel that one of us is probably misunderstanding the other on a basic level.
A possible source of confusion here is that I doubt the cryptographers have very different epistemic standards; what they have is substantive knowledge and experience about cryptography and tools for thinking about it.
I certainly don't get the impression that one can grind well-specified rules to get to the answer about polling the upper 10% of Ivy League graduates in this case.
I agree with this, and tried to make this clear in my discussion. I went with a rough guess that would work for a decent chunk of the audience rather than only saying something very abstract. It's subtle, but I think reasonable epistemic frameworks are subtle if you want them to have much generality.
↑ comment by Lumifer · 2013-08-11T02:13:16.836Z · LW(p) · GW(p)
bought the rights to the system from a reputable cryptography consulting firm for several million dollars
That's pocket change -- consider big-studio movie budgets for proper context.
but the elite decision makers had no access to this information
I am pretty sure they had -- but it's hard to say whether they discounted it to low probability or their whole incentive structure was such that it made sense for them to ignore this information even if they believed it to be true. I'm inclined towards the latter.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-10T19:05:04.533Z · LW(p) · GW(p)
(Upvoted.) I have to say that I'm a lot more comfortable with the notion of elite common sense as a prior which can then be updated, a point of departure rather than an eternal edict; but it seems to me that much of the post is instead speaking of elite common sense as a non-defeasible posterior. (E.g. near the start, comparing it to philosophical majoritarianism.)
It also seems to me that much of the text has the flavor of what we would in computer programming call the B&D-nature, an attempt to impose strict constraints that prevent bad programs from being written, when there is not and may never be a programming language in which it is the least bit difficult to write bad programs, and all you can do is offer tools to people that (switching back to epistemology) make it easier for them to find the truth if they wish to do so, and make it clearer to them when they are shooting off their own foot. I remark, inevitably, that when it comes to discussing the case of God, you very properly - as I deem it proper - list off a set of perfectly good reasons to violate the B&D-constraints of your system. And this would actually make a good deal more sense if we were taking elite opinion about God as a mere point of departure, and still more sense if we were allowing ourselves to be more selective about 'elites' than you recommend. (It rather begs the question to point to a statistic about what 93% of the National Academy of Sciences believe - who says that theirs is the most elite and informed opinion about God? Would the person on the street say that, or point you to the prestigious academies of theologians, or perhaps the ancient Catholic Church?)
But even that's hard to tell because the discussion is also very abstract, and you seem to be much more relaxed when it comes to concrete cases than when you are writing in the abstract about what we ought not to do.
I would be interested in what you think this philosophy says about (a) the probability that quantum mechanics resolves to a single-world theory, and (b) the probability that molecular nanotechnology is feasible.
I myself would be perfectly happy saying, "The elite common sense prior for quantum mechanics resolving to a single world is on the order of 40%, however the posterior - now taking into account such matters as the application of the quantitative Occam's Razor as though they were evidence being applied to this prior - is less than 1%." Which is what I initially thought you were saying, and I was nodding along to that.
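In odds form, that kind of update is simple arithmetic. The roughly 70:1 likelihood ratio below is a made-up illustrative figure, chosen only to show how much evidence is needed to push a 40% prior below 1%:

$$\frac{P(\text{single})}{P(\text{many})} = \underbrace{\frac{0.4}{0.6}}_{\text{prior odds}} \times \underbrace{\frac{1}{70}}_{\text{likelihood ratio}} \approx \frac{1}{105}, \qquad \text{i.e. } P(\text{single}\mid\text{evidence}) \approx 0.9\%.$$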
So far as distinguishing elites goes, I remark that in a case of recent practice, I said to someone, "Well, if it comes down to believing the systematic experiments done by academic scientists with publication in peer-reviewed journals, or believing in what a bunch of Quantified Self people say they discovered by doing mostly individual experiments on just themselves with no peer review, we have no choice but to believe the latter." And I was dead serious. (Now that I think about it, I've literally never heard of a conflict between the Quantified Self people and academic science where academic science later turned out to be right, though I don't strongly expect to have heard about such a case if it existed.)
Replies from: JonahSinick, Nick_Beckstead, Document↑ comment by JonahS (JonahSinick) · 2013-08-10T20:07:30.085Z · LW(p) · GW(p)
[Edit: Some people have been telling me that I've been eschewing politeness norms too much when commenting on the internet, valuing succinctness to the exclusion of friendliness. I apologize if my comment comes across as aggressive — it's nothing personal, this is just my default style of intellectual discourse.]
I myself would be perfectly happy saying, "The elite common sense prior for quantum mechanics resolving to a single world is on the order of 40%, however the posterior - now taking into account such matters as the application of the quantitative Occam's Razor as though they were evidence being applied to this prior - is less than 1%." Which is what I initially thought you were saying, and I was nodding along to that.
Why do you think that the object level arguments are sufficient to drive the probability down to less than 1%?
Great physicists have thought about interpretations of quantum mechanics for nearly 100 years, and there's no consensus in favor of many worlds. To believe that the probability is < 1%, you need to believe some combination of
1. Most of the great physicists who have thought about interpretations of quantum mechanics were not aware of your argument.
2. Most of the great physicists don't have arguments of comparable aggregate strength for a single-world interpretation (cf. my post on many weak arguments).
3. It's a priori evident that you're vastly more rational than the great physicists on this dimension.
I think that each of #1, #2 and #3 is probably wrong. On point #3, I'd refer to Carl Shulman's remark
It looks to me like "people are crazy, the world is mad" has lead you astray repeatedly, but I haven't seen as many successes.
Note that you haven't answered Carl's question, despite Luke's request and re-prodding.
Replies from: Eliezer_Yudkowsky, Document↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-10T20:39:05.742Z · LW(p) · GW(p)
Did you happen to read (perhaps an abbreviated version of) the QM sequence on LW, e.g. this one?
Of course I would stake my reply most strongly on 2 (single-world QM simply doesn't work) with a moderate dose of 1 (great physicists may be bad epistemologists and not know about Solomonoff Induction, formal definitions of simplicity in Occam's Razor, or how to give up and say oops, e.g. many may be religious which sets very harsh upper bounds on how much real discipline their subject could systematically teach on reductionist epistemology, rejection of complex inadequately supported privileged hypotheses, and saying oops when nobody is holding a gun to your head, yes this is a fair critique). And with that said, I reject question 3 as being profoundly unhelpful. It's evident from history that the state of affairs postulated in 1 and 2 is not improbable enough to require some vastly difficult thesis about inhumanly superior rationality! I don't need a hero license!
This would serve as one of my flagship replies to Carl's question with respect to that portion of the audience which is capable of putting their metaness on hold long enough to see that single-world QM has negligible probability on the object level. Unfortunately, majoritarianism is a closed system in terms of rejecting all evidence against itself, when you take the 'correct' answer for comparison purposes to be the majoritarian one.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-08-10T21:17:46.244Z · LW(p) · GW(p)
Did you happen to read (perhaps an abbreviated version of) the QM sequence on LW, e.g. this one?
I haven't read the QM sequence. The marginal value of reading it (given its length) seemed too low to give it priority over other things, but I'm open to reconsidering. My comments above and here are entirely outside view in nature.
Of course I would stake my reply most strongly on 2 (single-world QM simply doesn't work)
It could be that one can reformulate QM in an entirely different language that makes it clear that some version of single-world QM does work. Obviously you have more subject matter knowledge than I do, but I know of examples from math where apparently incoherent mathematical concepts turned out to be rigorously formalizable. (The Dirac delta function is perhaps an example.)
It could be that your analysis is confused. As far as I know, it hasn't been vetted by many people with subject matter knowledge, and analysis that hasn't been vetted often turns out to be wrong. Confidence in the correctness of one's reasoning at the 99+% level is really high.
There could be equally strong arguments against many worlds. My understanding is that the broad consensus among physicists is that no interpretation is satisfactory.
great physicists may be bad epistemologists and not know about Solomonoff Induction, formal definitions of simplicity in Occam's Razor, or how to give up and say oops,
While Solomonoff Induction and formalized definitions of simplicity are in some sense relevant, it's not clear that they're necessary to get the right answer here.
Epistemic soundness isn't a univariate quantity
I find it hard to imagine how you could have gotten enough evidence about the rationality of physicists to warrant 99+% confidence.
e.g. many may be religious which sets very harsh upper bounds on how much real discipline their subject could systematically teach on reductionist epistemology
According to Wikipedia, Bell, Bethe, Bohr, Dirac, Everett, Feynman, Higgs, Oppenheimer, Penrose, Schrödinger and Wigner were all atheists.
rejection of complex inadequately supported privileged hypotheses, and saying oops when nobody is holding a gun to your head
Have you had enough exposure to these people to be able to make an assessment of this type?
Great physicists are in substantial part motivated by intellectual curiosity, which corresponds to a desire to get things having to do with physics right.
It's evident from history that the state of affairs postulated in 1 and 2 is not improbable enough to require some vastly difficult thesis about inhumanly superior rationality!
What history?
Not improbable at the 99+% level?
This would serve as one of my flagship replies to Carl's question with respect to that portion of the audience which is capable of putting their metaness on hold long enough to see that single-world QM has negligible probability on the object level. Unfortunately, majoritarianism is a closed system in terms of rejecting all evidence against itself, when you take the 'correct' answer for comparison purposes to be the majoritarian one.
I don't believe in majoritarianism in general. But I think that one needs an awful lot of evidence to develop very high confidence that one's views on a given subject are right when other people who have thought about it a lot disagree. I think that there's a necessity not only for object-level arguments (which may be wrong in unseen ways) but also for a robust base of arguments for why other people have gone astray.
Replies from: lukeprog↑ comment by lukeprog · 2013-08-10T22:55:03.961Z · LW(p) · GW(p)
Just FYI, I think Eliezer's mental response to most of the questions/responses you raised here will be "I spent 40+ pages addressing these issues in my QM sequence. I don't have time to repeat those points all over again."
It might indeed be worth your time to read the QM sequence, so that you can have one detailed, thoroughly examined example of how one can acquire high confidence that the plurality of scientific elites (even in a high-quality field like physics) are just wrong. Or, if you read the sequence and come to a different conclusion, that would also be interesting.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-08-10T23:50:54.759Z · LW(p) · GW(p)
Even if I read the QM sequence and find the arguments compelling, I still wouldn't feel as though I had enough subject matter expertise to rationally disagree with elite physicists with high confidence. I don't think that I'm more rational than Bohr, Heisenberg, Dirac, Feynman, Penrose, Schrödinger and Wigner. These people thought about quantum mechanics for decades. I wouldn't be able to catch up in a week. The probability that I'd be missing fundamental and relevant things that they knew would dominate my prior.
I'll think about reading the QM sequence.
Replies from: lukeprog, Eliezer_Yudkowsky, private_messaging, lukeprog↑ comment by lukeprog · 2013-08-11T02:02:02.472Z · LW(p) · GW(p)
I don't think that I'm more rational than Bohr, Heisenberg, Dirac, Feynman, Penrose, Schrödinger and Wigner. These people thought about quantum mechanics for decades. I wouldn't be able to catch up in a week.
We should separate "rationality" from "domain knowledge of quantum physics."
Certainly, each of these figures had greater domain knowledge of quantum physics than I plan to acquire in my life. But that doesn't mean I'm powerless to (in some cases) tell when they're just wrong. The same goes for those with immense domain knowledge of – to take an unusually clear-cut example – Muslim theology.
Consider a case where you talk to a top Muslim philosopher and extract his top 10 reasons for believing in Allah. His arguments have obvious crippling failures, so you try to steel man them, and you check whether you're just misinterpreting the arguments, but in the end they're just... terrible arguments. And then you check with lots of other leading Muslim philosophers and say "Give me your best reasons" and you just get nothing good. At that point, I think you're in a pretty good position to reject the Muslim belief in Allah, rather than saying "Well, maybe there's some subtle point they know about that I haven't gotten to in the first 40 hours of investigation. They are the experts on Allah, so I'm probably not in a good position to confidently disagree with them."
The case with QM is obviously different in many ways, but the same point that it's possible to (rationally) confidently disagree with the plurality or majority of experts on something still stands, I think.
One of the biggest differences between Muslim theology and QM is that the QM experts seem to have much better rationality than the Muslim theologians, so it's not as easy to conclude that you're evaluating the evidence and arguments more appropriately than they are.
And this brings us to what might be a significant difference between us. I tend to think it's quite feasible to surpass the rationality skill of (most) top-performing physicists. Perhaps you think that is not that feasible.
Let me make another analogy. Consider the skill of nonfiction writing. Rousseau and Hume were great for their time, but Pinker and Dawkins write better because they've got enormous advantages: they get to study Rousseau and Hume, they get to study later writers, they have instant access to online writing-helpers like Thesaurus.com and reading-level measures, they get to benefit from hundreds of years of scientific study of psychology and communication, they get to email drafts to readers around the world for feedback, etc.
Similarly, I think it's quite feasible to outperform (in rationality) most of the top-performing physicists. I'm not sure that I have reached that level yet, but I think I certainly could.
Here, the enormous advantages available to me don't come from the fact that I have access to things that the scientists don't have access to, but instead from the fact that the world's best physicists mostly won't choose to take advantage of the available resources – which is part of why they so often say silly things as soon as they step outside the lab.
Many qualifications, and specific examples, could be given, but I'll pause here and give you a chance to respond.
Replies from: Nick_Beckstead, private_messaging, private_messaging, Richard_Kennaway, JonahSinick, EHeller, shminux↑ comment by Nick_Beckstead · 2013-08-11T07:16:02.817Z · LW(p) · GW(p)
I feel the main disanalogy with the Muslim theology case is that elite common sense does not regard top Muslim philosophers as having much comparative expertise on the question of whether Allah exists, but they would regard top physicists as having very strong comparative expertise on the interpretation of quantum mechanics. By this I mean that elite common sense would generally substantially defer to the opinions of the physicists but not the Muslim philosophers. This disanalogy is sufficiently important to me that I find the overall argument by analogy highly non-compelling.
I note that there are some meaningful respects in which elite common sense would regard the Muslim philosophers as epistemic authorities. They would recognize their authority as people who know about what the famous arguments for Allah's existence and nature are, what the famous objections and replies are, and what has been said about intricacies of related metaphysical questions, for example.
Replies from: private_messaging, Brian_Tomasik↑ comment by private_messaging · 2013-09-15T16:10:50.633Z · LW(p) · GW(p)
The big difference is that the atheist arguments are not reliant upon Muslim philosophy in the way in which all possible MWI arguments rely on physics. The relevant questions are subtle aspects of physics - linearity, quantum gravity, etc. Relatedly, there are unresolved issues in physics to which many potential solutions are MWI-unfriendly - precisely in the ballpark of your "quantum gravity vs. string theory", where you do not even know how the two terms relate to each other, nor how advanced physics relates to MWI.
↑ comment by Brian_Tomasik · 2013-08-13T17:59:59.111Z · LW(p) · GW(p)
1.3 billion Muslims -- including many national leaders, scientists, and other impressive people throughout the Islamic world -- believe that Mohammad was an authority on whether Allah exists.
Of course, there are plenty of others who do not, so I'm not suggesting we conclude it's probable that he was, but it seems like something we couldn't rule out, either.
Replies from: Jiro↑ comment by Jiro · 2013-08-13T19:02:01.866Z · LW(p) · GW(p)
Religion is a set of memes, not in the Internet sense of "catchy saying" but in the slightly older sense that it has characteristics which lead to it spreading. I suggest modifying the original proposal: in deciding who is trustworthy to many people, you should take into account that beliefs which are good at spreading for reasons unrelated to their truth value can skew the "many people" part.
Given what we know about how religions spread, religious beliefs should be excluded on these grounds alone. If scientists who expressed a particular belief about science were killed, that would apply to scientists too (the fact that the most trusted scientists in Stalinist Russia believed in Lysenkoism, since the others were dead or silenced, would not be a reason for you to believe in it too).
Replies from: Lumifer↑ comment by Lumifer · 2013-08-13T19:11:12.892Z · LW(p) · GW(p)
Religion is a set of memes
So is culture. Are you ready to demand culture-neutrality?
If scientists who expressed a particular belief about science were killed, that would apply to scientists too
Would that apply also to scientists who were prevented from getting grants and being published? How do you know, without hindsight, which of the two warring scientific factions consists of cranks and crooks, and which one does not?
Replies from: Jiro↑ comment by Jiro · 2013-08-13T21:30:20.329Z · LW(p) · GW(p)
So is culture. Are you ready to demand culture-neutrality?
It is possible for a culture to at least not be inimical to truth. But to the extent that a religion is not inimical to truth, it has ceased to be a religion.
Would that apply also to scientists who were prevented from getting grants and being published?
If the main reason why scientists don't express a belief is that if they do they would be arbitrarily denied grants and publication, then it would apply. However, in the modern Western world, almost every example* where someone made this claim has turned out to involve a crank whose lack of publication was for very good reasons. As such, my default assumption will be that this has not occurred unless I have a specific reason to believe that it has.
* I can think of a few cases involving politically correct subjects, but those tend to have other kinds of problems.
↑ comment by private_messaging · 2013-11-10T17:54:25.647Z · LW(p) · GW(p)
We should separate "rationality" from "domain knowledge of quantum physics."
If by rationality we mean more accurate and correct reasoning... it so happens that, in the case of the existence of many worlds, any rational conclusion is highly dependent on the intricacies of physics (specifically, quantum field theory and the various attempts at quantum gravity). Consequently, reaching conclusions without such knowledge is a sign of very poor rationality, just as simplifying the equation a + b*7 + c = 0 to a = 1 without knowing anything about the values of b and c is a sign of very poor knowledge of mathematics, not of some superior equation-simplification skill.
(The existence of Allah, God, Santa Claus, or whatever is, of course, not likewise dependent on theology.)
Replies from: zslastman↑ comment by zslastman · 2013-11-10T18:20:51.337Z · LW(p) · GW(p)
My impression was that the conclusion in fact just depends on one's interpretation of Occam's razor, rather than the intricacies of physics. I had allowed myself to reach a fairly tentative conclusion about Many Worlds because physicists seem to agree that both interpretations are equally consistent with the data. We are then left with the question of what 'the simplest explanation' means, which is an epistemology question, not a physics question, and one I feel comfortable answering.
Am I mistaken?
Replies from: private_messaging↑ comment by private_messaging · 2013-11-12T10:51:10.853Z · LW(p) · GW(p)
Yes, you are (mistaken). As numerous PhD physicists have said many times on this site and elsewhere, the issue is that QM is not consistent with observations (it does not include gravity). Neither is QFT.
The question is one of the fragility of MWI over potential TOEs, and, since it relies on exact linearity, MWI is arguably very fragile.
Furthermore, with regard to Occam's razor, and specifically formalizations of it, that is also a subtle question requiring domain-specific training.
In particular, in Solomonoff Induction, codes have to produce output that exactly matches the observations, complete with the exact picture of photon noise on the double-slit experiment's screen. Outputting a list of worlds like MWI does is not even an option.
Replies from: zslastman↑ comment by zslastman · 2013-11-13T11:41:24.151Z · LW(p) · GW(p)
Interesting. I'll assume an agnostic position again for the time being.
Can you point me towards some of the best comments?
I was aware that both theories are inconsistent with the data with respect to gravity; obviously, if either of them weren't, the choice would be clear.
What do you mean by 'fragility over' potential theories of everything? That the TOEs suggested thus far tend not to be compatible with it? Presumably not given that the people generating the TOEs are likely to start with the most popular theory.
What's the standard response by MW enthusiasts to your point on Solomonoff induction? My understanding would then suggest that neither MW nor Copenhagen can give an exact picture of photon noise, in which case the problem would seem to be with Solomonoff induction as a formalization.
Replies from: private_messaging↑ comment by private_messaging · 2013-11-16T13:05:23.478Z · LW(p) · GW(p)
Can you point me towards some of the best comments?
There are some around this thread (responses to Luke's comment). Also, I think the QM sequence has responses from physicists.
What do you mean by 'fragility over' potential theories of everything?
The MWI is concluded from exactly linear quantum mechanics. Because we know QM to be only an approximation, we lack any strong reasons to expect exact linearity in the final TOE. Furthermore, even though exact linearity is arguably favoured by Occam's razor over any purely speculative non-linear theory, that does not imply that it is more probable than all of the nonlinear theories together (which would have the same linear approximation).
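(With purely made-up numbers for illustration: exact linearity might get, say, $P = 0.4$ while each of twenty speculative nonlinear variants gets only $P = 0.03$; linearity then beats every individual rival, yet the rivals together sum to $0.6 > 0.4$.)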
In my opinion, things like a multitude of potential worlds allow for, e.g., elegantly (and compactly) expressing some conservation laws as survivor bias (via some sort of instability destroying observers in worlds where said laws do not hold). Whether that is significant to TOEs is, of course, purely speculative.
What's the standard response by MW enthusiasts to your point on Solomonoff induction?
As far as I know, the arguments that Solomonoff induction supports MWI never progressed beyond mere allusions to such support.
My understanding would then suggest that neither MW nor Copenhagen can give an exact picture of photon noise
In raw form, yes, neither interpretation fits, and it's unclear how to compare their complexities formally.
in which case the problem would seem to be with Solomonoff induction as a formalization.
I explored a bit how S.I. would work on data from quantum experiments here. Basically, the task is to represent said photon noise with the minimum amount of code and data, which can be done in two steps by calculating probabilities as per QM and the Born rule, and then using the probability density function to decode photon coordinates from the subsequent input bits (analogous to collapse), or perhaps more compactly in one step by doing QM with some sort of very clever bit manipulation on strings of random noise so as to obtain the desired probability distribution in the end.
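A minimal Python sketch of the two-step version (the toy intensity pattern, the function names, and the bit-decoding scheme are my own illustrative stand-ins, not a claim about how a real double-slit dataset would actually be encoded):

```python
import numpy as np

def born_pdf(x, slit_separation=5.0, wavelength=1.0):
    """Toy (unnormalized) double-slit intensity pattern on the screen."""
    return np.cos(np.pi * slit_separation * x / wavelength) ** 2 * np.exp(-x ** 2 / 50.0)

def bits_to_unit_interval(bits):
    """Read a list of 0/1 bits as a binary fraction in [0, 1)."""
    return sum(b * 2.0 ** -(i + 1) for i, b in enumerate(bits))

def decode_coordinate(bits, xs):
    """Step 2: map raw input bits to a screen coordinate via the inverse CDF
    of the Born-rule distribution (the 'collapse'-like decoding step)."""
    cdf = np.cumsum(born_pdf(xs))    # step 1: probabilities from QM + Born rule
    cdf /= cdf[-1]                   # normalize into a proper CDF
    u = bits_to_unit_interval(bits)  # treat the bits as a uniform draw in [0, 1)
    idx = min(np.searchsorted(cdf, u), len(xs) - 1)
    return xs[idx]

xs = np.linspace(-10.0, 10.0, 10_000)
print(decode_coordinate([1, 0, 1, 1, 0, 0, 1, 0], xs))
```

The only point of the sketch is to show where the extra input bits get spent: the QM-plus-Born-rule part is shared code, and each recorded photon costs however many bits are needed to pick its coordinate out of that distribution.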
↑ comment by private_messaging · 2013-09-15T09:40:09.423Z · LW(p) · GW(p)
Here, the enormous advantages available to me don't come from the fact that I have access to things that the scientists don't have access to
You need to keep in mind that they have access to a very, very large number of things that you don't have access to (and have potential access to more things than you think they do), and can get a taste of the enormous space of issues, any one of which can completely demolish the case for MWI that you see from the outside. For example, non-linearity of the equations absolutely kills MWI. Now, only a subset of physicists considers non-linearity to be likely. Others believe in other such things, most of which just kill MWI outright, with the whole argument for it flying apart like a card house in the wind of a category 6 hurricane. Expecting a hurricane, experts choose not to bother with the question of the mechanical stability of the card house on its own - they would rather debate in which direction, and how far, the cards will land.
↑ comment by Richard_Kennaway · 2013-08-13T18:23:10.479Z · LW(p) · GW(p)
Consider a case where you talk to a top Muslim philosopher and extract his top 10 reasons for believing in Allah.
Consider a case where you talk to a top flat-earth believer and extract his top 10 reasons for believing the earth is flat.
The act of selecting a Muslim philosopher confounds the accuracy of his belief with the method whereby you selected him. It's like companies searching out a scientist who happens to take a view of some question congenial to that company, then booming his research.
↑ comment by JonahS (JonahSinick) · 2013-08-11T06:29:13.957Z · LW(p) · GW(p)
Thanks for the thoughtful response. I agree that there are many instances in which it’s possible to rationally come to confident conclusions that differ from those of subject matter experts. I realize that my earlier comment was elliptical, and will try to clarify. The relevant points to my mind are:
The extraordinary intellectual caliber of the best physicists
Though difficult to formalize, I think that there's a meaningful sense in which one can make statements of the general form "person A has intellectual caliber n times that of person B." Of course, this is domain specific to some degree, but I think that the concept hangs together somewhat even across domains.
One operationalization of this is "if person B reaches a correct conclusion on a given subject, person A could reach it n times as fast." Another is "it would take n copies of person B to do person A's work." These things are hard to estimate, but one can become better calibrated by using the rule "if person A has intellectual caliber n times that of person B and person B has intellectual caliber m times that of person C, then person A has intellectual caliber n*m times that of person C."
In almost all domains, I think that the highest intellectual caliber people have no more than 5x my intellectual caliber. Physics is different. From what I’ve heard, the distribution of talent in physics is similar to that of math. The best mathematicians are 100x+ my intellectual caliber. I had a particularly striking illustrative experience with Don Zagier, who pinpointed a crucial weakness in an analogy that I had been exploring for 6 months (and which I had run by a number of other mathematicians) in a mere ~15 minutes. I would not be surprised if he himself were to have an analogous experience with the historical greats.
When someone is < 5x one’s intellectual caliber, an argument of the type “this person may be smarter than me, but I’ve focused a lot more on having accurate views, so I trust my judgment over him or her” seems reasonable. But when one gets to people who are 100x+ one’s intellectual caliber, the argument becomes much weaker. Model uncertainty starts to play a major role. It could be that people who are that much more powerful easily come to the correct conclusion on a given question without even needing to put conscious effort into having accurate beliefs.
The intrinsic interest of the question of interpretation of quantum mechanics
The question of what quantum mechanics means has been considered one of the universe’s great mysteries. As such, people interested in physics have been highly motivated to understand it. So I think that the question is privileged relative to other questions that physicists would have opinions on — it’s not an arbitrary question outside of the domain of their research accomplishments.
Solicitation of arguments from those with opposing views
In the Muslim theology example, you spend 40 hours engaging with the Muslim philosophers. This seems disanalogous to the present case, in that as far as I know, Eliezer's quantum mechanics sequence hasn't been vetted by any leading physicists who disagree with the many worlds interpretation of quantum mechanics. I also don't know of any public record of ~40 hours of back and forth analogous to the one that you describe. I know that Eliezer might cite an example in his QM sequence, and will take a look.
Replies from: Nick_Beckstead, komponisto, Eliezer_Yudkowsky, shminux, DavidPlumpton, lukeprog↑ comment by Nick_Beckstead · 2013-08-11T07:46:45.334Z · LW(p) · GW(p)
The intrinsic interest of the question of interpretation of quantum mechanics
The question of what quantum mechanics means has been considered one of the universe’s great mysteries. As such, people interested in physics have been highly motivated to understand it. So I think that the question is privileged relative to other questions that physicists would have opinions on — it’s not an arbitrary question outside of the domain of their research accomplishments.
My understanding is that the interpretation of QM is (1) not regarded as a very central question in physics, being seen more as a "philosophy" question and being worked on to a reasonable extent by philosophers of physics and physicists who see it as a hobby horse, (2) is not something that physics expertise--having good physical intuition, strong math skills, detailed knowledge of how to apply QM on concrete problems--is as relevant for as many other questions physicists work on, and (3) is not something about which there is an enormous amount to say. These are some of the main reasons I feel I can update at all from the expert distribution of physicists on this question. I would hardly update at all from physicist opinions on, say, quantum gravity vs. string theory, and I think it would basically be crazy for me to update substantially in one direction or the other if I had comparable experience on that question.
[ETA: As evidence of (1), I might point to the prevalence of the "shut up and calculate" mentality which seems to have been reasonably popular in physics for a while. I'd also point to the fact that Copenhagen is popular but really, really, really, really not good. And I feel that this last claim is not just Nick Beckstead's idiosyncratic opinion, but the opinion of every philosopher of physics I have ever spoken with about this issue.]
Replies from: itaibn0, JonahSinick↑ comment by itaibn0 · 2013-08-12T00:00:23.499Z · LW(p) · GW(p)
A minor quibble.
quantum gravity vs. string theory
I believe you are using bad terminology. 'Quantum gravity' refers to any attempt to reconcile quantum mechanics and general relativity, and string theory is one such theory (as well as a theory of everything). Perhaps you are referring to loop quantum gravity, or more broadly, to any theory of quantum gravity other than string theory?
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-12T10:58:37.768Z · LW(p) · GW(p)
Perhaps I should have meant loop quantum gravity. I confess that I am speaking beyond my depth, and was just trying to give an example of a central dispute in current theoretical physics. That is the type of case where I would not like to lean heavily on my own perspective.
↑ comment by JonahS (JonahSinick) · 2013-08-11T16:14:22.432Z · LW(p) · GW(p)
My understanding is that the interpretation of QM is (1) not regarded as a very central question in physics, being seen more as a "philosophy" question and being worked on to a reasonable extent by philosophers of physics and physicists who see it as a hobby horse,
I agree if we're talking about the median theoretical physicist at a top 5 school, but when one gets further toward the top of the hierarchy, one starts to see a high density of people who are all-around intellectually curious and who explore natural questions that they come across independently of whether they're part of their official research.
(2) is not something that physics expertise--having good physical intuition, strong math skills, detailed knowledge of how to apply QM on concrete problems--is as relevant for as many other questions physicists work on
I agree, but a priori I suspect that philosophers of physics and others without heavy subject matter knowledge of quantum mechanics have leaned too heavily on this. Spending one's life thinking about something can result in subconscious acquisition of implicit knowledge of things that are obliquely related. People who haven't had this experience may be at a disadvantage.
I would hardly update at all from physicist opinions on, say, quantum gravity vs. string theory, and I think it would basically be crazy for me to update substantially in one direction or the other if I had comparable experience on that question.
I actually think that it's possible for somebody without subject matter knowledge to rationally develop priors that are substantially different from expert consensus here. One can do this by consulting physicists who visibly have high epistemic rationality outside of physics, by examining sociological factors that may have led to the status quo, and by watching physicists who disagree debate each other and see which of the points they respond to and which ones they don't.
[ETA: As evidence of (1), I might point to the prevalence of the "shut up and calculate" mentality which seems to have been reasonably popular in physics for a while. I'd also point to the fact that Copenhagen is popular but really, really, really, really not good. And I feel that this last claim is not just Nick Beckstead's idiosyncratic opinion, but the opinion of every philosopher of physics I have ever spoken with about this issue.]
Can you give a reference?
Replies from: Nick_Beckstead, Eliezer_Yudkowsky↑ comment by Nick_Beckstead · 2013-08-11T19:20:39.378Z · LW(p) · GW(p)
I agree, but a priori I suspect that philosophers of physics and others without heavy subject matter knowledge of quantum mechanics have leaned too heavily on this. Spending one's life thinking about something can result in subconscious acquisition of implicit knowledge of things that are obliquely related. People who haven't had this experience may be at a disadvantage.
But note that philosophers of physics sometimes make whole careers thinking about this, and they are among the most high-caliber philosophers. They may be at an advantage in terms of this criterion.
I can't think of a reference in print for my claim about what almost all philosophers think. I think a lot of them would find it too obvious to say, and wouldn't bother to write a paper about it. But, for what it's worth, I attended a couple of conferences on philosophy of physics held at Rutgers, with many leading people in the field, and talked about this question and never heard anyone express an opposing opinion. And I was taught about interpretations of QM from some leading people in philosophy of physics.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-08-11T19:26:05.640Z · LW(p) · GW(p)
What I'm anchoring on here is the situation in the field of philosophy of math, where lack of experience with the practice of math seriously undercuts most philosophers' ability to do it well. There are exceptions, for example I consider Imre Lakatos to be one. Maybe the situation is different in philosophy of physics.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-11T18:02:19.945Z · LW(p) · GW(p)
I'd also point to the fact that Copenhagen is popular but really, really, really, really not good.
Can you give a reference?
Er, http://lesswrong.com/lw/r8/and_the_winner_is_manyworlds/ maybe.
↑ comment by komponisto · 2013-08-13T03:11:18.970Z · LW(p) · GW(p)
Let me first say that I find this to be an extremely interesting discussion.
In almost all domains, I think that the highest intellectual caliber people have no more than 5x my intellectual caliber. Physics is different. From what I’ve heard, the distribution of talent in physics is similar to that of math. The best mathematicians are 100x+ my intellectual caliber.
I think there is a social norm in mathematics and physics that requires people to say this, but I have serious doubts about whether it is true. Anyone 100x+ your intellectual caliber should be having much, much more impact on the world (to say nothing of mathematics itself) than any of the best mathematicians seem to be having. At the very least, if there really are people of that cognitive level running around, then the rest of the world is doing an absolutely terrible job of extracting information and value from them, and they themselves must not care too much about this fact.
More plausible to me is the hypothesis that the best mathematicians are within the same 5x limit as everyone else, and that you overestimate the difficulty of performing at their level due to cultural factors which discourage systematic study of how to imitate them.
Try this thought experiment: suppose you were a graduate student in mathematics, and went to your advisor and said: "I'd like to solve [Famous Problem X], and to start, I'm going to spend two years closely examining the work of Newton, Gauss, and Wiles, and their contemporaries, to try to discern at a higher level of generality what the cognitive stumbling blocks to solving previous problems were, and how they overcame them, and distill these meta-level insights into a meta-level technique of my own which I'll then apply to [Famous Problem X]." What do you think the reaction would be? How many times do you think such a thing has ever been proposed, let alone attempted, by a serious student or (even) senior researcher?
Replies from: Eliezer_Yudkowsky, JonahSinick, army1987↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-13T04:22:04.086Z · LW(p) · GW(p)
Try this thought experiment: suppose you were a graduate student in mathematics, and went to your advisor and said: "I'd like to solve [Famous Problem X], and to start, I'm going to spend two years closely examining the work of Newton, Gauss, and Wiles, and their contemporaries, to try to discern at a higher level of generality what the cognitive stumbling blocks to solving previous problems were, and how they overcame them, and distill these meta-level insights into a meta-level technique of my own which I'll then apply to [Famous Problem X]."
This is a terrible idea unless they're spending half their time pushing their limits on object-level math problems. I just don't think it works to try to do a meta phase before an object phase unless the process is very, very well-understood and tested already.
Replies from: komponisto↑ comment by komponisto · 2013-08-13T10:22:48.023Z · LW(p) · GW(p)
I'm sure that's exactly what the advisor would say (if they bother to give a reasoned reply at all), with the result that nobody ever tries this.
(I'll also note that it's somewhat odd to hear this response from someone whose entire mission in life is essentially to go meta on all of humanity's problems...)
But let me address the point, so as not to be logically rude. The person would be pushing their limits on object-level math problems in the course of "examining the work of Newton, Gauss, and Wiles", in order to understand said work; otherwise, it can hardly be said to constitute a meaningful examination. I also think it's important not to confuse meta-ness with (nontechnical) "outside views"; indeed I suspect that a lot of the thought processes of mathematical "geniuses" consist of abstracting over classes of technical concepts that aren't ordinarily abstracted over, and thus if expressed explicitly (which the geniuses may lack the patience to do) would simply look like another form of mathematics. (Others of their processes, I speculate, consist in obsessive exercising of visual/dynamic mental models of various abstractions.)
Switching back to logical rudeness, I'm not sure the meta-ness is your true rejection; I suspect what you may be really worried about is making sure there are tight feedback loops to which one's reasoning can be subjected.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-13T20:05:40.471Z · LW(p) · GW(p)
(I'll also note that it's somewhat odd to hear this response from someone whose entire mission in life is essentially to go meta on all of humanity's problems...)
That's not the kind of meta I mean. The dangerous form of meta is when you spend several years preparing to do X, supposedly becoming better at doing X, but not actually doing X, and then try to do X. E.g. college. Trying to improve at doing X while doing X is much, much wiser. I would similarly advise Effective Altruists who are not literally broke to be donating $10 every three months to something while they are trying to increase their incomes and invest in human capital; furthermore, they should not donate to the same thing two seasons in a row, so that they are also practicing the skill of repeatedly assessing which charity is most important.
"Meta" for these purposes is any daily activity which is unlike the daily activity you intend to do 'later'.
Tight feedback loops are good, but not always available. This is a separate consideration from doing meta while doing object.
The activity of understanding someone else's proofs may be unlike the activity of producing your own new math from scratch; this would be the problem.
Replies from: somervta↑ comment by somervta · 2013-08-15T09:19:54.667Z · LW(p) · GW(p)
I would similarly advise Effective Altruists who are not literally broke to be donating $10 every three months to something while they are trying to increase their incomes and invest in human capital; furthermore, they should not donate to the same thing two seasons in a row, so that they are also practicing the skill of repeatedly assessing which charity is most important.
This is excellent advice. I have put a note in my calendar three months hence to reevaluate my small monthly donation.
↑ comment by JonahS (JonahSinick) · 2013-08-13T04:16:24.563Z · LW(p) · GW(p)
Nice to hear from you :-)
At the very least, if there really are people of that cognitive level running around, then the rest of the world is doing an absolutely terrible job of extracting information and value from them, and they themselves must not care too much about this fact.
Yes, I think that this is what the situation is.
I'll also say that I think that there are very few such people — maybe on the order of 10 who are alive today. With such a small absolute number, I don't think that their observed impact on math is a lot lower than what one would expect a priori, and the prior in favor of them having had a huge impact on society isn't that strong.
More plausible to me is the hypothesis that the best mathematicians are within the same 5x limit as everyone else, and that you overestimate the difficulty of performing at their level due to cultural factors which discourage systematic study of how to imitate them.
"The best mathematicians are 100+x higher in intellectual caliber than I am" and "the difference is in large part due to cultural factors which discourage systematic study of how to imitate them" aren't mutually exclusive. I'm sympathetic to your position.
What do you think the reaction would be?
To change the subject :-)
How many times do you think such a thing has ever been proposed, let alone attempted, by a serious student or (even) senior researcher?
Basically never.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-08-13T10:05:29.425Z · LW(p) · GW(p)
I'll also say that I think that there are very few such people — maybe on the order of 10 who are alive today.
Not to mention that some of them might be working on Wall Street or something, and not have worked on unsolved problems in mathematics in decades.
↑ comment by A1987dM (army1987) · 2013-08-13T10:00:59.200Z · LW(p) · GW(p)
Anyone 100x+ your intellectual caliber should be having much, much more impact on the world (to say nothing of mathematics itself) than any of the best mathematicians seem to be having.
How do you know how little intellectual caliber JonahSinick has?
*looks at JonahSinick's profile*
*follows link to his website*
*skims the “About me” page*
Yes, you have a point.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-11T17:53:32.054Z · LW(p) · GW(p)
The extraordinary intellectual caliber of the best physicists
That is of course exactly why I picked QM and MWI to make my case for nihil supernum. It wouldn't serve to break a smart person's trust in a sane world if I demonstrated the insanity of Muslim theologians or politicians; they would just say, "But surely we should still trust in elite physicists." It is by demonstrating that trust in a sane world fails even at the strongest point where 'elite common sense' would expect to find it, that I would hope to actually break someone's emotional trust, and cause them to just give up.
Replies from: Nick_Beckstead, JonahSinick↑ comment by Nick_Beckstead · 2013-08-11T19:12:25.496Z · LW(p) · GW(p)
I haven't fully put together my thoughts on this, but it seems like a bad test to "break someone's trust in a sane world" for a number of reasons:
- this is a case where all the views are pretty much empirically indistinguishable, so it isn't an area where physicists really care all that much
- since the views are empirically indistinguishable, it is probably a low-stakes question, so the argument doesn't transfer well to breaking our trust in a sane world in high-stakes cases; it makes sense to assume people would apply more rationality in cases where more rationality pays off
- as I said in another comment, MWI seems like a case where physics expertise is not really what matters, so this doesn't really show that the scientific method as applied by physicists is broken; at most it shows that physicists aren't good at questions that are essentially philosophical; it would be much more persuasive if you showed that, e.g., quantum gravity was obviously better than string theory and only 18% of physicists working in the relevant area thought so
[Edited to add a missing "not"]
Replies from: Eliezer_Yudkowsky, army1987↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-11T19:20:02.732Z · LW(p) · GW(p)
From my perspective, the main point is that if you'd expect AI elites to handle FAI competently, you would expect physics elites to handle MWI competently - the risk factors in the former case are even greater. Requires some philosophical reasoning? Check. Reality does not immediately call you out on being wrong? Check. The AI problem is harder than MWI and it has additional risk factors on top of that, like losing your chance at tenure if you decide that your research actually needs to slow down. Any elite incompetence beyond the demonstrated level in MWI doesn't really matter much to me, since we're already way under the 'pass' threshold for FAI.
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-11T19:59:20.171Z · LW(p) · GW(p)
I feel this doesn't address the "low stakes" issues I brought up, or that this may not even be the physicists' area of competence. Maybe you'd get a different outcome if the fate of the world depended on this issue, as you believe it does with AI.
I also wonder if this analysis leads to wrong historical predictions. E.g., why doesn't this reasoning suggest that the US government would totally botch the constitution? That requires philosophical reasoning and reality doesn't immediately call you out on being wrong. And the people setting things up don't have incentives totally properly aligned. Setting up a decent system of government strikes me as more challenging than the MWI problem in many respects.
How much weight do you actually put on this line of argument? Would you change your mind about anything practical if you found out you were wrong about MWI?
Replies from: Document↑ comment by Document · 2013-08-11T20:34:26.766Z · LW(p) · GW(p)
What different evidence would you expect to observe in a world where amateur attempts to set up systems of government were usually botched?
(Edit: reworded for (hopefully) clarity.)
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-11T20:54:55.415Z · LW(p) · GW(p)
I have an overall sense that there are a lot of governments that are pretty good and that people are getting better at setting up governments over time. The question is very vague and hard to answer, so I am not going to attempt a detailed one. Perhaps you could give it a shot if you're interested.
↑ comment by A1987dM (army1987) · 2013-08-12T19:09:35.477Z · LW(p) · GW(p)
as I said in another comment, MWI seems like a case where physics expertise is really what matters
You meant “is not really”?
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-12T22:37:39.721Z · LW(p) · GW(p)
Yes, thank you for catching that.
↑ comment by JonahS (JonahSinick) · 2013-08-11T18:58:29.572Z · LW(p) · GW(p)
I agree that if it were true that the consensus of elite physicists believed that MWI is wrong when there was a decisive case in favor of it, that would be striking. But
There doesn't seem to be a consensus among elite physicists that MWI is wrong.
Paging through your QM sequence, it doesn't look as though you've systematically addressed all objections that otherwise credible people have raised against MWI. For example, have you been through all of the references that critique MWI cited in this paper? I think that given that most experts don't view the matter as decided, and given the intellectual caliber of the experts, in order to have 99+% confidence in this setting, one has to cover all of one's bases.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-11T19:31:58.066Z · LW(p) · GW(p)
One will generally find that correct controversial ideas convince some physicists. There are many physicists who believe MWI (though they perhaps cannot get away with advocating it as rudely as I do), there are physicists signed up for cryonics, there were physicists advocating for Drexler's molecular nanotechnology before it was cool, and I strongly expect that some physicists who read econblogs have by now started advocating market monetarism (if not I would update against MM). A good new idea should have some physicists in favor of it, and if not it is a warning sign. (Though the endorsement of some physicists is not a proof, obviously many bad ideas can convince a few physicists too.) If I could not convince any physicists of my views on FAI, that would be a grave warning sign indeed. (I'm pretty sure some such already exist.) But that a majority of physicists do not yet believe in MWI does not say very much one way or another.
The cognitive elites do exist and some of them are physicists, therefore you should be able to convince some physicists. But the cognitive elites don't correspond to a majority of MIT professors or anything like that, so you shouldn't be able to convince a majority of that particular group. A world which knew what its own elite truthfinders looked like would be a very different world from this one.
Replies from: JonahSinick, JonahSinick↑ comment by JonahS (JonahSinick) · 2013-08-11T19:50:11.817Z · LW(p) · GW(p)
Ok, putting aside MWI, maybe our positions are significantly more similar than it initially seemed. I agree with
A world which knew what its own elite truthfinders looked like would be a very different world from this one.
I've taken your comments such as
Depends on how crazy the domain experts are being, in this mad world of ours.
to carry connotations of the type "the fraction of people who exhibit high epistemic rationality outside of their immediate areas of expertise is vanishingly small."
I think that there are thousands of people worldwide who exhibit very high epistemic rationality in most domains that they think about. I think that most of these people are invisible owing to the paucity of elites online. I agree that epistemic standards are generally very poor, and that high status academics generally do poorly outside of their immediate areas of expertise.
Replies from: lukeprog↑ comment by lukeprog · 2013-08-17T05:05:30.133Z · LW(p) · GW(p)
I think that there are thousands of people worldwide who exhibit very high epistemic rationality in most domains that they think about. I think that most of these people are invisible owing to the paucity of elites online.
Where does this impression come from? Are they people you've encountered personally? If so, what gave you the impression that they exhibited "very high epistemic rationality in most domains that they think about"?
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-08-17T15:47:34.917Z · LW(p) · GW(p)
To clarify, when I wrote "very high epistemic rationality" I didn't mean "very accurate beliefs," but rather "aware of what they know and what they don't." I also see the qualifier "most" as significant — I think that any given person has some marked blind spots. Of course, the boundary that I'm using is fuzzy, but the standard that I have in mind is something like "at the level of the 15 most epistemically rational members of the LW community."
"Thousands" is at the "1 in a million" level, so in relative terms, my claim is pretty weak. If one disputes the claim, one needs a story explaining how the fraction could be so small. It doesn't suffice to say "I haven't personally seen almost any such people," because there are so many people who one hasn't seen, and the relevant people may be in unexpected places.
I've had the subjective impression that ~ 2% of those who I know outside of the LW/EA spheres fit this description. To be sure, there's a selection effect, but I've had essentially no exposure to physics, business, finance or public policy, nor to people in very populous countries such as India and China. The people who I know who fit this description don't seem to think that they're extremely rare, which suggests that their experiences are similar to my own (though I recognize that "it's a small world," i.e. these people's social circles may overlap in nonobvious ways).
Some of the people who GiveWell has had conversations with come across very favorably in the notes. (I recognize that I'm making a jump from "their area of specialty" to "most topics that they think about." Here I'm extrapolating from the people who I know who are very strong in their area of specialty.) I think that Carl's comment about the Gates Foundation is relevant.
I updated in the direction of people being more rational than I had thought for reasons that I gave at the end of my post Many weak arguments and the typical mind.
I don't have high confidence here: maybe ~ 50% (i.e. I'm giving a median case estimate).
↑ comment by JonahS (JonahSinick) · 2013-08-11T20:19:16.648Z · LW(p) · GW(p)
I should also clarify that I don't think that one needs a silver bullet argument of the type "the people who you would expect to be most trustworthy have the wrong belief on something that they've thought about, with very high probability" to conclude with high confidence that epistemic standards are generally very low.
I think that there are many weak arguments that respected authorities are very often wrong.
Vladimir M has made arguments of the type "there's fierce disagreement among experts at X about matters pertaining to X, so one knows that at least some of them are very wrong." I think that string theory is a good case study. There are very smart people who strongly advocate for string theory as a promising road for theoretical physics research, and other very smart people who strongly believe that the research program is misguided. If nothing else, one can tell that a lot of the actors involved are very overconfident (even if one doesn't know who they are).
Replies from: Vaniver↑ comment by Vaniver · 2013-08-13T04:58:59.739Z · LW(p) · GW(p)
There are very smart people who strongly advocate for string theory as a promising road for theoretical physics research, and other very smart people who strongly believe that the research program is misguided. If nothing else, one can tell that a lot of the actors involved are very overconfident (even if one doesn't know who they are).
Or, alternatively, they disagree about who the research program is promising for.
↑ comment by Shmi (shminux) · 2013-08-13T08:02:26.520Z · LW(p) · GW(p)
who disagree with the many-worlds interpretation of quantum mechanics
That's too strong of a statement. If you exclude some diehards like Deutsch, the Bohmian school and maybe Penrose's gravity-induced decoherence, the prevailing attitude is "however many worlds are out there, we can only ever see one", so, until new evidence comes along, we can safely treat MWI as a single world.
↑ comment by DavidPlumpton · 2013-08-13T05:30:10.649Z · LW(p) · GW(p)
In computer science an elite coder might take 6 months to finish a very hard task (e.g. create some kind of tricky OS kernel), but a poor coder will never complete the task. This makes the elite coder infinitely better than the poor coder. Furthermore the poor coder will ask many questions of other people, impacting their productivity. Thus an elite coder is transfinitely more efficient than a poor coder ;-)
↑ comment by lukeprog · 2013-08-24T22:50:59.995Z · LW(p) · GW(p)
From what I’ve heard, the distribution of talent in physics is similar to that of math. The best mathematicians are 100x+ my intellectual caliber.
See also Stephen Hsu's comments on this.
↑ comment by EHeller · 2013-08-11T02:28:57.445Z · LW(p) · GW(p)
One of the biggest differences between Muslim theology and QM is that the QM experts seem to have much better rationality than the Muslim theologians,
Another huge difference is that much of quantum mechanics is very technical physics. To get to the point where you can even have an opinion you need a fair amount of background information. When assessing expert opinion, you have a hugely difficult problem of trying to discern whether an expert physicist has relevant technical knowledge you do not have OR whether they are making a failure in rationality.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-11T17:50:52.321Z · LW(p) · GW(p)
This of course is exactly what Muslim theologians would say about Muslim theology. And I'm perfectly happy to say, "Well, the physicists are right and Muslim theologians are wrong", but that's because I'm relying on my own judgment thereon.
Replies from: Jiro↑ comment by Jiro · 2013-08-15T15:53:52.610Z · LW(p) · GW(p)
The equivalent to asking Muslim theologians about Allah would be to ask many-worlds-believing quantum physicists about many-worlds.
The equivalent of asking quantum physicists about many-worlds would be to ask theologians about Allah, without specifically picking Muslim theologians. And if you ask theologians about Allah (by which I mean the Muslim conception of God--of course "Allah" is just the Arabic for "God"), you're going to find that quite a few of them don't think that Allah exists and that some other version of God does.
And that's not even getting into the problems caused by the fact that religion is a meme that spreads in a way that skews the population of experts, which quantum mechanics doesn't.
↑ comment by Shmi (shminux) · 2013-08-12T06:49:04.776Z · LW(p) · GW(p)
Similarly, I think it's quite feasible to outperform (in rationality) most of the top-performing physicists. I'm not sure that I have reached that level yet, but I think I certainly could.
Definitely. Scientists in general and physicists in particular are probably no better than other professionals in instrumental rationality outside of their area of expertise, not sure about epistemic rationality. The "top-performing physicists" (what a strange name, physicists are not athletes), whoever they are, are probably not very much better, as you mention. I have seen some of them committing a number of standard cognitive fallacies.
In fact, I personally think that you are way more rational than many famous physicists, since you took pains to improve your rationality skills and became an expert in the area, and they did not.
However, what you have no hope of is to competently judge their results and beliefs about physics, except by relying on the opinions of other physicists and deciding whom to trust how much in case of a disagreement. But I guess we are in agreement here.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-11T01:42:41.297Z · LW(p) · GW(p)
Even if I read the QM sequence and find the arguments compelling, I still wouldn't feel as though I had enough subject matter expertise to rationally disagree with elite physicists with high confidence.
You don't know what's in the QM sequence. The whole point of it (well, one of the whole points) is to show people who wouldn't previously believe such a thing was plausible, that they ought to disagree with elite physicists with high confidence - to break their trust in a sane world, before which nothing can begin.
Replies from: EHeller, shminux, Pablo_Stafforini, hg00, JonahSinick↑ comment by EHeller · 2013-08-11T01:59:12.672Z · LW(p) · GW(p)
The whole point of it (well, one of the whole points) is to show people who wouldn't previously believe such a thing was plausible, that they ought to disagree with elite physicists with high confidence - to break their trust in a sane world, before which nothing can begin.
Does it worry you that people with good domain knowledge of physics (Shminux, Mitchell Porter, myself) seem to feel that your QM sequence is actually presenting a misleading picture of why some elite physicists don't hold to many worlds with high probability?
Also, is it desirable to train rationalists to believe that they SHOULD update their belief about interpretations of quantum mechanics above a weighted sampling of domain experts based on ~50 pages of highschool level physics exposition? I would hope anyone whose sole knowledge of quantum mechanics is the sequence puts HUGE uncertainty bands around any estimate of the proper interpretation of quantum mechanics, because there is so much they don't know (and even more they don't know that they don't know)
Replies from: lukeprog, Eliezer_Yudkowsky↑ comment by lukeprog · 2013-08-11T03:01:06.008Z · LW(p) · GW(p)
Does it worry you that people with good domain knowledge of physics (Shminux, Mitchell Porter, myself) seem to feel that your QM sequence is actually presenting a misleading picture of why some elite physicists don't hold to many worlds with high probability?
Is there an explanation of this somewhere that you can link me to?
Replies from: hg00↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-11T02:58:43.635Z · LW(p) · GW(p)
Does it worry you that people with good domain knowledge of physics (Shminux, Mitchell Porter, myself) seem to feel that your QM sequence is actually presenting a misleading picture of why some elite physicists don't hold to many worlds with high probability?
Not under these circumstances, no. Part of understanding that the world is not sane, is understanding that some people in any given reference class will refuse to be persuaded by any given bit of sanity. It might be worrying if the object-level case against single-world QM were not absolutely clear-cut.
Also, is it desirable to train rationalists to believe that they SHOULD update their belief about interpretations of quantum mechanics above a weighted sampling of domain experts based on ~50 pages of highschool level physics exposition?
Depends on how crazy the domain experts are being, in this mad world of ours.
Replies from: wedrifid, EHeller↑ comment by wedrifid · 2013-08-11T05:39:13.080Z · LW(p) · GW(p)
Not under these circumstances, no. Part of understanding that the world is not sane, is understanding that some people in any given reference class will refuse to be persuaded by any given bit of sanity. It might be worrying if the object-level case against single-world QM were not absolutely clear-cut.
It may be worth also observing that at least two of those users have disagreements with you about epistemology and reductionism far more fundamental than QM interpretations. When someone's epistemic philosophy leads them to disagree about the existence implications of general relativity, their epistemic disagreement about the implications of QM provides very little additional information.
When you bite the bullet and accept someone else's beliefs based on their authority then consistency suggests you do it at the core point of disagreement, not merely one of the implications thereof. In this case that would require rather sweeping changes.
Replies from: Document↑ comment by Document · 2013-08-11T17:57:43.977Z · LW(p) · GW(p)
When someone's epistemic philosophy leads them to disagree about existence implications of general relativity
I'm slightly annoyed that I just reread most of that thread in the understanding that you were linking to the disagreements in question, only to find no comments by either shminux, Mitchell Porter or EHeller and therefore feel no closer to understanding this particular subthread's context.
Replies from: CarlShulman, wedrifid↑ comment by CarlShulman · 2013-08-11T21:09:08.068Z · LW(p) · GW(p)
I think he was referring to shminux's non-scientific-realist views, suggesting they are in conflict with such statements as "there are galaxies distant enough that we cannot see them due to lightspeed limitations."
Replies from: Document↑ comment by wedrifid · 2013-08-12T04:32:18.498Z · LW(p) · GW(p)
I'm slightly annoyed that I just reread most of that thread in the understanding that you were linking to
Never mind, that post and thread are far more interesting than the assorted related comments spread over years. Carl's comment is correct and if you want more details about those you may have luck using Wei Dai's script. You'd have to experiment with keywords.
↑ comment by EHeller · 2013-08-11T03:42:48.728Z · LW(p) · GW(p)
Does it worry you, then, that many worlds does not generalize to Lagrangians that lead to non-linear equations of motion, and that many successful particle physics Lagrangians are non-linear in the fields?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2013-08-11T05:22:42.122Z · LW(p) · GW(p)
This sounds wrong. That nonlinearity is in the classical equations of motion. The quantum equations of motion will still be linear, in the sense of obeying the superposition principle. Offhand, I'm not sure how to sum up the quantum consequences of the classical nonlinearity ... maybe something to do with commutators? ... but I don't see this as an argument against MWI.
edit Maybe you mean quadratic fermionic terms? Which I would agree is specifically quantum.
Replies from: EHeller, wedrifid↑ comment by EHeller · 2013-08-11T05:50:45.847Z · LW(p) · GW(p)
That nonlinearity is in the classical equations of motion.
Sure, the equations of motion are non-linear in the FIELDS, which aren't necessarily the wavefunctions. No one has solved Yang-Mills, which is almost certainly the easiest of the non-linear Lagrangians, so I don't think we actually know whether the quantum equations would be linear.
The standard approach is to fracture gauge symmetry and use solutions to the linearized equations of motion as wavefunctions, and treat the non-linear part as an interaction you can ignore at large times. This is actually hugely problematic because Haag's theorem calls into question the entire framework (you can't define an interaction picture).
It seems unlikely that you can have the classical equations of motion be non-linear in the field without the wavefunction having non-linear evolution; after all, the creation operator at leading order has to obey the classical equation of motion, and you can write a single-particle state as (creation*vacuum). The higher-order terms would have to come together rather miraculously.
And keep in mind, it's not just Yang-Mills. If we think of the standard model as the power series of an effective field theory, it seems likely all those linear, first-order equations governing the propagator are just the linearization of the full theory.
There are many arguments against many worlds; I was simply throwing out one that I used to hear bandied about at particle physics conferences and that isn't addressed in the sequences at all. And generally, quantum field theories still rest on fairly weak mathematical underpinnings. We have a nice collection of recipes that get the job done, but the underlying mathematical structure is unknown. Maybe a miracle occurs. But it's a huge unknown area that is worth pointing to as "what can this mean?" Unless someone has dealt with this in the last 5 or 6 years, that is; it's been a while since I was a physicist.
Edit: Replying to your edit, I'm thinking of any higher-order terms you can throw into your Lagrangian, be they quadratic fermion terms, three- or four-point terms in Yang-Mills, etc. These lead to non-linear equations for the fields, and (quite likely, but unproven) the full solution for the wavefunction will be similarly non-linear. This may have implications for whether or not particle number is a good observable, but I'm tired and don't want to try to work it out.
Edit Again: I re-read this and it's pretty clear to me I'm not communicating well (since I'm having trouble understanding what I just wrote), so I'll try to rephrase. The same non-linearities that crop up in Yang-Mills, which require you to pick a proper gauge before you try to use the canonical relationships to write the quantum equations of motion, are likely more general. You can add all kinds of non-linear terms to the Lagrangian, and you can't always gauge-fix them away. Most of the time they are small and you can ignore them, but on a fundamental level they are there (and at energies higher than some effective scale they probably matter). This requires modifications to linear quantum mechanics (commutation relations become power series of which the traditional commutator is just the lowest-order term, etc.).
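For readers following along: the classical non-linearity in the fields being referred to here is the ordinary textbook kind. A standard illustration (added for reference, not specific to either side of this argument) is the Yang-Mills Lagrangian:

$$
\mathcal{L}_{\mathrm{YM}} = -\tfrac{1}{4} F^{a}_{\mu\nu} F^{a\,\mu\nu}, \qquad
F^{a}_{\mu\nu} = \partial_\mu A^{a}_\nu - \partial_\nu A^{a}_\mu + g\, f^{abc} A^{b}_\mu A^{c}_\nu .
$$

The $g\,f^{abc} A A$ term makes the Lagrangian cubic and quartic in the gauge field $A$, so the classical field equations are non-linear in $A$. Whether that classical non-linearity forces any non-linearity in the evolution of the quantum state is exactly the point in dispute in this thread.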
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2013-08-11T07:01:17.187Z · LW(p) · GW(p)
I strongly disagree that quantizing a classically nonlinear field theory should be expected to lead to a nonlinear Schrodinger equation, either at the level of the global quantum state, or at the level of the little wavefunctions which appear in the course of a calculation.
Linearity at the quantum level means that the superposition principle is obeyed - that if X1(t) and X2(t) are solutions to the Schrodinger equation, then X1(t)+X2(t) is also a solution. Nonlinear Schrodinger equations screw that up. There are sharp bounds on the degree to which that can be happening, at least for specific proposals regarding the nonlinearity.
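Spelled out, the superposition principle in question is just the statement that the Schrodinger equation is linear in the state: if

$$
i\hbar\, \partial_t \psi_1 = \hat{H} \psi_1 \quad \text{and} \quad i\hbar\, \partial_t \psi_2 = \hat{H} \psi_2 ,
\qquad \text{then} \qquad
i\hbar\, \partial_t (\psi_1 + \psi_2) = \hat{H} (\psi_1 + \psi_2),
$$

which holds because $\hat{H}$ does not itself depend on $\psi$. A non-linear Schrodinger equation, where $\hat{H} = \hat{H}[\psi]$, is precisely what breaks this step.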
Also, I think the "argument from Haag's theorem" is just confused. See the discussions here. The adiabatic hypothesis and the interaction picture are heuristics which motivate the construction of a perturbative formalism. Haag's theorem shows that the intuition behind the heuristic cannot literally be correct. But then, except for UV-complete theories like QCD, these theories all run into problems at some high energy anyway, so they don't exist in the classic sense e.g. of von Neumann, as a specific operator algebra on a specific Hilbert space.
Instead, they exist as effective field theories. OK, what does that mean? Let's accept the "Haagian" definition of what a true QFT or prototypical QFT should be - a quantum theory of fields that can satisfy a Hilbert-space axiomatization like von Neumann's. Then an "effective field theory" is a "QFT-like object which only requires a finite number of parameters for its Wilsonian renormalization group to be predictive". Somehow, this has still failed to obtain a generally accepted, rigorous mathematical definition, despite the work of people like Borcherds, Kreimer, etc. Nonetheless, effective field theories work, and they work empirically without having to introduce quantum nonlinearity.
I should also say something about how classical nonlinearity does show up in QFT - namely, in the use of solitons. But a soliton is still a fundamentally classical object, a solution to the classical equations of motion, which you might then "dress" with quantum corrections somehow.
Even if there were formally a use for quantum nonlinearity at the level of the little wavefunctions appearing along the way in a calculation, that wouldn't prove that "ontologically", at the level of THE global wavefunction, that there was quantum nonlinearity. It could just be an artefact of an approximation scheme. But I don't even see a use for it at that level.
The one version of this argument that I might almost have time for, is Penrose's old argument that gravitational superpositions are a problem, because you don't know how to sync up the proper time in the different components of the superposition. It's said that string theory, especially AdS/CFT, is a counterexample to this, but because it's holographic, you're really looking at S-matrix-like probabilities for going from the past to the future, and it's not clear to me that the individual histories linking asymptotic past with asymptotic future, that appear in the sum over histories, do have a natural way of being aligned throughout their middle periods. They only have to "link up" at the beginning and the end, so that the amplitudes can sum. But I may be underestimating the extent to which a notion of time evolution can be extended into the "middle period", in such a framework.
So to sum up, I definitely don't buy these particular arguments about Haag's theorem and nonlinearity. As you have presented them, they have a qualitative character, and so does my rebuttal, and so it could be that I'm overlooking some technicality which gives the arguments more substance. I am quite prepared to be re-educated in that respect. But for now, I think it's just fuzzy thinking and a mistake.
P.S. In the second edit you say that the modified commutation relations "require modifications to linear quantum mechanics". Well, I definitely don't believe that, for the reasons I've already given, but maybe this is a technicality which can help to resolve the discussion. If you can find someone making that argument in detail...
edit What I don't believe, is that the modifications in question amount to nonlinearity in the sense of the nonlinear Schrodinger equation. Perhaps the thing to do here, is to take nonlinear Schrodinger dynamics, construct the corresponding Heisenberg picture, and then see whether the resulting commutation relations, resemble the commutator power series you describe.
↑ comment by wedrifid · 2013-08-11T05:28:53.813Z · LW(p) · GW(p)
This sounds wrong.
Given EHeller's appeal to your authority to support his position in this context, this is an interesting development.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2013-08-11T05:40:46.612Z · LW(p) · GW(p)
We'll have to see what he had in mind - I may have had the wrong interpretation.
↑ comment by Shmi (shminux) · 2013-08-12T04:25:54.510Z · LW(p) · GW(p)
The whole point of it (well, one of the whole points) is to show people who wouldn't previously believe such a thing was plausible, that they ought to disagree with elite physicists with high confidence
-1 for unjustified arrogance. The QM sequence has a number of excellent teaching points, like the discussion of how we can be sure that all electrons are identical, but the idea that one can disagree with experts in a subject matter without first studying the subject matter in depth is probably the most irrational, contagious and damaging idea in all of the sequences.
Replies from: wedrifid↑ comment by wedrifid · 2013-08-12T05:14:31.973Z · LW(p) · GW(p)
-1 for unjustified arrogance.
Example followed. That is, this utterance is poor form and ironic. This charge applies far more to the parent than the grandparent. Advocacy of deference to social status rather than, and in spite of, evidence of competence (or the lack thereof) is the most irrational, contagious and damaging idea that appears on this site. (Followed closely by the similar "but the outside view says ...".)
↑ comment by Pablo (Pablo_Stafforini) · 2013-08-11T16:28:26.052Z · LW(p) · GW(p)
The whole point of it (well, one of the whole points) is to show people who wouldn't previously believe such a thing was plausible, that they ought to disagree with elite physicists with high confidence
Elite physicists are also people. Would you say that, if exposed to your sequence, these physicists would come to see that they were mistaken in their rejection of MWI? If not, it seems that the most credible explanation of the fact that your sequence can persuade ordinary folk that elite physicists are wrong, but can't persuade elite physicists themselves, is that there is something wrong with your argument, which only elite physicists are competent enough to appreciate.
Replies from: ciphergoth, Eliezer_Yudkowsky, wedrifid↑ comment by Paul Crowley (ciphergoth) · 2013-08-11T18:59:22.863Z · LW(p) · GW(p)
Unfortunately, in general when someone has given a question lots of thought and come to a conclusion, it will take an absolute steamroller to get them to change their mind. Most elite physicists will have given the question that much thought. So this wouldn't demonstrate as much as you might like.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-11T17:47:54.466Z · LW(p) · GW(p)
Certainly many elite physicists were persuaded by similar arguments before I wrote them up. If MWI is wrong, why can't you persuade those elite physicists that it's wrong? Huh? Huh?
Replies from: Pablo_Stafforini, Kawoomba, owencb↑ comment by Pablo (Pablo_Stafforini) · 2013-08-11T20:05:22.111Z · LW(p) · GW(p)
Certainly many elite physicists were persuaded by similar arguments before I wrote them up.
Okay, so you are saying that these arguments were available at the time elite physicists made up their minds on which interpretation of QM was correct, and yet only a minority of elite physicists were persuaded. What evidential weight do you assign to this fact? More importantly, what evidential weight should the target audience of your QM sequence assign to it? To conclude that MWI is very likely true after reading your QM sequence, from a prior position of relative agnosticism, seems to me to give excessive weight to my own ability to assess arguments, relative to the ability of people who are smarter than me and have the relevant technical expertise--most of whom, to repeat, were not persuaded by those arguments.
↑ comment by Kawoomba · 2013-08-11T17:52:54.850Z · LW(p) · GW(p)
Some kind of community-driven kickstarter to convince a top-level physicist to read the MWI sequence (in return for a grant) and to provide an in-depth answer tailored to it would be awesome. May also be good PR.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-11T18:03:59.585Z · LW(p) · GW(p)
Scott Aaronson was already reading along as it was published. If we paid David Deutsch to read it, I'd expect him to just say, "Yeah, that's all basically correct", which wouldn't be very in-depth.
From those who already disagree with MWI, I would expect more in the way of awful amateur epistemology delivered with great confidence. Then those who already had their trust in a sane world broken will nod and say "I expected no better." Others will say, "How can you possibly disregard the word of so great a physicist? Perhaps he knows something you don't!" - though they will not be able to formalize the awful amateur epistemology - and nod among themselves about how Yudkowsky failed to anticipate that so strong a reply might be made (it should be presumed to be a very strong reply since a great physicist made it, even if they can't 100% follow themselves why it is a great refutation, or would not have believed the same words so much from a street performer). And so both will emerge strengthened in their prior beliefs, which isn't much of a test.
Replies from: komponisto, JonahSinick↑ comment by komponisto · 2013-08-12T10:28:56.233Z · LW(p) · GW(p)
convince a top-level physicist to read the MWI sequence
Scott Aaronson was already reading along to it as it was published.
Scott Aaronson is not a physicist!
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-12T15:13:58.760Z · LW(p) · GW(p)
I'd expect him to notice math errors and he specializes in the aspect of QM that I talk about, regardless of job titles.
Replies from: Luke_A_Somers, komponisto↑ comment by Luke_A_Somers · 2013-08-12T16:19:22.277Z · LW(p) · GW(p)
Still, it diminishes the effect.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-13T01:11:32.221Z · LW(p) · GW(p)
Nnnoo it doesn't, IMO.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2013-08-13T14:03:02.723Z · LW(p) · GW(p)
The effect of authoritative validation? The difference between professional physicist qua physicist, as opposed to quantum-computing-aware computer scientist, would not be small. Even if Scott Aaronson happens to know quantum mechanics as well as Feynman, it's difficult to validate that authority.
↑ comment by komponisto · 2013-08-13T02:08:36.383Z · LW(p) · GW(p)
Job titles aside, I think you had an incorrect model of his intellectual background, and how much he knows about certain subjects (e.g. general relativity) as contrasted with others (e.g. P and NP). Also (therefore) an incorrect model of how others would view your citation of him as an authority.
That said, I think you were right to think of him as an authority here and expect him to notice any important errors in your QM sequence.
↑ comment by JonahS (JonahSinick) · 2013-08-11T19:22:36.140Z · LW(p) · GW(p)
- As I said in a response to a comment of Nick's, I would distinguish between the median theoretical physicist at a top-5 school and the very best people. Intellectual caliber among high-status scientists at distinguished institutions varies by 100x+. Apparently-elite scientists can be very unrepresentative of the best scientists. You probably haven't had extensive exposure to the best scientists. Unless one has it, one isn't in a good position to assess their epistemology.
- If a great physicist were to exhibit apparently awful amateur epistemology in responding to your sequence, I would assign substantial probability mass (not sure how much) to you being right.
- The issue that I take with your position is your degree of confidence. The key thing to my mind is the point that Yvain makes in Confidence levels inside and outside an argument. You have an epistemic framework that assigns a probability of < 1% (and maybe much smaller) to MWI being wrong. But given the prior established by the absence of a consensus among such smart people, 99+% confidence in the epistemic framework that you're using is really high. I could see 80-90% confidence as being well grounded. But a 99+% confidence level corresponds to an implicit belief that you'd be using a sound epistemic framework at least 99 times if there were 100 instances analogous to the QM/MWI situation. Does this sound right?
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-11T19:26:15.712Z · LW(p) · GW(p)
Maybe you should just quickly glance at http://lesswrong.com/lw/q7/if_manyworlds_had_come_first/. The mysterious force that eats all of the wavefunction except one part is something to which I assign about the same probability as I assign to God - there is just no reason to believe in it except poorly justified elite opinion, and I don't believe in elite opinions that I think are poorly justified.
Replies from: OccamsTaser, JonahSinick↑ comment by OccamsTaser · 2013-08-11T19:42:02.187Z · LW(p) · GW(p)
But the main (if not only) argument you make for many worlds in that post and the others is the ridiculousness of collapse postulates. Now, I'm not disagreeing with you: collapses would defy a great deal of convention (causality, relativity, CPT-symmetry, etc.), but even with 100% confidence in this (as a hypothetical), you still wouldn't be justified in assigning 99+% confidence to many worlds. There exist single-world interpretations without a collapse, against which you haven't presented any arguments. Bohmian mechanics would seem to be the most plausible of these (given the LW census). Do you still assign <1% likelihood to this interpretation, and if so, why?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-11T23:09:45.647Z · LW(p) · GW(p)
Obvious rationalizations of single-world theories have no more evidence in their favor, no more reason to be believed; it's like Deism vs. Jehovah. Sure, the class 'Deism' is more probable but it's still not credible in an absolute sense (and no, Matrix Lords are not deities, they were born at a particular time, have limited domains and are made of parts). You can't start with a terrible idea and expect to find >1% rationalizations for it. There are more than 100 possible terrible ideas. Single-world QM via collapse/Copenhagen/shut-up was originally a terrible idea and you shouldn't expect terrible ideas to be resurrectable on average. Privileging the hypothesis.
(Specifically: Bohm has similar FTL problems and causality problems and introduces epiphenomenal pointers to a 'real world' and if the wavefunction still exists (which it must because it is causally affecting the epiphenomenal pointer, things must be real to be causes of real effects so far as we know) then it should still have sentient observers inside it. Relational quantum mechanics is more awful amateur epistemology from people who'd rather abandon the concept of objective reality, with no good formal replacement, than just give up already. But most of all, why are we even asking that question or considering these theories in the first place? And again, simulated physics wouldn't count because then the apparent laws are false and the simulations would presumably be of an original universe that would almost certainly be multiplicitous by the same reasoning; also there'd presumably be branches within the sim, so not single-world which is what I specified.)
If you can assign <1% probability to deism (the generalized abstracted class containing Jehovahism) then there should be no problem with assigning <1% probability to all single-world theories.
Replies from: owencb, Kawoomba↑ comment by Kawoomba · 2013-08-12T17:41:25.752Z · LW(p) · GW(p)
Minor points: the generalized abstracted class containing Jehovaism is general theism, not deism. Deism is the subset of deities which do not interfere with their creation, whereas personal theism is the subset of deities which do interfere.
Also -- I myself stopped with this usage but it bears mentioning -- there are "gods" which were born as mortals and ascended, apotheosis-like; there are gods that can kill each other, there's Hermes and legions of minor gods, many of them "with parts".
It's not trivial to draw a line that allows for killable gods of ancient times (compare Ragnarök) and thus doesn't contradict established mythology that has lots of trivial, minor gods, but doesn't allow for Matrix Lords to be considered gods (if not in the contemporary "triple-O Abrahamic deity" parlance). Ontologically fundamental mental powers ain't the classifying separator, and I'm sure you'd agree that a label shouldn't depend simply on whether we understand a phenomenon. Laws of physics with an if-clause for a certain kind of "god"-matter would still be laws of physics, and just having that description (knowing the laws), lifting the curtain, shouldn't be sufficient to remove a "god" label.
↑ comment by JonahS (JonahSinick) · 2013-08-11T19:37:23.534Z · LW(p) · GW(p)
Thanks. I read this some years ago, but will take another look.
↑ comment by wedrifid · 2013-08-12T04:13:26.963Z · LW(p) · GW(p)
Elite physicists are also people. Would you say that, if exposed to your sequence, these physicists would come to see that they were mistaken in their rejection of MWI? If not, it seems that the most credible explanation of the fact that your sequence can persuade ordinary folk that elite physicists are wrong, but can't persuade elite physicists themselves, is that there is something wrong with your argument, which only elite physicists are competent enough to appreciate.
Elite physicists are easy to persuade in the abstract. You wait till the high status old people die.
↑ comment by hg00 · 2013-08-11T16:10:28.698Z · LW(p) · GW(p)
to break their trust in a sane world, before which nothing can begin.
It seems likely to me that both "the world is insane" and "the world is sane" are incorrect, and the truth is "the world is right about some things and wrong about other things". I like Nick's idea of treating the opinions of people who society regards as experts as a prior and carefully updating from that as evidence warrants. I dislike treating mainstream human civilization as a faction to either align with or break from, and I dislike even more the way some people in the LW community show off by casually disregarding mainstream opinion. This seems like something that both probably looks cultish to outsiders and is legitimately cultish in a way that is bad and worth fighting.
Replies from: Document, army1987↑ comment by Document · 2013-08-11T21:18:42.027Z · LW(p) · GW(p)
You probably have a good point, but I found it briefly amusing to imagine it going like this:
ELIEZER: Elite scientists are usually elite for good reason, but sometimes they're wrong. People shouldn't blindly trust an elite's position on a subject when they have compelling reasons to believe that that position is wrong.
STRAWMAN: I agree that there are problems with blindly trusting them. But let's not jump straight to the opposite extreme of not blindly trusting them.
Replies from: hg00↑ comment by hg00 · 2013-08-11T22:30:53.637Z · LW(p) · GW(p)
As soon as you start saying things like people need "to break their trust in a sane world, before which nothing can begin", you know you have a problem.
Replies from: Document, wedrifid↑ comment by wedrifid · 2013-08-12T05:03:29.494Z · LW(p) · GW(p)
As soon as you start saying things like people need "to break their trust in a sane world, before which nothing can begin", you know you have a problem.
For example, you may be talking to a child whose parents are still lying to him.
Replies from: hg00↑ comment by hg00 · 2013-08-12T05:31:13.724Z · LW(p) · GW(p)
Deciding the world is completely untrustworthy after learning that Santa Claus is a lie seems like the wrong update to make. The right update to make seems to be "adults sometimes lie to kids to entertain themselves and the kids". In fact, arguably society tells you more incorrect info as a kid than as an adult, so you should become more and more trusting of what society tells you the older you grow. (Well, I guess you also get smarter as you grow, so it's a bit more complicated than that.)
Replies from: wedrifid↑ comment by wedrifid · 2013-08-12T06:52:44.781Z · LW(p) · GW(p)
Deciding the world is completely untrustworthy after learning that Santa Claus is a lie seems like the wrong update to make. The right update to make seems to be "adults sometimes lie to kids to entertain themselves and the kids".
It would be even better to update in the direction of understanding the interaction between social status, belief, affiliation and dominance.
In fact, arguably society tells you more incorrect info as a kid than as an adult, so you should become more and more trusting of what society tells you the older you grow. (Well, I guess you also get smarter as you grow, so it's a bit more complicated than that.)
The parenthetical is correct. The 'arguable' part is true only in as much as it is technically possible to argue for most things that are stupid.
Replies from: hg00↑ comment by hg00 · 2013-08-13T03:10:34.261Z · LW(p) · GW(p)
You should become more trusting in the sense that what adults tell you constitutes stronger Bayesian evidence, but your own thoughts will also constitute stronger Bayesian evidence.
Replies from: wedrifid↑ comment by wedrifid · 2013-08-13T04:18:50.074Z · LW(p) · GW(p)
You should become more trusting in the sense that what adults tell you constitutes stronger Bayesian evidence, but your own thoughts will also constitute stronger Bayesian evidence.
Adults also have more incentive to lie and deceive you when your decisions have enough power to influence their outcomes.
↑ comment by A1987dM (army1987) · 2013-08-13T10:39:27.189Z · LW(p) · GW(p)
You must be strawmanning “the world is insane” if you think it's not compatible with “the world is right about some things and wrong about other things”. EY knows pretty well that the world isn't wrong about everything.
↑ comment by JonahS (JonahSinick) · 2013-08-11T06:36:06.105Z · LW(p) · GW(p)
This seems circular :-) (I should use an inside view of your sequence to update the view that I shouldn't give too much weight to my inside view of your sequence...?) But I'll check out your sequence.
Replies from: wedrifid↑ comment by wedrifid · 2013-08-11T07:50:29.994Z · LW(p) · GW(p)
This seems circular :-) (I should use an inside view of your sequence to update the view that I shouldn't give too much weight to my inside view of your sequence...?)
That isn't circular. Choosing to limit your use of reasoning to a specific subset of valid reasoning (the assertion of a reference class that you have selected and deference to analogies thereto---'outside view') is a quirk of your own psychology, not a fact about what reasoning is valid or circular. You don't have to believe reasoning that isn't based on the outside view but this is very different from that reasoning being circular. Using arguments that will not convince the particular audience is futile, not fallacious.
The above holds regardless of whether your particular usage of outside view reasoning is sound with respect to the subject discussed in this context. "Circular" is an incorrect description even if Eliezer is wrong.
↑ comment by private_messaging · 2014-08-31T10:36:58.971Z · LW(p) · GW(p)
The problem is, he hasn't spent 1-2 pages actually writing out a more-or-less formal argument as to how exactly he's comparing the complexity of two interpretations that output different sorts of data. Attempts to produce such arguments tend to de-confuse people who think they see an answer.
There are 40 pages of, mostly, second-order popularization of some basics of QM (as if the author has only learned QM from popularizations himself and hasn't ensured correctness with homework), rife with things like incitement to feel better about yourself if you agree with the author, allusions to how the author can allegedly write the interpretations as TM programs, the author 'explaining' why physicists don't agree with him, and so on.
Ultimately, any worthwhile conclusion about MWI is to an extreme extent dependent on extensive domain-specific expertise in physics. One could perhaps argue that it also requires rationality, but one can't argue that one doesn't need to know physics. It doesn't really matter to physicists what the most rational thing to believe is when you lack said knowledge, just as Monty in the Monty Hall problem, who knows where the goat is, doesn't care much about which door has the highest probability given ignorance of where the goat is.
↑ comment by lukeprog · 2013-08-12T19:08:04.356Z · LW(p) · GW(p)
A related question...
David Hume is the first person who, the way I'm measuring things, was "basically right about nearly all the basics," circa 1740, when his views were mostly minority opinions among the educated elite. His basic views didn't become majority elite opinion (again, in the way I'm carving things up) until, say, the 1910s. Was Hume justified in being quite confident of his views back in 1740?
I'm not actually asking you to answer: I'm betting neither of us knows enough about Hume or the details of the evidence and argument available to him and his peers to know. But it's an interesting question. I lean non-confidently toward "yes, Hume was justified," but that mostly just reveals my view about elite competence rather than arguing for it.
Replies from: JonahSinick, Alejandro1↑ comment by JonahS (JonahSinick) · 2013-08-12T21:53:20.961Z · LW(p) · GW(p)
Information markets have become more efficient over time, and this has asymmetrically improved elite common sense relative to the views of outstanding individuals.
Even if Hume's views were minority opinions, they may not have been flat out rejected by his peers (although they may have been). So the prior against his views being right coming from other people thinking other things may not be that strong.
Even if Hume wouldn't have been justified in believing that the conjunction of all of his views is probably right, he could still have been justified in believing that each individual one is right with high probability.
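A toy illustration of the conjunction point, with made-up numbers: suppose Hume held 20 roughly independent views, each right with probability 0.9. Then the probability that all of them are right is only

$$
0.9^{20} \approx 0.12,
$$

so he could be justified in high confidence about each view individually while still expecting several of them to be wrong.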
I think that it's sometimes possible to develop high (~95+%) confidence in views that run counter to elite conventional wisdom. This can sometimes be accomplished by investigating the relevant issues in sufficient detail and by using model combination, as long as one is sufficiently careful about checking that the models that one is using aren't very dependent.
In the particular case of MWI, I doubt that Eliezer has arrived at his view via thorough investigation and by combining many independent models. In a response to Eliezer I highlighted a paper which gives a lot of references to papers criticizing MWI. As I wrote in my comment, I don't think that it's possible to have high confidence in MWI without reading and contemplating these criticisms.
↑ comment by Alejandro1 · 2013-08-14T07:58:24.951Z · LW(p) · GW(p)
This reminds me of this passage of Richard Rorty (could not link in a better way, sorry!). Rorty considers whether someone could ever be warranted in believing p against the best, thoughtfully considered opinion of the intellectual elites of his time. He answers:
Only if there was some way of determining warrant sub specie aeternitatis, some natural order of reasons that determines, quite apart from S's ability to justify p to those around him, whether he is really justified in holding p.
Now, Rorty is a pragmatist sympathetic to postmodernism and cultural relativism (though without admitting explicitly to the latter), and for him the question is rhetorical and the answer is clearly "no". From the LW point of view, it seems at first glance that the structure of Bayesian probability provides an objective "natural order of reasons" that allows an unequivocal answer. But digging deeper, the problem of the priors brings back the "relativist menace" of the title of Rorty's essay. In practice, our priors come from our biology and our cultural upbringing; even our idealized constructs like Solomonoff induction are the priors that sound most reasonable to us given our best knowledge, which comes largely from our culture. If under considered reflection we decide that the best priors for existing humans involve the elite opinion of existing society, as the post suggests, it follows that Hume (or Copernicus or Einstein!1905 or…) could not be justified in going against that opinion, although they could be correct (and become justified as elites are convinced).
(Note: actually, Einstein might have been justified according to the sophisticated relativism of this post, if he had good reasons to believe that elite opinion would change when hearing his arguments--and he probably did, since the relevant elite opinion did change. But I doubt Hume and Copernicus could have been justified in the same way.)
↑ comment by Document · 2013-08-12T04:01:23.779Z · LW(p) · GW(p)
It's a priori evident that you're vastly more rational than the great physicists on this dimension.
I suspect you have some good points (and I upvoted), but I have a fantasy of people who say this being transported to a world where it's normal to deliberately bang your head against a hard surface once a day. If they don't want to, they're asked "What - so you think you're smarter than everybody else?".
↑ comment by Nick_Beckstead · 2013-08-10T20:04:55.480Z · LW(p) · GW(p)
Thank you for your thoughtful comments. I've tried to answer some where I could.
(Upvoted.) I have to say that I'm a lot more comfortable with the notion of elite common sense as a prior which can then be updated, a point of departure rather than an eternal edict; but it seems to me that much of the post is instead speaking of elite common sense as a non-defeasible posterior. (E.g. near the start, comparing it to philosophical majoritarianism.)
I agree that it would be helpful if I emphasized that difference more at the beginning. I do think the idea of “convincing” elite common sense of something is helpful. It isn’t great if you say, “Well, elite common sense has prior odds of 1:100 in favor of X, and I just had an observation which, by my personal standards, has a likelihood ratio of 100:1 in favor of X, so that my posterior in X is 50:50.” In this framework, you need to be able to say, “I had an observation which, by the standards of elite common sense, has a likelihood ratio of 100:1 in favor of X, so now my posterior in X is 50:50.” Since priors set the standards for what counts as evidence for what, elite common sense gets to set those standards in this framework. In whatever sense you’re “stuck” with a prior, you’re “stuck” with the prior of elite common sense in this framework.
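To make the odds-form arithmetic in that example explicit (this just writes out the calculation described above):

$$ \underbrace{\tfrac{1}{100}}_{\text{prior odds for } X} \times \underbrace{\tfrac{100}{1}}_{\text{likelihood ratio for } X} = 1, \quad\text{i.e. posterior odds of } 1{:}1\text{, or } 50\%. $$

The same multiplication applies in the sanctioned version; the only thing that changes is whose standards supply the 100:1 likelihood ratio.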
It also seems to me that much of the text has the flavor of what we would in computer programming call the B&D-nature,
Sorry, but I have no background in programming. I tried to figure out what B&D was, but it wasn’t readily googled. Could you provide a link or explanation please?
an attempt to impose strict constraints that prevent bad programs from being written, when there is not and may never be a programming language in which it is the least bit difficult to write bad programs, and all you can do is offer tools to people that (switching back to epistemology) make it easier for them to find the truth if they wish to do so, and make it clearer to them when they are shooting off their own foot. I remark, inevitably, that when it comes to discussing the case of God, you very properly - as I deem it proper - list off a set of perfectly good reasons to violate the B&D-constraints of your system.
One important point is that I mainly see this as in tension with the guideline that you stress-test your views by seeing if you can get a broad coalition of trustworthy people to agree, rather than with the more fundamental principle that you rely on elite common sense standards of reasoning. I'm not sure if I'm effectively responding because of my uncertainty about what B&D means.
And this would actually make a deal more sense if we were taking elite opinion about God as a mere point of departure, and still more sense if we were allowing ourselves to be more selective about 'elites' than you recommend.
I am pretty open to playing around with who we count as elites. Being able to get a grip on their thinking and avoiding cherry-picking are very important to me, though.
(It rather begs the question to point to a statistic about what 93% of the National Academy of Sciences believe - who says that theirs is the most elite and informed opinion about God? Would the person on the street say that, or point you to the prestigious academies of theologians, or perhaps the ancient Catholic Church?)
I think the group of people I explicitly recommended—the upper crust of people with Ivy League-equivalent educations—would put a reasonable amount of weight on the scientists' opinions if they were really trying to have accurate views about this question, but would resist using only that group as their starting point. I'm not sure how much these people would really say that the theologians had tried really hard to have accurate beliefs about whether God exists. I think they might point to philosophers and converts and such.
But even that's hard to tell because the discussion is also very abstract, and you seem to be much more relaxed when it comes to concrete cases than when you are writing in the abstract about what we ought not to do.
Would you care to elaborate on this?
I would be interested in what you think this philosophy says about (a) the probability that quantum mechanics resolves to a single-world theory, and (b) the probability that molecular nanotechnology is feasible.
I myself would be perfectly happy saying, "The elite common sense prior for quantum mechanics resolving to a single world is on the order of 40%, however the posterior - now taking into account such matters as the application of the quantitative Occam's Razor as though they were evidence being applied to this prior - is less than 1%." Which is what I initially thought you were saying, and I was nodding along to that.
My background on this question mainly consists of taking a couple of courses on quantum mechanics (one by Tim Maudlin in philosophy and one by Shelly Goldstein in applied math) in graduate school and reading your quantum physics sequence. I think that the first pass would be to ask, what do physicists think about the question? In the survey I’ve heard of, many worlds got 18% of the votes of 33 physicists at a quantum foundations meeting. I would take that as a starting point and then update based on (a) my overall sense that the arguments favor many worlds and (b) my sense that people impressive to me tend to go for the many worlds interpretation. Regarding (a), I think there is a very strong update against the Copenhagen interpretation, a moderate update against Bohm, and I don’t really know what some of the other interpretations even are. (a) and (b) aren’t really independent because the people I am impressed by share my biases and blind spots. Together these points maybe give an update on the order of 5:1 to 30:1 in favor of many worlds, so that my posterior in many worlds is around 50-85%. This would be pretty open to changing on the basis of further considerations. [Edited to add: I'm not very sure what my framework implies about this case. I wouldn't be that surprised if my framework said I shouldn't change much from the physicists' distribution of answers to this question, but I also wouldn't be that surprised if it said I should change substantially.]
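For the record, here is the arithmetic behind that range written out, using the 18% survey figure and the 5:1 to 30:1 update named above:

$$ \frac{0.18}{0.82} \approx 0.22, \qquad 0.22 \times 5 \approx 1.1 \;\Rightarrow\; \frac{1.1}{2.1} \approx 52\%, \qquad 0.22 \times 30 \approx 6.6 \;\Rightarrow\; \frac{6.6}{7.6} \approx 87\%. $$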
I have so little to go on regarding the feasibility of molecular nanotechnology that I’d prefer to avoid public embarrassment by making up numbers. It’s something I would like to look into in greater depth eventually. Most of my update on this comes from “a lot of smart people I know take it seriously.”
So far as distinguishing elites goes, I remark that in a case of recent practice, I said to someone, "Well, if it comes down to believing the systematic experiments done by academic scientists with publication in peer-reviewed journals, or believing in what a bunch of quantified self people say they discovered by doing mostly individual experiments on just themselves with no peer review, we have no choice but to believe the latter." And I was dead serious. (Now that I think about it, I've literally never heard of a conflict between the quantified self people and academic science where academic science later turned out to be right, though I don't strongly expect to have heard about such a case if it existed.)
It’s a tricky test case for me to check because I am not very familiar with the quantified self people. Depending on what kind of experiments the scientists did and what these people are like, I may agree with you. A really important test here is whether you think that if the “elite” people had access to your evidence and analysis on this issue, they would agree with your perspective.
Replies from: CarlShulman, Eliezer_Yudkowsky↑ comment by CarlShulman · 2013-08-10T21:02:34.552Z · LW(p) · GW(p)
I haven't seen any of these interpretation polls with a good random sample, as opposed to niche meetings.
One of the commenters below the Carroll blog post you linked suggests that poll was from a meeting organized by a Copenhagen proponent:
I think that one of the main things I learned from this poll is that if you conduct a poll at a conference organized by Zeilinger then Copenhagen will come out top, whereas if you conduct a poll at a conference organized by Tegmark then many worlds will come out top. Is this a surprise to anyone?
The Tegmark "Everett@50" (even more obvious bias there, but this one allowed a "none of the above/undecided" option which was very popular) conference results are discussed in this paper:
Which interpretation of quantum mechanics is closest to your own?
2 Copenhagen or consistent histories (including postulate of explicit collapse)
5 Modified dynamics (Schrödinger equation modified to give explicit collapse)
19 Many worlds/consistent histories (no collapse)
2 Bohm
1.5 Modal
22.5 None of the above/undecided
- Do you feel comfortable saying that Everettian parallel universes are as real as our universe? (14 Yes/26 No/8 Undecided)
A 1997 workshop:
Interpretation (votes):
Copenhagen: 13
Many Worlds: 8
Bohm: 4
Consistent Histories: 4
Modified dynamics (GRW/DRM): 1
None of the above/undecided: 18
More polls are cited at Wikipedia.
A 2005 poll of fewer than 40 students and researchers, taken after a course on the Interpretation of Quantum Mechanics at the Institute for Quantum Computing, University of Waterloo, found "Many Worlds (and decoherence)" to be the least favored.
And there is a strange one, for which I don't yet have a link to the original, critiqued at Wikipedia and discussed here, that claimed majority support for MWI in a sample of 72. The argument for it being compatible with other polls is that it includes a lot of cosmologists, who tend to support MWI (it makes it easier to explain the evolution of the universe as a whole, and perhaps they are more open to a vast universe extending beyond our vision), but something still seems fishy about it.
1) "Yes, I think MWI is true" 58% 2) "No, I don't accept MWI" 18% 3) "Maybe it's true but I'm not yet convinced" 13% 4) "I have no opinion one way or the other" 11%
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-10T20:32:44.645Z · LW(p) · GW(p)
Quick remarks (I may or may not be able to say more later).
If your system allows you to update to 85% in favor of Many-Worlds based on moderate familiarity with the arguments, then I think I'm essentially okay with what you're actually doing. I'm not sure I'm okay with what the OP advocates doing, but I'm okay with what you just did there.
↑ comment by Document · 2013-08-12T03:31:06.115Z · LW(p) · GW(p)
(I haven't read every post in this subthread, or the original post, so at least some of the following may be redundant or addressed elsewhere.)
(The following is a tangent.)
It rather begs the question to point to a statistic about what 93% of the National Academy of Sciences believe - who says that theirs is the most elite and informed opinion about God? Would the person on the street say that, or point you to the prestigious academies of theologians, or perhaps the ancient Catholic Church?
I'm not sure of the answer to that, but it seems like there's an important difference between a question within the domain of a field and the question of whether the field as a whole is valid. Are doctors the experts on whether it's right (not whether it's effective) to treat illness rather than "letting nature run its course"? Are farmers the experts on whether breeding animals for meat is ethically or environmentally defensible? Would the person on the street be convinced by arguments that SIAI or MIRI were the experts on whether AI research was beneficial?
(The following is a ramble/vent, with no particular point, stating things you probably already know.)
7% of top scientists is still a lot of people. To my knowledge, people with the highest credentials in every relevant field argue for theism; they aren't all the degree-mill fakers some atheists portray. You can find cosmologists whose expert judgment on the origin of the universe is that it was deliberately created, neurobiologists who'll explain how the physical structure of the brain isn't enough to account for the human mind, archaeologists and historians who'll show you a wealth of corroborating evidence for the religious text of your choice, and biologists who'll explain why Earth's life forms can't have evolved wholly by chance.
(From The Hidden Face of God, by Gerald Schroeder, Ph.D.: "The more knowledge one has, the harder it becomes to be fooled. Those diagrams that in ten steps evolve from a random spread of lines into people-like outlines, and in a few hundred steps simulate a light-sensitive patch on skin evolving into an eye, once had me fooled. They are so impressively convincing. Then I studied molecular biology.")
These people have thought about theism for decades. They're people, not evil mutants or cartoon idiots. They've considered more than once the possibility that they might be wrong; each time, they've determined that they aren't. Any argument you could make against their position, they've probably heard enough times to memorize.
And yet so-called "rationalists", many with no relevant education of their own, have the ridiculous idea that they're entitled to dismiss everything those experts say on the grounds of a few informal arguments they read on the Internet. Edit: more than ridiculous - it's irrational, contagious and damaging.
(Disclaimer: This isn't about your quantum physics sequence in particular, which I haven't actually read.)
comment by joaolkf · 2013-09-12T15:27:59.144Z · LW(p) · GW(p)
Data point: I have emailed the top ~10 researchers in each of 3 different fields in which I was completely naive at the time (social psychology, computational social simulation, neuroscience of morality), around 30 people in total, and they all tended to engage with my questions, with subsequent e-mail conversations of 5 to 30 emails. I had no idea how easy it was to get a top researcher to engage in a conversation with a naive person. Knowing this made me much more prone to apply the prescription of this post, one I am in agreement with. (I understand that what this post prescribes is far more complex than actually engaging in a conversation with the top researchers of an area.)
comment by CronoDAS · 2013-08-11T20:22:51.138Z · LW(p) · GW(p)
Formatting issue: for long posts to the main page, it's nice to have a cut after the introduction.
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-11T20:36:27.262Z · LW(p) · GW(p)
Sorry, limited experience with LW posts and limited HTML experience. 5 minutes of Google didn't help. Can you link or explain? Sorry if I'm being dense.
Replies from: CronoDAS↑ comment by CronoDAS · 2013-08-11T20:42:55.017Z · LW(p) · GW(p)
Sorry, I should have said "summary break" - that's what the editor calls it. It's what puts the "continue reading" link in the post.
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-11T20:58:48.040Z · LW(p) · GW(p)
Fixed. Thank you.
comment by CarlShulman · 2013-08-10T20:14:44.325Z · LW(p) · GW(p)
It's nice to have this down in linkable format.
As a more general point, the framework seems less helpful in the case of religion and politics because people are generally unwilling to carefully consider arguments with the goal of having accurate beliefs. By and large, when people are unwilling to carefully consider arguments with the goal of having accurate beliefs, this is evidence that it is not useful to try to think carefully about this area. This follows from the idea mentioned above that people tend to try to have accurate views when it is in their present interests to have accurate views. So if this is the main way the framework breaks down, then the framework is mostly breaking down in cases where good epistemology is relatively unimportant.
It seems to me that you are mainly using this framework in cases involving charity and setting policy to affect long run outcomes, i.e. cases where the short run individual selfish (CDT) impact of good decisions is low. But by the logic above those are places where the framework would be less applicable.
The Ungar et al. forecasting article you link to merits much more examination. Some of its takeaways:
- Brief training in probability methods meaningfully improves forecasting accuracy over a control group, and over 'scenario analysis' training
- Teams outperform individuals in forecasting, and teams with good aggregation algorithms beat prediction markets (although the prediction markets didn't have the scale or time to permit "prediction hedge funds" to emerge and hire lots of analytical talent)
- Aggregated opinion has lower squared error in forecasting, and somewhat more sophisticated algorithms do even better, especially by transforming the aggregate probabilities away from 0.5 (a rough sketch of one such transform follows below)
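For concreteness, here is a minimal sketch of the kind of transform described in the last bullet. This is an illustrative extremizing function, not necessarily the exact algorithm from the Ungar et al. paper, and the exponent value is an arbitrary choice:

```python
def extremize(p, a=2.5):
    """Push an aggregated probability away from 0.5.

    p: aggregated probability, strictly between 0 and 1
    a: extremizing exponent; a > 1 sharpens the forecast
       (2.5 is only an illustrative choice, not a value from the paper)
    """
    return p**a / (p**a + (1 - p)**a)


def aggregate(probabilities, a=2.5):
    """Average the individual forecasts, then extremize the mean."""
    mean = sum(probabilities) / len(probabilities)
    return extremize(mean, a)


# Example: five forecasters who individually lean toward "yes"
print(aggregate([0.6, 0.65, 0.7, 0.6, 0.75]))  # roughly 0.84, well above the raw mean of 0.66
```

The intuition is that when several partly independent forecasters all lean the same way, the evidence they jointly hold usually justifies a more extreme probability than their simple average.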
↑ comment by Nick_Beckstead · 2013-08-10T20:32:56.936Z · LW(p) · GW(p)
It seems to me that you are mainly using this framework in cases involving charity and setting policy to affect long run outcomes, i.e. cases where the short run individual selfish (CDT) impact of good decisions is low. But by the logic above that is one of the places where the framework would be less applicable.
[Edited to add: This may be a stronger point than my original comment recognized. One thing I'd like to add is that a lot of effective altruism topics are pretty apolitical. The fact that we can get people to think rationally about apolitical topics much more easily, and can thereby stress-test our views about these topics much more easily, seems like a significant consideration in favor of avoiding politically-charged topics. I didn't fully appreciate that before thinking about Carl's comment.]
I agree that this is a consideration in favor of thinking it isn't helpful to think carefully about how to set policy to affect long-run outcomes. One qualifier is that when I said people's "interests," I didn't mean to limit my claim to their "selfish interests" or their concern about what happens right now. I meant to focus on the desires that they currently have, including their present concerns about the welfare of others and the future of humanity.
Another issue is that we have strong evidence that certain types of careful thinking about how to do good does result in conclusions that can command wide support in the form of GiveWell's success so far. I see GiveWell's work as in many ways continuous with trying to find out how to optimize for long-run impact.
I think there is more uncertainty about the value of trying to move into speculative considerations about very long-run impacts. This framework may ultimately suggest that you can't arrive at conclusions that will command the support of a broad coalition of impressive people. This would be an update against the value of looking into speculative issues. I hope to find some areas where credible work can be done, and I'm optimistic that people who do care about long-run outcomes will help stress-test my conclusions. I also hope to articulate more of my thinking about why it is potentially helpful to try to think about speculative long-run considerations.
comment by Brian_Tomasik · 2013-08-11T23:02:35.241Z · LW(p) · GW(p)
Great post, Nick! I agree with most of what you say, although there are times when I don't always demonstrate this in practice. Your post is what I would consider a good "motivational speech" -- an eloquent defense of something you agree with but could use reminding of on occasion.
It's good to get outside one's intellectual bubble, even a bubble as fascinating and sophisticated as LessWrong. Even on the seemingly most obvious of questions, we could be making logical mistakes.
I think the focus on only intellectual elites has unclear grounding. Is the reason because elites think most seriously about the questions that you care about most? On a question of which kind of truck was most suitable for garbage collection, you would defer to a different class of people. In such a case, I guess you would regard them as the (question-dependent) "elites."
It can be murky to infer what people believe based on actions or commitments, because this mixes two quantities: Probabilities and values. For example, the reason most elites don't seem to take seriously efforts like shaping trajectories for strong AI is not because they think the probabilities of making a difference are astronomically small but because they don't bite Pascalian bullets. Their utility functions are not linear. If your utility function is linear, this is a reason that your actions (if not your beliefs) will diverge from those of most elites. In any event, many elites are not even systematic or consequentialist in translating utilities times probabilities into actions.
Insofar as my own actions are atypical, I intend for it to result from atypical moral beliefs rather than atypical factual beliefs. (If you can think of instances of clearly atypical factual beliefs on my part, let me know.) Of course, you could claim, elite common sense should apply also as a prior to what my own moral beliefs actually are, given the fallibility of introspection. This is true, but its importance depends on how abstractly I view my own moral values. If I ask questions about what an extrapolated Brian would think upon learning more, having more experiences, etc., then the elite prior has a lot to say on this question. But if I'm more concerned with my very immediate emotional reaction, then there's less room for error and less that the common-sense prior has to say. The fact that my moral values are sometimes not strongly affected by common-sense moral values comes from my favoring immediate emotions rather than what (one of many possible) extrapolated Brians would feel upon having further and different life experiences. (Of course, there are many possible further life experiences I could have, which would push me in lots of random directions. This is why I'm not so gung ho about what my extrapolated selves would think on some questions.)
Finally, as you point out, it can be useful to make contrarian points for the purpose of intellectual progress. Most startups, experiments, and new theories fail, and you're more likely to be right by sticking with conventional wisdom than betting on something new. Yet if no one tried new things and pushed the envelope, we'd have an epistemic "tragedy of the commons" where everyone tries to make her own views more accurate at the cost of slowing the overall intellectual progress of society. That said, we can sometimes explore weird ideas without actually betting on them when the stakes are high, although sometimes (as in the case of startups), you do have to take on high risks. Maybe there would be fewer startups if the founders were more sober-minded in their assessment of their odds of success.
Replies from: Nick_Beckstead, JonahSinick, Nick_Beckstead, Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-12T12:32:00.552Z · LW(p) · GW(p)
I think the focus on only intellectual elites has unclear grounding. Is the reason because elites think most seriously about the questions that you care about most? On a question of which kind of truck was most suitable for garbage collection, you would defer to a different class of people. In such a case, I guess you would regard them as the (question-dependent) "elites."
This is a question which it seems I wasn't sufficiently clear about. I count someone as an "expert on X" roughly when they are someone that a broad coalition of trustworthy people would defer to on questions about X. As I explained in another comment, if you don't know about what the experts on X think, I recommend trying to find out what the experts think (if it's easy/important enough) and going with what the broad coalition of trustworthy people thinks until then. So it may be that some non-elite garbage guys are experts on garbage collection, and a broad coalition of trustworthy people would defer to them on questions of garbage collection, once the broad coalition of trustworthy people knows about what these people think about garbage collection.
Why focus on people who are regarded as most trustworthy by many people? I think those people are likely to be more trustworthy than ordinary people, as I tried to suggest in my quick Quora experiment.
Replies from: Brian_Tomasik↑ comment by Brian_Tomasik · 2013-08-12T17:39:28.010Z · LW(p) · GW(p)
Cool -- that makes sense. In principle, would you still count everyone with some (possibly very small) weight, the way PageRank does? (Or maybe negative weight in a few cases.) A binary separation between elites and non-elites seems hacky, though of course, it may in fact be best to do so in practice to make the analysis tractable. Cutting out part of the sample also leads to a biased estimator, but maybe that's not such a big deal in most cases if the weight on the remaining part was small anyway. You could also give different weight among the elites. Basically, elites vs. non-elites is a binary approximation of a more continuous weighting distribution. Anyway, it may be misleading to think of this as purely a weighted sample of opinion, because (a) you want to reduce the weight of beliefs that are copied from each other and (b) you may want to harmonize the beliefs in a way that's different from blind averaging. Also, as you suggested, (c) you may want to dampen outliers to avoid pulling the average too much toward the outlier.
Replies from: owencb, army1987, Nick_Beckstead↑ comment by owencb · 2013-08-12T17:55:15.892Z · LW(p) · GW(p)
This sounds roughly right to me. Note that there are two different things you really want to know about people:
(1) What they believe on the matter;
(2) Who they think is trustworthy on the matter.
Often it seems that (2) is more important, even when you're looking at people who are deemed trustworthy. If I have a question about lung disease, most people will not have much of an answer to (1), and will recommend doctors for (2). Most doctors will have some idea, and will recommend specialists for (2). Specialists are likely to have a pretty good answer for (1), and will recommend the top people in their field for (2). These are the people you really want to listen to for (1), if you can, but regular people would not tend to know who they were.
I'm not sure exactly how you should be weighting (1) against (2), but the principle of using both, and following through chains to at least some degree, feels natural.
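A minimal sketch of how (1) and (2) could be combined in the PageRank-like spirit Brian mentioned. Everything here is made up for illustration: the three-person trust matrix, the damping factor, and the toy beliefs are assumptions rather than a worked-out proposal:

```python
import numpy as np

# trust[i][j]: how much person i defers to person j on this question (each row sums to 1)
trust = np.array([
    [0.0, 0.8, 0.2],   # layperson: mostly defers to the doctor
    [0.1, 0.0, 0.9],   # doctor: mostly defers to the specialist
    [0.2, 0.3, 0.5],   # specialist: partly relies on their own judgment
])
beliefs = np.array([0.3, 0.5, 0.8])  # each person's answer to (1): probability of the claim

# Follow chains of deferral with damped power iteration, as PageRank does
d = 0.85                   # damping factor; keeps everyone's weight above zero
w = np.ones(3) / 3         # start from uniform weights
for _ in range(100):
    w = (1 - d) / 3 + d * (trust.T @ w)

print("weights:", w)                     # the specialist ends up with most of the weight
print("aggregate belief:", w @ beliefs)  # the weighted answer to (1)
```

The damping term also speaks to Brian's point about not cutting anyone's weight to exactly zero; deduplicating copied beliefs and down-weighting outliers would be further refinements on top of this.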
↑ comment by A1987dM (army1987) · 2013-08-13T10:29:31.764Z · LW(p) · GW(p)
(Or maybe negative weight in a few cases.)
Replies from: Brian_Tomasik
↑ comment by Brian_Tomasik · 2013-08-13T18:27:22.350Z · LW(p) · GW(p)
Yeah, it's hard to say whether the weights would be negative. As an extreme case, if there was someone who wanted to cause as much suffering as possible, then if that person was really smart, we might gain insight into how to reduce suffering by flipping around the policies he advocated. If someone who knows the answers fills in a binary multiple-choice test aiming for a score of zero, you can get a perfect score by flipping their answers. These cases are rare, though. Even the hypothetical suffering maximizer still has many correct beliefs, e.g., that you need to breathe air to stay alive.
↑ comment by Nick_Beckstead · 2013-08-12T17:52:04.354Z · LW(p) · GW(p)
I agree that in principle, you don't want some discontinuous distinction between elites and non-elites. I also agree with your points (a) - (c). Something like PageRank seems good to me, though of course I would want to be tentative about the details.
In practice, my suspicion is that most of what's relevant here comes from the very elite people's thinking, so that not much is lost by just focusing on their opinions. But I hold this view pretty tentatively. I presented the ideas the way I did partly because of this hunch and partly for ease of exposition.
Replies from: Brian_Tomasik↑ comment by Brian_Tomasik · 2013-08-13T01:27:26.356Z · LW(p) · GW(p)
Nick, what do you do about the Pope getting extremely high PageRank by your measure? You could say that most people who trust his judgment aren't elites themselves, but some certainly are (e.g., heads of state, CEOs, celebrities). Every president in US history has given very high credence to the moral teachings of Jesus, and some have even given high credence to his factual teachings. Hitler had very high PageRank during the 1930s, though I guess he doesn't now, and you could say that any algorithm makes mistakes some of the time.
ETA: I guess you did say in your post that we should be less reliant on elite common sense in areas like religion and politics where rationality is less prized. But I feel like a similar thing could be said to some extent of debates about moral conclusions. The cleanest area of application for elite common-sense is with respect to verifiable factual claims.
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-13T11:44:40.771Z · LW(p) · GW(p)
I don't have a lot to add to my comments on religious authorities, apart from what I said in the post and what I said in response to Luke's Muslim theology case here.
One thing I'd say is that many of the Christian moral teachings that are most celebrated are actually pretty good, though I'd admit that many others are not. Examples of good ones include:
Love your neighbor as yourself (I'd translate this as "treat others as you would like to be treated")
Focus on identifying and managing your own personal weaknesses rather than criticizing others for their weaknesses
Prioritize helping poor and disenfranchised people
Don't let your acts of charity be motivated by finding approval from others
These are all drawn from Jesus's Sermon on the Mount, which is arguably his most celebrated set of moral teachings.
Replies from: Brian_Tomasik↑ comment by Brian_Tomasik · 2013-08-13T18:10:37.677Z · LW(p) · GW(p)
Good points. Of course, depending on the Pope in question, you also have teachings like the sinfulness of homosexuality, the evil of birth control, and the righteousness of God in torturing nonbelievers forever. Many people place more weight on these beliefs than they do on those of liberal/scientific elites.
It seems like you're going to get clusters of authority sentiment. Educated people will place high authority on impressive intellectuals, business people, etc. Conservative religious people will tend to place high authority on church leaders, religious founders, etc. and very low authority on scientists, at least when it comes to metaphysical questions rather than what medicine to take for an ailment. (Though there are plenty of skeptics of traditional medicine too.) What makes the world of Catholic elites different from the world of scientific elites? I mean, some people think the Pope is a stronger authority on God than anyone thinks the smartest scientist is about physics.
↑ comment by JonahS (JonahSinick) · 2013-08-12T04:42:18.202Z · LW(p) · GW(p)
Hi Brian :-)
For example, the reason most elites don't seem to take seriously efforts like shaping trajectories for strong AI is not because they think the probabilities of making a difference are astronomically small but because they don't bite Pascalian bullets.
How do you know this? It's true that their utility functions aren't linear, but it doesn't follow that that's why they don't take such efforts seriously. Near-Earth Objects: Finding Them Before They Find Us reports on concerted efforts to prevent extinction-level asteroids from colliding into earth. This shows that people are (sometimes) willing to act on small probabilities of human extinction.
Insofar as my own actions are atypical, I intend for it to result from atypical moral beliefs rather than atypical factual beliefs. (If you can think of instances of clearly atypical factual beliefs on my part, let me know.)
Dovetailing from my comment above, I think that there's a risk of following the line of thought "I'm doing X because it fulfills certain values that I have. Other people don't have these values. So the fact that they don't engage in X, and don't think that doing X is a good idea, isn't evidence against X being a good idea for me" without considering the possibility that, even though they don't share your values, doing X or something analogous to X would fulfill their (different) values conditional on your factual beliefs being right, so that the fact that they don't do or endorse X is evidence against the factual beliefs connected with X. In a given instance, there will be a subtle judgment call as to how much weight to give to this possibility, but I think that it should always be considered.
Replies from: Brian_Tomasik, gwern, AspiringRationalist↑ comment by Brian_Tomasik · 2013-08-12T05:14:39.665Z · LW(p) · GW(p)
Fair enough. :) Yes, from the fact that probability * utility is small, we can't tell whether the probability is small or the utility is, or both. In the case of shaping AI specifically, I haven't heard good arguments against assigning it non-negligible probability of success, and I also know that many people don't bite Pascalian wagers at least partly because they don't like Pascalian wagers rather than because they disagree with the premises, so combining these suggests the probability side isn't so much the issue, but this suggestion stands to be verified. Also, people will often feign having ridiculously small probabilities to get out of Pascalian wagers, but they usually make these proclamations after the fact, or else are the kind of people who say "any probability less than 0.01 is set to 0" (except when wearing seat belts to protect against a car accident or something, highlighting what Nick said about people potentially being more rational for important near-range decisions).
Anyway, not accepting a Pascalian wager does not mean you don't agree with the probability and utility estimates; maybe you think the wager is missing the forest for the trees and ignoring bigger-picture issues. I think most Pascalian wagers can be defused by saying, "If that were true, this other thing would be even more important, so you should focus on that other thing instead." But then you should actually focus on that other thing instead rather than focusing on neither, which most people tend to do. :P
You are also correct that differences in moral values doesn't completely shield off an update to probabilities when I find my actions divergent from those of others. However, in cases when people do make their probabilities explicit, I don't normally diverge substantially (or if I do, I tend to update somewhat), and in these particular cases, divergent values comprise the remainder of the gap (usually most of it). Of course, I may have already updated the most in those cases where people have made their probabilities explicit, so maybe there's bigger latent epistemic divergence when we're distant from the lamp post.
Replies from: JonahSinick, Jiro↑ comment by JonahS (JonahSinick) · 2013-08-12T17:30:43.863Z · LW(p) · GW(p)
"If that were true, this other thing would be even more important, so you should focus on that other thing instead." But then you should actually focus on that other thing instead rather than focusing on neither, which most people tend to do. :P.
If you restrict yourself to thoughtful, intelligent people who care about having a big positive impact on global welfare (which is a group substantially larger than the EA community), I think that a large part of what's going on is that people recognize that they have a substantial comparative advantage in a given domain, and think that they can have the biggest impact by doing what they're best at, and so don't try to optimize between causes. I think that their reasoning is a lot closer to the mark than initially meets the eye, for reasons that I gave in my posts Robustness of Cost-Effectiveness Estimates and Philanthropy and Earning to Give vs. Altruistic Career Choice Revisited.
Of course, this is relative to more conventional values than utilitarianism, and so lots of their efforts go into things that aren't utilitarian. But because of the number of people, and the diversity of comparative advantages, some of them will be working on problems that are utilitarian by chance, and will learn a lot about how best to address these problems. You may argue that the problems that they're working on are different from the problems that you're interested in addressing, but there may be strong analogies between the situations, and so their knowledge may be transferable.
As for people not working to shape AI, I think that the utilitarian expected value of working to shape AI is lower than it may initially appear. Some points:
- For reasons that I outline in this comment, I think that the world's elites will do a good job of navigating AI risk. Working on AI risk is in part fungible, and I believe that the effect size is significant.
- If I understand correctly, Peter Thiel has argued that the biggest x-risk comes from the possibility that if economic growth halts, then we'll shift from a positive-sum situation to a zero-sum situation, which will erode prosocial behavior, which could give rise to a negative feedback loop that leads to societal collapse. We've already used lots of natural resources, and so might not be able to recover from a societal collapse. Carl has argued against this, but Peter Thiel is very sophisticated and so his view can't be dismissed out of hand. This increases the expected value of pushing on economic growth relative to AI risk reduction.
- More generally, there are lots of X for which there's a small probability that X is the limiting factor to a space-faring civilization. For example, maybe gold is necessary for building spacecraft that can travel from earth to places with more resources, so that the limiting factor to a spacefaring civilization is gold, and the number one priority should be preventing gold from being depleted. I think that this is very unlikely: I'm only giving one example. Note that pushing on economic growth reduces the probability that gold will be depleted before it's too late, and I think that this is true for many values of X. If this is so, the prima facie reaction "but if gold is the limiting factor, then one should pursue more direct interventions than pushing on economic growth" loses force, because pushing on economic growth has a uniform positive impact over different values of X.
- I give a case for near term helping (excluding ripple effects) potentially having astronomical benefits comparable to those of AI risk reduction in this comment.
- An additional consideration that buttresses the above point is that as you've argued, the future may have negative expected value. Even if this looks unlikely, it increases the value of near-term helping relative to AI risk reduction, and since near-term helping might have astronomical benefits comparable to those of AI risk reduction, it increases the value by a nontrivial amount.
Viewing all of these things in juxtaposition, I wouldn't take people's low focus on AI risk reduction as very strong evidence that people don't care about astronomical waste. See also my post Many Weak Arguments and the Typical Mind: the absence of an attempt to isolate the highest expected value activities may be adaptive rather than an indication of lack of seriousness of purpose.
Replies from: Brian_Tomasik↑ comment by Brian_Tomasik · 2013-08-12T20:19:58.165Z · LW(p) · GW(p)
Thanks, Jonah. :)
If you restrict yourself to thoughtful, intelligent people who care about having a big positive impact on global welfare (which is a group substantially larger than the EA community)
But it's a smaller group than the set of elites used for the common-sense prior. Hence, many elites don't share our values even by this basic measure.
Of course, this is relative to more conventional values than utilitarianism, and so lots of their efforts go into things that aren't utilitarian.
Yes, this was my point.
You may argue that the problems that they're working on are different from the problems that you're interested in addressing, but there may be strong analogies between the situations, and so their knowledge may be transferable.
Definitely. I wouldn't claim otherwise.
I wouldn't take people's low focus on AI risk reduction as very strong evidence that people don't care about astronomical waste.
In isolation, their not working on astronomical waste is not sufficient proof that their utility functions are not linear. However, combined with everything else I know about people's psychology, it seems very plausible that they in fact don't have linear utility functions.
Compare with behavioral economics. You can explain away any given discrepancy from classical microeconomic behavior by rational agents through an epicycle in the theory, but combined with all that we know about people's psychology, we have reason to think that psychological biases themselves are playing a role in the deviations.
Carl has argued against this, but Peter Thiel is very sophisticated and so his view can't be dismissed out of hand.
Not dismissed out of hand, but downweighted a fair amount. I think Carl is more likely to be right than Thiel on an arbitrary question where Carl has studied it and Thiel has not. Famous people are busy. Comments they make in an offhand way may be circulated in the media. Thiel has some good general intuition, sure, but his speculations on a given social trend don't compare with more systematic research done by someone like Carl.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-08-12T22:27:35.249Z · LW(p) · GW(p)
But it's a smaller group than the set of elites used for the common-sense prior. Hence, many elites don't share our values even by this basic measure.
But a lot of the people within this group use an elite common-sense prior despite having disjoint values, which is a signal that the elite common-sense prior is right.
Yes, this was my point.
I was acknowledging it :-)
In isolation, their not working on astronomical waste is not sufficient proof that their utility functions are not linear. However, combined with everything else I know about people's psychology, it seems very plausible that they in fact don't have linear utility functions.
Elite common sense says that voting is important for altruistic reasons. It's not clear that this is contingent on the number of people in America not being too big. One could imagine an intergalactic empire with 10^50 people where voting was considered important. So it's not clear that people have bounded utility functions. (For what it's worth, I no longer consider myself to have a bounded utility function.)
People's moral intuitions do deviate from utilitarianism, e.g. probably most people don't subscribe to the view that bringing a life into existence is equivalent to saving a life. But the ways in which their intuitions differ from utilitarianism may cancel each other out. For example, having read about climate change tail risk, I have the impression that climate change reduction advocates are often (in operational terms) valuing future people more than they value present people.
So I think it's best to remain agnostic as to the degree to which variance in the humanitarian endeavors that people engage in is driven by variance in their values.
Not dismissed out of hand, but downweighted a fair amount. I think Carl is more likely to be right than Thiel on an arbitrary question where Carl has studied it and Thiel has not. Famous people are busy. Comments they make in an offhand way may be circulated in the media. Thiel has some good general intuition, sure, but his speculations on a given social trend don't compare with more systematic research done by someone like Carl.
I've been extremely impressed by Peter Thiel based on reading notes on his course about startups. He has extremely broad and penetrating knowledge. He may have the highest crystallized intelligence of anybody I've ever encountered. I would not be surprised if he's studied the possibility of stagnation and societal collapse in more detail than Carl has.
Replies from: Brian_Tomasik↑ comment by Brian_Tomasik · 2013-08-13T01:13:43.459Z · LW(p) · GW(p)
Elite common sense says that voting is important for altruistic reasons.
This is because they're deontologists, not because they're consequentialists with a linear utility function. So rather than suggesting more similarity in values, it suggests less. (That said, there's more overlap between deontology and consequentialism than meets the eye.)
So I think it's best to remain agnostic as to the degree to which variance in the humanitarian endeavors that people engage in is driven by variance in their values.
It may be best to examine on a case-by-case basis. We don't need to just look at what people are doing and make inferences; we can also look at other psychological hints about how they feel regarding a given issue. Nick did suggest giving greater weight to what people believe (or, in this case, what they do) than their stated reasons for those beliefs (or actions), but he acknowledges this recommendation is controversial (e.g., Ray Dalio disagrees), and on some issues it seems like there's enough other information to outweigh whatever inferences we might draw from actions alone. For example, we know people tend to be irrational in the religious domain based on other facts and so can somewhat discount the observed behavior there.
Points taken on the other issues we discussed.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-08-13T03:59:45.200Z · LW(p) · GW(p)
This is because they're deontologists, not because they're consequentialists with a linear utility function.
How do you know this? Do you think that these people would describe their reason for voting as deontological?
Replies from: Brian_Tomasik↑ comment by Brian_Tomasik · 2013-08-13T04:18:28.197Z · LW(p) · GW(p)
Oh, definitely. The consequentialist justification only happens in obscure corners of geekdom like LessWrong and stat / poli sci journals.
Just ask people why they vote, and most of them will say things like "It's a civic duty," "Our forefathers died for this, so we shouldn't waste it," "If everyone didn't vote, things would be bad," ...
I Googled the question and found similar responses in this article:
One reason that people often offer for voting is “But what if everybody thought that way?” [...]
Another reason for voting, offered by political scientists and lay individuals alike, is that it is a civic duty of every citizen in a democratic country to vote in elections. It’s not about trying to affect the electoral outcome; it’s about doing your duty as a democratic citizen by voting in elections.
Interestingly, the author also says: "Your decision to vote or not will not affect whether or not other people will vote (unless you are a highly influential person and you announce your voting intention to the world in advance of the election)." This may be mostly true in practice, but not in the limit as everyone approaches identity with you. It seems like this author is a two-boxer based on his statements. He calls timeless considerations "magical thinking."
Replies from: CarlShulman, JonahSinick, wedrifid↑ comment by CarlShulman · 2013-08-13T20:41:20.101Z · LW(p) · GW(p)
Just ask people why they vote, and most of them will say things like "It's a civic duty," "Our forefathers died for this, so we shouldn't waste it," "If everyone didn't vote, things would be bad," ...
These views reflect the endorsements of various trusted political figures and groups, the active promotion of voting by those with more individual influence, and the raw observation of outcomes affected by bulk political behavior.
In other words, the common sense or deontological rules of thumb are shaped by the consequences, as the consequences drive moralizing activity. Joshua Greene has some cute discussion of this in his dissertation:
I believe that this pattern is quite general. Our intuitions are not utilitarian, and as a result it is often possible to devise cases in which our intuitions conflict with utilitarianism. But at the same time, our intuitions are somewhat constrained by utilitarianism. This is because we care about utilitarian outcomes, and when a practice is terribly anti-utilitarian, there is, sooner or later, a voice in favor of abolishing it, even if the voice is not explicitly utilitarian. Take the case of drunk driving. Drinking is okay. Driving is okay. Doing both at the same time isn’t such an obviously horrible thing to do, but we’ve learned the hard way that this intuitively innocuous, even fun, activity is tremendously damaging. And now, having moralized the issue with the help of organizations like Mothers Against Drunk Driving—what better moral authority than Mom?—we are prepared to impose very stiff penalties on people who aren’t really “bad people,” people with no general anti-social tendencies. We punish drunk driving and related offenses in a way that appears (or once appeared) disproportionately harsh because we’ve paid the utilitarian costs of not doing so. The same might be said of harsh penalties applied to wartime deserters and draft-dodgers. The disposition to avoid situations in which one must kill people and risk being killed is not such an awful disposition to have, morally speaking, and what could be a greater violation of your “rights” than your government’s sending you, an innocent person, off to die against your will? Nevertheless we are willing to punish people severely, as severely as we would punish violent criminals, for acting on that reasonable and humane disposition when the utilitarian stakes are sufficiently high.
↑ comment by JonahS (JonahSinick) · 2013-08-13T14:34:13.619Z · LW(p) · GW(p)
The consequentialist justification only happens in obscure corners of geekdom like LessWrong and stat / poli sci journals.
Explicitly yes, but implicitly...?
Just ask people why they vote,
Do you have in mind average people, or, e.g., top 10% Ivy Leaguers ... ?
Just ask people why they vote, and most of them will say things like "It's a civic duty," "Our forefathers died for this, so we shouldn't waste it," "If everyone didn't vote, things would be bad," ...
These reasons aren't obviously deontological (even though they might sound like they are on first hearing). As you say in your comment, timeless decision theory is relevant (transparently so in the last two of the three reasons that you cite).
Even if people did explicitly describe their reasons as deontological, one still wouldn't know whether this was the case, because people's stated reasons are often different from their actual reasons.
One would want to probe here to try to tell whether these things reflect terminal values or instrumental values.
Replies from: Brian_Tomasik↑ comment by Brian_Tomasik · 2013-08-13T17:41:16.295Z · LW(p) · GW(p)
Do you have in mind average people, or, e.g., top 10% Ivy Leaguers ... ?
Both. Remember that many Ivy Leaguers are liberal-arts majors. Even many that are quantitatively oriented I suspect aren't familiar with the literature. I guess it takes a certain level of sophistication to think that voting doesn't make a difference in expectation, so maybe most people fall into the bucket of those who haven't really thought about the matter rigorously at all. (Remember, we're including English and Art majors here.)
You could say, "If they knew the arguments, they would be persuaded," which may be true, but that doesn't explain why they already vote without knowing the arguments. Explaining that suggests deontology as a candidate hypothesis.
These reasons aren't obviously deontological (even though they might sound like they are on first hearing).
- "It's a civic duty" is deontological if anything is, because deontology is duty-based ethics.
- "If everyone didn't vote, things would be bad" is an application of Kant's categorical imperative.
- "Our forefathers died for this, so we shouldn't waste it" is not deontological -- just the sunk-cost fallacy.
Even if people did explicitly describe their reasons as deontological, one still wouldn't know whether this was the case, because people's stated reasons are often different from their actual reasons.
At some point it may become a debate about the teleological level at which you assess their "reasons." As individuals, it's very likely the value of voting is terminal in some sense, based on cultural acclimation. Taking a broader view of why society itself developed this tendency, you might say that it did so for more consequentialist / instrumental reasons.
It's similar to assessing the "reason" why a mother cares for her child. At an individual / neural level it's based on reward circuitry. At a broader evolutionary level, it's based on bequeathing genes.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-08-13T20:59:08.471Z · LW(p) · GW(p)
The main point to my mind here is that apparently deontological beliefs may originate from a combination of consequentialist values with an implicit understanding of timeless decision theory.
↑ comment by wedrifid · 2013-08-13T04:24:06.108Z · LW(p) · GW(p)
Interestingly, the author also says: "Your decision to vote or not will not affect whether or not other people will vote (unless you are a highly influential person and you announce your voting intention to the world in advance of the election)." This may be mostly true in practice, but not in the limit as everyone approaches identity with you. It seems like this author is a two-boxer based on his statements. He calls timeless considerations "magical thinking."
He may also be a two boxer who thinks that one boxing is magical thinking. However this instance doesn't demonstrate that. Acting as if other agents will conditionally cooperate when they in fact will not is an error. In fact, it will prompt actual timeless decision theorists to defect against you.
Replies from: Brian_Tomasik↑ comment by Brian_Tomasik · 2013-08-13T05:33:01.374Z · LW(p) · GW(p)
Thanks! I'm not sure I understood your comment. Did you mean that if the other agents aren't similar enough to you, it's an error to assume that your cooperating will cause them to cooperate?
I was drawing the inference about two-boxing from the fact that the author seemed to dismiss the possibility that what you do could possibly affect what others do in any circumstance.
Replies from: wedrifid↑ comment by wedrifid · 2013-08-13T05:43:11.211Z · LW(p) · GW(p)
Did you mean that if the other agents aren't similar enough to you, it's an error to assume that your cooperating will cause them to cooperate?
Yes, specifically similar with respect to decision theory implementation.
I was drawing the inference about two-boxing from the fact that the author seemed to dismiss the possibility that what you do could possibly affect what others do in any circumstance.
He seems to be talking about humans as they exist. If (or when) he generalises to all agents he starts being wrong.
Replies from: Brian_Tomasik↑ comment by Brian_Tomasik · 2013-08-13T05:49:38.660Z · LW(p) · GW(p)
Even among humans, there's something to timeless considerations, right? If you were in a real prisoner's dilemma with someone you didn't know but who was very similar to you and had read a lot of the same things, it seems plausible you should cooperate? I don't claim the effect is strong enough to operate in the realm of voting most of the time, but theoretically timeless considerations can matter for less-than-perfect copies of yourself.
Replies from: wedrifid↑ comment by wedrifid · 2013-08-13T06:50:27.986Z · LW(p) · GW(p)
Even among humans, there's something to timeless considerations, right? If you were in a real prisoner's dilemma with someone you didn't know but who was very similar to you and had read a lot of the same things, it seems plausible you should cooperate?
Yes, it applies among (some of) that class of humans.
I don't claim the effect is strong enough to operate in the realm of voting most of the time, but theoretically timeless considerations can matter for less-than-perfect copies of yourself.
Yes.
↑ comment by Jiro · 2013-08-12T15:36:06.047Z · LW(p) · GW(p)
You're assuming that people work by probabilities and Bayes each time. Nobody can do that for all of their beliefs, and many people don't do it much at all. Typically a statement like "any probability less than 0.01 is set to 0" really means "I have this set of preferences, but I think I can derive a statement about probabilities from that set of preferences". Pointing out that they don't actually ignore a probability of 0.01 when wearing a seatbelt, then, should lead to a response of "I guess my derivation isn't quite right" and lead them to revise the statement, but it's not a reason why they should change their preferences in the cases that they originally derived the statement from.
Replies from: Brian_Tomasik↑ comment by Brian_Tomasik · 2013-08-12T17:06:14.046Z · LW(p) · GW(p)
Yep, that's right. In my top-level comment, I said, "In any event, many elites are not even systematic or consequentialist in translating utilities times probabilities into actions." Still, on big government-policy questions that affect society (rather than personal actions, relationships, etc.) elites tend to be (relatively) more interested in utilitarian calculations.
↑ comment by gwern · 2013-08-19T02:31:50.897Z · LW(p) · GW(p)
This shows that people are (sometimes) willing to act on small probabilities of human extinction.
Unfortunately, it's a mixed case: there were motives besides pure altruism/self-interest. For example, Edward Teller was an advocate of asteroid defense... no doubt in part because it was a great excuse for using atomic bombs and keeping space and laser-related research going.
↑ comment by NoSignalNoNoise (AspiringRationalist) · 2013-08-19T02:09:54.048Z · LW(p) · GW(p)
How do you know this? It's true that their utility functions aren't linear, but it doesn't follow that that's why they don't take such efforts seriously. Near-Earth Objects: Finding Them Before They Find Us reports on concerted efforts to prevent extinction-level asteroids from colliding into earth. This shows that people are (sometimes) willing to act on small probabilities of human extinction.
It's pretty easy to accept the possibility that an asteroid impact could wipe out humanity, given that something very similar has happened before. You have to overcome a much larger inferential distance to explain the risks from an intelligence explosion.
↑ comment by Nick_Beckstead · 2013-08-12T13:05:16.047Z · LW(p) · GW(p)
It can be murky to infer what people believe based on actions or commitments, because this mixes two quantities: Probabilities and values. For example, the reason most elites don't seem to take seriously efforts like shaping trajectories for strong AI is not because they think the probabilities of making a difference are astronomically small but because they don't bite Pascalian bullets. Their utility functions are not linear. If your utility function is linear, this is a reason that your actions (if not your beliefs) will diverge from those of most elites. In any event, many elites are not even systematic or consequentialist in translating utilities times probabilities into actions.
I don’t endorse biting Pascalian bullets, in part for reasons argued in this post, which I think give further support to some considerations identified by GiveWell. In Pascalian cases, we have claims that people in general aren’t good at thinking about and which people generally assign low weight when they are acquainted with the arguments. I believe that Pascalian estimates of expected value that differ greatly from elite common sense and aren’t persuasive to elite common sense should be treated with great caution.
I also endorse Jonah’s point about some people caring about what you care about, but for different reasons. Just as we are weird, there can be other people who are weird in different ways that make them obsessed with the things we're obsessed with for totally different reasons. Just as some scientists are obsessed with random stuff like dung beetles, I think a lot of asteroids were tracked because some scientists were really obsessed with asteroids in particular and wanted to ensure that all asteroids were carefully tracked, far beyond the regular value that normal people place on tracking all the asteroids. I think this can include some borderline Pascalian issues. For example, some important government agencies care about speculative threats to national security. Dick Cheney famously said, "If there's a 1% chance that Pakistani scientists are helping al-Qaeda build or develop a nuclear weapon, we have to treat it as a certainty in terms of our response." Similarly, there can be people who are obsessed with many issues far out of proportion with what most ordinary people care about. Looking at what "most people" care about is a less robust way to find gaps in a market than it can appear at first. (I know you don’t think it would be good to save the world, but I think the example still illustrates the point to some extent. An example more relevant to you would be that some scientists might just be really interested in insects and do a lot of the research that you’d think would be valuable, whereas if you had just thought "no one cares about insects, so this research will never happen," you’d have been wrong.)
Replies from: Brian_Tomasik↑ comment by Brian_Tomasik · 2013-08-12T19:30:57.071Z · LW(p) · GW(p)
I don’t endorse biting Pascalian bullets, in part for reasons argued in this post, which I think give further support to some considerations identified by GiveWell.
As for the GiveWell point, I meant "proper Pascalian bullets," where the probabilities are computed after constraining by some reasonable priors (keeping in mind that a normal distribution with mean 0 and variance 1 is not a reasonable prior in general).
In Pascalian cases, we have claims that people in general aren’t good at thinking about and which people generally assign low weight when they are acquainted with the arguments.
Low probability, yes, but not necessarily low probability*impact.
I believe that Pascalian estimates of expected value that differ greatly from elite common sense and aren’t persuasive to elite common sense should be treated with great caution.
As I mentioned in another comment, I think most Pascalian wagers that one comes across are fallacious because they miss even bigger Pascalian wagers that should be pursued instead. However, there are some Pascalian wagers that seem genuinely compelling even after looking for alternatives, like "the Overwhelming Importance of Shaping the Far Future." My impression is that most elites do not agree that the far future is overwhelmingly important even after hearing your arguments because they don't have linear utility functions and/or don't like Pascalian wagers. Do you think most elites would agree with you about shaping the far future?
This highlights a meta-point in this discussion: Often what's under debate here is not the framework but instead claims about (1) whether elites would or would not agree with a given position upon hearing it defended and (2) whether their sustained disagreement even after hearing it defended results from divergent facts, values, or methodologies (e.g., not being consequentialist). It can take time to assess these, so in the short term, disagreements about what elites would come to believe are a main bottleneck for using elite common sense to reach conclusions.
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-12T23:14:23.742Z · LW(p) · GW(p)
However, there are some Pascalian wagers that seem genuinely compelling even after looking for alternatives, like "the Overwhelming Importance of Shaping the Far Future." My impression is that most elites do not agree that the far future is overwhelmingly important even after hearing your arguments because they don't have linear utility functions and/or don't like Pascalian wagers. Do you think most elites would agree with you about shaping the far future?
I disagree with the claim that the argument for shaping the far future is a Pascalian wager. In my opinion, there is a reasonably high, reasonably non-idiosyncratic probability that humanity will survive for a very long time, that there will be a lot of future people, and/or that future people will have a very high quality of life. Though I have not yet defended this claim as well as I would like, I also believe that many conventionally good things people can do push toward future generations handling the challenges and opportunities they face better than they otherwise would, which, with a high enough and conventional enough probability, makes the future go better. I think that these are claims which elite common sense would be convinced of, if in possession of my evidence. If elite common sense would not be so convinced, I would consider abandoning these assumptions.
Regarding the more purely moral claims, I suspect there are a wide variety of considerations which elite common sense would give weight to, and that very long-term considerations are one type of important consideration which would get weight according to elite common sense. It may also be, in part, a fundamental difference of values, where I am part of a not-too-small contingent of people who have distinctive concerns. However, in genuinely altruistic contexts, I think many people would give these considerations substantially more weight if they thought about the issue carefully.
Near the beginning of my dissertation, I actually speak about the level of confidence I have in my thesis quite tentatively:
How convinced should you be by the arguments I'm going to give? I'm defending an unconventional thesis and my support for that thesis comes from highly speculative arguments. I don't have great confidence in my thesis, or claim that others should. But I am convinced that it could well be true, that the vast majority of thoughtful people give the claim less credence than they should, and that it is worth thinking about more carefully. I aim to make the reader justified in taking a similar attitude. (p. 3, Beckstead 2013)
I stand by this tentative stance.
Replies from: Brian_Tomasik↑ comment by Brian_Tomasik · 2013-08-13T01:04:32.556Z · LW(p) · GW(p)
I disagree with the claim that the argument for shaping the far future is a Pascalian wager.
I thought some of our disagreement might stem from each of us misunderstanding what the other meant, and that seems to have been true here. Even if the probability of humanity surviving a long time is large, there is still entropy in our influence, as well as butterfly effects, such that it seems extremely unlikely that what we do now will actually make a pivotal difference in the long term, and we could easily be getting the sign wrong. This makes the probabilities small enough to seem Pascalian for most people.
It's very common for people to say, "Predictions are hard, especially about the future, so let's focus on the short term where we can be more confident we're at least making a small positive impact."
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-13T09:29:16.843Z · LW(p) · GW(p)
It's very common for people to say, "Predictions are hard, especially about the future, so let's focus on the short term where we can be more confident we're at least making a small positive impact."
If by short-term you mean "what happens in the next 100 years or so," I think there is something to this idea, even for people who care primarily about very long-term considerations. I suspect it is true that the expected value of very long-run outcomes is primarily dominated by totally unforeseeable weird stuff that could happen in the distant future. But I believe that the best way to deal with this challenge is to empower humanity to deal with the relatively foreseeable and unforeseeable challenges and opportunities that it will face over the next few generations. This doesn't mean "let's look only at short-run well-being boosts," but something more like "let's broadly improve cooperation, motives, access to certain types of information, narrow and broad technological capabilities, and intelligence and rationality to deal with the problems we can't foresee, and let's rely on the best evidence we can to prepare for the problems we can foresee." I say a few things about this issue here. I hope to say more about it in the future.
An analogy would be that if you were a 5-year-old kid and you primarily cared about how successful you were later in life, you should focus on self-improvement activities (like developing good habits, gaining knowledge, and learning how to interact with other people) and health and safety issues (like getting adequate nutrition, not getting hit by cars, not poisoning yourself, not falling off of tall objects, and not eating lead-based paint). You should not try to anticipate fine-grained challenges in the labor market when you graduate from college or disputes you might have with your spouse. I realize that this analogy may not be compelling, but perhaps it illuminates my perspective.
↑ comment by Nick_Beckstead · 2013-08-12T13:17:08.153Z · LW(p) · GW(p)
Insofar as my own actions are atypical, I intend for it to result from atypical moral beliefs rather than atypical factual beliefs. (If you can think of instances of clearly atypical factual beliefs on my part, let me know.) Of course, you could claim that elite common sense should also apply as a prior to what my own moral beliefs actually are, given the fallibility of introspection. This is true, but its importance depends on how abstractly I view my own moral values. If I ask questions about what an extrapolated Brian would think upon learning more, having more experiences, etc., then the elite prior has a lot to say on this question. But if I'm more concerned with my very immediate emotional reaction, then there's less room for error and less that the common-sense prior has to say. The fact that my moral values are sometimes not strongly affected by common-sense moral values comes from my favoring immediate emotions rather than what (one of many possible) extrapolated Brians would feel upon having further and different life experiences. (Of course, there are many possible further life experiences I could have, which would push me in lots of random directions. This is why I'm not so gung ho about what my extrapolated selves would think on some questions.)
As you point out, one choice point is how much idealization to introduce. At one extreme, you might introduce no idealization at all, so that whatever you presently approve of is what you’ll assume is right. At the other extreme, you might have a great deal of idealization. You may assume that a better guide is what you would approve of if you knew much more, had experienced much more, were much more intelligent, made no cognitive errors in your reasoning, and had much more time to think. I lean in favor of the latter extreme, as I believe most people who have considered this question do, though I recognize that you want to specify your procedure in a way that leaves some core part of your values unchanged. Still, I think this is a choice that turns on many tricky cognitive steps, any of which could easily be taken in the wrong direction. So I would urge that insofar as you are making a very unusual decision at this step, you should try to very carefully understand the process that others are going through.
ETA: I'd also caution against just straight-out assuming a particular meta-ethical perspective. This is not a case where you are an expert in the sense of someone who elite common sense would defer to, and I don't think your specific version of anti-realism, or your philosophical perspective which says there is no real question here, are views which can command the assent of a broad coalition of trustworthy people.
Replies from: Brian_Tomasik↑ comment by Brian_Tomasik · 2013-08-12T19:39:23.044Z · LW(p) · GW(p)
I don't think your specific version of anti-realism, or your philosophical perspective which says there is no real question here, are views which can command the assent of a broad coalition of trustworthy people.
My current meta-ethical view says I care about factual but not necessarily moral disagreements with respect to elites. One's choice of meta-ethics is itself a moral decision, not a factual one, so this disagreement doesn't much concern me. Of course, there are some places where I could be factually wrong in my meta-ethics, like with the logical reasoning in this comment, but I think most elites don't think there's something wrong with my logic, just something (ethically) wrong with my moral stance. Let me know if you disagree with this. Even among moral realists, I've never heard someone argue that it's a factual mistake not to care about moral truth (what could that even mean?), just that it would be a moral mistake or an error of reasonableness or something like that.
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-12T22:57:00.678Z · LW(p) · GW(p)
My current meta-ethical view says I care about factual but not necessarily moral disagreements with respect to elites. One's choice of meta-ethics is itself a moral decision, not a factual one, so this disagreement doesn't much concern me.
I'm a bit flabbergasted by the confidence with which you speak about this issue. In my opinion, the history of philosophy is filled with a lot of people often smarter than you and me going around saying that their perspective is the unique one that solves everything and that other people are incoherent and so on. As far as I can tell, you are another one of these people.
Like Luke Muehlhauser, I believe that we don't even know what we're asking when we ask ethical questions, and I suspect we don't really know what we're asking when we ask meta-ethical questions either. As far as I can tell, you've picked one possible candidate thing we could be asking--"what do I care about right now?"--among a broad class of possible questions, and then you are claiming that whatever you want right now is right because that's what you're asking.
Of course, there are some places where I could be factually wrong in my meta-ethics, like with the logical reasoning in this comment, but I think most elites don't think there's something wrong with my logic, just something (ethically) wrong with my moral stance. Let me know if you disagree with this.
I think most people would just think you had made an error somewhere and not be able to say where it was, and add that you were talking about a completely murky issue that people aren't good at thinking about.
I personally suspect your error lies in not considering the problem from perspectives other than "what does Brian Tomasik care about right now?".
[Edited to reduce rhetoric.]
Replies from: Brian_Tomasik↑ comment by Brian_Tomasik · 2013-08-13T00:56:48.943Z · LW(p) · GW(p)
In my opinion, the history of philosophy is filled with a lot of people often smarter than you and me going around saying that their perspective is the unique one that solves everything and that other people are incoherent and so on.
I think it's fair to say that concepts like libertarian free will and dualism in philosophy of mind are either incoherent or extremely implausible, though maybe the elite-common-sense prior would make us less certain of that than most on LessWrong seem to be.
Like Luke Muehlhauser, I believe that we don't even know what we're asking when we ask ethical questions
Yes, I think most of the confusion on this subject comes from disputing definitions. Luke says: "Within 20 seconds of arguing about the definition of 'desire', someone will say, 'Screw it. Taboo 'desire' so we can argue about facts and anticipations, not definitions.'"
Here I would say, "Screw ethics and meta-ethics. All I'm saying is I want to do what I feel like doing, even if you and other elites don't agree with it."
I personally suspect your error lies in not considering the problem from perspectives other than "what does Brian Tomasik care about right now?".
Sure, but this is not a factual error, just an error in being a reasonable person or something. :)
I should point out that "doing what I feel like doing" doesn't necessarily mean running roughshod over other people's values. I think it's generally better to seek compromise and remain friendly to those with whom you want to cooperate. It's just that this is an instrumental concession, not because I actually agree with the values of the people I'm willing to be nice to.
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-13T12:00:44.069Z · LW(p) · GW(p)
Here I would say, "Screw ethics and meta-ethics. All I'm saying is I want to do what I feel like doing, even if you and other elites don't agree with it."
I think that there is a genuine concern that many people have when they try to ask ethical questions and discuss them with others, and that this process can lead to doing better in terms of that concern. I am speaking vaguely because, as I said earlier, I don't think that I or others really understand what is going on. This has been an important process for many of the people I know who are trying to make a large positive impact on the world. I believe it was part of the process for you as well. When you say "I want to do what I want to do" I think it mostly just serves as a conversation-stopper, rather than something that contributes to a valuable process of reflection and exchange of ideas.
I personally suspect your error lies in not considering the problem from perspectives other than "what does Brian Tomasik care about right now?".
Sure, but this is not a factual error, just an error in being a reasonable person or something. :)
I think it is a missed opportunity to engage in a process of reflection and exchange of ideas that I don't fully understand but seems to deliver valuable results.
Replies from: Brian_Tomasik↑ comment by Brian_Tomasik · 2013-08-13T18:45:28.317Z · LW(p) · GW(p)
When you say "I want to do what I want to do" I think it mostly just serves as a conversation-stopper, rather than something that contributes to a valuable process of reflection and exchange of ideas.
I'm not always as unreasonable as suggested there, but I was mainly trying to point out that if I refuse to go along with certain ideas, it's not dependent on a controversial theory of meta-ethics. It's just that I intuitively don't like the ideas and so reject them out of hand. Most people do this with ideas they find too unintuitive to countenance.
On some questions, my emotions are too strong, and it feels like it would be bad to budge my current stance.
I think it is a missed opportunity to engage in a process of reflection and exchange of ideas that I don't fully understand but seems to deliver valuable results.
Fair enough. :) I'll buy that way of putting it.
Anyway, if I were really as unreasonable as it sounds, I wouldn't be talking here and putting at risk the preservation of my current goals.
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-13T19:09:01.579Z · LW(p) · GW(p)
I'm not always as unreasonable as suggested there, but I was mainly trying to point out that if I refuse to go along with certain ideas, it's not dependent on a controversial theory of meta-ethics. It's just that I intuitively don't like the ideas and so reject them out of hand. Most people do this with ideas they find too unintuitive to countenance.
Whether you want to call it a theory of meta-ethics or not, and whether it is a factual error or not, you have an unusual approach to dealing with moral questions that places an unusual amount of emphasis on Brian Tomasik's present concerns. Maybe this is because there is something very different about you that justifies it, or maybe it is some idiosyncratic blind spot or bias of yours. I think you should put weight on both possibilities, and that this pushes in favor of more moderation in the face of values disagreements. Hope that helps articulate where I'm coming from in your language. This is hard to write and think about.
Replies from: Lumifer, Brian_Tomasik↑ comment by Lumifer · 2013-08-13T19:18:27.577Z · LW(p) · GW(p)
an unusual approach to dealing with moral questions
Why do you think it's unusual? I would strongly suspect that the majority of people have never examined their moral beliefs carefully and so their moral responses are "intuitive" -- they go by gut feeling, basically. I think that's the normal mode in which most of humanity operates most of the time.
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-13T19:32:41.446Z · LW(p) · GW(p)
I think other people are significantly more responsive to values disagreements than Brian is, and that this suggests they are significantly more open to the possibility that their idiosyncratic personal values judgments are mistaken. You can get a sense of how unusual Brian's perspectives are by examining his website, where his discussions of negative utilitarianism and insect suffering stand out.
Replies from: Lumifer↑ comment by Lumifer · 2013-08-13T19:54:52.054Z · LW(p) · GW(p)
I think other people are significantly more responsive to values disagreements
That's a pretty meaningless statement without specifying which values. How responsive do you think "other people" would be to value disagreements about child pornography, for example?
Replies from: Brian_Tomasik↑ comment by Brian_Tomasik · 2013-08-13T23:10:05.225Z · LW(p) · GW(p)
I suspect Nick would say that if there were respected elites who favored increasing the amount of child pornography, he would give some weight to the possibility that such a position was in fact something he would come to agree with upon further reflection.
↑ comment by Brian_Tomasik · 2013-08-13T19:30:25.154Z · LW(p) · GW(p)
Maybe this is because there is something very different about you that justifies it, or maybe it is some idiosyncratic blind spot or bias of yours.
Or, most likely of all, it's because I don't care to justify it. If you want to call "not wanting to justify a stance" a bias or blind spot, I'm ok with that.
Hope that helps articulate where I'm coming from in your language. This is hard to write and think about.
:)
comment by tog · 2013-08-11T17:15:20.874Z · LW(p) · GW(p)
Great article! A couple of questions:
(1) Can you justify picking 'the top 10% of people who got Ivy-League-equivalent educations' as an appropriate elite a little more? How will the elite vary (in size and in nature) for particular subjects?
(2) Can you (or others) give more examples of the application of this method to particular questions? Personally, I'm especially interested in cases where it'd depart from decision-relevant views held by a substantial fraction of effective altruists.
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-11T20:47:27.468Z · LW(p) · GW(p)
Re: (1), I can't do too much better than what I already wrote under "How should we assign weight to different groups of people?" I'd say you can go about as elite as you want if you are good at telling how the relevant people think and you aren't cherry-picking or using the "No True Scotsman" fallacy. I picked this number as something I felt a lot of people reading this blog would be in touch with and wouldn't be too narrow.
Re: (2), this is something I hope to discuss at greater length later on. I won't try to justify these claims now, but other things being equal, I think it favors
- more skepticism about most philosophical arguments (I think a lot of these can't command the assent of a broad coalition of people), including arguments which depend on idiosyncratic moral perspectives (I think this given either realist or anti-realist meta-ethics)
- more adjustment toward common sense in cost-effectiveness estimates
- more skepticism about strategies for doing good that "seem weird" to most people
- more respect for the causes that top foundations focus on today
- more effort to be transparent about our thinking and stress-test unusual views we have
But this framework is a prior, not a fixed point, so in cases where it doesn't settle issues, it just points in a specific direction. I'd prefer not to get into details defending these claims now, since I hope to get into it at a later date.
Replies from: Eliezer_Yudkowsky, tog↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-11T21:19:19.399Z · LW(p) · GW(p)
I'd say you can go about as elite as you want if you are good at telling how the relevant people think and you aren't cherry-picking or using the "No True Scotsman" fallacy.
If your advice is "Go ahead and be picky about who you consider elites, but make a good-faith effort not to cherry-pick them and watch out for the No True Scotsman fallacy" then I may merely agree with you here! It's when people say, "You're not allowed to do that because outside view!" that I start to worry, and I may have committed a fallacy of believing that you were saying that because I've heard other people argue it so often, for which I apologize.
Replies from: JonahSinick, Nick_Beckstead↑ comment by JonahS (JonahSinick) · 2013-08-11T21:57:30.859Z · LW(p) · GW(p)
I really appreciate your thoughts on this thread :-)
I think that on any given specific question, it's generally possible for a sufficiently intelligent and determined person to beat out elite conventional wisdom. To the extent that you and I disagree, the point of disagreement is how much investigation one has to do in order to overturn elite conventional wisdom. This requires a subtle judgment call.
To give an example where not investigating in sufficient depth leads to problems, consider the error that GiveWell found in the DCP-2 cost-effectiveness estimate for deworming. A priori one could look at the DCP-2 and say "Conventional wisdom among global health organizations is to do lots of health interventions, but they should be trying to optimize cost-effectiveness, deworming is the most cost-effective intervention, there are funding gaps for deworming, so the conventional wisdom is wrong." Such a view would have given too little weight to alternative hypotheses (e.g. it being common knowledge among experts that the DCP-2 report is sloppy, experts having knowledge of the low robustness of cost-effectiveness estimates in philanthropy, etc.)
↑ comment by Nick_Beckstead · 2013-08-12T11:38:08.604Z · LW(p) · GW(p)
Great. It sounds like we may reasonably be on the same page at this point.
To reiterate and clarify, you can pretty much make the standards as high as you like as long as: (1) you have a good enough grip on how the elite class thinks, (2) you are using clear indicators of trustworthiness that many people would accept, and (3) you make a good-faith effort not to cherry-pick and watch out for the No True Scotsman fallacy. The only major limitation on this I can think of is that there is some trade-off to be made between certain levels of diversity and independent judgment. Like, if you could somehow pick the 10 best people in the world by some totally broad standards that everyone would accept (I think this is deeply impossible), that probably wouldn't be as good as picking the best 100-10,000 people by such standards. And I'd substitute some less trustworthy people for more trustworthy people in some cases where it would increase diversity of perspectives.
↑ comment by tog · 2013-08-11T21:28:53.066Z · LW(p) · GW(p)
Great, thanks. The unjustified examples for (2) help.
I'm interested in how the appropriate elite would vary in size and in nature for particular subjects. For instance, I imagine I might place a little more weight on certain groups of experts in, say, economics and medical effectiveness than the top 10% of Ivy League grads would. I haven't thought about this a great deal, so I might well be being irrational about this.
comment by owencb · 2013-08-11T10:57:34.163Z · LW(p) · GW(p)
I feel my view is weakest in cases where there is a strong upside to disregarding elite common sense, there is little downside, and you’ll find out whether your bet against conventional wisdom was right within a tolerable time limit. Perhaps many crazy-sounding entrepreneurial ideas and scientific hypotheses fit this description.
It doesn't look too hard to fold these scenarios into the framework. Elite common sense tends to update in light of valid novel arguments, but this updating is slow. So if you're in possession of a novel argument you often don't want to wait the years required to find out if it's accepted; you can stress test it by checking whether elite common sense thinks there's any plausibility in the argument, and if so let it nudge your beliefs a bit. If there's a high upside and low downside to acting in the way suggested by the argument, it may not need to change your beliefs by very much before it's correct to act on the new idea.
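To make the expected-value arithmetic concrete, here is a minimal sketch with purely illustrative numbers (not figures anyone in this thread has endorsed):

```python
# Purely illustrative numbers for the high-upside / low-downside case.
p_right = 0.05      # modest credence in the novel argument after stress-testing it
upside = 100.0      # payoff if the argument is right and you act on it
downside = -1.0     # small cost if it turns out to be wrong

expected_value = p_right * upside + (1 - p_right) * downside
print(expected_value)  # 4.05 > 0, so acting can be correct even at low credence
```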
Or am I distorting what you intended by the framework here?
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-11T12:09:29.627Z · LW(p) · GW(p)
This sounds right to me.
comment by wubbles · 2013-08-10T18:53:47.572Z · LW(p) · GW(p)
I think the problem with elite common sense is that it can be hard to determine whose views count. Take, for instance, the observed behavior of the investing public (as revealed by where they put their assets) in contrast with the views of most economists. If you define the elite as the economists, you get a different answer than if you define the elite as people with money. (The biggest mutual funds are not the lowest-cost ones, though this gap has narrowed over time, and economists generally have a strong preference for lower-cost funds.)
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-10T19:03:08.836Z · LW(p) · GW(p)
I agree that it is interesting and possibly important to think about whose views count as "elite."
The key question here is, "If a broad coalition of highly trustworthy people were aware of the considerations that I think favor investing in index funds, would they switch to investing in index funds rather than investing in actively managed mutual funds?" I think the answer to this question is Yes. By and large, my sense is that the most trustworthy people who are investing in other ways aren't aware of the considerations in favor of investing in index funds and the strong support for doing so from economists, and would change their minds if they were. One reason I believe this is that I have had conversations with some people like this and presented them with the evidence I have, and this convinced them to switch to index funds. They did actually switch.
In case you missed it, I discuss who counts as "elite" here:
How should we assign weight to different groups of people? Other things being equal, a larger number of people is better, more trustworthy people are better, people who are trustworthy by clearer indicators that more people would accept are better, and a set of criteria which allows you to have some grip on what the people in question think is better, but you have to make trade-offs. If I only included, say, the 20 smartest people I had ever met as judged by me personally, that would probably be too small a number of people, the people would probably have biases and blind spots very similar to mine, and I would miss out on some of the most trustworthy people, but it would be a pretty trustworthy collection of people and I’d have some reasonable sense of what they would say about various issues. If I went with, say, the 10 most-cited people in 10 of the most intellectually credible academic disciplines, 100 of the most generally respected people in business, and the 100 heads of different states, I would have a pretty large number of people and a broad set of people who were very trustworthy by clear standards that many people would accept, but I would have a hard time knowing what they would think about various issues because I haven’t interacted with them enough. How these factors can be traded-off against each other in a way that is practically most helpful probably varies substantially from person to person.
I can’t give any very precise answer to the question about whose opinions should be given significant weight, even in my own case. Luckily, I think the output of this framework is usually not very sensitive to how we answer this question, partly because most people would typically defer to other, more trustworthy people. If you want a rough guideline that I think many people who read this post could apply, I would recommend focusing on, say, the opinions of the top 10% of people who got Ivy-League-equivalent educations (note that I didn’t get such an education, at least as an undergrad, though I think you should give weight to my opinion; I’m just giving a rough guideline that I think works reasonably well in practice). You might give some additional weight to more accomplished people in cases where you have a grip on how they think.
comment by MichaelVassar · 2013-08-26T00:38:01.116Z · LW(p) · GW(p)
Upvoted for clarity, but fantastically wrong, IMHO. In particular, "I suspect that taking straight averages gives too much weight to the opinions of cranks and crackpots, so that you may want to remove some outliers or give less weight to them." seems to me to be unmotivated by epistemology and visibly motivated by conformity.
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-26T09:25:52.646Z · LW(p) · GW(p)
Would be interested to know more about why you think this is "fantastically wrong" and what you think we should do instead. The question the post is trying to answer is, "In practical terms, how should we take account of the distribution of opinion and epistemic standards in the world?" I would like to hear your answer to this question. E.g., should we all just follow the standards that come naturally to us? Should certain people do this? Should we follow the standards of some more narrowly defined group of people? Or some more narrow set of standards still?
I see the specific sentence you objected to as very much a detail rather than a core feature of my proposal, so it would be surprising to me if this was the reason you thought the proposal was fantastically wrong. For what it's worth, I do think that particular sentence can be motivated by epistemology rather than conformity. It is naturally motivated by the aggregation methods I mentioned as possibilities, which I have used in other contexts for totally independent reasons. I also think it is analogous to a situation in which I have 100 algorithms returning estimates of the value of a stock and one of them says the stock is worth 100x market price and all the others say it is worth market price. I would not take straight averages here and assume the stock is worth about 2x market price, even if the algorithm giving a weird answer was generally about as good as the others.
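To make that arithmetic concrete, here is a minimal sketch using the hypothetical numbers from the example above; the median and geometric mean stand in for the kind of aggregation methods that give an outlier less weight without simply dropping it:

```python
import statistics

# Hypothetical estimates from 100 roughly equally reliable algorithms:
# 99 say the stock is worth its market price (1.0x) and one says 100x.
estimates = [1.0] * 99 + [100.0]

print(statistics.mean(estimates))            # 1.99  -- straight average is dragged to ~2x
print(statistics.median(estimates))          # 1.0   -- ignores the lone outlier
print(statistics.geometric_mean(estimates))  # ~1.05 -- down-weights it without dropping it
```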
Replies from: MichaelVassar↑ comment by MichaelVassar · 2013-12-10T17:10:24.836Z · LW(p) · GW(p)
I think that people following the standards that seem credible to them upon reflection is the best you can hope for. Ideally, upon reflection, bets and experiments will be part of those standards to at least some people. Hopefully, some such groups will congeal into effective trade networks. If one usually reliable algorithm disagrees strongly with others, yes, in the short term you should probably effectively ignore it, but that can be done via squaring assigned probabilities, taking harmonic or geometric means, etc., not by dropping it, and more importantly, such deviations should be investigated with some urgency.
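As a rough illustration of that kind of pooling (hypothetical probabilities; only the raw geometric and harmonic means are shown, leaving aside renormalization and the squaring variant):

```python
import statistics

# Hypothetical probability estimates for one claim from ten usually reliable sources;
# one source gives a wild outlier.
probs = [0.02] * 9 + [0.90]

print(statistics.mean(probs))            # 0.108  -- arithmetic mean: the outlier dominates
print(statistics.geometric_mean(probs))  # ~0.029 -- geometric mean: the outlier only nudges
print(statistics.harmonic_mean(probs))   # ~0.022 -- harmonic mean: more conservative still
```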
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-12-12T11:36:07.722Z · LW(p) · GW(p)
If one usually reliable algorithm disagrees strongly with others, yes, in the short term you should probably effectively ignore it, but that can be done via squaring assigned probabilities, taking harmonic or geometric means, etc., not by dropping it, and more importantly, such deviations should be investigated with some urgency.
I think we agree about this much more than we disagree. After writing this post, I had a conversation with Anna Salamon in which she suggested that--as you suggest--exploring such disagreements with some urgency was probably more important than getting the short-term decision right. I agree with this and I'm thinking about how to live up to that agreement more.
Regarding the rest of it, I did say "or give less weight to them".
I think that people following the standards that seem credible to them upon reflection is the best you can hope for. Ideally, upon reflection, bets and experiments will be part of those standards to at least some people.
Thanks for answering the main question.
I and at least one other person I highly trust have gotten a lot of mileage out of paying a lot of attention to cues like "Person X wouldn't go for this" and "That cluster of people that seems good really wouldn't go for this", and trying to think through why, and putting weight on those other approaches to the problem. I think other people do this too. If that counts as "following the standards that seem credible to me upon reflection", maybe we don't disagree too much. If it doesn't, I'd say it's a substantial disagreement.
comment by RobinHanson · 2013-08-12T19:10:35.807Z · LW(p) · GW(p)
The overall framework is sensible, but I have trouble applying it to the most vexing cases: where the respected elites mostly just giggle at a claim and seem to refuse to even think about reasons for or against it, but instead just confidently reject it. It might seem to me that their usual intellectual standards would require that they engage in such reasoning, but the fact that they do not in fact think that appropriate in this case is evidence of something. But what?
Replies from: Nick_Beckstead, Lumifer↑ comment by Nick_Beckstead · 2013-08-12T23:52:38.336Z · LW(p) · GW(p)
I think it is evidence that thinking about it carefully wouldn't advance their current concerns, so they don't bother or use the thinking/talking for other purposes. Here are some possibilities that come to mind:
- they might not care about the outcomes that you think are decision-relevant and associated with your claim
- they may care about the outcomes, but your claim may not actually be decision-relevant if you were to find out the truth about the claim
- it may not be a claim which, if thought about carefully, would contribute enough additional evidence to change your probability in the claim enough to change decisions
- it may be that you haven't framed your arguments in a way that suggests to people that there is a promising enough path to getting info that would become decision-relevant
- it may be because of a signalling hypothesis that you would come up with; if you're talking about the distant future, maybe people mostly talk about such stuff as part of a system of behavior that signals support for certain perspectives. If this is happening more in this kind of case, it may be in part because of the other considerations.
↑ comment by Lumifer · 2013-08-12T19:33:33.197Z · LW(p) · GW(p)
Evidence of incentives working, I think.
In general, I think that the overall framework suffers from the (very big, IMHO) problem of ignoring the interests of various people and the consequent incentives. People in real life are not impartial beings of pure rationality -- they will and routinely do filter the evidence, misrepresent it, occasionally straight out invent it, all for the purpose of advancing their interests. And that's on top of people sincerely believing what is useful/profitable for them to believe.
comment by tog · 2013-08-11T21:41:31.988Z · LW(p) · GW(p)
Can you expand a little on how you would "try to find out what elite common sense would make of [your] information and analysis"? Is the following a good example of how to do it?
- Imagine there's an economic policy issue about which most members of the elite (say the top 10% of Ivy League grads) haven't thought, and which they'd have no initial position on.
- All I'm aware of is a theoretical economics argument for position A; I don't know of any counters, arguments on the other side, etc.
- I find as representative a sample of the elite as I can and tell them the argument.
- The vast majority move rapidly to significant confidence in position A.
- I adopt a similar perspective.
↑ comment by Nick_Beckstead · 2013-08-12T11:23:19.185Z · LW(p) · GW(p)
Yes.
comment by Divide · 2014-02-24T20:53:47.847Z · LW(p) · GW(p)
By and large, when people are unwilling to carefully consider arguments with the goal of having accurate beliefs, this is evidence that it is not useful to try to think carefully about this area. This follows from the idea mentioned above that people tend to try to have accurate views when it is in their present interests to have accurate views. So if this is the main way the framework breaks down, then the framework is mostly breaking down in cases where good epistemology is relatively unimportant.
That's one clever trick right there!
comment by lukeprog · 2014-01-06T21:37:35.342Z · LW(p) · GW(p)
Some empirical discussion of this issue can be found in Hochschild (2012) and the book it discusses, Zaller (1992).
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2014-01-06T23:27:54.647Z · LW(p) · GW(p)
I'd say Hochschild's stuff isn't that empirical. As far as I can tell, she just gives examples of cases where (she thinks) people do follow elite opinion and should, don't follow it but should, do follow it but shouldn't, and don't follow it and shouldn't. There's nothing systematic about it.
Hochschild's own answer to my question is:
When should citizens reject elite opinion leadership? In principle, the answer is easy: the mass public should join the elite consensus when leaders’ assertions are empirically supported and morally justified. Conversely, the public should not fall in line when leaders’ assertions are either empirically unsupported, or morally unjustified, or both. (p. 536)
This view seems to be the intellectual cousin of the view that we should just believe what is supported by good epistemic standards, regardless of what others think. (These days, philosophers are calling this a "steadfast" (as contrasted with "conciliatory") view of disagreement.) I didn't talk about this kind of view, largely because I find it very unhelpful.
I haven't looked at Zaller yet but it appears to mostly be about when people do (rather than should) follow elite opinion. It sounds pretty interesting though.
Replies from: lukeprog
comment by lukeprog · 2013-08-31T22:58:21.454Z · LW(p) · GW(p)
Compare to: No Safe Defense, Not Even Science.
comment by Ustice · 2013-08-15T20:04:32.751Z · LW(p) · GW(p)
How would this apply to social issues, do you think? It seems that this would be a poor way to be at the forefront of social change. If this strategy were widely applied, would we ever have seen the 15th and 19th amendments to the Constitution here in the US?
On a more personal basis, I'm polyamorous, but if I followed your framework, I would have to reject polyamory as a viable relationship model. Yes, the elite don't have a lot of data on polyamory, and although I have researched the good and the bad, and how it can work compared to monogamy, I don't think that I would be able to convince the elite of my opinions.
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-15T22:22:47.376Z · LW(p) · GW(p)
How would this apply to social issues, do you think? It seems that this would be a poor way to be at the forefront of social change. If this strategy were widely applied, would we ever have seen the 15th and 19th amendments to the Constitution here in the US?
My impression is that the most trustworthy people are more likely to be at the front of good social movements than the general public, so that if people generally adopted the framework, many of the promising social movements would progress more quickly than they actually did. I am not sufficiently aware of the specific history of the 15th and 19th amendments to say more than that at this point.
There is a general question about how the framework is related to innovation. Aren't innovators generally going against elite common sense? I think that innovators are often overconfident about the quality of their ideas, and have significantly more confidence in their ideas than they need for their projects to be worthwhile by the standards of elite common sense. E.g., I don't think you need to have high confidence that Facebook is going to pan out for it to be worthwhile to try to make Facebook. Elite common sense may see most attempts at innovation as unlikely to succeed, but I think it would judge many as worthwhile in cases where we'll get to find out whether the innovation was any good or not. This might point somewhat in the direction of less innovation.
However, I think that the most trustworthy people tend to innovate more, are more in favor of innovation than the general population, and are less risk-averse than the general population. These factors might point in favor of more innovation. It is unclear to me whether we would have more or less innovation if the framework were widely adopted, but I suspect we would have more.
On a more personal basis, I'm polyamorous, but if I followed your framework, I would have to reject polyamory as a viable relationship model. Yes, the elite don't have a lot of data on polyamory, and although I have researched the good and the bad, and how it can work compared to monogamy, I don't think that I would be able to convince the elite of my opinions.
My impression is that elite common sense is not highly discriminating against polyamory as a relationship model. It would probably be skeptical of polyamory for the general person, but say that it might work for some people, and that it could make sense for certain interested people to try it out.
If your opinion is that polyamory should be the norm, I agree that you wouldn't be able to convince elite common sense of this. My personal take is that it is far from clear that polyamory should be the norm. In any event, this doesn't seem like a great test case for taking down the framework because the idea that polyamory should be the norm does not seem like a robustly supported claim.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-15T23:05:11.358Z · LW(p) · GW(p)
My impression is that the most trustworthy people are more likely to be at the front of good social movements than the general public
That sounds reeeeaaally suspicious in terms of potentially post-facto assignments. (Though defeasibly so - I can totally imagine a case being made for, "Yes, this really was generally visible to the person on the street at the time without benefit of hindsight.")
Can you use elite common sense to generate a near-term testable prediction that would sound bold relative to my probability assignments or LW generally? The last obvious point on which you could have thus been victorious would have been my skepticism of the now-confirmed Higgs boson, and Holden is apparently impressed by the retrospective applicability of this heuristic to predict that interventions much better than the Gates Foundation's best interventions would not be found. But still, an advance prediction would be pretty cool.
Replies from: Nick_Beckstead↑ comment by Nick_Beckstead · 2013-08-16T00:01:56.283Z · LW(p) · GW(p)
That sounds reeeeaaally suspicious in terms of potentially post-facto assignments. (Though defeasibly so - I can totally imagine a case being made for, "Yes, this really was generally visible to the person on the street at the time without benefit of hindsight.")
This isn't something I've looked into closely, though from looking at it for a few minutes I think it is something I would like to look into more. Anyway, on the Wikipedia page on diffusion of innovation:
This is the second fastest category of individuals who adopt an innovation. These individuals have the highest degree of opinion leadership among the other adopter categories. Early adopters are typically younger in age, have a higher social status, have more financial lucidity, advanced education, and are more socially forward than late adopters. More discrete in adoption choices than innovators. Realize judicious choice of adoption will help them maintain central communication position (Rogers 1962 5th ed, p. 283)."
I think this supports my claim that elite common sense is quicker to join and support new good social movements, though as I said I haven't looked at it closely at all.
Can you use elite common sense to generate a near-term testable prediction that would sound bold relative to my probability assignments or LW generally?
I can't think of anything very good, but I'll keep it in the back of my mind. Can you think of something that would sound bold relative to my perspective?