Posts

Truthful and honest AI 2021-10-29T07:28:36.225Z
Interpretability 2021-10-29T07:28:02.650Z
Techniques for enhancing human feedback 2021-10-29T07:27:46.700Z
Measuring and forecasting risks 2021-10-29T07:27:32.836Z
Request for proposals for projects in AI alignment that work with deep learning systems 2021-10-29T07:26:58.754Z
Provide feedback on Open Philanthropy’s AI alignment RFP 2021-08-20T19:52:55.309Z
Review of studies says you can decrease motivated cognition through self-affirmation 2013-10-23T11:43:49.300Z
My daily reflection routine 2013-08-18T11:54:00.596Z
Common sense as a prior 2013-08-11T18:18:11.494Z
A Proposed Adjustment to the Astronomical Waste Argument 2013-05-27T03:39:01.559Z

Comments

Comment by Nick_Beckstead on Do Earths with slower economic growth have a better chance at FAI? · 2014-10-29T16:41:58.742Z · LW · GW

What this shows is that people are inconsistent in a certain way. If you ask them the same question in two different ways (packed vs. unpacked) you get different answers. Is there any indication of which is the better way to ask the question, or whether asking it some other way is better still? Without an answer to this question, it's unclear to me whether we should talk about an "unpacking fallacy" or a "failure to unpack fallacy".

Comment by Nick_Beckstead on Please recommend some audiobooks · 2014-10-16T23:24:42.184Z · LW · GW

I have audiobook recommendations here.

Comment by Nick_Beckstead on Will the world's elites navigate the creation of AI just fine? · 2014-02-25T09:47:38.180Z · LW · GW

Thanks!

Comment by Nick_Beckstead on Will the world's elites navigate the creation of AI just fine? · 2014-02-24T14:10:07.286Z · LW · GW

Could you say a bit about your audiobook selection process?

Comment by Nick_Beckstead on Common sense as a prior · 2014-01-06T23:27:54.647Z · LW · GW

I'd say Hochschild's stuff isn't that empirical. As far as I can tell, she just gives examples of cases where (she thinks) people do follow elite opinion and should, don't follow it but should, do follow it but shouldn't, and don't follow it and shouldn't. There's nothing systematic about it.

Hochschild's own answer to my question is:

When should citizens reject elite opinion leadership? In principle, the answer is easy: the mass public should join the elite consensus when leaders’ assertions are empirically supported and morally justified. Conversely, the public should not fall in line when leaders’ assertions are either empirically unsupported, or morally unjustified, or both. (p. 536)

This view seems to be the intellectual cousin of the view that we should just believe what is supported by good epistemic standards, regardless of what others think. (These days, philosophers are calling this a "steadfast" (as contrasted with "conciliatory") view of disagreement.) I didn't talk about this kind of view, largely because I find it very unhelpful.

I haven't looked at Zaller yet but it appears to mostly be about when people do (rather than should) follow elite opinion. It sounds pretty interesting though.

Comment by Nick_Beckstead on A critique of effective altruism · 2013-12-12T11:42:49.097Z · LW · GW

What I mostly remember from that conversation was disagreeing about the likely consequences of "actually trying". You thought elite people in the EA cluster who actually tried had high probability of much more extreme achievements than I did. I see how that fits into this post, but I didn't know you had loads of other criticism about EA, and I probably would have had a pretty different conversation with you if I did.

Fair enough regarding how you want to spend your time. I think you're mistaken about how open I am to changing my mind about things in the face of arguments, and I hope that you reconsider. I believe that if you consulted with people you trust who know me much better than you, you'd find they have different opinions about me than you do. There are multiple cases where detailed engagement with criticism has substantially changed my operations.

Comment by Nick_Beckstead on Common sense as a prior · 2013-12-12T11:36:07.722Z · LW · GW

If one usually reliable algorithm disagrees strongly with others, yes, short term you should probably effectively ignore it, but that can be done via squaring assigned probabilities, taking harmonic or geometric means, etc, not by dropping it, and more importantly, such deviations should be investigated with some urgency.

I think we agree about this much more than we disagree. After writing this post, I had a conversation with Anna Salamon in which she suggested that--as you suggest--exploring such disagreements with some urgency was probably more important than getting the short-term decision right. I agree with this and I'm thinking about how to live up to that agreement more.

Regarding the rest of it, I did say "or give less weight to them".

I think that people following the standards that seem credible to them upon reflection is the best you can hope for. Ideally, upon reflection, bets and experiments will be part of those standards to at least some people.

Thanks for answering the main question.

I and at least one other person I highly trust have gotten a lot of mileage out of paying a lot of attention to cues like "Person X wouldn't go for this" and "That cluster of people that seems good really wouldn't go for this", and trying to think through why, and putting weight on those other approaches to the problem. I think other people do this too. If that counts as "following the standards that seem credible to me upon reflection", maybe we don't disagree too much. If it doesn't, I'd say it's a substantial disagreement.

Comment by Nick_Beckstead on A critique of effective altruism · 2013-12-02T20:36:40.076Z · LW · GW

The main thing that I personally think we don't need as much of is donations to object-level charities (e.g. GiveWell's top picks). It's unclear to me how much this can be funged into more self-reflection for the general person, but for instance I am sacrificing potential donations right now in order to write this post and respond to criticism...

I am substantially less enthusiastic about donations to object-level charities (for their own sake) than I am about opportunities for us to learn and expand our influence. So I'm pretty on board here.

I think "writing blogposts criticizing mistakes that people in the EA community commonly make" is a moderate strawman of what I'd actually like to see, in that it gets us closer to being a successful movement but clearly won't be sufficient on its own.

That was my first pass at how I'd start trying to increase the "self-awareness" of the movement. I would be interested in hearing more specifics about what you'd like to see happen.

Why do you think basic fact-finding would be particularly helpful? Seems to me that if we can't come to nontrivial conclusions already, the kind of facts we're likely to find won't help very much.

A few reasons. One is that the model for research having an impact is: you do research --> you find valuable information --> people recognize your valuable information --> people act differently. I have become increasingly pessimistic about people's ability to recognize good research on issues like population ethics. But I believe people can recognize good research on stuff like shallow cause overviews.

Another consideration is our learning and development. I think the above consideration applies to us, not just to other people. If it's easier for us to tell if we're making progress, we'll learn how to learn about these issues more quickly.

I believe that a lot of the more theoretical stuff needs to happen at some point. There can be a reasonable division of labor, but I think many of us would be better off loading up on the theoretical side after we had a stronger command of the basics. By "the basics" I mean stuff like "who is working on synthetic biology?" in contrast with stuff like "what's the right theory of population ethics?".

You might have a look at this conversation I had with Holden Karnofsky, Paul Christiano, Rob Wiblin, and Carl Shulman. I agree with a lot of what Holden says.

Comment by Nick_Beckstead on A critique of effective altruism · 2013-12-02T18:02:19.963Z · LW · GW

I'd like to see more critical discussion of effective altruism of the type in this post. I particularly enjoyed the idea of "pretending to actually try." People doing sloppy thinking and then making up EA-sounding justifications for their actions is a big issue.

As Will MacAskill said in a Facebook comment, I do think that a lot of smart people in the EA movement are aware of the issues you're bringing up and have chosen to focus on other things. Big picture, I find claims like "your thing has problem X so you need to spend more resources on fixing X" more compelling when you point to things we've been spending time on and say that we should have done less of those things and more of the thing you think we should have been doing. E.g., I currently spend a lot of my time on research, advocacy, and trying to help improve 80,000 Hours, and I'd be pretty hesitant to switch to writing blogposts criticizing mistakes that people in the EA community commonly make, though I've considered doing so and agree this would help address some of the issues you've identified. But I would welcome more of that kind of thing.

I disagree with your perspective that the effective altruism movement has underinvested in research into population ethics. I wrote a PhD thesis which heavily featured population ethics and aimed at drawing out big-picture takeaways for issues like existential risk. I wouldn't say I settled all the issues, but I think we'd make more progress as a movement if we did less philosophy and more basic fact-finding of the kind that goes into GiveWell shallow cause overviews.

Disclosure: I am a Trustee for the Centre for Effective Altruism and I formerly worked at GiveWell as a summer research analyst.

Comment by Nick_Beckstead on A critique of effective altruism · 2013-12-02T16:49:24.602Z · LW · GW

I would love to hear about your qualms with the EA movement if you ever want to have a conversation about the issue.

Edited: When I first read this, I thought you were saying you hadn't brought these problems up with me, but re-reading it, it sounds like you tried to raise these criticisms with me. This post has a Vassar-y feel to it, but this is mostly criticism I wouldn't say I'd heard from you, and I would have guessed your criticisms would be different. In any case, I would still be interested in hearing more from you about your criticisms of EA.

Comment by Nick_Beckstead on Review of studies says you can decrease motivated cognition through self-affirmation · 2013-10-23T18:03:58.550Z · LW · GW

I agree that this would be good, but didn't think it was worthwhile for me to go through the extra effort in this case. But I did think it was worthwhile to share what I had already found. I think I was very clear about how closely this had been vetted (which is to say, extremely little).

Comment by Nick_Beckstead on Nick Beckstead: On the Overwhelming Importance of Shaping the Far Future · 2013-09-04T22:32:05.473Z · LW · GW

What if we assume Period Independence except for exact repetitions, where the value of extra repetitions eventually go to zero? Perhaps this could be a way to be "timid" while making the downsides of "timidity" seem not so bad or even reasonable? For example in section 6.3.2, such a person would only choose deal 1 over deal 2 if the years of happy lives offered in deal 1 are such that he would already have repeated all possible happy time periods so many times that he values more repetitions very little.

I think it would be interesting if you could show that the space of possible periods-of-lives is structured in such a way that, when combined with a reasonable rule for discounting repetitions, yields a bounded utility function. I don't have fully developed views on the repetition issue and can imagine that the view has some weird consequences, but if you could do this I would count it as a significant mark in favor of the perspective.

BTW what do you think about my suggestion to do a sequence of blog posts based on your thesis?

I think this would have some value but isn't at the top of my list right now.

Also as an unrelated comment, the font in your thesis seems to be such that it's pretty uncomfortable to read in Adobe Acrobat, unless I zoom in to make the text much larger than I usually have to. Not sure if it's something you can easily fix. If not, I can try to help if you email me the source of the PDF.

I think I'll keep with the current format for citation consistency for now. But I have added a larger font version here.

Comment by Nick_Beckstead on Nick Beckstead: On the Overwhelming Importance of Shaping the Far Future · 2013-09-04T14:16:17.571Z · LW · GW

Also, it's not clear to me that strict Period Independence is a good thing. It seems reasonable to not value a time period as much if you knew it was an exact repetition of a previous time period. I wrote a post that's related to this.

I agree that Period Independence may break in the kind of case you describe, though I'm not sure. I don't think that the kind of case you are describing here is a strong consideration against using Period Independence in cases that don't involve exact repetition. I think your main example in the post is excellent.

Comment by Nick_Beckstead on Nick Beckstead: On the Overwhelming Importance of Shaping the Far Future · 2013-09-02T10:13:20.296Z · LW · GW

OK, I"ll ask Paul or Stewart next time I see them.

Does your proposal also violate #1 because the simplicity of an observer-situated-in-a-world is a holistic property of the observer-situated-in-a-world rather than a local one?

Comment by Nick_Beckstead on Nick Beckstead: On the Overwhelming Importance of Shaping the Far Future · 2013-08-30T12:27:56.275Z · LW · GW

That aside, I do have an object-level comment. Nick states (in section 6.3.1) that Period Independence is incompatible with bounded utility function, but I think that's wrong. Consider a total utilitarian who exponentially discounts each person-stage according to their distance from some chosen space-time event. Then the utility function is both bounded (assuming the undiscounted utility for each person-stage is bounded) and satisfies Period Independence.

I agree with this. I think I was implicitly assuming some additional premises, particularly Temporal Impartiality. I believe that Period Independence + Temporal Impartiality is inconsistent with bounded utility. (Even saying this implicitly assumes other stuff, like transitive rankings, etc., though I agree that Temporal Impartiality is much more substantive.)
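
To make this concrete, here is a quick sketch in my own notation (simplifying Wei Dai's construction to a one-sided sequence of periods; the notation is mine, not his). Write V(p_t) for the value of period t and assume |V(p_t)| \le M for every period:

  Discounted sum (Period Independence holds, Temporal Impartiality fails):
    U = \sum_{t=0}^{\infty} \gamma^t V(p_t), with 0 < \gamma < 1,
    so |U| \le M \sum_{t=0}^{\infty} \gamma^t = M/(1 - \gamma), which is bounded.

  Impartial sum (Period Independence and Temporal Impartiality both hold):
    U(n) = \sum_{t=0}^{n} V(p_t); taking V(p_t) = X > 0 for every t gives
    U(n) = (n + 1)X, which grows without bound as n increases.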

Another idea for a bounded utility function satisfying Period Independence, which I previously suggested on LW and was originally motivated by multiverse-related considerations, is to discount or bound the utility assigned to each person-stage by their algorithmic probability.

I am having a hard time parsing this. Could you explain where the following argument breaks down?

Let A(n,X) be a world in which there are n periods of quality X.

  1. The value of what happens during a period is a function of what happens during that period, and not a function of what happens in other periods.

  2. If the above premise is true, then there exists a positive period quality X such that, for any n, A(n,X) is a possible world.

  3. Assuming Period Independence and Temporal Impartiality, as n approaches infinity, the value of A(n,X) approaches infinity.

  4. Therefore, Period Independence and Temporal Impartiality imply an unbounded utility function.

The first premise here is something I articulate in Section 3.2, but may not be totally clear given the informal statement of Period Independence that I run with.

Let me note that one thing about your proposal confuses me, and could potentially be related to why I don't see which step of the above argument you deny. I primarily think of probability as a property of possible worlds, rather than individuals. Perhaps you are thinking of probability as a property of centered possible worlds? Is your proposal that the goodness of a world A is of the form:

g(A) = (well-being of person 1) × (prior centered-world probability of person 1 in world A) + (well-being of person 2) × (prior centered-world probability of person 2 in A) + ...

? If it is, this is a proposal I have not thought about and would be interested in hearing more about its merits and why it is bounded.
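
One guess, written out just for concreteness (I am not sure this is the reading you intend): if w_i is the well-being of person i and q_i(A) is the prior centered-world probability of being person i in world A, then

  g(A) = \sum_{i \in A} w_i \cdot q_i(A),

and if |w_i| \le M for every i and \sum_{i \in A} q_i(A) \le 1, then |g(A)| \le M no matter how many people world A contains. That would explain the boundedness, but I may be misreading the proposal.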

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-26T09:25:52.646Z · LW · GW

Would be interested to know more about why you think this is "fantastically wrong" and what you think we should do instead. The question the post is trying to answer is, "In practical terms, how should we take account of the distribution of opinion and epistemic standards in the world?" I would like to hear your answer to this question. E.g., should we all just follow the standards that come naturally to us? Should certain people do this? Should we follow the standards of some more narrowly defined group of people? Or some more narrow set of standards still?

I see the specific sentence you objected to as very much a detail rather than a core feature of my proposal, so it would be surprising to me if this was the reason you thought the proposal was fantastically wrong. For what it's worth, I do think that particular sentence can be motivated by epistemology rather than conformity. It is naturally motivated by the aggregation methods I mentioned as possibilities, which I have used in other contexts for totally independent reasons. I also think it is analogous to a situation in which I have 100 algorithms returning estimates of the value of a stock and one of them says the stock is worth 100x market price and all the others say it is worth market price. I would not take straight averages here and assume the stock is worth about 2x market price, even if the algorithm giving a weird answer was generally about as good as the others.
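
As a rough illustration of the stock example (the numbers here are invented purely for the sketch): with 99 estimates at market price and one at 100x, a straight average lands near 2x, while aggregation rules that damp outliers, such as the median or the geometric mean, stay near market price.

  # Sketch: 99 algorithms say the stock is worth market price (1.0x),
  # one says 100x. Numbers are invented for illustration only.
  from statistics import mean, median, geometric_mean

  estimates = [1.0] * 99 + [100.0]  # multiples of market price

  print(mean(estimates))            # ~1.99x: straight averaging roughly doubles the estimate
  print(median(estimates))          # 1.0x: the outlier has essentially no effect
  print(geometric_mean(estimates))  # ~1.05x: the outlier is heavily damped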

Comment by Nick_Beckstead on My daily reflection routine · 2013-08-19T08:25:46.304Z · LW · GW

The answers over the last 6 weeks have not been very repetitive at all. I'm not sure why this is exactly, since when I was much younger and would pray daily the answers were highly repetitive. It may have something to do with greater maturity and a greater appreciation of the purpose of the activity.

Comment by Nick_Beckstead on My daily reflection routine · 2013-08-18T18:32:27.646Z · LW · GW

I think of the gratitude list as things that stood out as either among the best parts of the day or as unusually good (for you personally). And mistakes go the opposite way.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-16T00:01:56.283Z · LW · GW

That sounds reeeeaaally suspicious in terms of potentially post-facto assignments. (Though defeasibly so - I can totally imagine a case being made for, "Yes, this really was generally visible to the person on the street at the time without benefit of hindsight.")

This isn't something I've looked into closely, though from looking at it for a few minutes I think it is something I would like to look into more. Anyway, on the Wikipedia page on diffusion of innovation:

This is the second fastest category of individuals who adopt an innovation. These individuals have the highest degree of opinion leadership among the other adopter categories. Early adopters are typically younger in age, have a higher social status, have more financial lucidity, advanced education, and are more socially forward than late adopters. More discrete in adoption choices than innovators. Realize judicious choice of adoption will help them maintain central communication position (Rogers 1962 5th ed, p. 283)."

I think this supports my claim that elite common sense is quicker to join and support new good social movements, though as I said I haven't looked at it closely at all.

Can you use elite common sense to generate an near-term testable prediction that would sound bold relative to my probability assignments or LW generally?

I can't think of anything very good, but I'll keep it in the back of my mind. Can you think of something that would sound bold relative to my perspective?

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-15T22:22:47.376Z · LW · GW

How would this apply to social issues do you think? It seems that this is a poor way to be on the front of social change? If this strategy was widely applied, would we ever have seen the 15th and 19th amendments to the Constitution here in the US?

My impression is that the most trustworthy people are more likely to be at the front of good social movements than the general public, so that if people generally adopted the framework, many of the promising social movements would progress more quickly than they actually did. I am not sufficiently aware of the specific history of the 15th and 19th amendments to say more than that at this point.

There is a general question about how the framework is related to innovation. Aren't innovators generally going against elite common sense? I think that innovators are often overconfident about the quality of their ideas, and have significantly more confidence in their ideas than they need for their projects to be worthwhile by the standards of elite common sense. E.g., I don't think you need to have high confidence that Facebook is going to pan out for it to be worthwhile to try to make Facebook. Elite common sense may see most attempts at innovation as unlikely to succeed, but I think it would judge many as worthwhile in cases where we'll get to find out whether the innovation was any good or not. This might point somewhat in the direction of less innovation.

However, I think that the most trustworthy people tend to innovate more, are more in favor of innovation than the general population, and are less risk-averse than the general population. These factors might point in favor of more innovation. It is unclear to me whether we would have more or less innovation if the framework were widely adopted, but I suspect we would have more.

On a more personal basis, I'm polyamorous, but if I followed your framework, I would have to reject polyamory as a viable relationship model. Yes, the elite don't have a lot of data on polyamory, but although I have researched the good and the bad, and how it can work compared to monogamy, but I don't think that I would be able to convince the elite of my opinions.

My impression is that elite common sense is not highly discriminating against polyamory as a relationship model. It would probably be skeptical of polyamory for the general person, but say that it might work for some people, and that it could make sense for certain interested people to try it out.

If your opinion is that polyamory should be the norm, I agree that you wouldn't be able to convince elite common sense of this. My personal take is that it is far from clear that polyamory should be the norm. In any event, this doesn't seem like a great test case for taking down the framework because the idea that polyamory should be the norm does not seem like a robustly supported claim.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-13T19:32:41.446Z · LW · GW

I think other people are significantly more responsive to values disagreements than Brian is, and that this suggests they are significantly more open to the possibility that their idiosyncratic personal values judgments are mistaken. You can get a sense of how unusual Brian's perspectives are by examining his website, where his discussions of negative utilitarianism and insect suffering stand out.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-13T19:09:01.579Z · LW · GW

I'm not always as unreasonable as suggested there, but I was mainly trying to point out that if I refuse to go along with certain ideas, it's not dependent on a controversial theory of meta-ethics. It's just that I intuitively don't like the ideas and so reject them out of hand. Most people do this with ideas they find too unintuitive to countenance.

Whether you want to call it a theory of meta-ethics or not, and whether it is a factual error or not, you have an unusual approach to dealing with moral questions that places an unusual amount of emphasis on Brian Tomasik's present concerns. Maybe this is because there is something very different about you that justifies it, or maybe it is some idiosyncratic blind spot or bias of yours. I think you should put weight on both possibilities, and that this pushes in favor of more moderation in the face of values disagreements. Hope that helps articulate where I'm coming from in your language. This is hard to write and think about.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-13T12:00:44.069Z · LW · GW

Here I would say, "Screw ethics and meta-ethics. All I'm saying is I want to do what I feel like doing, even if you and other elites don't agree with it."

I think that there is a genuine concern that many people have when they try to ask ethical questions and discuss them with others, and that this process can lead to doing better in terms of that concern. I am speaking vaguely because, as I said earlier, I don't think that I or others really understand what is going on. This has been an important process for many of the people I know who are trying to make a large positive impact on the world. I believe it was part of the process for you as well. When you say "I want to do what I want to do" I think it mostly just serves as a conversation-stopper, rather than something that contributes to a valuable process of reflection and exchange of ideas.

I personally suspect your error lies in not considering the problem from perspectives other than "what does Brian Tomasik care about right now?".

Sure, but this is not a factual error, just an error in being a reasonable person or something. :)

I think it is a missed opportunity to engage in a process of reflection and exchange of ideas that I don't fully understand but seems to deliver valuable results.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-13T11:44:40.771Z · LW · GW

I don't have a lot to add to my comments on religious authorities, apart from what I said in the post and what I said in response to Luke's Muslim theology case here.

One thing I'd say is that many of the Christian moral teachings that are most celebrated are actually pretty good, though I'd admit that many others are not. Examples of good ones include:

  • Love your neighbor as yourself (I'd translate this as "treat others as you would like to be treated")

  • Focus on identifying and managing your own personal weaknesses rather than criticizing others for their weaknesses

  • Prioritize helping poor and disenfranchised people

  • Don't let your acts of charity be motivated by finding approval from others

These are all drawn from Jesus's Sermon on the Mount, which is arguably his most celebrated set of moral teachings.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-13T09:29:16.843Z · LW · GW

It's very common for people to say, "Predictions are hard, especially about the future, so let's focus on the short term where we can be more confident we're at least making a small positive impact."

If by short-term you mean "what happens in the next 100 years or so," I think there is something to this idea, even for people who care primarily about very long-term considerations. I suspect it is true that the expected value of very long-run outcomes is primarily dominated by totally unforeseeable weird stuff that could happen in the distant future. But I believe that the best way to deal with this challenge is to empower humanity to deal with the relatively foreseeable and unforeseeable challenges and opportunities that it will face over the next few generations. This doesn't mean "let's just look only at short-run well-being boosts," but something more like "let's broadly improve cooperation, motives, access to certain types of information, narrow and broad technological capabilities, and intelligence and rationality to deal with the problems we can't foresee, and let's rely on the best evidence we can to prepare for the problems we can foresee." I say a few things about this issue here. I hope to say more about it in the future.

An analogy would be that if you were a 5-year-old kid and you primarily cared about how successful you were later in life, you should focus on self-improvement activities (like developing good habits, gaining knowledge, and learning how to interact with other people) and health and safety issues (like getting adequate nutrition, not getting hit by cars, not poisoning yourself, not falling off of tall objects, and not eating lead-based paint). You should not try to anticipate fine-grained challenges in the labor market when you graduate from college or disputes you might have with your spouse. I realize that this analogy may not be compelling, but perhaps it illuminates my perspective.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-12T23:52:38.336Z · LW · GW

I think it is evidence that thinking about it carefully wouldn't advance their current concerns, so they don't bother or use the thinking/talking for other purposes. Here are some possibilities that come to mind:

  • they might not care about the outcomes that you think are decision-relevant and associated with your claim

  • they may care about the outcomes, but your claim may not actually be decision-relevant if you were to find out the truth about the claim

  • it may not be a claim which, if thought about carefully, would contribute enough additional evidence to change your probability in the claim enough to change decisions

  • it may be that you haven't framed your arguments in a way that suggests to people that there is a promising enough path to getting info that would become decision-relevant

  • it may be because of a signalling hypothesis that you would come up with; if you're talking about the distant future, maybe people mostly talk about such stuff as part of a system of behavior that signals support for certain perspectives. If this is happening more in this kind of case, it may be in part because of the other considerations.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-12T23:14:23.742Z · LW · GW

However, there are some Pascalian wagers that seem genuinely compelling even after looking for alternatives, like "the Overwhelming Importance of Shaping the Far Future." My impression is that most elites do not agree that the far future is overwhelmingly important even after hearing your arguments because they don't have linear utility functions and/or don't like Pascalian wagers. Do you think most elites would agree with you about shaping the far future?

I disagree with the claim that the argument for shaping the far future is a Pascalian wager. In my opinion, there is a reasonably high, reasonably non-idiosyncratic probability that humanity will survive for a very long time, that there will be a lot of future people, and/or that future people will have a very high quality of life. Though I have not yet defended this claim as well as I would like, I also believe that many conventionally good things people can do push toward future generations facing future challenges and opportunities better than they otherwise would, which with a high enough and conventional enough probability makes the future go better. I think that these are claims which elite common sense would be convinced of, if in possession of my evidence. If elite common sense would not be so convinced, I would consider abandoning these assumptions.

Regarding the more purely moral claims, I suspect there are a wide variety of considerations which elite common sense would give weight to, and that very long-term considerations are one type of important consideration which would get weight according to elite common sense. It may also be, in part, a fundamental difference of values, where I am part of a not-too-small contingent of people who have distinctive concerns. However, in genuinely altruistic contexts, I think many people would give these considerations substantially more weight if they thought about the issue carefully.

Near the beginning of my dissertation, I actually speak about the level of confidence I have in my thesis quite tentatively:

How convinced should you be by the arguments I'm going to give? I'm defending an unconventional thesis and my support for that thesis comes from highly speculative arguments. I don't have great confidence in my thesis, or claim that others should. But I am convinced that it could well be true, that the vast majority of thoughtful people give the claim less credence than they should, and that it is worth thinking about more carefully. I aim to make the reader justified in taking a similar attitude. (p. 3, Beckstead 2013)

I stand by this tentative stance.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-12T22:57:00.678Z · LW · GW

My current meta-ethical view says I care about factual but not necessarily moral disagreements with respect to elites. One's choice of meta-ethics is itself a moral decision, not a factual one, so this disagreement doesn't much concern me.

I'm a bit flabbergasted by the confidence with which you speak about this issue. In my opinion, the history of philosophy is filled with a lot of people often smarter than you and me going around saying that their perspective is the unique one that solves everything and that other people are incoherent and so on. As far as I can tell, you are another one of these people.

Like Luke Muehlhauser, I believe that we don't even know what we're asking when we ask ethical questions, and I suspect we don't really know what we're asking when we ask meta-ethical questions either. As far as I can tell, you've picked one possible candidate thing we could be asking--"what do I care about right now?"--among a broad class of possible questions, and then you are claiming that whatever you want right now is right because that's what you're asking.

Of course, there are some places where I could be factually wrong in my meta-ethics, like with the logical reasoning in this comment, but I think most elites don't think there's something wrong with my logic, just something (ethically) wrong with my moral stance. Let me know if you disagree with this.

I think most people would just think you had made an error somewhere and not be able to say where it was, and add that you were talking about a completely murky issue that people aren't good at thinking about.

I personally suspect your error lies in not considering the problem from perspectives other than "what does Brian Tomasik care about right now?".

[Edited to reduce rhetoric.]

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-12T22:37:39.721Z · LW · GW

Yes, thank you for catching that.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-12T17:52:04.354Z · LW · GW

I agree that in principle, you don't want some discontinuous distinction between elites and non-elites. I also agree with your points (a) - (c). Something like PageRank seems good to me, though of course I would want to be tentative about the details.

In practice, my suspicion is that most of what's relevant here comes from the very elite people's thinking, so that not much is lost by just focusing on their opinions. But I hold this view pretty tentatively. I presented the ideas the way I did partly because of this hunch and partly for ease of exposition.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-12T13:17:08.153Z · LW · GW

Insofar as my own actions are atypical, I intend for it to result from atypical moral beliefs rather than atypical factual beliefs. (If you can think of instances of clearly atypical factual beliefs on my part, let me know.) Of course, you could claim, elite common sense should apply also as a prior to what my own moral beliefs actually are, given the fallibility of introspection. This is true, but its importance depends on how abstractly I view my own moral values. If I ask questions about what an extrapolated Brian would think upon learning more, having more experiences, etc., then the elite prior has a lot to say on this question. But if I'm more concerned with my very immediate emotional reaction, then there's less room for error and less that the common-sense prior has to say. The fact that my moral values are sometimes not strongly affected by common-sense moral values comes from my favoring immediate emotions rather than what (one of many possible) extrapolated Brians would feel upon having further and different life experiences. (Of course, there are many possible further life experiences I could have, which would push me in lots of random directions. This is why I'm not so gung ho about what my extrapolated selves would think on some questions.)

As you point out, one choice point is how much idealization to introduce. At one extreme, you might introduce no idealization at all, so that whatever you presently approve of is what you’ll assume is right. On the other extreme you might have a great deal of idealization. You may assume that a better guide is what you would approve of if you knew much more, had experienced much more, were much more intelligent, made no cognitive errors in your reasoning, and had much more time to think. I lean in favor of the latter extreme, as I believe most people who have considered this question do, though I recognize that you want to specify your procedure in a way that leaves some core part of your values unchanged. Still, I think this is a choice that turns on many tricky cognitive steps, any of which could easily be taken in the wrong direction. So I would urge that insofar as you are making a very unusual decision at this step, you should try to very carefully understand the process that others are going through.

ETA: I'd also caution against just straight-out assuming a particular meta-ethical perspective. This is not a case where you are an expert in the sense of someone who elite common sense would defer to, and I don't think your specific version of anti-realism, or your philosophical perspective which says there is no real question here, are views which can command the assent of a broad coalition of trustworthy people.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-12T13:05:16.047Z · LW · GW

It can be murky to infer what people believe based on actions or commitments, because this mixes two quantities: Probabilities and values. For example, the reason most elites don't seem to take seriously efforts like shaping trajectories for strong AI is not because they think the probabilities of making a difference are astronomically small but because they don't bite Pascalian bullets. Their utility functions are not linear. If your utility function is linear, this is a reason that your actions (if not your beliefs) will diverge from those of most elites. In any event, many elites are not even systematic or consequentialist in translating utilities times probabilities into actions.

I don’t endorse biting Pascalian bullets, in part for reasons argued in this post, which I think give further support to some considerations identified by GiveWell. In Pascalian cases, we have claims that people in general aren’t good at thinking about and to which people generally assign low weight when they are acquainted with the arguments. I believe that Pascalian estimates of expected value that differ greatly from elite common sense and aren’t persuasive to elite common sense should be treated with great caution.

I also endorse Jonah’s point about some people caring about what you care about, but for different reasons. Just as we are weird, there can be other people who are weird in different ways that make them obsessed with the things we're obsessed with for totally different reasons. Just as some scientists are obsessed with random stuff like dung beetles, I think a lot of asteroids were tracked because there were some scientists who are really obsessed with asteroids in particular, and want to ensure that all asteroids are carefully tracked far beyond the regular value that normal people place on tracking all the asteroids. I think this can include some borderline Pascalian issues. For example, there are important government agencies that care about speculative threats to national security. Dick Cheney famously said, "If there's a 1% chance that Pakistani scientists are helping al-Qaeda build or develop a nuclear weapon, we have to treat it as a certainty in terms of our response." Similarly, there can be people that are obsessed with many issues far out of proportion with what most ordinary people care about. Looking at what "most people" care about is less robust a way to find gaps in a market than it can appear at first. (I know you don’t think it would be good to save the world, but I think the example still illustrates the point to some extent. An example more relevant to you would be that some scientists might just be really interested in insects and do a lot of the research that you’d think would be valuable, even though if you had just thought "no one cares about insects so this research will never happen" you’d be wrong.)

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-12T12:32:00.552Z · LW · GW

I think the focus on only intellectual elites has unclear grounding. Is the reason because elites think most seriously about the questions that you care about most? On a question of which kind of truck was most suitable for garbage collection, you would defer to a different class of people. In such a case, I guess you would regard them as the (question-dependent) "elites."

This is a question which it seems I wasn't sufficiently clear about. I count someone as an "expert on X" roughly when they are someone that a broad coalition of trustworthy people would defer to on questions about X. As I explained in another comment, if you don't know about what the experts on X think, I recommend trying to find out what the experts think (if it's easy/important enough) and going with what the broad coalition of trustworthy people thinks until then. So it may be that some non-elite garbage guys are experts on garbage collection, and a broad coalition of trustworthy people would defer to them on questions of garbage collection, once the broad coalition of trustworthy people knows about what these people think about garbage collection.

Why focus on people who are regarded as most trustworthy by many people? I think those people are likely to be more trustworthy than ordinary people, as I tried to suggest in my quick Quora experiment.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-12T11:38:08.604Z · LW · GW

Great. It sounds like we may reasonably be on the same page at this point.

To reiterate and clarify, you can pretty much make the standards as high as you like as long as: (1) you have a good enough grip on how the elite class thinks, (2) you are using clear indicators of trustworthiness that many people would accept, and (3) you make a good-faith effort not to cherry pick and watch out for the No True Scotsman fallacy. The only major limitation on this I can think of is that there is some trade-off to be made between certain levels of diversity and independent judgment. Like, if you could somehow pick the 10 best people in the world by some totally broad standards that everyone would accept (I think this is deeply impossible), that probably wouldn't be as good as picking the best 100-10,000 people by such standards. And I'd substitute some less trustworthy people for more trustworthy people in some cases where it would increase diversity of perspectives.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-12T11:23:19.185Z · LW · GW

Yes.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-12T11:13:50.171Z · LW · GW

I think we don't disagree about whether elite common sense should defer to cryptography experts (I assume this is what Bruce Schneier is a stand-in for). Simplifying a bit, we are disagreeing about the much more subtle question of whether, given that elite common sense should defer to cryptography experts, in a situation where the current views of cryptographers are unknown, elite common sense recommends adopting the current views of cryptographers. I say elite common sense recommends adopting their views if you know them, but going with what e.g. the upper crust of Ivy League graduates would say if they had access to your information if you don't know about the opinions of cryptographers. I also suspect elite common sense recommends finding out about the opinions of elite cryptographers if you can. But Wei Dai's example was one in which you didn't know and maybe couldn't find out, so that's why I said what I said. Frankly, I'm pretty flummoxed about why you think this is the "No True Scotsman" fallacy. I feel that one of us is probably misunderstanding the other on a basic level.

A possible confusion here is that I doubt the cryptographers have very different epistemic standards as opposed to substantive knowledge and experience about cryptography and tools for thinking about it.

I certainly don't get the impression that one can grind well-specified rules to get to the answer about polling the upper 10% of Ivy League graduates in this case.

I agree with this, and tried to make this clear in my discussion. I went with a rough guess that would work for a decent chunk of the audience rather than only saying something very abstract. It's subtle, but I think reasonable epistemic frameworks are subtle if you want them to have much generality.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-12T10:58:37.768Z · LW · GW

Perhaps I should have meant loop quantum gravity. I confess that I am speaking beyond my depth, and was just trying to give an example of a central dispute in current theoretical physics. That is the type of case where I would not like to lean heavily on my own perspective.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-11T22:16:19.441Z · LW · GW

Suppose I selected from among all physicists who accept MWI and asked them what they thought about FAI arguments. To me that's just an obvious sort of reweighting you might try, though anyone who's had experience with machine learning knows that most clever reweightings you try don't work. To someone else it might be cherry-picking of gullible physicists, and say, "You have violated Beckstead's rules!"

Just to be clear: I would count this as violating my rules because you haven't used a clear indicator of trustworthiness that many people would accept.

ETA: I'd add that people should generally pick their indicators in advance and stick with them, and not add them in to tune the system to their desired bottom lines.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-11T22:12:43.523Z · LW · GW

Could you maybe just tell me what you think my framework is supposed to imply about Wei Dai's case, if not what I said it implies? To be clear: I say it implies that the executives should have used an impartial combination of the epistemic standards used by the upper crust of Ivy League graduates, and that this gives little weight to the cryptographers because, though the cryptographers are included, they are a relatively small portion of all people included. So I think my framework straightforwardly doesn't say that people should be relying on info they can't use, which is how I understood Wei Dai's objection. (I think that if they were able to know what the cryptographers' opinions are, then elite common sense would recommend deferring to the cryptographers, but I'm just guessing about that.) What is it you think my framework implies--with no funny business and no instance of the fallacy you think I'm committing--and why do you find it objectionable?

ETA:

I'd be happy with advice along the lines of, "First take your best guess as to who the elites really are and how much they ought to be trusted in this case, then take their opinion as a prior with an appropriate degree of concentrated probability density, then update."

This is what I think I am doing and am intending to do.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-11T21:17:35.264Z · LW · GW

It seems the "No True Elite" fallacy would involve:

  1. Elite common sense seeming to say that I should believe X because on my definition of "elites," elites generally believe X.

  2. X being an embarrassing thing to believe.

  3. Me replying that someone who believed X wouldn't count as an "elite," but doing so in a way that couldn't be justified by my framework.

In this example I am actually saying we should defer to the cryptographers if we know their opinions, but that they don't get to count as part of elite common sense immediately because their opinions are too hard to access. And I'm actually saying that elite common sense supports a claim which it is embarrassing to believe.

So I don't understand how this is supposed to be an instance of the "No True Scotsman" fallacy.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-11T20:58:48.040Z · LW · GW

Fixed. Thank you.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-11T20:54:55.415Z · LW · GW

I have an overall sense that there are a lot of governments that are pretty good and that people are getting better at setting up governments over time. The question is very vague and hard to answer, so I am not going to attempt a detailed one. Perhaps you could give it a shot if you're interested.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-11T20:47:27.468Z · LW · GW

Re: (1), I can't do too much better than what I already wrote under "How should we assign weight to different groups of people?" I'd say you can go about as elite as you want if you are good at telling how the relevant people think and you aren't cherry-picking or using the "No True Scotsman" fallacy. I picked this number as something I felt a lot of people reading this blog would be in touch with and wouldn't be too narrow.

Re: (2), this is something I hope to discuss at greater length later on. I won't try to justify these claims now, but other things being equal, I think it favors

  • more skepticism about most philosophical arguments (I think a lot of these can't command the assent of a broad coalition of people), including arguments which depend on idiosyncratic moral perspectives (I think this given either realist or anti-realist meta-ethics)

  • more adjustment toward common sense in cost-effectiveness estimates

  • more skepticism about strategies for doing good that "seem weird" to most people

  • more respect for the causes that top foundations focus on today

  • more effort to be transparent about our thinking and stress-test unusual views we have

But this framework is a prior not a fixed point, so in case this doesn't settle issues, it just points in a specific direction. I'd prefer not to get into details defending these claims now, since I hope to get into it at a later date.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-11T20:36:27.262Z · LW · GW

Sorry, limited experience with LW posts and limited HTML experience. 5 minutes of Google didn't help. Can you link or explain? Sorry if I'm being dense.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-11T19:59:20.171Z · LW · GW

I feel this doesn't address the "low stakes" issues I brought up, or that this may not even be the physicists' area of competence. Maybe you'd get a different outcome if the fate of the world depended on this issue, as you believe it does with AI.

I also wonder if this analysis leads to wrong historical predictions. E.g., why doesn't this reasoning suggest that the US government would totally botch the constitution? That requires philosophical reasoning and reality doesn't immediately call you out on being wrong. And the people setting things up don't have incentives totally properly aligned. Setting up a decent system of government strikes me as more challenging than the MWI problem in many respects.

How much weight do you actually put on this line of argument? Would you change your mind about anything practical if you found out you were wrong about MWI?

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-11T19:20:39.378Z · LW · GW

I agree, but a priori I suspect that philosophers of physics and others without heavy subject matter knowledge of quantum mechanics have leaned too heavily on this. Spending one's life thinking about something can result in subconscious acquisition of implicit knowledge of things that are obliquely related. People who haven't had this experience may be at a disadvantage.

But note that philosophers of physics sometimes make whole careers thinking about this, and they are among the most high-caliber philosophers. They may be at an advantage in terms of this criterion.

I can't think of a reference in print for my claim about what almost all philosophers think. I think a lot of them would find it too obvious to say, and wouldn't bother to write a paper about it. But, for what it's worth, I attended a couple of conferences on philosophy of physics held at Rutgers, with many leading people in the field, and talked about this question and never heard anyone express an opposing opinion. And I was taught about interpretations of QM from some leading people in philosophy of physics.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-11T19:12:25.496Z · LW · GW

I haven't fully put together my thoughts on this, but it seems like a bad test to "break someone's trust in a sane world" for a number of reasons:

  • this is a case where all the views are pretty much empirically indistinguishable, so it isn't an area where physicists really care all that much

  • since the views are empirically indistinguishable, it is probably a low-stakes question, so the argument doesn't transfer well to breaking our trust in a sane world in high-stakes cases; it makes sense to assume people would apply more rationality in cases where more rationality pays off

  • as I said in another comment, MWI seems like a case where physics expertise is not really what matters, so this doesn't really show that the scientific method as applied by physicists is broken; it seems that, at most, it shows that physicists aren't good at questions that are essentially philosophical; it would be much more persuasive if you showed that e.g., quantum gravity was obviously better than string theory and only 18% of physicists working in the relevant area thought so

[Edited to add a missing "not"]

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-11T19:02:09.664Z · LW · GW

I'm not sure I understand the objection/question, but I'll respond to the objection/question I think it is.

Am I changing the procedure to avoid a counterexample from Wei Dai?

I think the answer is No. If you look at the section titled "An outline of the framework and some guidelines for applying it effectively" you'll see that I say you should try to use a prior that corresponds to an impartial combination of what the people who are most trustworthy in general think. I say a practical approximation of being an "expert" is being someone elite common sense would defer to. If the experts won't tell elite common sense what they think, then what the experts think isn't yet part of elite common sense. I think this is a case where elite common sense just gets it wrong, not that they clearly could have done anything about it. But I do think it's a case where you can apply elite common sense, even if it gives you the wrong answer ex post. (Maybe it doesn't give you the wrong answer though; maybe some better investigation would have been possible and they didn't do it. This is hard to say from our perspective.)

Why go with what generally trustworthy people think as your definition of elite common sense? It's precisely because I think it is easier to get in touch with what generally trustworthy people think, rather than what all subject matter experts in the world think. As I say in the essay:

How should we assign weight to different groups of people? Other things being equal, a larger number of people is better, more trustworthy people are better, people who are trustworthy by clearer indicators that more people would accept are better, and a set of criteria which allows you to have some grip on what the people in question think is better, but you have to make trade-offs. ... If I went with, say, the 10 most-cited people in 10 of the most intellectually credible academic disciplines, 100 of the most generally respected people in business, and the 100 heads of different states, I would have a pretty large number of people and a broad set of people who were very trustworthy by clear standards that many people would accept, but I would have a hard time knowing what they would think about various issues because I haven’t interacted with them enough. How these factors can be traded-off against each other in a way that is practically most helpful probably varies substantially from person to person.

In principle, if you could get a sense for what all subject matter experts thought about every issue, that would be a great place to start for your prior. But I think that's not possible in practice. So I recommend using a more general group that you can use as your starting point.

Does this answer your question?

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-11T12:09:29.627Z · LW · GW

This sounds right to me.

Comment by Nick_Beckstead on Common sense as a prior · 2013-08-11T07:46:45.334Z · LW · GW

The intrinsic interest of the question of interpretation of quantum mechanics

The question of what quantum mechanics means has been considered one of the universe’s great mysteries. As such, people interested in physics have been highly motivated to understand it. So I think that the question is privileged relative to other questions that physicists would have opinions on — it’s not an arbitrary question outside of the domain of their research accomplishments.

My understanding is that the interpretation of QM is (1) not regarded as a very central question in physics, being seen more as a "philosophy" question and being worked on to a reasonable extent by philosophers of physics and physicists who see it as a hobby horse, (2) is not something that physics expertise--having good physical intuition, strong math skills, detailed knowledge of how to apply QM on concrete problems--is as relevant for as many other questions physicists work on, and (3) is not something about which there is an extremely enormous amount to say. These are some of the main reasons I feel I can update at all from the expert distribution of physicists on this question. I would hardly update at all from physicist opinions on, say, quantum gravity vs. string theory, and I think it would basically be crazy for me to update substantially in one direction or the other if I had comparable experience on that question.

[ETA: As evidence of (1), I might point to the prevalence of the "shut up and calculate" mentality which seems to have been reasonably popular in physics for a while. I'd also point to the fact that Copenhagen is popular but really, really, really, really not good. And I feel that this last claim is not just Nick Beckstead's idiosyncratic opinion, but the opinion of every philosopher of physics I have ever spoken with about this issue.]