lukeprog feed - LessWrong 2.0 Reader lukeprog’s posts and comments on the Effective Altruism Forum en-us Comment by lukeprog on Which scientific discovery was most ahead of its time? <p>Cases where scientific knowledge was in fact lost and then rediscovered provide especially strong evidence about the discovery counterfactuals, e.g. <a href="">Hero&#x27;s aeolipile</a> and <a href="">al-Kindi&#x27;s development</a> of relative frequency analysis for decoding messages. Probably we underestimate how common such cases are, because the knowledge of the lost discovery is itself lost — e.g. we might easily have simply not rediscovered the <a href="">Antikythera mechanism</a>.</p> lukeprog toKY2gsQXmcA9KY97 2019-05-16T14:42:59.024Z Comment by lukeprog on Preliminary thoughts on moral weight <p>Apparently Shelly Kagan has <a href="">a book</a> coming out soon that is (sort of?) about moral weight.</p> lukeprog kjQktDqSYQvhWhqpY 2018-10-24T19:38:04.818Z Comment by lukeprog on A Proper Scoring Rule for Confidence Intervals <p>This scoring rule has some downsides from a usability standpoint. See <a href="">Greenberg 2018</a>, a whitepaper prepared as background material for a (forthcoming) calibration training app.</p> lukeprog Rkw6wFub4nd2FwhxK 2018-08-29T17:48:22.643Z Comment by lukeprog on Preliminary thoughts on moral weight <p>Some other people at Open Phil have spent more time thinking about two-envelope effects than I have, and fwiw some of their thinking on the issue is in <a href="">this post</a> (e.g. see section</p> lukeprog A8BmKSmsspufwNTKr 2018-08-16T02:33:24.309Z Comment by lukeprog on Preliminary thoughts on moral weight <p>My own take on this is described briefly <a href="">here</a>, with more detail in various appendices, e.g. 
<a href="">here</a>.</p> lukeprog ruq6FZc7L99jzMJQq 2018-08-14T19:03:58.337Z Comment by lukeprog on Preliminary thoughts on moral weight <p>Yes, I meant to be describing ranges conditional on each species being moral patients at all. I previously gave my own (very made-up) probabilities for that <a href="">here</a>. Another worry to consider, though, is that many biological/cognitive and behavioral features of a species are simultaneously (1) evidence about their likelihood of moral patienthood (via consciousness), and (2) evidence about features that might affect their moral weight *given* consciousness/patienthood. So, depending on how you use that evidence, it&#x27;s important to watch out for double-counting.</p><p></p><p>I&#x27;ll skip responding to #2 for now.</p> lukeprog kN7PN57jaQBTybGKc 2018-08-14T19:02:27.195Z Comment by lukeprog on Preliminary thoughts on moral weight <p>For anyone who is curious, I cite much of the literature arguing over criteria for moral patienthood/weight in the footnotes of <a href="">this section</a> of my original moral patienthood report. My brief comments on why I&#x27;ve focused on consciousness thus far are <a href="">here</a>.</p> lukeprog eMZCfNhswi5d6JkjY 2018-08-14T18:57:48.389Z Preliminary thoughts on moral weight <p><em>This post adapts some internal notes I wrote for the Open Philanthropy Project, but they are merely at a &quot;brainstorming&quot; stage, and do not express my &quot;endorsed&quot; views nor the views of the Open Philanthropy Project. This post is also written quickly and not polished or well-explained.</em></p><p>My <a href="">2017 Report on Consciousness and Moral Patienthood</a> tried to address the question of &quot;Which creatures are moral patients?&quot; but it did little to address the question of &quot;moral weight,&quot; i.e. 
how to weigh the interests of different kinds of moral patients against each other:</p><blockquote>For example: suppose we conclude that fishes, pigs, and humans are all moral patients, and we estimate that, for a fixed amount of money, we can (in expectation) dramatically improve the welfare of (a) 10,000 rainbow trout, (b) 1,000 pigs, or (c) 100 adult humans. In that situation, how should we compare the different options? This depends (among other things) on how much “moral weight” we give to the well-being of different kinds of moral patients.</blockquote><p>Thus far, philosophers have said very little about moral weight (see below). In this post I lay out one approach to thinking about the question, in the hope that others might build on it or show it to be misguided.</p><h1>Proposed setup</h1><p>For the simplicity of a first-pass analysis of moral weight, let&#x27;s assume a variation on classical utilitarianism according to which the only thing that morally matters is the moment-by-moment character of a being&#x27;s conscious experience. So e.g. it doesn&#x27;t matter whether a being&#x27;s rights are respected/violated or its preferences are realized/thwarted, except insofar as those factors affect the moment-by-moment character of the being&#x27;s conscious experience, by causing pain/pleasure, happiness/sadness, etc. </p><p>Next, and again for simplicity&#x27;s sake, let&#x27;s talk only about the &quot;typical&quot; conscious experience of &quot;typical&quot; members of different species when undergoing various &quot;canonical&quot; positive and negative experiences, e.g. 
consuming species-appropriate food or having a nociceptor-dense section of skin damaged.</p><p>Given those assumptions, when we talk about the relative &quot;moral weight&quot; of different species, we mean to ask something like &quot;How morally important is 10 seconds of a typical human&#x27;s experience of [some injury], compared to 10 seconds of a typical rainbow trout&#x27;s experience of [that same injury]?&quot;</p><p>For this exercise, I&#x27;ll separate &quot;moral weight&quot; from &quot;probability of moral patienthood.&quot; Naively, you could then multiply your best estimate of a species&#x27; moral weight (using humans as the baseline of 1) by P(moral patienthood) to get the species&#x27; &quot;expected moral weight&quot; (or whatever you want to call it). Then, to estimate an intervention&#x27;s potential benefit for a given species, you could multiply [expected moral weight of species] × [individuals of species affected] × [average # of minutes of conscious experience affected across those individuals] × [average magnitude of positive impact on those minutes of conscious experience].</p><p>However, I say &quot;naively&quot; because <em>this doesn&#x27;t actually work</em>, due to <a href="">two-envelope effects</a>.</p><h1>Potential dimensions of moral weight</h1><p>What features of a creature&#x27;s conscious experience might be relevant to the moral weight of its experiences? Below, I describe some possibilities that I previously mentioned in <a href="">Appendix Z7</a> of my moral patienthood report.</p><p>Note that any of the features below could be (and in some cases, very likely are) hugely multidimensional. For simplicity, I&#x27;m going to assume a <em>unidimensional</em> characterization of them, e.g. what we&#x27;d get if we looked only at the principal component in a principal component analysis of a hugely multidimensional phenomenon.</p><h2>Clock speed of consciousness</h2><p>Perhaps animals vary in their &quot;clock speed.&quot; E.g. 
a hummingbird reacts to some things much faster than I ever could. If <em>any</em> of that is under conscious control, its &quot;clock speed&quot; of conscious experience seems like it should be faster than mine, meaning that, intuitively, it should have a greater number of subjective &quot;moments of consciousness&quot; per objective minute than I do.</p><p>In general, smaller animals probably have faster clock speeds than larger ones, for mechanical reasons:</p><blockquote>The natural oscillation periods of most consciously controllable human body parts are greater than a tenth of a second. Because of this, the human brain has been designed with a matching reaction time of roughly a tenth of a second. As it costs more to have faster reaction times, there is little point in paying to react much faster than body parts can change position.</blockquote><blockquote>…the first resonant period of a bending cantilever, that is, a stick fixed at one end, is proportional to its length, at least if the stick’s thickness scales with its length. For example, sticks twice as long take twice as much time to complete each oscillation. Body size and reaction time are predictably related for animals today… (<a href="">Hanson 2016</a>, ch. 6)</blockquote><p>My impression is that it&#x27;s a common intuition to value experience by its &quot;subjective&quot; duration rather than its &quot;objective&quot; duration, with no discount. So if a hummingbird&#x27;s clock speed is 3x as fast as mine, then all else equal, an objective minute of its conscious pleasure would be worth 3x an objective minute of my conscious pleasure.</p><h2>Unities of consciousness</h2><p>Philosophers and cognitive scientists debate how &quot;unified&quot; consciousness is, in various ways. 
Our normal conscious experience <em>seems</em> to many people to be pretty &quot;unified&quot; in various ways, though sometimes it feels less unified, for example when one goes &quot;in and out of consciousness&quot; during a restless night&#x27;s sleep, or when one engages in certain kinds of meditative practices.</p><p>Daniel Dennett suggests that animal conscious experience is radically less unified than human consciousness is, and <a href="">cites this as a major reason he doesn&#x27;t give most animals much moral weight</a>.</p><p>For convenience, I&#x27;ll use <a href="">Bayne (2010)</a>&#x27;s taxonomy of types of unity. He talks about subject unity, representational unity, and phenomenal unity — each of which has a &quot;synchronic&quot; (momentary) and &quot;diachronic&quot; (across time) aspect of unity.</p><h3>Subject unity</h3><p>Bayne explains:</p><blockquote>My conscious states possess a certain kind of unity insofar as they are all mine; likewise, your conscious states possess that same kind of unity insofar as they are all yours. We can describe conscious states that are had by or belong to the same subject of experience as subject unified. Within subject unity we need to distinguish the unity provided by the subject of experience across time (diachronic unity) from that provided by the subject at a time (synchronic unity).</blockquote><h3>Representational unity</h3><p>Bayne explains:</p><blockquote>Let us say that conscious states are representationally unified to the degree that their contents are integrated with each other. Representational unity comes in a variety of forms. A particularly important form of representational unity concerns the integration of the contents of consciousness around perceptual objects—what we might call ‘object unity’. Perceptual features are not normally represented by isolated states of consciousness but are bound together in the form of integrated perceptual objects. This process is known as feature-binding. 
Feature-binding occurs not only within modalities but also between them, for we enjoy multimodal representations of perceptual objects.</blockquote><p>I suspect many people wouldn&#x27;t treat representational unity as all that relevant to moral weight. E.g. there are humans with low representational unity of a sort (e.g. <a href="">visual agnosics</a>); are their sensory experiences less morally relevant as a result?</p><h3>Phenomenal unity</h3><p>Bayne explains:</p><blockquote>Subject unity and representational unity capture important aspects of the unity of consciousness, but they don’t get to the heart of the matter. Consider again what it’s like to hear a rumba playing on the stereo whilst seeing a bartender mix a mojito. These two experiences might be subject unified insofar as they are both yours. They might also be representationally unified, for one might hear the rumba as coming from behind the bartender. But over and above these unities is a deeper and more primitive unity: the fact that these two experiences possess a conjoint experiential character. There is something it is like to hear the rumba, there is something it is like to see the bartender work, and there is something it is like to hear the rumba while seeing the bartender work. Any description of one’s overall state of consciousness that omitted the fact that these experiences are had together as components, parts, or elements of a single conscious state would be incomplete. Let us call this kind of unity — sometimes dubbed ‘co-consciousness’ — phenomenal unity.</blockquote><blockquote>Phenomenal unity is often in the background in discussions of the ‘stream’ or ‘field’ of consciousness. The stream metaphor is perhaps most naturally associated with the flow of consciousness — its unity through time — whereas the field metaphor more accurately captures the structure of consciousness at a time. 
We can say that what it is for a pair of experiences to occur within a single phenomenal field just is for them to enjoy a conjoint phenomenality — for there to be something it is like for the subject in question not only to have both experiences but to have them together. By contrast, simultaneous experiences that occur within distinct phenomenal fields do not share a conjoint phenomenal character.</blockquote><h2>Unity-independent intensity of valenced aspects of consciousness</h2><p>A common report of those who take psychedelics is that, while &quot;tripping,&quot; their conscious experiences are &quot;more intense&quot; than they normally are. Similarly, different pains feel similar but have different intensities, e.g. when my stomach is upset, the intensity of my stomach pain waxes and wanes a fair bit, until it gradually fades to not being noticeable anymore. Same goes for conscious pleasures.</p><p>It&#x27;s possible such variations in intensity are entirely accounted for by their degrees of different kinds of unity, or by some other plausible feature(s) of moral weight, but maybe not. If there <em>is</em> some additional &quot;intensity&quot; variable for valenced aspects of conscious experience, it would seem a good candidate for affecting moral weight.</p><p>From my own experience, my guess is that I would endure ~10 seconds of the most intense pain I&#x27;ve ever experienced to avoid experiencing ~2 months of the lowest level of discomfort that I&#x27;d bother to call &quot;discomfort.&quot; That very low level of discomfort might suggest a lower bound on &quot;intensity of valenced aspects of experience&quot; that I intuitively morally care about, but &quot;the most intense pain I&#x27;ve ever experienced&quot; probably is not the <em>highest</em> intensity of valenced aspects of experience it is possible to experience — probably not even close. 
You could consider similar trades to get a sense for how much you intuitively value &quot;intensity of experience,&quot; at least in your own case.</p><h1>Moral weights of various species</h1><p>If we thought about all this more carefully and collected as much relevant empirical data as possible, what moral weights might we assign to different species?</p><p>Whereas my <a href="">probabilities of moral patienthood</a> for any animal as complex as a crab only range from 0.2 - 1, the plausible ranges of moral weight seem like they could be much larger. I don&#x27;t feel like I&#x27;d be surprised if an omniscient being told me that my <a href="">extrapolated values</a> would assign pigs <em>more</em> moral weight than humans, and I don&#x27;t feel like I&#x27;d be surprised if an omniscient being told me my extrapolated values would assign pigs .0001 moral weight (assuming they were moral patients at all). </p><p>To illustrate how this might work, below are some guesses at some &quot;plausible ranges of moral weight&quot; for a variety of species that someone might come to, if they had intuitions like those explained below.</p><ul><li>Humans: 1 (baseline)</li><li>Chimpanzees: 0.001 - 3</li><li>Pigs: 0.0005 - 3.5</li><li>Cows: 0.0001 - 5</li><li>Chickens: 0.0005 - 7</li><li>Rainbow trout: 0.00005 - 10</li><li>Fruit fly: 0.000001 - 20</li></ul><p>(But whenever you&#x27;re tempted to multiply such numbers by something, remember <a href="">two-envelope effects</a>!)</p><p>What intuitions might lead to something like these ranges?</p><ul><li>An intuition to not place much value on &quot;complex/higher-order&quot; dimensions of moral weight — such as &quot;fullness of self-awareness&quot; or &quot;capacity for reflecting on one&#x27;s holistic life satisfaction&quot; — above and beyond the subjective duration and &quot;intensity&quot; of relatively &quot;brute&quot; pleasure/pain/happiness/sadness that (in humans) tends to accompany reflection, self-awareness, 
etc.</li><li>An intuition to care more about subject unity and phenomenal unity than about such higher-order dimensions of moral weight.</li><li>An intuition to care most of all about clock speed and experience intensity (if intensity is distinct from unity).</li><li>Intuitions that if the animal species listed above are conscious, they:</li><ul><li>have very little of the higher-order dimensions of conscious experience, </li><li>have faster clock speeds than humans (the smaller the faster), </li><li>probably have lower &quot;intensity&quot; of experience, but <em>might</em> actually have somewhat <em>greater</em> intensity of experience (e.g. because they aren&#x27;t distracted by linguistic thought), </li><li>have moderately less subject unity and phenomenal unity, especially of the diachronic sort.</li></ul></ul><p>Under these intuitions, the low end of the ranges above could be explained by the possibility that intensity of conscious experience diminishes dramatically with brain complexity and flexibility, while the high end of the ranges above could be explained by the possibility of faster clock speeds for smaller animals, the possibility of lesser unity in non-human animals (which one might value at &gt;1x for the same reason one might value a dually-conscious split-brain patient at ~2x), and the possibility of <em>greater</em> intensity of experience in simpler animals.</p><h1>Other writings on moral weight</h1><ul><li>Brian Tomasik: <u><a href="">Is animal suffering less bad than human suffering?</a></u>; <u><a href="">Which computations do I care about?</a></u>; <u><a href="">Is brain size morally relevant?</a></u>; <u><a href="">Do Smaller Animals Have Faster Subjective Experiences?</a></u>; <u><a href="">Two-Envelopes Problem for Uncertainty about Brain-Size Valuation and Other Moral Questions</a></u></li><li>Nick Bostrom: <a href="">Quantity of Experience</a></li><li>Kevin Wong: <u><a href="">Counting Animals</a></u></li><li>Oscar 
Horta: <u><a href="">Questions of Priority and Interspecies Comparisons of Happiness</a></u></li><li>Adler et al., <u><a href="">Would you choose to be happy? Tradeoffs between happiness and the other dimensions of life in a large population survey</a></u></li></ul> lukeprog 2jTQTxYNwo6zb3Kyp 2018-08-13T23:45:13.430Z Comment by lukeprog on Announcement: AI alignment prize winners and next round <p>Cool, this looks better than I&#x27;d been expecting. Thanks for doing this! Looking forward to next round.</p> lukeprog RTKKDNDgck37qnGYR 2018-01-15T21:42:23.386Z Quick thoughts on empathic metaethics <p><em>Years ago, I wrote an unfinished sequence of posts called &quot;<a href="">No-Nonsense Metaethics</a>.&quot; My last post, <a href="">Pluralistic Moral Reductionism</a>, said I would next explore &quot;empathic metaethics,&quot; but I never got around to writing those posts. Recently, I wrote a high-level summary of some initial thoughts on &quot;empathic metaethics&quot; in <a href="">section 6.1.2</a> of a report prepared for my employer, the <a href="">Open Philanthropy Project</a>. With my employer&#x27;s permission, I&#x27;ve adapted that section for publication here, so that it can serve as the long-overdue concluding post in my sequence on metaethics.</em></p><p>In my <a href="">previous post</a>, I distinguished &quot;austere metaethics&quot; and &quot;empathic metaethics,&quot; where austere metaethics confronts moral questions roughly like this:</p><blockquote>Tell me what you mean by &#x27;right&#x27;, and I will tell you what is the right thing to do. If by &#x27;right&#x27; you mean X, then Y is the right thing to do. If by &#x27;right&#x27; you mean P, then Z is the right thing to do. 
But if you can&#x27;t tell me what you mean by &#x27;right&#x27;, then you have failed to ask a coherent question, and no one can answer an incoherent question.</blockquote><p>Meanwhile, empathic metaethics says instead:</p><blockquote>You may not know what you mean by &#x27;right.&#x27; But let&#x27;s not stop there. Here, let me come alongside you and help decode the cognitive algorithms that generated your question in the first place, and then we&#x27;ll be able to answer your question. Then we can tell you what the right thing to do is.</blockquote><p>Below, I provide a high-level summary of some of my initial thoughts on what one approach to &quot;empathic metaethics&quot; could look like.</p><p>Given my metaethical approach, when I make a “moral judgment” about something (e.g. about <a href="">which kinds of beings are moral patients</a>), I don’t conceive of myself as perceiving an objective moral truth, or coming to know an objective moral truth via a series of arguments. Nor do I conceive of myself as merely expressing my moral feelings as they stand today. 
Rather, I conceive of myself as making a conditional forecast about what my values would be if I underwent a certain “idealization” or “extrapolation” procedure (coming to know more true facts, having more time to consider moral arguments, etc.).[1]</p><p>Thus, in a (hypothetical) &quot;extreme effort&quot; attempt to engage in empathic metaethics (for thinking about <em>my own</em> moral judgments), I would do something like the following:</p><ol><li>I would try to make the scenario I&#x27;m aiming to forecast as concrete as possible, so that my brain is able to treat it as a genuine forecasting challenge, akin to participating in a prediction market or forecasting tournament, rather than as a fantasy about which my brain feels &quot;allowed&quot; to make up whatever story feels nice, or signals my values to others, or achieves something else that isn&#x27;t <em>forecasting accuracy</em>.[2] In my case, I concretize the extrapolation procedure as one involving a large population of copies of me who learn many true facts, consider many moral arguments, and undergo various other experiences, and then collectively advise me about what I should value and why.[3]</li><li>However, I would also try to make forecasts I can actually check for accuracy, e.g. about what my moral judgment about various cases will be 2 months in the future.</li><li>When making these forecasts, I would try to draw on the best research I&#x27;ve seen concerning how to make accurate estimates and forecasts. For example I would try to &quot;think like a fox, not like a hedgehog,&quot; and I&#x27;ve already done several hours of probability calibration training, and some amount of forecasting training.[4]</li><li>Clearly, my current moral intuitions would serve as one important source of evidence about what my extrapolated values might be. However, recent findings in moral psychology and related fields lead me to assign more evidential weight to some moral intuitions than to others. 
More generally, I interpret my current moral intuitions as data generated partly by my moral principles, and partly by various &quot;error processes&quot; (e.g. a hard-wired disgust reaction to spiders, which I don&#x27;t endorse upon reflection). Doing so allows me to make use of some standard lessons from statistical curve-fitting when thinking about how much evidential weight to assign to particular moral intuitions.[5]</li><li>As part of forecasting what my extrapolated values might be, I would try to consider different processes and contexts that could generate alternate moral intuitions in moral reasoners both similar and dissimilar to my current self, and I would try to consider how I feel about the &quot;legitimacy&quot; of those mechanisms as producers of moral intuitions. For example I might ask myself questions such as &quot;How might I feel about that practice if I had been born into a world in which it was already commonplace?&quot; and &quot;How might I feel about that case if my built-in (and largely unconscious) processes for associative learning and imitative learning had been exposed to different life histories than my own?&quot; and &quot;How might I feel about that case if I had been born in a different century, or a different country, or with a greater propensity for clinical depression?&quot; and &quot;How might a moral reasoner on another planet feel about that case if it belonged to a more strongly <a href="">r-selected species</a> (compared to humans) but had roughly human-like general reasoning ability?&quot;[6]</li><li>Observable patterns in how people&#x27;s values change (seemingly) in response to components of my proposed extrapolation procedure (learning more facts, considering moral arguments, etc.) would serve as another source of evidence about what my extrapolated values might be. 
For example, the correlation between aggregate human knowledge and our &quot;expanding circle of moral concern&quot; (<a href="">Singer 2011</a>) might (very weakly) suggest that, if I continued to learn more true facts, my circle of moral concern would continue to expand. Unfortunately, such correlations are badly confounded, and might not provide much evidence.[7]</li><li>Personal facts about how my own values have evolved as I&#x27;ve learned more, considered moral arguments, and so on, would serve as yet another source of evidence about what my extrapolated values might be. Of course, these relations are likely confounded as well, and need to be interpreted with care.[8]</li></ol><hr class="dividerBlock"/><p><strong>1.</strong> This general approach sometimes goes by names such as &quot;ideal advisor theory&quot; or, arguably, &quot;reflective equilibrium.&quot; Diverse sources explicating various extrapolation procedures (or fragments of extrapolation procedures) include: <a href="">Rosati (1995)</a>; <a href="">Daniels (2016)</a>; <a href="">Campbell (2013)</a>; chapter 9 of <a href="">Miller (2013)</a>; <a href="">Muehlhauser &amp; Williamson (2013)</a>; <a href="">Trout (2014)</a>; Yudkowsky&#x27;s &quot;<a href="">Extrapolated volition (normative moral theory)</a>&quot; (2016); <a href="">Baker (2016)</a>; <a href="">Stanovich (2004)</a>, pp. 224-275; <a href="">Stanovich (2013)</a>.</p><p><strong>2.</strong> For more on forecasting accuracy, see <a href="">this blog post</a>. My use of research on the psychological predictors of forecasting accuracy for the purposes of doing moral philosophy is one example of my support for the use of &quot;ameliorative psychology&quot; in philosophical practice — see e.g. 
Bishop &amp; Trout (<a href="">2004</a>, <a href="">2008</a>).</p><p><strong>3.</strong> Specifically, the scenario I try to imagine (and make conditional forecasts about) looks something like this:</p><ol><li>In the distant future, I am non-destructively &quot;uploaded.&quot; In other words, my brain and some supporting cells are scanned (non-destructively) at a fine enough spatial and chemical resolution that, when this scan is combined with accurate models of how different cell types carry out their information-processing functions, one can create an executable computer model of my brain that matches my biological brain&#x27;s input-output behavior almost exactly. This whole brain emulation (&quot;em&quot;) is then connected to a virtual world: computed inputs are fed to the em&#x27;s (now virtual) signal transduction neurons for sight, sound, etc., and computed outputs from the em&#x27;s virtual arm movements, speech, etc. are received by the virtual world, which computes appropriate changes to the virtual world in response. (I don&#x27;t think anything remotely like this will ever happen, but as far as I know it is a <em>physically possible</em> world that can be described in some detail; for one attempt, see <a href="">Hanson 2016</a>.) Given functionalism, this &quot;em&quot; has the same memories, personality, and conscious experience that I have, though it experiences quite a shock when it awakens to a virtual world that might look and feel somewhat different from the &quot;real&quot; world.</li><li>This initial em is copied thousands of times. 
Some of the copies interact inside the same virtual world, while other copies are placed inside isolated virtual worlds.</li><li>Then, these ems spend a very long time (a) collecting and generating arguments and evidence about morality and related topics, (b) undergoing various experiences, in varying orders, and reflecting on those experiences, (c) dialoguing with ems sourced from other biological humans who have different values than I do, and perhaps with sophisticated chat-bots meant to simulate the plausible reasoning of other types of people (from the past, or from other worlds) who were not available to be uploaded, and so on. They are able to do these things for a very long time because they and their virtual worlds are run at speeds thousands of times faster than my biological brain runs, allowing subjective eons to pass in mere months of &quot;objective&quot; time.</li><li>Finally, at some time, the ems dialogue with each other about which values seem &quot;best,&quot; they engage in moral trade (<a href="">Ord 2015</a>), and they try to explain to me what values they think I should have and why. In the end, I am not forced to accept any of the values they then hold (collectively or individually), but I am able to come to much better-informed moral judgments than I could have without their input.</li></ol><p>For more context on this sort of values extrapolation procedure, see <a href="">Muehlhauser &amp; Williamson (2013)</a>.</p><p><strong>4.</strong> For more on forecasting &quot;best practices,&quot; see <a href="">this blog post</a>.</p><p><strong>5.</strong> Following <a href="">Hanson (2002)</a> and ch. 2 of <a href="">Beckstead (2013)</a>, I consider my moral intuitions in the context of Bayesian curve-fitting. To explain, I&#x27;ll quote <a href="">Beckstead (2013)</a> at some length:</p><blockquote>Curve fitting is a problem frequently discussed in the philosophy of science. 
In the standard presentation, a scientist is given some data points, usually with an independent variable and a dependent variable, and is asked to predict the values of the dependent variable given other values of the independent variable. Typically, the data points are <em>observations</em>, such as &quot;measured height&quot; on a scale or &quot;reported income&quot; on a survey, rather than true values, such as height or income. Thus, in making predictions about additional data points, the scientist has to account for the possibility of error in the observations. By an error process I mean anything that makes the observed values of the data points differ from their true values. Error processes could arise from a faulty scale, failures of memory on the part of survey participants, bias on the part of the experimenter, or any number of other sources. While some treatments of this problem focus on predicting observations (such as measured height), I&#x27;m going to focus on predicting the true values (such as true height).</blockquote><blockquote>…For any consistent data set, it is possible to construct a curve that fits the data exactly… If the scientist chooses one of these polynomial curves for predictive purposes, the result will usually be <em>overfitting</em>, and the scientist will make worse predictions than he would have if he had chosen a curve that did not fit the data as well, but had other virtues, such as a straight line. On the other hand, always going with the simplest curve and giving no weight to the data leads to <em>underfitting</em>…</blockquote><blockquote>I intend to carry over our thinking about curve fitting in science to reflective equilibrium in moral philosophy, so I should note immediately that curve fitting is not limited to the case of two variables. When we must understand relationships between multiple variables, we can turn to multiple-dimensional spaces and fit planes (or hyperplanes) to our data points. 
Different axes might correspond to different considerations which seem relevant (such as total well-being, equality, number of people, fairness, etc.), and another axis could correspond to the value of the alternative, which we can assume is a function of the relevant considerations. Direct Bayesian updating on such data points would be impractical, but the philosophical issues will not be affected by these difficulties.</blockquote><blockquote>…On a Bayesian approach to this problem, the scientist would consider a number of different hypotheses about the relationship between the two variables, including both hypotheses about the phenomena (the relationship between X and Y) and hypotheses about the error process (the relationship between observed values of Y and true values of Y) that produces the observations…</blockquote><blockquote>…Lessons from the Bayesian approach to curve fitting apply to moral philosophy. Our moral intuitions are the data, and there are error processes that make our moral intuitions deviate from the truth. The complete moral theories under consideration are the hypotheses about the phenomena. (Here, I use &quot;theory&quot; broadly to include any complete set of possibilities about the moral truth. My use of the word &quot;theory&quot; does not assume that the truth about morality is simple, systematic, and neat rather than complex, circumstantial, and messy.) If we expect the error processes to be widespread and significant, we must rely on our priors more. If we expect the error processes to be, in addition, biased and correlated, then we will have to rely significantly on our priors even when we have a lot of intuitive data.</blockquote><p>Beckstead then summarizes the framework with a table (p. 
32), edited to fit into LessWrong&#x27;s formatting:</p><ul><li>Hypotheses about phenomena</li><ul><li><em>(Science)</em> Different trajectories of a ball that has been dropped</li><li><em>(Moral Philosophy)</em> Moral theories (specific versions of utilitarianism, Kantianism, contractualism, pluralistic deontology, etc.)</li></ul><li>Hypotheses about error processes</li><ul><li><em>(Science)</em> Our position measurements are accurate on average, and are within 1 inch 95% of the time (with normally distributed error)</li><li><em>(Moral Philosophy)</em> Different hypotheses about the causes of error in historical cases; cognitive and moral biases; different hypotheses about the biases that cause inconsistent judgments in important philosophical cases</li></ul><li>Observations</li><ul><li><em>(Science)</em> Recorded position of a ball at different times recorded with a certain clock</li><li><em>(Moral Philosophy)</em> Intuitions about particular cases or general principles, and any other relevant observations</li></ul><li>Background theory</li><ul><li><em>(Science)</em> The ball never bounces higher than the height it started at. The ball always moves along a continuous trajectory.</li><li><em>(Moral Philosophy)</em> Meta-ethical or normative background theory (or theories)</li></ul></ul><p><strong>6.</strong> For more on this, see <a href="">my conversation with Carl Shulman</a>, <a href="">O&#x27;Neill (2015)</a>, the literature on the evolution of moral values (e.g. <a href="">de Waal et al. 2014</a>; <a href="">Sinnott-Armstrong &amp; Miller 2007</a>; <a href="">Joyce 2005</a>), the literature on moral psychology more generally (e.g. <a href="">Graham et al. 2013</a>; <a href="">Doris 2010</a>; <a href="">Liao 2016</a>; <a href="">Christen et al. 2014</a>; <a href="">Sunstein 2005</a>), the literature on how moral values vary between cultures and eras (e.g. 
see <a href="">Flanagan 2016</a>; <a href="">Inglehart &amp; Welzel 2010</a>; <a href="">Pinker 2011</a>; <a href="">Morris 2015</a>; <a href="">Friedman 2005</a>; <a href="">Prinz 2007</a>, pp. 187-195), and the literature on moral thought experiments (e.g. <a href="">Tittle 2004</a>, ch. 7). See also <a href="">Wilson (2016)</a>&#x27;s comments on internal and external validity in ethical thought experiments, and <a href="">Bakker (2017)</a> on &quot;alien philosophy.&quot;</p><p>I do not read much fiction, but I suspect that some types of fiction — e.g. historical fiction, fantasy, and science fiction — can help readers to temporarily transport themselves into fully-realized alternate realities, in which readers can test how their moral intuitions differ when they are temporarily &quot;lost&quot; in an alternate world.</p><p><strong>7.</strong> There are many sources which discuss how people&#x27;s values seem to change along with (and perhaps in response to) components of my proposed extrapolation procedure, such as learning more facts, reasoning through more moral arguments, and dialoguing with others who have different values. See e.g. <a href="">Inglehart &amp; Welzel (2010)</a>, <a href="">Pinker (2011)</a>, <a href="">Shermer (2015)</a>, and <a href="">Buchanan &amp; Powell (2016)</a>. See also the literatures on &quot;enlightened preferences&quot; (<a href="">Althaus 2003</a>, chs. 4-6) and on &quot;<a href="">deliberative polling</a>.&quot;</p><p><strong>8.</strong> For example, as I&#x27;ve learned more, considered more moral arguments, and dialogued more with people who don&#x27;t share my values, my moral values have become more &quot;secular-rational&quot; and &quot;self-expressive&quot; (<a href="">Inglehart &amp; Welzel 2010</a>), more geographically global, more extensive (e.g. 
throughout more of the animal kingdom), less <a href="">person-affecting</a>, and subject to greater moral uncertainty (<a href="">Bykvist 2017</a>).</p> lukeprog FZ5aTJFXZMpPL7ycK 2017-12-12T21:46:08.834Z Comment by lukeprog on Oxford Prioritisation Project Review <div class="ory-row"><div class="ory-cell ory-cell-sm-12 ory-cell-xs-12"><div class="ory-cell-inner ory-cell-leaf"><div><p>Hurrah failed project reports!</p></div></div></div></div> lukeprog P9reGhq6Wt5FthgoR 2017-10-14T00:12:05.575Z Comment by lukeprog on Ten small life improvements <p>One of my most-used tools is very simple: an Alfred snippet that lets me paste-as-plain-text using Cmd+Opt+V.</p> lukeprog GuiX467FWhBdJDT5D 2017-08-24T15:55:33.787Z Comment by lukeprog on Rescuing the Extropy Magazine archives <p>Thanks!</p> lukeprog cEPst4B6E5wNgZMxX 2017-07-01T21:36:21.850Z Comment by lukeprog on LessWrong 2.0 Feature Roadmap & Feature Suggestions <div class="ory-row"><div class="ory-cell ory-cell-sm-12 ory-cell-xs-12"><div class="ory-cell-inner ory-cell-leaf"><div><p>From a user&#x27;s profile, be able to see their comments in addition to their posts.</p><p>Dunno about others, but this is actually one of the LW features I use the most.</p><p>(Apologies if this is listed somewhere already and I missed it.)</p></div></div></div></div> lukeprog 7y2QgEYwTRfBjBEZX 2017-07-01T06:43:09.823Z Comment by lukeprog on LessWrong 2.0 Feature Roadmap & Feature Suggestions <div class="ory-row"><div class="ory-cell ory-cell-sm-12 ory-cell-xs-12"><div class="ory-cell-inner ory-cell-leaf"><div><p>Probably not suitable for launch, but given that the epistemic seriousness of the users is the most important &quot;feature&quot; for me and some other people I&#x27;ve spoken to, I wonder if some kind of &quot;user badges&quot; thing might be helpful, especially if it influences the weight that upvotes and downvotes from those users have. E.g. 
one badge could be &quot;has read &gt;60% of the sequences, as &#x27;verified&#x27; by one of the 150 people the LW admins trust to verify such a thing about someone&quot; and &quot;verified superforecaster&quot; and probably some other things I&#x27;m not immediately thinking of.</p></div></div></div></div> lukeprog rup3rWnGvLtxZd9dg 2017-06-23T23:37:09.865Z Comment by lukeprog on Book recommendation requests <ol> <li>Constantly.</li> <li>Frequently.</li> </ol> lukeprog juSWyC2GNfou69MgR 2017-06-03T23:32:00.328Z Comment by lukeprog on Book recommendation requests <p><a href="">Best Textbooks on Every Subject</a></p> lukeprog vEdpngY7NTQgxG24C 2017-06-02T19:40:42.748Z Comment by lukeprog on AGI and Mainstream Culture <p>Thanks for briefly describing those <em>Doctor Who</em> episodes.</p> lukeprog QZ2MA4dWdTHbPMhzA 2017-05-23T19:28:54.174Z Comment by lukeprog on The Best Textbooks on Every Subject <p>Lists of textbook award winners like <a href="">this list</a> might also be useful.</p> lukeprog umAXKG8QQqwrXEWyf 2017-03-07T22:24:42.993Z Comment by lukeprog on The Best Textbooks on Every Subject <p>Fixed, thanks.</p> lukeprog F892RCvqWHDk9vjb3 2017-02-25T21:44:11.720Z Comment by lukeprog on Can the Chain Still Hold You? <p>Today I encountered a real-life account of the chain story — involving a cow rather than an elephant — around 24:10 into the &quot;<a href="">Best of BackStory, Vol. 1</a>&quot; episode of the podcast <em>BackStory</em>.</p> lukeprog qnJQaF2hRdd2kkh65 2017-01-27T14:41:03.746Z Comment by lukeprog on CFAR’s new focus, and AI Safety <p>&quot;Accuracy-boosting&quot; or &quot;raising accuracy&quot;?</p> lukeprog 6XLpTXu3uMNXaKyh2 2016-12-07T18:06:38.611Z Comment by lukeprog on Paid research assistant position focusing on artificial intelligence and existential risk <p><a href="">Source</a>. 
But the <a href="">non-cached page</a> says &quot;The details of this job cannot be viewed at this time,&quot; so maybe the job opening is no longer available.</p> <p>FWIW, I'm a bit familiar with Dafoe's thinking on the issues, and I think it would be a good use of time for the right person to work with him.</p> lukeprog QwkJiaxgQ4ASvmuMR 2016-05-03T18:28:24.969Z Comment by lukeprog on Audio version of Rationality: From AI to Zombies out of beta <p>Hi Rick, any updates on the Audible version?</p> lukeprog gNrhTf8koiTNchyhF 2016-04-22T14:44:38.240Z Comment by lukeprog on [link] Simplifying the environment: a new convergent instrumental goal <p>See also: <a href="">this Google Scholar search</a></p> lukeprog 3ordo7zKiZMm7zrgR 2016-04-22T14:43:10.592Z Comment by lukeprog on Why CFAR? The view from 2015 <p>Just donated!</p> lukeprog asfy9ZD5ZyRbn6heK 2015-12-20T19:57:54.441Z Comment by lukeprog on Audio version of Rationality: From AI to Zombies out of beta <p>Hurray!</p> lukeprog MvExLCnxCbzdraa24 2015-12-01T04:51:13.682Z Comment by lukeprog on Audio version of Rationality: From AI to Zombies out of beta <p>Any chance you'll eventually get this up on Audible? I suspect that in the long run, it can find a wider audience there.</p> lukeprog jDNajKYwLx9794BRm 2015-11-27T15:24:23.787Z Comment by lukeprog on The Best Textbooks on Every Subject <p>Another attempt to do something like this thread: <a href="">Viva la Books</a>.</p> lukeprog qbcSrwad9Zn4yaJF2 2015-10-03T05:52:52.571Z Comment by lukeprog on Estimate Stability <p>I guess <a href="">subjective logic</a> is also trying to handle this kind of thing. From Jøsang's <a href="">book draft</a>:</p> <blockquote> <p>Subjective logic is a type of probabilistic logic that allows probability values to be expressed with degrees of uncertainty. 
The idea of probabilistic logic is to combine the strengths of logic and probability calculus, meaning that it has binary logic’s capacity to express structured argument models, and it has the power of probabilities to express degrees of truth of those arguments. The idea of subjective logic is to extend probabilistic logic by also expressing uncertainty about the probability values themselves, meaning that it is possible to reason with argument models in presence of uncertain or incomplete evidence.</p> </blockquote> <p>Though maybe this particular formal system has really undesirable properties, I don't know.</p> lukeprog zTWr3yncEE5Xct5x5 2015-08-20T17:30:53.991Z Comment by lukeprog on MIRI's 2015 Summer Fundraiser! <p>Donated $300.</p> lukeprog H5o5g2QwmcsDZpNtk 2015-07-21T01:56:19.191Z Comment by lukeprog on Some Heuristics for Evaluating the Soundness of the Academic Mainstream in Unfamiliar Fields <p>Never heard of him.</p> lukeprog wD8G3vs9NGGypyA77 2015-07-17T13:07:17.541Z Comment by lukeprog on [link] FLI's recommended project grants for AI safety research announced <p>For those who haven't been around as long as Wei Dai…</p> <p>Eliezer tells the story of coming around to a more Bostromian view, circa 2003, in his <a href="'s_Coming_of_Age">coming of age</a> sequence.</p> lukeprog awqFSbXZgp8WXxm6N 2015-07-02T07:08:14.186Z Comment by lukeprog on GiveWell event for SF Bay Area EAs <p>Just FYI, I plan to be there.</p> lukeprog xT4dt9yKrfGvGnh8E 2015-06-25T20:42:51.429Z Comment by lukeprog on A map: Typology of human extinction risks <p>Any idea when the book is coming out?</p> lukeprog pg9Zqw32ypy7Aqzv9 2015-06-24T17:43:12.734Z Comment by lukeprog on [link] Baidu cheats in an AI contest in order to gain a 0.24% advantage <p>Just FYI to readers: the source of the first image is <a href="">here</a>.</p> lukeprog g9fWMe338tDEd4dp4 2015-06-08T01:57:10.615Z Comment by lukeprog on Learning to get things right first time <p>I don't know if this is commercially 
feasible, but I do like this idea from the perspective of building civilizational competence at getting things right on the first try.</p> lukeprog 7TNAbaqWCaSandjqc 2015-05-30T00:17:06.638Z Comment by lukeprog on Request for Advice : A.I. - can I make myself useful? <p>Might you be able to slightly retrain so as to become an expert on medium-term and long-term biosecurity risks? Biological engineering presents serious GCR risk over the next 50 years (and of course after that, as well), and very few people are trying to think through the issues on more than a 10-year time horizon. FHI, CSER, GiveWell, and perhaps others each have a decent chance of wanting to hire people into such research positions over the next few years. (GiveWell is looking to hire a biosecurity program manager right now, but I assume you can't acquire the requisite training and background <em>immediately</em>.)</p> lukeprog 6yiuZi7yzjLwxb3XR 2015-05-29T18:17:43.531Z Comment by lukeprog on CFAR-run MIRI Summer Fellows program: July 7-26 <p>I think it's partly not doing enough far-advance planning, but also partly just a greater-than-usual willingness to Try Things that seem like good ideas even if the timeline is a bit rushed. That's how the original minicamp happened, which ended up going so well that it inspired us to develop and launch CFAR.</p> lukeprog 8TJ5CijoMheubo5kd 2015-04-29T23:57:55.306Z Comment by lukeprog on The Effective Altruism Handbook <p>People have complained about Sumatra not working with MIRI's PDF ebooks, too. It was hard enough already to get our process to output the links we want on most readers, so we decided not to make the extra effort to additionally support Sumatra. 
I'm not sure what it would take.</p> lukeprog qFFdNeYfNJea7xXo3 2015-04-26T03:02:30.605Z Comment by lukeprog on The Best Textbooks on Every Subject <p>Updated, thanks!</p> lukeprog EN3d7TEjFfxib3vtm 2015-04-15T00:19:58.404Z Comment by lukeprog on The Best Textbooks on Every Subject <p>Fixed, thanks.</p> lukeprog Agx5GEHu9MjPZ5Tdn 2015-04-15T00:07:56.197Z Comment by lukeprog on How urgent is it to intuitively understand Bayesianism? <p>Maybe just use <a href="">odds ratios</a>. That's what I use when I'm trying to make updates on the spot.</p> lukeprog jRt3a65DBnYfXp9ik 2015-04-07T02:27:47.038Z Comment by lukeprog on Just a casual question regarding MIRI <p>Working on MIRI's <a href="">current technical agenda</a> mostly requires a background in computer science with an unusually strong focus on logic: see details <a href="">here</a>. That said, the scope of MIRI's research program should be expanding over time. E.g. see <a href="">Patrick's recent proposal</a> to model goal stability challenges in a machine learning system, which would require more typical AI knowledge than has usually been the case for MIRI's work so far.</p> <p>MIRI's research isn't really what a mathematician would typically think of as &quot;math research&quot; — it's more like theory-heavy computer science research with an unusually significant math/logic component, as is the case with a few other areas of computer science research, e.g. 
program analysis.</p> <p>Also see the &quot;Our recommended path for becoming a MIRI research fellow&quot; section on our <a href="">research fellow job posting</a>.</p> lukeprog CAycN8b2vb3tzm8hi 2015-03-22T22:05:42.493Z Comment by lukeprog on The Best Textbooks on Every Subject <p>Fixed, thanks!</p> lukeprog em3nBHbNP74RK626W 2015-03-19T23:08:36.185Z Comment by lukeprog on Best Explainers on Different Subjects <p>I tried this earlier, with <a href="">Great Explanations</a>.</p> lukeprog QFBncwFbgwrqFSZPi 2015-03-18T23:37:39.168Z Comment by lukeprog on Rationality: From AI to Zombies <blockquote> <p>I can't mail that address, I get a failure message from Google</p> </blockquote> <p>Oops. Should be fixed now.</p> lukeprog ona7nduZe8wWxbEcD 2015-03-15T18:27:14.800Z Comment by lukeprog on Calibration Test with database of 150,000+ questions <p>Thanks! BTW, I'd prefer to have 1% <em>and</em> 0.1% <em>and</em> 99% <em>and</em> 99.9% as options, rather than skipping over the 1% and 99% options as you have it now.</p> lukeprog fvyW3CiimJkJMg37q 2015-03-13T21:57:50.093Z Comment by lukeprog on Calibration Test with database of 150,000+ questions <p>Fair enough. I've edited my original comment.</p> <p>(For posterity: the text for my original comment's first hyperlink originally read &quot;0 and 1 are not probabilities&quot;.)</p> lukeprog DrTJTY7bzXuQzkafY 2015-03-13T21:55:55.311Z Comment by lukeprog on Rationality: From AI to Zombies <p>Which is <a href="">roughly</a> the length of <em>War and Peace</em> or <em>Atlas Shrugged</em>.</p> lukeprog TCNLJ9wYWZxH55nCT 2015-03-13T21:48:28.029Z Comment by lukeprog on Calibration Test with database of 150,000+ questions <p>0% probability is my most common answer as well, but I'm using it less often than I was choosing 50% on the CFAR calibration app (which forces a binary answer choice rather than an open-ended answer choice). 
The CFAR app has lots of questions like &quot;Which of these two teams won the Superbowl in 1978&quot; where I just have no idea. The trivia database Nanashi is using has, for me, a greater proportion of questions on which my credence is something more interesting than an ignorance prior.</p> lukeprog chpkKr2v4Fb97sKBg 2015-03-13T18:03:23.961Z Comment by lukeprog on Calibration Test with database of 150,000+ questions <p><a href="">I'd prefer not to allow 0 and 1 as available credences</a>. But if 0 remained as an option I would just interpret it as &quot;very close to 0&quot; and then keep using the app, though if a future version of the app showed me my <a href="">Bayes score</a> then the difference between what the app allows me to choose (0%) and what I'm interpreting 0 to mean (&quot;very close to 0&quot;) could matter.</p> lukeprog 7mF3LGYQ27wPAQ7B2 2015-03-13T17:59:17.922Z MIRI's 2014 Summer Matching Challenge <p><small>(Cross-posted from <a href="">MIRI's blog</a>. <a href="">MIRI</a> maintains Less Wrong, with generous help from <a href="">Trike Apps</a>, and much of the core content is written by salaried MIRI staff members.)</small></p> <p>Thanks to the generosity of several major donors,<sup>&dagger;</sup>&nbsp;every donation made to MIRI between now and August 15th, 2014 will be <strong>matched dollar-for-dollar</strong>, up to a total of $200,000! &nbsp;</p> <p align="center"><img src="" alt="" /></p> <p>Now is your chance to <strong>double your impact</strong> while helping us raise up to $400,000 (with matching) to fund <a href="">our research program</a>.</p> <p><small>Corporate matching and monthly giving pledges will count towards the total! 
Please email <a href=""></a> if you intend on leveraging corporate matching (check <a href="">here</a>, to see&nbsp;if your employer will match your donation) or would like to pledge 6 months of monthly donations, so that we can properly account for your contributions towards the fundraiser.</small></p> <p><small></small>(If you're unfamiliar with our mission, see: <a href="">Why MIRI?</a>)</p> <p align="center"><a href=""><strong>Donate Now</strong></a></p> <p>&nbsp; <img class="img-rounded shadowed aligncenter" src="" alt="" /></p> <h3>Accomplishments Since Our Winter&nbsp;2013 Fundraiser Launched:</h3> <ul> <li>Hired&nbsp;<strong>2 new Friendly AI researchers</strong>,&nbsp;Benja Fallenstein &amp; Nate Soares. Since March, they've authored or co-authored 4 papers/reports, with several others in the works. Right now they're traveling, to present papers at the <a href="">Vienna Summer of Logic</a>, <a href="">AAAI-14</a>, and <a href="">AGI-14</a>.</li> <li><strong>5 new papers &amp; book chapters</strong>: &ldquo;<a href="">Why We Need Friendly AI</a>,&rdquo; &ldquo;<a href="">The errors, insights, and lessons of famous AI predictions</a>,&rdquo; &ldquo;<a href="">Problems of self-reference...</a>,&rdquo; &ldquo;<a href="">Program equilibrium...</a>,&rdquo; and &ldquo;<a href="">The ethics of artificial intelligence</a>.&rdquo;</li> <li><strong>11 new technical reports</strong>: <a href="">7 reports from the December 2013 workshop</a>, &ldquo;<a href="">Botworld</a>,&rdquo; &ldquo;<a href="">Loudness...</a>,&rdquo; &ldquo;<a href="">Distributions allowing tiling...</a>,&rdquo; and &ldquo;<a href="">Non-omniscience...</a>&rdquo;</li> <li><strong>New book</strong>:&nbsp;<em><a href="">Smarter Than Us</a>,</em>&nbsp;published&nbsp;both as an e-book&nbsp;and a paperback.</li> <li>Held <a href="">one MIRI workshop</a> and launched the <strong><a href="">MIRIx program</a></strong>, which currently supports&nbsp;8 independently-organized&nbsp;Friendly 
AI&nbsp;discussion/research groups&nbsp;around the world.</li> <li><strong>New analyses</strong>: <a href="">Robby's posts on naturalized induction</a>, <a href="">Luke's list of 70+ studies which could improve our picture of superintelligence strategy</a>, &ldquo;<a href="">Exponential and non-exponential trends in information technology</a>,&rdquo; &ldquo;<a href="">The world's distribution of computation</a>,&rdquo; &ldquo;<a href="">How big is the field of artificial intelligence?</a>,&rdquo; &ldquo;<a href="">Robust cooperation: A case study in Friendly AI research</a>,&rdquo;&nbsp;&ldquo;<a href="/lw/jv2/is_my_view_contrarian/">Is my view contrarian?</a>,&rdquo; and &ldquo;<a href="">Can we really upload Johnny Depp's brain?</a>&rdquo;</li> <li><strong>Won $60,000+ in matching and prizes</strong> from sources that wouldn't have otherwise given to MIRI, <a href="">via the Silicon Valley Gives fundraiser</a>. (Thanks again, all you dedicated donors!)</li> <li><a href=""><strong>49 new expert interviews</strong></a>, including interviews with <a href="">Scott Aaronson</a> (MIT), <a href="">Max Tegmark</a> (MIT),&nbsp;<a href="">Kathleen Fisher</a> (DARPA), <a href="">Suresh Jagannathan</a> (DARPA),&nbsp;<a href="">Andr&eacute; Platzer</a> (CMU),&nbsp;<a href="">Anil Nerode</a> (Cornell),&nbsp;<a href="">John Baez</a> (UC Riverside), <a href="">Jonathan Millen</a> (MITRE), and <a href="">Roger Schell</a>.</li> <li><strong>4 transcribed&nbsp;conversations</strong> about MIRI strategy: <a href="">1</a>, <a href="">2</a>, <a href="">3</a>, <a href="">4</a>.</li> <li>Published a thorough &ldquo;<a href="">2013 in review</a>.&rdquo;</li> </ul> <h3>Ongoing&nbsp;Activities You Can Help Support</h3> <ul> <li>We're writing an overview of the Friendly AI technical agenda (as we see it) so far.</li> <li>We're currently developing and testing several tutorials on different pieces of the Friendly AI technical agenda (tiling agents, modal agents, etc.).</li> <li>We're writing 
several more papers and reports.</li> <li>We're growing the MIRIx program, largely to grow the pool of people we can plausibly hire as full-time FAI researchers in the next&nbsp;couple years.</li> <li>We're planning, or helping to plan,&nbsp;multiple research workshops, including&nbsp;the <a href="">May 2015 decision theory workshop at Cambridge University</a>.</li> <li>We're finishing the editing for&nbsp;a book version of Eliezer's&nbsp;<em><a href="">Sequences</a></em>.</li> <li>We're helping to fund further <a href="">SPARC</a>&nbsp;activity, which provides education and skill-building to elite young math talent, and introduces them to ideas like effective altruism and global catastrophic risks.</li> <li>We're continuing to discuss formal collaboration opportunities with UC Berkeley faculty and development staff.</li> <li>We're helping Nick Bostrom promote his&nbsp;<a href=""><em>Superintelligence</em></a> book in the U.S.</li> <li>We're investigating&nbsp;opportunities for supporting&nbsp;Friendly AI research via federal funding&nbsp;sources such as the NSF.</li> </ul> <p>Other projects are still being surveyed for likely cost and impact. See also our <a href="">mid-2014 strategic plan</a>. We appreciate your support for our work!</p> <p><a href="">Donate now</a>, and seize a better-than-usual opportunity&nbsp;to move our work forward. If you have questions about donating, please contact&nbsp;Malo Bourgon&nbsp;at (510) 292-8776 or</p> <p><sup>&dagger;</sup>&nbsp;$200,000 of total matching funds has been provided by Jaan Tallinn, Edwin Evans, and Rick Schwall.</p> <p><small>Screenshot service provided by <a href=""></a>, used to include a self-updating progress bar.</small></p> lukeprog MLxdKGWvpCToz7vEE 2014-08-07T20:03:24.171Z Will AGI surprise the world? 
<p><small>Cross-posted from <a href="">my blog</a>.</small></p> <p>Yudkowsky <a href="/lw/hp5/after_critical_event_w_happens_they_still_wont/">writes</a>:</p> <blockquote> <p>In general and across all instances I can think of so far, I do not agree with the part of your futurological forecast in which you reason, "After event W happens, everyone will see the truth of proposition X, leading them to endorse Y and agree with me about policy decision Z."</p> <p>...</p> <p>Example 2: "As AI gets more sophisticated, everyone will realize that real AI is on the way and then they'll start taking Friendly AI development seriously."</p> <p>Alternative projection: As AI gets more sophisticated, the rest of society can't see any difference between the latest breakthrough reported in a press release and that business earlier with Watson beating Ken Jennings or Deep Blue beating Kasparov; it seems like the same sort of press release to them. The same people who were talking about robot overlords earlier continue to talk about robot overlords. The same people who were talking about human irreproducibility continue to talk about human specialness. Concern is expressed over technological unemployment the same as today or Keynes in 1930, and this is used to fuel someone's previous ideological commitment to a basic income guarantee, inequality reduction, or whatever. The same tiny segment of unusually consequentialist people are concerned about Friendly AI as before. If anyone in the science community does start thinking that superintelligent AI is on the way, they exhibit the same distribution of performance as modern scientists who think it's on the way, e.g. 
Hugo de Garis, Ben Goertzel, etc.</p> </blockquote> <p>My&nbsp;own projection goes more like this:</p> <blockquote> <p>As AI gets more sophisticated, and as more prestigious AI scientists begin to publicly acknowledge that AI is plausibly&nbsp;only 2-6 decades away, policy-makers and research funders will begin to&nbsp;respond to&nbsp;the AGI safety challenge, just like&nbsp;they began to respond&nbsp;to CFC&nbsp;damages in the late 70s,&nbsp;to global warming in the late 80s, and to synbio developments in the 2010s. As for society at large, I dunno.&nbsp;They'll think all kinds of random stuff for random reasons, and in some cases&nbsp;this will seriously impede effective policy, as it does in the USA for science education and immigration reform. Because AGI lends itself to arms races and is harder to handle adequately&nbsp;than global warming or nuclear security are, policy-makers and industry leaders will generally know AGI is coming but be unable to fund the needed&nbsp;efforts and coordinate effectively enough to ensure good outcomes.</p> </blockquote> <p>At least one clear difference between my projection and Yudkowsky's is that I expect AI-expert&nbsp;performance on the problem to improve&nbsp;substantially as a greater fraction of <em>elite</em> AI scientists begin to think about the issue in <a href="">Near mode&nbsp;rather than Far mode</a>.</p> <p>As a friend of mine suggested&nbsp;recently,&nbsp;current elite awareness of the AGI safety challenge is roughly where elite awareness of the global warming challenge was in the early 80s. Except, I expect&nbsp;elite acknowledgement&nbsp;of the AGI safety challenge to spread more slowly than it did for global warming or nuclear security, because&nbsp;AGI is tougher&nbsp;to forecast in general, and involves trickier philosophical nuances. 
(Nobody was ever tempted to say, "But as&nbsp;the nuclear chain reaction grows in power, it will necessarily&nbsp;become more moral!")</p> <p>Still, there is a worryingly non-negligible&nbsp;chance that AGI explodes "out of nowhere." Sometimes important theorems are proved suddenly after decades of failed attempts by other mathematicians, and sometimes a computational procedure is <a href="">sped up by 20 orders of magnitude with a single breakthrough</a>.</p> lukeprog pAwhfwG6rgjabJL4T 2014-06-21T22:27:31.620Z Some alternatives to “Friendly AI” <p><small>Cross-posted from <a href="">my blog</a>.</small></p> <p>What does MIRI's <a href="">research program</a> study?</p> <p>The most established term for this was coined&nbsp;by MIRI founder Eliezer Yudkowsky: "<strong><a href="">Friendly AI</a></strong>."&nbsp;The term has some advantages, but it&nbsp;might suggest that MIRI is trying to build C-3PO, and it sounds a bit whimsical for a serious research program.</p> <p>What about&nbsp;<strong>safe AGI</strong> or&nbsp;<strong>AGI safety</strong>? These terms are probably easier to interpret than Friendly AI. Also, people&nbsp;<em>like</em> being safe, and governments like saying they're funding initiatives&nbsp;to keep&nbsp;the public safe.</p> <p>A friend of mine&nbsp;worries that these terms could provoke a defensive response (in AI researchers) of "Oh, so you think me and everybody&nbsp;<em>else</em> in AI is working on&nbsp;<em>unsafe AI</em>?" 
But I've never actually heard&nbsp;that response to "AGI safety" in the wild, and AI safety researchers&nbsp;regularly discuss&nbsp;"<a href="">software system safety</a>" and&nbsp;"<a href="">AI safety</a>" and&nbsp;"<a href="">agent safety</a>" and&nbsp;more specific topics like "<a href="">safe reinforcement learning</a>" without&nbsp;provoking negative reactions from people&nbsp;doing regular AI research.</p> <p>I'm more worried that&nbsp;a term like "safe AGI" could provoke&nbsp;a response of "So you're trying to make sure that a system which is smarter than humans, and able to operate in arbitrary real-world environments, and able to invent new technologies to achieve its goals, will be&nbsp;<em>safe</em>? Let me save you some time and tell you right now that's <em>impossible</em>. Your research program is a pipe dream."</p> <p>My reply goes something like "Yeah, it's&nbsp;<em>way</em> beyond our current capabilities, but lots of things that once looked impossible are now feasible&nbsp;because people worked really hard on them for a long time, and&nbsp;we don't think we can get the whole world to promise never to build AGI just because it's hard to make safe, so we're going to give AGI safety a solid try for a few decades and see what&nbsp;can be discovered." But that's probably not all&nbsp;<em>that</em> reassuring.</p> <p>How about&nbsp;<strong>high-assurance AGI?</strong> In computer science, a "<a href="">high assurance system</a>" is one built from the ground up for unusually strong safety and/or security guarantees, because it's going to be used in safety-critical applications where human lives &mdash; or sometimes simply&nbsp;<em>billions of dollars</em> &mdash; are at stake (e.g. autopilot software or Mars rover software). 
So there's a nice analogy to MIRI's work, where we're trying to figure out what an AGI would look like if it was built from the ground up to get the strongest safety guarantees possible for such an autonomous and capable system.</p> <p>I think the main problem with this term is that, quite reasonably, nobody will believe that we&nbsp;can ever get anywhere <em>near</em> as much assurance in the behavior of an AGI as we&nbsp;can in the behavior of, say, <a href="">the&nbsp;relatively limited AI software&nbsp;that controls the European Train Control System</a>. "High assurance AGI" sounds a bit like "Totally safe all-powerful demon lord." It sounds even <em>more</em> wildly unimaginable to AI researchers than "safe AGI."</p> <p>What about&nbsp;<strong>superintelligence control</strong> or&nbsp;<strong>AGI control</strong>, as in <a href="">Bostrom (2014)</a>? "AGI control" is perhaps more believable than "high-assurance AGI" or "safe AGI," since it brings to mind AI <em>containment</em> methods,&nbsp;which sound more feasible&nbsp;to most people than designing an unconstrained&nbsp;AGI that is somehow nevertheless safe. (It's okay if they learn&nbsp;<em>later</em> that containment probably isn't an ultimate solution to the problem.)</p> <p>On the other hand, it might provoke a reaction of "What, you don't think sentient robots have any rights, and you're free to control and confine them in any way you please? You're just repeating the immoral mistakes of the old slavemasters!" Which of course isn't true, but it takes some time&nbsp;to explain how I&nbsp;can think it's obvious that conscious machines have moral value while also being in favor of AGI control methods.</p> <p>How about&nbsp;<strong>ethical AGI?</strong> First, I worry that it sounds too philosophical, and philosophy is widely perceived as a confused, unproductive discipline. Second, I worry that it sounds like the research&nbsp;assumes moral realism, which many (most?) intelligent people reject. 
Third, it makes it sound like most of the work is in selecting the goal function, which I don't think is true.</p> <p>What about&nbsp;<strong>beneficial AGI?</strong> That's better than "ethical AGI," I think, but like "ethical AGI" and "Friendly AI," the term sounds less like a serious math and engineering discipline and more like some enclave of crank researchers writing a flurry of words (but no math) about how AGI needs to be "nice" and "trustworthy" and "not harmful" and oh yeah it must be "virtuous" too, whatever that means.</p> <p>So yeah, I dunno. I think "AGI safety" is my least-disliked&nbsp;term these days, but I wish I knew of some better options.</p> lukeprog P2evgLpCZA2tRRJAR 2014-06-15T19:53:20.340Z An onion strategy for AGI discussion <p><small>Cross-posted from <a href="">my blog</a>.</small></p> <p>"<a href="">The stabilization of environments</a>" is a paper&nbsp;about AIs that&nbsp;reshape their environments to&nbsp;make it easier to achieve their goals. This is typically called&nbsp;<em>enforcement</em>, but they prefer the term&nbsp;<em>stabilization</em> because it "sounds less hostile."</p> <p>"I'll open the pod bay doors,&nbsp;Dave, but then I'm going to stabilize the ship..."</p> <p><a href="">Sparrow (2013)</a>&nbsp;takes&nbsp;the opposite approach to plain vs. dramatic language. 
Rather than&nbsp;using a modest&nbsp;term like&nbsp;<em><a href="">iterated embryo selection</a></em>, Sparrow prefers&nbsp;the phrase&nbsp;<em>in vitro eugenics</em>.&nbsp;Jeepers.</p> <p>I suppose that's more&nbsp;likely to provoke&nbsp;public discussion, but...&nbsp;will much good come of that public discussion?&nbsp;The public had a needless freak-out about in vitro fertilization&nbsp;back in the 60s and 70s and then, as soon as the first IVF baby was born in 1978, <a href="">decided they were in favor of it</a>.</p> <p>Someone&nbsp;recently&nbsp;suggested I use an&nbsp;"<strong>onion strategy</strong>" for the discussion of novel technological risks.&nbsp;The outermost layer of the&nbsp;communication onion would be&nbsp;aimed at&nbsp;the general public, and focus on benefits rather than risks, so as not to provoke an unproductive panic. A second layer for a&nbsp;specialist&nbsp;audience could&nbsp;include&nbsp;a more detailed elaboration of the risks. The most complete discussion of risks and mitigation options would be&nbsp;reserved for&nbsp;technical publications that are read only by professionals.</p> <p>Eric Drexler <a href="">seems</a> to wish he had more successfully used&nbsp;an onion strategy when writing about nanotechnology.&nbsp;<a href=""><em>Engines of Creation</em></a>&nbsp;included frank discussions of both the benefits and risks of nanotechnology, including the "<a href="">grey goo</a>" scenario that was discussed widely in the media and used as the premise for the bestselling novel&nbsp;<em><a href="">Prey</a></em>.</p> <p>Ray Kurzweil may be using an onion strategy, or at least keeping his writing&nbsp;in the outermost layer. 
If you look carefully, chapter 8&nbsp;of&nbsp;<em><a href="">The Singularity Is Near</a></em>&nbsp;takes technological risks pretty seriously, and yet&nbsp;it's written in such a way that most people who read the book seem to come away with an overwhelmingly optimistic perspective on technological change.</p> <p>George Church may be following an onion strategy.&nbsp;<em><a href="">Regenesis</a></em>&nbsp;also contains a chapter on the risks of advanced bioengineering, but it's presented&nbsp;as an "epilogue" that many readers will skip.</p> <p>Perhaps those of us writing about AGI for the general public should try to discuss:</p> <ul> <li><em>astronomical stakes</em> rather than&nbsp;<em>existential risk</em></li> <li><em>Friendly AI</em> rather than&nbsp;<em>AGI risk</em> or&nbsp;the <em>superintelligence control problem</em></li> <li>the <em>orthogonality thesis</em> and&nbsp;<em>convergent instrumental values</em> and&nbsp;<em>complexity of values</em> rather than "doom by default"</li> <li>etc.</li> </ul> <p>MIRI doesn't have any official recommendations on the matter, but these days I find myself leaning toward an&nbsp;onion strategy.</p> lukeprog mfHvyPL2d6v7pXkjs 2014-05-31T19:08:24.784Z Can noise have power? <p>One of the most interesting debates on Less Wrong that seems like it should be definitively resolvable is the one between Eliezer Yudkowsky, Scott Aaronson, and others on <a href="/lw/vq/the_weighted_majority_algorithm/">The Weighted Majority Algorithm</a>. I'll reprint the debate here in case anyone wants to comment further on it.</p> <p>In that post, Eliezer argues that "noise hath no power" (read the post for details). Scott disagreed. 
He&nbsp;<a href="/lw/vq/the_weighted_majority_algorithm/owm">replied</a>:</p> <blockquote> <p>...Randomness provably never helps in average-case complexity (i.e., where you fix the probability distribution over inputs) -- since given any ensemble of strategies, by convexity there must be at least one deterministic strategy in the ensemble that does at least as well as the average.</p> <p>On the other hand, if you care about the worst-case running time, then there are settings (such as query complexity) where randomness provably does help. For example, suppose you're given n bits, you're promised that either n/3 or 2n/3 of the bits are 1's, and your task is to decide which. Any deterministic strategy to solve this problem clearly requires looking at 2n/3 + 1 of the bits. On the other hand, a randomized sampling strategy only has to look at O(1) bits to succeed with high probability.</p> <p>Whether randomness ever helps in worst-case polynomial-time computation is the P versus BPP question, which is in the same league as P versus NP. It's conjectured that P=BPP (i.e., randomness never saves more than a polynomial). This is known to be true if really good pseudorandom generators exist, and such PRG's can be constructed if certain problems that seem to require exponentially large circuits, really do require them (see <a href="">this paper</a> by Impagliazzo and Wigderson). But we don't seem close to proving P=BPP unconditionally.</p> </blockquote> <p>Eliezer <a href="/lw/vq/the_weighted_majority_algorithm/owp">replied</a>:</p> <blockquote> <p>Scott, I don't dispute what you say. 
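<p><em>[Editorial aside: Scott's promise problem is easy to simulate. Below is a minimal Python sketch of the randomized sampling strategy he describes — it is not from the original thread, and the function name and sample count are my own choices.]</em></p>

```python
import random

def decide_ones_fraction(bits, samples=200):
    """Given a 0/1 list promised to contain either n/3 or 2n/3 ones,
    guess which case holds by sampling O(1) random positions.

    A deterministic algorithm must examine 2n/3 + 1 positions in the
    worst case, but the number of samples needed here is independent
    of n: by a Chernoff bound, the error probability shrinks
    exponentially as `samples` grows.
    """
    n = len(bits)
    # Query `samples` uniformly random positions (with replacement).
    ones = sum(bits[random.randrange(n)] for _ in range(samples))
    # The sample mean concentrates near 1/3 or 2/3, so split at 1/2.
    return "2n/3" if ones / samples > 0.5 else "n/3"
```

<p><em>[With <code>samples=200</code> the failure probability is below 10<sup>-4</sup> in either case, no matter how large n is — which is the sense in which the randomized strategy "only has to look at O(1) bits."]</em></p>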
I just suggest that the confusing term "in the worst case" be replaced by the more accurate phrase "supposing that the environment is an adversarial superintelligence who can perfectly read all of your mind except bits designated 'random'".</p> </blockquote> <p>Scott <a href="/lw/vq/the_weighted_majority_algorithm/owq">replied</a>:</p> <blockquote> <p>I often tell people that theoretical computer science is basically mathematicized paranoia, and that this is the reason why Israelis so dominate the field. You're absolutely right: we do typically assume the environment is an adversarial superintelligence. But that's not because we literally think it is one, it's because we don't presume to know which distribution over inputs the environment is going to throw at us. (That is, we lack the self-confidence to impose any particular prior on the inputs.) We do often assume that, if we generate random bits ourselves, then the environment isn't going to magically take those bits into account when deciding which input to throw at us. (Indeed, if we like, we can easily generate the random bits after seeing the input -- not that it should make a difference.)</p> <p>Average-case analysis is also well-established and used a great deal. But in those cases where you can solve a problem without having to assume a particular distribution over inputs, why complicate things unnecessarily by making such an assumption? Who needs the risk?</p> </blockquote> <p>And later <a href="/lw/vq/the_weighted_majority_algorithm/t6b">added</a>:</p> <blockquote> <p>...Note that I also enthusiastically belong to a "derandomize things" crowd! The difference is, I think derandomizing is hard work (sometimes possible and sometimes not), since I'm unwilling to treat the randomness of the problems the world throws at me on the same footing as randomness I generate myself in the course of solving those problems. 
(For those watching at home tonight, I hope the differences are now reasonably clear...)</p> </blockquote> <p>Eliezer <a href="/lw/vq/the_weighted_majority_algorithm/t6c">replied</a>:</p> <blockquote> <p>I certainly don't say "it's not hard work", and the environmental probability distribution should not look like the probability distribution you have over your random numbers - it should contain correlations and structure. But once you know what your probability distribution is, then you should do your work relative to that, rather than assuming "worst case". Optimizing for the worst case in environments that aren't actually adversarial, makes even less sense than assuming the environment is as random and unstructured as thermal noise.</p> <p>I would defend the following sort of statement: While often it's not worth the computing power to take advantage of all the believed-in regularity of your probability distribution over the environment, any environment that you can't get away with treating as effectively random, probably has enough structure to be worth exploiting instead of randomizing.</p> <p>(This isn't based on career experience, it's how I would state my expectation given my prior theory.)</p> </blockquote> <p>Scott <a href="/lw/vq/the_weighted_majority_algorithm/t6d">replied</a>:</p> <blockquote> <p>&gt; "once you know what your probability distribution is..."</p> <p>I'd merely stress that that's an enormous "once." When you're writing a program (which, yes, I used to do), normally you have only the foggiest idea of what a typical input is going to be, yet you want the program to work anyway. 
This is not just a hypothetical worry, or something limited to cryptography: people have actually run into strange problems using pseudorandom generators for Monte Carlo simulations and hashing (see <a href="">here</a> for example, or Knuth vol 2).</p> <p>Even so, intuition suggests it should be possible to design PRG's that defeat anything the world is likely to throw at them. I share that intuition; it's the basis for the (yet-unproved) P=BPP conjecture.</p> <p>"Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin." --von Neumann</p> </blockquote> <p>And that's where the debate drops off, at least between Eliezer and Scott, at least on that thread.</p> lukeprog 8sitELf6z8zGPKDRm 2014-05-23T04:54:32.829Z Calling all MIRI supporters for unique May 6 giving opportunity! <p><small>(Cross-posted from <a href="">MIRI's blog</a>. <a href="">MIRI</a> maintains Less Wrong, with generous help from <a href="">Trike Apps</a>, and much of the core content is written by salaried MIRI staff members.)</small></p> <p>Update: I'm liveblogging the fundraiser <a href="">here</a>.</p> <h2 style="text-align:center">Read our strategy below, then <a href="">give here</a>!</h2> <p><a href=""><img style="float: right; padding: 10px;" src="" alt="SVGives logo lrg" width="230" height="178" /></a>As previously <a title="Help MIRI in a Massive 24-Hour Fundraiser on May 6th" href="">announced</a>,&nbsp;MIRI is participating in a massive 24-hour fundraiser on May 6th, called <a href="">SV Gives</a>. This is a unique opportunity for all MIRI supporters to increase the impact of their donations. To be successful we'll need to pre-commit to a strategy and see it through. 
<strong>If you plan to give at least $10 to MIRI sometime this year, this event would be the best time to do it!</strong></p> <h2><br /></h2> <h2>The plan</h2> <p>We need all hands on deck to help us win the following prize as many times as possible:</p> <blockquote>$2,000 prize for the nonprofit that has the most individual donors in an hour, every hour for 24 hours.</blockquote> <p>To paraphrase, <em>every hour</em>, there is a $2,000 prize for the organization that has the most individual donors during that hour. <strong>That's a total of $48,000 in prizes, from sources that wouldn't normally give to MIRI.&nbsp;</strong> The minimum donation is $10, and an individual donor can give as many times as they want. Therefore we ask our supporters to:</p> <ol> <li><strong><a href="">give</a> $10 an hour, during <em>every hour</em> of the fundraiser that they are awake (I'll be up and donating for all 24 hours!)</strong>;</li> <li>for those whose giving budgets won't cover all those hours, see below for a list of which hours to privilege; and</li> <li>publicize this effort as widely as possible.</li> </ol> <h3 style="text-align: center;">International donors, we especially need your help!</h3> <p>MIRI has a strong community of international supporters, and this gives us a distinct advantage! While North America sleeps, you'll be awake, ready to target all of the overnight $2,000 hourly prizes.</p> <p><a id="more"></a></p> <h2>Hours to target in order of importance</h2> <p>To increase our chances of winning these prizes, we want to preferentially target the hours that will see the least donation traffic from donors of other participating organizations. Below are the top 12 hours we'd like to target in order of importance. Remember that all times are in Pacific Time. 
(Click on an hour to see what time it is in your timezone.)</p> <ul> <li><a href=";p1=224&amp;ah=1">1 am hour</a>&nbsp;&nbsp;(01:00&ndash;01:59 PT)</li> <li><a href=";p1=224&amp;ah=1">2 am hour</a>&nbsp;(02:00&ndash;02:59 PT)</li> <li><a href=";p1=224&amp;ah=1">3 am hour</a>&nbsp;(03:00&ndash;03:59 PT)</li> <li><a href=";p1=224&amp;ah=1">4 am hour</a>&nbsp;(04:00&ndash;04:59 PT)</li> <li><a href=";p1=224&amp;ah=1">5 am hour</a>&nbsp;(05:00&ndash;05:59 PT)</li> <li><a href=";p1=224&amp;ah=1">6 am hour</a>&nbsp;(06:00&ndash;06:59 PT)</li> <li><a href=";p1=224&amp;ah=1">11 pm hour</a>&nbsp;(23:00&ndash;23:59 PT)</li> <li><a href=";p1=224&amp;ah=1">7 am hour</a> (07:00&ndash;07:59 PT)</li> <li><a href=";p1=224&amp;ah=1">10 pm hour</a> (22:00&ndash;22:59 PT)</li> <li><a href=";p1=224&amp;ah=1">8 am hour</a>&nbsp;(08:00&ndash;08:59 PT)</li> <li><a href=";p1=224&amp;ah=1">5 pm hour</a> (17:00&ndash;17:59 PT)</li> <li><a href=";p1=224&amp;ah=1">9 pm hour</a> (21:00&ndash;21:59 PT)</li> </ul> <p>For the 5 pm hour there is an additional prize I think we can win:</p> <blockquote>$1,000 golden ticket added to the first 50 organizations receiving gifts in the 5 pm hour.</blockquote> <p><strong>So if you are giving in the 5 pm hour try and give right at the beginning of the hour.</strong></p> <h3 style="text-align: center;">Bottom line, for every hour you are awake, <a href="">give</a> $10 an hour.</h3> <h3 style="text-align: center;">&nbsp;Give preferentially to the hours above, if unable to give during all waking hours.</h3> <p>We also have plans to target the $300,000 in matching funds up for grabs during the event. If you would like to contribute $500 or more to this effort, shoot Malo an email at <a href=""></a>. 
&nbsp;</p> <p>For those who want to follow along and contribute to the last minute planning, as well as receive updates and giving reminders during the event, <strong><a href="">sign up here</a>.</strong></p> lukeprog FuoPkThduHrRgxRSR 2014-05-04T23:45:25.469Z Is my view contrarian? <p><small>Previously: <a href="">Contrarian Excuses</a>, <a href="/lw/1kh/the_correct_contrarian_cluster/">The Correct Contrarian Cluster</a>, <a href="/lw/28i/what_is_bunk/">What is bunk?</a>, <a href="/lw/iao/common_sense_as_a_prior/">Common Sense as a Prior</a>, <a href="/lw/iu0/trusting_expert_consensus/">Trusting Expert Consensus</a>, <a href="">Prefer Contrarian Questions</a>.</small></p> <p>Robin Hanson once <a href="">wrote</a>:</p> <blockquote> <p>On average, contrarian views are less accurate than standard views. Honest contrarians should admit this, that neutral outsiders should assign most contrarian views a lower probability than standard views, though perhaps a high enough probability to warrant further investigation. Honest contrarians who expect reasonable outsiders to give their contrarian view more than normal credence should point to strong outside indicators that correlate enough with contrarians tending more to be right.</p> </blockquote> <p>I tend to think through the issue in three stages:</p> <ol> <li>When should I consider myself to be holding a contrarian<sup><a id="fnref:1" class="footnote" title="see footnote" href="#fn:1">[1]</a></sup> view? 
What is the relevant expert community?</li> <li>If I seem to hold a contrarian view, when do <em>I</em> have enough reason to think I&rsquo;m correct?</li> <li>If I seem to hold a <em>correct</em> contrarian view, what can I do to give <em>other</em> people good reasons to accept my view, or at least to take it seriously enough to examine it at length?</li> </ol> <p>I don&rsquo;t yet feel that I have &ldquo;answers&rdquo; to these questions, but in this post (and hopefully some future posts) I&rsquo;d like to organize some of what has been said before,<sup><a id="fnref:2" class="footnote" title="see footnote" href="#fn:2">[2]</a></sup> and push things a bit further along, in the hope that further discussion and inquiry will contribute toward significant progress in <a href="">social epistemology</a>.<sup><a id="fnref:3" class="footnote" title="see footnote" href="#fn:3">[3]</a></sup> Basically, I hope to say a bunch of obvious things, in a relatively well-organized fashion, so that less obvious things can be said from there.<sup><a id="fnref:4" class="footnote" title="see footnote" href="#fn:4">[4]</a></sup></p> <p>In this post, I&rsquo;ll just address stage 1. Hopefully I&rsquo;ll have time to revisit stages 2 and 3 in future posts.</p> <p>&nbsp;</p> <h4>Is my view contrarian?</h4> <h5>World model differences vs. value differences</h5> <p>Is my <a href="/lw/hx4/four_focus_areas_of_effective_altruism/">effective altruism</a> a contrarian view? It seems to be more of a contrarian <em>value judgment</em> than a contrarian <em>world model</em>,<sup><a id="fnref:5" class="footnote" title="see footnote" href="#fn:5">[5]</a></sup> and by &ldquo;contrarian view&rdquo; I tend to mean &ldquo;contrarian world model.&rdquo; Some apparently contrarian views are probably actually contrarian <em>values</em>.</p> <p>&nbsp;</p> <h5>Expert consensus</h5> <p>Is my <a href="">atheism</a> a contrarian view? 
It&rsquo;s definitely a world model, not a value judgment, and <a href="">only 2% of people are atheists</a>.</p> <p>But what&rsquo;s the relevant <em>expert</em> population, here? Suppose it&rsquo;s &ldquo;academics who specialize in the arguments and evidence concerning whether a god or gods exist.&rdquo; If so, then the expert population is probably dominated by academic theologians and religious philosophers, and my atheism is a contrarian view.</p> <p>We need <a href="/lw/4ba/some_heuristics_for_evaluating_the_soundness_of/">some heuristics</a> for evaluating the soundness of the academic consensus in different fields. <sup><a id="fnref:6" class="footnote" title="see footnote" href="#fn:6">[6]</a></sup></p> <p>For example, we should consider the selection effects operating on communities of experts. If someone doesn&rsquo;t believe in God, they&rsquo;re unlikely to spend their career studying arcane arguments for and against God&rsquo;s existence. So most people who specialize in this topic are theists, but nearly all of them were theists <em>before</em> they knew the arguments.</p> <p>Perhaps instead the relevant expert community is &ldquo;scholars who study the fundamental nature of the universe&rdquo; &mdash; maybe, philosophers and physicists? They&rsquo;re mostly atheists. <sup><a id="fnref:7" class="footnote" title="see footnote" href="#fn:7">[7]</a></sup> This is starting to get pretty ad-hoc, but maybe that&rsquo;s unavoidable.</p> <p>What about my view that the overall long-term impact of <a href="">AGI</a> will be, most likely, extremely bad? A recent survey of the top 100 authors in artificial intelligence (by citation index)<sup><a id="fnref:8" class="footnote" title="see footnote" href="#fn:8">[8]</a></sup> suggests that my view is somewhat out of sync with the views of those researchers.<sup><a id="fnref:9" class="footnote" title="see footnote" href="#fn:9">[9]</a></sup> But is that the relevant expert population? 
My impression is that AI experts know a lot about contemporary AI methods, especially within their subfield, but usually haven&rsquo;t thought much about, or read much about, long-term AI impacts.</p> <p>Instead, perhaps I&rsquo;d need to survey &ldquo;<a href="">AGI impact experts</a>&rdquo; to tell whether my view is contrarian. But who is that, exactly? There&rsquo;s no standard credential.</p> <p>Moreover, the most plausible candidates around today for &ldquo;AGI impact experts&rdquo; are &mdash; like the &ldquo;experts&rdquo; of many other fields &mdash; mere &ldquo;scholastic experts,&rdquo; in that they<sup><a id="fnref:10" class="footnote" title="see footnote" href="#fn:10">[10]</a></sup> know a lot about the arguments and evidence typically brought to bear on questions of long-term AI outcomes.<sup><a id="fnref:11" class="footnote" title="see footnote" href="#fn:11">[11]</a></sup> They generally are <em>not</em> experts in the sense of &ldquo;<a href="">Reliably superior performance on representative tasks</a>&rdquo; &mdash; they don&rsquo;t have uniquely good track records on predicting long-term AI outcomes, for example. As far as I know, they don&rsquo;t even have uniquely good track records on predicting short-term geopolitical or sci-tech outcomes &mdash; e.g. they aren&rsquo;t among the &ldquo;<a href="">super forecasters</a>&rdquo; discovered in <a href="">IARPA&rsquo;s forecasting tournaments</a>.</p> <p>Furthermore, we might start to worry about selection effects, again. E.g. 
if we ask AGI experts when they think AGI will be built, they may be overly optimistic about the timeline: after all, if they didn&rsquo;t think AGI was feasible soon, they probably wouldn&rsquo;t be focusing their careers on it.</p> <p>Perhaps we can salvage this approach for determining whether one has a contrarian view, but for now, let&rsquo;s consider another proposal.</p> <p>&nbsp;</p> <h5>Mildly extrapolated elite opinion</h5> <p>Nick Beckstead instead <a href="/lw/iao/common_sense_as_a_prior/">suggests </a> that, at least as a strong prior, one should believe what one thinks &ldquo;a broad coalition of trustworthy people would believe if they were trying to have accurate views and they had access to [one&rsquo;s own] evidence.&rdquo;<sup><a id="fnref:12" class="footnote" title="see footnote" href="#fn:12">[12]</a></sup> Below, I&rsquo;ll propose a modification of Beckstead&rsquo;s approach which aims to address the &ldquo;Is my view contrarian?&rdquo; question, and I&rsquo;ll call it the &ldquo;mildly extrapolated elite opinion&rdquo; (MEEO) method for determining the relevant expert population. <sup><a id="fnref:13" class="footnote" title="see footnote" href="#fn:13">[13]</a></sup></p> <p>First: which people are &ldquo;trustworthy&rdquo;? With Beckstead, I favor &ldquo;giving more weight to the opinions of people who can be shown to be trustworthy by clear indicators that many people would accept, rather than people that seem trustworthy to you personally.&rdquo; (This guideline aims to avoid parochialism and self-serving cognitive biases.)</p> <p>What are some &ldquo;clear indicators that many people would accept&rdquo;? 
Beckstead suggests:</p> <blockquote> <p>IQ, business success, academic success, generally respected scientific or other intellectual achievements, wide acceptance as an intellectual authority by certain groups of people, or success in any area where there is intense competition and success is a function of ability to make accurate predictions and good decisions&hellip;</p> </blockquote> <blockquote> <p>Of course, trustworthiness can also be domain-specific. Very often, elite common sense would recommend deferring to the opinions of experts (e.g., listening to what physicists say about physics, what biologists say about biology, and what doctors say about medicine). In other cases, elite common sense may give partial weight to what putative experts say without accepting it all (e.g. economics and psychology). In other cases, they may give less weight to what putative experts say (e.g. sociology and philosophy).</p> </blockquote> <p>Hence MEEO outsources the challenge of evaluating academic consensus in different fields to the &ldquo;generally trustworthy people.&rdquo; But in doing so, it raises several new challenges. How do we determine which people are trustworthy? How do we &ldquo;mildly extrapolate&rdquo; their opinions? How do we weight those mildly extrapolated opinions in combination?</p> <p>This approach might also be promising, or it might be even harder to use than the &ldquo;expert consensus&rdquo; method.</p> <p>&nbsp;</p> <h4>My approach</h4> <p>In practice, I tend to do something like this:</p> <ul> <li>To determine whether my view is contrarian, I ask whether there&rsquo;s a fairly obvious, relatively trustworthy expert population on the issue. If there is, I try to figure out what their consensus on the matter is. 
If it&rsquo;s different than my view, I conclude I have a contrarian view.</li> <li>If there <em>isn&rsquo;t</em> an obvious trustworthy expert population on the issue from which to extract a consensus view, then I basically give up on step 1 (&ldquo;Is my view contrarian?&rdquo;) and just move to the model combination in step 2 (see below), retaining pretty large uncertainty about how contrarian my view might be.</li> </ul> <h4><br /></h4> <h4>When do I have good reason to think I&rsquo;m correct?</h4> <p>Suppose I conclude I have a contrarian view, as I plausibly have about long-term AGI outcomes,<sup><a id="fnref:14" class="footnote" title="see footnote" href="#fn:14">[14]</a></sup> and as I might have about the technological feasibility of preserving myself via cryonics.<sup><a id="fnref:15" class="footnote" title="see footnote" href="#fn:15">[15]</a></sup> How much evidence do I need to conclude that my view is justified despite the informed disagreement of others?</p> <p>I&rsquo;ll try to tackle that question in a future post. Not surprisingly, my approach is a kind of <a href="/lw/hzu/model_combination_and_adjustment/">model combination and adjustment</a>.</p> <p>&nbsp;</p> <p>&nbsp;</p> <hr /> <p><small> <ol> <li id="fn:1"> <p>I don&rsquo;t have a concise definition for what counts as a &ldquo;contrarian view.&rdquo; In any case, I don&rsquo;t think that searching for an exact definition of &ldquo;contrarian view&rdquo; is what matters. In an email conversation with me, Holden Karnofsky concurred, making the point this way: &ldquo;I agree with you that the idea of &lsquo;contrarianism&rsquo; is tricky to define. 
I think things get a bit easier when you start looking for patterns that should worry you rather than trying to Platonically define contrarianism&hellip; I find &lsquo;Most smart people think I&rsquo;m bonkers about X&rsquo; and &lsquo;Most people who have studied X more than I have <em>plus seem to generally think like I do</em> think I&rsquo;m wrong about X&rsquo; both worrying; I find &lsquo;Most smart people think I&rsquo;m wrong about X&rsquo; and &lsquo;Most people who spend their lives studying X within a system that seems to be clearly dysfunctional and to have a bad track record think I&rsquo;m bonkers about X&rsquo; to be less worrying.&rdquo; <a class="reversefootnote" title="return to article" href="#fnref:1">&nbsp;↩</a></p> </li> <li id="fn:2"> <p>For a diverse set of perspectives on the social epistemology of disagreement and contrarianism not influenced (as far as I know) by the Overcoming Bias and Less Wrong conversations about the topic, see <a href="">Christensen (2009)</a>; <a href="">Ericsson et al. (2006)</a>; <a href="">Kuchar (forthcoming)</a>; <a href="">Miller (2013)</a>; <a href="">Gelman (2009)</a>; <a href="">Martin &amp; Richards (1995)</a>; <a href="">Schwed &amp; Bearman (2010)</a>; <a href="">Intemann &amp; de Melo-Martin (2013)</a>. Also see Wikipedia&rsquo;s article on <a href="">scientific consensus</a>. <a class="reversefootnote" title="return to article" href="#fnref:2">&nbsp;↩</a></p> </li> <li id="fn:3"> <p>I suppose I should mention that my entire inquiry here is, <em>ala</em> <a href="">Goldman (1998)</a>, premised on the assumptions that (1) the point of epistemology is the pursuit of correspondence-theory truth, and (2) the point of <em>social</em> epistemology is to evaluate which social institutions and practices have instrumental value for producing true or well-calibrated beliefs. 
<a class="reversefootnote" title="return to article" href="#fnref:3">&nbsp;↩</a></p> </li> <li id="fn:4"> <p>I borrow this line from <a href="">Chalmers (2014)</a>: &ldquo;For much of the paper I am largely saying the obvious, but sometimes the obvious is worth saying so that less obvious things can be said from there.&rdquo; <a class="reversefootnote" title="return to article" href="#fnref:4">&nbsp;↩</a></p> </li> <li id="fn:5"> <p>Holden Karnofsky <a href="">seems to agree</a>: &ldquo;I think effective altruism falls somewhere on the spectrum between &lsquo;contrarian view&rsquo; and &lsquo;unusual taste.&rsquo; My commitment to effective altruism is probably better characterized as &lsquo;wanting/choosing to be an effective altruist&rsquo; than as &lsquo;believing that effective altruism is correct.&rsquo;&rdquo; <a class="reversefootnote" title="return to article" href="#fnref:5">&nbsp;↩</a></p> </li> <li id="fn:6"> <p>Without such heuristics, we can also rather quickly arrive at contradictions. For example, the majority of scholars who specialize in Allah&rsquo;s existence believe that Allah is the One True God, and the majority of scholars who specialize in Yahweh&rsquo;s existence believe that Yahweh is the One True God. Consistency isn&rsquo;t everything, but contradictions like this should still be a warning sign. <a class="reversefootnote" title="return to article" href="#fnref:6">&nbsp;↩</a></p> </li> <li id="fn:7"> <p>According to the <a href="">PhilPapers Surveys</a>, 72.8% of philosophers are atheists, 14.6% are theists, and 12.6% categorized themselves as &ldquo;other.&rdquo; If we look only at metaphysicians, atheism remains dominant at 73.7%. If we look only at analytic philosophers, we again see atheism at 76.3%. 
As for physicists: <a href="">Larson &amp; Witham (1997)</a> found that 77.9% of physicists and astronomers are disbelievers, and <a href="">Pew Research Center (2009)</a> found that 71% of physicists and astronomers did not believe in a god. <a class="reversefootnote" title="return to article" href="#fnref:7">&nbsp;↩</a></p> </li> <li id="fn:8"> <p>Muller &amp; Bostrom (forthcoming). &ldquo;Future Progress in Artificial Intelligence: A Poll Among Experts.&rdquo; <a class="reversefootnote" title="return to article" href="#fnref:8">&nbsp;↩</a></p> </li> <li id="fn:9"> <p>But, this is unclear. First, I haven&rsquo;t read the forthcoming paper, so I don&rsquo;t yet have the full results of the survey, along with all its important caveats. Second, distributions of expert opinion can vary widely between polls. For example, <a href="">Schlosshauer et al. (2013)</a> reports the results of a poll given to participants in a 2011 quantum foundations conference (mostly physicists). When asked &ldquo;When will we have a working and useful quantum computer?&rdquo;, 9% said &ldquo;within 10 years,&rdquo; 42% said &ldquo;10&ndash;25 years,&rdquo; 30% said &ldquo;25&ndash;50 years,&rdquo; 0% said &ldquo;50&ndash;100 years,&rdquo; and 15% said &ldquo;never.&rdquo; But when the exact same questions were asked of participants at another quantum foundations conference just two years later, <a href="">Norsen &amp; Nelson (2013)</a> report, the distribution of opinion was substantially different: 9% said &ldquo;within 10 years,&rdquo; 22% said &ldquo;10&ndash;25 years,&rdquo; 20% said &ldquo;25&ndash;50 years,&rdquo; 21% said &ldquo;50&ndash;100 years,&rdquo; and 12% said &ldquo;never.&rdquo; <a class="reversefootnote" title="return to article" href="#fnref:9">&nbsp;↩</a></p> </li> <li id="fn:10"> <p>I say &ldquo;they&rdquo; in this paragraph, but I consider myself to be a plausible candidate for an &ldquo;AGI impact expert,&rdquo; in that I&rsquo;m unusually familiar with the arguments 
and evidence typically brought to bear on questions of long-term AI outcomes. I <em>also</em> don&rsquo;t have a uniquely good track record on predicting long-term AI outcomes, nor am I among the discovered &ldquo;super forecasters.&rdquo; I haven&rsquo;t participated in IARPA&rsquo;s forecasting tournaments myself because it would just be too time consuming. I would, however, very much like to see these super forecasters grouped into teams and tasked with forecasting longer-term outcomes, so that we can begin to gather scientific data on which psychological and computational methods result in the best predictive outcomes when considering long-term questions. Given how long it takes to acquire these data, we should start as soon as possible. <a class="reversefootnote" title="return to article" href="#fnref:10">&nbsp;↩</a></p> </li> <li id="fn:11"> <p><a href="">Weiss &amp; Shanteau (2012)</a> would call them &ldquo;privileged experts.&rdquo; <a class="reversefootnote" title="return to article" href="#fnref:11">&nbsp;↩</a></p> </li> <li id="fn:12"> <p>Beckstead&rsquo;s &ldquo;elite common sense&rdquo; prior and my &ldquo;mildly extrapolated elite opinion&rdquo; method are epistemic notions that involve some kind of idealization or extrapolation of opinion. One earlier such proposal in social epistemology was Habermas&rsquo; &ldquo;ideal speech situation,&rdquo; a situation of unlimited discussion between free and equal humans. See Habermas&rsquo; &ldquo;Wahrheitstheorien&rdquo; in <a href="">Schulz &amp; Fahrenbach (1973)</a> or, for an English description, <a href="">Geuss (1981)</a>, pp. 65&ndash;66. See also the discussion in <a href="">Tucker (2003)</a>, pp. 502&ndash;504. <a class="reversefootnote" title="return to article" href="#fnref:12">&nbsp;↩</a></p> </li> <li id="fn:13"> <p>Beckstead calls his method the &ldquo;elite common sense&rdquo; prior. I&rsquo;ve named my method differently for two reasons. 
First, I want to distinguish MEEO from Beckstead&rsquo;s prior, since I&rsquo;m using the method for a slightly different purpose. Second, I think &ldquo;elite common sense&rdquo; is a confusing term even for Beckstead&rsquo;s prior, since there&rsquo;s some extrapolation of views going on. But also, it&rsquo;s only a &ldquo;mild&rdquo; extrapolation &mdash; e.g. we aren&rsquo;t asking what elites would think if they knew <em>everything</em>, or if they could rewrite their cognitive software for better reasoning accuracy. <a class="reversefootnote" title="return to article" href="#fnref:13">&nbsp;↩</a></p> </li> <li id="fn:14"> <p>My rough impression is that among the people who seem to have thought long and hard about AGI outcomes, and seem to me to exhibit fairly good epistemic practices on most issues, my view on AGI outcomes is still an outlier in its pessimism about the likelihood of desirable outcomes. But it&rsquo;s hard to tell: there haven&rsquo;t been systematic surveys of the important-to-me experts on the issue. I also wonder whether my views about long-term AGI outcomes are more a matter of seriously tackling a contrarian <em>question</em> rather than being a matter of having a particularly contrarian <em>view</em>. On this latter point, see <a href="">this Facebook discussion</a>. <a class="reversefootnote" title="return to article" href="#fnref:14">&nbsp;↩</a></p> </li> <li id="fn:15"> <p>I haven&rsquo;t seen a poll of cryobiologists on the likely future technological feasibility of cryonics. Even if there were such polls, I&rsquo;d wonder whether cryobiologists <em>also</em> had the relevant philosophical and neuroscientific expertise. I should mention that I&rsquo;m not personally signed up for cryonics, for <a href="/lw/guy/open_thread_march_115_2013/8kin">these reasons</a>. 
<a class="reversefootnote" title="return to article" href="#fnref:15">&nbsp;↩</a></p> </li> </ol> </small></p> lukeprog kFiz7Etau5HNMdKdx 2014-03-11T17:42:49.788Z Futurism's Track Record <p>It would be nice (and expensive) to get a systematic survey on this, but my impressions <a id="fnref:1" class="footnote" title="see footnote" href="#fn:1">[1]</a> after tracking down lots of past technology predictions, and reading histories of technological speculation and invention, and reading about &ldquo;elite common sense&rdquo; at various times in the past, are that:</p> <ul> <li>Elite common sense at a given time almost always <em>massively</em> underestimates what will be technologically feasible in the future.</li> <li>&ldquo;Futurists&rdquo; in history tend to be far more accurate about what will be technologically feasible (when they don&rsquo;t grossly violate known physics), but they are often too optimistic about timelines, and (like everyone else) show little ability to predict (1) the long-term social consequences of future technologies, or (2) the details of which (technologically feasible; successfully prototyped) things will make commercial sense, or be popular products.</li> </ul> <p>Naturally, as someone who thinks it&rsquo;s incredibly important to predict the long-term future as well as we can while also avoiding overconfidence, I try to put myself in a position to learn what past futurists were doing right, and what they were doing wrong. For example, I recommend: Be a fox not a hedgehog. Do calibration training. Know how your brain works. Build quantitative models even if you don&rsquo;t believe the outputs, <a href="/lw/jfm/another_critique_of_effective_altruism/aagj">so that</a> specific pieces of the model are easier to attack and update. Have broad confidence intervals over the timing of innovations. Remember to forecast future developments by looking at trends in many inputs to innovation, not just the &ldquo;calendar years&rdquo; input. 
Use <a href="/lw/hzu/model_combination_and_adjustment/">model combination</a>. Study history and learn from it. Etc.</p> <p>Anyway: do others who have studied the history of futurism, elite common sense, innovation, etc. have different impressions about futurism&rsquo;s track record? And, anybody want to do a PhD thesis examining futurism&rsquo;s track record? Or on some piece of it, <em>&agrave; la</em> <a href="">this</a> or <a href="">this</a> or <a href="/lw/diz/kurzweils_predictions_good_accuracy_poor/">this</a>? :)</p> <div class="footnotes"> <hr /> <ol> <li id="fn:1"> <p>I should explain one additional piece of reasoning which contributes to my impressions on the matter. How do I think about futurist predictions of technologies that haven&rsquo;t yet been definitely demonstrated to be technologically feasible or infeasible? For these, I try to use something like the <a href="/lw/jl7/tricky_bets_and_truthtracking_fields/">truth-tracking fields proxy</a>. E.g. very few intellectual elites (outside Turing, von Neumann, Good, etc.) in 1955 thought AGI would be technologically feasible. By 1980, we&rsquo;d made a bunch of progress in computing and AI and neuroscience, and a much greater proportion of intellectual elites came to think AGI would be technologically feasible. Today, I think the proportion is even greater. The issue hasn&rsquo;t been &ldquo;definitely decided&rdquo; yet (from a social point of view), but things are strongly trending in favor of Good and Turing, and against (e.g.) Dreyfus. 
(Say that aloud in a British accent and try not to grin; I dare you!)</p> <p>I presented a mildly drunk intro to applied rationality, followed by a 2-hour Q&amp;A that, naturally, wandered into the subject of why AI will inevitably eat the Earth. I must have been fairly compelling despite the beer, because at one point I noticed the bartenders were leaning uncomfortably over one end of the bar in order to hear me, ignoring thirsty customers at the other end.</p> <p>Anyhoo, at one point I was talking about the role of formal knowledge in applied rationality, so I explained <a href="/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/">Solomonoff&rsquo;s lightsaber</a> and why it made me think the wave function never collapses.</p> <p>Someone &mdash; I can&rsquo;t recall who; let&rsquo;s say &ldquo;Bob&rdquo; &mdash; wisely asked, &ldquo;But if quantum interpretations all predict the same observations, what does it mean for you to say the wave function never collapses? What do you <em>anticipate</em>?&rdquo; <a id="fnref:1" class="footnote" title="see footnote" href="#fn:1">[1]</a></p> <p>Now, I don&rsquo;t actually know whether the <a href="">usual proposals</a> for experimental tests of collapse make sense, so instead I answered:</p> <blockquote> <p>Well, I think theoretical physics is truth-tracking enough that it <em>eventually</em> converges toward true theories, so one thing I anticipate as a result of favoring a no-collapse view is that a significantly greater fraction of physicists will reject collapse in 20 years, compared to today.</p> </blockquote> <p>Had Bob and I wanted to bet on whether the wave function collapses or not, that would have been an awfully tricky bet to settle. But if we roughly agree on the truth-trackingness of physics as a field, then we can use the consensus of physicists a decade or two from now as a proxy for physical truth, and bet on that instead.</p> <p>This won&rsquo;t work for some fields. 
For example, philosophy sometimes looks more like a random walk than a truth-tracking inquiry &mdash; or, more charitably, it tracks truth on the scale of <em>centuries</em> rather than <em>decades</em>. Did you know that one year after the cover of <em>TIME</em> asked &ldquo;Is God dead?&rdquo;, a philosopher named Alvin Plantinga launched a <a href="">renaissance in Christian philosophy</a>, such that theism and Christian particularism were <em>more</em> commonly defended by analytic philosophers in the 1970s than they were in the 1930s? I also have the impression that moral realism was a more popular view in the 1990s than it was in the 1970s, and that physicalism is less common today than it was in the 1960s, but I&rsquo;m less sure about those.</p> <p>You can also do this for bets that are hard to settle for a different kind of reason, e.g. an <a href="/lw/ie/the_apocalypse_bet/">apocalypse bet</a>. <a id="fnref:2" class="footnote" title="see footnote" href="#fn:2">[2]</a> Suppose Bob and I want to bet on whether smarter-than-human AI is technologically feasible. Trouble is, if it&rsquo;s ever proven that superhuman AI is feasible, that event might overthrow the global economy, making it hard to collect the bet, or at least pointless.</p> <p>But suppose Bob and I agree that AI scientists, or computer scientists, or technology advisors to first-world governments, or some other set of experts, is likely to converge toward the true answer on the feasibility of superhuman AI as time passes, as humanity learns more, etc. Then we can instead make a bet on whether it will be the case, 20 years from now, that a significantly increased or decreased fraction of those experts will think superhuman AI is feasible.</p> <p>Often, there won&rsquo;t be acceptable polls of the experts at both times, for settling the bet. But domain experts typically have a general sense of whether some view has become more or less common in their field over time. 
So Bob and I could agree to poll a randomly chosen subset of our chosen expert community 20 years from now, asking them how common the view in question is at that time and how common it was 20 years earlier, and settle our bet that way.</p> <p>Getting the details right for this sort of long-term bet isn&rsquo;t trivial, but I don't see a fatal flaw. Is there a fatal flaw in the idea that I&rsquo;ve missed? <a id="fnref:3" class="footnote" title="see footnote" href="#fn:3">[3]</a></p> <p>&nbsp;</p> <div class="footnotes"> <hr /> <ol> <li id="fn:1"> <p>I can&rsquo;t recall exactly how the conversation went, but it was <em>something</em> like this. <a class="reversefootnote" title="return to article" href="#fnref:1">&nbsp;↩</a></p> </li> <li id="fn:2"> <p>See also Jones, <a href="">How to bet on bad futures</a>. <a class="reversefootnote" title="return to article" href="#fnref:2">&nbsp;↩</a></p> </li> <li id="fn:3"> <p>I also doubt I&rsquo;m the first person to describe this idea in writing: please link to other articles making this point if you know of any. <a class="reversefootnote" title="return to article" href="#fnref:3">&nbsp;↩</a></p> </li> </ol></div> lukeprog LzyN9wzEdfS3j5SmT 2014-01-29T08:52:38.889Z MIRI's Winter 2013 Matching Challenge <p><strong>Update</strong>: The fundraiser has been completed! Details <a href="">here</a>. The original post follows...</p> <p>&nbsp;</p> <p>&nbsp;</p> <p><small>(Cross-posted from <a href="">MIRI's blog</a>. 
<a href="">MIRI</a> maintains Less Wrong, with generous help from <a href="">Trike Apps</a>, and much of the core content is written by salaried MIRI staff members.)</small></p> <p>Thanks to <a href="">Peter Thiel</a>, every donation made to MIRI between now and January 15th, 2014 will be <strong>matched dollar-for-dollar</strong>!</p> <p>Also,&nbsp;<strong>gifts from "new large donors" will be matched 3x!</strong> That is, if you've given less than $5k to SIAI/MIRI ever, and you now give or pledge $5k or more, Thiel will donate $3 for every dollar you give or pledge.</p> <p>We don't know whether we'll be able to offer the 3:1 matching ever again, so if you're capable of giving $5k or more, we encourage you to take advantage of the opportunity while you can. Remember that:</p> <ul> <li>If you prefer to give monthly, no problem! If you pledge 6 months of monthly donations, your full 6-month pledge will be the donation amount to be matched. So if you give monthly, you can get 3:1 matching for only $834/mo (or $417/mo if you get matching from your employer).</li> <li>We accept <a href="">Bitcoin</a> (BTC) and <a href="">Ripple</a> (XRP), both of which have recently jumped in value. If the market value of your Bitcoin or Ripple is $5k or more on the day you make the donation, this will count for matching.</li> <li>If your employer matches your donations at 1:1 (check <a href="">here</a>), then you can take advantage of Thiel's 3:1 matching by giving as little as $2,500 (because it's $5k after corporate matching).</li> </ul> <p><em>Please email <a href="mailto:"></a>&nbsp;if you intend on leveraging corporate matching or would like to pledge&nbsp;6 months of monthly donations, so that we can properly account for your contributions towards the fundraiser.</em></p> <p>Thiel's total match is capped at $250,000. The total amount raised will depend on how many people take advantage of 3:1 matching. 
We don't anticipate being able to hit the $250k cap without substantial use of 3:1 matching &mdash; so if you haven't given $5k thus far, please consider giving/pledging $5k or more during this drive. (If you'd like to know the total amount of your past donations to MIRI, just ask <a href="mailto:"></a>.)</p> <p align="center"><img src="" alt="" /></p> <p style="text-align: center;">Now is your chance to <strong>double or quadruple your impact</strong> in funding our <a href="">research program</a>.</p> <p align="center"><big><a href="">Donate Today</a></big></p> <p align="center"><img class="img-rounded shadowed aligncenter" src="" alt="" /></p> <h3>Accomplishments Since Our July 2013 Fundraiser Launched:</h3> <ul> <li>Held three <strong><a href="">research workshops</a></strong>, including our <a href="">first European workshop</a>.</li> <li><a href=""><strong>Talks</strong> at MIT and Harvard</a>, by Eliezer Yudkowsky and Paul Christiano.</li> <li>Yudkowsky is blogging more <strong>Open Problems in Friendly AI</strong>...&nbsp;<a href="">on Facebook</a>! 
(They're also being written up in a more conventional format.)</li> <li>New <strong>papers</strong>: (1)&nbsp;<a href="/r/discussion/lw/i8i/algorithmic_progress_in_six_domains/">Algorithmic Progress in Six Domains</a>; (2)&nbsp;<a href="">Embryo Selection for Cognitive Enhancement</a>; (3)&nbsp;<a href="">Racing to the Precipice</a>; (4) <a href="">Predicting AGI: What can we say when we know so little?</a></li> <li>New <strong>ebook</strong>: <a href=""><em>The Hanson-Yudkowsky AI-Foom Debate</em></a>.</li> <li>New <strong>analyses</strong>: (1) <a href="">From Philosophy to Math to Engineering</a>; (2)&nbsp;<a href="">How well will policy-makers handle AGI?</a> (3)&nbsp;<a href="">How effectively can we plan for future decades?</a> (4)&nbsp;<a href="">Transparency in Safety-Critical Systems</a>; (5)&nbsp;<a href="">Mathematical Proofs Improve But Don&rsquo;t Guarantee Security, Safety, and Friendliness</a>; (6)&nbsp;<a href="">What is AGI?</a> (7)&nbsp;<a href="">AI Risk and the Security Mindset</a>; (8)&nbsp;<a href="">Richard Posner on AI Dangers</a>; (9)&nbsp;<a href="">Russell and Norvig on Friendly AI</a>.</li> <li>New <strong>expert interviews</strong>: <a href="">Greg Morrisett</a> (Harvard),&nbsp;<a href="">Robin Hanson</a> (GMU), <a href="">Paul Rosenbloom</a> (USC), <a href="">Stephen Hsu</a> (MSU),&nbsp;<a href="">Markus Schmidt</a> (Biofaction), <a href="">Laurent Orseau</a> (AgroParisTech), <a href="">Holden Karnofsky</a> (GiveWell),&nbsp;<a href="">Bas Steunebrink</a> (IDSIA),&nbsp;<a href="">Hadi Esmaeilzadeh</a>&nbsp;(GIT), <a href="">Nick Beckstead</a> (Oxford),&nbsp;<a href="">Benja Fallenstein</a> (Bristol), <a href="">Roman Yampolskiy</a> (U Louisville),&nbsp;<a href="">Ben Goertzel</a> (Novamente), and <a href="">James Miller</a> (Smith College).</li> <li>With <a href="">Leverage Research</a>, we held a San Francisco <strong>book launch party</strong> for James Barratt's&nbsp;<a href=""><em>Our Final Invention</em></a>, which discusses MIRI's 
work at length. (If you live in the Bay Area and would like to be notified of local events, please tell!)</li> </ul> <h3>How Will Marginal Funds Be Used?</h3> <ul> <li><strong>Hiring Friendly AI researchers</strong>, identified through our workshops, as they become available for full-time work at MIRI.</li> <li>Running <strong>more workshops</strong> (next one begins&nbsp;<a href="">Dec. 14th</a>), to make concrete Friendly AI research progress, to introduce new researchers to open problems in Friendly AI, and to identify candidates for MIRI to hire.</li> <li>Describing more <strong>open problems in Friendly AI</strong>. Our current strategy is for Yudkowsky to explain them as quickly as possible via Facebook discussion, followed by more structured explanations written by others in collaboration with Yudkowsky.</li> <li>Improving humanity's <strong>strategic understanding</strong> of what to do about superintelligence. In the coming months this will include (1) additional interviews and analyses on our blog, (2) a reader's guide for Nick Bostrom's forthcoming&nbsp;<a href=""><em>Superintelligence</em></a><a href=""> book</a>, and (3) an introductory ebook currently titled&nbsp;<em>Smarter Than Us.</em></li> </ul> <p>Other projects are still being surveyed for likely cost and impact.</p> <p><small>We appreciate your support for our work! <a href="">Donate now</a>, and seize a better than usual chance to move our work forward. If you have questions about donating, please contact Louie Helm at (510) 717-1477.</small></p> lukeprog arpzMZyzod3Nsqr4J 2013-12-17T20:41:28.303Z A model of AI development <p>FHI has released a new tech report:</p> <p>Armstrong, Bostrom, and Shulman. <a href="">Racing to the Precipice: a Model of Artificial Intelligence Development.</a></p> <p>Abstract:</p> <blockquote> <p>This paper presents a simple model of an AI arms race, where several development teams race to build the first AI. 
Under the assumption that the first AI will be very powerful and transformative, each team is incentivized to finish first &mdash; by skimping on safety precautions if need be. This paper presents the Nash equilibrium of this process, where each team takes the correct amount of safety precautions in the arms race. Having extra development teams and extra enmity between teams can increase the danger of an AI-disaster, especially if risk taking is more important than skill in developing the AI. Surprisingly, information also increases the risks: the more teams know about each others&rsquo; capabilities (and about their own), the more the danger increases.</p> </blockquote> <p>The paper is short and readable; discuss it here!</p> <p>But my main reason for posting is to ask this question: <strong>What is the most similar work that you know of?</strong> I'd expect people to do this kind of thing for modeling nuclear security risks, and maybe other things, but I don't happen to know of other analyses like this.</p> lukeprog 5zqxzJ7cz4d9coqBZ 2013-11-28T13:48:20.083Z Gelman Against Parsimony <p>In <a href="">two</a> <a href="">posts</a>, Bayesian <a href="">stats guru</a> Andrew Gelman argues against parsimony, though it seems to be favored 'round these parts, in particular <a href="/lw/jp/occams_razor/">Solomonoff Induction</a> and <a href="">BIC</a> as <a href="/lw/cw1/open_problems_related_to_solomonoff_induction/">imperfect</a> formalizations of <a href="">Occam's Razor.</a></p> <p>Gelman says:</p> <blockquote> <p>I&rsquo;ve never seen any good general justification for parsimony...</p> <p>Maybe it&rsquo;s because I work in social science, but my feeling is: if you can approximate reality with just a few parameters, fine. If you can use more parameters to fold in more information, that&rsquo;s even better.</p> <p>In practice, I often use simple models&ndash;because they are less effort to fit and, especially, to understand. 
But I don&rsquo;t kid myself that they&rsquo;re better than more complicated efforts!</p> <p>My favorite quote on this comes from Radford Neal&rsquo;s book, <em>Bayesian Learning for Neural Networks</em>, pp. 103-104: "Sometimes a simple model will outperform a more complex model . . . Nevertheless, I believe that deliberately limiting the complexity of the model is not fruitful when the problem is evidently complex. Instead, if a simple model is found that outperforms some particular complex model, the appropriate response is to define a different complex model that captures whatever aspect of the problem led to the simple model performing well."</p> <p>...</p> <p>...ideas like minimum-description-length, parsimony, and Akaike&rsquo;s information criterion, are particularly relevant when models are estimated using least squares, maximum likelihood, or some other similar optimization method.</p> <p>When using hierarchical models, we can avoid overfitting and get good descriptions without using parsimony&ndash;the idea is that the many parameters of the model are themselves modeled. <a href="">See here for some discussion of Radford Neal&rsquo;s ideas in favor of complex models</a>, and <a href="">see here</a> for an example from my own applied research.</p> </blockquote> lukeprog az2vsi8ugTWXZ3Lq2 2013-11-24T15:23:32.773Z From Philosophy to Math to Engineering <p><small>Cross-posted from the <a href="">MIRI blog</a>.</small></p> <p>For centuries, philosophers wondered how we could learn what causes what. Some argued it was impossible, or possible only via experiment. 
Others kept <a href="/lw/8ns/hack_away_at_the_edges/">hacking away</a> at the problem, <a href="">clarifying ideas</a> like <em>counterfactual</em> and <em>probability</em> and <em>correlation</em> by making them more precise and coherent.</p> <p>Then, in the 1990s, a breakthrough: Judea Pearl and others <a href="">showed</a> that, in principle, we can sometimes infer causal relations from data even without experiment, via the mathematical machinery of probabilistic graphical models.</p> <p>Next, engineers used this mathematical insight to write <a href="">software</a> that can, in seconds, infer causal relations from a data set of observations.</p> <p>Across the centuries, researchers had toiled away, pushing our understanding of causality from philosophy to math to engineering.</p> <p align="center"><a href=""><img class="aligncenter size-full wp-image-10570" src="" alt="From Philosophy to Math to Engineering (small)" /></a></p> <p>And so it is with&nbsp;<a href="">Friendly AI</a>&nbsp;research. Current progress on each sub-problem of Friendly AI lies somewhere on a spectrum from philosophy to math to engineering.</p> <!--more--> <p>We began with some fuzzy philosophical ideas of what we want from a Friendly AI (FAI).&nbsp;We want it to be benevolent and powerful enough to eliminate suffering, protect us from natural catastrophes, help us explore the universe, and otherwise make life&nbsp;<em>awesome</em>. We want FAI to allow for moral progress, rather than immediately reshape the galaxy according to whatever our current values happen to be. We want FAI to remain beneficent even as it rewrites its core algorithms to become smarter and smarter. And so on.</p> <p>Small pieces of this philosophical puzzle have been broken off and <a href="/lw/hok/link_scott_aaronson_on_free_will/9546?context=1#comments">turned into</a> math, e.g. <a href="">Pearlian causal analysis</a> and <a href="/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/">Solomonoff induction</a>. 
Pearl's math has since been used to produce causal inference software that can be run on today's computers, whereas engineers have thus far succeeded in implementing (tractable approximations of) Solomonoff induction&nbsp;only for <a href="">very limited applications</a>.</p> <p>Toy versions of two pieces of the "stable self-modification" problem were transformed into math problems in <a href="">de Blanc (2011)</a> and <a href="">Yudkowsky &amp; Herreshoff (2013)</a>, though this was done to enable further insight via formal analysis, not to assert that these small pieces of the philosophical problem had been&nbsp;<em>solved</em>&nbsp;to the level of math.</p> <p>Thanks to Patrick LaVictoire and other <a href="">MIRI workshop</a> participants,<sup>1</sup> Douglas Hofstadter's FAI-relevant philosophical idea of "<a href="">superrationality</a>" seems to have been, for the most part, successfully <a href="">transformed</a> into math, and a bit of the engineering work has also <a href="">been done</a>.</p> <p>I say "seems" because, while humans are fairly skilled at turning math into feats of practical engineering, we seem to be <a href="/lw/4zs/philosophy_a_diseased_discipline/">much</a> <a href=""><em>less</em></a> <a href="">skilled</a> at turning philosophy into math, without leaving anything out. For example, some very sophisticated thinkers have <a href="">claimed</a>&nbsp;that "Solomonoff induction solves the problem of inductive inference," or <a href="">that</a> "Solomonoff has successfully invented a perfect theory of induction."&nbsp;And indeed, it certainly&nbsp;<em>seems</em> like a truly universal induction procedure. 
However, it <a href="/lw/cw1/open_problems_related_to_solomonoff_induction/">turns out</a> that Solomonoff induction <em>doesn't</em> fully solve the problem of inductive inference, for relatively subtle reasons.<sup>2</sup></p> <p>Unfortunately, philosophical mistakes like this could be fatal when humanity builds the first self-improving AGI (<a href="">Yudkowsky 2008</a>).<sup>3</sup> FAI-relevant philosophical work is, as Nick Bostrom says, "philosophy with a deadline."</p> <p>&nbsp;</p> <p>&nbsp;</p> <p><sup>1</sup> <small>And before them, <a href="">Moshe Tennenholtz</a>.</small></p> <p><sup>2</sup> <small>Yudkowsky plans to write more about how to improve on Solomonoff induction, later.</small></p> <p><small></small><sup>3</sup> <small>This is a specific instance of a problem Peter Ludlow described like <a href="">this</a>: "the technological curve is pulling away from the philosophy curve very rapidly and is about to leave it completely behind."</small></p> lukeprog TDTnut9pkT6hdKrf8 2013-11-04T15:43:55.704Z The Inefficiency of Theoretical Discovery <p><small>Previously: <a href="">Why Neglect Big Topics</a>.</small></p> <p>Why was there no serious philosophical discussion of <a href="">normative uncertainty</a> until <a href="">1989</a>, given that all the necessary ideas and tools were present at the time of Jeremy Bentham?</p> <p>Why did no professional philosopher analyze I.J. 
Good&rsquo;s important &ldquo;intelligence explosion&rdquo; thesis (from 1959<sup><a id="fnref:1" href="#fn:1">1</a></sup>) until <a href="">2010</a>?</p> <p>Why was <a href="/lw/h1k/reflection_in_probabilistic_logic/">reflectively consistent probabilistic metamathematics</a> not described until 2013, given that the ideas it builds on go back at least to the 1940s?</p> <p>Why did it take <a href="">until 2003</a> for professional philosophers to begin updating causal decision theory for the age of causal Bayes nets, and until <a href=",%20Rationality%20and%20Success.pdf">2013</a> to formulate a reliabilist metatheory of rationality?</p> <p>By analogy to <a href="">financial market efficiency</a>, I like to say that &ldquo;theoretical discovery is fairly inefficient.&rdquo; That is: there are often large, unnecessary delays in theoretical discovery.</p> <p>This shouldn&rsquo;t surprise us. For one thing, there aren&rsquo;t necessarily large personal rewards for making theoretical progress. But it does mean that those who <em>do</em> care about certain kinds of theoretical progress shouldn&rsquo;t necessarily think that progress will be hard. There is often low-hanging fruit to be plucked by investigators who know where to look.</p> <p>Where should we look for low-hanging fruit? I&rsquo;d guess that theoretical progress may be relatively easy where:</p> <ol> <li>Progress has no obvious, immediately profitable applications.</li> <li>Relatively few quality-adjusted researcher hours have been devoted to the problem.</li> <li>New tools or theoretical advances open up promising new angles of attack.</li> <li>Progress is only valuable to those with unusual views.</li> </ol> <p>These guesses make sense of the abundant low-hanging fruit in much of MIRI&rsquo;s theoretical research, with the glaring exception of decision theory. Our September decision theory workshop revealed plenty of low-hanging fruit, but why should that be? 
Decision theory is widely applied in <a href="">multi-agent systems</a>, and in philosophy it&rsquo;s clear that visible progress in decision theory is one way to &ldquo;make a name&rdquo; for oneself and advance one&rsquo;s career. Tons of quality-adjusted researcher hours have been devoted to the problem. Yes, new theoretical advances (e.g. causal Bayes nets and program equilibrium) open up promising new angles of attack, but they don&rsquo;t seem necessary for much of the low-hanging fruit discovered thus far. And progress in decision theory is definitely not valuable only to those with unusual views. What gives?</p> <p>Anyway, three questions:</p> <ol> <li>Do you agree about the relative inefficiency of theoretical discovery?</li> <li>What are some other signs of likely low-hanging fruit for theoretical progress?</li> <li>What&rsquo;s up with decision theory having so much low-hanging fruit?</li> </ol> <div><br /></div> <p><sup>1</sup><small> <a id="fn:1" href="">Good (1959)</a> is the earliest statement of the intelligence explosion: &ldquo;Once a machine is designed that is good enough&hellip; it can be put to work designing an even better machine. At this point an &lsquo;explosion&rsquo; will clearly occur; all the problems of science and technology will be handed over to machines and it will no longer be necessary for people to work. Whether this will lead to a Utopia or to the extermination of the human race will depend on how the problem is handled by the machines. The important thing will be to give them the aim of serving human beings.&rdquo; The term itself, &ldquo;intelligence explosion,&rdquo; originates with <a href="">Good (1965)</a>.
Technically, artist and philosopher&nbsp;<a href="">Stefan Themerson</a> wrote a "philosophical analysis" of Good's intelligence explosion thesis called&nbsp;<em><a href="">Special Branch</a>,</em>&nbsp;published in 1972, but by "philosophical analysis" I have in mind a more analytic, argumentative kind of philosophical analysis than is found in Themerson's literary&nbsp;<em>Special Branch</em>.&nbsp;<a href="#fnref:1">&nbsp;↩</a></small></p> lukeprog jZtyt67PS2QfehkE8 2013-11-03T21:26:52.468Z Intelligence Amplification and Friendly AI <p><small>Part of the series <a href="/lw/ajm/ai_risk_and_opportunity_a_strategic_analysis/">AI Risk and Opportunity: A Strategic Analysis</a>. Previous articles on this topic: <a href="/lw/6mi/some_thoughts_on_singularity_strategies/">Some Thoughts on Singularity Strategies</a>, <a href="/lw/10l/intelligence_enhancement_as_existential_risk/">Intelligence enhancement as existential risk mitigation</a>, <a href="/lw/6j9/outline_of_possible_singularity_scenarios_that/">Outline of possible Singularity scenarios that are not completely disastrous</a>.</small></p> <p>Below are my quickly-sketched thoughts on intelligence amplification and FAI, without much effort put into organization or clarity, and without many references.<a id="fnref:1" class="footnote" title="see footnote" href="#fn:1">[1]</a> But first, I briefly review some strategies for increasing the odds of FAI, one of which is to work on intelligence amplification (IA).</p> <h3 id="somepossiblebestcurrentoptionsforincreasingtheoddsoffai"><a id="more"></a><br /></h3> <h3>Some possible &ldquo;best current options&rdquo; for increasing the odds of FAI</h3> <p>Suppose you find yourself in a pre-AGI world,<a id="fnref:2" class="footnote" title="see footnote" href="#fn:2">[2]</a> and you&rsquo;ve been convinced that the status quo world is unstable, and within the next couple centuries we&rsquo;ll likely<a id="fnref:3" class="footnote" title="see footnote" href="#fn:3">[3]</a>
settle into one of <a href="">four stable outcomes</a>: FAI, uFAI, non-AI extinction, or a sufficiently powerful global government which can prevent AGI development<a id="fnref:4" class="footnote" title="see footnote" href="#fn:4">[4]</a>. And you <em>totally</em> prefer the FAI option. What should you do to get there?</p> <ul> <li>Obvious direct approach: start solving the technical problems that must be solved to get FAI: goal stability under self-modification, decision algorithms that handle counterfactuals and logical uncertainty properly, indirect normativity, and so on. (<a href="">MIRI&rsquo;s work</a>, some FHI work.)</li> <li>Do strategy research, to potentially identify superior alternatives to the other items on this list, or superior versions of the things on this list already. (<a href="">FHI&rsquo;s work</a>, some MIRI work, etc.)</li> <li>Accelerate IA technologies, so that smarter humans can tackle FAI. (E.g. <a href="">cognitive genomics</a>.)</li> <li>Try to make sure we get high-fidelity <a href="">WBEs</a> before AGI, without WBE work first enabling dangerous neuromorphic AGI. (<a href="">Dalrymple&rsquo;s work?</a>)</li> <li>Improve political and scientific institutions so that the world is <a href="">more likely to handle AGI wisely</a> when it comes. (<a href="">Prediction markets?</a> <a href="">Vannevar Group?</a>)</li> <li>Capacity-building. Grow the rationality community, the x-risk reduction community, the effective altruism movement, etc.</li> <li>Other stuff. (More in later posts.)</li> </ul> <h3 id="theiaroute"><br /></h3> <h3>The IA route</h3> <p>Below are some key considerations about the IA route. I&rsquo;ve numbered them so they&rsquo;re easy to refer to later.
My discussion assumes <a href="">MIRI&rsquo;s basic assumptions</a>, including timelines similar to <a href="">my own AGI timelines</a>.</p> <ol> <li>Maybe FAI is so hard that we can <em>only</em> get FAI with a large team of IQ 200+ humans, whereas uFAI can be built by a field of IQ 130&ndash;170 humans with a few more decades and lots of computing power and trial and error. So to have any chance of FAI at all, we&rsquo;ve got to do WBE or IA first.</li> <li>You could accelerate FAI relative to AGI if you somehow kept IA technology secret, for use only by FAI researchers (and maybe their supporters).</li> <li>Powerful IA technologies would likely get wide adoption, and accelerate economic growth and scientific progress in general. If you think <a href="/lw/hoz/do_earths_with_slower_economic_growth_have_a/">Earths with slower economic growth have a better chance at FAI</a>, that could be bad for our FAI chances. If you think the opposite, then broad acceleration from IA could be good for FAI.</li> <li>Maybe IA increases one&rsquo;s &ldquo;rationality&rdquo; and &ldquo;philosophical ability&rdquo; (in scare quotes because we mostly don&rsquo;t know how to measure them yet), and thus IA increases the frequency with which people will realize the risks of AGI and do sane things about it.</li> <li>Maybe IA increases the role of intelligence and designer understanding, relative to hardware and accumulated knowledge, in AI development.<a id="fnref:5" class="footnote" title="see footnote" href="#fn:5">[5]</a></li> </ol> <p>Below are my thoughts about all this. These are only <em>my current views</em>: other MIRI personnel (including Eliezer) disagree with some of the points below, and I wouldn&rsquo;t be surprised to change my mind about some of these things after extended discussion (hopefully in public, on Less Wrong).</p> <p>I doubt (1) is true. 
I think IQ 130&ndash;170 humans could figure out FAI in 50&ndash;150 years if they were trying to solve the right problems, and if FAI development wasn&rsquo;t in a death race with the strictly easier problem of uFAI. If normal smart humans <em>aren&rsquo;t</em> capable of building FAI in that timeframe, that&rsquo;s probably for lack of rationality and philosophical skill, not for lack of IQ. And I&rsquo;m not confident that rationality and philosophical skill predictably improve with IQ after about IQ 140. It&rsquo;s a good sign that <a href="">atheism increases with IQ after IQ 140</a>, but on the other hand I know too many high-IQ people who think that (e.g.) an AI that maximizes K-complexity is a win, and also there&rsquo;s <a href="">Stanovich&rsquo;s research</a> on how IQ and rationality come apart. For these reasons, I&rsquo;m also not convinced that (4) would have a large positive effect on our FAI chances.</p> <p>Can we train people in rationality and philosophical skill beyond that of, say, the 95th percentile Less Wronger? <a href="">CFAR</a> has plans to find out, but they need to grow a lot first to execute such an ambitious research program.</p> <p>(2) looks awfully hard, unless we can find a powerful IA technique that also, say, gives you a 10% chance of cancer. Then some EAs devoted to building FAI might just use the technique, and maybe the AI community in general doesn&rsquo;t.</p> <p>(5) seems right, though I doubt it&rsquo;ll be a big enough effect to make a difference for the final outcome.</p> <p>I think (3) is the dominant consideration here, along with the worry about lacking the philosophical skill (but not IQ) to build FAI at all. At the moment, I (sadly) lean toward the view that slower Earths have a better chance at FAI.
(Much of my brain doesn&rsquo;t know this, though: I remember reading the <a href="">Summers news</a> with glee, and then remembering that on my current model this was actually bad news for FAI.)</p> <p>I could say more, but I&rsquo;ll stop for now and see what comes up in discussion.</p> <div class="footnotes"> <hr /> <ol> <li id="fn:1"> <p>My thanks to Justin Shovelain for sending me his <a href="/lw/6mi/some_thoughts_on_singularity_strategies/4ikh">old notes</a> on the &ldquo;IA first&rdquo; case, and to Wei Dai, Carl Shulman, and Eliezer Yudkowsky for their feedback on this post. <a class="reversefootnote" title="return to article" href="#fnref:1">&nbsp;↩</a></p> </li> <li id="fn:2"> <p>Not counting civilizations that might be simulating our world. This matters, but I won&rsquo;t analyze that here. <a class="reversefootnote" title="return to article" href="#fnref:2">&nbsp;↩</a></p> </li> <li id="fn:3"> <p>There are other possibilities. For example, there could be a global nuclear war that kills all but about 100,000 people, which could set back social, economic, and technological progress by centuries, thus delaying the crucial point in Earth&rsquo;s history in which it settles into one of the four stable outcomes. <a class="reversefootnote" title="return to article" href="#fnref:3">&nbsp;↩</a></p> </li> <li id="fn:4"> <p>And perhaps also advanced nanotechnology, intelligence amplification technologies, and whole brain emulation. <a class="reversefootnote" title="return to article" href="#fnref:4">&nbsp;↩</a></p> </li> <li id="fn:5"> <p>Thanks to Carl Shulman for making this point. <a class="reversefootnote" title="return to article" href="#fnref:5">&nbsp;↩</a></p> </li> </ol></div> lukeprog jmgyfDYDDYs7YpqJg 2013-09-27T01:09:15.978Z AI ebook cover design brainstorming <p>Thanks to everyone who brainstormed&nbsp;<a href="/r/discussion/lw/io3/help_us_name_a_short_primer_on_ai_risk/">possible titles for MIRI&rsquo;s upcoming ebook on machine intelligence</a>. 
Our leading contender for the book title is&nbsp;<em>Smarter than Us: The Rise of Machine Intelligence</em>.</p> <p>What we need now are suggestions for a book <strong>cover design</strong>. AI is hard to depict without falling back on cliches, such as a brain image mixed with computer circuitry, a humanoid robot, HAL, an imitation of <em><a href="">Creation of Adam</a></em>&nbsp;with human and robot fingers touching, or an imitation of <em><a href="">March of Progress</a></em>&nbsp;with an AI at the far right.</p> <p>A few ideas/examples:</p> <ol> <li> <p>Something that conveys &lsquo;AI&rsquo; in the middle (a computer screen? a server tower?) connected by arrow/wires/something to various &lsquo;skills/actions/influences&rsquo;, like giving a speech, flying unmanned spacecraft, doing science, predicting the stock market, etc., in an attempt to convey the diverse superpowers of a machine intelligence.</p> </li> <li> <p>A more minimalist text-only cover.</p> </li> <li> <p>A fairly minimal cover with just an ominous-looking server rack in the middle, with a few blinking lights and submerged in darkness around it. A bit like <a href=";qid=1380239258&amp;sr=8-4&amp;keywords=fade+lisa+mcmann">this cover</a>.</p> </li> <li> <p>Similar to the above, except a server farm along the bottom fading into the background, with a frame composition similar to&nbsp;<a href="">this</a>.</p> </li> <li> <p>A darkened, machine-gunned room with a laptop sitting alone on a desk, displaying the text of the title on the screen. 
(This is the scene from the first chapter, about a Terminator who encounters an unthreatening-looking laptop which ends up being way more powerful and dangerous than the Terminator because it is more intelligent.)</p> </li> </ol> <p>Alex Vermeer sketched the first four of these ideas:</p> <p><a href=""><img src="" alt="" /></a></p> <p>Some general inspiration may be found <a href="">here</a>.</p> <p>We think we want something kinda dramatic, rather than cartoony, but less epic and unbelievable than the <em><a href="">Facing the Intelligence Explosion</a></em> cover.</p> <p>Thoughts?</p> lukeprog wBaWgt45WWfv5ZTPD 2013-09-26T23:49:03.319Z Help us Optimize the Contents of the Sequences eBook <p>MIRI's ongoing effort to publish&nbsp;<a href="">the sequences</a> as an eBook has given us the opportunity to update their contents and organization.</p> <p>We're looking for suggested posts to <strong>reorder, add, or remove</strong>.</p> <p>To help with this, here is a breakdown of the <em>current planned contents of the eBook</em> and any currently planned modifications. 
Following that is a list of the most popular links within the sequences to posts that are <em>not included</em> therein.</p> <p>Now's a good time to suggest changes or improvements!</p> <p>&mdash;&mdash;&mdash;</p> <h2><a href="">Map and Territory</a></h2> <p>Added <a href="/lw/gp/whats_a_bias_again/">&hellip;What's a Bias Again?</a> because it's meant to immediately follow <a href="/lw/go/why_truth_and/">Why Truth, And&hellip;</a>.</p> <h2><a href="">Mysterious Answers to Mysterious Questions</a></h2> <p>No changes.</p> <h2><a href="">A Human's Guide to Words</a></h2> <p>No changes.</p> <h2><a href="">How to Actually Change Your Mind</a></h2> <h4><a href="">Politics is the Mind-Killer</a></h4> <p>Removed <a href="/lw/lt/the_robbers_cave_experiment/">The Robbers Cave Experiment</a> because it already appears in Death Spirals and the Cult Attractor, and there it appears in the original chronological order, which flows better.</p> <h4><a href="">Death Spirals and the Cult Attractor</a></h4> <p>Removed <a href="/lw/m2/the_litany_against_gurus/">The Litany Against Gurus</a> because it already appears in Politics is the Mind-Killer.</p> <h4><a href="">Seeing with Fresh Eyes</a></h4> <p>Removed <a href="/lw/m9/aschs_conformity_experiment/">Asch's Conformity Experiment</a> and <a href="/lw/mb/lonely_dissent/">Lonely Dissent</a> because they both appear at the end of Death Spirals. Removed <a href="/lw/s3/the_genetic_fallacy/">The Genetic Fallacy</a> because it's in the Metaethics sequence: that's where it falls chronologically and it fits better there with the surrounding posts.</p> <h4><a href="">Noticing Confusion</a></h4> <p><em>Removed this entire subsequence</em> because it is entirely contained within Mysterious Answers to Mysterious Questions.</p> <h4><a href="">Against Rationalization</a></h4> <p>Added <a href="/lw/kd/pascals_mugging_tiny_probabilities_of_vast/">Pascal's Mugging</a> (before Torture vs Dust Specks) because it explains the 3^^^3 notation.
Added <a href="/lw/kn/torture_vs_dust_specks/">Torture vs Dust Specks</a> before <a href="/lw/ko/a_case_study_of_motivated_continuation/">A Case Study of Motivated Continuation</a> because A Case Study refers to it frequently.</p> <h4><a href="">Against Doublethink</a></h4> <p>No changes.</p> <h4><a href="">Overly Convenient Excuses</a></h4> <p>Removed <a href="/lw/jr/how_to_convince_me_that_2_2_3/">How to Convince Me that 2+2=3</a> because it's already in Map &amp; Territory.</p> <h4><a href="">Letting Go</a></h4> <p>No change.</p> <h2><a href="">The Simple Math of Evolution</a></h2> <p>Added <a href="/lw/l1/evolutionary_psychology/">Evolutionary Psychology</a> because it fits nicely at the end and it's referred to by other posts many times.</p> <h2><a href="">Challenging the Difficult</a></h2> <p>No change.</p> <h2><a href="">Yudkowsky's Coming of Age</a></h2> <p>No change.</p> <h2><a href="">Reductionism</a></h2> <p>No change. (Includes the Zombies subsequence.)</p> <h2><a href="/lw/r5/the_quantum_physics_sequence/">Quantum Physics</a></h2> <p>No change. Doesn't include any "Preliminaries" posts, since they'd all be duplicates</p> <h2><a href="">Metaethics</a></h2> <p>No change.</p> <h2><a href="/lw/xy/the_fun_theory_sequence/">Fun Theory</a></h2> <p>No change.</p> <h2><a href="/lw/cz/the_craft_and_the_community/">The Craft and the Community</a></h2> <p>No change.</p> <h2>Appendix</h2> <p>Includes:</p> <ul> <li><a href="">The Simple Truth</a></li> <li><a href="">An Intuitive Explanation of Bayes' Theorem</a></li> <li><a href="">A Technical Explanation of Technical Explanation</a></li> </ul> <p>&mdash;&mdash;&mdash;</p> <p>Here are the most-frequently-referenced links within the sequences to posts outside of the sequences (with a count of three or more). 
This may help you notice posts that you think should be included in the sequences eBook.</p> <ul> <li><a href="/lw/nc/newcombs_problem_and_regret_of_rationality/">Newcomb's Problem and Regret of Rationality</a> =&gt; 24</li> <li><a href="/lw/o5/the_second_law_of_thermodynamics_and_engines_of/">The Second Law of Thermodynamics, and Engines of Cognition</a> =&gt; 22</li> <li><a href="/lw/l4/terminal_values_and_instrumental_values/">Terminal Values and Instrumental Values</a> =&gt; 16</li> <li><a href="/lw/jk/burdensome_details/">Burdensome Details</a> =&gt; 16</li> <li><a href="/lw/kg/expecting_short_inferential_distances/">Expecting Short Inferential Distances</a> =&gt; 15</li> <li><a href="/lw/l3/thou_art_godshatter/">Thou Art Godshatter</a> =&gt; 14</li> <li><a href="/lw/i8/religions_claim_to_be_nondisprovable/">Religion's Claim to be Non-Disprovable</a> =&gt; 14</li> <li><a href="/lw/hw/scope_insensitivity/">Scope Insensitivity</a> =&gt; 13</li> <li><a href="/lw/rc/the_ultimate_source/">The Ultimate Source</a> =&gt; 13</li> <li><a href="/lw/kj/no_one_knows_what_science_doesnt_know/">No One Knows What Science Doesn't Know</a> =&gt; 12</li> <li><a href="/lw/rm/the_design_space_of_mindsingeneral/">The Design Space of Minds-In-General</a> =&gt; 11</li> <li><a href="/lw/hs/think_like_reality/">Think Like Reality</a> =&gt; 10</li> <li><a href="/lw/rd/passing_the_recursive_buck/">Passing the Recursive Buck</a> =&gt; 9</li> <li><a href="/lw/le/lost_purposes/">Lost Purposes</a> =&gt; 9</li> <li><a href="/lw/ld/the_hidden_complexity_of_wishes/">The Hidden Complexity of Wishes</a> =&gt; 9</li> <li><a href="/lw/in/scientific_evidence_legal_evidence_rational/">Scientific Evidence, Legal Evidence, Rational Evidence</a> =&gt; 9</li> <li><a href="/lw/k2/a_priori/">A Priori</a> =&gt; 8</li> <li><a href="/lw/mt/beautiful_probability/">Beautiful Probability</a> =&gt; 8</li> <li><a href="/lw/rb/possibility_and_couldness/">Possibility and Could-ness</a> =&gt; 8</li> <li><a 
href="/lw/j6/why_is_the_future_so_absurd/">Why is the Future So Absurd?</a> =&gt; 8</li> <li><a href="/lw/lq/fake_utility_functions/">Fake Utility Functions</a> =&gt; 8</li> <li><a href="/lw/j5/availability/">Availability</a> =&gt; 7</li> <li><a href="/lw/rf/ghosts_in_the_machine/">Ghosts in the Machine</a> =&gt; 7</li> <li><a href="/lw/x5/nonsentient_optimizers/">Nonsentient Optimizers</a> =&gt; 7</li> <li><a href="/lw/lp/fake_fake_utility_functions/">Fake Fake Utility Functions</a> =&gt; 7</li> <li><a href="/lw/o7/searching_for_bayesstructure/">Searching for Bayes-Structure</a> =&gt; 7</li> <li><a href="/lw/gv/outside_the_laboratory/">Outside the Laboratory</a> =&gt; 7</li> <li><a href="/lw/tf/dreams_of_ai_design/">Dreams of AI Design</a> =&gt; 6</li> <li><a href="/lw/rj/surface_analogies_and_deep_causes/">Surface Analogies and Deep Causes</a> =&gt; 6</li> <li><a href="/lw/l9/artificial_addition/">Artificial Addition</a> =&gt; 6</li> <li><a href="/lw/lb/not_for_the_sake_of_happiness_alone/">Not for the Sake of Happiness (Alone)</a> =&gt; 6</li> <li><a href="/lw/h3/superstimuli_and_the_collapse_of_western/">Superstimuli and the Collapse of Western Civilization</a> =&gt; 5</li> <li><a href="/lw/q4/decoherence_is_falsifiable_and_testable/">Decoherence is Falsifiable and Testable</a> =&gt; 5</li> <li><a href="/lw/t6/the_cartoon_guide_to_l&ouml;bs_theorem/">The Cartoon Guide to L&ouml;b's Theorem</a> =&gt; 5</li> <li><a href="/lw/x7/cant_unbirth_a_child/">Can't Unbirth a Child</a> =&gt; 5</li> <li><a href="/lw/rl/the_psychological_unity_of_humankind/">The Psychological Unity of Humankind</a> =&gt; 5</li> <li><a href="/lw/so/humans_in_funny_suits/">Humans in Funny Suits</a> =&gt; 5</li> <li><a href="/lw/7i/rationality_is_systematized_winning/">Rationality is Systematized Winning</a> =&gt; 5</li> <li><a href="/lw/tn/the_true_prisoners_dilemma/">The True Prisoner's Dilemma</a> =&gt; 5</li> <li><a href="/lw/m7/zen_and_the_art_of_rationality/">Zen and the Art of 
Rationality</a> =&gt; 5</li> <li><a href="/lw/n9/the_intuitions_behind_utilitarianism/">The "Intuitions" Behind "Utilitarianism"</a> =&gt; 5</li> <li><a href="/lw/ws/for_the_people_who_are_still_alive/">For The People Who Are Still Alive</a> =&gt; 4</li> <li><a href="/lw/mg/the_twoparty_swindle/">The Two-Party Swindle</a> =&gt; 4</li> <li><a href="/lw/ji/conjunction_fallacy/">Conjunction Fallacy</a> =&gt; 4</li> <li><a href="/lw/st/anthropomorphic_optimism/">Anthropomorphic Optimism</a> =&gt; 4</li> <li><a href="/lw/gr/the_modesty_argument/">The Modesty Argument</a> =&gt; 4</li> <li><a href="">Rational evidence</a> =&gt; 4</li> <li><a href="/lw/hk/priors_as_mathematical_objects/">Priors as Mathematical Objects</a> =&gt; 4</li> <li><a href="/lw/a6/the_unfinished_mystery_of_the_shangrila_diet/">The Unfinished Mystery of the Shangri-La Diet/</a> =&gt; 4</li> <li><a href="/lw/ig/i_defy_the_data/">I Defy the Data!</a> =&gt; 4</li> <li><a href="/lw/9j/bystander_apathy/">Bystander Apathy</a> =&gt; 3</li> <li><a href="/lw/ja/we_dont_really_want_your_participation/">We Don't Really Want Your Participation</a> =&gt; 3</li> <li><a href="/lw/wq/you_only_live_twice/">You Only Live Twice</a> =&gt; 3</li> <li><a href="/lw/vm/lawful_creativity/">Lawful Creativity</a> =&gt; 3</li> <li><a href="/lw/hx/one_life_against_the_world/">One Life Against the World</a> =&gt; 3</li> <li><a href="">Locate the hypothesis</a> =&gt; 3</li> <li><a href="/lw/ym/cynical_about_cynicism/">Cynical About Cynicism</a> =&gt; 3</li> <li><a href="/lw/tx/optimization/">Optimization</a> =&gt; 3</li> <li><a href="/lw/ke/illusion_of_transparency_why_no_one_understands/">Illusion of Transparency: Why No One Understands You</a> =&gt; 3</li> <li><a href="/lw/sp/detached_lever_fallacy/">Detached Lever Fallacy</a> =&gt; 3</li> <li><a href="/lw/n3/circular_altruism/">Circular Altruism</a> =&gt; 3</li> <li><a href="/lw/my/the_allais_paradox/">The Allais Paradox</a> =&gt; 3</li> <li><a 
href="/lw/gn/the_martial_art_of_rationality/">The Martial Art of Rationality</a> =&gt; 3</li> <li><a href="/lw/ky/fake_morality/">Fake Morality</a> =&gt; 3</li> </ul> <p>Suggestions?</p> lukeprog ZsDmi6XbLu3LF3gi7 2013-09-19T04:31:20.391Z Help us name a short primer on AI risk! <p>MIRI will soon publish a short book by Stuart Armstrong on the topic of AI risk. The book is <em>currently</em> titled &ldquo;AI-Risk Primer&rdquo; by default, but we&rsquo;re looking for something a little more catchy (just as we did for the upcoming <a href="/lw/h7t/help_us_name_the_sequences_ebook/">Sequences ebook</a>).</p> <p>The book is meant to be accessible and avoids technical jargon. Here is the table of contents and a few snippets from the book, to give you an idea of the content and style:</p> <ol> <li>Terminator versus the AI</li> <li>Strength versus Intelligence</li> <li>What Is Intelligence? Can We Achieve It Artificially?</li> <li>How Powerful Could AIs Become?</li> <li>Talking to an Alien Mind</li> <li>Our Values Are Complex and Fragile</li> <li>What, Precisely, Do We Really (Really) Want?</li> <li>We Need to Get It All <em>Exactly</em> Right</li> <li>Listen to the Sound of Absent Experts</li> <li>A Summary</li> <li>That&rsquo;s Where <em>You</em> Come In &hellip;</li> </ol> <blockquote> <p>The Terminator is a creature from our primordial nightmares: tall, strong, aggressive, and nearly indestructible. We&rsquo;re strongly primed to fear such a being&mdash;it resembles the lions, tigers, and bears that our ancestors so feared when they wandered alone on the savanna and tundra.</p> <p>&hellip;</p> <p>As a species, we humans haven&rsquo;t achieved success through our natural armor plating, our claws, our razor-sharp teeth, or our poison-filled stingers. Though we have reasonably efficient bodies, it&rsquo;s our <em>brains</em> that have made the difference. 
It&rsquo;s through our social, cultural, and technological intelligence that we have raised ourselves to our current position.</p> <p>&hellip;</p> <p>Consider what would happen if an AI ever achieved the ability to function socially&mdash;to hold conversations with a reasonable facsimile of human fluency. For humans to increase their social skills, they need to go through painful trial and error processes, scrounge hints from more articulate individuals or from television, or try to hone their instincts by having dozens of conversations. An AI could go through a similar process, undeterred by social embarrassment, and with perfect memory. But it could also sift through vast databases of previous human conversations, analyze thousands of publications on human psychology, anticipate where conversations are leading many steps in advance, and always pick the right tone and pace to respond with. Imagine a human who, every time they opened their mouth, had spent a solid year pondering and researching whether their response would be maximally effective.
That is what a social AI would be like.</p> </blockquote> <p>So, title suggestions?</p> lukeprog atyyMJ6z6d3mpvMjm 2013-09-17T20:35:34.895Z Help MIRI run its Oxford UK workshop in November <p>This <a href="">November 23-29</a>, MIRI is running its first European research workshop, at Oxford University.</p> <p>We need somebody familiar with Oxford UK to (1) help us locate and secure lodging for the workshop participants ahead of time, (2) order food for delivery during the workshop, and (3) generally handle on-the-ground logistics.</p> <p><a href="">Apply here</a> for the chance to:</p> <p><ol> <li>Work with, and hang out with, MIRI staff.</li> <li>Spend some time (during breaks) with the workshop participants.</li> <li>Help MIRI work towards its goals.</li> </ol></p> <p>You can either volunteer to help us for free, or indicate how much you'd need to be paid per hour to take the job.</p> lukeprog F6wRgFH9kz63LzTAB 2013-09-15T03:13:36.553Z How well will policy-makers handle AGI? (initial findings) <p><small>Cross-posted from <a href="">MIRI's blog</a>.</small></p> <p>MIRI's <a href="">mission</a> is "to ensure that the creation of smarter-than-human intelligence has a positive impact." One policy-relevant question is: <strong>How well should we expect policy makers to handle the invention of AGI, and what does this imply about how much effort to put into AGI risk mitigation vs. other concerns?</strong></p> <p><strong></strong>To investigate these questions, we asked&nbsp;<a href="">Jonah Sinick</a>&nbsp;to examine how well policy-makers handled past events analogous in some ways to the future invention of AGI, and summarize his findings. We pre-committed to publishing our entire email exchange on the topic (with minor editing), just as with our project on <a href="">how well we can plan for future decades</a>. 
The post below is a summary of findings from&nbsp;<a href="">our full email exchange (.docx)</a>&nbsp;so far.</p> <p>As with our investigation of how well we can plan for future decades,<strong>&nbsp;we decided to publish our initial findings after investigating only a few historical cases</strong>. This allows us to gain feedback on the value of the project, as well as suggestions for improvement, before continuing. It also means that&nbsp;<strong>we aren't yet able to draw any confident conclusions about our core questions</strong>.</p> <p>The most significant results from this project so far are:</p> <ol> <li>We came up with a preliminary list of 6 seemingly-important ways in which a historical case could be analogous to the future invention of AGI, and evaluated several historical cases on these criteria.</li> <li>Climate change risk seems sufficiently disanalogous to AI risk that studying climate change mitigation efforts probably gives limited insight into how well policy-makers will deal with AGI risk: the expected damage of climate change appears to be very small relative to the expected damage due to AI risk, especially when one looks at expected damage to policy makers.</li> <li>The 2008 financial crisis appears, after a shallow investigation, to be sufficiently analogous to AGI risk that it should give us some small reason to be concerned that policy-makers will not manage the invention of AGI wisely.</li> <li>The risks to critical infrastructure from geomagnetic storms are far too small to be in the same reference class with risks from AGI.</li> <li>The eradication of smallpox is only somewhat analogous to the invention of AGI.</li> <li>Jonah performed very shallow investigations of how policy-makers have handled risks from cyberwarfare, chlorofluorocarbons, and the Cuban missile crisis, but these cases need more study before even "initial thoughts" can be given.</li> <li>We identified additional historical cases that could be investigated in the
future.</li> </ol> <p>Further details are given below. For sources and more, please see&nbsp;<a href="">our full email exchange (.docx)</a>.</p> <!--more--> <h3><br /></h3> <h3>6 ways a historical case can be analogous to the invention of AGI</h3> <p>In conversation, Jonah and I identified six features of the future invention of AGI that, if largely shared by a historical case, seem likely to allow the historical case to shed light on how well policy-makers will deal with the invention of AGI:</p> <ol> <li>AGI may become a major threat at a somewhat unpredictable time.</li> <li>AGI may become a threat when the world has very limited experience with it.</li> <li>A good outcome with AGI may require solving a difficult global coordination problem.</li> <li>Preparing for the AGI threat adequately may require lots of careful work in advance.</li> <li>Policy-makers have strong personal incentives to solve the AGI problem.</li> <li>A bad outcome with AGI would be a global disaster, and a good outcome with AGI would have global humanitarian benefit.</li> </ol> <p>More details on these criteria and their use are given in the second email of our full email exchange. &nbsp;</p> <h3><br /></h3> <h3>Risks from climate change</h3> <p>People began to see climate change as a potential problem in the early 1970s, but there was some ambiguity as to whether human activity was causing warming (because of carbon emissions) or cooling (because of smog particles). The first <a href="">IPCC</a> report was issued in 1990, and stated that there was substantial anthropogenic global warming due to greenhouse gases. By 2001, there was a strong scientific consensus behind this claim.
While policy-makers' response to risks from climate change might seem likely to shed light on whether policy-makers will deal wisely with AGI, there are some important disanalogies:</p> <ul> <li>The harms of global warming are expected to fall disproportionately on disadvantaged people in poor countries, not on policy-makers. So policy-makers have much less personal incentive to solve the problem than is the case with AGI.</li> <li>In the median case, humanitarian losses from global warming <a href="/lw/hi1/potential_impacts_of_climate_change/">seem to be</a> about 20% of GDP per year for the poorest people.&nbsp;In light of anticipated economic development and diminishing marginal utility, this is a <em>much</em> smaller negative humanitarian impact than AGI risk (even ignoring future generations). For example, economist Indur Goklany <a href="">estimated</a> that "through 2085, only 13% of [deaths] from hunger, malaria, and extreme weather events (including coastal flooding from sea level rise) should be from [global] warming."</li> <li>Thus, potential analogies to AGI risk come from climate change's <em>tail risk</em>. But there seem to be few credentialed scientists who have views compatible with a prediction that even a temperature increase in the 95th percentile of the probability distribution (by 2100) would do more than just begin to render some regions of Earth uninhabitable.</li> <li>According to the <a href="">5th IPCC</a> report, the risk of human extinction from climate change seems very low: "Some thresholds that all would consider dangerous have no support in the literature as having a non-negligible chance of occurring.
For instance, a 'runaway greenhouse effect'&mdash;analogous to Venus&mdash;appears to have virtually no chance of being induced by anthropogenic activities."</li> </ul> &nbsp; <h3>The 2008 financial crisis</h3> <p>Jonah did a shallow investigation of the 2008 financial crisis, but the preliminary findings are interesting enough for us to describe them in some detail. Jonah's impressions about the relevance of the 2008 financial crisis to the AGI situation are based on a reading of&nbsp;<em><a href="">After the Music Stopped</a></em> by Alan Blinder, who was the vice chairman of the Federal Reserve for 1.5 years during the Clinton administration. Naturally, many additional sources should be consulted before drawing firm conclusions about the relevance of policy-makers' handling of the financial crisis to their likelihood of handling AGI wisely.</p> <p>Blinder's seven main factors leading to the recession are (p. 27):</p> <ol> <li>Inflated asset prices, especially of houses (the housing bubble) but also of certain securities (the bond bubble);</li> <li>Excessive leverage (heavy borrowing) throughout the financial system and the economy;</li> <li>Lax financial regulation, both in terms of what the law left unregulated and how poorly the various regulators performed their duties;</li> <li>Disgraceful banking practices in subprime and other mortgage lending;</li> <li>The crazy-quilt of unregulated securities and derivatives that were built on these bad mortgages;</li> <li>The abysmal performance of the statistical rating agencies, which helped the crazy-quilt get stitched together; and</li> <li>The perverse compensation systems in many financial institutions that created powerful incentives to go for broke.</li> </ol> <p>With these factors in mind, let's look at the strength of the analogy between the 2008 financial crisis and the future invention of AGI:</p> <ol> <li>Almost tautologically, a financial crisis is unexpected, though we do know that financial crises happen
with some regularity.</li> <li>The 2008 financial crisis was not unprecedented in kind, only in degree (in some ways).</li> <li>Avoiding the 2008 financial crisis would have required solving a difficult national coordination problem, rather than a global coordination problem. Still, this analogy seems fairly strong. As Jonah writes, "While the 2008 financial crisis seems to have been largely US specific (while having broader ramifications), there's a sense in which preventing it would have required solving a difficult coordination problem. The causes of the crisis are diffuse, and responsibility falls on many distinct classes of actors."</li> <li>Jonah's analysis wasn't deep enough to discern whether the 2008 financial crisis is analogous to the future invention of AGI with regard to how much careful work would have been required in advance to avert the risk.</li> <li>In contrast with AI risk, the financial crisis wasn't a life or death matter for almost any of the actors involved. Many people in finance didn't have incentives to avert the financial crisis: indeed, some of the key figures involved were rewarded with large bonuses. But it's plausible that government decision makers had incentive to avert a financial crisis for reputational reasons, and many interest groups are adversely affected by financial crises.</li> <li>Once again, the scale of the financial crisis wasn't on a par with AI risk, but it was closer to that scale than the other risks Jonah looked at in this initial investigation.</li> </ol> <p>Jonah concluded that "the conglomerate of poor decisions [leading up to] the 2008 financial crisis constitute a small but significant challenge to the view that [policy-makers] will successfully address AI risk." 
His reasons were:</p> <ol> <li>The magnitude of the financial crisis is nontrivial (even if small) compared with the magnitude of the AI risk problem (not counting future generations).</li> <li>The financial crisis adversely affected a very broad range of people, apparently including a large fraction of people in positions of power (this seems truer here than in the case of climate change). A recession is bad for most businesses and for most workers. Yet these actors weren't able to recognize the problem, coordinate, and prevent it.</li> <li>The reasons that policy-makers weren't able to recognize the problem, coordinate, and prevent it seem related to reasons why people might not recognize AI risk as a problem, coordinate, and prevent it. First, several&nbsp;key actors involved seem to have exhibited conspicuous overconfidence and neglect of tail risk (e.g. Summers and others ignoring Brooksley Born's warnings about excessive leverage). If true, this shows that people in positions of power are notably susceptible to overconfidence and neglect of tail risk. Avoiding overconfidence and giving sufficient weight to tail risk may be crucial in mitigating AI risk.&nbsp;Second, one gets a sense that the bystander effect and the tragedy of the commons played a large role in the case of the financial crisis. There are risks that weren't adequately addressed because doing so didn't fall under the purview of any of the existing government agencies. This may have corresponded to a mentality of the type "that's not my job &mdash; somebody else can take care of it." If people think that AI risk is large, then they might think "if nobody's going to take care of it then I will, because otherwise I'm going to die."
But if people think that AI risk is small, they might think "This probably won't be really bad for me, and even though someone should take care of it, it's not going to be me."</li> </ol> &nbsp; <h3>Risks from geomagnetic storms</h3> <p>Large geomagnetic storms like the <a href="">1859 Carrington Event</a> are infrequent, but could cause serious damage to satellites and critical infrastructure. See <a href="">this OECD report</a> for an overview.</p> <p>Jonah's investigation revealed a wide range in expected losses from geomagnetic storms, from $30 million per year to $30 billion per year. But even this larger number amounts to $1.5 trillion in expected losses over the next 50 years. Compare this with the losses from the 2008 financial crisis (roughly a 1 in 50 years event), which are&nbsp;<a href="">estimated</a> to be about $13 trillion for Americans alone.</p> <p>Though serious, the risks from geomagnetic storms appear to be small enough to be disanalogous to the future invention of AGI. &nbsp;</p> <h3><br /></h3> <h3>The eradication of smallpox</h3> <p><a href="">Smallpox</a>, after killing more than 500 million people over the past several millennia, was eradicated in 1979 after a decades-long global eradication effort. 
Though a hallmark of successful global coordination, it doesn't seem especially relevant to whether policy-makers will handle the invention of AGI wisely.</p> <p>Here's how the eradication of smallpox does or doesn't fit our criteria for being analogous to the future invention of AGI:</p> <ol> <li>Smallpox didn't arrive at an unpredictable time; it arrived millennia before the eradication campaign.</li> <li>The world didn't have experience eradicating a disease before smallpox was eradicated, but a number of nations had already eliminated smallpox within their own borders.</li> <li>Smallpox eradication required solving a difficult global coordination problem, but in a way disanalogous to the coordination that AGI safety may require (see the other points on this list).</li> <li>Preparing for smallpox eradication required effort in advance in some sense, but the effort had mostly already been exerted before the campaign was announced.</li> <li>Nations without smallpox had an incentive to eradicate it globally so that they would no longer have to spend money immunizing citizens against the risk of the virus being (re)introduced to their countries. For example, in 1968, the United States spent about $100 million on routine smallpox vaccinations.</li> <li>Smallpox can be thought of as a global disaster: as of 1966, about 2 million people died of smallpox each year.</li> </ol> &nbsp; <h3>Shallow investigations of&nbsp;risks from cyberwarfare, chlorofluorocarbons, and the Cuban missile crisis</h3> <p>Jonah's shallow investigation of risks from cyberwarfare revealed that experts disagree significantly about the nature and scope of these risks.
It's likely that dozens of hours of research would be required to develop a well-informed model of these risks.</p> <p>To investigate how policy-makers handled the discovery that chlorofluorocarbons (CFCs) depleted the ozone layer, Jonah summarized the first 100 pages of&nbsp;<em><a href="">Ozone Crisis: The 15-Year Evolution of a Sudden Global Emergency</a></em>&nbsp;(see our full email exchange for the summary).&nbsp;This historical case seems worth investigating further, and may be a case of policy-makers solving a global risk with surprising swiftness, though whether the response was appropriately prompt is debated.</p> <p>Jonah also did a shallow investigation of the <a href="">Cuban missile crisis</a>. It's difficult to assess how likely it was for the crisis to escalate into a global nuclear war, but it appears that policy-makers made many poor decisions leading up to and during the Cuban missile crisis (see our full email exchange for a list). Jonah concludes:</p> <blockquote>even if the probability of the Cuban missile crisis leading to an all out nuclear war was only 1% or so, the risk was still sufficiently great so that the way in which the actors handled the situation is evidence against elites handling the creation of AI well. (This contrasts with the situation with climate change, in that elites had strong personal incentives to avert an all-out nuclear war.)</blockquote> <p>However, this is only a guess based on a shallow investigation, and should not be taken too seriously before a more thorough investigation of the historical facts can be made. 
&nbsp;</p> <h3><br /></h3> <h3>Additional historical cases that could be investigated</h3> <p>We also identified additional historical cases that could be investigated for potentially informative analogies to the future invention of AGI:</p> <ol> <li>The 2003 <a href="">Iraq War</a></li> <li>The frequency with which dictators are deposed or assassinated due to "unforced errors" they made</li> <li><a href="">Nuclear proliferation</a></li> <li><a href="">Recombinant DNA</a></li> <li><a href="">Molecular nanotechnology</a></li> <li><a href="">Near Earth objects</a></li> <li>Pandemics and potential pandemics (e.g. <a href="">HIV</a>,&nbsp;<a href="">SARS</a>)</li> </ol> lukeprog 4JGQtc4ZgmdsrtBhB 2013-09-12T07:21:30.255Z How effectively can we plan for future decades? (initial findings) <p><small>Cross-posted from <a href="">MIRI's blog</a>.</small></p> <p>MIRI aims to do research now that increases humanity's odds of successfully managing important AI-related events that are at least&nbsp;<a href="">a few decades away</a>. Thus, we'd like to know: To what degree can we take actions now that will predictably have positive effects on AI-related events decades from now? And, which factors predict success and failure in planning for decades-distant events that share important features with future AI events?</p> <p>Or, more generally:&nbsp;<strong>How effectively can humans plan for future decades? Which factors predict success and failure in planning for future decades?</strong></p> <p><strong></strong>To investigate these questions, we asked <a href="">Jonah Sinick</a> to examine historical attempts to plan for future decades and summarize his findings. We pre-committed to publishing our entire email exchange on the topic (with minor editing), just as Jonah had done previously with GiveWell&nbsp;<a href="">on the subject of insecticide-treated nets</a>. 
The post below is a summary of findings from&nbsp;<a href="">our full email exchange (.docx)</a>&nbsp;so far.</p> <p><strong>We decided to publish our initial findings after investigating only a few historical cases</strong>. This allows us to gain feedback on the value of the project, as well as suggestions for improvement, before continuing. It also means that <strong>we aren't yet able to draw any confident conclusions about our core questions</strong>.</p> <p>The most significant results from this project so far are:</p> <ol> <li>Jonah's initial impressions about&nbsp;<em>The Limits to Growth</em>&nbsp;(1972), a famous forecasting study on population and resource depletion, were that its long-term predictions were mostly wrong, and also that its authors (at the time of writing it) didn't have credentials that would predict forecasting success. Upon reading the book, its critics, and its defenders, Jonah concluded that many critics and defenders had seriously misrepresented the book, and that the book itself&nbsp;exhibits high epistemic standards and does not make significant predictions that turned out to be wrong.</li> <li>Svante Arrhenius (1859-1927) did a surprisingly good job of climate modeling given the limited information available to him, but he was nevertheless wrong about two important policy-relevant factors. First, he failed to predict how quickly carbon emissions would increase.
Second, he predicted that global warming would have positive rather than negative humanitarian impacts.&nbsp;If more people had taken Arrhenius' predictions seriously and burned fossil fuels faster for humanitarian reasons, then today's scientific consensus on the effects of climate change suggests that the humanitarian effects would have been negative.</li> <li>In retrospect, Norbert Wiener's concerns about the medium-term dangers of increased automation appear naive, and it seems likely that even at the time, better epistemic practices would have yielded substantially better predictions.</li> <li>Upon initial investigation, several historical cases seemed unlikely to shed substantial light on our core questions: Norman&nbsp;Rasmussen's analysis of the safety of nuclear power plants, Leo Szilard's choice to keep secret a patent related to nuclear chain reactions,&nbsp;Cold War planning efforts aimed at winning the war decades later, and several cases of "ethically concerned scientists."</li> <li>Upon initial investigation, two historical cases seemed like they&nbsp;<em>might</em> shed light on our core questions, but only after many hours of additional research on each of them: China's one-child policy, and the Ford Foundation's impact on India's 1991 financial crisis.</li> <li>We listed many other historical cases that may be worth investigating.</li> </ol> <p>The project has also produced a chapter-by-chapter list of some key lessons from Nate Silver's&nbsp;<a href=""><em>The Signal and the Noise</em></a>, available <a href="/lw/hxx/some_highlights_from_nate_silvers_the_signal_and/">here</a>.</p> <p>Further details are given below.
For sources and more, please see&nbsp;<a href="">our full email exchange (.docx)</a>.</p> <!--more--> <p><a id="more"></a></p> <h3>The Limits to Growth</h3> <p>In his initial look at&nbsp;<em><a href="">The Limits to Growth</a></em>&nbsp;(1972), Jonah noted that the authors were fairly young at the time of writing (the oldest was 31), and they lacked credentials in long-term forecasting. Moreover, it appeared that&nbsp;<em>Limits to Growth</em> predicted a sort of doomsday scenario -&nbsp;<em>&agrave; la</em> Ehrlich's&nbsp;<em><a href="">The Population Bomb</a></em>&nbsp;(1968) - that had failed to occur. In particular, it appeared that&nbsp;<em>Limits to Growth</em> had failed to appreciate <a href="">Julian Simon</a>'s point that other resources would substitute for depleted resources. Upon reading the book, Jonah found that:</p> <ul> <li>The book avoids strong, unconditional claims.&nbsp;Its core claim is that <em>if</em> exponential growth of resource usage continues, <em>then</em> there will likely be a societal collapse by 2100.</li> <li>The book was careful to qualify its claims, and met high epistemic standards. Jonah wrote: "The book doesn't look naive even in retrospect, which is impressive given that it was written 40 years ago."</li> <li>The authors discuss substitutability at length in chapter 4.</li> <li>The book discusses mitigation at a theoretical level, but doesn't give explicit policy recommendations, perhaps because the issues involved were too complex.</li> </ul> <h3><br /></h3> <h3>Svante Arrhenius</h3> <p>Derived more than a century ago, <a href="">Svante Arrhenius</a>'&nbsp;equation for how the Earth's temperature varies as a function of the concentration of carbon dioxide is the same equation used today. But while Arrhenius' climate modeling was impressive given the information available to him at the time, he failed to predict (by a large margin) how quickly fossil fuels would be burned.
He also predicted that global warming would have positive humanitarian effects,&nbsp;but based on our current understanding, the expected humanitarian effects seem negative.</p> <p>Arrhenius's predictions were mostly ignored at the time, but had people taken them seriously and burned fossil fuels more quickly,&nbsp;the humanitarian effects would probably have been negative. &nbsp;</p> <h3><br /></h3> <h3>Norbert Wiener</h3> <p>As Jonah explains,&nbsp;<a href="">Norbert Wiener</a> (1894-1964) "believed that unless countermeasures were taken, automation would render low skilled workers unemployable. He believed that this would precipitate an economic crisis far worse than that of the Great Depression." Nearly 50 years after his death, this <a href="/lw/hh4/the_robots_ai_and_unemployment_antifaq/">doesn't seem to have happened</a>&nbsp;much, though it may eventually happen.</p> <p>Jonah's impression is that Wiener had strong views on the subject, doesn't seem to have updated much in response to incoming evidence, and seems to have relied too heavily on what <a href="">Berlin (1953)</a> and&nbsp;<a href="">Tetlock (2005)</a> described as "hedgehog" thinking: "the fox knows many things, but the hedgehog knows one big thing." &nbsp;</p> <h3><br /></h3> <h3>Some historical cases that seem unlikely to shed light on our questions</h3> <p><a href="">Rasmussen (1975)</a> is a probabilistic risk assessment of nuclear power plants, written before any nuclear power plant disasters had occurred. However, Jonah concluded that this historical case wasn't very relevant to our specific questions about taking actions useful for decades-distant AI outcomes, in part because the issue is highly domain-specific, and because the report makes a large number of small predictions rather than a few salient predictions.</p> <p>In 1936,&nbsp;<a href="">Le&oacute; Szil&aacute;rd</a> assigned his&nbsp;chain reaction patent in a way that ensured it would be kept secret from the Nazis.
However, Jonah concluded:</p> <blockquote>I think that this isn't a good example of a nontrivial future prediction. The destructive potential seems pretty obvious &ndash; anything that produces a huge amount of concentrated energy can be used in a destructive way. As for the Nazis, Szilard was himself Jewish and fled from the Nazis, and it seems pretty obvious that one wouldn't want a dangerous regime to acquire knowledge that has destructive potential. It would be more impressive if the early developers of quantum mechanics had kept their research secret on account of dimly being aware of the possibility of destructive potential, or if Szilard had filed his patent secretly in a hypothetical world in which the Nazi regime was years away.</blockquote> <p>Jonah briefly investigated Cold War efforts aimed at winning the war decades later, but concluded that it was "too difficult to tie these efforts to war outcomes."</p> <p>Jonah also investigated Kaj Sotala's&nbsp;<a href="/lw/gln/a_brief_history_of_ethically_concerned_scientists/">A brief history of ethically concerned scientists</a>. Most of the historical cases cited there didn't seem relevant to this project. Many cases involved "scientists concealing their discoveries out of concern that they would be used for military purposes," but this seems to be an increasingly irrelevant sort of historical case, since science and technology markets are now relatively efficient, and concealing a discovery rarely delays progress for very long (e.g. see <a href="">Kelly 2011</a>). Other cases involved efforts to reduce the use of dangerous weapons for which the threat was imminent during the time of the advocacy. There may be lessons among these cases, but they appear to be of relatively weak relevance to our current project. 
&nbsp;&nbsp;</p> <h3><br /></h3> <h3>Some historical cases that might shed light on our questions with much additional research</h3> <p>Jonah performed an initial investigation of the impacts of China's <a href="">one-child policy</a>, and concluded that it would take many, many hours of research to determine both the sign and the magnitude of the policy's impacts.</p> <p>Jonah also investigated a case involving the <a href="">Ford Foundation</a>. In <a href="">a conversation with GiveWell</a>, Lant Pritchett said:</p> <blockquote>[One] example of transformative philanthropy is related to India&rsquo;s recovery from its economic crisis of 1991. Other countries had previously had similar crises and failed to implement good policies that would have allowed them to recover from their crises. By way of contrast, India implemented good policies and recovered in a short time frame. Most of the key actors who ensured that India implemented the policies that it did were influenced by a think tank established by the Ford Foundation ten years before the crisis. The think tank exposed Indians to relevant ideas from the developed world about liberalization. The difference between (a) India&rsquo;s upward economic trajectory and (b) what its upward economic trajectory would have been if it had been unsuccessful in recovering from the 1991 crisis is in the trillions of dollars. As such, the Ford Foundation&rsquo;s investment in the think tank had a huge impact. For the ten years preceding the crisis, it looked like the think tank was having no impact, but it turned out to have a huge impact.</blockquote> <p>Unfortunately, Jonah was unable to find any sources or contacts that would allow him to check whether this story is true. 
&nbsp;</p> <h3><br /></h3> <h3>Other historical cases that might be worth investigating</h3> <p>Historical cases we identified but did not yet investigate include:</p> <ul> <li><a href="">Eric Drexler</a>'s early predictions about the feasibility and likely effects of nanotechnology.</li> <li>The <a href="">Asilomar conference on recombinant DNA</a></li> <li>Efforts to <a href="">detect asteroids before they threaten Earth</a></li> <li>The <a href="">Green Revolution</a></li> <li>The modern history of <a href="">cryptography</a></li> <li>Early efforts to <a href="">mitigate global warming</a></li> <li>Possible deliberate long term efforts to produce scientific breakthroughs (the transistor? the human genome?)</li> <li>Rachel Carson's&nbsp;<a href=""><em>Silent Spring</em></a> (1962)</li> <li>Paul Ehrlich's&nbsp;<a href=""><em>The Population Bomb</em></a> (1968)</li> <li>The Worldwatch Institute's <a href=""><em>State of the World</em></a> reports (since 1984)</li> <li>The WCED's&nbsp;<a href=""><em>Our Common Future</em></a> (1987)</li> </ul> &nbsp; lukeprog gYiXucYZBgWdWdxJe 2013-09-04T22:42:05.195Z Which subreddits should we create on Less Wrong? <p>Less Wrong is based on <a href="">reddit</a> code, which means we can create <a href="">subreddits</a> with relative ease.</p> <p>Right now we have two subreddits, Main and Discussion. These are distinguished not by subject matter, but by whether a post is the <em>type</em> of thing that might be promoted to the front page or not (e.g. a meetup announcement, or a particularly well-composed and useful post).</p> <p>As a result, almost everything is published to Discussion, and thus <strong>it is difficult for busy people to follow only the subjects they care about</strong>. 
More people will be able to engage if we split things into topic-specific subreddits, and make it easy to follow only what they care about.</p> <p>To make it easier for people to follow only what they care about, we're building the code for a Dashboard thingie.</p> <p>But we also need to figure out <em>which</em> subreddits to create, and we'd like community feedback about that.</p> <p>We'll probably start small, with just 1-5 new subreddits.</p> <p>Below are some initial ideas, to get the conversation started.</p> <p>&nbsp;</p> <p><strong>Idea 1</strong></p> <p> <ul> <li><em>Main</em>: still the place for things that might be promoted.</li> <li><em>Applied Rationality</em>: for articles about&nbsp;<a href="/lw/gs5/improving_human_rationality_through_cognitive/">what Jonathan Baron would call</a>&nbsp;descriptive and prescriptive rationality, for both epistemic and instrumental rationality (stuff about biases, self-improvement stuff, etc.).</li> <li><em>Normative Rationality</em>: for articles about what Baron would call normative rationality, for both epistemic and instrumental rationality (examining the foundations of probability theory, decision theory, anthropics, and lots of stuff that is called "philosophy").&nbsp;</li> <li><em>The Future</em>: for articles about forecasting, x-risk, and future technologies.</li> <li><em>Misc</em>:&nbsp;Discussion, renamed, for everything that doesn't belong in the other subreddits.</li> </ul> <strong></strong></p> <p>&nbsp;</p> <p><strong>Idea 2</strong></p> <p> <ul> <li><em>Main</em></li> <li><em>Epistemic Rationality</em>: for articles about how to figure out the world, spanning the descriptive, prescriptive, and normative.</li> <li><em>Instrumental Rationality</em>: for articles about how to take action to achieve your goals, spanning the descriptive, prescriptive, and normative. (One difficulty with the epistemic/instrumental split is that many (most?) 
applied rationality techniques seem to be relevant to both epistemic and instrumental rationality.)</li> <li><em>The Future</em></li> <li><em>Misc.</em></li> </ul> <div><br /></div> <div><br /></div> </p> lukeprog kFwtkx9wtSzssYv9W 2013-09-04T17:56:33.729Z Artificial explosion of the Sun: a new x-risk? <p><a href="">Bolonkin &amp; Friedlander (2013)</a> argues that it might be possible for "a dying dictator" to blow up the Sun, and thus destroy all life on Earth:</p> <blockquote> <p>The Sun contains ~74% hydrogen by weight. The isotope hydrogen-1 (99.985% of hydrogen in nature) is a usable fuel for fusion thermonuclear reactions. This reaction runs slowly within the Sun because its temperature is low (relative to the needs of nuclear reactions). If we create higher temperature and density in a limited region of the solar interior, we may be able to produce self-supporting detonation thermonuclear reactions that spread to the full solar volume. This is analogous to the triggering mechanisms in a thermonuclear bomb. Conditions within the bomb can be optimized in a small area to initiate ignition, then spread to a larger area, allowing producing a hydrogen bomb of any power. In the case of the Sun certain targeting practices may greatly increase the chances of an artificial explosion of the Sun. This explosion would annihilate the Earth and the Solar System, as we know them today. The reader naturally asks: Why even contemplate such a horrible scenario? It is necessary because as thermonuclear and space technology spreads to even the least powerful nations in the centuries ahead, a dying dictator having thermonuclear missile weapons can [produce] (with some considerable mobilization of his military/industrial complex)&mdash;an artificial explosion of the Sun and take into his grave the whole of humanity. 
It might take tens of thousands of people to make and launch the hardware, but only a very few need know the final targeting data of what might be otherwise a weapon purely thought of (within the dictator&rsquo;s defense industry) as being built for peaceful, deterrent use. Those concerned about Man&rsquo;s future must know about this possibility and create some protective system&mdash;or ascertain on theoretical grounds that it is entirely [impossible]. Humanity has fears, justified to greater or lesser degrees, about asteroids, warming of Earthly climate, extinctions, etc. which have very small probability. But all these would leave survivors&mdash;nobody thinks that the terrible annihilation of the Solar System would leave a single person alive. That explosion appears possible at the present time. In this paper is derived the &ldquo;AB-Criterion&rdquo; which shows conditions wherein the artificial explosion of Sun is possible. The author urges detailed investigation and proving or disproving of this rather horrifying possibility, so that it may be dismissed from mind&mdash;or defended against.</p> </blockquote> <p><strong>Warning</strong>: the paper is published in an obscure journal by publisher #206 on&nbsp;<a href="">Beall&rsquo;s List of Predatory Publishers 2013</a>, and I was unable to find confirmation of the authors' <a href="">claimed</a> <a href="">credentials</a> from any reputable sources with 5 minutes of Googling. It also has two spelling errors <em>in the abstract</em>. 
(It has no citations on Google scholar, but I wouldn't expect it to have any since it was only released in July 2013.)</p> <p>I haven't read the paper, and I'd love to see someone fluent in astrophysics comment on its contents.&nbsp;</p> <p>My guess is that this is <em>not a risk at all</em>&nbsp;or,&nbsp;as with <a href="">proposed high-energy physics disasters</a>,&nbsp;the risk is extremely low-probability but physically conceivable (though perhaps not by methods imagined by Bolonkin &amp; Friedlander).&nbsp;</p> lukeprog dkvoauE8tXKZKAXDq 2013-09-02T06:12:39.019Z Transparency in safety-critical systems <p>I've just posted an analysis to MIRI's blog called <a href="">Transparency in Safety-Critical Systems</a>. Its aim is to explain a common view about transparency and system reliability, and then open a dialogue about which parts of that view are wrong, or don't apply well to AGI.</p> <p>The "common view" (not universal by any means) explained in the post is, roughly:</p> <blockquote> <p>Black box testing can provide some confidence that a system will behave as intended, but if a system is built such that it is transparent to human inspection, then additional methods of reliability verification are available. Unfortunately, many of AI&rsquo;s most useful methods are among its least transparent. Logic-based systems are typically more transparent than statistical methods, but statistical methods are more widely used. There are exceptions to this general rule, and some people are working to make statistical methods more transparent.</p> </blockquote> <p>Three caveats / open problems listed at the end of the post are:</p> <p><ol> <li>How does the transparency of a method change with scale? A 200-rules logical AI might be more transparent than a 200-node Bayes net, but what if we&rsquo;re comparing 100,000 rules vs. 100,000 nodes? 
At least we can query the Bayes net to ask &ldquo;what it believes about X,&rdquo; whereas we can&rsquo;t necessarily do so with the logic-based system.</li> <li>Do the categories above really &ldquo;carve reality at its joints&rdquo; with respect to transparency? Does a system&rsquo;s status as a logic-based system or a Bayes net reliably predict its transparency, given that in principle we can use either one to express a probabilistic model of the world?</li> <li>How much of a system&rsquo;s transparency is &ldquo;intrinsic&rdquo; to the system, and how much of it depends on the quality of the user interface used to inspect it? How much of a &ldquo;transparency boost&rdquo; can different kinds of systems get from excellently designed user interfaces?</li> </ol></p> <p>The MIRI blog has only recently begun to regularly host substantive, non-news content, so it doesn't get much commenting action yet. Thus, I figured I'd post here and try to start a dialogue. Comment away!</p> lukeprog 9xLMeCix89i8ozuqS 2013-08-25T18:52:07.757Z How Efficient is the Charitable Market? <p>When I talk about the <a href="">poor distribution of funds in charity</a>, people in the <a href="/lw/hx4/four_focus_areas_of_effective_altruism/">effective altruism</a> movement sometimes say, "Didn't Holden Karnofsky show that charity is an efficient market in his post <a href="">Broad Market Efficiency</a>?"</p> <p>My reply is "No. Holden never said, and doesn't believe, that charity is an efficient market."</p> <p>&nbsp;</p> <h4>What is an efficient market?</h4> <p>An efficient market is one <a href="">in which</a> "one cannot consistently achieve returns in excess of average market returns... given the information available at the time the investment is made." (Details <a href="">here</a>.)</p> <p>Of course, market efficiency is a spectrum, not a yes/no question. 
As Holden writes, "The most efficient markets can be consistently beaten only by the most talented/dedicated players, while the least efficient [markets] can be beaten with fairly little in the way of talent and dedication."</p> <p>Moreover, market efficiency is multi-dimensional. Any particular market may be efficient in some ways, and in some domains, while highly inefficient in other ways and other domains.</p> <p><a id="more"></a></p> <h4><br /></h4> <h4>Charity as an inefficient market</h4> <p>Financial markets are relatively efficient. It's rare for players to consistently beat the market by a large margin. You can beat the average by investing in a <a href="">low-fee index fund</a>, but not by a lot, and it's hard to beat hedge funds.</p> <p>Philanthropic markets appear to be less efficient than financial markets in many ways. In charity, one can consistently beat the market by a wide margin simply by giving to <a href="">GiveWell's recommended charities</a>, which achieve far greater returns (in social value) per marginal dollar than the average charity does. However, Holden <a href="">points out</a> that it has been surprisingly difficult for GiveWell to find ways to beat <em>well-run large foundations</em> like the <a href="">Gates Foundation</a>'s work in global health.</p> <p>Why should we expect charity to be less efficient than financial markets?</p> <p>For one thing, most people giving to charity don't even seem to <em>care</em> what returns (in social value) they're getting with their investments. That's why, when proto-GiveWell initially contacted a bunch of charities to ask for evidence of positive impact, some of those charities reported that nobody who gave them money had ever <em>asked</em> that question before. 
And when charities sent proto-GiveWell their internal reports about effectiveness, they were so inadequate that they "led [proto-GiveWell] to understand that the charities <em>themselves</em> did not know whether they were helping or hurting a given situation" (<a href="">Stern 2012</a>).</p> <p>For another thing, "market incentives of the nonprofit world push charities toward happy anecdote and inspiring narrative rather than toward careful planning, research, and evidence-based investments" (details <a href="">here</a>).</p> <p>Also, as Brian Tomasik <a href="">notes</a>, "Efficiency in the realm of charity is inherently less plausible than in financial markets because in charity there&rsquo;s not a common unit of what 'good' means... Indeed, one man&rsquo;s good may be another man&rsquo;s bad (e.g., abortion, gun control, extinction risks)." But even when we focus on relatively common units of 'good' (e.g. human welfare, QALYs, or DALYs), charity is still relatively inefficient: we can easily purchase more QALYs per dollar via <a href="">AMF</a> than via, say, the popular <a href="">Make-a-Wish Foundation</a>.</p> <p>&nbsp;</p> <h4>What is "broad market efficiency", then?</h4> <p>If Holden agrees that philanthropic markets are relatively inefficient in the sense that it's easy to consistently and substantially beat average market returns by giving to GiveWell's recommended charities, then what does he mean by "broad market efficiency"? Holden introduces "broad market efficiency" as a term for the <em>spectrum</em> of market efficiency, but remains uncertain as to where charity falls on that spectrum of market efficiency.</p> <p>Brian Tomasik <a href="">worried</a> that the term "broad market efficiency" would confuse some readers into thinking Holden was claiming that philanthropic markets are relatively efficient and thus that "it doesn&rsquo;t really matter where you donate." 
Holden <a href="">said</a> he wasn't worried about this, saying, "I don&rsquo;t think 'broad market efficiency' is a common phrase or one with a clear meaning." But I think the phrase <em>is</em> confusing, that many readers interpret it as meaning "market efficiency," and indeed that people in economics and finance sometimes use it that way: search for the phrase "broad market efficiency" <a href="">here</a>, <a href="">here</a>, <a href="">here</a>, and <a href="">here</a>.</p> <p>&nbsp;</p> <h4>The research ahead</h4> <p>So how efficient <em>is</em> the charitable market, and in which ways? My own guess is that it's far less efficient than financial markets, but GiveWell's research has <a href="">provided</a> valuable and surprising (to me) information on this topic, and I look forward to future discoveries.</p> lukeprog JBKrSNEejyE7nYmqq 2013-08-24T05:57:48.169Z Engaging Intellectual Elites at Less Wrong <p>Is Less Wrong, despite its flaws, the highest-quality relatively-general-interest forum on the web? It seems to me that, to find reliably higher-quality discussion, I must turn to more narrowly focused sites, e.g. <a href="">MathOverflow</a> and the <a href="">GiveWell blog</a>.</p> <p>Many people smarter than myself have reported the same impression. But if you know of any comparably high-quality relatively-general-interest forums, please link me to them!</p> <p>In the meantime: suppose it's true that Less Wrong is the highest-quality relatively-general-interest forum on the web. In that case, we're sitting on a big opportunity to grow Less Wrong into the "standard" general-interest discussion hub for people with high intelligence and high metacognition (shorthand: "intellectual elites").</p> <p>Earlier, Jonah Sinick lamented <a href="/lw/hky/the_paucity_of_elites_online/">the scarcity of elites on the web</a>.
How can we get more intellectual elites to engage on the web, and in particular at Less Wrong?</p> <p>Some projects to improve the situation are extremely costly:</p> <ol> <li>Pay some intellectual elites with unusually good writing skills (like Eliezer) to generate a constant stream of new, interesting content.</li> <li>Comb through Less Wrong to replace community-specific jargon with more universally comprehensible terms, and change community norms about jargon. (E.g. GiveWell's jargon tends to be more transparent, such as their phrase "room for more funding.")</li> </ol> <p>Code changes, however, could be significantly less costly. New features or site structure elements could increase engagement by intellectual elites. (To avoid <a href="/lw/k3/priming_and_contamination/">priming and contamination</a>, I'll hold back from naming specific examples here.)</p> <p>To help us figure out which code changes are most likely to increase engagement on Less Wrong by intellectual elites, specific MIRI volunteers will be interviewing intellectual elites who (1) are familiar enough with Less Wrong to be able to simulate which code changes might cause them to engage more, but who (2) mostly just lurk, currently.</p> <p>In the meantime, I figured I'd throw these ideas to the community for feedback and suggestions.</p> lukeprog dzPaFnf9cWiP3wbG9 2013-08-13T17:55:05.719Z How to Measure Anything <p><a href=""><img style="padding: 30px;" src="" alt="" align="right" /></a>Douglas Hubbard&rsquo;s <em><a href="">How to Measure Anything</a></em> is one of my favorite how-to books. I hope this summary inspires you to buy the book; it&rsquo;s worth it.</p> <p>The book opens:</p> <blockquote> <p>Anything can be measured. If a thing can be observed in any way at all, it lends itself to some type of measurement method. No matter how &ldquo;fuzzy&rdquo; the measurement is, it&rsquo;s still a measurement if it tells you more than you knew before. 
And those very things most likely to be seen as immeasurable are, virtually always, solved by relatively simple measurement methods.</p> </blockquote> <p>The sciences have many established measurement methods, so Hubbard&rsquo;s book focuses on the measurement of &ldquo;business intangibles&rdquo; that are important for decision-making but tricky to measure: things like management effectiveness, the &ldquo;flexibility&rdquo; to create new products, the risk of bankruptcy, and public image.</p> <p>&nbsp;</p> <h3 id="basicideas">Basic Ideas</h3> <p>A <em>measurement</em> is an observation that quantitatively reduces uncertainty. Measurements might not yield precise, certain judgments, but they <em>do</em> reduce your uncertainty.</p> <p>To be measured, the <em>object of measurement</em> must be described clearly, in terms of observables. A good way to clarify a vague object of measurement like &ldquo;IT security&rdquo; is to ask &ldquo;What is IT security, and why do you care?&rdquo; Such probing can reveal that &ldquo;IT security&rdquo; means things like a reduction in unauthorized intrusions and malware attacks, which the IT department cares about because these things result in lost productivity, fraud losses, and legal liabilities.</p> <p><em>Uncertainty</em> is the lack of certainty: the true outcome/state/value is not known.</p> <p><em>Risk</em> is a state of uncertainty in which some of the possibilities involve a loss.</p> <p>Much pessimism about measurement comes from a lack of experience making measurements. 
Hubbard, who is <em>far</em> more experienced with measurement than his readers, says:</p> <ol> <li>Your problem is not as unique as you think.</li> <li>You have more data than you think.</li> <li>You need less data than you think.</li> <li>An adequate amount of new data is more accessible than you think.</li> </ol> <h3 id="appliedinformationeconomics"><br /></h3> <h3>Applied Information Economics</h3> <p>Hubbard calls his method &ldquo;Applied Information Economics&rdquo; (AIE). It consists of 5 steps:</p> <ol> <li>Define a decision problem and the relevant variables. (Start with the decision you need to make, then figure out which variables would make your decision easier if you had better estimates of their values.)</li> <li>Determine what you know. (Quantify your uncertainty about those variables in terms of ranges and probabilities.)</li> <li>Pick a variable, and compute the value of additional information for that variable. (Repeat until you find a variable with reasonably high information value. If no remaining variables have enough information value to justify the cost of measuring them, skip to step 5.)</li> <li>Apply the relevant measurement instrument(s) to the high-information-value variable. (Then go back to step 3.)</li> <li>Make a decision and act on it. (When you&rsquo;ve done as much uncertainty reduction as is economically justified, it&rsquo;s time to act!)</li> </ol> <p>These steps are elaborated below.</p> <h3 id="step1:defineadecisionproblemandtherelevantvariables"><a id="more"></a><br /></h3> <h3>Step 1: Define a decision problem and the relevant variables</h3> <p>Hubbard illustrates this step by telling the story of how he helped the Department of Veterans Affairs (VA) with a measurement problem.</p> <p>The VA was considering seven proposed IT security projects. 
They wanted to know &ldquo;which&hellip; of the proposed investments were justified and, after they were implemented, whether improvements in security justified further investment&hellip;&rdquo; Hubbard asked his standard questions: &ldquo;What do you mean by &lsquo;IT security&rsquo;? Why does it matter to you? What are you observing when you observe improved IT security?&rdquo;</p> <p>It became clear that <em>nobody</em> at the VA had thought about the details of what &ldquo;IT security&rdquo; meant to them. But after Hubbard&rsquo;s probing, it emerged that by &ldquo;IT security&rdquo; they meant a reduction in the frequency and severity of some undesirable events: agency-wide virus attacks, unauthorized system access (external or internal), unauthorized physical access, and disasters affecting the IT infrastructure (fire, flood, etc.). And each undesirable event was on the list because of specific costs associated with it: productivity losses from virus attacks, legal liability from unauthorized system access, etc.</p> <p>Now that the VA knew what they meant by &ldquo;IT security,&rdquo; they could measure specific variables, such as the number of virus attacks per year.</p> <h3 id="step2:determinewhatyouknow"><br /></h3> <h3>Step 2: Determine what you know</h3> <h4 id="uncertaintyandcalibration">Uncertainty and calibration</h4> <p>The next step is to determine your level of uncertainty about the variables you want to measure. To do this, you can express a &ldquo;confidence interval&rdquo; (CI). A 90% CI is a range of values that is 90% likely to contain the correct value. For example, the security experts at the VA were 90% confident that each agency-wide virus attack would affect between 25,000 and 65,000 people.</p> <p>Unfortunately, few people are well-calibrated estimators. For example, in some studies, the true value lay in subjects&rsquo; 90% CIs only 50% of the time! These subjects were overconfident.
For a well-calibrated estimator, the true value will lie in her 90% CI roughly 90% of the time.</p> <p>Luckily, &ldquo;assessing uncertainty is a general skill that can be taught with a measurable improvement.&rdquo;</p> <p>Hubbard uses several methods to calibrate each client&rsquo;s value estimators &ndash; for example, the security experts at the VA who needed to estimate the frequency of security breaches and their likely costs.</p> <p>His first technique is the <em>equivalent bet test</em>. Suppose you&rsquo;re asked to give a 90% CI for the year in which Newton published the universal laws of gravitation, and you can win $1,000 in one of two ways:</p> <ol> <li>You win $1,000 if the true year of publication falls within your 90% CI. Otherwise, you win nothing.</li> <li>You spin a dial divided into two &ldquo;pie slices,&rdquo; one covering 10% of the dial, and the other covering 90%. If the dial lands on the small slice, you win nothing. If it lands on the big slice, you win $1,000.</li> </ol> <p>If you find yourself preferring option #2, then you must think spinning the dial has a higher chance of winning you $1,000 than option #1. That suggests your stated 90% CI isn&rsquo;t really your 90% CI. Maybe it&rsquo;s your 65% CI or your 80% CI instead. By preferring option #2, your brain is trying to tell you that your originally stated 90% CI is overconfident.</p> <p>If instead you find yourself preferring option #1, then you must think there is <em>more</em> than a 90% chance your stated 90% CI contains the true value. By preferring option #1, your brain is trying to tell you that your original 90% CI is underconfident.</p> <p>To make a better estimate, adjust your 90% CI until option #1 and option #2 seem equally good to you. Research suggests that even <em>pretending</em> to bet money in this way will improve your calibration.</p> <p>Hubbard&rsquo;s second method for improving calibration is simply <em>repetition and feedback</em>.
Make lots of estimates and then see how well you did. For this, play CFAR&rsquo;s <a href="">Calibration Game</a>.</p> <p>Hubbard also asks people to identify reasons why a particular estimate might be right, and why it might be wrong.</p> <p>He also asks people to look more closely at each bound (upper and lower) on their estimated range. A 90% CI &ldquo;means there is a 5% chance the true value could be greater than the upper bound, and a 5% chance it could be less than the lower bound. This means the estimators must be 95% sure that the true value is less than the upper bound. If they are not that certain, they should increase the upper bound&hellip; A similar test is applied to the lower bound.&rdquo;</p> <h4 id="simulations"><br /></h4> <h4>Simulations</h4> <p>Once you determine what you know about the uncertainties involved, how can you use that information to determine what you know about the <em>risks</em> involved? Hubbard summarizes:</p> <blockquote> <p>&hellip;all risk in any project&hellip; can be expressed by one method: the ranges of uncertainty on the costs and benefits, and probabilities on events that might affect them.</p> </blockquote> <p>The simplest tool for measuring such risks accurately is the Monte Carlo (MC) simulation, which can be run by Excel and many other programs. To illustrate this tool, suppose you are wondering whether to lease a new machine for one step in your manufacturing process.</p> <blockquote> <p>The one-year lease [for the machine] is $400,000 with no option for early cancellation. So if you aren&rsquo;t breaking even, you are still stuck with it for the rest of the year. 
You are considering signing the contract because you think the more advanced device will save some labor and raw materials and because you think the maintenance cost will be lower than the existing process.</p> </blockquote> <p>Your pre-calibrated estimators give their 90% CIs for the following variables:</p> <ul> <li>Maintenance savings (MS): $10 to $20 per unit</li> <li>Labor savings (LS): -$2 to $8 per unit</li> <li>Raw materials savings (RMS): $3 to $9 per unit</li> <li>Production level (PL): 15,000 to 35,000 units per year</li> </ul> <p>Thus, your annual savings will equal (MS + LS + RMS) &times; PL.</p> <p>When measuring risk, we don&rsquo;t just want to know the &ldquo;average&rdquo; risk or benefit. We want to know the probability of a huge loss, the probability of a small loss, the probability of a huge savings, and so on. That&rsquo;s what Monte Carlo can tell us.</p> <p>An MC simulation uses a computer to randomly generate thousands of possible values for each variable, based on the ranges we&rsquo;ve estimated. The computer then calculates the outcome (in this case, the annual savings) for each generated combination of values, and we&rsquo;re able to see how often different kinds of outcomes occur.</p> <p>To run an MC simulation we need not just the 90% CI for each variable but also the <em>shape</em> of each distribution. In many cases, the <a href="">normal distribution</a> will work just fine, and we&rsquo;ll use it for all the variables in this simplified illustration. (Hubbard&rsquo;s book shows you how to work with other distributions).</p> <p>To make an MC simulation of a normally distributed variable in Excel, we use this formula:</p> <blockquote> <p>=norminv(rand(), mean, standard deviation)</p> </blockquote> <p>So the formula for the maintenance savings variable should be:</p> <blockquote> <p>=norminv(rand(), 15, (20&ndash;10)/3.29)</p> </blockquote> <p>Suppose you enter this formula on cell A1 in Excel. 
To generate (say) 10,000 values for the maintenance savings value, just (1) copy the contents of cell A1, (2) enter &ldquo;A1:A10000&rdquo; in the cell range field to select cells A1 through A10000, and (3) paste the formula into all those cells.</p> <p>Now we can follow this process in other columns for the other variables, including a column for the &ldquo;total savings&rdquo; formula. To see how many rows made a total savings of $400,000 or more (break-even), use Excel&rsquo;s <a href="">countif</a> function. In this case, you should find that about 14% of the scenarios resulted in a savings of less than $400,000 &ndash; a loss.</p> <p><img src="" alt="" align="right" />We can also make a histogram (see right) to show how many of the 10,000 scenarios landed in each $100,000 increment (of total savings). This is even more informative, and tells us a great deal about the distribution of risk and benefits we might incur from investing in the new machine. (Download the full spreadsheet for this example <a href="">here</a>.)</p> <p>The simulation concept can (and in high-value cases <em>should</em>) be carried beyond this simple MC simulation. The first step is to learn how to use a greater variety of distributions in MC simulations. The second step is to deal with correlated (rather than independent) variables by generating correlated random numbers or by modeling what the variables have in common.</p> <p>A more complicated step is to use a <a href="">Markov simulation</a>, in which the simulated scenario is divided into many time intervals. This is often used to model stock prices, the weather, and complex manufacturing or construction projects. Another more complicated step is to use an <a href="">agent-based model</a>, in which independently-acting agents are simulated. 
This method is often used for traffic simulations, in which each vehicle is modeled as an agent.</p> <h3 id="step3:pickavariableandcomputethevalueofadditionalinformationforthatvariable"><br /></h3> <h3>Step 3: Pick a variable, and compute the value of additional information for that variable</h3> <p>Information can have three kinds of value:</p> <ol> <li>Information can affect people&rsquo;s behavior (e.g. common knowledge of germs affects sanitation behavior).</li> <li>Information can have its own market value (e.g. you can sell a book with useful information).</li> <li>Information can reduce uncertainty about important decisions. (This is what we&rsquo;re focusing on here.)</li> </ol> <p>When you&rsquo;re uncertain about a decision, this means there&rsquo;s a chance you&rsquo;ll make a non-optimal choice. The cost of a &ldquo;wrong&rdquo; decision is the difference between the wrong choice and the choice you would have made with perfect information. But it&rsquo;s too costly to acquire perfect information, so instead we&rsquo;d like to know which decision-relevant variables are the <em>most</em> valuable to measure more precisely, so we can decide which measurements to make.</p> <p>Here&rsquo;s a simple example:</p> <blockquote> <p>Suppose you could make $40 million profit if [an advertisement] works and lose $5 million (the cost of the campaign) if it fails. Then suppose your calibrated experts say they would put a 40% chance of failure on the campaign.</p> </blockquote> <p>The expected opportunity loss (EOL) for a choice is the probability of the choice being &ldquo;wrong&rdquo; times the cost of it being wrong. 
So, for example, the EOL if the campaign is approved is $5M &times; 40% = $2M, and the EOL if the campaign is rejected is $40M &times; 60% = $24M.</p> <p>The difference between EOL before and after a measurement is called the &ldquo;expected value of information&rdquo; (EVI).</p> <p>In most cases, we want to compute the VoI for a range of values rather than a binary succeed/fail. So let&rsquo;s tweak the advertising campaign example and say that a calibrated marketing expert&rsquo;s 90% CI for sales resulting from the campaign was from 100,000 units to 1 million units. The risk is that we don&rsquo;t sell enough units from this campaign to break even.</p> <p>Suppose we profit by $25 per unit sold, so we&rsquo;d have to sell at least 200,000 units from the campaign to break even (on a $5M campaign). To begin, let&rsquo;s calculate the expected value of <em>perfect</em> information (EVPI), which will provide an upper bound on how much we should spend to reduce our uncertainty about how many units will be sold as a result of the campaign. Here&rsquo;s how we compute it:</p> <ol> <li>Slice the distribution of our variable into thousands of small segments.</li> <li>Compute the EOL for each segment. EOL = the opportunity loss at the segment midpoint times the segment probability.</li> <li>Sum the products from step 2 for all segments.</li> </ol> <p>Of course, we&rsquo;ll do this with a computer. For the details, see Hubbard&rsquo;s book and the Value of Information spreadsheet from <a href="">his website</a>.</p> <p>In this case, the EVPI turns out to be about $337,000. This means that we shouldn&rsquo;t spend more than $337,000 to reduce our uncertainty about how many units will be sold as a result of the campaign.</p> <p>And in fact, we should probably spend much less than $337,000, because no measurement we make will give us <em>perfect</em> information.
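</p> <p>The three-step EVPI calculation above can be sketched in Python. This is a minimal sketch under my own assumptions &ndash; a normal distribution fitted to the 90% CI, and 10,000 numerical slices &ndash; not Hubbard&rsquo;s actual spreadsheet:</p>

```python
from statistics import NormalDist

# Campaign example from the text: a calibrated 90% CI of 100,000 to
# 1,000,000 units sold, $25 profit per unit, and a $5M campaign cost,
# so break-even is 200,000 units.
lo, hi = 100_000, 1_000_000
profit_per_unit = 25
break_even = 200_000

# Fit a normal distribution to the 90% CI: the interval spans about
# 3.29 standard deviations (2 x 1.645).
dist = NormalDist(mu=(lo + hi) / 2, sigma=(hi - lo) / 3.29)

# EVPI: slice the loss region (sales below break-even) into many small
# segments; each segment contributes (opportunity loss at its midpoint)
# times (probability of landing in that segment).
n_slices = 10_000
floor = dist.mean - 6 * dist.stdev  # practical lower integration bound
width = (break_even - floor) / n_slices
evpi = 0.0
for i in range(n_slices):
    a = floor + i * width
    mid = a + width / 2
    prob = dist.cdf(a + width) - dist.cdf(a)
    loss = (break_even - mid) * profit_per_unit
    evpi += loss * prob

print(f"EVPI: ${evpi:,.0f}")
```

<p>With these assumptions the loop lands in the neighborhood of the ~$337,000 figure cited above; any remaining gap comes from the distribution and slicing choices.</p> <p>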
For more details on how to measure the value of <em>imperfect</em> information, see Hubbard&rsquo;s book and these three LessWrong posts: (1) <a href="/lw/cih/value_of_information_8_examples/">VoI: 8 Examples</a>, (2) <a href="/lw/85x/value_of_information_four_examples/">VoI: Four Examples</a>, and (3) <a href="/lw/8j4/5second_level_case_study_value_of_information/">5-second level case study: VoI</a>.</p> <p>I do, however, want to quote Hubbard&rsquo;s comments about the &ldquo;measurement inversion&rdquo;:</p> <blockquote> <p>By 1999, I had completed the&hellip; Applied Information Economics analysis on about 20 major [IT] investments&hellip; Each of these business cases had 40 to 80 variables, such as initial development costs, adoption rate, productivity improvement, revenue growth, and so on. For each of these business cases, I ran a macro in Excel that computed the information value for each variable&hellip; [and] I began to see this pattern:</p> <ul> <li>The vast majority of variables had an information value of zero&hellip;</li> <li>The variables that had high information values were routinely those that the client had never measured&hellip;</li> <li>The variables that clients [spent] the most time measuring were usually those with a very low (even zero) information value&hellip;</li> </ul> <p>&hellip;since then, I&rsquo;ve applied this same test to another 40 projects, and&hellip; [I&rsquo;ve] noticed the same phenomena arise in projects relating to research and development, military logistics, the environment, venture capital, and facilities expansion.</p> </blockquote> <p>Hubbard calls this the &ldquo;Measurement Inversion&rdquo;:</p> <blockquote> <p>In a business case, the economic value of measuring a variable is usually inversely proportional to how much measurement attention it usually gets.</p> </blockquote> <p>Here is one example:</p> <blockquote> <p>A stark illustration of the Measurement Inversion for IT projects can be seen in a large UK-based insurance client of mine that was an avid
user of a software complexity measurement method called &ldquo;function points.&rdquo; This method was popular in the 1980s and 1990s as a basis of estimating the effort for large software development efforts. This organization had done a very good job of tracking initial estimates, function point estimates, and actual effort expended for over 300 IT projects. The estimation required three or four full-time persons as &ldquo;certified&rdquo; function point counters&hellip;</p> </blockquote> <blockquote> <p>But a very interesting pattern arose when I compared the function point estimates to the initial estimates provided by project managers&hellip; The costly, time-intensive function point counting did change the initial estimate but, on average, it was no closer to the actual project effort than the initial effort&hellip; Not only was this the single largest measurement effort in the IT organization, it literally added <em>no</em> value since it didn&rsquo;t reduce uncertainty at all. Certainly, more emphasis on measuring the benefits of the proposed projects &ndash; or almost anything else &ndash; would have been better money spent.</p> </blockquote> <p>Hence the importance of calculating EVI.</p> <h3 id="step4:applytherelevantmeasurementinstrumentstothehigh-information-valuevariable"><br /></h3> <h3>Step 4: Apply the relevant measurement instrument(s) to the high-information-value variable</h3> <p>If you followed the first three steps, then you&rsquo;ve defined a variable you want to measure in terms of the decision it affects and how you observe it, you&rsquo;ve quantified your uncertainty about it, and you&rsquo;ve calculated the value of gaining additional information about it. Now it&rsquo;s time to reduce your uncertainty about the variable &ndash; that is, to measure it.</p> <p>Each scientific discipline has its own specialized measurement methods. 
Hubbard&rsquo;s book describes measurement methods that are often useful for reducing our uncertainty about the &ldquo;softer&rdquo; topics often encountered by decision-makers in business.</p> <h4 id="selectingameasurementmethod"><br /></h4> <h4>Selecting a measurement method</h4> <p>To figure out which category of measurement methods is appropriate for a particular case, we must ask several questions:</p> <ol> <li>Decomposition: Which parts of the thing are we uncertain about?</li> <li>Secondary research: How has the thing (or its parts) been measured by others?</li> <li>Observation: How do the identified observables lend themselves to measurement?</li> <li>Measure just enough: How much do we need to measure it?</li> <li>Consider the error: How might our observations be misleading?</li> </ol> <h5 id="decomposition"><br /></h5> <h5>Decomposition</h5> <p>Sometimes you&rsquo;ll want to start by decomposing an uncertain variable into several parts to identify which observables you can most easily measure. For example, rather than directly estimating the cost of a large construction project, you could break it into parts and estimate the cost of each part of the project.</p> <p>In Hubbard&rsquo;s experience, decomposition itself &ndash; even without making any new measurements &ndash; often reduces one&rsquo;s uncertainty about the variable of interest.</p> <h5 id="secondaryresearch"><br /></h5> <h5>Secondary research</h5> <p>Don&rsquo;t reinvent the world. In almost all cases, someone has already invented the measurement tool you need, and you just need to find it. Here are Hubbard&rsquo;s tips on secondary research:</p> <ol> <li>If you&rsquo;re new to a topic, start with Wikipedia rather than Google. Wikipedia will give you a more organized perspective on the topic at hand.</li> <li>Use search terms often associated with quantitative data. E.g.
don&rsquo;t just search for &ldquo;software quality&rdquo; or &ldquo;customer perception&rdquo; &ndash; add terms like &ldquo;table,&rdquo; &ldquo;survey,&rdquo; &ldquo;control group,&rdquo; and &ldquo;standard deviation.&rdquo;</li> <li>Think of internet research in two levels: general search engines and topic-specific repositories (e.g. the CIA World Factbook).</li> <li>Try multiple search engines.</li> <li>If you find marginally related research that doesn&rsquo;t directly address your topic of interest, check the bibliography for more relevant reading material.</li> </ol> <p>I&rsquo;d also recommend my post <a href="/lw/5me/scholarship_how_to_do_it_efficiently/">Scholarship: How to Do It Efficiently</a>.</p> <h5 id="observation"><br /></h5> <h5>Observation</h5> <p>If you&rsquo;re not sure how to measure your target variable&rsquo;s observables, ask these questions:</p> <ol> <li>Does it leave a trail? Example: longer waits on customer support lines cause customers to hang up and not call back. Maybe you can also find a correlation between customers who hang up after long waits and reduced sales to those customers.</li> <li>Can you observe it directly? Maybe you haven&rsquo;t been tracking how many of the customers in your parking lot show an out-of-state license, but you could start. Or at least, you can observe a sample of these data.</li> <li>Can you create a way to observe it indirectly? Amazon added a gift-wrapping feature in part so they could better track how many books were being purchased as gifts. Another example is when consumers are given coupons so that retailers can see which newspapers their customers read.</li> <li>Can the thing be forced to occur under new conditions which allow you to observe it more easily? E.g.
you could implement a proposed returned-items policy in some stores but not others and compare the outcomes.</li> </ol> <h5 id="measurejustenough"><br /></h5> <h5>Measure just enough</h5> <p>Because initial measurements often tell you quite a lot, and also change the value of continued measurement, Hubbard often aims for spending 10% of the EVPI on a measurement, and sometimes as little as 2% (especially for very large projects).</p> <h5 id="considertheerror"><br /></h5> <h5>Consider the error</h5> <p>It&rsquo;s important to be conscious of some common ways in which measurements can mislead.</p> <p>Scientists distinguish two types of measurement error: systemic and random. Random errors are random variations from one observation to the next. They can&rsquo;t be individually predicted, but they fall into patterns that can be accounted for with the laws of probability. Systemic errors, in contrast, are consistent. For example, the sales staff may routinely overestimate the next quarter&rsquo;s revenue by 50% (on average).</p> <p>We must also distinguish precision and accuracy. A &ldquo;precise&rdquo; measurement tool has low random error. E.g. if a bathroom scale gives the exact same displayed weight every time we set a particular book on it, then the scale has high precision. An &ldquo;accurate&rdquo; measurement tool has low systemic error. The bathroom scale, while precise, might be inaccurate if the weight displayed is systemically biased in one direction &ndash; say, eight pounds too heavy. A measurement tool can also have low precision but good accuracy, if it gives inconsistent measurements but they average to the true value.</p> <p>Random error tends to be easier to handle. 
Consider this example:</p> <blockquote> <p>For example, to determine how much time sales reps spend in meetings with clients versus other administrative tasks, they might choose a complete review of all time sheets&hellip; [But] if a complete review of 5,000 time sheets&hellip; tells us that sales reps spend 34% of their time in direct communication with customers, we still don&rsquo;t know how far from the truth it might be. Still, this &ldquo;exact&rdquo; number seems reassuring to many managers. Now, suppose a sample of direct observations of randomly chosen sales reps at random points in time finds that sales reps were in client meetings or on client phone calls only 13 out of 100 of those instances. (We can compute this without interrupting a meeting by asking as soon as the rep is available.) As we will see [later], in the latter case, we can statistically compute a 90% CI to be 7.5% to 18.5%. Even though this random sampling approach gives us only a range, we should prefer its findings to the census audit of time sheets. The census&hellip; gives us an exact number, but we have no way to know by how much and in which direction the time sheets err.</p> </blockquote> <p>Systemic error is also called a &ldquo;bias.&rdquo; Based on his experience, Hubbard suspects the three most important to avoid are:</p> <ol> <li>Confirmation bias: people see what they want to see.</li> <li>Selection bias: your sample might not be representative of the group you&rsquo;re trying to measure.</li> <li>Observer bias: the very act of observation can affect what you observe. E.g. in one study, researchers found that worker productivity improved no matter <em>what</em> they changed about the workplace. 
The workers seem to have been responding merely to the <em>fact</em> that they were being observed in <em>some</em> way.</li> </ol> <h5 id="chooseanddesignthemeasurementinstrument"><br /></h5> <h5>Choose and design the measurement instrument</h5> <p>After following the above steps, Hubbard writes, &ldquo;the measurement instrument should be almost completely formed in your mind.&rdquo; But if you still can&rsquo;t come up with a way to measure the target variable, here are some additional tips:</p> <ol> <li><em>Work through the consequences</em>. If the value is surprisingly high, or surprisingly low, what would you expect to see?</li> <li><em>Be iterative</em>. Start with just a few observations, and then recalculate the information value.</li> <li><em>Consider multiple approaches</em>. Your first measurement tool may not work well. Try others.</li> <li><em>What&rsquo;s the really simple question that makes the rest of the measurement moot?</em> First see if you can detect <em>any</em> change in research quality before trying to measure it more comprehensively.</li> </ol> <h4 id="samplingreality"><br /></h4> <h4>Sampling reality</h4> <p>In most cases, we&rsquo;ll estimate the values in a population by measuring the values in a small sample from that population. And for reasons discussed in chapter 7, a very small sample can often offer large reductions in uncertainty.</p> <p>There are a variety of tools we can use to build our estimates from small samples, and which one we should use often depends on how outliers are distributed in the population. In some cases, outliers are very close to the mean, and thus our estimate of the mean can converge quickly on the true mean as we look at new samples. In other cases, outliers can be several orders of magnitude away from the mean, and our estimate converges very slowly or not at all. 
Here are some examples:</p> <ul> <li>Very quick convergence, only 1&ndash;2 samples needed: cholesterol level of your blood, purity of public water supply, weight of jelly beans.</li> <li>Usually quick convergence, 5&ndash;30 samples needed: Percentage of customers who like the new product, failure loads of bricks, age of your customers, how many movies people see in a year.</li> <li>Potentially slow convergence: Software project cost overruns, factory downtime due to an accident.</li> <li>Maybe non-convergent: Market value of corporations, individual levels of income, casualties of wars, size of volcanic eruptions.</li> </ul> <p>Below, I survey just a few of the many sampling methods Hubbard covers in his book.</p> <h5 id="mathlessestimation"><br /></h5> <h5>Mathless estimation</h5> <p>When working with a quickly converging phenomenon and a symmetric distribution (uniform, normal, camel-back, or bow-tie) for the population, you can use the <a href="">t-statistic</a> to develop a 90% CI even when working with very small samples. (See the book for instructions.)</p> <p>Or, even easier, make use of the <em>Rule of Five</em>: &ldquo;There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population.&rdquo;</p> <p>The Rule of Five has another advantage over the t-statistic: it works for any distribution of values in the population, including ones with slow convergence or no convergence at all! It can do this because it gives us a confidence interval for the <em>median</em> rather than the <em>mean</em>, and it&rsquo;s the mean that is far more affected by outliers.</p> <p>Hubbard calls this a &ldquo;mathless&rdquo; estimation technique because it doesn&rsquo;t require us to take square roots or calculate standard deviation or anything like that.
Moreover, this mathless technique extends beyond the Rule of Five: If we sample 8 items, there is a 99.2% chance that the median of the population falls between the largest and smallest values. If we take the <em>2nd</em> largest and smallest values (out of 8 total values), we get something close to a 90% CI for the median. Hubbard generalizes the tool with this handy reference table:</p> <p><img src="" alt="" align="center" /></p> <p>And if the distribution is symmetrical, then the mathless table gives us a 90% CI for the mean as well as for the median.</p> <h5 id="catch-recatch"><br /></h5> <h5>Catch-recatch</h5> <p>How does a biologist measure the number of fish in a lake? She catches and tags a sample of fish &ndash; say, 1000 of them &ndash; and then releases them. After the fish have had time to spread amongst the rest of the population, she&rsquo;ll catch another sample of fish. Suppose she caught 1000 fish again, and 50 of them were tagged. This would mean 5% of the fish were tagged, and thus that there were about 20,000 fish in the entire lake. (See Hubbard&rsquo;s book for the details on how to calculate the 90% CI.)</p> <h5 id="spotsampling"><br /></h5> <h5>Spot sampling</h5> <p>The fish example was a special case of a common problem: population proportion sampling. Often, we want to know what proportion of a population has a particular trait. How many registered voters in California are Democrats? What percentage of your customers prefer a new product design over the old one?</p> <p>Hubbard&rsquo;s book discusses how to solve the general problem, but for now let&rsquo;s just consider another special case: spot sampling.</p> <p>In spot sampling, you take random snapshots of things rather than tracking them constantly. What proportion of their work hours do employees spend on Facebook? To answer this, you &ldquo;randomly sample people through the day to see what they were doing <em>at that moment</em>.
If you find that in 12 instances out of 100 random samples&rdquo; employees were on Facebook, you can guess they spend about 12% of their time on Facebook (the 90% CI is 8% to 18%).</p> <h5 id="clusteredsampling"><br /></h5> <h5>Clustered sampling</h5> <p>Hubbard writes:</p> <blockquote> <p>&ldquo;Clustered sampling&rdquo; is defined as taking a random sample of groups, then conducting a census or a more concentrated sampling within the group. For example, if you want to see what share of households has satellite dishes&hellip; it might be cost effective to randomly choose several city blocks, then conduct a complete census of everything in a block. (Zigzagging across town to individually selected households would be time consuming.) In such cases, we can&rsquo;t really consider the number of [households] in the groups&hellip; to be the number of random samples. Within a block, households may be very similar&hellip; [and therefore] it might be necessary to treat the effective number of random samples as the number of blocks&hellip;</p> </blockquote> <h5 id="measuretothethreshold"><br /></h5> <h5>Measure to the threshold</h5> <p>For many decisions, one decision is required if a value is above some threshold, and another decision is required if that value is below the threshold. For such decisions, you don&rsquo;t care as much about a measurement that reduces uncertainty in general as you do about a measurement that tells you which decision to make based on the threshold. Hubbard gives an example:</p> <blockquote> <p>Suppose you needed to measure the average amount of time spent by employees in meetings that could be conducted remotely&hellip; If a meeting is among staff members who communicate regularly and for a relatively routine topic, but someone has to travel to make the meeting, you probably can conduct it remotely. 
You start out with your calibrated estimate that the median employee spends between 3% and 15% traveling to meetings that could be conducted remotely. You determine that if this percentage is actually over 7%, you should make a significant investment in telemeetings. The [EVPI] calculation shows that it is worth no more than $15,000 to study this. According to our rule of thumb for measurement costs, we might try to spend about $1,500&hellip;</p> </blockquote> <blockquote> <p>Let&rsquo;s say you sampled 10 employees and&hellip; you find that only 1 spends less time in these activities than the 7% threshold. Given this information, what is the chance that the median time spent in such activities is actually below 7%, in which case the investment would not be justified? One &ldquo;common sense&rdquo; answer is 1/10, or 10%. Actually&hellip; the real chance is much smaller.</p> </blockquote> <p>Hubbard shows how to derive the real chance in his book. The key point is that &ldquo;the uncertainty about the threshold can fall much faster than the uncertainty about the quantity in general.&rdquo;</p> <h5 id="regressionmodeling"><br /></h5> <h5>Regression modeling</h5> <p>What if you want to figure out the cause of something that has many possible causes? One method is to perform a <em>controlled experiment</em>, and compare the outcomes of a test group to a control group. Hubbard discusses this in his book (and yes, he&rsquo;s a Bayesian, and a skeptic of p-value hypothesis testing). For this summary, I&rsquo;ll instead mention another method for isolating causes: regression modeling. Hubbard explains:</p> <blockquote> <p>If we use regression modeling with historical data, we may not need to conduct a controlled experiment. Perhaps, for example, it is difficult to tie an IT project to an increase in sales, but we might have lots of data about how something <em>else</em> affects sales, such as faster time to market of new products.
If we know that faster time to market is possible by automating certain tasks, that this IT investment eliminates certain tasks, and those tasks are on the critical path in the time-to-market, we can make the connection.</p> </blockquote> <p>Hubbard&rsquo;s book explains the basics of linear regressions, and of course gives the caveat that correlation does not imply causation. But, he writes, &ldquo;you should conclude that one thing causes another only if you have some <em>other</em> good reason besides the correlation itself to suspect a cause-and-effect relationship.&rdquo;</p> <h4 id="bayes"><br /></h4> <h4>Bayes</h4> <p>Hubbard&rsquo;s 10th chapter opens with a tutorial on Bayes&rsquo; Theorem. For an online tutorial, see <a href="">here</a>.</p> <p>Hubbard then zooms out to a big-picture view of measurement, and recommends the &ldquo;instinctive Bayesian approach&rdquo;:</p> <ol> <li>Start with your calibrated estimate.</li> <li>Gather additional information (polling, reading other studies, etc.)</li> <li>Update your calibrated estimate subjectively, without doing any additional math.</li> </ol> <p>Hubbard says a few things in support of this approach. First, he points to some studies (e.g. <a href="">El-Gamal &amp; Grether (1995)</a>) showing that people often reason in roughly-Bayesian ways. Next, he says that in his experience, people become better intuitive Bayesians when they (1) are made aware of the <a href="">base rate fallacy</a>, and when they (2) are better calibrated.</p> <p>Hubbard says that once these conditions are met,</p> <blockquote> <p>[then] humans seem to be mostly logical when incorporating new information into their estimates along with the old information. This fact is extremely useful because a human can consider qualitative information that does not fit in standard statistics. 
For example, if you were giving a forecast for how a new policy might change &ldquo;public image&rdquo; &ndash; measured in part by a reduction in customer complaints, increased revenue, and the like &ndash; a calibrated expert should be able to update current knowledge with &ldquo;qualitative&rdquo; information about how the policy worked for other companies, feedback from focus groups, and similar details. Even with sampling information, the calibrated estimator &ndash; who has a Bayesian instinct &ndash; can consider qualitative information on samples that most textbooks don&rsquo;t cover.</p> </blockquote> <p>He also offers a chart showing how a pure Bayesian estimator compares to other estimators:</p> <p><img src="" alt="" align="center" /></p> <p>Also, Bayes&rsquo; Theorem allows us to perform a &ldquo;Bayesian inversion&rdquo;:</p> <blockquote> <p>Given a particular observation, it may seem more obvious to frame a measurement by asking the question &ldquo;What can I conclude from this observation?&rdquo; or, in probabilistic terms, &ldquo;What is the probability X is true, given my observation?&rdquo; But Bayes showed us that we could, instead, start with the question, &ldquo;What is the probability of this observation if X were true?&rdquo;</p> </blockquote> <blockquote> <p>The second form of the question is useful because the answer is often more straightforward and it leads to the answer to the other question. It also forces us to think about the likelihood of different observations given a particular hypothesis and what that means for interpreting an observation.</p> </blockquote> <blockquote> <p>[For example] if, hypothetically, we know that only 20% of the population will continue to shop at our store, then we can determine the chance [that] exactly 15 out of 20 would say so&hellip; [The details are explained in the book.] 
Then we can invert the problem with Bayes&rsquo; theorem to compute the chance that only 20% of the population will continue to shop there given [that] 15 out of 20 said so in a random sample. We would find that chance to be very nearly zero&hellip;</p> </blockquote> <h4 id="othermethods"><br /></h4> <h4>Other methods</h4> <p>Other chapters discuss other measurement methods, for example prediction markets, Rasch models, methods for measuring preferences and happiness, methods for improving the subjective judgments of experts, and many others. </p> <h3 id="step5:makeadecisionandactonit"><br /></h3> <h3>Step 5: Make a decision and act on it</h3> <p>The last step will make more sense if we first &ldquo;bring the pieces together.&rdquo; Hubbard now organizes his consulting work with a firm into 3 phases, so let&rsquo;s review what we&rsquo;ve learned in the context of those phases.</p> <h4 id="phase0:projectpreparation"><br /></h4> <h4>Phase 0: Project Preparation</h4> <ul> <li><em>Initial research</em>: Interviews and secondary research to get familiar with the nature of the decision problem.</li> <li><em>Expert identification</em>: Usually 4&ndash;5 experts who provide estimates.</li> </ul> <h4 id="phase1:decisionmodeling"><br /></h4> <h4>Phase 1: Decision Modeling</h4> <ul> <li><em>Decision problem definition</em>: Experts define the problem they&rsquo;re trying to analyze.</li> <li><em>Decision model detail</em>: Using an Excel spreadsheet, the AIE analyst elicits from the experts all the factors that matter for the decision being analyzed: costs and benefits, ROI, etc.</li> <li><em>Initial calibrated estimates</em>: First, the experts undergo calibration training.
Then, they fill in the values (as 90% CIs or other probability distributions) for the variables in the decision model.</li> </ul> <h4 id="phase2:optimalmeasurements"><br /></h4> <h4>Phase 2: Optimal measurements</h4> <ul> <li><em>Value of information analysis</em>: Using Excel macros, the AIE analyst runs a value of information analysis on every variable in the model.</li> <li><em>Preliminary measurement method designs</em>: Focusing on the few variables with highest information value, the AIE analyst chooses measurement methods that should reduce uncertainty.</li> <li><em>Measurement methods</em>: Decomposition, random sampling, Bayesian inversion, controlled experiments, and other methods are used (as appropriate) to reduce the uncertainty of the high-VoI variables.</li> <li><em>Updated decision model</em>: The AIE analyst updates the decision model based on the results of the measurements.</li> <li><em>Final value of information analysis</em>: The AIE analyst runs a VoI analysis on each variable again. As long as this analysis shows information value much greater than the cost of measurement for some variables, measurement and VoI analysis continues in multiple iterations. Usually, though, only one or two iterations are needed before the VoI analysis shows that no further measurements are justified.</li> </ul> <h4 id="phase3:decisionoptimizationandthefinalrecommendation"><br /></h4> <h4>Phase 3: Decision optimization and the final recommendation</h4> <ul> <li><em>Completed risk/return analysis</em>: A final MC simulation shows the likelihood of possible outcomes.</li> <li><em>Identified metrics procedures</em>: Procedures are put in place to measure some variables (e.g. 
about project progress or external factors) continually.</li> <li><em>Decision optimization</em>: The final business decision recommendation is made (this is rarely a simple &ldquo;yes/no&rdquo; answer).</li> </ul> <h4 id="finalthoughts"><br /></h4> <h4>Final thoughts</h4> <p>Hubbard&rsquo;s book includes two case studies in which Hubbard describes how he led two fairly different clients (the EPA and U.S. Marine Corps) through each phase of the AIE process. Then, he closes the book with the following summary:</p> <ul> <li>If it&rsquo;s really that important, it&rsquo;s something you can define. If it&rsquo;s something you think exists at all, it&rsquo;s something you&rsquo;ve already observed somehow.</li> <li>If it&rsquo;s something important and something uncertain, you have a cost of being wrong and a chance of being wrong.</li> <li>You can quantify your current uncertainty with calibrated estimates.</li> <li>You can compute the value of additional information by knowing the &ldquo;threshold&rdquo; of the measurement where it begins to make a difference compared to your existing uncertainty.</li> <li>Once you know what it&rsquo;s worth to measure something, you can put the measurement effort in context and decide on the effort it should take.</li> <li>Knowing just a few methods for random sampling, controlled experiments, or even merely improving on the judgments of experts can lead to a significant reduction in uncertainty.</li> </ul> lukeprog ybYBCK9D7MZCcdArB 2013-08-07T04:05:58.366Z Algorithmic Progress in Six Domains <p>Today MIRI released a new technical report by visiting researcher <a href="">Katja Grace</a>&nbsp;called "<strong><a href="">Algorithmic Progress in Six Domains</a></strong>." 
The report summarizes data on algorithmic progress &ndash; that is, better performance per fixed amount of computing hardware &ndash; in six domains:</p> <ul> <li><span style="line-height: 13px;">SAT solvers,</span></li> <li><span style="line-height: 13px;">Chess and Go programs,</span></li> <li><span style="line-height: 13px;">Physics simulations,</span></li> <li><span style="line-height: 13px;">Factoring,</span></li> <li><span style="line-height: 13px;">Mixed integer programming, and</span></li> <li><span style="line-height: 13px;">Some forms of machine learning. </span></li> </ul> <p>MIRI's purpose for collecting these data was to shed light on the question of <a href="/lw/hbd/new_report_intelligence_explosion_microeconomics/">intelligence explosion microeconomics</a>, though we suspect the report will be of broad interest within the software industry and computer science academia.</p> <p>One finding from the report was previously discussed by Robin Hanson <a href="">here</a>. (Robin saw an early draft on the intelligence explosion microeconomics <a href="">mailing list</a>.)</p> <p>This is the preferred page for discussing the report in general.</p> <p>Summary:</p> <blockquote>In recent <em>boolean satisfiability</em> (SAT) competitions, SAT solver performance has increased 5&ndash;15% per year, depending on the type of problem. However, these gains have been driven by widely varying improvements on particular problems. Retrospective surveys of SAT performance (on problems chosen after the fact) display significantly faster progress.</blockquote> <blockquote><em>Chess programs</em> have improved by around 50 Elo points per year over the last four decades. Estimates for the significance of hardware improvements are very noisy, but are consistent with hardware improvements being responsible for approximately half of progress. Progress has been smooth on the scale of years since the 1960s, except for the past five. 
<em>Go programs</em> have improved about one stone per year for the last three decades. Hardware doublings produce diminishing Elo gains, on a scale consistent with accounting for around half of progress.</blockquote> <blockquote>Improvements in a variety of <em>physics simulations</em> (selected after the fact to exhibit performance increases due to software) appear to be roughly half due to hardware progress.</blockquote> <blockquote>The <em>largest number factored</em> to date has grown by about 5.5 digits per year for the last two decades; computing power increased 10,000-fold over this period, and it is unclear how much of the increase is due to hardware progress.</blockquote> <blockquote>Some <em>mixed integer programming</em> (MIP) algorithms, run on modern MIP instances with modern hardware, have roughly doubled in speed each year. MIP is an important optimization problem, but one which has been called to attention after the fact due to performance improvements. Other optimization problems have had more inconsistent (and harder to determine) improvements.</blockquote> <blockquote>Various forms of <em>machine learning</em> have had steeply diminishing progress in percentage accuracy over recent decades. Some vision tasks have recently seen faster progress.</blockquote> lukeprog ueBMpvDsDEZgKiESt 2013-08-03T02:29:21.928Z MIRI's 2013 Summer Matching Challenge <p><small>(<a href="">MIRI</a>&nbsp;maintains Less Wrong, with generous help from <a href="">Trike Apps</a>, and much of the core content is written by salaried MIRI staff members.)</small></p> <p><strong>Update 09-15-2013</strong>: The fundraising drive has been completed! 
My thanks to everyone who contributed.</p> <p>The original post follows below...</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>Thanks to the generosity of several major donors,<sup>&dagger;</sup> every donation to the Machine Intelligence Research Institute made from now until (the end of) August 15th, 2013 will be <strong>matched dollar-for-dollar</strong>, up to a total of $200,000! &nbsp;</p> <p style="font-size: 300%;" align="center"><strong><a href="">Donate Now!</a></strong></p> <p>Now is your chance to <strong>double your impact</strong> while helping us raise up to $400,000 (with matching) to fund&nbsp;<a href="">our research program</a>.</p> <p>This post is also a good place to <em>ask your questions</em> about our activities and plans &mdash; just post a comment!</p> <p>If you have questions about what your dollars will do at MIRI, you can also schedule a quick call with MIRI Deputy Director Louie Helm: (email), 510-717-1477 (phone),&nbsp;louiehelm (Skype).</p> <p align="center"><a href=""><img src="" alt="progress bar" /></a></p> <hr /> <p>Early this year we made a transition from movement-building to research, and we've&nbsp;<em>hit the ground running</em> with six major new research papers, six new strategic analyses on our blog, and much more. 
Give now to support our ongoing work on <a href="">the future's most important problem</a>.</p> <h3><span>Accomplishments in 2013 so far</span></h3> <ul> <li>Released <strong>six new research papers</strong>: (1)&nbsp;<a href="/lw/h1k/reflection_in_probabilistic_set_theory/">Definability of Truth in Probabilistic Logic</a>, (2)&nbsp;<a href="">Intelligence Explosion Microeconomics</a>, (3)&nbsp;<a href="">Tiling Agents for Self-Modifying AI</a>, (4)&nbsp;<a href="">Robust Cooperation in the Prisoner's Dilemma</a>, (5)&nbsp;<a href="">A Comparison of Decision Algorithms on Newcomblike Problems</a>, and (6)&nbsp;<a href="">Responses to Catastrophic AGI Risk: A Survey</a>.</li> <li>Held our <a href="">2nd</a> and <a href="">3rd</a> research workshops.</li> <li>Published <strong>six new analyses</strong> to our blog: <a href="">The Lean Nonprofit</a>, <a href="">AGI Impact Experts and Friendly AI Experts</a>, <a href="">Five Theses...</a>, <a href="">When Will AI Be Created?</a>, <a href="">Friendly AI Research as Effective Altruism</a>, and <a href="">What is Intelligence?</a></li> <li>Published the&nbsp;<em><a href="">Facing the Intelligence Explosion</a>&nbsp;</em>ebook.</li> <li>Published several other substantial articles:&nbsp;<a href="">Recommended Courses for MIRI Researchers</a>, <a href="/lw/gu1/decision_theory_faq/">Decision Theory FAQ</a>,&nbsp;<a href="/lw/gln/a_brief_history_of_ethically_concerned_scientists/">A brief history of ethically concerned scientists</a>, <a href="/lw/gzq/bayesian_adjustment_does_not_defeat_existential/">Bayesian Adjustment Does Not Defeat Existential Risk Charity</a>,&nbsp;and others.</li> <li>Published our first three expert interviews, with <a href="">James Miller</a>, <a href="">Roman Yampolskiy</a>, and <a href="">Nick Beckstead</a>.</li> <li>Launched our new website as part of&nbsp;<a href="">changing our name</a>&nbsp;to MIRI.</li> <li>Relocated to&nbsp;<a href="">new offices</a>...&nbsp;2 blocks from UC Berkeley, which
is&nbsp;ranked&nbsp;<a href="">5th</a>&nbsp;in the world in mathematics, and&nbsp;<a href="">1st</a>&nbsp;in the world in mathematical logic.</li> <li>And of course <em>much</em> more.</li> </ul> <h3><span>Future Plans You Can Help Support</span></h3> <ul> <li>We will host many more research workshops, including <a href=" &lrm;">one in September</a>&nbsp;in Berkeley, one in December (with <a href="">John Baez</a> attending) in Berkeley, and one in Oxford, UK (dates TBD).</li> <li>Eliezer will continue to publish about open problems in Friendly AI. (Here is&nbsp;<a href="/lw/hbd/new_report_intelligence_explosion_microeconomics/">#1</a>&nbsp;and&nbsp;<a href="/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/">#2</a>.)</li> <li>We will continue to publish strategic analyses and expert interviews, mostly via <a href="">our blog</a>.</li> <li>We will publish nicely-edited ebooks (Kindle, iBooks, and PDF) for more of our materials, to make them more accessible: <em><a href="">The Sequences, 2006-2009</a>&nbsp;</em>and <em><a href="">The Hanson-Yudkowsky AI Foom Debate</a></em>.</li> <li>We will continue to set up the infrastructure (e.g. <a href="">new offices</a>, researcher endowments) required to host a productive Friendly AI research team, and (over several years) recruit enough top-level math talent to launch it.</li> <li>We hope to hire an experienced development director (job ad not yet posted), so that the contributions of our current supporters can be multiplied even further by a professional fundraiser.</li> </ul> <p>(Other projects are still being surveyed for likely cost and strategic impact.)</p> <p>We appreciate your support for our high-impact work! 
Donate now, and seize a better than usual chance to move our work forward.</p> <p>If you have questions about donating, please contact Louie Helm at (510) 717-1477 or&nbsp;<a href=""></a>.</p> <p><sup>&dagger;</sup> $200,000 of total matching funds has been provided by Jaan Tallinn, Loren Merritt, Rick Schwall, and Alexei Andreev.</p> lukeprog 6CnoNSworudoxJZtb 2013-07-23T19:05:56.873Z Model Combination and Adjustment <p><a href=""><img style="float: right; padding: 10px;" src="" alt="" /></a>The debate on the <a href="/lw/gvk/induction_or_the_rules_and_etiquette_of_reference/">proper use</a> of inside and outside views has raged for some time now. I suggest a way forward, building on a family of methods commonly used in statistics and machine learning to address this issue &mdash; an approach I'll call "model combination and adjustment."</p> <p>&nbsp;</p> <h3>Inside and outside views: a quick review</h3> <p><strong>1</strong>. There are two ways you might predict outcomes for a phenomenon. If you make your predictions using a detailed visualization of how something works, you're using an <em>inside view</em>. 
If instead you ignore the details of how something works, and make your predictions by assuming that the phenomenon will behave roughly like other similar phenomena, you're using an <em>outside view</em> (also called <em>reference class forecasting</em>).</p> <p>Inside view examples:</p> <ul> <li>"When I break the project into steps and visualize how long each step will take, it looks like the project will take 6 weeks."</li> <li>"When I combine what I know of physics and computation, it looks like the serial speed formulation of Moore's Law will break down around 2005, because we haven't been able to scale down energy-use-per-computation as quickly as we've scaled up computations per second, so the serial speed formulation will run into roadblocks from energy consumption and heat dissipation somewhere around 2005."</li> </ul> <p>Outside view examples:</p> <ul> <li>"I'm going to ignore the details of this project, and instead compare my project to similar projects. Other projects like this have taken 3 months, so that's probably about how long my project will take."</li> <li>"The serial speed formulation of Moore's Law has held up for several decades, through several different physical architectures, so it'll probably continue to hold through the next shift in physical architectures."</li> </ul> <p><small>See also chapter 23 in <a href="">Kahneman (2011)</a>; <a href="/lw/jg/planning_fallacy/">Planning Fallacy</a>; <a href="">Reference class forecasting</a>. Note that, after several decades of past success, the serial speed formulation of Moore's Law did in fact break down in 2004 for the reasons described (<a href="">Fuller &amp; Millett 2011</a>).</small></p> <p><a id="more"></a></p> <p>&nbsp;</p> <p><strong>2</strong>. An outside view works best when using a reference class with a <em>similar causal structure</em> to the thing you're trying to predict.
An inside view works best when a phenomenon's causal structure is well-understood, and when (to your knowledge) there are very few phenomena with a similar causal structure that you can use to predict things about the phenomenon you're investigating. See: <a href="/lw/ri/the_outside_views_domain/">The Outside View's Domain</a>.</p> <p>When writing a textbook that's much like other textbooks, you're probably best off predicting the cost and duration of the project by looking at similar textbook-writing projects. When you're predicting the trajectory of the serial speed formulation of Moore's Law, or predicting which spaceship designs will successfully land humans on the moon for the first time, you're probably best off using an (intensely <em>informed</em>) inside view.</p> <p>&nbsp;</p> <p><strong>3</strong>. Some things aren't very predictable with <em>either</em> an outside view or an inside view. Sometimes, the thing you're trying to predict seems to have a significantly different causal structure than other things, <em>and</em> you don't understand its causal structure very well. What should we do in such cases? This remains a matter of debate.</p> <p>Eliezer Yudkowsky recommends a <a href="/lw/vz/the_weak_inside_view/">weak inside view</a> for such cases:</p> <blockquote> <p>On problems that are drawn from a barrel of causally similar problems, where human optimism runs rampant and unforeseen troubles are common, the Outside View beats the Inside View... [But] on problems that are new things under the Sun, where there's a huge change of context and a structural change in underlying causal forces, the Outside View also fails - try to use it, and you'll just get into arguments about what is the proper domain of "similar historical cases" or what conclusions can be drawn therefrom. 
In this case, the best we can do is use the Weak Inside View &mdash; visualizing the causal process &mdash; to produce <em>loose qualitative conclusions about only those issues where there seems to be lopsided support</em>.</p> </blockquote> <p>In contrast, Robin Hanson <a href="">recommends</a> an outside view for difficult cases:</p> <blockquote> <p>It is easy, way too easy, to generate new mechanisms, accounts, theories, and abstractions. To see if such things are <em>useful</em>, we need to vet them, and that is easiest "nearby", where we know a lot. When we want to deal with or understand things "far", where we know little, we have little choice other than to rely on mechanisms, theories, and concepts that have worked well near. Far is just the wrong place to try new things.</p> <p>There are a bazillion possible abstractions we could apply to the world. For each abstraction, the question is not whether one <em>can</em> divide up the world that way, but whether it "carves nature at its joints", giving <em>useful</em> insight not easily gained via other abstractions. We should be wary of inventing new abstractions just to make sense of things far; we should insist they first show their value nearby.</p> </blockquote> <p>In <a href="">Yudkowsky (2013)</a>, sec. 2.1, Yudkowsky offers a reply to these paragraphs, and continues to advocate for a weak inside view. 
He also adds:</p> <blockquote> <p>the other major problem I have with the &ldquo;outside view&rdquo; is that everyone who uses it seems to come up with a different reference class and a different answer.</p> </blockquote> <p>This is the problem of "<a href="/lw/1p5/outside_view_as_conversationhalter/">reference class tennis</a>": each participant in the debate claims their own reference class is most appropriate for predicting the phenomenon under discussion, and if disagreement remains, they might each say "I&rsquo;m taking my reference class and going home."</p> <p>Responding to the same point made <a href="/lw/cze/reply_to_holden_on_tool_ai/">elsewhere</a>, Robin Hanson <a href="">wrote</a>:</p> <blockquote> <p>[Earlier, I] warned against over-reliance on &ldquo;unvetted&rdquo; abstractions. I wasn&rsquo;t at all trying to claim there is one true analogy and all others are false. Instead, I argue for preferring to rely on abstractions, including categories and similarity maps, that have been found useful by a substantial intellectual community working on related problems.</p> </blockquote> <h3><br /></h3> <h3>Multiple reference classes</h3> <p>Yudkowsky (2013) adds one more complaint about reference class forecasting in difficult forecasting circumstances:</p> <blockquote> <p>A final problem I have with many cases of 'reference class forecasting' is that... [the] final answers [generated from this process] often seem more specific than I think our state of knowledge should allow. [For example,] I don&rsquo;t think you <em>should</em> be able to tell me that the next major growth mode will have a doubling time of between a month and a year. The alleged outside viewer claims to know too much, once they stake their all on a single preferred reference class.</p> </blockquote> <p>Both this comment and Hanson's last comment above point to the vulnerability of relying on any <em>single</em> reference class, at least for difficult forecasting problems. 
<a href="">Beware brittle arguments</a>, says Paul Christiano.</p> <p>One obvious solution is to use <em>multiple</em> reference classes, and weight them by how relevant you think they are to the phenomenon you're trying to predict. Holden Karnofsky writes of investigating things from "<a href="">many different angles</a>." Jonah Sinick refers to "<a href="/lw/hmb/many_weak_arguments_vs_one_relatively_strong/">many weak arguments</a>." Statisticians call this "<a href="">model combination</a>." Machine learning researchers call it "<a href="">ensemble learning</a>" or "<a href="">classifier combination</a>."</p> <p>In other words, we can use <em>many</em> outside views.</p> <p>Nate Silver does this when he predicts elections (see <a href="">Silver 2012</a>, ch. 2). Venture capitalists do this when they evaluate startups. The best political forecasters studied in <a href="">Tetlock (2005)</a>, the "foxes," tended to do this.</p> <p>In fact, most of us do this regularly.</p> <p>How do you predict which restaurant's food you'll most enjoy, when visiting San Francisco for the first time? One outside view comes from the restaurant's Yelp reviews. Another outside view comes from your friend Jade's opinion. Another outside view comes from the fact that you usually enjoy Asian cuisines more than other cuisines. And so on. Then you <em>combine</em> these different models of the situation, weighting them by how robustly they each tend to predict your eating enjoyment, and you grab a taxi to <a href="">Osha Thai</a>.</p> <p><small>(Technical note: I say "model combination" rather than "model averaging" <a href="">on purpose</a>.)</small></p> <h3><br /></h3> <h3>Model combination and adjustment</h3> <p>You can probably do even better than this, though &mdash; if you know some things about the phenomenon and you're very careful. 
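</p>

<p>The combine-and-weight step in the restaurant example can be sketched as a short program. This is a toy sketch: the model names, enjoyment scores, and reliability weights below are all hypothetical, not anything specified in the post.</p>

```python
# Toy sketch of model combination: several "outside view" predictions of
# restaurant enjoyment, each weighted by how reliably that model has
# predicted my enjoyment in the past. All numbers here are made up.

def combine_models(predictions, weights):
    """Return the weighted combination of point predictions.

    predictions: dict of model name -> predicted enjoyment (0-10 scale)
    weights: dict of model name -> reliability weight (need not sum to 1)
    """
    total_weight = sum(weights[m] for m in predictions)
    return sum(predictions[m] * weights[m] for m in predictions) / total_weight

predictions = {
    "yelp_reviews": 8.0,       # scaled from the restaurant's average rating
    "friend_jade": 9.0,        # Jade's opinion
    "cuisine_base_rate": 7.0,  # my average enjoyment of this cuisine
}
weights = {
    "yelp_reviews": 0.5,
    "friend_jade": 0.3,
    "cuisine_base_rate": 0.2,
}

print(round(combine_models(predictions, weights), 2))  # prints 8.1
```

<p>Relying on a single reference class corresponds to putting all the weight on one model; spreading weight across several models keeps any one brittle reference class from dominating the forecast.</p>

<p>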
Once you've combined a handful of models to arrive at a qualitative or quantitative judgment, you should still be able to "adjust" the judgment in some cases using an inside view.</p> <p>For example, suppose I used the above process, and I plan to visit Osha Thai for dinner. Then, somebody gives me my first taste of the <em><a href="">Synsepalum dulcificum</a></em> fruit. I happen to know that this fruit contains a molecule called <a href="">miraculin</a>, which binds to one's tastebuds and makes sour foods taste sweet, and that this effect lasts for about an hour (<a href="">Koizumi et al. 2011</a>). Despite the results of my earlier model combination, I predict I won't particularly enjoy Osha Thai at the moment. Instead, I decide to try some tabasco sauce, to see whether it <a href="">now tastes like doughnut glaze</a>.</p> <p>In some cases, you might also need to <a href="">adjust for your prior</a> over, say, "expected enjoyment of restaurant food," if for some reason your original model combination procedure didn't capture your prior properly.</p> <h3><br /></h3> <h3>Against "the outside view"</h3> <p>There is a <em>lot</em> more to say about model combination and adjustment (e.g. <a href="/lw/vq/the_weighted_majority_algorithm/">this</a>), but for now let me make a suggestion about language usage.</p> <p>Sometimes, small changes to our language can help us think more accurately. For example, gender-neutral language can reduce male bias in our associations (<a href="">Stahlberg et al. 2007</a>). In this spirit, I recommend we retire the phrase "the outside view...", and instead use phrases like "some outside view<em>s</em>..." 
and "<em>an</em> outside view..."</p> <p>My reasons are:</p> <ol> <li> <p>Speaking of "the" outside view privileges a particular reference class, which could make us overconfident of that particular model's predictions, and leave model uncertainty unaccounted for.</p> </li> <li> <p>Speaking of "the" outside view <a href="/lw/1p5/outside_view_as_conversationhalter/">can act as a conversation-stopper</a>, whereas speaking of multiple outside views encourages further discussion about how much weight each model should be given, and what each of them implies about the phenomenon under discussion.</p> </li> </ol> lukeprog iyRpsScBa6y4rduEt 2013-07-17T20:31:08.687Z Writing Style and the Typical Mind Fallacy <p>For a long time, Eliezer has been telling me I should write more like he does. I've mostly resisted, preferring instead to write like this:</p> <div><ol> <li>Explain the lesson of the post immediately, and outline the ideas clearly with lots of headings, subheadings, lists, etc.</li> <li>State the abstract points first, then give concrete examples.</li> <li>Provide lots of links and references to related work so that readers have the opportunity to read more detail about what I'm trying to say (in case it wasn't clear in a single sentence or paragraph), or read the same thing from a different angle (in case the metaphors and language I used weren't clear to that reader).</li> </ol> <div>Eliezer talks as though his style is simply&nbsp;<em>better writing</em>, while I've complained that I often can't even tell what his posts are <em>saying</em>.</div> </div> <div><br /></div> <div>I'm a bit embarrassed to admit that it wasn't until sometime last month that I realized that,&nbsp;<em>obviously</em>, different people prefer each style, and Eliezer and I were both falling prey to the&nbsp;<a href="">typical mind fallacy</a>.</div> <div><br /></div> <p>&nbsp;</p> <p>At the recent <a href="">Effective Altruism Summit</a> I tried to figure out which personal features 
predicted writing style preference.</p> <p>One hypothesis was that people who read lots of fiction (like Eliezer) will tend to prefer Eliezer's story-like style, while those who read almost exclusively non-fiction (like me) will tend to prefer my "just gimme the facts" style. This hypothesis didn't hold up well on my non-scientific survey of ~10 LW-reading effective altruists.</p> <p>Another hypothesis was that most people would prefer Eliezer's more exciting posts, while people trained in the sciences or analytic philosophy (which insist on clear organization, definitions, references to related work, etc.) would prefer my posts. This hypothesis fared a bit better, but not by much.</p> <p>Another hypothesis was that people who had acquired an <a href="/lw/dxr/epiphany_addiction/">epiphany addiction</a> would prefer Eliezer's style, whereas those who just want to <a href="">learn everything</a>&nbsp;<a href="/lw/5me/scholarship_how_to_do_it_efficiently/">efficiently</a> would prefer my style. But I didn't test this.</p> <p>Another hypothesis that occurs to me is that people with short attention spans could prefer my more skimmable style. 
But I haven't tested this.</p> <p>Perhaps the community would like to propose some hypotheses, and test them with LW polling?</p> lukeprog 7Q3MoE9YzFMxGZR7F 2013-07-14T04:47:48.167Z Four Focus Areas of Effective Altruism <p><img style="float: right; padding: 10px;" src="" alt="" />It was a pleasure to see all major strands of the <a href="">effective altruism movement</a> gathered in one place at last week's <a href="">Effective Altruism Summit</a>.</p> <p>Representatives from&nbsp;<a href="">GiveWell</a>, <a href="">The Life You Can Save</a>, <a href="">80,000 Hours</a>, <a href="">Giving What We Can</a>, <a href="">Effective Animal Activism</a>,&nbsp;<a href="">Leverage Research</a>, the <a href="">Center for Applied Rationality</a>, and the <a href="">Machine Intelligence Research Institute</a> either attended or gave presentations.&nbsp;My thanks to Leverage Research&nbsp;for organizing and hosting the event!</p> <p>What do all these groups have in common? As <a href="">Peter Singer</a> said in&nbsp;<a href="">his TED talk</a>, effective altruism "combines both the heart and the head." The heart motivates us to be empathic and altruistic toward others, while the head can "make sure that what [we] do is effective and well-directed," so that altruists can do not just&nbsp;<em>some</em> good but&nbsp;<em>as much good as possible</em>.</p> <p>Effective altruists (EAs) tend to:</p> <ol> <li><span style="line-height: 13px;"><strong>Be globally altruistic</strong>:&nbsp;</span><span style="line-height: 13px;">EAs care about people equally, regardless of location. 
Typically, the most cost-effective altruistic cause won't happen to be in one's home country.</span></li> <li><strong>Value consequences</strong>: EAs tend to value causes according to their consequences, whether those consequences are happiness, health, justice, fairness and/or other values.</li> <li><strong>Try to do as much good as possible</strong>: EAs don't just want to do <em>some</em> good; they want to do (roughly)&nbsp;<em>as much good as possible</em>. As such, they hope to devote their altruistic resources (time, money, energy, attention) to unusually cost-effective causes. (This <a href="">doesn't</a> necessarily mean that EAs think "explicit" cost effectiveness calculations are the best <em>method for&nbsp;figuring out</em> which causes are likely to do the most good.)</li> <li><strong>Think scientifically and quantitatively</strong>: EAs tend to be analytic, scientific, and quantitative when trying to figure out which causes <em>actually</em> do the most good.</li> <li><strong>Be willing to make significant life changes to be more effectively altruistic</strong>: As a result of their efforts to be more effective in their altruism, EAs often (1) change which charities they support financially, (2) change careers, (3) spend significant chunks of time investigating which causes are most cost-effective according to their values, or (4) make other significant life changes.</li> </ol> <p>Despite these similarities, EAs are a diverse bunch, and they focus their efforts on a variety of causes.</p> <p>Below are four popular focus areas of effective altruism, ordered roughly by how large and visible they appear to be at the moment. 
Many EAs work on several of these focus areas at once, due to uncertainty about both facts and values.</p> <p>Though labels and categories <a href="/lw/od/37_ways_that_words_can_be_wrong/">have</a> <a href="">their</a> <a href="">dangers</a>, they can also enable <a href="">chunking</a>, which has benefits for memory, learning, and communication. There are many other ways we might categorize the efforts of today's EAs; this is only one categorization.</p> <h4><a id="more"></a><br /></h4> <h4>Focus Area 1: Poverty Reduction</h4> <p>Here, "poverty reduction" is meant in a broad sense that includes (e.g.) economic benefit, better health, and better education.</p> <p>Major organizations in this focus area include:</p> <ul> <li><a href="">GiveWell</a>&nbsp;is home to the most rigorous research on charitable causes, especially poverty reduction and global health. Their current charity recommendations are the <a href="">Against Malaria Foundation</a>, <a href="">GiveDirectly</a>, and the <a href="">Schistosomiasis Control Initiative</a>. (Note that GiveWell also does quite a bit of "meta effective altruism"; see below.)</li> <li><a href="">Good Ventures</a>&nbsp;works closely with GiveWell.</li> <li><a href="">The Life You Can Save</a>&nbsp;(TLYCS), named after Peter Singer's <a href="">book</a> on effective altruism, encourages people to pledge a fraction of their income to effective charities. TLYCS currently recommends GiveWell's recommended charities and <a href="">several others</a>.</li> <li><a href="">Giving What We Can</a>&nbsp;(GWWC) does some charity evaluation and also encourages people to pledge 10% of their income to effective charities. 
GWWC currently recommends two of GiveWell's recommended charities and <a href="">two others</a>.</li> <li><a href="">AidGrade</a> evaluates the cost effectiveness of poverty reduction causes, with less of a focus on individual organizations.</li> </ul> <p>In addition, some well-endowed foundations seem to have "one foot" in effective poverty reduction. For example, the <a href="">Bill &amp; Melinda Gates Foundation</a> has funded many of the most cost-effective causes in the developing world (e.g. vaccinations),&nbsp;although it also funds less cost-effective-seeming interventions in the developed world.</p> <p>In the future, poverty reduction EAs might&nbsp;also focus on economic, political, or research-infrastructure changes that might achieve poverty reduction, global health, and educational improvements more&nbsp;indirectly, as when&nbsp;<a href="">Chinese economic reforms</a>&nbsp;lifted hundreds of millions out of poverty. Though it is generally easier to evaluate the cost-effectiveness of direct efforts than that of indirect efforts, some groups (e.g. <a href="">GiveWell Labs</a>&nbsp;and&nbsp;<a href="">The Vannevar Group</a>) are beginning to evaluate the likely cost-effectiveness of these causes. &nbsp;</p> <h4><br /></h4> <h4>Focus Area 2: Meta Effective Altruism</h4> <p>Meta effective altruists focus less on specific causes and more on "meta" activities such as raising awareness of the importance of evidence-based altruism, helping EAs reach their potential, and doing research to help EAs decide which focus areas they should contribute to.</p> <p>Organizations in this focus area include:</p> <ul> <li><a href="">80,000 Hours</a>&nbsp;highlights the importance of helping the world effectively through one's career. They also offer personal counseling to help EAs choose a career and a set of causes to support.</li> <li>Explicitly, the&nbsp;<a href="">Center for Applied Rationality</a>&nbsp;(CFAR) just trains people in rationality skills. 
But&nbsp;<em>de facto</em>&nbsp;they are especially focused on the application of rational thought to the practice of altruism, and are deeply embedded in the effective altruism community.</li> <li><a href="">Leverage Research</a> focuses on growing and empowering the EA movement, e.g. by running <a href="">Effective Altruism Summit</a>, by organizing the <a href="">THINK</a> student group network, and by searching for "mind hacks" (like the <a href="">memory palace</a>) that can make EAs more effective.</li> </ul> <p>Other people and organizations contribute to meta effective altruism, too. Paul Christiano examines effective altruism from a high level at <a href="">Rational Altruist</a>. GiveWell and others often write about the <a href="">ethics</a> and <a href="">epistemology</a> of effective altruism in addition to focusing on their chosen causes. And, of course, most EA organizations spend&nbsp;<em>some</em> resources growing the EA movement. &nbsp;</p> <h4><br /></h4> <h4>Focus Area 3: The Long-Term Future</h4> <p>Many EAs value future people roughly as much as currently-living people, and think that nearly all potential value is found in the well-being of the astronomical numbers of people who could populate the long-term future (<a href="">Bostrom 2003</a>; <a href="">Beckstead 2013</a>). Future-focused EAs aim to somewhat-directly&nbsp;capture these "astronomical benefits" of the long-term future, e.g. via explicit efforts to <a href="">reduce existential risk</a>.</p> <p>Organizations in this focus area include:</p> <ul> <li><span style="line-height: 13px;">The <a href="">Future of Humanity Institute</a> at Oxford University is the primary hub of research on <a href="">existential risk mitigation</a> within the effective altruism movement. 
(<a href="">CSER</a> may join it soon, if it gets funding.)</span></li> <li>The <a href="">Machine Intelligence Research Institute</a>&nbsp;focuses on doing the research needed for humanity to one day build <a href="">Friendly AI</a> that could make astronomical numbers of future people enormously better off. It also runs the <a href="/">Less Wrong</a> group blog and forum, where much of today's EA analysis and discussion occurs.</li> </ul> <p>Other groups study particular existential risks (among other things), though perhaps not explicitly from the view of effective altruism. For example, NASA has spent time <a href="">identifying nearby asteroids</a> that could be an existential threat, and many organizations (e.g. <a href="">GCRI</a>) study worst-case scenarios for climate change or nuclear warfare that <em>might</em> result in human extinction but are more likely to result in "merely catastrophic" damage.</p> <p>Some EAs (e.g.&nbsp;<a href="">Holden Karnofsky</a>,&nbsp;<a href="">Paul Christiano</a>) have argued that even if nearly all value lies in the long-term future, focusing on nearer-term goals (e.g. effective poverty reduction or meta effective altruism) may be more likely to realize that value than more direct efforts.</p> <p>&nbsp;</p> <h4>Focus Area 4: Animal Suffering</h4> <p>Effective animal altruists are focused on reducing animal suffering in cost-effective ways. 
After all, animals vastly outnumber humans, and growing numbers of scientists <a href="">believe</a> that many animals <a href="">consciously experience</a> pleasure and suffering.</p> <p>The only organization of this type so far (that I know of) is <a href="">Effective Animal Activism</a>, which currently recommends supporting <a href="">The Humane League</a> and <a href="">Vegan Outreach</a>.</p> <p><em>Edit</em>: There is now also <a href="">Animal Ethics, Inc</a>.</p> <p>Major inspirations for those in this focus area include <a href="">Peter Singer</a>, <a href="">David Pearce</a>, and <a href="">Brian Tomasik</a>. &nbsp;</p> <h4><br /></h4> <h4>Other focus areas</h4> <p>I could perhaps have listed "effective environmental altruism" as focus area 5. The environmental movement <em>in general</em>&nbsp;is large and well-known, but I'm not aware of many effective altruists who take environmentalism to be the most important cause for them to work on, after closely investigating the above focus areas. In contrast, the groups and people named above&nbsp;tend to have influenced each other, and have considered all these focus areas explicitly. For this reason, I've left "effective environmental altruism" off the list, though perhaps a popular focus on effective environmental altruism could arise in the future.</p> <p>Other focus areas could later come to prominence, too.</p> <h4><br /></h4> <h4>Working together</h4> <p>I was pleased to see the EAs from different strands of the EA movement cooperating and learning from each other at the Effective Altruism Summit. 
Cooperation is crucial for growing the EA movement, so I hope that even if it&rsquo;s <a href="/lw/3h/why_our_kind_cant_cooperate/">not always easy</a>, EAs will "go out of their way" to cooperate and work together, no matter which focus areas they&rsquo;re sympathetic to.</p> lukeprog JmmA2Mf5GrY9D6nQD 2013-07-09T00:59:40.963Z Responses to Catastrophic AGI Risk: A Survey <p>A great many Less Wrongers gave feedback on earlier drafts of "Responses to Catastrophic AGI Risk: A Survey," which has now been <a href="">released</a>. This is the preferred discussion page for the paper.</p> <p>The report, co-authored by past MIRI researcher Kaj Sotala and University of Louisville&rsquo;s Roman Yampolskiy, is a summary of the extant literature (250+ references) on AGI risk, and can serve either as a guide for researchers or as an introduction for the uninitiated.</p> <p>Here is the abstract:</p> <blockquote> <p>Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may pose a catastrophic risk to humanity. After summarizing the arguments for why AGI may pose such a risk, we survey the field&rsquo;s proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.</p> </blockquote> lukeprog EAp3AQJv8dzTqAdKW 2013-07-08T14:33:50.800Z Start Under the Streetlight, then Push into the Shadows <p><img style="float: right; padding: 10px;" src="" alt="" /><small>See also: <a href="/lw/8ns/hack_away_at_the_edges/">Hack Away at the Edges</a>.</small></p> <h3>The streetlight effect</h3> <p>You've heard <a href="">the joke</a> before:</p> <blockquote> <p>Late at night, a police officer finds a drunk man crawling around on his hands and knees under a streetlight. The drunk man tells the officer he&rsquo;s looking for his wallet. 
When the officer asks if he&rsquo;s sure this is where he dropped the wallet, the man replies that he thinks he more likely dropped it across the street. &ldquo;Then why are you looking over here?&rdquo; the befuddled officer asks. &ldquo;Because the light&rsquo;s better here,&rdquo; explains the drunk man.</p> </blockquote> <p>The joke illustrates the <a href="">streetlight effect</a>: we "<a href="">tend to</a> look for answers where the looking is good, rather than where the answers are likely to be hiding."</p> <p><a href="">Freedman (2010)</a> documents at length some harms caused by the streetlight effect. For <a href="">example</a>:</p> <blockquote> <p>A bolt of excitement ran through the field of cardiology in the early 1980s when anti-arrhythmia drugs burst onto the scene. Researchers knew that heart-attack victims with steady heartbeats had the best odds of survival, so a medication that could tamp down irregularities seemed like a no-brainer. The drugs became the standard of care for heart-attack patients and were soon smoothing out heartbeats in intensive care wards across the United States.</p> <p>But in the early 1990s, cardiologists realized that the drugs were also doing something else: killing about 56,000 heart-attack patients a year. Yes, hearts were beating more regularly on the drugs than off, but their owners were, on average, one-third as likely to pull through. 
Cardiologists had been so focused on immediately measurable arrhythmias that they had overlooked the longer-term but far more important variable of <em>death</em>.</p> </blockquote> <h3><a id="more"></a><br /></h3> <h3>Start under the streetlight</h3> <p>Of course, there are <a href="">good reasons</a> to search under the streetlight:</p> <blockquote> <p>It is often extremely difficult or even impossible to cleanly measure what is really important, so scientists instead cleanly measure what they can, hoping it turns out to be relevant.</p> </blockquote> <p>In retrospect, we might wish cardiologists had done a decade-long longitudinal study measuring the long-term effects of the new&nbsp;anti-arrhythmia&nbsp;drugs of the 1980s. But it's easy to understand why they didn't. Decades-long longitudinal studies are expensive, and resources are limited. It was more efficient to rely on an easily-measurable proxy variable like arrhythmias.</p> <p>We must remember, however, that the analogy to the streetlight joke isn't exact. Searching under the streetlight gives the drunkard virtually <em>no</em> information about where his wallet might be. But in science and other disciplines, searching under the streetlight can reveal helpful clues about the puzzle you're investigating. Given limited resources, it's often best to start searching under the streetlight and then, initial clues in hand, push into the shadows.<sup>1</sup></p> <p>The problem with streetlight science isn't that it relies on easily-measurable proxy variables. If you want to figure out how some psychological trait works, start with a small study and use free undergraduates at your home university &mdash; that's a good way to test hypotheses cheaply. The problem comes in when researchers don't appropriately <em>flag</em> the fact their subjects were <a href="">WEIRD</a> and that a larger study needs to be done on a more representative population before we start drawing conclusions. 
(Another problem is that despite some researchers' cautions against overgeneralizing from a study of WEIRD subjects, the media will write splashy, universalizing headlines anyway.)</p> <p>But money and time aren't the only resources that might be limited. Another is <em>human reasoning ability</em>. Human brains were built for hunting and gathering in the savannah, not for unlocking the mysteries of fundamental physics or intelligence or consciousness. So even if time and money aren't limiting factors, it's often best to break a complex problem into pieces and think through the simplest pieces, or the pieces for which our data are most robust, before trying to answer the questions you <em>most</em> want to solve.</p> <p>As P&oacute;lya advises in his hugely popular <em><a href="">How to Solve It</a></em>, "If you cannot solve the proposed problem, try to solve first some related [but easier] problem." In physics, this related but easier problem is often called a <a href="">toy model</a>. In other fields, it is sometimes called a <a href="">toy problem</a>. <a href="">Animal models</a> are often used as toy models in biology and medicine.</p> <p>Or, as Scott Aaronson <a href="">put it</a>:</p> <blockquote> <p>...I don&rsquo;t spend my life thinking about P versus NP [because] there are vastly easier prerequisite questions that we already don&rsquo;t know how to answer. In a field like [theoretical computer science], you very quickly get used to being able to state a problem with perfect clarity, knowing exactly what would constitute a solution, and still not having any clue how to solve it... And at least in my experience, being pounded with this situation again and again slowly reorients your worldview... 
Faced with a [very difficult question,] you learn to respond: &ldquo;What&rsquo;s another question that&rsquo;s easier to answer, and that probably has to be answered anyway before we have any chance on the original one?&rdquo;</p> </blockquote> <p>I'll close with two examples: <a href="">GiveWell</a> on <a href="">effective altruism</a> and <a href="">MIRI</a> on <a href="/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/">stability under self-modification</a>.</p> <h3><br /></h3> <h3>GiveWell on effective altruism</h3> <p>GiveWell's <a href="">mission</a> is "to find outstanding giving opportunities and publish the full details of our analysis to help donors decide where to give."</p> <p>But finding and verifying outstanding giving opportunities is <em>hard</em>. Consider the case of one straightforward-seeming intervention: <a href="">deworming</a>.</p> <p>Nearly 2 billion people (mostly in poor countries) are infected by parasitic worms that hinder their cognitive development and overall health. This also creates barriers to economic development in regions where such worms are common. Luckily, deworming pills are cheap, and early studies indicated that they <a href="">improved</a> <a href="">educational</a> <a href="">outcomes</a>. The <a href="">DCP2</a>, produced by over 300 contributors and in collaboration with the World Health Organization, estimated that a particular deworming treatment was one of the most cost-effective treatments in global health, at just $3.41 per <a href="">DALY</a>.</p> <p>Unfortunately, things are not so simple. A <a href="">careful review</a> of the evidence in 2008 by The Cochrane Collaboration concluded that, due to weaknesses in some studies' designs and other factors, "No effect [of deworming drugs] on cognition or school performance has been demonstrated." 
And in 2011, GiveWell <a href="">found</a> that a spreadsheet used to produce the DCP2's estimates contained <em>5 separate errors</em> that, when corrected, increased the cost estimate for deworming by roughly <em>a factor of 100</em>. In 2012, <a href="">another Cochrane review</a> was even more damning for the effectiveness of deworming, concluding that "Routine deworming drugs given to school children... has not shown benefit on weight in most studies... For haemoglobin and cognition, community deworming seems to have little or no effect, and the evidence in relation to school attendance, and school performance is generally poor, with no obvious or consistent effect."</p> <p>On the other hand, Innovations for Poverty Action <a href="">critiqued</a> the 2012 Cochrane review, and GiveWell <a href="">said</a> the review did not fully undermine the case for its <a href="">#3 recommended charity</a>, which focuses on deworming.</p> <p>What are we to make of this? Thousands of hours of data collection and synthesis went into producing the initial case for deworming as a cost-effective intervention, and thousands of additional hours were required to discover flaws in those initial analyses. In the end, GiveWell recommends one deworming charity, the Schistosomiasis Control Initiative, but their <a href="">page on SCI</a> is littered with qualifications and concerns and "We don't know"s.</p> <p>GiveWell had to wrestle with these complications despite the fact that it <em>chose</em> to search under the streetlight. Global health interventions are among the <em>easiest</em> interventions to analyze, and have often been subjected to multiple randomized controlled trials and dozens of experimental studies. 
Such high-quality evidence usually isn't available when trying to estimate the cost-effectiveness of, say, certain forms of political activism.</p> <p>GiveWell co-founder Holden Karnofsky suspects that the best giving opportunities are <em>not</em> in the domain of global health, but GiveWell began their search in global health &mdash; under the spotlight &mdash; (in part) because the evidence was clearer there.<sup>2</sup></p> <p>It's <a href="">difficult</a> to do counterfactual history, but I suspect GiveWell made the right choice. While investigating global health, GiveWell has learned <a href="">many</a> <a href="">important</a> <a href="">lessons</a> <a href="">about</a> <a href="">effective</a> <a href="">altruism</a> &mdash; lessons it would have been more difficult to learn with the same clarity if they had begun with investigations of even-more-challenging domains like <a href="">meta-research</a> and political activism. But now that they've learned those lessons, they're beginning to push into the shadows where the evidence is less clear, via <a href="">GiveWell Labs</a>.</p> <h3><br /></h3> <h3>MIRI on stability under self-modification</h3> <p>MIRI's <a href="">mission</a> is "to ensure that the creation of smarter-than-human intelligence has a positive impact."</p> <p>Many different interventions have been <a href="/lw/ffh/how_can_i_reduce_existential_risk_from_ai/">proposed</a> as methods for increasing the odds that smarter-than-human intelligence has a positive impact, but for <a href="">several reasons</a> MIRI decided to focus its efforts on "Friendly AI research" during 2013.</p> <p>The FAI research program decomposes into a wide variety of technical research questions. One of those questions is the question of <em>stability under self-modification</em>:</p> <blockquote> <p>How can we ensure that an AI will serve its intended purpose even after repeated self-modification?</p> </blockquote> <p>This is a challenging and ill-defined question. 
How might we make progress on such a puzzle?</p> <p>For puzzles such as this one, Scott Aaronson <a href="">recommends</a> a strategy he calls "bait and switch":</p> <blockquote> <p>[Philosophical] progress has almost always involved a [kind of] &ldquo;bait-and-switch.&rdquo; In other words: one replaces an unanswerable philosophical riddle Q by a &ldquo;merely&rdquo; scientific or mathematical question Q&prime;, which captures part of what people have wanted to know when they&rsquo;ve asked Q. Then, with luck, one solves Q&prime;... this process of &ldquo;breaking off&rdquo; answerable parts of unanswerable riddles, then trying to answer those parts, is the closest thing to philosophical progress that there is.</p> <p>Successful examples of this breaking-off process fill intellectual history. The use of calculus to treat infinite series, the link between mental activity and nerve impulses, natural selection, set theory and first-order logic, special relativity, G&ouml;del&rsquo;s theorem, game theory, information theory, computability and complexity theory, the Bell inequality, the theory of common knowledge, Bayesian causal networks &mdash; each of these advances addressed questions that could rightly have been called &ldquo;philosophical&rdquo; before the advance was made.</p> </blockquote> <p>The recent MIRI report on <a href="">Tiling Agents</a> performs one such "bait and switch." It replaces the philosophical puzzle of "How can we ensure that an AI will serve its intended purpose even after repeated self-modification?" (Q) with a better-specified <em>formal</em> puzzle on which it is possible to make measurable progress: "How can an agent perform perfectly tiling self-modifications despite L&ouml;b's Theorem?" (Q')</p> <p>This allows us to state <a href="/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/94dx">at least three</a> crisp technical problems: L&ouml;b and coherent quantified belief (sec. 
3 of 'Tiling Agents'), nonmonotonicity of probabilistic reasoning (secs. 5.2 &amp; 7), and maximizing/satisficing not being satisfactory for bounded agents (sec. 8). It also allows us to identify progress: formal results that mankind had not previously uncovered (sec. 4).</p> <p>Of course, even if Q' is eventually solved, we'll need to check whether there are other pieces of Q we need to solve. Or perhaps Q will have been <em>dissolved</em> by our efforts to solve Q', similar to how the question "What force distinguishes living matter from non-living matter?" was dissolved by 20th century biology.</p> <h4><br /></h4> <h4><br /></h4> <h4>Notes</h4> <p><sup>1</sup> <small><a href="">Karnofsky (2011)</a> suggests that it may often be best to start under the streetlight <em>and stay there</em>, at least in the context of effective altruism. Karnofsky asks, "What does it look like when we build knowledge only where we&rsquo;re best at building knowledge, rather than building knowledge on the 'most important problems?'" His reply is: "Researching topics we&rsquo;re good at researching can have a lot of benefits, some unexpected, some pertaining to problems we never expected such research to address. Researching topics we&rsquo;re bad at researching doesn&rsquo;t seem like a good idea no matter how important the topics are. Of course I&rsquo;m in favor of thinking about how to develop new research methods to make research good at what it was formerly bad at, but I&rsquo;m against applying current problematic research methods to current projects just because they&rsquo;re the best methods available." Here's one example: "what has done more for political engagement in the U.S.: studying how to improve political engagement, or studying the technology that led to the development of the Internet, the World Wide Web, and ultimately to sites like" I am sympathetic with Karnofsky's view in many cases, but I will give two points of reply with respect to my post above. 
First, in the above post I wanted to focus on the question of how to tackle difficult questions, not the question of whether difficult questions should be tackled in the first place. And conditional on one's choice to tackle a difficult question, I recommend one start under the streetlight and push into the shadows. Second, my guess is that I'm talking about a broader notion of the streetlight effect than Karnofsky is. For example, I doubt Karnofsky would object to the process of tackling a problem in theoretical computer science or math by trying to solve easier, related problems first.</small></p> <p><sup>2</sup> <small>In GiveWell's January 24th, 2013 board meeting (starting at 6:35 in <a href="">the MP3 recording</a>), GiveWell co-founder Holden Karnofsky said that interventions outside global health are "where we would bet today that we'll find... the best giving opportunities... that best fulfill GiveWell's mission as originally [outlined] in the mission statement." This doesn't appear to be a recently acquired view of things, either. Starting at 22:47 in the same recording, Karnofsky says "There were reasons that we focused on [robustly evidence-backed] interventions for GiveWell initially, but... the [vision] I've been pointing to [of finding giving opportunities outside global health, where less evidence is available]... has [to me] been the vision all along." In personal communication with me, Karnofsky wrote that "We sought to start 'under the streetlight,' as you say, and so focused on finding opportunities to fund things with strong documented evidence of being 'proven, cost-effective and scalable.' Initially we looked at both U.S. and global interventions, and within developing-world interventions we looked at health but also economic empowerment. 
We ended up focusing on global health because it performed best by these criteria."</small></p> lukeprog hRAzDwwMuu8CZSTvN 2013-06-24T00:49:22.961Z Elites and AI: Stated Opinions <p>Previously, I asked "<a href="/lw/hlc/will_the_worlds_elites_navigate_the_creation_of/">Will the world's elites navigate the creation of AI just fine?</a>" My current answer is "probably not," but I think it's a question worth additional investigation.</p> <p>As a preliminary step, and with the help of MIRI interns Jeremy Miller and Oriane Gaillard, I've collected a few <strong>stated opinions</strong> on the issue. This survey of stated opinions is not <em>representative</em> of any particular group, and is not meant to provide strong evidence about what is <em>true</em> on the matter. It's merely a collection of quotes we happened to find on the subject. Hopefully others can point us to other stated opinions &mdash; or state their own opinions.</p> <p><a id="more"></a></p> <p><a href="">MIRI</a> researcher <strong><a href="">Eliezer Yudkowsky</a></strong> is famously pessimistic on this issue. For example, in a <a href="/lw/15x/friendlier_ai_through_politics/11q8">2009 comment</a>, he replied to the question "What kind of competitive or political system would make fragmented squabbling AIs safer than an attempt to get the monolithic approach right?" by saying "the answer is, 'None.' 
It's like asking how you should move your legs to walk faster than a jet plane" &mdash; again, implying extreme skepticism that political elites will manage AI properly.<sup>1</sup></p> <p>Cryptographer <strong><a href="">Wei Dai</a></strong> is also <a href="/lw/hlc/will_the_worlds_elites_navigate_the_creation_of/9ag2">quite pessimistic</a>:</p> <blockquote> <p>...even in a relatively optimistic scenario, one with steady progress in AI capability along with apparent progress in AI control/safety (and nobody deliberately builds a UFAI for the sake of "maximizing complexity of the universe" or what have you), it's probably only a matter of time until some AI crosses a threshold of intelligence and manages to "throw off its shackles". This may be accompanied by a last-minute scramble by mainstream elites to slow down AI progress and research methods of scalable AI control, which (if it does happen) will likely be too late to make a difference.</p> </blockquote> <p>Stanford philosopher <strong><a href="">Ken Taylor</a></strong> has also expressed pessimism, in an episode of <em>Philosophy Talk</em> called "<a href="">Turbo-charging the mind</a>":</p> <blockquote> <p>Think about nuclear technology. It evolved in a time of war... The probability that nuclear technology was going to arise at a time when we use it well rather than [for] destruction was low... Same thing with... superhuman artificial intelligence. It's going to emerge... in a context in which we make a mess out of everything. So the probability that we make a mess out of this is really high.</p> </blockquote> <p>Here, Taylor seems to express the view that humans are not yet morally and rationally advanced enough to be trusted with powerful technologies. This general view has been expressed before by many others, including Albert Einstein, who <a href="">wrote</a> that "Our entire much-praised technological progress... 
could be compared to an axe in the hand of a pathological criminal."</p> <p>In response to Taylor's comment, MIRI researcher <strong>Anna Salamon</strong> (now Executive Director of <a href="">CFAR</a>) expressed a more optimistic view:</p> <blockquote> <p>I... disagree. A lot of my colleagues would [agree with you] that 40% chance of human survival is absurdly optimistic... But, probably we're not close to AI. Probably by the time AI hits we will have had more thinking going into it... [Also,] if the Germans had successfully gotten the bomb and taken over the world, there would have been somebody who profited. If AI runs away and kills everyone, there's nobody who profits. There's a lot of incentive to try and solve the problem together...</p> </blockquote> <p>Economist <strong><a href="">James Miller</a></strong> is another voice of pessimism. In <em><a href="">Singularity Rising</a></em>, chapter 5, he worries about game-theoretic mechanisms incentivizing speed of development over safety of development:</p> <blockquote> <p>Successfully creating [superhuman AI] would give a country control of everything, making [superhuman AI] far more militarily useful than mere atomic weapons. The first nation to create an obedient [superhuman AI] would also instantly acquire the capacity to terminate its rivals&rsquo; AI development projects. Knowing the stakes, rival nations might go full throttle to win [a race to superhuman AI], even if they understood that haste could cause them to create a world-destroying [superhuman AI]. These rivals might realize the danger and desperately wish to come to an agreement to reduce the peril, but they might find that the logic of the widely used game theory paradox of the Prisoners&rsquo; Dilemma thwarts all cooperation efforts... Imagine that both the US and Chinese militaries want to create [superhuman AI]. To keep things simple, let&rsquo;s assume that each military has the binary choice to proceed either slowly or quickly. 
Going slowly increases the time it will take to build [superhuman AI] but reduces the likelihood that it will become unfriendly and destroy humanity. The United States and China might come to an agreement and decide that they will both go slowly... [But] if the United States knows that China will go slowly, it might wish to proceed quickly and accept the additional risk of destroying the world in return for having a much higher chance of being the first country to create [superhuman AI]. (During the Cold War, the United States and the Soviet Union risked destroying the world for less.) The United States might also think that if the Chinese proceed quickly, then they should go quickly, too, rather than let the Chinese be the likely winners of the... race.</p> </blockquote> <p>In chapter 6, Miller expresses similar worries about corporate incentives and AI:</p> <blockquote> <p>Paradoxically and tragically, the fact that [superhuman AI] would destroy mankind increases the chance of the private sector developing it. To see why, pretend that you&rsquo;re at the racetrack deciding whether to bet on the horse Recursive Darkness. The horse offers a good payoff in the event of victory, but her odds of winning seem too small to justify a bet&mdash;until, that is, you read the fine print on the racing form: "If Recursive Darkness loses, the world ends." Now you bet everything you have on her because you realize that the bet will either pay off or become irrelevant.</p> </blockquote> <p>Miller expanded on some of these points in his chapter in <em><a href="">Singularity Hypotheses</a></em>.</p> <p>In a short reply to Miller, GMU economist <strong><a href="">Robin Hanson</a></strong> wrote that</p> <blockquote> <p>[Miller's analysis is] only as useful as the assumptions on which it is based. 
Miller's chosen assumptions seem to me quite extreme, and quite unlikely.</p> </blockquote> <p>Unfortunately, Hanson does not explain his reasons for rejecting Miller's analysis.</p> <p>Sun Microsystems co-founder <strong><a href="">Bill Joy</a></strong> is famous for the techno-pessimism of his <em>Wired</em> essay "<a href="">Why the Future Doesn't Need Us</a>," but that article's predictions about elites' likely handling of AI are actually somewhat mixed:</p> <blockquote> <p>we all wish our course could be determined by our collective values, ethics, and morals. If we had gained more collective wisdom over the past few thousand years, then a dialogue to this end would be more practical, and the incredible powers we are about to unleash would not be nearly so troubling.</p> <p>One would think we might be driven to such a dialogue by our instinct for self-preservation. Individuals clearly have this desire, yet as a species our behavior seems to be not in our favor. In dealing with the nuclear threat, we often spoke dishonestly to ourselves and to each other, thereby greatly increasing the risks. Whether this was politically motivated, or because we chose not to think ahead, or because when faced with such grave threats we acted irrationally out of fear, I do not know, but it does not bode well.</p> <p>The new Pandora's boxes of genetics, nanotechnology, and robotics are almost open, yet we seem hardly to have noticed... Churchill remarked, in a famous left-handed compliment, that the American people and their leaders 'invariably do the right thing, after they have examined every other alternative.' In this case, however, we must act more presciently, as to do the right thing only at last may be to lose the chance to do it at all...</p> <p>...And yet I believe we do have a strong and solid basis for hope. 
Our attempts to deal with weapons of mass destruction in the last century provide a shining example of relinquishment for us to consider: the unilateral US abandonment, without preconditions, of the development of biological weapons. This relinquishment stemmed from the realization that while it would take an enormous effort to create these terrible weapons, they could from then on easily be duplicated and fall into the hands of rogue nations or terrorist groups.</p> </blockquote> <p>Former GiveWell researcher <strong><a href="">Jonah Sinick</a></strong> has <a href="/lw/hlc/will_the_worlds_elites_navigate_the_creation_of/92tx">expressed optimism</a> on the issue:</p> <blockquote> <p>I personally am optimistic about the world's elites navigating AI risk as well as possible subject to inherent human limitations that I would expect everybody to have, and the inherent risk. Some points:</p> <ol> <li> <p>I've been surprised by people's ability to avert bad outcomes. Only two nuclear weapons have been used since nuclear weapons were developed, despite the fact that there are 10,000+ nuclear weapons around the world. Political leaders are assassinated very infrequently relative to how often one might expect a priori.</p> </li> <li> <p>AI risk is a Global Catastrophic Risk in addition to being an x-risk. Therefore, even people who don't care about the far future will be motivated to prevent it.</p> </li> <li> <p>The people with the most power tend to be the most rational people, and the effect size can be expected to increase over time... The most rational people are the people who are most likely to be aware of and to work to avert AI risk...</p> </li> <li> <p>Availability of information is increasing over time. 
At the time of the Dartmouth conference, information about the potential dangers of AI was not very salient, now it's more salient, and in the future it will be still more salient...</p> </li> <li> <p>In the Manhattan project, the "will bombs ignite the atmosphere?" question was analyzed and dismissed without much (to our knowledge) double-checking. The amount of risk checking per hour of human capital available can be expected to increase over time. In general, people enjoy tackling important problems, and risk checking is more important than most of the things that people would otherwise be doing.</p> </li> </ol></blockquote> <p><strong><a href="">Paul Christiano</a></strong> is another voice of <a href="">optimism</a> about elites' handling of AI. Here are some snippets from his "mainline" scenario for AI development:</p> <blockquote> <p>It becomes fairly clear some time in advance, perhaps years, that broadly human-competitive AGI will be available soon. As this becomes obvious, competent researchers shift into more directly relevant work, and governments and researchers become more concerned with social impacts and safety issues...</p> <p>Call the point where the share of human workers is negligible point Y. After Y humans are very unlikely to maintain control over global economic dynamics---the effective population is overwhelmingly dominated by machine intelligences... This picture becomes clear to serious onlookers well in advance of the development of human-level AGI... [hence] there is much intellectual activity aimed at understanding these dynamics and strategies for handling them, carried out both in public and within governments.</p> <p>Why should we expect the control problem to be solved? 
At each point when we face a control problem more difficult than any we have faced so far and with higher consequences for failure, we expect to have faced slightly easier problems with only slightly lower consequences for failure in the past.</p> <p>As long as solutions to the control problem are not quite satisfactory, the incentives to resolve control problems are comparable to the incentives to increase the capabilities of systems. If solutions are particularly unsatisfactory, then incentives to resolve control problems are very strong. So natural economic incentives build a control system (in the traditional sense from robotics) which keeps solutions to the control problem from being too unsatisfactory.</p> </blockquote> <p>Christiano is no Pollyanna, however. In the same document, he outlines "what could go wrong," and what we might do about it.</p> <p>&nbsp;</p> <p><small>Notes</small></p> <p><sup>1</sup> <small>I originally included another quote from Eliezer, but then I noticed that other readers on Less Wrong had elsewhere interpreted that same quote differently than I had, so I removed it from this post.</small></p> lukeprog ToNNGwqNS5kecZaNQ 2013-06-15T19:52:36.207Z Will the world's elites navigate the creation of AI just fine? <p>One open question in AI risk strategy is: Can we trust the world's elite decision-makers (hereafter "elites") to navigate the creation of human-level AI (and beyond) just fine, without the kinds of special efforts that e.g. Bostrom and Yudkowsky think are needed?</p> <p>Some reasons for <em>concern</em> include:</p> <ul> <li>Otherwise smart people say unreasonable things about AI safety. 
</li> <li>Many people who believed AI was around the corner didn't take safety very seriously.</li> <li>Elites have failed to navigate many important issues wisely (2008 financial crisis, climate change, Iraq War, etc.), for a variety of reasons.</li> <li>AI may arrive rather suddenly, leaving little time for preparation.</li> </ul> <p>But if you were trying to argue for <em>hope</em>, you might argue along these lines (presented for the sake of argument; I don't actually endorse this argument):</p> <ul> <li><em>If AI is preceded by visible signals, elites are likely to take safety measures.</em> Effective measures were taken to address asteroid risk. Large resources are devoted to mitigating climate change risks. Personal and tribal selfishness align with AI risk-reduction in a way they may not align on climate change. Availability of information is increasing over time.</li> <li><em>AI is likely to be preceded by visible signals.</em> Conceptual insights often take years of incremental tweaking. In vision, speech, games, compression, robotics, and other fields, performance curves are mostly smooth. "Human-level performance at X" benchmarks influence perceptions and should be more exhaustive and come more rapidly as AI approaches. Recursive self-improvement capabilities could be charted, and are likely to be AI-complete. If AI succeeds, it will likely succeed for reasons comprehensible by the AI researchers of the time.</li> <li><em>Therefore, safety measures will likely be taken</em>.</li> <li><em>If safety measures are taken, then elites will navigate the creation of AI just fine.</em> Corporate and government leaders can use simple heuristics (e.g. Nobel prizes) to access the upper end of expert opinion. AI designs with easily tailored tendency to act may be the easiest to build. The use of early AIs to solve AI safety problems creates an attractor for "safe, powerful AI." 
Arms races are not insurmountable.</li> </ul> <p>The basic structure of this 'argument for hope' is due to Carl Shulman, though he doesn't necessarily endorse the details. (Also, it's just a rough argument, and as stated is not deductively valid.)</p> <p>Personally, I am not very comforted by this argument because:</p> <ul> <li>Elites often fail to take effective action despite plenty of warning.</li> <li>I think there's a &gt;10% chance AI will not be preceded by visible signals.</li> <li>I think the elites' safety measures will likely be insufficient.</li> </ul> <p>Obviously, there's a lot more for me to spell out here, and some of it may be unclear. The reason I'm posting these thoughts in such a rough state is so that <a href="">MIRI</a> can get some help on our research into this question.</p> <p>In particular, I'd like to know:</p> <ul> <li><em>Which historical events are analogous to AI risk in some important ways?</em> Possibilities include: nuclear weapons, climate change, recombinant DNA, nanotechnology, chlorofluorocarbons, asteroids, cyberterrorism, Spanish flu, the 2008 financial crisis, and large wars. </li> <li><em>What are some good resources (e.g. books) for investigating the relevance of these analogies to AI risk</em> (for the purposes of illuminating elites' likely response to AI risk)? </li> <li><em>What are some good studies on elites' decision-making abilities in general?</em> </li> <li><em>Has the increasing availability of information in the past century noticeably improved elite decision-making?</em> </li> </ul> lukeprog Ba8LNjWKDF5nrn9Q6 2013-05-31T18:49:10.861Z Help us name the Sequences ebook <p>&nbsp;</p> <p><em><a href="">Quantum Computing Since Democritus</a></em> got me thinking that we may want a more riveting title for <em>The Sequences, 2006&ndash;2009</em> ebook we're preparing for release (like the&nbsp;<a href="">FtIE ebook</a>). 
Maybe it could be something like <em>[Really Catchy Title]: The Less Wrong Sequences, 2006-2009</em>.</p> <p>The reason for "2006&ndash;2009" is that <a href="">Highly Advanced Epistemology 101 for Beginners</a> will be its own ebook, and future Yudkowskian LW sequences (if there are any) won't be included either.</p> <p>&nbsp;</p> <p>Example options:</p> <p>&nbsp;</p> <ul> <li><em>The Craft of Rationality: The Less Wrong Sequences, 2006&ndash;2009</em></li> <li><em>The Art of Rationality: The Less Wrong Sequences, 2006&ndash;2009</em></li> <li><em>Becoming Less Wrong: The Sequences, 2006&ndash;2009</em></li> </ul> <div><br /></div> <div>In the end, we <em>might</em>&nbsp;just call it <em>The Sequences, 2006&ndash;2009</em>, but I'd like to check whether somebody else can come up with a better name.</div> <div><br /></div> <div>Suggestions?</div> <div><br /></div> <div>(Update on 5/5/2013 is <a href="/lw/h7t/help_us_name_the_sequences_ebook/8x0f">here</a>.)</div> <p>&nbsp;</p> <p>&nbsp;</p> lukeprog Hmc2NE6oTipCvuZdN 2013-04-15T19:59:13.969Z Estimate Stability <p>I've been trying to get clear on something you might call "estimate stability." Steven Kaas recently <a href="">posted my question to StackExchange</a>, but we might as well post it here as well:</p> <blockquote>I'm trying to reason about something I call "estimate stability," and I'm hoping you can tell me whether there&rsquo;s some relevant technical language...</blockquote> <blockquote>What do I mean by "estimate stability?" Consider these three different propositions:</blockquote> <blockquote><ol> <li>We&rsquo;re 50% sure that a coin (known to be fair) will land on heads.</li> <li>We&rsquo;re 50% sure that Matt will show up at the party.</li> <li>We&rsquo;re 50% sure that Strong AI will be invented by 2080.</li> </ol></blockquote> <blockquote>These estimates feel different. One reason they feel different is that the estimates have different degrees of "stability." 
In case (1) we don't expect to gain information that will change our probability estimate. But for cases (2) and (3), we may well come upon some information that causes us to adjust the estimate either up or down.</blockquote> <blockquote>So estimate (1) is more "stable," but I'm not sure how this should be quantified. Should I think of it in terms of running a Monte Carlo simulation of what future evidence might be, and looking at something like the variance of the distribution of the resulting estimates? What happens when it&rsquo;s a whole probability distribution for e.g. the time Strong AI is invented? (Do you calculate the stability of the probability density for every year, then average the result?)</blockquote> <blockquote>Here are some other considerations that would be useful to relate more formally to considerations of estimate stability:</blockquote> <blockquote> <ul> <li>If we&rsquo;re estimating some variable, having a narrow probability distribution (prior to future evidence with respect to which we&rsquo;re trying to assess the stability) corresponds to having a lot of data. New data, in that case, would make less of a contribution in terms of changing the mean and reducing the variance.</li> <li>There are differences in model uncertainty between the three cases. I know what model to use when predicting a coin flip. My method of predicting whether Matt will show up at a party is shakier, but I have some idea of what I&rsquo;m doing. With the Strong AI case, I don&rsquo;t really have any good idea of what I&rsquo;m doing. Presumably model uncertainty is related to estimate stability, because the more model uncertainty we have, the more we can change our estimate by reducing our model uncertainty.</li> <li>Another difference between the three cases is the degree to which our actions allow us to improve our estimates, increasing their stability. 
For example, we can reduce the uncertainty and increase the stability of our estimate about Matt by calling him, but we don&rsquo;t really have any good ways to get better estimates of Strong AI timelines (other than by waiting).</li> <li>Value-of-information affects how we should deal with delay. Estimates that are unstable in the face of evidence we expect to get in the future seem to imply higher VoI. This creates a reason to accept delays in our actions. Or if we can easily gather information that will make our estimates more accurate and stable, that means we have more reason to pay the cost of gathering that information. If we expect to forget information, or expect our future selves not to take information into account, dynamic inconsistency becomes important. This is another reason why estimates might be unstable. One possible strategy here is to precommit to have our estimates regress to the mean.</li> </ul> </blockquote> <blockquote>Thanks for any thoughts!</blockquote> lukeprog K33mYmEk9LoTbN92L 2013-04-13T18:33:23.799Z Fermi Estimates <p>Just before the <a href="">Trinity test</a>, Enrico Fermi decided he wanted a rough estimate of the blast's power before the diagnostic data came in. So he dropped some pieces of paper from his hand as the blast wave passed him, and used this to estimate that the blast was equivalent to 10 kilotons of TNT. His guess was remarkably accurate for having so little data: the true answer turned out to be 20 kilotons of TNT.</p> <p>Fermi had a knack for making roughly-accurate estimates with very little data, and therefore such an estimate is known today as a <a href="">Fermi estimate</a>.</p> <p>Why bother with Fermi estimates, if your estimates are likely to be off by a factor of 2 or even 10? Often, getting an estimate within a factor of 10 or 20 is enough to make a decision. 
So Fermi estimates can save you a lot of time, especially as you gain more practice at making them.</p> <p>&nbsp;</p> <h3>Estimation tips</h3> <p><small>These first two sections are adapted from <em><a href="">Guestimation 2.0</a></em>.</small></p> <p><strong>Dare to be imprecise.</strong> Round things off enough to do the calculations in your head. I call this the <a href="">spherical cow principle</a>, after a joke about how physicists oversimplify things to make calculations feasible:</p> <blockquote> <p>Milk production at a dairy farm was low, so the farmer asked a local university for help. A multidisciplinary team of professors was assembled, headed by a theoretical physicist. After two weeks of observation and analysis, the physicist told the farmer, "I have the solution, but it only works in the case of spherical cows in a vacuum."</p> </blockquote> <p>By the spherical cow principle, there are 300 days in a year, people are six feet (or 2 meters) tall, the circumference of the Earth is 20,000 mi (or 40,000 km), and cows are spheres of meat and bone 4 feet (or 1 meter) in diameter.</p> <p><strong>Decompose the problem.</strong> Sometimes you can give an estimate in one step, within a factor of 10. (How much does a new compact car cost? $20,000.) But in most cases, you'll need to break the problem into several pieces, estimate each of them, and then recombine them. I'll give several examples below.</p> <p><strong>Estimate by bounding.</strong> Sometimes it is easier to give lower and upper bounds than to give a point estimate. How much time per day does the average 15-year-old watch TV? I don't spend any time with 15-year-olds, so I haven't a clue. It could be 30 minutes, or 3 hours, or 5 hours, but I'm pretty confident it's more than 2 minutes and less than 7 hours (400 minutes, by the spherical cow principle).</p> <p>Can we convert those bounds into an estimate? You bet. But we don't do it by taking the <em>average</em>. 
That would give us (2 mins + 400 mins)/2 = 201 mins, which is within a factor of 2 of our upper bound, but a factor of <em>100</em> greater than our lower bound. Since our goal is to estimate the answer within a factor of 10, we'll probably be way off.</p> <p>Instead, we take the <em>geometric mean</em> &mdash; the square root of the product of our upper and lower bounds. But square roots often require a calculator, so instead we'll take the <em>approximate</em> geometric mean (AGM). To do that, we average the coefficients and exponents of our upper and lower bounds.</p> <p>So what is the AGM of 2 and 400? Well, 2 is 2&times;10<sup>0</sup>, and 400 is 4&times;10<sup>2</sup>. The average of the coefficients (2 and 4) is 3; the average of the exponents (0 and 2) is 1. So, the AGM of 2 and 400 is 3&times;10<sup>1</sup>, or 30. The precise geometric mean of 2 and 400 turns out to be 28.28. Not bad.</p> <p>What if the sum of the exponents is an odd number? Then we round the resulting exponent down, and multiply the final answer by three. So suppose my lower and upper bounds for how much TV the average 15-year-old watches had been 20 mins and 400 mins. Now we calculate the AGM like this: 20 is 2&times;10<sup>1</sup>, and 400 is still 4&times;10<sup>2</sup>. The average of the coefficients (2 and 4) is 3; the average of the exponents (1 and 2) is 1.5. So we round the exponent down to 1, and we multiply the final result by three: 3(3&times;10<sup>1</sup>) = 90 mins. The precise geometric mean of 20 and 400 is 89.44. Again, not bad.</p> <p><strong>Sanity-check your answer</strong>. You should always sanity-check your final estimate by comparing it to some reasonable analogue. You'll see examples of this below.</p> <p><strong>Use Google as needed</strong>. You can often quickly find the exact quantity you're trying to estimate on Google, or at least some <em>piece</em> of the problem. 
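The round-down-and-triple AGM recipe above is easy to fumble under time pressure, so here is a minimal Python sketch of it for checking your head-math (the function name and structure are mine, not from the post):

```python
import math

def agm(lower, upper):
    """Approximate geometric mean (AGM): write each bound in scientific
    notation, average the coefficients and the exponents; if the exponents
    sum to an odd number, round the exponent down and triple the result."""
    def split(x):
        exp = math.floor(math.log10(x))
        return x / 10 ** exp, exp   # (coefficient, exponent)
    c1, e1 = split(lower)
    c2, e2 = split(upper)
    coeff = (c1 + c2) / 2
    if (e1 + e2) % 2 == 0:
        return coeff * 10 ** ((e1 + e2) // 2)
    return 3 * coeff * 10 ** ((e1 + e2) // 2)  # odd case: round down, triple

print(agm(2, 400))   # 30.0  (true geometric mean: ~28.28)
print(agm(20, 400))  # 90.0  (true geometric mean: ~89.44)
```

Of course, with a computer in hand you could just call `math.sqrt(lower * upper)`; the AGM's value is that the coefficient-and-exponent averaging is doable in your head.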
In those cases, it's probably not worth trying to estimate it <em>without</em> Google.</p> <p><a id="more"></a></p> <h3><br /></h3> <h3>Fermi estimation failure modes</h3> <p>Fermi estimates go wrong in one of three ways.</p> <p>First, we might badly overestimate or underestimate a quantity. Decomposing the problem, estimating from bounds, and looking up particular pieces on Google should protect against this. Overestimates and underestimates for the different pieces of a problem <a href="">should roughly cancel out</a>, especially when there are many pieces.</p> <p>Second, we might model the problem incorrectly. If you estimate teenage deaths per year on the assumption that most teenage deaths are from suicide, your estimate will probably be way off, because most teenage deaths are caused by accidents. To avoid this, try to decompose each Fermi problem by using a model you're fairly confident of, even if it means you need to use more pieces or give wider bounds when estimating each quantity.</p> <p>Finally, we might mistakenly treat a nonlinear problem as linear. Normally, we assume that if one object can get some result, then two objects will get twice the result. Unfortunately, this doesn't hold true for nonlinear problems. If one motorcycle on a highway can transport a person at 60 miles per hour, then 30 motorcycles can transport 30 people at 60 miles per hour. However, 10<sup>4</sup> motorcycles cannot transport 10<sup>4</sup> people at 60 miles per hour, because there will be a huge traffic jam on the highway. This problem is difficult to avoid, but with practice you will get better at recognizing when you're facing a nonlinear problem.</p> <p>&nbsp;</p> <h3>Fermi practice</h3> <p>When getting started with Fermi practice, I recommend estimating quantities that you can easily look up later, so that you can see how accurate your Fermi estimates tend to be. Don't look up the answer before constructing your estimates, though! 
Alternatively, you might allow yourself to look up particular pieces of the problem &mdash; e.g. the <a href="">number of Sikhs</a> in the world, the formula for <a href="">escape velocity</a>, or the <a href="">gross world product</a>&nbsp;&mdash; but not the final quantity you're trying to estimate.</p> <p>Most books about Fermi estimates are filled with examples done by Fermi estimate experts, and in many cases the estimates were probably adjusted after the author looked up the true answers. This post is different. My examples below are estimates I made <em>before</em> looking up the answer online, so you can get a realistic picture of how this works from someone who isn't "cheating." Also, there will be no selection effect: I'm going to do four Fermi estimates for this post, and I'm not going to throw out my estimates if they are way off. Finally, I'm not all that practiced doing "Fermis" myself, so you'll get to see what it's like for a relative newbie to go through the process. In short, I hope to give you a realistic picture of what it's like to do Fermi practice when you're just getting started.</p> <p>&nbsp;</p> <h3>Example 1: How many new passenger cars are sold each year in the USA?</h3> <p><img style="float: right; padding: 10px;" src="" alt="" />The classic Fermi problem is "How many piano tuners are there in Chicago?" This kind of estimate is useful if you want to know the approximate size of the customer base for a new product you might develop, for example. But I'm not sure anyone knows how many piano tuners there <em>really</em> are in Chicago, so let's try a different one we probably <em>can</em> look up later: "How many new passenger cars are sold each year in the USA?"</p> <p>As with all Fermi problems, there are many different models we could build. For example, we could estimate how many new cars a dealership sells per month, and then we could estimate how many dealerships there are in the USA. 
Or we could try to estimate the annual demand for new cars from the country's population. Or, if we happened to have read how many Toyota Corollas were sold last year, we could try to build our estimate from there.</p> <p>The second model looks more robust to me than the first, since I know roughly how many Americans there are, but I have no idea how many new-car dealerships there are. Still, let's try it both ways. (I <em>don't</em> happen to know how many new Corollas were sold last year.)</p> <h4>Approach #1: Car dealerships</h4> <p>How many new cars does a dealership sell per month, on average? Oofta, I dunno. To support the dealership's existence, I assume it has to be at least 5. But it's probably not more than 50, since most dealerships are in small towns that don't get much action. To get my point estimate, I'll take the AGM of 5 and 50. 5 is 5&times;10<sup>0</sup>, and 50 is 5&times;10<sup>1</sup>. Our exponents sum to an odd number, so I'll round the exponent down to 0 and multiply the final answer by 3. So, my estimate of how many new cars a new-car dealership sells per month is 3(5&times;10<sup>0</sup>) = 15.</p> <p>Now, how many new-car dealerships are there in the USA? This could be tough. I know several towns of only 10,000 people that have 3 or more new-car dealerships. I don't recall towns much smaller than that having new-car dealerships, so let's exclude them. How many cities of 10,000 people or more are there in the USA? I have no idea. So let's decompose this problem a bit more.</p> <p>How many <em>counties</em> are there in the USA? I remember seeing a map of counties colored by which national ancestry was dominant in that county. (Germany was the most common.) Thinking of that map, there were definitely more than 300 counties on it, and definitely less than 20,000. What's the AGM of 300 and 20,000? Well, 300 is 3&times;10<sup>2</sup>, and 20,000 is 2&times;10<sup>4</sup>. 
The average of coefficients 3 and 2 is 2.5, and the average of exponents 2 and 4 is 3. So the AGM of 300 and 20,000 is 2.5&times;10<sup>3</sup> = 2500.</p> <p>Now, how many towns of 10,000 people or more are there per county? I'm pretty sure the average must be larger than 10 and smaller than 5000. The AGM of 10 and 5000 is 300. (I won't include this calculation in the text anymore; you know how to do it.)</p> <p>Finally, how many car dealerships are there in cities of 10,000 or more people, on average? Most such towns are pretty small, and probably have 2-6 car dealerships. The largest cities will have many more: maybe 100-ish. So I'm pretty sure the average number of car dealerships in cities of 10,000 or more people must be between 2 and 30. The AGM of 2 and 30 is 7.5.</p> <p>Now I just multiply my estimates:</p> <p>[15 new cars sold per month per dealership] &times; [12 months per year] &times; [7.5 new-car dealerships per city of 10,000 or more people] &times; [300 cities of 10,000 or more people per county] &times; [2500 counties in the USA] = 1,012,500,000.</p> <p>A sanity check immediately invalidates this answer. There's no way that 300 million American citizens buy a <em>billion</em> new cars per year. I suppose they <em>might</em> buy 100 million new cars per year, which would be within a factor of 10 of my estimate, but I doubt it.</p> <p>As I suspected, my first approach was problematic. Let's try the second approach, starting from the population of the USA.</p> <h4>Approach #2: Population of the USA</h4> <p>There are about 300 million Americans. How many of them own a car? Maybe 1/3 of them, since children don't own cars, many people in cities don't own cars, and many households share a car or two between the adults in the household.</p> <p>Of the 100 million people who own a car, how many of them bought a <em>new</em> car in the past 5 years? Probably less than half; most people buy used cars, right? 
So maybe 1/4 of car owners bought a new car in the past 5 years, which means 1 in 20 car owners bought a new car in the past <em>year</em>.</p> <p>100 million / 20 = 5 million new cars sold each year in the USA. That doesn't seem crazy, though perhaps a bit low. I'll take this as my estimate.</p> <p>Now is your last chance to try this one on your own; in the next paragraph I'll reveal the true answer.</p> <p>&hellip;</p> <p>&hellip;</p> <p>...</p> <p>Now, I Google <a href=";output=search&amp;sclient=psy-ab&amp;q=new+cars+sold+per+year+in+the+USA&amp;oq=new+cars+sold+per+year+in+the+USA">new cars sold per year in the USA</a>. Wikipedia is the first result, and it <a href="">says</a> "In the year 2009, about 5.5 million new passenger cars were sold in the United States according to the U.S. Department of Transportation."</p> <p>Boo-yah!</p> <p>&nbsp;</p> <h3>Example 2: How many fatalities from passenger-jet crashes have there been in the past 20 years?</h3> <p><img style="float: right; padding: 10px;" src="" alt="" />Again, there are multiple models I could build. I could try to estimate how many passenger-jet flights there are per year, and then try to estimate the frequency of crashes and the average number of fatalities per crash. Or I could just try to guess the total number of passenger-jet crashes around the world per year and go from there.</p> <p>As far as I can tell, passenger-jet crashes (with fatalities) almost always make it on the TV news and (more relevant to me) the front page of Google News. Exciting footage and multiple deaths will do that. So working just from memory, it feels to me like there are about 5 passenger-jet crashes (with fatalities) per year, so maybe there were about 100 passenger jet crashes with fatalities in the past 20 years.</p> <p>Now, how many fatalities per crash? 
From memory, it seems like there are usually two kinds of crashes: ones where <em>everybody</em> dies (meaning: about 200 people?), and ones where only about 10 people die. I think the "everybody dead" crashes are less common, maybe 1/4 as common. So the average crash with fatalities should cause (200&times;1/4)+(10&times;3/4) = 50+7.5 = 60, by the spherical cow principle.</p> <p>60 fatalities per crash &times; 100 crashes with fatalities over the past 20 years = 6000 passenger fatalities from passenger-jet crashes in the past 20 years.</p> <p>Last chance to try this one on your own...</p> <p>&hellip;</p> <p>&hellip;</p> <p>&hellip;</p> <p>A Google search again brings me to Wikipedia, which reveals that an organization called ACRO <a href="">records</a> the number of airline fatalities each year. Unfortunately for my purposes, they include fatalities from cargo flights. After more Googling, I tracked down Boeing's "<a href="">Statistical Summary of Commercial Jet Airplane Accidents, 1959-2011</a>," but that report excludes jets lighter than 60,000 pounds, and excludes crashes caused by hijacking or terrorism.</p> <p>It appears it would be a major research project to figure out the true answer to our question, but let's at least estimate it from the ACRO data. Luckily, ACRO has statistics on which percentage of accidents are from passenger and other kinds of flights, which I'll take as a proxy for which percentage of <em>fatalities</em> are from different kinds of flights. According to <a href="">that page</a>, 35.41% of accidents are from "regular schedule" flights, 7.75% of accidents are from "private" flights, 5.1% of accidents are from "charter" flights, and 4.02% of accidents are from "executive" flights. I think that captures what I had in mind as "passenger-jet flights." So we'll guess that 52.28% of fatalities are from "passenger-jet flights." 
I won't round this to 50% because we're not doing a Fermi estimate right now; we're trying to <em>check</em> a Fermi estimate.</p> <p>According to ACRO's <a href="">archives</a>, there were 794 fatalities in 2012, 828 fatalities in 2011, and... well, from 1993-2012 there were a total of 28,021 fatalities. And 52.28% of that number is 14,649.</p> <p>So my estimate of 6000 was off by less than a factor of 3!</p> <p>&nbsp;</p> <h3>Example 3: How much does the New York state government spend on K-12 education every year?</h3> <p><img style="float: right; padding: 10px;" src="" alt="" />How might I estimate this? First I'll estimate the number of K-12 students in New York, and then I'll estimate how much this should cost.</p> <p>How many people live in New York? I seem to recall that NYC's greater metropolitan area is about 20 million people. That's probably most of the state's population, so I'll guess the total is about 30 million.</p> <p>How many of those 30 million people attend K-12 public schools? I can't remember what the United States' <a href="">population pyramid</a> looks like, but I'll guess that about 1/6 of Americans (and hopefully New Yorkers) attend K-12 at any given time. So that's 5 million kids in K-12 in New York. The number attending private schools probably isn't large enough to matter for factor-of-10 estimates.</p> <p>How much does a year of K-12 education cost for one child? Well, I've heard teachers don't get paid much, so after benefits and taxes and so on I'm guessing a teacher costs about $70,000 per year. How big are class sizes these days, 30 kids? By the spherical cow principle, that's about $2,000 per child, per year on teachers' salaries. But there are lots of other expenses: buildings, transport, materials, support staff, etc. And maybe some money goes to private schools or other organizations. 
Rather than estimate all those things, I'm just going to guess that about $10,000 is spent per child, per year.</p> <p>If that's right, then New York spends $50 billion per year on K-12 education.</p> <p>Last chance to make your own estimate!</p> <p>&hellip;</p> <p>&hellip;</p> <p>&hellip;</p> <p>Before I did the Fermi estimate, I had <a href="">Julia Galef</a> check Google to find this statistic, but she didn't give me any hints about the number. Her two sources were <a href="">Wolfram Alpha</a> and a <a href="">web chat</a> with New York's Deputy Secretary for Education, both of which put the figure at approximately $53 billion.</p> <p>Which is definitely within a factor of 10 of $50 billion. :)</p> <p>&nbsp;</p> <h3>Example 4: How many plays of My Bloody Valentine's "Only Shallow" have been reported to</h3> <p><img style="float: right; padding: 10px;" src="" alt="" /><a href=""></a> makes a record of every audio track you play, if you enable the relevant feature or plugin for the music software on your phone, computer, or other device. Then, the service can show you charts and statistics about your listening patterns, and make personalized music recommendations from them. My own charts are <a href="">here</a>. (Chuck Wild / <a href="">Liquid Mind</a> dominates my charts because I used to listen to that artist while sleeping.)</p> <p>My Fermi problem is: How many plays of "<a href="">Only Shallow</a>" have been reported to</p> <p>My Bloody Valentine is a popular "indie" rock band, and "Only Shallow" is probably one of their most popular tracks. How can I estimate how many plays it has gotten on</p> <p>What do I know that might help?</p> <ul> <li>I know is popular, but I don't have a sense of whether they have 1 million users, 10 million users, or 100 million users. </li> <li>I accidentally saw on's Wikipedia page that just over 50 billion track plays have been recorded. We'll consider that to be one piece of data I looked up to help with my estimate. 
</li> <li>I seem to recall reading that major music services like iTunes and Spotify have about 10 million tracks. Since records songs that people play from their private collections, whether or not they exist in popular databases, I'd guess that the total number of different tracks named in's database is an order of magnitude larger, or about 100 million tracks.</li> </ul> <p>I would guess that track plays obey a <a href="">power law</a>, with the most popular tracks getting vastly more plays than tracks of average popularity. I'd also guess that there are maybe 10,000 tracks more popular than "Only Shallow."</p> <p>Next, I simulated being good at math by having <a href="/user/Qiaochu_Yuan/overview/">Qiaochu Yuan</a> show me how to do the calculation. I also allowed myself to use a calculator. Here's what we do:</p> <blockquote> <p>Plays(rank) = C/(rank<sup>P</sup>)</p> </blockquote> <p>P is the exponent for the power law, and C is the proportionality constant. We'll guess that P is 1, a common power law exponent for empirical data. And we calculate C like so:</p> <blockquote> <p>C &asymp; [total plays]/ln(total songs) &asymp; 2.5 billion</p> </blockquote> <p>So now, assuming the song's rank is 10,000, we have:</p> <blockquote> <p>Plays(10<sup>4</sup>) = 2.5&times;10<sup>9</sup>/(10<sup>4</sup>)</p> <p>Plays("Only Shallow") = 250,000</p> </blockquote> <p>That seems high, but let's roll with it. 
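The blockquoted calculation can be replayed in a few lines of Python. This is a sketch using the post's assumed numbers (50 billion total plays, ~10<sup>8</sup> tracks, exponent P = 1, rank ≈ 10<sup>4</sup>); the normalization C ≈ total plays / ln(total tracks) comes from approximating the harmonic sum of a rank-1 power law:

```python
import math

total_plays = 50e9    # looked up: ~50 billion plays recorded
total_tracks = 1e8    # assumed: ~100 million distinct tracks
P = 1                 # assumed power-law exponent
rank = 1e4            # assumed rank of "Only Shallow"

# With plays(rank) = C / rank**P and P = 1, summing over all ranks gives
# total_plays ~= C * ln(total_tracks)  (the harmonic series), so:
C = total_plays / math.log(total_tracks)
plays = C / rank ** P
print(f"C = {C:.2g}, plays = {plays:,.0f}")  # C ~ 2.7 billion, plays ~ 270,000
```

Without the post's mental rounding, C comes out near 2.7 billion and the play count near 270,000 rather than 250,000; at Fermi precision the difference is immaterial.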
Last chance to make your own estimate!</p> <p>&hellip;</p> <p>&hellip;</p> <p>&hellip;</p> <p>And when I <a href="">check the answer</a>, I see that "Only Shallow" has about 2 million plays on</p> <p>My answer was off by less than a factor of 10, which for a Fermi estimate is called <em>victory</em>!</p> <p>Unfortunately, doesn't publish all-time track rankings or other data that might help me to determine which parts of my model were correct and incorrect.</p> <p>&nbsp;</p> <h3>Further examples</h3> <p>I focused on examples that are similar in structure to the kinds of quantities that entrepreneurs and CEOs might want to estimate, but of course there are all kinds of things one can estimate this way. Here's a sampling of Fermi problems featured in various books and websites on the subject:</p> <p><a href="">Play Fermi Questions</a>: 2100 Fermi problems and counting.</p> <p><em><a href="">Guesstimation</a></em> (2008): If all the humans in the world were crammed together, how much area would we require? What would be the mass of all 10<sup>8</sup> MongaMillions lottery tickets? On average, how many people are airborne over the US at any given moment? How many cells are there in the human body? How many people in the world are picking their nose right now? What are the relative costs of fuel for NYC rickshaws and automobiles?</p> <p><em><a href="">Guesstimation 2.0</a></em> (2011): If we launched a trillion one-dollar bills into the atmosphere, what fraction of sunlight hitting the Earth could we block with those dollar bills? If a million monkeys typed randomly on a million typewriters for a year, what is the longest string of consecutive correct letters of <em>The Cat in the Hat</em> (starting from the beginning) that they would likely type? How much energy does it take to crack a nut? If an airline asked its passengers to urinate before boarding the airplane, how much fuel would the airline save per flight? 
What is the radius of the largest rocky sphere from which we can reach escape velocity by jumping?</p> <p><em><a href="">How Many Licks?</a></em> (2009): What fraction of Earth's volume would a <a href="">mole</a> of hot, sticky, chocolate-jelly doughnuts be? How many miles does a person walk in a lifetime? How many times can you outline the continental US in shoelaces? How long would it take to read every book in the library? How long can you shower and still make it more environmentally friendly than taking a bath?</p> <p><em><a href="">Ballparking</a></em> (2012): How many bolts are in the floor of the Boston Garden basketball court? How many lanes would you need for the outermost lane of a running track to be the length of a marathon? How hard would you have to hit a baseball for it to never land?</p> <p><a href="">University of Maryland Fermi Problems Site</a>: How many sheets of letter-sized paper are used by all students at the University of Maryland in one semester? How many blades of grass are in the lawn of a typical suburban house in the summer? How many golf balls can be fit into a typical suitcase?</p> <p><a href="">Stupid Calculations</a>: a blog of silly-topic Fermi estimates.</p> <p>&nbsp;</p> <h3>Conclusion</h3> <p>Fermi estimates can help you become more efficient in your day-to-day life, and give you increased confidence in the decisions you face. If you want to become proficient in making Fermi estimates, I recommend practicing them 30 minutes per day for three months. In that time, you should be able to make about (2 Fermis per day)&times;(90 days) = 180 Fermi estimates.</p> <p>If you'd like to write down your estimation attempts and then publish them here, please do so as a reply to <a href="/lw/h5e/fermi_estimates/8ppa">this comment</a>. 
One Fermi estimate per comment, please!</p> <p>Alternatively, post your Fermi estimates to the <a href="">dedicated subreddit</a>.</p> <p><em>Update 03/06/2017: I keep getting requests from professors to use this in their classes, so: I license anyone to use this article noncommercially, so long as its authorship is noted (me = Luke Muehlhauser).</em></p> lukeprog PsEppdvgRisz5xAHG 2013-04-11T17:52:28.708Z Explicit and tacit rationality <p><a href="">Like Eliezer</a>, I "do my best thinking into a keyboard." It starts with a <a href="">burning itch</a> to figure something out. I collect ideas and arguments and evidence and sources. I arrange them, tweak them, criticize them. I explain it all in my own words so I can understand it better. By then it is nearly something that others would want to read, so I clean it up and publish, say, <a href="/lw/3w3/how_to_beat_procrastination/">How to Beat Procrastination</a>. I write <a href="">essays</a> in the <a href="">original sense</a> of the word: "attempts."</p> <p>This time, I'm trying to figure out something we might call "tacit rationality" (c.f. <a href="">tacit knowledge</a>).</p> <p>I tried and failed to write a <em>good</em> post about tacit rationality, so I wrote a <em>bad</em> post instead &mdash; one that is basically a patchwork of somewhat-related musings on explicit and tacit rationality. Therefore I'm posting this article to LW Discussion. I hope the ensuing discussion ends up leading somewhere with more clarity and usefulness.</p> <p>&nbsp;</p> <h3>Three methods for training rationality</h3> <p>Which of these three options do you think will train rationality (i.e. 
<a href="/lw/7i/rationality_is_systematized_winning/">systematized winning</a>, or "winning-rationality") most effectively?</p> <ol> <li>Spend one year reading and re-reading <em><a href="">The Sequences</a></em>, studying the math and cognitive science of rationality, and discussing rationality online and at Less Wrong meetups.</li> <li>Attend a <a href="">CFAR workshop</a>, then spend the next year practicing those skills and <a href="/lw/fc3/checklist_of_rationality_habits/">other rationality habits</a> every week.</li> <li>Run a startup or small business for one year.</li> </ol> <p>Option 1 seems to be pretty effective at training people to talk intelligently <em>about</em> rationality (let's call that "talking-rationality"), and it seems to inoculate people against some common philosophical mistakes.</p> <p>We don't yet have any examples of someone doing Option 2 (the first CFAR workshop was May 2012), but I'd expect Option 2 &mdash; if actually executed &mdash; to result in more winning-rationality than Option 1, and also a modicum of talking-rationality.</p> <p>What about Option 3? Unlike Option 2 or especially Option 1, I'd expect it to train almost no ability to talk intelligently about rationality. But I <em>would</em> expect it to result in relatively good winning-rationality, due to its tight feedback loops.</p> <p>&nbsp;</p> <h3>Talking-rationality and winning-rationality can come apart</h3> <blockquote> <p>I've come to believe... that the best way to succeed is to discover what you love and then find a way to offer it to others in the form of service, working hard, and also allowing the energy of the universe to lead you.</p> </blockquote> <p align="right"><a href="">Oprah Winfrey</a></p> <p>Oprah isn't known for being a rational thinker. She is a known <a href="">peddler of pseudoscience</a>, and she attributes her success (in part) to allowing "the energy of the universe" to lead her.</p> <p>Yet she must be doing <em>something</em> right. 
Oprah is a true rags-to-riches story. Born in Mississippi to an unwed teenage housemaid, she was so poor she wore dresses made of potato sacks. She was molested by a cousin, an uncle, and a family friend. She became pregnant at age 14.</p> <p>But in high school she became an honors student, won oratory contests and a beauty pageant, and was hired by a local radio station to report the news. She became the youngest-ever news anchor at Nashville's WLAC-TV, then hosted several shows in Baltimore, then moved to Chicago and within months her own talk show shot from last place to first place in the ratings there. Shortly afterward her show went national. She also produced and starred in several TV shows, was nominated for an Oscar for her role in a Steven Spielberg movie, launched her own TV cable network and her own magazine (the "most successful startup ever in the [magazine] industry" according to <em><a href="">Fortune</a></em>), and became the world's first female black billionaire.</p> <p>I'd like to suggest that Oprah's climb probably didn't come <em>merely</em> through inborn talent, hard work, and luck. To get from potato sack dresses to the Forbes billionaire list, Oprah had to make thousands of pretty good decisions. She had to make pretty accurate guesses about the likely consequences of various actions she could take. When she was wrong, she had to correct course fairly quickly. In short, she had to be fairly <em>rational</em>, at least in some domains of her life.</p> <p>Similarly, I know plenty of business managers and entrepreneurs who have a steady track record of good decisions and wise judgments, and yet they are religious, or they commit basic errors in logic and probability when they talk about non-business subjects.</p> <p>What's going on here? 
My guess is that successful entrepreneurs and business managers and other people must have pretty good <em>tacit rationality</em>, even if they aren't very proficient with the "rationality" concepts that Less Wrongers tend to discuss on a daily basis. Stated another way, successful businesspeople make fairly rational decisions and judgments, even though they may confabulate rather silly <em>explanations</em> for their success, and even though they don't understand the math or science of rationality well.</p> <p>LWers can probably outperform Mark Zuckerberg on the CRT and the Berlin Numeracy Test, but Zuckerberg is laughing at them from atop a huge pile of utility.</p> <p>&nbsp;</p> <h3>Explicit and tacit rationality</h3> <p>Patri Friedman, in <a href="/lw/2po/selfimprovement_or_shiny_distraction_why_less/">Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality</a>, reminded us that skill acquisition comes from <a href=";ie=utf-8&amp;oe=utf-8&amp;aq=t&amp;rls=org.mozilla:en-US:official&amp;client=firefox-a">deliberate practice</a>, and reading LW is a "shiny distraction," not deliberate practice. He said a <em>real</em> rationality practice would look more like... well, what Patri describes <a href="/lw/h5t/new_applied_rationality_workshops_april_may_and/8q54">is basically CFAR</a>, though CFAR didn't exist at the time.</p> <p>In response, and again long before CFAR existed, Anna Salamon wrote <a href="/lw/34a/goals_for_which_less_wrong_does_and_doesnt_help/">Goals for which Less Wrong does (and doesn't) help</a>. Summary: Some domains provide rich, cheap feedback, so you don't need much LW-style rationality to become successful in those domains. But many of us have goals in domains that don't offer rapid feedback: e.g. whether to buy cryonics, which 40-year investments are safe, which metaethics to endorse. For this kind of thing you need LW-style rationality. 
(We could also state this as: domains with rapid feedback train tacit rationality with respect to those domains, but for domains without rapid feedback you've got to do the best you can with LW-style "explicit rationality.")</p> <p>The good news is that you should be able to combine explicit and tacit rationality. Explicit rationality can help you realize that you should force tight feedback loops into whichever domains you want to succeed in, so that you can develop good intuitions about how to succeed in those domains. (See also: <a href="">Lean Startup</a> or <a href="">Lean Nonprofit</a> methods.)</p> <p>Explicit rationality could also help you realize that the cognitive biases most-discussed in the literature aren't necessarily the ones you should focus on ameliorating, as Aaron Swartz <a href="/lw/di4/reply_to_holden_on_the_singularity_institute/75an">wrote</a>:</p> <blockquote> <p>Cognitive biases cause people to make choices that are <em>most obviously</em> irrational, but not <em>most importantly</em> irrational... Since cognitive biases are the primary focus of research into rationality, rationality tests mostly measure how good you are at avoiding them... LW readers tend to be fairly good at avoiding cognitive biases... But there are a whole series of much more important irrationalities that LWers suffer from. (Let's call them "practical biases" as opposed to "cognitive biases," even though both are ultimately practical and cognitive.)</p> <p>...Rationality, properly understood, is in fact a predictor of success. 
Perhaps if LWers used success as their metric (as opposed to getting better at avoiding obvious mistakes), they might focus on their most important irrationalities (instead of their most obvious ones), which would lead them to be more rational and more successful.</p> </blockquote> <h3><br /></h3> <h3>Final scattered thoughts</h3> <ul> <li>If someone is consistently winning, and not just because they have tons of wealth or fame, then maybe you should conclude they have pretty good tacit rationality even if their explicit rationality is terrible. </li> <li>The positive effects of tight feedback loops might trump the effects of explicit rationality training. </li> <li>Still, I suspect explicit rationality <em>plus</em> tight feedback loops could lead to the best results of all. </li> <li>I really hope we can develop a real <a href="/lw/h5t/new_applied_rationality_workshops_april_may_and/8q54">rationality dojo</a>. </li> <li>If you're reading this post, you're probably spending too <em>much</em> time reading Less Wrong, and too <em>little</em> time <a href="">hacking your motivation system</a>, <a href="/lw/5p6/how_and_why_to_granularize/">learning social skills</a>, and <a href="">learning</a> how to inject tight feedback loops into everything you can.</li> </ul> lukeprog NLJ6NyHFZPJ2oNSZ8 2013-04-09T23:33:29.127Z Critiques of the heuristics and biases tradition <p>The chapter on judgment under uncertainty in the (excellent) new <em><a href="">Oxford Handbook of Cognitive Psychology</a></em> has a handy little section on recent critiques of the "heuristics and biases" tradition. It also discusses problems with the somewhat-competing "fast and frugal heuristics" school of thought, but for now let me just quote the section on heuristics and biases (pp. 608-609):</p> <blockquote> <p>The heuristics and biases program has been highly influential; however, some have argued that in recent years the influence, at least in psychology, has waned (McKenzie, 2005). 
This waning has been due in part to pointed critiques of the approach (e.g., Gigerenzer, 1996). This critique comprises two main arguments: (1) that by focusing mainly on coherence standards [e.g. their <em>rationality</em> given the subject's other beliefs, as contrasted with <em>correspondence standards</em> having to do with the real-world accuracy of a subject's beliefs] the approach ignores the role played by the environment or the context in which a judgment is made; and (2) that the explanations of phenomena via one-word labels such as availability, anchoring, and representativeness are vague, insufficient, and say nothing about the processes underlying judgment (see Kahneman, 2003; Kahneman &amp; Tversky, 1996 for responses to this critique).</p> <p>The accuracy of some of the heuristics proposed by Tversky and Kahneman can be compared to correspondence criteria (availability and anchoring). Thus, arguing that the tradition only uses the &ldquo;narrow norms&rdquo; (Gigerenzer, 1996) of coherence criteria is not strictly accurate (cf. Dunwoody, 2009). Nonetheless, responses in famous examples like the Linda problem can be reinterpreted as sensible rather than erroneous if one uses conversational or pragmatic norms rather than those derived from probability theory (Hilton, 1995). For example, Hertwig, Benz and Krauss (2008) asked participants which of the following two statements is more probable:</p> <blockquote> <p>[X] The percentage of adolescent smokers in Germany decreases at least 15% from current levels by September 1, 2003.</p> <p>[X&amp;Y] The tobacco tax in Germany is increased by 5 cents per cigarette and the percentage of adolescent smokers in Germany decreases at least 15% from current levels by September 1, 2003.</p> </blockquote> <p>According to the conjunction rule, [X&amp;Y cannot be more probable than X] and yet the majority of participants ranked the statements in that order. 
However, when subsequently asked to rank order four statements in order of how well each one described their understanding of X&amp;Y, there was an overwhelming tendency to rank statements like &ldquo;X and therefore Y&rdquo; or &ldquo;X and X is the cause for Y&rdquo; higher than the simple conjunction &ldquo;X and Y.&rdquo; Moreover, the minority of participants who did not commit the conjunction fallacy in the first judgment showed internal coherence by ranking &ldquo;X and Y&rdquo; as best describing their understanding in the second judgment. These results suggest that people adopt a causal understanding of the statements, in essence ranking the probability of X given Y as more probable than X occurring alone. If so, then arguably the conjunction &ldquo;error&rdquo; is no longer incorrect. (See Moro, 2009 for extensive discussion of the reasons underlying the conjunction fallacy, including why &ldquo;misunderstanding&rdquo; cannot explain all instances of the fallacy.)</p> <p>The &ldquo;vagueness&rdquo; argument can be illustrated by considering two related phenomena: the gambler&rsquo;s fallacy and the hot-hand (Gigerenzer &amp; Brighton, 2009). The gambler&rsquo;s fallacy is the tendency for people to predict the opposite outcome after a run of the same outcome (e.g., predicting heads after a run of tails when flipping a fair coin); the hot-hand, in contrast, is the tendency to predict a run will continue (e.g., a player making a shot in basketball after a succession of baskets; Gilovich, Vallone, &amp; Tversky, 1985). Ayton and Fischer (2004) pointed out that although these two behaviors are opposite - ending or continuing runs - they have both been explained via the label &ldquo;representativeness.&rdquo; In both cases a faulty concept of randomness leads people to expect short sections of a sequence to be &ldquo;representative&rdquo; of their generating process.
In the case of the coin, people believe (erroneously) that long runs should not occur, so the opposite outcome is predicted; for the player, the presence of long runs rules out a random process so a continuation is predicted (Gilovich et al., 1985). The &ldquo;representativeness&rdquo; explanation is therefore incomplete without specifying a priori which of the opposing prior expectations will result. More important, representativeness alone does not explain <em>why</em> people have the misconception that random sequences should exhibit local representativeness when in reality they do not (Ayton &amp; Fischer, 2004).</p> </blockquote> <p>&nbsp;</p> <p><small>My thanks to MIRI intern Stephen Barnes for transcribing this text.</small></p> lukeprog DAf4W9ZYuzuLaGvd5 2013-03-18T23:49:57.035Z Decision Theory FAQ <p><small>Co-authored with <a href="/user/crazy88/overview/">crazy88</a>. Please let us know when you find mistakes, and we'll fix them. Last updated 03-27-2013.</small></p> <p><strong>Contents</strong>:</p> <div id="TOC"> <ul> <li><a href="#what-is-decision-theory">1. What is decision theory?</a></li> <li><a href="#is-the-rational-decision-always-the-right-decision">2. Is the rational decision always the right decision?</a></li> <li><a href="#how-can-i-better-understand-a-decision-problem">3. How can I better understand a decision problem?</a></li> <li><a href="#how-can-i-measure-an-agents-preferences">4. How can I measure an agent's preferences?</a> <ul> <li><a href="#the-concept-of-utility">4.1. The concept of utility</a></li> <li><a href="#types-of-utility">4.2. Types of utility</a></li> </ul> </li> <li><a href="#what-do-decision-theorists-mean-by-risk-ignorance-and-uncertainty">5. What do decision theorists mean by "risk," "ignorance," and "uncertainty"?</a></li> <li><a href="#how-should-i-make-decisions-under-ignorance">6. How should I make decisions under ignorance?</a> <ul> <li><a href="#the-dominance-principle">6.1. 
The dominance principle</a></li> <li><a href="#maximin-and-leximin">6.2. Maximin and leximin</a></li> <li><a href="#maximax-and-optimism-pessimism">6.3. Maximax and optimism-pessimism</a></li> <li><a href="#other-decision-principles">6.4. Other decision principles</a></li> </ul> </li> <li><a href="#can-decisions-under-ignorance-be-transformed-into-decisions-under-uncertainty">7. Can decisions under ignorance be transformed into decisions under uncertainty?</a></li> <li><a href="#how-should-i-make-decisions-under-uncertainty">8. How should I make decisions under uncertainty?</a> <ul> <li><a href="#the-law-of-large-numbers">8.1. The law of large numbers</a></li> <li><a href="#the-axiomatic-approach">8.2. The axiomatic approach</a></li> <li><a href="#the-von-neumann-morgenstern-utility-theorem">8.3. The Von Neumann-Morgenstern utility theorem</a></li> <li><a href="#vnm-utility-theory-and-rationality">8.4. VNM utility theory and rationality</a></li> <li><a href="#objections-to-vnm-rationality">8.5. Objections to VNM-rationality</a></li> <li><a href="#should-we-accept-the-vnm-axioms">8.6. Should we accept the VNM axioms?</a></li> </ul> </li> <li><a href="#does-axiomatic-decision-theory-offer-any-action-guidance">9. Does axiomatic decision theory offer any action guidance?</a></li> <li><a href="#how-does-probability-theory-play-a-role-in-decision-theory">10. How does probability theory play a role in decision theory?</a> <ul> <li><a href="#the-basics-of-probability-theory">10.1. The basics of probability theory</a></li> <li><a href="#bayes-theorem-for-updating-probabilities">10.2. Bayes theorem for updating probabilities</a></li> <li><a href="#how-should-probabilities-be-interpreted">10.3. How should probabilities be interpreted?</a></li> </ul> </li> <li><a href="#what-about-newcombs-problem-and-alternative-decision-algorithms">11. 
What about "Newcomb's problem" and alternative decision algorithms?</a> <ul> <li><a href="#newcomblike-problems-and-two-decision-algorithms">11.1. Newcomblike problems and two decision algorithms</a></li> <li><a href="#benchmark-theory-bt">11.2. Benchmark theory (BT)</a></li> <li><a href="#timeless-decision-theory-tdt">11.3. Timeless decision theory (TDT)</a></li> <li><a href="#decision-theory-and-winning">11.4. Decision theory and &ldquo;winning&rdquo;</a></li> </ul> </li> </ul> </div> <h2 id="what-is-decision-theory"><br /></h2> <h2><a href="#what-is-decision-theory">1. What is decision theory?</a></h2> <p><em>Decision theory</em>, also known as <em>rational choice theory</em>, concerns the study of preferences, uncertainties, and other issues related to making "optimal" or "rational" choices. It has been discussed by economists, psychologists, philosophers, mathematicians, statisticians, and computer scientists.</p> <p>We can divide decision theory into three parts (<a href="">Grant &amp; Zandt 2009</a>; <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Baron 2008</a>). <em>Normative</em> decision theory studies what an ideal agent (a perfectly rational agent, with infinite computing power, etc.) would choose. <em>Descriptive</em> decision theory studies how non-ideal agents (e.g. humans) <em>actually</em> choose. <em>Prescriptive</em> decision theory studies how non-ideal agents can improve their decision-making (relative to the normative model) despite their imperfections.</p> <p>For example, one's <em>normative</em> model might be <a href="">expected utility theory</a>, which says that a rational agent chooses the action with the highest expected utility. 
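</p> <p>To make the normative rule concrete, here is a minimal sketch (not from the FAQ itself) of what "choose the action with the highest expected utility" amounts to; the acts, states, probabilities, and utility numbers are invented for illustration:</p>

```python
# Expected-utility maximization: weight each outcome's utility by the
# probability of the state that produces it, then pick the best act.
# All names and numbers below are illustrative, not from the FAQ.

def expected_utility(act, probs, utility):
    """Sum of P(state) * U(outcome of act in state) over all states."""
    return sum(p * utility[act][state] for state, p in probs.items())

probs = {"rain": 0.3, "sun": 0.7}
utility = {
    "take umbrella": {"rain": 5, "sun": 3},   # dry, but encumbered
    "no umbrella":   {"rain": 0, "sun": 10},  # soaked, or unencumbered
}

best = max(utility, key=lambda act: expected_utility(act, probs, utility))
```

<p>Here the expected utilities work out to roughly 3.6 and 7.0, so the normative model picks "no umbrella"; descriptive and prescriptive decision theory then ask how far real agents deviate from this rule and how to close the gap.</p> <p>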
Replicated results in psychology <em>describe</em> humans repeatedly <em>failing</em> to maximize expected utility in particular, <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">predictable</a> ways: for example, they make some choices based not on potential future benefits but on irrelevant past efforts (the "<a href="">sunk cost fallacy</a>"). To help people avoid this error, some theorists <em>prescribe</em> some basic training in microeconomics, which has been shown to reduce the likelihood that humans will commit the sunk cost fallacy (<a href="">Larrick et al. 1990</a>). Thus, through a coordination of normative, descriptive, and prescriptive research we can help agents to succeed in life by acting more in accordance with the normative model than they otherwise would.</p> <p>This FAQ focuses on normative decision theory. Good sources on descriptive and prescriptive decision theory include <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Stanovich (2010)</a> and <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Hastie &amp; Dawes (2009)</a>.</p> <p>Two related fields beyond the scope of this FAQ are <a href="">game theory</a> and <a href="">social choice theory</a>. Game theory is the study of conflict and cooperation among multiple decision makers, and is thus sometimes called "interactive decision theory." Social choice theory is the study of making a collective decision by combining the preferences of multiple decision makers in various ways.</p> <p>This FAQ draws heavily from two textbooks on decision theory: <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Resnik (1987)</a> and <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Peterson (2009)</a>.
It also draws from more recent results in decision theory, published in journals such as <em><a href="">Synthese</a></em> and <em><a href="">Theory and Decision</a></em>.</p> <p><a id="more"></a></p> <h2 id="is-the-rational-decision-always-the-right-decision"><a href="#is-the-rational-decision-always-the-right-decision">2. Is the rational decision always the right decision?</a></h2> <p>No. Peterson (<a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">2009</a>, ch. 1) explains:</p> <blockquote> <p>[In 1700], King Carl of Sweden and his 8,000 troops attacked the Russian army [which] had about ten times as many troops... Most historians agree that the Swedish attack was irrational, since it was almost certain to fail... However, because of an unexpected blizzard that blinded the Russian army, the Swedes won...</p> </blockquote> <blockquote> <p>Looking back, the Swedes' decision to attack the Russian army was no doubt right, since the <em>actual outcome</em> turned out to be success. However, since the Swedes had no <em>good reason</em> for expecting that they were going to win, the decision was nevertheless irrational.</p> </blockquote> <blockquote> <p>More generally speaking, we say that a decision is <em>right</em> if and only if its actual outcome is at least as good as that of every other possible outcome. Furthermore, we say that a decision is <em>rational</em> if and only if the decision maker [<em>aka</em> the "agent"] chooses to do what she has most reason to do at the point in time at which the decision is made.</p> </blockquote> <p>Unfortunately, we cannot know with certainty what the right decision is. Thus, the best we can do is to try to make "rational" or "optimal" decisions based on our preferences and incomplete information.</p> <p>&nbsp;</p> <h2 id="how-can-i-better-understand-a-decision-problem"><a href="#how-can-i-better-understand-a-decision-problem">3. 
How can I better understand a decision problem?</a></h2> <p>First, we must <em>formalize</em> a decision problem. It usually helps to <em>visualize</em> the decision problem, too.</p> <p>In decision theory, decision rules are only defined relative to a formalization of a given decision problem, and a formalization of a decision problem can be visualized in multiple ways. Here is an example from Peterson (<a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">2009</a>, ch. 2):</p> <blockquote> <p>Suppose... that you are thinking about taking out fire insurance on your home. Perhaps it costs $100 to take out insurance on a house worth $100,000, and you ask: Is it worth it?</p> </blockquote> <p>The most common way to formalize a decision problem is to break it into states, acts, and outcomes. When facing a decision problem, the decision maker aims to choose the <em>act</em> that will have the best <em>outcome</em>. But the outcome of each act depends on the <em>state</em> of the world, which is unknown to the decision maker.</p> <p>In this framework, speaking loosely, a state is a part of the world that is neither an act (something the decision maker can perform now) nor an outcome (what, precisely, states are is a complex question beyond the scope of this document). Luckily, not all states are relevant to a particular decision problem. We only need to take into account states that affect the agent's preference among acts. A simple formalization of the fire insurance problem might include only two states: the state in which your house doesn't (later) catch on fire, and the state in which your house <em>does</em> (later) catch on fire.</p> <p>Presumably, the agent prefers some outcomes to others. Suppose the four conceivable outcomes in the above decision problem are: (1) House and $0, (2) House and -$100, (3) No house and $99,900, and (4) No house and $0.
In this case, the decision maker might prefer outcome 1 over outcome 2, outcome 2 over outcome 3, and outcome 3 over outcome 4. (We'll discuss measures of value for outcomes in the next section.)</p> <p>An act is commonly taken to be a function that takes one set of the possible states of the world as input and gives a particular outcome as output. For the above decision problem we could say that if the act "Take out insurance" has the world-state "Fire" as its input, then it will give the outcome "No house and $99,900" as its output.</p> <div class="figure"><img src="" alt="An outline of the states, acts and outcomes in the insurance case" /> <p class="caption">An outline of the states, acts and outcomes in the insurance case</p> </div> <p>Note that decision theory is concerned with <em>particular</em> acts rather than <em>generic</em> acts, e.g. "sailing west in 1492" rather than "sailing." Moreover, the acts of a decision problem must be <em>alternative</em> acts, so that the decision maker has to choose exactly <em>one</em> act.</p> <p>Once a decision problem has been formalized, it can then be visualized in any of several ways.</p> <p>One way to visualize this decision problem is to use a <em>decision matrix</em>:</p> <table border="0" cellspacing="5" cellpadding="3"> <tbody> <tr> <td>&nbsp;</td> <td><em>Fire</em></td> <td><em>No fire</em></td> </tr> <tr> <td><em>Take out insurance</em></td> <td>No house and $99,900</td> <td>House and -$100</td> </tr> <tr> <td><em>No insurance</em></td> <td>No house and $0</td> <td>House and $0</td> </tr> </tbody> </table> <p>Another way to visualize this problem is to use a <em>decision tree</em>:</p> <p><img src="" alt="" /></p> <p>The square is a <em>choice node</em>, the circles are <em>chance nodes</em>, and the triangles are <em>terminal nodes</em>. At the choice node, the decision maker chooses which branch of the decision tree to take. At the chance nodes, <em>nature</em> decides which branch to follow. 
The triangles represent outcomes.</p> <p>Of course, we could add more branches to each choice node and each chance node. We could also add more choice nodes, in which case we are representing a <em>sequential</em> decision problem. Finally, we could add probabilities to each branch, as long as the probabilities of all the branches extending from each single node sum to 1. And because a decision tree obeys the laws of probability theory, we can calculate the probability of any given node by multiplying the probabilities of all the branches preceding it.</p> <p>Our decision problem could also be represented as a <em>vector</em> &mdash; an ordered list of mathematical objects that is perhaps most suitable for computers:</p> <blockquote> <p>[<br /> [a<sub>1</sub> = take out insurance,<br /> a<sub>2</sub> = do not];<br /> [s<sub>1</sub> = fire,<br /> s<sub>2</sub> = no fire];<br /> [(a<sub>1</sub>, s<sub>1</sub>) = No house and $99,900,<br /> (a<sub>1</sub>, s<sub>2</sub>) = House and -$100,<br /> (a<sub>2</sub>, s<sub>1</sub>) = No house and $0,<br /> (a<sub>2</sub>, s<sub>2</sub>) = House and $0]<br /> ]</p> </blockquote> <p>For more details on formalizing and visualizing decision problems, see <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Skinner (1993)</a>.</p> <p>&nbsp;</p> <h2 id="how-can-i-measure-an-agents-preferences"><a href="#how-can-i-measure-an-agents-preferences">4. How can I measure an agent's preferences?</a></h2> <h3 id="the-concept-of-utility"><a href="#the-concept-of-utility">4.1. The concept of utility</a></h3> <p>It is important not to measure an agent's preferences in terms of <em>objective</em> value, e.g. monetary value. To see why, consider the absurdities that can result when we try to measure an agent's preference with money alone.</p> <p>Suppose you may choose between (A) receiving a million dollars <em>for sure</em>, and (B) a 50% chance of winning either $3 million or nothing. 
The <em>expected monetary value</em> (EMV) of your act is computed by multiplying the monetary value of each possible outcome by its probability. So, the EMV of choice A is (1)($1 million) = $1 million. The EMV of choice B is (0.5)($3 million) + (0.5)($0) = $1.5 million. Choice B has a higher expected monetary value, and yet many people would prefer the guaranteed million.</p> <p>Why? For many people, the difference between having $0 and $1 million is <em>subjectively</em> much larger than the difference between having $1 million and $3 million, even if the latter difference is larger in dollars.</p> <p>To capture an agent's <em>subjective</em> preferences, we use the concept of <em>utility</em>. A <em>utility function</em> assigns numbers to outcomes such that outcomes with higher numbers are preferred to outcomes with lower numbers. For example, for a particular decision maker &mdash; say, one who has no money &mdash; the utility of $0 might be 0, the utility of $1 million might be 1000, and the utility of $3 million might be 1500. Thus, the <em>expected utility</em> (EU) of choice A is, for this decision maker, (1)(1000) = 1000. Meanwhile, the EU of choice B is (0.5)(1500) + (0.5)(0) = 750. In this case, the expected utility of choice A is greater than that of choice B, even though choice B has a greater expected monetary value.</p> <p>Note that those from the field of statistics who work on decision theory tend to talk about a "loss function," which is simply an <em>inverse</em> utility function. For an overview of decision theory from this perspective, see <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Berger (1985)</a> and <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Robert (2001)</a>. 
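</p> <p>The arithmetic of the million-dollar example above can be checked directly; the utilities 0, 1000, and 1500 are the ones stipulated in the text for this particular (penniless) decision maker:</p>

```python
# Choice A: $1 million for sure. Choice B: 50% chance of $3 million, else $0.
# Expected monetary value (EMV) ranks B higher, but this agent's stipulated
# utilities rank A higher in expected utility (EU).

emv_a = 1.0 * 1_000_000                              # $1,000,000
emv_b = 0.5 * 3_000_000 + 0.5 * 0                    # $1,500,000

utility = {0: 0, 1_000_000: 1000, 3_000_000: 1500}   # this agent's utilities
eu_a = 1.0 * utility[1_000_000]                      # 1000
eu_b = 0.5 * utility[3_000_000] + 0.5 * utility[0]   # 750
```

<p>Swapping in a different utility function can reverse which act maximizes expected utility, which is the point: utility, not money, is what the theory tells the agent to maximize.</p> <p>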
For a critique of some standard results in statistical decision theory, see <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Jaynes (2003, ch. 13)</a>.</p> <p>&nbsp;</p> <h3 id="types-of-utility"><a href="#types-of-utility">4.2. Types of utility</a></h3> <p>An agent's utility function can't be directly observed, so it must be constructed &mdash; e.g. by asking them which options they prefer for a large set of pairs of alternatives (as on <a href=""></a>). The number that corresponds to an outcome's utility can convey different information depending on the <em>utility scale</em> in use, and the utility scale in use depends on how the utility function is constructed.</p> <p>Decision theorists distinguish three kinds of utility scales:</p> <ol style="list-style-type: decimal"> <li> <p>Ordinal scales ("12 is better than 6"). In an ordinal scale, preferred outcomes are assigned higher numbers, but the numbers don't tell us anything about the differences or ratios between the utility of different outcomes.</p> </li> <li> <p>Interval scales ("the difference between 12 and 6 equals that between 6 and 0"). An interval scale gives us more information than an ordinal scale. Not only are preferred outcomes assigned higher numbers, but also the numbers accurately reflect the <em>difference</em> between the utility of different outcomes. They do not, however, necessarily reflect the ratios of utility between different outcomes. If outcome A has utility 0, outcome B has utility 6, and outcome C has utility 12 on an interval scale, then we know that the difference in utility between outcomes A and B and between outcomes B and C is the same, but we can't know whether outcome B is "twice as good" as outcome A.</p> </li> <li> <p>Ratio scales ("12 is exactly <em>twice</em> as valuable as 6"). Numerical utility assignments on a ratio scale give us the most information of all. 
They accurately reflect preference rankings, differences, <em>and</em> ratios. Thus, we can say that an outcome with utility 12 is exactly <em>twice</em> as valuable to the agent in question as an outcome with utility 6.</p> </li> </ol> <p>Note that neither <em>experienced utility</em> (happiness) nor the notions of "average utility" or "total utility" discussed by utilitarian moral philosophers are the same thing as the <em>decision utility</em> we are discussing here, which simply encodes an agent's decision preferences. As the situation merits, we can be even more specific. For example, when discussing the type of decision utility used in an interval scale utility function constructed using Von Neumann &amp; Morgenstern's axiomatic approach (see section 8), some people use the term <em>VNM-utility</em>.</p> <p>Now that you know that an agent's preferences can be represented as a "utility function," and that assignments of utility to outcomes can mean different things depending on the utility scale of the utility function, we are ready to think more formally about the challenge of making "optimal" or "rational" choices. (We will return to the problem of constructing an agent's utility function later, in section 8.3.)</p> <p>&nbsp;</p> <h2 id="what-do-decision-theorists-mean-by-risk-ignorance-and-uncertainty"><a href="#what-do-decision-theorists-mean-by-risk-ignorance-and-uncertainty">5. What do decision theorists mean by "risk," "ignorance," and "uncertainty"?</a></h2> <p>Peterson (<a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">2009</a>, ch. 1) explains:</p> <blockquote> <p>In decision theory, everyday terms such as <em>risk</em>, <em>ignorance</em>, and <em>uncertainty</em> are used as technical terms with precise meanings. In decisions under risk the decision maker knows the probability of the possible outcomes, whereas in decisions under ignorance the probabilities are either unknown or non-existent.
Uncertainty is either used as a synonym for ignorance, or as a broader term referring to both risk and ignorance.</p> </blockquote> <p>In this FAQ, a "decision under ignorance" is one in which probabilities are <em>not</em> assigned to all outcomes, and a "decision under uncertainty" is one in which probabilities <em>are</em> assigned to all outcomes. The term "risk" will be reserved for discussions related to utility.</p> <p>&nbsp;</p> <h2 id="how-should-i-make-decisions-under-ignorance"><a href="#how-should-i-make-decisions-under-ignorance">6. How should I make decisions under ignorance?</a></h2> <p>A decision maker faces a "decision under ignorance" when she (1) knows which acts she could choose and which outcomes they may result in, but (2) is unable to assign probabilities to the outcomes.</p> <p>(Note that many theorists think that all decisions under ignorance can be transformed into decisions under uncertainty, in which case this section will be irrelevant except for subsection 6.1. For details, see section 7.)</p> <p>&nbsp;</p> <h3 id="the-dominance-principle"><a href="#the-dominance-principle">6.1. The dominance principle</a></h3> <p>To borrow an example from Peterson (<a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">2009</a>, ch. 3), suppose that Jane isn't sure whether to order hamburger or monkfish at a new restaurant. Just about any chef can make an edible hamburger, and she knows that monkfish is fantastic if prepared by a world-class chef, but she also recalls that monkfish is difficult to cook. Unfortunately, she knows too little about this restaurant to assign any probability to the prospect of getting good monkfish. 
Her decision matrix might look like this:</p> <table border="0" cellspacing="5" cellpadding="3"> <tbody> <tr> <td>&nbsp;</td> <td><em>Good chef</em></td> <td><em>Bad chef</em></td> </tr> <tr> <td><em>Monkfish</em></td> <td>good monkfish</td> <td>terrible monkfish</td> </tr> <tr> <td><em>Hamburger</em></td> <td>edible hamburger</td> <td>edible hamburger</td> </tr> <tr> <td><em>No main course</em></td> <td>hungry</td> <td>hungry</td> </tr> </tbody> </table> <p>Here, decision theorists would say that the "hamburger" choice <em>dominates</em> the "no main course" choice. This is because choosing the hamburger leads to a better outcome for Jane no matter which possible state of the world (good chef or bad chef) turns out to be true.</p> <p>This <em>dominance principle</em> comes in two forms:</p> <ul> <li><em>Weak dominance</em>: One act is <em>more</em> rational than another if (1) all its possible outcomes are at least as good as those of the other, and if (2) there is at least one possible outcome that is better than that of the other act.</li> <li><em>Strong dominance</em>: One act is <em>more</em> rational than another if all of its possible outcomes are better than those of the other act.</li> </ul> <div class="figure"><img src="" alt="A comparison of strong and weak dominance" /> <p class="caption">A comparison of strong and weak dominance</p> </div> <p>The dominance principle can also be applied to decisions under uncertainty (in which probabilities <em>are</em> assigned to all the outcomes). If we assign probabilities to outcomes, it is still rational to choose one act over another act if all its outcomes are at least as good as the outcomes of the other act.</p> <p>However, the dominance principle only applies (non-controversially) when the agent&rsquo;s acts are independent of the state of the world.
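</p> <p>Dominance checks are mechanical once outcomes are ranked. Here is a sketch over Jane's matrix, using made-up numeric ranks for the four outcomes (higher = preferred by Jane):</p>

```python
# Weak dominance: at least as good in every state, strictly better in some.
# Strong dominance: strictly better in every state.
# The numeric outcome ranks below are illustrative, not from the text.

def weakly_dominates(a, b, matrix, states):
    at_least_as_good = all(matrix[a][s] >= matrix[b][s] for s in states)
    better_somewhere = any(matrix[a][s] > matrix[b][s] for s in states)
    return at_least_as_good and better_somewhere

def strongly_dominates(a, b, matrix, states):
    return all(matrix[a][s] > matrix[b][s] for s in states)

states = ["good chef", "bad chef"]
matrix = {
    "monkfish":       {"good chef": 3, "bad chef": 0},  # fantastic / terrible
    "hamburger":      {"good chef": 2, "bad chef": 2},  # edible either way
    "no main course": {"good chef": 1, "bad chef": 1},  # hungry either way
}
```

<p>Here "hamburger" strongly dominates "no main course," but neither "hamburger" nor "monkfish" dominates the other, so the dominance principle alone cannot settle Jane's choice.</p> <p>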
So consider the decision of whether to steal a coat:</p> <table border="0" cellspacing="5" cellpadding="3"> <tbody> <tr> <td>&nbsp;</td> <td><em>Charged with theft</em></td> <td><em>Not charged with theft</em></td> </tr> <tr> <td><em>Theft</em></td> <td>Jail and coat</td> <td>Freedom and coat</td> </tr> <tr> <td><em>No theft</em></td> <td>Jail</td> <td>Freedom</td> </tr> </tbody> </table> <p>In this case, stealing the coat dominates not doing so, but it isn&rsquo;t necessarily the rational decision. After all, stealing increases your chance of getting charged with theft and might be irrational for this reason. So dominance doesn&rsquo;t apply in cases like this, where the state of the world is not independent of the agent&rsquo;s act.</p> <p>On top of this, not all decision problems include an act that dominates all the others. Consequently, additional principles are often required to reach a decision.</p> <p>&nbsp;</p> <h3 id="maximin-and-leximin"><a href="#maximin-and-leximin">6.2. Maximin and leximin</a></h3> <p>Some decision theorists have suggested the <em>maximin principle</em>: if the worst possible outcome of one act is better than the worst possible outcome of another act, then the former act should be chosen. In Jane's decision problem above, the maximin principle would prescribe choosing the hamburger, because the worst possible outcome of choosing the hamburger ("edible hamburger") is better than the worst possible outcome of choosing the monkfish ("terrible monkfish") and is also better than the worst possible outcome of eating no main course ("hungry").</p> <p>If the worst outcomes of two or more acts are equally good, the maximin principle tells you to be indifferent between them. But that doesn't seem right.
For this reason, fans of the maximin principle often invoke the <em>lexical</em> maximin principle ("leximin"), which says that if the worst outcomes of two or more acts are equally good, one should choose the act for which the <em>second worst</em> outcome is best. (If that doesn't single out a single act, then the <em>third worst</em> outcome should be considered, and so on.)</p> <p>Why adopt the leximin principle? Advocates point out that the leximin principle transforms a decision problem under ignorance into a decision problem under partial certainty. The decision maker doesn't know what the outcome will be, but they know what the worst possible outcome will be.</p> <p>But in some cases, the leximin rule seems clearly irrational. Imagine this decision problem, with two possible acts and two possible states of the world:</p> <table border="0" cellspacing="5" cellpadding="3"> <tbody> <tr> <td>&nbsp;</td> <td>s<sub>1</sub></td> <td>s<sub>2</sub></td> </tr> <tr> <td>a<sub>1</sub></td> <td>$1</td> <td>$10,001.01</td> </tr> <tr> <td>a<sub>2</sub></td> <td>$1.01</td> <td>$1.01</td> </tr> </tbody> </table> <p>In this situation, the leximin principle prescribes choosing a<sub>2</sub>. But most people would agree it is rational to risk losing out on a single cent for the chance to get an extra $10,000.</p> <p>&nbsp;</p> <h3 id="maximax-and-optimism-pessimism"><a href="#maximax-and-optimism-pessimism">6.3. Maximax and optimism-pessimism</a></h3> <p>The maximin and leximin rules focus their attention on the worst possible outcomes of a decision, but why not focus on the <em>best</em> possible outcome? The <em>maximax principle</em> prescribes that if the best possible outcome of one act is better than the best possible outcome of another act, then the former act should be chosen.</p> <p>More popular among decision theorists is the <em>optimism-pessimism rule</em> (<em>aka</em> the <em>alpha-index rule</em>). 
The optimism-pessimism rule prescribes that one consider both the best and worst possible outcome of each possible act, and then choose according to one's degree of optimism or pessimism.</p> <p>Here's an example from Peterson (<a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">2009</a>, ch. 3):</p> <table border="0" cellspacing="5" cellpadding="3"> <tbody> <tr> <td>&nbsp;</td> <td>s<sub>1</sub></td> <td>s<sub>2</sub></td> <td>s<sub>3</sub></td> <td>s<sub>4</sub></td> <td>s<sub>5</sub></td> <td>s<sub>6</sub></td> </tr> <tr> <td>a<sub>1</sub></td> <td>55</td> <td>18</td> <td>28</td> <td>10</td> <td>36</td> <td>100</td> </tr> <tr> <td>a<sub>2</sub></td> <td>50</td> <td>87</td> <td>55</td> <td>90</td> <td>75</td> <td>70</td> </tr> </tbody> </table> <p>We represent the decision maker's level of optimism on a scale of 0 to 1, where 0 is maximal pessimism and 1 is maximal optimism. For a<sub>1</sub>, the worst possible outcome is 10 and the best possible outcome is 100. That is, min(a<sub>1</sub>) = 10 and max(a<sub>1</sub>) = 100. So if the decision maker is 0.85 optimistic, then the total value of a<sub>1</sub> is (0.85)(100) + (1 - 0.85)(10) = 86.5, and the total value of a<sub>2</sub> is (0.85)(90) + (1 - 0.85)(50) = 84. In this situation, the optimism-pessimism rule prescribes action a<sub>1</sub>.</p> <p>If the decision maker's optimism is 0, then the optimism-pessimism rule collapses into the maximin rule because (0)(max(a<sub>i</sub>)) + (1 - 0)(min(a<sub>i</sub>)) = min(a<sub>i</sub>). And if the decision maker's optimism is 1, then the optimism-pessimism rule collapses into the maximax rule. Thus, the optimism-pessimism rule turns out to be a generalization of the maximin and maximax rules. (Well, sort of. 
The maximin and maximax principles require only that we measure value on an ordinal scale, whereas the optimism-pessimism rule requires that we measure value on an interval scale.)</p> <p>The optimism-pessimism rule pays attention to both the best-case and worst-case scenarios, but is it rational to ignore all the outcomes in between? Consider this example:</p> <table border="0" cellspacing="5" cellpadding="3"> <tbody> <tr> <td>&nbsp;</td> <td>s<sub>1</sub></td> <td>s<sub>2</sub></td> <td>s<sub>3</sub></td> </tr> <tr> <td>a<sub>1</sub></td> <td>1</td> <td>2</td> <td>100</td> </tr> <tr> <td>a<sub>2</sub></td> <td>1</td> <td>99</td> <td>100</td> </tr> </tbody> </table> <p>The maximum and minimum values for a<sub>1</sub> and a<sub>2</sub> are the same, so for every degree of optimism both acts are equally good. But it seems obvious that one should choose a<sub>2</sub>.</p> <p>&nbsp;</p> <h3 id="other-decision-principles"><a href="#other-decision-principles">6.4. Other decision principles</a></h3> <p>Many other decision principles for dealing with decisions under ignorance have been proposed, including <a href="">minimax regret</a>, <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">info-gap</a>, and <a href="">maxipok</a>. For more details on making decisions under ignorance, see <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Peterson (2009)</a> and <a href="">Bossert et al. (2000)</a>.</p> <p>One queer feature of the decision principles discussed in this section is that they willfully disregard some information relevant to making a decision.
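</p>

<p>The optimism-pessimism computation from section 6.3, and its collapse into maximin and maximax at the extremes, can be sketched in a few lines of Python (the payoffs reproduce Peterson's example):</p>

```python
# Optimism-pessimism (alpha-index) rule:
#   score(act) = alpha * best_outcome + (1 - alpha) * worst_outcome,
# where alpha in [0, 1] is the decision maker's degree of optimism.
a1 = [55, 18, 28, 10, 36, 100]
a2 = [50, 87, 55, 90, 75, 70]

def alpha_index(payoffs, alpha):
    return alpha * max(payoffs) + (1 - alpha) * min(payoffs)

# An 0.85-optimist scores a1 at 86.5 and a2 at 84.0, so a1 is chosen.
print(alpha_index(a1, 0.85), alpha_index(a2, 0.85))

# alpha = 0 recovers maximin (worst case only);
# alpha = 1 recovers maximax (best case only).
print(alpha_index(a1, 0), alpha_index(a1, 1))  # 10 100
```

<p>Note that the rule consults only the two extremes of each act; every payoff in between is discarded.</p>

<p>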
Such a move could make sense when trying to find a decision algorithm that performs well under tight limits on available computation (<a href="">Brafman &amp; Tennenholtz 2000</a>), but it's unclear why an <em>ideal</em> agent with infinite computing power (fit for a <em>normative</em> rather than a <em>prescriptive</em> theory) should willfully disregard information.</p> <p>&nbsp;</p> <h2 id="can-decisions-under-ignorance-be-transformed-into-decisions-under-uncertainty"><a href="#can-decisions-under-ignorance-be-transformed-into-decisions-under-uncertainty">7. Can decisions under ignorance be transformed into decisions under uncertainty?</a></h2> <p>Can decisions under ignorance be transformed into decisions under uncertainty? This would simplify things greatly, because there is near-universal agreement that decisions under uncertainty should be handled by "maximizing expected utility" (see section 11 for clarifications), whereas decision theorists still debate what should be done about decisions under ignorance.</p> <p>For <a href="">Bayesians</a> (see section 10), <em>all</em> decisions under ignorance are transformed into decisions under uncertainty (<a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Winkler 2003</a>, ch. 5) when the decision maker assigns an "ignorance prior" to each outcome for which they don't know how to assign a probability. (Another way of saying this is to say that a Bayesian decision maker never faces a decision under ignorance, because a Bayesian must always assign a prior probability to events.) One must then consider how to assign priors, an important debate among Bayesians (see section 10).</p> <p>Many non-Bayesian decision theorists also think that decisions under ignorance can be transformed into decisions under uncertainty due to something called the <em>principle of insufficient reason</em>.
The principle of insufficient reason prescribes that if you have literally <em>no</em> reason to think that one state is more probable than another, then you should assign <em>equal</em> probability to both states.</p> <p>One objection to the principle of insufficient reason is that it is very sensitive to how states are individuated. Peterson (<a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">2009</a>, ch. 3) explains:</p> <blockquote> <p>Suppose that before embarking on a trip you consider whether to bring an umbrella or not. [But] you know nothing about the weather at your destination. If the formalization of the decision problem is taken to include only two states, viz. rain and no rain, [then by the principle of insufficient reason] the probability of each state will be 1/2. However, it seems that one might just as well go for a formalization that divides the space of possibilities into three states, viz. heavy rain, moderate rain, and no rain. If the principle of insufficient reason is applied to the latter set of states, their probabilities will be 1/3. In some cases this difference will affect our decisions. Hence, it seems that anyone advocating the principle of insufficient reason must [defend] the rather implausible hypothesis that there is only one correct way of making up the set of states.</p> </blockquote> <div class="figure"><img src="" alt="An objection to the principle of insufficient reason" /> <p class="caption">An objection to the principle of insufficient reason</p> </div> <p>Advocates of the principle of insufficient reason might respond that one must consider <em>symmetric</em> states. For example, if someone gives you a die with <em>n</em> sides and you have no reason to think the die is biased, then you should assign a probability of 1/<em>n</em> to each side.
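</p>

<p>Both halves of this dialectic are easy to make concrete. A minimal Python sketch (the state labels follow Peterson's umbrella example; the die case shows why symmetry helps):</p>

```python
from fractions import Fraction

def insufficient_reason(states):
    """Assign equal probability to every state in the chosen partition."""
    p = Fraction(1, len(states))
    return {s: p for s in states}

# The objection: the probability of *some* rain depends on how the
# states are individuated.
coarse = insufficient_reason(["rain", "no rain"])
fine = insufficient_reason(["heavy rain", "moderate rain", "no rain"])
print(sum(p for s, p in coarse.items() if s != "no rain"))  # 1/2
print(sum(p for s, p in fine.items() if s != "no rain"))    # 2/3

# The symmetry response: an n-sided die has n genuinely symmetric
# states, so every reasonable partition yields 1/n per side.
print(insufficient_reason(list(range(1, 7)))[1])  # 1/6
```

<p>For symmetric cases like the die, the uniform assignment is stable; the trouble arises only when no privileged partition exists.</p>

<p>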
But, Peterson notes:</p> <blockquote> <p>...not all events can be described in symmetric terms, at least not in a way that justifies the conclusion that they are equally probable. Whether Ann's marriage will be a happy one depends on her future emotional attitude toward her husband. According to one description, she could be either in love or not in love with him; then the probability of both states would be 1/2. According to another equally plausible description, she could either be deeply in love, a little bit in love or not at all in love with her husband; then the probability of each state would be 1/3.</p> <p>&nbsp;</p> </blockquote> <h2 id="how-should-i-make-decisions-under-uncertainty"><a href="#how-should-i-make-decisions-under-uncertainty">8. How should I make decisions under uncertainty?</a></h2> <p>A decision maker faces a "decision under uncertainty" when she (1) knows which acts she could choose and which outcomes they may result in, and she (2) assigns probabilities to the outcomes.</p> <p>Decision theorists generally agree that when facing a decision under uncertainty, it is rational to choose the act with the highest expected utility. This is the principle of <em>expected utility maximization</em> (EUM).</p> <p>Decision theorists offer two kinds of justifications for EUM. The first has to do with the law of large numbers (see section 8.1). The second has to do with the axiomatic approach (see sections 8.2 through 8.6).</p> <p>&nbsp;</p> <h3 id="the-law-of-large-numbers"><a href="#the-law-of-large-numbers">8.1. The law of large numbers</a></h3> <p>The "law of large numbers" states that <em>in the long run</em>, if you face the same decision problem again and again and again, and you always choose the act with the highest expected utility, then you will almost certainly be better off than if you choose any other act.</p> <p>There are two problems with using the law of large numbers to justify EUM.
The first problem is that the world is ever-changing, so we rarely if ever face the same decision problem "again and again and again." The law of large numbers says that if you face the same decision problem infinitely many times, then the probability that you could do better by not maximizing expected utility approaches zero. But you won't ever face the same decision problem infinitely many times! Why should you care what would happen if a certain condition held, if you know that condition will never hold?</p> <p>The second problem with using the law of large numbers to justify EUM has to do with a mathematical theorem known as <em>gambler's ruin</em>. Imagine that you and I flip a fair coin, and I pay you $1 every time it comes up heads and you pay me $1 every time it comes up tails. We both start with $100. If we flip the coin enough times, one of us will face a situation in which the sequence of heads or tails is longer than we can afford. If a long-enough sequence of heads comes up, I'll run out of $1 bills with which to pay you. If a long-enough sequence of tails comes up, you won't be able to pay me. So in this situation, the law of large numbers guarantees that you will be better off in the long run by maximizing expected utility only if you start the game with an infinite amount of money (so that you never go broke), which is an unrealistic assumption. (For technical convenience, assume utility increases linearly with money. But the basic point holds without this assumption.)</p> <p>&nbsp;</p> <h3 id="the-axiomatic-approach"><a href="#the-axiomatic-approach">8.2. 
The axiomatic approach</a></h3> <p>The other method for justifying EUM seeks to show that EUM can be derived from axioms that hold regardless of what happens in the long run.</p> <p>In this section we will review perhaps the most famous axiomatic approach, from <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Von Neumann and Morgenstern (1947)</a>. Other axiomatic approaches include <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Savage (1954)</a>, <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Jeffrey (1983)</a>, and <a href="">Anscombe &amp; Aumann (1963)</a>.</p> <p>&nbsp;</p> <h3 id="the-von-neumann-morgenstern-utility-theorem"><a href="#the-von-neumann-morgenstern-utility-theorem">8.3. The Von Neumann-Morgenstern utility theorem</a></h3> <p>The first decision theory axiomatization appeared in an appendix to the second edition of Von Neumann &amp; Morgenstern's <em><a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Theory of Games and Economic Behavior</a></em> (1947). An important point to note up front is that, in this axiomatization, Von Neumann and Morgenstern take the options that the agent chooses between to not be acts, as we&rsquo;ve defined them, but lotteries (where a lottery is a set of outcomes, each paired with a probability). As such, while discussing their axiomatization, we will talk of lotteries. (Despite making this distinction, acts and lotteries are closely related. 
Under the conditions of uncertainty that we are considering here, each act will be associated with some lottery and so preferences over lotteries could be used to determine preferences over acts, if so desired).</p> <p>The key feature of the Von Neumann and Morgenstern axiomatization is a proof that if a decision maker states her preferences over a set of lotteries, and if her preferences conform to a set of intuitive structural constraints (axioms), then we can construct a utility function (on an interval scale) from her preferences over lotteries and show that she acts <em>as if</em> she maximizes expected utility with respect to that utility function.</p> <p>What are the axioms to which an agent's preferences over lotteries must conform? There are four of them.</p> <ol style="list-style-type: decimal"> <li> <p>The <em>completeness axiom</em> states that the agent must <em>bother to state a preference</em> for each pair of lotteries. That is, the agent must prefer A to B, or prefer B to A, or be indifferent between the two.</p> </li> <li> <p>The <em>transitivity axiom</em> states that if the agent prefers A to B and B to C, she must also prefer A to C.</p> </li> <li> <p>The <em>independence axiom</em> states that, for example, if an agent prefers an apple to an orange, then she must also prefer the lottery [55% chance she gets an apple, otherwise she gets cholera] over the lottery [55% chance she gets an orange, otherwise she gets cholera]. More generally, this axiom holds that a preference must hold independently of the possibility of another outcome (e.g. cholera).</p> </li> <li> <p>The <em>continuity axiom</em> holds that if the agent prefers A to B to C, then there exists a unique <em>p</em> (probability) such that the agent is indifferent between [<em>p</em>(A) + (1 - <em>p</em>)(C)] and [outcome B with certainty].</p> </li> </ol> <p>The continuity axiom requires <a href="">more explanation</a>. Suppose that A = $1 million, B = $0, and C = Death. 
If <em>p</em> = 0.5, then the agent's two lotteries under consideration for the moment are:</p> <ol style="list-style-type: decimal"> <li>(0.5)($1M) + (1 - 0.5)(Death) [win $1M with 50% probability, die with 50% probability]</li> <li>(1)($0) [win $0 with certainty]</li> </ol> <p>Most people would <em>not</em> be indifferent between $0 with certainty and [50% chance of $1M, 50% chance of Death] &mdash; the risk of Death is too high! But if you have continuous preferences, there is <em>some</em> probability <em>p</em> for which you'd be indifferent between these two lotteries. Perhaps <em>p</em> is very, very high:</p> <ol style="list-style-type: decimal"> <li>(0.999999)($1M) + (1 - 0.999999)(Death) [win $1M with 99.9999% probability, die with 0.0001% probability]</li> <li>(1)($0) [win $0 with certainty]</li> </ol> <p>Perhaps now you'd be indifferent between lottery 1 and lottery 2. Or maybe you'd be <em>more</em> willing to risk Death for the chance of winning $1M, in which case the <em>p</em> for which you'd be indifferent between lotteries 1 and 2 is lower than 0.999999. As long as there is <em>some</em> <em>p</em> at which you'd be indifferent between lotteries 1 and 2, your preferences are "continuous."</p> <p>Given this setup, Von Neumann and Morgenstern proved their theorem, which states that if the agent's preferences over lotteries obey their axioms, then:</p> <ul> <li>The agent's preferences can be represented by a utility function that assigns higher utility to preferred lotteries.</li> <li>The agent acts in accordance with the principle of maximizing expected utility.</li> <li>All utility functions satisfying the above two conditions are "positive linear transformations" of each other. (Without going into the details: this is why VNM-utility is measured on an interval scale.)</li> </ul> <h3 id="vnm-utility-theory-and-rationality"><br /></h3> <h3><a href="#vnm-utility-theory-and-rationality">8.4.
VNM utility theory and rationality</a></h3> <p>An agent which conforms to the VNM axioms is sometimes said to be "VNM-rational." But why should "VNM-rationality" constitute our notion of <em>rationality in general</em>? How could VNM's result justify the claim that a rational agent maximizes expected utility when facing a decision under uncertainty? The argument goes like this:</p> <ol style="list-style-type: decimal"> <li>If an agent chooses lotteries which it prefers (in decisions under uncertainty), and if its preferences conform to the VNM axioms, then it is rational. Otherwise, it is irrational.</li> <li>If an agent chooses lotteries which it prefers (in decisions under uncertainty), and if its preferences conform to the VNM axioms, then it maximizes expected utility.</li> <li>Therefore, a rational agent maximizes expected utility (in decisions under uncertainty).</li> </ol> <p>Von Neumann and Morgenstern proved premise 2, and the conclusion follows from premises 1 and 2. But why accept premise 1?</p> <p>Few people deny that it would be irrational for an agent to choose a lottery which it does not prefer. But why is it irrational for an agent's preferences to violate the VNM axioms? I will save that discussion for section 8.6.</p> <p>&nbsp;</p> <h3 id="objections-to-vnm-rationality"><a href="#objections-to-vnm-rationality">8.5. Objections to VNM-rationality</a></h3> <p>Several objections have been raised to Von Neumann and Morgenstern's result:</p> <ol style="list-style-type: decimal"> <li> <p><em>The VNM axioms are too strong</em>. Some have argued that the VNM axioms are not self-evidently true. See section 8.6.</p> </li> <li> <p><em>The VNM system offers no action guidance</em>. A VNM-rational decision maker cannot use VNM utility theory for action guidance, because she must state her preferences over lotteries at the start. But if an agent can state her preferences over lotteries, then she already knows which lottery to choose.
(For more on this, see section 9.)</p> </li> <li> <p><em>In the VNM system, utility is defined via preferences over lotteries rather than preferences over outcomes</em>. To many, it seems odd to <em>define</em> utility with respect to preferences over lotteries. Many would argue that utility should be defined in relation to preferences over <em>outcomes</em> or <em>world-states</em>, and that's not what the VNM system does. (Also see section 9.)</p> </li> </ol> <h3 id="should-we-accept-the-vnm-axioms"><br /></h3> <h3><a href="#should-we-accept-the-vnm-axioms">8.6. Should we accept the VNM axioms?</a></h3> <p>The VNM preference axioms define what it is for an agent to be VNM-rational. But why should we accept these axioms? Usually, it is argued that each of the axioms is <em>pragmatically justified</em> because an agent which violates the axioms can face situations in which they are guaranteed to end up worse off (from <em>their own</em> perspective).</p> <p>In sections 8.6.1 and 8.6.2 I go into some detail about pragmatic justifications offered for the transitivity and completeness axioms. For more detail, including arguments about the justification of the other axioms, see <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Peterson (2009, ch. 8)</a> and <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Anand (1993)</a>.</p> <p>&nbsp;</p> <h4 id="the-transitivity-axiom"><a href="#the-transitivity-axiom">8.6.1. The transitivity axiom</a></h4> <p>Consider the <em>money-pump argument</em> in favor of the transitivity axiom ("if the agent prefers A to B and B to C, she must also prefer A to C").</p> <blockquote> <p>Imagine that a friend offers to give you exactly one of her three... novels, x or y or z... [and] that your preference ordering over the three novels is... [that] you prefer x to y, and y to z, and z to x...
[That is, your preferences are <em>cyclic</em>, which is a type of <em>intransitive</em> preference relation.] Now suppose that you are in possession of z, and that you are invited to swap z for y. Since you prefer y to z, rationality obliges you to swap. So you swap, and temporarily get y. You are then invited to swap y for x, which you do, since you prefer x to y. Finally, you are offered to <em>pay a small amount</em>, say one cent, for swapping x for z. Since z is strictly [preferred to] x, even after you have paid the fee for swapping, rationality tells you that you should accept the offer. This means that you end up where you started, the only difference being that you now have one cent less. This procedure is thereafter iterated over and over again. After a billion cycles you have lost ten million dollars, for which you have got nothing in return. (<a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Peterson 2009</a>, ch. 8)</p> </blockquote> <div class="figure"><img src="" alt="An example of a money-pump argument" /> <p class="caption">An example of a money-pump argument</p> </div> <p>Similar arguments (e.g. <a href="">Gustafsson 2010</a>) aim to show that the other kind of intransitive preferences (acyclic preferences) are irrational, too.</p> <p>(Of course, pragmatic arguments need not be framed in monetary terms. We could just as well construct an argument showing that an agent with intransitive preferences can be "pumped" of all their happiness, or all their moral virtue, or all their Twinkies.)</p> <p>&nbsp;</p> <h4 id="the-completeness-axiom"><a href="#the-completeness-axiom">8.6.2. The completeness axiom</a></h4> <p>The completeness axiom ("the agent must prefer A to B, or prefer B to A, or be indifferent between the two") is often attacked by saying that some goods or outcomes are incommensurable &mdash; that is, they cannot be compared. 
For example, must a rational agent be able to state a preference (or indifference) between money and human welfare?</p> <p>Perhaps the completeness axiom can be justified with a pragmatic argument. If you think it is rationally permissible to swap between two incommensurable goods, then one can construct a money pump argument in favor of the completeness axiom. But if you think it is <em>not</em> rational to swap between incommensurable goods, then one cannot construct a money pump argument for the completeness axiom. (In fact, even if it is rational to swap between incommensurable goods, <a href="">Mandler (2005)</a> has demonstrated that an agent that allows their current choices to depend on the previous ones can avoid being money pumped.)</p> <p>And in fact, there is a popular argument <em>against</em> the completeness axiom: the "small improvement argument." For details, see <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Chang (1997)</a> and <a href="">Espinoza (2007)</a>.</p> <p>Note that in <a href="">revealed preference theory</a>, according to which preferences are revealed through choice behavior, there is no room for incommensurable preferences because every choice always reveals a preference relation of "better than," "worse than," or "equally as good as."</p> <p>Another proposal for dealing with the apparent incommensurability of some goods (such as money and human welfare) is the <em>multi-attribute approach</em>:</p> <blockquote> <p>In a multi-attribute approach, each type of attribute is measured in the unit deemed to be most suitable for that attribute. Perhaps money is the right unit to use for measuring financial costs, whereas the number of lives saved is the right unit to use for measuring human welfare. The total value of an alternative is thereafter determined by aggregating the attributes, e.g.
money and lives, into an overall ranking of available alternatives...</p> </blockquote> <blockquote> <p>Several criteria have been proposed for choosing among alternatives with multiple attributes... [For example,] additive criteria assign weights to each attribute, and rank alternatives according to the weighted sum calculated by multiplying the weight of each attribute with its value... [But while] it is perhaps contentious to measure the utility of very different objects on a common scale, [it] seems equally contentious to assign numerical weights to attributes as suggested here....</p> </blockquote> <blockquote> <p>[Now let us] consider a very general objection to multi-attribute approaches. According to this objection, there exist several equally plausible but different ways of constructing the list of attributes. Sometimes the outcome of the decision process depends on which set of attributes is chosen. (<a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Peterson 2009</a>, ch. 8)</p> </blockquote> <p>For more on the multi-attribute approach, see <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Keeney &amp; Raiffa (1993)</a>.</p> <p>&nbsp;</p> <h4 id="the-allais-paradox"><a href="#the-allais-paradox">8.6.3. The Allais paradox</a></h4> <p>Having considered the transitivity and completeness axioms, we can now turn to independence (a preference holds independently of considerations of other possible outcomes). Do we have any reason to reject this axiom?
Here&rsquo;s one reason to think we might: in a case known as the <em>Allais paradox</em> (<a href="">Allais 1953</a>), it may seem reasonable to act in a way that contradicts independence.</p> <p>The Allais paradox asks us to consider two decisions (this version of the paradox is based on <a href="/lw/my/the_allais_paradox/">Yudkowsky (2008)</a>). The first decision involves the choice between:</p> <p>(1A) A certain $24,000; and (1B) A 33/34 chance of $27,000 and a 1/34 chance of nothing.</p> <p>The second involves the choice between:</p> <p>(2A) A 34% chance of $24,000 and a 66% chance of nothing; and (2B) A 33% chance of $27,000 and a 67% chance of nothing.</p> <p>Experiments have shown that many people prefer (1A) to (1B) and (2B) to (2A). However, these preferences contradict independence. Option 2A is the same as [a 34% chance of option 1A and a 66% chance of nothing] while 2B is the same as [a 34% chance of option 1B and a 66% chance of nothing]. So independence implies that anyone who prefers (1A) to (1B) must also prefer (2A) to (2B).</p> <p>When this result was first uncovered, it was presented as evidence against the independence axiom. However, while the Allais paradox clearly reveals that independence fails as a <em>descriptive</em> account of choice, it&rsquo;s less clear what it implies about the normative account of rational choice that we are discussing in this document. As noted in <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Peterson (2009, ch.
4)</a>, however:</p> <blockquote> <p>[S]ince many people who have thought very hard about this example still feel that it would be rational to stick to the problematic preference pattern described above, there seems to be something wrong with the expected utility principle.</p> </blockquote> <p>However, Peterson then goes on to note that many people, like the statistician Leonard Savage, argue that it is people&rsquo;s preferences in the Allais paradox that are in error, rather than the independence axiom. If so, then the paradox seems to reveal the danger of relying too strongly on intuition to determine the form that should be taken by normative theories of rational choice.</p> <p>&nbsp;</p> <h4 id="the-ellsberg-paradox"><a href="#the-ellsberg-paradox">8.6.4. The Ellsberg paradox</a></h4> <p>The Allais paradox is far from the only case where people fail to act in accordance with EUM. Another well-known case is the Ellsberg paradox (the following is taken from <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Resnik (1987)</a>):</p> <blockquote> <p>An urn contains ninety uniformly sized balls, which are randomly distributed. Thirty of the balls are yellow, the remaining sixty are red or blue. We are not told how many red (blue) balls are in the urn &ndash; except that they number anywhere from zero to sixty. Now consider the following pair of situations. In each situation a ball will be drawn and we will be offered a bet on its color. In situation A we will choose between betting that it is yellow or that it is red. In situation B we will choose between betting that it is red or blue or that it is yellow or blue.</p> </blockquote> <p>If we guess the correct color, we will receive a payout of $100. In the Ellsberg paradox, many people bet <em>yellow</em> in situation A and <em>red or blue</em> in situation B.
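</p>

<p>The tension in these two bets is easy to verify numerically. A small sketch (the quantities come from the quoted setup; the candidate blue-ball counts are arbitrary):</p>

```python
from fractions import Fraction

# 90 balls: 30 yellow; the other 60 are split between red and blue in an
# unknown proportion. Expected payoff of each $100 bet, given a guessed
# number of blue balls.
def expected_payoffs(n_blue):
    p_yellow = Fraction(30, 90)
    p_blue = Fraction(n_blue, 90)
    p_red = Fraction(60 - n_blue, 90)
    return {
        "A: yellow": 100 * p_yellow,
        "A: red": 100 * p_red,
        "B: red or blue": 100 * (p_red + p_blue),
        "B: yellow or blue": 100 * (p_yellow + p_blue),
    }

# For each guess, check whether each of the two popular bets wins.
for n_blue in (10, 30, 50):
    ep = expected_payoffs(n_blue)
    print(n_blue,
          ep["A: yellow"] > ep["A: red"],
          ep["B: red or blue"] > ep["B: yellow or blue"])
```

<p>No assumed number of blue balls makes <em>both</em> popular bets come out strictly better at once.</p>

<p>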
Further, many people make these decisions not because they are indifferent in both situations, and so happy to choose either way, but rather because they have a strict preference to choose in this manner.</p> <div class="figure"><img src="" alt="The Ellsberg paradox" /> <p class="caption">The Ellsberg paradox</p> </div> <p>However, such behavior cannot be in accordance with EUM. In order for EUM to endorse a strict preference for choosing <em>yellow</em> in situation A, the agent would have to assign a probability of more than 1/3 to the ball selected being blue. On the other hand, in order for EUM to endorse a strict preference for choosing <em>red or blue</em> in situation B, the agent would have to assign a probability of less than 1/3 to the selected ball being blue. As such, these decisions can&rsquo;t be jointly endorsed by an agent following EUM.</p> <p>Those who deny that decision making under ignorance can be transformed into decision making under uncertainty have an easy response to the Ellsberg paradox: as this case involves deciding under a situation of ignorance, it is irrelevant whether people&rsquo;s decisions violate EUM in this case, as EUM is not applicable to such situations.</p> <p>Those who believe that EUM provides a suitable standard for choice in such situations, however, need to find some other way of responding to the paradox. As with the Allais paradox, there is some disagreement about how best to do so. Once again, however, many people, including Leonard Savage, argue that EUM reaches the right decision in this case. It is our intuitions that are flawed (see again <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Resnik (1987)</a> for a nice summary of Savage&rsquo;s argument to this conclusion).</p> <p>&nbsp;</p> <h4 id="the-st-petersburg-paradox"><a href="#the-st-petersburg-paradox">8.6.5.
The St Petersburg paradox</a></h4> <p>Another objection to the VNM approach (and to expected utility approaches generally), the <a href="">St. Petersburg paradox</a>, draws on the possibility of infinite utilities. The St. Petersburg paradox is based around a game where a fair coin is tossed until it lands heads up. At this point, the agent receives a prize worth 2<sup>n</sup> utility, where <em>n</em> is equal to the number of times the coin was tossed during the game. The so-called paradox occurs because the expected utility of choosing to play this game is infinite and so, according to a standard expected utility approach, the agent should be willing to pay any finite amount to play the game. However, this seems unreasonable. Instead, it seems that the agent should only be willing to pay a relatively small amount to do so. As such, it seems that the expected utility approach gets something wrong.</p> <p>Various responses have been suggested. Most obviously, we could say that the paradox does not apply to VNM agents, since the VNM theorem assigns real numbers to all lotteries, and infinity is not a real number. But it's unclear whether this escapes the problem. After all, at it's core, the St. Petersburg paradox is not about infinite utilities but rather about cases where expected utility approaches seem to overvalue some choice, and such cases seem to exist even in finite cases. For example, if we let <em>L</em> be a finite limit on utility we could consider the following scenario (from <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Peterson, 2009, p. 85</a>):</p> <blockquote> <p>A fair coin is tossed until it lands heads up. 
The player thereafter receives a prize worth min {2<sup>n</sup> &middot; 10<sup>-100</sup>, L} units of utility, where <em>n</em> is the number of times the coin was tossed.</p> </blockquote> <p>In this case, even if an extremely low value is set for <em>L</em>, it seems that paying this amount to play the game is unreasonable. After all, as Peterson notes, about nine times out of ten an agent that plays this game will win no more than 8 &middot; 10<sup>-100</sup> utility. If paying 1 utility is, in fact, unreasonable in this case, then simply limiting an agent's utility to some finite value doesn't provide a defence of expected utility approaches. (Other problems abound. See <a href="/lw/kd/pascals_mugging_tiny_probabilities_of_vast/">Yudkowsky, 2007</a> for an interesting finite problem and <a href="">Nover &amp; Hajek, 2004</a> for a particularly perplexing problem with links to the St Petersburg paradox.)</p> <p>As it stands, there is no agreement about precisely what the St Petersburg paradox reveals. Some people accept one of the various resolutions of the case and so find the paradox unconcerning. Others think the paradox reveals a serious problem for expected utility theories. Still others think the paradox is unresolved but don't think that we should respond by abandoning expected utility theory.</p> <p>&nbsp;</p> <h2 id="does-axiomatic-decision-theory-offer-any-action-guidance"><a href="#does-axiomatic-decision-theory-offer-any-action-guidance">9. Does axiomatic decision theory offer any action guidance?</a></h2> <p>For the decision theories listed in section 8.2, it's often claimed the answer is "no." To explain this, I must first examine some differences between <em>direct</em> and <em>indirect</em> approaches to axiomatic decision theory.</p> <p><a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Peterson (2009, ch. 
4)</a> explains:</p> <blockquote> <p>In the indirect approach, which is the dominant approach, the decision maker does not prefer a risky act [or lottery] to another <em>because</em> the expected utility of the former exceeds that of the latter. Instead, the decision maker is asked to state a set of preferences over a set of risky acts... Then, if the set of preferences stated by the decision maker is consistent with a small number of structural constraints (axioms), it can be shown that her decisions can be described <em>as if</em> she were choosing what to do by assigning numerical probabilities and utilities to outcomes and then maximising expected utility...</p> </blockquote> <blockquote> <p>[In contrast] the direct approach seeks to generate preferences over acts from probabilities and utilities <em>directly</em> assigned to outcomes. In contrast to the indirect approach, it is not assumed that the decision maker has access to a set of preferences over acts before he starts to deliberate.</p> </blockquote> <p>The axiomatic decision theories listed in section 8.2 all follow the indirect approach. These theories, it might be said, cannot offer any action guidance because they require an agent to state its preferences over acts "up front." But an agent that states its preferences over acts already knows which act it prefers, so the decision theory can't offer any action guidance not already present in the agent's own stated preferences over acts.</p> <p><a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Peterson (2009, ch. 10)</a> gives a practical example:</p> <blockquote> <p>For example, a forty-year-old woman seeking advice about whether to, say, divorce her husband, is likely to get very different answers from the [two approaches]. 
The [indirect approach] will advise the woman to first figure out what her preferences are over a very large set of risky acts, including the one she is thinking about performing, and then just make sure that all preferences are consistent with certain structural requirements. Then, as long as none of the structural requirements is violated, the woman is free to do whatever she likes, no matter what her beliefs and desires actually are... The [direct approach] will [instead] advise the woman to first assign numerical utilities and probabilities to her desires and beliefs, and then aggregate them into a decision by applying the principle of maximizing expected utility.</p> </blockquote> <p>Thus, it seems only the direct approach offers an agent any action guidance. But the direct approach is very recent (<a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Peterson 2008</a>; <a href="">Cozic 2011</a>), and only time will show whether it can stand up to professional criticism.</p> <p>Warning: Peterson's (<a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">2008</a>) direct approach is confusingly called "non-Bayesian decision theory" despite assuming Bayesian probability theory.</p> <p>For other attempts to pull action guidance from normative decision theory, see <a href="/lw/fu1/why_you_must_maximize_expected_utility/">Fallenstein (2012)</a> and <a href="/lw/gap/a_fungibility_theorem/">Stiennon (2013)</a>.</p> <p>&nbsp;</p> <h2 id="how-does-probability-theory-play-a-role-in-decision-theory"><a href="#how-does-probability-theory-play-a-role-in-decision-theory">10. How does probability theory play a role in decision theory?</a></h2> <p>In order to calculate the expected utility of an act (or lottery), it is necessary to determine a probability for each outcome. 
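</p>

<p>The calculation itself is simple to state: weight each outcome's utility by its probability and sum. A minimal sketch in Python, where the gamble and all of its numbers are invented purely for illustration:</p>

```python
# Expected utility of a lottery: the probability-weighted sum of the
# utilities of its outcomes. The gamble below is purely illustrative.

def expected_utility(lottery):
    """lottery: list of (probability, utility) pairs, one per outcome."""
    total_prob = sum(p for p, _ in lottery)
    assert abs(total_prob - 1.0) < 1e-9, "outcome probabilities must sum to 1"
    return sum(p * u for p, u in lottery)

# A fair-coin gamble: win 10 units of utility on heads, lose 2 on tails.
gamble = [(0.5, 10.0), (0.5, -2.0)]
print(expected_utility(gamble))  # 4.0
```

<p>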
In this section, I will explore some of the details of probability theory and its relationship to decision theory.</p> <p>For further introductory material to probability theory, see <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Howson &amp; Urbach (2005)</a>, <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Grimmett &amp; Stirzaker (2001)</a>, and <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Koller &amp; Friedman (2009)</a>. This section draws heavily on <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Peterson (2009, chs. 6 &amp; 7)</a>, which provides a very clear introduction to probability in the context of decision theory.</p> <p>&nbsp;</p> <h3 id="the-basics-of-probability-theory"><a href="#the-basics-of-probability-theory">10.1. The basics of probability theory</a></h3> <p>Intuitively, a probability is a number between 0 and 1 that labels how likely an event is to occur. If an event has probability 0 then it is impossible, and if it has probability 1 then it is certain to occur. For probabilities between these values, the higher the number, the more probable the event.</p> <p>As with EUM, probability theory can be derived from a small number of simple axioms. In the probability case, there are three of these, which are named the Kolmogorov axioms after the mathematician Andrey Kolmogorov. The first of these states that probabilities are real numbers between 0 and 1. The second, that if a set of events are mutually exclusive and exhaustive then their probabilities should sum to 1. 
The third, that if two events are mutually exclusive, then the probability that one or the other of them will occur is equal to the sum of their individual probabilities.</p> <p>From these three axioms, the remainder of probability theory can be derived. In the remainder of this section, I will explore some aspects of this broader theory.</p> <p>&nbsp;</p> <h3 id="bayes-theorem-for-updating-probabilities"><a href="#bayes-theorem-for-updating-probabilities">10.2. Bayes theorem for updating probabilities</a></h3> <p>From the perspective of decision theory, one particularly important aspect of probability theory is the idea of a conditional probability. These represent how probable something is given a piece of information. So, for example, a conditional probability could represent how likely it is that it will be raining, conditioning on the fact that the weather forecaster predicted rain. A powerful technique for calculating conditional probabilities is Bayes theorem (see <a href="">Yudkowsky, 2003</a> for a detailed introduction). This formula states that:</p> <div class="figure"><img src="" alt="P(A|B)=(P(B|A)P(A))/P(B)" /> <p class="caption">P(A|B)=(P(B|A)P(A))/P(B)</p> </div> <p>Bayes theorem is used to calculate the probability of some event, A, given some evidence, B. As such, this formula can be used to <em>update</em> probabilities based on new evidence. So if you are trying to predict the probability that it will rain tomorrow, and someone tells you that the weather forecaster predicted rain, then this formula tells you how to calculate a new probability of rain from your existing information. 
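</p>

<p>The weather-forecast update just described can be sketched numerically; all of the probabilities below are invented purely for illustration:</p>

```python
# Bayes theorem: P(A|B) = P(B|A) * P(A) / P(B), where by the law of
# total probability P(B) = P(B|A)*P(A) + P(B|not-A)*P(not-A).
# All numbers are hypothetical, chosen only to illustrate the update.

p_rain = 0.2                 # prior probability of rain tomorrow
p_forecast_given_rain = 0.9  # forecaster predicts rain when it will rain
p_forecast_given_dry = 0.3   # forecaster predicts rain when it won't

p_forecast = (p_forecast_given_rain * p_rain
              + p_forecast_given_dry * (1 - p_rain))
p_rain_given_forecast = p_forecast_given_rain * p_rain / p_forecast
print(round(p_rain_given_forecast, 3))  # 0.429
```

<p>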
The initial probability in such cases (before the new information is taken into account) is called the <em>prior probability</em>, and the result of applying Bayes theorem is a new, <em>posterior probability</em>.</p> <div class="figure"><img src="" alt="Using Bayes theorem to update probabilities based on the evidence provided by a weather forecast" /> <p class="caption">Using Bayes theorem to update probabilities based on the evidence provided by a weather forecast</p> </div> <p>Bayes theorem can be seen as solving the problem of how to update prior probabilities based on new information. However, it leaves open the question of how to determine the prior probability in the first place. In some cases, there will be no obvious way to do so. One solution to this problem suggests that any reasonable prior can be selected. Given enough evidence, repeated applications of Bayes theorem will lead this prior probability to be updated to much the same posterior probability, even for people with widely different initial priors. As such, the initially selected prior is less crucial than it may at first seem.</p> <p>&nbsp;</p> <h3 id="how-should-probabilities-be-interpreted"><a href="#how-should-probabilities-be-interpreted">10.3. How should probabilities be interpreted?</a></h3> <p>There are two main views about what probabilities mean: objectivism and subjectivism. Loosely speaking, the objectivist holds that probabilities tell us something about the external world, while the subjectivist holds that they tell us something about our beliefs. Most decision theorists hold a subjectivist view about probability. According to this sort of view, probabilities represent subjective degrees of belief. 
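</p>

<p>On this view, two agents in the same circumstance can coherently hold different degrees of belief, though, as the previous section noted, repeated conditioning on shared evidence tends to bring them into agreement. A short simulation of that convergence; the hypotheses, priors, and data are all invented for illustration:</p>

```python
# Two agents with very different priors over a coin's bias update on the
# same flips; their posteriors converge. Numbers are purely illustrative.

def update(prior, likelihoods):
    """One Bayes update over a discrete set of hypotheses."""
    posterior = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

biases = [0.2, 0.5, 0.8]      # hypotheses about P(heads) for the coin
agent_a = [0.8, 0.1, 0.1]     # A thinks the coin is probably tails-biased
agent_b = [0.1, 0.1, 0.8]     # B thinks it is probably heads-biased

flips = ["H", "T"] * 50       # 100 shared observations: 50 heads, 50 tails
for flip in flips:
    likelihoods = [b if flip == "H" else 1 - b for b in biases]
    agent_a = update(agent_a, likelihoods)
    agent_b = update(agent_b, likelihoods)

# Despite opposite priors, both posteriors now concentrate on bias 0.5.
print([round(p, 3) for p in agent_a])
print([round(p, 3) for p in agent_b])
```

<p>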
So to say the probability of rain is 0.8 is to say that the agent under consideration has a high degree of belief that it will rain (see <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Jaynes, 2003</a> for a defense of this view). Note that, according to this view, another agent in the same circumstance could assign a different probability that it will rain.</p> <p>&nbsp;</p> <h4 id="why-should-degrees-of-belief-following-the-laws-of-probability"><a href="#why-should-degrees-of-belief-following-the-laws-of-probability">10.3.1. Why should degrees of belief follow the laws of probability?</a></h4> <p>One question that might be raised against the subjective account of probability is why, on this account, our degrees of belief should satisfy the Kolmogorov axioms. For example, why should our subjective degrees of belief in mutually exclusive, exhaustive events add to 1? One answer to this question shows that agents whose degrees of belief don&rsquo;t satisfy these axioms will be subject to Dutch Book bets. These are bets where the agent will inevitably lose money. <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Peterson (2009, ch. 7)</a> explains:</p> <blockquote> <p>Suppose, for instance, that you believe to degree 0.55 that at least one person from India will win a gold medal in the next Olympic Games (event G), and that your subjective degree of belief is 0.52 that no Indian will win a gold medal in the next Olympic Games (event &not;G). Also suppose that a cunning bookie offers you a bet on both of these events. The bookie promises to pay you $1 for each event that actually takes place. Now, since your subjective degree of belief that G will occur is 0.55 it would be rational to pay up to $1&middot;0.55 = $0.55 for entering this bet. 
Furthermore, since your degree of belief in &not;G is 0.52 you should be willing to pay up to $0.52 for entering the second bet, since $1&middot;0.52 = $0.52. However, by now you have paid $1.07 for taking on two bets that are certain to give you a payoff of $1 <em>no matter what happens</em>...Certainly, this must be irrational. Furthermore, the reason why this is irrational is that your subjective degrees of belief violate the probability calculus.</p> </blockquote> <div class="figure"><img src="" alt="A Dutch Book argument" /> <p class="caption">A Dutch Book argument</p> </div> <p>It can be proven that an agent is subject to Dutch Book bets if, and only if, their degrees of belief violate the axioms of probability. This provides an argument for why degrees of belief should satisfy these axioms.</p> <p>&nbsp;</p> <h4 id="measuring-subjective-probabilities"><a href="#measuring-subjective-probabilities">10.3.2. Measuring subjective probabilities</a></h4> <p>Another challenge raised by the subjective view is how we can measure probabilities. If probabilities represent subjective degrees of belief, there doesn&rsquo;t seem to be an easy way to determine them from observations of the world. However, a number of responses to this problem have been advanced, one of which is explained succinctly by <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Peterson (2009, ch. 7)</a>:</p> <blockquote> <p>The main innovations presented by... Savage can be characterised as systematic procedures for linking probability... to claims about objectively observable behavior, such as preference revealed in choice behavior. Imagine, for instance, that we wish to measure Caroline's subjective probability that the coin she is holding in her hand will land heads up the next time it is tossed. 
First, we ask her which of the following very generous options she would prefer.</p> </blockquote> <blockquote> <p>A: "If the coin lands heads up you win a sports car; otherwise you win nothing."</p> </blockquote> <blockquote> <p>B: "If the coin <em>does not</em> land heads up you win a sports car; otherwise you win nothing."</p> </blockquote> <blockquote> <p>Suppose Caroline prefers A to B. We can then safely conclude that she thinks it is <em>more probable</em> that the coin will land heads up rather than not. This follows from the assumption that Caroline prefers to win a sports car rather than nothing, and that her preference between uncertain prospects is entirely determined by her beliefs and desires with respect to her prospects of winning the sports car...</p> </blockquote> <blockquote> <p>Next, we need to generalise the measurement procedure outlined above such that it allows us to always represent Caroline's degrees of belief with precise numerical probabilities. To do this, we need to ask Caroline to state preferences over a <em>much larger</em> set of options and then <em>reason backwards</em>... Suppose, for instance, that Caroline wishes to measure her subjective probability that her car worth $20,000 will be stolen within one year. If she considers $1,000 to be... the highest price she is prepared to pay for a gamble in which she gets $20,000 if the event S: "The car stolen within a year" takes place, and nothing otherwise, then Caroline's subjective probability for S is 1,000/20,000 = 0.05, given that she forms her preferences in accordance with the principle of maximising expected monetary value...</p> </blockquote> <blockquote> <p>The problem with this method is that very few people form their preferences in accordance with the principle of maximising expected monetary value. Most people have a decreasing marginal utility for money...</p> </blockquote> <blockquote> <p>Fortunately, there is a clever solution to [this problem]. 
The basic idea is to impose a number of structural conditions on preferences over uncertain options [e.g. the transitivity axiom]. Then, the subjective probability function is established by reasoning backwards while taking the structural axioms into account: Since the decision maker preferred some uncertain options to others, and her preferences... satisfy a number of structure axioms, the decision maker behaves <em>as if</em> she were forming her preferences over uncertain options by first assigning subjective probabilities and utilities to each option and thereafter maximising expected utility.</p> </blockquote> <blockquote> <p>A peculiar feature of this approach is, thus, that probabilities (and utilities) are derived from 'within' the theory. The decision maker does not prefer an uncertain option to another <em>because</em> she judges the subjective probabilities (and utilities) of the outcomes to be more favourable than those of another. Instead, the... structure of the decision maker's preferences over uncertain options logically implies that they can be described <em>as if</em> her choices were governed by a subjective probability function and a utility function...</p> </blockquote> <blockquote> <p>...Savage's approach [seeks] to explicate subjective interpretations of the probability axioms by making certain claims about preferences over... uncertain options. But... why on earth should a theory of subjective probability involve assumptions about preferences, given that preferences and beliefs are separate entities? Contrary to what is claimed by [Savage and others], emotionally inert decision makers failing to muster any preferences at all... 
could certainly hold partial beliefs.</p> </blockquote> <p>Other theorists, for example <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">DeGroot (1970)</a>, propose other approaches:</p> <blockquote> <p>DeGroot's basic assumption is that decision makers can make <em>qualitative</em> comparisons between pairs of events, and judge which one they think is most likely to occur. For example, he assumes that one can judge whether it is <em>more</em>, <em>less</em>, or <em>equally</em> likely, according to one's own beliefs, that it will rain today in Cambridge than in Cairo. DeGroot then shows that if the agent's qualitative judgments are sufficiently fine-grained and satisfy a number of structural axioms, then [they can be described by a probability distribution]. So in DeGroot's... theory, the probability function is obtained by fine-tuning qualitative data, thereby making them quantitative.</p> </blockquote> <h2 id="what-about-newcombs-problem-and-alternative-decision-algorithms"><a href="#what-about-newcombs-problem-and-alternative-decision-algorithms">11. What about "Newcomb's problem" and alternative decision algorithms?</a></h2> <p>Saying that a rational agent "maximizes expected utility" is, unfortunately, not specific enough. 
There are a variety of decision algorithms which aim to maximize expected utility, and they give <em>different answers</em> to some decision problems, for example "Newcomb's problem."</p> <p>In this section, we explain these decision algorithms and show how they perform on Newcomb's problem and related "Newcomblike" problems.</p> <p>General sources on this topic include: <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Campbell &amp; Sowden (1985)</a>, <a href="">Ledwig (2000)</a>, <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Joyce (1999)</a>, and <a href="">Yudkowsky (2010)</a>. <a href="">Moertelmaier (2013)</a> discusses Newcomblike problems in the context of the agent-environment framework.</p> <p>&nbsp;</p> <h3 id="newcomblike-problems-and-two-decision-algorithms"><a href="#newcomblike-problems-and-two-decision-algorithms">11.1. Newcomblike problems and two decision algorithms</a></h3> <p>I'll begin with an exposition of several Newcomblike problems, so that I can refer to them in later sections. I'll also introduce our first two decision algorithms, so that I can show how one's choice of decision algorithm affects an agent's outcomes on these problems.</p> <p>&nbsp;</p> <h4 id="newcombs-problem"><a href="#newcombs-problem">11.1.1. Newcomb's Problem</a></h4> <p>Newcomb's problem was formulated by the physicist <a href="">William Newcomb</a> but first published in <a href="">Nozick (1969)</a>. Below I present a version of it inspired by <a href="">Yudkowsky (2010)</a>.</p> <p>A superintelligent machine named Omega visits Earth from another galaxy and shows itself to be very good at predicting events. 
This isn't because it has magical powers, but because it knows more science than we do, has billions of sensors scattered around the globe, and runs efficient algorithms for modeling humans and other complex systems with unprecedented precision &mdash; on an array of computer hardware the size of our moon.</p> <p>Omega presents you with two boxes. Box A is transparent and contains $1000. Box B is opaque and contains either $1 million or nothing. You may choose to take both boxes (called "two-boxing"), or you may choose to take only box B (called "one-boxing"). If Omega predicted you'll two-box, then Omega has left box B empty. If Omega predicted you'll one-box, then Omega has placed $1M in box B.</p> <p>By the time you choose, Omega has already left for its next game &mdash; the contents of box B won't change after you make your decision. Moreover, you've watched Omega play a thousand games against people like you, and on every occasion Omega predicted the human player's choice accurately.</p> <p>Should you one-box or two-box?</p> <div class="figure"><img src="" alt="Newcomb&rsquo;s problem" /> <p class="caption">Newcomb&rsquo;s problem</p> </div> <p>Here's an argument for two-boxing. The $1M either <em>is</em> or <em>is not</em> in the box; your choice cannot affect the contents of box B now. So, you should two-box, because then you get $1K plus whatever is in box B. This is a straightforward application of the dominance principle (section 6.1). Two-boxing dominates one-boxing.</p> <p>Convinced? Well, here's an argument for one-boxing. On all those earlier games you watched, everyone who two-boxed received $1K, and everyone who one-boxed received $1M. So you're almost certain that you'll get $1K for two-boxing and $1M for one-boxing, which means that to maximize your expected utility, you should one-box.</p> <p><a href="">Nozick (1969)</a> reports:</p> <blockquote> <p>I have put this problem to a large number of people... 
To almost everyone it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly.</p> </blockquote> <p>This is not a "merely verbal" dispute (<a href="">Chalmers 2011</a>). Decision theorists have offered different <em>algorithms</em> for making a choice, and they have different outcomes. Translated into English, the first algorithm (<em>evidential decision theory</em> or EDT) says "Take actions such that you would be glad to receive the news that you had taken them." The second algorithm (<em>causal decision theory</em> or CDT) says "Take actions which you expect to have a positive effect on the world."</p> <p>Many decision theorists have the intuition that CDT is right. But a CDT agent appears to "lose" on Newcomb's problem, ending up with $1000, while an EDT agent gains $1M. Proponents of EDT can ask proponents of CDT: "If you're so smart, why aren't you rich?" As <a href="">Spohn (2012)</a> writes, "this must be poor rationality that complains about the reward for irrationality." Or as <a href="">Yudkowsky (2010)</a> argues:</p> <blockquote> <p>An expected utility maximizer should maximize <em>utility</em> &mdash; not formality, reasonableness, or defensibility...</p> </blockquote> <p>In response to EDT's apparent "win" over CDT on Newcomb's problem, proponents of CDT have presented similar problems on which a CDT agent "wins" and an EDT agent "loses." Proponents of EDT, meanwhile, have replied with additional Newcomblike problems on which EDT wins and CDT loses. Let's explore each of them in turn.</p> <p>&nbsp;</p> <h4 id="evidential-and-causal-decision-theory"><a href="#evidential-and-causal-decision-theory">11.1.2. 
Evidential and causal decision theory</a></h4> <p>First, however, we will consider our two decision algorithms in a little more detail.</p> <p>EDT can be described simply: according to this theory, agents should use conditional probabilities when determining the expected utility of different acts. Specifically, they should use the probability of the world being in each possible state conditioning on them carrying out the act under consideration. So in Newcomb&rsquo;s problem they consider the probability that Box B contains $1 million or nothing conditioning on the evidence provided by their decision to one-box or two-box. This is how the theory formalizes the notion of an act providing good news.</p> <p>CDT is more complex, at least in part because it has been formulated in a variety of different ways and these formulations are equivalent to one another only if certain background assumptions are met. However, a good sense of the theory can be gained by considering the counterfactual approach, which is one of the more intuitive of these formulations. This approach utilizes the probabilities of certain counterfactual conditionals, which can be thought of as representing the causal influence of an agent&rsquo;s acts on the state of the world. These conditionals take the form &ldquo;if I were to carry out a certain act, then the world would be in a certain state." So in Newcomb&rsquo;s problem, for example, this formulation of CDT considers the probability of the counterfactuals like &ldquo;if I were to one-box, then Box B would contain $1 million&rdquo; and, in doing so, considers the causal influence of one-boxing on the contents of the boxes.</p> <p>The same distinction can be made in formulaic terms. 
Both EDT and CDT agree that decision theory should be about maximizing expected utility where the expected utility of an act, A, given a set of possible outcomes, O, is defined as follows:</p> <p><img src="" alt="expected utility formula" />.</p> <p>In this equation, V(A &amp; O) represents the value to the agent of the combination of an act and an outcome. So this is the utility that the agent will receive if they carry out a certain act and a certain outcome occurs. Further, Pr<sub>A</sub>O represents the probability of each outcome occurring on the supposition that the agent carries out a certain act. It is in terms of this probability that CDT and EDT differ. EDT uses the conditional probability, Pr(O|A), while CDT uses the probability of subjunctive conditionals, Pr(A <img src="" alt="" /> O).</p> <p>Using these two versions of the expected utility formula, it's possible to demonstrate in a formal manner why EDT and CDT give the advice they do in Newcomb's problem. To demonstrate this it will help to make two simplifying assumptions. First, we will presume that each dollar of money is worth 1 unit of utility to the agent (and so will presume that the agent's utility is linear with money). Second, we will presume that Omega is a perfect predictor of human actions so that if the agent two-boxes it provides definitive evidence that there is nothing in the opaque box and if the agent one-boxes it provides definitive evidence that there is $1 million in this box. 
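</p>

<p>Under these two simplifying assumptions, the EDT and CDT calculations can be reproduced in a few lines. A minimal sketch, with CDT's probability for the full box set to an arbitrary 0.5 (any value gives CDT the same ranking):</p>

```python
# Newcomb's problem under the two simplifying assumptions stated above:
# utility is linear in dollars, and Omega is a perfect predictor.

# EDT uses probabilities of box B's contents *conditional on* the act;
# perfect prediction makes these conditional probabilities 0 or 1.
edt_two_box = 1.0 * (1_000 + 0) + 0.0 * (1_000 + 1_000_000)  # B surely empty
edt_one_box = 0.0 * 0 + 1.0 * 1_000_000                      # B surely full

# CDT: the act cannot causally influence the earlier prediction, so the
# same (arbitrary) probability of a full box is used for both acts.
p_full = 0.5
cdt_two_box = (1 - p_full) * 1_000 + p_full * (1_000 + 1_000_000)
cdt_one_box = (1 - p_full) * 0 + p_full * 1_000_000

print(edt_one_box > edt_two_box)  # True: EDT one-boxes
print(cdt_two_box > cdt_one_box)  # True: CDT two-boxes
```

<p>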
Given these assumptions, EDT calculates the expected utility of each decision as follows:</p> <div class="figure"><img src="" alt="EU for two-boxing according to EDT" /> <p class="caption">EU for two-boxing according to EDT</p> </div> <div class="figure"><img src="" alt="EU for one-boxing according to EDT" /> <p class="caption">EU for one-boxing according to EDT</p> </div> <p>Given that one-boxing has a higher expected utility according to these calculations, an EDT agent will one-box.</p> <p>On the other hand, given that the agent's decision doesn't causally influence Omega's earlier prediction, CDT will use the same probability regardless of whether the agent one-boxes or two-boxes. The decision endorsed will be the same regardless of what probability we use, so, to demonstrate the theory, we can simply arbitrarily assign a 0.5 probability that the opaque box has nothing in it and a 0.5 probability that it has one million dollars in it. CDT then calculates the expected utility of each decision as follows:</p> <div class="figure"><img src="" alt="EU for two-boxing according to CDT" /> <p class="caption">EU for two-boxing according to CDT</p> </div> <div class="figure"><img src="" alt="EU for one-boxing according to CDT" /> <p class="caption">EU for one-boxing according to CDT</p> </div> <p>Given that two-boxing has a higher expected utility according to these calculations, a CDT agent will two-box. This approach demonstrates the result given more informally in the previous section: CDT agents will two-box in Newcomb's problem and EDT agents will one-box.</p> <p>As mentioned before, there are also alternative formulations of CDT. For example, David Lewis <a href="">(1981)</a> and Brian Skyrms <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">(1980)</a> both present approaches that rely on the partition of the world into states to capture causal information, rather than counterfactual conditionals. 
On Lewis&rsquo;s version of this account, for example, the agent calculates the expected utility of acts using their unconditional credence in states of the world that are <em>dependency hypotheses</em>, which are descriptions of the possible ways that the world can depend on the agent&rsquo;s actions. These dependency hypotheses intrinsically contain the required causal information.</p> <p>Other traditional approaches to CDT include the imaging approach of <a href="">Sobel (1980)</a> (also see <a href="">Lewis 1981</a>) and the unconditional expectations approach of Leonard Savage <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">(1954)</a>. Those interested in the various traditional approaches to CDT would do best to consult Lewis <a href="">(1981)</a>, <a href="">Weirich (2008)</a>, and <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Joyce (1999)</a>. More recently, work in computer science on causal Bayesian networks has led to an innovative approach to CDT that has received some attention in the philosophical literature (<a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Pearl 2000, ch. 4</a> and <a href="">Spohn 2012</a>).</p> <p>Now we return to an analysis of decision scenarios, armed with EDT and the counterfactual formulation of CDT.</p> <p>&nbsp;</p> <h4 id="medical-newcomb-problems"><a href="#medical-newcomb-problems">11.1.3. Medical Newcomb problems</a></h4> <p>Medical Newcomb problems share a similar form but come in many variants, including Solomon's problem (<a href="">Gibbard &amp; Harper 1976</a>) and the smoking lesion problem (<a href="">Egan 2007</a>). 
Below I present a variant called the "chewing gum problem" (<a href="">Yudkowsky 2010</a>):</p> <blockquote> <p>Suppose that a recently published medical study shows that chewing gum seems to cause throat abscesses &mdash; an outcome-tracking study showed that of people who chew gum, 90% died of throat abscesses before the age of 50. Meanwhile, of people who do not chew gum, only 10% die of throat abscesses before the age of 50. The researchers, to explain their results, wonder if saliva sliding down the throat wears away cellular defenses against bacteria. Having read this study, would you choose to chew gum? But now a second study comes out, which shows that most gum-chewers have a certain gene, CGTA, and the researchers produce a table showing the following mortality rates:</p> </blockquote> <blockquote> <table border="0"> <tbody> <tr> <td>&nbsp;</td> <td>CGTA present</td> <td>CGTA absent</td> </tr> <tr> <td>Chew Gum</td> <td>89% die</td> <td>8% die</td> </tr> <tr> <td>Don&rsquo;t chew</td> <td>99% die</td> <td>11% die</td> </tr> </tbody> </table> </blockquote> <blockquote> <p>This table shows that whether you have the gene CGTA or not, your chance of dying of a throat abscess goes down if you chew gum. Why are fatalities so much higher for gum-chewers, then? Because people with the gene CGTA tend to chew gum and die of throat abscesses. The authors of the second study also present a test-tube experiment which shows that the saliva from chewing gum can kill the bacteria that form throat abscesses. The researchers hypothesize that because people with the gene CGTA are highly susceptible to throat abscesses, natural selection has produced in them a tendency to chew gum, which protects against throat abscesses. 
The strong correlation between chewing gum and throat abscesses is not because chewing gum causes throat abscesses, but because a third factor, CGTA, leads to chewing gum and throat abscesses.</p> </blockquote> <blockquote> <p>Having learned of this new study, would you choose to chew gum? Chewing gum helps protect against throat abscesses whether or not you have the gene CGTA. Yet a friend who heard that you had decided to chew gum (as people with the gene CGTA often do) would be quite alarmed to hear the news &mdash; just as she would be saddened by the news that you had chosen to take both boxes in Newcomb&rsquo;s Problem. This is a case where [EDT] seems to return the wrong answer, calling into question the validity of the... rule &ldquo;Take actions such that you would be glad to receive the news that you had taken them.&rdquo; Although the news that someone has decided to chew gum is alarming, medical studies nonetheless show that chewing gum protects against throat abscesses. [CDT's] rule of &ldquo;Take actions which you expect to have a positive physical effect on the world&rdquo; seems to serve us better.</p> </blockquote> <p>One response to this claim, called the <em>tickle defense</em> (<a href=";uid=2129&amp;uid=2&amp;uid=70&amp;uid=4&amp;sid=21101205363271">Eells, 1981</a>), argues that EDT actually reaches the right decision in such cases. According to this defense, the most reasonable way to construe the &ldquo;chewing gum problem&rdquo; involves presuming that CGTA causes a desire (a mental &ldquo;tickle&rdquo;) which then causes the agent to be more likely to chew gum, rather than CGTA directly causing the action. Given this, if we presume that the agent already knows their own desires and hence already knows whether they&rsquo;re likely to have the CGTA gene, chewing gum will not provide the agent with further bad news. 
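<p>As an aside, the dominance reasoning behind the table above can be checked numerically. A minimal sketch: the gene priors below are arbitrary stand-ins, since the dominance claim holds for every prior.</p>

```python
# Death probabilities from the second study's table.
P_DIE = {
    ("chew", "gene"): 0.89, ("chew", "no gene"): 0.08,
    ("dont", "gene"): 0.99, ("dont", "no gene"): 0.11,
}

def causal_death_prob(act, p_gene):
    """Expected death probability holding gene status fixed (CDT-style)."""
    return p_gene * P_DIE[(act, "gene")] + (1 - p_gene) * P_DIE[(act, "no gene")]

# Chewing dominates: it lowers death risk under every possible gene prior.
for p_gene in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert causal_death_prob("chew", p_gene) < causal_death_prob("dont", p_gene)

# A naive EDT comparison instead conditions on the act itself, using the
# first study's population frequencies -- correlational, not causal:
p_die_given_chew, p_die_given_dont = 0.90, 0.10
assert p_die_given_chew > p_die_given_dont   # chewing looks like "bad news"
```

<p>The same numbers thus support both readings: conditioning on the act makes chewing alarming, while holding the gene fixed makes chewing strictly better.</p>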
Consequently, an agent following EDT will chew in order to get the good news that they have decreased their chance of getting abscesses.</p> <p>Unfortunately, the tickle defense fails to achieve its aims. In introducing this approach, Eells hoped that EDT could be made to mimic CDT but without an allegedly inelegant reliance on causation. However, <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Sobel (1994, ch. 2)</a> demonstrated that the tickle defense failed to ensure that EDT and CDT would decide equivalently in all cases. On the other hand, those who feel that EDT originally got it right by one-boxing in Newcomb&rsquo;s problem will be disappointed to discover that the tickle defense leads an agent to two-box in some versions of Newcomb&rsquo;s problem and so solves one problem for the theory at the expense of introducing another.</p> <p>So just as CDT &ldquo;loses&rdquo; on Newcomb&rsquo;s problem, EDT will "lose&rdquo; on Medical Newcomb problems (if the tickle defense fails) or will join CDT and "lose" on Newcomb&rsquo;s Problem itself (if the tickle defense succeeds).</p> <p>&nbsp;</p> <h4 id="newcombs-soda"><a href="#newcombs-soda">11.1.4. Newcomb's soda</a></h4> <p>There are also similar problematic cases for EDT where the evidence provided by your decision relates not to a feature that you were born (or created) with but to some other feature of the world. One such scenario is the <em>Newcomb&rsquo;s soda</em> problem, introduced in <a href="">Yudkowsky (2010)</a>:</p> <blockquote> <p>You know that you will shortly be administered one of two sodas in a double-blind clinical test. After drinking your assigned soda, you will enter a room in which you find a chocolate ice cream and a vanilla ice cream. The first soda produces a strong but entirely subconscious desire for chocolate ice cream, and the second soda produces a strong subconscious desire for vanilla ice cream. 
By &ldquo;subconscious&rdquo; I mean that you have no introspective access to the change, any more than you can answer questions about individual neurons firing in your cerebral cortex. You can only infer your changed tastes by observing which kind of ice cream you pick.</p> </blockquote> <blockquote> <p>It so happens that all participants in the study who test the Chocolate Soda are rewarded with a million dollars after the study is over, while participants in the study who test the Vanilla Soda receive nothing. But subjects who actually eat vanilla ice cream receive an additional thousand dollars, while subjects who actually eat chocolate ice cream receive no additional payment. You can choose one and only one ice cream to eat. A pseudo-random algorithm assigns sodas to experimental subjects, who are evenly divided (50/50) between Chocolate and Vanilla Sodas. You are told that 90% of previous research subjects who chose chocolate ice cream did in fact drink the Chocolate Soda, while 90% of previous research subjects who chose vanilla ice cream did in fact drink the Vanilla Soda. Which ice cream would you eat?</p> </blockquote> <div class="figure"><img src="" alt="Newcomb&rsquo;s soda" /> <p class="caption">Newcomb&rsquo;s soda</p> </div> <p>In this case, an EDT agent will decide to eat chocolate ice cream as this would provide evidence that they drank the chocolate soda and hence that they will receive $1 million after the experiment. However, this seems to be the wrong decision and so, once again, the EDT agent &ldquo;loses&rdquo;.</p> <p>&nbsp;</p> <h4 id="bostroms-meta-newcomb-problem"><a href="#bostroms-meta-newcomb-problem">11.1.5. Bostrom's meta-Newcomb problem</a></h4> <p>In response to attacks on their theory, the proponent of EDT can present alternative scenarios where EDT &ldquo;wins&rdquo; and it is CDT that &ldquo;loses&rdquo;. One such case is the <em>meta-Newcomb problem</em> proposed in <a href="">Bostrom (2001)</a>. 
Adapted to fit my earlier story about Omega the superintelligent machine (section 11.1.1), the problem runs like this: Either Omega has <em>already</em> placed $1M or nothing in box B (depending on its prediction about your choice), or else Omega is watching as you choose and <em>after</em> your choice it will place $1M into box B only if you have one-boxed. But you don't know which is the case. Omega makes its move before the human player's choice about half the time, and the rest of the time it makes its move <em>after</em> the player's choice.</p> <p>But now suppose there is another superintelligent machine, Meta-Omega, who has a perfect track record of predicting both Omega's choices and the choices of human players. Meta-Omega tells you that either you will two-box and Omega will "make its move" <em>after</em> you make your choice, or else you will one-box and Omega has <em>already</em> made its move (and gone on to the next game, with someone else).</p> <p>Here, an EDT agent one-boxes and walks away with a million dollars. On the face of it, however, a CDT agent faces a dilemma: if she two-boxes then Omega's action depends on her choice, so the "rational" choice is to one-box. But if the CDT agent one-boxes, then Omega's action temporally precedes (and is thus physically independent of) her choice, so the "rational" action is to two-box. It might seem, then, that a CDT agent will be unable to reach any decision in this scenario. However, further reflection reveals that the issue is more complicated. According to CDT, what the agent ought to do in this scenario depends on their credences about their own actions. If they have a high credence that they will two-box, they ought to one-box, and if they have a high credence that they will one-box, they ought to two-box. 
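<p>A toy model makes this instability concrete. Suppose, as a simplifying assumption of mine (not part of Bostrom's statement of the problem), that the agent's credence q that they will one-box equals their credence that Omega has already moved and filled box B, while with credence 1 - q Omega is watching and will fill B only on seeing a one-box:</p>

```python
A, B = 1_000, 1_000_000

def cdt_eu(q):
    """Causal expected utility of each act, where q is the agent's
    credence that Omega has already moved (and so already filled box B)."""
    # One-boxing yields B whether Omega predicted it or is watching.
    eu_one = q * B + (1 - q) * B
    # Two-boxing adds A, but forfeits B whenever Omega is watching.
    eu_two = q * (A + B) + (1 - q) * A
    return eu_one, eu_two

one, two = cdt_eu(0.1)      # agent mostly expects to two-box...
assert one > two            # ...so CDT recommends one-boxing

one, two = cdt_eu(0.9999)   # agent strongly expects to one-box...
assert two > one            # ...so CDT recommends two-boxing
```

<p>In this toy model the recommendation flips at q of about 0.999: once the agent becomes confident enough that they will one-box, two-boxing looks better, and vice versa. This is why the verdict hinges on credences that the scenario's description never supplies.</p>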
Given that the agent's credences in their actions are not given to us in the description of the meta-Newcomb problem, the scenario is underspecified and it is hard to know what conclusions should be drawn from it.</p> <p>&nbsp;</p> <h4 id="the-psychopath-button"><a href="#the-psychopath-button">11.1.6. The psychopath button</a></h4> <p>Fortunately, another case has been introduced where, according to CDT, what an agent ought to do depends on their credences about what they will do. This is the <em>psychopath button</em>, introduced in <a href="">Egan (2007)</a>:</p> <blockquote> <p>Paul is debating whether to press the &ldquo;kill all psychopaths&rdquo; button. It would, he thinks, be much better to live in a world with no psychopaths. Unfortunately, Paul is quite confident that only a psychopath would press such a button. Paul very strongly prefers living in a world with psychopaths to dying. Should Paul press the button?</p> </blockquote> <p>Many people think Paul should not. After all, if he does so, he is almost certainly a psychopath and so pressing the button will almost certainly cause his death. This is also the response that an EDT agent will give. After all, pushing the button would provide the agent with the bad news that they are almost certainly a psychopath and so will die as a result of their action.</p> <p>On the other hand, if Paul is fairly certain that he is not a psychopath, then CDT will say that he ought to press the button. CDT will note that, given Paul&rsquo;s confidence that he isn&rsquo;t a psychopath, his decision will almost certainly have a positive impact as it will result in the death of all psychopaths and Paul&rsquo;s survival. On the face of it, then, a CDT agent would decide inappropriately in this case by pushing the button. 
Importantly, unlike in the meta-Newcomb problem, the agent's credences about their own behavior are specified in Egan's full version of this scenario (in non-numeric terms, the agent thinks they're unlikely to be a psychopath and hence unlikely to press the button).</p> <p>However, in order to produce this problem for CDT, Egan made a number of assumptions about how an agent should decide when what they ought to do depends on what they think they will do. In response, alternative views about deciding in such cases have been advanced (particularly in <a href=";uid=2&amp;uid=4&amp;sid=21101299066461">Arntzenius, 2008</a> and <a href="">Joyce, 2012</a>). Given these factors, opinions are split about whether the psychopath button problem does in fact pose a challenge to CDT.</p> <p>&nbsp;</p> <h4 id="parfits-hitchhiker"><a href="#parfits-hitchhiker">11.1.7. Parfit's hitchhiker</a></h4> <p>Not all decision scenarios are problematic for just one of EDT or CDT. There are also cases where an EDT agent and a CDT agent will both "lose". One such case is <em>Parfit&rsquo;s Hitchhiker</em> (<a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Parfit, 1984, p. 7</a>):</p> <blockquote> <p>Suppose that I am driving at midnight through some desert. My car breaks down. You are a stranger, and the only other driver near. I manage to stop you, and I offer you a great reward if you rescue me. I cannot reward you now, but I promise to do so when we reach my home. Suppose next that I am <em>transparent</em>, unable to deceive others. I cannot lie convincingly. Either a blush, or my tone of voice, always gives me away. Suppose, finally, that I know myself to be never self-denying. If you drive me to my home, it would be worse for me if I gave you the promised reward. Since I know that I never do what will be worse for me, I know that I shall break my promise. 
Given my inability to lie convincingly, you know this too. You do not believe my promise, and therefore leave me stranded in the desert.</p> </blockquote> <p>In this scenario the agent "loses" if they would later refuse to give the stranger the reward. However, both EDT agents and CDT agents will refuse to do so. After all, by this point the agent will already be safe so giving the reward can neither provide good news about, nor cause, their safety. So this seems to be a case where both theories &ldquo;lose&rdquo;.</p> <p>&nbsp;</p> <h4 id="transparent-newcombs-problem"><a href="#transparent-newcombs-problem">11.1.8. Transparent Newcomb's problem</a></h4> <p>There are also other cases where both EDT and CDT "lose". One of these is the <em>Transparent Newcomb's problem</em> which, in at least one version, is due to <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Drescher (2006, p. 238-242)</a>. This scenario is like the original Newcomb's problem but, in this case, both boxes are transparent so you can see their contents when you make your decision. Again, Omega has filled box A with $1000 and Box B with either $1 million or nothing based on a prediction of your behavior. Specifically, Omega has predicted how you would decide if you witnessed $1 million in Box B. If Omega predicted that you would one-box in this case, he placed $1 million in Box B. On the other hand, if Omega predicted that you would two-box in this case then he placed nothing in Box B.</p> <p>Both EDT and CDT agents will two-box in this case. After all, the contents of the boxes are determined and known so the agent's decision can neither provide good news about what they contain nor cause them to contain something desirable. 
As with two-boxing in the original version of Newcomb&rsquo;s problem, many philosophers will endorse this behavior.</p> <p>However, it&rsquo;s worth noting that Omega will almost certainly have predicted this decision and so filled Box B with nothing. CDT and EDT agents will end up with $1000. On the other hand, just as in the original case, the agent that one-boxes will end up with $1 million. So this is another case where both EDT and CDT &ldquo;lose&rdquo;. Consequently, to those that agree with the earlier comments (in section 11.1.1) that a decision theory shouldn't lead an agent to "lose", neither of these theories will be satisfactory.</p> <p>&nbsp;</p> <h4 id="counterfactual-mugging"><a href="#counterfactual-mugging">11.1.9. Counterfactual mugging</a></h4> <p>Another similar case, known as <em>counterfactual mugging</em>, was developed in <a href="/lw/3l/counterfactual_mugging/">Nesov (2009)</a>:</p> <blockquote> <p>Imagine that one day, Omega comes to you and says that it has just tossed a fair coin, and given that the coin came up tails, it decided to ask you to give it $100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don't want to give up your $100. But see, the Omega tells you that if the coin came up heads instead of tails, it'd give you $10000, but only if you'd agree to give it $100 if the coin came up tails.</p> </blockquote> <p>Should you give up the $100?</p> <p>Both CDT and EDT say no. After all, giving up your money neither provides good news about nor influences your chances of getting $10 000 out of the exchange. Further, this intuitively seems like the right decision. On the face of it, then, it is appropriate to retain your money in this case.</p> <p>However, presuming you take Omega to be perfectly trustworthy, there seems to be room to debate this conclusion. 
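<p>The ex ante arithmetic behind this debate is straightforward. Before the coin is tossed, an agent whom Omega can foresee paying up faces an even chance of gaining $10,000 and an even chance of losing $100:</p>

```python
# Expected value of each policy, evaluated before the coin toss.
def policy_ev(pays_when_asked):
    heads_payoff = 10_000 if pays_when_asked else 0  # Omega rewards only would-be payers
    tails_payoff = -100 if pays_when_asked else 0    # the $100 is surrendered on tails
    return 0.5 * heads_payoff + 0.5 * tails_payoff

assert policy_ev(True) == 4_950.0   # the paying type is better off ex ante
assert policy_ev(False) == 0.0
```

<p>So the paying type of agent comes out $4,950 ahead in expectation, even though in the tails world actually faced, paying loses $100; this is the sense in which the refusing agent can be called the type of agent that loses.</p>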
If you are the sort of agent that gives up the $100 in counterfactual mugging then you will tend to do better than the sort of agent that won&rsquo;t give up the $100. Of course, in the particular case at hand you will lose but rational agents often lose in specific cases (as, for example, when such an agent loses a rational bet). It could be argued that what a rational agent should not do is be the type of agent that loses. Given that agents that refuse to give up the $100 are the type of agent that loses, there seem to be grounds to claim that counterfactual mugging is another case where both CDT and EDT act inappropriately.</p> <p>&nbsp;</p> <h4 id="prisoners-dilemma"><a href="#prisoners-dilemma">11.1.10. Prisoner's dilemma</a></h4> <p>Before moving on to a more detailed discussion of various possible decision theories, I&rsquo;ll consider one final scenario: the <em>prisoner&rsquo;s dilemma</em>. <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Resnik (1987, pp. 147-148 )</a> outlines this scenario as follows:</p> <blockquote> <p>Two prisoners...have been arrested for vandalism and have been isolated from each other. There is sufficient evidence to convict them on the charge for which they have been arrested, but the prosecutor is after bigger game. He thinks that they robbed a bank together and that he can get them to confess to it. He summons each separately to an interrogation room and speaks to each as follows: "I am going to offer the same deal to your partner, and I will give you each an hour to think it over before I call you back. This is it: If one of you confesses to the bank robbery and the other does not, I will see to it that the confessor gets a one-year term and that the other guy gets a twenty-five year term. If you both confess, then it's ten years apiece. 
If neither of you confesses, then I can only get two years apiece on the vandalism charge..."</p> </blockquote> <p>The decision matrix of each vandal will be as follows:</p> <table border="0" cellspacing="5" cellpadding="3"> <tbody> <tr> <td>&nbsp;</td> <td><em>Partner confesses</em></td> <td><em>Partner lies</em></td> </tr> <tr> <td><em>Confess</em></td> <td>10 years in jail</td> <td>1 year in jail</td> </tr> <tr> <td><em>Lie</em></td> <td>25 years in jail</td> <td>2 years in jail</td> </tr> </tbody> </table> <p>Faced with this scenario, a CDT agent will confess. After all, the agent&rsquo;s decision can&rsquo;t influence their partner&rsquo;s decision (they&rsquo;ve been isolated from one another) and so the agent is better off confessing regardless of what their partner chooses to do. According to the majority of decision (and game) theorists, confessing is in fact the rational decision in this case.</p> <p>Despite this, however, an EDT agent may lie in a prisoner&rsquo;s dilemma. Specifically, if they think that their partner is similar enough to them, the agent will lie because doing so will provide the good news that they will both lie and hence that they will both get two years in jail (good news as compared with the bad news that they will both confess and hence that they will get 10 years in jail).</p> <p>To many people, there seems to be something compelling about this line of reasoning. For example, <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Douglas Hofstadter (1985, pp. 737-780)</a> has argued that an agent acting &ldquo;superrationally&rdquo; would co-operate with other superrational agents for precisely this sort of reason: a superrational agent would take into account the fact that other such agents will go through the same thought process in the <em>prisoner&rsquo;s dilemma</em> and so make the same decision. 
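<p>This reasoning can be quantified with a single similarity parameter. Let r be the agent's probability that their partner makes the same choice they do (a number the scenario itself leaves open), and measure utility as negative years in jail:</p>

```python
# Sentences (in years) from the prosecutor's offer: (my act, partner's act).
YEARS = {
    ("confess", "confess"): 10, ("confess", "lie"): 1,
    ("lie", "confess"): 25, ("lie", "lie"): 2,
}

def edt_eu(act, r):
    """EDT expected utility of an act, where r = P(partner mirrors my act)."""
    other = "lie" if act == "confess" else "confess"
    return -(r * YEARS[(act, act)] + (1 - r) * YEARS[(act, other)])

# Low similarity: confessing is better. High similarity: lying is better.
assert edt_eu("confess", 0.5) > edt_eu("lie", 0.5)
assert edt_eu("lie", 0.9) > edt_eu("confess", 0.9)
```

<p>With these payoffs, lying comes out ahead exactly when r exceeds 0.75, which makes precise the condition that the partner be "similar enough".</p>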
As such, it is better that the decision both agents reach be to lie than that it be to confess. More broadly, it could perhaps be argued that a rational agent should lie in the <em>prisoner&rsquo;s dilemma</em> as long as they believe that they are similar enough to their partner that they are likely to reach the same decision.</p> <div class="figure"><img src="" alt="An argument for cooperation in the prisoners&rsquo; dilemma" /> <p class="caption">An argument for cooperation in the prisoners&rsquo; dilemma</p> </div> <p>It is unclear, then, precisely what should be concluded from the prisoner&rsquo;s dilemma. However, for those that are sympathetic to Hofstadter&rsquo;s point or the line of reasoning appealed to by the EDT agent, the scenario seems to provide an additional reason to seek out an alternative theory to CDT.</p> <p>&nbsp;</p> <h3 id="benchmark-theory-bt"><a href="#benchmark-theory-bt">11.2. Benchmark theory (BT)</a></h3> <p>One recent response to the apparent failure of EDT to decide appropriately in medical Newcomb problems and CDT to decide appropriately in the psychopath button is Benchmark Theory (BT), which was developed in <a href="">Wedgwood (2011)</a> and discussed further in <a href="">Briggs (2010)</a>.</p> <p>In English, we could think of this decision algorithm as saying that agents should decide so as to give their future self good news about how well off they are compared to how well off they could have been. In formal terms, BT uses the following formula to calculate the expected utility of an act, A:</p> <p><img src="" alt="BT expected value formula" />.</p> <p>In other words, it uses the conditional probability, as in EDT, but calculates the value differently (as indicated by the use of V&rsquo; rather than V). 
V&rsquo; is calculated relative to a benchmark value in order to give a comparative measure of value (both of the above sources go into more detail about this process).</p> <p>Taking the informal perspective, in the <em>chewing gum problem</em>, BT will note that by chewing gum, the agent will always get the good news that they are comparatively better off than they could have been (because chewing gum helps control throat abscesses), whereas by not chewing, the agent will always get the bad news that they could have been comparatively better off by chewing. As such, a BT agent will chew in this scenario.</p> <p>Further, BT seems to reach what many consider to be the right decision in the <em>psychopath button</em>. In this case, the BT agent will note that if they push the button they will get the bad news that they are almost certainly a psychopath and so that they would have been comparatively much better off by not pushing (as pushing will kill them). On the other hand, if they don&rsquo;t push they will get the less bad news that they are almost certainly not a psychopath and so could have been comparatively a little better off if they had pushed the button (as this would have killed all the psychopaths but not them). So refraining from pushing the button gives the less bad news and so is the rational decision.</p> <p>On the face of it, then, there seem to be strong reasons to find BT compelling: it decides appropriately in these scenarios while, according to some people, EDT and CDT only decide appropriately in one or the other of them.</p> <p>Unfortunately, a BT agent will fail to decide appropriately in other scenarios. First, those that hold that one-boxing is the appropriate decision in Newcomb&rsquo;s problem will immediately find a flaw in BT. 
After all, in this scenario two-boxing gives the good news that the agent did comparatively better than they could have done (because they gain the $1000 from Box A, which is more than they would have received otherwise) while one-boxing brings the bad news that they did comparatively worse than they could have done (as they did not receive this money). As such, a BT agent will two-box in Newcomb&rsquo;s problem.</p> <p>Further, <a href="">Briggs (2010)</a> argues, though <a href="">Wedgwood (2011)</a> denies, that BT suffers from other problems. As such, even for those who support two-boxing in Newcomb&rsquo;s problem, it could be argued that BT doesn&rsquo;t represent an adequate theory of choice. It is unclear, then, whether BT is a desirable replacement for the alternative theories.</p> <p>&nbsp;</p> <h3 id="timeless-decision-theory-tdt"><a href="#timeless-decision-theory-tdt">11.3. Timeless decision theory (TDT)</a></h3> <p><a href="">Yudkowsky (2010)</a> offers another decision algorithm, <em>timeless decision theory</em> or TDT (see also <a href="">Altair, 2013</a>). Specifically, TDT is intended as an explicit response to the idea that a theory of rational choice should lead an agent to &ldquo;win&rdquo;. As such, it will appeal to those who think it is appropriate to one-box in Newcomb&rsquo;s problem and chew in the chewing gum problem.</p> <p>In English, this algorithm can be approximated as saying that an agent ought to choose as if CDT were right but they were determining not their actual decision but rather the result of the abstract computation of which their decision is one concrete instance. Formalizing this decision algorithm would require a substantial document in its own right and so will not be carried out in full here. 
Briefly, however, TDT is built on top of causal Bayesian networks <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">(Pearl, 2000)</a>, which are graphs where the arrows represent causal influence. TDT supplements these graphs by adding nodes representing abstract computations and taking the abstract computation that determines an agent&rsquo;s decision to be the object of choice rather than the concrete decision itself (see <a href="">Yudkowsky, 2010</a> for a more detailed description).</p> <p>Returning to an informal discussion, an example will help clarify the form taken by TDT: imagine that two perfect replicas of a person are placed in identical rooms and asked to make the same decision. While each replica will make their own decision, in doing so, they will be carrying out the same computational process. As such, TDT will say that the replicas ought to act as if they are determining the result of this process and hence as if they are deciding the behavior of both copies.</p> <p>Something similar can be said about Newcomb&rsquo;s problem. In this case it is almost like there is again a replica of the agent: Omega&rsquo;s model of the agent that it used to predict the agent&rsquo;s behavior. Both the original agent and this &ldquo;replica&rdquo; respond to the same abstract computational process. In other words, both Omega&rsquo;s prediction and the agent&rsquo;s behavior are influenced by this process. As such, TDT advises the agent to act as if they are determining the result of this process and, hence, as if they can determine Omega&rsquo;s box-filling behavior. Consequently, a TDT agent will one-box in order to determine the result of this abstract computation in a way that leads to $1 million being placed in Box B.</p> <p>TDT also succeeds in other areas. 
For example, in the chewing gum problem there is no &ldquo;replica&rdquo; agent, so TDT will decide in line with standard CDT and choose to chew gum. Further, in the prisoner&rsquo;s dilemma, a TDT agent will lie if its partner is another TDT agent (or a relevantly similar agent). After all, in this case both agents will carry out the same computational process and so TDT will advise that the agent act as if they are determining this process and hence simultaneously determining both their own and their partner&rsquo;s decision. If so, then it is better for the agent that both of them lie than that both of them confess.</p> <p>However, despite its success, TDT also &ldquo;loses&rdquo; in some decision scenarios. For example, in counterfactual mugging, a TDT agent will not choose to give up the $100. This might seem surprising. After all, as with Newcomb&rsquo;s problem, this case involves Omega predicting the agent&rsquo;s behavior and hence involves a &ldquo;replica&rdquo;. However, this case differs in that the agent knows that the coin came up tails and so knows that they have nothing to gain by giving up the money.</p> <p>For those who feel that a theory of rational choice should lead an agent to &ldquo;win&rdquo;, then, TDT seems like a step in the right direction, but further work is required if it is to &ldquo;win&rdquo; in the full range of decision scenarios.</p> <p>&nbsp;</p> <h3 id="decision-theory-and-winning"><a href="#decision-theory-and-winning">11.4. Decision theory and &ldquo;winning&rdquo;</a></h3> <p>In the previous section, I discussed TDT, a decision algorithm that could be advanced as a replacement for CDT and EDT. One of the primary motivations for developing TDT is a sense that both CDT and EDT fail to reason in a desirable manner in some decision scenarios. However, despite acknowledging that CDT agents end up worse off in Newcomb's Problem, many (and perhaps the majority of) decision theorists are proponents of CDT. 
On the face of it, this may seem to suggest that these decision theorists aren't interested in developing a decision algorithm that "wins" but rather have some other aim in mind. If so, then this might lead us to question the value of developing one-boxing decision algorithms.</p> <p>However, the claim that most decision theorists don&rsquo;t care about finding an algorithm that &ldquo;wins&rdquo; mischaracterizes their position. After all, proponents of CDT tend to take the challenge posed by the fact that CDT agents &ldquo;lose&rdquo; in Newcomb's problem seriously (in the philosophical literature, it's often referred to as the <em>Why ain'cha rich?</em> problem). A common reaction to this challenge is neatly summarized in <a href=";camp=1789&amp;creative=390957&amp;creativeASIN=0321928423&amp;linkCode=as2&amp;tag=lesswrong-20">Joyce (1999, pp. 153-154)</a> as a response to a hypothetical question about why, if two-boxing is rational, the CDT agent does not end up as rich as an agent that one-boxes:</p> <blockquote> <p>Rachel has a perfectly good answer to the "Why ain't you rich?" question. "I am not rich," she will say, "because I am not the kind of person [Omega] thinks will refuse the money. I'm just not like you, Irene [the one-boxer]. Given that I know that I am the type who takes the money, and given that [Omega] knows that I am this type, it was reasonable of me to think that the $1,000,000 was not in [the box]. The $1,000 was the most I was going to get no matter what I did. So the only reasonable thing for me to do was to take it."</p> </blockquote> <blockquote> <p>Irene may want to press the point here by asking, "But don't you wish you were like me, Rachel?"... Rachel can and should admit that she <em>does</em> wish she were more like Irene... At this point, Irene will exclaim, "You've admitted it! It wasn't so smart to take the money after all." Unfortunately for Irene, her conclusion does not follow from Rachel's premise. 
Rachel will patiently explain that wishing to be a [one-boxer] in a Newcomb problem is not inconsistent with thinking that one should take the $1,000 <em>whatever type one is</em>. When Rachel wishes she was Irene's type she is wishing for <em>Irene's options</em>, not sanctioning her choice... While a person who knows she will face (has faced) a Newcomb problem might wish that she were (had been) the type that [Omega] labels a [one-boxer], this wish does not provide a reason for <em>being</em> a [one-boxer]. It might provide a reason to try (before [the boxes are filled]) to change her type <em>if she thinks this might affect [Omega's] prediction</em>, but it gives her no reason for doing anything other than taking the money once she comes to believe that she will be unable to influence what [Omega] does.</p> </blockquote> <p>In other words, this response distinguishes between the <em>winning decision</em> and the <em>winning type of agent</em> and claims that two-boxing is the winning decision in Newcomb&rsquo;s problem (even if one-boxers are the winning type of agent). Consequently, insofar as decision theory is about determining which <em>decision</em> is rational, on this account CDT reasons correctly in Newcomb&rsquo;s problem.</p> <p>For those that find this response perplexing, an analogy could be drawn to the <em>chewing gum problem</em>. In this scenario, there is near unanimous agreement that the rational decision is to chew gum. However, statistically, non-chewers will be better off than chewers. As such, the non-chewer could ask, &ldquo;if you&rsquo;re so smart, why aren&rsquo;t you healthy?&rdquo; In this case, the above response seems particularly appropriate. The chewers are less healthy not because of their decision but rather because they&rsquo;re more likely to have an undesirable gene. Having good genes doesn&rsquo;t make the non-chewer more rational but simply more lucky. 
The proponent of CDT simply makes a similar response to Newcomb&rsquo;s problem: one-boxers aren&rsquo;t richer because of their decision but rather because of the type of agent that they were when the boxes were filled.</p> <p>One final point about this response is worth noting. A proponent of CDT can accept the above argument but still acknowledge that, if given the choice before the boxes are filled, they would be rational to choose to modify themselves to be a one-boxing type of agent (as Joyce acknowledged in the above passage and as argued for in <a href="">Burgess, 2004</a>). To the proponent of CDT, this is unproblematic: if we are sometimes rewarded not for the rationality of our decisions in the moment but for the type of agent we were at some past moment, then it should be unsurprising that changing to a different type of agent might be beneficial.</p> <p>Reaction to this defense of two-boxing in Newcomb&rsquo;s problem has been divided. Many find it compelling, but others, like <a href="">Ahmed and Price (2012)</a>, think it does not adequately address the challenge:</p> <blockquote> <p>It is no use the causalist's whining that foreseeably, Newcomb problems do in fact reward irrationality, or rather CDT-irrationality. The point of the argument is that if everyone knows that the CDT-irrational strategy will in fact do better on average than the CDT-rational strategy, then it's rational to play the CDT-irrational strategy.</p> </blockquote> <p>Given this, there seem to be two positions one could take on these issues. If the response given by the proponent of CDT is compelling, then we should be attempting to develop a decision theory that two-boxes on Newcomb&rsquo;s problem. Perhaps the best theory for this role is CDT, but perhaps it is instead BT, which many people think reasons better in the psychopath button scenario.
On the other hand, if the response given by the proponents of CDT is not compelling, then we should be developing a theory that one-boxes in Newcomb&rsquo;s problem. In this case, TDT, or something like it, seems like the most promising theory currently on offer.</p> lukeprog zEWJBFFMvQ835nq6h 2013-02-28T14:15:55.090Z Improving Human Rationality Through Cognitive Change (intro) <p><small>This is the introduction to a paper I started writing long ago, but have since given up on. The paper was going to be an overview of methods for improving human rationality through cognitive change. Since it contains lots of handy references on rationality, I figured I'd publish it, in case it's helpful to others.</small></p> <p>&nbsp;</p> <h2>1. Introduction</h2> <p>During the last half-century, cognitive scientists have catalogued dozens of common errors in human judgment and decision-making (<a href="">Griffin et al. 2012</a>; <a href="">Gilovich et al. 2002</a>). <a href="">Stanovich (1999)</a> provides a sobering introduction:</p> <blockquote> <p>For example, people assess probabilities incorrectly, they display confirmation bias, they test hypotheses inefficiently, they violate the axioms of utility theory, they do not properly calibrate degrees of belief, they overproject their own opinions onto others, they allow prior knowledge to become implicated in deductive reasoning, they systematically underweight information about nonoccurrence when evaluating covariation, and they display numerous other information-processing biases...</p> </blockquote> <p>The good news is that researchers have also begun to understand the cognitive mechanisms which produce these errors (<a href="">Kahneman 2011</a>; <a href="">Stanovich 2010</a>), they have found several "debiasing" techniques that groups or individuals may use to partially avoid or correct these errors (<a href="">Larrick 2004</a>), and they have discovered that environmental factors can be used to help people to exhibit fewer
errors (<a href="">Thaler and Sunstein 2009</a>; <a href="">Trout 2009</a>).</p> <p>This "heuristics and biases" research program teaches us many lessons that, if put into practice, could improve human welfare. Debiasing techniques that improve human rationality may be able to decrease rates of violence caused by ideological extremism (<a href="">Lilienfeld et al. 2009</a>). Knowledge of human bias can help executives make more profitable decisions (<a href="">Kahneman et al. 2011</a>). Scientists with improved judgment and decision-making skills ("rationality skills") may be more apt to avoid experimenter bias (<a href="">Sackett 1979</a>). Understanding the nature of human reasoning can also improve the practice of philosophy (<a href="">Knobe et al. 2012</a>; <a href="">Talbot 2009</a>; <a href="">Bishop and Trout 2004</a>; <a href="/lw/fpe/philosophy_needs_to_trust_your_rationality_even/">Muehlhauser 2012</a>), which has too often made false assumptions about how the mind reasons (<a href="">Weinberg et al. 2001</a>; <a href="">Lakoff and Johnson 1999</a>; <a href="">De Paul and Ramsey 1999</a>). Finally, improved rationality could help decision makers to choose better policies, especially in domains likely by their very nature to trigger biased thinking, such as investing (<a href="">Burnham 2008</a>), military command (<a href="">Lang 2011</a>; <a href="">Williams 2010</a>; <a href="">Janser 2007</a>), intelligence analysis (<a href="">Heuer 1999</a>), or the study of global catastrophic risks (<a href="">Yudkowsky 2008a</a>).</p> <p>But is it possible to improve human rationality? The answer, it seems, is "Yes." Lovallo and Sibony (<a href="">2010</a>) showed that when organizations worked to reduce the effect of bias on their investment decisions, they achieved returns up to seven percentage points higher.
Multiple studies suggest that a simple instruction to "think about alternative hypotheses" can counteract overconfidence, confirmation bias, and anchoring effects, leading to more accurate judgments (<a href="">Mussweiler et al. 2000</a>; <a href="">Koehler 1994</a>; <a href="">Koriat et al. 1980</a>). Merely warning people about biases can decrease their prevalence, at least with regard to framing effects (<a href="">Cheng and Wu 2010</a>), hindsight bias (<a href="">Hasher et al. 1981</a>; <a href="">Reimers and Butler 1992</a>), and the outcome effect (<a href="">Clarkson et al. 2002</a>). Several other methods have been shown to ameliorate the effects of common human biases (<a href="">Larrick 2004</a>). Judgment and decision-making appear to be skills that can be learned and improved with practice (<a href="">Dhami et al. 2012</a>).</p> <p>In this article, I first explain what I mean by "rationality" as a normative concept. I then review the state of our knowledge concerning the causes of human errors in judgment and decision-making (JDM). The largest section of this article summarizes what we currently know about how to improve human rationality through cognitive change (e.g. "rationality training"). I conclude by assessing the prospects for improving human rationality through cognitive change, and by recommending particular avenues for future research.</p> <p><a id="more"></a></p> <p>&nbsp;</p> <h2>2. Normative Rationality</h2> <p>In cognitive science, rationality is a normative concept (<a href="">Stanovich 2011</a>). As Stanovich (<a href="">2012</a>) explains, "When a cognitive scientist terms a behavior irrational he/she means that the behavior departs from the optimum prescribed by a particular normative model."</p> <p>This normative model of rationality consists in logic, probability theory, and rational choice theory.
In their opening chapter for <em>The Oxford Handbook of Thinking and Reasoning</em>, Chater and Oaksford (<a href="">2012</a>) explain:</p> <blockquote> <p>Is it meaningful to attempt to develop a general theory of rationality at all? We might tentatively suggest that it is a prima facie sign of irrationality to believe in alien abduction, or to will a sports team to win in order to increase their chance of victory. But these views or actions might be entirely rational, given suitably nonstandard background beliefs about other alien activity and the general efficacy of psychic powers. Irrationality may, though, be ascribed if there is a clash between a particular belief or behavior and such background assumptions. Thus, a thorough-going physicalist may, perhaps, be accused of irrationality if she simultaneously believes in psychic powers. A theory of rationality cannot, therefore, be viewed as clarifying either what people should believe or how people should act&mdash;but it can determine whether beliefs and behaviors are compatible. Similarly, a theory of rational choice cannot determine whether it is rational to smoke or to exercise daily; but it might clarify whether a particular choice is compatible with other beliefs and choices.</p> <p>From this viewpoint, normative theories can be viewed as clarifying conditions of consistency&hellip; Logic can be viewed as studying the notion of consistency over beliefs. Probability&hellip; studies consistency over degrees of belief. Rational choice theory studies the consistency of beliefs and values with choices.</p> </blockquote> <p>There are many good tutorials on logic (<a href="">Schechter 2005</a>), probability theory (<a href="">Koller and Friedman 2009</a>), and rational choice theory (<a href="">Allington 2002</a>; <a href="">Parmigiani and Inoue 2009</a>), so I will make only two quick points here. 
First, by "probability" I mean the subjective or Bayesian interpretation of probability, because that is the interpretation which captures degrees of belief (<a href="">Oaksford and Chater 2007</a>; <a href="">Jaynes 2003</a>; <a href="">Cox 1946</a>). Second, in rational choice theory I am of course endorsing the normative principle of expected utility maximization (<a href="">Grant &amp; Zandt 2009</a>).</p> <p>According to this concept of rationality, then, an agent is rational if its beliefs are consistent with the laws of logic and probability theory and its decisions are consistent with the laws of rational choice theory. An agent is irrational to the degree that its beliefs violate the laws of logic or probability theory, or its decisions violate the laws of rational choice theory.<sup>1</sup></p> <p>Researchers working in the heuristics and biases tradition have shown that humans regularly violate the norms of rationality (<a href="">Manktelow 2012</a>; <a href="">Pohl 2005</a>). These researchers tend to assume that human reasoning could be improved, and thus they have been called "Meliorists" (<a href="">Stanovich 1999</a>, <a href="">2004</a>), and their program of using psychological findings to make recommendations for improving human reasoning has been called "ameliorative psychology" (<a href="">Bishop and Trout 2004</a>).</p> <p>Another group of researchers, termed the "Panglossians,"<sup>2</sup> argue that human performance is generally "rational" because it manifests an evolutionary adaptation for optimal information processing (<a href="">Gigerenzer et al. 
1999</a>).</p> <p>I disagree with the Panglossian view for reasons detailed elsewhere (<a href=";pg=PA27&amp;lpg=PA27&amp;dq=%22in+contrast+to+the+emphasis+on+errors+in+much+of+the+literature+on+judgment+and+decision-making%22&amp;source=bl&amp;ots=jyVv6JLtz_&amp;sig=In2N57M0AHAhPI3htZ23SqH5w3w&amp;hl=en&amp;sa=X&amp;ei=xV4pUay9KI3LigLhsICwAg&amp;ved=0CDMQ6AEwAA#v=onepage&amp;q=%22in%20contrast%20to%20the%20emphasis%20on%20errors%20in%20much%20of%20the%20literature%20on%20judgment%20and%20decision-making%22&amp;f=false">Griffiths et al. 2012:27</a>; <a href="">Stanovich 2010, ch. 1</a>; <a href="">Stanovich and West 2003</a>; <a href="">Stein 1996</a>), though I also believe the original dispute between Meliorists and Panglossians has been exaggerated (<a href="">Samuels et al. 2002</a>). In any case, a verbal dispute over what counts as "normative" for human JDM need not detain us here.<sup>3</sup> I have stipulated my definition of normative rationality &mdash; for the purposes of cognitive psychology &mdash; above. My concern is with the question of whether cognitive change can improve human JDM in ways that enable humans to achieve their goals more effectively than without cognitive change, and it seems (as I demonstrate below) that the answer is "yes."</p> <p>My view of normative rationality does not imply, however, that humans ought to explicitly use the laws of rational choice theory to make every decision. Neither humans nor machines have the knowledge and resources to do so (<a href="">Van Rooij 2008</a>; <a href="">Wang 2011</a>).
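</p>

<p>To make the expected-utility norm concrete in a case small enough to compute exhaustively, here is a toy sketch; the options, probabilities, and utilities are invented for illustration:</p>

```python
def expected_utility(outcomes):
    """outcomes: a list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def rational_choice(options):
    """Pick the action whose expected utility is highest."""
    return max(options, key=lambda action: expected_utility(options[action]))

# Hypothetical decision: whether to carry an umbrella, given a 30% chance of rain.
options = {
    "umbrella":    [(0.3, 5.0), (0.7, 3.0)],    # stay dry if it rains; mild hassle if not
    "no umbrella": [(0.3, -10.0), (0.7, 4.0)],  # soaked if it rains; unencumbered if not
}

assert abs(expected_utility(options["umbrella"]) - 3.6) < 1e-9    # 0.3*5 + 0.7*3
assert abs(expected_utility(options["no umbrella"]) + 0.2) < 1e-9  # 0.3*-10 + 0.7*4
assert rational_choice(options) == "umbrella"
```

<p>Even this trivial case requires enumerating every outcome with a known probability and utility; real decisions offer nothing of the sort.</p>

<p>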
Thus, in order to approximate normative rationality as best we can, we often (rationally) engage in a "bounded rationality" (<a href="">Simon 1957</a>) or "ecological rationality" (<a href="">Gigerenzer and Todd 2012</a>) or "grounded rationality" (<a href="">Elqayam 2011</a>) that employs simple heuristics to imperfectly achieve our goals with the limited knowledge and resources at our disposal (<a href="">Vul 2010</a>; <a href="">Vul et al. 2009</a>; <a href="">Kahneman and Frederick 2005</a>). Hence, the best prescription for human reasoning is not necessarily to always use the normative model to govern one's thinking (<a href="">Grant &amp; Zandt 2009</a>; <a href="">Stanovich 1999</a>; <a href="">Baron 1985</a>). Baron (<a href="">2008, ch. 2</a>) explains:</p> <blockquote> <p>In short, normative models tell us how to evaluate judgments and decisions in terms of their departure from an ideal standard. Descriptive models specify what people in a particular culture actually do and how they deviate from the normative models. Prescriptive models are designs or inventions, whose purpose is to bring the results of actual thinking into closer conformity to the normative model. If prescriptive recommendations derived in this way are successful, the study of thinking can help people to become better thinkers.</p> </blockquote> <p>&nbsp;</p> <p>[next, I was going to discuss the probable causes of JDM errors, tested methods for amelioration, and promising avenues for further research]</p> <p>&nbsp;</p> <h3>Notes</h3> <p><small><sup>1</sup> For a survey of other conceptions of rationality, see <a href="">Nickerson (2007)</a>.
Note also that this concept of rationality is personal, not subpersonal (<a href="">Frankish 2009</a>; <a href="">Davies 2000</a>; <a href="">Stanovich 2010</a>:5).</small></p> <p><small><sup>2</sup> The adjective "Panglossian" was originally applied by <a href="">Stephen Jay Gould and Richard Lewontin (1979)</a>, who used it to describe knee-jerk appeals to natural selection as the force that explains every trait. The term comes from Voltaire's character Dr. Pangloss, who said that "our noses were made to carry spectacles" <a href="">(Voltaire 1759)</a>.</small></p> <p><small><sup>3</sup> To resolve such verbal disputes we can employ the "method of elimination" (<a href="">Chalmers 2011</a>) or, as <a href="/lw/nv/replace_the_symbol_with_the_substance/">Yudkowsky (2008)</a> put it, we can "replace the symbol with the substance."</small></p> lukeprog hR92kW2ZSvmuca5Nf 2013-02-24T04:49:48.976Z Great rationality posts in the OB archives <p>Those aching for good rationality writing can get their fix from&nbsp;<a href="/lw/gof/great_rationality_posts_by_lwers_not_posted_to_lw/">Great rationality posts by LWers not posted to LW</a>, and also from the <a href="">Overcoming Bias</a> archives. Some highlights are below, up through June 28, 2007.</p> <p>&nbsp;</p> <ul> <li>Finney, <a href="">Foxes vs.
Hedgehogs: Predictive Success</a></li> <li>Hanson, <a href="">When Error is High, Simplify</a></li> <li>Shulman, <a href="">Meme Lineages and Expert Consensus</a></li> <li>Hanson, <a href="">Resolving Your Hypocrisy</a></li> <li>Hanson, <a href="">Academic Overconfidence</a></li> <li>Hanson, <a href="">Conspicuous Consumption of Info</a></li> <li>Sandberg, <a href="">Supping with the Devil</a></li> <li>Hanson, <a href="">Conclusion-Blind Review</a></li> <li>Shulman, <a href="">Should We Defer to Secret Evidence?</a></li> <li>Shulman, <a href="">Sick of Textbook Errors</a></li> <li>Hanson, <a href="">Dare to Deprogram Me?</a></li> <li>Armstrong, <a href="">Biases, By and Large</a></li> <li>Friedman, <a href="">A Tough Balancing Act</a></li> <li>Hanson, <a href="">RAND Health Insurance Experiment</a></li> <li>Armstrong, <a href="">The Case for Dangerous Testing</a></li> <li>Hanson, <a href="">In Obscurity Errors Remain</a></li> <li>Falkenstein, <a href="">Hofstadter's Law</a></li> <li>Hanson, <a href="">Against Free Thinkers</a></li> </ul> <div><br /></div> <p>&nbsp;</p> lukeprog RKoxt8FBtLWXXPirZ 2013-02-23T23:33:51.624Z Great rationality posts by LWers not posted to LW <p>Ever since Eliezer, Yvain, and myself stopped posting regularly, LW's front page has mostly been populated by meta posts. (The Discussion section is still abuzz with interesting content, though, including <a href="/lw/f6o/original_research_on_less_wrong/">original research</a>.)</p> <p>Luckily, many LWers are posting potentially front-page-worthy content to <a href="/lw/d8t/blogs_by_lwers/">their own blogs</a>.</p> <p>Below are some recent-ish highlights outside Less Wrong, for your reading enjoyment. 
I've added an <strong>*</strong> to my personal favorites.</p> <p>&nbsp;</p> <p><strong><a href="">Overcoming Bias</a></strong> (Robin Hanson, Rob Wiblin, Katja Grace, Carl Shulman)</p> <ul> <li>Hanson, <a href="">Beware Far Values</a></li> <li>Wiblin, <a href="">Is US Gun Control an Important Issue?</a></li> <li>Wiblin, <a href="">Morality As Though It Really Mattered</a></li> <li>Grace, <a href="">Can a Tiny Bit of Noise Destroy Communication?</a></li> <li>Shulman,&nbsp;<a href="">Nuclear winter and human extinction: Q&amp;A with Luke Oman</a></li> <li>Wiblin,&nbsp;<a href="">Does complexity bias biotechnology towards doing damage?</a></li> </ul> <div><strong><a href="">Yvain</a></strong> (now moved <a href="">here</a>)</div> <div> <ul> <li><a href="">Kurzweil's Law of Accelerating Returns</a>&nbsp;<strong>*</strong></li> <li><a href="">The Great Stagnation</a></li> <li><a href="">Epistemic Learned Helplessness</a>&nbsp;<strong>*</strong></li> <li><a href="">The Biodeterminist's Guide to Parenting</a></li> </ul> <div><a href=""><strong>The Rationalist Conspiracy</strong></a> (Alyssa Vance)</div> <div> <ul> <li><a href="">What Caring Is</a></li> <li><a href="">The Real America of 2022</a></li> <li><a href="">Why Most Online Medical Information Sucks</a></li> </ul> <div><strong><a href="">Reflective Disequilibrium</a></strong> (Carl Shulman)</div> <div> <ul> <li><a href="">Spreading happiness to the stars seems little harder than just spreading</a></li> <li><a href="">Rawls' original position, potential people, and Pascal's Mugging</a></li> <li><a href="">Philosophers vs economists on discounting</a></li> <li><a href="">Utilitarianism, contractualism, and self-sacrifice</a></li> <li><a href="">Are pain and pleasure equally energy-efficient?</a>&nbsp;<strong>*</strong></li> </ul> <div><strong><a href="">Rational Altruist</a></strong> (Paul Christiano)</div> <div> <ul> <li><a href="">Pressing Ethical Questions</a></li> <li><a 
href="">Replaceability</a>&nbsp;<strong>*</strong></li> <li><a href="">How Useful is Progress?</a></li> </ul> <div><strong><a href="">Alex Vermeer</a></strong></div> <div> <ul> <li><a href="">15 Benefits of the Growth Mindset</a></li> </ul> <div><a href=""><strong>Prince Mm Mm</strong></a> (Giles)</div> <div> <ul> <li><a href="">Anthropic Principle Primer</a></li> <li><a href="">Entropy and Unconvincing Models</a></li> </ul> <div><br /></div> </div> </div> </div> </div> </div> </div> lukeprog xZdW7D43AaCiQQzvM 2013-02-16T00:31:20.077Z A reply to Mark Linsenmayer about philosophy <p><a href="">Mark Linsenmayer</a>, one of the hosts of a top philosophy podcast called <em><a href="">The Partially Examined Life</a></em>, has written a <a href="">critique</a> of the view that <a href="">Eliezer</a> and I seem to take of philosophy. Below, I respond to a few of Mark's comments. Naturally, I speak only for myself, not for Eliezer.</p> <p>&nbsp;</p> <p>&nbsp;</p> <blockquote> <p>I'm generally skeptical when someone proclaims that "rationality" itself should get us to throw out 90%+ of philosophy...</p> </blockquote> <p><a href="'s_Law">Sturgeon's Law</a> declares that "90% of everything is crap." I think <em>something</em> like that is true, though perhaps it's 88% crap in physics, 99% crap in philosophy, and 99.99% crap on <a href="">4chan</a>.</p> <p>But let me be more precise. 
I <em>do</em> claim that almost all philosophy is useless <em>for figuring out <a href="/lw/eqn/the_useful_idea_of_truth/">what is true</a></em>, for reasons explained in several of my posts:</p> <ul> <li><a href="/lw/4zs/philosophy_a_diseased_discipline/">Philosophy: A Diseased Discipline</a> </li> <li><a href="/lw/7tz/philosophy_by_humans_1_concepts_dont_work_that_way/">Concepts Don't Work That Way</a> </li> <li><a href="/lw/foz/philosophy_by_humans_3_intuitions_arent_shared/">Intuitions Aren't Shared That Way</a> </li> <li><a href="/lw/frp/train_philosophers_with_pearl_and_kahneman_not/">Train Philosophers with Pearl and Kahneman, Not Plato and Kant</a> </li> </ul> <p>Mark replies that the kinds of unscientific philosophy I dismiss can be "useful at least in the sense of entertaining," which of course isn't something I'd deny. I'm just trying to say that Heidegger is pretty darn useless for figuring out what's true. There are thousands of readings that will more efficiently make your model of the world more accurate.</p> <p>If you want to read Heidegger as poetry or entertainment, that's fine. I watch <em>Game of Thrones</em>, but not because it's a useful inquiry into truth.</p> <p>Also, I'm not sure what it would mean to say we should throw out 90% of philosophy <em>because of rationality</em>, but I probably don't agree with the "because" clause there.</p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a id="more"></a></p> <blockquote> <p>[Luke's] accusation is that most philosophizing is useless unless explicitly based on scientific knowledge on how the brain works, and in particular where intuitions come from...
[But] to then throw out the mass of the philosophical tradition because it has been ignorant of [cognitive biases] is [a mistake].</p> </blockquote> <p>I don't, in fact, think that "most philosophizing is useless unless explicitly based on scientific knowledge [about] how the brain works," nor do I "throw out the mass of the philosophical tradition because it has been ignorant of [cognitive biases]." Sometimes, people do pretty good philosophy without knowing much of modern psychology. Look at all the progress Hume and Frege made.</p> <p>What I <em>do</em> claim is that many <em>specific</em> philosophical positions and methods are undermined by scientific knowledge about how brains and other systems work. For example, I've <a href="/lw/7tz/philosophy_by_humans_1_concepts_dont_work_that_way/">argued</a> that a particular kind of philosophical analysis, which assumes concepts are defined by necessary and sufficient conditions, is undermined by psychological results showing that brains don't store concepts that way.</p> <p>If some poor philosopher doesn't know this, because she thinks it's okay to spend all day using her brain to philosophize without knowing much about how brains work, she might spend several years of her career pointlessly trying to find a necessary-and-sufficient-conditions analysis of knowledge that is immune to Gettier-style counterexamples.</p> <p>That's one reason to study psychology before doing much philosophy. Doing so can save you lots of time.</p> <p>Another reason to study psychology is that psychology is a significant component of <a href="">rationality training</a> (yes, with daily study and exercise, like piano training). Rationality training is important for doing philosophy because <a href="/r/lesswrong/lw/fpe/philosophy_needs_to_trust_your_rationality_even/">philosophy needs to trust your rationality even though it shouldn't</a>.</p> <p>&nbsp;</p> <p>&nbsp;</p> <blockquote> <p>...Looking over Eliezer's site and Less Wrong... 
my overall impression is again that... none of this adds up to the blanket critique/world-view that comes through very clearly</p> </blockquote> <p>Less Wrong is a group blog, so it doesn't quite have its own philosophy or worldview.</p> <p><em>Eliezer</em>, however, most certainly does. His approach to epistemology is pretty thoroughly documented in the ongoing, book-length sequence <a href="">Highly Advanced Epistemology 101 for Beginners</a>. Additional parts of his "worldview" come to light in his many posts on <a href="/lw/od/37_ways_that_words_can_be_wrong/">philosophy of language</a>, <a href="">free will</a>, <a href="">metaphysics</a>, <a href="">metaethics</a>, <a href="/lw/n3/circular_altruism/">normative ethics</a>, <a href="/lw/xy/the_fun_theory_sequence/">axiology</a>, and <a href="">philosophy of mind</a>.</p> <p>I've written less about my own philosophical views, but you can get some of them in two (ongoing) sequences: <a href="">Rationality and Philosophy</a> and <a href="">No-Nonsense Metaethics</a>.</p> <p>&nbsp;</p> <p>&nbsp;</p> <blockquote> <p>I think it's instructive to contrast Eliezer with David Chalmers... who is very much on top of the science in his field... and yet he is not on board with any of this "commit X% of past philosophy to the flames" nonsense, doesn't think metaphysical arguments are meaningless or that difficult philosophical problems need to be defined away in some way, and, most provocatively, sees in consciousness a challenge to a physicalist world-view... I respectfully suggest that while reading more in contemporary science is surely a good idea... the approach to philosophy that is actually schooled in philosophy a la Chalmers is more worthy of emulation than Eliezer's dismissive anti-philosophy take.</p> </blockquote> <p>Chalmers is a smart dude, a good writer, and fun to hang with.
But Mark doesn't explain here <em>why</em> it's "nonsense" to propose that truth-seekers (<em>qua</em> truth-seekers) should ignore 99% of all philosophy, <em>why</em> many metaphysical arguments aren't meaningless, <em>why</em> some philosophical problems can't simply be dissolved, nor why Chalmers' approach to philosophy is superior to Eliezer's.</p> <p>And that's fine. As Mark wrote, "I intended this post to be a high-level overview of positions." I'd just like to flag that arguments weren't provided in Mark's post.</p> <p>Meanwhile, I've linked above to many posts Eliezer and I have written about why most philosophy is useless for truth-seeking, why some metaphysical arguments are meaningless, and why some philosophical problems can be dissolved. (We'd have to be more specific about the Chalmers vs. Eliezer question before I could weigh in. For example, I find Chalmers' writing to be clearer, but Eliezer's choice of topics for investigation more important for the human species.)</p> <p>Finally, I'll note that <a href="">Nick Bostrom</a> takes roughly the same approach to philosophy as Eliezer and I do, but Nick has a position at Oxford University, publishes in leading philosophy journals, and so on. On philosophical method, I recommend Nick's first professional paper, <a href="">Predictions from Philosophy</a> (1997). It sums up the motivation behind much of what Nick and Eliezer have done since then.</p> lukeprog 2r28gzYtALsb7M3aR 2013-01-05T11:25:25.242Z Ideal Advisor Theories and Personal CEV <p><strong>Update 5-24-2013</strong>: A cleaned-up, citable version of this article is now available <a href="">on MIRI's website</a>.</p> <p>Co-authored with <a href="/user/crazy88/">crazy88</a></p> <p><small><em>Summary</em>: Yudkowsky's "coherent extrapolated volition" (CEV) concept shares much in common with Ideal Advisor theories in moral philosophy. Does CEV fall prey to the same objections which are raised against Ideal Advisor theories?
Because CEV is an epistemic rather than a metaphysical proposal, it seems that at least one family of CEV approaches (inspired by Bostrom's parliamentary model) may escape the objections raised against Ideal Advisor theories. This is not a particularly ambitious post; it mostly aims to place CEV in the context of mainstream moral philosophy.</small></p> <p>What is of value to an agent? Maybe it's just whatever they desire. Unfortunately, our desires are often the product of ignorance or confusion. I may desire to drink from the glass on the table because I think it is water when really it is bleach. So perhaps something is of value to an agent if they would desire that thing <em>if fully informed</em>. But here we crash into a different problem. It might be of value for an agent who wants to go to a movie to look up the session times, but the fully informed version of the agent will not desire to do so &mdash; they are fully-informed and hence already know all the session times. The agent and its fully-informed counterparts have different needs. Thus, several philosophers have suggested that something is of value to an agent if an ideal version of that agent (fully informed, perfectly rational, etc.) would <em>advise</em> the non-ideal version of the agent to pursue that thing.</p> <p>This idea of idealizing or extrapolating an agent's preferences<sup>1</sup> goes back at least as far as <a href="">Sidgwick (1874)</a>, who considered the idea that "a man's future good" consists in "what he would now desire... if all the consequences of all the different [actions] open to him were accurately foreseen..." Similarly, <a href="">Rawls (1971)</a> suggested that a person's good is the plan "that would be decided upon as the outcome of careful reflection in which the agent reviewed, in the light of all the relevant facts, what it would be like to carry out these plans..."
More recently, in an article about rational agents and moral theory, <a href="">Harsanyi (1982)</a> defined an agent's rational wants as &ldquo;the preferences he <em>would</em> have if he had all the relevant factual information, always reasoned with the greatest possible care, and were in a state of mind most conducive to rational choice.&rdquo; Then, a few years later, <a href="">Railton (1986)</a> identified a person's good with "what he would want himself to want... were he to contemplate his present situation from a standpoint fully and vividly informed about himself and his circumstances, and entirely free of cognitive error or lapses of instrumental rationality."</p> <p><a href="">Rosati (1995)</a> calls these theories Ideal Advisor theories of value because they identify one's personal value with what an ideal version of oneself would advise the non-ideal self to value.</p> <p>Looking not for a metaphysical account of value but for a practical solution to machine ethics (<a href="">Wallach &amp; Allen 2009</a>; <a href="">Muehlhauser &amp; Helm 2012</a>), <a href="">Yudkowsky (2004)</a> described a similar concept which he calls "coherent extrapolated volition" (CEV):</p> <blockquote> <p>In poetic terms, our <em>coherent extrapolated volition</em> is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.</p> </blockquote> <p>In other words, the CEV of humankind is about the preferences that we would have as a species if our preferences were extrapolated in certain ways. Armed with this concept, Yudkowsky then suggests that we implement CEV as an "initial dynamic" for "Friendly AI." <a href="">Tarleton (2010)</a> explains that the intent of CEV is that "our volition be extrapolated <em>once</em> and acted on.
In particular, the initial extrapolation could generate an object-level goal system we would be willing to endow a superintelligent [machine] with."</p> <p>CEV theoretically avoids many problems with other approaches to machine ethics (Yudkowsky 2004; Tarleton 2010; Muehlhauser &amp; Helm 2012). However, there are reasons it may not succeed. In this post, we examine one such reason: Resolving CEV at the level of humanity (<em>Global CEV</em>) might require at least partially resolving CEV at the level of individuals (<em>Personal CEV</em>)<sup>2</sup>, but Personal CEV is similar to Ideal Advisor theories of value,<sup>3</sup> and such theories face well-explored difficulties. As such, these difficulties may undermine the possibility of determining the Global CEV of humanity.</p> <p>Before doing so, however, it's worth noting one key difference between Ideal Advisor theories of value and Personal CEV. Ideal Advisor theories are typically linguistic or metaphysical theories, while the role of Personal CEV is epistemic. Ideal Advisor theorists attempt to define <em>what it is</em> for something to be of value for an agent. Because of this, their accounts need to give an unambiguous and plausible answer in all cases. On the other hand, Personal CEV's role is an epistemic one: it isn't intended to define what is of value for an agent. Rather, Personal CEV is offered as a technique that can help an AI to <em>come to know</em>, to some reasonable but not necessarily perfect level of accuracy, what is of value for the agent. To put it more precisely, Personal CEV is intended to allow an initial AI to determine what sort of superintelligence to create such that we end up with what Yudkowsky calls a "Nice Place to Live."
Given this, certain arguments are likely to threaten Ideal Advisor theories but not Personal CEV, and vice versa.</p> <p>With this point in mind, we now consider some objections to Ideal Advisor theories of value, and examine whether they threaten Personal CEV.</p> <p><a id="more"></a></p> <p>&nbsp;</p> <h3>Sobel's First Objection: Too Many Voices</h3> <p>Four prominent objections to Ideal Advisor theories are due to <a href="">Sobel (1994)</a>. The first of these, the &ldquo;too many voices&rdquo; objection, notes that the evaluative perspective of an agent changes over time and, as such, the views that would be held by the perfectly rational and fully informed version of the agent will also change. This implies that each agent will be associated not with one idealized version of themselves but with a set of such idealized versions (one at time <em>t</em>, one at time <em>t+1</em>, etc.), some of which may offer conflicting advice. Given this &ldquo;discordant chorus,&rdquo; it is unclear how the agent&rsquo;s non-moral good should be determined.</p> <p>Various responses to this objection run into their own challenges. First, privileging a single perspective (say, the idealized agent at time <em>t+387</em>) seems ad hoc. Second, attempting to aggregate the views of multiple perspectives runs into the question of how trade-offs should be made. That is, if two of the idealized viewpoints disagree about what is to be preferred, it&rsquo;s unclear how an overall judgment should be reached.<sup>4</sup> Finally, one might claim that the idealized versions of the agent at different times will share the same perspective; but this seems unlikely, and it is surely a substantive claim requiring a substantive defense. So the obvious responses to Sobel&rsquo;s first objection introduce serious new challenges which then need to be resolved.</p> <p>One final point is worth noting: it seems that this objection is equally problematic for Personal CEV. 
The extrapolated volition of the agent is likely to vary at different times, so how should we determine an overall account of the agent&rsquo;s extrapolated volition?</p> <p>&nbsp;</p> <h3>Sobel&rsquo;s Second and Third Objections: Amnesia</h3> <p>Sobel&rsquo;s second and third objections build on two other claims (see <a href="">Sobel 1994</a> for a defense of these). First: some lives can only be evaluated if they are experienced. Second: experiencing one life can leave you incapable of experiencing another in an unbiased way. Given these claims, Sobel presents an <em>amnesia model</em> as the most plausible way for an idealized agent to gain the experiences necessary to evaluate all the relevant lives. According to this model, an agent experiences each life sequentially but undergoes an amnesia procedure after each one so that they may experience the next life uncolored by their previous experiences. After experiencing all lives, the amnesia is then removed.</p> <p>Following on from this, Sobel&rsquo;s second objection is that suddenly recalling a life from one evaluative perspective may be a strongly dissimilar experience from living that life from a vastly different evaluative perspective. When the amnesia is removed, the agent&rsquo;s evaluative perspective (now informed by their memories of all the lives they&rsquo;ve lived) may differ so much from the perspective they had while living each life, unencumbered by such memories, that they are incapable of adequately evaluating the lives they&rsquo;ve experienced from their current, more knowledgeable, evaluative perspective.</p> <p>Sobel&rsquo;s third objection also relates to the amnesia model: Sobel argues that the idealized agent might be driven insane by the entire amnesia process and hence might not be able to adequately evaluate what advice they ought to give the non-ideal agent. 
In response to this, there is some temptation to simply demand that the agent be idealized not just in terms of rationality and knowledge but also in terms of their sanity. However, perhaps any idealized agent that is similar enough to the original to serve as a standard for their non-moral good will be driven insane by the amnesia process, and so the demand for a sane agent will simply mean that no adequate agent can be identified.</p> <p>If we grant that an agent needs to experience some lives to evaluate them, and we grant that experiencing some lives leaves the agent incapable of experiencing others, then there seems to be strong pressure for Personal CEV to rely on an amnesia model to adequately determine what an agent&rsquo;s volition would be if extrapolated. If so, however, then Personal CEV seems to face the challenges raised by Sobel.</p> <p>&nbsp;</p> <h3>Sobel&rsquo;s Fourth Objection: Better Off Dead</h3> <p>Sobel&rsquo;s final objection is that the idealized agent, having experienced such a level of perfection, might come to the conclusion that their non-ideal counterpart is so limited as to be better off dead. Further, the ideal agent might make this judgment because of the relative level of well-being of the non-ideal agent rather than the agent&rsquo;s absolute level of well-being. (That is, the ideal agent may look upon the well-being of the non-ideal agent as we might look upon our own well-being after an accident that caused us severe mental damage. In such a case, we might be unable to objectively judge our life after the accident due to the relative difficulty of this life as compared with our life before the accident.) As such, this judgment may not capture what is actually in accordance with the agent&rsquo;s non-moral good.</p> <p>Again, this criticism seems to apply equally to Personal CEV: when the volition of an agent is extrapolated, it may turn out that this volition endorses killing the non-extrapolated version of the agent. 
If so, this seems to be a mark against the possibility that Personal CEV can play a useful part in a process that should eventually terminate in a "Nice Place to Live."</p> <p>&nbsp;</p> <h3>A model of Personal CEV</h3> <p>The seriousness of these challenges for Personal CEV is likely to vary depending on the exact nature of the extrapolation process. To give a sense of the impact, we will consider one family of methods for carrying out this process: the <em>parliamentary model</em> (inspired by <a href="">Bostrom 2009</a>). According to this model, we determine the Personal CEV of an agent by simulating multiple versions of them, extrapolated from various starting times and along different developmental paths. Some of these versions are then convened as a parliament, where they vote on various choices and make trades with one another.</p> <p>Clearly, this approach allows our account of Personal CEV to avoid the &ldquo;too many voices&rdquo; objection. After all, the parliamentary model provides us with an account of how we can aggregate the views of the agent at various times: we should simulate the various agents and allow them to vote and trade on the choices to be made. It is through this voting and trading that the various voices can be combined into a single viewpoint. While this process may not be adequate as a metaphysical account of value, it seems more plausible as an account of Personal CEV as an epistemic notion. Certainly, your authors would deem themselves to be more informed about what they value if they knew the outcome of the parliamentary model for themselves.</p> <p>This approach is also able to avoid Sobel&rsquo;s second and third objections. Those objections were specifically targeted at the amnesia model, in which one agent experienced multiple lives. As the parliamentary model does not utilize amnesia, it is immune to these concerns.</p> <p>What of Sobel&rsquo;s fourth objection? 
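</p> <p>As an aside, the aggregation step of the parliamentary model described above can be sketched in code. This is a minimal sketch under stated assumptions: the <code>extrapolate</code> function (a random perturbation of the agent's base values) is a hypothetical stand-in for a real extrapolation process, and simple normalized score voting stands in for the richer voting-and-trading procedure; all names and numbers are made up for illustration.</p>

```python
import random

def extrapolate(base_values, seed):
    """Hypothetical stand-in for extrapolation: perturb the agent's
    base values to represent one starting time / developmental path."""
    rng = random.Random(seed)
    return {option: weight + rng.gauss(0, 0.1)
            for option, weight in base_values.items()}

def parliamentary_choice(base_values, n_delegates=101):
    """Convene extrapolated delegates and aggregate their views by
    normalized score voting (a toy stand-in for voting and trading)."""
    tallies = {option: 0.0 for option in base_values}
    for seed in range(n_delegates):
        delegate = extrapolate(base_values, seed)
        # Normalize so every delegate carries equal voting weight.
        total = sum(abs(v) for v in delegate.values()) or 1.0
        for option, weight in delegate.items():
            tallies[option] += weight / total
    # The parliament's overall judgment is the highest-scoring option.
    return max(tallies, key=tallies.get)

print(parliamentary_choice({"career": 0.5, "family": 0.8, "leisure": 0.3}))
```

<p>Even in this toy form, the aggregation idea is visible: no single delegate's perspective is privileged, and the parliament's judgment emerges from equal-weighted votes cast by many extrapolated versions of the agent.</p> <p>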
Sobel&rsquo;s concern here is not simply that the idealized agent might advise the agent to kill themselves. After all, sometimes death may, in fact, be of value for an agent. Rather, Sobel&rsquo;s concern is that the idealized agent, having experienced such heights of existence, will become biased against the limited lives of normal agents.</p> <p>It's less clear how the parliamentary model deals with Sobel's fourth objection, which plausibly retains its initial force against this model of Personal CEV. However, we're not intending to solve Personal CEV entirely in this short post. Rather, we aim to demonstrate only that the force of Sobel's four objections will depend on the model of Personal CEV selected. Reflection on the parliamentary model makes this point clear.</p> <p>So the parliamentary model seems able to avoid at least three of the direct criticisms raised by Sobel. It is worth noting, however, that some concerns remain. First, for those who accept Sobel&rsquo;s claim that experience is necessary to evaluate some lives, it is clear that no member of the parliament will be capable of comparing their life to all other possible lives, as none will have all the required experience. As such, the agents may falsely judge a certain aspect of their life to be more or less valuable than it in fact is. For a metaphysical account of personal value, this problem might be fatal. Whether it is also fatal for the parliamentary model of Personal CEV depends on whether the knowledge of the various members of the parliament is enough to produce a &ldquo;Nice Place to Live&rdquo; regardless of its imperfection.</p> <p>Two more issues might arise. First, the model might require careful selection of whom to appoint to the parliament. For example, if most of the possible lives that an agent could live would drive them insane, then selecting which of these agents to appoint to the parliament at random might lead to a vote by the mad. 
Second, it might seem that this approach to determining Personal CEV will require a reasonable level of accuracy in simulation. If so, there might be concerns about the creation of, and responsibility to, potential moral agents.</p> <p>Given these points, a full evaluation of the parliamentary model will require more detailed specification and further reflection. However, two points are worth noting in conclusion. First, the parliamentary model does seem to avoid at least three of Sobel&rsquo;s direct criticisms. Second, even if this model eventually ends up being flawed on other grounds, the existence of one model of Personal CEV that can avoid three of Sobel&rsquo;s objections gives us reason to expect that other promising models of Personal CEV may be discovered.</p> <p>&nbsp;</p> <h3>Notes</h3> <p><sup>1</sup> Another clarification to make concerns the difference between <a href="">idealization</a> and <a href="">extrapolation</a>. An <em>idealized agent</em> is a version of the agent with certain idealizing characteristics (perhaps logical omniscience and infinite speed of thought). An <em>extrapolated agent</em> is a version of the agent that represents what they would be like if they underwent certain changes or experiences. Note two differences between these concepts. First, an extrapolated agent need not be ideal in any sense (though useful extrapolated agents often will be) and certainly need not be <em>perfectly</em> idealized. Second, extrapolated agents are determined by a specific type of process (extrapolation from the original agent), whereas no such restriction is placed on how the form of an idealized agent is determined. CEV utilizes extrapolation rather than idealization, as do some Ideal Advisor theories. 
In this post, we talk about "ideal" or "idealized" agents as a catch-all for both idealized agents and extrapolated agents.</p> <p><sup>2</sup> Standard objections to Ideal Advisor theories of value are also relevant to some proposed variants of CEV, for example Tarleton's (2010) suggestion of "Individual Extrapolated Volition followed by Negotiation, where each individual human&rsquo;s preferences are extrapolated by factual correction and reflection; once that process is fully complete, the extrapolated humans negotiate a combined utility function for the resultant superintelligence..." Furthermore, some objections to Ideal Advisor theories also seem relevant to Global CEV even if they are not relevant to a particular approach to Personal CEV, though that discussion is beyond the scope of this article. As a final clarification, see <a href="/lw/1oj/complexity_of_value_complexity_of_outcome/">Dai (2010)</a>.</p> <p><sup>3</sup> Ideal Advisor theories are not to be confused with "Ideal Observer theory" (<a href="">Firth 1952</a>). For more on Ideal Advisor theories of value, see <a href="">Zimmerman (2003)</a>; <a href="">Tanyi (2006)</a>; <a href="">Enoch (2005)</a>;&nbsp;<a href="">Miller (2013, ch. 9)</a>.</p> <p><sup>4</sup> This is basically an intrapersonal version of the standard worries about interpersonal comparisons of well-being. The basis of these worries is that even if we can specify an agent&rsquo;s preferences numerically, it&rsquo;s unclear how we should compare the numbers assigned by one agent with the numbers assigned by the other. In the intrapersonal case, the challenge is to determine how to compare the numbers assigned by the same agent at different times. See <a href="">Gibbard (1989)</a>.</p> lukeprog q9ZSXiiA7wEuRgnkS 2012-12-25T13:04:46.889Z Noisy Reasoners <p>One of the more interesting papers at this year's <a href="">AGI-12 conference</a> was Fintan Costello's <a href="">Noisy Reasoners</a>. 
I think it will be of interest to Less Wrong:</p> <blockquote> <p>This paper examines reasoning under uncertainty in the case where the AI reasoning mechanism is itself subject to random error or noise in its own processes. The main result is a demonstration that systematic, directed biases naturally arise if there is random noise in a reasoning process that follows the normative rules of probability theory. A number of reliable errors in human reasoning under uncertainty can be explained as the consequence of these systematic biases due to noise. Since AI systems are subject to noise, we should expect to see the same biases and errors in AI reasoning systems based on probability theory.</p> </blockquote> lukeprog h6mNG2nC56sP88w3D 2012-12-13T07:53:29.193Z
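<p>The abstract's central claim, that unbiased random noise in a probabilistic reasoning process produces <em>systematic</em>, directed biases, can be illustrated with a toy simulation. This is a minimal sketch, not Costello's actual model: the function name, noise level, and probabilities below are all made up for illustration. The only assumption is that each noisy read of a probability must be clipped back into the valid range [0, 1], which turns symmetric noise into a directed bias.</p>

```python
import random

def noisy_estimate(p, sigma=0.2, n_reads=10_000, seed=0):
    """Mean of many noisy reads of a true probability p, where each
    read is clipped into the valid range [0, 1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_reads):
        total += min(1.0, max(0.0, p + rng.gauss(0, sigma)))
    return total / n_reads

# Clipping turns symmetric noise into a directed bias: rare events are
# systematically overestimated, near-certain events underestimated.
print(noisy_estimate(0.05))   # well above 0.05
print(noisy_estimate(0.95))   # well below 0.95
print(noisy_estimate(0.50))   # approximately unbiased
```

<p>This mirrors the familiar conservatism pattern in human probability judgment: extreme probabilities are pulled toward the middle of the scale, even though the underlying noise has zero mean.</p>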