esrogs feed - LessWrong 2.0 Reader esrogs’s posts and comments on the Effective Altruism Forum en-us Comment by ESRogs on Open Thread April 2019 https://www.lesswrong.com/posts/dYMih9oqYuFzQaS3c/open-thread-april-2019#N5wNgHWHhLGwThPsm <p>Apparently the author is a science writer (makes sense), and it&#x27;s his first book:</p><blockquote>I’m a freelance science writer. Until January 2018 I was science writer for BuzzFeed UK; before that, I was a comment and features writer for the Telegraph, having joined in 2007. My first book, The Rationalists: AI and the geeks who want to save the world, for Weidenfeld &amp; Nicolson, is due to be published spring 2019. Since leaving BuzzFeed, I’ve written for the Times, the i, the Telegraph, UnHerd, politics.co.uk, and elsewhere.</blockquote><p><a href="https://tomchivers.com/about/">https://tomchivers.com/about/</a></p> esrogs N5wNgHWHhLGwThPsm 2019-04-28T19:50:25.182Z Comment by ESRogs on Open Thread April 2019 https://www.lesswrong.com/posts/dYMih9oqYuFzQaS3c/open-thread-april-2019#wt6SWZoBzRL5vHkXx <p>Someone wrote a book about us:</p><blockquote>Overall, they have sparked a remarkable change.  They’ve made the idea of AI as an existential risk mainstream; sensible, grown-up people are talking about it, not just fringe nerds on an email list.  From my point of view, that’s a good thing.  I don’t think AI is definitely going to destroy humanity.  But nor do I think that it’s so unlikely we can ignore it.  There is a small but non-negligible probability that, when we look back on this era in the future, we’ll think that Eliezer Yudkowsky and Nick Bostrom  — and the SL4 email list, and LessWrong.com — have saved the world.  If Paul Crowley is right and my children don’t die of old age, but in a good way — if they and humanity reach the stars, with the help of a friendly superintelligence — that might, just plausibly, be because of the Rationalists.</blockquote><p><a href="https://marginalrevolution.com/marginalrevolution/2019/04/the-ai-does-not-hate-you.html">https://marginalrevolution.com/marginalrevolution/2019/04/the-ai-does-not-hate-you.html</a></p><p>H/T <a href="https://twitter.com/XiXiDu/status/1122432162563788800">https://twitter.com/XiXiDu/status/1122432162563788800</a></p> esrogs wt6SWZoBzRL5vHkXx 2019-04-28T18:37:49.268Z Comment by ESRogs on Book review: The Sleepwalkers by Arthur Koestler https://www.lesswrong.com/posts/3Ts4GrXGQEqTinPD2/book-review-the-sleepwalkers-by-arthur-koestler#MJHbkbMDLNhzYZMTq <blockquote>Figuring out what&#x27;s up with that seems like a major puzzle of our time.</blockquote><p>Would be curious to hear more about your confusion and why it seems like such a puzzle. Does &quot;when you aggregate over large numbers of things, complex lumpiness smooths out into boring sameness&quot; not feel compelling to you?</p><p>If not, why not? 
Maybe you can confuse me too ;-)</p> esrogs MJHbkbMDLNhzYZMTq 2019-04-25T02:40:33.084Z Comment by ESRogs on The Principle of Predicted Improvement https://www.lesswrong.com/posts/cpWgnzhbZyxQFzCsj/the-principle-of-predicted-improvement#zwqes8SFWWwe5TaRi <blockquote>E[P(H|D)]≥E[P(H)]</blockquote><blockquote>In English the theorem says that the probability we should expect to assign to the true value of H after observing the true value of D is greater than or equal to the expected probability we assign to the true value of H before observing the value of D.</blockquote><p>I have a very basic question about notation -- what tells me that H in the equation refers to the true hypothesis?</p><p>Put another way, I don&#x27;t really understand why that equation has a different interpretation than the <em>conservation-of-expected-evidence</em> equation: E[P(H=hi|D)]=P(H=hi).</p><p>In both cases I would interpret it as talking about the expected probability of some hypothesis, given some evidence, compared to the prior probability of that hypothesis.</p> esrogs zwqes8SFWWwe5TaRi 2019-04-25T02:35:19.607Z Comment by ESRogs on Alignment Newsletter One Year Retrospective https://www.lesswrong.com/posts/3onCb5ph3ywLQZMX2/alignment-newsletter-one-year-retrospective#aLzwZLBCrxWuYxZhx <blockquote>I think I&#x27;ve commented on your newsletters a few times, but haven&#x27;t comment more because it seems like the number of people who would read and be interested in such a comment would be relatively small, compared to a comment on a more typical post.</blockquote><p>I am surprised you think this. Don&#x27;t the newsletters tend to be relatively highly upvoted? They&#x27;re one of the kinds of links that I always automatically click on when I see them on the LW front page.</p><p>Maybe I&#x27;m basing this too much on my own experience, but I would love to see more discussion on the newsletter posts.</p> esrogs aLzwZLBCrxWuYxZhx 2019-04-12T17:09:29.049Z Comment by ESRogs on Degrees of Freedom https://www.lesswrong.com/posts/Nd6XGxCiYrm2qJdh6/degrees-of-freedom#wjFbqZ4ooZceyiJ7o <p>For freedom-as-arbitrariness, see also: <a href="https://www.lesswrong.com/posts/yLLkWMDbC9ZNKbjDG/slack">Slack</a></p> esrogs wjFbqZ4ooZceyiJ7o 2019-04-03T02:00:26.862Z Comment by ESRogs on Degrees of Freedom https://www.lesswrong.com/posts/Nd6XGxCiYrm2qJdh6/degrees-of-freedom#z7agaxxabt5YmKCT5 <blockquote>If your car was subject to a perpetual auction and ownership tax as Weyl proposes, bashing your car to bits with a hammer would cost you even if you didn’t personally need a car, because it would hurt the rental or resale value and you’d still be paying tax.</blockquote><p>I don&#x27;t think this is right. COST stands for &quot;Common Ownership Self-Assessed Tax&quot;. The self-assessed part refers to the idea that you personally state the value you&#x27;d be willing to sell the item for (and pay tax on that value). Once you&#x27;ve destroyed the item, presumably you&#x27;d be willing to part with the remains for a lower price, so you should just re-state the value and pay a lower tax.</p><p>It&#x27;s true that damaging the car hurts the resale value and thus costs you (in terms of your material wealth), but this would be true whether or not you were living under a COST regime.</p> esrogs z7agaxxabt5YmKCT5 2019-04-03T01:18:59.695Z Comment by ESRogs on How good is a human's gut judgement at guessing someone's IQ? 
https://www.lesswrong.com/posts/KFWA9dMFAnic56Zt3/how-good-is-a-human-s-gut-judgement-at-guessing-someone-s-iq#HDoRsSGFFMk2bZpvd <blockquote>Whatever ability IQ tests and math tests measure, I believe that lacking that ability doesn’t have <em>any </em>effect on one’s ability to make a good social impression or even to “seem smart” in conversation.</blockquote><p>That section of Sarah&#x27;s post jumped out at me too, because it seemed to be the opposite of my experience. In my (limited, subject-to-confirmation-bias) experience, how smart someone seems to me in conversation seems to match pretty well with how they did on standardized tests (or other measures of academic achievement). Obviously not perfectly, but way way better than chance.</p> esrogs HDoRsSGFFMk2bZpvd 2019-04-03T00:19:17.970Z Comment by ESRogs on How good is a human's gut judgement at guessing someone's IQ? https://www.lesswrong.com/posts/KFWA9dMFAnic56Zt3/how-good-is-a-human-s-gut-judgement-at-guessing-someone-s-iq#bRAdHf3MWDKEyh8Wk <blockquote>I would also expect that courtesy of things like Dunning-Kruger, people towards the bottom will be as bad at estimating IQ as they are competence at any particular thing.</blockquote><p>FWIW, the original Dunning-Kruger study did not show the effect that it&#x27;s become known for. See: <a href="https://danluu.com/dunning-kruger/">https://danluu.com/dunning-kruger/</a></p><p>In particular:</p><blockquote>In two of the four cases, there&#x27;s an obvious positive correlation between perceived skill and actual skill, which is the opposite of the pop-sci conception of Dunning-Kruger.</blockquote> esrogs bRAdHf3MWDKEyh8Wk 2019-04-03T00:09:16.628Z Comment by ESRogs on Unconscious Economies https://www.lesswrong.com/posts/PrCmeuBPC4XLDQz8C/unconscious-economies#i5APJ9Ltq3EYAMF4B <p>I&#x27;m not totally sure I&#x27;m parsing this sentence correctly. Just to clarify, &quot;large firm variation in productivity&quot; means &quot;large variation in the productivity of firms&quot; rather than &quot;variation in the productivity of large firms&quot;, right?</p><p>Also, the second part is saying that on average there is productivity growth across firms, because the productive firms expand more than the less productive firms, yes?</p> esrogs i5APJ9Ltq3EYAMF4B 2019-03-28T17:42:56.240Z Comment by ESRogs on What failure looks like https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like#QxagmGtduoT9XcjBf <p><span>Not sure exactly what you mean by &quot;numerical simulation&quot;, but you may be interested in <a href="https://ought.org/">https://ought.org/</a></span><span> (where Paul is a collaborator), or in Paul&#x27;s work at OpenAI: <a href="https://openai.com/blog/authors/paul/">https://openai.com/blog/authors/paul/</a></span> .</p> esrogs QxagmGtduoT9XcjBf 2019-03-19T17:41:21.784Z Comment by ESRogs on UBI for President https://www.lesswrong.com/posts/D2CWfcnwcJCR8rNzZ/ubi-for-president#iDbhzGDkctnoWcYm7 <blockquote>Just had a call with Nick Bostrom who schooled me on AI issues of the future. 
We have a lot of work to do.</blockquote><p><span><a href="https://twitter.com/andrewyangvfa/status/1103352317221445634">https://twitter.com/andrewyangvfa/status/1103352317221445634</a></span></p> esrogs iDbhzGDkctnoWcYm7 2019-03-17T21:15:21.215Z Comment by ESRogs on UBI for President https://www.lesswrong.com/posts/D2CWfcnwcJCR8rNzZ/ubi-for-president#tNph9LHyeAjMF9DWQ <p>This same candidate (whom the <a href="https://electionbettingodds.com/">markets</a> currently give a 5% chance of being the Democratic nominee) also wants to create a cabinet-level position to monitor emerging technology, especially AI:</p><p><br/><em>Advances in automation and Artificial Intelligence (AI) hold the potential to bring about new levels of prosperity humans have never seen. They also hold the potential to disrupt our economies, ruin lives throughout several generations, and, if experts such as Stephen Hawking and Elon Musk are to be believed, destroy humanity.</em></p><p><em>...<br/><br/>As President, I will…<br/>* Create a new executive department – the Department of Technology – to work with private industry and Congressional leaders to monitor technological developments, assess risks, and create new guidance. The new Department would be based in Silicon Valley and would initially be focused on Artificial Intelligence.<br/>* Create a new Cabinet-level position of Secretary of Technology who will be tasked with leading the new Department.<br/>* Create a public-private partnership between leading tech firms and experts within government to identify emerging threats and suggest ways to mitigate those threats while maximizing the benefit of technological innovation to society.</em></p><p><span><a href="https://www.yang2020.com/policies/regulating-ai-emerging-technologies/">https://www.yang2020.com/policies/regulating-ai-emerging-technologies/</a></span></p> esrogs tNph9LHyeAjMF9DWQ 2019-03-17T21:14:37.347Z Comment by ESRogs on Active Curiosity vs Open Curiosity https://www.lesswrong.com/posts/22LAkTNdWv6QaQyZY/active-curiosity-vs-open-curiosity#iZ4XDKeFzzzoFjEua <p>It seems to me that perhaps the major difference between active/concentrated curiosity and open/diffuse curiosity is how much of an expectation you have that there&#x27;s one specific piece of information you could get that would satisfy the curiosity. (And for this reason the &quot;concentrated&quot; and &quot;diffuse&quot; labels do seem somewhat apt to me.)</p><p>Active/concentrated curiosity is focused on finding the answer to a specific question, while open/diffuse curiosity seeks to explore and gain understanding. (And that exploration may or may not start out with its attention on a single object/emotion/question.)</p> esrogs iZ4XDKeFzzzoFjEua 2019-03-16T08:15:30.697Z Comment by ESRogs on Verifying vNM-rationality requires an ontology https://www.lesswrong.com/posts/DoyJiFwEzeeMBchH7/verifying-vnm-rationality-requires-an-ontology#fv7NmYbHTcXn9nJF3 <p>See also my comment <a href="https://www.lesswrong.com/posts/TE5nJ882s5dCMkBB8/conclusion-to-the-sequence-on-value-learning#pmPAd8qkEgDYEx73Q">here</a> on non-exploitability.</p> esrogs fv7NmYbHTcXn9nJF3 2019-03-13T21:47:12.152Z Comment by ESRogs on Verifying vNM-rationality requires an ontology https://www.lesswrong.com/posts/DoyJiFwEzeeMBchH7/verifying-vnm-rationality-requires-an-ontology#Tymzjt7gAZXg7c2sB <p>Nitpick: I think the intro example would be clearer if there were explicit numbers of grapes/oranges rather than &quot;some&quot;. 
Nothing is surprising about the original story if Beatriz got more oranges from Deion than she gave up to Callisto. (Or gave away fewer grapes to Deion than she received from Callisto.)</p> esrogs Tymzjt7gAZXg7c2sB 2019-03-13T21:39:02.770Z Comment by ESRogs on Karma-Change Notifications https://www.lesswrong.com/posts/yb3Js7ArenCiiHxKF/karma-change-notifications#RzZomCAfibg8MrB8u <p>Unless I missed it, neither this comment nor the main post explains why you ultimately decided in <em>favor</em> of karma notifications. You&#x27;ve listed a bunch of cons -- I&#x27;m curious what the pros were.</p><p>Was it just an attempt to achieve this?</p><blockquote>I want new users who show up on the site to feel rewarded when they engage with content</blockquote> esrogs RzZomCAfibg8MrB8u 2019-03-02T03:25:21.044Z Comment by ESRogs on UBI for President https://www.lesswrong.com/posts/D2CWfcnwcJCR8rNzZ/ubi-for-president#nrbcNJWzHjnvsZGqt <p>Great long-form interview with Andrew Yang here: <a href="https://www.youtube.com/watch?v=cTsEzmFamZ8">Joe Rogan Experience #1245 - Andrew Yang</a>.</p> esrogs nrbcNJWzHjnvsZGqt 2019-02-18T00:51:49.464Z Comment by ESRogs on Why do you reject negative utilitarianism? https://www.lesswrong.com/posts/8Eo52cjzxcSP9CHea/why-do-you-reject-negative-utilitarianism#w3iBokPiDusGib4oN <p>Did you make any update regarding the simplicity / complexity of value?</p><p>My impression is that theoretical simplicity is a major driver of your preference for NU, and also that if others (such as myself) weighed theoretical simplicity more highly that they would likely be more inclined towards NU.</p><p>In other words, I think theoretical simplicity may be a <a href="https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-resolving-disagreement">double crux</a> in the disagreements here about NU. Would you agree with that?</p> esrogs w3iBokPiDusGib4oN 2019-02-14T19:36:58.335Z Comment by ESRogs on Why do you reject negative utilitarianism? https://www.lesswrong.com/posts/8Eo52cjzxcSP9CHea/why-do-you-reject-negative-utilitarianism#qwL4SHZvN2PMMJEJ5 <p>Meta-note: I am surprised by the current karma rating of this question. At present, it is sitting at +9 points with 7 votes, but it would be at +2 with 6 votes w/o my strong upvote.</p><p>To those who downvoted, or do not feel inclined to upvote -- does this question not seem like a good use of LW&#x27;s question system? To me it seems entirely on-topic, and very much the kind of thing I would want to see here. I found myself disagreeing with much of the text, but it seemed to be an honest question, sincerely asked.</p><p>Was it something about the wording (either of the headline or the explanatory text) that put you off?</p> esrogs qwL4SHZvN2PMMJEJ5 2019-02-14T19:27:58.309Z Comment by ESRogs on On Long and Insightful Posts https://www.lesswrong.com/posts/YYiGCT6AAcqPQ6mBs/on-long-and-insightful-posts#ErYrWgLTnsB4DMrRg <p>Relatedly: shorter articles don&#x27;t need to be as well-written and engaging for me to actually read to the end of them.</p><p>I suspect, though, that there is wide variation in willingness to read long posts, perhaps explained (in part) by reading speed.</p> esrogs ErYrWgLTnsB4DMrRg 2019-02-13T22:08:31.583Z Comment by ESRogs on Why do you reject negative utilitarianism? https://www.lesswrong.com/posts/8Eo52cjzxcSP9CHea/why-do-you-reject-negative-utilitarianism#a2hJTpFzf4oh7nvkC <blockquote>If the rationality and EA communities are looking for a unified theory of value</blockquote><p>Are they? 
Many of us seem to have accepted that <a href="https://wiki.lesswrong.com/wiki/Complexity_of_value">our values are complex</a>. </p><blockquote>Absolute negative utilitarianism (ANU) is a minority view despite the theoretical advantages of terminal value monism (suffering is the only thing that motivates us “by itself”) over pluralism (there are many such things). Notably, ANU doesn’t require solving value incommensurability, because all other values can be instrumentally evaluated by their relationship to the suffering of sentient beings, using only one terminal value-grounded common currency for everything.</blockquote><p>This seems like an argument that it would be convenient if our values were simple. This does not seem like strong evidence that they actually are simple. (Though I grant that you could make an argument that it might be better to try to achieve only part of what we value if we&#x27;re much more likely to be successful that way.)</p> esrogs a2hJTpFzf4oh7nvkC 2019-02-11T21:03:00.320Z Comment by ESRogs on The Case for a Bigger Audience https://www.lesswrong.com/posts/2E3fpnikKu6237AF6/the-case-for-a-bigger-audience#ZuxxaFNhL3EBuzNTr <p>FWIW, I was thinking of the <em>related</em> relationship as a human-defined one. That is, the author (or someone else?) manually links another question as related.</p> esrogs ZuxxaFNhL3EBuzNTr 2019-02-10T07:04:05.579Z Comment by ESRogs on The Case for a Bigger Audience https://www.lesswrong.com/posts/2E3fpnikKu6237AF6/the-case-for-a-bigger-audience#kqCQNgquMotFyZCpT <blockquote>Q&amp;A in particular is something that I can imagine productively scaling to a larger audience, in a way that actually causes the contributions from the larger audience to result in real intellectual progress.</blockquote><p>Do you mean scaling it as is, or in the future?</p><p>I think there&#x27;s a lot of potential to innovate on the Q&amp;A system, and I think it&#x27;d be valuable to make progress on that before trying to scale. In particular, I&#x27;d like to see some method of tracking (or taking advantage of) the structure behind questions -- something to do with how they&#x27;re related to each other.</p><p>Maybe this is as simple as marking two questions as &quot;related&quot; (as I think you and I have discussed offline). Maybe you&#x27;d want more fine-grained relationships.</p><p>It&#x27;d also be cool to have some way of quickly figuring out what the major open questions are in some area (e.g. IDA, or value learning), or maybe what specific people consider to be important open questions.</p> esrogs kqCQNgquMotFyZCpT 2019-02-10T01:25:22.828Z Comment by ESRogs on The Case for a Bigger Audience https://www.lesswrong.com/posts/2E3fpnikKu6237AF6/the-case-for-a-bigger-audience#FYXoJYTByH4dyJN9e <blockquote>Have any posts from LW 2.0 generated new conceptual handles for the community like &quot;the sanity waterline&quot;? If not, maybe it&#x27;s because they just aren&#x27;t reaching a big enough audience.</blockquote><p>Doesn&#x27;t this get the causality backwards? I&#x27;m confused about the model that would generate this hypothesis.</p><p>One way I can imagine good concepts not taking root in &quot;the community&quot; is if not enough of the community is reading the posts. But then why would (most of) the prescriptions seem to be about advertising to the outside world?</p> esrogs FYXoJYTByH4dyJN9e 2019-02-10T01:09:54.504Z Comment by ESRogs on When should we expect the education bubble to pop? How can we short it? 
https://www.lesswrong.com/posts/EBAccQwDWMiRCWnyk/when-should-we-expect-the-education-bubble-to-pop-how-can-we#Cxmto95QzFMH9jPEq <p>And the <a href="https://twitter.com/Austen/status/1082815326960533504">stories</a> of their students are heartwarming.</p> esrogs Cxmto95QzFMH9jPEq 2019-02-10T00:37:52.441Z Comment by ESRogs on When should we expect the education bubble to pop? How can we short it? https://www.lesswrong.com/posts/EBAccQwDWMiRCWnyk/when-should-we-expect-the-education-bubble-to-pop-how-can-we#aG5DcYy5WGsEsZtLS <p>Btw, Lambda School twitter is fun to follow. They&#x27;re doing some <a href="https://twitter.com/Austen/status/1094037165929943040">impressive stuff</a>.</p> esrogs aG5DcYy5WGsEsZtLS 2019-02-10T00:30:01.943Z Comment by ESRogs on When should we expect the education bubble to pop? How can we short it? https://www.lesswrong.com/posts/EBAccQwDWMiRCWnyk/when-should-we-expect-the-education-bubble-to-pop-how-can-we#Tv8BDntHXS6jgms3F <blockquote>2) Which assets will be more scarce/in demand as that happens? Are there currently available opportunities for &quot;shorting&quot; the education bubble and invest in ways which will yield profit when it pops?</blockquote><p></p><p>Vocational schools seem like a reasonable bet. In particular something like <a href="https://lambdaschool.com/">Lambda School</a>, where they&#x27;ve aligned incentives by tying tuition to alumni income.</p><p>VCs seem to agree, <a href="https://www.crunchbase.com/organization/lambda-school#section-funding-rounds">pouring in</a> $14MM in a series A in October 2018, followed by an additional $30MM in a series B just 3 months later.</p> esrogs Tv8BDntHXS6jgms3F 2019-02-09T23:00:13.285Z Comment by ESRogs on Conclusion to the sequence on value learning https://www.lesswrong.com/posts/TE5nJ882s5dCMkBB8/conclusion-to-the-sequence-on-value-learning#pmPAd8qkEgDYEx73Q <p>It seems to me that perhaps your argument about expected utility maximization being a trivial property extends back one step previous in the argument, to non-exploitability as well.</p><p>AlphaZero is better than us at chess, and so it is non-exploitable at chess (or you might say that being better at chess is the same thing as being non-exploitable at chess). If that&#x27;s true, then it must also appear to us to be an expected utility maximizer. But notably the kind of EU-maximizer that it must appear to be is: one whose utility function is defined in terms of chess outcomes. AlphaZero *is* exploitable if we&#x27;re secretly playing a slightly different game, like how-many-more-pawns-do-I-have-than-my-opponent-after-twenty-moves, or the game don&#x27;t-get-unplugged.</p><p>Going the other direction, from EU-maximization to non-exploitability, we can point out that any agent could be thought of as an EU-maximizer (perhaps with a very convoluted utility function), and if it&#x27;s very competent w.r.t. its utility function, then it will be non-exploitable by us, w.r.t. outcomes related to its utility function.</p><p>In other words, non-exploitability is only meaningful with respect to some utility function, and is not a property of &quot;intelligence&quot; or &quot;competence&quot; in general.</p><p>Would you agree with this statement?</p> esrogs pmPAd8qkEgDYEx73Q 2019-02-03T23:25:26.988Z Comment by ESRogs on How does Gradient Descent Interact with Goodhart? 
https://www.lesswrong.com/posts/pcomQ4Fwi7FnfBZBR/how-does-gradient-descent-interact-with-goodhart#XMgtDW2xaxZX5zv4h <blockquote>when everything that can go wrong is the agent breaking the vase, and breaking the vase allows higher utility solutions</blockquote><p>What does &quot;breaking the vase&quot; refer to here?</p><p>I would assume this is an allusion to the scene in The Matrix with Neo and the Oracle (where there&#x27;s a paradox about whether Neo would have broken the vase if the Oracle hadn&#x27;t said, &quot;Don&#x27;t worry about the vase,&quot; causing Neo to turn around to look for the vase and then bump into it), but I&#x27;m having trouble seeing how that relates to sampling and search.</p> esrogs XMgtDW2xaxZX5zv4h 2019-02-03T20:17:00.126Z Comment by ESRogs on How does Gradient Descent Interact with Goodhart? https://www.lesswrong.com/posts/pcomQ4Fwi7FnfBZBR/how-does-gradient-descent-interact-with-goodhart#rguBBPvZMikFXXoAs <p>For the parenthetical in Proposed Experiment #2,</p><blockquote>or you can train a neural net to try to copy U</blockquote><p>should this be &quot;try to copy V&quot;, since V is what you want a proxy for, and U is the proxy?</p> esrogs rguBBPvZMikFXXoAs 2019-02-03T19:57:11.844Z Comment by ESRogs on Drexler on AI Risk https://www.lesswrong.com/posts/bXYtDfMTNbjCXFQPh/drexler-on-ai-risk#jhAjddisvK47kNcoi <blockquote>As I was writing the last few paragraphs, and thinking about <a href="https://www.lesswrong.com/posts/x3fNwSe5aWZb5yXEG/reframing-superintelligence-comprehensive-ai-services-as/comment/gMZes7XnQK8FHcZsu">Wei Dei&#x27;s objections</a>, I found it hard to clearly model how CAIS would handle the cancer example.</blockquote><p>This link appears to be broken. It directs me to <a href="https://www.lesswrong.com/posts/x3fNwSe5aWZb5yXEG/reframing-superintelligence-comprehensive-ai-services-as/comment/gMZes7XnQK8FHcZsu">https://www.lesswrong.com/posts/x3fNwSe5aWZb5yXEG/reframing-superintelligence-comprehensive-ai-services-as/comment/gMZes7XnQK8FHcZsu</a>, which does not seem to exist.</p><p>Replacing the /comment/ part with a # gives <a href="https://www.lesswrong.com/posts/x3fNwSe5aWZb5yXEG/reframing-superintelligence-comprehensive-ai-services-as#gMZes7XnQK8FHcZsu">https://www.lesswrong.com/posts/x3fNwSe5aWZb5yXEG/reframing-superintelligence-comprehensive-ai-services-as#gMZes7XnQK8FHcZsu</a>, which does work.</p><p>(Also it should be &quot;Dai&quot;, not &quot;Dei&quot;.)</p> esrogs jhAjddisvK47kNcoi 2019-02-01T19:43:37.643Z Comment by ESRogs on Applied Rationality podcast - feedback? https://www.lesswrong.com/posts/izjgvZzo9DCCzb3fH/applied-rationality-podcast-feedback#6YwCFqEGYjf3LyfiB <blockquote>you should actually first try to integrate each technique and get a sense of whether it worked for you (or why it did not).</blockquote><p>This could actually be the theme of the podcast. 
&quot;Each week I try to integrate one technique and then report on how it went.&quot;</p><p>Sounds more interesting than just an explanation of what the technique is.</p> esrogs 6YwCFqEGYjf3LyfiB 2019-02-01T02:21:15.041Z Comment by ESRogs on Masculine Virtues https://www.lesswrong.com/posts/oA5dxBFWJkRxTdXrK/masculine-virtues#pE2FKu3oGia7du3Jn <p>I wanted to get a better sense of the risk, so here is some arithmetic.</p><p>Putting together one of the quotes above:</p><blockquote>An estimated 300,000 sport-related traumatic brain injuries, predominantly concussions, occur annually in the United States.</blockquote><p>And this bit from the recommended Prognosis section:</p><blockquote>Most TBIs are mild and do not cause permanent or long-term disability; however, all severity levels of TBI have the potential to cause significant, long-lasting disability. Permanent disability is thought to occur in 10% of mild injuries, 66% of moderate injuries, and 100% of severe injuries.</blockquote><p>And this bit from the Epidemiology section:</p><blockquote>a US study found that moderate and severe injuries each account for 10% of TBIs, with the rest mild.</blockquote><p>We get that there are 300k sport-related TBI&#x27;s per year in the US, and of those, 240k are mild, 30k are moderate, and 30k are severe. Those severity levels together result in 24k + 20k + 30k ~= 75k cases of permanent disability per year.</p><p>To put that in perspective, we can compare to another <a href="https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in_U.S._by_year">common activity</a> that has potential to cause harm:</p><blockquote>In 2010, there were an estimated 5,419,000 crashes, 30,296 of with fatalities, killing 32,999, and injuring 2,239,000.</blockquote><p>If we say that a fatality and a permanent disability due to brain injury are the same order of magnitude of badness, this suggests that sports and traveling by car expose the average (as in mean, not median) American to roughly the same level of risk.</p><p>Would be interesting to dig deeper to see how much time Americans spend in cars vs playing sports on average (and then you&#x27;d also want to look at the benefits you get from each), but I&#x27;ll stop here for now.</p> esrogs pE2FKu3oGia7du3Jn 2019-01-31T12:00:00.124Z Comment by ESRogs on [Link] Did AlphaStar just click faster? https://www.lesswrong.com/posts/du8qqfgQz3ovBm26A/link-did-alphastar-just-click-faster#3ciSDrQ7534fWJzJy <blockquote>but this seems like reason to doubt that AI has surpassed human strategy in StarCraft</blockquote><p>I think Charlie might be suggesting that AlphaStar would be superior to humans, even with only human or sub-human APM, because the <em>precision</em> of those actions would still be superhuman, even if the total number was slightly subhuman:</p><blockquote>the micro advantage for 98% of the game isn&#x27;t because it&#x27;s clicking faster, its clicks are just better</blockquote><p>This wouldn&#x27;t necessarily mean that AlphaStar is better at strategy.</p> esrogs 3ciSDrQ7534fWJzJy 2019-01-29T04:41:53.078Z Comment by ESRogs on [Link] Did AlphaStar just click faster? 
https://www.lesswrong.com/posts/du8qqfgQz3ovBm26A/link-did-alphastar-just-click-faster#ymewFnm4Amdrt2cxM <blockquote>&quot;Does perfect stalker micro really count as intelligence?&quot;</blockquote><p>Love this bit.</p><blockquote>the evidence is pretty strong that AlphaStar (at least the version without attention that just perceived the whole map) could beat humans under whatever symmetric APM cap you want</blockquote><p>This does not seem at all clear to me. Weren&#x27;t all the strategies using micro super-effectively? And apparently making other human-detectable mistakes? Seems possible that AlphaStar would win anyway without the micro, but not at all certain.</p><p></p> esrogs ymewFnm4Amdrt2cxM 2019-01-28T22:48:37.267Z Comment by ESRogs on "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", DeepMind [won 10 of 11 games against human pros] https://www.lesswrong.com/posts/f3iXyQurcpwJfZTE9/alphastar-mastering-the-real-time-strategy-game-starcraft-ii#jxfYD3yHLZrGH2FmE <p>Interesting analysis <a href="https://www.reddit.com/r/MachineLearning/comments/ak3v4i/d_an_analysis_on_how_alphastars_superhuman_speed/">here</a>:</p><blockquote>I will try to make a convincing argument for the following:</blockquote><blockquote>1. AlphaStar played with superhuman speed and precision.</blockquote><blockquote>2. Deepmind claimed to have restricted the AI from performing actions that would be physically impossible to a human. They have not succeeded in this and most likely are aware of it.</blockquote><blockquote>3. <strong>The reason why AlphaStar is performing at superhuman speeds is most likely due to it&#x27;s inability to unlearn the human players tendency to spam click. I suspect Deepmind wanted to restrict it to a more human like performance but are simply not able to. </strong>It&#x27;s going to take us some time to work our way to this point but it is the whole reason why I&#x27;m writing this so I ask you to have patience.</blockquote> esrogs jxfYD3yHLZrGH2FmE 2019-01-26T21:11:19.762Z Comment by ESRogs on "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", DeepMind [won 10 of 11 games against human pros] https://www.lesswrong.com/posts/f3iXyQurcpwJfZTE9/alphastar-mastering-the-real-time-strategy-game-starcraft-ii#axYojriXwE6RoeRSM <blockquote>I think it&#x27;s quite possible that when they instituted the cap they thought it was fair, however from the actual gameplay it should be obvious to anyone who is even somewhat familiar with Starcraft II (e.g., many members of the AlphaStar team) that AlphaStar had a large advantage in &quot;micro&quot;, which in part came from the APM cap still allowing superhumanly fast and accurate actions at crucial times. It&#x27;s also possible that the blogpost and misleading APM comparison graph were written by someone who did <em>not</em> realize this, but then those who did realize should have objected to it and had it changed after they noticed.</blockquote><p>It&#x27;s not so obvious to me that someone who realizes that AlphaStar is superior at &quot;micro&quot; should have objected to those graphs.</p><p>Think about it like this -- you&#x27;re on the DeepMind team, developing AlphaStar, and the whole <em>point</em> is to make it superhuman at StarCraft. So there&#x27;s going to be some part of the game that it&#x27;s superhuman at, and to some extent this will be &quot;unfair&quot; to humans. 
The team decided to try not to let AlphaStar have &quot;physical&quot; advantages, but I don&#x27;t see any indication that they explicitly decided that it should not be better at &quot;micro&quot; or unit control in general, and should only win on &quot;strategy&quot;.</p><p>Also, separating &quot;micro&quot; from &quot;strategy&quot; is probably not that simple for a model-free RL system like this. So I think they made a very reasonable decision to focus on a relatively easy-to-measure APM metric. When the resulting system doesn&#x27;t play exactly as humans do, or in a way that would be easy for humans to replicate, to me it doesn&#x27;t seem so-obvious-that-you&#x27;re-being-deceptive-if-you-don&#x27;t-notice-it that this is &quot;unfair&quot; and that you should go back to the drawing board with your handicapping system.</p><p>It seems to me that which ways for AlphaStar to be superhuman are &quot;fair&quot; or &quot;unfair&quot; is to some extent a matter of taste, and there will be many cases that are ambiguous. To give a non &quot;micro&quot; example -- suppose AlphaStar is able to better keep track of exactly how many units its opponent has (and at what hit point levels) throughout the game, than a human can, and this allows it to make just slightly more fine-grained decisions about which units it should produce. This might allow it to win a game in a way that&#x27;s not replicable by humans. It didn&#x27;t find a new strategy -- it just executed better. Is that fair or unfair? It feels maybe less unfair than just being super good at micro, but exactly where the dividing line is between &quot;interesting&quot; and &quot;uninteresting&quot; ways of winning seems not super clear.</p><p>Of course, now that a much broader group of StarCraft players has seen these games, and a consensus has emerged that this super-micro does not really seem fair, it would be weird if DeepMind did not take that into account for its next release. I will be quite surprised if they don&#x27;t adjust their setup to reduce the micro advantage going forward.</p> esrogs axYojriXwE6RoeRSM 2019-01-26T19:38:02.994Z Comment by ESRogs on Vote counting bug? https://www.lesswrong.com/posts/ZP34gbkcr9ZF7Pcir/vote-counting-bug#q6c9CXBKxCepCcufy <p>Fixed link <a href="https://github.com/LessWrong2/Lesswrong2/issues/1520">here</a>.</p> esrogs q6c9CXBKxCepCcufy 2019-01-24T00:03:43.813Z Comment by ESRogs on Disentangling arguments for the importance of AI safety https://www.lesswrong.com/posts/JbcWQCxKWn3y49bNB/disentangling-arguments-for-the-importance-of-ai-safety#aRH3HMhXComgj8m5K <p>To me the difference is that when I read 5 I&#x27;m thinking about people being careless or malevolent, in an everyday sense of those terms, whereas when I read 4 I&#x27;m thinking about how maybe there&#x27;s no such thing as a human who&#x27;s not careless or malevolent, if given enough power and presented with a weird enough situation.</p> esrogs aRH3HMhXComgj8m5K 2019-01-22T01:23:49.885Z Comment by ESRogs on Towards formalizing universality https://www.lesswrong.com/posts/M8WdeNWacMrmorNdd/towards-formalizing-universality#tjywiyw2TDXGM6MGp <blockquote>Simple caricatured examples:</blockquote><blockquote>C might propose a design for a computer that has a backdoor that an attacker can use to take over the computer. But if this backdoor will actually be effective, then A[C] will know about it.</blockquote><blockquote>C might propose a design that exploits a predictable flaw in A&#x27;s reasoning (e.g. 
overlooking consequences of a certain kind, being overly optimistic about some kinds of activities, incorrectly equating two importantly different quantities...). But then A[C] will know about it, and so if A[C] actually reasons in that way then (in some sense) it is endorsed.</blockquote><p>These remind me of Eliezer&#x27;s notions of <a href="https://arbital.com/p/efficiency/">Epistemic and instrumental efficiency</a>, where the first example (about the backdoor) would roughly correspond to A[C] being instrumentally efficient relative to C, and the second example (about potential bias) would correspond to A[C] being epistemically efficient relative to C.</p> esrogs tjywiyw2TDXGM6MGp 2019-01-16T21:27:49.665Z Comment by ESRogs on What are the open problems in Human Rationality? https://www.lesswrong.com/posts/gX8fcAwk3HGkFyJgk/what-are-the-open-problems-in-human-rationality#4oJBs7eydcmdYDqqN <p>Also superforecasting and GJP are no longer new. Seems not at all surprising that most of the words written about them would be from when they were.</p> esrogs 4oJBs7eydcmdYDqqN 2019-01-14T19:58:14.963Z Comment by ESRogs on Strategic High Skill Immigration https://www.lesswrong.com/posts/aB3WP4JbdC4i8qaRH/strategic-high-skill-immigration#dMtzzvdkFN5W85J8D <blockquote>Quan Xuesen</blockquote><p><a href="https://en.wikipedia.org/wiki/Qian_Xuesen">Qian</a>, not Quan. Pronounced something like if you said &quot;chee-ann&quot; as one syllable.</p> esrogs dMtzzvdkFN5W85J8D 2019-01-14T19:51:07.559Z Comment by ESRogs on What are the open problems in Human Rationality? https://www.lesswrong.com/posts/gX8fcAwk3HGkFyJgk/what-are-the-open-problems-in-human-rationality#ZYXWC9ru9GyCCWm2W <blockquote>What are genuinely confusing problems at the edge of the current rationality field – perhaps far away from the point where even specialists can implement them yet, but where we seem confused in a basic way about how the mind works, or how probability or decision theory work.</blockquote><p>For one example of this, see Abram&#x27;s <a href="https://www.lesswrong.com/posts/WkPf6XCzfJLCm2pbK/cdt-edt-udt">most recent post</a>, which begins: &quot;So... what&#x27;s the deal with counterfactuals?&quot; :-)</p> esrogs ZYXWC9ru9GyCCWm2W 2019-01-14T05:49:53.043Z Comment by ESRogs on Towards formalizing universality https://www.lesswrong.com/posts/M8WdeNWacMrmorNdd/towards-formalizing-universality#pRKNEhd3DGszFeQrE <blockquote>The only thing that makes it dominate C is the fact that C can do actual work that causes its beliefs to be accurate.</blockquote><p>Was this meant to read, &quot;The only thing that makes it <em>hard to</em> dominate C ...&quot;, or something like that? I don&#x27;t quite understand the meaning as written.</p> esrogs pRKNEhd3DGszFeQrE 2019-01-14T05:35:46.030Z Comment by ESRogs on Towards formalizing universality https://www.lesswrong.com/posts/M8WdeNWacMrmorNdd/towards-formalizing-universality#fyQ2yX9tjbGKBcWKP <p>I think this <em>ascription</em> is meant to be pretty informal and general. So you could say for example that quicksort believes that 5 is less than 6.</p><p>I don&#x27;t think there&#x27;s meant to be any presumption that the inner workings of the algorithm are anything like a mind. That&#x27;s my read from section <em>I.2. Ascribing beliefs to arbitrary computations</em>.</p> esrogs fyQ2yX9tjbGKBcWKP 2019-01-13T23:34:27.651Z Comment by ESRogs on What are the open problems in Human Rationality? 
https://www.lesswrong.com/posts/gX8fcAwk3HGkFyJgk/what-are-the-open-problems-in-human-rationality#D5FH6Ac5JRHpk47tz <p>Interesting! Would you be willing to give a brief summary?</p> esrogs D5FH6Ac5JRHpk47tz 2019-01-13T22:26:45.086Z Comment by ESRogs on What are the open problems in Human Rationality? https://www.lesswrong.com/posts/gX8fcAwk3HGkFyJgk/what-are-the-open-problems-in-human-rationality#AnsThJkZxwzjzNSRd <p>I don&#x27;t have a crisp question yet, but one general area I&#x27;d be interested in understanding better is the interplay between inside views and outside views.</p><p>In some cases, having some outside view probability in mind can guide your search (e.g. &quot;No, that can&#x27;t be right because then such and such, and I have a prior that such and such is unlikely.&quot;), while in other cases, thinking too much about outside views seems like it can distract you from exploring underlying models (e.g. when people talk about AI timelines in a way that just seems to be about parroting and aggregating other people&#x27;s timelines).</p><p>A related idea is the distinction between impressions and beliefs. In this view impressions are roughly inside views (what makes sense to you given the models and intuitions you have), while beliefs are what you&#x27;d bet on (taking into account the opinions of others).</p><p>I have some intuitions and heuristics about when it&#x27;s helpful to focus on impressions vs beliefs. But I&#x27;d like to have better explicit models here, and I suspect there might be some interesting open questions in this area.</p> esrogs AnsThJkZxwzjzNSRd 2019-01-13T08:12:29.368Z Comment by ESRogs on Modernization and arms control don’t have to be enemies. https://www.lesswrong.com/posts/2Hc8Bb84hhadHJLF9/modernization-and-arms-control-don-t-have-to-be-enemies#8nPZKgWxkxz2L2LWq <blockquote>if you want to reduce the number of nuclear weapons the U.S. has in its stockpile, it may actually make sense to improve the ability of the U.S. to produce new weapons.</blockquote><p>If you&#x27;ve reduced the stock but increased that rate at which new warheads can be produced, have you actually made the situation any safer?</p><p>To the extent that increasing the production rate funges for maintaining a stockpile from a strategic perspective, aren&#x27;t the two also interchangeable from a risk-of-catastrophe perspective?</p><p>(I suppose one could argue that the risk of terrorists getting their hands on part of a stockpile is greater than the risk of them seizing control of production facilities. But it&#x27;s not obvious to me that that&#x27;s necessarily the case, and also I would have guessed that risk of weapons falling into the hands of terrorists is only a small percentage of the total risk from the stockpile.)</p> esrogs 8nPZKgWxkxz2L2LWq 2019-01-13T07:31:35.612Z Comment by ESRogs on CDT Dutch Book https://www.lesswrong.com/posts/wkNQdYj47HX33noKv/cdt-dutch-book#iSeTQY6MFCcX9nGwo <blockquote>I conclude from this that CDT should equal EDT (hence, causality must account for logical correlations, IE include logical causality). By &quot;CDT&quot; I really mean any approach at all to counterfactual reasoning; counterfactual expectations should equal evidential expectations.</blockquote><blockquote>As with most of my CDT=EDT arguments, this only provides an argument that the expectations should be equal for actions taken with nonzero probability. In fact, the amount lost to Dutch Book will be proportional to the probability of the action in question. 
So, differing counterfactual and evidential expectations are smoothly more and more tenable as actions become less and less probable.</blockquote><p>I&#x27;m having a little trouble following the terminology here (despite the disclaimer).</p><p>One particular thing that confuses me -- you say, &quot;the expectations should be equal for actions taken with nonzero probability&quot; and also &quot;differing counterfactual and evidential expectations are smoothly more and more tenable as actions become less and less probable&quot;, but I&#x27;m having trouble understanding how they could both be true. How does, &quot;they&#x27;re equal for nonzero probability&quot; match with &quot;they move further and further apart the closer the probability gets to zero&quot;? (Or are those incorrect paraphrases?)</p><p>It seems to me that if you have two functions that are equal whenever then input (the probability of an action) is nonzero, then they can&#x27;t also get closer and closer together as the input increases from zero -- they&#x27;re already equal as soon as the input does not equal zero! I assume that I have misunderstood something, but I&#x27;m not sure which part.</p> esrogs iSeTQY6MFCcX9nGwo 2019-01-13T07:10:19.599Z Henry Kissinger: AI Could Mean the End of Human History https://www.lesswrong.com/posts/QTE2M5dCwCqEtRCWQ/henry-kissinger-ai-could-mean-the-end-of-human-history esrogs QTE2M5dCwCqEtRCWQ 2018-05-15T20:11:11.136Z AskReddit: Hard Pills to Swallow https://www.lesswrong.com/posts/crFMh5bsMQESQbDge/askreddit-hard-pills-to-swallow esrogs crFMh5bsMQESQbDge 2018-05-14T11:20:37.470Z Predicting Future Morality https://www.lesswrong.com/posts/toE6i842jhkkvXy7W/predicting-future-morality <p>Robin Hanson <a href="https://www.lesswrong.com/posts/toE6i842jhkkvXy7W/predicting-future-morality#spHdwbMaEnn4qW8CA">suggests</a> that recent changes in moral attitudes (in the last few hundred years) are better explained by changing circumstances than by progress in moral reasoning.</p><p>This seems plausible to me. It also seems likely that there would be a bit of a lag between the change in circumstance and the common acceptance of the new morality. (The sexual revolution following the introduction of the pill seems like a good example.)</p><p>Suppose this is broadly right -- that moral attitudes follow circumstances. 
Is there anything we can predict about where moral attitudes will be in the next few decades (or <a href="https://sideways-view.com/2018/02/24/takeoff-speeds/">economic doublings</a>), based on either recent technological or economic changes, or on those we can see on the horizon?</p> esrogs toE6i842jhkkvXy7W 2018-05-06T07:17:16.548Z AI Safety via Debate https://www.lesswrong.com/posts/wo6NsBtn3WJDCeWsx/ai-safety-via-debate <p>New paper and blog post by Geoffrey Irving, Paul Christiano, and Dario Amodei (the OpenAI safety team).</p> esrogs wo6NsBtn3WJDCeWsx 2018-05-05T02:11:25.655Z FLI awards prize to Arkhipov’s relatives https://www.lesswrong.com/posts/CxjMErp5PHL9nFEbE/fli-awards-prize-to-arkhipov-s-relatives esrogs CxjMErp5PHL9nFEbE 2017-10-28T19:40:43.928Z Functional Decision Theory: A New Theory of Instrumental Rationality https://www.lesswrong.com/posts/AGAGgoWymRhJ5Rqyv/functional-decision-theory-a-new-theory-of-instrumental esrogs AGAGgoWymRhJ5Rqyv 2017-10-20T08:09:25.645Z A Software Agent Illustrating Some Features of an Illusionist Account of Consciousness https://www.lesswrong.com/posts/Zd7qCXYkMnpmvLPSp/a-software-agent-illustrating-some-features-of-an esrogs Zd7qCXYkMnpmvLPSp 2017-10-17T07:42:28.822Z Neuralink and the Brain’s Magical Future https://www.lesswrong.com/posts/vRtGTn5CCj9XNRAiB/neuralink-and-the-brain-s-magical-future esrogs vRtGTn5CCj9XNRAiB 2017-04-23T07:27:30.817Z Request for help with economic analysis related to AI forecasting https://www.lesswrong.com/posts/gc5Lf5LvD7CJsoxHB/request-for-help-with-economic-analysis-related-to-ai <p>[Cross-posted <a href="https://www.facebook.com/esrogs/posts/10101324358726304">from FB</a>]</p> <p>I've got an economic question that I'm not sure how to answer.</p> <p>I've been thinking about trends in AI development, and trying to get a better idea of what we should expect progress to look like going forward.</p> <p>One important question is: how much do existing AI systems help with research and the development of new, more capable AI systems?</p> <p>The obvious answer is, "not much." But I think of AI systems as being on a continuum from calculators on up. Surely AI researchers sometimes have to do arithmetic and other tasks that they already outsource to computers. I expect that going forward, the share of tasks that AI researchers outsource to computers will (gradually) increase. And I'd like to be able to draw a trend line. 
(If there's some point in the future when we can expect most of the work of AI R&amp;D to be automated, that would be very interesting to know about!)</p> <p>So I'd like to be able to measure the share of AI R&amp;D done by computers vs humans. I'm not sure of the best way to measure this. You could try to come up with a list of tasks that AI researchers perform and just count, but you might run into trouble as the list of tasks changes over time (e.g. suppose at some point designing an AI system requires solving a bunch of integrals, and that with some later AI architecture this is no longer necessary).</p> <p>What seems more promising is to abstract over the specific tasks that computers vs human researchers perform and use some aggregate measure, such as the total amount of energy consumed by the computers or the human brains, or the share of an R&amp;D budget spent on computing infrastructure and operation vs human labor. Intuitively, if most of the resources are going towards computation, one might conclude that computers are doing most of the work.</p> <p>Unfortunately I don't think that intuition is correct. Suppose AI researchers use computers to perform task X at cost C_x1, and some technological improvement enables X to be performed more cheaply at cost C_x2. Then, all else equal, the share of resources going towards computers will decrease, even though their share of tasks has stayed the same.</p> <p>On the other hand, suppose there's some task Y that the researchers themselves perform at cost H_y, and some technological improvement enables task Y to be performed more cheaply at cost C_y. After the team outsources Y to computers, the share of resources going towards computers has gone up. So it seems like it could go either way -- in some cases technological improvements will lead to the share of resources spent on computers going down and in some cases it will lead to the share of resources spent on computers going up.</p> <p>So here's the econ part -- is there some standard economic analysis I can use here? If both machines and human labor are used in some process, and the machines are becoming both more cost effective and more capable, is there anything I can say about how the expected share of resources going to pay for the machines changes over time?</p> esrogs gc5Lf5LvD7CJsoxHB 2016-02-06T01:27:39.810Z [Link] AlphaGo: Mastering the ancient game of Go with Machine Learning https://www.lesswrong.com/posts/fiAmoEBDapMTPGZ8J/link-alphago-mastering-the-ancient-game-of-go-with-machine <p>DeepMind's go AI, called AlphaGo, has beaten the European champion with a score of 5-0. A match against top ranked human, Lee Se-dol, is scheduled for March.</p> <blockquote> <p>Games are a great testing ground for developing smarter, more flexible algorithms that have the ability to tackle problems in ways similar to humans. 
Creating programs that are able to play games better than the best humans has a long history</p> <p>[...]</p> <p>But one game has thwarted A.I. research thus far: the ancient game of Go.</p> </blockquote> <div><br /></div> <div>http://googleresearch.blogspot.com/2016/01/alphago-mastering-ancient-game-of-go.html</div> esrogs fiAmoEBDapMTPGZ8J 2016-01-27T21:04:55.183Z [LINK] Deep Learning Machine Teaches Itself Chess in 72 Hours https://www.lesswrong.com/posts/aGFEYRrrv8m2fh46T/link-deep-learning-machine-teaches-itself-chess-in-72-hours <blockquote> <p style="margin: 0px 0px 3rem; padding: 0px; border: 0px; font-family: NHG, 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 14px; font-stretch: inherit; line-height: 2rem; vertical-align: baseline;"><span style="font-family: inherit; font-size: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; line-height: inherit;">Lai has created an artificial intelligence machine called Giraffe that has taught itself to play chess by evaluating positions much more like humans and in an entirely different way to conventional chess engines.</span></p> <p style="margin: 0px 0px 3rem; padding: 0px; border: 0px; font-family: NHG, 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 14px; font-stretch: inherit; line-height: 2rem; vertical-align: baseline;"><span style="margin: 0px; padding: 0px; border: 0px; font-family: inherit; font-size: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; font-stretch: inherit; line-height: inherit; vertical-align: baseline;">Straight out of the box, the new machine plays at the same level as the best conventional chess engines, many of which have been fine-tuned over many years. On a human level, it is equivalent to FIDE International Master status, placing it within the top 2.2 percent of tournament chess players.</span></p> <p style="margin: 0px 0px 3rem; padding: 0px; border: 0px; font-family: NHG, 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 14px; font-stretch: inherit; line-height: 2rem; vertical-align: baseline;"><span style="margin: 0px; padding: 0px; border: 0px; font-family: inherit; font-size: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; font-stretch: inherit; line-height: inherit; vertical-align: baseline;">The technology behind Lai&rsquo;s new machine is a neural network.&nbsp;</span><span style="font-family: inherit; font-size: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; line-height: inherit;">[...]&nbsp;</span>His network consists of four layers that together examine each position on the board in three different ways.</p> <p style="margin: 0px 0px 3rem; padding: 0px; border: 0px; font-family: NHG, 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 14px; font-stretch: inherit; line-height: 2rem; vertical-align: baseline;"><span style="font-family: inherit; font-size: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; line-height: inherit;">The first looks at the global state of the game, such as the number and type of pieces on each side, which side is to move, castling rights and so on. 
The second looks at piece-centric features such as the location of each piece on each side, while the final aspect is to map the squares that each piece attacks and defends.</span></p> <p style="margin: 0px 0px 3rem; padding: 0px; border: 0px; font-family: NHG, 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 14px; font-stretch: inherit; line-height: 2rem; vertical-align: baseline;">[...]</p> <p style="margin: 0px 0px 3rem; padding: 0px; border: 0px; font-family: NHG, 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 14px; font-stretch: inherit; line-height: 2rem; vertical-align: baseline;"><span style="margin: 0px; padding: 0px; border: 0px; font-family: inherit; font-size: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; font-stretch: inherit; line-height: inherit; vertical-align: baseline;">Lai generated his dataset by randomly choosing five million positions from a database of computer chess games. He then created greater variety by adding a random legal move to each position before using it for training. In total he generated 175 million positions in this way.</span></p> <p style="margin: 0px 0px 3rem; padding: 0px; border: 0px; font-family: NHG, 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 14px; font-stretch: inherit; line-height: 2rem; vertical-align: baseline;"><span style="font-family: inherit; font-size: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; line-height: inherit;">[...]</span></p> <p style="margin: 0px 0px 3rem; padding: 0px; border: 0px; font-family: NHG, 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 14px; font-stretch: inherit; line-height: 2rem; vertical-align: baseline;"><span style="font-family: inherit; font-size: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; line-height: inherit;">One disadvantage of Giraffe is that neural networks are much slower than other types of data processing. Lai says Giraffe takes about 10 times longer than a conventional chess engine to search the same number of positions.</span></p> <p style="margin: 0px 0px 3rem; padding: 0px; border: 0px; font-family: NHG, 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 14px; font-stretch: inherit; line-height: 2rem; vertical-align: baseline;"><span style="margin: 0px; padding: 0px; border: 0px; font-family: inherit; font-size: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; font-stretch: inherit; line-height: inherit; vertical-align: baseline;">But even with this disadvantage, it is competitive. &ldquo;Giraffe is able to play at the level of an FIDE International Master on a modern mainstream PC,&rdquo; says Lai. 
<p>http://www.technologyreview.com/view/541276/deep-learning-machine-teaches-itself-chess-in-72-hours-plays-at-international-master/</p> <p>H/T http://lesswrong.com/user/Qiaochu_Yuan</p> esrogs aGFEYRrrv8m2fh46T 2015-09-14T19:38:11.447Z [Link] First almost fully-formed human [foetus] brain grown in lab, researchers claim https://www.lesswrong.com/posts/W79XCGSm9wkCE7AS7/link-first-almost-fully-formed-human-foetus-brain-grown-in <p>This seems significant:</p> <blockquote> <p>An almost fully-formed human brain has been grown in a lab for the first time, claim scientists from Ohio State University. The team behind the feat hope the brain could transform our understanding of neurological disease.</p> <p>Though not conscious, the miniature brain, which resembles that of a five-week-old foetus, could potentially be useful for scientists who want to study the progression of developmental diseases.</p> <p>...</p> <p>The brain, which is about the size of a pencil eraser, is engineered from adult human skin cells and is the most complete human brain model yet developed.</p> <p>...</p> <p>Previous attempts at growing whole brains have at best achieved mini-organs that resemble those of nine-week-old foetuses, although these "cerebral organoids" were not complete and only contained certain aspects of the brain. "We have grown the entire brain from the get-go," said Anand.</p> <p>...</p> <p>The ethical concerns were non-existent, said Anand. "We don't have any sensory stimuli entering the brain.
This brain is not thinking in any way."</p> <p>...</p> <p>If the team's claims prove true, the technique could revolutionise personalised medicine. "If you have an inherited disease, for example, you could give us a sample of skin cells, we could make a brain and then ask what's going on," said Anand.</p> <p>...</p> <p>For now, the team say they are focusing on using the brain for military research, to understand the effect of post traumatic stress disorder and traumatic brain injuries.</p> </blockquote> <p>http://www.theguardian.com/science/2015/aug/18/first-almost-fully-formed-human-brain-grown-in-lab-researchers-claim</p> esrogs W79XCGSm9wkCE7AS7 2015-08-19T06:37:21.049Z [Link] Neural networks trained on expert Go games have just made a major leap https://www.lesswrong.com/posts/WjoPDTdTRaiLzMnQS/link-neural-networks-trained-on-expert-go-games-have-just <p>From the <a href="http://arxiv.org/abs/1412.6564">arXiv</a>:</p> <blockquote> <p><strong>Move Evaluation in Go Using Deep Convolutional Neural Networks</strong></p> <p>Chris J. Maddison, Aja Huang, Ilya Sutskever, David Silver</p> <p>The game of Go is more challenging than other board games, due to the difficulty of constructing a position or move evaluation function. In this paper we investigate whether deep convolutional networks can be used to directly represent and learn this knowledge. We train a large 12-layer convolutional neural network by supervised learning from a database of human professional games. The network correctly predicts the expert move in 55% of positions, equalling the accuracy of a 6 dan human player. When the trained convolutional network was used directly to play games of Go, without any search, it beat the traditional search program GnuGo in 97% of games, and matched the performance of a state-of-the-art Monte-Carlo tree search that simulates a million positions per move.</p> </blockquote> <p>This approach looks like it could be combined with MCTS. Here's their conclusion:</p> <blockquote> <p>In this work, we showed that large deep convolutional neural networks can predict the next move made by Go experts with an accuracy that exceeds previous methods by a large margin, approximately matching human performance. Furthermore, this predictive accuracy translates into much stronger move evaluation and playing strength than has previously been possible. Without any search, the network is able to outperform traditional search based programs such as GnuGo, and compete with state-of-the-art MCTS programs such as Pachi and Fuego.</p> <p>In Figure 2 we present a sample game played by the 12-layer CNN (with no search) versus Fuego (searching 100K rollouts per move) which was won by the neural network player. It is clear that the neural network has implicitly understood many sophisticated aspects of Go, including good shape (patterns that maximise long term effectiveness of stones), Fuseki (opening sequences), Joseki (corner patterns), Tesuji (tactical patterns), Ko fights (intricate tactical battles involving repeated recapture of the same stones), territory (ownership of points), and influence (long-term potential for territory). It is remarkable that a single, unified, straightforward architecture can master these elements of the game to such a degree, and without any explicit lookahead.</p> <p>On the other hand, we note that the network still has weaknesses: notably it sometimes fails to understand the global picture, behaving as if the life and death status of large groups has been incorrectly assessed. Interestingly, it is precisely these global aspects of the game for which Monte-Carlo search excels, suggesting that these two techniques may be largely complementary. We have provided a preliminary proof-of-concept that MCTS and deep neural networks may be combined effectively. It appears that we now have two core elements that scale effectively with increased computational resource: scalable planning, using Monte-Carlo search; and scalable evaluation functions, using deep neural networks. In the future, as parallel computation units such as GPUs continue to increase in performance, we believe that this trajectory of research will lead to considerably stronger programs than are currently possible.</p> </blockquote>
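<p>For a sense of what the supervised setup in the abstract looks like in code, here is a rough sketch in Python (using PyTorch): a deep stack of convolutions over board feature planes, trained to predict the professional's move. Only the 12-layer depth and the supervised move-prediction objective come from the paper; the cross-entropy loss, input plane count, channel width, and kernel sizes below are placeholder assumptions of mine, not the authors' architecture.</p>

```python
# Minimal sketch, assuming PyTorch; not the paper's actual network.
import torch
import torch.nn as nn

BOARD = 19          # Go board size
IN_PLANES = 8       # placeholder: number of input feature planes per position
CHANNELS = 64       # placeholder: filters per convolutional layer
N_LAYERS = 12       # depth quoted in the abstract

layers = [nn.Conv2d(IN_PLANES, CHANNELS, kernel_size=5, padding=2), nn.ReLU()]
for _ in range(N_LAYERS - 2):
    layers += [nn.Conv2d(CHANNELS, CHANNELS, kernel_size=3, padding=1), nn.ReLU()]
# Final layer gives one score per board point; flatten to a 361-way move distribution.
layers += [nn.Conv2d(CHANNELS, 1, kernel_size=1), nn.Flatten()]
policy_net = nn.Sequential(*layers)

def training_step(planes, expert_moves, optimizer):
    """One supervised step: planes is (batch, IN_PLANES, 19, 19); expert_moves is
    (batch,) holding the index (0..360) of the move the professional actually played."""
    logits = policy_net(planes)                      # (batch, 361)
    loss = nn.functional.cross_entropy(logits, expert_moves)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random stand-in data:
opt = torch.optim.SGD(policy_net.parameters(), lr=0.01)
fake_planes = torch.randn(4, IN_PLANES, BOARD, BOARD)
fake_moves = torch.randint(0, BOARD * BOARD, (4,))
training_step(fake_planes, fake_moves, opt)
```

<p>A softmax over these 361 logits is the kind of move prior that could be handed to an MCTS program at node expansion, which is the combination the conclusion points toward.</p>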
<p>H/T: <a href="http://rjlipton.wordpress.com/2014/12/28/the-new-chess-world-champion/">Ken Regan</a></p> <p>Edit -- see also: <a href="http://arxiv.org/abs/1412.3409">Teaching Deep Convolutional Neural Networks to Play Go</a> (also published to the arXiv in December 2014), and <a href="http://www.technologyreview.com/view/533496/why-neural-networks-look-set-to-thrash-the-best-human-go-players-for-the-first-time/">Why Neural Networks Look Set to Thrash the Best Human Go Players for the First Time</a> (MIT Technology Review article)</p> esrogs WjoPDTdTRaiLzMnQS 2015-01-02T15:48:16.283Z [LINK] Attention Schema Theory of Consciousness https://www.lesswrong.com/posts/PJJg5oArXNa5pTg6F/link-attention-schema-theory-of-consciousness <p>I found this theory pretty interesting, and it reminded me of Gary Drescher's explanation of consciousness in <em>Good and Real</em>:</p> <blockquote> <p><strong><em>How the light gets out</em></strong></p> <p><em>Consciousness is the 'hard problem', the mystery that confounds scientists and philosophers. Has a new theory cracked it?</em></p> <p>[...]</p> <p>Attention requires control. In the modern study of robotics there is something called control theory, and it teaches us that, if a machine such as a brain is to control something, it helps to have an internal model of that thing. Think of a military general with his model armies arrayed on a map: they provide a simple but useful representation — not always perfectly accurate, but close enough to help formulate strategy. Likewise, to control its own state of attention, the brain needs a constantly updated simulation or model of that state. Like the general's toy armies, the model will be schematic and short on detail. The brain will attribute a property to itself and that property will be a simplified proxy for attention. It won't be precisely accurate, but it will convey useful information. What exactly is that property? When it is paying attention to thing X, we know that the brain usually attributes an experience of X to itself — the property of being conscious, or aware, of something. Why? Because that attribution helps to keep track of the ever-changing focus of attention.</p> <p>I call this the 'attention schema theory'. It has a very simple idea at its heart: that consciousness is a schematic model of one's state of attention. Early in evolution, perhaps hundreds of millions of years ago, brains evolved a specific set of computations to construct that model. At that point, 'I am aware of X' entered their repertoire of possible computations.</p> </blockquote> <p>- Princeton neuroscientist Michael Graziano, <a href="http://www.aeonmagazine.com/being-human/how-consciousness-works/">writing in Aeon Magazine</a>.</p>
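<p>The control-theory point in the excerpt (that a controller does better when it maintains an internal model of the thing it is trying to control) is easy to see in a toy example. The sketch below is purely illustrative and has nothing to do with Graziano's own work: a controller regulates a noisy system by acting on a running internal estimate of its state rather than on each raw observation.</p>

```python
# Toy illustration in plain Python; all numbers are arbitrary.
import random

def simulate(steps=200, setpoint=1.0, gain=0.5, smoothing=0.2, noise=0.3, seed=0):
    rng = random.Random(seed)
    x = 0.0          # true state of the controlled system
    estimate = 0.0   # the controller's internal model of that state
    for _ in range(steps):
        observation = x + rng.gauss(0.0, noise)           # noisy measurement
        estimate += smoothing * (observation - estimate)  # update the internal model
        u = gain * (setpoint - estimate)                  # act on the model, not the raw data
        x += u                                            # system responds to the control input
    return x, estimate

final_state, final_estimate = simulate()
print(round(final_state, 3), round(final_estimate, 3))
```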
esrogs PJJg5oArXNa5pTg6F 2013-08-25T22:30:01.903Z [LINK] Well-written article on the Future of Humanity Institute and Existential Risk https://www.lesswrong.com/posts/x3ozuqLDn9SwjwEjN/link-well-written-article-on-the-future-of-humanity <p>This introduction to the concept of existential risk is perhaps the best such article I've read targeted at a general audience. It manages to cover a lot of ground in a way that felt engaging to me and that I think would carry along many readers who are intellectually curious but may not yet have had exposure to all of the related prerequisite ideas.</p> <p><a href="http://www.aeonmagazine.com/world-views/ross-andersen-human-extinction/">Omens: When we peer into the fog of the deep future what do we see – human extinction or a future among the stars?</a></p> <blockquote> <p>Sometimes, when you dig into the Earth, past its surface and into the crustal layers, omens appear. In 1676, Oxford professor Robert Plot was putting the final touches on his masterwork, The Natural History of Oxfordshire, when he received a strange gift from a friend. The gift was a fossil, a chipped-off section of bone dug from a local quarry of limestone. Plot recognised it as a femur at once, but he was puzzled by its extraordinary size. The fossil was only a fragment, the knobby end of the original thigh bone, but it weighed more than 20 lbs (nine kilos). It was so massive that Plot thought it belonged to a giant human, a victim of the Biblical flood. He was wrong, of course, but he had the conceptual contours nailed. The bone did come from a species lost to time; a species vanished by a prehistoric catastrophe. Only it wasn't a giant. It was a Megalosaurus, a feathered carnivore from the Middle Jurassic.</p> <p>Plot's fossil was the first dinosaur bone to appear in the scientific literature, but many have followed it, out of the rocky depths and onto museum pedestals, where today they stand erect, symbols of a radical and haunting notion: a set of wildly different creatures once ruled this Earth, until something mysterious ripped them clean out of existence.</p> <p>[...]</p> <p>There are good reasons for any species to think darkly of its own extinction. Ninety-nine percent of the species that have lived on Earth have gone extinct, including more than five tool-using hominids.</p> <p>[...]</p> <p>Bostrom isn't too concerned about extinction risks from nature. Not even cosmic risks worry him much, which is surprising, because our starry universe is a dangerous place.</p> <p>[discussion of threats of supernovae, asteroid impacts, supervolcanoes, nuclear weapons, bioterrorism ...]</p> <p>These risks are easy to imagine. We can make them out on the horizon, because they stem from foreseeable extensions of current technology. [...] Bostrom's basic intellectual project is to reach into the epistemological fog of the future, to feel around for potential threats.
It's a project that is going to be with us for a long time, until — if — we reach technological maturity, by inventing and surviving all existentially dangerous technologies.</p> <p>There is one such technology that Bostrom has been thinking about a lot lately. Early last year, he began assembling notes for a new book, a survey of near-term existential risks. After a few months of writing, he noticed one chapter had grown large enough to become its own book. 'I had a chunk of the manuscript in early draft form, and it had this chapter on risks arising from research into artificial intelligence,' he told me. 'As time went on, that chapter grew, so I lifted it over into a different document and began there instead.'</p> <p>[very good introduction to the threat of superintelligent AI, touching on the alienness of potential AI goals, the complexity of specifying human value, the dangers of even Oracle AI, and techniques for keeping an AI in a box, with the key quotes including, "To understand why an AI might be dangerous, you have to avoid anthropomorphising it." and, "The problem is you are building a very powerful, very intelligent system that is your enemy, and you are putting it in a cage." ...]</p> <p>One night, over dinner, Bostrom and I discussed the Curiosity Rover, the robot geologist that NASA recently sent to Mars to search for signs that the red planet once harbored life. The Curiosity Rover is one of the most advanced robots ever built by humans. It functions a bit like the Terminator. It uses a state of the art artificial intelligence program to scan the Martian desert for rocks that suit its scientific goals. After selecting a suitable target, the rover vaporises it with a laser, in order to determine its chemical makeup. Bostrom told me he hopes that Curiosity fails in its mission, but not for the reason you might think.</p> <p>It turns out that Earth's crust is not our only source of omens about the future. There are others to consider, including a cosmic omen, a riddle written into the lifeless stars that illuminate our skies. But to glimpse this omen, you first have to grasp the full scope of human potential, the enormity of the spatiotemporal canvas our species has to work with. You have to understand what Henry David Thoreau meant when he wrote, in Walden (1854), 'These may be but the spring months in the life of the race.' You have to step into deep time and look hard at the horizon, where you can glimpse human futures that extend for trillions of years.</p> <p>[introduction to the idea of the Great Filter, and also that fighting existential risk is about saving all future humans and not just those alive at the time of any particular potential catastrophe ...]</p> <p>As Bostrom and I strolled among the skeletons at the Museum of Natural History in Oxford, we looked backward across another abyss of time. We were getting ready to leave for lunch, when we finally came upon the Megalosaurus, standing stiffly behind display glass. It was a partial skeleton, made of shattered bone fragments, like the chipped femur that found its way into Robert Plot's hands not far from here. As we leaned in to inspect the ancient animal's remnants, I asked Bostrom about his approach to philosophy.
How did he end up studying a subject as morbid and peculiar as human extinction?</p> <p>He told me that when he was younger, he was more interested in the traditional philosophical questions. He wanted to develop a basic understanding of the world and its fundamentals. He wanted to know the nature of being, the intricacies of logic, and the secrets of the good life.</p> <p>'But then there was this transition, where it gradually dawned on me that not all philosophical questions are equally urgent,' he said. 'Some of them have been with us for thousands of years. It's unlikely that we are going to make serious progress on them in the next ten. That realisation refocused me on research that can make a difference right now. It helped me to understand that philosophy has a time limit.'</p> </blockquote> <p>H/T - <a href="https://plus.google.com/u/0/103530621949492999968/posts/X6QcBVs8itn">gwern</a></p> esrogs x3ozuqLDn9SwjwEjN 2013-03-02T12:36:39.402Z The Center for Sustainable Nanotechnology https://www.lesswrong.com/posts/Xj9Ad3ACmMvoejSwK/the-center-for-sustainable-nanotechnology <p>Those concerned about existential risks may be interested to learn that, as of last September, the National Science Foundation is funding a <a href="http://susnano.chem.wisc.edu/">Center for Sustainable Nanotechnology</a>. Though I haven't yet seen them explicitly characterize nanotechnology as an existential threat to humanity (they seem mostly to be concerned with the potential hazards of nanoparticle pollution, rather than any kind of <a href="http://en.wikipedia.org/wiki/Grey_goo">grey goo</a> scenario), I was still pleased to discover that this group exists.</p> <p>Here is how they describe themselves on their <a href="http://susnano.chem.wisc.edu/about">main page</a>:</p> <blockquote> <p>The Center for Sustainable Nanotechnology is a multi-institutional partnership devoted to investigating the fundamental molecular mechanisms by which nanoparticles interact with biological systems.</p> <p>...</p> <p>While nanoparticles have a great potential to improve our society, relatively little is yet known about how nanoparticles interact with organisms, and how the unintentional release of nanoparticles from consumer or industrial products might impact the environment.</p> <p>The goal of the Center for Sustainable Nanotechnology is to develop and utilize a molecular-level understanding of nanomaterial-biological interactions to enable development of sustainable, societally beneficial nanotechnologies.
In effect, we aim to understand the molecular-level chemical and physical principles that govern how nanoparticles interact with living systems, in order to provide the scientific foundations that are needed to ensure that continued developments in nanotechnology can take place with the minimal environmental footprint and maximum benefit to society.</p> <p>...</p> <p>Funding for the CSN comes from the National Science Foundation Division of Chemistry through the Centers for Chemical Innovation Program.</p> </blockquote> <p>And on their <a href="http://sustainable-nano.com/2013/01/29/why-are-nanomaterials-so-special-and-what-is-the-center-for-sustainable-nanotechnology/">public outreach website</a>:</p> <blockquote> <p>Our "center" is actually a group of people who care about our environment and are doing collaborative research to help ensure that our planet will be habitable hundreds of years from now – in other words, that the things we do every day as humans will be sustainable in the long run.</p> <p>Now you're probably wondering what that has to do with nanotechnology, right? Well, it turns out that nanoparticles – chunks of materials around 10,000 times smaller than the width of a human hair – may provide new and important solutions to many of the world's problems. For example, new kinds of nanoparticle-based solar cells are being made that could, in the future, be painted onto the sides of buildings.</p> <p>...</p> <p>What's the (potential) problem? Well, these tiny little chunks of materials are so small that they can move around and do things in ways that we don't fully understand. For example, really tiny particles could potentially be absorbed through skin. In the environment, nanoparticles might be able to be absorbed into insects or fish that are at the bottom of the food chain for larger animals, including us.</p> <p>Before nanoparticles get incorporated into consumer products on a large scale, it's our responsibility to figure out what the downsides could be if nanoparticles were accidentally released into the environment. However, this is a huge challenge because nanoparticles can be made out of different stuff and come in many different sizes, shapes, and even internal structures.</p> <p>Because there are so many different types of nanoparticles that could be used in the future, it's not practical to do a lot of testing of each kind. Instead, the people within our center are working to understand what the "rules of behavior" are for nanoparticles in general. If we understand the rules, then we should be able to predict what different types of nanoparticles might do, and we should be able to use this information to design and make new, safer nanoparticles.</p> <p>In the end, it's all about people working together, using science to create a better, safer, more sustainable world. We hope you will join us!</p> </blockquote> esrogs Xj9Ad3ACmMvoejSwK 2013-02-26T06:55:18.542Z