Posts

Last chance to donate for 2011 2011-12-30T18:25:19.455Z
[Link] New arm of GiveWell Research: GiveWell Labs 2011-09-08T21:23:54.234Z
Impact of India-Pakistan nuclear war on x-risk? 2011-09-03T05:14:27.111Z
[LINK] Brief Discussion of Asteroid & Nuclear Risk from paper by Hellman 2011-08-17T20:07:32.902Z
Decimal digit computations as a testing ground for reasoning about probabilities 2011-07-15T04:19:21.908Z
Efficient philanthropy: local vs. global approaches 2011-06-16T04:21:21.352Z
Model Uncertainty, Pascalian Reasoning and Utilitarianism 2011-06-14T03:19:08.189Z
Generalizing From One Example & Evolutionary Game Theory 2011-05-31T23:23:32.998Z
[Reference request] Article by scientist giving lower and upper bounds on the probability of superintelligence 2011-05-08T22:16:23.190Z
John Baez Interviews with Eliezer (Parts 2 and 3) 2011-03-29T17:36:12.514Z
[LINK] John Baez Interview with astrophysicist Gregory Benford 2011-03-02T09:53:26.298Z
Some Considerations Against Short-Term and/or Explicit Focus on Existential Risk Reduction 2011-02-27T04:31:03.693Z
Friendly AI Research and Taskification 2010-12-14T06:30:02.477Z
Efficient Charity 2010-12-04T10:27:58.909Z
DRAFT: Three Intellectual Temperaments: Birds, Frogs and Beavers 2010-10-30T17:49:15.846Z
Monetary Incentives and Performance 2010-10-16T16:01:09.279Z
Beauty in Mathematics 2010-10-13T09:12:21.578Z
Vanity and Ambition in Mathematics 2010-10-12T05:49:15.498Z
Great Mathematicians on Math Competitions and "Genius" 2010-10-11T11:50:46.004Z
Fields Medalists on School Mathematics 2010-10-11T10:06:25.516Z
Reflections on a Personal Public Relations Failure: A Lesson in Communication 2010-10-01T00:29:26.468Z
The Effectiveness of Developing World Aid 2010-09-12T21:56:55.904Z
Reason is not the only means of overcoming bias 2010-09-09T22:59:58.922Z
Transparency and Accountability 2010-08-21T13:01:24.750Z
The Importance of Self-Doubt 2010-08-19T22:47:15.149Z
Other Existential Risks 2010-08-17T21:24:51.520Z
Existential Risk and Public Relations 2010-08-15T07:16:32.802Z
Against Cryonics & For Cost-Effective Charity 2010-08-10T03:59:28.119Z
Extraterrestrial paperclip maximizers 2010-08-08T20:35:39.547Z
Missed opportunities for doing well by doing good 2010-07-21T07:45:26.301Z
(One reason) why capitalism is much maligned 2010-07-19T03:48:43.524Z
Fight Zero-Sum Bias 2010-07-18T05:57:11.759Z

Comments

Comment by multifoliaterose on against "AI risk" · 2012-04-14T00:07:59.560Z · LW · GW

However, when I take a "disinterested altruism" point of view x-risk looms large: I would rather bring 100 trillion fantastic lives into being than improve the quality of life of a single malaria patient.

What's your break-even point for "bring 100 trillion fantastic lives into being with probability p" vs. "improve the quality of life of a single malaria patient", and why?
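For concreteness, the break-even condition is just an expected-value equation (a minimal sketch; the utility terms are placeholders I'm introducing, not figures from the discussion):

    p × U(100 trillion fantastic lives) = U(improving one malaria patient's life)

so the break-even probability is p* = U(malaria patient) / U(100 trillion lives); the question is what value of p* one endorses and why.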

Comment by multifoliaterose on Global warming is a better test of irrationality that theism · 2012-03-30T22:07:38.509Z · LW · GW

The occasional contrarians who mount fundamental criticism do this with a tacit understanding that they've destroyed their career prospects in the academia and closely connected institutions, and they are safely ignored or laughed off as crackpots by the mainstream. (To give a concrete example, large parts of economics clearly fit this description.)

I don't find this example concrete. I know very little about economics ideology. Can you give more specific examples?

Comment by multifoliaterose on [LINK] Nuclear winter: a reminder · 2012-03-19T18:30:54.529Z · LW · GW

It seems almost certain that nuclear winter is not an existential risk in and of itself, but it could precipitate a civilizational collapse from which it's impossible to recover (e.g. because we've already depleted too much of the low-hanging natural resource supply). This seems quite unlikely; maybe the chance conditional on nuclear winter is between 1 and 10 percent. Given that governments already consider nuclear war to be a national security threat, and that the probability seems much lower than that of x-risk due to future technologies, it seems best to focus on other things. Even if nothing direct can be done about x-risk from future technologies, movement building seems better than nuclear risk reduction.

Comment by multifoliaterose on Heuristics and Biases in Charity · 2012-03-04T02:51:43.327Z · LW · GW

So part of what I think is going on here is that giving to statistical charity is a slippery slope. There is no one number that it's consistent to give: if I give $10 to fight malaria, one could reasonably ask why I didn't give $100; if I give $100, why not $1000; and if $1000, why not every spare cent I make? Usually when we're on a slippery slope like this, we look for a Schelling point, but there are only two good Schelling points here: zero and every spare cent for the rest of your life. Since most people won't donate every spare cent, they stick to "zero". I first realized this when I thought about why I so liked Giving What We Can's philosophy of donating 10% of what you make; it's a powerful suggestion because it provides some number between 0 and 100 which you can reach and then feel good about yourself.

There's another option which I think may be better for some people (though I don't know, because it hasn't been much explored). One can stagger one's donations over time (say, on a quarterly basis) and adjust the amount that one gives according to how one felt about one's past donations. It seems like this may locally maximize the amount that one gives, subject to the constraint of avoiding moral burnout.

If one feels uncomfortable with the amount that one is donating because it's interfering with one's lifestyle, one can taper off. On the flip side, I've found that donating gives the same pleasure that buying something does: a sense of empowerment. Buying a new garment that one realistically isn't going to wear, or a book that one realistically isn't going to read, feels good, but probably not as good as donating. This is a pressure toward donating more.
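To make the heuristic concrete, here is a minimal sketch (the quarterly schedule is from the proposal above, but the comfort scale, the multiplicative factors, and the function name are my own illustrative assumptions):

    # Hypothetical sketch of the staggered-donation heuristic: donate
    # quarterly, then scale the next donation up or down based on how the
    # previous one felt. All numbers are illustrative assumptions.

    def next_donation(previous_amount: float, comfort: int) -> float:
        """comfort: -1 = interfered with lifestyle, 0 = neutral, +1 = felt good."""
        if comfort < 0:
            return previous_amount * 0.8   # taper off to avoid moral burnout
        if comfort > 0:
            return previous_amount * 1.2   # the empowerment pressure toward giving more
        return previous_amount             # hold steady

    amount = 100.0  # arbitrary starting quarterly donation
    for felt in [1, 1, -1, 0]:  # an example sequence of felt reactions
        amount = next_donation(amount, felt)
        print(f"next quarter: ${amount:.2f}")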

Comment by multifoliaterose on How do you notice when you're rationalizing? · 2012-03-03T01:06:27.844Z · LW · GW

Cue: Non-contingency of my arguments (such that the same argument could be applied to argue for conclusions which I disagree with).

Comment by multifoliaterose on How do you notice when you're rationalizing? · 2012-03-03T01:02:22.831Z · LW · GW

Bob: "We shouldn't do question three this way; you only think so because you're a bad writer". My mouth/brain: "No, we should definitely do question three this way! [because I totally don't want to think I'm a bad writer]"

It's probably generically the case that the likelihood of rationalization increases with the contextual cue of a slight. But one usually isn't aware of this in real time.

Comment by multifoliaterose on What happens when your beliefs fully propagate · 2012-02-19T01:34:07.700Z · LW · GW

I find this comment vague and abstract; do you have examples in mind?

Comment by multifoliaterose on Efficient Charity: Cheap Utilons via bone marrow registration · 2012-02-13T00:45:52.098Z · LW · GW

GiveWell itself (it directs multiple dollars to its top charities on the dollar invested, as far as I can see, and powers the growth of an effective philanthropy movement with broader implications).

There's an issue of room for more funding.

Some research in the model of Poverty Action Lab.

What information do we have from Poverty Action Lab that we wouldn't have otherwise? (This is not intended as a rhetorical question; I don't know much about what Poverty Action Lab has done).

A portfolio of somewhat outre endeavours like Paul Romer's Charter Cities.

Even in the face of the possibility of such endeavors systematically doing more harm than good due to culture clash?

Political lobbying for AMF-style interventions (Gates cites his lobbying expenditures as among their very best), carefully optimized as expected-value charity rather than tribalism using GiveWell-style empiricism, with the collective action problems of politics offsetting the reduced efficiency and corruption of the government route

Here too, maybe there's an issue of room for more funding: if there were room for more funding here, then why does the Gates Foundation spend money on many other things?

Putting money in a Donor-Advised Fund to await the discovery of more effective charities, or special time-sensitive circumstances demanding funds especially strongly

What would the criterion for using the money be? (If one doesn't have such a criterion, then one holds out forever for a better opportunity, and this has zero expected value.)
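To illustrate what such a criterion might look like (a purely hypothetical sketch; the threshold-plus-deadline rule is my own, not anything proposed in the thread):

    # Illustrative disbursement criterion for a donor-advised fund: give to
    # the best opportunity seen so far once its estimated cost-effectiveness
    # clears a threshold, or unconditionally at a hard deadline. Without the
    # deadline clause, one could hold out forever and realize zero value.

    def should_disburse(best_cost_effectiveness: float, threshold: float,
                        years_waited: int, deadline_years: int) -> bool:
        if best_cost_effectiveness >= threshold:
            return True                        # a good-enough opportunity appeared
        return years_waited >= deadline_years  # the deadline guarantees eventual giving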

Comment by multifoliaterose on On Saying the Obvious · 2012-02-03T21:45:05.564Z · LW · GW

Saying that something is 'obvious' can provide useful information to the listener of the form "If you think about this for a few minutes you'll see why this is true; this stands in contrast with some of the things that I'm talking about today." Or even "though you may not understand why this is true, for experts who are deeply immersed in this theory this part appears to be straightforward."

I personally wish that textbooks more often highlighted the essential points over those theorems that follow from a standard method that the reader is probably familiar with.

But here I really have in mind graduate/research-level math, where there's widespread understanding that a high percentage of the time people are unable to follow someone who believes his or her work to be intelligible, and so listeners have a prior against such remarks being intended as a slight. It seems like a bad strategy for communicating with people who are not in such a niche.

Comment by multifoliaterose on How I Ended Up Non-Ambitious · 2012-01-28T06:22:03.178Z · LW · GW

Do you know of anyone who tried and quit?

No, I don't. This thread touches on important issues which warrant fuller discussion; I'll mull them over and might post more detailed thoughts on the discussion board later on.

Comment by multifoliaterose on Urges vs. Goals: The analogy to anticipation and belief · 2012-01-25T22:33:55.745Z · LW · GW

(People rarely exhibit long-term planning to acquire social status any more than we/they exhibit long-term planning to acquire health. E.g., most unhappily single folk do not systematically practice their social skills unless this is encouraged by their local social environment.)

Is lack of social skills typically the factor that prevents unhappily single folk from finding relationships? Surely this is true in some cases but I would be surprised to learn that it's generic.

Comment by multifoliaterose on Urges vs. Goals: The analogy to anticipation and belief · 2012-01-25T05:31:49.432Z · LW · GW

I strongly endorse your second and fourth points; thanks for posting this. They're related to Yvain's post Would Your Real Preferences Please Stand Up?.

Comment by multifoliaterose on How I Ended Up Non-Ambitious · 2012-01-25T05:26:10.521Z · LW · GW

The only problem here is charity: I do think it may be morally important to be ambitious in helping others, which might even include taking a lucrative career in order to give money to charity. This is especially true if the Singularity memeplex is right and we're living in a desperate time that calls for a desperate effort. See for example Giving What You Can's powerpoint on ethical careers. At some point you need to balance how much good you want to do, with how likely you are to succeed in a career, with how miserable you want to make yourself - and at the very least rationality can help clarify that decision.

I don't know a single example of somebody who chose a career substantially less enjoyable than what they would otherwise have been doing in order to help people and successfully stuck to it. Do you?

Comment by multifoliaterose on The Singularity Institute's Arrogance Problem · 2012-01-19T13:50:47.331Z · LW · GW

(a) My experience with the sociology of academia has been very much in line with what Lukeprog's friend, Shminux, and RolfAndreassen describe. This is the culture that I was coming from in writing my post titled Existential Risk and Public Relations. Retrospectively, I realize that the modesty norm is unusually strong in academia, and to that extent I was off-base in my criticism.

The modesty norms have advantages and disadvantages. I think that it's appropriate for even the best people to take the view "I'm part of a vast undertaking; if I hadn't gotten there first, it's not unlikely that someone else would have gotten there within a few decades." However, I'm bothered by the fact that the norm is so strong that innocuous questions/comments which are quite weak signals of immodesty are frowned upon.

(b) I agree with cousin_it that it would be good for SIAI staff to "communicate more carefully, like Holden Karnofsky or Carl Shulman."

Comment by multifoliaterose on Open Thread, January 15-31, 2012 · 2012-01-18T22:34:23.970Z · LW · GW

But what's the purported effect size?

Comment by multifoliaterose on Open Thread, January 15-31, 2012 · 2012-01-18T22:31:50.050Z · LW · GW

I know Bach's music quite well from a listener's perspective, though not from a theoretician's perspective. I'd be happy to share recordings of some pieces that I've enjoyed / have found accessible.

Your last paragraph is obscure to me and I share your impression that you started to ramble :-).

Comment by multifoliaterose on Open Thread, January 15-31, 2012 · 2012-01-18T16:24:21.839Z · LW · GW

I wasn't opening up discussion of the book so much as inquiring why you find the fact that you cite interesting.

Comment by multifoliaterose on Open Thread, January 15-31, 2012 · 2012-01-18T13:35:13.846Z · LW · GW

Why do you bring this up?

For what it's worth, my impression is that while there exist people who have genuinely benefited from the book, a very large majority of the interest expressed in the book is almost purely signaling.

Comment by multifoliaterose on Open Thread, January 15-31, 2012 · 2012-01-18T13:33:27.463Z · LW · GW

I agree.

Comment by multifoliaterose on Open Thread, January 15-31, 2012 · 2012-01-18T13:32:58.703Z · LW · GW

Why are you asking?

Comment by multifoliaterose on Q&A with experts on risks from AI #3 · 2012-01-12T13:32:09.169Z · LW · GW

You didn't address my criticism of the question about provably friendly AI, nor my point about the researchers lacking relevant context for thinking about AI risk. Again, the issues that I point to seem to make the researchers' responses to the questions about friendliness & existential risk due to AI carry little information.

Comment by multifoliaterose on Q&A with experts on risks from AI #2 · 2012-01-11T02:54:01.205Z · LW · GW

I find some of your issues with the piece legitimate, but stand by my characterization of the most serious existential threat from AI as being of the type described therein.

Comment by multifoliaterose on [Template] Questions regarding possible risks from artificial intelligence · 2012-01-10T15:38:26.778Z · LW · GW

The whole of question 3 seems problematic to me.

Concerning parts (a) and (b), I doubt that researchers will know what you have in mind by "provably friendly." For that matter, I myself don't know what you have in mind by "provably friendly," despite having read a number of relevant posts on Less Wrong.

Concerning part (c), I doubt that experts are thinking in terms of the money needed to possibly mitigate AI risks at all; presumably, in most cases, if they saw this as a high-priority and tractable issue they would have written about it already.

Comment by multifoliaterose on Dead Child Currency · 2012-01-10T04:36:38.937Z · LW · GW

To illustrate the fact that the value of goods is determined by their scarcity/abundance relative to demand?

Comment by multifoliaterose on Dead Child Currency · 2012-01-10T03:23:42.412Z · LW · GW

I don't see the relevance of your response to my question; care to elaborate?

Comment by multifoliaterose on Q&A with experts on risks from AI #2 · 2012-01-10T01:41:30.346Z · LW · GW

I generally agree with paulfchristiano here. Regarding Q2, Q5 and Q6, I'll note that aside from Nils Nilsson, the researchers in question do not appear to be familiar with the most serious existential risk from AGI: the one discussed in Omohundro's The Basic AI Drives. Researchers without this background context are unlikely to deliver informative answers on Q2, Q5 and Q6.

Comment by multifoliaterose on Dead Child Currency · 2012-01-10T01:27:20.892Z · LW · GW

I was thinking over a dramatically cheap mosquito zapping laser (putting as much of the complexity into software rather than high precision hardware).

I don't understand this sentence. Is this something that you were contemplating doing personally? The Gates Foundation has already funded such a project.

I can't say I care a whole ton though - it's not my fault the world is naturally a hell-hole.

I agree with the second clause but don't think that it has a great deal to do with the first clause. Most people, upon being confronted by a sabertooth tiger, would care about not being maimed by it, despite the fact that it's not their fault that there's a possibility of being maimed by a sabertooth tiger. A sense of bearing responsibility for a problem is one route toward caring about fixing it, but there are other routes.

Nevertheless, sadly I can relate to not caring very much.

Think about it, the condition of suffering has evolved because it is very useful to prodding you forward - in the natural conditions you suffer a lot, the pain circuitry gets a lot of use. No species can just live happily, the evolution will make such species work harder at reproducing and suffer.

Any reason to think that negative feelings are a more effective motivator than positive feelings? If not, is there any reason to doubt that it's in principle possible for a species to have motivational mechanisms consisting exclusively of rewards?

Comment by multifoliaterose on Dead Child Currency · 2012-01-10T01:23:23.551Z · LW · GW

How does the person singled out react?

Comment by multifoliaterose on Dead Child Currency · 2012-01-10T01:12:08.933Z · LW · GW

I didn't downvote you but I suspect that the reason for the downvotes is the combination of your claim appearing dubious and the absence of a supporting argument.

Comment by multifoliaterose on [Link] Correlation Graphs Reveal Shocking Information · 2011-12-26T15:00:24.537Z · LW · GW

once people pass a certain intelligence level

This seems crucial to me; you're really talking about a few percent of the population, right?

Also, I'll note that when (even very smart) people are motivated to believe in the existence of a phenomenon, they're apt to attribute causal structure to correlated data.

For example: It's common wisdom among math teachers that precalculus is important preparation for calculus. Surely taking precalculus has some positive impact on calculus performance, but I would guess that this impact is swamped by preexisting variance in mathematical ability/preparation.
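A toy simulation can illustrate how this happens (entirely my own illustration, with made-up effect sizes): if preexisting ability drives both precalculus and calculus performance, the two correlate strongly even when precalculus itself contributes only a small direct effect.

    # Toy simulation (my own illustration, with made-up effect sizes):
    # preexisting ability drives both precalculus and calculus performance,
    # so the two correlate strongly even though precalculus itself has only
    # a small direct effect here.

    import random

    random.seed(0)
    ability = [random.gauss(0, 1) for _ in range(10_000)]
    precalc = [a + random.gauss(0, 0.5) for a in ability]
    calculus = [a + 0.1 * p + random.gauss(0, 0.5)  # small direct effect of precalc
                for a, p in zip(ability, precalc)]

    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
        vx = sum((x - mx) ** 2 for x in xs) / n
        vy = sum((y - my) ** 2 for y in ys) / n
        return cov / (vx * vy) ** 0.5

    print(corr(precalc, calculus))  # ~0.8: high, despite the tiny direct effect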

Comment by multifoliaterose on Video Q&A with Singularity Institute Executive Director · 2011-12-10T17:36:32.153Z · LW · GW

Huh? I didn't mean opportunity cost, but simply that successful neuromorphic AI destroys the world. Staging a global catastrophe does have lower expected value than protecting from global catastrophe (with whatever probabilities), but also lower expected value than watching TV.

I was saying that it could be that with more information we would find that

0 < EU(Friendly AI research) < EU(Pushing for relatively safe neuromorphic AI) < EU(Successful construction of a Friendly AI)

even if there's a high chance that relatively safe neuromorphic AI would cause global catastrophe and carry no positive benefits. This could be the case if Friendly AI research is sufficiently hard. I think that, given the current uncertainty about the difficulty of Friendly AI research, one would have to be extremely confident that relatively safe neuromorphic AI would cause global catastrophe in order to rule this possibility out.

Indirectly, but with influence that compresses expected time-to-catastrophe after the tech starts working from decades-centuries to years (decades if WBE tech comes early and only slow or few uploads can be supported initially). It's not all lost at that point, since WBEs could do some FAI research, and would be in a better position to actually implement a FAI and think longer about it, but ease of producing an UFAI would go way up (directly, by physically faster research of AGI, or by experimenting with variations on human brains or optimization processes built out of WBEs).

Agree with this.

The main thing that distinguishes WBEs is that they are still initially human, still have same values. All other tech breaks values, and giving it power makes humane values lose the world.

I think that I'd rather have an uploaded crow brain have its computational power and memory substantially increased and then go FOOM than have an arbitrary, powerful optimization process; just because a neuromorphic AI wouldn't have values that are precisely human doesn't mean it would be totally devoid of value from our point of view.

Comment by multifoliaterose on Video Q&A with Singularity Institute Executive Director · 2011-12-10T16:30:40.182Z · LW · GW

I believe it won't be "less valuable", but instead would directly cause existential catastrophe, if successful.

I meant in expected value.

As Anna mentioned in one of her Google AGI talks, there's the possibility of an AGI being willing to trade with humans to avoid a small probability of being destroyed by humans (though I concede that it's not at all clear how one would create an enforceable agreement). Also, a neuromorphic AI could be not so far from a WBE. Do you think that whole brain emulation would directly cause existential catastrophe?

Comment by multifoliaterose on Video Q&A with Singularity Institute Executive Director · 2011-12-10T15:50:09.530Z · LW · GW

Believing problem intractable isn't a step towards solving the problem. It might be correct to downgrade your confidence in a problem being solvable, but isn't in itself a useful thing if the goal remains motivated.

I agree, but it may be appropriate to be more modest in aim (e.g. by pushing for neuromorphic AI with some built-in safety precautions even if achieving this outcome is much less valuable than creating a Friendly AI would be).

Comment by multifoliaterose on Video Q&A with Singularity Institute Executive Director · 2011-12-10T14:41:15.792Z · LW · GW

Luke: I appreciate your transparency and clear communication regarding SingInst.

The main reason that I remain reluctant to donate to SingInst is that I find your answer (and the answers of other SingInst affiliates who I've talked with) to the question about Friendly AI subproblems to be unsatisfactory. Based on what I know at present, subproblems of the type that you mention are way too vague for it to be possible for even the best researchers to make progress on them.

My general impression is that the SingInst staff have insufficient exposure to technical research to understand how hard it is to answer questions posed at such a level of generality. I'm largely in agreement with Vladimir_M's comments on this thread.

Now, it may well be possible to further subdivide and sharpen the subproblems at hand to the point where they're well defined enough to answer, but the fact that you seem unaware of how crucial this is is enough to make me seriously doubt SingInst's ability to make progress on these problems.

I'm glad to see that you place high priority on talking to good researchers, but I think that the main benefit of doing so (aside from increasing awareness of AI risk) will be to shift SingInst staff members' beliefs in the direction of the Friendly AI problem being intractable.

Comment by multifoliaterose on Video Q&A with Singularity Institute Executive Director · 2011-12-10T13:49:06.537Z · LW · GW

One doesn't need to know that hundreds of people have been influenced to know that Eliezer's writings have had x-risk reduction value; if he's succeeded in getting a handful of people seriously interested in x-risk reduction relative to the counterfactual, his work is of high value. Based on my conversations with those who have been so influenced, this last point seems plausible to me. But I agree that the importance of the sequences for x-risk reduction has been overplayed.

Comment by multifoliaterose on Video Q&A with Singularity Institute Executive Director · 2011-12-10T13:42:05.096Z · LW · GW

The company could generate profit to help fund SingInst and give evidence that the rationality techniques that Vassar, etc. use work in a context with real-world feedback. This in turn could give evidence of their being useful in the context of x-risk reduction, where empirical feedback is not available.

Comment by multifoliaterose on [LINKS] New GiveWell Top Charities · 2011-12-01T07:26:28.254Z · LW · GW

I misread your earlier comment; sorry for the useless response. I understand where you're coming from now. Holden has written about the possibility of efficient opportunities for donors drying up as the philanthropic sector improves, suggesting that it might be best to help now because the poor people who can be easily helped are around today and will not be in the future. See this mailing list post.

I personally think that even if this is true, the expected value of waiting to give later is higher than the expected value of donating to AMF or SCI now. But I might give to AMF or SCI in the near future to maintain the sense of being an active donor.

Comment by multifoliaterose on [LINKS] New GiveWell Top Charities · 2011-11-30T15:49:03.869Z · LW · GW

If you know that you can donate to SCI later, the expected utility of waiting would have to be at least that of donating to it now.

Why? Because you can invest the money and use the investment to donate more later? But donating more now increases the recipients' functionality, so that they're able to contribute more to their respective societies between now and later than they would otherwise be able to.

Comment by multifoliaterose on [LINKS] New GiveWell Top Charities · 2011-11-30T02:52:21.136Z · LW · GW

It seems very unlikely to me that the expected value of donating to SCI is precisely between 1/2 and 1 times as high as the best alternative.

Comment by multifoliaterose on [LINKS] New GiveWell Top Charities · 2011-11-29T23:21:04.503Z · LW · GW

I don't understand your question; are you wondering whether you should give through the donation-matching pledge, or whether you should give to AMF or SCI at all?

Comment by multifoliaterose on LessWrong opinion of Nietzsche? · 2011-11-25T20:30:00.774Z · LW · GW

See Vladimir_M's comment on Nietzsche (which I agree with).

Comment by multifoliaterose on Intelligence Explosion analysis draft: Why designing digital intelligence gets easier over time · 2011-11-23T02:37:25.004Z · LW · GW

Embryo selection for better scientists. At age 8, Terrence Tao scored 760 on the math SAT, one of only [2?3?] children ever to do this at such an age; he later went on to [have a lot of impact on math]. Studies of similar kids convince researchers that there is a large “aptitude” component to mathematical achievement, even at the high end.7 How rapidly would mathematics or AI progress if we could create hundreds of thousands of Terrence Tao’s?

Though I think I agree with the general point that you're trying to make here (that there's a large "aptitude" component to the skills relevant to AI research, and that embryo selection technology could massively increase the number of people who have high aptitude), I don't think that it's so easy to argue:

(a) The math that Terence Tao does is arguably quite remote from AI research.

(b) More broadly, the relevance of mathematical skills to AI research skills is not clear cut.

(c) The SAT tests mathematical aptitude only very obliquely.

(d) Correlation is not causation; my own guess is that high mathematical aptitude as measured by conventional metrics (e.g. mathematical olympiads) is usually necessary but seldom sufficient for the highest levels of success as a mathematical researcher.

(e) Terence Tao is a single example.

7 [Benbow etc. on study of exceptional talent; genetics of g; genetics of conscientiousness and openness, pref. w/ any data linking conscientiousness or openness to scientific achievement. Try to frame in a way that highlights hard work type variables, so as to alienate people less.]

Is there really high-quality empirical data here? I vaguely remember Carl referencing a study about people at the one-in-ten-thousand level of IQ having more success becoming professors than others, but my impression is that there's not much research on the genetics of high-achieving scientists.

For what it's worth I think that the main relevant variable here is a tendency (almost involuntary) to work in a highly focused way for protracted amounts of time. This seems to me much more likely to be the limiting factor than g.

I think that one would be on more solid footing both rhetorically and factually just saying something like "capacity for scientific achievement appears to have a large genetic component and it may be possible to select for genes relevant to high scientific achievement by studying the genes of high achieving scientists."

Comment by multifoliaterose on Transhumanism and Gender Relations · 2011-11-17T15:47:35.356Z · LW · GW

Characteristically Burkean.

Comment by multifoliaterose on Whole Brain Emulation: Looking At Progress On C. elgans · 2011-10-30T16:21:36.008Z · LW · GW

While I wouldn't say whole brain emulation could never happen, this looks to me like it is a very long way out, probably hundreds of years.

Does this assessment take into account the possibility of intermediate acceleration of human cognition?

Comment by multifoliaterose on What to do after college? · 2011-10-10T03:25:55.898Z · LW · GW

You might point him to the High Impact Careers Network. There's not much on the website right now, but the principals have been doing in-depth investigation of the prospects for doing good in various careers and might well be inclined to share draft materials with your friend.

Comment by multifoliaterose on Rationality Drugs · 2011-10-02T15:30:29.353Z · LW · GW

Me too!

Comment by multifoliaterose on Impact of India-Pakistan nuclear war on x-risk? · 2011-09-04T17:13:54.233Z · LW · GW

Thanks for the excellent response. I'm familiar with much of the content but you've phrased it especially eloquently.

Comment by multifoliaterose on Impact of India-Pakistan nuclear war on x-risk? · 2011-09-03T21:51:37.665Z · LW · GW

Agree with most of what you say here.

If technological progress is halted completely this won't be a problem.

No, if technological progress is halted completely then we'll never be able to become transhumans. From a certain perspective this is almost as bad as going extinct.

The question as phrased also emphasizes climate change rather than other issues. In the case of such a nuclear war, there would be many other negative results. India is a major economy at this point and such a war would result in largescale economic problems throughout.

The Robock and Toon article estimates 20 million immediate deaths from an India-Pakistan war, less than 5% of the relevant population (though presumably including many of the most productive such people). This number is roughly consistent with extrapolating from data from Hiroshima and Nagasaki. In light of this, would you guess that the India-Pakistan-specific economic disruption would be greater than the economic disruption caused by a billion deaths due to starvation?

One serious problem with coming back from societal collapse that is often neglected is the problem of resource management. Nick Bostrom has pointed out that to get to our current tech level we had to bootstrap up using non-renewable fossil fuels and other non-renewable resources.

Do you know if anyone's attempted an analysis of the relevant issues here? At the crudest level, we can look at the amount of oil that's been used so far.

Overall, nuclear war is an example of many sorts of situations that would increase existential risk across the board. In that regard it isn't that different from a smallish asteroid impact (say 2-3 km) in a major country, or Yellowstone popping, or a massive disease outbreak or a lot of other situations.

Agree, but I think that the probability of nuclear war is higher than the probability of the other possibilities that you mention. See http://lesswrong.com/lw/75b/link_brief_discussion_of_asteroid_nuclear_risk/ if you haven't already done so.

Comment by multifoliaterose on Impact of India-Pakistan nuclear war on x-risk? · 2011-09-03T15:26:53.924Z · LW · GW

The lack of ICBM capacity for either side makes nuclear weapons in the hands of Pakistan and India effective as MAD deterrence due to the simple fact that any use of such weapons is likely to be nearly as destructive to their own side as it would to the enemy.

Can you substantiate this claim?

Comment by multifoliaterose on [LINK] Brief Discussion of Asteroid & Nuclear Risk from paper by Hellman · 2011-08-24T19:54:45.325Z · LW · GW

Thanks to you too!