Comments

Comment by Gastogh on Three more ways identity can be a curse · 2013-04-28T06:56:15.876Z · LW · GW

Additionally, optimizing for a particular identity might not only be counterproductive - it might actually be a quick way to get people to despise you.

Sure, but not optimizing for a particular identity can easily be just as harmful. This goes especially for social situations; consider being gay and not optimizing for a non-gay facade in an emphatically anti-gay environment.

Given that, the obvious follow-up question is how to tell the good identities from the bad, and I think the post does well in identifying some of the bad types. This, for example:

Synthesizing these three pieces of information leads me to believe that the worst thing you can possibly do for your akrasia is to tie your success and productivity to your sense of identity/self-worth, especially if you're using negative motivation to do so, and especially if you suffer or have recently suffered from depression or low self-esteem.

...seems well on the mark and I see a lot of myself in it. Could do without the superlative ("The worst thing you can possibly do!"), but otherwise it seems sound.

Comment by Gastogh on Help us name the Sequences ebook · 2013-04-16T06:44:42.529Z · LW · GW

Don't know if this is where it comes from, but I always thought of "sequences" as an elaboration on the idea of rationality as a martial art; the term has some significance in theatrical swordplay, and it could also be compared to the Japanese kata.

Comment by Gastogh on Eliezer's YU lecture on FAI and MOR [link] · 2013-03-08T09:09:23.668Z · LW · GW

I read the first half, skimmed the second, and glanced at a handful of the slides. Based on that, I would say it's mostly introductory material with nothing new for those who have read the sequences. IOW, a summary of the lecture would basically be a summary of a summary of LW.

Comment by Gastogh on Sayeth the Girl · 2013-02-26T11:32:08.628Z · LW · GW

To me it seems like a joke.

Comment by Gastogh on LW Women: LW Online · 2013-02-26T11:25:10.746Z · LW · GW

This would explain why some people recommend starting sentences with "I think..." etc. to reduce conflicts.

In a model-sharing mode that does not make much sense. Sentences "I think X" and "X" are equivalent.

I think it does make sense, even in model-sharing mode. "I think" has a modal function; modal expressions communicate something about your degree of certainty in what you're saying, and so does leaving them out. The general pattern is that flat statements without modal qualifiers are interpreted as being spoken with great/absolute confidence.

I also question the wisdom of dividing interpersonal communication into separate "listener-handling" and "model-sharing" modes. Sharing anything that might reasonably be expected to affect other people's models only fails to count as "listener-handling" if we discount "potentially changing people's models" as a way of "handling" them, which doesn't seem to make a lot of sense to me.

Comment by Gastogh on Singularity Fiction · 2013-02-26T10:20:24.271Z · LW · GW

Seconded. Granted, my sample size is pretty minuscule, but still.

And as an extra reason why LW folks might be interested in Rajaniemi's books, the second book of the series, The Fractal Prince, mentions something called "extrapolated volition" being at the heart of one of the cultures in the novels' setting.

Comment by Gastogh on How do you not be a hater? · 2013-02-25T11:11:15.406Z · LW · GW

Why do you think that having Asperger's gives you immunity to revulsion at the quality of a review?

Comment by Gastogh on Discussion: Which futures are good enough? · 2013-02-24T08:52:23.579Z · LW · GW
  1. Are there other values that, if we traded them off, might make MFAI much easier?

I don't understand this question. Is it somehow not trivially obvious that the more values you remove from the equation (starting with "complexity"), the easier things become?

Comment by Gastogh on Rationalist Lent · 2013-02-20T15:14:11.848Z · LW · GW

Sign me up for the interest list as well. On a related note: given the number of upvotes for the others who have expressed interest, the writeup might warrant a Discussion-level post when the time comes; if it ends up working anywhere near as well as it did in Rhinehart's personal experience, I feel we shouldn't risk the finding being buried in the comments of this thread.

Also, in case you don't share his misgivings about providing brand names, such a list would be appreciated. Part of the reason is that Rhinehart says he lives in one of the largest metropolitan areas in the world, and if he says some things are "hard to get" and have to be obtained from small suppliers, I might end up having to import them.

Comment by Gastogh on Domesticating reduced impact AIs · 2013-02-14T17:12:54.252Z · LW · GW

I mostly steer clear of AI posts like this, but I wanted to give props for the drawing of unsurpassable elegance.

Comment by Gastogh on Welcome to Less Wrong! (July 2012) · 2013-02-10T12:10:08.330Z · LW · GW

Why are we causing them to think of LW in terms of identity in the first place, instead of, say, a place to learn about and discuss some interesting ideas?

Some possibilities:

  1. There have been deliberate efforts at community-building, as evidenced by all the meetup-threads and one whole sequence, which may suggest that one is supposed to identify with the locals. Even relatively innocuous things like introduction and census threads can contribute to this if one chooses to take a less than charitable view of them, since they focus on LW itself instead of any "interesting idea" external to LW.

  2. Labeling and occasionally hostile rhetoric: Google gives dozens of hits for terms like "lesswrongian" and "LWian", and there have been recurring dismissive attitudes regarding The Others and their intelligence and general ability. This includes all snide digs at "Frequentists", casual remarks to the effect of how people who don't follow certain precepts are "insane", etc.

  3. The demographic homogeneity probably doesn't help.

Comment by Gastogh on [SEQ RERUN] Failed Utopia #4-2 · 2013-02-08T14:13:23.866Z · LW · GW

I always was rather curious about that other story EY mentions in the comments. (The "gloves off on the application of FT" one, not the boreanas one.) It could have made for tremendously useful memetic material / motivation for those who can't visualize a compelling future. Given all the writing effort he would later invest in MoR, I suppose the flaw with that prospect was a perceived forced tradeoff between motivating the unmotivated and demotivating the motivated.

Comment by Gastogh on Cryo and Social Obligations · 2013-01-27T15:44:56.593Z · LW · GW

What's even more interesting is that if this idea has any actual basis in reality... then it offers the possibility of coming up with approaches to counter it: promoting the idea that waking up from cryo will involve being enmeshed in a community right away.

Do we expect that to really be the case, though?

Comment by Gastogh on Cryo and Social Obligations · 2013-01-27T15:34:08.081Z · LW · GW

This may be somewhat beside the point of the OP, but "cryonics" + "social obligations" in the context of the old headache about the popularity of cryonics reminded me of this:

The laws of different countries allow potential donors to permit or refuse donation, or give this choice to relatives. The frequency of donations varies among countries.

There are two main methods for determining voluntary consent: "opt in" (only those who have given explicit consent are donors) and "opt out" (anyone who has not refused is a donor). Opt-out legislative systems dramatically increase effective rates of consent for donation.[1] For example, Germany, which uses an opt-in system, has an organ donation consent rate of 12% among its population, while Austria, a country with a very similar culture and economic development, but which uses an opt-out system, has a consent rate of 99.98%.[1][2]

~ Wikipedia on organ donation

Comment by Gastogh on I attempted the AI Box Experiment (and lost) · 2013-01-21T07:01:22.044Z · LW · GW

I voted No, but then I remembered that under the terms of the experiment as well as for practical purposes, there are things far more subtle than merely pushing a "Release" button that would count as releasing the AI. Given that, if I could, I'd change my vote to Not sure.

Comment by Gastogh on Farewell Aaron Swartz (1986-2013) · 2013-01-14T12:59:50.573Z · LW · GW

No suicide note has surfaced, PGP-signed or otherwise. No public statements that I've been able to find have identified witnesses or method.

Some of this information has been released since the posting of the parent, but because the tone of the post feels like it was jumping a gun or two, I wanted to throw this out there:

There are good reasons why the media might not want to go into detail on these things, especially when the person in question was young, famous and popular. The relatively recent Bridgend suicide spiral was (is?) a prime example of such neglected media ethics, but the effect itself is nothing new.

Also: some things are always bound to get out via the social grapevine, but the lack of detailed official statements within a day or two is hardly even weak evidence for anything. I'll bet the "possibility that this was not a natural event" also occurred to the police, and immediately publishing relevant details of what might have become a criminal investigation just seems plain dumb.

Comment by Gastogh on How to Seem (and Be) Deep · 2013-01-09T10:55:11.745Z · LW · GW

I'm not sure how literally I'm supposed to take that last statement, or how general its intended application is. It just doesn't seem practicable.

I'm assuming you wouldn't drop everything else that's going on in your life for an unspecified amount of time in order to personally force a stranger to stay alive, all just as a response to them stating that it would be their preference to die. Was this only meant to apply if it was someone close to you who expressed that desire, or do you actually work full-time in suicide prevention or something?

Comment by Gastogh on Caring about what happens after you die · 2012-12-19T21:05:33.340Z · LW · GW

If they really can't even see that someone can care, then it certainly sounds as though the problem is in their understanding rather than your explanations. The viewpoint of "I don't care what happens if it doesn't involve me in any way" doesn't seem in any way inherently self-contradictory, so it'd be a hard position to argue against, but that shouldn't be getting in the way of seeing that not everyone has to think that way. Things like these three comments might have a shot at bridging the empathic gap, but if that fails... I got nothing.

Comment by Gastogh on Caring about what happens after you die · 2012-12-18T15:26:56.950Z · LW · GW

This may seem like nitpicking, but I promise it's for non-troll purposes.

In short, I don't understand what the problem is. What do you mean by falling flat? That they don't understand what you're saying, that they don't agree with you, or something else? Are you trying to change their minds so that they'd think less about themselves and more about the civilization at large? What precisely is the goal that you're failing to accomplish?

Comment by Gastogh on Harry Potter and the Methods of Rationality discussion thread, part 17, chapter 86 · 2012-12-18T14:36:12.551Z · LW · GW

Wanting to kill a specific person may be a requirement for fueling the spell, sure, but I don't see why that necessarily entails everyone else being immune to what is essentially a profoundly lethal effect. Once a bullet is in the air, it doesn't matter what motivated the firing of the gun.

The bit about nobody mentioning collateral damage sounds like an argument from silence. I'll tentatively grant you the point about "no possible defense", but to me it seems like Moody could well have been talking about deliberate, cold-blooded murder rather than all possible circumstances. I mean, by the time of the "no possible defense" line he's already name-dropped the Monroe Act, which is nothing if not a big, fat exception.

Comment by Gastogh on Harry Potter and the Methods of Rationality discussion thread, part 17, chapter 86 · 2012-12-18T13:33:54.924Z · LW · GW

I don't remember anything about the spell not being able to hit anything but the intended target, either in canon or the MoRverse. What's your source? Or, if there is no explicit source, what makes it "obvious"?

Comment by Gastogh on 2012 Winter Fundraiser for the Singularity Institute · 2012-12-15T16:50:44.145Z · LW · GW

Gave $200 this time.

Comment by Gastogh on Analyzing FF.net reviews of 'Harry Potter and the Methods of Rationality' · 2012-11-09T15:57:34.514Z · LW · GW

I started reading too late to catch most notes of this sort by EY (and I often skip Author's Notes anyway), but from personal real-time observation of other fanfics it seems to be a tremendous help for authors to beg for reviews, in any and all senses of "begging". Asking for stuff is good, and holding updates hostage for the price of reviews is even better (assuming there actually are any readers). Giving public thanks to reviewers also works.

Comment by Gastogh on 2012 Less Wrong Census/Survey · 2012-11-09T15:14:50.608Z · LW · GW

Kudos to the one who formulated the questions. I found them unusually easy to answer, by and large.

I'm only puzzled at the lack of an umbrella option for the humanities in the question on profession. Were they meant to fall into the category of social sciences?

Comment by Gastogh on A My Little Pony fanfic allegedly but not mainly about immortality · 2012-09-10T07:30:45.444Z · LW · GW

Not being that well-versed in the MLP-verse, I didn't read the fic, but here's my two cents anyway:

If "I'm afraid of dying" didn't manage the intended emotional appeal, it may be because of those allegations of selfishness you already noted. One solution is to steer attention away from what death implies for her, and towards what it means for someone else. Altruism, if not overdone, should work better than self-interest (however enlightened). Here's an excerpt from one Damien's fanfic Ascension, which I felt worked quite well:

This Saria was just too young to understand. Paige didn't believe she had to explain herself to a child and her biases toward the Kokiri began to surface. "Well, Link is Hylian and he needs a Hylian to raise him and meet his needs. You're just a child, yourself, cursed to be young forever! What could you possibly know about children?"

Almost as soon as the words left her mouth, with a great suddenness the sky opened up and the rain began to pour down on the strange couple. Though her face remained angered, the fear that she was in a very magical place and that she may have overstepped her bounds was creeping into Paige's bones. Looking at the face of Saria and the tears she was sure were racing down the child's face, lost in the rainwater, Paige knew the skies were mimicking the mood of the Kokiri.

"Is that so wrong?" Saria asked in a quiet voice that despite the roar of the rain seemed to echo through out the woods. "Blacky" the white wolfos, sensing the mood of her friend, nuzzled closer to Saria. "Is it wrong to be a child forever? What is so great about being an adult?" a bite of anger was starting to enter into Saria's normally angelic voice and a peal of lightening boomed from the sky. "Working all day… Worrying about this or that… growing gray, weak, old… Watching yourself and everything and everyone you know slowly decaying. What is so great about dying? I don't want those things to happen to him."

Comment by Gastogh on Russian plan for immortality [link] · 2012-08-02T06:42:55.544Z · LW · GW

Can anyone with a better historical perspective on these things tell me if there's a single recorded occurrence of the year 2045 being mentioned as the magic deadline for some cool futuristic thing before Permutation City was published? It just seems like I'm seeing that date a whole lot in these contexts.

Comment by Gastogh on SI's Summer 2012 Matching Drive Ends July 31st · 2012-07-23T14:10:24.068Z · LW · GW

Thanks for posting this here. I hadn't been keeping tabs on the SIAI site itself and hadn't noticed the whole matching drive until this post.

Comment by Gastogh on Imperfect Voting Systems · 2012-07-20T16:15:26.431Z · LW · GW

Upvoting for capturing the remark for those of us who didn't catch it before it was edited out. Yvain has the best puns.

Comment by Gastogh on Moderate alcohol consumption inversely correlated with all-cause mortality · 2012-07-13T13:57:46.961Z · LW · GW

I agree that there's some merit to treating alcohol's effects on you and others separately, but if we do that, shouldn't we then also work to exclude some of its benefits as "social externalities"? Like the whole "alcohol -> socializing -> mental well-being" pattern?

Comment by Gastogh on Moderate alcohol consumption inversely correlated with all-cause mortality · 2012-07-13T13:43:51.804Z · LW · GW

Yeah, I guess the equation was misapplied there. The point was that the statistics won't (or might not) chalk the death up to alcohol like they should, which I'd say is a harmfully misleading omission; even if it's not a longevity problem for the drunk driver, it is for the other person.

Comment by Gastogh on Moderate alcohol consumption inversely correlated with all-cause mortality · 2012-07-12T11:49:40.507Z · LW · GW

Color me unconvinced. These "benefits" may come from any number of things, and taking alcohol as a general remedy may not be an advisable course of action because the problem is likely to be specific. Consider the following (I'll be using "longevity" as shorthand for "improvement WRT total mortality"):

  • Alcohol -> lowered social anxiety -> more socialization -> mental well-being -> longevity
  • Alcohol -> distraction from (seemingly) insurmountable problems -> mental well-being -> longevity
  • Alcohol -> [insert chemical that triggers some elusive beneficial biological process that causes your cells to degenerate slower or whatever] -> longevity

The last one seems least likely to me, and if you can get the social benefits through some other avenue, you may want to consider those first. I do recall reading up on some other classic studies that showed that red wine has some genuine antioxidant properties and such, but a significant impact on general longevity? I 'unno. You may still be better off using your beer bucks to buy supplements or exercise opportunities.

And that's all assuming the researchers were conscientious enough to control for the other stuff in the first place. Apologies in advance if they actually did, but I've been generally unimpressed with the rigor of studies that claim to show correlations between Purportedly-But-Not-Really-Simple Thing X and Complicated Gestalt Such As Total Mortality, and so I deliberately skimped on the conscientiousness myself. Corrections are welcome in case you guys did read the whole article. But in the meantime, try these on for size (a toy simulation of the first one follows the list):

  • Alcohol -> indication that your income level is comfortable enough that you can afford to buy alcohol -> selection bias -> longevity
  • Alcohol -> drink and drive -> don't die yourself, but WHOOPS, you just killed a pedestrian -> the statistics give the cause of death as "car accident" rather than "alcohol" -> longevity
  • Life sucks -> alcohol -> get wasted regularly rather than commit suicide -> getting wasted gets in the way of fixing the actual problem -> improved but still stunted longevity
  • Etc.
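
To make that first pathway concrete, here's a toy simulation. It's purely my own illustrative sketch with made-up numbers, not anything drawn from the study: income drives both moderate drinking and lifespan, alcohol itself does nothing causally, and yet the drinkers still come out ahead in the raw comparison.

    import math
    import random
    from statistics import mean

    random.seed(0)

    def sigmoid(x):
        return 1 / (1 + math.exp(-x))

    def simulate(n=100_000):
        """Income confounder only: no causal alcohol -> longevity link anywhere."""
        drinkers, abstainers = [], []
        for _ in range(n):
            income = random.gauss(0.0, 1.0)                  # standardized income
            drinks = random.random() < sigmoid(income)       # richer -> likelier to afford alcohol
            lifespan = 75 + 5 * income + random.gauss(0, 5)  # income matters; alcohol doesn't
            (drinkers if drinks else abstainers).append(lifespan)
        return mean(drinkers), mean(abstainers)

    d, a = simulate()
    print(f"drinkers: {d:.1f} years, abstainers: {a:.1f} years")
    # The drinkers "outlive" the abstainers by a few years purely through
    # the income confounder.
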
Comment by Gastogh on Cultural norms in choice of mate · 2012-07-11T08:01:19.337Z · LW · GW

People can mean one of two things when they talk about sex ratios; the first is the ratio at birth, and the second is the ratio among the people alive at a given moment. In much of the world men have a lifespan several years shorter than women (and lead riskier lives, though that may already be taken into account), which may indeed lead to women being the majority.

Comment by Gastogh on Cultural norms in choice of mate · 2012-07-10T09:30:49.352Z · LW · GW

The best solution I’ve heard started by looking at who benefits from this norm [older women] and wondering whether they could have contributed to it.

Young men benefit from the decreased competition in the mating market.

Another, less plausible, suggestion I’ve heard is that it’s to do with mental capacity. I find this unconvincing because we have few objections to a high-status man dating a beautiful but low-intellect woman.

The objections never seemed all that few to me. The negative connotations of the term "trophy wife" are pretty well-established, IMO.

Comment by Gastogh on Links about naturalism · 2012-07-08T19:53:21.965Z · LW · GW

The Happiness and Self-Help section might have Klevador's Be Happier in it. The post could serve as an index to many of the recurring themes in that section, as well as a springboard for further research, what with all those sources plugged at the end.

Comment by Gastogh on [SEQ RERUN] Existential Angst Factory · 2012-07-07T20:21:52.056Z · LW · GW

The two links to an article on Solving The Wrong Problem found in the original are dead. I'm doubtful of that article having much of value to add to what's right on the tin, but in case it did (or simply for the sake of completeness): does anyone know where it could be found? Googling the title returns thousands of hits, some of them blog posts by the same name by various authors.

Comment by Gastogh on My Algorithm for Beating Procrastination · 2012-07-07T19:59:40.020Z · LW · GW

I'm considering buying Parachute and Flow, but I have a few questions about the latter. Its author has written more than one book on the topic, so I'd like to know:

a) Is this the only book among his publications that I should read?
b) ...and if not, which ones should I read and what's the appropriate order?
c) Are you recommending this particular book over the others by Csíkszentmihályi because you've read them all and consider it the best, or because you've only read the one and found it worth the time even in isolation?

Comment by Gastogh on [SEQ RERUN] Lawrence Watt-Evans's Fiction · 2012-07-04T20:25:39.885Z · LW · GW

I believe EY mentioned somewhere that 'Verres' was a composite of Herreshoff and Vassar.

Comment by Gastogh on A Rationalist's Account of Objectification? · 2012-06-24T21:45:55.027Z · LW · GW

There are anecdotes where pseudo-explanations like "memory bias" just don't cut it—in order for you to confidently deny psi you have to confidently accuse them of lying,

Can you give an example or two of such anecdotes?

Comment by Gastogh on [SEQ RERUN] Is Morality Preference? · 2012-06-24T14:29:58.488Z · LW · GW

Am I the only one who has serious trouble following presentations in a fictitious dialogue format such as this? The sum of my experience of the whole Obert/Subhan exchange and almost every intermediate step therein boils down to the line:

Subhan: "Fair enough. Where were we?"

Comment by Gastogh on The Power of Reinforcement · 2012-06-21T10:26:57.298Z · LW · GW

On Skype with Eliezer, I said: "Eliezer, you've been unusually pleasant these past three weeks. I'm really happy to see that, and moreover, it increases my probability that an Eliezer-led FAI research team will work. What caused this change, do you think?"

Eliezer replied: "Well, three weeks ago I was working with Anna and Alicorn, and every time I said something nice they fed me an M&M."

Made me smile. Thanks for sharing.

Comment by Gastogh on Suggest alternate names for the "Singularity Institute" · 2012-06-19T12:33:52.399Z · LW · GW

I'd prefer AI Safety Institute over Center for AI Safety, but I agree with the others that that general theme is the most appropriate given what you do.

Comment by Gastogh on Local Ordinances of Fun · 2012-06-19T11:57:02.280Z · LW · GW

More seriously, Internet shows a lot about what people truly like, since there's so much choice, and it's not constrained by issues like practicality and prices. Notice total lack of interest in realistic violence and gore and anything more than one standard deviation outside of sexual norms of the society, and none of these due to lack of availability.

Eh? Total lack of interest? Have you ever been on 4chan? Realistic violence threads crop up regularly over there, and it's notorious for catering to almost any kind of sexual deviance the average person can think of. (Out of curiosity: what would you consider "more than one standard deviation" outside the sexual norms of the society? How about two?) I say almost, because 4chan is regulated and it isn't the go-to place for quite everything; child pornography nets its posters permabans pretty quickly and it doesn't have the dedicated guro boards of its Japanese counterpart. Which is to say nothing of blood sports like traditional bullfighting or cockfights, for which even a quick search on YouTube can offer some clips (relatively mellow and barely containing any actual blood as they are).

Stuff like that may not match the tastes of the majority, but that hardly implies a lack of interest. There is a practical issue with availability and it comes from laws, regulation and prices (in the case of adult content that passes the legal filters). There are heavy selection effects at play here, since there are penalties for uploading and hosting certain kinds of content, penalties that aren't handed out for uploading cute kitten videos on YouTube.

Comment by Gastogh on Computation Hazards · 2012-06-14T11:51:40.871Z · LW · GW

For example, suppose a computer program needs to model people very accurately to make some predictions, and it models those people so accurately that the "simulated" people can experience conscious suffering. In a very large computation of this type, millions of people could be created, suffer for some time, and then be destroyed when they are no longer needed for making the predictions desired by the program. This idea was first mentioned by Eliezer Yudkowsky in Nonperson Predicates.

Nitpick: we can date this concern at least as far back as Vernor Vinge's A Fire Upon the Deep:

Pham Nuwen's ticket to the Transcend was based on a Power's sudden interest in the Straumli perversion. This innocent's ego might end up smeared across a million death cubes, running a million million simulations of human nature.

Comment by Gastogh on Wanted: "The AIs will need humans" arguments · 2012-06-14T11:30:41.205Z · LW · GW

For example, Butler (1863) argues that machines will need us to help them reproduce,

I'm not sure if this is going to win you any points. Maybe for thoroughness, but citing something almost 150 years old in the field of AI doesn't reflect particularly well on the citer's perceived grasp of what is and isn't up to scratch in this day and age. It kind of reads like a strawman; "the arguments for this position are so weak we have to go back to the nineteenth century to find any." That may actually be the case, but if so, it might not be worth the trouble to include it even for the sake of thoroughness.

That aside, if there are any well-thought-out and not obviously wishful-thinking-mode reasons to suppose the machines would need us for something, add me to the interest list. All I've seen of this thinking is B-grade, author-on-board humanism in scifi where someone really, really wants to believe humanity is Very Special in the Grand Scheme of Things.

Comment by Gastogh on Rationality Quotes June 2012 · 2012-06-10T21:29:08.651Z · LW · GW

I'd say "moral atheism" is being used as an idiomatic expression; a set of more than one word with a meaning that's gestalt to its individual components. One of the synonyms for "atheism" is "godlessness", so by analogy "moral atheism" would just mean "morality-lessness".

Comment by Gastogh on Rationality Quotes June 2012 · 2012-06-10T19:18:02.299Z · LW · GW

It paraphrases the bottom line of the metaethics sequence - or what I took to be the bottom line of those posts, anyway. Namely, that one can have values and a naturalistic worldview at the same time.

Comment by Gastogh on Rationality Quotes June 2012 · 2012-06-09T16:15:59.481Z · LW · GW

Whatever doubt or doctrinal Atheism you and your friends may have, don't fall into moral atheism.

-Charles Kingsley

Comment by Gastogh on Suggestion: Less Wrong Writing Circle? · 2012-06-05T17:09:40.521Z · LW · GW

PM sent.

ff.net will do fine; the message system there runs faster than my email, and a prolonged discussion would probably clutter other formats such as my primary email inbox or the threads around LW.

Some common forum might be necessary for projects whose commentariat consists of several people, so as to avoid people stating the same points over and over, but the idea of mailing lists doesn't hold much appeal - I get huge noise-to-signal ratios from some of my university mailing lists and I'd rather not add to the list of mailboxes I have to sift through on a regular basis. I've never tried LJ; does it work differently and is it any better?

Comment by Gastogh on Suggestion: Less Wrong Writing Circle? · 2012-06-05T07:54:50.073Z · LW · GW

Sounds good. Will you be uploading yours here à la Chapter One Part One up there, or is there another site you're using?

Comment by Gastogh on One possible issue with radically increased lifespan · 2012-06-04T13:41:09.165Z · LW · GW

I'm starting to feel I don't know what's being meant by uncertainty here. It is not, to me, a reason in and of itself either way - to push the button or not. And not being a reason to do one thing or another, I find myself confused at the idea of looking for "reasons other than uncertainty". (Or did I misunderstand that part of your post?) For me it's just a thing I have to reason in the presence of, a fault line to be aware of and to be minimized to the best of my ability when making predictions.

For the other point, here's some direct disclosure about why I think what I think:

  • There's plenty of historical precedent for conflict over resources, and a biological immortality pill/button would do nothing to fix the underlying causes behind that phenomenon. One notable source of trouble would be the non-negligible desire people have to produce offspring. So, assuming no fundamental, species-wide changes in how people behave, if there were to be a significant drop in the global death rate, population would spike and resources would rapidly grow scarcer, leading to increased tensions, more and bloodier conflicts, accelerated erosion, etc. (See the back-of-envelope sketch after this list.)

  • To avoid the previous point, the newfound immortality would need to be balanced out by some other means. Restrictions on people's rights to breed would be difficult to sell to the public and equally difficult to enforce. Again, it seems to me that expecting such restrictions to be policed successfully assumes more than expecting them to fail.
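
To put rough numbers on the population point, here's a back-of-envelope sketch. The birth and death rates are assumed round figures for the present-day world (roughly 19 births and 8 deaths per 1000 people per year); treat them as illustrative, not authoritative.

    import math

    birth_rate = 0.019   # assumed: ~19 births per 1000 people per year
    death_rate = 0.008   # assumed: ~8 deaths per 1000 people per year

    def doubling_time(growth_rate):
        """Years for a population growing exponentially at this rate to double."""
        return math.log(2) / growth_rate

    print(doubling_time(birth_rate - death_rate))  # ~63 years under the status quo
    print(doubling_time(birth_rate))               # ~36 years if deaths drop to zero

Even with no change in the birth rate, simply zeroing out deaths nearly halves the doubling time.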

Am I misusing the Razor when I use it to back these claims?