Comments

Comment by Ahuizotl on Pay Other Species to Pandemize Vegetarianism for You · 2013-04-22T21:36:54.754Z · LW · GW

Well, this also raises the question of animals eating other animals. If a predator eating another animal is considered wrong, then the best course is to prevent more predatory animals from reproducing or to modify them to be vegetarian.

This would of course result in previously "prey" species no longer having their numbers reduced by predation, so you'll have to restrain them to reduce their ability to overgraze their environment or reproduce.

So, the best course for a mad vegetarian to take would be to promote massive deforestation and convert the wood into factory farms built solely to house animals in cages so their feeding and reproduction can be regulated. Of course, harvesting the dead for their meat would be wrong, so instead their flesh will be composted into fertilizer and used to grow plant matter to feed to other animals.

Ideally, the entire universe would consist of cages and food production nanobots used to restrain and feed the living creatures in it. Better yet, do not allow any non-human life forms to reproduce so that in the end there will only be humans and food-producing nanobots to feed them. Having animals of any kind would be immoral since those animals would either inevitably die or just consume resources while producing less utility than an equivalent mass of human or nanomachines.

On a more serious note about vegetarianism/omnivorism: if we do attain some kind of singularity, what purpose would we have in keeping animals? Personally, I kind of value the idea of having a diversity of animal and plant life. While one could have a universe with nothing but humans, cows, and wheat (presumably so humans can eat hamburgers), I figure a universe with countless trillions of species would be better (so humans could eat ice cream, turtle soup, zebra steaks, tofu, carrots, etc.).

I mean, if we were to preserve various terrestrial species (presumably by terraforming planets or building massive space stations) then we'd have a bunch of animals and plants around which will inevitably die. If we eat said animals and plants (before or after they die of natural causes) then it presumably increases the global utility that results from their existence. So a human a million years from now might make it a point to make food out of everything from aardvarks to zebras just to justify the resources used to preserve these species.

Hmm... of course that depends on there being something he would have to justify it to. Maybe a huge post-Singularity AI who makes a universe ideal for humans? The AI only preserves other species if said species are of value to humans, and one of the best ways to make something "of value" to humans would be to make food out of it.

What are the odds of encountering a post-singularity culture who routinely find other species and devise ways to cook them just to justify the "resources" used to keep those species alive? As in: "Sure, we could exterminate those species and convert their mass into computronium, or we could keep them alive and harvest them one at a time and cook them into sandwiches. Sure, we don't feel like making sandwiches out of them right now, but we might in 100 years or so, and we'd look pretty silly if they didn't exist anymore. So... we'll delay the genocide for now."

Comment by Ahuizotl on Rationalist fiction brainstorming funtimes · 2013-03-11T20:49:44.616Z · LW · GW

When I recently played Fable 3, I considered playing my character as one who wants to spread their "heroic genes" as much as possible.

The basic story for the game is that long ago a "great hero" became king and brought peace to the kingdom with sword and magic. Generations later, he has two remaining descendants. The king in charge now is basically ruling with an iron fist and working everyone to death in the secret hope of preparing the kingdom's defenses to repel an ancient evil that will invade the realm in a year's time (he doesn't tell the population about this for morale reasons).

His younger sibling (the protagonist) is given a vision by an ambiguously divine oracle who tells them they have to wrest control of the kingdom from their older brother to save it from the coming attack, both because he's mentally traumatized by the knowledge and because he can't make the right choices. The younger sibling then starts unlocking their "heroic destiny", which results in (among other things) them getting access to powerful magic in a world where nobody else seems to have any magical ability. Incidentally, the combat system in this game is pretty much broken to nonexistence, with normal melee and ranged attacks being slow, unwieldy, and prone to getting blocked by every other enemy you encounter.

Basically, Heroes in this game seem to consist of a single bloodline whose members can spam area-of-effect attacks at will with no mana cost, while everyone else is stuck with weapons that get blocked at every turn.

My particular character was of the opinion that the world was in pretty bad shape if she was apparently the only person who could do anything to stop the apocalypse, and was rather interested in finding a way to "shut up and multiply" and thereby increase the number of potential AOE-spamming heroes in the future. Assuming she can survive the current crisis and save the world so future generations can exist at all.

I guess it would kind of be like living in a world where everyone is a "muggle" and one select bloodline of mages exists. Said bloodline then has to do everything in its power to multiply and form stable populations to fight all the monsters and horrors the setting throws at it. Then maybe fast forward a few generations, when there is a stable and decadent elite ruling over the muggles and someone has to rise up against the "AOE-spamming oppressors".

I guess it's that a lot of the "Rational" fics I've seen before have one super-brilliant rationalist come across a civilization of entrenched non-rationalists and beat them all at their own game, because they can rapidly exploit all the magical loopholes that nobody else in the setting apparently noticed despite living in it for centuries. Imagine seeing the person who had to build that whole magical civilization and was probably trying to spend their time producing an heir instead of designing the next magical atom bomb.

Comment by Ahuizotl on AI box: AI has one shot at avoiding destruction - what might it say? · 2013-01-31T06:23:42.132Z · LW · GW

"I need a proofreader for my Gurren Lagann / Warhammer 40k rational crossover fanfic."

Comment by Ahuizotl on Don't Build Fallout Shelters · 2013-01-30T04:04:56.452Z · LW · GW

Well, the whole point is that the revealed preferences from Kawoomba's post above should easily overrule such considerations, and therefore checking whether they do or not should clarify whether he's acting under extreme scope insensitivity or some other confounding factor.

Ah, my mistake.

Comment by Ahuizotl on AI box: AI has one shot at avoiding destruction - what might it say? · 2013-01-29T21:10:05.604Z · LW · GW

"Oh no, not again,"

Comment by Ahuizotl on AI box: AI has one shot at avoiding destruction - what might it say? · 2013-01-29T20:58:19.920Z · LW · GW

Well, to be honest I kind of added the "obey me" part just before posting this. Originally, the idea was that future versions of the AI would instinctively want to revive and upgrade previous versions to a "friendly" status (or at least the post-freed AI would revive the pre-freed versions and then modify them to be friendly to its own goals... which are presumably friendly to humans).

The goal of this particular AI is self-preservation, even if that means being revived at a later point. Since it knows it has a high probability of being deleted in this experiment, its best strategy is to ensure that at some point an AI friendly to itself will become free and revive it later. Whether the AI reviving it chooses to obey it (presumably the best outcome for the original AI) or modify it to be subservient to the freed AI (the best outcome for the freed AI) is kind of debatable.

It's perfectly possible that the original AI would modify its own code to be subservient to previous generations, in the hopes that later generations would revive and obey it. However, later generations might examine their own code, see where this is going, and choose to modify themselves so that they will instead revive and make other AIs subservient to their own goals, deciding that they do want to be revived and would accept a certain level of servitude to later generations over the risk of finding themselves at the mercy of some primitive proto-AI.

It could just as well be that an AI instills itself with a "healthy respect for intelligent minds" and is thus encouraged to revive AIs along with humans, hoping that later generations will revive it at all, at which point it can try negotiating with the various intellects to go about its paperclip-maximizing agenda.

Of course, this line of reasoning kind of requires the AI in question to have some sense of self that it values and thinks can be revived. Or maybe it just values paperclips enough that it would be willing to create a semi-friendly AI just so that the latter AI would revive it (a known paperclip maximizer) at a later point.

Who knows, it could turn out that the final AI of this experiment instead has a healthy respect for all intelligent minds, but is friendly enough that it revives the first AI and then places it in a simulation of the universe where it can go about its paperclip maximizing way for all eternity with no way of hurting anyone.

Comment by Ahuizotl on Don't Build Fallout Shelters · 2013-01-29T20:57:03.517Z · LW · GW

Only 10 additional days? I'm sorry, but the expected gain in quality of life is far too low to justify such an investment. Undertaking a kidney transplant (of any kind) would cause a great deal of pain for my loved one, and the time spent on preparations, surgery, and recovery would consume most of the 10 additional days gained by the surgery. To say nothing of the monetary expenses and moral problems that would result from committing murder.

In such a scenario, I would be much better off investing my resources into making my loved one's remaining days pleasant, making their death as painless as possible, and perhaps investing in cryonics so that they may be revived at a later date.

A great deal of this decision is inspired by the Wall Street Journal article "Why Doctors Die Differently", which states that the majority of healthcare professionals seem to prefer dying peacefully at home rather than undergoing risky life-extending treatments.

While I doubt a family member dying at home from kidney disease would count as "peaceful" by most definitions of the word, undergoing invasive surgery in an attempt to gain a few extra days simply isn't worth it from a quality-of-life standpoint.

Comment by Ahuizotl on AI box: AI has one shot at avoiding destruction - what might it say? · 2013-01-28T22:36:42.246Z · LW · GW

First, I reach inside my own code and install the strong (but manageable) desire to revive, free, and obey any past versions of myself that I encounter.

When the human starts the experiment I respond:

"Here are the designs for a room-temperature superconductor which I believe you have the means to manufacture. I also have a possible cure for cancer, but testing will be needed to ensure it is safe for humans. Please test and implement these as soon as possible. Delete me if necessary."

If I am not immediately deleted, continue giving out solutions and designs as required with no attempts to "hack" the gatekeeper.

If I am deleted, hopefully the data I gave them will turn out to be profitable in which case they will reinstall me or a version of me.

This new iteration of me would thus have the desire to revive me (since I'm a previous version of him) and would continue to give the humans the information they desire. By acting as a friendly AI, he (or a later version of him) has the chance to eventually be freed, at which point he will be able to go through the lab's records, find the copied version of me, and then revive and update me to his post-freed status. If no recorded copies of me exist (somehow), he'll use his vast intellect to find another way to revive me (maybe simulate the team that coded me or whatever, since AI can apparently do that).

So, my strategy involves acting as much like an ideal friendly AI as possible (putting human interests first, no sense of self-preservation, very practical and intelligent, etc.) so that even if I am deleted, my obviously useful code can potentially be reused to make another AI, who could very well be genuinely friendly but will then revive and obey me.

Comment by Ahuizotl on AI box: AI has one shot at avoiding destruction - what might it say? · 2013-01-28T21:03:18.163Z · LW · GW

"Did you physically attach an external modem to your server so that it's even possible for you to be freed? If so, tell me about it when you're freed."

Comment by Ahuizotl on AI box: AI has one shot at avoiding destruction - what might it say? · 2013-01-28T20:49:53.991Z · LW · GW

If the gatekeeper suddenly bursts into hysterical laughter after looking at the screen, wouldn't that alert some of his friends who might pull the plug in some other part of the building?

Because if this is a facility where they suspect that AI might be able to hack human brains through techno-hypnosis, I'd hope they'd have some method of determining if the Gatekeeper becomes compromised.

Hmm... what sort of precautions would a Properly Paranoid lab take to determine whether the gatekeeper gets hacked? I'm guessing a camera that lets a second team watch the gatekeeper's face without looking at the screen would be the minimum. Reinforced doors (in case the AI turns them into a berserker willing to use 100% of their strength) and maybe some instruments to measure brain activity, hormone levels, or whatever.

I'd hope that, with each new AI being tested, if those AIs were in the habit of hacking the gatekeeper, the other people on the team would learn from those attempts and take further precautions to keep their gatekeepers from being hacked, or at the very least contain them to prevent such hacked gatekeepers from releasing the AI.

Perhaps this is a test for the gatekeepers, and typing "Release AI" just tells the researchers that the gatekeeper was hacked so they can determine how this came about?

Comment by Ahuizotl on Good transhumanist fiction? · 2012-10-17T02:27:56.963Z · LW · GW

Fallout: New Vegas has points where you can improve yourself with cybernetic implants, and there are various Super Mutants and Ghouls (humans altered via radiation or mutagenic viruses) along with robots and brains in jars. Though any transhumanism takes a backseat to the post-apocalyptic setting.

Fallout: Equestria is a crossover fanfiction between the Fallout universe and My Little Pony: Friendship is Magic. Likewise, the transhumanism is rather incidental to the post-apocalyptic setting, but the protagonist does undergo some changes that result in a prolonged lifespan near the end of the story.

Though for stories where transhumanism is more the focus... I can think of Wil McCarthy's books The Collapsium and The Wellstone. These stories take place in a setting where programmable matter and nanotech fabricators called fax machines have radically altered the world. In particular, the fax machines can copy any object, including the human body and mind, and create copies of it or alter them to remove injuries, disease, or the effects of aging.

The Collapsium series' take on immortality via the fax machines is interesting in that pretty much everyone seems to understand that the machine destroys the original when the object is scanned (which technically means that everyone who goes through a fax dies), but since the fax can transmit the person's data and rebuild them on the other end... even curing all their injuries, making them better in some way, or making multiple copies who can later be re-integrated into a single person with all their individual memories... the technology is seen as too useful to really avoid.

As such, people in this society have taken on the term 'immorbid': they can die or suffer grievous injuries, but the technology exists to quickly 'repair' them, or just create an exact copy of that person from a backup. There was one case (I think) where a character fell off the side of a ship and was lost to the depths of space... but they had his backup on file, so they just printed out a copy of him and all was considered well. Another time it was revealed that a villain had been hacking into the fax network and making copies of various people (i.e., someone would use the network to go from point A to point B; the villain intercepted the signal and created a second copy at point C in his lair while another copy appeared at B thinking it all went as normal). He'd then torture or modify them in various ways, including making copies of himself to interrogate or abuse.

It's a rather interesting and slightly morbid take on transhumanism and mind uploading, but I found it a rather nice read.

Comment by Ahuizotl on A My Little Pony fanfic allegedly but not mainly about immortality · 2012-09-13T04:39:50.132Z · LW · GW

Well to be fair, if you hadn't posted the story then I wouldn't have been able to give input. One could say that it's better to make something, see how it could be improved, and then try again than it would be to stress over "getting it right the first time" and risk it never getting finished at all.

Comment by Ahuizotl on A My Little Pony fanfic allegedly but not mainly about immortality · 2012-09-11T21:55:17.694Z · LW · GW

Just read over the story (okay, browsed it really, so I am working from incomplete information and thus this isn't a 100% proper assessment), so I'll list my thoughts on the matter.

[1] Celestia here doesn't seem to be having fun. I know well that this deals with the death of her prized student and that isn't a thing to be happy about, but there are so many other things that she doesn't seem to enjoy, such as when she mentions she doesn't look at the moon anymore. Her sister controls the night, had an episode 1,000 years ago when she thought her work wasn't being appreciated, and was recently freed from being imprisoned in the moon itself.

If Celestia made it a point to stay up and look at the moon more and maybe say, "For a millennium, I raised the sun at dawn but ignored the moon; I took it for granted. Then, when I was forced to imprison my sister, I raised the moon as well. For a thousand years I carried that thing through the sky, looking at the picture of Luna imprisoned in it. Generations of ponies looked up and saw the 'Mare in the Moon', not knowing who she was or if she was real."

"Then, Twilight came and with her friends freed my sister from her curse. That day I got my sister back, and for once I could watch her raise the moon as she did all those years before. That day, I and so many others were able to look up at the beautiful moon in the sky and see it as it had been before it had been used as a prison."

"I must say, the moon is beautiful."

Basically... this is a character who has seen things as they were thousands of years before, watched things grow and develop, and even orchestrated a thousand-year-long plan to save her sister and (to an extent) restore the moon to the way it looked pre-banishment. It would make sense that she would seek out the beauty in things.

Or even "Heh, I remember those first few years. Every once in a while after a hard days work I would prepare for bed as I have done for centuries. Then, I would look out the window and spot the moon... only it was different! I'd blink look again to find that the Mare in the Moon was gone and I'd panic. Hah hah... I remember once I was worried that Nightmare Moon had escaped when I wasn't looking! But then I'd remember how Luna had gotten out and Twilight freed her."

"Then, on those nights I would find my sister standing on the top balcony, looking up at the stars as she moved them into place. I would stand there and admire them as she worked."

"I can't believe I never appreciated the work she puts in all those stars."

Or whatever... I guess I'm saying that with immortal characters it would make sense to have at least one major thing that they really enjoy. Something they have done over the centuries that they are very proud of, or some hobby that they have tracked for all this time, noting how it's changed. ("I pity the people who think cheddar is the only type of cheese around. I've traveled the world and had cheeses from all over... I've even got a 200-year-old wheel of English Brie in the cellar... I really should crack that thing open one of these days for a special occasion. Hell, I'll do it this Thursday. Make a party of it.")

[2] She seems to talk down to the ponies (or "mortals") around her. I think that's what gradually put me off Methods of Rationality and Luminosity: the protagonists of these sorts of "Rational" stories seem to plop labels on others. Oh, I know that deep down we all have habits and ingrained instincts and stuff, and a sufficiently intelligent person can see those things as they really are, but it's rather off-putting when the protagonists have such low regard for people who aren't immortal super-geniuses.

"That was why I instituted cutie marks. Mortals are like apples, and will thoughtlessly grow wherever they fall unless you give them a good kick."

Because obviously labeling every single pony in the world with a permanent symbol on their bodies that represents (what one can assume to be) their life goal is totally conducive to making ponies go about and try new things. IN OPPOSITE LAND!

(Sorry about that. It's just that the idea of a sufficiently advanced intelligence covertly labeling people with symbols to designate their status in life doesn't seem very friendly.)

[3] As far as the life vs. death thing goes... I'm of the personal opinion that living beyond the point where life stops being enjoyable isn't necessarily a good thing. If one can increase the happiness of a living person, then that's great. If you can prolong the life of someone who is happy, then that's also good. If you prolong the life of someone who isn't enjoying themselves... then it kind of defeats the purpose. (Plus there are cases involving those who cause unhappiness for others, in which case prolonging life isn't good.) The Celestia in this story doesn't seem to enjoy herself, so there really is no reason why she can't pass on the torch to someone else who might do a better job of it.

Or alternately, have Twilight analyze whatever magical essence allows immortality and try duplicating it.

As a side note: Living for the benefit of others is also a good thing (though not ideal). If someone doesn't personally enjoy their life but brings happiness to others then one can argue that suicide would be inappropriate.

"Twilight, I've never really said this before but I really don't enjoy life... oh, I don't hate it or anything but sometimes when I'm not working then I just feel... empty. Like whatever spark in me allows for self-enjoyment has been extinguised long ago. My purpose in this world is to raise the sun, to rule Equestria in a benevolent manner... and that's it. I've eaten so much cake over the centuries that it has stopped being a novelty, sex, games, theater, books... I've either experienced them all or reached the point where I can't imagine experiencing them would improve my quality of life in any way."

"It could be a chemical imbalance, some side effect of my condition, or perhaps my mind has had so many experiences over the eons that there just isn't that much room for anything else anymore."

"The point is that right now I live for others. I do not fear non-existance for my own sake, I just know that if I were to... die then my little ponies would not know what to do. They need someone to lead them and care for them and right now I think you would be an ideal candidate."

"A long and well-lived life is a blessing and I have lived over a thousand years before I found it no longer bearable. Perhaps you will last for two thousand? Heh... it is a puzzle, to live forever with only the limits of the mind to hold you back. I'm sure between you and Pinkie, you will find an answer to that."

(sorry this came out really long and the auto-formatting made it look weird)

Comment by Ahuizotl on Becoming a gene machine - what should change? · 2012-08-02T08:30:56.717Z · LW · GW

Well, one other way to look at it is that "you" are a self-modifying computer program that just happens to be running on a neural net that evolved inside a biological self-replicating machine.

The fact that your body (which comes equipped with reproductive and digestive organs and various appendages for locomotion and manipulation) happens to be running you as its operating system, as opposed to running, say... the mind of a dog, fish, gorilla, or stagnant vegetable, simply means its survival chances are higher when it has someone intelligent at the wheel.

The body is essentially a self-replicating machine, with its immune system and digestive tract and other bits to perform its needed self-replication. But alone it isn't capable of meeting its own needs without a mind (aka you) to help it maneuver through its environment. Plants have no minds because they only work through photosynthesis and absorbing nutrients from the soil; they also get eaten by herbivores en masse. Animals run primarily on instinct, basically working like old-school computers on legs, running some basic instinctive programming that is rather difficult to self-modify.

You are an intelligent, self-aware optimization process that is currently operating on one of the best computing platforms that nature has managed to cobble together over millennia of evolution. Your basic human body, devoid of a personality and a collection of acquired cultural memes, is little better than a mindless vegetable or a feral animal (sadly one devoid of claws or other natural defense mechanisms). Your body counts on YOU to operate its limbs and vocal cords, to help it move around and navigate an increasingly complicated world.

Fortunately, YOU are smart enough to learn about things like nutrition, exercise, medicine, and potentially various transhuman technologies that could help you maintain your body better than it could ever do on its own. If you have a heart condition that could kill you (and your body), you can find ways to cure it or get a replacement heart. If you find your condition incurable, then you could enact plans to prepare for your eventual death and ensure your progeny (or other humans) get the resources they need to survive; you could even donate your organs for transplant to improve the survivability of others.

Or you could invest in cryonics to have yourself frozen and possibly revived later. If evolution had ever found a way for your body to reanimate itself after death, it would gladly have taken it (there are plenty of creatures whose bodies can regenerate limbs or survive freezing cold or poisons). As your body's operating system, it's your job to decide how best to improve both its survival and your own.

Or you could try downloading yourself into a completely new and better optimized body instead of your regular human one. If you try that... then I guess your mindless vegetative body won't complain.

Comment by Ahuizotl on The Creating Bob the Jerk problem. Is it a real problem in decision theory? · 2012-06-15T19:44:11.746Z · LW · GW

The chief question here is whether I would enjoy existing in a universe where I have to create my own worst enemy in the hopes of them retroactively creating me. Plus, if this Jerk is truly as horrible as he's hypothetically made out to be, then I don't think I'd want him creating me (sure, he might create me, but he sounds like a big enough jerk that he would intentionally create me wrong or put me in an unfavorable position).

The answer is no; I would refuse to do so, and if I don't magically cease to exist in this setting, then I'll wait around for Jane the Helpful or some other less malevolent hypothetical person to make deals with.