Posts

Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future 2010-03-01T02:32:33.652Z

Comments

Comment by inklesspen on The Santa deception: how did it affect you? · 2010-12-20T23:03:40.112Z · LW · GW

My folks raised us in borderline-fundamentalist Christianity, which made the Santa myth nearly as much of a non-starter as I expect it was for those commenters who were raised Orthodox Jewish.

If and when I have children of my own, I intend to use the Santa myth as an exercise in invisible-dragon baiting, nothing more.

Comment by inklesspen on The Strong Occam's Razor · 2010-11-11T18:39:19.401Z · LW · GW

I don't think that argument is even valid. After all, I have the option of putting a human in a box. If I do, one hypothesis states that the human will be tortured and then killed. The other hypothesis states that the human will "vanish"; it's not precisely clear what "vanish" means here, but I'm going to assume that since this state is supposed to be identical in my experience to the state in the first hypothesis, the human will no longer exist. (Alternative explanations, such as the human being transported to another universe which I can never reach, are even more outlandish.)

In either case, I am permanently removing a human from our society. On that basis alone, in the absence of more specific information, I choose not to take this option.

To make this argument work, I think you will have to come up with a scenario where 'the action coupled with the more complicated explanation' is more attractive than both 'the action with the simpler explanation' and 'no action'.

Comment by inklesspen on Strategies for dealing with emotional nihilism · 2010-10-11T04:44:08.226Z · LW · GW

I believe he does take medication; I remember him saying his psychologist started him on Abilify and he was terrified that Abilify would cause permanent muscle tics, as apparently it does in rare cases.

Comment by inklesspen on Strategies for dealing with emotional nihilism · 2010-10-11T03:09:14.306Z · LW · GW

As I said to Perplexed, he lives halfway across the continent. I do know his name and mailing address, but I talk with him exclusively over IRC. I know some of the therapies and medicines he's taken, but I don't know what he's taking right now.

Part of my reluctance to take matters into my own hands is that I don't know how to reliably tell a qualified psychiatrist or psychologist from a quack. I can look up what Wikipedia says about a specific therapy like ECT, but how do I know whether what it says is accurate enough to trust my friend's life to it? As the status quo seems unlikely to take a catastrophic turn for the worse, I'm reluctant to do anything that would change it without either a strong confidence in its efficacy or at the very least a strong confidence that it will do no harm.

Comment by inklesspen on Strategies for dealing with emotional nihilism · 2010-10-11T03:03:02.908Z · LW · GW

He lives halfway across the continent, and he has been talking like this for months without doing physical harm to himself. Is it right for me to cause the intrusion into his life such a call would surely bring without stronger evidence that it's necessary?

Comment by inklesspen on Strategies for dealing with emotional nihilism · 2010-10-11T01:42:40.302Z · LW · GW

I have a friend who suffers from severe depression. He has stated on many occasions that he hates himself and wants to commit suicide, but he can't go through with it because even that would be accomplishing something and he can't accomplish anything.

He has a firm delusion that he cannot do anything worthwhile, that the world is going to hell in a handbasket and nothing can possibly be done about it by anyone, and that everyone else feels the same way he does but is repressing it.

This makes talking with him about many subjects exceedingly difficult, as he will ignore or rationalize away actual evidence as being, at best, an exception to the rule of pessimism. It's like talking to a patient with a disorder like somatoparaphrenia, where the ordinary person can see quite obviously that the patient has a problem, but the patient confabulates. He literally cannot see reason on these subjects -- his brain or this deep-seated delusion won't let him.

To the best of my knowledge he has been seeing therapists and psychologists and they have been unable to help him.

How should a rationalist deal with such a situation? Even if Singularity-level technology were available to repair the causes of his depression, he would refuse it if able. If such technology were available, would it be ethical to improve his quality of life against his will by changing his mind? I must confess I am almost at the point of not protesting his desire for suicide; he seems genuinely unhappy, and incapable of changing that fact of his own volition.

Comment by inklesspen on Experts vs. parents · 2010-09-29T17:36:31.853Z · LW · GW

Interesting, but the pessimist in me is noting "even a stopped clock is right twice a day".

For every one study like this, there are hundreds of parents yelling that they noticed their kids developed autism right after getting vaccinated, or that they're sure the power lines near their house are affecting their kids' growth, or some other such nonsense.

I think you need to be far less general; not every parent is an expert on their child's behavior, let alone their child's health.

Comment by inklesspen on The conscious tape · 2010-09-16T21:52:24.201Z · LW · GW

I think the best definition of consciousness I've come across is Hofstadter's, which is something like "when you are thinking, you can think about the fact that you're thinking, and incorporate that into your conclusions. You can dive down the rabbit hole of meta-thinking as many times as you like." Even there, though, it's hard to tell if it's a verb, a noun, or something else.

If we want to talk about it in computing terms, you can look at the stored-program architecture we use today. Software is data, but it's also data that can direct the hardware to 'do something' in a way that most data cannot. There is software that can introspect itself and modify its own code (this is used both for clever performance hacks and for obfuscation).
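To make the introspection point concrete, here is a minimal sketch in Python (my own illustration, not something from the original discussion; it assumes the function is defined in a file on disk, since the standard inspect module reads the source file):

    import inspect

    def reflective() -> str:
        # Retrieve this function's own source code at runtime.
        source = inspect.getsource(reflective)
        # Fold the result of that introspection into the output.
        return f"I am defined in {len(source.splitlines())} lines of code."

    print(reflective())

Self-modifying code goes a step further than this read-only introspection, for instance by rewriting its own bytecode, but even the read-only case shows software treating itself as data.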

My view is that consciousness is a property of my thought processes — not every thought will have the same level of introspection, or even introspection at all. It's something that my mind is doing (or isn't doing, depending on what I'm thinking about). The property we ascribe to entities and call 'consciousness' I would instead term 'the ability to think consciously' or 'the ability to have consciousness'. It seems to me that my thought processes are software running on the hardware of the human brain. If my mind were uploaded, its software state written to permanent storage, and the running copy stopped, I would say that the recorded state still has the ability to think consciously, but it is not doing so, since it is not thinking at all; at that time it is not conscious. (But of course it could be, if started back up.)

Comment by inklesspen on More art, less stink: Taking the PU out of PUA · 2010-09-10T02:42:56.035Z · LW · GW

That's even more concise, but I think a bit too narrow.

Comment by inklesspen on More art, less stink: Taking the PU out of PUA · 2010-09-10T00:55:20.508Z · LW · GW

As you mention in your second footnote, the idea of a 'pickup artist' carries unfortunate connotations. I'd suggest changing your headline to something you won't have to follow with "it's not really what you thought when you first heard it".

Perhaps "Optimizing interaction techniques for social enjoyment"? This has the benefit that while the pickup artist is perceived as interested in social engagement as a means to orgasm, practitioners of the techniques you discuss would be perceived as interested in social engagement as an end in itself.

Comment by inklesspen on Consciousness of simulations & uploads: a reductio · 2010-08-27T00:04:05.518Z · LW · GW

Do you also argue that the books on my bookshelves don't really exist in this universe, since they can be found in the Library of Babel?

Comment by inklesspen on Consciousness of simulations & uploads: a reductio · 2010-08-26T23:27:35.314Z · LW · GW

Where do those digits of pi exist? Do they exist in the same sense that I exist, or that my journal entries (stored on my hard drive) exist? What does it mean for information to 'exist'? If my journal entries are deleted, it is little consolation to tell me they can be recovered from the Library of Babel — such a recovery requires effort equivalent to reconstructing them ex nihilo.

In one sense, every possible state of a simulation could be encoded as a number, and thus every possible state could be said to exist simultaneously. That's of little comfort to me, though, if I am informed that I'm living in a simulation on some upuniverse computer, which is about to be decommissioned. My life is meaningful to me even if every possible version of me resulting from every possible choice exists in the platonic realm of mathematics.
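As a toy illustration of the point about addresses and effort (my own sketch in Python, not part of the original exchange): if you enumerate all possible byte strings, ordered by length and then lexicographically, the 'address' of a text in that enumeration carries exactly as much information as the text itself, so being told where to find a lost journal entry in the Library of Babel is no better than being handed the entry.

    def address_of(text: bytes) -> int:
        # Index of `text` in the enumeration of all byte strings
        # (bijective base-256 numeration; the empty string is index 0).
        n = 0
        for b in text:
            n = n * 256 + b + 1
        return n

    def text_at(address: int) -> bytes:
        # Inverse: recover the byte string stored at a given index.
        out = []
        while address > 0:
            address, digit = divmod(address - 1, 256)
            out.append(digit)
        return bytes(reversed(out))

    entry = b"Dear diary: today I argued about simulations."
    assert text_at(address_of(entry)) == entry
    # The address has one base-256 digit per byte of the entry, so the
    # "library index" of a lost text is exactly as big as the text.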

Comment by inklesspen on Consciousness of simulations & uploads: a reductio · 2010-08-26T23:13:26.005Z · LW · GW

Suppose I am hiking in the woods, and I come across an injured person who is unconscious (and thus unable to feel pain), and I leave him there to die of his wounds. (We are sufficiently out in the middle of nowhere that nobody else will come along before he dies.) If reality is large enough that there is another Earth out there with the same man dying of his wounds, and on that Earth I choose to rescue him, does that avert the harm to the man I left to die? I feel this is the same sort of question as many-worlds. I can't wave away my moral responsibility by claiming that in some other universe, I will act differently.

Comment by inklesspen on Consciousness of simulations & uploads: a reductio · 2010-08-26T22:40:43.945Z · LW · GW

All other things being equal, if I am a simulated entity, I would prefer not to have my simulation terminated, even though I would not know if it happened; I would simply cease to acquire new experiences. Reciprocity/xenia implies that I should not terminate my guest-simulations.

As for when the harm occurs, that's a nebulous concept hanging on the meanings of 'harm' and 'occurs'. In Dan Simmons' Hyperion Cantos, there is a method of execution called the 'Schrödinger cat box'. The convict is placed inside this box, which is then sealed. It's a small but comfortable suite of rooms, within which the convict can live. It also includes a random number generator. It may take a very long time, but eventually that random number generator will trigger the convict's death. This execution method is used for much the same reason that most rifles in a firing squad are unloaded — to remove the stress on the executioners.

I would argue that the 'harm' of the execution occurs the moment the convict is irrevocably sealed inside the box. Actually, I'd say 'potential harm' is created, which will be actualized at an unknown time. If the convict's friends somehow rescue him from the box, this potential harm is averted, but I don't think that affects the moral value of creating that potential harm in the first place, since the executioner intended that the convict be executed.

If I halt a simulation, the same kind of potential harm is created. If I later restore the simulation, the potential harm is destroyed. If the simulation data is destroyed before I can do so, the potential harm is then actualized. This either takes place at the same simulated instant as when the simulation was halted, or does not take place in simulated time at all, depending on whether you view death as something that happens to you, or something that stops things from happening to you.

In either case, I think there would be a different moral value assigned based on your intent; if you halt the simulation in order to move the computer to a secure vault with dedicated power, and then resume, this is probably morally neutral or morally positive. If you halt the simulation with the intent of destroying its data, this is probably morally negative.

Your second link was discussing simulating the same personality repeatedly, which I don't think is the same thing here. Your first link is talking about many-worlds futility, where I make all possible moral choices and therefore none of them; I think this is not really worth talking about in this situation.

Comment by inklesspen on Consciousness of simulations & uploads: a reductio · 2010-08-26T15:40:45.598Z · LW · GW

No, I specifically meant that we should treat our simulations the way we would like to be treated, not that we will necessarily be treated that way in "return". A host's duty to his guests doesn't go away just because that host had a poor experience when he himself was a guest at some other person's house.

If our simulators don't care about us, nothing we can do will change that, so we might as well treat our simulations well, because we are moral people.

If our simulators do care about us, and are benevolent, we should treat our simulations well, because that will rebound to our benefit.

If our simulators do care about us, and are malevolent (or have ethics not compatible with ours), then, given the choice, I would prefer to be better than them.

Of course, there's always the possibility that simulations may be much more similar than we think.

Comment by inklesspen on Minimum computation and data requirements for consciousness. · 2010-08-24T01:04:58.977Z · LW · GW

If I'm following your "logic" correctly, and if you yourself adhere to the conclusions you've set forth, you should have no problem with me murdering your body (if I do it painlessly). After all, there's no such thing as continuity of identity, so you're already dead; the guy in your body is just a guy who thinks he's you.

I think this may safely be taken as a symptom that there is a flaw in your argument.

Comment by inklesspen on Consciousness of simulations & uploads: a reductio · 2010-08-22T01:19:47.619Z · LW · GW

It is, of course, utterly absurd to think that meat could be the substrate for true consciousness. And what if Simone chooses herself to spend eons simulating a being by hand? Are we to accept the notion of simulations all the way down?

In all honesty, I don't think the simulation necessarily has to be very fine-grained. Plenty of authors will tell you about a time when one of their characters suddenly "insisted" on some action that the author had not foreseen, forcing the author to alter her story to compensate. I think it plausible that, were I to dedicate my life to it, I could imagine a fictional character and his experiences with such fidelity that the character would be correct in claiming to be conscious. (I suspect such a simulation would be taking advantage of the machinery of my own consciousness, in much the same manner as a VMware virtual machine can, if properly configured, use the optical drive in its host computer.)

What, then, are the obligations of an author to his characters, or of a thinker to her thoughts? My memory is fallible and certainly I may wish to do other things with my time than endlessly simulate another being. Yet "fairness" and the ethic of reciprocity suggest that I should treat simulated beings the same way I would like to be treated by my simulator. Perhaps we need something akin to the ancient Greeks' concept of xenia — reciprocal obligations of host to guest and guest to host — and perhaps the first rule should be "Do not simulate without sufficient resources to maintain that simulation indefinitely."

Comment by inklesspen on Selfishness Signals Status · 2010-03-07T06:50:28.282Z · LW · GW

Proper posture tends to be more comfortable; surely this is a benefit to myself.

I also apologize to people when I have wronged them, not because they are higher-status than me, but because I do not like being a jackass.

Comment by inklesspen on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future · 2010-03-01T04:39:11.855Z · LW · GW

We've evolved something called "morality" that helps protect us from abuses of power like that. I believe Eliezer expressed it as something that tells you that even if you think it would be right (because of your superior ability) to murder the chief and take over the tribe, it still is not right to murder the chief and take over the tribe.

We do still have problems with abuses of power, but I think we have well-developed ways of spotting this and stopping it.

Comment by inklesspen on Splinters and Wooden Beams · 2010-02-28T23:56:40.091Z · LW · GW

Surely it would be better in multiple ways to simply find a well-spoken religious person with whom you can work. He will have more knowledge of his audience than you have, so there's a practical benefit, as well as the moral benefit of not being dishonest.

Comment by inklesspen on Splinters and Wooden Beams · 2010-02-28T19:20:52.364Z · LW · GW

My journey away from theism was characterized by smaller arguments such as these. There was no great leap, just a steady stream of losing faith in doctrines I had been brought up to believe. Creationism went first. Discrimination against homosexuals went next. Shortly after that, I found it impossible to believe in the existence of hell, except perhaps in a sort of Sartrean way. Shortly after that, I found myself rejecting large portions of the Bible, because the deity depicted therein did not live up to my moral standards. At that point I was finally ready to examine the evidence for God's existence, and find it wanting.

I think in the end you will find that there are two things which can work. You must either point out that the beliefs lead to conclusions that are not just inconsistent, but also absurd, or you must point out that the beliefs lead to conclusions that contradict more "core" beliefs, such as "love your neighbor as yourself".

Fred Clark is a liberal, fairly orthodox Christian. He blogs on a variety of subjects, including the birther/Tea Party movement, the deficiencies of creationism ([1] [2]), the strange phenomenon of religious hatred of homosexuals ([3] [4] [5]), and an interesting view on vampires. (He also has an entertaining ongoing series where he rips apart the popular fundie series 'Left Behind' and shows how the writers know nothing of their own religion, let alone how the real world works.)

You could do worse than to look at how he handles this sort of thing, from a religious perspective.

Comment by inklesspen on Open Thread: February 2010, part 2 · 2010-02-22T23:07:24.029Z · LW · GW

I don't think it's possible to integrate core Babyeater values into our society as it is now. I also don't think it's possible to integrate core human values into Babyeater society. Integration could only be done by force and would necessarily cause violence to at least one of the cultures, if not both.

Comment by inklesspen on Babies and Bunnies: A Caution About Evo-Psych · 2010-02-22T03:49:35.817Z · LW · GW

Other hominids have been known to keep pets. I would not be surprised if cetaceans were capable of this as well, though it would obviously be more difficult to demonstrate.

Comment by inklesspen on Med Patient Social Networks Are Better Scientific Institutions · 2010-02-20T00:46:10.058Z · LW · GW

According to the article, they lack crucial features such as double-blinding. Most social networks lack the openness and data retention critical for effective peer review. It is possible to learn something from a network like the one described, but I would hesitate to call it science.

Comment by inklesspen on Med Patient Social Networks Are Better Scientific Institutions · 2010-02-19T18:25:34.029Z · LW · GW

Well, you will have to be careful how you do it; my understanding is that most doctors are exasperated at people who self-diagnose based on reading things on the Internet. It's a bias, sure, but it doesn't seem to be an unreasonable one. So you wouldn't want to bring it up on your very first visit. You will need to wait until you've demonstrated your non-crank-ness.

Once you and your doctor know each other better, though, I think it would be an excellent idea to bring more data to the table. My objection is to an article entitled "Med Patient Social Networks Are Better Scientific Institutions", not one entitled "Med Patient Social Networks Are A Useful Tool In Improving Care".

Comment by inklesspen on Med Patient Social Networks Are Better Scientific Institutions · 2010-02-19T08:29:41.166Z · LW · GW

The "people" in the quoted bit are correct. This is not science; this is statistical analysis.

It is possible that an individual would be better served by this social network, though I have generally agreed that a physician who treats himself has a fool for a patient, and the more so for a layman who neglects to consult competent medical authorities. These social networks certainly cannot take the place of original research; they rely on existing observed trends.

Comment by inklesspen on Open Thread: February 2010, part 2 · 2010-02-17T01:02:58.171Z · LW · GW

Integrating the values of the Baby-eaters would be a mistake. Doing so with, say, Middle-Earth's dwarves, Star Trek's Vulcans, or GEICO's Cavemen doesn't seem like it would have the same world-shattering implications.

Comment by inklesspen on Open Thread: February 2010, part 2 · 2010-02-16T17:54:43.634Z · LW · GW

I don't see a terrible problem with comments being "a discussion about the facts of the post"; that's the point of comments, isn't it?

Perhaps we just need an Open Threads category. We can have an open thread on cryonics, quantum mechanics and many worlds, Bayesian probability, etc.

Comment by inklesspen on Epistemic Luck · 2010-02-16T06:43:34.753Z · LW · GW

There may also be a limit to how wisely one can argue that spending money on wars while cutting taxes for the wealthy is sound economic policy.

Does any viewpoint have a right to survive in spite of being wrong?

Comment by inklesspen on Tell Your Rationalist Origin Story · 2009-02-27T05:31:42.172Z · LW · GW

I think the thing that made me a seeker-after-rationalism is the same thing that made me an agnostic: Greg Egan's Oceanic.

I grew up in a fundamentalist household and had had one moment of religious euphoria. Oceanic made me confront the fact that religious euphoria, like other euphoria, is just a naturalistic phenomenon in the brain. I'm still waiting on my fundamentalist parents to show evidence for non-naturalistic causes of naturalistic phenomena.