post by HoverHell · 2011-02-03T08:53:07.660Z · LW · GW · Legacy · 16 comments
-
16 comments
Comments sorted by top scores.
comment by Perplexed · 2011-02-04T16:11:26.693Z · LW · GW
Ok, just to clarify what the question is:
Suppose I am trying to decide whether to swallow an anti-aging pill which introduces some nano-bots into my bloodstream.
My doctor informs me that the bots will remove plaque from my arteries, amyloid deposits from my brain cells, and brown fat deposits from my abdomen. "Makes you look and feel ten years younger, and adds fifteen years to your life expectancy. But there are some side effects, including personality changes."
"Well doc," I answer, "the important thing to me is that I keep my sense of humor. That, more than anything else, is my VI. If the pill preserves my sense of humor, then any other changes it might make to my personality are tolerable."
"Just a second," interjects nurse HoverHell, who had been lurking nearby. "You need to define your VI physically. After all, we aren't dualists here."
"But I don't know how to define my sense of humor physically," I reply. "I define it operationally, by which TV shows I like, which jokes I laugh at, whether I come up with the snappy comeback and whether anyone smiles when I do."
"The nanobots work at the physical level," says HoverHell. "So we have to have a physical definition of 'sense of humor' in order to answer the question of whether the nanobots preserve it."
"Nonsense!" I reply. "You need an operational definition, so you can administer behavioral sense-of-humor tests to people before and after they take the pill, interview friends and families, and characterize the side-effect at the behavioral level. Claims that something is preserved (within certain tolerances) at the physical-level don't help me at all, since what I am really interested in is whether or not my sense of humor changes."
I'm not sure how the dialog proceeds from this point.
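Perplexed's operational criterion can be stated as a simple before/after comparison. A minimal sketch in Python, where the test items, the naive scoring rule, and the 0.9 tolerance are all made up for illustration:

```python
# A minimal sketch of an "operational" sense-of-humor test, in the spirit of
# Perplexed's comment. The test items, scoring, and tolerance are invented
# illustrations, not a real psychometric instrument.

def humor_score(responses: dict[str, bool]) -> float:
    """Fraction of test items (jokes, shows, comebacks) the subject responds to."""
    return sum(responses.values()) / len(responses)

def humor_preserved(before: dict[str, bool], after: dict[str, bool],
                    tolerance: float = 0.9) -> bool:
    """Operational criterion: the post-pill score stays within a tolerance
    of the pre-pill score. Note it never mentions the physical level."""
    return humor_score(after) >= tolerance * humor_score(before)

before = {"show_a": True, "joke_b": True, "comeback_c": True, "joke_d": False}
after = {"show_a": True, "joke_b": False, "comeback_c": True, "joke_d": False}
print(humor_preserved(before, after))  # False: 2/4 < 0.9 * 3/4
```

The point of the sketch is exactly the one made in the dialogue: the criterion compares behavior before and after the intervention, and says nothing about what the nanobots do at the physical level.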
Replies from: HoverHell, knb
↑ comment by HoverHell · 2011-02-07T06:59:46.839Z · LW · GW
-
Replies from: Perplexed
↑ comment by Perplexed · 2011-02-07T13:59:58.601Z · LW · GW
I'm suggesting to use a material (physical) definition here, to avoid vagueness
A worthy goal. But does that work? When philosophers "build up the definition from the most basic known physical model", does that really reduce vagueness? Particularly in a field like cognition, where there don't yet exist any good (i.e. not speculative) physical models?
My experience is that this reductionist move does not help. If it has any effect at all, it provides a pseudo-dissolution of the question, leading someone to reframe the question in an even more indirect and mysterious form, one which is more resistant to being dissolved.
I'm myself by now (or “for now”) a proponent of phenomenological ontology, not a materialistic one; but please ignore if you are not sure what it is.
Sounds promising. I'm myself an anti-realist; but please ignore because I am not really sure what that is.
Replies from: HoverHell
↑ comment by HoverHell · 2011-02-08T06:25:21.473Z · LW · GW
-
Replies from: Perplexed
↑ comment by Perplexed · 2011-02-08T14:56:03.279Z · LW · GW
I'm myself an anti-realist; but please ignore because I am not really sure what that is.
This looks like a quite pathetic attempt at trolling to me. What did you actually want to express with that?
It was more of a pathetic attempt at humor. Absolutely no hostile intent in that line. I will admit to a tiny bit of hostility (or more accurately exasperation) in the rest of my responses. That is probably the source of your intuition that I am trolling you.
The reason I am a bit exasperated is that you are asking people to address a very difficult problem that we do not yet have the scientific knowledge to solve. You seem to be asking for a reduction of our intuitions of personal identity. As far as I can tell, it is an unsolved problem, though almost all philosophers of mind mention it, and philosophers interested in issues like AI and 'uploading' mention it frequently.
My own outlook on this is that the problem is premature. Either it will be dissolved along with all the other puzzles of mind when we reduce mind to brain, or it will not. If it will be dissolved, we waste time speculating about it now. If it will not be, we still waste time speculating now, because after a successful reduction the problem will be completely transformed.
On the other hand, if someone invents something like a matter transporter before we have completed the reduction, and we need answers quickly, I think that my brand of phenomenological empiricism is about the best we can come up with.
Oh, and on the outcomes: what I see here is [various disappointments]
If you are looking for wisdom in an internet forum, the second best way to find it is to politely ask for it. The best way is to just say something wrong, and then to pick the best of the corrections you receive.
If you are dissatisfied with the results of asking the question, why not try supplying your own favorite answer and seeing what happens?
Replies from: HoverHell
↑ comment by HoverHell · 2011-02-10T10:49:50.423Z · LW · GW
-
↑ comment by knb · 2011-02-04T16:48:38.161Z · LW · GW
Why would you want to get rid of brown fat but not white fat?
Replies from: Perplexed
↑ comment by Perplexed · 2011-02-04T17:17:03.954Z · LW · GW
Don't ask me. I didn't design the pill. I'm just a patient. :) But it appears, upon doing a little research, that you are probably right that the bots are mis-targeted.
comment by Jack · 2011-02-03T18:55:10.183Z · LW · GW
I don't think the thing I want to preserve is a physical entity, so I don't think there is a way to correctly define it that way.
Replies from: HoverHell
↑ comment by HoverHell · 2011-02-04T02:32:45.938Z · LW · GW
-
Replies from: Jack
↑ comment by Jack · 2011-02-04T03:03:42.814Z · LW · GW
What it is for me to persist is for there to be, instantiated in some system, an algorithm with a particular relation to the algorithm which characterizes me at present. And I don't think algorithms are physical entities (though in this universe they seem to be instantiated that way). I suppose you could define "physical" to mean any possible physics, but even then the fact that an algorithm is instantiated physically doesn't mean it is identical to that physical entity (the instantiation). Plus, any accurate physical description couldn't apply to the entirety of possible instantiations.
I gave up trying to talk myself out of Platonism a while ago.
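Jack's distinction between an algorithm and its instantiations can be made concrete with a toy example (this gloss is mine, not Jack's): two physically distinct objects that compute the same function.

```python
# A toy illustration of the algorithm/instantiation distinction. Two
# physically different objects can instantiate the same input-output
# behavior, yet neither is identical to the algorithm they share.

def xor_computed(a: bool, b: bool) -> bool:
    return a != b                      # instantiation 1: computed directly

XOR_TABLE = {(False, False): False, (False, True): True,
             (True, False): True, (True, True): False}

def xor_looked_up(a: bool, b: bool) -> bool:
    return XOR_TABLE[(a, b)]           # instantiation 2: a lookup table

inputs = [(a, b) for a in (False, True) for b in (False, True)]
same_algorithm = all(xor_computed(a, b) == xor_looked_up(a, b) for a, b in inputs)
distinct_objects = xor_computed is not xor_looked_up
print(same_algorithm, distinct_objects)  # True True: same behavior, different objects
```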
Replies from: torekp
↑ comment by torekp · 2011-02-06T19:24:00.568Z · LW · GW
What it is for me to persist is for there to be, instantiated in some system, an algorithm with a particular relation to the algorithm which characterizes me at present.
Emphasis added. I think "instantiates an algorithm of type X" qualifies as a physical specification in the relevant sense; Platonism is not a problem. However, to answer the post you would need to define (or point us in the direction of) the "particular relation" you have in mind.
Replies from: Jack
↑ comment by Jack · 2011-02-06T20:53:40.034Z · LW · GW
However, to answer the post you would need to define (or point us in the direction of) the "particular relation" you have in mind.
The particular relation, where x is the future algorithm and y is the algorithm which characterizes me now, is "x is psychologically continuous with y". If x contains memories of y's experiences, or if x contains memories of the experiences of intervening algorithms which in turn contain memories of y, or if x and y are linked by any series of algorithms connected in this way, we would have good reason to think x is psychologically continuous with y. Memory connections aren't sufficient, though; you'd also want beliefs, values, and emotions to persist or not persist in the right way.
Of course, death is rather flexible here. There is no fact of the matter about how continuous with you an algorithm has to be for you to count as having survived rather than died.
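Jack's "linked by any series of algorithms" clause is, structurally, reachability in a graph of memory links. A minimal sketch under that reading, with hypothetical person-stage names; as Jack notes, a real criterion would also have to weigh beliefs, values, and emotions, and would plausibly return a degree of continuity rather than a boolean:

```python
# A minimal sketch of the continuity relation as graph reachability
# (my reading of Jack's comment, not Jack's code). Nodes are person-stages
# ("algorithms"); an edge y -> x means x contains memories of y's experiences.

from collections import deque

memory_links = {
    "me_2011": ["me_2012"],      # hypothetical person-stages
    "me_2012": ["me_2040"],
    "me_2040": ["upload_2041"],
}

def psychologically_continuous(y: str, x: str) -> bool:
    """True if x is linked to y by some chain of memory connections."""
    frontier, seen = deque([y]), {y}
    while frontier:
        stage = frontier.popleft()
        if stage == x:
            return True
        for successor in memory_links.get(stage, []):
            if successor not in seen:
                seen.add(successor)
                frontier.append(successor)
    return False

print(psychologically_continuous("me_2011", "upload_2041"))  # True, via the chain
```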
comment by [deleted] · 2011-02-03T12:58:04.489Z · LW · GW
First there is the brain. This brain has specific input and specific output nerves. There is a relationship between input collected over time and output. This is the first thing I want to preserve. There is also a relationship between output and input. In my current situation, specific output (e.g. speaking) produces a physical effect in the surrounding world that again produces specific input. This relation is the second thing I want to preserve. And third, I want to preserve the collected input over time that already determines my brain's output, so that after the upload (or the like) new input is added to this collection. I think that's all.
Edit: I think number 2 is ill-specified. I need to think about that some more.
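The three things this comment wants preserved can be written down as a schematic interface. A gloss with placeholder types, not the commenter's own formulation; the edit above already flags relation 2 as the shaky one, since it lives partly in the environment rather than in the agent:

```python
# A schematic gloss of the three things this comment wants preserved
# (my labels, not the commenter's). All types are placeholders.

from dataclasses import dataclass, field
from typing import Callable

Input, Output = str, str  # stand-ins for nerve-level signals

@dataclass
class Agent:
    # (3) the collected input history that determines current behavior
    history: list[Input] = field(default_factory=list)
    # (1) the relation between input-collected-over-time and output
    policy: Callable[[list[Input]], Output] = lambda h: f"response to {len(h)} inputs"

    def step(self, observation: Input, world: Callable[[Output], Input]) -> None:
        self.history.append(observation)
        action = self.policy(self.history)
        # (2) output produces a physical effect that produces new input;
        # this is the relation flagged as ill-specified, since it depends
        # on the surrounding world as much as on the agent
        self.history.append(world(action))

agent = Agent()
agent.step("hello", lambda out: f"echo of {out!r}")
print(agent.history)
```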
Replies from: HoverHell
↑ comment by HoverHell · 2011-02-04T02:34:56.412Z · LW · GW
-
comment by TheOtherDave · 2011-02-03T19:51:12.677Z · LW · GW
I'm not sure there is anything I really want to preserve by means of personal immortality.
But assuming that there is, it's a specific dynamic pattern of relationships among representations of hypothetical attributes of my environment (including me), which admittedly involves a number of words that I haven't defined here.
It could be instantiated physically in any number of ways. It's (probably) currently instantiated in a network of protoplasmic cells, but I don't particularly want to preserve any part of that network.
Edit: Responding to the edited question... the direction is a more precise and detailed understanding of how cognitive structures relate to one another and to events in the world.