Which Parts Are "Me"?
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-22T18:15:10.000Z · LW · GW · Legacy · 117 comments
Everything I am, is surely my brain; but I don't accept everything my brain does, as "me".
Though I don't like the fact, it does seem in my case to be true that reflecting upon a passion can diminish it. It does seem that in taking a step back from myself to look at what my brain is doing, I introduce a greater emotional distance from the world than when I am not reflecting. Though I am extremely cautious when it comes to the moral issue of how to modify human nature, this is one thing that I would change, I think.
The point at which you start thinking that not everything that goes on in your brain is to be identified with "you" is something of a Rubicon, I fear. And those who prize the intensity of a simple passionate existence, above success in life or a complex layered personality, should not cross that Rubicon.
(Though in this case you should accept advice from rationalists about practical policies for guarding yourself, because this is a dangerous world in which to be unreflective. For example: if you don't follow the link to the rest of this post, then sign the #$!@ up for cryonics regardless of what your simple passions say on the subject.)
Still here? Then here is the Rubicon of which I speak:
(1) Never seeing ex-girlfriend-X again is so terribly unbearably sad!
->
(2) Why am I so terribly sad?
->
(3) Damn, my stupid brain is still obsessing about this when I just want to get over it.
The first scenario is what I would call the intensity of simple passion; nothing between the world and the emotions. The sentence contains no "I" to get in the way. There is nothing to think about except the world itself, the sadness and the missing person.
In the second scenario it is not the world that is sad, but some particular person, an "I"; and the sadness of this "I" is something that can be called into question.
And in the third scenario, the borders of "I" have been drawn in a way that doesn't include everything in the brain, so that "I" is the victim of the brain, struggling against it. And this is not paradoxical. Everything that I am, has to be in my brain somewhere, because there is nowhere else for it to be. But that doesn't mean I have to identify with everything that my brain does. Just as I draw the border of "me" to include my brain but exclude my computer's CPU - which is still a sensible decision at least for now - I can define the border of myself to exclude certain events that happen in my brain, which I do not control, do not want to happen, and do not agree with.
That time I faced down the power-corrupts circuitry, I thought, "my brain is dumping this huge dose of unwanted positive reinforcement", and I sat there waiting for the surge to go away and trying not to let it affect anything.
Thinking "I am being tempted" wouldn't have quite described it, since the deliberate process that I usually think of as "me" - the little voice inside my own head - was not even close to being swayed by the attempted dose of reward that neural circuit was dumping. I wasn't tempted by power; I'd already made my decision, and the question was enforcing it.
But a dangerous state of mind indeed it would have been, to think "How tempting!" without an "I" to be tempted. From there it would only be a short step to thinking "How wonderful it is to exercise power!" This, so far as I can guess, is what the brain circuit is supposed to do to a human.
So it was a fine thing that I was reflective, on this particular occasion.
The problem is when I find myself getting in the way of even the parts I call "me". The joy of helping someone, or for that matter, the sadness of death - these emotions that I judge right and proper, which must be me if anything is me - I don't want those feelings diminished.
And I do better at this, now that my metaethics are straightened out, and I know that I have no specific grounds left for doubting my feelings.
But I still suspect that there's a little distance there, that wouldn't be there otherwise, and I wish my brain would stop doing that.
I have always been inside and outside myself, for as long as I can remember. To my memory, I have always been reflective. But I have witnessed the growth of others, and in at least one case I've taken someone across that Rubicon. The one now possesses a more complex and layered personality - seems more to me now like a real person, even - but also a greater emotional distance. Life's lows have been smoothed out, but also the highs. That's a sad tradeoff and I wish it didn't exist.
I don't want to have to choose between sanity and passion. I don't want to smooth out life's highs or even life's lows, if those highs and lows make sense. I wish to feel the emotion appropriate to the event. If death is horrible then I should fight death, not fight my own grief.
But if I am forced to choose, I will choose stability and deliberation, for the sake of what I protect. And my personality does reflect that. What you are willing to trade off, will sometimes get traded away - a dire warning in full generality.
This post is filed under "morality" because the question "Which parts of my brain are 'me'?" is a moral question - it's not predicted so much as chosen. You can't perform a test on neural tissue to find whether it's in or out. You have to accept or reject any particular part, based on what you think humans in general, and yourself particularly, ought to be.
The technique does have its advantages: It brings greater stability, being less subject to sudden changes of mind in the winds of momentary passion. I was unsettled the first time I met an unreflective person because they changed so fast, seemingly without anchors. Reflection conveys a visibly greater depth and complexity of personality, and opens a realm of thought that otherwise would not exist. It makes you more moral (at least in my experience and observation) because it gives you the opportunity to make moral choices about things that would otherwise be taken for granted, or decided for you by your brain. Waking up to reflection is like the difference between being an entirely helpless prisoner and victim of yourself, versus becoming aware of the prison and getting a chance to escape it sometimes. Not that you are departing your brain entirely, but the you that is the little voice in your own head may get a chance to fight back against some circuits that it doesn't want to be influenced by.
And the technique's use, to awaken the unreflective, is as I have described: First you must cross the gap between events-in-the-world just being terribly sad or terribly important or whatever, of themselves; and say, "I feel X". Then you must begin to judge the feeling, saying, "I do not want to feel this - I feel this way, but I wish I didn't." Justifying yourself with "This is not what a human should be", or "the emotion does not seem appropriate to the event".
And finally there is the Rubicon of "I wish my brain wouldn't do this", at which point you are thinking as if the feeling comes from outside the inner you, imposed upon you by your brain. (Which does not say that you are something other than your brain, but which does say that not every brain event will be accepted by you as you.)
After crossing this Rubicon you have set your feet fully upon the reflective Way; and I've yet to hear of anyone turning back successfully, though I think some have tried, or wished they could.
And once your feet are set on walking down that path, there is nothing left but to follow it forward, and try not to be emotionally distanced from the parts of yourself that you accept as you - an effort that a mind of simple passion would not need to make in the first place. And an effort which can easily backfire by drawing your attention to the layered depths of your selfhood, away from the event and the emotion.
Somewhere at the end of this, I think, is a mastery of techniques that are Zenlike but not Zen, so that you have full passion in the parts of yourself that you identify with, and distance from the pieces of your brain that you reject; and a complex layered personality with a stable inner core, without smoothing out those highs or lows of life that you accept as appropriate to the event.
And if not, then screw it, let's hack the brain so that it works that way. I have no confidence in my ability to judge how human nature should change, and would sooner leave it up to a more powerful mind in the same metamoral reference frame. But if I had to guess, I think that's the right thing to do.
117 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by RobinHanson · 2008-10-22T18:30:09.000Z · LW(p) · GW(p)
There are many more Rubicons one can cross in this general direction, and fewer still who do cross or want to. Your journey has hardly begun, if you have the will to continue.
comment by MichaelG · 2008-10-22T18:55:05.000Z · LW(p) · GW(p)
So do you think it's possible to deal with depression by thinking "oh, just ignore that mood. It's just a defective portion of my brain speaking"?
Or is the act of getting an antidepressant med basically acting on the desire to change your own brain?
What does it say about our regard for self and willingness to change our mental structure that so many people take antidepressants? If we were uploaded, would we freely modify our minds, or fear losing ourselves in the process?
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-22T19:04:17.000Z · LW(p) · GW(p)
Robin, care to name the next Rubicon? Or a next Rubicon?
Michael, you can help depression by thinking "I wish my brain would stop releasing these depression neurotransmitters" - it doesn't command your brain but it does prevent you from being helplessly caught up in the feeling and swept away; it stops you from thinking that your life is inherently absolutely awful and immedicable or thinking up more reasons why you should be depressed.
Getting an antidepressant is obviously an act of rebellion against a part of one's brain (by another part of one's brain, of course!)
I think that giving anyone who hasn't shown their ability to build a Friendly AI, the ability to modify their own brain circuitry, is like giving a loaded gun to a 2-year-old. And it's not that being able to build a Friendly AI means you know enough to modify yourself. It means that you know for yourself why you shouldn't. Modifying a system as messy as a human has to be left to something smarter than human - hence the point of designing a much cleaner Friendly AI that (provably correctly) self-improves to the point where it can handle the job.
comment by MichaelG · 2008-10-22T19:07:37.000Z · LW(p) · GW(p)
I'm depressed about the coming end of the human race. Got a solution for that? :-)
↑ comment by Dojan · 2012-12-22T22:01:20.434Z · LW(p) · GW(p)
I'd say that is an accurate feeling. You should not want it to go away, by any other means than making the coming end of the human race go away.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-22T19:18:32.000Z · LW(p) · GW(p)
Yeah, shut up and save the world.
comment by Thom_Blake · 2008-10-22T19:29:34.000Z · LW(p) · GW(p)
I've yet to hear of anyone turning back successfully, though I think some have tried, or wished they could.
It seems to be one interpretation of the Buddhist project
Regarding self, I tend to include much more than my brain in "I" - but then, I'm not one of those who thinks being 'uploaded' makes a whole lot of sense.
comment by Jef_Allbright · 2008-10-22T19:45:53.000Z · LW(p) · GW(p)
Eliezer, A few years ago I sat across from you at dinner and mentioned how much you reminded me of my younger self. I expected, incorrectly, that you would receive this with the appreciation of a person being understood, but saw instead on your face an only partially muted expression of snide mirth. For the next hour you sat quietly as the conversation continued around us, and on my drive home from the Bay Area back to Santa Barbara I spent a bit more time reflecting on the various interactions during the dinner and updating my model of others and you.
For as long as I can remember, well before the age of 4, I've always experienced myself from both within and without as you describe. On rare occasions I've found someone else who knows what I'm talking about, but I can't say I've ever known anyone closely for whom it's such a strong and constant part of their subjective experience as it has been for me. The emotions come and go, in all their intensity, but they are both felt and observed. The observations of the observations are also observed, and all this up to typically and noticeably, about 4 levels of abstraction. (Reflected in my natural writing style as well.)
This leads easily and naturally to a model representing a part of oneself dealing with another part of oneself. Which worked well for me up until about the age of 18, when a combination of long-standing unsatisfied questions of an epistemological nature on the nature of induction and entropy, and readings (Pirsig, Buckminster Fuller, Hofstadter, Dennett, and some of the more coherent and higher-integrity books on Buddhism), led me to question and then reorganize my model of my relationship to my world. At some point about 7 years later (about 1985) it hit me one day that I had completely given up belief in an essential "me", while fully embracing a pragmatic "me". It was interesting to observe myself then for the next few years; every 6 months or so I would exclaim to myself (if no one else cared to listen) that I could feel more and more pieces settling into a coherent and expanding whole. It was joyful and liberating in that everything worked just as before, but I had to accommodate one less hypothesis, and certain areas of thinking, meta-ethics in particular, became significantly more coherent and extensible. [For example, a piece of the puzzle I have yet to encounter in your writing is the functional self-similarity of agency extended from the "individual" to groups.]
Meanwhile I continued in my career as a technical manager and father, and had yet to read Cosmides and Tooby, Kahneman and Tversky, E.T. Jaynes or Judea Pearl -- but when I found them they felt like long lost family.
I know of many reasons why it's difficult, nigh impossible, to convey this conceptual leap, and hardly any reason why one would want to make it, other than that one's values already drive him to continue to refine his model of reality.
I offer this reflection on my own development, not as a "me too" or any sort of game of competition of perceived superiority, but only as a gentle reminder that, as you've already seen in your previous development, what appears to be a coherent model now, can and likely will be upgraded (not replaced) to accommodate a future, expanded, context of observations.
↑ comment by Ivan_Tishchenko · 2010-04-03T07:34:26.975Z · LW(p) · GW(p)
@Thom: Why don't you write an article / sequence of articles here, on LW, on your now significantly more coherent and extensive model of reality? I, sincerely, would be really glad to read that.
comment by Moshe_Gurvich · 2008-10-22T20:13:15.000Z · LW(p) · GW(p)
MichaelG: I see depression as a mental and emotional loop with positive reinforcement feedback.
A predisposition is required for the loop to be complete:
- Brain, which is not trained to analyze/debug itself
- tends to react unconsciously,
- which means drawing conclusions without seeing the full picture,
- which causes blaming entities unrelated to real problem,
- which results in senseless waste of energy and time trying to fix the unfixable.
So the loop goes as follows:
- Feel depression
- Focus on depression
- Try to fight with depression
- Depression grows as it consumes more of your time and energy.
- Next iteration starting from step 1, but spiralling in intensity.
To break the loop, focus on productive things instead of depression.
Recognize that you ARE the master in your own mind and CAN change your train of thought in ONE moment.
Yes, it is that easy. I've been there and I know it's not just a theory.
Good luck :)
comment by Matthew_C.2 · 2008-10-22T20:18:17.000Z · LW(p) · GW(p)
There is no actual "you" in the way that it seems to be. A persistent thought pattern / meme complex got mistaken for a "you" by awareness and, sooner or later, awareness can see through the "you", which is a tremendous relief when it occurs.
As Einstein put it:
A human being is a part of the whole, called by us, "Universe," a part limited in time and space. He experiences himself, his thoughts and feelings as something separated from the rest -- a kind of optical delusion of his consciousness.
This delusion is a kind of prison for us. . .
comment by Sebastian_Hagen2 · 2008-10-22T20:21:46.000Z · LW(p) · GW(p)
"The of helping someone, ..."
Missing word?
comment by Jef_Allbright · 2008-10-22T20:26:31.000Z · LW(p) · GW(p)
Matthew C quoting Einstein: "A human being is a part of the whole, called by us, "Universe," a part limited in time and space. He experiences himself, his thoughts and feelings as something separated from the rest -- a kind of optical delusion of his consciousness."
Further to this point, and Eliezer's description of the Rubicon: It seems that recognizing (or experiencing) that perceived separation is a step necessary to its eventual resolution. Those many who've never even noticed to ask the question will not notice the answer, no matter how close to them it may be.
comment by Pete · 2008-10-22T20:52:27.000Z · LW(p) · GW(p)
There is no actual "you" in the way that it seems to be. A persistent thought pattern / meme complex got mistaken for a "you" by awareness and, sooner or later, awareness can see through the "you", which is a tremendous relief when it occurs.
As Einstein put it. . .
As Julian Jaynes put it:
"...this space of consciousness inside our own heads. We also assume it is there in others'. In talking with a friend, maintaining periodic eye-to-eye contact, we are always assuming a space behind our companion's eyes into which we are talking, similar to the space we imagine inside our own heads where we are talking from.
And this is the very heartbeat of the matter. For we all know perfectly well that there is no such space inside anyone's head at all! There is nothing inside my head or yours except a physiological tissue of one sort or another. And the fact that it is predominantly neurological tissue is irrelevant."
comment by JK3 · 2008-10-22T21:20:00.000Z · LW(p) · GW(p)
#1 is you. #2 is an attempt by "I" to escape #1 (yourself). #3 is like #2 but a bit more complex. The internal monologue that is the "I" is terrible at dealing with anything non-technical, such as a psychological problem involving your idea of who "I" was, is, and your hope of what "I" will become. It ignores the details that are the most important to solving the problem as it exists entirely to run away from the problem. You can be entirely free from it but, as one poster mentioned before, it is a continuous process of being aware of when "I" pops in to field a problem it should not.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-22T21:40:15.000Z · LW(p) · GW(p)
Jef Allbright: Eliezer, A few years ago I sat across from you at dinner and mentioned how much you reminded me of my younger self. I expected, incorrectly, that you would receive this with the appreciation of a person being understood, but saw instead on your face an only partially muted expression of snide mirth.
I can't imagine why I might have been amused at your belief that you are what a grown-up Eliezer Yudkowsky looks like.
I don't know if I've mentioned this publicly before, but as you've posted in this vein several times now, I'll go ahead and say it:
functional self-similarity of agency extended from the 'individual' to groups
I believe that the difficult-to-understand, high-sounding ultra-abstract concepts you use with high frequency and in great volume, are fake. I don't think you're a poor explainer; I think you have nothing to say.
If I don't give you as much respect as you think you deserve, no more explanation is needed than that, a conclusion I came to years ago.
comment by Jeremy2 · 2008-10-22T21:46:23.000Z · LW(p) · GW(p)
I'm not sure if you'll find this interesting, but I quit smoking using something like the method you are describing. Basically I labeled the craving for nicotine as an entity apart from myself - I named it "the beast". So instead of thinking "I want a smoke" I'd think: "the beast is hungry." This didn't work all by itself (I had a lot of practice quitting) but it was the last of 5-6 attempts to quit and its stuck for nearly ten years now.
comment by Z._M._Davis · 2008-10-22T21:56:53.000Z · LW(p) · GW(p)
I'm confused. Eliezer, you seem to be saying that reflectivity leads to distance from one's emotions, but this completely contradicts my experience: I'm constantly introspecting and analyzing myself, and yet I am also extremely emotional, not infrequently to the point of hysterical crying fits. Maybe I'm introspective but not reflective in the sense meant here? I will have to think about this for a while.
↑ comment by Zack_M_Davis · 2013-01-31T08:14:56.788Z · LW(p) · GW(p)
Maybe I'm introspective but not reflective in the sense meant here?
That's right. Reflection here refers to the skill of reasoning about your own reasoning mechanisms using the same methods that you use to reason about anything else. "Solving your own psychological problems" is then a trivial special case of "solving problems," but with the bonus that solving the problem of making yourself better at solving problems makes you better at solving future problems. Surprisingly, it turns out that this is actually pretty useful, but you probably won't understand what I'm talking about for another four years and three months.
↑ comment by RHollerith (rhollerith_dot_com) · 2013-02-01T01:03:28.414Z · LW(p) · GW(p)
Congrats on "leveling up".
By the way, I found your last sentence inscrutable (even after reading its parent) and gave up trying to decipher it, telling myself, "Zack's writing is almost always unambiguous and decipherable; today is an exception." It was only by accident that I read it again and realized that you are replying to yourself, which cleared things up for me.
(This confirms my belief in the utility of a habit I adopted 5 years ago, of always explicitly pointing it out whenever I am replying to myself.)
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-22T22:04:25.000Z · LW(p) · GW(p)
ZM, the question is whether being more reflective makes you less passionate, not so much the absolute levels. But you're correct that being introspective is not at all the same as what I'm describing here.
comment by MichaelG · 2008-10-22T22:06:49.000Z · LW(p) · GW(p)
Moshe Gurvich, thanks for the encouragement. I can never decide if my problem is Depression as a disease, or just reaction to my particular life circumstances.
There are people who recommend purely cognitive approaches to depression, including a lot of self-monitoring. Finding a project that engages you, so that you don't dwell on your depression, is a different approach, although also purely cognitive.
My point on the original post though was that you might naively assume that people would be scared of self-modification. But then you see people using Prozac without a second thought. More commonly, alcohol and other mood-altering substances are used. So perhaps we aren't frightened of self-modification after all.
As Eliezer implies, that would make us even more unstable as an upload with direct self-modification abilities.
comment by Cassandra2 · 2008-10-22T22:19:15.000Z · LW(p) · GW(p)
I gained this kind of reflectivity when I was barely able to even think and I did not know how to use it wisely. One of my first memories is relentlessly purging my early childhood personality shortly after I discovered how to perform the trick then panicking and rebuilding a new self any sort of stuff laying around. Still think that rampant self-modification left scars on my mind that are still there today. Emotional distance did increase the more I examined, altered and experimented with this and eventually caused some really painful side effects. Eek.
comment by vanveen · 2008-10-22T22:45:51.000Z · LW(p) · GW(p)
eliezer, with all due respect, jef's brief description of iterated reflective experience was more elucidative than yours.
i'm amused that you responded with contempt and anger to a perfectly well-intentioned comment after making the post you just made. get your shoeshine box, yudkowsky.
comment by RobinHanson · 2008-10-22T22:58:05.000Z · LW(p) · GW(p)
Eliezer, inside each of us are whole societies of mind, which form and reform coalitions depending on circumstances. Coalitions at times declare themselves to be the "real me" but treat that with the same skepticism you would apply to some particular part of the USA to calling itself the "real America."
comment by Z._M._Davis · 2008-10-22T23:00:27.000Z · LW(p) · GW(p)
"ZM, the question is whether being more reflective makes you less passionate, not so much the absolute level [...]"But if that were the only issue at hand, then that would generate the prediction that I would be even more unstable (!) if I were less analytical, which is the opposite of what I would have predicted.
Yes, it could possibly be that it is this introspection/reflectivity dichotomy that's tripping me up. A deep conceptual understanding that one's self can be distinct from what-this-brain-is-thinking-and-feeling-right-now does not necessarily imply the ability to draw this distinction consistently and in real time. Maybe successful reflectivity decreases passion, but an awareness of, combined with an inability to reconcile, the morass of conflicting thoughts and desires, only inflames the passions?
Okay, now I'm really confused. I think.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-22T23:06:59.000Z · LW(p) · GW(p)
Cassandra, I went through something like that, but it was something like age 15-19, not a fast episode during childhood! Your story intrigues me and I am interested in knowing more; also whether you're otherwise a mathematical thinker and whether you've read/enjoyed Godel, Escher, Bach.
Robin, it seems to me that a fairly stable coalition has gained control of my fingers, so presumably you can treat with that if you're interested in my blog posts, or any source code I generate? Or to look at it another way, I understand (have a story about?) what happened inside my mind as a result of crossing the reflective Rubicon - I became more detached from emotions I put outside my boundary, and also somewhat more detached from all emotions as a result of thinking about where to put them. What happens to me if I start regarding myself as a coalition? Is it something I should want to do, and why?
Davis, what you were saying made sense to me, so I'm confused as to what you could be confused about.
comment by Jef_Allbright · 2008-10-22T23:14:45.000Z · LW(p) · GW(p)
@Eliezer: I can't imagine why I might have been amused at your belief that you are what a grown-up Eliezer Yudkowsky looks like.
No, but of course I wasn't referring to similarity of physical appearance, nor do I characteristically comment at such a superficial level. Puhleease.
I don't know if I've mentioned this publicly before, but as you've posted in this vein several times now, I'll go ahead and say it:
functional self-similarity of agency extended from the 'individual' to groups
I believe that the difficult-to-understand, high-sounding ultra-abstract concepts you use with high frequency and in great volume, are fake. I don't think you're a poor explainer; I think you have nothing to say.
If I don't give you as much respect as you think you deserve, no more explanation is needed than that, a conclusion I came to years ago.
Well that explains the ongoing appearance of disdain and dismissal. But my kids used to do something similar and then I was sometimes gratified to see shortly after an echo of my concepts in their own words.
Let me expand on my "fake" hint of a potential area of growth for your moral epistemology:
If you can accept that the concept of agency is inherent to any coherent meta-ethics, then we might proceed. But, you seem to preserve and protect a notion of agency that can't be coherently modeled.
You continue to posit agency that exploits information at a level unavailable to the system, and wave it away with hopes of math that "you don't yet have." Examples are your post today that has "real self" somehow dominating lesser aspects of self as if quite independent systems, or with your "profound" but unmodelable interpretation of ishoukenmei which bears only a passing resemblance to the very realistic usage I learned while living in Japan.
You continue to speak (and apparently think) in terms of "goals", even when such "goals" can't be effectively specified in the uncertain context of a complex evolving future, and you don't seem to consider the cybernetic or systems-theoretic reality that ultimately no system of interesting complexity, including humans, actually attains long-term goals so much as it simply tries to null out the difference between its (evolving) internal model and its perceptions of its present reality. All the intelligence is in the transform function effecting its step-wise actions. And that's good enough, but never absolutely perfect. But the good enough that you can have is always preferable to the absolutely perfect that you can never have (unless you intend to maintain a fixed context.)
You posit certainty (e.g. friendliness) as an achievable goal, and use rigorous-sounding terms like "invariant goal" in regard to decision-making in an increasingly uncertain future, but blatantly and blithely ignore concerns addressed to you over the years by myself and others as to how you think that this can possibly work, given the ineluctable combinatorial explosion, and the fundamentally critically underspecified priors.
I realize it's like a Pascal's Wager for you, and I admire your contributions in a sense somewhat tangential to your own, but like an isolated machine intelligence of high processing power but lacking an environment of interaction of complexity similar to its own - eventually you run off at high speed exploring quite irrelevant reaches of possibility space.
As to my hint to you today, if you have a workable concept of agency, then you might profit from consideration of the functional self-similarity of agencies composed of agencies, and so on, self-similar with increasing scale, and how the emergent (yeah, I know you dismiss "emergence" too) dynamics will tend to be perceived as increasingly moral (from within the system, as each of us necessarily is) due to the multi-level selection and therefore alignment for "what works" (nulling out the proximal difference between their model and their perceived reality, wash, rinse, repeat) by agents each acting in their own interest within an ecology of competing interests.
Sheesh, I may be abstract, I may be a bit too out there to relate to easily, but I have a hard time with "fake."
I meant to shake your tree a bit, in a friendly way, but not to knock you out of it. I've said repeatedly that I appreciate the work you do and even wish I could afford to do something similar. I'm a bit dismayed, however, by the obvious emotional response and meanness from someone who prides himself on sharpening the blade of his rationality by testing it against criticism.
comment by Zubon · 2008-10-22T23:24:36.000Z · LW(p) · GW(p)
Mine was age 16. I don't recall having any sort of panic, but given the extent to which my adult personality resembles what I was reading at the time, I may have "rebuil[t] a new self any sort of stuff laying around." It felt intentional at the time... No painful side effects, but that may be scar tissue. (I'm a quant, and I found GEB too clever for its own good. Maybe it was unique at the time, but by the time I read it, I had already seen its better points elsewhere.)
comment by Z._M._Davis · 2008-10-22T23:30:14.000Z · LW(p) · GW(p)
"Davis, what you were saying made sense to me, so I'm confused as to what you could be confused about."
I came up with a nice story (successful reflection decreases passion; failed reflection increases it) that seems to fit the data (Eliezer says reflection begets emotional detachment, whereas I try to reflect and drive myself mad), but my thought process just felt really (I can think of no better word:) muddled, so I'm wondering whether I should wonder whether I made a fake explanation.
comment by Caledonian2 · 2008-10-22T23:30:18.000Z · LW(p) · GW(p)
My point on the original post though was that you might naively assume that people would be scared of self-modification. But then you see people using Prozac without a second thought. More commonly, alcohol and other mood-altering substances are used. So perhaps we aren't frightened of self-modification after all.
As far as we can determine, humans have always wanted to submerge their consciousness beneath various drugs and ecstatic practices. Thinking is unpleasant for most people, and they work hard at turning off their capacity for self-reflection.
Give them a genuine ability to reshape their minds that doesn't require massive amounts of thought on their part, and they won't look back.
I am highly skeptical of people who believe they could be wireheaded without the ultimate destruction of their minds. The closest equivalents we have today are extraordinarily addictive.
comment by RobinHanson · 2008-10-22T23:58:26.000Z · LW(p) · GW(p)
Eliezer, I suspect the coalition in control of your fingers is not as coherent or stable as it appears. Ruling coalitions like to give the impression that they have little effective opposition and are unified without internal dissent, but the truth is usually otherwise.
comment by Nick_Tarleton · 2008-10-23T00:01:01.000Z · LW(p) · GW(p)
I'd say my self has been rebuilt, oh, 3-5 times, first at age 12, varying in duration between hours and months, and, unlike the above anecdotes, never with conscious direction. As a gross generalization, each time has involved less "building a new self from any sort of stuff laying around" and more of a sense of being guided to inevitable rational conclusions. (Very mathematical thinker, started and enjoyed but never finished GEB.)
comment by Matthew_C.2 · 2008-10-23T00:16:00.000Z · LW(p) · GW(p)
I'm a bit dismayed, however, by the obvious emotional response and meanness from someone who prides himself on sharpening the blade of his rationality by testing it against criticism.
Let's be fair. All "someones" operate according to the same basic Darwinian principles, which involve the subsumption of some ideas and rejection of others into a self-concept which then defends "itself" against any perceived threat. And the biggest threat, of course, is the truth that the self is not fundamentally real. When that is clearly seen, the gig is up.
Expecting "someones" to operate according to principles of integrity and truth-seeking is like expecting foxes to babysit chickens without indulging their appetites. Sure, there is an (at first) fun and interesting game of status seeking to be played called "I'm more honest (and smarter BTW) than you". But it's all in the service of covering up the truth about the imagined "I" who is playing that game.
When reality is actually engaged with an approach of genuine inquiry rather than a chest-expanded assumption that the "someone" is well along the "straight and narrow path" and treading "the way", then the "someone" is seen to be insubstantial, unimportant and essentially unreal, and displays of self-importance, pomposity and grandiosity fade away. And many of the activities and goals that seemed oh-so-important to the "someone" are smiled at, and put away on the shelf like the other outgrown toys of childhood.
comment by Matthew_C.2 · 2008-10-23T00:20:34.000Z · LW(p) · GW(p)
Eliezer, I suspect the coalition in control of your fingers is not as coherent or stable as it appears. Ruling coalitions like to give the impression that they have little effective opposition and are unified without internal dissent, but the truth is usually otherwise.
That comment was quintessentially Hanson, and an observation whose insight gives me much cause to believe that the coalition in control of those fingers has travelled across many a Rubicon. . .
comment by Jef_Allbright · 2008-10-23T00:21:17.000Z · LW(p) · GW(p)
@Cyan: "... you're going to need more equations and fewer words."
Don't you see a lower-case sigma representing a series every time I say "increasingly"? ;-)
Seriously though, I read a LOT of technical papers and it seems to me that many of the beautiful LaTeX equations and formulas are there only to give the impression of rigor. And there are few equations that could "prove" anything in this area of inquiry.
What would help my case, if it were not already long lost in Eliezer's view, is to have provided examples, references, and commentary along with each abstract formulation. I lack the time to do so, so I've always considered my "contributions" to be seeds of thought to grow or not depending on whether they happen to find fertile soil.
comment by Richard_Hollerith2 · 2008-10-23T00:26:53.000Z · LW(p) · GW(p)
Well, OK, but your anti-reductionism is still wrong.
comment by Richard_Hollerith2 · 2008-10-23T00:28:20.000Z · LW(p) · GW(p)
Allbright slipped in. (Mine was a reply to Matthew C.)
comment by Jef_Allbright · 2008-10-23T00:28:57.000Z · LW(p) · GW(p)
Mathew C: "And the biggest threat, of course, is the truth that the self is not fundamentally real. When that is clearly seen, the gig is up."
Spot on. That is by far the biggest impasse I have faced anytime I try to convey a meta-ethics denying the very existence of the "singularity of self" in favor of the self of agency over increasing context. I usually try to downplay this aspect until after someone has expressed a practical level of interest, but it's right there out front for those who can see it.
Thanks. Nice to be heard...
Based on the disproportionate reaction from our host, I'm going to sit quietly now.
comment by Partiallybright · 2008-10-23T01:02:18.000Z · LW(p) · GW(p)
On Eliezer's comment about abstract stuff = fake, nothing to say:
If you really want to communicate your ideas, transfer them all the way to another brain, you would try harder to present them so that almost anyone who wants to understand them (with the right level of background info) has no hard time doing so. Instead it's like you cram whole functions or classes into convoluted one-liners like some extreme programmer showing off his chops. Yeah, it may work, but you've got to show that your code really runs for us mortals.
I understand Eliezer's ideas without much difficulty. I read and integrate information from the top guys you mentioned. I have a hard time following your stuff or don't get it at all.
For me, Eli & Co 1 = Real vs You 0 = Fake, or whatever you want to call it.
I just love it when Eli keeps it real and doesn't spare the bullets. That accurate & lethal sniper rifle of his is never pointed at the wrong target. I detected no emotion involved, by the way; never with haste, never without a reason.
comment by James_Blair · 2008-10-23T01:39:01.000Z · LW(p) · GW(p)
I had crossed when I was much younger, without realizing what I'd done or the consequences. I wish I had been informed, but it's too late now. I guess I've committed myself to this path; I might as well see where it leads.
Eliezer: If there is more than one rubicon to cross, is it possible to skip one? Does the question make any sense?
Robin: What coalitions should I expect to see? Who's in charge of Robin Hanson right now?
Jef: Give me exactly one reason why I should listen to you. Ignore his current inability in FAI: nothing you've said has convinced me that he is making a mistake that matters. If the mistake is that big, I can discover the ramifications for myself after I know what's going on.
comment by Son_of_my_father,_Mr._Allbright · 2008-10-23T01:40:51.000Z · LW(p) · GW(p)
@Partiallybright: "If you really want to communicate your ideas, transfer them all the way to another brain, you would try harder to present them so that almost anyone who wants to understand them (with the right level of background info) has no hard time doing so."
Yes, criticism fully accepted.
Partiallybright: "Instead it's like you cram whole functions or classes into convoluted one-liners like some extreme programmer showing off his chops."
Well, I code in Python most of the time, and I tend to write in functional/imperative style because it's so much clearer and more concise to me and to others who I can consider to be more advanced. Funny thing is, people who think in procedural style find it very difficult to read my demonstrably functional code. What to do? Yes, add additional comments appropriate to their background.
But that doesn't imply the code was fake. When something isn't understood, where's the indication that it's fake?
When my wife and I have disagreed it has ALWAYS been because I tend to reason over the big chunks, assuming things are obvious (that's our running joke at home), and fill in detail only as needed. Eventually she gets enough detail, wherever she needs it, to see where I'm coming from, and then we either agree, or decide that we were arguing the wrong question. We have ALWAYS agreed when we get to the point of understanding each other's priors and basis of reasoning (and she knows nothing of Aumann.)
Partiallybright: "I just love it when Eli keeps it real and doesn't spare the bullets. That accurate & lethal sniper rifle of his is never pointed at the wrong target. I detected no emotion involved, by the way..."
Are you sure there's no emotion involved here, at any level?
↑ comment by arfle · 2010-09-12T09:02:10.843Z · LW(p) · GW(p)
"Well, I code in Python most of the time, and I tend to write in functional/imperative style because it's so much clearer and more concise to me and to others who I can consider to be more advanced. Funny thing is, people who think in procedural style find it very difficult to read my demonstrably functional code. " [Italics added]
Perhaps you could show us examples of the two contrasting styles?
If we are truly in contact with someone who can accurately form abstractions without considering examples, then I would expect to be impressed and baffled by their code.
And as you say, there would be evidence of its correctness from its successful execution.
Don't bother with the comments. Just say what it's supposed to do.
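A generic sketch of the contrast being discussed (hypothetical names and data, not Jef's actual code): the same small task written once procedurally, with an explicit loop and a mutable accumulator, and once in a more functional style built from a single generator expression.

    # Procedural vs. functional style in Python (illustrative only).
    # Task: total price of in-stock items after a discount.

    items = [
        {"name": "widget", "price": 10.0, "in_stock": True},
        {"name": "gadget", "price": 25.0, "in_stock": False},
        {"name": "gizmo", "price": 7.5, "in_stock": True},
    ]

    def total_procedural(items, discount):
        # Explicit loop with a mutable accumulator.
        total = 0.0
        for item in items:
            if item["in_stock"]:
                total += item["price"] * (1 - discount)
        return total

    def total_functional(items, discount):
        # Single expression: filter, transform, and reduce in one pass.
        return sum(item["price"] * (1 - discount)
                   for item in items
                   if item["in_stock"])

    assert total_procedural(items, 0.1) == total_functional(items, 0.1)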
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-23T02:03:33.000Z · LW(p) · GW(p)
There's emotion involved. I enjoy calling people's bluffs.
Jef, if you want to argue further here, I would suggest explaining just this one phrase "functional self-similarity of agency extended from the 'individual' to groups".
comment by Cassandra2 · 2008-10-23T02:05:36.000Z · LW(p) · GW(p)
I have enjoyed what parts I read of Godel, Escher, Bach but I have yet to finish it. As far as being a mathematical thinker... I haven't really identified how my thought process works other than being confused much of the time. I do enjoy math but I don't seem to have much of a talent for it. Haven't really found anything I do have a talent for. I am trying to become more of a mathematical thinker and to construct a good foundation to build a system of knowledge on but I have this strong natural inclination to trust anyone and that tends to undermine some of my efforts. My current plan to counter that is to simply find people that I can trust and fill my head up with good stuff so I can use it to block out the bad stuff.
As far as my childhood goes I created a lot of problems for myself by trying to force myself into a mold which conflicted strongly with the way my brain was setup. For most of my youth and teenage years I was this weird wanna-be artist anti-rationality, anti-science stereotype that was stuck in a sophistic nightmare for years. Luckily I can look back on that cringe in horror. I do realize now that there was no way that I could have ever been successful in that field even if I hadn't made mistake after mistake because my creative writing, for example, reads like a VCR manual. I have noticed that, now that I have stopped struggling against my general nature and adopted more technical and rationality-based approaches, I do so much better.
I still suck at this whole mathematical thinking thing but I believe that has more to do with me working to undo years of stupid shit and lacking the necessary experience than a lack of talent, maybe. My entire life is just one big game of catchup right now and it's extraordinarily stressful. Not sure if this is the story you wanted to hear; feel free to send me an email or something I guess, if you wanted different information.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-23T02:15:44.000Z · LW(p) · GW(p)
Thanks, Cassandra, that was the information I was looking for.
It's interesting that others have shared this experience, trying to distance ourselves from, control, or delete too much of ourselves - then having to undo it. I hadn't read of anyone else having this experience, until people started posting here.
The idea of having to reweave yourself out of immediately available parts is startling to me - I remember my own recovery as being more like a chain of "Undos" to restore the original state. I wonder what the alternative would have been like, but I'm not going to try it again to see!
comment by Cassandra2 · 2008-10-23T02:33:56.000Z · LW(p) · GW(p)
In regards to referring to yourself as a coalition. I am not so sure that would be a good idea. Cassie would be a good little hive drone because she never had any sense of strong identity to begin with. And I like to speak in the third person. ;) Seriously I am mildly uncomfortable with even referring to myself as 'I' these days because I try to keep very careful record of which factors influence my mind and how they influence me and after I add all this up it seems pretty clear to me that I do not exist. Perhaps I am simply wrong but this is the conclusion I have come to.
comment by Tom_McCabe2 · 2008-10-23T02:41:12.000Z · LW(p) · GW(p)
"As far as my childhood goes I created a lot of problems for myself by trying to force myself into a mold which conflicted strongly with the way my brain was setup."
"It's interesting that others have shared this experience, trying to distance ourselves from, control, or delete too much of ourselves - then having to undo it. I hadn't read of anyone else having this experience, until people started posting here."
For some mysterious reason, my younger self was so oblivious to the world that I never experienced (to my recollection) a massive belief system rewrite. I assume that what you're referring to is learning a whole bunch of stuff, finding out later on that it's all wrong, and then go back and undoing it all. I don't think I ever learned the whole bunch of stuff in the first place- eg., when I discovered atheism, I didn't really have an existing Christian belief structure that had to be torn down. I knew about Jesus and God and the resurrection and so forth, but I hadn't really integrated it into my head, so when I discovered atheists, I just accepted their arguments as true and moved on.
comment by Jordan · 2008-10-23T03:00:32.000Z · LW(p) · GW(p)
When I was a teenager I had a concept I referred to as "the double edged sword of apathy". It was precisely the concept that separating oneself from certain aspects of oneself (which at the time I called fostering apathy) is a destructive tool which can be either positive or negative. Care must be taken not to slice your own arm off.
I don't believe that this danger should be removed though, at least I wouldn't personally allow it. I hold "self-modifying" to be the deepest aspect of life. When we finally get the technology to do source level modifications I won't let an AI do the job: I'll do it myself, regardless of the risk.
comment by frelkins · 2008-10-23T03:16:00.000Z · LW(p) · GW(p)
I grow more interested in the ideas of William James on this subject. Statements such as:
The passing Thought itself is the only verifiable thinker
Thought is a passing thought that incessantly remembers previous thoughts and appropriates some of them to itself
There is a "judging Thought" that identifies and owns some parts of the stream of consciousness while disowning others
The next moment another Thought takes up the expiring Thought and appropriates it. It binds the individual past facts with each other and with itself.
In this way, what holds the thoughts together is not a separate spirit or ego but only another thought of a special kind.
seem in accord with my experience. Thus I am beginning to see "myself" as something that my body wears, a lengthening necklace of black pearls and white diamonds on a singing string - each appropriated thought a pearl, each owned part of the stream a diamond, and the "special kind" of thought my string.
comment by Savage · 2008-10-23T03:36:00.000Z · LW(p) · GW(p)
This is an awesome and freaky topic.
"It's interesting that others have shared this experience, trying to distance ourselves from, control, or delete too much of ourselves - then having to undo it. I hadn't read of anyone else having this experience, until people started posting here."
Haha... this is indeed kind of weird to see. I am very familiar with this experience.
"Seriously I am mildly uncomfortable with even referring to myself as 'I' these days because I try to keep very careful record of which factors influence my mind and how they influence me and after I add all this up it seems pretty clear to me that I do not exist."
That's a great insight. Few people have this extreme maxed-out reflectivity. One person I think of is Michael Wilson.
It's actually hilarious in a kind of horrible way to watch people screw it up. Like having some extreme reflectivity but not maxing it out, or somehow it doesn't work right. Hard to explain, but I guess your example works:
"For most of my youth and teenage years I was this weird wanna-be artist anti-rationality, anti-science stereotype that was stuck in a sophistic nightmare for years. Luckily I can look back on that cringe in horror."
Kind of relates to what Jordan said:
"When I was a teenager I had a concept I referred to as "the double edged sword of apathy". It was precisely the concept that separating oneself from certain aspects of oneself (which at the time I called fostering apathy) is a destructive tool which can be either positive or negative. Care must be taken not to slice your own arm off."
" a massive belief system rewrite. I assume that what you're referring to is learning a whole bunch of stuff, finding out later on that it's all wrong, and then go back and undoing it all. "
That's not exactly how I would describe it. It's more like perfecting the Way. "If you speak overmuch of the Way you will not attain it." But if you totally ignore the Way, then you are just blasting off in the wrong direction.
"Jef, if you want to argue further here, I would suggest explaining just this one phrase "functional self-similarity of agency extended from the 'individual' to groups"."
I am very much appreciating your hard line here.
" Mathew C: "And the biggest threat, of course, is the truth that the self is not fundamentally real. When that is clearly seen, the gig is up."
Spot on. That is by far the biggest impasse I have faced anytime I try to convey a meta-ethics denying the very existence of the "singularity of self" in favor of the self of agency over increasing context. I usually to downplay this aspect until after someone has expressed a practical level of interest, but it's right there out front for those who can see it. "
I think you are misinterpreting things here. I would call it a false dichotomy.
comment by Savage · 2008-10-23T03:41:00.000Z · LW(p) · GW(p)
"I gained this kind of reflectivity when I was barely able to even think and I did not know how to use it wisely. One of my first memories is relentlessly purging my early childhood personality shortly after I discovered how to perform the trick then panicking and rebuilding a new self any sort of stuff laying around. Still think that rampant self-modification left scars on my mind that are still there today. Emotional distance did increase the more I examined, altered and experimented with this and eventually caused some really painful side effects. Eek."
Yes, yes, yes.
I'm liking my analogy with "the Way" more and more. I was unsure about it at first.
comment by Savage · 2008-10-23T03:47:00.000Z · LW(p) · GW(p)
" " Mathew C: "And the biggest threat, of course, is the truth that the self is not fundamentally real. When that is clearly seen, the gig is up."
Spot on. That is by far the biggest impasse I have faced anytime I try to convey a meta-ethics denying the very existence of the "singularity of self" in favor of the self of agency over increasing context. I usually to downplay this aspect until after someone has expressed a practical level of interest, but it's right there out front for those who can see it. "
I think you are misinterpreting things here. I would call it a false dichotomy."
What I mean is, just because there is no "ontological" self doesn't mean there isn't a really complex "self-like" process that is highly dependent on, correlated with, and based in a single, individual brain - a process that simply is not isomorphic to a group of such processes, especially with respect to the Singularity.
comment by Daniel_Franke · 2008-10-23T04:04:00.000Z · LW(p) · GW(p)
EY: I remember my own recovery as being more like a chain of "Undos" to restore the original state.
Presuming you're referring to your religious upbringing, that seems like a funny way of characterizing it. Virtually every old primitive civilization that we know about had religious superstitions that all look pretty similar to each other and whose differences are mostly predictable given the civilization's history and geography. (I say "virtually" just as cover; I don't actually know any exceptions). Modern Judaism is a whole lot saner than any of these, and even somewhat saner than most modern mainstream alternatives. So it seems to me that your parents did pretty well: what they taught you was far from ideal, but it's lot better than what you would likely have come up with on your own if you had been raised by wolves. Rejecting religion is development, not rehabilitation, because religion isn't active stupidity; merely the rest state of an ignorant mind.
Cassandra: One of my first memories is relentlessly purging my early childhood personality shortly after I discovered how to perform the trick then panicking and rebuilding a new self any sort of stuff laying around. Still think that rampant self-modification left scars on my mind that are still there today.
I'm with EY in finding this unusual. Since the point of having a physically-developed brain, I've been through five cataclysmic adjustments to my worldview, each spurred by the influence of a particular writer (Ayn Rand, Eric Raymond, Paul Graham, Murray Rothbard, and Richard Dawkins in that sequence, with EY currently vying to be #6). But these have always been exciting, not frightening or traumatic even at the most reptilian level. When I come across a writer with a surprising philosophy that I'm intrigued by but decide to reject, I'm disappointed, not relieved. I can't relate to it feeling otherwise.
N.b., discarding religion was not a cataclysm. It was pretty gradual. I was labeling myself a Pascal's Wager agnostic since before my Bar Mitzvah, and by the time I came across Dawkins I was already an atheist. Dawkins merely brought me out of the closet, getting me to take pride in being an atheist and to denounce superstition rather than just passively reject it.
I'd like to propose a "personal development score" of sorts. Most adults consider their teenage self a fool. How many times over have you gotten to this point? That is, think of the most recent revision of you which your current self considers a fool. Then recurse back to that point and determine what you would have answered then. How many times can you recurse before you reach back to childhood? Deduct obvious regressions. For example, if you were a cult member between 2004 and 2006, and you_2008 consider both you_2005 and you_2002 to be fools, you_2005 considers you_2002 a fool, but you_2008 consider you_2005 wiser than you_2002, then count that as one improvement rather than two. Then divide by (your age - 13).
comment by Daniel_Franke · 2008-10-23T04:10:00.000Z · LW(p) · GW(p)
Sorry, I botched the second-to-last sentence. It should read "For example, if you were a cult member between 2004 and 2006, and you_2008 consider both you_2005 and you_2002 to be fools, you_2005 considers you_2002 a fool, but you_2008 consider you_2002 wiser than you_2005, then count that as one improvement rather than two."
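(A rough sketch of how this score might be computed, for anyone who wants to play with the idea. Python; the names and toy data are hypothetical, and the "deduct obvious regressions" rule is implemented as: a step counts only if the current self ranks the later revision above the earlier one.)

```python
# Hypothetical sketch of the proposed "personal development score"; all data is made up.
def fool_chain(most_recent_fool, current_year, childhood_year):
    """Walk back through past selves: each step goes to the most recent
    earlier self that the self at that link considered a fool."""
    chain = [current_year]
    year = current_year
    while year is not None and year > childhood_year:
        year = most_recent_fool.get(year)
        if year is not None:
            chain.append(year)
    return chain

def development_score(most_recent_fool, rank, current_year, childhood_year, age):
    """rank[y]: how wise the *current* self judges the year-y self to be.
    A step counts as an improvement only if the later self outranks the
    earlier one, which implements the regression deduction."""
    chain = fool_chain(most_recent_fool, current_year, childhood_year)
    improvements = sum(
        1 for later, earlier in zip(chain, chain[1:])
        if rank.get(later, 0) > rank.get(earlier, 0)
    )
    return improvements / (age - 13)

# The cult-member example: 2008 considers both 2005 and 2002 fools, 2005
# considered 2002 a fool, but 2008 ranks 2002 above 2005, so only one
# improvement is counted.
most_recent_fool = {2008: 2005, 2005: 2002, 2002: None}
rank = {2008: 3, 2005: 1, 2002: 2}
print(development_score(most_recent_fool, rank, 2008, 1990, 31))  # ~0.056
```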
comment by Another_Anonymous · 2008-10-23T04:21:00.000Z · LW(p) · GW(p)
I'm pretty consequentialist and it's hard to predict the effects of Eliezer's harsh criticism of Jef Allbright, but in the absence of a compelling argument that it is for the better, I would not be so mean.
That said, Jef's comments were (perhaps unintentionally) condescending.
As for their substantive disagreement, I have little to add to what they and other commentators are contributing.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-23T04:21:00.000Z · LW(p) · GW(p)
It seems that Tom McCabe and Daniel Franke didn't go through anything like what Cassandra and I went through, since they have no referent at all for the experience, and tried to map it onto belief systems or religions.
If I understand C correctly, the mutual experience that we're describing is nothing like that. You find out how to disable pieces of yourself. Then one day you find that you've disabled too much. It doesn't necessarily have anything to do with religion or even with beliefs, except for whatever beliefs spurred you to start deleting pieces of yourself.
comment by Daniel_Franke · 2008-10-23T04:37:00.000Z · LW(p) · GW(p)
EY: You find out how to disable pieces of yourself. Then one day you find that you've disabled too much. It doesn't necessarily have anything to do with religion or even with beliefs, except for whatever beliefs spurred you to start deleting pieces of yourself.
Okay, I now see what you're saying. I haven't experienced it. I understand the trick of disabling pieces of oneself, but I've never in my recollection abused it. However, I can understand what it would be like because I've experienced something that I'm guessing is similar: I'm a high-functioning autistic, and I've had to put considerable effort into software emulation of the emotional hardware that I'm missing.
comment by mtraven · 2008-10-23T04:42:00.000Z · LW(p) · GW(p)
Everything I am, is surely my brain; but I don't accept everything my brain does, as "me".
Such an awkwardly phrased and punctuated sentence is evidence of cognitive failure, or at least a hiccup. There's a fundamental mistake you are trying to paper over right at the start of this essay, which goes downhill from there.
Why are hardcore materialists, who presumably have no truck with Cartesian mind/body dualism, so eager to embrace brain/body dualism? Or software/hardware dualism?
So you start by restricting your self to your brain (at least, I think that's what that sentence means), and follow up by being obligated to lop off large chunks of that. Keep it up and you'll wind up as a ghost in the machine after all.
I'm afraid this seems like the opposite of Zen to me.
comment by Richard_Hollerith · 2008-10-23T04:57:00.000Z · LW(p) · GW(p)
trying to distance ourselves from, control, or delete too much of ourselves - then having to undo it.
I cannot recall ever trying to delete or even control a large part of myself, so no opinion there, but "distancing ourselves from ourselves" sounds a lot like developing what some have called an observing self, which is probably a very valuable thing for a person wishing to make a large contribution to the world IMHO.
A person worried about not feeling alive enough would probably get more bang for his buck by avoiding exposure to mercury, which binds permanently to serotonin receptors, causing a kind of deadening.
comment by Daniel_Franke · 2008-10-23T05:29:00.000Z · LW(p) · GW(p)
mtraven: Why are hardcore materialists, who presumably have no truck with Cartesian mind/body dualism, so eager to embrace brain/body dualism? Or software/hardware dualism?
Has anyone but me brought up software/hardware dualism? I'm only using it metaphorically. I'm not claiming any fundamental, bright-line distinction.
comment by michael_vassar · 2008-10-23T06:59:00.000Z · LW(p) · GW(p)
Hmm. I remember being non-reflective in first grade but not in second grade. One consequence was that I couldn't re-write explicit beliefs in response to new information and I saw general injunctions and commands as relatively binding and automatic. Conflicting commands couldn't be accommodated, nor could common sense. I don't think that my emotions were any more intense. I never re-wrote myself, or noticed a change at the time, but I notice it in my memories. Early ones don't include the question "why am I doing this?" or "why do this rather than that". In 6th grade I suffered catastrophic failure to relate to anyone who was not reflective and largely but incompletely corrected this failure as a college sophomore at age 18. Efforts to correct it continue, with large steps in the last 2 years.
I agree with much of what Robin has said here but wish he would write his own blog post about it.
comment by Nate_Barna · 2008-10-23T07:19:00.000Z · LW(p) · GW(p)
I think I prefer, and should prefer, my smoothed out highs and lows. During a finite manipulation sequence of a galactic supercluster, whose rules I pre-established, I wouldn't necessarily need to feel much -- since that might feel like 'a lot of pointless muscle straining' -- other than a modest, homo sapiens-level positive reinforcement that it's getting done. Consciousness, if I may also give my best guess, is only good for abstract senses (with and without extensions), and where these abstract senses seem concrete, even to infinite precision, "highs" are not necessary and "lows" certainly are not.
comment by pdf23ds · 2008-10-23T07:24:00.000Z · LW(p) · GW(p)
My mom tells me that when I was 2 or 3 years old and in preschool, one day they had a group of kids come up and recite poems or sing songs or something. I got really scared and wouldn't do my poem. Riding home afterwards, I said "I'm ready to say my poem now."
"You can't honey, everybody's already gone," my mom said.
She explained to me why it's important to be ready when it's time to do something.
"I don't want to be scared. I want to be a person who does things when it's time," I said.
I have no recollection of this event, but even my earliest memories include some where I'm reflective. So as far as I can tell, I've always been reflective. Not much of a choice there.
On the other hand, I've never had any big sudden shifts in the way I look at myself. I have changed substantially in a couple of identifiable areas over time. One was becoming more sociable and friendly and more persuasive, which happened between the ages of 12-15. Another was becoming more materialistic, reductionist, and math-minded from the ages of 15-21. But if there were any more discrete changes, I'm not bringing them to mind at the moment.
comment by Ian_C. · 2008-10-23T07:42:00.000Z · LW(p) · GW(p)
The ability to become emotionally detached is a useful skill (e.g. if you are being tortured) but when it becomes an automatic reflex to any emotion, it can take all the colour out of life.
Sometimes highly intelligent people are also overwhelmingly sensitive/empathetic so detaching is very tempting. The first few minutes of this video with the genius girl walking around the spaceship shows what it's like to be highly empathetic (Firefly).
http://www.youtube.com/watch?v=MsyuTLYx59g
But also: emotions come from the subconscious, and the subconscious contains that which is done repeatedly on the conscious level. So if you are habitually rational, does that affect your subconscious and therefore your emotions?
I think what happens is, you are so consistent on the conscious level (e.g. the way our host cross-links all his posts) that the subconscious is also highly consistent. So when it produces an emotion it produces it with the whole of itself, instead of just one part contradicted by another (mixed emotions). Therefore the genius has very strong emotions, which interestingly is the stereotype: the overwrought genius who flies off the handle.
The sheer strength of having a conscious and subconscious in total agreement, and both in turn in agreement with reality, can be overwhelming and, like the girl above ("It's getting very crowded in here!") you just want to shut it off.
comment by vanveen · 2008-10-23T07:43:00.000Z · LW(p) · GW(p)
"You find out how to disable pieces of yourself. Then one day you find that you've disabled too much. It doesn't necessarily have anything to do with religion or even with beliefs, except for whatever beliefs spurred you to start deleting pieces of yourself."
you're trying to describe a very common experience using uncommon (and frankly, bizarre - "disabling pieces of yourself"?) language to make yourself feel special. sorry, but that's what many will see when they read through this stuff.
cassandra had a set of dispositions or preference schemata that inclined her towards analytical thinking. because she found herself in an art-loving or math-hating milieu, or because of some chance encounter with an estimable art enthusiast, or because of any number of other random reasons (cool kids liked art, cool boys hated mathy females, her favorite tv character was an artist, the meanderings of her imagination led her to revere the intuitive artiste, etc.), she acquired a new conflicting preference schema. using some 'meta-preference' schema, which is probably best understood as a probabilistic attentional weighting system (susceptible to the vicissitudes of the immediate environment/arousal circuits, etc.), her behaviors aligned more closely with the new schema than with her originals. this is a very common phenomenon. literally every well-calibrated social individual does it constantly and unthinkingly. i deliberately opted out of gifted programs and math contests because my friends attached negative value to it. i didnt deliberate on the matter at the time, and i was completely unaware of the 'preference calculus' going on in my unconscious, but eventually i habituated myself to thinking and behaving "differently", at least when socializing. is that self-modification? is that 'disabling of the self'? maybe, but conscious reflexivity wasn't required and the involvement of 'purposiveness' is questionable.
conscious reflexivity, or thinking about thinking, tempers and impoverishes emotions by activating associational networks that are only tenuously connected to remembered or imagined experiences with affective weight. you can 'purposively', assuming you have the necessary preference schemata, redirect your thoughts towards your immediate experience (or the mind's focus of fancy, as the case may be), so i question whether the layered 'reflexivity' you're describing leads ineluctably to 'emotional detachment'. it certainly doesn't in my case. moreover, almost every champion of 'rationality' and 'dispassionate reason' i know has a very bland and uncomplicated personality, at least to most outside observers (including me, and i play the keynesian beauty contest for a living). that's, you know, one of the stereotypes about 'rationalist' types. you're free to choose descriptors favorable to your cause, of course, but it is my duty as a reader of "overcomingbias.com" to point out their not infrequent absurdity.
<3 robin and jef.
comment by Oshaberi · 2008-10-23T08:47:00.000Z · LW(p) · GW(p)
I have no memory of a time when I didn't think self-reflexively; I'm pretty sure I was doing it as far back as kindergarten. Though I'm not sure I took it to the extremes of some of you, I only ever did modest personality modification :). I realized I could like any previously hated food just by trying. It's somewhat useful having such great control over thoughts and emotions. Like I've amplified my fear to help me in facing the feared situation (this might sound nonsensical, but it really works). In 6th grade I was confused when the teacher talked about what she called "metacognition" because I had assumed everyone did it all the time.
These sorts of ideas don't show up much in fiction, but has anyone read "The GamePlayers of Zan"? For example: one of the characters erases her own mind at the start of the book.
Cassandra: I was especially interested in your post because of certain ahem similarities to myself. (I'm really sorry if that link's too personal). We seem to think quite similarly. In fact everyone here does, I really need to stop lurking and start posting.
comment by Tim_Tyler · 2008-10-23T08:50:00.000Z · LW(p) · GW(p)
no system of interesting complexity, including humans, actually attains long-term goals so much as it simply tries to null out the difference between its (evolving) internal model and its perceptions of its present reality.
That's no reason not to talk about goals, and instead only mention something like "utility". Humans are psychologically goal-oriented - i.e. if you talk about goals, people understand what you mean.
Talk about "goals" can be formally translated into talk about "utility", by considering utility to be estimated proximity to your goals. Whether goals are attained or not is a side issue. You can still discuss and model conquering the universe without actually doing it. So: no need to taboo "goals".
comment by Yvain2 · 2008-10-23T11:26:00.000Z · LW(p) · GW(p)
This is a beautiful comment thread. Too rarely do I get to hear anything at all about people's inner lives, so too much of my theory of mind is generalizations from one example.
For example, I would never have guessed any of this about reflectivity. Before reading this post, I didn't think there was such a thing as people who hadn't "crossed the Rubicon", except young children. I guess I was completely wrong.
Either I feel reflective but there's a higher level of reflectivity I haven't reached and can't even imagine (which I consider unlikely but am including for purposes of fake humility), I'm misunderstanding what is meant by this post, or I've just always been reflective as far back as I can remember (6? 7?).
The only explanation I can give for that is that I've always had pretty bad obsessive-compulsive disorder which takes the form of completely irrational and inexplicable compulsions to do random things. It was really, really easy to identify those as "external" portions of my brain pestering me, so I could've just gotten in the habit of believing that about other things.
As for the original article, it would be easier to parse if I'd ever heard a good reduction of "I". Godel Escher Bach was brilliant, funny, and fascinating, but for me at least didn't dissolve this question.
comment by Daniel_Franke · 2008-10-23T11:54:00.000Z · LW(p) · GW(p)
Yvain, I too am surprised to be told that there are many people who aren't at stage 2. It's not a bit surprising that most people aren't at stage 3. I've been capable of that kind of thought for as long as I can remember, but it's only since maybe 17 (I'm currently 23) that I've actually had a habit of thinking that way.
I'm amused by the number of people on this thread saying that they've acquired thought habit X through overcoming a mental disorder.
comment by Nick_Tarleton · 2008-10-23T13:15:00.000Z · LW(p) · GW(p)
[Eliezer] It's interesting that others have shared this experience, trying to distance ourselves from, control, or delete too much of ourselves - then having to undo it. I hadn't read of anyone else having this experience, until people started posting here.
I think it's only the fact of having consciously distanced yourself that's unusual; I believe it's very common for someone to unconsciously disidentify with some part of their self that they'd be better off having, and have to reintegrate it.
comment by Sci-fi_Life · 2008-10-23T13:44:00.000Z · LW(p) · GW(p)
I think this character Eliezer Yudkowsky is straight out of some hard sci-fi novel. A cross-breed between man and machine, or an AI emulating humanness and only partially pulling it off. No way a non-fictional person can come up with all of this ... words fail ... stuff.
But hey, it's the new millennium and it's supposed to be a frikkin' sci-fi life, with humans turned into sci-fi characters, right?!
Out of the 6 billion of us, I think he's one of the most ready for uploading. Meet you on UploadEarth 2045, dude.
comment by Richard_Kennaway · 2008-10-23T14:16:00.000Z · LW(p) · GW(p)
So we're all reflective people on this bus. Is that it?
comment by gaffa2 · 2008-10-23T14:37:00.000Z · LW(p) · GW(p)
Partly on topic, perhaps someone here can give me a helping hand in my attempt to level up intellectually. A heavy obstacle for me is that I have a hard time thinking in terms of math, numbers and logic. I can understand concepts on the superficial level and kind of intuitively "feel" their meaning in the back of my mind, but I have a hard time bringing the concepts into the front of my mind and visualizing them in detail using mathematical reasoning. I tend to end up in a sort of "I know that you can calculate X with this information, and knowing this is good enough for me"-state, but I'd like to be in the state where I am using the information to actually calculate the value of X in my head. When approaching a scientific or mathematical problem, I often find myself trying hard to avoid having to calculate and reason, and instead try to reach for an "intuitive" understanding in the back of my mind, but that understanding, if I can even find it, is rarely sufficient when dealing with actual problems.
So I'm just throwing this out there in case someone here might have an idea of what the hell I am talking about and be able to recommend a book, program or something that could help me begin to reach a more mathematical way of thinking. I think the ideal thing might be some kind of computer program, kind of like a game, that forces me to reason in that way before progressing (or simply a generator/collection of applied mathematical/statistical/probability theoretical problems). I don't know though, any comments are highly appreciated.
comment by Jef_Allbright · 2008-10-23T16:01:00.000Z · LW(p) · GW(p)
@Eliezer: There's emotion involved. I enjoy calling people's bluffs.
Jef, if you want to argue further here, I would suggest explaining just this one phrase "functional self-similarity of agency extended from the 'individual' to groups".
Eliezer, it's clear that your suggestion isn't friendly, and I intended not to argue, but rather, to share and participate in building better understanding. But you've turned it into a game which I can either play or allow you to use against me. So be it.
The phrase is a simple one, but stripped of context, as you've done here, it may indeed appear meaningless. So to explain, let's first restore context.
Your essay, Which Parts are "Me", highlighted some interesting and significant similarities -- and differences -- in our thinking. Interesting, because they match an epistemological model I held tightly and would still defend against simpler thinking, and significant, because a coherent theory of self, or rather agency, is essential to a coherent meta-ethics.
So I wrote (after trying to establish some similarity of background):
"At some point about 7 years later (about 1985) it hit me one day that I had completely given up belief in an essential "me", while fully embracing a pragmatic "me". It was interesting to observe myself then for the next few years; every 6 months or so I would exclaim to myself (if no one else cared to listen) that I could feel more and more pieces settling into a coherent and expanding whole. It was joyful and liberating in that everything worked just as before, but I had to accommodate one less hypothesis, and certain areas of thinking, meta-ethics in particular, became significantly more coherent and extensible. [For example, a piece of the puzzle I have yet to encounter in your writing is the functional self-similarity of agency extended from the "individual" to groups.]"
So I offered a hint, of an apparently unexplored (for you) direction of thought, which, given a coherent understanding of the functional role of agency, might benefit your further thinking on meta-ethics.
The phrase represents a simple concept, but rests on a subtle epistemic foundation which, as Mathew C pointed out, tends to bring out vigorous defenses in support of the Core Self. Further to the difficulty, an epistemic foundation cannot be conveyed, but must be created in the mind of the thinker, as described pretty well recently by Meltzer in a paper that "stunned" Robin Hanson, entitled Pedagogical Motives for Esoteric Writing. So, the phrase is simple, but the meaning depends on background, and along the road to acquiring that background, there is growth.
To break it down: "Functional self-similarity of agency extended from the 'individual' to groups."
"Functional" indicates that I'm referring to similarity in terms of function, i.e. relations of output to input, rather than e.g. similarities of implementation, structure, or appearance. More concretely [I almost neglected to include the concrete.] I'm referring to the functional aspects of agency, in essence, action on behalf of perceived interests (an internal model of some sort) in relation to which the agent acts on its immediate environment so as to (tend to) null out any differences.
"Self-similarity" refers to some entity replicated, conserved, re-used over a range of scale. More concretely, I'm referring to patterns of agency which repeat -- in functional terms, even though the implementation may be quite different in structure, substrate, or otherwise.
"Extended from the individual to groups" refers to the scale of the subject, in other words, that functional self-similarity of agency is conserved over increasing scale from the common and popularly conceived case of individual agency, extending to groups, groups of groups, and so on. More concretely, I'm referring to the essential functional similarities, in terms of agency, which are conserved when a model scales for example, from individual human acting on its interests, to a family acting on its interests, to tribe, company, non-profit, military unit, city-state, etc. especially in terms of the dynamics of its interactions with entities of similar (functional) scale, but also with regard to the internal alignments (increasing coherence) of its own nature due to selection for "what works."
As you must realize, regularities observed over increasing scale tend to indicate an increasingly profound principle. That was the potential value I offered to you.
In my opinion, the foregoing has a direct bearing on a coherent meta-ethics, and is far from "fake". Maybe we could work on "increasing coherence with increasing context" next?
comment by Partiallybright · 2008-10-23T16:37:00.000Z · LW(p) · GW(p)
Jef: That's more like it. Though part of your explanation is still unnecessarily convoluted and nested (try using shorter sentences), now I get it. It's an alright, non-fake concept/observation. But that still doesn't mean the perception of fakeness/having nothing to say isn't valid. But it may be just a perception in many if not all cases - you need to work harder to avoid being perceived as having nothing to say, that's all - talking a lot and saying a lot instead of talking a lot and saying little/nothing.
comment by Jef_Allbright · 2008-10-23T16:38:00.000Z · LW(p) · GW(p)
@Tim Tyler: "That's no reason not to talk about goals, and instead only mention something like "utility"."
Tim, the problem with expected utility maps directly onto the problem with goals. Each is coherent only to the extent that the future context can be effectively specified (functionally modeled, such that you could interact with it and ask it questions, not to be confused with simply pointing to it.) Applied to a complexly evolving future of increasingly uncertain context, due to combinatorial explosion but also due to critical underspecification of priors, we find that ultimately (in the bigger picture) rational decision-making is not so much about "expected utility" or "goals" as it is about promoting a present model of evolving values into one's future, via increasingly effective interaction with one's (necessarily local) environment of interaction. Wash, rinse, repeat. Certainty, goals, and utility are always only a special case, applicable to the extent that the context is adequately specifiable. This is the key to so-called "paradoxes" such as the Prisoner's Dilemma and Parfit's Repugnant Conclusion as well.
Tim, this forum appears to be over-heated and I'm only a guest here. Besides, I need to pack and get on my motorcycle and head up to San Jose for Singularity Summit 08 and a few surrounding days of high geekdom.
I'm (virtually) outta here.
comment by AnneC · 2008-10-23T16:49:00.000Z · LW(p) · GW(p)
You find out how to disable pieces of yourself. Then one day you find that you've disabled too much.
This definitely happened to me. I realized at some point (a few years ago) that in trying to force-fit myself into roles I thought I needed to fill in order to be a "responsible person", I'd succeeded in turning off aspects of my brain that are actually pretty vital for me to learn effectively. I didn't do anything about this realization, though, until I experienced an Epic Fail that showed me that the way I was trying to operate was neither useful nor sustainable. The big turning point for me was in moving away from seeing myself as "a broken version of normal" to "a different kind of thing entirely".
I am not talking about becoming complacent, mind you -- quite the opposite, as I'd gotten to the point where I kept running into all kinds of walls whenever I wanted to get something done, and I was very tired of having that happen.
It was as if my native operating system was different from the operating system I'd been trying to run in emulation mode for many years, to the point where things started running a lot more smoothly once I stripped away the emulation processes and got re-acquainted with my native architecture.
comment by Nato_Welch · 2008-10-23T17:19:00.000Z · LW(p) · GW(p)
Why must I be like that?
Why must I chase the cat?
Nothin' but the dog in me.
--George Clinton - Atomic Dog
comment by Tyrrell_McAllister2 · 2008-10-23T17:28:00.000Z · LW(p) · GW(p)
gaffa: A heavy obstacle for me is that I have a hard time thinking in terms of math, numbers and logic. I can understand concepts on the superficial level and kind of intuitively "feel" their meaning in the back of my mind, but I have a hard time bringing the concepts into the front of my mind and visualizing them in detail using mathematical reasoning. I tend to end up in a sort of "I know that you can calculate X with this information, and knowing this is good enough for me"-state, but I'd like to be in the state where I am using the information to actually calculate the value of X in my head.
I've found that the only way to get past this is to practice solving problems a whole bunch. If your brain doesn't already have the skill of looking at a problem and slicing it up into all the right pieces with the right labels so that a solution falls out, then the only way to get it to do that is to practice a lot.
I recommend getting an introductory undergraduate text in whatever field you want to understand mathematically, one with lots of exercises and a solutions manual. Read a chapter and then just start grinding through one exercise after another. On each exercise, give yourself a certain allotted time to try to solve it on your own, maybe 20 or 30 minutes or so. If you haven't solved it before the clock runs out, read the solutions manual and then work through it yourself. Then move on to the next problem, again trying to solve it within an allotted time.
Don't worry too much if the solutions manual whips out some crazy trick that seems totally unmotivated to you. Just make sure that you understand why the trick works, and then move on. Once you see the "trick" enough times, it will start to seem like the obvious thing to try, not a trick at all.
comment by Tim_Tyler · 2008-10-23T17:43:00.000Z · LW(p) · GW(p)
Tim, the problem with expected utility maps directly onto the problem with goals. Each is coherent only to the extent that the future context can be effectively specified (functionally modeled, such that you could interact with it and ask it questions, not to be confused with simply pointing to it.) Applied to a complexly evolving future of increasingly uncertain context, due to combinatorial explosion but also due to critical underspecification of priors, we find that ultimately (in the bigger picture) rational decision-making is not so much about "expected utility" or "goals" as it is about promoting a present model of evolving values into one's future, via increasingly effective interaction with one's (necessarily local) environment of interaction.
I don't think most of that makes much sense. If you think there's some sort of problem with utilitarian approaches to AI, feel free to spell it out - but IMHO, the sort of criticism offered here is too wishy-washy to be worth anything.
Problems with priors often wash out as you get more data. Combinatorial explosions are fun - but it's nice to know what is being combined. In biology, the future context organisms face is usually assumed to be similar to past contexts. Not a perfect assumption, but often good enough. Organisms have developmental plasticity (including brains) and developmental canalisation to help them deal with any changes.
IMO, you're barking up an empty tree here. The economic framework surrounding expected utility maximisation is incredibly broad and general - machine intelligence can't help but be captured by it.
comment by retired_urologist · 2008-10-23T18:32:00.000Z · LW(p) · GW(p)
Main post: Everything I am, is surely my brain.
It would seem that, as far as causes go, everything about any of us is contained in the zygote, long preceding any sort of "brain". Indeed, it would seem to go far more basic than that, as discussed in the Quantum Mechanics series. These recent discussions about ethics, morality, concept of self, etc. seem to be effects, rather than causes, the results of external forces interacting with the original selection and sequence of a relative few chemicals. Who can say that the eventual outcome and expression of a chemical code is analogous to, or necessary for, that of a mechanical code? None of this seems very reductionist to me, and most of the discussion could not possibly qualify as "science".
comment by Caledonian2 · 2008-10-23T18:55:00.000Z · LW(p) · GW(p)
I find it difficult to be empathetic with people who have had to reject religious thought, because I've never been religious. It has always been clear to me that my parents' beliefs on the subject were absurd; the only change has been in my response to them (patient tolerance to impatient intolerance). I was a conscious atheist even before I realized there was no Santa Claus.
Perhaps that experience was something like what other people go through when they reject religion... except that I cared little about the Santa Claus myth in itself, and was traumatized more by the realization that my parents were willing to lie to me merely for their own personal pleasure than anything else.
comment by Gordon_Worley · 2008-10-23T19:00:00.000Z · LW(p) · GW(p)
Interesting discussion.
Eli,
First, since no one has come out and said it yet, maybe it's just me but this post was kind of whiny. Maybe everyone else here is more in-tune with you (or living in your reality distortion field), but the writing felt like you were secretly trying to make yourself out to be a martyr, fishing for sympathy. Based on my knowledge of you from past interactions and your other writings I doubt this to be the case, but none the less it's the sense I got from your writing.
Second, I, too, have been through a similar experience. When I was younger, maybe around the age of 11 or 12, I can remember being able to step back from myself and see what I thought at the time was often the pointlessness of my own and others' actions. I'd say to myself "Why am I doing this? I don't want to do it and I don't know why I'm doing it." At this point I wasn't fully reflective, but was stepping back, looking in, and getting confused.
Over the next several years I worked to eliminate those things from myself which confused me. Initially I fought to remove anger and succeeded brilliantly so that to this day I still cannot get angry: frustrated and annoyed are as much as I can muster. Next it was other things, like "useless" emotions such as impatience and fear, and troublesome patterns of behavior, especially my OCD behavior patterns. Back then I blindly kept things like love, friendship, and sexual desire, having never been confused by them in the same way I was by anger, and tried to maintain things like a reluctance to change, foolishly believing that, since adults didn't seem to change their minds very often or very far, this was a desirable state.
Shortly after I joined the sl4 mailing list, I experienced a breakthrough reading CFAI section 2 and woke up to myself. The best way I know to describe what happened to me was that I saw the territory for the first time and realized that all my life I had only been staring at maps. Not that I would have said that back then, but it was the watershed moment in my life when everything changed. I was no longer blind to certain emotions and behaviors, and for the first time I had the ability to reflect on essentially anything I wanted to within myself, up to the physical limitations of my brain.
A year or two later I started looking into the literature of cognitive science and came across a book that described the inner narrative all non-zombies experience as the result of part of the way the brain functions. Essentially it said that the brain functions like a machine, and around X milliseconds after your brain does something you experience it when a part of your brain processes signals coming to it from the rest of your brain into memories. This completed the opening of myself to reflection.
A couple years later, after having finally gotten on medication for my OCD and finding myself able to pull out all the junk from my brain that I could (although I still didn't know that much about heuristics and biases at that time, so I thought I was doing a lot better than I actually was), I started dating the girl who eventually became my wife. Up to this time my mental cleaning had gone on unopposed and, although I had gotten rid of a lot of what had been myself, I never felt like I was gone and needed to rebuild myself. In fact, I liked being empty! But then sometime after our first anniversary my then girlfriend started to express frustration, anger, and other emotions I hadn't known had been inside her. As it turned out, my emptiness was causing her pain. So I rebuilt myself to not be so empty so that I could better love her, although it's something I still struggle with; for example, not making jokes about things that most people take seriously but that I have a hard time taking seriously, because I distance myself through reflection.
That's where I stand today, partially rebuilt, not entirely human.
comment by Zubon · 2008-10-23T19:09:00.000Z · LW(p) · GW(p)
gaffa, have you tried any Raymond Smullyan? If you want logic and mathematical reasoning in a game-like structure, he has several books (like the linked) that present them as riddles. There is no shame in not getting how they work until you read the solution to at least one in each chapter, but you get the pattern of how to think through certain sorts of logical structures.
comment by Infotropism2 · 2008-10-23T20:10:00.000Z · LW(p) · GW(p)
When I first came across Eliezer's writings, it struck me that what I read felt so true to me, that for the first time I felt like I had found someone I could relate to.
I have been avidly reading everything from him I could come across, as long as it "felt" right, which was often. With time I noticed that we didn't think in the same way, and it felt to me that Eliezer was much more rational, scientific, and structured than I was.
I immediately felt that the desirable thing to do would be to read even more of him, so as to "absorb" those traits in me, which would be an advantage; as if reading again and again his writings would slowly diffuse a part of his thinking into me. I know I'm that suggestible, especially if I don't put safeguards between people's ideas and me.
And with time I have felt like a part of me was changing, that I was losing a bit of what made the good old "intuitive", messy me, and gained some of what I identified as rationality, systematic reasoning.
Before, I would be aloof, would oversimplify any problem, and would follow my emotions, knowing somehow they were right most of the time and helped me win at what I did. With the knowledge I gained here, I couldn't ever have that much confidence in my own raw drives and intuitions.
It sometimes feels like plugging incompatible software into my self, messing up the whole for as long as I possess both ways of thinking/feeling. But I think it is worth it, else I wouldn't have done it.
comment by Justin_Corwin · 2008-10-23T20:13:00.000Z · LW(p) · GW(p)
Eliezer: "I don't know if I've mentioned this publicly before..."
You definitely haven't mentioned that publicly in any place that I read, which makes me glad I decided to dip into the comments of this post. I always felt a tacit acceptance, or at least no active disagreement on your part with Jef's posts on similar subjects on SL4 and other online fora (at least any available to immediate recall).
The subject of what parts of my influences, tendencies and opinions, and identifiable hardware quirks I call 'myself' is a driver of cycles of stress and self-doubt to me. Nobody here has mentioned anything similar, but I tend to experience this in two ways, extreme doubt and nostalgia regarding tendencies and traits I've eliminated or lost from myself, and increasing ambivalence and bifurcated opinion about things I feel I'm going to have to jettison or become averse to in the future.
The concrete issue of techniques to condition against bias or mental procedures to work through unwanted tendencies is something I'm always fascinated to hear scraps of from other rationalists and self-modifiers. It can't all be rationalization and reflection; I know from personal experience that those can't touch everything. So how do other people correct? My own procedures seem fairly haphazard and I adopt them only out of pragmatism, not out of any confidence in their theoretical grounding.
comment by PJ_Eby · 2008-10-24T03:46:00.000Z · LW(p) · GW(p)
It's not a one-way street; with proper technique (e.g. NLP anchoring and reframing methods, to name just a couple) you can change the cached "meaning" of a certain class of events so that they have pretty much any emotional content you choose.
Granted, my personal experiences run in the direction of modifying "negative" things to be positive, and I haven't had much call for keeping around any negative feelings.
Truth is, your concern about losing the negative feelings is irrational... probably based on a science fictional ideal of "not being as human" if you lose the negative emotions. I used to be bothered by that, but then I got rid of the negative emotion I associated with getting rid of negative emotions. ;-)
Seriously, though, you need to distinguish between suppression or detachment/dissociation of a negative emotion, and not having it in the first place. It's like Spock vs the Dalai Lama: big difference. At the point where you merely disidentify from an emotion, it's a step backwards.
What's necessary is to detach the emotional "tags" from the experience - specifically, your brain's cached predictions of the future that will arise from a given situation. By updating the cached prediction, you can update the emotional response. Reframing, RET, "The Work" and other questioning techniques work by postulating interpretations that become a basis for an imagined alternative prediction, while techniques like anchoring, doyletics, EFT, et al operate directly on the emotional tags by disrupting the response or mixing it with non-specific state.
Whatever the technique, it should be empirically tested before and after use; with myself and my clients I have them notice their automatic feeling response to a chosen test stimulus (a remembered or imagined situation), and then compare it after applying different techniques. If the technique works, the stimulus should produce a new -- and usually unexpected -- response. (If you're not surprising yourself, then how could you say anything's changed about your brain? Interestingly, this also points to a separation in the brain between reflective modelling and active modelling of behavior: if your reflective model of yourself weren't separate, your behavior could never surprise you!)
Anyway, I read your blog with much interest; on occasion it has been helpful in my work as a "mind hacking instructor". Personally, it was also very helpful to read your thoughts about the lack of "meaning" labels on things, as at one point I semi-accidentally deleted my own sense of "meaning"... and it took a while to update myself to see that "meaningless" does not equal "bad, pointless, hopeless, despair." These sorts of cached thoughts (like "you're less human if you don't feel bad things") can be particularly insidious. ;-)
By the way, do consider that thinking you need negative emotions is a lot like thinking that you need death in order to fully experience life. We only need negative emotions to survive long enough to achieve some semblance of rationality, and the more of them I personally get rid of, the more time and opportunity I have to experience positive feelings.
Dissociation or suppression, on the other hand, does indeed lead to disconnection from all emotions, and less "humanity". So don't do that. Simply delete non-useful emotional responses, so that they don't arise in response to the stimulus, rather than waiting for them to first arise, and only then detaching from them.
comment by Abigail · 2008-10-24T10:17:00.000Z · LW(p) · GW(p)
I see "Me" as all that is within my skin.
I find it helpful to think of different bits of me. My "Inner Toddler" is the bit which Wants things, or is upset by things. If I just tell it to shut up, it will become recalcitrant. If I listen to it, even though I will not necessarily do what it says, it is happier and I am happier.
This is why I am not a Rationalist. I try to use Rationality to understand the World, and "myself", but I use emotion to set my goals, just as it gives me my rewards.
comment by Abigail · 2008-10-24T14:22:00.000Z · LW(p) · GW(p)
And - I get a great deal from the theories of Carl Rogers, who postulated an Organismic Self, or a "Me" which is my whole organism, and a "Self-concept", which is those bits of me which I allow myself to be conscious of, excluding bits which I am too ashamed of to be conscious of, and including bits which are not really me, but which I pretend are me because of my concept of "good". I also see myself as an evolved being, and so draw from this that I fit into the habitat in which I find myself: if an ancestor had not fit well enough, he would not have become my ancestor.
If I feel divided against myself, opposing my own conscious goals, this may be because there is a shadow-part of the organismic self, a part which I deny because it is too uncomfortable. So I try to draw these parts out of the shadow, because they are Good. I also try to see what is self-concept, but not really me.
Perhaps some will manage to keep their self-concepts very close to their organismic selves, and such struggles are of little interest. For me, others' struggles are of great interest. Thank you for sharing.
comment by Nathaniel_Eliot · 2008-10-24T14:54:00.000Z · LW(p) · GW(p)
Jef Allbright:
So "Functional self-similarity of agency extended from the 'individual' to groups.", in other words, means "groups of humans follow similar practices to achieve their goals"? Or am I missing some mystic subtlety in the choice of "functional" over "similar", "self-similarity" over "a grouping like its parts", and agency over "method of achieving goals"? You took a lot of time to dance around the point that "groups also exclude of parts of themselves for similar reasons".
You seem sure you're the smartest person in this conversation, though I'm not sure you're aware of it. It doesn't speak well of your judgment, given the present company.
Forgive me if I am biased toward Eliezer's assessment. He has proved his worth to me, while you have only disclaimed yours.
comment by simon2 · 2008-10-24T20:33:00.000Z · LW(p) · GW(p)
Responding to Gaffa (I kind of intended to respond right after the comment, but got sidetracked):
When approaching a scientific or mathematical problem, I often find myself trying hard to avoid having to calculate and reason, and instead try to reach for an "intuitive" understanding in the back of my mind, but that understanding, if I can even find it, is rarely sufficient when dealing with actual problems.
I would advise you to embrace calculation and reason, but just make sure you think about what you are doing and why. Use the tools, but try to get an intuitive understanding of why and how they work, both in general and each time you apply them. It is true that formulaic rules can serve as a crutch to avoid the need for understanding, but if you throw away calculation and reason, you are not likely to make much progress.
Finally, be realistic in your expectations: for a complicated problem, you should not expect to be able to get an intuitive understanding of the solution as a single step, but you can aim for a chain of individually intuitive steps and, if the chain is sufficiently short, an overall intuitive understanding of how the steps relate to one another.
comment by Doug_S. · 2008-10-24T21:49:00.000Z · LW(p) · GW(p)
I seem to have taken step #3 in the other direction. The bundle of desires, urges, and subconscious cognition that makes me feel things is "me", while that running voice in my head that wonders about why I feel what I feel, is, well, just a running voice in my head doing some analysis, but with no power to affect much of anything. My mind has basically never been able to overrule the parts of the brain that feel like "me", which gives my parents no end of frustration, as they repeatedly insist that people have to learn to do things they don't want, while I've endured 26 years without having learned how to do that. I reflect, sometimes, but my reflection is completely impotent when it comes to changing behaviors. In fact, the only effect I ever see of my reflection is that, practically every time I start reflecting, I become miserable, so I actively try to avoid it whenever possible, drowning my mind in video games and other people's writings in an attempt to keep my mind from dwelling on my inner self.
So, yeah, I identify with my limbic system and not my frontal cortex (or whatever the correct names for the relevant parts of the brain are).
Is this bad?
comment by outofculture · 2008-10-25T10:05:00.000Z · LW(p) · GW(p)
Eliezer: "It makes you more moral (at least in my experience and observation) because it gives you the opportunity to make moral choices about things that would otherwise be taken for granted, or decided for you by your brain."
I have to take specific issue with this (despite being further down in the comments than I think will attract anyone's attention). This post and its comments discuss a process by which a mind can modify its behavior through reflection, but while Eliezer and many others may use this power to strengthen their morality, it can just as easily be used to further any behavioral goal. "This compassion is getting in the way of my primary goal of achieving greater wealth; best to ignore it from now on." How could things like slavery or torture happen among humans who seem to all have the capacity for compassion, if not for our overriding capacity to selectively ignore parts of ourselves we deem inappropriate? It is a power which makes us more adaptive, giving us the opportunity to make choices about things that we would otherwise take for granted, but not inherently towards morality. I do think it would be accurate to describe people without much skill in reflection as being limited in the morality they can apply to themselves. Notably, I cannot presently imagine an FAI that does not include this ability (except maybe the degenerate case of an FAI that does nothing).
I also think the process described is just an example of many routes by which we modify our behavior. Regret is the example that pops out to me immediately. When we screw things up enough to cause an intense, negative, emotional reaction, we tend to avoid those behaviors which previously led to that situation. The exertion of control over emotional reactions is functionally the same; one mental construct inhibits action on the account of another. Personally, with respect to this example, I would much rather use reflection than regret, both for the cost of use and for the time-frame. Though not enough to agree with PJ Eby's statement "your concern about losing the negative feelings is irrational". I would find it very difficult to accept a proposed FAI that wasn't terrified of not being friendly (in proportion to the power it exerted over the fate of humanity), nor regretful of having failed to reach positive outcomes (in proportion to the distance from positive). Or whatever AI structures map to the same human emotions.
comment by Mario2 · 2008-10-25T10:06:00.000Z · LW(p) · GW(p)
I know this is a few days late, but I couldn't help but notice that no one mentioned how your "Zenlike but not Zen" philosophy is basically just a weak version of Stoicism (weak in that you seem to desire some passion, whereas a stoic would advocate distancing yourself from all highs and lows). There is no need to create techniques to do this from scratch, the path has already been laid out. I would encourage anyone interested in the topic to research Stoic teachings, particularly Epictetus, if you haven't done so already.
[I recommend Epictetus because there is an unfortunate tendency in ancient Stoic philosophers toward more mystical, almost religious, thinking. Epictetus refrains from that, for the most part, and concentrates on practical aspects.]
comment by V · 2009-01-07T05:06:00.000Z · LW(p) · GW(p)
Say, Eliezer: Have you considered identifying yourself only with the part of your brain which asks clearly stated questions? This questioner is most clearly "you", IMO.
For example, I could draw the boundary of my selfhood as including the times/ parts of my brain, when I ask clear and well stated questions like: "What is this guy saying?"
comment by Wei Dai (Wei_Dai) · 2010-06-30T10:17:54.131Z · LW(p) · GW(p)
If death is horrible then I should fight death, not fight my own grief.
Yes, but how do you tell whether death is horrible or not?
In other words, how is one supposed to know which of the following is true?
- Preventing death is the real terminal value. Grief is a rational feeling that has instrumental value (for motivating oneself to fight death).
- Avoiding grief is the real terminal value. Preventing death is just a subgoal of avoiding grief (and one should fight grief directly if that's easier/more effective).
↑ comment by cousin_it · 2010-06-30T10:32:58.023Z · LW(p) · GW(p)
First I was like, "wow, good question!" Then I was like, "ooh, that one's easy". In world A people are just like us except they don't die. In world B people are just like us except they don't feel grief about dying. Right now, do you prefer our world to evolve into world A or world B? As far as I can tell, this is the general procedure for distinguishing your actual values from wireheading.
↑ comment by Wei Dai (Wei_Dai) · 2010-06-30T10:49:47.014Z · LW(p) · GW(p)
That's not really a fair comparison, is it? There is no reason to choose world B since in world A nobody feels grief about dying either (since nobody dies).
To make it fair, I think we need to change world A so that nobody dies, but everyone feels grief at random intervals, so that the average amount of grief is the same as in our world. Then it's not clear to me which world I should prefer...
↑ comment by cousin_it · 2010-06-30T12:10:30.585Z · LW(p) · GW(p)
You're right, I didn't think about that. However, if avoiding grief were a terminal value but avoiding death weren't, you'd be indifferent between world A and world B (in my original formulation). Are you?
↑ comment by Wei Dai (Wei_Dai) · 2010-06-30T13:25:24.064Z · LW(p) · GW(p)
I do prefer world A to world B in your original formulation. Unfortunately, from that fact we can only deduce that I'm not certain that avoiding death isn't a terminal value. But I already knew that...
↑ comment by cousin_it · 2010-06-30T17:42:25.704Z · LW(p) · GW(p)
If there is a fact of the matter on whether avoiding death is a terminal value, where does that fact reside? Do you believe your mind contains some additional information for identifying terminal values, but that information is somehow hidden and didn't stop you from claiming that "you're not certain"?
↑ comment by Vladimir_Nesov · 2010-06-30T13:48:34.382Z · LW(p) · GW(p)
World A is clearly better, because not only can people in it not feel grief, but they can do so indefinitely, without death stopping them from not feeling grief.
↑ comment by Vladimir_Nesov · 2010-06-30T14:05:40.569Z · LW(p) · GW(p)
Both are terminal values to some extent. Where the "consequentialist" evolution had a single (actual) outcome in mind, any instrumental influence on that process had a chance of getting engraved in people's minds. Godshatter can't clearly draw the boundaries, assert values applying only to a particular class of situations and not at all to other situations. Any given psychological drive influences the moral value of all situations (although of course this influence could be insignificant in some situations and defining in others). Where we are uncertain, the level of this influence is probably non-trivial.
comment by xamdam · 2010-07-01T15:36:02.652Z · LW(p) · GW(p)
Notably, the rabbis of old made a step in the right direction: they created a vision of "the evil desire" as an external, anthropomorphic force. This is already helpful for creating a layer between the emotions and the "person", and for mustering the natural aggressive emotions against it. (And no, they did not think the devil is an actual sadist with horns.)
My criticism of this approach is that this framework is not fine enough to give you an advantage battling the "evil inclination": viewing the immoral desire as an external force, rather than a mechanism of the brain with its own set of rules, makes it a battle of will rather than a battle of wits. Since willpower is a scarce resource this is not a good strategy. To give them credit they did not have the advantage of modern psychology; I am only criticizing to emphasize the now more remediable weakness of their approach.
comment by A1987dM (army1987) · 2012-02-28T23:27:20.129Z · LW(p) · GW(p)
Though I don't like the fact, it does seem in my case to be true, that reflecting upon a passion can diminish it. It does seem that in taking a step back from myself to look at what my brain is doing, that this introduces a greater emotional distance from the world, than when I am not reflecting.
That's exactly what (Far Eastern) meditation is about.
comment by CharlieDavies · 2012-11-11T16:38:30.530Z · LW(p) · GW(p)
Robin Hanson comments that the "I" even in a reflective person's mind is an unstable coalition.
My guess is that Eliezer knows this, and is defining his "self" to mean something like "the shifting coalition within this brain, that is trying to save the world". If this guess is wrong, I'd love to find out; this seems like the crucial bit to me.
comment by Strangeattractor · 2014-09-02T23:04:28.613Z · LW(p) · GW(p)
I draw the lines differently.
I include the thoughts that I do not want to have as parts of my self, parts that I do not fully understand or control. I observe these thoughts, and I take care not to act on them, but I do not reject them from my definition of self.
I think something like "I want to be a person who does not think or feel X on a regular basis, but right now I am a person who is experiencing X." I put it in more neutral terms than rejection. I do not think of it as an imposition by something separate, although sometimes I get frustrated by continuing to have thoughts that I do not wish to have, and that seem to be a malfunction of some kind. But I don't stop there. I attempt to diagnose the malfunction. It can get tricky, since the source of the problem, the place where it most noticeably manifests, and the remedy can be all in different places, tenuously connected. Sometimes I do get fixated on "But I don't understand why, and I want to," when it might be more practical to dismiss the thought without understanding where it comes from.
I am also not comfortable blaming the whole thing on my brain. Consciousness is not well understood by science yet, and from what I do know, which seems to me to be a very rudimentary understanding of a complex topic (though I am often unimpressed by people who claim greater certainty on these topics) I have reasons to think that the brain might not be the only relevant thing at work. For example, the intestines have almost their own nervous system and can produce significant quantities of neurotransmitters, which can then affect the brain. Or so I've read. I do think that physical biological processes can influence emotions and thoughts more than most people in our culture realize.
I do not have the highs or the lows smoothed out. I sometimes do not feel as much distance as I want to feel, but I do not feel distance that I do not want to feel.
Writing that, I just realized that's not entirely true. On some brief occasions, when I was in a lot of pain, then I did start to tune out and feel numb, and pleasure was deadened, and then I had to concentrate on feeling the pain, so I could feel other things too.
Often, over time, I do eventually come to understand better what is going on with parts of myself that I previously did not understand, and many times I am able to make changes. The ones that are left, that I have not yet been able to change, I do not take at face value. They seem to be expressing something that I haven't figured out yet, or flowing from some process that is not legible to me yet. I can mostly, though not entirely, keep them from affecting my behaviour, but they can be frequent and intense and frustrating even so.