Comments
Interesting discussion.
Eli,
First, since no one has come out and said it yet: maybe it's just me, but this post was kind of whiny. Maybe everyone else here is more in tune with you (or living in your reality distortion field), but the writing felt like you were secretly trying to make yourself out to be a martyr, fishing for sympathy. Based on my knowledge of you from past interactions and your other writings, I doubt this is the case, but nonetheless it's the sense I got from your writing.
Second, I, too, have been through a similar experience. When I was younger, maybe around the age of 11 or 12, I can remember being able to step back from myself and see what I thought at the time was often the pointlessness of my own and others' actions. I'd say to myself, "Why am I doing this? I don't want to do it and I don't know why I'm doing it." At that point I wasn't fully reflective, but I was stepping back, looking in, and getting confused.
Over the next several years I worked to eliminate those things from myself which confused me. Initially I fought to remove anger, and succeeded so brilliantly that to this day I still cannot get angry: frustrated and annoyed are as much as I can muster. Next it was other things: "useless" emotions such as impatience and fear, and troublesome patterns of behavior, especially my OCD behavior patterns. Back then I blindly kept things like love, friendship, and sexual desire, having never been confused by them in the way I was by anger, and tried to maintain things like a reluctance to change, foolishly believing that, since adults didn't seem to change their minds very often or very far, this was a desirable state.
Shortly after I joined the sl4 mailing list, I experienced a breakthrough reading CFAI section 2 and woke up to myself. The best way I know to describe what happened to me is that I saw the territory for the first time and realized that all my life I had only been staring at maps. Not that I would have put it that way back then, but it was the watershed moment in my life when everything changed. I was no longer blind to certain emotions and behaviors, and for the first time I had the ability to reflect on essentially anything I wanted to within myself, up to the physical limitations of my brain.
A year or two later I started looking into the cognitive science literature and came across a book that described the inner narrative all non-zombies experience as a byproduct of the way the brain functions. Essentially it said that the brain works like a machine: around X milliseconds after your brain does something, you experience it, when a part of your brain processes the signals coming to it from the rest of your brain into memories. This completed the opening of myself to reflection.
A couple of years later, after having finally gotten on medication for my OCD and finding myself able to pull out all the junk from my brain that I could (although I still didn't know much about heuristics and biases at that time, so I thought I was doing much better than I actually was), I started dating the girl who eventually became my wife. Up to this time my mental cleaning had gone on unopposed, and although I had gotten rid of a lot of what had been myself, I never felt like I was gone and needed to rebuild myself. In fact, I liked being empty! But sometime after our first anniversary, my then-girlfriend started to express frustration, anger, and other emotions I hadn't known were inside her. As it turned out, my emptiness was causing her pain. So I rebuilt myself to be less empty so that I could better love her, although it's something I still struggle with; for example, I have to work not to make jokes about things most people take seriously but that I have a hard time taking seriously, because I distance myself through reflection.
That's where I stand today, partially rebuilt, not entirely human.
(I'm catching up, so that's why this is posted so far after the original.)
When I attempted this exercise I tried to think of how I use the word "arbitrary" and came up with a definition along the lines of "Something is arbitrary if its choice from a set makes no difference to the veracity of a particular statement." That is, "arbitrary" is a two-place function, taking as input a choice and a statement, since without a statement to evaluate against, calling something arbitrary just looks like set membership.
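A minimal sketch of that two-place reading (the function name and examples are mine, purely illustrative):

    # "arbitrary" as a two-place test: does the choice from a set
    # make any difference to the truth of a given statement?
    def is_arbitrary(choices, statement):
        verdicts = {statement(c) for c in choices}
        return len(verdicts) == 1  # same verdict no matter the choice

    evens = [2, 4, 6, 12]
    # Choosing among evens is arbitrary for "x is divisible by 2"...
    print(is_arbitrary(evens, lambda x: x % 2 == 0))  # True
    # ...but not for "x > 10", where the choice changes the verdict.
    print(is_arbitrary(evens, lambda x: x > 10))      # False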
But then I read on and realized that I was being too narrow in what I considered arbitrary. Perhaps from too much mathematical training, I didn't even think of the common use described above. This is a subtle kind of error to watch out for: taking a technical term that happens to have the same spelling and pronunciation as a non-technical term and applying the technical definition back to the non-technical term. The effect is either that you confuse other people, because you use a technical term that looks like a non-technical one, or that you confuse yourself, by misunderstanding what people mean when they use the term in its non-technical sense. This sort of thing becomes a bigger problem, I reckon, as you become more and more specialized in a field with lots of technical language.
Eliezer,
You know that you can't succeed without the math, and slowing down for posts like this is taking away 24 hours that might have been better used to save humanity. Not that this was a bad post, but I think you would be better off letting others write the fun posts unless you need to write a fun post to recover from teaching.
I agree that it makes no sense, but as I was writing the comment I figured I would take you down the wrong path of what someone might naively think and then correct it. Someone overly trained in logic and not in probability might assume that, since Raven(x)-->Black(x) being true gives P(B|R) = 1, the falsity of the reverse implication Black(x)-->Raven(x) gives P(R|B) = 0. But based on the comments above, maybe only an ancient Greek philosopher would be inclined to make such a mistake.
Hopefully not taking away anyone's fun here, but to reconcile Raven(x)->Black(x) but not vice versa, what this statement wants to say, letting P(R) and P(B) be the probabilities of raven and black, respectively, is P(R|B)=0 and P(B|R)=1, which gives us that
P(R|B) = 0  =>  P(RB)/P(B) = 0  =>  P(RB) = 0
and
P(B|R) = 1  =>  P(BR)/P(R) = 1  =>  P(BR) = P(R)
But of course this leads to a contradiction (P(RB) would have to equal both 0 and P(R), forcing P(R) = 0), so it can't really be true that Black(x)-/->Raven(x), can it? Sure it can, because what is really meant by "does not imply" (-/->) is not P(R|B) = 0 but P(R|B) < 1. But in logic we often forget this, because anything with a probability less than 1 is assigned a truth value of false.
Logic has its value, since sometimes you want to prove something is true 100% of the time, but this is generally only possible in pure mathematics. If you try to do it elsewhere you'll get exceptions (e.g. albino ravens). So leave logic to mathematicians; you should use Bayesian inference.
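To make the contradiction concrete, here is a minimal sketch with made-up marginals (the numbers are mine, purely illustrative):

    # Suppose 1% of things are ravens, and all ravens are black.
    p_raven = 0.01
    p_raven_and_black = 1.0 * p_raven  # P(RB) = P(B|R) * P(R) = P(R)

    # Reading Black(x) -/-> Raven(x) as P(R|B) = 0 would force
    # P(RB) = 0, contradicting the line above unless P(R) = 0,
    # i.e. unless there are no ravens at all.
    # The consistent reading is P(R|B) < 1; say 20% of things are black:
    p_black = 0.20
    print(p_raven_and_black / p_black)  # P(R|B) = 0.05 < 1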
I believe you made a slight typo, Eli.
You said: "Since there's an "unusually high" probability for P(Z1Y2) - defined as a probability higher than the marginal probabilities would indicate by default - it follows that observing Z1 is evidence which increases the probability of Y2. And by a symmetrical argument, observing Y2 must favor Z1."
But I think what you meant was "Since there's an "unusually high" probability for P(Z1Y2) - defined as a probability higher than the marginal probabilities would indicate by default - it follows that observing Y2 is evidence which increases the probability of Z1. And by a symmetrical argument, observing Z1 must favor Y2."
Nothing you said was untrue, but the implication of what you wrote doesn't match up with the example you actually gave just above that text.
For those saying they have nothing to protect or still need to find something to protect, remember that you are human and, unless you have no natural family or reproductive ties, you always have the people you love to protect. It may seem counterintuitive if you've bought into Hollywood rationality, but love is a powerful motivational force. If you think that, in theory, being more rational is good, but don't see how to motivate greater rationality in your own mind, consider the many benefits your increased rationality would bring to the people you love (again, not Hollywood rationality, but rationality of the type Eliezer describes above).
In my case, I know I'm trying harder than ever to become a better person because of my wife. When I do something that hurts her, my first thought is to figure out what is wrong with my thinking that led to it. My second is to find a better way to express my love, through increasing her happiness and enjoyment of life. And, realizing that the best thing I can do is shut up and multiply, I figure out how to change myself to be a better multiplier.
Am I right in thinking that you've now brought the OB audience to where you need them in order to start trying to talk about AI (or "optimizing processes" or whatever terminology is sufficiently abstract to prevent linguistically inferred misunderstanding)?
Let's suppose we measure pain in pain points (pp). Any event which can cause pain is given a value in [0, 1], with 0 being no pain and 1 being the maximum amount of pain perceivable. To calculate the pp of an event, assign a value to the pain, say p, and then multiply it by the number of people who will experience the pain, n. So for the torture case, assume p = 1 (and n = 1); then:
torture: 1*1 = 1 pp
For the speck-in-the-eye case, suppose a dust speck causes the least amount of pain greater than no pain possible; denote this by e. Here n = 3^^^3, so if e < 1/3^^^3 then
specks: 3^^^3 * e < 1 pp
and if e > 1/3^^^3 then
specks: 3^^^3 * e > 1 pp
So assuming our moral calculus is to always choose whichever option generates the least pp, we need only ask if e is greater than or less than 1/n.
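A minimal sketch of this decision rule (the function names and stand-in numbers are mine; 3^^^3 itself is far too large to compute with):

    # Total pain points: per-person pain p in [0, 1] times n people.
    def pain_points(p, n):
        return p * n

    # Pick whichever option generates the fewer pain points.
    def choose(e, n):
        torture = pain_points(1.0, 1)  # one person, maximal pain
        specks = pain_points(e, n)     # n people, pain e each
        return "torture" if torture < specks else "specks"

    # With a stand-in large n: specks total 1000 pp, so torture wins.
    print(choose(e=1e-12, n=10**15))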
If you've been paying attention, I now have an out that lets me give no answer: we don't know what e is, so I can't decide (at least not based on pp). But I'll go ahead and wager a guess. Since 1/3^^^3 is so small, I think it most likely that any pain-sensing system of any present or future intelligence will have e > 1/3^^^3, so I must choose torture, because torture costs 1 pp but the specks cost more than 1 pp.
This doesn't feel like what, as a human, I would expect the answer to be. I want to say don't torture the poor guy; all the rest of us will suffer the speck so he need not be tortured. But I suspect this is the human inability to deal with large numbers, because I notice that I would be willing to accept a speck so the guy wouldn't be tortured, since e pp < 1 pp, and every other individual, supposing they were pp-fearing people, would make the same short-sighted choice. But the net effect would be to distribute more pain through the specks than the torture ever would.
Weird how the human mind can find a logical answer and still expect a nonlogical answer to be the truth.
Between teaching mathematics to freshmen and spending most of my time learning mathematics, I've noticed this myself. When presented with a new result, the first inclination, especially depending on the authority of the source, is to believe it and figure there's a valid proof of it. But occasionally the teacher realizes they made a mistake and may even scold the students for not noticing, since it is incredibly obvious (e.g. something like ||z - z_0|| changing to ||z - z_1|| between steps, even though a few seconds' thought reveals it to be a typo rather than a mathematical insight).
Sometimes (and for a few lucky people, most of the time) individuals are in a mental state where they are actively thinking through everything being presented to them. For me, this happens a few times a semester in class, and almost always during meetings with my advisor. And occasionally I have a student who does it when I'm teaching. But in my experience this is a mentally exhausting task and often leaves you think-dead for a while afterwards (I find I can go about 40 minutes before I give out).
All this leads me to conclude, largely from my experience of which teaching behaviors produce which effects, that in mathematics the best way to teach is to assign problems and give students clues when they get stuck. The problems assigned, of course, should be ones that lead the student to build up the mathematical theory. It's certainly more time-consuming, but in the end more rewarding, in terms of both emotional satisfaction and understanding.
Eliezer, although the comments did eventually get better, don't despair over the early comments on this post. Remember, all you are finding in the comments is evidence confirming the belief that no one reading this blog is learning anything. I conjecture that those who have learned something just don't get excited enough to post, because they don't disagree with you strongly enough or aren't sufficiently surprised to thank you publicly.
Of course, I still suspect, as you probably do from years of experience, that most readers of this blog believe they are learning to overcome bias when in fact they are just convincing themselves that they are, because they have read about bias and believe it's a virtue to overcome it. And I don't exclude myself from this group, because although I don't feel as though I'm thinking this way, that doesn't mean I'm not secretly doing so, to be revealed only when I discover a serious error in my behavior and beliefs.
The best thing about grad school is when you finish taking courses. To make it through the math courses you have to play the game, writing down proofs you know aren't right but that will get you some credit. Once you're done with that, you can actually step back and learn something: study only one or two things at a time, set a reasonable pace that allows you time to think (one matched to your own speed of thought), and actually gain some understanding. Of course, some students use this as an excuse to be lazy, but a good advisor will know the difference.
As I see it, what's most important is to make a division between rationality and emotions in terms of where they fit in the equations. Rationality describes the equations; emotions provide a source of evidence that must be applied correctly. If an outcome makes me happy, that should make me desire that outcome more, but not make me think that outcome more likely than if it made me sad (unless, of course, I'm evaluating the probability that I will be motivated to do something).
Unfortunately, I think this model of mind is not how the human mind actually works. Emotions appear to change the equations, not their arguments, so eliminating emotions seems like an appropriate measure to improve the human brain's approximation of a rational process. Maybe you can allow yourself to feel happy or sad about an outcome without it affecting your evaluation of that outcome, but getting to that point may require an unemotional transition period as you change your thinking to match that of a rational process.
zzz, I think you underestimate how people perceive gambles. Investing in financial markets isn't perceived as a bet, since we like to believe that if you only knew enough, you could make the right choices (whether you actually can is another matter). With lotteries and other forms of gambling, it doesn't matter how much you know; you can't anticipate the outcome any better than if you had no additional information. That, I think, is part of why gambling is so much more popular than investing: even the least skilled person has the same chance of winning as the most skilled.
As I've thought about the chronophone, a big part of the trouble with it is that we can't successfully transmit any idea where we already know what result we want. Thus picking something desirable now that will be translated into something desirable then is essentially impossible, since if I already know it to be desirable, I must know enough of the result to know it's desirable, hence tainting all my thoughts. At best, I can tell Archimedes about things I'm working on now that are non-obvious and hope that they translate into something similarly non-obvious that would have been generated from the same motivation in his own time. That is, if my motivation to have fun causes me to research mathematical topic X, his motivation to have fun will cause him to research topic Y in his favorite field. X and Y may not even be analogues, just generated by the same kind of thinking.
I still haven't come up with something that I feel fits the spirit of the question, but my start is that I could tell Archimedes about atheism. Until I was maybe 11 or 12 years old I never really considered the question of religion. My parents taught me the basic Christian tradition, but I never attended church or was deeply indoctrinated. At that age, though, other kids started asking me about religion as they began to become adult members of their religions. "What religion are you?" they would ask, and my answer was "I don't know." Someone asked if I celebrated Christmas, and when I said yes they told me I was a Christian, but I didn't really know what that meant. Over the next several years, the more I learned about religion the less sense it made to me, until eventually someone shunned me for being an atheist, which put me on the path to actively learning what atheism was and seriously thinking about whether anything supernatural exists.
In the end I settled on atheism. If I recounted my steps to Archimedes, I think they would come through clearly, except rather than the Cult of Jesus he might hear the Cult of Zeus. To what extent this might actually improve things I don't know, since I'm not even sure promoting atheism for its own sake is a good idea.
In sum, I agree, but one small issue I take is with the claim that when someone acts contrary to their learning, it demonstrates that they don't really understand it. I'm sure this is often the case, but sometimes it's a matter of akrasia: the person knows what they should do and why, even deep down inside, yet finds themselves unable to do it.
Humans suffer heavily from their biases. I recall that in middle school I came to the conclusion that no deities existed, yet it took me a long while to act on it because of social pressures, so I continued to behave contrary to my beliefs out of fear. It was only later in life that I gained the self-confidence and bravery to act upon my beliefs, no matter how contrary to the social norm.
You might say that I didn't really understand, and that if I did I would have acted differently, but I find this contrary to my own experience, and this is only one such example. The human brain is a minefield, and even when we understand, we may still fail to act correctly.