Comments

Comment by Tom_McCabe2 on (Moral) Truth in Fiction? · 2009-02-09T18:41:35.000Z · LW · GW

"3WC would be a terrible movie. "There's too much dialogue and not enough sex and explosions", they would say, and they'd be right."

Hmmm... Maybe we should put together a play version of 3WC; plays can't have sex and explosions in any real sense, and dialogue is a much larger driver.

Comment by Tom_McCabe2 on She has joined the Conspiracy · 2009-01-13T21:00:39.000Z · LW · GW

In case that wasn't a rhetorical question, you almost certainly did: your Introduction to Bayesian Reasoning is the fourth Google hit for "Bayesian", the third Google hit for "Bayes", and has a PageRank of 5, the same as the Cryonics Institute's main website.

Comment by Tom_McCabe2 on Serious Stories · 2009-01-09T04:32:15.000Z · LW · GW

"Would they take the next step, and try to eliminate the unbearable pain of broken hearts, when someone's lover stops loving them?"

We already have an (admittedly limited) counterexample to this, in that many Westerners choose to seek out and do somewhat painful things (eg., climbing Everest), even when they are perfectly capable of choosing to avoid them, and even at considerable monetary cost.

Comment by Tom_McCabe2 on Growing Up is Hard · 2009-01-04T05:54:21.000Z · LW · GW

"Some ordinary young man in college suddenly decides that everyone around them is staring at them because they're part of the conspiracy."

I don't think that this is at all crazy, assuming that "they" refers to you (people are staring at me because I'm part of the conspiracy), rather than everyone else (people are staring at me because everyone in the room is part of the conspiracy). Certainly it's happened to me.

"Poetry aside, a human being isn't the seed of a god."

A human isn't, but one could certainly argue that humanity is.

Comment by Tom_McCabe2 on Living By Your Own Strength · 2008-12-22T01:40:30.000Z · LW · GW

"But with a sufficient surplus of power, you could start doing things the eudaimonic way. Start rethinking the life experience as a road to internalizing new strengths, instead of just trying to keep people alive efficiently."

It should be noted that this doesn't make the phenomenon of borrowed strength go away, it just outsources it to the FAI. If anything, given the kind of perfect recall and easy access to information that an FAI would have, the ratio of cached historical information to newly created information should be much higher than that of a human. Of course, an FAI wouldn't suffer the problem of losing the information's deep structure like a human would, but it seems to be a fairly consistent principle that the amount of cached data grows faster than the rate of data generation.

The problem here - the thing that actually decreases utility - is humans taking actions without sufficient understanding of the potential consequences, in cases where the CFAI 3.2.2 observation that "Humans seem to do very well at recognizing the need to check for global consequences by perceiving local features of an action" fails. I wonder, out of a sense of morbid curiosity, what the record is for the highest amount of damage caused by a single human without said human ever realizing that they did anything bad.

Comment by Tom_McCabe2 on Sensual Experience · 2008-12-21T17:31:05.000Z · LW · GW

"By now, it's probably true that at least some people have eaten 162,329 potato chips in their lifetimes. That's even less novelty and challenge than carving 162,329 table legs."

Nitpick: it takes much less time and mental energy to eat a potato chip than to carve a table leg, so the total quantity of sphexishness is much smaller.

Comment by Tom_McCabe2 on Disjunctions, Antipredictions, Etc. · 2008-12-10T02:42:56.000Z · LW · GW

"Or, to make it somewhat less strong, as if I woke up one morning to find that banks were charging negative interest on loans?"

They already have, at least for a short while.

http://www.nytimes.com/2008/12/10/business/10markets.html

Comment by Tom_McCabe2 on Worse Than Random · 2008-11-12T19:02:58.000Z · LW · GW

"We are currently living through a crisis that is in large part due to this lack of appreciation for emergent behavior. Not only people in general but trained economists, even Nobel laureates like Paul Krugman, lack the imagination to understand the emergent behavior of free monetary systems."

"Emergence", in this instance, is an empty buzzword, see http://lesswrong.com/lw/iv/the_futility_of_emergence/. "Imagination" also seems likely to be an empty buzzword, in the sense of http://lesswrong.com/lw/jb/applause_lights/.

"precisely because the emergent behavior of the market is more powerful, more intelligent, in solving the problem of resource allocation than any committee."

Markets do not allocate resources anywhere near optimally, and sometimes they do even worse than committees of bureaucrats; the bureaucrats, for instance, may increase utility by allocating more resources to poor people on grounds of higher marginal utility per dollar per person.
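
A minimal worked example of the marginal-utility point, assuming (purely for illustration) that utility is logarithmic in wealth:

```python
import math

def total_log_utility(wealths):
    # Log utility is a standard stand-in for diminishing marginal utility of money.
    return sum(math.log(w) for w in wealths)

rich, poor = 100_000.0, 1_000.0
before = total_log_utility([rich, poor])
# Transfer $100 from the rich person to the poor person.
after = total_log_utility([rich - 100, poor + 100])
print(after > before)  # True: the same dollars buy more utility at the low end
```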

"Once you understand it then it's not so amazing but it is very difficult to understand. Ben Bernanke doesn't understand and Alan Greenspan didn't understand before him."

If you think you know more than Bernanke, then why haven't you become rich by making better-than-expected bets?

"It can be improved on by randomisation: randomly betting on heads with p=0.5 and tails with p=0.5 is a stochastic strategy which offers improved returns - and there is no deterministic strategy which produces superior results to it."

Eliezer has already noted that it is possible for a random strategy to be superior to a stupid deterministic strategy:

"But it is possible in theory, since you can have things that are anti-optimized. Say, the average state has utility -10, but the current state has an unusually low utility of -100. So in this case, a random jump has an expected benefit. If you happen to be standing in the middle of a lava pit, running around at random is better than staying in the same place. (Not best, but better.) A given AI algorithm can do better when randomness is injected, provided that some step of the unrandomized algorithm is doing worse than random."

The point of the post is that a random strategy is never better than the best possible deterministic strategy. And assuming that you're betting on real, physical coinflips, a random strategy is actually worse than the deterministic strategy of betting that the coin will come up heads if it started as heads and vice versa (see http://www.npr.org/templates/story/story.php?storyId=1697475).
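
A quick simulation of the comparison, assuming (as a stand-in for the result the linked story describes) that a physical flip lands on the side it started on 51% of the time:

```python
import random

SAME_SIDE_BIAS = 0.51  # assumed same-side probability for a physical coin flip

def flip(start):
    """Return the face the coin lands on, given the face it started on."""
    if random.random() < SAME_SIDE_BIAS:
        return start
    return "tails" if start == "heads" else "heads"

def win_rate(strategy, trials=200_000):
    wins = 0
    for _ in range(trials):
        start = random.choice(["heads", "tails"])
        wins += (strategy(start) == flip(start))
    return wins / trials

def random_bet(start):
    return random.choice(["heads", "tails"])   # ignores the starting face: ~0.50

def bet_starting_side(start):
    return start                               # deterministic: ~0.51, strictly better

print(win_rate(random_bet), win_rate(bet_starting_side))
```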

Comment by Tom_McCabe2 on Worse Than Random · 2008-11-12T16:20:56.000Z · LW · GW

"It is not clear this can be shown to be true. 'Improvement' depends on what is valued, and what the context permits. In the real world, the value of an algorithm depends on not only its abstract mathematical properties but the costs of implementing it in an environment for which we have only imperfect knowledge."

Eliezer specifically noted this in the post:

"Sometimes it is too expensive to take advantage of all the knowledge that we could, in theory, acquire from previous tests. Moreover, a complete enumeration or interval-skipping algorithm would still end up being stupid. In this case, computer scientists often use a cheap pseudo-random algorithm, because the computational cost of using our knowledge exceeds the benefit to be gained from using it. This does not show the power of randomness, but, rather, the predictable stupidity of certain specific deterministic algorithms on that particular problem."

Comment by Tom_McCabe2 on Worse Than Random · 2008-11-11T20:21:21.000Z · LW · GW

"This may not sound like a profound insight, since it is true by definition. But consider - how many comic books talk about "mutation" as if it were a source of power? Mutation is random. It's the selection part, not the mutation part, that explains the trends of evolution."

I think this is a specific case of people treating optimization power as if it just drops out of the sky at random. This is certainly true for some individual humans (eg., winning the lottery), but as you point out, it can't be true for the system as a whole.

"These greedy algorithms work fine for some problems, but on other problems it has been found that greedy local algorithms get stuck in local minima."

Er, do you mean local maxima?

"When dealing with a signal that is just below the threshold, a noiseless system won’t be able to perceive it at all. But a noisy system will pick out some of it - some of the time, the noise and the weak signal will add together in such a way that the result is strong enough for the system to react to it positively."

In such a case, you can clearly affect the content of the signal, so why not just give it a blanket boost of ten points (or whatever), if the threshold is so high that you're missing desirable data?
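
A toy simulation of both options, with made-up numbers for the threshold, signal strength, noise, and boost:

```python
import random

THRESHOLD = 10.0
SIGNAL = 9.5     # just below threshold: a noiseless, unboosted detector never fires
NOISE_STD = 1.0
BOOST = 1.0      # the "blanket boost" alternative suggested above

def detection_rate(boost=0.0, noise_std=0.0, trials=100_000):
    hits = 0
    for _ in range(trials):
        reading = SIGNAL + boost + random.gauss(0.0, noise_std)
        hits += (reading >= THRESHOLD)
    return hits / trials

print(detection_rate())                       # 0.0  - misses the sub-threshold signal entirely
print(detection_rate(noise_std=NOISE_STD))    # ~0.3 - noise pushes it over the line sometimes
print(detection_rate(boost=BOOST))            # 1.0  - a deterministic boost catches all of it
```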

Comment by Tom_McCabe2 on San Jose Meetup, Sat 10/25 @ 7:30pm · 2008-10-25T04:14:45.000Z · LW · GW

I will not be there due to a screwup by Continental Airlines, my apologies.

Comment by Tom_McCabe2 on San Jose Meetup, Sat 10/25 @ 7:30pm · 2008-10-24T00:14:07.000Z · LW · GW

See everyone there.

Comment by Tom_McCabe2 on Which Parts Are "Me"? · 2008-10-23T02:41:12.000Z · LW · GW

"As far as my childhood goes I created a lot of problems for myself by trying to force myself into a mold which conflicted strongly with the way my brain was setup."

"It's interesting that others have shared this experience, trying to distance ourselves from, control, or delete too much of ourselves - then having to undo it. I hadn't read of anyone else having this experience, until people started posting here."

For some mysterious reason, my younger self was so oblivious to the world that I never experienced (to my recollection) a massive belief system rewrite. I assume that what you're referring to is learning a whole bunch of stuff, finding out later on that it's all wrong, and then going back and undoing it all. I don't think I ever learned the whole bunch of stuff in the first place- eg., when I discovered atheism, I didn't really have an existing Christian belief structure that had to be torn down. I knew about Jesus and God and the resurrection and so forth, but I hadn't really integrated it into my head, so when I discovered atheists, I just accepted their arguments as true and moved on.

Comment by Tom_McCabe2 on Ethical Injunctions · 2008-10-20T23:31:52.000Z · LW · GW

"Would you kill babies if it was the right thing to do? If no, under what circumstances would you not do the right thing to do? If yes, how right would it have to be, for how many babies?"

I would have answered "yes"; eg., I would have set off a bomb in Hitler's car in 1942, even if Hitler was surrounded by babies. This doesn't seem to be a case of corruption by unethical hardware; the benefit to me from setting off such a bomb is quite negative, as it greatly increases my chance of being tortured to death by the SS.

Comment by Tom_McCabe2 on Protected From Myself · 2008-10-19T02:48:13.000Z · LW · GW

"But what if you were "optimistic" and only presented one side of the story, the better to fulfill that all-important goal of persuading people to your cause? Then you'll have a much harder time persuading them away from that idea you sold them originally - you've nailed their feet to the floor, which makes it difficult for them to follow if you yourself take another step forward."

Hmmm... if you don't need people following you, could it help you (from a rationality standpoint) to lie? Suppose that you read about AI technique X. Technique X looks really impressive, but you're still skeptical of it. If you talk about how great technique X looks, people will start to associate you with technique X, and if you try to change your mind about it, they'll demand an explanation. But if you lie (either by omission, or directly if someone asks you about X), you can change your mind about X later on and nobody will call you on it.

NOTE: This does require telling the same lie to everyone; telling different lies to different groups of people is, as noted, too messy.

Comment by Tom_McCabe2 on Entangled Truths, Contagious Lies · 2008-10-16T03:07:29.000Z · LW · GW

"Human beings, who are not gods, often fail to imagine all the facts they would need to distort to tell a truly plausible lie."

One of my pet hobbies is constructing metaphors for reality which are blatantly, factually wrong, but which share enough of the deep structure of reality to be internally consistent. Suppose that you have good evidence for facts A, B, and C. If you think about A, B, and C, you can deduce facts D, E, F, and so forth. But given how tangled reality is, it's effectively impossible to come up with a complete list of humanly-deducible facts in advance; there's always going to be some fact, Q, which you just didn't think of. Hence, if you map A, B, and C to A', B', and C', use A', B', and C' to deduce Q', and map Q' back to Q, the accuracy of Q is a good check for how well you understand A, B, and C.

Comment by Tom_McCabe2 on Why Does Power Corrupt? · 2008-10-14T02:32:18.000Z · LW · GW

"I am willing to admit of the theoretical possibility that someone could beat the temptation of power and then end up with no ethical choice left, except to grab the crown. But there would be a large burden of skepticism to overcome."

If all people, including yourself, become corrupt when given power, then why shouldn't you seize power for yourself? On average, you'd be no worse than anyone else, and probably at least somewhat better; there should be some correlation between knowing that power corrupts and not being corrupted.

Comment by Tom_McCabe2 on AIs and Gatekeepers Unite! · 2008-10-09T17:24:45.000Z · LW · GW

I volunteer to be the Gatekeeper party. I'm reasonably confident that no human could convince me to release them; if anyone can convince me to let them out of the box, I'll send them $20. It's possible that I couldn't be convinced by a transhuman AI, but I wouldn't bet $20 on it, let alone the fate of the world.

Comment by Tom_McCabe2 on Shut up and do the impossible! · 2008-10-08T23:55:59.000Z · LW · GW

"To accept this demand creates an awful tension in your mind, between the impossibility and the requirement to do it anyway. People will try to flee that awful tension."

More importantly, at least for me, that awful tension causes my brain to seize up and start panicking; do you have any suggestions on how to calm down, so that one can think clearly?

Comment by Tom_McCabe2 on That Tiny Note of Discord · 2008-09-23T17:57:05.000Z · LW · GW

"Eliezer2000 lives by the rule that you should always be ready to have your thoughts broadcast to the whole world at any time, without embarrassment."

I can understand most of the paths you followed during your youth, but I don't really get this. Even if it's a good idea for Eliezer_2000 to broadcast everything, wouldn't it be stupid for Eliezer_1200, who just discovered scientific materialism, to broadcast everything?

"If everyone were to live for others all the time, life would be like a procession of ants following each other around in a circle."

For a more mathematical version of this, see http://www.acceleratingfuture.com/tom/?p=99.

"It does not seem a very intuitive belief (except for very religious types and Eliezer1997 was not one of those), so what was its justification?"

WARNING: Eliezer-1999 content.

http://yudkowsky.net/tmol-faq/tmol-faq.html

"Even so, if you don't try, or don't try hard enough, you don't get a chance to sit down at the high-stakes table - never mind the ability ante."

Are you referring to external exclusion of people who don't try, or self-exclusion?

Comment by Tom_McCabe2 on A Prodigy of Refutation · 2008-09-18T02:42:49.000Z · LW · GW

"And I wonder if that advice will turn out not to help most people, until they've personally blown off their own foot, saying to themselves all the while, correctly, "Clearly I'm winning this argument.""

I fell into this pattern for quite a while. My basic conception was that, if everyone presented their ideas and argued about them, the best ideas would win. Hence, arguing was beneficial for both me and the people on transhumanist forums- we both threw out mistaken ideas and accepted correct ones. Eliezer_2006 even seemed to support my position, with Virtue #5. It never really occurred to me that the best of everyone's ideas might not be good enough.

"It is Nature that I am facing off against, who does not match Her problems to your skill, who is not obliged to offer you a fair chance to win in return for a diligent effort, who does not care if you are the best who ever lived, if you are not good enough."

Perhaps we should create an online database of open problems, if one doesn't exist already. There are several precedents (http://en.wikipedia.org/wiki/Hilbert%27s_problems). So far as I know, if one wishes to attack open problems in physics/chemistry/biology/comp. sci./FAI, the main courses of action are to attack famous problems (where you're expected to fail and don't feel bad if you do), or to read the educational literature (where the level of problems is pre-matched to the level of the material).

Comment by Tom_McCabe2 on Singularity Summit 2008 · 2008-09-09T03:57:28.000Z · LW · GW

"Before anyone posts any angry comments: yes, the registration costs actual money this year."

For comparison: The Singularity Summit at Stanford cost $110K, all of which was provided by SIAI and sponsors. Singularity Summit 2007 undoubtedly cost more, and only $50K of that was raised through ticket sales. All ticket purchases for SS08 will be matched 2:1 by Peter Thiel and Brian Cartmell.

Comment by Tom_McCabe2 on Rationality Quotes 15 · 2008-09-06T21:42:49.000Z · LW · GW

My apologies, but my browser screwed up my comment's formatting; could an admin please fix it, and then delete this? Thanks.

Comment by Tom_McCabe2 on Rationality Quotes 15 · 2008-09-06T21:39:07.000Z · LW · GW

"Ask anyone, and they'll say the same thing: they're pretty open-minded, though they draw the line at things that are really wrong."

I generally find myself arguing against open-mindedness; because "open-mindedness" is a social virtue, a lot of people apply it indiscriminately, and so they wind up wasting time on long-debunked ideas.

"In the same way that we need statesmen to spare us the abjection of exercising power, we need scholars to spare us the abjection of learning."

How many people want to exercise government-type power over large numbers of people? A lot of people are, apparently, happy to let someone else tell them what to do. Most of the rest aren't very ambitious.

"Because giftedness is not to be talked about, no one tells high-IQ children explicitly, forcefully and repeatedly that their intellectual talent is a gift. That they are not superior human beings, but lucky ones. That the gift brings with it obligations to be worthy of it."

(remembers childhood)

When adults did tell me this, I didn't believe them- after all, wasn't it blatantly obvious that there was a strong negative correlation between intelligence and quality of life?

"The best part about math is that, if you have the right answer and someone disagrees with you, it really is because they're stupid."

This is true, but only for arbitrarily low values of "stupid". There are plenty of theorems which are obvious for a superintelligence, but counterintuitive to humans.

"Long-Term Capital Management had faith in diversification. Its history serves as ample notification that eggs in different baskets can and do all break at the same time."

If I recall correctly, LTCM was so highly leveraged that most of their eggs didn't have to break- if just 10% or so did, they were hosed anyway.

Comment by Tom_McCabe2 on Qualitative Strategies of Friendliness · 2008-08-30T02:53:43.000Z · LW · GW

"In fact, if you're interested in the field, you should probably try counting the ways yourself, before I continue. And score yourself on how deeply you stated a problem, not just the number of specific cases."

I got #1, but I mushed #2 and #3 together into "The AI will rewire our brains into computationally cheap super-happy programs with humanesque neurology", as I was thinking of failure modes and not reasons for why failure modes would be bad.

Comment by Tom_McCabe2 on Mirrors and Paintings · 2008-08-23T03:06:40.000Z · LW · GW

"The real question is when "Because Eliezer said so!" became a valid moral argument."

You're confusing the algorithm Eliezer is trying to approximate with the real, physical Eliezer. If Eliezer were struck by a cosmic ray tomorrow and became a serial killer, you, Eliezer, and I would all agree that this wouldn't make being a serial killer right.

Comment by Tom_McCabe2 on Morality as Fixed Computation · 2008-08-08T05:32:07.000Z · LW · GW

"Tom McCabe: speaking as someone who morally disapproves of murder, I'd like to see the AI reprogram everyone back, or cryosuspend them all indefinitely, or upload them into a sub-matrix where they can think they're happily murdering each other without all the actual murder. Of course your hypothetical murder-lovers would call this immoral, but I'm not about to start taking the moral arguments of murder-lovers seriously."

Beware shutting yourself into a self-justifying memetic loop. If you had been born in 1800, and just recently moved here via time travel, would you have refused to listen to all of our modern anti-slavery arguments, on the grounds that no moral argument by negro-lovers could be taken seriously?

"The AI would use the previous morality to select its actions: depending on the content of that morality it might or might not reverse the reprogramming."

Do you mean would, or should? My question was what the AI should do, not what a human-constructed AI is likely to do.

It should be possible for an AI, upon perceiving any huge changes in renormalized human morality, to scrap its existing moral system and recalibrate from scratch, even if nobody actually codes an AI that way. Obviously, the previous morality will determine the AI's very next action, but the interesting question is whether the important actions (the ones that directly affect people) map on to a new morality or the previous morality.

Comment by Tom_McCabe2 on Morality as Fixed Computation · 2008-08-08T02:32:54.000Z · LW · GW

"You perceive, of course, that this destroys the world."

If the AI modifies humans so that humans want whatever happens to already exist (say, diffuse clouds of hydrogen), then this is clearly a failure scenario.

But what if the Dark Lords of the Matrix reprogrammed everyone to like murder, from the perspective of both the murderer and the murderee? Should the AI use everyone's prior preferences as morality, and reprogram us again to hate murder? Should the AI use prior preferences, and forcibly stop everyone from murdering each other, even if this causes us a great deal of emotional trauma? Or should the AI recalibrate morality to everyone's current preferences, and start creating lots of new humans to enable more murders?

Comment by Tom_McCabe2 on Anthropomorphic Optimism · 2008-08-05T07:52:04.000Z · LW · GW

"However, those objective values probably differ quite a lot from most of what most human beings find important in their lives; for example our obsessions with sex, romance and child-rearing probably aren't in there."

Several years ago, I was attracted to pure libertarianism as a possible objective morality for precisely this reason. The idea that, eg., chocolate tastes good can't possibly be represented directly in an objective morality, as chocolate is unique to Earth and objective moralities need to apply everywhere. However, the idea of immorality stemming from violation of another person's liberty seemed simple enough to arise spontaneously from the mathematics of utility functions.

It turns out that you do get a morality out of the mathematics of utility functions (sort of), in the sense that utility functions will tend towards certain actions and away from others unless some special conditions are met. Unfortunately, these actions aren't very Friendly; they involve things like turning the universe into computronium to solve the Riemann Hypothesis (see http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf for some examples). If libertarianism really was a universal morality, Friendly AI would be much simpler, as we could fail on the first try without the UFAI killing us all.

Comment by Tom_McCabe2 on Detached Lever Fallacy · 2008-08-01T08:59:46.000Z · LW · GW

"is true except where general intelligence is at work. It probably takes more complexity to encode an organism that can multiply 7 by 8 and can multiply 432 by 8902 but cannot multiply 6 by 13 than to encode an organism that can do all three,"

This is just a property of algorithms in general, not of general intelligence specifically. Writing a Python/C/assembler program to multiply A and B is simpler than writing a program to multiply A and B unless A % B = 340. It depends on whether you're thinking of multiplication as an algorithm or a giant lookup table (http://lesswrong.com/lw/l9/artificial_addition/).
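
A minimal sketch of that point in Python (the `A % B == 340` exception is the comment's own hypothetical):

```python
def multiply(a, b):
    # The general algorithm: short, and it covers every case.
    return a * b

def multiply_with_gap(a, b):
    # The gappy version needs extra machinery just to carve out the exception.
    if a % b == 340:
        raise ValueError("this organism cannot multiply these particular numbers")
    return a * b
```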

Comment by Tom_McCabe2 on Detached Lever Fallacy · 2008-07-31T22:27:00.000Z · LW · GW

"Eventually, the good guys capture an evil alien ship, and go exploring inside it. The captain of the good guys finds the alien bridge, and on the bridge is a lever. "Ah," says the captain, "this must be the lever that makes the ship dematerialize!" So he pries up the control lever and carries it back to his ship, after which his ship can also dematerialize."

This type of thing is known to happen in real life, when technology gaps are so large that people have no idea what generates the magic. See http://en.wikipedia.org/wiki/Cargo_cult.

Comment by Tom_McCabe2 on The Meaning of Right · 2008-07-29T05:59:33.000Z · LW · GW

"You will find yourself saying, "If I wanted to kill someone - even if I thought it was right to kill someone - that wouldn't make it right." Why? Because what is right is a huge computational property- an abstract computation - not tied to the state of anyone's brain, including your own brain."

Coherent Extrapolated Volition (or any roughly similar system) protects against this failure for any specific human, but not in general. Eg., suppose that you use various lawmaking processes to approximate Right(x), and then one person tries to decide independently that Right(Murder) > 0. You can detect the mismatch between the person's actions and Right(x) by checking against the approximation (the legal code) and finding that murder is wrong. In the limit of the approximation, you can detect even mismatches that people at the time wouldn't notice (eg., slavery). CEV also protects against specific kinds of group failures, eg., convince everybody that the Christian God exists and that the Bible is literally accurate, and CEV will correct for it by replacing the false belief of "God is real" with the true belief of "God is imaginary", and then extrapolating the consequences.

However, CEV can't protect against features of human cognitive architecture that are consistent under reflection, factual accuracy, etc. Suppose that, tomorrow, you used magical powers to rewrite large portions of everyone's brain. You would expect that people now take actions with lower values of Right(x) than they previously did. But, now, there's no way to determine the value of anything under Right(x) as we currently understand it. You can't use previous records (these have all been changed, by act of magic), and you can't use human intuition (as it too has been changed). So while the external Right(x) still exists somewhere out in thingspace, it's a moot point, as nobody can access it. This wouldn't work for, say, arithmetic, as people would rapidly discover that assuming 2 + 2 = 5 in engineering calculations makes bridges fall down.

Comment by Tom_McCabe2 on The Meaning of Right · 2008-07-29T05:09:32.000Z · LW · GW

Wow, there's a lot of ground to cover. For everyone who hasn't read Eliezer's previous writings, he talks about something very similar in Creating Friendly Artificial Intelligence, all the way back in 2001 (link = http://www.singinst.org/upload/CFAI/design/structure/external.html). With reference to Andy Wood's comment:

"What claim could any person or group have to landing closer to the one-place function?"

Next obvious question: For purposes of Friendly AI, and for correcting mistaken intuitions, how do we approximate the rightness function? How do we determine whether A(x) or B(x) is a closer approximation to Right(x)?

Next obvious answer: The rightness function can be computed by computing humanity's Coherent Extrapolated Volition, written about by Eliezer in 2004 (http://www.singinst.org/upload/CEV.html). The closer a given algorithm comes to humanity's CEV, the closer it should come to Right(x).

Note: I did not think of CFAI when I read Eliezer's previous post, although I did think of CEV as a candidate for morality's content. CFAI refers to the supergoals of agents in general, while all the previous posts referred to a tangle of stuff surrounding classic philosophical ideas of morality, so I didn't connect the dots.

Comment by Tom_McCabe2 on Touching the Old · 2008-07-20T09:27:59.000Z · LW · GW

"I don't think I've ever touched anything that has endured in the world for longer than that church tower."

Nitpick: This probably holds true for things of human construction, but there are obviously rocks, bits of dirt, etc. that have endured for far longer than a thousand years.

Comment by Tom_McCabe2 on Possibility and Could-ness · 2008-06-14T16:13:49.000Z · LW · GW

"What concrete state of the world - which quarks in which positions - corresponds to "There are three apples on the table, and there could be four apples on the table"? Having trouble answering that? Next, say how that world-state is different from "There are three apples on the table, and there couldn't be four apples on the table.""

For the former: An ordinary kitchen table with three apples on it. For the latter: An ordinary kitchen table with three apples on it, wired to a pressure-sensitive detonator that will set off 10 kg of C4 if any more weight is added onto the table.

"But "I could have a heart attack at any time" and "I could have a heart attack any time I wanted to" are nonetheless not exactly the same usage of could, though they are confusingly similar."

They both refer to possible consequences if the initial states were changed, while still obeying a set of constraints. The first refers to a change in initial external states ("there's a clot in the artery"/"there's not a clot in the artery"), while the second refers to a change in initial internal states ("my mind activates the induce-heart-attack nerve signal"/"my mind doesn't activate the induce-heart-attack nerve signal"). Note that "could" only makes sense if the initial conditions are limited to a pre-defined subset. For the above apple-table example, in the second case, you would say that the statement "there could be four apples on the table" is false, but only if the range of initial states the "could" quantifies over excludes states in which the detonator is disabled. For the heart-attack example, you have to exclude initial states in which the Mad Scientist Doctor (tm) snuck in in the middle of the night and wired up a deliberation-based heart-attack-inducer.
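
One way to cash this out in code is to treat "could X" as "at least one admissible initial state evolves into X", with the set of admissible states fixed in advance; the state encoding below is invented purely for illustration:

```python
def could(outcome, admissible_initial_states, evolve):
    # "Could X" = some admissible initial state evolves into a state satisfying X.
    return any(outcome(evolve(state)) for state in admissible_initial_states)

# Toy state: (number of apples on the table, is a detonator wired to the table?)
def evolve(state):
    apples, wired = state
    if wired and apples > 3:
        return ("exploded", wired)  # adding a fourth apple sets off the C4
    return state

def four_apples(state):
    return state[0] == 4

# Ordinary table: the admissible initial states include "someone adds a fourth apple".
print(could(four_apples, [(3, False), (4, False)], evolve))  # True
# Wired table: states with the detonator disabled are excluded from the subset.
print(could(four_apples, [(3, True), (4, True)], evolve))    # False
```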

Comment by Tom_McCabe2 on Causality and Moral Responsibility · 2008-06-13T23:48:43.000Z · LW · GW

"But if we assume that Lenin made his decisions after the fashion of an ordinary human brain, and not by virtue of some alien mechanism seizing and overriding his decisions, then Lenin would still be exactly as much of a jerk as before."

I must admit that I still don't really understand this. It seems to violate what we usually mean by moral responsibility.

"When, in a highly sophisticated form of helpfulness, I project that you would-want lemonade if you knew everything I knew about the contents of the refrigerator, I do not thereby create a copy of Michael Vassar who screams that it is trapped inside my head."

This is, I think, because humans are a tiny subset of all possible computers, and not because there's a qualitative difference between predicting and creating. It is, for instance, possible to look at a variety of factorial algorithms, and rearrange them to predictably compute triangular numbers. This, of course, doesn't mean that you can look at an arbitrary algorithm and determine whether it computes triangular numbers. I conjecture that, in the general case, it's impossible to predict the output of an arbitrary Turing machine at any point along its computation without doing a calculation at least as long as the calculations the original Turing machine does. Hence, predicting the output of a mind-in-general would require at least as much computing power as running the mind-in-general.
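
A minimal sketch of the factorial-to-triangular rearrangement: the loop structure is identical, with one operation swapped, which is why the prediction is easy once you can see the algorithm's internals:

```python
def factorial(n):
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

def triangular(n):
    # Same control flow as factorial, with multiplication replaced by addition.
    result = 0
    for i in range(1, n + 1):
        result += i
    return result

print(factorial(5), triangular(5))  # 120 15
```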

Incidentally, I think that there's a selection bias at work here due to our limited technology. Since we don't yet know how to copy or create a human, all of the predictions about humans that we come up with are, by necessity, easier than creating a human. However, for most predictions on most minds, the reverse should be true. Taking Michael Vassar and creating an electronic copy (uploading), or creating a human from scratch with a set of prespecified characteristics, are both technologically feasible with tools we know how to build. Creating a quantum simulation of Michael Vassar or a generic human to predict their behavior would be utterly beyond the processing power of any classical computer.

Comment by Tom_McCabe2 on Living in Many Worlds · 2008-06-05T03:51:21.000Z · LW · GW

"One of the things that always comes up in my mind regarding this is the concept of space relative to these other worlds. Does it make sense to say that they're "ontop of us" and out of phase so we can't see them, or do they propagate "sideways", or is it nonsensical to even talk about it?"

It's nonsensical. The space that we see is just an artifact of a lower level of reality. See http://www.acceleratingfuture.com/tom/?p=124.

"And you should always take joy in discovery, as long as you personally don't know a thing."

I generally give independent, replicated discoveries the same "joy status" (if that makes sense) as first-time-in-this-branch discoveries. However, you should take a hit when you're just rereading someone else's work, which isn't as challenging, or as fun.

Comment by Tom_McCabe2 on Class Project · 2008-05-31T11:52:37.000Z · LW · GW

I really, really hope that you aren't going to try and publish a theory of quantum gravity, for practical reasons; even if it's more elegant than every other theory yet proposed, the lack of experimental evidence and your lack of credentials will make you seem like a crackpot.

Comment by Tom_McCabe2 on My Childhood Role Model · 2008-05-24T03:37:42.000Z · LW · GW

First of all, to Eliezer: Great post, but I think you'll need a few more examples of how stupid chimps are compared to VIs and how stupid Einsteins are compared to Jupiter Brains to convince most of the audience.

"Maybe he felt that the difference between Einstein and a village idiot was larger than between a village idiot and a chimp. Chimps can be pretty clever."

We see chimps as clever because we have very low expectations of animal intelligence. If a chimp were clever in human terms, it would be able to compete with humans in at least some areas, which is clearly silly. How well would an adult chimp do, if he was teleported into a five-year-old human's body and thrown into kindergarten?

"But I don't buy the idea of intelligence as a scalar value."

Intelligence is obviously not a scalar, but there does seem to be a scalar component of intelligence, at least when dealing with humans. It has long been established that scores on different intelligence tests correlate strongly with one another, yielding a single common factor known as Spearman's g (http://en.wikipedia.org/wiki/General_intelligence_factor), which in turn correlates with income, education, etc.
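
A small synthetic sketch of how a single latent factor produces that pattern; the loadings and noise level are made up, so this illustrates the structure rather than any real psychometric data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 5_000, 6

g = rng.normal(size=n_people)                    # latent general factor
loadings = rng.uniform(0.5, 0.9, size=n_tests)   # how strongly each test reflects g
scores = g[:, None] * loadings + 0.6 * rng.normal(size=(n_people, n_tests))

corr = np.corrcoef(scores, rowvar=False)         # every pairwise correlation is positive
eigvals = np.linalg.eigvalsh(corr)[::-1]
print(corr.round(2))
print(eigvals[0] / eigvals.sum())                # the first factor carries a large share of the variance
```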

"2) you're handwaving away deep problems of knowledge and data processing by attributing magical thought powers to your AI."

Yes. If you have a way to solve those problems, and it's formal and comprehensive enough to be published in a reputable journal, I will pay you $1,000. Other people on OB will probably pay you much more. Until then, we do the best we can.

"as opposed to simply stating that it could obviously do those things because it's a superintelligence."

See the previous post at http://lesswrong.com/lw/qk/that_alien_message/ for what simple overclocking can do.

"We haven't even established how to measure most aspects of cognitive function - one of the few things we know about how our brains work is that we don't possess tools to measure most of the things it does."

Er, yes, we do, actually. See http://lesswrong.com/lw/kj/no_one_knows_what_science_doesnt_know/.

"Some people can do it without much effort at all, and not all of them are autistic, so you can't just say that they've repurposed part of their brain for arithmetic."

Since when is autism necessary for brain repurposing? Autism specifically refers to difficulty in social interaction and communication. Savantism is actually an excellent example of what we could do with the brain if it worked efficiently.

"By the way, when the best introduction to a supposedly academic field is works of science fiction, it sets off alarm bells in my head. I know that some of the best ideas come from sci-fi and yada, yada, but just throwing that out there."

Sci-fi is useful for introducing the reader to the idea that there are possibilities for civilization other than 20th-century Earth. It's not meant to be technical material.

"But I'm skeptical that this uniformity extends to system II. The system II abilities of the best rationalists of today may depend significantly on their having learned a set of reasoning skills developed by their culture over a long period of time."

That's precisely the point; the biological difference between humans is not that great, so the huge differences we see in human accomplishment must be due in large part to other factors.

"The simplest best theory we have for precisely predicting an arbitrary 12 grams of carbons behaviour over time requires avogadros of data for the different degrees of freedom of the start state, the electron energy states etc."

No, it doesn't; the Standard Model only has eighteen adjustable parameters (physical constants) that must be found through experiment.

"The minor tweaks in brain design allowed enormous improvements in cognitive performance, and I think that the intelligence scale should reflect the performance differences rather than the anatomical ones."

The difference between humans and chimps is fairly small anatomically; we share 95-98% of our DNA and most of our brain architecture. The huge difference between a civilization inhabited entirely by village idiots and a civilization of chimps is obvious.

"Eliezer, I think this whole frame of analysis has an element of ego-stroking/sour grapes (stroking your ego and perhaps the ego of your reading audience that defines brainy as being Einstein-like, and that defines social success as being inversely correlated, because y'all are more Einstein-like than you're socially successful)."

Social success will gradually become more irrelevant as society develops further, because social success is a zero-sum game; it doesn't produce anything of value. Dogs, orangutans, and chimps all have complex social structures. Dogs, orangutans, and chimps would all currently be extinct if we didn't have domesticated animals and environmentalists.

"The empiricism based seduction community indicates a braininess advantage in being able "to play well with the other kids"."

If you define braininess as social success, social success is obviously going to correlate with braininess. The ability to find an optimal mate is not why people are successful. Monks, who were the closest thing to scholars during the medieval period, explicitly renounced the quest for a mate, and they didn't do too badly by the standards of their time period.

"I've resisted this thread, but I'm more interested in James Simon and the google founders as an example as the high end of braininess than the Albert Einsteins of today."

If you're referring to this James Simon (http://en.wikipedia.org/wiki/James_Simon), he is obviously less accomplished than Newton, Einstein, etc., by any reasonable metric. Larry Page and Sergey Brin are rich primarily because they were more interested in being rich than in publishing papers. They sure as heck didn't become rich because they knew how to win a high school popularity contest; Bill Gates, the most famous of the dot-com billionaires, is widely reputed to be autistic.

Comment by Tom_McCabe2 on Einstein's Speed · 2008-05-21T23:13:03.000Z · LW · GW

"Celeriac, the distinction is that Tom McCabe seemed to me to be suggesting that the search space was small to begin with - rather than realizing the work it took to cut the search space itself down."

The search space, within differential geometry, was fairly small by Einstein's day. It was a great deal of work to narrow the search space, but most of it was done by others (Conservation of Energy, various mathematical theorems, etc., were all known in 1910). The primary difficulties were in realizing that space could be described by differential geometry, and then in deriving GR from known postulates. Neither of these involve large search spaces; the former follows quickly once you realize that your assumptions are inconsistent with Minkowski space, and there's only one possible derivation of GR if you do the math correctly. I don't know why the first one is hard, but Einstein showed twice that physicists are very reluctant to question background assumptions (linear time for SR, Euclidean space for GR), so we know it must be. The second one is hard because the human brain does not come equipped with a differential geometry lobe- it took me several hours to fully understand the derivation of the Schwarzschild solution from its postulates, even though the math is simple by GR standards and there is only one possible answer (see http://en.wikipedia.org/wiki/Deriving_the_Schwarzschild_solution).

Comment by Tom_McCabe2 on Einstein's Speed · 2008-05-21T22:32:10.000Z · LW · GW

"IIRC, Einstein wasn't the first to try to develop a curvature theory of gravity. Riemann himself apparently tried. And, IIRC, Einstein was one of Riemann's students. Einstein brought to the table the whole thing about having to deal with spacetime rather than space."

Riemann died in 1866; Einstein was born in 1879. Riemann was a mathematician: he developed the math of differential geometry, among a great deal of other things, so a lot of stuff is named after him. Einstein applied Riemann's geometry to the physical universe. So far as I know, none of the early non-Euclidean geometry people thought that their geometries might be applicable in reality. The first theorems of hyperbolic geometry were produced in an attempt to create a contradiction and so prove Euclid's fifth postulate.

"I disagree strongly with the suggestion Einstein was a proponent of MWI. In fact, the overemphasis on deduction (defined here as induction from few au priors) caused him to waste the remaining 2/3 of his life attempting to disprove quantum phenomena, no?"

I have to find an actual physicist to discuss this with, but there appears to be nothing wrong with Einstein's quest for a unified theory; he simply didn't have the prerequisite information of QM at the time (Feynman, Dyson, etc. didn't develop renormalization until the 1940s). MWI wasn't proposed until several years after Einstein's death.

"A willingness to reconsider his assumptions, an openness to new explanations, and an abiding belief that hypotheses should always be tested against the data - and discarded if they were found wanting."

Plenty of scientists have these, and many of them make significant discoveries in their fields. But what was it about Einstein that let him discover, not one, but two of the fundamental theories of physics?

Comment by Tom_McCabe2 on Changing the Definition of Science · 2008-05-18T18:59:51.000Z · LW · GW

"Science tolerates errors, Bayescraft does not. Nobel laureate Robert Aumann, who first proved that Bayesians with the same priors cannot agree to disagree, is a believing Orthodox Jew."

I think there's a larger problem here. You can obviously make a great deal of progress by working with existing bodies of knowledge, but when some fundamental assumption breaks down, you start making nonsensical predictions if you can't get rid of that assumption gracefully. Aumann learned Science, and Science worked extremely well when applied to probability theory, but because Aumann didn't ask "what is the general principle underlying Science, if you move into an environment without a long history of scientific thought?", he didn't derive principles which could also be applied to religion, and so he remained Jewish. The same thing, I dare say, will probably happen to Bayescraft if it's ever popularized. Bayescraft will work better than Science, across a larger variety of situations. But no textbook could possibly cover every situation- at some point, the rules of Bayescraft will break down, at least from the reader's perspective (you list an example at http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/). If you don't have a deeper motivation or system underlying Bayescraft, you won't be able to regenerate the algorithms and work around the error. It's Feynman's cargo cult science, applied to Science itself.

Comment by Tom_McCabe2 on Do Scientists Already Know This Stuff? · 2008-05-18T04:59:48.000Z · LW · GW

"I figure that anyone who wants to paint me as a lunatic already has more than enough source material to misquote. Let them paint and be damned!"

The problem isn't individual nutcases wanting to paint you as a lunatic; their cause would be better served by SitS or other Singularity-related material. It's that people who haven't heard your ideas before- the largest audience numerically, if you publish this in book form- might classify you as a lunatic and then ignore the rest of your work. Einstein, when writing about SR, did not go on about how the classical physicists were making a bunch of stupid mistakes and how his methods were superior to anything Newton ever developed. You have, of course, made far more extreme statements elsewhere (by mainstream standards), but the overall proportion of such material should scale polynomially with the number of readers who reject you as a crackpot.

Comment by Tom_McCabe2 on Do Scientists Already Know This Stuff? · 2008-05-17T22:39:01.000Z · LW · GW

"This is insanity. Does no one know what they're teaching?"

I doubt any systematic study has been done on the difference in curricula between MIT and Generic State U., even though it would be much easier, and MIT has 78 affiliated Nobel laureates while State U. probably has zero. You can argue from first principles (http://www.paulgraham.com/colleges.html) or experimental data (http://www.csis.gvsu.edu/~mcguire/worth_college_leagues.html) that elite colleges are selecting Nobel Prize winners rather than creating them, but I don't know how accurate this is. If we could make MIT and Caltech replicas pop up all over the country, it would be well worth the time and effort.

Comment by Tom_McCabe2 on When Science Can't Help · 2008-05-15T20:13:53.000Z · LW · GW

"If scientific reasoning is merely Bayesian,"

Scientific reasoning is an imperfect approximation of Bayesian reasoning. Using your geometric analogy, science is the process of sketching a circle, while Bayesian reasoning is a compass.
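
For concreteness, the rule the "compass" traces out is just Bayes' theorem:

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},
\qquad
P(E) = \sum_i P(E \mid H_i)\,P(H_i)
```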

"It seems to me that it is easy to represent strict standards of evidence within looser ones, but not vice versa."

If you already understand the strict standard, it's usually easy to understand the looser standard, but not vice-versa. Physicists would have a much easier time writing literature papers than literary theorists would writing physics papers.

"The frequency of 'Bayesian reasoners' mistaking data for evidence on this site should serve as example enough."

Data, assuming it's not totally random, is always evidence for some theory. Of course, not all data is evidence for every theory.

Comment by Tom_McCabe2 on The Dilemma: Science or Bayes? · 2008-05-13T22:28:54.000Z · LW · GW

"Of course I had more than just one reason for spending all that time posting about quantum physics. I like having lots of hidden motives, it's the closest I can ethically get to being a supervillain."

Your work on FAI is still pretty supervillain-esque to most SL0 and SL1 people. You are, essentially, talking about a human-engineered end to all of civilization.

"I wanted to present you with a nice, sharp dilemma between rejecting the scientific method, or embracing insanity. Why? I'll give you a hint: It's not just because I'm evil. If you would guess my motives here, think beyond the first obvious answer."

The obvious answer is that the scientific method is simply an imperfect approximation of ideal rationality, and it was developed before Bayes' Theorem was even proposed, so we should expect it to have some errors. So far as I know, it was never even defined mathematically. I haven't thought of any non-obvious answers yet.

"I don't believe you. I don't believe most scientists would make such huge mistakes."

It took thirty years between the original publication of Maxwell's laws (1865) and Einstein's discovery of their inconsistency with classical mechanics (~1895). It took another ten years before he published (1905). In the meantime, so far as I know, nobody else realized the fundamental incompatibility of the two main theories of classical physics.

Comment by Tom_McCabe2 on The Failures of Eld Science · 2008-05-12T21:05:05.000Z · LW · GW

This. Is. Awesome. If you weren't busy with FAI, you could make a fortune selling this stuff to universities.

Comment by Tom_McCabe2 on Spooky Action at a Distance: The No-Communication Theorem · 2008-05-05T20:47:31.000Z · LW · GW

"The idea that density matrices summarize locally invariant entanglement information is certainly helpful, but I still don't know how to start with a density matrix and visualize some physical situation, nor can I take your proof and extract back out an argument that would complete the demonstration in this blog post. I confess this is strictly a defect of my own education, but..."

From what I understand (which is admittedly not much; I could well be wrong), a density matrix is the thingy that describes the probability distribution of the quantum system over all possible states. Suppose that you have a set of quantum states A1...An. The density matrix is a way of describing a system that, say, has a 75% chance of being in A1 and a 25% chance of being in A2, or a 33% chance of being in A1 or A2 or A4, or whatever. You can then plug the density matrix into the standard quantum equations, but everything you get back will have one extra dimension, to account for the fact that the system you are discussing is described by a distribution rather than a pure quantum state.

The gist of Scott Aaronson's proof is (again, if I understand correctly): Suppose that you have two quantum systems, A and B. List the Cartesian product over all possible states of A and B (A1B1, A2B1, A3B1, etc., etc.). Use a density matrix to describe a probability distribution over these states (10% chance of A1B1, 5% chance of A1B2, whatever). Suppose that you are physically located at system A, and you fiddle with the density matrix using some operator Q. Using some mathematical property of Q which I don't really understand, you can show that, after Q has been applied, another person's observations at B will be the same as their earlier observations at B (ie, the density matrix after Q acts the same as it did before Q, so long as you only consider B).
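
A small numerical sketch of the claim (my own illustration, not Aaronson's actual proof): build an entangled two-qubit state, apply a unitary to A alone, and check that B's reduced density matrix, which determines all of B's local measurement statistics, is unchanged:

```python
import numpy as np

def reduced_state_of_B(rho):
    # Partial trace over A for a two-qubit density matrix (4x4 -> 2x2).
    return np.einsum('abad->bd', rho.reshape(2, 2, 2, 2))

# Entangled (but not maximally entangled) pure state 0.8|00> + 0.6|11>.
psi = np.array([0.8, 0, 0, 0.6], dtype=complex)
rho = np.outer(psi, psi.conj())

# An arbitrary unitary acting only on A, extended by the identity on B.
theta = 0.7
U_A = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]], dtype=complex)
U = np.kron(U_A, np.eye(2))

rho_after = U @ rho @ U.conj().T
print(reduced_state_of_B(rho).round(3))
print(np.allclose(reduced_state_of_B(rho), reduced_state_of_B(rho_after)))  # True
```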

Comment by Tom_McCabe2 on Identity Isn't In Specific Atoms · 2008-04-19T14:51:35.000Z · LW · GW

"If you furthermore had any thoughts about a particular "helium atom" being a factor in a subspace of an amplitude distribution that happens to factorize that way,"

If a helium atom is just an accidental, temporary factorization of an amplitude distribution, then why does it keep appearing over and over again when we look at the universe? If you throw a thousand electrons together, let them interact, zap them with laser radiation, etc., etc., at the end of the day you will still see a bunch of electrons with 511 keV rest mass and -1 charge. Why does the universe so carefully conserve these particular bundles of amplitude, with only one exception that I am aware of (annihilation by positrons), while other bundles of amplitude never exist at all (eg., a particle with 360 keV rest mass, or a particle with 7/2 charge)?

Comment by Tom_McCabe2 on The Quantum Arena · 2008-04-17T03:58:08.000Z · LW · GW

"I was simply trying to figure out that if so, what's the "actual reality"?"

There is none, at least not in those terms. There is no "actual positional configuration space", any more than there's an "actual inertial reference frame" or "actual coordinate system"; they are all equivalent in the experimental world. Feel free to use whichever one you like.

"I'd thought the Hilbert space was uncountably dimensional because the number of functions of a real line is uncountable."

The number of functions on the real line is actually of a cardinality strictly greater than beth-one (by Cantor's theorem).
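
Spelling out the cardinal arithmetic behind that claim:

```latex
\left|\mathbb{R}^{\mathbb{R}}\right|
  = \left(2^{\aleph_0}\right)^{2^{\aleph_0}}
  = 2^{\aleph_0 \cdot 2^{\aleph_0}}
  = 2^{2^{\aleph_0}}
  = \beth_2
  > \beth_1 = |\mathbb{R}|
```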

"I'd thought the Hilbert space was uncountably dimensional because the number of functions of a real line is uncountable."

The category of Hilbert spaces includes spaces of both finite and infinite dimension, so it presumably includes both countable and uncountable infinities.