Comment by Ian_C. on Another Call to End Aid to Africa · 2009-04-04T03:08:58.000Z · LW · GW

Stopping aid to Africa? It won't happen. Even people who fancy themselves rationalists still follow the Christian ethic that it's better to give something you earn to someone else than to keep it for yourself.

This ethic is irrational because to follow reason is to follow cause and effect; therefore the person who caused the thing to be (who earned it) should suffer the effect (receive the thing).

Comment by Ian_C. on Pretending to be Wise · 2009-02-20T02:58:32.000Z · LW · GW

Religion is possibly to blame for the idea that suspended judgment = superiority. Only God is omniscient, so only He knows things for sure, everyone else must act unsure and tentative.

Priests are allowed to pass judgment and still retain their authority, because they are the voice of God on earth. Maybe the idea of judges evolved from priests and retained that immunity.

Comment by Ian_C. on The Thing That I Protect · 2009-02-08T02:27:24.000Z · LW · GW

Value is fragile - isn't that what conservatives/Republicans believe? And the liberal/Democrat side believes it can undermine little bits here and there of its society's value system and not have the whole thing collapse. Who is right?

Comment by Ian_C. on Value is Fragile · 2009-01-29T15:08:06.000Z · LW · GW

Evolution (as an algorithm) doesn't work on the indestructible. Therefore all naturally-evolved beings must be fragile to some extent, and must have evolved to value protecting their fragility.

Yes, a designed life form can have paper clip values, but I don't think we'll encounter any naturally occurring beings like this. So our provincial little values may not be so provincial after all, but common on many planets.

Comment by Ian_C. on OB Status Update · 2009-01-27T15:35:30.000Z · LW · GW

How are we meant to interpret the name? At first blush, I would take it to mean "Posts here are less wrong than average, but still wrong," which is not really encouraging for potential posters...

Also a workaround for anonymous posting might be to make an actual account called "anonymous" and publicize the password.

Comment by Ian_C. on BHTV: Yudkowsky / Wilkinson · 2009-01-27T13:52:53.000Z · LW · GW

There were a number of anti-Bush comments in that video. Whatever you thought of him, there were no terrorist attacks for 7 years. Let's hope Obama can beat that record.

Comment by Ian_C. on Visualizing Eutopia · 2008-12-17T00:20:58.000Z · LW · GW

"Why does anything exist in the first place?" or "Why do I find myself in a universe giving rise to experiences that are ordered rather than chaotic?"

So... is cryonics about wanting to see the future, or is it about going to the future to learn the answers to all the "big questions?"

To those who advocate cryonics, if you had all the answers to all the big questions today, would you still use it or would you feel your life "complete" in some way?

I personally will not be using this technique. I will study philosophy and mathematics, and whatever I can find out before I die - that's it - I just don't get to know the rest.

Comment by Ian_C. on Disjunctions, Antipredictions, Etc. · 2008-12-10T03:40:17.000Z · LW · GW

The idea of making a mind-design n-space by putting various attributes on the axes, such as humorous/non-humorous, conceptual/perceptual/sensual, etc. -- how much does this tell us about the real possibilities?

What I mean is, for a thing to be possible, there must be some combination of atoms that can fit together to make it work. But merely making an N-space does not tell us about what atoms there are and what they can do.

Come to think of it, how can we assert anything is possible without having already designed it?

Comment by Ian_C. on Artificial Mysterious Intelligence · 2008-12-08T00:59:36.000Z · LW · GW

But if the brain does not work by magic (of course), then insight does not either. Genius is 99% perspiration, 10,000 failed lightbulbs and all that...

I think the kind of experimental approach Jed Harris was talking about yesterday is where AI will eventually come from. Some researcher who has 10,000 failed AI programs on his hard drive will then have the insight, but not before. The trick is, once he has the insight, to not implement it right away but stop and think about friendliness! But after so many failures how could you resist...

Comment by Ian_C. on Is That Your True Rejection? · 2008-12-06T18:39:59.000Z · LW · GW

Eliezer, I'm sure if you complete your friendly AI design, there will be multiple honorary PhDs to follow.

Comment by Ian_C. on Underconstrained Abstractions · 2008-12-05T00:37:23.000Z · LW · GW

"as long as the differences in the new situation are things that were originally allowed to vary"

And all the things that were fixed are still present of course! (since these are what we are presuming are the causal factors)

Comment by Ian_C. on Underconstrained Abstractions · 2008-12-05T00:31:35.000Z · LW · GW

'How many new assumptions, exactly, are fatal? How many new terms are you allowed to introduce into an old equation before it becomes "unvetted", a "new abstraction"?'

Every abstraction is made by holding some things the same and allowing other things to vary. If it allowed nothing to vary it would be a concrete not an abstraction. If it allowed everything to vary it would be the highest possible abstraction - simply "existence." An abstraction can be reapplied elsewhere as long as the differences in the new situation are things that were originally allowed to vary.

That's not to say this couldn't be a black swan - there are no guarantees - but going purely on evidence, what other choice do you have except to do it this way?
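The rule in the comment above - an abstraction fixes some attributes and lets the rest vary, and reapplies wherever only the varying ones differ - can be sketched in a few lines (Python here; the attribute names are invented purely for illustration):

```python
# Sketch: an abstraction is a dict of the attributes it holds fixed.
# A concrete situation "matches" the abstraction as long as it agrees
# on every fixed attribute; everything else is free to vary.
def applies(abstraction, situation):
    return all(situation.get(k) == v for k, v in abstraction.items())

# "chair" fixes only one attribute; colour and material may vary.
chair = {"supports_sitting": True}
red_stool = {"supports_sitting": True, "colour": "red"}
table = {"supports_sitting": False, "colour": "red"}

print(applies(chair, red_stool))  # True - differs only in what may vary
print(applies(chair, table))      # False - differs in a fixed attribute
```

The highest possible abstraction in this sketch is the empty dict, which fixes nothing and so matches everything - mirroring the comment's point that it would be simply "existence."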

Comment by Ian_C. on Hard Takeoff · 2008-12-03T04:51:24.000Z · LW · GW

"So if you suppose that a final set of changes was enough to produce a sudden huge leap in effective intelligence, it does beg the question of what those changes were."

Perhaps the final cog was language. The original innovation is concepts: the ability to process thousands of entities at once by forming a class. Major efficiency boost. But chimps can form basic concepts and they didn't go foom.

Because forming a concept is not good enough - you have to be able to do something useful with it, to process it. Chimps got stuck there, but we passed abstractions through our existing concrete-only processing circuits by using a concrete proxy (a word).

Comment by Ian_C. on Recursive Self-Improvement · 2008-12-02T01:48:13.000Z · LW · GW

How clear is the distinction between knowledge and intelligence really? The whole genius of the digital computer is that programs are data. When a human observes someone else doing something, they can copy that action: data seems like programs there too.

And yet "cognitive" is listed as several levels above "knowledge" in the above post, and yesterday CYC was mocked as being not much better than a dictionary. Maybe cognition and knowledge are not so separate, but two perspectives on the same thing.
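The "programs are data" point can be shown with a minimal sketch (Python; the rule table and names are invented for illustration): the "program" is an ordinary data structure that can be inspected and extended at runtime, blurring the line the comment is questioning.

```python
# A tiny calculator whose "program" (its rule table) is ordinary data:
# it can be read, copied, or extended at runtime like any other value.
import operator

rules = {"+": operator.add, "*": operator.mul}

def evaluate(op, a, b):
    # Look the operation up in data, then execute it as a program.
    return rules[op](a, b)

# Extending the program is just modifying data:
rules["-"] = operator.sub

print(evaluate("+", 2, 3))  # 5
print(evaluate("-", 2, 3))  # -1
```

Whether the rule table counts as "knowledge" or as part of the "cognitive level" is exactly the two-perspectives question raised above.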

Comment by Ian_C. on Recursive Self-Improvement · 2008-12-01T23:18:08.000Z · LW · GW

"Recursion that can rewrite the cognitive level is worth distinguishing."

Eliezer, would a human that modifies the genes that control how his brain is built qualify as the same class of recursion (but with a longer cycle-time), or is it not quite the same?

Comment by Ian_C. on Chaotic Inversion · 2008-11-30T01:25:17.000Z · LW · GW

'I found my most productive fifteen minutes were when a friend said, out of nowhere, "want to see who can do the most work in 15 minutes?"'

That's interesting, because historically great works have been accomplished when a group of really talented people get together in the same place (e.g. Florence, Silicon Valley, Manhattan Project).

The Internet is great in that it enables you to find like-minded people and bounce ideas off them. But that's only half the achievement puzzle. The other half is pestering each other to work, which the Internet is not so good for.

Comment by Ian_C. on Chaotic Inversion · 2008-11-29T12:05:36.000Z · LW · GW

Many rationalists (not saying Eli is one) are of the opinion that introspection is worthless (or at least suspect), so not surprising that trying to predict certain things doesn't occur to us.

Comment by Ian_C. on Thanksgiving Prayer · 2008-11-28T13:31:32.000Z · LW · GW

While I totally agree with the sentiment of Eliezer's prayer, I don't think saying a prayer on Thanksgiving makes you religious or even implies a belief in God - it's just tradition. It's harmless to follow traditions as long as you are epistemologically strong enough not to be in any danger of confusing reality and myth. Just like it's safe for a person with very strong reason to read a lot of fiction.

Comment by Ian_C. on Whence Your Abstractions? · 2008-11-20T06:38:45.000Z · LW · GW

Robin's concept of "Singularity" may be very broad, but your concept of "Optimization Process" is too.

Comment by Ian_C. on Lawful Creativity · 2008-11-08T22:12:54.000Z · LW · GW

I agree. Creativity is not just being random. The old masters used measurement and perspective when painting their masterpieces; they didn't just sit there and hum at the sky and wait for inspiration to strike them.

I think the idea that creativity is somehow mystical comes from a religious model of the human body. If you think your body has causal flesh and a supernatural/acausal soul, and that creativity comes from your soul (the part that is "you") then it follows that creativity comes from the acausal.

"So do we reason that the most unexpected events, convey the most information, and hence the most surprising acts are those that give us a pleasant shock of creativity - the feeling of suddenly absorbing new information?"

This is very cool.

Comment by Ian_C. on Recognizing Intelligence · 2008-11-08T06:31:35.000Z · LW · GW

Earlier I said we are seeing things that are like what we make. But that's not a very useful definition implementation-wise.

My own approach to implementation is to define intelligence as the results of a particular act - "thinking" - and then introspect to see what the individual elements of that act are, and implement them individually.

Yes, I went to Uni and was told intelligence was search, and all my little Prolog programs worked, but I think they were oversimplifying. They were unacknowledged Platonists, trying to find the hidden essence, trying to read God's mind, instead of simply looking at what is (albeit through introspection) and attempting to implement it.

All very naive and 1800s of me, I know. Imagine using introspection! What an unthinkable cad. Well pardon me for actually looking at the thing I'm trying to program.

Comment by Ian_C. on Recognizing Intelligence · 2008-11-08T00:48:38.000Z · LW · GW

"For there to be a concept, there has to be a boundary. So what am I recognizing?"

I think you're just recognizing that the alien artifact looks like something that wouldn't occur naturally on Earth, rather than seeing any kind of essence. Because Earth is where we originally made the concept, and we didn't need an essence there, we just divided the things we know we made from the things we know we didn't.

Comment by Ian_C. on BHTV: Jaron Lanier and Yudkowsky · 2008-11-02T10:17:37.000Z · LW · GW

I think I see where the disconnect was in this conversation. Lanier was accusing general AI people of being religious. Yudkowsky took that as a claim that something he believed was false, and wanted Lanier to say what.

But Lanier wasn't saying anything in particular was false. He was saying that when you tackle these Big Problems, there are necessarily a lot of unknowns, and when you have too many unknowns reason and science are inapplicable. Science and reason work best when you have one unknown and lots of knowns. If you try to bite off too big a chunk at once you end up reasoning in a domain that is now only e.g. 50% fact, and that reminds him of the "reasoning" of religious people.

Knowledge is a big interconnected web, with each fact reinforcing the others. You have to grow it from the edge. And our techniques are designed for edge space.

Comment by Ian_C. on Inner Goodness · 2008-10-24T08:50:44.000Z · LW · GW

"Ayn Rand? Aleister Crowley? How exactly do you get there? What Rubicons do you cross? It's not the justifications I'm interested in, but the critical moments of thought."

Do you want to know the critical moments of thought so you can shape an AI that will not have them? I think Ayn Rand's key realization was that concepts can have interdependencies.

For example, if you don't already have the concept "property" then no matter how many concrete examples of stealing you see, you won't be able to form the concept "stealing." You won't be able to "see" what they're doing without that earlier knowledge, you will just see them as people moving stuff around.

Then at some point she realized the same kind of dependency relationship exists between "value" and "life," and that it implies something about what the standard of value must be.

But then, worst case, if an AI did discover this, and follow the same path, it would not become a selfish psychopath. It would conclude that since it is not alive, the concept "value" does not apply to it. Therefore the only reason to perform any act at all is if it helps us, the living.

Comment by Ian_C. on Which Parts Are "Me"? · 2008-10-23T07:42:00.000Z · LW · GW

The ability to become emotionally detached is a useful skill (e.g. if you are being tortured) but when it becomes an automatic reflex to any emotion, it can take all the colour out of life.

Sometimes highly intelligent people are also overwhelmingly sensitive/empathetic so detaching is very tempting. The first few minutes of this video with the genius girl walking around the spaceship shows what it's like to be highly empathetic (Firefly).

But also: emotions come from the subconscious, and the subconscious contains that which is done repeatedly on the conscious level. So if you are habitually rational, does that affect your subconscious and therefore your emotions?

I think what happens is, you are so consistent on the conscious level (e.g. the way our host cross-links all his posts) that the subconscious is also highly consistent. So when it produces an emotion it produces it with the whole of itself, instead of just one part contradicted by another (mixed emotions). Therefore the genius has very strong emotions, which interestingly is the stereotype: the overwrought genius who flies off the handle.

The sheer strength of having a conscious and subconscious in total agreement, and both in turn in agreement with reality, can be overwhelming and, like the girl above ("It's getting very crowded in here!") you just want to shut it off.

Comment by Ian_C. on Ethical Injunctions · 2008-10-21T13:28:46.000Z · LW · GW

I agree that there are certain moral rules we should never break. Human beings are not omniscient, so all of our principles have to be principles-in-a-context. In that sense every principle is vulnerable to a black swan, but there are levels of vulnerability. The levels correspond to how wide-ranging the abstraction is: the more abstract, the less vulnerable.

Injunctions about truth are based on the metaphysical fact of identity, which is implied in every single object we encounter in our entire lives. So epistemological injunctions are the most invulnerable. The one about not helping the ferry boat captain - well helping him would be an absolute in normal life, but war is not normal life. It's a big, ugly, black swan. They should not feel guilty over that poor fellow, because "it's just war." (and I mean that in a deep epistemological sense, not a redneck sense)

Comment by Ian_C. on Traditional Capitalist Values · 2008-10-17T13:26:10.000Z · LW · GW

I could not live under a non-capitalist system. I believe people should be able to keep what they earn and charity should be a private affair.

Comment by Ian_C. on Ends Don't Justify Means (Among Humans) · 2008-10-15T18:01:59.000Z · LW · GW

I don't think it's possible that our hardware could trick us in this way (making us do self-interested things by making them appear moral).

To express the idea "this would be good for the tribe" would require the use of abstract concepts (tribe, good) but abstract concepts/sentences are precisely the things that are observably under our conscious control. What can pop up without our willing it are feelings or image associations so the best trickery our hardware could hope for is to make something feel good.

Comment by Ian_C. on AIs and Gatekeepers Unite! · 2008-10-09T23:15:46.000Z · LW · GW

The meta argument others have mentioned - "Telling the world you let me out is the responsible thing to do," would work on me.

Comment by Ian_C. on Make an Extraordinary Effort · 2008-10-08T13:44:46.000Z · LW · GW

Re: why rationality can't be learned by rote -

If you introspect on a process of reason, you see that you actually choose at each step which path of inquiry to follow next and which to ignore. Each choice takes the argument to the next step, ultimately driving it to completion. Reason is "powered by choice(TM)" which is why it is incoherent to argue rationally for determinism and also why it can't be learned by rote.

Software developers (such as myself) in our more abstract moments can think of reason as simply encoding one's premises as a string of symbols standing for definitions and mechanically applying the rules of deduction (Prolog style). But introspection belies this - it's actually highly creative and messy. Reason is an art, not a science.
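The Prolog-style picture described above - premises as symbols, deduction as mechanical rule application - can be sketched as a tiny forward-chainer (Python here; the facts and rules are invented for illustration). This is exactly the part the comment argues introspection belies:

```python
# Minimal sketch of mechanical deduction: facts and rules are plain data,
# and conclusions follow by repeatedly applying modus ponens until
# nothing new can be derived. No choice or creativity anywhere.
facts = {"socrates_is_a_man"}
rules = [("socrates_is_a_man", "socrates_is_mortal")]  # (premise, conclusion)

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("socrates_is_mortal" in facts)  # True
```

The contrast with actual reasoning is the point: here every step is forced, whereas the comment observes that a real thinker chooses at each step which path of inquiry to pursue.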

Comment by Ian_C. on Make an Extraordinary Effort · 2008-10-07T16:34:24.000Z · LW · GW

Except the universe doesn't care how much backbreaking effort you make, only if you get the cause and effect right. Which is why cultures that emphasize hard work are not overtaking cultures that emphasize reason (Enlightenment cultures). Of course even these cultures must still do some work, that of enacting their cleverly thought out causes.

Comment by Ian_C. on On Doing the Impossible · 2008-10-07T06:48:11.000Z · LW · GW

"If you are talking about a timescale of decades, than intelligence augmentation does seems like a worthy avenue of investment"

This seems like the way to go to me. It's like "generation ships" in sci-fi. Should we launch ships to distant star systems today, knowing that ships launched 10 years from now will overtake them on the way?

Of course in the case of AI, we don't know what the rate of human enhancement will be, and maybe the star isn't so distant after all.

Comment by Ian_C. on Beyond the Reach of God · 2008-10-05T03:03:09.000Z · LW · GW

I don't want to sign up for cryonics because I'm afraid I will be revived brain-damaged. But maybe others are worried they will have the social status of a freak in that future society.

Comment by Ian_C. on The Level Above Mine · 2008-10-04T17:39:00.000Z · LW · GW

I don't believe IQ tests measure everything. There's a certain feeling when being creative, and when completing these tests I have not felt it, so I don't think it's measuring it.

Also, I am not sure intelligence is general. At the level of ordinary life it certainly is, but geniuses are always geniuses at something, e.g. maths, physics, composing. Why aren't they geniuses at everything?

Comment by Ian_C. on Beyond the Reach of God · 2008-10-04T17:21:40.000Z · LW · GW

Reminds me of this: "Listen, and understand. That terminator is out there. It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead."

But my question would be: Is the universe of cause and effect really so much less safe than the universe of God? At least in this universe, someone who has an evil whim is limited by the laws of cause and effect, e.g. Hitler had to build tanks first, which gave the Allies time to prepare. In that other universe, the Supreme Being decides he's bored with us and zap, we're gone - no rules he has to follow to achieve that outcome.

So why is relying on the goodness of God safer than relying on the inexorability of cause and effect?

Comment by Ian_C. on Trying to Try · 2008-10-01T14:33:40.000Z · LW · GW

When Yoda said "there is no try," I took it more literally. In the absence of human concepts there is no "try"; there are only things that act or don't act. Let go of your mind and all that.

Comment by Ian_C. on The Magnitude of His Own Folly · 2008-09-30T13:15:11.000Z · LW · GW

"I understood that you could do everything that you were supposed to do, and Nature was still allowed to kill you."

You finally realized inanimate objects can't be negotiated with... and then continued with your attempt to rectify this obvious flaw in the universe :)

Comment by Ian_C. on Friedman's "Prediction vs. Explanation" · 2008-09-29T07:36:55.000Z · LW · GW

One theory has a track record of prediction, and what is being asked for is a prediction, so at first glance I would choose that one. But the explanation-based one is built on more data.

But it is neither prediction nor explanation that makes things happen in the real world, but causality. So I would look into the two theories and pick the one that looks to have identified a real cause instead of simply identifying a statistical pattern in the data.

Comment by Ian_C. on Competent Elites · 2008-09-28T07:09:34.000Z · LW · GW

The observations in this post gel with my experience also.

Middle managers can be the most short-sighted, penny-pinching, over-simplifying people in the world. But when you talk to CEOs they are often well-spoken, well-read, philosophical, long-term.

You ask them a business question and expect to get back balance sheets, dollars, etc. but instead you get something surprisingly wide-ranging/philosophical.

Comment by Ian_C. on A Prodigy of Refutation · 2008-09-18T07:39:58.000Z · LW · GW

How can you tell if someone is an idiot not worth refuting, or a genius so far ahead of you as to sound crazy? Could we think an AI had gone mad, and reboot it, when it is really a genius?

Comment by Ian_C. on Excluding the Supernatural · 2008-09-12T08:08:34.000Z · LW · GW

It's deeper than science being only applicable to natural things -- reason as such is only applicable to natural things. Once you are in the realm of the supernatural anything is possible and the laws of logic don't necessarily hold. You have to just close your mouth and turn off your mind and have faith. Which does not give a teacher a lot of material to work with...

Comment by Ian_C. on Rationality Quotes 15 · 2008-09-07T09:54:02.000Z · LW · GW

@denis bider: 'In the example you made, it appears as though you are using "superior" to mean "the one I like more" or "the one I think is worthy of praise" or "the one whose behavior should be encouraged".'

I was using it as in "an actual is better than a potential."

Comment by Ian_C. on Rationality Quotes 15 · 2008-09-07T03:15:59.000Z · LW · GW

Having a high IQ doesn't make someone a "superior human being" in my opinion; it's what you do that counts. A man of average intelligence who starts a small business and employs some people is superior to a lazy genius.

Comment by Ian_C. on Qualitative Strategies of Friendliness · 2008-08-30T09:40:44.000Z · LW · GW

Stephen: "the issue isn't whether it could determine what humans want, but whether it would care."

That is certainly an issue, but I think in this post and in Magical Categories, EY is leaving that aside for the moment, and simply focussing on whether we can hope to communicate what we want to the AI in the first place.

It seems to me that today's computers are 100% literal and naive, and EY imagines a superintelligent computer still retaining that property, but would it?

Comment by Ian_C. on Qualitative Strategies of Friendliness · 2008-08-30T06:10:20.000Z · LW · GW

Is intelligence general or not? If it is, then an entity that can do molecular engineering but be completely naive about what humans want is impossible.

Comment by Ian_C. on When Anthropomorphism Became Stupid · 2008-08-17T01:52:31.000Z · LW · GW

People don't believe that inanimate objects contain spirits any more, but they do still believe that God can control such items, which is almost the same thing. True rationality is realizing that they are not controlled by any mind - their own or God's - but do what they do because of what they are.

Comment by Ian_C. on Hot Air Doesn't Disagree · 2008-08-16T07:19:47.000Z · LW · GW

Caledonian - in matters of the heart perhaps people go with emotion, merely rationalizing after the fact, but in other areas - career, finances, etc, I think most people try to reason it out. You need to have more faith in your fellow man :)

Comment by Ian_C. on Hot Air Doesn't Disagree · 2008-08-16T03:31:17.000Z · LW · GW

The whole question of "should" only arises if you have a choice and a mind powerful enough to reason about it. If you watch dogs it does sometimes look like they are choosing. For example if two people call a dog simultaneously it will look here, look there, pause and think (it looks like) and then go for one of the people. But I doubt it has reasoned out the choice, it has probably just gone with whatever emotion strikes it at the key moment.

Comment by Ian_C. on The Bedrock of Morality: Arbitrary? · 2008-08-15T10:20:34.000Z · LW · GW

In the real world, everything worth having comes from someone's effort -- even wild fruit has to be picked, sorted, and cleaned, and fish need to be caught, gutted, etc. I think this universal fact of required effort is probably part of the data we get the concept of fairness from in the first place, so reasoning in a space where pies pop into existence from nothing means whatever you conclude might not be applicable to the real world anyway.

Comment by Ian_C. on Is Fairness Arbitrary? · 2008-08-14T11:46:54.000Z · LW · GW

In reality someone would have had to bake the pie, and it's fair that they get it since they put in the work. The problem is that the author, in creating the example, eliminated certain facts such as the baker in order to get to the essence of the problem. But the more facts you eliminate the more chance that something will appear arbitrary, due to fewer paths back to reality. It's the fallacy of the over-simplified model (no that's not a real fallacy :).