Posts

Comments

Comment by pyramid_head3 on Measuring Optimization Power · 2008-10-28T00:17:54.000Z · score: 1 (1 votes) · LW · GW

And there goes Caledonian making pointless arguments again... Couldn't you pick a more frivolous objection?

Comment by pyramid_head3 on Psychic Powers · 2008-09-12T23:22:17.000Z · score: 2 (2 votes) · LW · GW

Eliezer, what if psi phenomena are real, but they work through as-yet-unknown laws of physics? In this case reductionism could still be true (and probable), even if psi is real. I can't really see why psi phenomena rule out a reductionist universe (and I guess Damien Broderick agrees...).

By the way, I don't believe in psi, and think that all effects found thus far are based on the misapplication of statistics and related errors.

Comment by pyramid_head3 on Qualitative Strategies of Friendliness · 2008-08-31T01:19:41.000Z · score: 1 (1 votes) · LW · GW

Eliezer: ...I'm seriously starting to wonder if some people just lack the reflective gear required to abstract over their background frameworks

I'm pretty sure of it, since I've seen some otherwise smart people make this kind of mistake (and I'm even more perplexed since I outgrew it right after my teenage years...)

Comment by pyramid_head3 on Qualitative Strategies of Friendliness · 2008-08-30T12:09:29.000Z · score: 2 (2 votes) · LW · GW

@Caledonian: So if an AI wants to wipe out the human race, we should be happy about it? What if it wants to treat us as cattle? Which/whose preferences should it follow? (Notice the weasel words?)

When I was a teenager I used to think just like you. A superintelligence would have better goals than ordinary humans, because it is superintelligent. Then I grew up and realized that minds are not blank slates, and you can't just create a "value-free" AI and see what kinds of terminal values it chooses for itself.

Comment by pyramid_head3 on Inseparably Right; or, Joy in the Merely Good · 2008-08-09T12:05:01.000Z · score: 0 (0 votes) · LW · GW

Good post, Eliezer. Now that I've read it (and the previous one), I can clearly see (I think) why you think CEV is a good idea, and how you arrived at it. And now I'm not as skeptical about it as I was before.

Comment by pyramid_head3 on Contaminated by Optimism · 2008-08-07T18:11:00.000Z · score: 0 (0 votes) · LW · GW

Well, is there really no one else in the world right now who could work on this problem along with Eliezer (who, in my opinion, doesn't lack discipline)? I can't help but think that it's rather arrogant...

Well, that's one of the reasons I'm not a SIAI donor, though. Can't donate money to someone who writes blogs instead of researching Friendly AI theory. And I'm not nearly smart enough to make any progress on my own, or even help someone else. So I guess mankind is screwed :)

Comment by pyramid_head3 on Contaminated by Optimism · 2008-08-07T11:15:00.000Z · score: 0 (0 votes) · LW · GW

@Kaj Sotala: I can't - I'm not smart enough :)

But seriously, do you really think that we ought to wait a decade for a brilliant researcher to show up? And it seems all the more suspicious because this brilliant researcher has to read Eliezer's material at a tender age, or else he won't be good enough.

Now don't get me wrong, I love Eliezer's posts here, and I've learned A LOT of stuff. And I also happen to think that he's onto something when he talks about Friendly AI (and AI in general). But I don't see how he can hope to save the world by writing blog posts...

Comment by pyramid_head3 on Contaminated by Optimism · 2008-08-06T17:55:53.000Z · score: 0 (2 votes) · LW · GW

Eliezer,

You should either: a) ban Caledonian; b) let him write whatever he wants.

Censoring his posts is kind of nasty, because it looks like he can only express opinions you think worth posting. Personally, I think you should choose (a), because his comments are boring, disruptive and useless, but if you don't wanna do it, then go for (b).

And as for this: "Research help, in particular, seems to me to probably require someone to read all this stuff at the age of 15 and then study on their own for 7 years after that, so I figured I'd better get started on the writing now", I think it's kinda dumb, and it will never work out. If you keep pinning your hopes on something like that, somebody will get there first, and it probably won't be a Friendly outcome.

So you should plow ahead, and perhaps not be so arrogant as to think that no one else on the planet right now can help you with the research. There're plenty of smart guys out there, and if they have access to the proper literature, I'm sure you can find worthy contributors, instead of waiting 7 more years.

Comment by pyramid_head3 on Changing Your Metaethics · 2008-07-27T17:27:43.000Z · score: 2 (2 votes) · LW · GW

I said it somewhere else, but... it seems like Caledonian’s sole purpose in life is to disagree with Eliezer whenever possible. Reminds me of a quote from Stephen King:

"These days if Stu Redman said a firetruck was red, Harold Lauder would produce facts and figures proving that most of them these days were green."

Just exchange Stu Redman for Eliezer, and Harold for Caledonian…

Comment by pyramid_head3 on Math is Subjunctively Objective · 2008-07-25T11:24:41.000Z · score: 13 (14 votes) · LW · GW

Hmm, Eliezer likes Magic the Gathering (all five basic terrains?)...

Comment by pyramid_head3 on Could Anything Be Right? · 2008-07-18T14:19:57.000Z · score: 1 (1 votes) · LW · GW

Ban Caledonian already... His comments are cluttered, boring to read, and confrontational for confrontation's sake...

Comment by pyramid_head3 on The Gift We Give To Tomorrow · 2008-07-17T23:14:58.000Z · score: 1 (1 votes) · LW · GW

@Caledonian: A properly designed AI should care about the very things that we care about. Otherwise, what would be the purpose of building it? Wiping out the human race while trying to prove the Riemann hypothesis? I'm well aware of the huge mind-design space out there - but I think that we should aim for the right spot when designing our first AI (now, where that spot is, and how to aim for it, is still an open question).

I would be interested in seeing what you think about how we should build an AI (and what its goals should be as well...)

Comment by pyramid_head3 on The Gift We Give To Tomorrow · 2008-07-17T12:06:05.000Z · score: 3 (3 votes) · LW · GW

And there goes Caledonian again, with his vacuous comments intended only to spite. Eliezer already said everything you're saying - too bad you couldn't see it :(

Comment by pyramid_head3 on The Second Law of Thermodynamics, and Engines of Cognition · 2008-02-27T02:51:13.000Z · score: 11 (11 votes) · LW · GW

"Isn't speed the same as velocity?"

Nope, speed is a scalar, while velocity is a vector.
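The distinction can be sketched in a few lines of Python (the velocity components below are arbitrary illustrative values):

```python
import math

# Velocity is a vector: it has both magnitude and direction.
velocity = (3.0, 4.0)  # components in m/s along x and y (hypothetical values)

# Speed is just the scalar magnitude of the velocity vector,
# so it discards the direction information.
speed = math.hypot(*velocity)

print(speed)  # 5.0
```

Two different velocities, such as (3, 4) and (-3, -4), can therefore have the same speed while pointing in opposite directions.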