Comments

Comment by Vladimir_Slepnev on Continuous Improvement · 2009-01-11T16:33:20.000Z · LW · GW

It's premature optimization: we won't reach heaven. Anyway, do you test those ideas in practice? Theoretical falsifiability isn't enough.

Comment by Vladimir_Slepnev on Changing Emotions · 2009-01-05T13:07:46.000Z · LW · GW

Eliezer is attacking human augmentation for the same reason he attacked the subsumption architecture: to rationalize his work on from-scratch AI. I don't yet see any quantifiable argument for why from-scratch AI is easier.

Comment by Vladimir_Slepnev on Complex Novelty · 2008-12-22T21:54:11.000Z · LW · GW

Richard Hollerith, thanks for your interest, but you'll be disappointed: I have no religion to offer. The highlights of every person's ethical system depend on the specific wrongs they have perceived in life. My own life has taught me to bear fruit into tomorrow, but also to never manipulate others with normative/religious cheap talk.

Also, Occam's Razor can only apply to those terminal beliefs that are more weakly held than the razor itself. Fortunately, most people's values aren't so weak, even if yours are. :-)

Comment by Vladimir_Slepnev on Complex Novelty · 2008-12-21T17:35:04.000Z · LW · GW

Anytime! If you want exploration, you'll see the next frontier of escape after the Singularity. If you want family life, artistic achievement or wireheading, you can have it now.

Comment by Vladimir_Slepnev on Complex Novelty · 2008-12-21T16:07:26.000Z · LW · GW

You're all wrong. We can't run out of real-world goals. When we find ourselves boxed in, the next frontier will be to get out, ad infinitum. Is there a logical mistake in my reasoning?

Comment by Vladimir_Slepnev on Prolegomena to a Theory of Fun · 2008-12-18T16:17:14.000Z · LW · GW

V.G., see my exchange with Eliezer about this in November: http://lesswrong.com/lw/vg/building_something_smarter/ , search for "religion". I believe he has registered our opinion. Maybe it will prompt an overflow at some point, maybe not.

The discussion reminds me of Master of Orion. Anyone remember that game? I usually played the Psilons, a research-focused race, and by the endgame my research tree would be maxed out. Nothing more to do with all those ultra-terraformed planets allocated to 100% research. The opponents still sit around, but I could wipe out the whole galaxy with a single ship at any moment. Wait for the opponents to catch up a little, stage some nice space battles... close the game window at some point. What if our universe is like that?

Comment by Vladimir_Slepnev on Prolegomena to a Theory of Fun · 2008-12-18T15:17:47.000Z · LW · GW

V.G., good theory, but I think it's ethnic rather than religious: Ayn Rand fell prey to the same failure mode with an agnostic upbringing. Anyway, this is a kind of ad hominem called the Bulverism fallacy ("ah, I know why you'd say that"), not a substantive critique of Eliezer's views.

Substantively: Eliezer, I've seen indications that you want to change the utility function that guides your everyday actions (the "self-help" post). If you had the power to instantly and effortlessly modify your utility function, what kind of Eliezer would you converge to? (Remember that each change is chosen under the utility function left by the previous change.) I believe (but can't prove) you would either self-destruct, or evolve into a creature the current you would hate. This is a condensed version of the FAI problem, without the AI part :-)
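To make "converge to" concrete, here is a toy sketch of what I mean (entirely my own illustration, not anything Eliezer has proposed; the function name, the step size and the little vector of "values" are invented for the example): if each modification only has to look acceptable to the agent it produces, nothing anchors the values to where they started, and they drift without bound.

```python
import random

def drift_demo(steps=1000, step_size=0.05, seed=0):
    """Toy model of unanchored self-modification: 'values' are a plain
    vector of numbers, and each proposed tweak is judged only by the
    agent that results from the tweak, so every tweak looks acceptable
    and nothing pulls the values back toward where they started."""
    rng = random.Random(seed)
    values = [1.0, 0.0, 0.0]       # the starting "utility function"
    original = list(values)
    for _ in range(steps):
        # propose a small random modification...
        proposal = [v + rng.gauss(0, step_size) for v in values]
        # ...and accept it: the post-modification agent approves of
        # whatever values it already has, so there is no veto
        values = proposal
    # Euclidean distance from the original values after all the steps
    return sum((a - b) ** 2 for a, b in zip(values, original)) ** 0.5

print(drift_demo())   # typically far from zero: the values have drifted
```

The toy obviously proves nothing about actual minds; it just shows why I expect drift rather than convergence when the evaluator changes along with the thing being evaluated.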

Comment by Vladimir_Slepnev on Is That Your True Rejection? · 2008-12-07T09:24:31.000Z · LW · GW

Daniel, I knew it :-)

Phil, you can look at it another way: the commonality is that to win you have to make yourself believe a demonstrably false statement.

Comment by Vladimir_Slepnev on Is That Your True Rejection? · 2008-12-06T15:18:57.000Z · LW · GW

Immediate association: pick-up artists know well that when a girl rejects you, she often doesn't know the true reason and has to deceive herself. You could recruit some rationalists among PUAs. They wholeheartedly share your sentiment that "rational agents must WIN", and have accumulated many cynical but useful insights about human mating behaviour.

Comment by Vladimir_Slepnev on Hard Takeoff · 2008-12-03T14:22:50.000Z · LW · GW

I have a saying/hypothesis that a human trying to write code is like someone without a visual cortex trying to paint a picture - we can do it eventually, but we have to go pixel by pixel because we lack a sensory modality for that medium; it's not our native environment.

Eliezer, this sounds wrong to me. Acquired skill matters more than having a sensory modality. Computers are quite good at painting; see, e.g., the game Crysis. Painting with a brush isn't much easier than painting pixel by pixel, and it isn't a natural skill either. Neither is the artist's eye for colour and shape, or the analytical ear for music (do you know the harmonies of your favourite tunes?). You can instantly like or dislike a computer program, the same as a painting or a piece of music: the inscrutable inner workings get revealed in the interface.

Comment by Vladimir_Slepnev on Chaotic Inversion · 2008-11-29T13:26:24.000Z · LW · GW

The self-help route. I've seen good bloggers succumb to it. Please don't go there.

Comment by Vladimir_Slepnev on Observing Optimization · 2008-11-21T15:08:11.000Z · LW · GW

Eric, it's more amusing that both of them often cite a theorem that agreeing to disagree is impossible. And even more amusing that in "Nature of Logic" Eliezer practically explained agreeing to disagree: our minds are more cognition than logic. Eliezer and Robin generalize from facts to concepts differently, which leads them to different predictions. When they try to reconcile using logic, the logic bottoms out at the concepts, and there doesn't seem to be any way out except to test both theories. The argument goes on because both are polite and respectful, but it doesn't seem to shed any light.

(I apologize to the hosts for harping on the same topic repeatedly.)

Comment by Vladimir_Slepnev on Complexity and Intelligence · 2008-11-04T08:10:08.000Z · LW · GW

+1 to Anatoly Vorobey. Using K-complexity to capture the human notion of complexity seems to be even worse than using game-theoretic rationality to capture human rationality - something that's been attacked to death already.

Comment by Vladimir_Slepnev on Building Something Smarter · 2008-11-03T19:52:10.000Z · LW · GW

So you're telling me I ought to stop doing that?

Cute counter, but fallacious IMO. There are systems of oughts that don't look and sound like religions; for example, I don't write sermons for mine. Anyway, you're not engaging my central point, just nitpicking an illustrative phrase.

Comment by Vladimir_Slepnev on Building Something Smarter · 2008-11-03T07:28:04.000Z · LW · GW

Vladimir, you haven't been reading this blog for long, have you?

Eliezer, I've lurked here for about a year. The quantum sequence was great (it turned me on to many-worlds), but already pretty religious, e.g. the rationale of "it came time to break your allegiance to Science". I ate the tasty intellectual parts and mentally discarded the nasty religious parts. (For example, attacking science by attacking the Copenhagen interpretation was pretty low - most physicists don't even consider interpretations science.) Your recent posts, however, are all nasty, no tasty. Talmudic.

Thanks for reminding me about "Is Humanism A Religion-Substitute?"; it's a perfect example of what I'm talking about. You seem to be instinctively religious - you want to worship something - whereas for me it's just distasteful.

Religions don't go bad because they are false and stupid. Religions go bad because they live on the "ought" side of is/ought, where there is no true and false. (Cue your morality sequence.)

Comment by Vladimir_Slepnev on Building Something Smarter · 2008-11-02T21:28:59.000Z · LW · GW

Eliezer, I wanna tell you something that will sound offensive; please keep in mind I'm not trying to offend you...

You're making a religion out of your stuff.

Your posts are very different from Robin's - he shows specific applications of rationality, while you preach rationality as a Way. Maybe it has to do with your ethnicity: inventing religions is the #1 unique, stellar specialty of Jews. (Quick examples: Abrahamic religions, socialism, Ayn Rand's view of capitalism. Don't get offended, don't.)

Not saying your personal brand of rationality is wrong - far from it! It's very interesting, and you have taught me much. But as the blog title says, notice and overcome the bias.

Because religions have a way of becoming ugly in the long run.

Comment by Vladimir_Slepnev on Aiming at the Target · 2008-10-27T07:39:51.000Z · LW · GW

+1 to Will Pearson and Richard Kennaway. Humans mostly follow habit instead of optimizing.

Eliezer, this is interesting:

my general theory of Newcomblike problems

Some kind of bounded rationality? Could you give us a taste?

Comment by Vladimir_Slepnev on Aiming at the Target · 2008-10-26T20:22:57.000Z · LW · GW

This is very similar to an earlier post. Eliezer, go faster. I, for one, am waiting for some non-trivial FAI math - is there any?

Comment by Vladimir_Slepnev on Ethical Injunctions · 2008-10-21T21:58:40.000Z · LW · GW

Tim Tyler, IMO you're wrong: a human mind does not act as if maximizing any utility function on world states. The mind just goes around in grooves. Nice things like culture and civilization fall out accidentally as side effects. But thanks for the "bright light" idea, it's intriguing.

Comment by Vladimir_Slepnev on Ethical Injunctions · 2008-10-21T12:31:02.000Z · LW · GW

So AIs are dangerous because they're blind optimization processes; evolution is cruel because it's a blind optimization process... and still Eliezer wants to build an optimizer-based AI. Why? We human beings are not optimizers or outcome pumps. We are a layered cake of instincts, and it is precisely this that allows us to be moral and kind.

No idea what I'm talking about, but the "subsumption architecture" papers seem to me much more promising - a more gradual, less dangerous, more incrementally effective path to creating friendly intelligent beings. I hope something like this will be Eliezer's next epiphany: the possibility of non-optimizer-based high intelligence, and its higher robustness compared to paperclip bombs.

Comment by Vladimir_Slepnev on Ends Don't Justify Means (Among Humans) · 2008-10-15T10:17:14.000Z · LW · GW

What if an AI decides, with good reason, that it's running on hostile hardware?

Comment by Vladimir_Slepnev on What Would You Do Without Morality? · 2008-06-29T19:51:00.000Z · LW · GW

Eliezer, if I lose all my goals, I do nothing. If I lose just the moral goals, I begin using previously immoral means to reach my other goals. (It has happened several times in my life.) But your explanations won't be enough to take away my moral goals. Morality is desire conditioned by examples in childhood, not hard logic following from first principles. De-conditioning requires high stress, some really bad experience, and the older you get, the more punishment you need to change your ways.

Sebastian Hagen, people change. Of course you may refuse to accept it, but the current you will be dead in a second, and a different you born. There's a dead little girl in every old woman.

Comment by Vladimir_Slepnev on [deleted post] 2008-06-29T19:11:00.000Z

Wendy Collings, about your Q1, see the book "Five Love Languages". It's precisely about that.

Angel, if you're still here (I doubt it), try applying the label "mindfuckery" to your posts. Sorry I don't have any substantive criticism. Or rather, this is the substantive criticism.

Nominull, yes, pick-up techniques are mind control. So is flirting to obtain a drink or get out of a speeding ticket. Sorry, mind control is a thing humans do and try to get better at. Jonathan Haidt: "We did not evolve language and reasoning because they helped us to find truth; we evolved these skills because they were useful to their bearers, and among their greatest benefits were reputation management and manipulation." http://edge.org/3rd_culture/haidt07/haidt07_index.html

Z. M. Davis, even where gender roles are behavioral, they are (in a sense) determined by biology. For example, the enterprise salesman/client relationship has many similarities to the familiar male/female dynamic, even when both actors are middle-aged men. Now imagine if they had studied this role-play of power every day since childhood.

TGGP, good point.