Comments

Comment by provocateur on Open Thread, April 1-15, 2012 · 2012-04-04T21:58:07.925Z · LW · GW

Brevity is the soul of wit. Why is LW so obviously biased towards long-windedness?

Comment by provocateur on You only live once: a reframing of working towards posthumanity · 2012-04-04T21:48:01.094Z · LW · GW

this secret area contains hundreds of times as much content as the actual game.

How can a part be bigger than the whole? You probably want to say "as the rest of the game" instead. It took me a bit of effort to understand what you were trying to say.

Comment by provocateur on SotW: Be Specific · 2012-04-04T02:36:45.290Z · LW · GW

Hacker News discussion about this

Comment by provocateur on Evidence for the orthogonality thesis · 2012-04-04T01:04:03.961Z · LW · GW

All these arguments for the danger of AGI are worthless if the team that creates it doesn't heed the warnings.

I had known about this site for years, but only recently noticed that it has a "discussion" section (this was before the front page redesign), and that the dangers of AGI are even on-topic here.

Not that I'm about to create an AGI, but the team that does will probably be even busier and even less willing to be talked down to ("you need to learn to think", etc.).

Just my 2e-2

Comment by provocateur on Advice for an isolated Rationalist? · 2012-04-03T23:57:43.666Z · LW · GW

I can confidently say that many of the ideas in this community have done much to better my life

Could you give some examples?

Comment by provocateur on Cryonics on LessWrong vs at LessWrong meetups · 2012-04-02T23:32:04.361Z · LW · GW

I'm an atheist, and believe that my mind can be seen as simply "software" running on my brain. However, that "software" also believes that "I" am not just the software, but the brain and perhaps even the rest of the body.

If someone cloned my body atom for atom, "I" feel like it wouldn't really be me, just an illusion fooling outside observers. Same for mind uploads.

Do any other atheists feel the same way?

As to cryonics, that's obviously not quite the same as a mind upload, but it feels like a grey area if the original cells are destroyed.

Another thing: if my world is just a simulation (even the NYT has written about this theory), which I have no way of knowing, then cloning myself and killing the original is still suicide, with very negative utility.

What do others think? I know that Kurzweil can't wait to upload his mind, and Goertzel wants multiple copies of himself to hedge his bets.

Comment by provocateur on [draft] Concepts are Difficult, and Unfriendliness is the Default: A Scary Idea Summary · 2012-03-31T21:37:18.551Z · LW · GW

AGI will only be Friendly if its goals are the kinds of goals that we would want it to have

At the risk of losing my precious karma, I'll play the devil's advocate and say I disagree.

First, some definitions: a "Friendly" AI, according to Wikipedia, is one that is beneficial to humanity (not a human buddy or pet). "General" in AGI means not problem-specific (as opposed to narrow AI).

My counterexample is an AI system that lacks any motivations, goals or actuators. Think of an AIXI system (or, realistically, a system that approximates it), and subtract any reward mechanisms. It just models its world (looking for short programs that describe its input). You could use it to make (super-intelligent) predictions about the future. This seems clearly beneficial to humanity (until it falls into malicious human hands, but that's beside the point of the argument you are making).
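To make the "just models its world" idea concrete, here is a minimal toy sketch in Python. It is nowhere near a real AIXI approximation; the "programs" are simply repeating bit patterns and the 2^-length weighting is my own simplification of the shorter-is-more-probable idea, but it shows a pure predictor with no goals or actuators:

```python
from itertools import product

def predict_next_bit(observed):
    """Toy 'short programs explain the data' predictor.

    Hypotheses are tiny 'programs': bit patterns of length 1..8 that
    generate data by repeating cyclically. Each is weighted 2**-length
    (shorter = more probable), kept only if it reproduces the observed
    bits, and the survivors vote on the next bit.
    """
    weight = {0: 0.0, 1: 0.0}
    for length in range(1, 9):
        for pattern in product((0, 1), repeat=length):
            # Keep this hypothesis only if repeating it reproduces all observed bits.
            if all(bit == pattern[i % length] for i, bit in enumerate(observed)):
                prediction = pattern[len(observed) % length]
                weight[prediction] += 2.0 ** -length
    total = weight[0] + weight[1]
    return weight[1] / total if total else 0.5  # P(next bit = 1)

# The system only predicts; it never acts on the world.
print(predict_next_bit([1, 0, 1, 0, 1, 0]))  # ~0.96: the alternating pattern dominates
```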

Comment by provocateur on AI Risk & Opportunity: A Timeline of Early Ideas and Arguments · 2012-03-31T19:32:29.700Z · LW · GW

SPOILER ALERT: don't read if you have yet to see Eagle Eye.

I doubt that The Terminator introduced any important new ideas. Its notability is like that of David Chalmers' recent paper: bringing old ideas to the attention of the broader public.

The AI in Eagle Eye spoofs its own sensors at one point. Again, not a novel idea per se, but pretty great for a movie. At the beginning of the movie, IIRC, there was some Bayesian updating going on based on different sources of evidence.

Comment by provocateur on AI Risk & Opportunity: A Timeline of Early Ideas and Arguments · 2012-03-31T17:01:21.471Z · LW · GW

Since you are including works of fiction, I think Terminator (1984) is worth mentioning. This is what most people think of when it comes to AI risk.

By the way, my personal favorite, when it comes to AI doing what it wasn't intended to do, would have to be Eagle Eye (2008). It's got everything: a hard take-off and wireheading of sorts, second-guessing humans, decent acting.

Comment by provocateur on [video] Paul Christiano's impromptu tutorial on AIXI and TDT · 2012-03-29T22:15:02.270Z · LW · GW

The embedded YouTube video seems to end rather abruptly. Did the iPhone battery run out?