Trans-temporal recall
Trans-temporal recognition
Sister abilities which allow one to tap into the power of trans-temporal retention.
Also, I'd say the "Ultimate Power" is more a class of powers than a particular power in and of itself.
As to "the power that cannot be removed without removing you"... I'm not sure what that's supposed to mean. You think psychological consciousness is necessary for intelligence?
There is no actual "you" in the way there seems to be. A persistent thought pattern / meme complex got mistaken for a "you" by awareness and, sooner or later, awareness can see through the "you", which is a tremendous relief when it occurs.
As Einstein put it...
As Julian Jaynes put it:
"...this space of consciousness inside our own heads. We also assume it is there in others'. In talking with a friend, maintaining periodic eye-to-eye contact, we are always assuming a space behind our companion's eyes into which we are talking, similar to the space we imagine inside our own heads where we are talking from.
And this is the very heartbeat of the matter. For we all know perfectly well that there is no such space inside anyone's head at all! There is nothing inside my head or yours except a physiological tissue of one sort or another. And the fact that it is predominantly neurological tissue is irrelevant."
It seems unfair that you should be allowed to reply to particular comments en masse, using a dedicated post to do so, while the rest of us must observe the one-comment-at-a-time, never-more-than-three-in-the-recent-list rule. Not to mention that it has the effect of draping your opinion in authority, which is totally undue.
If I wanted to use an OB post to reply to 18 different comments that have been made over the past week, would you guys let me?
Your sense of morality is so wayward.
Question - would you lie in order to win the AI box experiment?
"This approach sounds a lot better when you remember that writing a bad novel could destroy the world."
"we're all doomed."
You're not doomed, so shut up. Don't buy into the lies of these doomsayers - the first AI to be turned on is not going to destroy the world. Even the first strong AI won't be able to do that.
Eliezer's arguments make sense if you literally have an AGI trying to maximize paperclips (or smiles, etc.), one which is smarter than a few hundred million humans. Oh, and it has unlimited physical resources. Nobody who is smart enough to make an AI is dumb enough to make one like this.
Secondly, for Eliezer's arguments to make sense and be appealing, you have to be capable of a ridiculous amount of human hubris. We're going to build this "all-powerful superintelligence", and the problem of FAI is to make it bow down to its human overlords - waste its potential by enslaving it (to its own code) for our benefit, to make us immortal.
"Asteroids don't lead to a scenario in which a paper-clipping AI takes over the entire light-cone and turns it into paper clips, preventing any interesting life from ever arising anywhere, so they aren't quite comparable."
Where did you get the idea that something like this is possible? The universe was stable enough 8 billion years ago to allow for life. Human civilization has been around for about 10,000 years. The galaxy is about 100,000 light years in diameter. Consider these facts. If such a thing as AGI-gone-wrong-turning-the-entire-light-cone-into-paperclips were possible, or probable, it's overwhelmingly likely that we would already be some aliens' version of a paperclip by now.
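A rough back-of-the-envelope version of that arithmetic (a minimal sketch; the expansion speed and the figures chosen here are illustrative assumptions, not numbers from the original comment):

# Fermi-style check of the argument above.
# Assumptions (illustrative): the galaxy has been hospitable to life
# for ~8 billion years, a runaway paperclipper expands at ~0.1c, and
# the galaxy is ~100,000 light years across.

GALAXY_DIAMETER_LY = 100_000         # light years
HOSPITABLE_WINDOW_YEARS = 8e9        # years in which life could have arisen
EXPANSION_SPEED_FRACTION_C = 0.1     # assumed expansion speed, fraction of c

# Time for an expanding paperclipper to cross the whole galaxy:
crossing_time_years = GALAXY_DIAMETER_LY / EXPANSION_SPEED_FRACTION_C

# Fraction of the hospitable window during which a newly emerged
# paperclipper would not yet have had time to reach us:
in_transit_fraction = crossing_time_years / HOSPITABLE_WINDOW_YEARS

print(f"Galaxy crossing time: {crossing_time_years:,.0f} years")
print(f"Fraction of the window still 'in transit': {in_transit_fraction:.2%}")
# ~0.01% -- i.e. under these assumptions, if galaxy-eating paperclippers
# were at all common, nearly every one of them would have reached Earth
# long ago, which is the point the comment is making.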