Maybe I am missing something, but hasn't a seed AI already been planted? Intelligence (whether that means the ability to achieve goals in general, or the ability to do what humans can do) depends on both knowledge and computing power. Currently the largest collection of knowledge and computing power on the planet is the internet. By the internet, I mean both the billions of computers connected to it and the two billion brains of its human users. Both knowledge and computing power are growing exponentially, doubling every 1 to 2 years: in part by adding users, but mostly on the silicon side, by accumulating human knowledge and the hardware to sense, store, index, and interpret it.
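As a quick back-of-the-envelope check (my arithmetic, not part of the original comment), those doubling times compound dramatically over even a single decade:

```python
# Illustrative only: total growth over a decade at the quoted doubling times.
for doubling_years in (1.0, 2.0):
    growth = 2 ** (10 / doubling_years)
    print(f"doubling every {doubling_years:.0f} yr -> ~{growth:,.0f}x in 10 years")
# doubling every 1 yr -> ~1,024x in 10 years
# doubling every 2 yr -> ~32x in 10 years
```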
My question: where is the internet's reward button? Where is its goal of "make humans happy", or whatever it is, coded? How is it useful to describe the internet as a self-improving goal-directed optimization process?
I realize that it is useful, although not entirely accurate, to describe the human brain as a goal-directed optimization process. Humans have certain evolved goals, such as obtaining food, and secondary goals, such as obtaining money. Humans who are better at achieving these goals are assumed to be more intelligent. The model is not entirely accurate because humans are not completely rational. We don't directly seek positive reinforcement. Rather, positive reinforcement is a signal that increases the probability of repeating the actions that immediately preceded it, for example, shooting heroin into a vein. Thus, unlike a rational agent, your desire to use heroin (or to wirehead) depends on how many times you have tried it in the past.
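A minimal sketch of the distinction (my formalization, not the commenter's): a reinforcement learner's preference for an action grows with each past reward, whereas a rational utility maximizer assigns the action a fixed value regardless of history.

```python
# Toy reinforcement update: nudge the action's preference toward the reward
# just received. The preference is history-dependent by construction.
def reinforce(preference: float, reward: float, rate: float = 0.3) -> float:
    return preference + rate * (reward - preference)

p = 0.0  # initial preference for the action (e.g., taking the drug)
for trial in range(5):
    p = reinforce(p, reward=1.0)
    print(f"after trial {trial + 1}: preference = {p:.2f}")
# The preference rises with the number of past exposures -- unlike a utility
# function, which would value the action the same before and after trying it.
```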
We like the utility model because it is mathematically simple. But it also leads to a proof that an ideal rational agent cannot be computed (AIXI). Sometimes a utility model is still a useful approximation, and sometimes not. Is it useful to model a thermostat as an agent that "wants" to keep the room at a constant temperature? Is it useful to model practical AI this way?
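To make the thermostat question concrete, here is a toy sketch (function names and the utility function are my own invention): the same behavior can be written as a bare control rule or as an "agent" maximizing a one-step utility, and the two are indistinguishable from the outside.

```python
# Same thermostat, two descriptions.
def thermostat_rule(temp: float, setpoint: float = 20.0) -> str:
    return "heat" if temp < setpoint else "off"

def thermostat_agent(temp: float, setpoint: float = 20.0) -> str:
    # "Utility" = negative distance from the setpoint after acting.
    def utility(action: str) -> float:
        next_temp = temp + (1.0 if action == "heat" else -1.0)
        return -abs(next_temp - setpoint)
    return max(("heat", "off"), key=utility)

for t in (17.0, 23.0):
    assert thermostat_rule(t) == thermostat_agent(t)
```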
I think the internet has the potential to grow into something you might not wish for, for example, something that marginalizes human brains as an insignificant component. But what are the real risks here? Is it really a problem of the internet misinterpreting its goals, or of those goals being taken over?
Because FAI is a hard problem. If it were easy then we would not still be paying people $70 trillion per year worldwide to do work that machines aren't smart enough to do yet.
If we were smart enough to understand its policy, then it would not be smart enough to be dangerous.
There will never be a singularity. A singularity is infinitely far in the future in "perceptual time", measured in bits learned by intelligent agents. But evolution is a chaotic process whose only attractor is a dead planet. Therefore there is a 100% chance that the extinction of all life (whether created by us or not) will happen first. (Confidence: 95%.)
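One way to formalize the perceptual-time claim (my sketch, assuming a learning rate that diverges hyperbolically at a calendar-time singularity t_s):

```latex
% If the rate of learning is r(t) = c/(t_s - t) bits per unit time, then the
% perceptual time elapsed before the calendar date t_s diverges:
\[
  \int_{0}^{t_s} \frac{c}{t_s - t}\, dt
  \;=\; c \,\bigl[-\ln(t_s - t)\bigr]_{0}^{t_s}
  \;=\; \infty .
\]
% Measured in bits learned, the singularity is never reached.
```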
It's a good idea, but upvoted because evolution will thwart your plans.
I disagree, because a simulation could program you to believe the world was real and that it was more complex than it actually is. Upvoted for underconfidence.
It is not possible for an agent to make a rational choice between one box or two if the agent and Omega can both be simulated by Turing machines. Proof: Omega predicts the agent's decision by simulating it. This requires Omega to have greater algorithmic complexity than the agent (including the nonzero complexity of the compiler or interpreter). But a rational choice by the agent requires that it simulate Omega, which would require the agent to have the greater algorithmic complexity instead.
In other words, the agent X, with complexity K(X), must model Omega, whose program is essentially X plus the rule "put $1 million in box B if X does not take box A"; Omega's complexity is therefore slightly greater than K(X).
In the AIXI framework, the ideal rational agent guesses that Omega is the shortest program consistent with the interaction observed so far. But it can never guess Omega correctly, because Omega's complexity is greater than the agent's. Since AIXI is optimal, no other agent can make a rational choice either.
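The circularity in the proof can be seen directly in code. A minimal sketch (all names hypothetical): if Omega decides by simulating the agent, and the agent decides by simulating Omega, the mutual simulation never bottoms out.

```python
import sys

def omega(agent):
    # Omega fills box B only if it predicts the agent will one-box.
    prediction = agent(omega)      # Omega simulates the agent...
    return 1_000_000 if prediction == "one-box" else 0

def agent(omega_program):
    # A "rational" agent wants to know what Omega did before choosing,
    # so it simulates Omega -- which in turn simulates the agent.
    contents_of_b = omega_program(agent)
    return "one-box" if contents_of_b > 0 else "two-box"

sys.setrecursionlimit(100)         # fail fast instead of hanging
try:
    omega(agent)
except RecursionError:
    print("Mutual simulation never terminates; neither side can decide.")
```

This only illustrates the computability half of the argument; the complexity claim (K(Omega) > K(X)) is the formal version of the same regress.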
As an aside, this is also a wonderful demonstration of the illusion of free will.