Comment by mattmahoney on The genie knows, but doesn't care · 2013-09-16T16:04:32.630Z · score: 2 (8 votes) · LW · GW

Maybe I am missing something, but hasn't a seed AI already been planted? Intelligence (whether that means ability to achieve goals in general, or whether it means able to do what humans can do) depends on both knowledge and computing power. Currently the largest collection of knowledge and computing power on the planet is the internet. By the internet, I mean both the billions of computers connected to it, and the two billion brains of its human users. Both knowledge and computing power are growing exponentially, doubling every 1 to 2 years, in part by adding users, but mostly on the silicon side by collecting human knowledge and the hardware to sense, store, index, and interpret it.

My question: where is the internet's reward button? Where is its goal of "make humans happy", or whatever it is, coded? How is it useful to describe the internet as a self-improving goal-directed optimization process?

I realize that it is useful, although not entirely accurate, to describe the human brain as a goal-directed optimization process. Humans have certain evolved goals, such as food, and secondary goals such as money. Humans who are better at achieving these goals are assumed to be more intelligent. The model is not entirely accurate because humans are not completely rational. We don't directly seek positive reinforcement. Rather, positive reinforcement is a signal that increases the probability of performing whatever actions immediately preceded it, for example, shooting heroin into a vein. Thus, unlike a rational agent's, your desire to use heroin (or to wirehead) depends on how many times you have tried it in the past.
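The distinction between a rational utility maximizer and a reinforcement learner can be made concrete in a few lines. This is a toy model (the class and action names are my own invention, not from any reference): the learner's preference for an action is literally a function of how many times that action has been reinforced.

```python
# A toy model (names are mine): a reinforcement learner whose probability of
# repeating an action grows with how often that action has been rewarded --
# unlike a rational agent, its "desire" depends on its history of trials.
class ReinforcementLearner:
    def __init__(self, actions):
        # One count per action; counts act as unnormalized action preferences.
        self.counts = {a: 1.0 for a in actions}

    def p(self, action):
        # Probability of choosing `action` under proportional selection.
        return self.counts[action] / sum(self.counts.values())

    def reinforce(self, action, reward):
        # Positive reinforcement raises the probability of the action
        # that immediately preceded it.
        self.counts[action] += reward

agent = ReinforcementLearner(["abstain", "use"])
before = agent.p("use")          # 0.5 before any trials
for _ in range(3):
    agent.reinforce("use", 1.0)  # each trial strengthens the habit
after = agent.p("use")           # 4.0 / 5.0 = 0.8
```

A rational agent's preference would depend only on the expected utility of using, not on the number of past trials; here the preference rises mechanically with each reinforced trial, which is the point of the heroin example.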

We like the utility model because it is mathematically simple. But it also leads to a proof that ideal rational agents cannot exist (AIXI, the ideal rational agent, is not computable). Sometimes a utility model is still a useful approximation, and sometimes not. Is it useful to model a thermostat as an agent that "wants" to keep the room at a constant temperature? Is it useful to model practical AI this way?
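To make the thermostat question concrete, here is a minimal sketch (the function and action names are my own, purely illustrative) that models a thermostat as a one-step utility maximizer whose utility is the negative distance from a setpoint:

```python
# A minimal sketch (names are mine): a thermostat modeled as a one-step
# utility maximizer that "wants" the room temperature to equal a setpoint.
def thermostat_action(temp, setpoint=20.0):
    # Candidate actions and their assumed one-step effect on temperature.
    effects = {"heat": +1.0, "cool": -1.0, "off": 0.0}

    def utility(next_temp):
        # Utility peaks when the temperature equals the setpoint.
        return -abs(next_temp - setpoint)

    # Choose the action whose predicted outcome maximizes utility.
    return max(effects, key=lambda a: utility(temp + effects[a]))

thermostat_action(17.0)  # a cold room -> "heat"
thermostat_action(23.0)  # a warm room -> "cool"
```

The "wants" framing adds nothing over a description of the switch mechanism, which is the sense in which the utility model can be accurate yet not useful.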

I think the internet has the potential to grow into something you might not wish for, for example, something that will marginalize human brains as an insignificant component. But what are the real risks here? Is it really a problem of the internet misinterpreting its goals, or of those goals being taken over?

Comment by mattmahoney on Breakdown of existential risks · 2012-11-23T18:26:17.113Z · score: -1 (5 votes) · LW · GW

Because FAI is a hard problem. If it were easy, then we would not still be paying people $70 trillion per year worldwide to do work that machines aren't smart enough to do yet.

Comment by mattmahoney on Reply to Holden on 'Tool AI' · 2012-07-05T17:38:30.313Z · score: 0 (0 votes) · LW · GW

If we were smart enough to understand its policy, then it would not be smart enough to be dangerous.

Comment by mattmahoney on Thoughts on the Singularity Institute (SI) · 2012-05-11T16:06:33.693Z · score: 4 (4 votes) · LW · GW

Comment by mattmahoney on The Irrationality Game · 2011-04-26T16:29:04.749Z · score: 22 (35 votes) · LW · GW

There will never be a singularity. A singularity is infinitely far in the future in "perceptual time" measured in bits learned by intelligent agents. But evolution is a chaotic process whose only attractor is a dead planet. Therefore there is a 100% chance that the extinction of all life (created by us or not) will happen first. (95%).

Comment by mattmahoney on The Irrationality Game · 2011-04-26T16:16:06.756Z · score: 0 (0 votes) · LW · GW

It's a good idea, but I upvoted because evolution will thwart your plans.

Comment by mattmahoney on The Irrationality Game · 2011-04-26T16:01:43.220Z · score: 2 (2 votes) · LW · GW

I disagree because a simulation could program you to believe the world was real and believe it was more complex than it actually was. Upvoted for underconfidence.

Comment by mattmahoney on Newcomb's Problem and Regret of Rationality · 2008-04-12T23:56:00.000Z · score: 1 (1 votes) · LW · GW

It is not possible for an agent to make a rational choice between 1 or 2 boxes if the agent and Omega can both be simulated by Turing machines. Proof: Omega predicts the agent's decision by simulating it. This requires Omega to have greater algorithmic complexity than the agent (including the nonzero complexity of the compiler or interpreter). But a rational choice by the agent requires that it simulate Omega, which requires that the agent have greater algorithmic complexity instead.

In other words, the agent X, with complexity K(X), must model Omega which has complexity K(X + "put $1 million in box B if X does not take box A"), which is slightly greater than K(X).
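The complexity bookkeeping in the two paragraphs above can be written out explicitly. This is a sketch of the argument as stated, with $c_1, c_2 > 0$ standing for the (machine-dependent) cost of encoding one program inside the other:

```latex
\begin{aligned}
K(\Omega) &\ge K(X) + c_1 && \text{($\Omega$ contains a model of $X$ plus the box rule)} \\
K(X) &\ge K(\Omega) + c_2 && \text{(a rational $X$ must contain a model of $\Omega$)} \\
\Rightarrow\quad K(\Omega) &\ge K(\Omega) + c_1 + c_2 \;>\; K(\Omega)
\end{aligned}
```

The last line is a contradiction, so the agent and Omega cannot both exist as described.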

In the framework of the ideal rational agent (AIXI), the agent guesses that Omega is the shortest program consistent with the interaction observed so far. But it can never guess Omega correctly, because Omega's complexity is greater than the agent's. Since AIXI is optimal, no other agent can make a rational choice either.

As an aside, this is also a wonderful demonstration of the illusion of free will.