Comments
Does the Singularity Institute have plans for what to do if an unfriendly AI appears from nowhere? (Not that you should make such plans public.)
Why did the SuperHappies adopt the Babyeaters' ethics? I thought that they exterminated them. Or is 6/8 an alternative to 5/8 rather than its sequel?
It might be better to number the sections 1, 2, 3, 4, 5A, 6A, 5B, 6B.
Eliezer's hard takeoff scenario for "AI go FOOM" is the AI taking off in a few hours or weeks. Let's say that the AI has to increase in intelligence by a factor of 10 for it to count as "FOOM". If there is no increase in resources, then intelligence has to double anywhere from once an hour to once every few days through recursion or cascades alone. Doubling once a day compounds to an annual growth factor of 2^365, or roughly 10^110. This is quite a large number. It seems more likely that "AI goes FOOM" will be the result of resource overhang than of recursion or cascades.
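A quick sanity check on the arithmetic above (a sketch, not part of the original argument):

```python
import math

# "Doubling once a day" compounds to 2**365 over a year.
annual_factor = 2 ** 365
print(f"2^365 ~ 10^{math.log10(annual_factor):.0f}")  # about 10^110

# Doublings needed for a factor-of-10 gain: log2(10) ~ 3.3,
# so a 10x increase in a few weeks is a doubling every few days,
# and a 10x increase in a few hours is roughly a doubling per hour.
doublings_for_10x = math.log2(10)
print(f"{doublings_for_10x:.1f} doublings for a 10x increase")
```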
Note that a nuclear chain reaction is not an example of recursion. Once an atom is split, it can't be split again. A nuclear chain reaction is more like a forest fire when the tinder is very dry. It is probably better explained as a resource overhang than recursion.
I've been wondering how much of Moore's law was due to increasing the amount of human resources being devoted to the problem. The semiconductor industry has grown tremendously over the past fifty years, with more and more researchers all over the world being drawn into the problem. Jed, do you have any intuition about how much this has contributed?
Eliezer: If "AI goes FOOM" means that the AI achieves super-intelligence within a few weeks or hours, then it has to be at the meta-cognitive level or the resource-overhang level (taking over all existing computer cycles). You can't run off to Proxima Centauri in that time frame.
One source of diminishing returns is upper limits on what is achievable. For instance, Shannon proved that there is an upper bound on the error-free capacity of a communication channel. No amount of intelligence can squeeze more error-free capacity out of a channel than this. There are also limits on what is learnable by induction alone, even with unlimited resources and unlimited time (cf. "The Logic of Reliable Inquiry" by Kevin T. Kelly). Such limits indicate that an AI cannot improve its meta-cognition exponentially forever. At some point, the improvements have to level off.
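To make the Shannon bound concrete, here is the Shannon–Hartley capacity formula for an additive-white-Gaussian-noise channel (the bandwidth and SNR figures below are illustrative, not from the comment):

```python
import math

def channel_capacity(bandwidth_hz: float, snr: float) -> float:
    """Shannon-Hartley limit: maximum error-free rate in bits/second
    for an AWGN channel. C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr)

# A 3 kHz line at an SNR of 1000 (30 dB) tops out near 30 kbit/s,
# no matter how clever the encoder is.
print(channel_capacity(3000, 1000))  # ~29,900 bits/s
```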
Perhaps, in analogy with Fermi's pile, there is a certain critical mass of intelligence that is necessary for an AI to go FOOM. Can we figure out how much intelligence is needed? Is it reasonable to assume that it is more than the effective intelligence of all of the AI researchers working in AI? Or more conservatively, the intelligence of one AI researcher?
I think that Eliezer and Robin are both right. General AI is going to take a few big insights AND a lot of small improvements.
One way to evaluate a Bayesian approach to science is to see how it has fared in other domains where it is already applied. For instance, statistical approaches to machine translation have done surprisingly well compared to rule-based approaches. However, a paper by Franz Josef Och (one of the founders of statistical machine translation) shows that probabilistic approaches do not always perform as well as non-probabilistic (but still statistical) approaches. Basically, maximizing the likelihood of a machine translation model produces results that are significantly worse than directly minimizing the error. The general principle is that you should optimize the function closest to the criterion you actually care about: maximizing a model's likelihood won't give good results if what you really care about is minimizing its error.
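A toy illustration of that gap (my own construction, not from Och's paper): when the model class is restricted, the maximum-likelihood model can be a strictly worse predictor than the minimum-error model.

```python
import math

# Observations: 6 positives, 4 negatives.
data = [1] * 6 + [0] * 4

# Restricted model class: only two candidate values of p = P(y = 1).
candidates = [0.4, 0.9]

def log_likelihood(p, ys):
    return sum(math.log(p if y else 1 - p) for y in ys)

def error_rate(p, ys):
    pred = 1 if p > 0.5 else 0  # predict the label the model favors
    return sum(y != pred for y in ys) / len(ys)

mle = max(candidates, key=lambda p: log_likelihood(p, data))
min_err = min(candidates, key=lambda p: error_rate(p, data))

# p = 0.9 wastes probability on the 4 negatives, so p = 0.4 wins on
# likelihood -- but then the model predicts 0 and errs on all 6 positives.
print(mle, error_rate(mle, data))          # 0.4 -> 60% error
print(min_err, error_rate(min_err, data))  # 0.9 -> 40% error
```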
By analogy, maximizing the likelihood of scientific hypotheses may lead to different results from minimizing the error. Currently, science tries to minimize the error -- it is always trying to disprove bad hypotheses through experimentation. The best hypotheses are the ones left standing. If science switched to maximizing the likelihood of the best hypotheses, this might lead to unintended consequences. For instance, it might be easier to maximize the probability of your pet hypothesis by refining your priors rather than by seeking experiments that could potentially disprove it.
I'm not a physicist, I'm a programmer. If I tried to simulate the Many-Worlds Interpretation on a computer, I would rapidly run out of memory keeping track of all of the different possible worlds. How does the universe (or universe of universes) keep track of all of the many worlds without violating some sort of conservation law?
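For scale, here is a back-of-the-envelope sketch (my own, assuming a brute-force state-vector simulation) of the memory blow-up the question gestures at: n two-state subsystems need 2^n complex amplitudes.

```python
BYTES_PER_AMPLITUDE = 16  # one complex double: 2 x 8 bytes

def memory_bytes(n_qubits: int) -> int:
    """Memory for a full state vector of n two-state subsystems."""
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

# 30 subsystems already need 16 GiB; 50 need 16 PiB.
for n in (10, 30, 50):
    print(n, memory_bytes(n) / 2**30, "GiB")
```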
If a photon reflects off two full mirrors at right angles, its amplitude is multiplied by i * i = -1. Does it matter whether the second mirror turns the photon back towards its source, or lets it continue in the direction it was originally going? Do you get -1 in both cases?
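The multiplication itself is a two-line check with complex numbers; this only verifies the i * i = -1 arithmetic, not the physics question of whether the phase convention depends on the geometry:

```python
# Each reflection off a full mirror multiplies the amplitude by i
# (using the convention from the post this comments on).
amplitude = 1 + 0j
for _ in range(2):  # two full-mirror reflections
    amplitude *= 1j
print(amplitude)    # (-1+0j)
```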