Posts

Limits to Learning: Rethinking AGI’s Path to Dominance 2023-06-02T16:43:25.635Z

Comments

Comment by tangerine on Humans, chimpanzees and other animals · 2023-06-01T15:47:30.634Z · LW · GW

The difference between humans and chimpanzees is purely one of imitation. Humans have evolved to sustain cultural evolution, by imitating existing culture and expanding upon it within a lifetime. Chimpanzees don’t imitate reliably enough to sustain this process, so their individual knowledge gains are mostly lost at death, but the individual intelligence of a human and a chimpanzee is virtually the same. A feral human child, that is, a human raised without human culture, does not behave more intelligently than a chimpanzee.

The slow accumulation of culture is what separated humans from chimpanzees. For AI to be to humans what humans are to chimps is not really possible because you either accumulate culture (knowledge) or you don’t. The only remaining distinction is one of speed. AI could accumulate a culture of its own, faster than humans. But how fast?

Comment by tangerine on Hands-On Experience Is Not Magic · 2023-05-28T21:41:28.364Z · LW · GW

The laws of physics are much simpler than the detailed structure of a given table

It is not practical to simulate everything down to the level of the laws of physics. In practice, you usually have to come up with much coarser models that can actually be computed in a reasonable time. Most of the experimentation is needed to construct those models in the first place, so that they align sufficiently with reality, and even then only under certain circumstances.

You could perhaps use quantum mechanics to calculate planetary orbits thousands of years out, but it is much simpler to use Newtonian mechanics, because planetary motion happens to be easy to model that way. The same is not true for building rocket engines, or for predicting the stock market or global politics.
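
As a minimal sketch of this point (my own illustrative toy, not anything from the original post): a few lines of Newtonian mechanics are enough to propagate a rough Earth orbit, with no quantum-level detail anywhere in the model. The constants and the integration scheme below are assumptions chosen purely for simplicity.

```python
# Minimal sketch: a coarse Newtonian model of the Earth-Sun system.
# No quantum mechanics anywhere; point masses and Newton's law of gravitation
# are accurate enough here, which is exactly why the coarser model wins.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # mass of the Sun, kg
AU = 1.496e11          # astronomical unit, m

def simulate_orbit(years=1.0, dt=3600.0):
    """Integrate an approximately circular Earth orbit with semi-implicit Euler steps."""
    x, y = AU, 0.0                 # start 1 AU from the Sun
    vx, vy = 0.0, 29_780.0         # approximate orbital speed, m/s
    steps = int(years * 365.25 * 24 * 3600 / dt)
    for _ in range(steps):
        r = (x * x + y * y) ** 0.5
        ax = -G * M_SUN * x / r**3  # Newtonian gravitational acceleration
        ay = -G * M_SUN * y / r**3
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x / AU, y / AU          # position in AU after the given time

print(simulate_orbit(years=1.0))   # ends up roughly back near (1, 0)
```

The coarse model is adequate here precisely because planetary motion is dominated by a single simple force law; nothing analogous is available for rocket engines, stock markets or global politics.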

Comment by tangerine on Adumbrations on AGI from an outsider · 2023-05-28T20:59:38.008Z · LW · GW

Intelligence is indeed not magic. None of the behaviors you display that are more intelligent than a chimpanzee’s are things you have invented. I’m willing to bet that virtually no behavior you have personally come up with is an improvement on what you were taught. (That’s not an insult; it’s simply par for the course for humans.) In other words, an individual human is not smarter than a chimpanzee.

The reason humans are able to display more intelligent behavior is because we’ve evolved to sustain cultural evolution, i.e., the mutation and selection of behaviors from one generation to the next. All of the smart things you do are a result of that slow accumulation of behaviors, such as language, counting, etc., that you have been able to simply imitate. So the author’s point stands that you need new information from experiments in order to do something new, including new kinds of persuasion.

Comment by tangerine on [deleted post] 2023-05-27T15:53:31.728Z

evidence against foom-in-a-box is just an improvement to the map of how to foom.

Could you elaborate on this? I equate foom with the hard take-off scenario, and I think I’ve stated why I consider it virtually impossible, in contrast to the slow take-off, which, despite being slow, is still very dangerous, as I described.

I think my view roughly aligns with those of Robin Hanson and Paul Christiano, but I think I’ve provided a more precise, gears-level description that has so far been lacking, and an argument for why the onus is really on those who think a hard take-off is possible at all.

Comment by tangerine on [deleted post] 2023-05-26T10:36:19.093Z

Okay, I guess this comes down to the interpretation of what “foom” means? I don’t think a world that looks like the current one can be taken over inconspicuously by AI in seconds, nor in weeks, nor even in less than a year. If society has progressed to a point where we feel comfortable giving much more power to artificial agents, then that shortens the timeline.

The reason I think timelines are long is that I think it is inherently hard to do novel things, much harder than typically thought. I mean, what new things do you and I really do? Virtually nothing. What I tried to state in this essay is that knowledge is an inherent part of what we typically mean by intelligence, and for new tasks, new intelligence and knowledge are needed. The way this knowledge is gained is through cultural evolution: memes constitute knowledge and intelligence, and they evolve much as genes do. The vast majority of the good genes you have come from your ancestors, and most of your new genes or recombinations thereof are very unlikely to improve your genetic makeup. It works the same way with memes: virtually everything you and I can do that we consider uniquely human is something we’ve copied from somewhere else, including “simple” things like counting or percentages. And virtually none of the new things you and I do are improvements.
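
As a toy illustration of that dynamic (the population size, innovation rate and payoffs below are made-up numbers of mine, not a model from any literature): agents mostly imitate the best existing behavior, most innovations make things worse, and yet the population’s skill still ratchets upward, slowly.

```python
import random

# Toy model of cumulative cultural evolution. Each agent holds a single
# "skill level"; almost all of it comes from imitating the previous
# generation, and most innovations make things worse, not better.
random.seed(0)

POP = 200               # agents per generation
INNOVATION_RATE = 0.05  # how often an agent tries something new
GOOD_ODDS = 0.1         # fraction of innovations that actually improve things

def next_generation(skills):
    best = max(skills)                      # the culture available to imitate
    new_skills = []
    for _ in range(POP):
        skill = best                        # imitation does nearly all the work
        if random.random() < INNOVATION_RATE:
            if random.random() < GOOD_ODDS:
                skill += random.uniform(0.0, 0.1)   # rare useful innovation
            else:
                skill -= random.uniform(0.0, 0.5)   # most innovations are losses
        new_skills.append(skill)
    return new_skills

skills = [0.0] * POP
for gen in range(1, 501):
    skills = next_generation(skills)
    if gen % 100 == 0:
        print(f"generation {gen}: best skill {max(skills):.2f}")
```

Running it shows the best skill creeping upward over hundreds of generations even though 90% of innovations in the model are losses; the accumulation comes from imitation plus selection, not from individual brilliance.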

AI is not exempt from the process described above. Its intelligence is just as dependent on knowledge gained through trial and error and cultural evolution. This process is slow, and the faster and greater the effect to be achieved, the more knowledge and time are needed to actually do it in one shot.

Comment by tangerine on [deleted post] 2023-05-26T07:11:18.308Z

For any synthetic data to be useful, however, that data needs to be grounded. Generating synthetic data is easy, fast and cheap, but grounding it in empirical facts makes it much slower and more expensive. For example, behind every published paper lies an amount of work much, much greater than writing down the words.

Comment by tangerine on [deleted post] 2023-05-26T07:03:20.948Z

An individual agent can’t beat humans at cultural evolution, but multiple agents can. However, the way they do it will almost certainly be very conspicuous, especially if it’s novel (outside the training distribution), because the way you get sufficient data about a new task is by trial and error. If these agents tried to take over the world quickly, it would be like the January 6th insurrection: very visible, misguided and ineffective. They could do it over a long time span by assuming control of parts of the economy and gaining leverage through lobbying, but that is a slow process.

The Bengio quote is valid, but it doesn’t apply to short timespans. How would a group of agents learn to copy itself across a very large array of hardware, and learn to coordinate, without drawing massive attention to itself? None of this could be done without precedent. We do currently have systems that perform distributed learning, but these are very specific and narrow implementations that do not scale to taking over or destroying large parts of the world; that is absolutely unprecedented.

Comment by tangerine on Could a superintelligence deduce general relativity from a falling apple? An investigation · 2023-04-23T19:07:10.745Z · LW · GW

You are assuming a superintelligence that already knows how to perform all these deductions. Why would that be a valid assumption? You are reasoning from your own point of view, i.e., the point of view of someone who has already seen much, much more of the world than a few frames, and, more importantly, of someone who already knows what is supposed to be deduced, which allows you to artificially shrink the hypothesis space. On what basis would this superintelligence be able to do that?

Comment by tangerine on Why did everything take so long? · 2017-12-30T17:12:33.278Z · LW · GW

Your point c definitely rings true to me. An answer often seems simple in hindsight, but that an answer is simple doesn’t mean it’s simple to find. There are often many simple answers, and the vast majority of them are useless.

Comment by tangerine on Security Mindset and the Logistic Success Curve · 2017-11-27T16:28:39.180Z · LW · GW

Mr. Topaz is right (even if for the wrong reasons). If he is optimizing for money, being first to market may indeed give him a higher probability of becoming rich, and wanting to be rich is not wrong. Sure, the service will be insecure, but if he sells before any significant issues pop up, he may sell for billions. “Après moi, le déluge.”

Unfortunately, Mr. Topaz’s actions are also part of the very optimization process that leads to misaligned AGI, a process already underway at this very moment. And indeed, trying to fix Mr. Topaz is ordinary paranoia. Security mindset would be closer to getting rid of The Market that so perversely incentivizes him; I don’t like the idea, but if it’s between that and a paperclip maximizer, I know what I would choose.