post by [deleted]

This is a link post for

Comments sorted by top scores.

comment by Vladimir_Nesov · 2023-05-25T23:36:31.852Z · LW(p) · GW(p)

Synthetic data is probably important. Sam Altman seems bullish on it.

Replies from: tangerine
comment by tangerine · 2023-05-26T07:11:18.308Z · LW(p) · GW(p)

For synthetic data to be useful, however, it has to be grounded. Generating synthetic data is easy, fast, and cheap, but grounding it in empirical facts makes it much slower and more expensive. For example, behind every published paper lies an amount of work much, much greater than that of writing down the words.

comment by the gears to ascension (lahwran) · 2023-05-25T23:23:27.241Z · LW(p) · GW(p)

So, it's not possible to beat humans at cultural evolution because... humans also have cultural evolution? I fail to see how that actually makes us guaranteed to be durably better substrates for it.

I do agree that artificial agents also having to deal with Moloch is a major difficulty for them, and that this is a major reason we're safer than some expected for now; but I don't see how that disproves a sudden increase in capabilities. Sure, the old model of instant takeoff has been disproven at this point. But so what if it takes two weeks instead of seconds, and needs to be parallelized across many machines? To quote Bengio:

An AI system in one computer can potentially replicate itself on an arbitrarily large number of other computers to which it has access and, thanks to high-bandwidth communication systems and digital computing and storage, it can benefit from and aggregate the acquired experience of all its clones; this would accelerate the rate at which AI systems could become more intelligent (acquire more understanding and skills) compared with humans. Research on federated learning [1] and distributing training of deep networks [2] shows that this works (and is in fact already used to help train very large neural networks on parallel processing hardware).
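To make the aggregation idea in the quote concrete, here is a toy federated-averaging sketch in Python. Everything in it is a made-up assumption for illustration (the data, the local objective, the parameter sizes); it is not any particular system's implementation. Each "clone" does a local update on its own data, and a coordinator averages the resulting parameters, so experience gathered in parallel ends up in one shared model.

```python
# Toy illustration of federated averaging (FedAvg-style aggregation).
# All names and numbers here are hypothetical assumptions, not a real system's API.
import numpy as np

def local_update(params, data, lr=0.1):
    """One clone's local training step: nudge params toward its own data mean."""
    grad = params - data.mean(axis=0)   # gradient of a simple squared-error objective
    return params - lr * grad

def federated_average(param_list):
    """Coordinator aggregates what all clones learned by averaging their parameters."""
    return np.mean(param_list, axis=0)

rng = np.random.default_rng(0)
global_params = np.zeros(4)
clone_datasets = [rng.normal(loc=i, size=(32, 4)) for i in range(5)]  # each clone sees different data

for _ in range(10):
    # Each clone starts from the shared model and learns from its own experience...
    local_params = [local_update(global_params.copy(), d) for d in clone_datasets]
    # ...and the shared model absorbs all of it at once.
    global_params = federated_average(local_params)

print(global_params)  # drifts toward the average of what the clones observed
```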

Replies from: tangerine
comment by tangerine · 2023-05-26T07:03:20.948Z · LW(p) · GW(p)

An individual agent can’t beat humans at cultural evolution, but multiple agents can. However, the way they do it will almost certainly be very conspicuous, especially if it’s novel (outside the training distribution), because the way you get sufficient data about a new task is by trial and error. If these agents tried to take over the world quickly, it would be like the January 6th insurrection: very visible, misguided, and ineffective. They could do it over a long time span by assuming control of parts of the economy and gaining leverage through lobbying, but that is a slow process.

The Bengio quote is valid, but it doesn’t apply to short timespans. How would a group of agents learn to copy itself across a very large array of hardware, and learn to coordinate, without drawing massive attention to itself? None of this could be done without precedent. We currently have systems that do distributed learning, but these are very specific, narrow implementations that do not scale to taking over or destroying large parts of the world; that would be absolutely unprecedented.

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2023-05-26T07:47:06.490Z · LW(p) · GW(p)

Then I agree, but I still don't believe that makes us even as safe as the title claims.

Replies from: tangerine
comment by tangerine · 2023-05-26T10:36:19.093Z · LW(p) · GW(p)

Okay, I guess this comes down to the interpretation of what “foom” means? I don’t think a world that looks like the current one can be taken over inconspicuously by AI in seconds, nor in weeks, nor even in less than a year. If society has progressed to a point where we feel comfortable giving much more power to artificial agents, then that shortens the timeline.

The reason I think timelines are long is that I think it is inherently hard to do novel things, much harder than is typically thought. I mean, what genuinely new things do you and I really do? Virtually nothing. What I tried to state in this essay is that knowledge is an inherent part of what we typically mean by intelligence, and for new tasks, new intelligence and knowledge are needed.

The way this knowledge is gained is through cultural evolution; memes constitute knowledge and intelligence, and they evolve similarly to genes. The vast majority of the good genes you have come from your ancestors, and most of your new genes or recombinations thereof are overwhelmingly unlikely to improve your genetic makeup. It works the same way with memes: virtually everything you and I can do that we consider uniquely human is something we’ve copied from somewhere else, including “simple” things like counting or percentages. And virtually none of the new things you and I do are improvements.
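As a toy numerical illustration of how rarely blind variation improves something already well-adapted (the fitness function and all parameters below are arbitrary assumptions, not a model of actual cultural evolution): start from a “meme” that already matches its environment on 95 of 100 bits and flip one random bit at a time; only about 5% of such tweaks are improvements, and the rest make things worse.

```python
# Toy model: how often does a random tweak improve an already-good "meme"?
# The fitness function and parameters are arbitrary assumptions for illustration.
import random

random.seed(0)
TARGET = [1] * 100                      # the "right answer" the culture has converged on
meme = [1] * 95 + [0] * 5               # an inherited meme that is already 95% correct

def fitness(m):
    return sum(1 for a, b in zip(m, TARGET) if a == b)

base = fitness(meme)
improvements = 0
TRIALS = 10_000
for _ in range(TRIALS):
    variant = meme.copy()
    i = random.randrange(len(variant))  # one random "innovation"
    variant[i] = 1 - variant[i]
    if fitness(variant) > base:
        improvements += 1

print(f"{improvements / TRIALS:.1%} of random tweaks improved the meme")
# With only 5 wrong bits out of 100, roughly 5% of tweaks help; the rest hurt.
```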

AI is not exempt from the process described above. Its intelligence is just as dependent on knowledge gained through trial and error and cultural evolution. This process is slow, and the faster and greater the effect to be achieved, the more knowledge and time are needed to achieve it in one shot.

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2023-05-26T11:53:39.361Z · LW(p) · GW(p)

Yeah, agreed with all mechanistic points, disagreed on timelines for a network of artificial agents to pull this off. I do think it initially looks like existing subnetworks of malice within human culture being amplified by AI misuse, but we see that now. And that one group of beings who like being loud about capability increase seems increasingly like the Borg every time I encounter them on Twitter. I've inverted my downvote because this post plus these comments seems more reasonable, but I still think the claim in the title is very overconfident - evidence against foom-in-a-box is just an improvement to the map of how to foom.

Replies from: tangerine
comment by tangerine · 2023-05-27T15:53:31.728Z · LW(p) · GW(p)

evidence against foom-in-a-box is just an improvement to the map of how to foom.

Could you elaborate on this? I equate foom with the hard take-off scenario, and I think I’ve stated why I consider that virtually impossible, in contrast to the slow take-off, which, despite being slow, is still very dangerous, as I described.

I think my view roughly aligns with those of Robin Hanson and Paul Christiano, but I think I’ve provided a more precise, gears-level description that has been lacking, and an explanation of why the onus is really on those who think the hard take-off is possible at all.