Coordination explosion before intelligence explosion...?

post by tailcalled · 2023-03-05T20:48:55.995Z · LW · GW · 9 comments

Epistemic status: musing that I wanted to throw out there.

A traditional AI risk worry has been the notion of an intelligence explosion: An AI system will rapidly grow in intelligence and become able to make huge changes using small[1] subtle[2] tricks such as bioengineering or hacking. Since small actions are not that tightly regulated, these huge changes would be made in a relatively unregulated way, probably destroying a lot of things, maybe even the entire world or human civilization.

Modern AI systems such as LLMs seem to be making rapid progress in turning sensory data into useful information, aggregating information from messy sources, processing information in commonsense ways, and delivering information to people. These abilities do not seem likely to generalize to bioengineering or hacking (which involve generating novel capabilities), but they do seem plausibly useful for some things.


Two scenarios of interest:

Coordination implosion: Some people suggest that because modern AI systems are extremely error-prone, they will not be useful for much beyond spam, which degrades our coordination abilities. I'm not sure this scenario is realistic, because a lot of people are working on making these systems useful for legitimate tasks.

Coordination explosion: With basic information processing automated, it seems like we might be able to coordinate better. We are already seeing this with chatbots that work as assistants, sometimes giving useful advice based on their mountains of integrated knowledge. But we could imagine going further, e.g. by automatically registering people's experiences and actions, then aggregating this information and routing it to the relevant places.

(For instance, maybe a software company installs AI-based surveillance that notices when developers encounter bugs and records how they solve them, so that it can advise future developers who run into similar bugs.)
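A minimal sketch of what such a bug-advice system might look like. Everything here (the names BugMemory, record, advise, and the keyword-overlap notion of "similar bug") is a hypothetical illustration, not a description of any existing system:

```python
# Toy sketch of the hypothetical bug-advice system described above:
# it records how a bug was solved and surfaces those notes to the next
# developer who hits a similar-looking error. Similarity here is just
# word overlap; a real system would use embeddings or an LLM.
from dataclasses import dataclass


@dataclass
class BugRecord:
    error_text: str   # what the developer saw
    resolution: str   # how they fixed it


class BugMemory:
    def __init__(self):
        self.records: list[BugRecord] = []

    def record(self, error_text: str, resolution: str) -> None:
        """Store a solved bug so it can advise future developers."""
        self.records.append(BugRecord(error_text, resolution))

    def advise(self, new_error: str, top_k: int = 1) -> list[str]:
        """Return resolutions for the most similar previously seen bugs."""
        query = set(new_error.lower().split())
        scored = sorted(
            self.records,
            key=lambda r: len(query & set(r.error_text.lower().split())),
            reverse=True,
        )
        return [r.resolution for r in scored[:top_k]]


memory = BugMemory()
memory.record("NullPointerException in payment service on empty cart",
              "Guard against empty carts before calling charge().")
print(memory.advise("NullPointerException when cart is empty"))
```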

This might revolutionize the way we act. Rather than having to create, spread, and collect information, maybe we would end up always having relevant information at hand, ready for our decisions. With a bit of rationing, we might even be able to keep spam down to a workable level.


I'm not particularly sure this is what things are going to look like. However, I think the possibility is useful to keep in mind: there may be an intermediate phase between now and "full AGI", where we have a sort of transformative artificial intelligence, but not in the sense of leading to an intelligence explosion. There may still be an intelligence explosion afterwards. Or not, if you don't believe in intelligence explosions.

I foresee privacy being one counteracting force: these sorts of systems seem to work better the more they invade your privacy, so people will resist them.

  1. ^

    Small = Involving relatively minor changes in terms of e.g. matter manually moved.

  2. ^

    Subtle = Dependent on getting many "bits" right at a distance.

9 comments

Comments sorted by top scores.

comment by trevor (TrevorWiesinger) · 2023-03-06T02:14:54.082Z · LW(p) · GW(p)

I've been researching coordination explosions for three years now. The rabbit hole goes deep, and there's much more empirical research on near-term applications than there appears to be at first glance, and surprisingly little of it has anything to do with LLMs or soft takeoffs.

You can do research by yourself, in a group, or even go in bizarre and dangerous and terrible directions like Janus, where you dump your entire existence into various LLMs and see what happens; there's so much low-hanging fruit that it doesn't even matter where you go or what you do. This domain is basically the wild west of AI safety.

I'm not really comfortable talking about the details publicly on the internet, but what I can say is that there's so much uncharted territory with coordination explosions that almost any individual who goes in this general direction gets to collide with game-changing discoveries.

Replies from: MSRayne
comment by MSRayne · 2023-03-06T02:45:32.637Z · LW(p) · GW(p)

Are you comfortable talking about the details privately on the internet? I'd appreciate a DM. You've got my curiosity with the whole "juicy secrets" aura. Also I'm intending to do that "dump my entire existence into an LLM" thing at some point...

comment by Gordon Seidoh Worley (gworley) · 2023-03-06T23:41:54.115Z · LW(p) · GW(p)

I think this is worth exploring and seeing what the risks are here.

That said, I also take a bit of an outside view that coordination problems are unusually hard, and humans have been trying really hard to solve them for a long time. Although perhaps a bit naive on the inside view, the outside view says coordination problems are probably the last thing to be solved, if they are solved at all. In fact, if we die from AI, it's arguably because even with AI assistance we still couldn't figure out how to solve the coordination problems we cared about.

comment by Gunnar_Zarncke · 2023-03-07T00:56:16.470Z · LW(p) · GW(p)

It is somewhat of a tangent, but if better communication is one effect of more powerful AI, that suggests another way to measure AI capability gains: changes in the volume of (textual) information exchanged between people, the number of messages exchanged, or the number of contacts maintained.
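As a rough illustration, these proxies could be computed from an ordinary message log. The log format and field names below are assumptions made up for the sketch:

```python
# Rough sketch of the proposed capability proxies, computed from a
# hypothetical message log. The fields (sender, recipient, text, month)
# are assumed for illustration only.
from collections import defaultdict

log = [
    {"sender": "alice", "recipient": "bob",   "text": "draft attached", "month": "2023-02"},
    {"sender": "alice", "recipient": "carol", "text": "see notes",      "month": "2023-02"},
    {"sender": "bob",   "recipient": "alice", "text": "thanks, looks good", "month": "2023-03"},
]

volume = defaultdict(int)     # characters of text exchanged per month
messages = defaultdict(int)   # number of messages per month
contacts = defaultdict(set)   # distinct (sender, recipient) pairs per month

for msg in log:
    m = msg["month"]
    volume[m] += len(msg["text"])
    messages[m] += 1
    contacts[m].add((msg["sender"], msg["recipient"]))

for m in sorted(volume):
    print(m, volume[m], messages[m], len(contacts[m]))
```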

comment by Kaj_Sotala · 2023-03-06T10:47:19.828Z · LW(p) · GW(p)

Relatedly, I have an old post named "Intelligence Explosion vs. Co-operative Explosion [LW · GW]", though it's more about the argument that AGIs might overpower humanity with a superhuman ability to cooperate even if they can't become superhumanly intelligent.

comment by Roko · 2023-03-06T22:22:14.456Z · LW(p) · GW(p)

or hacking

Hacking can probably be done to a superhuman level using self-play, since code is ultimately something like chess: it can all be simulated.

comment by cwillu (carey-underwood) · 2023-03-07T15:44:24.262Z · LW(p) · GW(p)

Please don't break title norms to optimize for attention.

Replies from: kave
comment by kave · 2024-07-02T00:17:56.985Z · LW(p) · GW(p)

Mod note: I removed the emoji from the title of this post.

comment by baturinsky · 2023-03-06T05:58:15.973Z · LW(p) · GW(p)

I was considering coordination improvement from another angle: making a flexible network that can be shaped by any user to their will.

Imagine the Semantic Web, but anyone can add any triples and documents at will. And anyone (or anything, if it's done by an algorithm) can sign any triple or document, confirming its validity with their own reputation.

It's up to the reader to decide which signatures they recognize as credible, and in which cases.

The server(s) just accept new data and allow running arbitrary queries on the data they have, with some protection from spam and DDoS, of course. So clients can interpret and filter data however they like without needing any changes to the server architecture.

This network has unlimited flexibility, at the cost of clients having to query more data and process it in more complex ways. So it's possible to reproduce any form of communication on it (forum, wiki, blog, chat, etc.) with just some triples like "tr:inReplyTo".

Or make something that was not possible before. Imagine a chatroom where a million people are talking at once, but each client sees only the posts that are important enough for them: because a post has enough upvotes (from people the client trusts), is from someone the client respects or knows, or is maybe even a compound comment distilled from many similar comments.
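A minimal sketch of this kind of network, under heavy simplifying assumptions: the store is in-memory, "signing" is just attaching a signer name rather than real cryptography, and the class and method names (Server, Client, add, query, view) are made up for illustration:

```python
# Toy sketch of the open triple store described above: anyone can add
# triples, anyone can "sign" them (here just a signer name, not real
# cryptography), and each client decides which signers it trusts.
from dataclasses import dataclass, field


@dataclass
class Triple:
    subject: str
    predicate: str
    obj: str
    signers: set[str] = field(default_factory=set)


class Server:
    """Accepts any triple and answers arbitrary pattern queries."""
    def __init__(self):
        self.triples: list[Triple] = []

    def add(self, subject: str, predicate: str, obj: str, signer: str) -> None:
        self.triples.append(Triple(subject, predicate, obj, {signer}))

    def query(self, predicate: str) -> list[Triple]:
        return [t for t in self.triples if t.predicate == predicate]


class Client:
    """Filters the raw data by its own notion of credibility."""
    def __init__(self, trusted: set[str]):
        self.trusted = trusted

    def view(self, triples: list[Triple]) -> list[Triple]:
        return [t for t in triples if t.signers & self.trusted]


server = Server()
server.add("post:42", "tr:inReplyTo", "post:41", signer="alice")
server.add("post:43", "tr:inReplyTo", "post:41", signer="spammer")

reader = Client(trusted={"alice"})
for t in reader.view(server.query("tr:inReplyTo")):
    print(t.subject, "->", t.obj)
```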