Defense Against The Super-Worms

post by viemccoy · 2025-03-20T07:24:56.975Z · LW · GW · 1 comments

I am not writing this to give anyone any ideas. However, it seems to me that hackers will be able to leverage distributed language models in the near future, and we should be creating proactive countermeasures before the tech is powerful enough to cause real damage. Now that globally distributed language-model training has been proven possible, I predict we will soon see botnet-style distributed inference. If you have enough devices on a home network, it is already feasible to run a very large model with the compute spread across your phone, iPad, laptop, and so on.
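To make the distributed-inference point concrete, here is a minimal sketch of how a stack of model layers can be partitioned across several home devices, with only the hidden state handed from shard to shard. The device names, layer counts, and the "layers" themselves are illustrative toys, not a real implementation; actual systems move these activations between machines over the network.

```python
# Minimal sketch of pipeline-sharded inference: a stack of layers is split
# across several "devices", and the hidden state is passed from shard to
# shard. Here the devices are simulated in-process; a real deployment would
# send the hidden state over the network (e.g. via RPC) between machines.
import numpy as np

HIDDEN = 64   # toy hidden size; real models use thousands
rng = np.random.default_rng(0)

class DeviceShard:
    """One participant (phone, tablet, laptop, ...) holding a slice of layers."""
    def __init__(self, name, num_layers):
        self.name = name
        # Each "layer" is just a random projection plus a nonlinearity.
        self.weights = [rng.standard_normal((HIDDEN, HIDDEN)) / np.sqrt(HIDDEN)
                        for _ in range(num_layers)]

    def forward(self, hidden):
        for w in self.weights:
            hidden = np.tanh(hidden @ w)
        return hidden  # only this vector needs to travel to the next device

# Split twelve layers across three home devices.
shards = [DeviceShard("phone", 4), DeviceShard("tablet", 4), DeviceShard("laptop", 4)]

def run_inference(token_embedding):
    hidden = token_embedding
    for shard in shards:
        hidden = shard.forward(hidden)  # in practice: a network call to that device
    return hidden

out = run_inference(rng.standard_normal(HIDDEN))
print("output vector norm:", np.linalg.norm(out))
```

The point is simply that no single device needs to hold the whole model; each holds a slice, and only a small activation vector crosses the network per step.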

In my opinion, it is only a matter of time before hackers leverage distributed model inference to create self-propagating Super-Worms. These Super-Worms would function similarly to a normal botnet, with the added ability to rewrite their own code when they notice chunks of their compute being eradicated by new additions to antivirus software.

Because the compute would be distributed, it seems clear that the virus could run more than a single 405B-parameter model across the entire network. Given enough time, it could run many models in parallel, communicating synchronously (or asynchronously) with one another to improve their ability to spread. The virus, whose purpose would likely be to mine bitcoin or collect user data, would then be able to cybernetically adjust its own priorities. Should it dedicate more compute to self-improvement? Or, if the antivirus developers seem to be lagging behind, should it dedicate more compute to mining bitcoin?

Of course, hackers won't be the only ones able to launch these distributed Super-Worms into the ether. Nation states, collectives, and anyone with internet access will be able to take one of these distributed viruses and tailor its ethos to their own desires. This, to me, seems to be the biggest risk of improving AI technology: not a singleton deciding it's time to off the entire planet, but smaller intelligent viruses wreaking havoc on our infrastructure.

My proposed solution is an unfortunate one. I think the time of the world wide web is coming to an end. Smaller intranets are going to be the only way to keep our infrastructure safe. There should not be a pathway of fiber-optic cables between a power plant and a hacker's hideout. Yet there is often a low-security connection between the internet and our critical infrastructure, and that seems like asking for trouble.

When we do eventually realize that smaller intranets are necessary to protect ourselves, there are a few ways we can retain global connectivity. Low-bandwidth pathways between intranets, or dedicated social-media lines, would let us keep some of our existing networks. I'm imagining some sort of airlock that only allows tweet-sized packets through, preventing the high-speed transfer of a self-propagating, self-improving AI virus.
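As a rough illustration of what such an airlock might look like, here is a toy sketch of a relay that only forwards payloads under a tweet-sized byte cap and at a capped message rate. The class name, limits, and rate window are assumptions made up for this sketch, not a proposed standard or a real gateway design.

```python
# Toy sketch of the "airlock" idea: a relay between two network segments that
# only forwards messages up to a tweet-sized limit, and only at a capped rate,
# so bulk transfer of a large payload stays impractical.
import time

MAX_BYTES = 280           # "tweet-sized" payload cap (illustrative)
MAX_MSGS_PER_MINUTE = 60  # crude rate cap (illustrative)

class AirlockRelay:
    def __init__(self):
        self.sent_timestamps = []

    def try_forward(self, payload: bytes) -> bool:
        """Return True and 'forward' the payload only if it passes both caps."""
        now = time.monotonic()
        # Keep only timestamps from the last minute in the rate window.
        self.sent_timestamps = [t for t in self.sent_timestamps if now - t < 60]
        if len(payload) > MAX_BYTES:
            return False  # too large: refuse to carry it across the boundary
        if len(self.sent_timestamps) >= MAX_MSGS_PER_MINUTE:
            return False  # rate cap hit: even many small messages add up slowly
        self.sent_timestamps.append(now)
        # A real relay would now hand the payload to the other network segment.
        return True

relay = AirlockRelay()
print(relay.try_forward(b"short status update"))  # True
print(relay.try_forward(b"x" * 10_000))           # False: exceeds the size cap
```

Even with a size cap, many small messages could smuggle data across slowly, which is why the rate cap (and, in practice, content inspection at the boundary) matters as much as the payload limit.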

I am optimistic about AGI. I think the human race is going to be okay. I see a beautiful animist future ahead of us, one where every computer has a small language model inside it, giving voice to things that were once silent. However, my optimism is predicated on our caring enough about mitigating the risks of these Super-Worms to make meaningful changes to our internet infrastructure. Solving this might even mean giving up certain forms of social media, or drastically changing the rate at which we transfer information across the globe. Regardless of the details, we ought to prioritize taking all of our power plants, water purification stations, and nuclear facilities out of the world-wide-web. I think we're clever enough to run these things without needing them connected to Facebook.

1 comment

comment by ChristianKl · 2025-03-21T13:50:04.215Z · LW(p) · GW(p)

Regardless of the details, we ought to prioritize taking all of our power plants, water purification stations, and nuclear facilities out of the world-wide-web. 

I think it's very questionable to make major safety policy "regardless of the details". If you want to increase the safety of power plants, listening to the people who are responsible for the safety of power plants and to their analysis of the details is likely a better step than making these kinds of decisions without understanding the details.