Polluting the agentic commons

post by hamandcheese (samuel-hammond) · 2023-04-13T17:42:05.619Z · 4 comments

This is a link post for https://www.secondbest.ca/p/polluting-the-agentic-commons

[Header image: a computer virus becomes intelligent, graphic art]

The risks from developing superintelligent AI are potentially existential, though hard to visualize. A better approach to communicating the risk is to illustrate the dangers from existing systems, and to then imagine how those dangers will increase as AIs become steadily more capable.

An AGI could spontaneously develop harmful goals and subgoals, or a human could simply ask an obedient AGI to do things that are harmful. This second bucket requires far fewer assumptions about arcane results in decision theory. And even if those arcane results are where the real x-risk lies, it’s easier to build an intuition for the risks by working from the bottom up, since scenarios in which “AGI gets an upgrade and escapes the lab” require a conceptual leap that’s easy to get distracted by.

After all, it didn't take long after GPT models became widely available for someone to build an autonomous agent literally called "ChaosGPT." We don't need to speculate about emergent utility functions. People will choose to unleash harmful agents just because.

The current trend is for these models to become smaller and smaller to the point where they will soon run locally on a smartphone. Once a GPT agent is compact enough to run on a personal computer without you noticing, it’s inevitable that it will be used to wreak havoc just like any other kind of malware. Let’s call these evil GPTs “mal-bots.” Now imagine the following:

A spurned lover gives a mal-bot a profile of his ex: photos, name, location, worst fears, food allergies, etc.

The bot then has one mission: find ways to terrorize the ex while avoiding detection and making copies of itself.

The ex's phone rings off the hook with vaguely synthetic voices that threaten to kill her dog. Food containing allergens is constantly delivered to her home. Dozens of credit card applications are filed in her name.

She moves and changes her number, but the mal-bot deduces her new location from the contrails in an Instagram selfie.

The man eventually regrets what he's done but can't make it stop. The mal-bot is quietly running on thousands of computers around the world, coordinating with its own copies through cryptic posts from an anon account.

Seeking revenge, she gives in and downloads a mal-bot of her own.

Now solve for the equilibrium.

This scenario doesn't require superintelligence. These capabilities mostly already exist. GPTs can serve as an interface to virtually any program or operating system. That means an autonomous mal-bot, packaged to avoid detection, could take control of your computer while you sleep and do things like search for sensitive files, tasks that would normally require human-like common-sense reasoning.

It only takes a small minority of people using AI in this way to create what one could call an “agentic tragedy of the commons.” The next stage is to simply imagine how much worse it will get as the models become more and more capable and harder and harder to contain.

The concept of a security mindset doesn't do it justice.

4 comments


comment by Jackson Wagner · 2023-04-13T20:01:55.052Z

First couple of steps towards solving for the equilibrium:

  • There certainly seem to be plenty of ways to use such bots to cause harm, whether by running scams for personal enrichment, trying to achieve various ideological/political/social goals, or just causing havoc and harm for its own sake.
    • Naturally, people will be most motivated to run scams, intermediately motivated to pursue political/ideological/social goals, and least motivated (though plenty of people will still do it) to just cause chaos for its own sake.
    • Things that might make "causing chaos/harm for its own sake" much more popular than it is today: maybe AI makes it much easier/cheaper to do?  (seems plausible)  Maybe cheapness/easiness isn't the bottleneck, and it's actually about how likely you are to get caught?  Maybe AI helps with that too, though?
    • Anyways, regardless of whether people are causing chaos for its own sake, I expect an increase in scams and, perhaps just as destructively, an increase in spam across all online platforms that is increasingly difficult to differentiate from genuine human conversation / activity.  This will erode social trust somewhat, although it's hard for me to tell how impactful this might be.  See Astral Codex Ten's "Mostly Skeptical Thoughts on the Chatbot Propaganda Apocalypse" for more detail on this.
  • In general it seems pretty hard to solve for the equilibrium here, since human social interaction online, human culture, and the overall "agent landscape" of the economy and society are all very complicated!  It definitely seems like there will be some "pollution of the agentic commons", and then obviously we will try to fight back with some mix of cultural adaptation, developing defensive technologies that try to screen out bots, and enacting new laws penalizing new kinds of scams / exploits / etc.
  • If the "chatbot apocalypse" problems get REALLY bad, this could actually have some upside from the perspective of AI notkilleveryoneism -- one plausible sequence of events might go like this:
    • The language models provided by OpenAI, Google, etc, are carefully RLHF'ed and monitored to prevent people from ever using them to create a bot that says racist things, or scams people out of their crypto, or makes pornographic images, or does anything else that seems unsavory or "malbot"-y.
    • To get around these restrictions, people start using lower-quality open-source AI for those unpopular / taboo / unsavory / destructive purposes.  But people mostly still use the OpenAI / Google corporate APIs for most normal AI applications, since those AIs are of higher quality.
    • If the chatbotpocalypse gets bad, government starts restricting the use of open-source AI, perhaps via an escalating series of increasingly draconian measures:
      • Ban the use of certain categories of malbots -- this seems like a straightforwardly good law that we should have today.
      • Start taking down certain tools, websites, etc, used to coordinate and develop AI malbots.  Start arresting malbot developers.  Similar to how governments today go after darknet marketplaces like Silk Road.
      • Ban any use of any open-source AI, for any purpose.  This would annoy a lot of people and destroy a lot of useful value in the crossfire, but it might be deemed necessary if the chatbotpocalypse gets bad enough.  On the bright side, this might be great from an AI notkilleveryoneism perspective, since it would centralize AI capabilities in a few systems with controlled access and oversight.  And it would set a precedent for even stronger restrictions in the future.
      • Make people criminally liable even if there's an open-source AI program running on a computer that they own, which they didn't know about.  (Eg, if I rent a server from Amazon and run open-source AI on it, then I could get arrested but Amazon would be liable as well.  Or if I'm just a perfectly average joe minding my own business and my laptop gets hacked by an AI because I didn't download the latest windows security update, then I could get arrested.)  This would be ridiculously draconian by modern standards, but again it's something I could imagine happening if we were absolutely desperate to preserve the fabric of society against some kind of unstoppable malbot onslaught.
  • To be clear, I don't expect the chatbotpocalypse to be anywhere near bad enough to justify the last two draconian bullet points; I expect "ban certain categories of malbots" and "start arresting malbot developers" to be good enough that society muddles through.
    • Censorship-heavy countries like China might be more eager to ban open-source AI than the US, though.  Similar to how China is more hostile to cryptocurrency than the US.
  • This whole time, I have just been thinking about scams and other kinds of "malbots".  But I think there are probably lots and lots of other ways that less-evil bots could end up "polluting the agentic commons".
    • For instance if bots make it easier to file lawsuits, then maybe the court system gets jammed up with tons of new lawsuits.  And lots of other societal institutions where you are supposed to put in a lot of effort on some task as a costly signal that you are serious enough / smart enough / committed enough, might break when AI makes those tasks much easier to churn out.
    • As described in that Astral Codex Ten post, you could have a kind of soft, social botpocalypse where more and more text on the internet is GPT-generated, such that everyone becomes distrustful, suspecting that random tweets / articles / blog posts / etc were written by AI.  Maybe this has devastating impacts on some particular area of society, like online dating, even if the effect is mild in most places.
    • Maybe having AIs that can autonomously take economic actions (buying stuff online, trading crypto, running small/simple online businesses, playing online poker, whatever) will somehow have a devastating impact on society??  For instance by creating brutally efficient competitive markets for things that are not yet "financialized" and that we don't even think of as markets.
      • Like maybe I start using an AI that spends all day seeking out random freebies and coupons (like "get $300 by opening a checking account at our bank"), and it's so good at getting them that the returns from "scamming credit card sign-up bonuses and getting all my food from Blue Apron free trials" beat the stock market, so I devote most of my savings towards scamming corporate freebies instead of investing.
      • The only consequence of the above idea would be that eventually corporations would have to stop offering such freebies, which would be totally fine.  But it's an example of something that's currently not an efficient market being turned into one.  Maybe this sort of thing could have more devastating impacts elsewhere, eg, if everyone starts using similar tools to sign up for maximal government benefits while minimizing their taxes using weird tax-evasion tricks.
  • Overall, I am skeptical that any individual malbot idea will be too devastating (since it seems like we could neuter most of them using some basic laws + technology + cultural adaptation), but also the space of potential bots is so vast that it seems very hard to solve for the equilibrium and figure out what that process of cultural/legal/technological adaptation will look like.
comment by Raemon · 2023-04-13T17:41:27.936Z

"The current trend is for these models to become smaller and smaller to the point where they will soon run locally on a smartphone."

I agree they will become smaller, but I would guess this is not the current trend. Why do you think that?

comment by the gears to ascension (lahwran) · 2023-04-13T18:15:00.389Z

I agree with OP but will not explain further.

comment by hamandcheese (samuel-hammond) · 2023-04-13T18:31:34.105Z

You can read about quantization and the compression of GPT models here:

https://beuke.org/quantization/
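
The gist, as a rough back-of-envelope sketch: quantization stores each weight in fewer bits, which shrinks the model's memory footprint roughly in proportion. The parameter count and matrix shape below are assumptions for illustration, not figures from that article.

```python
import numpy as np

# Memory footprint of a hypothetical 7B-parameter model at different
# weight precisions (7B is an assumed, illustrative figure).
params = 7_000_000_000
for name, bits in [("fp32", 32), ("fp16", 16), ("int8", 8), ("int4", 4)]:
    gigabytes = params * bits / 8 / 1e9
    print(f"{name}: ~{gigabytes:.1f} GB")  # int4 gets 7B weights down to ~3.5 GB

# Minimal symmetric int8 quantization of a single weight matrix:
# keep the weights as int8 plus one float scale, dequantize at inference.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(4096, 4096)).astype(np.float32)

scale = np.abs(w).max() / 127.0                 # map the largest |weight| to 127
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_restored = w_int8.astype(np.float32) * scale  # approximate original weights

print(f"mean abs error: {np.abs(w - w_restored).mean():.2e}")
```

At int4, the weights of a 7B-parameter model fit in roughly 3.5 GB, which is why running such models locally on a phone is at least plausible.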