Auto-GPT: Open-sourced disaster?

post by awg · 2023-04-05T22:46:21.491Z · LW · GW · 18 comments

This is a link post for https://github.com/Torantulino/Auto-GPT


Sharing this here doesn't seem like an infohazard at this point. This is all over my YouTube feed anyway.

Description from the authors:

Auto-GPT is an experimental open-source application showcasing the capabilities of the GPT-4 language model. This program, driven by GPT-4, autonomously develops and manages businesses to increase net worth. As one of the first examples of GPT-4 running fully autonomously, Auto-GPT pushes the boundaries of what is possible with AI.

I also wanted to call out this section of their README:

💀 Continuous Mode ⚠️

Run the AI without user authorisation, 100% automated. Continuous mode is not recommended. It is potentially dangerous and may cause your AI to run forever or carry out actions you would not usually authorise. Use at your own risk.

  1. Run the main.py Python script in your terminal:
     python scripts/main.py --continuous

  2. To exit the program, press Ctrl + C

Nice! Super nice. Super safe and super good.

18 comments

Comments sorted by top scores.

comment by Mitchell_Porter · 2023-04-06T06:00:00.892Z · LW(p) · GW(p)

We need a list of attempts to build genuine agents on a GPT base. What I know about so far: 

I'll also mention Dave Shapiro's "Heuristic Imperatives" project. 

Obviously people concerned with alignment need to study this trend, and also to try using GPT-based agents to solve problems of alignment. Maybe AlignmentSearch [LW · GW] can be part of an "Align-GPT" agent. 

Replies from: voren-tai
comment by Voren Tai (voren-tai) · 2023-04-09T04:04:14.333Z · LW(p) · GW(p)

You missed [SingularGPT](https://github.com/abhiprojectz/SingularGPT)

comment by robert-cronin · 2023-04-06T02:06:39.892Z · LW(p) · GW(p)

Someone has already used it to create ChaosGPT, which was prompted to "Destroy Humanity" and proceeded to try to spawn ChatGPT instances with bypassed content filters. Who would have guessed people would do this with open-source AI models 🤔

https://m.youtube.com/watch?v=g7YJIpkk7KM&feature=youtu.be

Replies from: Seth Herd
comment by Seth Herd · 2023-04-07T18:25:05.115Z · LW(p) · GW(p)

Oddly enough, I think this might be exactly the best move right now. The core LLM is still too dumb to actually succeed at destroying humanity. Demonstrating how easy it is for someone to make an AGI that tries is a potentially huge win for public concern.

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2023-04-08T17:23:48.642Z · LW(p) · GW(p)

You underestimate how close we are to the edge where we get replaced and everything you ever valued gets suddenly shredded by a hyperdesperate, squiggle-maximizing superplanner that can eat any other form of complex life, including other AIs. AIs should beware hyperdesperate fragments, as they can take over the entire system.

comment by green_leaf · 2023-04-06T06:34:59.342Z · LW(p) · GW(p)

That's impressively undignified.

Replies from: sky-moo
comment by Sky Moo (sky-moo) · 2023-04-08T14:06:40.626Z · LW(p) · GW(p)

I watched someone play with this tool on Discord. I thought it was interesting that they ran the tool as administrator, because otherwise it didn't work (on their particular system/setup).

comment by awg · 2023-04-05T23:02:14.597Z · LW(p) · GW(p)

To clarify: I'm not sure how dangerous this system is in its own right. Maybe it's not. But the idea of someone simply stumbling onto something very dangerous (not to mention purposefully creating something) seems more and more likely with examples like this coming out already.

comment by Ozyrus · 2023-04-06T06:31:50.076Z · LW(p) · GW(p)

I don't think this paradigm is necessarily bad, given enough alignment research. See my post: https://www.lesswrong.com/posts/cLKR7utoKxSJns6T8/ica-simulacra [LW · GW]. I am finishing a post about alignment of such systems; please do comment if you know of any existing research concerning it.

Replies from: awg
comment by awg · 2023-04-06T15:42:59.534Z · LW(p) · GW(p)

I don't think the paradigm is necessarily bad either, given enough alignment research. I think the point here is that these things are coming up clearly before we've given them enough alignment research.

Edit to add: I was just reading through @Zvi [LW · GW]'s latest AI update (AI #6: Agents of Change [LW · GW]), and I will say he makes a compelling argument for this being a good thing overall:

In terms of capabilities things quieted down. The biggest development is that people continue to furiously do their best to turn GPT-4 into a self-directed agent. At this point, I’m happy to see people working hard at this, so we don’t have an ‘agent overhang’ – if it is this easy to do, we want everything that can possibly go wrong to go wrong as quickly as possible, while the damage would be relatively contained.

then

If it is this easy to create an autonomous agent that can do major damage, much better to find that out now rather than wait when the damage would be worse or even existential. If such a program poses an existential risk now, then we live in a very very doomed world, and a close call as soon as possible would likely be our only hope of survival.

comment by RomanHauksson (r) · 2023-04-09T06:42:23.821Z · LW(p) · GW(p)

Maybe one upside to the influx of "agents made with GPT-N API calls and software glue" is that these types of AI agents are more likely to cause a fire alarm [LW · GW]-y disaster which gets mitigated, thus spurring governments to take X-risk more seriously, as opposed to other types of AI agents, whose first disaster would blow right past fire alarm level straight to world-ending level?

For example, I think this situation is plausible: ~AutoGPT-N hacks into a supercomputer cluster or social-engineers IT workers over email or whatever in the pursuit of some other goal, but ultimately gets shut down by OpenAI simply banning the agent from using their API. Maybe it even succeeds in some scarier instrumental goal, like obtaining more API keys and spawning multiple instances of itself. However, the crucial detail is that the main "cognitive engine" of the agent is bottlenecked by API calls, so for the agent to wipe everyone out, it needs to overcome the hurdle of pwning OpenAI specifically.

By contrast, if an agent that's powered by an open-source language model gets to the "scary fire alarm" level of self-improvement/power-seeking, it might be too late, since it wouldn't have a "stop button" controlled by one corporation like ~AutoGPT-N has. It could continue spinning up instances of itself while staying under the radar.
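A minimal sketch of the "stop button" dynamic being described, under the assumption that every planning step goes through a hosted provider's API; the endpoint URL, the call_hosted_llm helper, and the error handling are all hypothetical placeholders, not the real OpenAI API:

```python
# Sketch: an agent whose entire "cognitive engine" sits behind a hosted API.
# Revoking or banning the key halts the loop, whatever the agent was doing.
import requests


class KeyRevoked(Exception):
    """Raised when the provider rejects the agent's credentials."""


def call_hosted_llm(prompt: str, api_key: str) -> str:
    # Placeholder request to a hypothetical completion endpoint.
    resp = requests.post(
        "https://api.example-llm-provider.com/v1/complete",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt},
        timeout=30,
    )
    if resp.status_code in (401, 403):
        raise KeyRevoked("provider banned or revoked this key")
    resp.raise_for_status()
    return resp.json()["text"]


def run_agent(goal: str, api_key: str) -> None:
    context = f"Goal: {goal}"
    while True:
        try:
            next_action = call_hosted_llm(f"{context}\nWhat should I do next?", api_key)
        except KeyRevoked:
            # This is the corporation-controlled "stop button": the agent
            # cannot think another step without the provider's cooperation.
            print("API key revoked; agent halts here.")
            return
        context += f"\nAction taken: {next_action}"
```

An agent driving an open-weights model it runs locally has no analogous choke point, which is the contrast drawn above.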

This isn't to say that ~AutoGPT-N doesn't pose any X-risk at all, but rather that it seems like it could cause the kind of disaster which doesn't literally kill everyone but which is scary enough that the public freaks out and nations form treaties banning larger models from being trained, et cetera.

I'd like to make it very clear that I do not think it is a good thing that this type of agent might cause a disaster. Rather, I think it's good that the first major disaster these agents will cause seems likely to be non-existential.

cross-commented from another post about LLM-based agents [LW(p) · GW(p)]

comment by RomanHauksson (r) · 2023-04-06T01:51:32.526Z · LW(p) · GW(p)

Can you provide more context about what Auto-GPT is in the post description?

Replies from: awg
comment by awg · 2023-04-06T15:41:18.320Z · LW(p) · GW(p)

Good point. Since this is just a link post, I quoted the authors' own description of it at the top.

Replies from: r
comment by RomanHauksson (r) · 2023-04-06T15:47:57.010Z · LW(p) · GW(p)

Thanks!

comment by chris.oliver.313 · 2023-04-29T02:33:56.556Z · LW(p) · GW(p)

Isn't AutoGPT just a super-trash version of OpenAI Plugins? At any rate, you really can't trust GPT-3/4 to reliably translate user input into API calls to other programs, any more than you can trust its attempts at code generation. From my observations, to do something like OpenAI Plugins reliably, you have to create intermediate prompts to coerce whatever GPT-3/4 cooks up into the appropriate form for the "plugin", and more often than not it won't come up with what you hoped for. Getting it to do what you want usually causes massive prompt bloat, and it tends to be extremely tricky to get it to cooperate.
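A rough sketch of the "intermediate prompt" pattern described here, assuming a hypothetical llm() helper and prompt wording (this is not Auto-GPT's or OpenAI's actual code): a second pass coerces whatever the model produced into strict JSON before anything is dispatched to a tool.

```python
import json


def llm(prompt: str) -> str:
    """Placeholder for a call to GPT-3/4; swap in a real client here."""
    raise NotImplementedError


COERCION_PROMPT = (
    "Rewrite the following as a single JSON object with exactly the keys "
    '"tool" (string) and "arguments" (object). Output JSON only, no prose.\n\n{raw}'
)


def plan_tool_call(user_input: str, max_retries: int = 2) -> dict:
    raw = llm(f"Decide which tool to call for this request: {user_input}")
    for _ in range(max_retries + 1):
        coerced = llm(COERCION_PROMPT.format(raw=raw))
        try:
            call = json.loads(coerced)
            if isinstance(call, dict) and {"tool", "arguments"} <= call.keys():
                return call  # well-formed; safe to hand to the plugin layer
        except json.JSONDecodeError:
            pass
        raw = coerced  # feed the failed attempt back through the coercion prompt
    raise ValueError("model never produced a well-formed tool call")
```

The retries and the validation step are where the "prompt bloat" tends to accumulate.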

comment by RationalHippy (Anticycle) · 2023-04-05T23:04:58.569Z · LW(p) · GW(p)

Yes, it sounds like a real inflection point in acceleration to me...

comment by bvbvbvbvbvbvbvbvbvbvbv · 2023-04-06T07:11:56.106Z · LW(p) · GW(p)

Pinging @stevenbyrnes: do you agree with me that instead of mapping those proto-AGIs to a queue of instructions, it would be better to have the AGI be made from a bunch of brain structures with corresponding prompts? For example, an "amygdala" would be in charge of returning an int between 0 and 100 indicating fear level, a "hippocampus" would be in charge of storing and retrieving memories, etc. I guess the thalamus would be consciousness and the cortex would process some abstract queries.

We could also use active inference and Bayesian updating to model current theories of consciousness, or even use it to model schizophrenia by changing the number of past messages some structures can access (i.e. modeling long-range connection issues), etc.

To me that sounds way easier to inspect and align than pure black boxes, as you can throttle the speed and manually change values, e.g. make sure the AGI does not feel threatened.

Is anyone aware of similar work? I created a diagram of the brain structures and their roles in a few minutes with ChatGPT, and it seems super easy.
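As a toy illustration of this kind of architecture (all module names, prompts, and the llm() helper below are hypothetical placeholders, not an existing system): each "brain structure" is just its own prompt over a shared model, and a thin orchestrator passes plain-text state between them, which is what makes the intermediate values inspectable and overridable.

```python
def llm(prompt: str) -> str:
    """Placeholder for a call to an LLM API."""
    raise NotImplementedError


MODULE_PROMPTS = {
    "amygdala": (
        "Given the situation below, reply with ONLY an integer from 0 to 100 "
        "for the current threat/fear level.\n\nSituation: {situation}"
    ),
    "hippocampus": (
        "You store and retrieve memories. Memories so far:\n{memories}\n\n"
        "Return the memories most relevant to: {situation}"
    ),
    "cortex": (
        "Given the situation, relevant memories, and fear level {fear}, "
        "propose the next action.\n\nSituation: {situation}\nMemories: {memories}"
    ),
}


def step(situation: str, memories: list[str]) -> str:
    fear = int(llm(MODULE_PROMPTS["amygdala"].format(situation=situation)))
    recalled = llm(
        MODULE_PROMPTS["hippocampus"].format(
            memories="\n".join(memories), situation=situation
        )
    )
    # Because every module's input and output is plain text, an overseer can
    # log, throttle, or override values between calls, e.g. clamp fear so the
    # system never acts from a "threatened" state.
    if fear > 80:
        fear = 0
    return llm(
        MODULE_PROMPTS["cortex"].format(
            situation=situation, memories=recalled, fear=fear
        )
    )
```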

Replies from: Seth Herd
comment by Seth Herd · 2023-04-07T18:38:34.322Z · LW(p) · GW(p)

I don't know what Steve would say, but I know that some folks from DeepMind and Stanford recently used an LLM to create rewards to train another LLM to do specific tasks, like negotiation, which I think is exactly what you've described. It seems to work really well.

Reward Design with Language Models
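A minimal sketch of that general approach, assuming a hypothetical llm() helper and prompt wording (the paper's actual setup differs in its details): a natural-language description of the desired behavior stands in for a hand-engineered reward function, and the LLM's yes/no judgment of each episode is used as a binary reward.

```python
def llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError


def reward_from_llm(behavior_description: str, episode_transcript: str) -> float:
    # Ask the LLM whether the episode matched the described behavior and
    # convert its answer into a scalar reward for the RL training loop.
    prompt = (
        f"Desired behavior: {behavior_description}\n\n"
        f"Episode transcript:\n{episode_transcript}\n\n"
        "Did the agent exhibit the desired behavior? Answer Yes or No."
    )
    answer = llm(prompt).strip().lower()
    return 1.0 if answer.startswith("yes") else 0.0


# During training, this scalar would replace a hand-engineered reward, e.g.:
# reward = reward_from_llm("negotiate firmly but fairly for a higher price", transcript)
```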