Should we openly talk about explicit use cases for AutoGPT?

post by ChristianKl · 2023-04-20T23:44:02.162Z · LW · GW · No comments

This is a question post.


I think many people were thinking about AutoGPT-type agents for a while before anyone built and published the code on GitHub. Being conservative about discussing those agents may have delayed their development by a month or two relative to a world where we had talked about them more openly.

Right now I'm unsure whether or not to talk about concrete near-term uses. Is talking about applications, and thus motivating capabilities deployment, bad? Or is it good because it makes us understand the landscape better?

Answers

answer by Shmi (shminux) · 2023-04-21T00:22:37.668Z · LW(p) · GW(p)

I think whatever gets talked about on this forum is a drop in the bucket compared to everything else published around various GPT uses. It comes across as rather conceited when one thinks that some discussion here can predictably move the needle in a specific direction.

comment by ChristianKl · 2023-04-21T02:23:21.044Z · LW(p) · GW(p)

Generally, there are a lot of open projects that could be pursued with GPT. Speaking about the ideas for particular projects increases the chances that someone will put in the effort to make a project happen, when that conversation happens here or in another place where capable people read it.

Sooner or later, most ideas will likely be found by someone else as well, but that process is not immediate.

answer by Max H · 2023-04-21T00:41:55.149Z · LW(p) · GW(p)

I would definitely avoid publicly suggesting improvements, or worse, submitting pull requests to the Auto-GPT repo or any related projects.

But talking about new use cases, experimenting with them, and even using them yourself for mundane utility seems pretty harmless (for now; though I wouldn't give one access to anything really important), and probably even net-positive, in terms of raising awareness about the future danger, and for avoiding an agent overhang [LW · GW].

Plausibly, having really powerful and useful GPT-4-based Auto-GPT agents running around everywhere might even make OpenAI in particular hesitant or slower to start building GPT-n+1. "Agents use a lot of tokens; why bother training something even smarter that will allow people to use fewer tokens, when we already have a money-printing machine with GPT-4?" 

This is a bit too galaxy-brained to make me go out and start hacking on Auto-GPT myself, but slowing new foundation models does actually seem more important than slowing development of the glue code that makes those models most useful (and most dangerous). 

The code for Auto-GPT itself is very easy to write (mostly written by a few developers as a side project, I think?) compared to the effort required to train a SotA foundation model (literally hundreds of millions of dollars worth of compute and developer time). 

Even if danger ultimately comes from plugging a powerful enough foundation model (which might be safe or "aligned [LW · GW]" on its own) into an Auto-GPT-style agent, the code and idea for Auto-GPT are very much not the bottleneck here. Probably better to have the code around, ready for evals [LW · GW] and other hopefully-safe forms of private red-teaming, than to have it developed after the next-gen foundation models are widely available.

comment by ChristianKl · 2023-04-21T02:30:43.952Z · LW(p) · GW(p)

The code for Auto-GPT itself is very easy to write (mostly written by a few developers as a side project, I think?) 

The fact that the current code is very easy to write does not in itself suggest that you wouldn't get something more powerful by spending more effort.

In general, LLMs can be fine-tuned for specific applications. Currently, Auto-GPT isn't benefiting from fine-tuning, but it could be, and such efforts would take more cognitive work.

No comments
