What specific thing would you do with AI Alignment Research Assistant GPT?

post by quetzal_rainbow · 2023-01-08T19:24:26.221Z · LW · GW · 1 comment

This is a question post.


Why I think this question is important: I asked myself, "What would my AGI timelines be if some AI could summarize the Yudkowsky-Ngo debates on alignment difficulty [LW · GW] in a way that both participants endorse, such that everyone who reads the summary understands both positions, and the participants can check that understanding in conversation?" My semi-intuitive answer: "Five years tops, with two years as the modal prediction". A Debate Summarizer is not a very useful Alignment Assistant; it can't boost research by 10x. If someone told me that an Alignment Assistant had suggested an idea that sparked optimism at MIRI, I would think that we have exactly the amount of time it takes for someone to turn all the tools needed to build such an Alignment Assistant toward the creation of AGI (conditional on "this Alignment Assistant is not AGI itself").

I.e., if you bet on the assistance of narrow AI in alignment research, you should also bet on finding a solution quickly. A quick search for a solution requires an already existing plan. On the other hand, since we are talking about a narrow AI, you can't just ask it to "solve the alignment problem for me". You should ask specific questions, test pre-selected hypotheses, and prove well-defined statements. Therefore, I think that those who want to use Alignment Assistants should outline this set of specific things as soon as possible.

UPD: Thanks to janus for the link [LW · GW]; it helped me clarify what I would like to see as a perfect answer.

Let's suppose that your immediate answer is "brainstorming". Then the perfect specific answer is something like this:

"In my opinion, the most narrow bottleneck in AI alignment is the lack of ideas about X, so I will brainstorm about it with Alignment Assistant."

Extremely unrealistic example:

"I have The Grand Theory of Alignment, but it critically depends on Goldbach conjecture, so I will try to prove it."

My very (very) simplified model of Paul Christiano's answer:

"80% of alignment can be solved with ELK strategy, so we can make builder-breaker debate on (counter)examples for ELK between Assistant and ARC until we figure out the solution."

Yet another possible answer:

"I don't know! We are still in early exploratory mode, I can't imagine a specific thing. I just want to become as effectively smart as possible and see where it gets us."

Answers

answer by janus · 2023-01-08T22:04:55.825Z · LW(p) · GW(p)

Some tasks overlap with what I would want a hypothetical smart human assistant to do: Implement ML experiments and interfaces. Read over my hundreds of pages of drafts, connect ideas to relevant prior work, formalize what makes sense to be formalized and derive implications within the formalism, suggest and perform experiments to test hypotheses, write the ideas and findings up into legible posts. Summarize conversations and meetings [LW · GW]. Brainstorm and roleplay useful simulacra [LW(p) · GW(p)] with me.

However, I do not think that an Assistant character is the best or only interface AI can give us re augmenting alignment research. I want a neocortex prosthesis [? · GW] that has a more powerful imagination than I do, that knows vastly more, is better at math, writing, critical thinking, programming, etc., and which I can weave my thoughts and context into with high bandwidth and minimal overhead, and which is retargetable [LW · GW] to any intention I might have. Oh, and which can instantiate Assistants or any other simulacra that might come in handy for the situation.

Sorry if this isn't as specific as you asked for; there are several reasons I didn't describe e.g. the ML experiments I'd like an assistant to do more specifically, mostly laziness.

Also, if you haven't yet, you should check out Results from a survey on tool use and workflows in alignment research [LW · GW].

comment by David Rein (david-rein) · 2023-01-09T02:39:27.570Z · LW(p) · GW(p)

I think the issue with the more general “neocortex prosthesis” is that if AI safety/alignment researchers make this and start using it, every other AI capabilities person will also start using it.

Replies from: jacques-thibodeau, janus
comment by jacquesthibs (jacques-thibodeau) · 2023-01-09T02:53:17.715Z · LW(p) · GW(p)

While I'm not so sure about this, since GPT-3 came out in early 2020 and very few people have used it to its potential (though that number will certainly grow with ease-of-use tools like ChatGPT), your issue is much more likely if there is a publicly available demo [LW · GW] than if a few alignment researchers use it in private. That said, it's still very much something to be concerned and careful about.

comment by janus · 2023-01-09T02:53:44.006Z · LW(p) · GW(p)

Yup, that's a problem.

The problem also exists with regard to an alignment assistant, although it is exacerbated here because "retargetable" is part of the specification. On the other hand, unlike the AI Assistant paradigm, a neocortex prosthesis need not be optimized to be user-friendly, and will probably have a respectable learning curve, which makes instant/universal adoption by others less likely. There are also other steps that could be taken to mitigate risks (e.g. siloing information).

Second-order impacts are important to consider, but I also think it's productive to think separately about the problem of what systems would be the most useful to alignment researchers.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2023-01-09T03:16:23.447Z · LW(p) · GW(p)

More importantly though, there's a point you made that I think matters here: GPT is not an agent, and a lot of AI risk arguments don't work without agents.

One other point to keep in mind is that for the most part, capabilities people will probably create better AIs no matter what we do, so there isn't much control here.

I think that we don't have much choice in this matter. Automated research is the only way we can even reasonably solve the alignment problem on short timelines.

Replies from: janus, quetzal_rainbow
comment by janus · 2023-01-09T04:13:31.280Z · LW(p) · GW(p)

I think the concern expressed here is that the neocortex prosthesis could be used by capabilities researchers to do capabilities research more effectively, rather than the system being directly a dangerous agent.

comment by quetzal_rainbow · 2023-01-09T05:10:31.047Z · LW(p) · GW(p)

This is not the post where I intended to discuss this question; I just want to express disagreement here: you want a useful LLM, not an LLM that produces all possible completions of your idea, but an LLM that produces useful completions of your idea. So you want an LLM whose outputs are at least partially weighted by their usefulness (like ChatGPT), which implies consequentialism.

answer by AtillaYasar · 2023-01-09T01:33:07.095Z · LW(p) · GW(p)

Disagreement.

I disagree with the assumption that AI is "narrow". In a way, GPT is more generally intelligent than humans, because of the breadth of its knowledge and the types of outputs it can produce, and it's actually humans who outperform AI (by a lot) at certain narrow tasks.

And assistance can include more than asking a question and receiving an answer. It can be exploratory, with the right interface to a language model.

(Actually, my stories are almost always exploratory: I try random stuff, change the prompt a little, and recursively play around like that to see what the AI will come up with.)
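(A toy sketch of that kind of exploratory loop, just to make it concrete; `ask_model` and the variation strings are made up for illustration and are not part of any existing tool:)

```python
# Toy sketch of the exploratory workflow described above: take a base prompt,
# try small variations, and collect completions to see what the model comes up with.
# ask_model is a hypothetical placeholder for a real language model call.

def ask_model(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    raise NotImplementedError("plug in a real language model client here")

def explore(base_prompt: str, variations: list[str]) -> dict[str, str]:
    """Return completions for the base prompt and each tweaked variant."""
    results = {base_prompt: ask_model(base_prompt)}
    for tweak in variations:
        prompt = f"{base_prompt}\n{tweak}"
        results[prompt] = ask_model(prompt)
    return results

# Typical use: eyeball the completions, keep the most interesting prompt,
# and recurse on it with a new set of variations.
```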

"Specific"

Related to the above: in my opinion, thinking of specific tools is the wrong framing. It's like how a gun is not a tool for killing a specific person; it kills whoever you point it at. And a language model completes whichever thought or idea you start, effectively reducing the time you need to think.

So the most specific I can get is that I'd make it help me build tooling (and I already have). And the better the tooling, the more "power" the AI can give you (as George Hotz might put it).

For example, I've built a simple webpage with ChatGPT despite knowing almost no JavaScript. What does this mean? It means my scope as a programmer just changed from Python to every language on earth (sort of), and it's the same for any piece of knowledge, since ChatGPT can explain things pretty well. So I can access any concept or piece of understanding much more quickly, and there are lots of things Google searches simply don't work for.

1 comment


comment by Chris_Leong · 2023-01-09T04:19:44.989Z · LW(p) · GW(p)

I'm already finding it useful for writing posts, especially when I have a paragraph that doesn't quite flow right. I feel it makes the writing process so much easier.