post by [deleted]

This is a link post for

Comments sorted by top scores.

comment by romeostevensit · 2020-07-26T22:50:33.232Z · LW(p) · GW(p)

I think that, memetically, we'll be selecting hardest for the cases where it's most difficult to see how much work the human is doing by pushing it out into the context, which is exactly what makes GPT-3 look most impressive. I think only a little of that is happening here; I'm just saying it because this is what sparked the thought.

Replies from: zachary-robertson, ESRogs
comment by Past Account (zachary-robertson) · 2020-07-27T01:33:09.293Z · LW(p) · GW(p)

I agree. Coming up with the right prompts was not trivial; I almost quit several times. Yet there is a science to this, and I think it’ll become more important to turn our focus away from the spectacle aspects of GPT and towards reproducibility. More so if the way forward is via interrelated instances of GPT.

As an aside, critique seems much easier than generation. I’m cautiously optimistic about prompting GPT instances to “check” output.
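
A rough sketch of what I mean by “checking”, with a placeholder `complete()` standing in for whatever GPT access you have; none of these names are real APIs, just illustration:

```python
# Generate-then-check sketch. `complete` is a stand-in for your GPT access
# (AI Dungeon, the API, etc.), not a real library call.

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your GPT access here")

def generate(question: str) -> str:
    # One instance produces a candidate answer.
    return complete(f"Q: {question}\nA:")

def check(question: str, answer: str) -> str:
    # A second instance is only asked to critique, not to answer.
    return complete(
        f"Q: {question}\n"
        f"Proposed answer: {answer}\n"
        "Is the proposed answer correct? Point out any mistakes.\n"
        "Critique:"
    )
```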

Replies from: romeostevensit
comment by romeostevensit · 2020-07-27T06:50:54.887Z · LW(p) · GW(p)

Similar to sharp google-fu today but much deeper.

comment by ESRogs · 2020-07-27T01:08:36.706Z · LW(p) · GW(p)

Interesting to think about how this will evolve. Over time, humans will have to do less of the work, and the combined system will be able to do more. (Though the selection pressure that you mention will continue to be there.)

It seems to me that we might not be too far away from "natural language programming". With some combination of the above approach, plus the program synthesis examples where you just specify a comment, plus some extra tricks, it seems like you could end up just sort of specifying your programs via an algorithm description in English.

You'd want to set it up so that it alerted you when it thought things were ambiguous, and so that it auto-generated test cases for different possible interpretations and showed you the results.
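
As a purely hypothetical sketch of that workflow (every helper here is imagined, not an existing tool):

```python
# Hypothetical "natural language programming" loop: enumerate readings of an
# ambiguous spec, then generate a candidate program per reading. All names
# here are made up for illustration.
from typing import List

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your GPT access here")

def interpretations(spec: str, n: int = 3) -> List[str]:
    # Ask the model to list plausible readings of the spec.
    reply = complete(
        f"Specification: {spec}\n"
        f"List {n} different reasonable interpretations of this specification:\n"
    )
    return [line.strip() for line in reply.splitlines() if line.strip()][:n]

def candidate_program(spec: str, reading: str) -> str:
    # Ask for code implementing one particular interpretation; a separate
    # step would auto-generate test cases per candidate and show the results.
    return complete(
        f"Specification: {spec}\n"
        f"Interpretation: {reading}\n"
        "Python implementation:\n"
    )
```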

I've personally started using TabNine in the last few weeks, and I'd say it's just barely over the edge of being useful. But I can imagine next-gen versions of these things pretty radically transforming the process of programming.

Replies from: ESRogs, ESRogs
comment by ESRogs · 2020-07-27T01:16:38.050Z · LW(p) · GW(p)

With some combination of the above approach, plus the program synthesis examples where you just specify a comment, plus some extra tricks

Another interesting direction to go with this -- can you get it to do a sort of distillation step, where you first get amplified-GPT to implement some algorithm, a la the recursion dialogue above, and then you get it to generate code that implements the same algorithm?
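
One naive way to set that distillation step up, with `complete()` again just a placeholder for whatever GPT access you have:

```python
# Distillation sketch: feed the earlier dialogue back in and ask for
# standalone code implementing the same algorithm. `complete` is a
# placeholder, not a real API.

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your GPT access here")

def distill_to_code(dialogue: str) -> str:
    return complete(
        "The following is a dialogue in which an algorithm is worked out "
        "step by step:\n\n"
        f"{dialogue}\n\n"
        "Now write a single Python function that implements the same "
        "algorithm:\n"
    )
```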

comment by ESRogs · 2020-07-27T01:12:23.816Z · LW(p) · GW(p)

Possible startup idea -- design an IDE from the ground up to take advantage of GPT-like abilities.

I think just using the next-gen version of TabNine will be powerful, and I expect all major IDEs' autocomplete features to improve a lot in the coming years. But I also suspect that if you designed an IDE to really take advantage of what these systems can do, you might end up with something rather different from today's IDEs plus better autocomplete.

comment by spkoc · 2020-07-27T12:24:54.330Z · LW(p) · GW(p)

How are you actually doing this in AI Dungeon? I have Dragon mode enabled, everything else default.

I start a new Single player game, choose Custom mode (6), and then at the prompt I just paste (using Say mode):

Q: Say I want to sum the items in a list. How would I do this recursively? The answer involves two steps.

and I get:

Q: Say I want to sum the items in a list. How would I do this recursively? The answer involves two steps. First, I need to know how many items there are in total. Second, I need to find out which item is at the top of that list. A: You could use recursive_sum() .
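
(For reference, the two-step recursive answer the prompt is fishing for, a base case plus a recursive case, looks roughly like this:)

```python
def recursive_sum(items):
    # Step 1: base case, an empty list sums to 0.
    if not items:
        return 0
    # Step 2: recursive case, first item plus the sum of the rest.
    return items[0] + recursive_sum(items[1:])
```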

Similarly, when I tried to reproduce stuff from https://old.reddit.com/r/slatestarcodex/comments/hrx2id/a_collection_of_amazing_things_gpt3_has_done/ I didn't get anything nearly as impressive. Also, the responses get increasingly confused: if I ask it to translate something into French or Romanian, it will randomly translate later prompts as well.

Is there some basic tutorial for how you seed these AI Dungeon interactions?

Replies from: zachary-robertson, sil-ver, avturchin
comment by Past Account (zachary-robertson) · 2020-07-27T13:09:49.102Z · LW(p) · GW(p)

You could prompt with “Q:” + (content) and then “A:”

I use the default settings for the temperature, but I do cut it off after it finishes an answer. However, you likely won’t get my exact results unless you literally copy the instances. Moreover, if you gave up after the first response, I think you might’ve given up too quickly. You can respond to it and communicate more information, as I did. The above really was what I got on the first try. It’s not perfect, but that’s the point. You can teach it; it’s not “it works” or “it doesn’t work”.

I don’t think there are tutorials, but perhaps in due time someone (maybe me) will get to that. I also feel like ‘trying’ to get it to do something might be a sub-optimal approach. This is a subtle difference, but my intent here was to get it to confirm it understood what I was asking by answering questions.
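
To make the shape concrete, the prompt ends up looking something like this (the content here is made up; only the Q:/A: framing and the follow-up turn matter):

```python
# Illustrative prompt shape, including a follow-up "teaching" turn after
# cutting the model off at the end of its first answer. Content is made up.
prompt = """\
Q: Say I want to sum the items in a list. How would I do this recursively?
A: Check for the empty list, and otherwise add the first item to the sum of the rest.
Q: Good. Now write that as a Python function.
A:"""
```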

comment by Rafael Harth (sil-ver) · 2020-07-28T13:57:06.778Z · LW(p) · GW(p)

The approach I've been using (for different things, but I suspect the principle is the same) is

  • If you want it to do X, give it about four examples of X in the question-answer format as a prompt (as in, commands from the human plus answers from the AI)
  • Repeat about three times:
    • Give it another such question, reroll until it produces a good answer (might take a lot of rolls)

At that point the instance is much better than one where you wrote all the prompts yourself to begin with.
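
A minimal sketch of that setup (the task and examples here are purely illustrative):

```python
# Few-shot setup sketched above: four worked examples as the prompt, then the
# new question. "Rerolling" just means re-sampling until the continuation is
# good enough to keep.

EXAMPLES = [
    ("Translate to French: good morning", "bonjour"),
    ("Translate to French: thank you", "merci"),
    ("Translate to French: see you tomorrow", "à demain"),
    ("Translate to French: where is the station?", "où est la gare ?"),
]

def few_shot_prompt(new_question: str) -> str:
    # Lay out the examples in the question-answer format, then the new question.
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in EXAMPLES)
    return f"{shots}\nQ: {new_question}\nA:"

print(few_shot_prompt("Translate to French: the train is late"))
```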

comment by avturchin · 2020-07-27T14:20:54.875Z · LW(p) · GW(p)

I also have difficulties replicating this with AI Dungeon. I think it has a weaker version of GPT-3 than the API.

Replies from: 157-239n
comment by 157 239n (157-239n) · 2020-08-02T12:33:28.129Z · LW(p) · GW(p)

Yeah, they have 2 different models. "Griffin" is GPT-2 and free of charge. "Dragon" is GPT-3 and I believe costs $5/month.

Replies from: avturchin
comment by avturchin · 2020-08-02T13:37:06.067Z · LW(p) · GW(p)

I have a paid account.

comment by Raemon · 2020-07-27T13:14:00.119Z · LW(p) · GW(p)

Your formatting makes it hard for me to tell which parts are your prompting vs. GPT-3. One common format is ‘bold = you’, but the opening line wasn’t bold, so I was confused about what’s going on there.

Replies from: zachary-robertson
comment by Past Account (zachary-robertson) · 2020-07-27T13:41:43.080Z · LW(p) · GW(p)

Thanks! I forgot to do this. Luckily I can go back through the run and put this in. There is ambiguity whenever it auto-completes, but I hope I did a decent job of noting where this is happening.