Beta test GPT-3 based research assistant

post by jungofthewon · 2020-12-16T13:42:50.432Z · LW · GW · 2 comments

Ought is building Elicit, a tool to automate and scale open-ended reasoning about the future. To date, we’ve collaborated with LessWrong to embed interactive binary predictions [LW · GW], share AGI timelines [LW · GW] and the assumptions driving them [LW · GW], forecast existential risk [LW · GW], and much more. 

We’re working on adding GPT-3-based research assistant features to help forecasters with the earlier steps in their workflow. Users create and apply GPT-3 actions by providing a few training examples. Elicit then scales that action to thousands of publications, datasets, or use cases. 
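Ought hasn’t published how actions are implemented, but “providing a few training examples” suggests few-shot prompting: the user’s examples are assembled into a prompt and GPT-3 completes the pattern for each new input. A minimal sketch of that idea (the function name, prompt layout, and example questions are all my own illustrative assumptions, not Elicit’s actual code):

```python
def build_few_shot_prompt(examples, new_input, task_description=""):
    """Assemble a few-shot prompt from user-provided (input, output) examples.

    GPT-3 sees the pattern and is prompted to continue it for new_input.
    """
    parts = [task_description] if task_description else []
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    # Leave the final Output blank so the model fills it in.
    parts.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(parts)


# Hypothetical training examples a forecaster might supply:
examples = [
    ("Will US GDP grow in 2021?",
     "Key factors: vaccine rollout, stimulus size, consumer spending"),
    ("Will Starship reach orbit by 2022?",
     "Key factors: test cadence, regulatory approval, funding"),
]
prompt = build_few_shot_prompt(
    examples,
    "Will remote work persist after the pandemic?",
    "Decompose each forecasting question into its key factors.",
)

# The assembled prompt would then go to GPT-3, e.g. with the 2020-era
# OpenAI completions API (not run here):
# import openai
# completion = openai.Completion.create(
#     engine="davinci", prompt=prompt, max_tokens=64, stop="\n\n")
```

Scaling an action to thousands of documents would then just mean calling this with each new input in turn.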

Here’s a demo of how someone applies existing actions:


And a demo of how someone creates their own action (no coding required):  

Some actions we currently support include:

There’s no better community than LessWrong to codify and share good reasoning steps, so we’re looking for people to contribute to our action repository by creating actions like: 

If you’re interested in becoming a beta tester and contributing to Elicit, please fill out this form! Again, no technical experience required.

2 comments


comment by romeostevensit · 2020-12-17T04:20:22.202Z · LW(p) · GW(p)

What I most want is creativity mode where it uses some of the best practices from structured creativity exercises to hit you with random prompts and elaborations. I think this is easily doable but might be its own side project.

comment by jungofthewon · 2020-12-17T16:46:25.366Z · LW(p) · GW(p)

Do you have any examples?