GPT-3 and the future of knowledge work

post by fowlertm · 2021-03-05T17:40:12.039Z

The most recent episode of the Futurati Podcast is a big one. We had Jungwon Byun and Andreas Stuhlmüller on to talk about their startup 'Ought', and to the best of my knowledge this is the first public, long-form discussion of their work anywhere.

(It's also probably our funniest episode.)

Their ambition is to wrap a sleek GUI around advanced language models to build a platform which could transform scholarship, education, research, and almost every other place people think about stuff.

The process is powered by GPT-3, and mostly boils down to showing it a few examples of the task you want done. To generate a list of potential essay topics, you'd show it 3-4 topics, and it'd respond with a few more.

The more you interact with it, the better it gets.

There's all sorts of subtlety and detail, but that's the essence of it.
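The few-shot pattern described above is simple enough to sketch. Here's a minimal illustration of how such a prompt might be assembled; this is not Ought's actual code, the helper name and example topics are made up, and the resulting string would be sent to a completion API like GPT-3.

```python
# Hypothetical sketch of few-shot prompting: show the model a few
# examples, end mid-list, and let it continue in the same format.

EXAMPLE_TOPICS = [
    "How cities shape creativity",
    "The ethics of automation",
    "Why attention is the scarcest resource",
]

def few_shot_prompt(header, examples):
    """Build a prompt that demonstrates the task with a few examples
    and ends with a dangling list item for the model to complete."""
    lines = [header] + [f"- {e}" for e in examples] + ["-"]
    return "\n".join(lines)

prompt = few_shot_prompt("Potential essay topics:", EXAMPLE_TOPICS)
print(prompt)
```

A language model completing this prompt tends to continue the bulleted list with new topics in the same style, which is the whole trick: the format of the examples, not any explicit instruction, tells it what to do.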

This may not sound all that impressive, but consider what it means. You can have Elicit (a separate spinoff of Ought) generate counterarguments to your position, brainstorm failure modes (and potential solutions) to a course of action, summarize papers, and rephrase a statement as a question or in a more emotionally positive tone.

The team is working on some integrations to extend these capabilities. Soon enough, Elicit will be able to connect to databases of published scientific papers, newspapers, blogs, or audio transcripts. When you ask it a research question, it'll be able to link out to millions of documents and offer high-level overviews of every major theme; it'll be able to test your comprehension by asking you questions as you read; it'll be able to assemble concept hierarchies; it'll be able to extract all the figures from scientific papers and summarize them; it'll be able to extract all the proper names, find where those people are located, get their email addresses where available, and write them messages inviting them on your podcast.

We might one day be able to train a model on Einstein or Feynman and create lectures in their style.

What's more, people can share workflows they've developed. If I work out a good approach to learning about the subdisciplines of a field, for example, I can make that available to anyone to save them the effort of discovering it on their own.

There will be algorithms of thought that can make detailed, otherwise inaccessible aspects of other people's cognitive processes available.

And this is just researchers. It could help teachers dynamically adjust material on the basis of up-to-the-minute assessments of student performance. It could handle rudimentary aspects of therapy. It could help people retrain if they've been displaced by automation. It could summarize case law. It could help develop language skills in children.

I don't know if the future will look the way we hope it will, but I do think something like this could power huge parts of the knowledge work economy in the future, making everyone dramatically more productive.

It's tremendously exciting, and I'm honored to have been able to learn about it directly.
