Feature proposal: integrate LessWrong with ChatGPT to promote active reading
post by DirectedEvolution (AllAmericanBreakfast) · 2023-03-19T03:41:34.781Z · LW · GW · 4 comments
I've built a little Python app I call aiRead.
Its least interesting feature is that it breaks down text the way I prefer to read it: a few sentences at a time, presented in ticker-tape fashion.
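As a rough sketch of what that presentation amounts to (this is not aiRead's actual code, and "article.txt" is just a placeholder filename), you can get most of the effect with naive regex sentence splitting shown a few sentences at a time:

```python
# Minimal ticker-tape sketch: split text into small sentence chunks and show
# them one at a time. Illustrative only, not the aiRead implementation.
import re

def chunk_sentences(text: str, sentences_per_chunk: int = 3) -> list[str]:
    """Split text into chunks of a few sentences each (naive splitting)."""
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [
        " ".join(sentences[i:i + sentences_per_chunk])
        for i in range(0, len(sentences), sentences_per_chunk)
    ]

if __name__ == "__main__":
    with open("article.txt", encoding="utf-8") as f:  # placeholder filename
        for chunk in chunk_sentences(f.read()):
            input(chunk + "\n\n[Enter for next chunk] ")
```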
However, I've also integrated it with ChatGPT. Two of my engineered prompts have proven most useful. One is activated by typing "explain" and generates a rewording, definitions of any jargon terms, and a simplified explanation in conversational language. The other is activated by typing "quiz" and generates an interactive quiz on the content you're reading.
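To give a concrete sense of the "explain" feature, here is a sketch of what such a call looks like. The prompt wording below is an illustrative stand-in rather than the actual engineered prompt (that lives in the aiRead source), and it assumes the pre-1.0 openai Python package that was current at the time of writing, plus an OpenAI API key:

```python
# Illustrative "explain" call. The prompt text is a stand-in for aiRead's real
# engineered prompt. Assumes the pre-1.0 openai package (openai.ChatCompletion)
# and an API key in the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

EXPLAIN_PROMPT = (
    "Reword the following passage, define any jargon terms it uses, and then "
    "give a simplified explanation in conversational language:\n\n{passage}"
)

def explain(passage: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": EXPLAIN_PROMPT.format(passage=passage)}],
    )
    return response["choices"][0]["message"]["content"]
```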
The point of these features is to promote active reading. I find them tremendously helpful: despite aiRead's numerous shortcomings, I'm now converting all the text I have to read into a format it can open. When I read LessWrong posts, I put them into aiRead and generate an interactive quiz to help me understand them better. It has been transformative.
I would love to see features like these integrated into LessWrong.
I know the dev team is busy, and they do a wonderful job - this is already one of the best forum architectures I've ever encountered. The only reason I'm suggesting this is that a convenient, always-available explanation-and-quiz feature in my reading app has, practically overnight, redefined the way I learn, and I'm excited to see the same thing incorporated into the other ways I consume information.
Note: Anyone is welcome to download and use aiRead, and if you do, I'd be very interested to hear about your experiences with it. You'll need a ChatGPT+ subscription. The way to use it is explained in the readme.
Also, I'm looking for collaborators on aiRead! If you'd like to help me polish this up, or ideally turn aiRead into a browser-based app, please get in touch!
4 comments
comment by Olomana · 2023-03-19T06:58:02.578Z · LW(p) · GW(p)
Thank you for sharing this. FYI, when I run it, it hangs on "Preparing explanation...". I have an OpenAI account, where I use the gpt-3.5-turbo model on the per-1K-tokens plan. I copied a sentence from your text and your prompt from the source code, and got an explanation quickly, using the same API key. I don't actually have the ChatGPT Plus subscription, so maybe that's the problem.
ChatGPT has changed the way I read content, as well. I have a browser extension that downloads an article into a Markdown file. I open the Markdown file in Obsidian, where I have a plugin that interacts with OpenAI. I can summarize sections or ask for explanations of unfamiliar terms.
On the server side, Less Wrong has a lot of really good content. If that content could be used to fine-tune a large language model... it would be like talking to ChatLW instead of ChatGPT.
Things like explanations and poems are better done on the user side, as you have done.
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2023-03-19T07:47:32.189Z · LW(p) · GW(p)
Edit:
I managed to solve this issue, which appears to be a widespread problem with accessing ChatGPT via the API. The fix is incorporated into the code on GitHub.
comment by habryka (habryka4) · 2023-03-19T04:04:31.158Z · LW(p) · GW(p)
I kind of like this idea. I've been thinking for a long time about how to make LessWrong more of a place of active learning. My guess is there is still a lot of hand-curating to be done to create the right type of active engagement, but I do think the barrier to entry is a bunch lower now and it might be worth the time-investment.
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2023-03-19T04:27:15.776Z · LW(p) · GW(p)
Since you kind of like it, let me spell out the way I use this to help with active engagement.
My software lets the user select how much material they view at a time; usually I use 1-3 sentences. When the user activates the quiz feature, I incorporate the material they're viewing into a static engineered prompt that cues ChatGPT to produce a quiz question based on that material, then prompt the user to reply. Their reply is sent back to ChatGPT to receive a grade.
I'm just using gpt-3.5-turbo, and it's not especially accurate at grading, but I give the user the ability to view the original material the question was based on so they can verify the grading. The point of the quiz is not so much accuracy as getting the user into an active reading mode as they go.
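A minimal sketch of that loop is below. The prompts are illustrative stand-ins rather than the actual engineered prompts (those are in the aiRead repository), and it assumes the pre-1.0 openai Python package with an API key in the environment:

```python
# Reconstruction of the quiz flow described above (illustrative prompts, not
# aiRead's actual ones). Assumes the pre-1.0 openai package and an API key in
# the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def chat(prompt: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

def quiz(passage: str) -> None:
    # 1. Fold the currently viewed passage into a static prompt asking for one question.
    question = chat(
        "Write one quiz question that tests understanding of this passage:\n\n" + passage
    )
    answer = input(question + "\nYour answer: ")
    # 2. Send the user's answer back, along with the passage, to receive a grade.
    grade = chat(
        f"Passage:\n{passage}\n\nQuestion: {question}\nStudent answer: {answer}\n\n"
        "Grade the answer as correct, partially correct, or incorrect, and explain briefly."
    )
    print(grade)
    # 3. Show the original passage so the user can check the grading themselves.
    print("\nOriginal passage:\n" + passage)
```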
If you're interested in having a conversation, I'd be happy to chat - I'm interested in helping LessWrong improve, and I'm also curious about the technical and logistical challenges you're working with and the type of active engagement you're hoping to promote.