Which LessWrong/Alignment topics would you like to be tutored in? [Poll]

post by Ruby · 2024-09-19T01:35:02.999Z · 12 comments

Would you like to be tutored in applied game theory, natural latents, CFAR-style rationality techniques, "general AI x-risk", Agent Foundations, anthropics, or some other topics discussed on LessWrong?

I'm thinking about prototyping some topic-specific LLM tutor bots, and would like to prioritize topics that multiple people are interested in.

Topic-specific LLM tutors would be customized with things like pre-loaded relevant context, helpful system prompts, and more focused testing to ensure they work.
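
For concreteness, here's a minimal sketch of what one of these tutor bots might look like, assuming the OpenAI Python client. The topic, the context string, the prompt wording, the model name, and the ask_tutor helper are all illustrative placeholders, not a committed design.

```python
# Minimal sketch of a topic-specific tutor bot (assumes the OpenAI Python
# client; topic, context, prompt text, and model choice are placeholders).
from openai import OpenAI

client = OpenAI()

# Pre-loaded context: excerpts from the relevant LessWrong posts would go here.
TOPIC = "natural latents"
CONTEXT = "..."  # e.g. key definitions and results from the relevant sequence

SYSTEM_PROMPT = (
    f"You are a tutor for the LessWrong topic '{TOPIC}'. "
    "Use the provided context, check the student's understanding with "
    "questions, and break complex ideas into digestible pieces.\n\n"
    f"Context:\n{CONTEXT}"
)

def ask_tutor(question: str) -> str:
    """Send one student question to the tutor and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_tutor("What is a natural latent, in one paragraph?"))
```

A real version would presumably also keep multi-turn conversation history and be tested per-topic, per the "more focused testing" mentioned above.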

Note: I'm interested in topics that are written about on LessWrong, e.g. infra-Bayesianism, and not magnetohydrodynamics.

I'm going to use the same poll infrastructure that Ben Pace pioneered recently. There is a thread below where you can add and vote on topics/domains/areas where you might like tutoring.

  1. Karma: upvote/downvote to express enthusiasm about there being tutoring for a topic.
  2. Reacts: click on the agree react to indicate you personally would like tutoring on a topic.
  3. New poll option: add a new topic for people to express interest in being tutored on.

For the sake of this poll, I'm more interested in whether you'd like tutoring on a topic or not, separate from the question of whether you think a tutoring bot would be any good. I'll worry about that part.

Background

I've been playing around with LLMs a lot in the past couple of months, and so far my favorite use case is tutoring. LLM assistance is helpful via multiple routes: providing background context with less effort than external search and reading, keeping me engaged via interactivity, generating examples, and breaking down complex sections into more digestible pieces.

12 comments

Comments sorted by top scores.

comment by Elizabeth (pktechgirl) · 2024-09-20T01:13:19.393Z

What I want for rationality techniques is less a tutor and more an assertive rubber duck walking me through things when capacity is scarce.

comment by Ruby · 2024-09-19T01:40:34.033Z

Poll for LW topics you'd like to be tutored in
(please use the agree react to indicate you'd personally like tutoring on a topic; I might reach out if/when I have a prototype)

Note: Hit cmd-f or ctrl-f (whatever normally opens search) to automatically expand all of the poll options below.

comment by habryka (habryka4) · 2024-09-19T02:34:10.605Z

Writing well

comment by RHollerith (rhollerith_dot_com) · 2024-09-19T04:30:35.807Z

Applying decision theory to scenarios involving mutually untrusting agents.

comment by Ruby · 2024-09-19T01:48:33.963Z

Anthropics

comment by TheManxLoiner · 2024-10-19T22:19:30.016Z

What is the status of this project? Are there any estimates of timelines?