Hire (or become) a Thinking Assistant / Body Double

post by Raemon · 2024-12-23T03:58:42.061Z · LW · GW · 8 comments

Contents

    Executive Assistants
  Core Skills of a Metacognitive Assistant
  Pitfalls
  Optimizing for (not-particularly skilled) Metacognitive Assistance
  Automated AI Assistants?
  Trialing People + Matchmaking
  Focusmate + TaskRabbit?
  Aligning Incentives

Of the posts I've delayed writing for years, I maybe regret this one the most.

I think more people (x-risk focused people in particular) should consider becoming (and hiring) metacognitive assistants. This is the single largest performance boost I know of – hire someone to sit with you and help you think. It doesn't help me (much) when I'm at my peak, but I'm not at my peak most of the time.

There are four types of assistants I'm tracking so far:

  1. Body Doubles
  2. Metacognitive Assistants
  3. Tutors
  4. Partners/Apprentices 

Body doubles [LW · GW] just sit in the room with you, periodically looking at your screen, and maybe saying "hey, do you endorse being on Facebook?". They're a kind of brute-force willpower aid. The person I know who uses them the most (Alex Altair) has them just sit in the same room (I believe while doing pomodoros, each of them working on different things). He guesses that they 2x his productivity (which is around what I've gotten).

A metacognitive assistant is a step beyond, where they are dedicating their attention to you, noticing when you are getting stuck, and gently intervening. (I assume people vary in how they like to be intervened on, but for people doing nuanced cognitive work, I think not disrupting someone's thought process is very important. You need to feel safe with a metacognitive assistant). My experience is that this is a 1.5x to 2x multiplier on my output.

The next two types are both more involved than Metacognitive Assistants, but in different ways.

Tutors pay attention to you, but are particularly modeling how you are approaching a particular skill (programming, math, etc). They notice when you seem to be tackling the skill in a confused or inefficient way, and ask questions about your thought process so as to figure out what subskills or concepts you need to develop.

Partners or apprentices are full-on "pairing" – they actively collaborate with you on your task. Hiring a partner/apprentice is very hard; it requires tons of chemistry and intellectual compatibility, so it's not really a shortcut to anything, but if you find the right person it seems great. 

(John Wentworth says his research partner David Lorell multiplied his productivity by 3x, largely by raising John's floor performance. His earlier estimates were higher, and he says the current 3x takes into account that the trend of value-estimate has been downward. He does flag that the reduction-in-value-estimate included "dealing with some burnout" at times when he ended up pushing himself harder than he'd have naturally done if working on his own. He's since iterated on how to deal with that).

This post is mostly focused on Metacognitive Assistants, because I think they a) require some upfront investment to turn into a functioning niche of the rationalsphere (moreso than body doubles), and b) feel achievable to scale up (whereas Tutors/Partners are both pretty advanced roles).

Pricing here varies wildly. I believe Alex Altair mostly hires UC Berkeley grad students for ~$15/hr; I've worked with people in more dedicated Metacognitive Assistant roles for $40–$80/hr depending on circumstances. Research assistants and tutors are probably much more bespoke. 

Executive Assistants

I'm contrasting "Thinking Assistants" with "Executive Assistants." The two roles do involve many of the same skillsets. I see executive assistants' job as a) handling your general metacognition across all the domains other than your core competency, and b) handling various other personal-or-professional tasks that free up your time to focus on your core competency. 

I think executive assistants are also great, and maybe the role should blend with the Thinking Assistant role, since you realistically don't need a Thinking Assistant all the time, you do need this other stuff dealt with, and the two together are probably worth one fulltime hire. But it is a different job.

Core Skills of a Metacognitive Assistant

I assume people will vary in what works for them. But, what I want out of a Thinking Assistant is:

There are also important outside-the-container skillsets, such as:

Even the minimum bar (i.e. "attentive body double") is a surprisingly skilled position. It requires gentleness/unobtrusiveness, attentiveness, and a good vibe. 

A thing that feels a bit silly to me is that this isn't something I've been able to make work very well at Lightcone with other Lightcone employees. Sometimes we actively pair on tasks and that works well. But, our hiring process sort of filters for ornery opinionatedness, which is kinda the opposite of what you want here. I think even the simplest version of this is a specialized role. 

The skill ceiling, meanwhile, seems quite high. The most skilled versions of this are the sort of therapist or executive coach who would charge hundreds of dollars an hour. The sort of person who is really good at this tends to quickly find their ambitions outgrowing the role (same with good executive assistants, unfortunately).

Pitfalls

Common problems I've run into:

Optimizing for (not-particularly skilled) Metacognitive Assistance

I've worked with people who were actively skilled at Thinking Assistance, and one person for whom it wasn't really their main thing, just a job.

One way I got more mileage out of the not-as-skilled person was to do the upfront work of assembling a list of cognitive situations + habits, e.g.:

etc. 

Then, since I've done the upfront work of thinking through my own metacognitive practices, the assistant only has to track in the moment what situation I'm in, and basically follow a flowchart I might be too tunnel-visioned to handle myself.
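To make the idea concrete, the "flowchart" can be literal enough to write down as a lookup table. Here's a minimal sketch; the situation names and prompts are invented examples, not my actual list:

```python
# Hypothetical sketch: a metacognitive "flowchart" encoded as a
# situation -> pre-agreed-prompt lookup. The hard upfront work is
# writing this table; the assistant only has to spot the situation.
PLAYBOOK = {
    "stuck_on_bug": "Have you stated the bug's symptoms out loud yet?",
    "rabbitholing": "Is this subtask still on the critical path?",
    "avoiding_hard_task": "What's the smallest first step you'd endorse?",
    "flagging_energy": "Do you want a 5-minute break or a task switch?",
}

def assistant_prompt(situation: str) -> str:
    """Return the pre-agreed prompt for a situation, or a gentle default."""
    return PLAYBOOK.get(situation, "What are you trying to do right now?")
```

The point of the table is that in the moment, the assistant doesn't need to generate good interventions from scratch – they just need to recognize which situation you're in.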

Automated AI Assistants?

As with many professions, AI will probably automate this one pretty soon. I think the minimum viable "attentive body double + rubber duck" is something AI could implement right now. ChatGPT's voice mode would basically be fine at this if it:

Presumably people are working on this somewhere. I might go ahead and build my own version of it since I expect to eventually want highly customized cyborg tooling for myself, and since AI is dropping the cost of developing apps from scratch. But, I expect the market to figure it out sooner or later.

This establishes a pretty solid floor in quality. But, since part of the active ingredient here is "a real human is paying attention to you and will hold you accountable with a bit of their human soul", I expect there to continue being at least some benefit to having a real human. (I think there will be some minimum bar of attentiveness + unobtrusiveness + able-to-follow that a human will need, to be worth using over an AI, once the AI is basically working) 
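The minimum viable automated body double is architecturally simple: a loop that periodically glances at what you're doing and stays silent unless it has something worth saying. A sketch, where `ask_model` stands in for a real LLM or voice-model call (the keyword check here is a hypothetical placeholder, not a real API):

```python
import time

def ask_model(screen_summary: str) -> str:
    """Placeholder for an AI call that judges whether the user is on task.
    Returns a nudge, or empty string -- silence is the default, because
    unobtrusiveness is most of the job."""
    if "facebook" in screen_summary.lower():
        return "Hey, do you endorse being on Facebook right now?"
    return ""

def body_double_loop(get_screen_summary, interval_s=300, rounds=3):
    """Every `interval_s` seconds, glance at the screen and maybe speak up."""
    nudges = []
    for _ in range(rounds):
        nudge = ask_model(get_screen_summary())
        if nudge:
            nudges.append(nudge)
        time.sleep(interval_s)
    return nudges
```

In a real version, `get_screen_summary` would be a screenshot-plus-vision-model call and the nudge would go to text-to-speech; the loop structure is the easy part.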

Trialing People + Matchmaking

For the immediate future, I'd like to trial more people at cognitively assisting me, explicitly with a goal of being able to matchmake them with other people if appropriate. DM me if you're interested.

I also generally recommend that other people experiment with this in an ad-hoc way and write up their experiences.

Focusmate + TaskRabbit?

It'd be nice to have a scalable talent pipeline for this, that matchmakes people with assistants.

Because of the combination of:

I think the natural vehicle here is a matchmaking site that's similar to FocusMate (which pairs people for coworking) but more like you're hiring skilled labor. I can imagine something where people list different skills and rates, and get ratings based on how helpful they've been. 

Hypothetically this could be a very open-ended, public-facing commercial website. But for a lot of work in the x-risk space, I personally feel it helps a lot to have someone in sync with my strategic frame, and I'd feel more friction working with a random general-population person.

Aligning Incentives

An obvious idea that might occur to you is "Provide metacognitive assistance for free, to people you think are doing good work." I don't think this is a good idea longterm – I think it's a recipe for people ending up undervalued, as people model the cost as "free" rather than "subsidized." It also might turn into some kind of Lost Purposes Appendage where nobody knows how to evaluate either the research or the thinking-assistance and it gets propped up (or not) depending on how flush-with-funding the EAcosystem is this particular year.

I feel more optimistic about "the ecosystem overall figures out how much work various people's work is worth via various evaluation / grantmaking processes", and then people pay for metacognitive assistance if it's actually worth it.


Overall, this is one of the highest effect sizes I know of for productivity (up there with "get medication for your depression", "get a more motivating job" and "get enough sleep"). It is admittedly not cheap – $800/week at the cheap end if fulltime, and sort of unboundedly expensive at the higher end. (Modulo "maybe someone can build a good AI for this").

If you go this route – remember to keep track of whether you're overworking yourself. My current model is most people can in fact work more hours than they can motivate themselves to while working alone, but John's and my experience is that it's at least possible to overdo it if you're not careful.

8 comments

Comments sorted by top scores.

comment by Gurkenglas · 2024-12-23T15:27:14.707Z · LW(p) · GW(p)

I'd like to do either side of this! Which I say in public to have an opportunity to advertise that https://www.lesswrong.com/posts/MHqwi8kzwaWD8wEQc/would-you-like-me-to-debug-your-math [LW · GW] remains open.

comment by Bart Bussmann (Stuckwork) · 2024-12-23T08:32:17.813Z · LW(p) · GW(p)

I haven't actually tried this, but recently heard about focusbuddy.ai, which might be a useful AI assistant in this space.

comment by plex (ete) · 2024-12-23T13:42:52.892Z · LW(p) · GW(p)

I've been offering various flavours of this to selected people for the past few years (using the normally ill-advised free option, but I don't super need to earn money currently and it feels good to ask people to pay it forward and do good for the world), with pretty good reviews. I'm not super looking to expand this currently, but might be open to testing out more people in a month or three, depending on where priorities fall and whether I think the person is doing unusually good doom reducing work.

comment by Nina Panickssery (NinaR) · 2024-12-23T09:00:44.881Z · LW(p) · GW(p)

I think more people (x-risk researchers in particular) should consider becoming (and hiring) metacognitive assistants


Why do you think x-risk researchers make particularly good metacognitive assistants? I would guess the opposite - that they are more interested in IC / non-assistant-like work?

Replies from: gw
comment by gw · 2024-12-23T09:25:51.902Z · LW(p) · GW(p)

Hazarding a guess from the frame of 'having the most impact' and not of 'doing the most interesting thing':

  • It might help a lot if a metacognitive assistant already has a lot of context on the work
  • If you think someone else is doing better work than you and you can 2x them, that's better than doing your individual work. (And if instead you can 3x or 4x people...)
Replies from: Raemon
comment by Raemon · 2024-12-23T12:19:00.900Z · LW(p) · GW(p)

I actually meant to say "x-risk focused individuals" there (not particularly researchers), and yes was coming from the impact side of things. (i.e. if you care about x-risk, one of the options available to you is to become a thinking assistant). 

comment by Alex_Altair · 2024-12-23T04:19:30.180Z · LW(p) · GW(p)

DM me if you're interested.

I, too, am quite interested in trialing more people for roles on this spectrum.

comment by Dalmert · 2024-12-23T11:24:37.125Z · LW(p) · GW(p)

I'm interested in variants of this from both sides. Feel free to shoot me a DM and let's see if we can set something up.

I haven't had a good label to put on things like this but I've gravitated towards similar ways of work over the last 10-20 years, and I've very often found very good performance boosting effects, especially where compatibility and trust could be achieved.