Operationalizing timelines

post by Zach Stein-Perlman · 2023-03-10T16:30:01.654Z · LW · GW · 1 comment

If you're forecasting AI progress or asking someone about their timelines, what event should you focus on?

tl;dr it's messy and I don't have answers.

AGI, TAI, etc. are bad forecasting targets, mostly because they are vague or don't capture what we care about.

More: APS-AI [LW · GW]; PASTA; prepotent AI; fractional automation of 2020 cognitive tasks [LW · GW]; [three levels of transformativeness]; various operationalizations for predictions (e.g., Metaculus, Manifold, Samotsvety [EA · GW]); and various definitions of AGI, TAI, and HLAI. Allan Dafoe uses “Advanced AI” to "gesture[] towards systems substantially more capable (and dangerous) than existing (2018) systems, without necessarily invoking specific generality capabilities or otherwise as implied by concepts such as 'Artificial General Intelligence' ('AGI')." Some people talk about particular visions of AI, such as CAIS [LW · GW], tech company singularity [LW · GW], and perhaps PASTA.

Some forecasting methods are well-suited for predicting particular kinds of conditions. For example, biological anchors [LW · GW] most directly give information about time to humanlike capabilities. And "Could Advanced AI Drive Explosive Economic Growth?" uses economic considerations to give information about economic variables; it couldn't be adapted well for other kinds of predictions.
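To make the flavor of anchor-style extrapolation concrete, here is a toy sketch in Python. This is not the bio-anchors model itself, just the basic shape of the reasoning; the current-compute figure, anchor threshold, and doubling time below are placeholder assumptions chosen for illustration, not numbers from the report:

```python
import math

# Toy anchor-style extrapolation (NOT the actual bio-anchors model).
# All constants below are illustrative assumptions.

largest_training_run_flop = 1e24   # assumed current frontier training compute
anchor_flop = 1e30                 # assumed compute for "humanlike capabilities"
doubling_time_years = 1.0          # assumed doubling time for effective compute

# Number of doublings needed to reach the anchor, then convert to years.
doublings_needed = math.log2(anchor_flop / largest_training_run_flop)
years_to_anchor = doublings_needed * doubling_time_years

print(f"~{years_to_anchor:.0f} years to anchor under these assumptions")
```

Note how sensitive the output is to the assumed anchor and doubling time; that sensitivity is one reason this style of estimate speaks most directly to "time to humanlike capabilities" rather than to other operationalizations.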

Operationalizations of things-like-AGI are ideally precise and capture what we care about.

If you're eliciting forecasts, as in a survey, make sure respondents interpret the question as intended. In particular, timelines surveys (of a forecasting-sophisticated population like longtermist researchers, not the public) should clarify several details up front.

Forecasting a particular threshold of AI capabilities may be asking the wrong question. To inform at least some interventions, "it may be more useful to know when various 'pre-TAI' capability levels would be reached, in what order, or how far apart from each other, rather than to know when TAI will be reached" (quoting Michael Aird). "We should think about the details of different AI capabilities that will emerge over time […] and how those details will affect the actions we can profitably take" (quoting Ashwin Acharya).

This post draws on some research by and discussion with Michael Aird, Daniel Kokotajlo, Ashwin Acharya, and Matthijs Maas.

1 comment


comment by the gears to ascension (lahwran) · 2023-03-10T20:10:39.967Z · LW(p) · GW(p)

start by asking what behavior you're worried about, then ask about the timeline of a behavior to protect against its injuries to others