post by [deleted]

Comments sorted by top scores.

comment by mic (michael-chen) · 2023-10-19T01:30:22.963Z · LW(p) · GW(p)

Do we still not have any better timelines reports than bio anchors? From the frame of bio anchors, GPT-4 is merely on the scale of two chinchillas [? · GW], yet outperforms above-average humans at standardized tests. It's not a good assumption that AI needs 1 quadrillion parameters to have human-level capabilities.
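
For a rough sense of the gap being pointed at here, a back-of-the-envelope sketch in Python (reading "two chinchillas" as roughly twice the ~70B parameters of DeepMind's Chinchilla model; all figures are rough assumptions, and GPT-4's real parameter count is not public):

```python
# Back-of-the-envelope only; every number here is a rough assumption.
chinchilla_params = 70e9                      # DeepMind's Chinchilla model: ~70B parameters
gpt4_params_guess = 2 * chinchilla_params     # "two chinchillas" reading: ~140B (speculative)
human_level_params = 1e15                     # the "1 quadrillion parameters" figure above

gap = human_level_params / gpt4_params_guess
print(f"Assumed GPT-4 scale:         {gpt4_params_guess:.1e} parameters")
print(f"Quadrillion-parameter bar:   {human_level_params:.1e} parameters")
print(f"Gap under these assumptions: ~{gap:,.0f}x")   # roughly 7,000x
```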

Replies from: jacob_cannell, markovial, amaury-lorin
comment by jacob_cannell · 2023-11-01T00:57:42.686Z · LW(p) · GW(p)

The general scaling laws are universal and also apply to biological brains, which naturally leads to a net-training compute timeline [LW · GW] projection (there are now a couple of new neuroscience papers applying scaling laws to animal intelligence that I'd discuss if/when I update that post).
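
For concreteness, a minimal sketch of the kind of compute-optimal scaling relation being invoked, assuming the widely cited Chinchilla-style rules of thumb C ≈ 6·N·D and D ≈ 20·N (the projection in the linked post may use different constants):

```python
# Minimal sketch of compute-optimal scaling, assuming the Chinchilla-style
# rules of thumb C ~= 6 * N * D and D ~= 20 * N (rough empirical fits, not laws).

def compute_optimal(train_flop: float) -> tuple[float, float]:
    """Return (parameters N, training tokens D) for a training compute budget C in FLOP."""
    # From C = 6*N*D and D = 20*N:  C = 120*N^2  =>  N = sqrt(C / 120)
    n_params = (train_flop / 120) ** 0.5
    n_tokens = 20 * n_params
    return n_params, n_tokens

for flop in (1e23, 1e24, 1e25, 1e26):
    n, d = compute_optimal(flop)
    print(f"{flop:.0e} FLOP -> ~{n:.1e} params, ~{d:.1e} tokens")
```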

Note I posted that a bit before GPT-4, which used roughly human-brain lifetime compute for training and is proto-AGI (far more general, in the sense of breadth of knowledge and mental skills, than any one human, but still less capable than human experts at execution). We are probably now in the sufficient-compute regime, given better software/algorithms.
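
A rough illustration of the "human-brain lifetime compute" comparison (every figure below is an order-of-magnitude assumption; GPT-4's training compute has not been published):

```python
# Order-of-magnitude sketch only; all numbers are assumptions, not measurements.
brain_flops = 1e15                 # assumed brain-equivalent compute in FLOP/s (estimates span ~1e13-1e16)
years_to_adulthood = 30
lifetime_seconds = years_to_adulthood * 365 * 24 * 3600     # ~9.5e8 s
lifetime_flop = brain_flops * lifetime_seconds               # ~1e24 FLOP under these assumptions

gpt4_train_flop_guess = 2e25       # speculative third-party estimate; not an official figure

print(f"Assumed human lifetime compute: ~{lifetime_flop:.1e} FLOP")
print(f"Assumed GPT-4 training compute: ~{gpt4_train_flop_guess:.1e} FLOP")
print(f"Ratio under these assumptions:  ~{gpt4_train_flop_guess / lifetime_flop:.0f}x")
```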

comment by markov (markovial) · 2023-10-19T08:52:14.915Z · LW(p) · GW(p)

I think the point of Bio Anchors was to give a loose upper bound, not to say exactly when it will happen. At least that is how I perceive it. People at a 101 level probably still have the impression that highly capable AI is multiple decades, if not centuries, away. The reason I include bio anchors here is to point toward the fact that we quite likely have at most until 2048. From that upper bound we can then scale back further.

We also have the recent Open Philanthropy report that extends Bio Anchors: What a compute-centric framework says about takeoff speeds (https://www.openphilanthropy.org/research/what-a-compute-centric-framework-says-about-takeoff-speeds/). There is a note under meta-notes mentioning that I plan to include updates to timelines and takeoff in a future draft based on this report.

comment by momom2 (amaury-lorin) · 2023-10-19T09:17:12.130Z · LW(p) · GW(p)

I assume it's incomplete. It doesn't present the other 3 anchors mentioned, nor forecasting studies.

comment by Charbel-Raphaël (charbel-raphael-segerie) · 2023-10-18T22:35:17.421Z · LW(p) · GW(p)

This is well-crafted. Thank you for writing this, Markov.

Participants in the ML4Good bootcamps, students of the university course I organized, and students of AISF from AIS Sweden were very happy to be able to read your summary instead of having to read the numerous papers in the corresponding AGISF curriculum; the reviews were really excellent.

comment by Kabir Kumar (kabir-kumar-1) · 2024-02-13T14:14:22.999Z · LW(p) · GW(p)

Perhaps a note on prerequisites would be useful,
e.g. the level of math and computer science that's assumed.
Suggestion: try explaining the topics to 50+ random strangers. Wildly useful for improving written work.

comment by momom2 (amaury-lorin) · 2023-10-19T09:20:32.617Z · LW(p) · GW(p)

I don't understand how the parts fit together. For example, what's the point of presenting the (t,n)-AGI framework or the Four Background Claims?

Replies from: markovial
comment by markov (markovial) · 2023-10-31T20:54:29.676Z · LW(p) · GW(p)

Newcomers to the AI Safety arguments might be under the impression that there will be discrete cutoffs, i.e. either we have HLAI or we don't. The point of (t,n)-AGI is to give a picture of what a continuous increase in capabilities looks like. It is also slightly more formal than simple words-based definitions of AGI. If you know of a more precise mathematical formulation of the notion of general and super intelligences, I would love it if you could point me towards it so that I can include that in the post.
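
To make the continuity point concrete, here is one possible informal reading sketched in Python; this is an illustration of the idea of a capability ladder, not the definition used in the post:

```python
# Illustrative sketch only, under the assumed reading "a system is (t, n)-AGI if it
# matches or beats n human experts on cognitive tasks that take a human up to time t".
from dataclasses import dataclass

@dataclass(frozen=True)
class CapabilityLevel:
    task_horizon_seconds: float   # t: how long the covered tasks take a human expert
    experts_matched: int          # n: how many human experts the system matches or beats

    def at_least(self, other: "CapabilityLevel") -> bool:
        """Capability grows along both axes; there is no single AGI / not-AGI switch."""
        return (self.task_horizon_seconds >= other.task_horizon_seconds
                and self.experts_matched >= other.experts_matched)

minute_agi = CapabilityLevel(task_horizon_seconds=60, experts_matched=1)
hour_agi = CapabilityLevel(task_horizon_seconds=3600, experts_matched=10)
print(hour_agi.at_least(minute_agi))   # True: a strictly higher rung on the ladder
```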

As for Four Background Claims, the reason for inclusion is to provide an intuition for why general intelligence is important, and for why, even though future systems might be intelligent, it is not the default that they will either care about our goals or pursue them in the way the designers intended.