How much funding and researchers were in AI, and AI Safety, in 2018?

post by Raemon · 2019-03-03T21:46:59.132Z · LW · GW · 2 comments

This is a question post.


I'm trying to build up a picture of how "much" research is going into general AI capabilities, and how much is going into AI safety.

The ideal question I'd be asking is "how much progress (measured in important thoughts/ideas/tools) was made in 2018 that could plausibly lead to AGI, and how much progress was made that could plausibly lead to safe/aligned AI?"

I assume that question is nigh impossible to answer, so instead I'm asking these approximations:

a) How much money went into AI capabilities research in 2018?

b) How much money went into AI alignment research in 2018?

c) How many researchers (ideally "research hours", but I'll take what I can get) were focused on capabilities research in 2018?

d) How many researchers were focused on AI safety in 2018?

Answers

answer by Unnamed · 2019-03-30T00:39:52.063Z · LW(p) · GW(p)

Some numbers related to c (how many capabilities researchers):

In 2018, about 8,500 people attended NeurIPS and about 4,000 attended ICML. About 2,000 researchers work at Google AI, and in December 2017 there were reports that DeepMind employed about 700 people in total, including about 400 with a PhD.

Turning this into a single estimate for "number of researchers" is tricky, for the sorts of reasons that catherio gives [LW(p) · GW(p)]. "Capabilities researchers" is a fuzzy category, and it's not clear to what extent a count of people advancing the state of the art in general AI capabilities should include people who are primarily working on applications of the current state of the art, or people who are primarily advancing the state of the art in narrower subfields. Also, obviously only some fraction of the relevant researchers attended those conferences or work at those companies.

I'll suggest 10,000 people as a rough order-of-magnitude estimate. I'd be surprised if the number that came out of a more careful estimation process wasn't within a factor of ten of that.
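For concreteness, here is a minimal back-of-envelope sketch of how those headcounts could be combined into a single figure. The overlap and researcher-fraction factors are illustrative assumptions, not numbers from the answer above.

```python
# Back-of-envelope aggregation of the headcounts quoted above.
# The fudge factors below are made-up assumptions, not figures from the answer.

neurips_2018_attendees = 8_500   # NeurIPS 2018 attendance (from the answer)
icml_2018_attendees = 4_000      # ICML 2018 attendance (from the answer)
google_ai_researchers = 2_000    # researchers at Google AI (from the answer)
deepmind_staff = 700             # DeepMind staff per Dec 2017 reports (from the answer)

# Hypothetical fudge factors:
conference_overlap = 0.3    # assumed fraction of ICML attendees who also attended NeurIPS
lab_overlap = 0.8           # assumed fraction of Google AI / DeepMind staff already counted
researcher_fraction = 0.6   # assumed fraction of the pool advancing general capabilities

conference_pool = neurips_2018_attendees + icml_2018_attendees * (1 - conference_overlap)
lab_pool = (google_ai_researchers + deepmind_staff) * (1 - lab_overlap)
estimate = (conference_pool + lab_pool) * researcher_fraction

print(f"rough estimate: ~{estimate:,.0f} capabilities researchers")
# Prints a figure on the order of 10^4, consistent with the 10,000 suggested above.
```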

comment by Raemon · 2019-03-30T01:00:53.070Z · LW(p) · GW(p)

Thanks!

comment by Raemon · 2019-03-30T06:22:38.555Z · LW(p) · GW(p)

(This seems like a reasonable answer for the "number of capabilities researchers" part of the question. Still interested in answers for capabilities funding, safety researchers, and safety funding.)

answer by SoerenMind · 2019-04-21T20:19:10.758Z · LW(p) · GW(p)

In September 2018 I counted 37 researchers with a safety focus, plus MIRI's researchers. These are mostly aimed at AGI and are at least at PhD level. I also counted 38 people who do safety work at various levels of part-time involvement. I can email the spreadsheet; you can also find it in 80k's safety Google group.

answer by DanielFilan · 2019-03-03T23:52:27.640Z · LW(p) · GW(p)

By my quick mental count, CHAI's Berkeley branch had something like the equivalent of 8 to 11 researchers focussing on AI alignment in 2018. Kind of tricky to count because we had new PhD students coming in in August, as well as some interns over the summer (some of whom stayed on for longer periods).

comment by Raemon · 2019-03-04T01:34:16.831Z · LW(p) · GW(p)

Hmm. I notice that in the case of AI safety, it's probably possible to just literally count the researchers by hand. I assume for "broader work on AI" it'd be necessary to consult some kind of research that had already counted them, since there's just way too much stuff going on.

comment by DanielFilan · 2019-03-04T05:22:11.976Z · LW(p) · GW(p)

I notice that in the case of AI safety, it's probably possible to just literally count the researchers by hand.

I think this is probably not true for the average LW reader, or even the average person who's kind of interested in AI alignment, since many orgs are sort of opaque about how many people work there and what team people are on. For example, my guess is that most people don't know how many interns CHAI takes, or how many new PhD students we get in a given year; similarly, I'm not even confident that I could name everybody on OpenAI's safety team without someone to catch my errors.

I assume for "broader work on AI" it'd be necessary to consult some kind of research that had already counted them, since there's just way too much stuff going on.

Seems correct to me.

comment by Raemon · 2019-03-04T08:21:02.445Z · LW(p) · GW(p)

Nod. I didn't mean you could count them trivially, but I hadn't even been thinking of 'someone from each org just mentions their approximate number of researchers and then you add them up' as a possible solution.
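A minimal sketch of what that aggregation could look like, assuming each org reports a rough range. Only CHAI's range comes from this thread; the other entries are hypothetical placeholders, not real counts.

```python
# Sketch of the "each org reports an approximate count, then add them up" approach.
# Only CHAI's range is from this thread; the other orgs and numbers are placeholders.

org_counts = {
    "CHAI (Berkeley)": (8, 11),     # DanielFilan's estimate above
    "Hypothetical org B": (5, 10),  # placeholder
    "Hypothetical org C": (3, 6),   # placeholder
}

low = sum(lo for lo, _ in org_counts.values())
high = sum(hi for _, hi in org_counts.values())
print(f"estimated full-time safety researchers: {low}-{high}")
```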

answer by Raemon · 2019-04-10T21:36:33.582Z · LW(p) · GW(p)

I think the "number of AI safety researchers" part of the question turned out to be at least partially answered by this website, although I haven't yet reviewed it that thoroughly.

2 comments


comment by catherio · 2019-03-12T08:30:52.967Z · LW(p) · GW(p)

Two observations:

  • I'd expect that most "AI capabilities research" that goes on today isn't meaningfully moving us towards AGI at all, let alone aligned AGI. For example, applying reinforcement learning to hospital data. So "how much $ went to AI in 2018" would be a sloppy upper bound on "important thoughts/ideas/tools on the path to AGI".
  • There's a lot of non-capabilities non-AGI research targeted at "making the thing better for humanity, not more powerful". For example, interpretability work on models simpler than convnets, or removing bias from word embeddings. If by "AI safety" you mean "technical AGI alignment" or "reducing x-risk from advanced AI" this category definitely isn't that, but it also definitely isn't "AI capabilities" let alone "AGI capabilities".
comment by Raemon · 2019-03-12T20:19:16.026Z · LW(p) · GW(p)

Nod. Definitely open to better versions of the question that carve at more useful joints. (With a caveat that the question is more oriented towards "what are the easiest street lamps to look under" than "what is the best approximation")

So, I guess my return question is: do you have suggestions for subfields to focus on, or exclude from, "AI capabilities research" that more reliably point toward "AGI", and for which you think public data is likely to exist? (Or some other way to carve up the AI research space.)

It does seem good to have a category for things like removing bias from word embeddings that is separate from "technical AGI alignment". (I think it's still useful to have a sense of how much effort humanity is putting into that, just as a rough pointer at where our overall priorities are.)