AI x-risk reduction: why I chose academia over industry

post by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2021-03-14T17:25:12.503Z · LW · GW · 14 comments


I've been leaning towards a career in academia for >3 years, and recently got a tenure track role at Cambridge.  This post sketches out my reasoning for preferring academia over industry.

Thoughts on Industry Positions:
A lot of people working on AI x-risk seem to think it's better to be in industry.  I think the main arguments for that side of things are:

I think these are good reasons, but far from definitive.  
I'll also note that nobody seems to be going to Google, even though they are arguably the most likely to develop AGI, since 1) they are bigger, publish more, and have more resources, and 2) they can probably steal from DeepMind to some extent.  So if you ARE going to industry, please consider working for Google.  Also consider Chinese companies.

My reasons for preferring academia:

Main crux: timelines?
A lot of people think academia only makes sense if you have longer timelines.  I think this is likely true to some extent, but I think academia starts to look like a clear win within 5-10 years, so you need to be quite confident in very short timelines to think industry is a better bet.  Personally, I'm also quite pessimistic about our chances for success if timelines are that short; I think we have more leverage if timelines are longer, so it might make sense to hope that we're lucky enough to live in a world where AGI is at least a decade away.

Conclusion:
I think the main cruxes for this choice are:
1) timelines
2) personal fit
3) expected source of impact.

I discussed (1) and (2) already.  By (3), I mean roughly: "Do you expect the research you personally conduct/lead to be your main source of impact?  Or do you think your influence on others (e.g. mentoring students, winning hearts and minds of other researchers and important decision makers) will have a bigger impact?"  I think for most people, influencing others could easily be a bigger source of impact, and I think more people working on reducing AI x-risk should focus on that more.  
But if someone has a clear research agenda, a model of how it will substantially reduce x-risk, and a well-examined belief that their counter-factual impact on pushing the agenda forward is large, then I think there's a strong case for focusing on direct impact.  I don't think this really applies to me; all of the technical research I can imagine doing seems to have a fairly marginal impact.

I've discussed this question with a good number of people, and I think I've generally found my pro-academia arguments to be stronger than their pro-industry arguments (I think probably many of them would agree?).  I'd love to hear arguments people think I've missed.
EDIT: in the above, I wanted to say something more like: "I think the average trend in these conversations has been for people to update in the direction of academia being more valuable than they thought coming into the conversation".  I think this is true and important, but I'm not very confident in it, and I know I'm not providing any evidence... take it with a grain of salt I guess :).
 

14 comments


comment by Rohin Shah (rohinmshah) · 2021-03-15T19:06:27.167Z · LW(p) · GW(p)

I've discussed this question with a good number of people, and I think I've generally found my pro-academia arguments to be stronger than their pro-industry arguments (I think probably many of them would agree?)

I... think we've discussed this? But I don't agree, at least insofar as the arguments are supposed to apply to me as well (so e.g. not the personal fit part).

Some potential disagreements:

  1. I expect more field growth via doing good research that exposes more surface area for people to tackle, rather than via mentoring people directly. Partly this is because people can get mentorship from generic PhD programs, and because a lot of my research aims to be conceptual clarification of the field as a whole. That second reason may not apply to you.
  2. I wouldn't say "radical governance solutions" or "political activism" are "likely necessary" (though I wouldn't say they are "likely unnecessary" either, it just seems pretty uncertain).
  3. I didn't notice you talking about early research being more impactful than later research, which seems like an important factor to the extent you think you'll do better + faster research in industry relative to academia (as I do believe).
  4. You mention "all the usual reasons for preferring industry" -- I want to note that those seem like pretty strong reasons; idk how strong you think those are. (I'd also note salary in addition to the ones you mention -- even altruistically, you can donate a much larger amount on a typical industry salary.)

Personally, I find the "bully pulpit" argument for academia most persuasive.

Btw, planned summary for the Alignment Newsletter:

This post and its comments discuss considerations that impact whether new PhD graduates interested in reducing AI x-risk should work in academia or industry.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2021-03-16T01:56:16.393Z · LW(p) · GW(p)

Yeah we've definitely discussed it!  Rereading what I wrote, I did not clearly communicate what I intended to... I wanted to say that "I think the average trend was for people to update in my direction".  I will edit it accordingly.

I think the strength of the "usual reasons" has a lot to do with personal fit and what kind of research one wants to do.  Personally, I basically didn't consider salary as a factor.

comment by Srdjan Miletic (srdjan-miletic) · 2021-03-14T18:52:09.811Z · LW(p) · GW(p)

I think one thing to consider is that the two paths don't have an equal % chance to succeed. Getting a tenure track position at a top 20 university is hard. Really hard. Getting a research scientist position is, based on my very uncertain and informal understanding, less hard.

comment by jsteinhardt · 2021-03-15T05:52:28.441Z · LW(p) · GW(p)

This doesn't seem so relevant to capybaralet's case, given that he was choosing whether to accept an academic offer that was already extended to him.

comment by Adrià Garriga-alonso (rhaps0dy) · 2021-03-15T19:05:38.965Z · LW(p) · GW(p)

Also, you can try for a top-20 tenure-track position and, if you don't get it, "fail" gracefully into a research scientist position. The paths for the two of them are very similar (± 2 years of postdoctoral academic work).

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-03-15T08:51:03.742Z · LW(p) · GW(p)

When you say academia looks like a clear win within 5-10 years, is that assuming "academia" means "starting a tenure-track job now?" If instead one is considering whether to begin a PhD program, for example, would you say that the clear win range is more like 10-15 years?

Also, how important is being at a top-20 institution? If the tenure track offer was instead from University of Nowhere, would you change your recommendation and say go to industry?

Would you agree that if the industry project you could work on is the one that will eventually build TAI (or be one of the leading builders, if there are multiple) then you have more influence from inside than from outside in academia?

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2021-03-16T01:49:02.716Z · LW(p) · GW(p)

When you say academia looks like a clear win within 5-10 years, is that assuming "academia" means "starting a tenure-track job now?" If instead one is considering whether to begin a PhD program, for example, would you say that the clear win range is more like 10-15 years?

Yes.  

Also, how important is being at a top-20 institution? If the tenure track offer was instead from University of Nowhere, would you change your recommendation and say go to industry?

My cut-off was probably somewhere between top-50 and top-100, and I was prepared to go anywhere in the world.  If I couldn't make it into the top 100, I think I would definitely have reconsidered academia.  If you're ready to go anywhere, I think it makes it much easier to find somewhere with high EV (but you might have to move up the risk/reward curve a lot).

Would you agree that if the industry project you could work on is the one that will eventually build TAI (or be one of the leading builders, if there are multiple) then you have more influence from inside than from outside in academia?

Yes.  But ofc it's hard to know if that's the case.  I also think TAI is a less important category for me than x-risk inducing AI.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-03-16T08:44:00.213Z · LW(p) · GW(p)

Makes sense. I think we don't disagree dramatically then.

I also think TAI is a less important category for me than x-risk inducing AI.

Also makes sense -- just checking, does x-risk-inducing AI roughly match the concept of "AI-induced potential point of no return" [LW · GW] or is it importantly different? It's certainly less of a mouthful so if it means roughly the same thing maybe I'll switch terms. :)

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2021-03-17T01:15:04.204Z · LW(p) · GW(p)

um sorta modulo a type error... risk is risk.  It doesn't mean the thing has happened (we need to start using some sort of phrase like "x-event" or something for that, I think).

comment by Vaughn Papenhausen (Ikaxas) · 2021-03-17T16:21:00.635Z · LW(p) · GW(p)

I've started using the phrase "existential catastrophe" in my thinking about this; "x-catastrophe" doesn't really have much of a ring to it though, so maybe we need something else that abbreviates better?

comment by Gerald Monroe (gerald-monroe) · 2021-03-15T05:06:37.575Z · LW(p) · GW(p)

My personal thought is:

a.  Fundamental algorithms are cool and critical contributions to CS.  But where did we get the stuff we have now?  Arguably, what we have now is (1) absurdly powerful silicon devices and (2) a large amount of open source and proprietary software.  Almost all of this was developed by industry or by non-academic open source contributors.

b.  What will it take to make AGI a reality?  The thing is, I think it is similar to comparing Wernher von Braun's work before he had a nation backing him, and after.  I think the most probable route to AGI is as follows:

     1.  Today we are trying to build practical systems that work as [sensor inputs] -> [intermediate state abstractions: e.g. collidable objects on a grid around the agent] -> [goal state abstractions: e.g. predicted $ for each path the agent takes, with negative dollars for risks].  They are fairly simple.  (See the toy sketch after this list.)

     2.  I think AGI will essentially be "more meta".  There will be many more feeder subsystems that supply more complex intermediate states, and more layers of meta-states that ultimately result in high-level abstractions like 'awareness' and 'self-desires' and so on.
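A minimal toy sketch of the kind of pipeline described in point 1 - the names, thresholds, and payoffs are invented purely for illustration, not taken from any real system:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Obstacle:
    x: int
    y: int  # a collidable object on a grid around the agent

def perceive(sensor_inputs: List[List[float]]) -> List[Obstacle]:
    """Sensor inputs -> intermediate state abstraction: obstacles on a grid."""
    return [Obstacle(x, y)
            for x, row in enumerate(sensor_inputs)
            for y, reading in enumerate(row)
            if reading > 0.5]  # arbitrary detection threshold for the sketch

def evaluate_path(path: List[Tuple[int, int]], obstacles: List[Obstacle]) -> float:
    """Path -> goal state abstraction: predicted $, with negative $ for risks."""
    dollars = 10.0  # assumed payoff for completing the path
    for (x, y) in path:
        if any(o.x == x and o.y == y for o in obstacles):
            dollars -= 100.0  # predicted collision
    return dollars

# usage: pick the best of a few candidate paths
sensors = [[0.0, 0.9], [0.1, 0.0]]
obstacles = perceive(sensors)
paths = [[(0, 0), (1, 0)], [(0, 0), (0, 1)]]
print(max(paths, key=lambda p: evaluate_path(p, obstacles)))  # -> [(0, 0), (1, 0)]
```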

To me, all this looks like immense scale.  You need a gigantic software infrastructure that gets reused thousands of times over (a modern example: WordPress), a massive hardware infrastructure to host it, and giga-dollar budgets.

Also, I think that many of the fundamental R&D steps - finding better activation functions, better neural network architectures, optimal configurations for a given problem, alternative algorithms - can be done better by massive autonomous systems that explore the possibility space.  I don't think human researchers will be able to contribute much directly.  Here's one paper as an example.

If you want to be a researcher who can exploit this you need to be a damn good programmer, and you need a big budget for cloud runs.  
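A toy illustration of the "autonomous systems that explore the possibility space" idea - the search space and scoring function below are made up, and a real system would be running training jobs rather than scoring a formula:

```python
import random

SEARCH_SPACE = {
    "activation": ["relu", "tanh", "gelu", "swish"],
    "width": [64, 128, 256, 512],
    "depth": [2, 4, 8],
}

def sample_config():
    # draw one random point from the possibility space
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(config):
    # stand-in for an expensive training run; this toy objective just prefers
    # width 256, shallow networks, and gelu
    score = -abs(config["width"] - 256) / 256.0 - 0.1 * config["depth"]
    if config["activation"] == "gelu":
        score += 0.2
    return score

best = max((sample_config() for _ in range(50)), key=evaluate)
print(best)
```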

c.  Exponential progress is going to come not from throwing more humans at the problem, but from building clever software that bootstraps early progress in AI to make further progress.  For example, a neural network that generates potential functions for regression - functions which may not themselves be neural networks - to solve general regression problems.

comment by Ethan Perez (ethan-perez) · 2021-03-16T02:29:02.325Z · LW(p) · GW(p)

What are your thoughts for subfields of ML where research impact/quality depends a lot on having lots of compute?

In NLP, many people have the view that almost all of the high-impact work has come from industry over the past 3 years, and that this trend looks like it will continue indefinitely. Even safety-relevant work in NLP seems much easier to do with access to larger models with better capabilities (Debate/IDA are pretty hard to test without good language models). Thus, safety-minded NLP faculty might end up in a situation where none of their direct work is very impactful, and all of the expected impact comes from graduating students who go on to work in industry labs. How would you think about this kind of situation?

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2021-03-17T01:17:39.608Z · LW(p) · GW(p)

You can try to partner with industry, and/or advocate for big government $$$.
I am generally more optimistic about toy problems than most people, I think, even for things like Debate.
Also, scaling laws can probably help here.
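
A rough sketch of one way scaling laws could help a compute-limited group - the numbers below are made up for illustration: fit a power law to losses from small, affordable runs and extrapolate to a scale you can't train at.

```python
import numpy as np

# hypothetical (parameter count, validation loss) pairs from small training runs
n_params = np.array([1e6, 3e6, 1e7, 3e7, 1e8])
losses   = np.array([4.1, 3.7, 3.3, 3.0, 2.8])

# fit log(loss) = log(a) - b * log(N), i.e. loss ~ a * N^(-b)
slope, log_a = np.polyfit(np.log(n_params), np.log(losses), 1)
a, b = np.exp(log_a), -slope

# extrapolate to a model size the group cannot afford to train
n_big = 1e10
print(f"predicted loss at {n_big:.0e} params: {a * n_big ** (-b):.2f}")
```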

comment by flandry39 · 2022-09-08T04:33:37.738Z · LW(p) · GW(p)

Maybe we need a "something else" category?  An alternative other than simply business/industry and academia?

Also, while this is maybe something of an old topic, I took some notes regarding my thoughts on this topic and related matters, and posted them to:

   https://mflb.com/ai_alignment_1/academic_or_industry_out.pdf