Jobs that can help with the most important century

post by HoldenKarnofsky · 2023-02-10

Contents

  Recapping the major risks, and some things that could help
  Jobs that can help
    Low-guidance jobs
  Things you can do if you’re not ready for a full-time career change
  Some general advice
  Footnotes

Let’s say you’re convinced that AI could make this the most important century of all time for humanity. What can you do to help things go well instead of poorly?

I think the biggest opportunities come from a full-time job (and/or the money you make from it). I think people are generally far better at their jobs than they are at anything else.

This piece will list the jobs I think are especially high-value. I expect things will change (a lot) from year to year - this is my picture at the moment.

Here’s a summary:

| Role | Skills/assets you’d need |
| --- | --- |
| Research and engineering on AI safety | Technical ability (but not necessarily an AI background) |
| Information security to reduce the odds powerful AI is leaked | Security expertise, or willingness/ability to start in junior roles (AI background likely not needed) |
| Other roles at AI companies | Suitable for generalists (but major pros and cons) |
| Government and government-facing think tanks | Suitable for generalists (but probably takes a long time to have impact) |
| Jobs in politics | Suitable for generalists, if you have a clear view on which politicians to help |
| Forecasting to get a better handle on what’s coming | Strong forecasting track record (can be pursued part-time) |
| "Meta" careers | Misc / suitable for generalists |
| Low-guidance options | These ~only make sense if you read & instantly think "That’s me" |

A few notes before I give more detail:

The first section of this piece will recap my basic picture of the major risks, and the promising ways to reduce these risks (feel free to skip if you think you’ve got a handle on this).

The next section will elaborate on the options in the table above.

After that, I’ll talk about some of the things you can do if you aren’t ready for a full-time career switch yet, and give some general advice on how to avoid doing harm and how to avoid burnout.

Recapping the major risks, and some things that could help

I've cut this section from the email version of this piece to save space. If you'd like to read it, click here.

Jobs that can help

In this long section, I’ll list a number of jobs I wish more people were pursuing.

Unfortunately, I can’t give individualized help exploring one or more of these career tracks. Starting points could include 80,000 Hours and various other resources.

Research and engineering careers. You can contribute to alignment research as a researcher and/or software engineer (the line between the two can be fuzzy in some contexts).

There are (not necessarily easy-to-get) jobs along these lines at major AI labs, in established academic labs, and at independent nonprofits (examples in footnote).2

Different institutions will have very different approaches to research, very different environments and philosophies, etc., so it’s hard to generalize about what might make someone a fit. A few high-level points:

I also want to call out a couple of categories of research that are getting some attention today, but seem at least a bit under-invested in, even relative to alignment research:

Information security careers. There’s a big risk that a powerful AI system could be “stolen” via hacking/espionage, and this could make just about every kind of risk worse. I think it could be very challenging - but possible - for AI projects to be secure against this threat. (More above.)

I really think security is not getting enough attention from people concerned about AI risk, and I disagree with the idea that key security problems can be solved just by hiring from today’s security industry.

Other jobs at AI companies. AI companies hire for a lot of roles, many of which don’t require any technical skills.

It’s a somewhat debatable/tricky path to take a role that isn’t focused specifically on safety or security. Some people believe7 that you can do more harm than good this way, by helping companies push forward with building dangerous AI before the risks have gotten much attention or preparation - and I think this is a pretty reasonable take.

At the same time:

How a careful AI project could be helpful (Details not included in email - click to view on the web)

80,000 Hours has a collection of anonymous advice on how to think about the pros and cons of working at an AI company.

In a future piece, I’ll discuss what I think AI companies can be doing today to prepare for transformative AI risk. This could be helpful for getting a sense of what an unusually careful AI company looks like.

Jobs in government and at government-facing think tanks. I think there is a lot of value in providing quality advice to governments (especially the US government) on how to think about AI - both today’s systems and potential future ones.

I also think it could make sense to work on other technology issues in government, which could be a good path to working on AI later (I expect government attention to AI to grow over time).

People interested in careers like these can check out Open Philanthropy’s Technology Policy Fellowships.

One related activity that seems especially valuable: understanding the state of AI in countries other than the one you’re working for/in - particularly countries that (a) have a good chance of developing their own major AI projects down the line; (b) are difficult to understand much about by default.

A future piece will discuss other things I think governments can be doing today to prepare for transformative AI risk. I won’t have a ton of tangible recommendations quite yet, but I expect there to be more over time, especially if and when standards and monitoring frameworks become better-developed.

Jobs in politics. The previous category focused on advising governments; this one is about working on political campaigns, doing polling analysis, etc. to generally improve the extent to which sane and reasonable people are in power. Obviously, it’s a judgment call which politicians are the “good” ones and which are the “bad” ones, but I didn’t want to leave out this category of work.

Forecasting. I’m intrigued by organizations like Metaculus, Hypermind, Good Judgment,9 Manifold Markets, and Samotsvety - all trying, in one way or another, to produce good probabilistic forecasts (using generalizable methods10) about world events.

If we could get good forecasts about questions like “When will AI systems be powerful enough to defeat all of humanity?” and “Will AI safety research in category X be successful?”, this could be useful for helping people make good decisions. (These questions seem very hard to get good predictions on using these organizations’ methods, but I think it’s an interesting goal.)

To explore this area, I’d suggest learning about forecasting generally (Superforecasting is a good starting point) and building up your own prediction track record on sites such as the above.
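
To make "prediction track record" a bit more concrete, here’s a minimal sketch (not from the original post; the questions and numbers are made up) of how such a record is commonly scored, using the Brier score - an accuracy measure in the same family as what sites like those above report:

```python
# Minimal illustrative sketch (hypothetical data): scoring a personal
# forecasting track record with the Brier score. Lower is better:
# 0.0 is perfect, and always answering "50%" scores 0.25.

forecasts = [
    # (probability assigned to "yes", actual outcome: 1 = yes, 0 = no)
    (0.80, 1),
    (0.30, 0),
    (0.60, 0),
    (0.90, 1),
]

brier = sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)
print(f"Brier score over {len(forecasts)} questions: {brier:.3f}")  # 0.125
```

Platforms differ in the exact scoring rules they use, but the basic idea - assign explicit probabilities, then score them against what actually happened - is the same.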

“Meta” careers. There are a number of jobs focused on helping other people learn about key issues, develop key skills and end up in helpful jobs (a bit more discussion here).

It can also make sense to take jobs that put one in a good position to donate to nonprofits doing important work, to spread helpful messages, and to build skills that could be useful later (including in unexpected ways, as things develop), as I’ll discuss below.

Low-guidance jobs

This sub-section lists some projects that either don’t exist (but seem like they ought to), or are in very embryonic stages. So it’s unlikely you can get any significant mentorship working on these things.

I think the potential impact of making one of these work is huge, but I think most people will have an easier time finding a fit with jobs from the previous section (which is why I listed those first).

This section is largely to illustrate that I expect there to be more and more ways to be helpful as time goes on - and in case any readers feel excited and qualified to tackle these projects themselves, despite a lack of guidance and a distinct possibility that a project will make less sense in reality than it does on paper.

A big one in my mind is developing safety standards that could be used in a standards and monitoring regime. By this I mean answering questions like:

There is some early work going on along these lines, at both AI companies and nonprofits. If it goes well, I expect that there could be many jobs in the future, doing things like:

Other public goods for AI projects. I can see a number of other ways in which independent organizations could help AI projects exercise more caution / do more to reduce risks:

Thinking and stuff. There’s tons of potential work to do in the category of “coming up with more issues we ought to be thinking about, more things people (and companies and governments) can do to be helpful, etc.”

Some AI companies have policy teams that do work along these lines. And a few Open Philanthropy employees work on topics along the lines of the first bullet point. However, I tend to think of this work as best done by people who need very little guidance (more at my discussion of wicked problems), so I’m hesitant to recommend it as a mainline career option.

Things you can do if you’re not ready for a full-time career change

Switching careers is a big step, so this section lists some ways you can be helpful regardless of your job - including preparing yourself for a later switch.

First and most importantly, you may have opportunities to spread key messages via social media, talking with friends and colleagues, etc. I think there’s a lot of potential to make a difference here, and I wrote a piece on this specifically.

Second, you can explore potential careers like those I discuss above. I’d suggest generally checking out job postings, thinking about what sorts of jobs might be a fit for you down the line, meeting people who work in jobs like those and asking them about their day-to-day, etc.

Relatedly, you can try to keep your options open.

Right now there aren’t a lot of obvious places to donate (though you can donate to the Long-Term Future Fund12 if you feel so moved).

Learning more about key issues could broaden your options. I think the full series I’ve written on key risks is a good start. To do more, you could:

Finally, if you happen to have opportunities to serve on governing boards or advisory boards for key organizations (e.g., AI companies), I think this is one of the best non-full-time ways to help.

Some general advice

I think full-time work has huge potential to help, but also big potential to do harm, or to burn yourself out. So here are some general suggestions.

Think about your own views on the key risks of AI, and what it might look like for the world to deal with the risks. Most of the jobs I’ve discussed aren’t jobs where you can just take instructions and apply narrow skills. The issues here are tricky, and it takes judgment to navigate them well.

Furthermore, no matter what you do, there will almost certainly be people who think your work is useless (if not harmful).19 This can be very demoralizing. I think it’s easier if you’ve thought things through and feel good about the choices you’re making.

I’d advise trying to learn as much as you can about the major risks of AI (see above for some guidance on this) - and/or trying to work for an organization whose leadership you have a good amount of confidence in.

Jog, don’t sprint. Skeptics of the “most important century” hypothesis will sometimes say things like “If you really believe this, why are you working normal amounts of hours instead of extreme amounts? Why do you have hobbies (or children, etc.) at all?” And I’ve seen a number of people with an attitude like: “THIS IS THE MOST IMPORTANT TIME IN HISTORY. I NEED TO WORK 24/7 AND FORGET ABOUT EVERYTHING ELSE. NO VACATIONS."

I think that’s a very bad idea.

Trying to reduce risks from advanced AI is, as of today, a frustrating and disorienting thing to be doing. It’s very hard to tell whether you’re being helpful (and as I’ve mentioned, many will inevitably think you’re being harmful).

I think the difference between “not mattering,” “doing some good” and “doing enormous good” comes down to how you choose the job, how good at it you are, and how good your judgment is (including what risks you’re most focused on and how you model them). Going “all in” on a particular objective seems bad on these fronts: it poses risks to open-mindedness, to mental health and to good decision-making (I am speaking from observations here, not just theory).

That is, I think it’s a bad idea to try to be 100% emotionally bought into the full stakes of the most important century - I think the stakes are just too high for that to make sense for any human being.

Instead, I think the best way to handle “the fate of humanity is at stake” is probably to find a nice job and work about as hard as you’d work at another job, rather than trying to make heroic efforts to work extra hard. (I criticized heroic efforts in general here.)

I think this basic formula (working in some job that is a good fit, while having some amount of balance in your life) is what’s behind a lot of the most important positive events in history to date, and presents possibly historically large opportunities today.

Special thanks to Alexander Berger, Jacob Eliosoff, Alexey Guzey, Anton Korinek and Luke Muehlhauser for especially helpful comments on this post. A lot of other people commented helpfully as well.



Footnotes


  1. I use “aligned” to specifically mean that AIs behave as intended, rather than pursuing dangerous goals of their own. I use “safe” more broadly to mean that an AI system poses little risk of catastrophe for any reason in the context it’s being used in. It’s OK to mostly think of them as interchangeable in this post. 

  2. AI labs with alignment teams: Anthropic, DeepMind and OpenAI. Disclosure: my wife is co-founder and President of Anthropic, and used to work at OpenAI (and has shares in both companies); OpenAI is a former Open Philanthropy grantee.

    Academic labs: there are many of these; I’ll highlight the Steinhardt lab at Berkeley (Open Philanthropy grantee), whose recent research I’ve found especially interesting.

    Independent nonprofits: examples would be Alignment Research Center and Redwood Research (both Open Philanthropy grantees, and I sit on the board of both).

    You can also  

  3. Examples: AGI Safety Fundamentals, SERI MATS, MLAB (all of which have been supported by Open Philanthropy).

  4. On one hand, deceptive and manipulative AIs could be dangerous. On the other, it might be better to get AIs trying to deceive us before they can consistently succeed; the worst of all worlds might be getting this behavior by accident with very powerful AIs. 

  5. Though I think it’s inherently harder to get evidence of low risk than evidence of high risk, since it’s hard to rule out risks arising as AI systems get more capable.

  6. Why do I simultaneously think “This is a mature field with mentorship opportunities” and “This is a badly neglected career track for helping with the most important century”?

    In a nutshell, most good security people are not working on AI. It looks to me like there are plenty of people who are knowledgeable and effective at security, but there’s also a huge amount of demand for such people outside of AI specifically.

    I expect this to change eventually if AI systems become extraordinarily capable. The issue is that it might be too late at that point - the security challenges in AI seem daunting (and somewhat AI-specific) to the point where it could be important for good people to start working on them many years before AI systems become extraordinarily powerful. 

  7. Here’s Katja Grace arguing along these lines.

  8. An Open Philanthropy grantee. 

  9. Open Philanthropy has funded Metaculus and contracted with Good Judgment and Hypermind.

  10. That is, these groups are mostly trying things like “Incentivize people to make good forecasts; track how good people are at making forecasts; aggregate forecasts” rather than “Study the specific topic of AI and make forecasts that way” (the latter is also useful, and I discuss it below).

  11. The governing board of an organization has the hard power to replace the CEO and/or make other decisions on behalf of the organization. An advisory board merely gives advice, but in practice I think this can be quite powerful, since I’d expect many organizations to have a tough time doing bad-for-the-world things without backlash (from employees and the public) once an advisory board has recommended against them. 

  12. Open Philanthropy, which I’m co-CEO of, has supported this fund, and its current Chair is an Open Philanthropy employee. 

  13. I generally expect there to be more and more clarity about what actions would be helpful, and more and more people willing to work on them if they can get funded. A bit more specifically and speculatively, I expect AI safety research to get more expensive as it requires access to increasingly large, expensive AI models. 

  14. Not investment advice! I would only do this with money you’ve set aside for donating such that it wouldn’t be a personal problem if you lost it all. 

  15. Some options here, here, here, here. I’ve made no attempt to be comprehensive - these are just some links that should make it easy to get rolling and see some of your options.

  16. Spinning Up in Deep RL, ML for Alignment Bootcamp, Deep Learning Curriculum

  17. For the basics, I like Michael Nielsen’s guide to neural networks and deep learning; 3Blue1Brown has a video explainer series that I haven’t watched but that others have recommended highly. I’d also suggest The Illustrated Transformer (the transformer is the most important AI architecture as of today).

    For a broader overview of different architectures, see Neural Network Zoo.

    You can also check out various Coursera etc. courses on deep learning/neural networks. 

  18. I feel like the easiest way to do this is to follow AI researchers and/or top labs on Twitter. You can also check out Alignment Newsletter or ML Safety Newsletter for alignment-specific content. 

  19. Why?

    One reason is the tension between the “caution” and “competition” frames: people who favor one frame tend to see the other as harmful.

    Another reason: there are a number of people who think we’re more-or-less doomed without a radical conceptual breakthrough on how to build safe AI (they think the sorts of approaches I list here are hopeless, for reasons I confess I don’t understand very well). These folks will consider anything that isn’t aimed at a radical breakthrough ~useless, and consider some of the jobs I list in this piece to be harmful, if they are speeding up AI development and leaving us with less time for a breakthrough.

    At the same time, working toward the sort of breakthrough these folks are hoping for means doing pretty esoteric, theoretical research that many other researchers think is clearly useless.

    And trying to make AI development slower and/or more cautious is harmful according to some people who are dismissive of risks, and think the priority is to push forward as fast as we can with technology that has the potential to improve lives. 
