Earning money with/for work in AI safety

post by rmoehn · 2016-07-18T05:37:55.551Z · LW · GW · Legacy · 31 comments

(I'm re-posting my question from the Welcome thread, because nobody answered there.)

I care about the current and future state of humanity, so I think it's good to work on existential or global catastrophic risk. Since I studied computer science at university until last year, I decided to work on AI safety. Currently I'm a research student at Kagoshima University doing exactly that. Before this April I had only a little experience with AI or ML. Therefore, I'm slowly digging through books and articles in order to be able to do research.

I'm living off my savings. My research student time will end in March 2017 and my savings will run out some time after that. Nevertheless, I want to continue AI safety research, or at least work on X or GC risk.

I see three ways of doing this:


Oh, and I need to be location-independent or based in Kagoshima.

I know http://futureoflife.org/job-postings/, but all of those job postings fail me in two ways: they're not location-independent, and they require more or different experience than I have.

Can anyone here help me? If yes, I would be happy to provide more information about myself.

(Note that I think I'm not in a precarious situation, because I would be able to get a remote software development job fairly easily. Just not in AI safety or X or GC risk.)

31 comments

comment by Kaj_Sotala · 2016-07-19T12:19:39.355Z · LW(p) · GW(p)

I'm currently working on an AI strategy project for the Foundational Research Institute; they are hiring and don't require a lot of prior experience:

Requirements

  • Language requirement is research proficiency in English.
  • We anticipate that an applicant is dedicated to alleviating and preventing suffering, and considers it the top global priority.
  • A successful applicant will probably have a background in quantitative topics such as game theory, decision theory, computer science, physics, or math. But we welcome applicants regardless of background.
  • Peer-reviewed publications or a track record of completed comparable research output is not required, but a plus.
  • There is no degree requirement, although a PhD is an advantage, all else equal.

Their open research questions include a number of AI-related ones, and I expect many of them to still have plenty of low-hanging fruit. I'm working on getting a better handle on hard takeoff scenarios in general; most of my results so far can be found on my website under the "fri-funded" tag. (I haven't posted anything new in a while, because I'm working on a larger article that's been taking some time.)

Replies from: rmoehn, qmotus
comment by rmoehn · 2016-07-20T06:52:40.185Z · LW(p) · GW(p)

Thanks! I hadn't come across the Foundational Research Institute yet.

Though, hmm, not a lot of experience? If there's talk of PhDs as an advantage, it sounds to me like they're looking for people with PhD-level experience. I'm far from that. But unless you say »oh well then maybe not«, I'll apply. Who knows what will come out of it.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2016-07-21T05:58:13.182Z · LW(p) · GW(p)

I don't have a PhD either, and I know of at least one other person who'd been discussing working for them who was also very far from having that level of experience.

comment by qmotus · 2016-07-21T17:14:13.515Z · LW(p) · GW(p)

Will your results ultimately take the form of blog posts such as those, or peer-reviewed publications, or something else?

I think FRI's research agenda is interesting and that they may very well be working on important questions that hardly anyone else does. But I haven't yet supported them, because I'm not certain about their ability to deliver actual results or about the impact of their research, and I find it a tad odd that they're supported by effective altruism organizations when I don't see any demonstration of effectiveness so far. (No offence though, it looks promising.)

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2016-07-22T07:24:13.602Z · LW(p) · GW(p)

The final output of this project will be a long article, either on FRI's website or a peer-reviewed publication or both; we haven't decided on that yet.

comment by James_Miller · 2016-07-18T18:05:02.593Z · LW(p) · GW(p)

You could get a job at one of the big computer firms that might someday develop AI to position yourself to influence the development of AI.

comment by John_Maxwell (John_Maxwell_IV) · 2016-07-18T05:53:09.271Z · LW(p) · GW(p)

That's awesome that you are looking to work on AI safety. Here are some options that I don't see you mentioning:

  • If you're able to get a job working on AI or machine learning, you'll be getting paid to improve your skills in that area. So you might choose to direct your study and independent projects towards building a resume for AI work (e.g. by participating in Kaggle competitions).

  • If you get into the right graduate program, you'll be able to take classes and do research into AI and ML topics.

  • Probably quite difficult, but if you're able to create an app that uses AI or machine learning to make money, you'd fulfill the goals of making money and studying AI at the same time. For example, you could earn money through this stock market prediction competition.

  • 80,000 Hours has a guide on using your career to work on AI risk.

  • MIRI has set up a research guide for getting the background necessary to do AI safety work. (Note that if MIRI is correct, your understanding of math may be much more important than your understanding of AI for doing AI safety research. So the previous plans I suggested might look less attractive. The best path might be to aim for a job doing AI work, and then once you have that, start studying the math relevant to AI safety part-time.)

BTW, the x-risk career network is also a good place to ask questions like this. (Folks on that mailing list are probably more qualified than I am to answer this question, but they don't browse LW that often.)

Replies from: rmoehn
comment by rmoehn · 2016-07-19T01:54:06.990Z · LW(p) · GW(p)

Thanks for your varied suggestions!

Actually I'm somewhat more comfortable with MIRI math than with ML math, but the research group here is more interested in machine learning. If I recommended that they look into provability logic, they would get big eyes and say Whoa!, but no more. If, however, I do ML research in the direction of AI safety, they would get interested. (And they are getting interested, but (1) they can't switch their research too quickly and (2) I don't know enough Japanese, and the students don't know enough English, to make any kind of lunchtime or hallway conversation about AI safety possible.)

Replies from: ChristianKl
comment by ChristianKl · 2016-07-19T15:51:26.674Z · LW(p) · GW(p)

It seems like Toyota has some interest in provably correct software: https://www.infoq.com/news/2015/05/provably-correct-software.

comment by RyanCarey · 2016-07-18T06:35:16.851Z · LW(p) · GW(p)

I would endorse what John Maxwell has said but would be interested to hear more details.

After graduating, why would you need to be based in Kagoshima? Most postdocs travel around the world a lot in order to be with the leading experts, and x-risk research is no different.

Have you taken a look at the content on MIRI's Agent Foundations forum?

Have you considered running a MIRIx workshop to practice AI safety research?

Have you considered applying to visit AI safety researchers at MIRI or FHI? That would help you to figure out where your interests and theirs overlap, and to consider how you might contribute. If you're not eligible to visit for some reason, that might imply that you're further from being useful than you thought.

Good luck!

Replies from: rmoehn
comment by rmoehn · 2016-07-19T01:42:05.880Z · LW(p) · GW(p)

Thank you!

After graduating, why would you need to be based in Kagoshima?

I need to be based in Kagoshima for pretty strong personal reasons. Sorry for not providing details. If you really need them, I can tell you more via PM.

Ah, you write »after graduating«? Sorry for not providing that detail: research students in Japan are not working on a master's or PhD. They're just hanging around studying or doing research, hopefully learning something during that time.

Have you taken a look at the content on MIRI's Agent Foundations forum?

Yes, I've read all of the agenda papers and some more.

Have you considered applying to visit AI safety researchers at MIRI or FHI? That would help you to figure out where your interests and theirs overlap, and to consider how you might contribute.

I applied for the MIRI Summer Fellows Programme, which I missed getting into by a small margin, and for CFAR's Workshop on AI Safety Strategy, which I also didn't get into. They told me they might put me in the next one. That would definitely help me with my questions, but I thought it better to start early, so I asked here.

If you're not eligible to visit for some reason, that might imply that you're further from being useful than you thought.

I am at the very beginning of learning ML and AI and therefore kind of far from being useful. I know this. But I'm quite good at maths and computer science and a range of other things, so I thought contributing to AI safety research shouldn't be too much of a stretch. It will just take time. (Just as a master's programme would take time, for example.) The hard part is getting hold of money to sustain myself during that time.

I might be useful for things other than research directly, such as support software development, teaching, writing, outreach, or organizing. I haven't done much teaching, outreach, or organizing, but I would be interested in trying more.

Replies from: RyanCarey
comment by RyanCarey · 2016-07-19T08:44:49.513Z · LW(p) · GW(p)

Out of the some dozens of AI researchers in our extended network, I don't really know of any who've managed to be taken very seriously without being co-located with other top researchers. So without knowing more, it still seems moderately likely to me that the best plan involves something like earning while practising math, or doing a PhD, with the intent to move in 2-3 years, depending on how long you're unable to move.

Otherwise, it seems like you're doing the right things, but until you put out some papers or something, I think I'd sooner direct funding to projects among the FLI grantees. I'd note that most of the credible LW/EA researchers are doing PhDs and postdocs or taking on AI safety research roles in industry, and receive funds through those avenues; it seems to me like those would also be the next steps for you in your career.

If you had a very new idea that you had an extraordinary comparative advantage at exploring, then it's not inconceivable that you could be among the most eligible GCR-reduction researchers for funding, but you'd have to say a lot more.

comment by Daniel_Burfoot · 2016-07-18T19:45:48.058Z · LW(p) · GW(p)

Coincidentally, I've recently been toying with the idea of setting up a consulting company which would allow people who want to work on "indy" research like AI safety to make money by working on programming projects part-time.

The key would be to 1) find fun/interesting consulting projects in areas like ML, AI, data science and 2) use the indy research as a marketing tool to promote the consulting business.

It should be pretty easy for good programmers with no family obligations to support themselves comfortably by working half-time on consulting projects.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2016-07-19T02:35:31.110Z · LW(p) · GW(p)

I recently had an idea for an app that would make use of natural language processing and provide a service to businesses doing online marketing. I think there's a pretty good chance businesses would be willing to pay for this service, and after a quick Google search, I don't think there are companies doing anything similar yet. If you or anyone else interested in AI safety wants to hear more, feel free to send me a PM.

Replies from: rmoehn
comment by rmoehn · 2016-07-21T04:44:31.891Z · LW(p) · GW(p)

I thought online marketing businesses were powerful enough…

comment by ChristianKl · 2016-07-18T11:17:44.723Z · LW(p) · GW(p)

Currently most AI safety research happens in the West. On the one hand, that means there are more jobs for it in the West. On the other hand, it might mean that AI safety is neglected in Japan and China.

How do you see the AI safety landscape in Japan?

Replies from: rmoehn
comment by rmoehn · 2016-07-19T02:14:59.373Z · LW(p) · GW(p)

Not much going on as far as I know. What I know is the following:

  • Naozumi Mitani has taught a course on Bostrom's Superintelligence and is »broadly pursuing the possible influence of AI on the future lives of humanity«. He's an associate professor of philosophy at Shinshu University (in Nagano).
  • The Center for Applied Philosophy and Ethics at Kyoto University also has some interest in AI impacts.
  • My supervisor is gradually getting interested, too. This is partly my influence, but also his own reading. For example, he found the Safely Interruptible Agents and Concrete Problems in AI Safety papers independently of me through Japanese websites. He's giving me chances to make presentations about AI safety for my fellow students and hopefully also for other professors.

Other than that I know of nobody, and from searching the web quickly, I didn't find out more. One problem here is that most students don't understand much English, so most of the AI safety literature is lost on them. The professors do know English, but maybe they're usually not inclined or able to change their research focus.

It's a good sign that my supervisor finds AI safety articles through Japanese websites, though.

Replies from: Kyre
comment by Kyre · 2016-07-19T04:01:21.127Z · LW(p) · GW(p)

Maybe translating AI safety literature into Japanese would be a high-value use of your time?

Replies from: rmoehn
comment by rmoehn · 2016-07-19T04:52:45.167Z · LW(p) · GW(p)

Yeah, that would be great indeed. Unfortunately my Japanese is so rudimentary that I can't even explain to my landlord that I need a big piece of cloth to hang in front of my window (just to name an example). :-( I'm making progress, but getting a handle on Japanese is about as time-consuming as getting a handle on ML, although more mechanical.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2016-07-21T11:13:53.492Z · LW(p) · GW(p)

Do you get the impression that Japan has numerous benevolent and talented researchers who could and would contribute meaningfully to AI safety work? If so, it seems possible to me that your comparative advantage is in evangelism rather than research (subject to the constraint that you're staying in Japan indefinitely). If you're able to send multiple qualified Japanese researchers west, that's potentially more than you'd be able to do as an individual.

You'd still want to have thorough knowledge of the issues yourself, if only to convince Japanese researchers that the problems were interesting.

Replies from: rmoehn
comment by rmoehn · 2016-07-29T03:50:49.823Z · LW(p) · GW(p)

Why should I send them west? Hopefully so that they learn and come back and produce researcher offspring? I'll see what I can do. – Nag my supervisor to take me to domestic conferences…

comment by rmoehn · 2016-07-19T00:55:10.630Z · LW(p) · GW(p)

Thanks for your responses! I will post some individual comments.

comment by Dagon · 2016-07-18T15:27:38.179Z · LW(p) · GW(p)

Consider more carefully your ranking of preferences, and expand your horizons quite a bit. There are lots of ways to improve the predicted future state of humanity that are less direct, but possibly more effective, than this particular topic.

I care about the current and future state of humanity

That's sweet of you. I'm glad.

so I think it's good to work on existential or global catastrophic risk

That's a pretty big jump. I'll grant that human existential risk is important, but why is your best contribution to work directly on it? Perhaps you'd do a lot more good with a slight reduction in shipping costs or tiny improvements in safety or enjoyment of some consumer product. In the likely case that your marginal contribution to x-risk doesn't save the world, a small improvement for a large number of people does massive amounts more good.

Regardless of whether you focus on x-risk or something else valuable, the fact that you won't consider leaving Kagoshima is an indication that you aren't as fully committed as you claim. IMO, that's ok: we all have personal desires that we put ahead of the rest of the world. But you should acknowledge it and include it in your calculations.

Replies from: rmoehn, ChristianKl
comment by rmoehn · 2016-07-19T01:17:15.102Z · LW(p) · GW(p)

In the likely case that your marginal contribution to x-risk doesn't save the world

So you think that other people could contribute much more to x-risk, and that I should go into areas where I can have a lot of impact? Otherwise, if everyone says »I'll only have a small impact on x-risk. I'll do something else.«, nobody would work on x-risk. Are you trying to get a better justification for working on x-risk out of me? At the moment I only have this: x-risk is pretty important, because we don't want to go extinct (I don't want humanity to go extinct or end up in some worse state than today). Not many people are working on x-risk. Therefore I do work on x-risk, so that there are more people working on it. Now you will tell me that I should start using numbers.

the fact that you won't consider leaving Kagoshima is an indication that you aren't as fully committed as you claim

What did I claim about my degree of commitment? And yes, I know that I would be more effective at improving the state of humanity if I didn't have certain preferences about family and such.

Anyway, thanks for pushing me towards quantitative reasoning.

Replies from: Dagon
comment by Dagon · 2016-07-19T13:52:53.767Z · LW(p) · GW(p)

So you think that other people could contribute much more to x-risk

"marginal" in that sentence was meant literally - the additional contribution to the cause that you're considering. Actually, I think there's not much room for anybody to contribute large amounts to x-risk mitigation. Most people (and since I know nothing of you, I put you in that class) will do more good for humanity by working at something that improves near-term situations than by working on theoretical and unlikely problems.

Replies from: rmoehn
comment by rmoehn · 2016-07-20T06:58:06.495Z · LW(p) · GW(p)

So you think there's not much we can do about x-risk? What makes you think that? Or, alternatively, if you think that only a few people can do much good in x-risk mitigation, what properties enable them to do that?

Oh, and why do you consider AI safety a "theoretical [or] unlikely" problem?

Replies from: Dagon
comment by Dagon · 2016-07-20T16:26:03.558Z · LW(p) · GW(p)

I think that there's not much more that most individuals can do about x-risk as a full-time pursuit than we can as aware and interested civilians.

I also think that unfriendly AI Foom is a small part of the disaster space, compared to the current volume of unfriendly natural intelligence we face. An increase in the destructive power of small (or not-so-small) groups of humans seems 20-1000x more likely (and I generally think toward the higher end of that) to filter us than a single AI entity or a small number of them becoming powerful enough to do so.

Replies from: rmoehn
comment by rmoehn · 2016-07-21T04:53:52.434Z · LW(p) · GW(p)

So it would be better to work on computer security? Or on education, so that we raise fewer unfriendly natural intelligences?

Also, AI safety research benefits AI research in general and AI research in general benefits humanity. Again only marginal contributions?

Replies from: Dagon
comment by Dagon · 2016-07-21T14:52:40.325Z · LW(p) · GW(p)

Or on healthcare or architecture or garbage collection or any of the billion things humans do for each other.

Some thought to far-mode issues is worthwhile, and you might be able to contribute a bit as a funder or hobbyist, but for most people, including most rationalists, it shouldn't be your primary drive.

comment by ChristianKl · 2016-07-19T15:44:09.717Z · LW(p) · GW(p)

Perhaps you'd do a lot more good with a slight reduction in shipping costs or tiny improvements in safety or enjoyment of some consumer product.

Perhaps you would also do more good by working on a slight increase in shipping costs.

Replies from: Dagon
comment by Dagon · 2016-07-19T21:50:21.531Z · LW(p) · GW(p)

Quite. Whatever you consider an improvement to be. Just don't completely discount small, likely improvements in favor of large (existential) unlikely ones.