As a Washed Up Former Data Scientist and Machine Learning Researcher What Direction Should I Go In Now?

post by Darklight · 2020-10-19T20:13:44.993Z · LW · GW · 2 comments

This is a question post.


Some background: I was interested in AI before the hype, back when neural networks were just an impractical curiosity in our textbooks. I went through an undergrad in Cognitive Science and decided that there was something to the idea of connectionist, bottom-up AI having tremendous untapped potential, because I saw the working example of the human mind. So I embarked on a Master's in Computer Science focused on ML and eventually graduated at just about the perfect time (2014) to jump into industry and make a splash. It helped that I'd been ambitious and tried to create crazy things like the Music-RNN and the Earthquake Predictor Neural Network, which, though not technically effective, showed surprising amounts of promise. The Music-RNN could at least generate sounds that vaguely resembled the audio data, and the Earthquake Predictor predicted the Ring of Fire: low-magnitude, high-frequency quakes that weren't important, but hey, it was better than random...

I had also published two mediocre papers earlier, on stuff like occluded object recognition, at some fairly inconsequential conferences, but combined with my projects and the AI hype, a Canadian startup called Maluuba (which would later be bought by Microsoft) took a chance on me and hired me as a Data Scientist for a few months doing NLP. Later, my somewhat helpful posts on the Machine Learning subreddit attracted the attention of a recruiter from Huawei, and I ended up spending a few years as a Research Scientist at their Canadian subsidiary (specifically in Noah's Ark Lab), working first on NLP and later on Computer Vision. Unfortunately, I was foolish and got caught up in some office politics that basically derailed my career within the company, and I eventually requested a buyout package to avoid being stuck on projects I didn't think were relevant, under a manager I didn't agree with.

Alas, I found myself unemployed right when COVID-19 hit. Although I was still able to land interviews at places like Amazon, Facebook, and Deloitte, my lack of a PhD and so-so engineering ability hampered my efforts to get back into the industry, and made me question whether I could still compete in a market that seemed a lot more saturated than before.

So recently I started to read some books that had been on my to-do list for a while, like Bostrom's Superintelligence and Russell's Human Compatible. Before I started my career, I'd spent a fair bit of time reading through the Less Wrong sequences by Eliezer and also posting some of my own naive ideas, like the contrarian (contrary to the Orthogonality Thesis, in fact) concept of an AI Existential Crisis (where the AI itself would question and potentially change its values), and the Alpha Omega Theorem (which is actually, in retrospect, very similar to Bostrom's ideas of Anthropic Capture and the Hail Mary solution to the Alignment Problem). Even while working in my career, I did think about the Alignment Problem, though like most ML practitioners, I thought it was such a far-off, amorphous challenge with no obvious avenue of attack that I didn't see a clear way to directly work on it.

At Maluuba and Huawei, I'd have some casual conversations with colleagues about the Alignment Problem, but we kept on working on our models without really considering whether it was right to. After all, I needed to eat and make enough to live comfortably first, ML was definitely good money, and the models did really cool stuff! But my recent time away from work has given me a chance to think about things a lot more, and to wonder whether prodding the technology forward by even a tiny increment could actually be harmful, given how far away we seem from having a robust solution to Alignment.

So, naturally, I wondered if I could try to do research directly on the problem and be useful. After doing some reading on the SOTA papers... it seems like we're still in the very early stages of coming up with definitions for things and building conceptual frameworks, and I'm worried that, compared to, say, the enormous, ever-expanding literature on Reinforcement Learning, things are going waaay too slowly.

But on the other hand, I don't know where to begin to be useful with this. Or even whether, given how important this work could be, I might end up making things worse by contributing work that isn't rigorous enough. One of the things I learned from doing research in industry is that experimental rigor is actually very hard to do properly, and almost everyone, in academia as well as industry, cuts corners to get things out ASAP so they can flag-plant on arXiv. And then people complain about results that are not reproducible and demand source code. There's a lot of noise in the signal, even when we have working prototypes and models running. As for how we can expect to align models by proving things in advance rather than through experimentation... it just seems doubtful to me, because the working models we use in industry always have to endure testing, and uncertainty means it's basically impossible to guarantee there won't be some edge case that fails.

So, I guess the question boils down to, how seriously should I consider switching into the field of AI Alignment, and if not, what else should I do instead?  Like should I avoid working on AI at all and just do something fun like game design, or is it still a good idea to push forward ML despite the risks?  And if switching to AI Alignment should be done, can it be a career or will I need to find something else to pay the bills with as well?

Any advice is much appreciated.  Thank you for your time and consideration.

Answers

answer by meadComposition · 2020-10-19T20:22:42.458Z · LW(p) · GW(p)

With the field still making its way into industry, I am surprised that a Master's degree in CS and good work experience are not enough to help you land a job. Is there something else missing from the picture that explains why this is happening to you?
On a related note, your experience and advice in handling corporate politics rationally would be very beneficial to the industry community here, if you are open to sharing them!

comment by Darklight · 2020-10-19T22:05:36.402Z · LW(p) · GW(p)

I have been able to land interviews for about 8 of the 65 positions I've applied to, or roughly 12%. My main assumption is that the timing of COVID-19 is bad, and I'm also only looking at positions in my geographical area of Toronto. It's also possible that I was overconfident early on and didn't prep enough for the interviews I got, which often involved general coding challenges that depended on data structures and algorithms I hadn't studied since undergrad, as well as ML fundamentals, like PCA, that I hadn't touched in a long time since my research work has been focused on deep learning.

As for corporate politics and how to handle them rationally, I'm not entirely sure I can be much help, as to be honest, I'm not entirely clear on what happened to cause the situation that I got myself into.

Perhaps the main thing I can suggest is to be tactful and avoid giving people an excuse or opportunity to sideline you. Never assume that you can work with anyone without issue: toxic or hostile managers especially can make you miserable and prevent you from being successful, so noticing such people in advance and avoiding having to depend on their performance appraisals is probably a good idea.

Most people in business seem focused on performing and getting results, and some of them are wary of others who could overtake them, so you need to balance showing your value with not seeming threatening to their position. I was in the awkward position that my immediate manager and I didn't get along, though the director of the department who originally hired me protected me from too much reprisal. However, he needed me to perform better to be able to advocate for me effectively, and it was difficult to do that under the person I reported to directly.

Such situations can arise and get quite complicated. I wish I could say you can use the tools of rationality to reason with anyone and convince them to work cooperatively on team goals, but I found that some people are less amenable than others. Furthermore, if someone makes an attack against you in corporate politics, chances are you won't see it coming; they may use a subordinate to strike indirectly, and those involved will straight up ignore your communications or give you the runaround in such a way that you won't be sure who is actually responsible for what. Many meetings are behind closed doors, and there is a clear limit to the information you will have relative to your superiors, which can make it difficult to defend yourself even if you know something is going on.

I guess another thing I can add is that probably a large part of why I was able to avoid being fired was that I had substantial documentation, including a detailed research journal, and a spreadsheet of my working hours to back me up.  When trying to be a rational and honest worker in the corporate world, a paper trail is protection and a good way to ensure that the compliance department and HR will be on your side when it counts.

Also, beware that if you let certain types of people get away with one seemingly small thing, they will see that as weakness and conclude that you are exploitable. Know your boundaries and the regulations of the company. Bullies are not just a schoolyard problem; in the office, they're much smarter and know how to get away with things. Sometimes these people are also good enough at their jobs that you will not be able to do anything about them because the company needs what they provide. That is life. Pick your battles and don't allow unfair situations and difficulties to make you lose sleep and perform worse. Do the best you can to do your job well, such that you are beyond reproach if possible. Be aware that things can spiral: if you lose sleep over something that happened, and this makes you late for work the next day, you've given your detractors ammunition.

That's all I can think of right now.

Edit: As an example of how clever other people can be at office politics, I was once put in a kind of double bind or trap, similar to a fork in chess. Basically, I was told by a manager not to push some code into a repository, ostensibly because we'd just given privileges to someone who had been hired by a different department and whom we suspected might steal the code for that department (there's a horse-race culture at the corporation). Here's the thing: if I did what he told me, the repo would be empty and I'd have no independent evidence that my current project had made any progress, leaving me vulnerable either to him accusing me of not doing work, or to him denying that he had told me not to push the code, which would make it look like I was concealing things from the company. If I refused to go along and pushed the code instead, I would be insubordinate and disloyal to my department and his managers, who he claimed had told him to tell me what to do.

answer by FactorialCode · 2020-10-19T23:34:13.706Z · LW(p) · GW(p)

Money makes the world turn and it enables research, be it academic or independent. I would just focus on getting a bunch of that. Send out 10x to 20x more resumes than you already have, expand your horizons to the entire planet, and put serious effort into prepping for interviews.

You could also try getting a position at CHAI or some other org that supports AI alignment PhDs, but it's my impression that those centres are currently funding-constrained and already have a big list of very high-quality applicants, so your presence or absence might not make that much of a difference.

Other than that, you could also just talk directly with the people working on alignment. Send them emails and ask what kinds of experiments they'd like to know the results of but don't have time to run. Then turn those experiments into papers. Once you've gotten a taste for it, you can go and do your own thing.

answer by ryan_b · 2020-10-19T20:59:47.721Z · LW(p) · GW(p)

I notice you have the following:

  • Long-term concern about the problem
  • A relevant background in several dimensions
  • Some time on your hands

Sounds to me like an excellent opportunity to firm up your analysis of the risks. With this, you can make a much more informed decision about whether to tackle the problem head-on.

Also this:

One of the things I learned from doing research in industry is that experimental rigor is actually very hard to do properly, and almost everyone, in academia as well as industry, cuts corners to get things out ASAP so they can flag-plant on arXiv.

I am far from an expert on the subject, but rigorous and safe toy models in code, demonstrating any of the things we discuss here, seem like they would be very useful.

answer by Dach · 2020-10-20T04:46:12.982Z · LW(p) · GW(p)

So, I guess the question boils down to, how seriously should I consider switching into the field of AI Alignment, and if not, what else should I do instead?

I think you should at least take the question seriously. You should consider becoming involved in AI Alignment to the extent that you think doing so will be the highest-value strategy, accounting for opportunity costs. An estimate for this could be derived from the interplay between your answers to the following basic considerations (a rough sketch of how they might be combined follows the list):

  • What are your goals?
  • What are the most promising methods for pursuing your various goals?
    • What resources do you have, and how effective would investing those resources be, on a method-by-method and goal-by-goal basis?
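
To make that interplay concrete, here is a minimal back-of-envelope sketch in Python of one way to fold answers to those questions into a single rough estimate. Every probability, value, and cost below is a made-up placeholder (assumptions for illustration only); the ranking depends entirely on the numbers you plug in.

```python
def rough_value(p_useful_contribution: float, value_if_useful: float, personal_cost: float) -> float:
    """Crude expected-value estimate for one career option:
    chance of a useful contribution times its value, minus personal/opportunity cost."""
    return p_useful_contribution * value_if_useful - personal_cost

# Hypothetical options scored on an arbitrary common scale.
# With these particular placeholder numbers, alignment research comes out ahead,
# but different answers to the questions above would flip the ordering.
options = {
    "switch to alignment research": rough_value(0.10, 100.0, 5.0),
    "stay in industry ML and donate": rough_value(0.90, 3.0, 0.0),
    "game design, donate what you can": rough_value(0.90, 1.0, 0.0),
}

for name, score in sorted(options.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:+.1f}")
```

Obviously this compresses a lot of nuance into three numbers per option, but even a crude version forces you to state your goals, your estimate of tractability, and your opportunity costs explicitly.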

An example set of (short and incomplete) answers to these questions could lead you to conclude, "I should switch to the field of AI Alignment."

Like should I avoid working on AI at all and just do something fun like game design, or is it still a good idea to push forward ML despite the risks?

If you're not doing bleeding-edge research (and no one doing bleeding-edge research is reading your papers), your personal negative impact on AI Alignment efforts can be offset more effectively by making more money and donating, e.g., $500 to MIRI (or a related org) than by changing careers.

And if switching to AI Alignment should be done, can it be a career or will I need to find something else to pay the bills with as well?

AI Alignment is considered by many to be literally the most important problem in the world. If you can significantly contribute to AI Alignment, you will be able to find someone to give you money.

If you can't personally contribute to AI Alignment in a significant way but still think the problem is important, I would advise advancing some other career and donating money to alignment efforts, starting a YouTube channel to spread awareness of the problem, etc.

I am neither familiar with you nor an alignment researcher, so I will eschew giving specific career advice.

2 comments

Comments sorted by top scores.

comment by shminux · 2020-10-20T03:16:57.353Z · LW(p) · GW(p)

I am neither in ML nor in math nor in AI alignment, so I'm just throwing this out there. From my reading of the issues facing alignment research, it looks like the very basics of formalizing embedded agency are still lacking, but they seem easier to make progress on than anything directly related to alignment proper.

comment by snog toddgrass · 2020-10-19T21:41:10.495Z · LW(p) · GW(p)

David Roodman was fired from the Bill and Melinda Gates Foundation for his poor office politics skills. He’s my greatest role model, so you’re in good company.

He talks about it in his 80k interview, IIRC.