Looking for machine learning and computer science collaborators

post by Stuart_Armstrong · 2017-05-26T11:53:12.377Z · LW · GW · Legacy · 9 comments


I've recently been struggling to translate my various AI safety ideas (low impact, truth for AI, Oracles, counterfactuals for value learning, etc.) into formalised versions that can be presented to the machine learning/computer science world in terms they can understand and critique.

What would be useful for me is a collaborator who knows the machine learning world (and preferably has presented papers at conferences) with whom I could co-write papers. They don't need to know much of anything about AI safety - explaining the concepts to people unfamiliar with them is going to be part of the challenge.

The results of this collaboration should be papers like Safely Interruptible Agents, written with Laurent Orseau of DeepMind, and Interactive Inverse Reinforcement Learning, written with Jan Leike of FHI/DeepMind.

It would be especially useful if the collaborators were located physically close to Oxford (UK).

Let me know in the comments if you are, or know of, a potential candidate.

Cheers!

9 comments


comment by IlyaShpitser · 2017-05-26T13:18:30.951Z · LW(p) · GW(p)

Hi Stuart. I am not an ML person, and I am not close to Oxford, but I am interested in this type of stuff (in particular, I went through the FDT paper just two days ago with someone). I do write papers for ML conferences sometimes.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2017-05-30T06:40:59.182Z · LW(p) · GW(p)

"I am not an ML person ... I do write papers for ML conferences sometimes."

Interesting ^_^ Under what name are these papers?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2017-05-30T14:04:19.817Z · LW(p) · GW(p)

Mine? I can send you my cv if you want.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2017-05-30T16:34:34.715Z · LW(p) · GW(p)

Is this you: https://www.researchgate.net/profile/Ilya_Shpitser/publications ?

Just let me know which papers are particularly ML-ish :-)

comment by harshhpareek · 2017-05-27T07:43:48.133Z · LW(p) · GW(p)

Hi Stuart, I am about to complete a PhD in Machine Learning and would be interested in collaborations like these, but only from October onwards.

I have written and presented papers at Machine Learning conferences, and am quite interested in contributing to concrete AI safety research. My work so far has been on issues in supervised ranking tasks, but I have read a fair bit on reinforcement learning.

I am not close to Oxford. I am currently in Austin, TX, and will be in the Bay Area from October onwards.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2017-05-30T06:43:51.450Z · LW(p) · GW(p)

ok! Sending you my email in a PM. Would you mind contacting me in October, if that's ok and you're still interested?

Cheers!

comment by Darklight · 2017-06-13T05:37:40.784Z · LW(p) · GW(p)

I might be able to collaborate. I have a master's in computer science and did a thesis on neural networks and object recognition, before spending some time at a startup as a data scientist doing mostly natural-language machine learning work, and then getting a job as a research scientist at a larger company to do similar applied research.

I also have two published conference papers under my belt, though they were in pretty obscure conferences admittedly.

As a plus, I've also read most of the sequences and am familiar with the Less Wrong culture, and have spent a fair bit of time thinking about the Friendly/Unfriendly AI problem. I even came up with an attempt at a thought experiment to convince an AI to be friendly.

Alas, I am based near Toronto, Ontario, Canada, so distance might be an issue.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2017-06-15T05:06:05.804Z · LW(p) · GW(p)

Interesting. Can we exchange email addresses?

comment by Thomas · 2017-05-26T12:56:27.000Z · LW(p) · GW(p)

Consider for a moment that this DL thing may soon be obsolete. It is great, the best we have so far, but still.

The first problem I have with it is the enormous amount of data needed for training.

The second problem is that it is inherently hard to understand what those weights mean.

So, perhaps something better may be just around the corner.