AI-related honours projects?
post by klkblake · 2013-09-20T03:50:05.604Z · LW · GW · Legacy · 18 comments
I'm starting my Honours next year, and would like to do something towards helping MIRI with Friendly AI. I would also prefer to avoid duplicating any of MIRI's work (either already done, or likely to be done before my Honours finishes midway through 2015). I decided to post this here rather than email MIRI directly, as I guessed a list of potential projects would probably be useful for others as well (in fact, I was sure such a list had already been posted, but if it exists I was unable to find it). So: what sort of Friendly AI related projects are there that could potentially be done by one person in a year of work? (I suppose it would make sense to include PhD-length suggestions here as well.)
Some notes about me and my abilities: I am reasonably good with math, though my understanding of probability, model theory and provability logic is lacking (I will have a few months beforehand that I plan to use to learn whatever maths I will need that I don't already have). I am a competent Haskell programmer, and (besides AI) I am interested in dependent type systems, total languages, and similar methods of proving that certain program errors cannot occur, although I would have to do some background research to learn more about the state of the art in that field. I would (hesitantly) guess that this would be the best avenue for something a single person could do that might be useful, but I'm not sure how useful it would be.
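For concreteness, here's a minimal sketch of the kind of guarantee I mean, written in GHC Haskell with the GADTs and DataKinds extensions (a fully dependently typed language would express this more directly; the names here are mine, purely for illustration):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Natural numbers lifted to the type level.
data Nat = Z | S Nat

-- A vector whose length is tracked in its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- A total head function: it only accepts vectors whose type proves
-- they are non-empty, so "head of empty list" becomes a compile-time
-- type error instead of a runtime exception.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x
```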
18 comments
Comments sorted by top scores.
comment by lukeprog · 2013-09-20T20:30:36.631Z · LW(p) · GW(p)
This all depends on your major and your advisors, of course, but...
- To find out whether you can contribute to MIRI's technical research at this time, you could apply to attend a workshop and/or write up some of your own comments on MIRI's published papers, like Quinn did.
- We explained what someone could do on IEM in Yudkowsky (2013) and Grace (2013). Katja has the most detailed picture of what exactly someone would do next, if you're interested. Some of the work would be CS-related (like Katja's report), some of it would be about evolutionary biology, and some of it would be on other topics.
- For more sociological work, one could follow up on these two projects, which are both on "pause" right now. We outlined pretty clearly what the next steps there would be.
For you specifically, it seems you'd have to do something fairly technical. Is that right? If so, I can try to talk through which pieces of MIRI's technical research agenda you're most likely to be able to contribute to, if you tell me more about your background. E.g. which of the subjects here are you already familiar with?
Replies from: klkblake
↑ comment by klkblake · 2013-09-21T13:48:38.398Z · LW(p) · GW(p)
Fairly technical would be good. IEM and the sociological work are somewhat outside my interests. Attending a workshop would unfortunately be problematic; anxiety issues make travelling difficult, especially air travel (I live in Australia). Writing up comments on the research papers is an excellent idea; I will certainly start doing that regardless of what project I do. Of the subjects listed, I am familiar (in roughly decreasing order) with functional programming, efficient algorithms, parallel computing, discrete math, numerical analysis, linear algebra, and the basics of set theory and mathematical logic. I have "Naive Set Theory", "Introduction to Mathematical Logic", and "Gödel, Escher, Bach" sitting on my desk at the moment, and I am currently taking courses in theory of computation and intelligent systems (a combined AI/machine learning/data mining course). The areas I had planned to learn after the above are incompleteness/undecidability, model theory, and category theory. In terms of how my prospective advisor could affect things, he's mostly interested in cognitive science based AI, with some side interest in theory of computation.
Replies from: lukeprog
↑ comment by lukeprog · 2013-09-21T16:30:03.746Z · LW(p) · GW(p)
In that case, I think you'll want to study mathematical logic, theory of computation, incompleteness/undecidability and model theory, to improve your ability to contribute to the open problems that Eliezer thinks are most plausibly relevant to Friendly AI. Skimming our recent technical papers (definability of truth, robust cooperation, tiling agents) should also give you a sense of what you'd need to learn to contribute at the cutting edge.
A few years from now, I hope to have write-ups of a lot more open problems, including ones that don't rely so heavily on mathematical logic.
Something closer to cognitive science based AI, which Paul Christiano and Andreas Stuhlmuller (and perhaps others) think is plausibly relevant to FAI, is concept learning. The idea is that this will be needed at some point for getting AIs to "do what I mean." The September workshop participants spent some time working on this. You could email Stuhlmuller to ask for more details, preferably after reading the paper linked above.
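(For readers unfamiliar with the framing, here is a toy sketch of Bayesian concept learning in the style of Tenenbaum's "number game": hypotheses are candidate concepts, and the size principle concentrates posterior mass on the smallest concept consistent with the examples. This is purely illustrative, not drawn from the workshop's actual work, and all names in it are hypothetical.)

```haskell
-- A concept is a named predicate over a toy example space (1..100).
data Hypothesis = Hypothesis { hName :: String, hMember :: Int -> Bool }

hypotheses :: [Hypothesis]
hypotheses =
  [ Hypothesis "even numbers"  even
  , Hypothesis "powers of two" (\n -> n `elem` [1,2,4,8,16,32,64])
  , Hypothesis "numbers 1-100" (\n -> n >= 1 && n <= 100)
  ]

-- Size principle: k positive examples drawn from a concept of size s
-- have likelihood (1/s)^k, so smaller consistent concepts dominate.
size :: Hypothesis -> Int
size h = length (filter (hMember h) [1..100])

-- Posterior over hypotheses given positive examples, uniform prior
-- (assumes at least one hypothesis is consistent with the data).
posterior :: [Int] -> [(String, Double)]
posterior examples = [ (hName h, w h / total) | h <- hypotheses ]
  where
    w h
      | all (hMember h) examples = (1 / fromIntegral (size h)) ^ length examples
      | otherwise                = 0
    total = sum (map w hypotheses)

-- posterior [2,8,64] concentrates on "powers of two";
-- posterior [2,8,62] shifts the mass to "even numbers".
```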
Replies from: klkblake
↑ comment by klkblake · 2013-09-25T13:00:25.410Z · LW(p) · GW(p)
Sorry for the late reply; my mid-semester break just started, which of course meant I came down with a cold :). I've (re-)read the recent papers, and was rather surprised at how much of the maths I was able to understand. I'm feeling less confident about my mathematical ability after reading the papers, but that is probably a result of spending a few hours reading papers I don't fully understand rather than an accurate assessment of my ability. Concept learning seems to be a good backup option, especially since it sounds like something my supervisor would love (except for the part where it's a form of supervised learning, but that's unlikely to be a problem).
I vaguely remember EY mentioning something about there needing to be research into better operating systems and/or better programming languages (in terms of reliability/security/correctness), but this may have been a while ago. I have quite a bit of interest in this area, and some experience as well. Is this something that you think would be valuable (and if so, how valuable compared to work on the main open problems)?
Do you know which of the open problems MIRI is likely to attack first? I'd like to avoid duplication of effort, though I know with the unpredictability of mathematical insight that's not always feasible.
UPDATE: I just had a meeting with my supervisor, and he was pretty happy with all of the options I presented, so that won't be a problem. An idea I had this morning, which I'm pretty excited about, is potentially applying the method from the probabilistic reflection paper to the Halting Problem, since it seems to share the same self-referential structure.
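(For reference, the reflection schema from that paper, as I understand it: there is a coherent probability assignment $\mathbb{P}$ over sentences such that, for every sentence $\varphi$ and all rationals $a < b$,

$$ a < \mathbb{P}(\varphi) < b \;\Longrightarrow\; \mathbb{P}\big(a < \mathbb{P}(\ulcorner\varphi\urcorner) < b\big) = 1, $$

where the inner $\mathbb{P}$ is the theory's internal symbol for its own probability assignment. The Halting Problem's diagonalisation has the same "system reasoning about its own output" shape, which is what I'd want to investigate.)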
Replies from: None, lukeprog
↑ comment by [deleted] · 2013-09-30T18:48:59.342Z · LW(p) · GW(p)
> I vaguely remember EY mentioning something about there needing to be research into better operating systems and/or better programming languages (in terms of reliability/security/correctness), but this may have been a while ago. I have quite a bit of interest in this area, and some experience as well. Is this something that you think would be valuable (and if so, how valuable compared to work on the main open problems)?
Three areas I would look into are distributed capability-based security systems (example: Amoeba), formally verified kernels (example: seL4), and formal verification of user programs (example: Singularity OS). Programming language research isn't really needed: Haskell is the language I would choose, for its practical and theoretical advantages, but there are other options too. Where the work is needed is in integration: making the GHC compiler output proofs (Haskell is well suited to this, but there is not to my knowledge a complete framework for doing so), making the operating system / distributed environment runtime verify them, and, most importantly of all, choosing which invariants to enforce.
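(To make that last point concrete, here is a minimal sketch, with hypothetical names, of what choosing and enforcing one invariant looks like at the language boundary: an abstract type whose only constructor checks the invariant, so no value violating it can exist anywhere in the program. A proof-emitting compiler would aim to discharge such checks statically rather than at runtime.)

```haskell
module Percentage (Percentage, mkPercentage, getPercentage) where

-- The constructor is not exported: the only way to obtain a
-- Percentage is via mkPercentage, which enforces the invariant.
newtype Percentage = Percentage Double

-- Smart constructor: the invariant 0 <= p <= 100 is checked once,
-- at the boundary, and holds everywhere afterwards by construction.
mkPercentage :: Double -> Maybe Percentage
mkPercentage p
  | p >= 0 && p <= 100 = Just (Percentage p)
  | otherwise          = Nothing

getPercentage :: Percentage -> Double
getPercentage (Percentage p) = p
```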
↑ comment by lukeprog · 2013-09-29T17:34:22.340Z · LW(p) · GW(p)
> better operating systems and/or better programming languages
I doubt this is worth pushing on now. If it's useful, it'll be useful when we're closer to doing engineering rather than philosophy and math.
> Do you know which of the open problems MIRI is likely to attack first?
In the immediate future we'll keep tackling problems addressed in our past workshops. Other than that, I'm not sure which problems we'll tackle next. We'll have to wait and see what comes of Eliezer's other "open problems" write-ups, and which ideas workshop participants bring to the November and December workshops.
> potentially applying the method from the probabilistic reflection paper to the Halting Problem, since it seems to share the same self-referential structure
The participants of our April workshop checked this, and after some time decided they could probably break Tarski, Gödel, and Löb with probabilistic reflection, but not the Halting Problem, despite the similarities in structure. You could ask (e.g.) Qiaochu if you want to know more.
comment by solipsist · 2013-09-20T13:34:53.406Z · LW(p) · GW(p)
When you're prioritizing, consider:
You're young, and you will make your greatest impact later in your career. Optimize for placing your future self in the best position to succeed, not for effecting change now.
Replies from: None
↑ comment by [deleted] · 2013-10-09T04:52:40.907Z · LW(p) · GW(p)
> You're young, and you will make your greatest impact later in your career.
While I'm not convinced that what followed is bad advice, is the first sentence really true? I worked for four years with two different interdisciplinary teams of “distinguished” (aka late-career) scientists. They were all very accomplished, but the consensus of the group was that the best work of their team was always done by grad students and postdocs. They saw their role as finding and guiding young scientists who were really doing the hard work of pushing the frontier of knowledge forward. I know I'm relying on anecdotal evidence, but from things I've read this appears to be a universal phenomenon in the hard sciences...
comment by LM7805 · 2013-09-20T22:04:35.134Z · LW(p) · GW(p)
> I am interested in dependent type systems, total languages, and similar methods of proving that certain program errors cannot occur, although I would have to do some background research to learn more about the state of the art in that field.
If you're not already familiar with Idris, I highly recommend checking it out -- it's a dependently typed Haskell variant, a bit like Agda but with a much friendlier type syntax. The downside of Idris is that, as a newer language, it doesn't have nearly as robust a standard library of proofs as, say, Coq. That said, the author, Edwin Brady, is keenly focused on making it a good language for expressing security properties in.
The field I work in has to do with proving that program errors related to maliciously crafted input cannot occur; if it'd be useful I'm happy to braindump/linkdump.
Replies from: klkblake
↑ comment by klkblake · 2013-09-21T14:06:32.244Z · LW(p) · GW(p)
I'd heard of Idris. Parts of it sound really good (dependent typing, totality, a proper effects system, being usable from Vim), although I'm not a huge fan of tactic-based proofs (that's what the Curry-Howard Isomorphism is for!). It's definitely at the top of my list of languages to learn. I wasn't aware of the security focus; that is certainly interesting.
Proving safety in the face of malicious input sounds fascinating -- a dump would be much appreciated.
Replies from: LM7805, LM7805
↑ comment by LM7805 · 2013-09-21T14:50:11.483Z · LW(p) · GW(p)
"Security Applications of Formal Language Theory" is a good overview. (If you don't have IEEE access, there's a tech report version.) Much of the work going on in this area has to do with characterizing classes of vulnerabilities in terms of unintended computational automata that arise from the composition of independent systems, often through novel vulnerability discovery motivated by considering the formal characteristics of a composed system and figuring out what can be wedged into the cracks. There's also been some interesting defensive work (Haskell implementation, an approach I'm interested in generalizing). That's probably a good start.
I have not actually learned Idris yet, and I think I could motivate myself better if I had a study partner; would you be interested in something like that?
Replies from: klkblake, klkblake
↑ comment by klkblake · 2013-09-26T15:50:23.573Z · LW(p) · GW(p)
I might be interested in being your study partner; what would that involve?
Replies from: LM7805
↑ comment by LM7805 · 2013-09-27T11:03:57.757Z · LW(p) · GW(p)
Depends mainly on how we both learn best. For me, when it comes to learning a new language that tends to be finding a well-defined, small (but larger than toy) project and implementing it, and having someone to rubber-duck with (over IM/IRC/email is fine) when I hit conceptual walls. I'm certainly up for tackling something that would help out MIRI.
Replies from: klkblake
↑ comment by LM7805 · 2013-09-25T14:23:47.408Z · LW(p) · GW(p)
Also, presuming that the talk Andreas Bogk has proposed for 30c3 is accepted, you'll want to see it -- it's a huge pragmatic leap forward. (I apologize for not being at liberty to go into any more detail than that. The talk will be livestreamed and recorded, FWIW.)
comment by iDante · 2013-09-20T05:21:02.254Z · LW(p) · GW(p)
Just to confirm, it's undergraduate CSE honors? Have you taken an AI course?
My initial impression is that you'll have trouble doing something specifically related to FAI, but it depends on your background.
Replies from: klkblake
↑ comment by klkblake · 2013-09-20T06:37:12.478Z · LW(p) · GW(p)
I haven't heard the term CSE before (computer science & engineering?), but I'm doing a Bachelor of Science, majoring in Computer Science and minoring in Mathematics. I am taking an AI course at the moment (actually, it's a combined AI/data mining course, and it's a bit shallower than I would like, but it covers the basics).