Help wanted: feedback on research proposals for FHI application
post by otto.barten (otto-barten) · 2020-10-08T14:42:50.033Z · LW · GW
As some of you will know, the Future of Humanity Institute is looking for researchers. I want to apply to the Foundational Deep Future and AI Governance positions, and as part of the procedure, I am asked to write a research proposal. I have a number of ideas, but it's hard for me to know which one is the most relevant, so I'm asking for your feedback. The ideas are in the document below; you should be able to comment in the document, or as a reply here, of course.
Thanks for helping out!
https://docs.google.com/document/d/1vXhclr9Vp28EY4VkOUZZitSootwtCTxCQLrRIBkluCU/edit?usp=sharing
3 comments
Comments sorted by top scores.
comment by Charlie Steiner · 2020-10-11T18:38:51.906Z · LW(p) · GW(p)
I'm not sure how to take the right mix of my perspective, your perspective, and FHI's perspective.
For example, there's not much related to object-level understanding of AI safety. If I were writing a research proposal for myself, this would be a problem. But it is in fact you writing a research proposal for FHI, and I'm actually quite confident that FHI likes meta-level work.
To be fancier, you could add references (maybe just in footnotes) to papers and books.
The strongest part of the proposal is the questions related to AGI skepticism. To improve this, I think you could not merely present a list of questions, but also give some concrete things you might do to answer those questions empirically.
The second-most interesting bit to me is the Personal Strategies In The AGI Century section. Again, you could expand this with more interesting questions, and then expand those questions with concrete ways to answer them.
I would put the strongest subsections first in their section.
comment by otto.barten (otto-barten) · 2020-10-12T08:54:00.430Z · LW(p) · GW(p)
Thanks Charlie! :)
They are asking for only one proposal, so I will have to choose one, and I'm planning to work that one out fully. So I'm mostly asking which idea you find most interesting, rather than which one is the strongest proposal right now - that will be worked out later. But thanks a lot for your feedback so far - that helps!
comment by Pattern · 2020-10-10T19:46:58.577Z · LW(p) · GW(p)
Contents:
- Delayed Singularity: an overview of arguments for and against superintelligence postponement
- Superintelligence skepticism
- What can we learn from how democracies work, for AGI alignment?