Link dump: Future of Humanity Institute technical reports

post by Stuart_Armstrong · 2013-10-25T16:07:50.493Z

For those who may be interested in these things, here are the links to all the FHI's technical reports.

Global Catastrophic Risks Survey: At the Global Catastrophic Risk Conference in Oxford (17‐20 July, 2008) an informal survey was circulated among participants, asking them to make their best guess at the chance that there will be disasters of different types before 2100. This report summarizes the main results.

Record of the Workshop on Policy Foresight and Global Catastrophic Risks: On 21 July 2008, the Policy Foresight Programme, in conjunction with the Future of Humanity Institute, hosted a day-long workshop on “Policy Foresight and Global Catastrophic Risks” at the James Martin 21st Century School at the University of Oxford. This document provides a record of the day’s discussion.

Whole Brain Emulation: a Roadmap: This report aims to provide a preliminary roadmap for Whole Brain Emulation (the possible future one‐to‐one modelling of the function of the human brain), sketching out key technologies that would need to be developed or refined, and identifying key problems or uncertainties.

Utility Indifference: A utility-function-based method for making an Artificial Intelligence indifferent to certain facts or states of the world, which can be used to make certain security precautions more successful.
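
As a very rough illustration of the flavour of the idea (my own sketch, not the report's exact construction): suppose the agent has utility function U and we want it to be indifferent to whether some event X (say, a shutdown signal) occurs. One can add a compensating term to the utility in the X-worlds so that the two conditional expectations match:

$$U'(\omega) \;=\; U(\omega) \;+\; \mathbb{1}_{X}(\omega)\,\big(\mathbb{E}[U \mid \neg X] - \mathbb{E}[U \mid X]\big)$$

Then $\mathbb{E}[U' \mid X] = \mathbb{E}[U \mid \neg X] = \mathbb{E}[U' \mid \neg X]$, so the agent gains nothing in expectation by influencing whether X happens. The report works this out properly (the expectations above depend on the agent's own policy, which is where the subtlety lies); the formula is only meant to convey the shape of the trick.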

Machine Intelligence Survey: At the FHI Winter Intelligence conference on machine intelligence (16 January 2011), an informal poll was conducted to elicit the participants' views on various questions related to the emergence of machine intelligence. This report summarizes the results.

Indefinite Survival through Backup Copies: Continually copying yourself may help preserve you from destruction. As long as the copies' fates are independent, increasing the number of copies at a logarithmic rate is enough to ensure a non-zero probability of surviving forever. The model is of more general use for many similar processes.
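
A minimal sketch of the kind of argument involved (my gloss, under the simplifying assumption that each copy is independently destroyed with a fixed probability p per period): if n(t) copies exist at period t, total destruction in that period has probability $p^{n(t)}$, so

$$\Pr[\text{survive forever}] \;=\; \prod_{t=1}^{\infty}\big(1 - p^{n(t)}\big) \;>\; 0 \quad\Longleftrightarrow\quad \sum_{t=1}^{\infty} p^{n(t)} \;<\; \infty.$$

Taking $n(t) = \lceil c \ln t \rceil$ gives $p^{n(t)} \le t^{-c\ln(1/p)}$, and the sum converges whenever $c > 1/\ln(1/p)$. So copies need only be added at a logarithmic rate for the probability of indefinite survival to be non-zero.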

Anthropics: why Probability isn’t enough: This report argues that the current treatment of anthropic and self-locating problems over-emphasises anthropic probabilities and ignores other relevant and important factors, such as whether the various copies of the agents in question consider themselves to be acting in a linked fashion, and whether they are mutually altruistic towards each other. Taking these factors into account makes decisions, rather than probabilities, the fundamental objects of interest in anthropic problems.

Nash equilibrium of identical agents facing the Unilateralist's Curse: This report is an addendum to the 'Unilateralist's Curse' paper by Nick Bostrom, Thomas Douglas and Anders Sandberg. It demonstrates that if identical agents face a situation where any one of them can implement a policy unilaterally, then the best strategies they can implement are also Nash equilibria.
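
For background, here is a toy simulation of the curse itself (my own illustration, not code from the report): each of n agents receives an independent noisy estimate of a policy's true value, and the policy is implemented if any single agent judges it worthwhile. Even when the true value is negative, the chance that someone's estimate comes out positive grows with n; the addendum concerns the threshold strategies identical agents should adopt in response.

```python
import random

def unilateralist_trial(true_value, n_agents, noise_sd, threshold=0.0):
    """Return True if at least one agent's noisy estimate exceeds its threshold,
    i.e. the policy gets implemented unilaterally."""
    return any(random.gauss(true_value, noise_sd) > threshold
               for _ in range(n_agents))

def implementation_rate(true_value, n_agents, noise_sd, threshold=0.0, trials=20_000):
    """Estimate how often a policy with the given true value gets implemented."""
    hits = sum(unilateralist_trial(true_value, n_agents, noise_sd, threshold)
               for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    # A mildly harmful policy (true value -1) judged with noisy estimates (sd = 1):
    for n in (1, 3, 10):
        naive = implementation_rate(-1.0, n, 1.0, threshold=0.0)
        cautious = implementation_rate(-1.0, n, 1.0, threshold=1.0)
        print(f"{n:2d} agents: implemented {naive:.1%} of the time (naive), "
              f"{cautious:.1%} (raised threshold)")
```

Raising one's decision threshold as the number of agents grows is the kind of corrective strategy at issue; the addendum's point is that, for identical agents, the best such strategy is also a Nash equilibrium.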

AI arms race: A simple model of an AI arms race (though it can be generalised). Some of the insights are obvious: the competing teams are more likely to take safety precautions if there are not too many of them, if they agree with each other's values, and if skill matters more than risk-taking in developing a functioning AI. But one result is surprising: teams are most likely to take risks when they know the capabilities of their own team or of their opponents'. In this case, the less you know, the safer you'll behave.

Please cite these reports as:

  • Sandberg, A. & Bostrom, N. (2008): “Global Catastrophic Risks Survey”, Technical Report #2008-1, Future of Humanity Institute, Oxford University: pp. 1-5.
  • Tickell, C. et al. (2008): “Record of the Workshop on Policy Foresight and Global Catastrophic Risks”, Technical Report #2008-2, Future of Humanity Institute, Oxford University: pp. 1-19.
  • Sandberg, A. & Bostrom, N. (2008): “Whole Brain Emulation: a Roadmap”, Technical Report #2008-3, Future of Humanity Institute, Oxford University: pp. 1-130.
  • Armstrong, S. (2010): “Utility Indifference”, Technical Report #2010-1, Future of Humanity Institute, Oxford University: pp. 1-5.
  • Sandberg, A. & Bostrom, N. (2011): “Machine Intelligence Survey”, Technical Report #2011-1, Future of Humanity Institute, Oxford University: pp. 1-12.
  • Sandberg, A. & Armstrong, S. (2012): “Indefinite Survival through Backup Copies”, Technical Report #2012-1, Future of Humanity Institute, Oxford University: pp. 1-5.
  • Armstrong, S. (2012): “Anthropics: why Probability isn’t enough”, Technical Report #2012-2, Future of Humanity Institute, Oxford University: pp. 1-10.
  • Armstrong, S. (2012): “Nash equilibrium of identical agents facing the Unilateralist's Curse”, Technical Report #2012-3, Future of Humanity Institute, Oxford University: pp. 1-5.
  • Armstrong, S., Bostrom, N. & Shulman, C. (2013): “Racing to the precipice: a model of artificial intelligence development”, Technical Report #2013-1, Future of Humanity Institute, Oxford University: pp. 1-8.
