Posts

New Leverhulme Centre on the Future of AI (developed at CSER with spokes led by Bostrom, Russell, Shanahan) 2015-12-03T10:07:15.102Z
New positions and recent hires at the Centre for the Study of Existential Risk (Cambridge, UK) 2015-10-13T11:11:23.172Z
Postdoctoral research positions at CSER (Cambridge, UK) 2015-03-26T17:59:53.828Z
Cambridge (England) lecture: Existential Risk: Surviving the 21st Century, 26th February 2014-02-14T19:39:31.632Z
Update on establishment of Cambridge’s Centre for Study of Existential Risk 2013-08-12T16:11:23.263Z
Vacancy at the Future of Humanity Institute: Academic Project Manager 2013-06-28T12:17:56.737Z

Comments

Comment by Sean_o_h on Propaganda or Science: A Look at Open Source AI and Bioterrorism Risk · 2023-11-03T13:42:09.890Z · LW · GW

(disclaimer: one of the coauthors) Also, none of the linked comments by the coauthors actually praise the paper as good and thoughtful? They all say the same thing, which is "pleased to have contributed" and "nice comment about the lead author" (a fairly early-career scholar who did lots and lots of work and was good to work with). I called it "timely", as the topic of open-sourcing was very much live at the time.

 

(FWIW, I think this post has valid criticism re: the quality of the biorisk literature cited and the strength with which the case was conveyed; and I think this kind of criticism is very valuable and I'm glad to see it).

Comment by Sean_o_h on [AN #90]: How search landscapes can contain self-reinforcing feedback loops · 2020-03-15T14:02:54.568Z · LW · GW

I believe the working title is 'Intelligence Rising'.

Comment by Sean_o_h on Crisis and opportunity during coronavirus · 2020-03-14T20:09:47.770Z · LW · GW

This is super awesome. Thank you for doing this.

Comment by Sean_o_h on Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" · 2020-01-03T15:09:05.444Z · LW · GW

Johnson was perhaps below average in his application to his studies, but it would be a mistake to think he is/was a pupil of below-average intelligence.

Comment by Sean_o_h on Another AI Winter? · 2019-12-25T12:41:50.203Z · LW · GW

"I can imagine DM deciding that some very applied department is going to be discontinued, like healthcare, or something else kinda flashy."

With Mustafa Suleyman, the cofounder most focused on applied work (and the lead of DeepMind Applied), leaving for Google, this seems like quite a plausible prediction. So a refocusing on being a primarily research company with fewer applied staff (an area that can soak up a lot of staff), resulting in a 20% reduction of staff, probably wouldn't provide a lot of evidence (and is probably not what Robin had in mind). A reduction of research staff, on the other hand, would be very interesting.

Comment by Sean_o_h on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-19T12:16:38.141Z · LW · GW

(Cross-posted to the EA forum.) (Disclosure: I am executive director of CSER.) Thanks again for a wide-ranging and helpful review; this represents a huge undertaking of work and is a tremendous service to the community. For the purpose of completeness, I include below 14 additional publications authored or co-authored by CSER researchers during the relevant time period that were not covered above (and one that falls just outside it but was not previously featured):

Global catastrophic risk:

Ó hÉigeartaigh. The State of Research in Existential Risk

Avin, Wintle, Weitzdorfer, O hEigeartaigh, Sutherland, Rees (all CSER). Classifying Global Catastrophic Risks

International governance and disaster governance:

Rhodes. Risks and Risk Management in Systems of International Governance.

Biorisk/bio-foresight:

Rhodes. Scientific freedom and responsibility in a biosecurity context.

Just missing the cutoff for this review, but not included last year and so perhaps of interest, is our bioengineering horizon scan (published November 2017): Wintle et al (incl. Rhodes, O hEigeartaigh, Sutherland). Point of View: A transatlantic perspective on 20 emerging issues in biological engineering.

Biodiversity loss risk:

Amano (CSER), Szekely… & Sutherland. Successful conservation of global waterbird populations depends on effective governance (Nature publication)

CSER researchers as coauthors:

(Environment) Balmford, Amano (CSER) et al. The environmental costs and benefits of high-yield farming

(Intelligence/AI) Bhatnagar et al (incl Avin, O hEigeartaigh, Price): Mapping Intelligence: Requirements and Possibilities

(Disaster governance): Horhager and Weitzdorfer (CSER): From Natural Hazard to Man-Made Disaster: The Protection of Disaster Victims in China and Japan

(AI) Martinez-Plumed, Avin (CSER), Brundage, Dafoe, O hEigeartaigh (CSER), Hernandez-Orallo: Accounting for the Neglected Dimensions of AI Progress

(Foresight/expert elicitation) Hanea… & Wintle. The Value of Performance Weights and Discussion in Aggregated Expert Judgments

(Intelligence) Logan, Avin et al (incl Adrian Currie): Uncovering the Neural Correlates of Behavioral and Cognitive Specialization

(Intelligence) Montgomery, Currie et al (incl Avin). Ingredients for Understanding Brain and Behavioral Evolution: Ecology, Phylogeny, and Mechanism

(Biodiversity) Baynham-Herd, Amano (CSER), Sutherland (CSER), Donald. Governance explains variation in national responses to the biodiversity crisis

(Biodiversity) Evans et al (incl Amano). Does governance play a role in the distribution of invasive alien species?

Outside of the scope of the review, we produced on request a number of policy briefs for the United Kingdom House of Lords on future AI impacts; horizon-scanning and foresight in AI; and AI safety and existential risk, as well as a policy brief on the bioengineering horizon scan. Reports/papers from our 2018 workshops (on emerging risks in nuclear security relating to cyber; nuclear error and terror; and epistemic security) and our 2018 conference will be released in 2019.

Thanks again!

Comment by Sean_o_h on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-18T10:40:26.151Z · LW · GW

"It is possible they had timing issues whereby a substantial amount of work was done in earlier years but only released more recently. In any case they have published more in 2018 than in previous years."

(Disclosure: I am executive director of CSER) Yes. As I described in relation to last year's review, CSER's first postdoc started in autumn 2015, and most started in mid-2016. The first stages of research and papers began being completed throughout 2017, with most papers then going to peer-reviewed journals. 2018 is more indicative of run-rate output, although 2019 will be higher.

Throughout 2016-2017, considerable CSER leadership time (mine in particular) also went into getting http://lcfi.ac.uk/ up and running, which will increase our output on AI safety/strategy/governance (although CFI also separately works on near-term and non-AI-safety-related topics).

Thank you for another detailed review! (response cross-posted to EA forum too)

Comment by Sean_o_h on IEEE released the first draft of their AI ethics guide · 2016-12-15T13:58:19.025Z · LW · GW

And several more of us were at the workshop that worked on and endorsed this section at the Hague meeting - Anders Sandberg (FHI), Huw Price and myself (CSER). But regardless, the important thing is that a good section on long-term AI safety showed up in a major IEEE output - otherwise I'm confident it would have been terrible ;)

Comment by Sean_o_h on Open Thread March 7 - March 13, 2016 · 2016-03-09T15:42:57.681Z · LW · GW

FLI's Anthony Aguirre is centrally involved or leading, AFAIK.

Comment by Sean_o_h on NIPS 2015 · 2015-12-08T09:25:23.497Z · LW · GW

Thanks for the initiative! I'll be there Thursday through Saturday (plus Sunday) for symposia and workshops, if anyone would like to chat (Sean O hEigeartaigh, CSER).

Comment by Sean_o_h on New positions and recent hires at the Centre for the Study of Existential Risk (Cambridge, UK) · 2015-11-04T10:45:27.793Z · LW · GW

A quick reminder: our application deadline is a week from tomorrow (midday UK time) - so now would be a great time to apply if you were thinking of it, or to remind fellow researchers! Thanks so much, Seán.

Comment by Sean_o_h on New positions and recent hires at the Centre for the Study of Existential Risk (Cambridge, UK) · 2015-10-13T11:18:37.186Z · LW · GW

A pre-emptive apology: I have a heavy deadline schedule over the next few weeks, so will answer questions when I can - please excuse any delays!

Comment by Sean_o_h on Self-improvement without self-modification · 2015-07-24T08:11:51.381Z · LW · GW

"The easiest and the most trivial is to create a subagent, and transfer their resources and abilities to it ("create a subagent" is a generic way to get around most restriction ideas)." That is, after all, how we humans are planning to get around our self-modification limitations in creating AI ;)

Comment by Sean_o_h on [link] FLI's recommended project grants for AI safety research announced · 2015-07-02T11:14:53.525Z · LW · GW

Nick, for his part, very regularly and explicitly credits the role that Eliezer's work, and discussions with Eliezer, have played in his own research and thinking over the course of FHI's work on AI safety.

Comment by Sean_o_h on One week left for CSER researcher applications · 2015-06-15T10:09:50.301Z · LW · GW

A few comments. I was working with Nick when he wrote that, and I fully endorsed it as advice at the time. Since then, the Xrisk funding situation - and the number of locations at which you can do good work - has improved dramatically. It would be worth checking with him how he feels now. My view is that jobs are certainly still competitive, though.

In that piece he wrote "I find the idea of doing technical research in AI or synthetic biology while thinking about x-risk/GCR promising." I also strongly endorse this line of thinking. My view is that in addition to centres specifically doing Xrisk, having people who are Xrisk-motivated working in all the standard mainstream fields that are relevant to Xrisk would be a big win. Not just AI or synthetic biology (although obviously directly valuable here) - I'd include areas like governance, international relations, science & technology studies, and so on. There will come a point (in my view) when having these concerns diffusing across a range of fields and geographic locations will be more important than increasing the size of dedicated thought bubbles at e.g. Oxford.

"Do you guys want to share how pleased you were about the set of applicants you received for these jobs?" I can't say too much about this, because hires not yet finalised, but yes, pleased. The hires we made are stellar. There were a number of people not hired who at most times I would have thought to be excellent, but for various reasons the panel didn't think they were right at this time. You will understand if I can't say more about this, (and my very sincere apologies to everyone I can't give individual feedback to, carrying a v heavy workload at the moment w minimal support).

That said, I wouldn't be willing to stand up and say x-risk reduction is not talent-limited, as I don't think there's enough data for that. Our applicant field was large, and top talent was deep enough on this occasion, but it could have been deeper. Both CSER and FHI have more hires coming up, so that will deplete the talent pool further.

Another consideration: I do feel that many of the most brilliant people the X-risk field needs are out there already, finishing their PhDs in relevant areas but not currently part of the field. I think organisations like ours need to make hard efforts to reach out to these people.

Recruitment strategies: reaching out through our advisors' networks; standard academic job boards and emails to the top 10-20 departments in the most relevant fields; getting in touch with members of different x-risk organisations and asking them to spread the word through their networks; posting online in various x-risk/EA-related places. I also got in touch with a large range of the smaller, more specific centres (and authors) producing the best work outside of the x-risk community - e.g. in risk, foresight, horizon-scanning, security, international relations, DURC, STS and so on - asked them for recommendations, and asked them to distribute the listing among their networks. And I iterated a few times through the contacts I made this way. E.g. I got in touch with Tetlock and others on expertise elicitation & aggregation, who put me in touch with people at the Good Judgement Project and others, who put me in touch with other centres. Eventually we got some very good applicants in this space, including one from Australia's Centre of Excellence for Biosecurity Risk Analysis, whose director I was put in touch with through this method but hadn't heard of previously.

This was all very labour-intensive, and I expect I won't have time to recruit so heavily in future. But I hope going forward we will have a bigger academic footprint. I also had tremendous help from a number of people in the Xrisk community, including Ryan Carey, Seth Baum and FHI folks, to whom I'm very grateful. Also, a huge thanks to Scott Alexander for plugging our positions on his excellent blog!

I think our top 10 came pretty evenly split between "xrisk community", "standard academic jobs posting boards/university department emails" and "outreach to more specific non-xrisk networks". I think all our hires are new introductions to existential risk, which is encouraging.

Re: communicating internally, I think we're doing pretty well. E.g. on recruitment, I've been communicating pretty closely with FHI, as they have positions to fill too, at present and coming up, and I will recommend that some excellent people who applied to us also apply to them. (Note that this isn't always just about quality - we have both had excellent applicants who weren't quite a fit at this time at one organisation, but would be a top prospect at the other, going in both directions.)

More generally, internal communication within x-risk has been good in my view - project managers and researchers at FHI, MIRI and other orgs make a point of holding regular meetings with the other organisations. This has made up a decent chunk of my time too over the past couple of years and has been very important, although I'm likely to have to cut back personally for a couple of years due to an increasing Cambridge-internal workload (early days of a new, unusual centre in an old, traditional university). I expect our researchers will play an important role in communicating between centres, however.

One further apology: I don't expect to have much time to comment/post on LW going forward, so I apologise that I won't always be able to reply to questions like this. But I'm very grateful for all the useful support, advice and research input I've received from LW members over the years.

Comment by Sean_o_h on The mechanics of my recent productivity · 2015-04-08T10:51:24.665Z · LW · GW

Nine single-author research papers is extremely impressive! Well done.

Comment by Sean_o_h on Request for help: Android app to shut down a smartphone late at night · 2015-04-02T12:33:36.887Z · LW · GW

"This does seem quite hazardous, though. If an emergency happened at 3am, I'm pretty sure I'd want my phone easily available and usable."

I was going to say this too, it's a good point. Potential fix: have a cheap non-smartphone on standby at home.

Comment by Sean_o_h on Postdoctoral research positions at CSER (Cambridge, UK) · 2015-03-29T15:42:31.817Z · LW · GW

Leplen, thank you for your comments, and for taking the time to articulate a number of the challenges associated with interdisciplinary research – and in particular, setting up a new interdisciplinary research centre in a subfield (global catastrophic and existential risk) that is in itself quite young and still taking shape. While we don’t have definitive answers to everything you raise, they are things we are thinking a lot about, and seeking a lot of advice on. While there will be some trial and error, given the quality and pooled experience of the academics most involved I’m confident that things will work out well.

Firstly, re: your first post, a few words from our Academic Director and co-founder Huw Price (who doesn’t have a LW account).

“Thanks for your questions! What the three people mentioned have in common is that they are all interested in applying their expertise to the challenges of managing extreme risks arising from new technologies. That's CSER's goal, and we're looking for brilliant early-career researchers interested in working on these issues, with their own ideas about how their skills are relevant. We don't want to try to list all the possible fields these people might come from, because we know that some of you will have ideas we haven't thought of yet. The study of technological xrisk is a new interdisciplinary subfield, still taking shape. We're looking for brilliant and committed people, to help us design it.

We expect that the people we appoint will publish mainly in the journals in their home field, thus helping to raise awareness of these important issues within those fields – but there will also be opportunities for inter-field collaborations, so you may find yourself publishing in places you wouldn't have expected. We anticipate that most of our postdocs will go on to distinguished careers in their home fields, too, though hopefully in a way which maintains their links with the interdisciplinary xrisk community. We anticipate that there will also be some opportunities for more specialised career paths, as the field and funding expand."

A few words of my own to expand: As you and Ryan have discussed, we have a number of specific, quite well-defined subprojects that we have secured grant funding for (two more will be announced later on). But we are also in the lucky position of having some more unconstrained postdoctoral position funding – and now, as Huw says, seems like an opportune time to see what people, and ideas, are out there, and what we haven’t considered. Future calls are likely to be a lot more constrained – as the centre’s ongoing projects and goals get more locked in, and as we need to hire for very specific people to work on specific grants.

Some disciplines seem very obviously relevant to me – e.g. if the existential risk community is to do work on AI, synthetic biology, pandemic risk, geoengineering, it needs people with qualifications in CS/math, biology/informatics, epidemiology, climate modelling/physics. Disciplines relevant to risk modelling and assessment seem obvious, as does science & technology studies, philosophy of science, and policy/governance. In aiming to develop implementable strategies for safe technology development and x-risk reduction, economics, law and international relations seem like fields that might produce people with necessary insights. Some are a little less clear-cut: insights into horizon-scanning and foresight/technological prediction could come from a range of areas. And I'm sure there are disciplines we are simply missing. Obviously we can't hire people with all of these backgrounds now (although, over the course of the centre, we would aim to have all these disciplines pass through and make their mark). But we don't necessarily need to; we have enough strong academic connections that we will usually be able to provide relevant advisors and collaborators to complement what we have 'in house'. E.g. if a policy/law-background person seems like an excellent fit for biosecurity work or biotech policy/regulation, we would aim to make sure there's both a senior person in policy/law to provide guidance, and collaborators in biology to make sure the science is there. And vice versa.

With all that said, from my time at FHI and CSER, a lot of the biggest progress and ideas have come from people whose backgrounds might not have immediately seemed obvious to x-risk, at least to me – cosmologists, philosophers, neuroscientists. We want to make sure we get the people, and the ideas, wherever they may be.

With regards to your second post:

You again raise good questions. For the people who don’t fall squarely into the ‘shovel-ready’ projects (although the majority of our hires this year will), I expect we will set up senior support structures on a case by case basis depending on what the project/person needs.

One model is co-supervision, or supervisor-plus-advisor. As one example, last year I worked with a CSER postdoctoral candidate on a grant proposal for a postdoc project that would have involved technical modelling/assessment of extreme risks from sulphate aerosol geoengineering, but where the postdoc also wanted to explore broader social/policy challenges. We felt we had the in-house expertise for the latter but not the former. We set up an arrangement whereby he would be advised by a climate specialist in this area, and would spend a period of the postdoc with the specialist's group in Germany. (The proposal was unfortunately unsuccessful with the granting body.)

As we expect AI to be a continuing focus, we’re developing good connections with AI specialist groups in academia and industry in Cambridge, and would similarly expect that a postdoc with a CS background might split their time between CSER’s interdisciplinary group and a technical group working in this area and interested in long-term safe/responsible AI development. The plan is to develop similar relations in bio and other key areas. If we feel like we’re really not set up to support someone as seems necessary and can’t figure out how to get around that, then yes, that may be a good reason not to proceed at a given time. That said, during my time at FHI, a lot of good research has been done without these kinds of setups – and incidentally I don’t think being at FHI has ever harmed anyone’s long-term career prospects - so they won’t always be necessary.

"And overly-broad job listings are par for the course, but before I personally would want to put together a 3 page project proposal or hunt down a 10 page writing sample relevant or even comprehensible to people outside of my field, I'd like to have some sense of whether anyone would even read them or whether they'd just be confused as to why I applied."

An offer: if you (or anyone else) have these kinds of concerns and wish to send me something short (say a 1/3-1/2 page proposal/info about yourself) before investing the effort in a full application, I'll be happy to read it and say whether it's worth applying (warning: it may take me until the weekend on any given week).

Comment by Sean_o_h on Postdoctoral research positions at CSER (Cambridge, UK) · 2015-03-27T11:42:31.025Z · LW · GW

Placeholder: this is a good comment and good questions, which I will respond to by tomorrow or Sunday.

Comment by Sean_o_h on GCRI: Updated Strategy and AMA on EA Forum next Tuesday · 2015-03-03T16:37:14.885Z · LW · GW

This is reasonable.

Comment by Sean_o_h on GCRI: Updated Strategy and AMA on EA Forum next Tuesday · 2015-03-03T13:16:49.275Z · LW · GW

This was a poorly phrased line, and it is helpful to point that out. While I can't and shouldn't speak for the OP, I'm confident that the OP didn't mean it in an "ordering people from best to worst" way, especially knowing the tremendous respect that people working and volunteering in X-risk have for Seth himself, and for GCRI's work. I would note that the entire point of this post (and the AMA which the OP has organised) was to highlight GCRI's excellent work and bring it to the attention of more people in the community. However, I can also see how the line might be taken to mean things it wasn't at all intended to mean.

Hence, I'd like to take this opportunity to appeal for a charitable reading of posts of this nature - ones that are clearly intended to promote and highlight good work in LW's areas of interest - especially in "within community" spaces like this. One of the really inspiring things about working in this area is the number of people putting in great work and long hours alongside their full-time commitments - like Ryan and many others. And those working full-time in Xrisk/EA often put in hours far in excess of the standard. This sometimes means that people are working under a lot of time pressure or fatigue, and phrase things badly (or don't recognise that something could easily be misread). That may or may not be the case here, but I know it's a concern I often have about my own engagements, especially when it's gone past the '12 hours in the office' stage.

With that said, please do tell us when it looks like we're expressing things badly, or in a way that might be taken to be less than positive. It's a tremendously helpful learning experience about the mistakes we can make in how we write (particularly in cases where people might be tired/under pressure and thus less attentive to such things).

Comment by Sean_o_h on "Human-level control through deep reinforcement learning" - computer learns 49 different games · 2015-02-26T12:34:01.906Z · LW · GW

They've also released their code (for non-commercial purposes): https://sites.google.com/a/deepmind.com/dqn/

In other interesting news, a paper released this month describes a way of 'speeding up' neural net training, and an approach that achieves 4.9% top-5 validation error on ImageNet. My layperson's understanding is that this is the first time human accuracy has been exceeded on the ImageNet benchmarking challenge, and that it represents an advance on Chinese giant Baidu's progress reported last month, which I understood to be significant in its own right. http://arxiv.org/abs/1501.02876

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. Sergey Ioffe, Christian Szegedy.

(Submitted on 11 Feb 2015 (v1), last revised 13 Feb 2015 (this version, v2))

"Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters."
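
(For anyone curious about the mechanics: below is a minimal NumPy sketch of the core idea as I understand it from the abstract - each feature of a mini-batch is normalised to zero mean and unit variance, then rescaled and shifted by learned parameters. The function name, shapes and parameters are illustrative only, not the authors' reference implementation.)

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalise each feature of a mini-batch (shape: [batch, features]),
    then apply a learned scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)                    # per-feature mean over the mini-batch
    var = x.var(axis=0)                    # per-feature variance over the mini-batch
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance per feature
    return gamma * x_hat + beta            # learned parameters restore expressiveness

# Illustrative usage: a mini-batch of 32 examples with 4 features
x = 5.0 * np.random.randn(32, 4) + 3.0
gamma, beta = np.ones(4), np.zeros(4)
y = batch_norm_forward(x, gamma, beta)
print(y.mean(axis=0), y.std(axis=0))       # approximately zeros and ones
```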

Comment by Sean_o_h on GCRI: Updated Strategy and AMA on EA Forum next Tuesday · 2015-02-23T21:59:50.760Z · LW · GW

Seth is a very smart, formidably well-informed and careful thinker - I'd highly recommend jumping on the opportunity to ask him questions.

His latest piece in the Bulletin of the Atomic Scientists is worth a read too. It's on the "Stop Killer Robots" campaign. He agrees with the view of Stuart Russell (and others) that these weapons are a bad road to go down, and also presents the campaign as a test case for existential risk - a pre-emptive ban on a dangerous future technology:

"However, the most important aspect of the Campaign to Stop Killer Robots is the precedent it sets as a forward-looking effort to protect humanity from emerging technologies that could permanently end civilization or cause human extinction. Developments in biotechnology, geoengineering, and artificial intelligence, among other areas, could be so harmful that responding may not be an option. The campaign against fully autonomous weapons is a test-case, a warm-up. Humanity must get good at proactively protecting itself from new weapon technologies, because we react to them at our own peril."

http://thebulletin.org/stopping-killer-robots-and-other-future-threats8012

Comment by Sean_o_h on Open thread, Jan. 26 - Feb. 1, 2015 · 2015-01-26T12:04:52.718Z · LW · GW

Script/movie development was advised by CSER advisor and AI/neuroscience expert Murray Shanahan (Imperial). Haven't had time to go see it yet, but looking forward to it!

Comment by Sean_o_h on Elon Musk donates $10M to the Future of Life Institute to keep AI beneficial · 2015-01-18T10:06:19.185Z · LW · GW

Yes. The link with the guidelines and the grant portal should be on the FLI website within the coming week or so.

Comment by Sean_o_h on Elon Musk donates $10M to the Future of Life Institute to keep AI beneficial · 2015-01-16T11:13:59.010Z · LW · GW

This will depend on how many other funders are "swayed" towards the area by this funding and the research that starts coming out of it. This is a great bit of progress, but alone is nowhere near the amount needed to make optimal progress on AI. It's important people don't get the impression that this funding has "solved" the AI problem (I know you're not saying this yourself).

Consider that Xrisk research in e.g. biology draws usefully on technical and domain-specific work in biosafety and biosecurity being done more widely. Until now, AI safety research hasn't had that body of work to draw on in the same way, and has instead focused on fundamental issues in the development of general AI, as well as on outlining the challenges that will be faced. Given that much of this funding will go towards technical work by AI researchers, this will hopefully get that side of things going in a big way, and help build a body of support and involvement from the non-risk AI/CS community, which is essential at this moment in time.

But there's a tremendous amount of work that will need to be done - and funded - in the technical, fundamental, and broader (policy, etc.) areas. Even if FHI/CSER are successful in applying, the funds likely to be allocated from this pot to the work we're doing are not going to be near what we would need for our respective AI research programmes (I can't speak for MIRI, but I presume this to be the case for them also). But it will certainly help!

Comment by Sean_o_h on Elon Musk donates $10M to the Future of Life Institute to keep AI beneficial · 2015-01-16T10:44:10.634Z · LW · GW

An FLI person would be best placed to answer. However, I believe the proposal came from Max Tegmark and/or his team, and I fully support it as an excellent way of making progress on AI safety.

(i) All of the above organisations are now in a position to develop specific relevant research plans and apply to get them funded - rather than the funding going to one organisation over another. (ii) Given the number of "non-risk" AI researchers at the conference, and the many more signing the letter, this is a wonderful opportunity to follow up by encouraging them to get involved with safety research and apply. This seems like something that really needs to happen at this stage.

There will be a lot more excellent projects submitted for this than the funding will cover, and this will be a great way to demonstrate that there are a lot of tractable problems and work that can be undertaken immediately in this area - this should hopefully both attract more AI researchers to the field, and attract additional funders who see how timely and worthy of funding this work is.

Consider it seed funding for the whole field of AI safety!

Sean (CSER)

Comment by Sean_o_h on question: the 40 hour work week vs Silicon Valley? · 2014-11-04T12:27:50.354Z · LW · GW

As another non-native speaker, I frequently find myself looking for a "plural you" in English, which was what I read hyporational's phrase as trying to convey. Useful feedback not to use 'you people'.

Comment by Sean_o_h on Open thread, Oct. 20 - Oct. 26, 2014 · 2014-10-25T10:32:31.224Z · LW · GW

A question I've been curious about: to those of you who have taken modafinil regularly/semi-regularly (as opposed to a once off) but have since stopped: why did you stop? Did it stop being effective? Was it no longer useful for your lifestyle? Any other reasons? Thanks!

Comment by Sean_o_h on What supplements do you take, if any? · 2014-10-23T18:00:02.654Z · LW · GW

I take fish oil (generic) capsules most days, for the usual reasons they're recommended. Zinc tablets when I'm feeling run down.

Perhaps not what you mean by supplements (in which case, apologies!), but if we're including nootropics, I take various things to try to extend my productive working day. I take modafinil twice a week (100mg in the mornings), and try to limit my caffeine on those days. I take phenylpiracetam about twice a week too (100mg in the afternoons, on different days to modafinil), and nicotine lozenges (1mg) intermittently through the week (also not on modafinil days) - usually if I start getting sluggish in the evening. I also only take nicotine if I'm working, and usually on something I find hard or don't want to do - as I like the feeling, I'm hoping this sets up a 'reward effect'. I drink coffee and green tea throughout the week, although I intend to start limiting my coffee intake more.

I regularly experiment with other supplements for mental focus and stamina - recently I've experimented with L-theanine with caffeine, and Rhodiola rosea for more general mental stamina. However, I don't track much, and my lifestyle is so variable (travel, etc.) that it can be hard to tell what's effective for me, other than the really obvious ones (modafinil, caffeine, nicotine, phenylpiracetam). Still, it can be helpful for the placebo effect if nothing else!

Comment by Sean_o_h on How to write an academic paper, according to me · 2014-10-16T10:11:08.968Z · LW · GW

I think our field of philosophy, and that of xrisk, could very much benefit from more/better figures, but this might be the biologist in me speaking. Look at how often Nick Bostrom's (really quite simplistic) xrisk "scope versus intensity" graph is used/reproduced.

Comment by Sean_o_h on [Link] The Coming Plague · 2014-10-15T21:56:42.403Z · LW · GW

Thank you for writing this clear and well-researched post, really useful stuff.

Comment by Sean_o_h on Open thread, Sept. 29 - Oct.5, 2014 · 2014-10-04T13:08:10.960Z · LW · GW

Does your experience refer to M&G? I can see why you anti-recommend them!

Comment by Sean_o_h on Open thread, Sept. 29 - Oct.5, 2014 · 2014-10-02T17:02:05.454Z · LW · GW

I'd be very interested in hearing about your experience and advice further along in the process. Thanks!

Comment by Sean_o_h on Open thread, Sept. 29 - Oct.5, 2014 · 2014-10-02T13:07:27.801Z · LW · GW

Thank you, also useful advice. My savings from before moving to the UK are all in euros; my savings since moving are in sterling, so I guess I'll have to look at both. Damn the UK refusing to join the single currency - it makes my personal finances so much more complicated...

Comment by Sean_o_h on The Future of Humanity Institute could make use of your money · 2014-09-29T20:12:40.521Z · LW · GW

I agree that this would be a good idea, and agree with the points below. Some discussion of this took place in this thread last Christmas: http://lesswrong.com/r/discussion/lw/je9/donating_to_miri_vs_fhi_vs_cea_vs_cfar/

On that thread I provided information about FHI's room for more funding (accurate as of start of 2014) plus the rationale for FHI's other, less Xrisk/Future of Humanity-specific projects (externally funded). I'd be happy to do the same at the end of this year, but instead representing CSER's financial situation and room for more funding.

Comment by Sean_o_h on Open thread, Sept. 29 - Oct.5, 2014 · 2014-09-29T19:27:54.698Z · LW · GW

Oh, excellent - thanks so much! Side note: I really look forward to making some of the London meet ups when work pressure subsides a little, seems like these meet ups are excellent.

Comment by Sean_o_h on Open thread, Sept. 29 - Oct.5, 2014 · 2014-09-29T18:24:19.025Z · LW · GW

Would you (or anyone else) have good suggestions for index funds for those living and earning in the UK/Europe? Thanks!

Comment by Sean_o_h on [LINK] Article in the Guardian about CSER, mentions MIRI and paperclip AI · 2014-08-30T21:40:58.083Z · LW · GW

Thank you! We appear to have been successful with our first foundation grant; however, the official award T&C letter comes next week, so we'll know then what we can do with it, and be able to say something more definitive. We're currently putting the final touches on our next grant application (requesting considerably more funds).

I think the sentence in question refers to a meeting on existential/extreme technological risk we will be holding in Berlin, in collaboration with the German Government, on the 19th of September. We hope to use this as an opportunity to forge some collaborations in relevant areas of risk with European research networks and, with a bit of luck, to put existential risk mitigation a little higher on the European policy agenda. We'll be releasing a joint press release with the German Foreign Office as soon as we've got this grant out of the way!

Comment by Sean_o_h on [LINK] Article in the Guardian about CSER, mentions MIRI and paperclip AI · 2014-08-30T20:54:54.016Z · LW · GW

Nearly certainly; unfortunately, that communication didn't involve me, so I don't know which one it is! But I'll ask him when I next see him, and send you a link. http://www.econ.cam.ac.uk/people/crsid.html?crsid=pd10000&group=emeritus

Comment by Sean_o_h on [LINK] Article in the Guardian about CSER, mentions MIRI and paperclip AI · 2014-08-30T17:21:11.686Z · LW · GW

"A journalist doesn't have any interest not to engage in sensationalism."

Yes. Lazy shorthand in my last LW post, apologies. I should have said something along the lines of "in order to clarify our concerns, and not give the journalist the honest impression that we thought these things all represented imminent doom, which might result in sensationalist coverage" - as in, sensationalism resulting from misunderstanding. If the journalist chooses deliberately to engage in sensationalism, that's a slightly different thing - and yes, it sells newspapers.

"Editors want to write articles that the average person understands. It's their job to simplify. That still has a good chance of leaving the readers more informed than they were before reading the article."

Yes. I merely get concerned when "scientists think we need to learn more about this, and recommend use of the precautionary principle before engaging" gets simplified to "scientists say 'don't do this'", as in that case it's not clear to me that readers come away with a better understanding of the issue. There's a lot of misunderstanding of science due to simplified reporting. Anders Sandberg and Avi Roy have a good article on this in health (as do others): http://theconversation.com/the-seven-deadly-sins-of-health-and-science-reporting-21130

"It's not the kind of article that I would sent people who have an background and who approach you. On the other hand it's quite fine for the average person."

Thanks, helpful.

Comment by Sean_o_h on [LINK] Article in the Guardian about CSER, mentions MIRI and paperclip AI · 2014-08-30T15:56:29.516Z · LW · GW

Thanks, reassuring. I've mainly been concerned about a) just how silly the paperclip thing looks in the context it's been put in, and b) the tone, a bit - as one commenter on the article put it:

"I find the light tone of this piece - "Ha ha, those professors!" to be said with an amused shake of the head - most offensive. Mock all you like, but some of these dangers are real. I'm sure you'll be the first to squeal for the scientists to do something if one them came true. Price asks whether I have heard of the philosophical conundrum the Prisoner's Dilemma. I have not. Words fail me. Just what do you know then son? Once again, the Guardian sends a boy to do a man's job."

Comment by Sean_o_h on [LINK] Article in the Guardian about CSER, mentions MIRI and paperclip AI · 2014-08-30T15:52:15.263Z · LW · GW

Thanks. Re: your last line, quite a bit of this is possible: we've been building up a list of "safe hands" journalists at FHI for the last couple of years, and as a result, our publicity has improved while the variance in quality has decreased.

In this instance, we (CSER) were positively disposed towards the newspaper as a fairly progressive one with which some of our people had had a good set of previous interactions. I was further encouraged by the journalist's request for background reading material. I think there was just a bit of a mismatch: they sent a guy who was anti-technology in a "social media is destroying good society values" sort of way to talk to people who are concerned about catastrophic risks from technology (I can see how this might have made sense to an editor).

Comment by Sean_o_h on [LINK] Article in the Guardian about CSER, mentions MIRI and paperclip AI · 2014-08-30T15:16:57.059Z · LW · GW

Hi,

I'd be interested in LW's thoughts on this. I was quite involved in the piece, though I suggested to the journalist that it would be more appropriate to focus on the high-profile names involved. We've been lucky at FHI/Cambridge with a series of very sophisticated, tech-savvy journalists with whom the inferential distance has been very low (see e.g. Ross Andersen's Aeon/Atlantic pieces); this wasn't the case here, and although the journalist was conscientious and requested reading material beforehand, I found communicating these concepts more difficult than expected.

In my view the interview material turned out better than expected, given the clear inferential gap. I am less happy with the 'catastrophic scenarios' which I was asked for. The text I sent (which I circulated to FHI/CSER members) was distinctly less sensational, and contained a lot more qualifiers. E.g. for geoengineering I had: "Scientific consensus is against adopting it without in-depth study and broader societal involvement in the decisions made, but there may be very strong pressure to adopt it once the impacts of climate change become more severe." And my pathogen modification example did not go nearly as far. While qualifiers can seem like unnecessary padding to editors, they can really change the tone of a piece. Similarly, in a pre-emptive line to ward off sensationalism, I included: "I hope you can make it clear these are 'worst case possibilities that currently appear worthy of study' rather than 'high-likelihood events'. Each of these may only have e.g. a 1% likelihood of occurring. But in the same way an aeroplane passenger shouldn't accept a 1% possibility of a crash, society should not accept a 1% possibility of catastrophe. I see our role as (like airline safety analysts) figuring out which risks are plausible, and for those, working to reduce the 1% to 0.00001%." This was sort-of addressed, but not really.

That said, the basic premises - that a virus could be modified for greater infectivity and released by a malicious actor, 'termination risk' for atmospheric aerosol geoengineering, future capabilities of additive manufacturing for more dangerous weapons - are intact.

Re: 'paperclip maximiser'. I mentioned this briefly in conversation, after we'd struggled for a while with inferential gaps on AI (and why we couldn't just outsmart something smarter than us, etc), presenting it as a 'toy example' used in research papers on AI goals, meant to encapsulate the idea that seemingly harmless or trivial but poorly-thought-through goals can result in unforeseen and catastrophic consequences when paired with the kind of advanced resource utilisation and problem-solving ability a future AI might have. I didn't expect it to be taken as a literal doomsday concern - it wasn't in the text I sent - and to my mind it looks very silly in there, possibly deliberately so. However, I feel that Huw and Jaan's explanations were very good, and quite well-presented.

We've been considering whether we should limit ourselves to media opportunities where we can write the material ourselves, or have the opportunity to view and edit the final material before publishing. MIRI has significantly cut back on its media engagement, and this seems on the whole sensible (FHI's still doing a lot; some of it turns out very good, some not so good).

Lessons to take away: 1) This stuff can be really, really hard. 2) Getting used to very sophisticated, science/tech-savvy journalists and academics can leave you unprepared. 3) Things that are very reasonable with qualifiers can become very unreasonable if you remove the qualifiers - and editors often just see the qualifiers as unnecessary verbosity (or want the piece to have stronger, more sensational claims).

Right now, I'm leaning fairly strongly towards 'ignore and let it quietly slip away' (the Guardian has a small UK readership, so how much we 'push' this will probably make a difference), but I'd be interested in whether LW sees this as net positive or net negative on balance for public perception of existential risk. However, I'm open to updating. I asked a couple of friends unfamiliar with the area what their take-away impression was, and it was more positive than I'd anticipated.

Comment by Sean_o_h on Steelmanning MIRI critics · 2014-08-19T10:43:40.845Z · LW · GW

Without knowing the content of your talk (or having time to Skype at present, apologies), allow me to offer a few quick points I would expect a reasonably well-informed, skeptical audience member to make (partly based on what I've encountered):

1) Intelligence explosion requires AI to get to a certain point of development before it can really take off (let's set aside that there's still a lot we need to figure out about where that point is, or whether there are multiple different versions of that point). People have been predicting that we can reach that stage of AI development "soon" since the Dartmouth conference. Why should we worry about this being on the horizon (rather than a thousand years away) now?

2) There's such a range of views on this topic among apparent experts in AI and computer science that an analyst might conclude "there is no credible expertise on the path/timeline to superintelligent AI". Why should we take MIRI/FHI's arguments seriously?

3) Why are mathematicians/logicians/philosophers/interdisciplinary researchers the community we should be taking most seriously when it comes to these concerns? Shouldn't we be talking to/hearing from the cutting-edge AI "builders"?

4) (Related) MIRI (and also FHI, but not to such a 'primary' extent) focuses on developing theoretical safety designs, and friendly-AI/safety-relevant theorem proving and maths work, ahead of any efforts to actually "build" AI. Would we not do better to be more grounded in the practical development of the technology - building, stopping, testing, trying, adapting as we see what works and what doesn't - rather than trying to lay down such far-reaching principles ahead of the technology's development?

Comment by Sean_o_h on Steelmanning MIRI critics · 2014-08-19T10:20:00.438Z · LW · GW

Speaking as someone who speaks about X-risk reasonably regularly: I have empathy for the OP's desire for no surprises. IMO there are many circumstances in which surprises are very valuable - one-on-one discussions, closed seminars and workshops where a productive, rational exchange of ideas can occur, and boards like LW where people are encouraged to interact in a rational and constructive way.

Public talks are not necessarily the best places for surprises, however. Unless you're an extremely skilled orator, the combination of nerves, time limitations, crowd dynamics, and other circumstances can make it quite difficult to engage in an ideal manner. Crowd perception of how you "handle" a point, particularly a criticism, can do a huge amount to shape how the overall merit of you, your talk, and your topic is perceived - even if the criticism is invalid or your response adequate. My experience is also that the factors above can push us into less nuanced, more "strong"-seeming positions than we would ideally take. In a worst-case scenario, a poor presentation/defence of an important idea can affect perception of the idea itself outside the context of the talk (if the talk is widely enough disseminated).

These are all reasons why I think it's an excellent idea to consider the best and strongest possible objections to your argument, and to think through what an ideal and rational response would be - or, indeed, whether the objection is correct, in which case it should be addressed in the talk. This may be the OP's only opportunity to expose his audience to these ideas.

Comment by Sean_o_h on Optimal Exercise · 2014-08-11T12:44:56.450Z · LW · GW

Thank you for this post, extremely helpful and I'm very grateful for the time you put into writing/researching it.

A question: what's your opinion on when "level of exercise" goes from "diminishing returns" to "negative returns" for health and longevity? Background: I used to train competitively for running - 2x/day for 2hrs total time/day, 15hrs/week total (a little extra at the weekend) - which sounds outlandish but is pretty standard in competitive long-distance running/cycling/triathlon. I quit because a) it wasn't compatible with doing my best in work, and b) I began to worry that pushing my body this hard was not actually good for long-term health (for reasons like inflammation load, heart effects, etc.).

These days I train usually 1 hr/day, 6 days a week split about 50:50 between running and lifting/strength, and still pretty intensely (partly because I'm otherwise prone to weight gain unless I control my diet carefully, which I prefer not to have to worry about, partly because it's a good antidote to a tendency towards anxiety/depression). I expect, realistically, I'm a low-level exercise addict, and I certainly have some obsessive tendencies. Two questions I'm interested in are: a) Am I still in "potentially not doing myself long-term favours" territory - i.e. would cutting from 360min/week to 300 min.week be actually better for my health? b) Even if a) isn't true, are the benefits so diminished that I should cut to e.g. 5xweek at 45mins/day (225mins) for pure efficiency of time use reasons (am I throwing away 2+hrs a week of valuable time)? My schedule involves either running home from work and taking in a gym trip, or training when I need a break from work, so it's reasonably efficient, but these days every hour I can squeeze out of a week seems to count. I also walk 30 mins to work every morning, not included in the above. Other than this my lifestyle's quite sedentary (no active hobbies at present, spend most waking hours at a computer/in meetings).

Comment by Sean_o_h on Rationalist Sport · 2014-06-18T16:37:43.812Z · LW · GW

Some emerging concerns I'm aware of for really serious runners: heart problems due to thickened heart wall, skin cancer (just due to being out in the sun so much, sweating off sunscreen). Potential causes for concern: lots of cortisol production from hard aerobic exercise, inflammation.

Comment by Sean_o_h on Rationalist Sport · 2014-06-18T16:33:30.439Z · LW · GW

Fascinating, thank you for this!

Comment by Sean_o_h on Rationalist Sport · 2014-06-18T10:16:14.290Z · LW · GW

For a lot of people, running should be fine for their knees if done properly.

As far as I can tell, running is most likely to damage your knees if you (a) are very big/heavy, (b) have poor running technique (most people don't learn to run properly/efficiently), (c) run a lot on bad surfaces (avoid running extensively on surfaces that are banked, or where you may step in potholes!), or (d) have a genetic predisposition to knee problems or have brought on osteoarthritis-type conditions through poor diet (this happens sometimes with exercise anorexics).

As a past competitive runner, I've spent a lot of time with running "lifers" (>10,000 miles on the legs), and knee problems don't seem to be particularly common (though obviously there are some selection effects there). Anecdotally, I have no knee problems after 6 years of 100-mile/week training, and most of my sporting friends who do have knee problems got them as a result of acute injuries (usually soccer).

That said, there's enough weak evidence to suggest that this kind of heavy aerobic training may not be good for long-term health and longevity to cause me to reduce my running to 20-30 mins/day (supplemented by weight training).