I'll post the obvious resources:
Future of Life Institute's summaries of AI policy resources
AI Governance: A Research Agenda (Allan Dafoe, FHI)
Allan Dafoe's research compilation: Probably just the AI section is relevant; some overlap with FLI's list.
The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (2018). Brundage and Avin et al.: One of the earlier "large collaboration" papers I can recall, probably only the AI Politics and AI Ideal Governance sections are relevant for you.
Policy Desiderata for Superintelligent AI: A Vector Field Approach: Far from object-level, in Bostrom's style, but tries to be thorough in what AI policy should try to accomplish at a high level.
CSET's Reports: Very new AI policy org, but pretty exciting as it's led by the former head of IARPA so their recommendations probably have a higher chance of being implemented than the academic think tank reference class. Their work so far focuses on documenting China's developments and US policy recommendations, e.g. making US immigration more favorable for AI talent.
Published documents can trail the thinking of leaders at orgs by quite a lot. You might be better off emailing someone at the relevant orgs (CSET, GovAI, etc.) with your goals and what you plan to read, and asking what they would recommend so you can catch up more quickly.
You may be interested in this white paper by a Google engineer who used a neural network to predict power consumption for their data centers with 99.6% accuracy.
http://googleblog.blogspot.com/2014/05/better-data-centers-through-machine.html
Looking at the internals of the model, he was able to determine how sensitive power consumption was to various factors. Three examples were given of how the new model let them optimize power consumption. I'm a total newbie to ML, but this is one of the only examples I've seen of: predictive model -> optimization.
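The pattern described there (fit a predictive model, then probe it to find optimization levers) can be sketched in miniature. Everything below is synthetic and hypothetical: a toy two-feature "power" target and a small NumPy neural network, not Google's actual model or features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for data-center telemetry: "power" depends linearly
# on load and nonlinearly on a cooling setpoint (both made-up features).
X = rng.uniform(0, 1, size=(500, 2))
y = 1.0 + 2.0 * X[:, 0] + 0.5 * np.sin(3 * X[:, 1])

# One-hidden-layer network trained by plain full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)            # hidden activations
    pred = (H @ W2 + b2).ravel()
    err = pred - y                      # gradient of 0.5 * mean squared error
    gW2 = H.T @ err[:, None] / len(y); gb2 = err.mean(keepdims=True)
    dH = err[:, None] @ W2.T * (1 - H**2)
    gW1 = X.T @ dH / len(y); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

def predict(X):
    return (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()

# Sensitivity probe: nudge each input feature and see how much predicted
# "power" moves -- the predictive model -> optimization step.
base = predict(X).mean()
for i, name in enumerate(["load", "cooling_setpoint"]):
    Xp = X.copy()
    Xp[:, i] += 0.1
    print(name, predict(Xp).mean() - base)
```

The probe should report that "load" moves the prediction far more than the cooling setpoint, matching the planted coefficients; a real system would then adjust the most sensitive controllable factors.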
Here's another example you might like, from the Kaggle cause-effect pairs challenge. The winning model was able to accurately classify whether A->B or B->A with an AUC of >0.8, which is better than some medical tests. Write-ups and code were provided by the top three Kagglers.
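For readers unfamiliar with the metric: AUC is the probability that a randomly chosen positive example is scored above a randomly chosen negative one, so >0.8 is well clear of chance (0.5). A minimal sketch using the rank-pair identity, with made-up scores that have nothing to do with the actual competition:

```python
import numpy as np

def auc(scores, labels):
    """AUC via the Mann-Whitney pairwise-comparison identity."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical direction-classification data: label 1 means the true
# relationship was A->B; score is a model's confidence in A->B.
labels = [1, 1, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.7, 0.6, 0.1]
print(auc(scores, labels))  # -> 0.9375
```

A perfect ranking gives AUC 1.0 and random scores hover near 0.5, which is why 0.8+ on held-out cause-effect pairs is notable.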
I used ideas I learned here to resolve a problem I'd failed at for over 10 years.
I was in a volatile argument. My base rate of regretting arguments with this person is >90% over my entire adult life. I was really confident, perhaps even arrogant in hindsight. Then I remembered to think of our disagreement as travelers comparing independently composed maps against a common territory. I proceeded to draw a causality DAG representing my own thinking. He added some nodes and edges I hadn't considered, but they made sense after I listened to him.
I felt the confidence of my position slipping away in my mind as the murkiness of uncertainty appeared. We could both easily be right, but the deciding information was out of reach for the time being. Our emotional arousal deflated. He felt good, reminded of his career as an engineer using fishbone diagrams.
It was the most pleasant ending I can remember, compared to how our intense disagreements over utterly trivial matters usually end: anger, bitterness, despondency, regret. I used a thinking tool and changed minds, including my own, in a way I didn't anticipate. It felt strange, but good.
Not awesome by most cultural standards, but I think this is the only place where a simple story of changing my mind might be worth sharing.