Map of the AI Safety Community
post by SoerenE · 2017-09-26T08:39:10.136Z · LW · GW · 4 comments
This is a link post for https://aisafety.com/wp-content/uploads/2017/09/AI_Safety_Community_Map_Version__1_0.jpg
I have made a map of the AI Safety Community!
The map is greatly inspired by the map of the rationalist community made by Scott Alexander.
There are bound to be omissions and misunderstandings, and I will be grateful for any corrections. I promise that I will incorporate the feedback into a new version of the map.
The sizes of the cities/dwellings reflect my understanding of how much they contribute to AI Safety. The locations and borders reflect my judgement of who focuses on what, and I had to make some difficult choices.
(Made with Fractal Mapper 8, and crossposted to AISafety.com and r/controlProblem)
I hope that you will find the map useful, and find inspiration to visit new places.
4 comments
comment by the gears to ascension (lahwran) · 2017-09-26T18:56:13.194Z · LW(p) · GW(p)
I downvoted because this doesn't seem on-topic for LessWrong. The posting guidelines say to avoid discussion of community, rather than discussion of more enduring facts.
comment by ZeitPolizei · 2017-09-26T09:04:06.218Z · LW(p) · GW(p)
I'd consider putting FRI closer to Effective Altruism, since they are also concerned with suffering more generally.
Do you have criteria for including fiction? Other relevant fiction I am aware of:
A Bluer Shade of White: About being able to enhance your own intelligence, but less about AI and more about transhumanism.
The Metropolitan Man: Also less about AI, more about existential risk.
Crystal Trilogy: Roughly human-level AI as the main character.
Also, Vernor Vinge is spelled with an 'o'.
↑ comment by SoerenE · 2017-09-26T09:43:44.873Z · LW(p) · GW(p)
Thank you for your comments. I have included them in version 1.1 of the map, where I have swapped FRI and OpenAI/DeepMind, added Crystal Trilogy, and corrected the spelling of Vernor Vinge.