What would an Incandescence about FAI look like?
post by VNKKET · 2011-05-01T20:30:43.049Z · LW · GW · Legacy · 2 comments
This post spoils Greg Egan's Incandescence.
Incandescence is a success story about people who notice an existential threat and avert it using science and engineering. We watch them figure out how gravity works, which is more interesting than it might sound, partly because their everyday experience is full of gravitational effects we don't notice on Earth. At first they do science out of pure curiosity, but it turns into an urgent collective action problem when they discover that their orbit is carrying them toward all sorts of disasters, including a fall into a black hole. The solution, it turns out, is to move some dirt around.
Has anyone considered writing a success story about using Friendly AI to solve an existential threat?
2 comments
comment by anonynamja · 2011-05-02T15:20:10.894Z · LW(p) · GW(p)
The MOPI/Revelation passage comes to mind.
comment by MrMind · 2011-05-02T13:23:58.006Z · LW(p) · GW(p)
In all the stories I've read about an AI dystopia, the proposed solution is to kill the AI: from Disney to The Lawnmower Man to Rucker's Postsingular, and so on. While we know what General Relativity looks like, and so can write the story of a civilization that happens to discover it, we still have little idea of what an FAI would look like, and I don't think we should burden a poor writer with discovering the theory before writing a novel... From here a writer has two choices: use FAI (we can imagine how that looks) to solve some other existential risk, or restrict the UFAI existential risk to some subset where the Friendly part is solvable but not obvious. I think I'll ponder the latter track for a while...