How does the organization "EthAGI" fit into the broader AI safety landscape?
post by Liam Donovan (liam-donovan) · 2019-07-08T00:46:02.191Z · LW · GW
This is a question post.
I was surprised to find an organization working on long-term AI safety research that I had never heard of, and I can't find any specific information on its work or its connections to the AI safety community.
Answers

answer by habryka
I haven't heard of it either, but I'm not too surprised by that. Since Superintelligence came out, I've heard about organizations like this every few months: no one I know works on them, they have no visible output, and they usually disappear after a year or two. I'm not fully sure what the point of them is. Maybe they do some valuable work completely behind the scenes, maybe it's people trying to sell something by jumping on the AI Safety bandwagon, or maybe it's just some enthusiastic individuals who are excited about AI Alignment and want to somehow avoid a gap in their resume while they spend a bunch of time thinking about it.
↑ comment by Liam Donovan (liam-donovan) · 2019-07-08T01:19:53.140Z · LW(p) · GW(p)
As far as visible output goes, the founder did write a (misleading, imho) fictional book about AI risk called "Detonation", which is how I heard of EthAGI. I was curious how an organization like this could form with no connection to "mainstream" AI safety people, but I guess it's more common than I thought.