Why don't we just, like, try and build safe AGI?
post by Sun · 2022-01-01T23:24:34.114Z
Why not? Currently, the main AGI efforts in the world seem to be OpenAI and DeepMind. Neither of them has building safe AGI as its primary aim.
Why don't we just get a few billion dollars together as a community, put everyone really smart somewhere, and just go for it? Before OpenAI or DeepMind does. Sure, lots of research is happening in AI safety, but it doesn't seem like any of this research will find its way to OpenAI or DeepMind anyway. Seems unlikely with incentives as misaligned as they are.
So creating an organization whose primary aim is to build safe AGI, and no other aims at all, funded fully and only by EA money, seems at least worth attempting given how much money this community should have soon.
4 comments
comment by leogao · 2022-01-03T00:46:29.204Z · LW(p) · GW(p)
The first two sentences of the OpenAI charter are as follows:
OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.
comment by ESRogs · 2022-01-03T01:13:26.742Z · LW(p) · GW(p)
Sure, lots of research is happening in AI safety, but it doesn't seem like any of this research will find its way to OpenAI or DeepMind anyway.
Lots of the research is already happening there. And even if you don't like that research, both orgs are super socially connected to the LW and EA communities, so there's a pretty plausible path for work done elsewhere to find its way to them.
comment by ChristianKl · 2022-01-03T11:21:30.302Z · LW(p) · GW(p)
There are no billion-dollar organizations that have one primary aim and no other de facto aims. Once you get that much money and different people with their own interests involved, organizational alignment is usually not perfect.