Are there AI policies that are robustly net-positive even when considering different AI scenarios?
post by Noosphere89 (sharmake-farah) · 2023-04-23T21:46:40.952Z · LW · GW

This is a question post.
Contents
- Answers
  - 2 · Gerald Monroe
- No comments
One thing I've noticed recently is that a lot of proposed AI governance seems to depend on specific models of how AI will play out in society, especially around misalignment or misuse concerns.
Are there AI policies that would be robustly net-positive without being tied to a specific scenario?
Answers
Nobody has answered this?
Centralizing compute into known locations. This would have minimal effect on the development of advanced AI (it adds a little latency to robotics applications, since the racks of compute driving a robot would sit in a nearby data center, with only a little compute onboard the bot), but it has a protective effect in worst-case scenarios.
Suspect there is an out-of-control ASI out there? Send people in person to each location with a paper list and check what each cluster is really doing using out-of-band methods (e.g., plug a monitor and keyboard into the actual hardware and inspect it at the hypervisor or privileged-OS layer).
Or just start pulling breakers.
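The audit step above amounts to reconciling a paper registry of known clusters against what inspectors observe on site. A minimal sketch of that reconciliation, where the registry format, cluster IDs, and workload labels are all illustrative assumptions rather than any real inventory scheme:

```python
def audit(registry, observations):
    """Compare a registry of declared cluster workloads against
    on-site observations. Returns (mismatched, uninspected):
    clusters whose observed workload differs from what was declared,
    and registered clusters that nobody has inspected yet."""
    mismatched = [
        cluster_id
        for cluster_id, declared in registry.items()
        if cluster_id in observations and observations[cluster_id] != declared
    ]
    uninspected = [
        cluster_id for cluster_id in registry if cluster_id not in observations
    ]
    return mismatched, uninspected


# Hypothetical example: dc-02's observed workload doesn't match the registry,
# so it's a candidate for a closer look (or for pulling breakers).
registry = {"dc-01": "protein folding", "dc-02": "weather simulation"}
observations = {"dc-01": "protein folding", "dc-02": "unknown training run"}
print(audit(registry, observations))  # → (['dc-02'], [])
```

The point of the sketch is just that centralization makes this check cheap: with a known, finite list of sites, a discrepancy list is enumerable by hand.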
No comments