Forecasting Questions: What do you want to predict on AI?
post by Nathan Young · 2023-11-01 · This is a question post.
Lots of people are saying that AI regulation is great or terrible, but I am not seeing a lot of concrete forecasts.
What would a world where regulation goes really well look like?
What would a world where regulation goes really badly look like?
What other things would you like forecasted around AI in general?
Answers
Answer by Dumbledore's Army

First, I just wanted to say that this is an important question, and thank you for getting people to produce concrete suggestions.
Disclaimer: I'm not a computer scientist, so I'm approaching the question from the point of view of an economist. As such, I found it easier to come up with examples of bad regulation than of good regulation.
Some possible categories of bad regulation:
1. It misses the point.
- Example: a regulation only focused on making sure that the AI can’t be made to say racist things, without doing anything to address extinction risk.
- Example: a regulation that requires AI developers to employ ethics officers, risk-management staff, or similar, without any requirement that they be effective. (Something similar to cyber-security today: the demand is that companies devote legible resources to addressing the problem so they can't be sued for negligence, not that those resources are used effectively to reduce societal risk.)
NB: I am implicitly assuming that a government which misses the point will pass bad regulation and then stop because they feel that they have now addressed ‘AI safety’. That is, passing bad legislation makes it less likely that good legislation is passed.
2. It creates bad incentives.
- Example: from 2027, the government will cap maximum permissible training compute at whatever the maximum used by that date was. Companies are incentivised to race to do the biggest training runs they can before that date.
- Example: restrictions or taxes on compute apply to all AI companies unless they're working on military or national-security projects. Companies are incentivised to classify as much of their research as possible as military, meaning the research still goes ahead, but it becomes much harder for independent researchers to assess safety, because the system now carries a military security classification.
- Example: the regulation makes AI developers liable for harms caused by AI but makes an exception for open-source projects. There is now a financial incentive to make models open-source.
3. It is intentionally accelerationist, without addressing safety.
- Example: a government that wants to encourage a Silicon Valley-type cluster in its territory offers tax breaks for AI research over and above existing tax credits. Result: it is now paying people to go into capabilities research, so there is a lot more capabilities research.
- Example: industrial policy, or supply-chain friendshoring, that results in a lot of new semiconductor fabs being built (this is an explicit aim of America's CHIPS Act). The result is a global glut of chip capacity, and training AI ends up a lot cheaper than it would in a free-market situation.
Answer by Nathan Young

Here is one I may suggest to Metaculus or real-money markets, suggested by Twitter user JGalt.
No comments