Lessons from the FDA for AI
post by Remmelt (remmelt-ellen) · 2024-08-02T00:52:57.848Z · LW · GW · 4 comments
This is a link post for https://ainowinstitute.org/wp-content/uploads/2024/08/20240801-AI-Now-FDA.pdf
4 comments
comment by ChristianKl · 2024-08-04T07:59:28.402Z · LW(p) · GW(p)
The FDA has a risk model and works to prevent the risks in that model. When a whistleblower came to them about a risk that wasn't really in their risk model, as happened in the Ranbaxy saga, the FDA was awful at reacting.
The opioid epidemic is another example where the FDA reacted badly to a risk outside of what it had initially considered.
Given that we don't have a good understanding of AI risk, building an organization like the FDA, one that's highly restrictive about certain risks but bureaucratic and poor at tackling others, would be bad.
comment by the gears to ascension (lahwran) · 2024-08-02T09:20:23.115Z · LW(p) · GW(p)
I like the idea of an "FDA but not awful" for AI alignment. However, the FDA is pretty severely captured and inefficient, and has pretty bad performance compared to a randomly selected WHO-listed authority. That may not be the right comparison set, since you'd ideally want to filter to only authorities in countries with significant drug development research industries, but I've heard the FDA is bad even by that standard. This is mentioned in the article, but only briefly at the end.
comment by habryka (habryka4) · 2024-08-02T02:27:04.676Z · LW(p) · GW(p)
I am confused. It seems like this report does not acknowledge that the FDA should, by most reasonable perspectives, be considered a pretty major failure responsible for enormous harms to economic productivity and innovation.
Like, I think it's reasonable to disagree with that take and to think the FDA is good, but somehow completely ignoring the question of FDA efficacy seems kinda crazy.
comment by Remmelt (remmelt-ellen) · 2024-08-06T03:12:46.235Z · LW(p) · GW(p)
The report focuses on preventing harms of technology to the people using or affected by that tech.
It uses the FDA's mandate of premarket approval and other processes as examples of what could be used for AI.
Restrictions on economic productivity and innovation are a fair point of discussion. I have my own views on this – generally, I think the negative asymmetry around new scalable products being able to do massive harm gets neglected by the market. I'm glad the FDA exists to counteract that.
The FDA's slow response to ramping up COVID vaccines during the pandemic is questionable though, as one example. I get the sense there are a lot of problems with bureaucracy and also industry capture at the FDA.
The report does not focus on that though.