Request for comments/opinions/ideas on safety/ethics for use of tool AI in a large healthcare system.

post by bokov (bokov-1) · 2024-05-24T20:53:14.613Z · LW · GW

This is a question post.


I know somebody at a large healthcare system who is working on an AI roadmap/policy. He has an opportunity to do things right from the start-- on a local level but with tangible real-world impact. 

The primary types of AI we are looking at are LLMs (for grinding through repetitive natural language tasks) and more traditional predictive models trained on diagnostic imaging or structured numeric data. These will be mostly provided by EHR vendors and third-party vendors but possibly with some in-house development where it makes sense to do so.

I value this community's thoughts regarding:

Things that are already not on the table for legal and common-sense reasons:

I am writing this as a private individual. My views and statements do not reflect those of my employer or collaborators. 

Thank you.

Answers

answer by PhilosophicalSoul · 2024-05-24T21:52:02.844Z · LW(p) · GW(p)

I genuinely think it could be one of the most harmful and dangerous ideas known to man. I consider it to be a second head on the hydra of AI/LLMs. 

Consider the fact that we already have multiple scandals of fake research coming from prestigious universities (papers that were then referenced by other papers, and so on). This builds an entire tree of fake knowledge which, if left unaudited, gets treated as a legitimate epistemic foundation upon which to teach future students, scholars and practitioners.

Now imagine applying this to something like healthcare. Instead of human eyes that scan over the information, absorb it and adapt accordingly (and while humans do make mistakes, they're usually for reasons other than pre-programmed generalisations), we have an AI/LLM. Such a system may be correct 80% of the time in analysing whatever cancer growth or disease it has been trained on over millions of generations. What about the other 20%?
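To put rough numbers on why that remaining 20% matters more than it sounds: at low prevalence, even a model that is "80% accurate" in the sensitivity/specificity sense produces mostly false alarms. A back-of-the-envelope sketch with made-up numbers (my own illustration, not anything from an actual deployment):

```python
# Hedged illustration: how a headline "80% accurate" figure interacts with
# disease prevalence. All numbers here are invented for the example.
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(disease | positive flag) via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A model with 80% sensitivity and 80% specificity screening for a condition
# that only 2% of patients actually have:
print(positive_predictive_value(0.80, 0.80, 0.02))  # ~0.075, i.e. >90% of flags are false alarms
```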

What implications does this have for insurance claims where an AI makes a presumption about the degree of risk to a person's health built on flawed data? What impact does this have on triage? Who takes responsibility when the AI makes a mistake? (And I know of no well-regarded legal practitioner who has yet substantively tackled this consciousness problem in law.)

It's also pretty clear that AI companies don't give a damn about privacy. They may claim to, but they don't. At the end of the day, these companies are fortified behind oppressive terms and conditions, layers of technicalities, and huge all-star lawyer teams that cost hundreds of thousands of dollars at minimum to defeat. Accountability is an ideal put beyond reach by strong-arm litigation against the 'little guy', the average citizen.

I'm not shitting on your idea. I'm merely outlining the reality of things at the moment.

When it comes to AI: what can be used for evil will be used for evil.

comment by bokov (bokov-1) · 2024-05-24T22:52:04.974Z · LW(p) · GW(p)

I appreciate your feedback and take it in the spirit it is intended. You are in no danger of shitting on my idea because it's not my idea. It's happening with or without me.

My idea is to cast a broad net looking for strategies for harm reduction and risk mitigation within these constraints.

I'm with you that machines practising medicine autonomously is a bad idea, and so are the doctors: idealistically, because they got into this work to help people, and cynically, because they don't want to be rendered redundant.

The primary focus looks like workflow management, not diagnosis: e.g. reducing the amount of time various requests sit in a queue by figuring out which humans are most likely to be the ones who should be reading them.
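To make that concrete, this is roughly the shape of thing I have in mind (a sketch under assumed data fields and a baseline model choice, not any actual vendor system): train a plain text classifier on the historical log of which person or team ended up handling each request, and use it only to suggest a likely reader.

```python
# Minimal sketch of the routing idea. The existence of a historical log of
# request texts and their eventual handlers is an assumption.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_router(request_texts, handlers):
    """request_texts: free-text requests; handlers: who actually resolved each one."""
    router = make_pipeline(TfidfVectorizer(min_df=2), LogisticRegression(max_iter=1000))
    router.fit(request_texts, handlers)
    return router

# The model only proposes a likely reader; a human still owns the assignment.
# router = train_router(past_texts, past_handlers)
# print(router.predict(["refill request, prior authorisation needed"])[0])
```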

There's also predictive modelling, e.g. which patients are at elevated risk of bad outcomes, or how many nurses to schedule for a particular shift, though these don't strictly need AI/ML and long predate it.
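For the risk-stratification case, the baseline really is decades-old statistics. A sketch, with hypothetical column names and outcome label, deliberately using plain logistic regression rather than anything exotic:

```python
# Sketch of the "elevated risk" use case on structured EHR-style data.
# Column names and the outcome label are assumptions, not a real schema.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def fit_risk_model(df: pd.DataFrame):
    features = ["age", "prior_admissions", "a1c", "egfr"]   # assumed columns
    X_train, X_test, y_train, y_test = train_test_split(
        df[features], df["readmitted_30d"], test_size=0.2, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
    return model
```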

Then there are auto-suggestor/auto-reminder use-cases: "You coded this patient as having diabetes without complications, but the text notes suggest diabetes with nephropathy, are you sure you didn't mean to use that more specific code?"
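At its most boring, that kind of suggester is just a lookup from code/keyword pairs to a question for the coder; the point is that it asks, it never changes anything on its own. A toy sketch (the ICD-10 mapping shown is illustrative, not a real coding ruleset):

```python
# Toy sketch of the auto-suggestor: compare the structured code against the
# note text and surface a question, never an automatic change.
SUGGESTIONS = {
    # (current code, keyword in note) -> more specific code to ask about
    ("E11.9", "nephropathy"): "E11.21",   # diabetes w/o complications -> w/ nephropathy
}

def suggest_codes(current_code: str, note_text: str) -> list[str]:
    note = note_text.lower()
    return [
        f"Note mentions '{kw}'; did you mean {specific} rather than {code}?"
        for (code, kw), specific in SUGGESTIONS.items()
        if code == current_code and kw in note
    ]

# suggest_codes("E11.9", "T2DM with early nephropathy, continue lisinopril")
```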

So, at least in the short term, AI apps will not have the opportunity to screw up in the immediately obvious ways like incorrect diagnoses or incorrect orders. It's the more subtle screw-ups that I'm worried about at the moment.
