AI in hospitals works—if we get the guardrails right

As AI rapidly transforms healthcare, a leading medical ethicist weighs how to regulate thoughtfully without erecting unnecessary hurdles.

Why it matters: Responsible regulation of healthcare AI can ensure its benefits are equitably distributed while mitigating risks, ultimately improving patient outcomes and supporting healthcare workers.

Artificial intelligence is moving into hospitals faster than regulation can keep up. The potential is real: AI can spot cancers in scans, predict patient deterioration, flag drug interactions. But without thoughtful guardrails, it risks widening the gap between well-resourced hospitals and struggling ones—and between patients who benefit and those left behind.

In September, the Joint Commission (which accredits most U.S. hospitals) and the Coalition for Health AI released the first major recommendations for implementing AI safely in medical settings. It's a necessary start. But according to I. Glenn Cohen, who directs Harvard Law School's health law center, the current approach has a critical flaw: it places the burden almost entirely on individual hospitals to validate and monitor AI systems themselves.

The Cost Problem

Properly vetting a complex AI algorithm can cost $300,000 to $500,000. That's manageable for a major academic medical center. For a small community hospital operating on tight margins, it's prohibitive. The result is predictable: cutting-edge AI ends up concentrated in wealthy systems, while lower-resource hospitals either fall behind or skip the oversight entirely.

Cohen and colleagues, writing in the Journal of the American Medical Association, argue that some form of centralized regulation is necessary—particularly for high-risk applications like algorithms that influence treatment decisions or mental health chatbots that interact directly with patients. The question isn't whether to regulate, but how.

Full federal review of every medical AI product would be slow and expensive, potentially stalling innovation. But leaving it entirely to individual hospitals creates a patchwork where standards vary wildly and smaller institutions can't afford to participate. What's needed is something in between: a system that sets clear standards without pricing out community hospitals.

Cohen noted another risk that is often overlooked in the race to deploy AI: ethics gets left behind. When speed and competitive pressure dominate, it's easy to move fast and skip the harder questions about bias, equity, and whose data trained the system. An algorithm trained mostly on patients from wealthy hospitals might perform differently—sometimes dangerously—on patients from other backgrounds.

The Joint Commission's guidelines are reasonably strong, requiring hospitals to notify patients about AI use, get their consent, monitor for accuracy, and continually test for bias. The problem isn't the standards themselves. It's that many hospitals simply can't afford to meet them.

Cohen remains optimistic about what medical AI can achieve. But that optimism comes with a condition: the system has to work for everyone, not just the hospitals that can afford it. A more centralized approach—shared standards, shared data, distributed resources—could democratize access to these tools. Without it, AI in healthcare risks becoming another way that wealth determines who gets the best care.

Originally reported by Harvard Gazette · Verified by Brightcast
