Artificial intelligence is moving into hospitals faster than regulation can keep up. The potential is real: AI can spot cancers in scans, predict patient deterioration, flag drug interactions. But without thoughtful guardrails, it risks widening the gap between well-resourced hospitals and struggling ones—and between patients who benefit and those left behind.
In September, the Joint Commission (which accredits most U.S. hospitals) and the Coalition for Health AI released the first major recommendations for implementing AI safely in medical settings. It's a necessary start. But according to I. Glenn Cohen, who directs Harvard Law School's health law center, the current approach has a critical flaw: it places the burden almost entirely on individual hospitals to validate and monitor AI systems themselves.
The Cost Problem
Proper vetting of a complex AI algorithm costs $300,000 to $500,000. That's manageable for a major academic medical center. For a small community hospital operating on tight margins, it's prohibitive. The result is predictable: cutting-edge AI ends up concentrated in wealthy systems, while lower-resource hospitals either fall behind or skip the oversight entirely.
Cohen and colleagues, writing in the Journal of the American Medical Association, argue that some form of centralized regulation is necessary—particularly for high-risk applications like algorithms that influence treatment decisions or mental health chatbots that interact directly with patients. The question isn't whether to regulate, but how.
Full federal review of every medical AI product would be slow and expensive, potentially stalling innovation. But leaving it entirely to individual hospitals creates a patchwork where standards vary wildly and smaller institutions can't afford to participate. What's needed is something in between: a system that sets clear standards without pricing out community hospitals.
Cohen noted another risk often overlooked in the race to deploy AI: ethics gets left behind. When speed and competitive pressure dominate, it's easy to move fast and skip the harder questions about bias, equity, and whose data trained the system. An algorithm trained mostly on patients from wealthy hospitals might perform differently—sometimes dangerously—on patients from different backgrounds.
The Joint Commission's guidelines are reasonably strong, requiring hospitals to notify patients about AI use, get their consent, monitor for accuracy, and continually test for bias. The problem isn't the standards themselves. It's that many hospitals simply can't afford to meet them.
Cohen remains optimistic about what medical AI can achieve. But that optimism comes with a condition: the system has to work for everyone, not just the hospitals that can afford it. A more centralized approach—shared standards, shared data, distributed resources—could democratize access to these tools. Without it, AI in healthcare risks becoming another way that wealth determines who gets the best care.