OpenAI just rolled out ChatGPT Health, a feature that lets you feed the chatbot your medical records and fitness app data so it can spot patterns in your health. It won't diagnose you or prescribe anything — that's not what it's built for. Instead, it's meant to help you understand what's happening with your body and walk into your doctor's office with better questions.
The timing makes sense. People are already asking ChatGPT about their health constantly — 40 million queries a day, with one in twenty chatbot messages about medical stuff. Doctors are using AI too, mostly for the grinding work of documentation and care planning. So OpenAI is essentially formalizing what was already happening in the shadows.
What ChatGPT Health Actually Changes
For someone waiting six hours in an ER or living somewhere without easy access to a doctor, having a tool that can say "here's what your data suggests is going on" feels genuinely useful. You're not getting a diagnosis. You're getting a pattern-spotter that knows how to read your numbers.
OpenAI partnered with b.well, a digital health platform that specializes in handling medical data, to connect your records to the chatbot. The partnership comes with actual privacy guardrails — and you can always opt out of sharing your data entirely if you'd rather not.
But there's a real gap between what the tool is designed to do and what people might actually do with it. Trust is the problem. A chatbot that sounds confident and cites your actual health history can feel like medical authority, even when it's not. And if someone's data gets breached — which happens — your medical information wouldn't have HIPAA protection the way it would at a hospital, because OpenAI isn't a healthcare provider covered by that law. That's a meaningful difference.
The Bigger Picture
What's less talked about: running AI systems at this scale costs energy and water. The data centers powering ChatGPT use serious amounts of both. That's not an argument against the tool, but it's worth knowing what the infrastructure costs.
Claude, Anthropic's chatbot, is getting a similar health feature. This is becoming standard. The real test isn't whether AI can read health data — it clearly can. It's whether people will use it as a complement to actual medical care rather than a replacement. That part is still entirely up to us.