In Kampala, Uganda, calls to a hospital helpline are becoming training data for something that might change who gets help and who doesn't. Researchers at Makerere University are listening to thousands of conversations with people reaching out about depression, anxiety, and thoughts of self-harm—not to treat them directly, but to teach an algorithm to recognize the patterns of distress that hide beneath everyday language.
The problem is stark. One in ten people across Africa lives with a mental health condition, yet the continent has roughly one psychiatrist for every 500,000 people. In some countries, there are none at all. Clinics exist in capital cities, but most people live far away, and the stigma around mental illness keeps many from seeking help even when care is within reach. A chatbot that works in Swahili or Luganda—that doesn't require you to name your illness or sit across from a stranger—could reach people who would never walk through a clinic door.
The Algorithm Learning to Listen
What makes this project different from the chatbots already flooding the market is the training ground. The Butabika hospital helpline in Kampala has been fielding calls for years. People call when they're in crisis, when they're confused about what they're feeling, when they don't have the words. They might say "my head is too heavy" or "I can't sleep and my thoughts won't stop." The algorithm is learning to recognize these phrases—the real language of distress—rather than waiting for someone to say "I have depression." This matters because in many African contexts, people don't think in clinical terms. They think in the language their grandmother used, in metaphors that make sense in their community.
Once trained, the chatbot could do something the existing mental health system simply cannot: be available at 2 a.m. on a Tuesday in a rural town, in a language the person actually speaks, without judgment. It could flag when someone needs urgent care and connect them to a human clinician. It could give the overworked doctors and therapists who do exist a chance to focus on the most complex cases instead of triaging everyone who calls.
Regulators in South Africa and the UK are already working on frameworks to make sure AI in mental health doesn't cause harm—there are real risks, from the algorithm confidently giving wrong advice to potentially triggering someone in crisis. But researchers involved in the project argue the risks of doing nothing are worse. About a billion people worldwide live with a mental health condition. Most of them have no access to treatment. An imperfect tool that reaches them might save lives.
The Makerere team isn't claiming to have solved mental health. They're building a bridge across a gap that's too wide for the existing system to cross. What happens next depends on whether regulators move fast enough, whether the technology actually works as hoped, and whether communities in Africa get a say in how it's deployed—not just whether it's technically possible.