Right now, local governments are experimenting with AI to streamline everything from permit processing to benefits claims. The potential is real — but so are the risks. A panel of city leaders and tech experts recently laid out what's actually required to use AI in ways that don't accidentally harm the people it's meant to serve.
The core tension is simple: AI is a tool, and tools can be misused — or used thoughtlessly. "The burden of making it ethical is on the human in the loop," says Joel Natividad, co-CEO of datHere. That means the people choosing, training, and deploying the AI have to own the outcomes. It's not about the algorithm being "fair" in some abstract sense. It's about whether the data fed into it is representative, whether the results can be traced, and whether someone can actually explain what happened when things go wrong.
Start with the problem, not the technology
Cities often get this backwards. They see AI as a shiny solution and retrofit a problem to it. Instead, Jaime Gracia, director of corporate affairs for The Wolverine Group, says the conversation should start here: What are we actually trying to solve? Processing benefits claims faster? Flagging errors before they reach someone's desk? Those are different problems that need different approaches.
Once you know what you're solving for, you need a vendor who can show their work. "If I can't explain to you how that AI was used, how the AI works, then that protest more than likely will get sustained," Gracia says. This isn't theoretical. A Veterans Affairs project illustrates the stakes perfectly: an AI that rapidly denied claims would technically increase processing speed, but it would shift the burden onto vulnerable people. An AI that flags errors so claims get approved correctly the first time solves the same efficiency problem without the collateral damage.
The disability problem most AI systems miss
Here's something most AI developers don't think about: their training data usually doesn't include people with disabilities. "Most AI systems work broadly on averages," says Owen Barton, CTO of CivicActions. "They're going to try to come up with the solution that best fits the training data that it's seen, and most of the training data that it's seen will not be about people with disabilities." This means an AI trained on "typical" usage patterns will systematically fail for people who don't fit that average. The fix is straightforward but rarely done: put a human in the loop — specifically, someone from the disabled community — to train and test the system.
Danielle Mouw, a procurement analyst with the General Services Administration, points to the deeper structural issue: ethics can't be a patch you add later. "Ethics should become an embodiment of the technology, as opposed to something to add to it later." That means building ethical metrics into a project from the very start.
Ultimately, the panelists agree that the question isn't whether AI itself is ethical. It's whether the people designing, governing, and overseeing these tools are making decisions that actually serve the public. That's harder than it sounds — and it's entirely on us.