Cities are learning to use AI without losing their humanity

Can AI be ethical? No, say experts. But neither can a car: "When there's a vehicle crash, no one says a car was being unethical," notes Huntridge CTO Bennett Gebken.

By Elena Voss, Brightcast
2 min read
Las Vegas, United States

Why it matters: As cities increasingly adopt AI for public services, establishing ethical guardrails now is critical to preventing harm to vulnerable populations. The challenge isn't whether AI itself is "ethical," but whether the humans deploying it take responsibility for representative data, explainable outcomes, and accountability when systems fail. That responsibility is the difference between efficiency gains that help residents and automated decisions that shift burdens onto those least able to absorb them.

Right now, local governments are experimenting with AI to streamline everything from permit processing to benefits claims. The potential is real — but so are the risks. A panel of city leaders and tech experts recently laid out what's actually required to use AI in ways that don't accidentally harm the people it's meant to serve.

The core tension is simple: AI is a tool, and tools can be misused — or used thoughtlessly. "The burden of making it ethical is on the human in the loop," says Joel Natividad, co-CEO of datHere. That means the people choosing, training, and deploying the AI have to own the outcomes. It's not about the algorithm being "fair" in some abstract sense. It's about whether the data fed into it is representative, whether the results can be traced, and whether someone can actually explain what happened when things go wrong.

Start with the problem, not the technology

Cities often get this backwards. They see AI as a shiny solution and retrofit a problem to it. Instead, Jaime Gracia, director of corporate affairs for The Wolverine Group, says the conversation should start here: What are we actually trying to solve? Processing benefits claims faster? Flagging errors before they reach someone's desk? Those are different problems that need different approaches.

Once you know what you're solving for, you need a vendor who can show their work. "If I can't explain to you how that AI was used, how the AI works, then that protest more than likely will get sustained," Gracia says. This isn't theoretical. A Veterans Affairs project illustrates the stakes: an AI that rapidly denied claims would technically increase processing speed, but it would shift the burden onto vulnerable people. An AI that instead flags errors so claims get approved correctly the first time solves the same efficiency problem without the collateral damage.

The disability problem most AI systems miss

Here's something most AI developers don't think about: their training data usually doesn't include people with disabilities. "Most AI systems work broadly on averages," says Owen Barton, CTO of CivicActions. "They're going to try to come up with the solution that best fits the training data that it's seen, and most of the training data that it's seen will not be about people with disabilities." This means an AI trained on "typical" usage patterns will systematically fail for people who don't fit that average. The fix is straightforward but rarely done: put a human in the loop — specifically, someone from the disabled community — to train and test the system.

Danielle Mouw, a procurement analyst with the General Services Administration, points to the deeper structural issue: ethics can't be a patch you add later. "Ethics should become an embodiment of the technology, as opposed to something to add to it later." This means building ethical metrics into the project from the start, not as an afterthought.

Ultimately, the panelists agree that the question isn't whether AI itself is ethical. It's whether the people designing, governing, and overseeing these tools are making decisions that actually serve the public. That's harder than it sounds — and it's entirely on us.


Originally reported by Smart Cities Dive · Verified by Brightcast
