
Superintelligence isn't coming—it's already here in your workplace

Elena Voss
·2 min read·Cambridge, United States

Why it matters: This perspective encourages a more collaborative and symbiotic approach to developing AI systems that can benefit humanity as a whole, rather than fearing a dystopian machine takeover.

The conversation around artificial superintelligence typically splits the world in two: those who see it as humanity's salvation, and those convinced it's our extinction. Microsoft researcher E. Glen Weyl refuses both frames. His argument is stranger and more useful: superintelligence already exists, woven into the systems we live within every day.

Think about how a corporation coordinates thousands of people toward a single goal. Or how a democracy aggregates millions of individual preferences into collective decisions. Or how a religion binds communities across centuries. These aren't digital systems in the way we usually imagine AI—they're human systems that exhibit something we might call superintelligence. They accomplish things no single person could.

The danger, Weyl argues, emerges when we separate digital intelligence from people entirely. When an algorithm operates in isolation, it loses something critical: feedback. It can't sense whether it's drifting off course. It can't correct itself through the friction of real participation. More practically, it becomes unproductive. A factory robot that doesn't understand the broader production process will optimize for the wrong thing. An algorithm that doesn't know how humans actually work will solve the wrong problem.


Japan's Kaizen system offers a concrete example. Factory workers weren't just executing orders—they received targeted information about the entire production process. This transparency created a feedback loop. Workers could see how their small innovations rippled outward. The result wasn't faster robots; it was smarter humans embedded in smarter systems.

Weyl calls this "common knowledge." Not information that exists somewhere in the system, but information everyone shares, understands, and knows that others understand. It's the opposite of what social media produces. Algorithms there amplify division and polarization precisely because they fragment common ground. Everyone lives in a different informational reality.

Taiwan has begun experimenting with a different approach, using a tool called Polis. When someone posts something on social media, Polis shows them where their view clusters with others—and crucially, where seemingly opposite perspectives actually overlap. It's not about changing minds. It's about restoring the common knowledge that usually gets lost in digital spaces.
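The mechanics behind this can be sketched in code. A minimal, hypothetical version of Polis-style clustering: participants vote agree/disagree/pass on short statements, participants are split into opinion groups along the first principal component of the vote matrix, and the "overlap" the article describes is the set of statements both groups lean toward agreeing with. The data and variable names here are invented for illustration; the real Polis pipeline is more elaborate.

```python
import numpy as np

# Hypothetical vote matrix: rows are participants, columns are
# statements; +1 = agree, -1 = disagree, 0 = pass. The values are
# made up to show two clearly opposed groups that still share
# common ground on statement 0.
votes = np.array([
    # s0  s1  s2  s3
    [ 1,  1, -1,  1],   # participant 0
    [ 1,  1, -1,  1],   # participant 1
    [ 1, -1,  1,  0],   # participant 2
    [ 1, -1,  1, -1],   # participant 3
    [ 1, -1,  1,  1],   # participant 4
])

# One-component PCA: center the matrix, take the first
# right-singular direction, and project each participant onto it.
centered = votes - votes.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projection = centered @ vt[0]

# Split participants into two opinion groups by projection sign.
# (Centered projections sum to zero, so both groups are nonempty.)
group_a = votes[projection >= 0]
group_b = votes[projection < 0]

# A statement is "common ground" when both groups, on average,
# lean toward agreement -- overlap across the opinion divide.
consensus = (group_a.mean(axis=0) > 0) & (group_b.mean(axis=0) > 0)
print("consensus statements:", np.flatnonzero(consensus).tolist())
```

With this toy data, statements 1 and 2 split the two groups cleanly, yet statement 0 surfaces as shared ground, which is exactly the kind of signal the article says social feeds normally discard.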

Weyl rejects the framing of an "East versus West" AI competition, too. The real question isn't whose superintelligent system is stronger. It's which systems can actually listen. Which ones can monitor their own functioning and adjust when incoming information suggests they're wrong. Which ones remain integrated with human judgment instead of replacing it.

The shift is subtle but foundational: superintelligence isn't something to fear arriving from outside. It's something we build by making sure our most powerful systems—digital or otherwise—stay connected to the feedback of human participation. That's not a technological problem waiting for a solution. It's a design choice we make right now.



Originally reported by Harvard Gazette · Verified by Brightcast
