On December 10th, Australia became the first country to ban children under 16 from holding social media accounts. The move affects TikTok, Instagram, Facebook, YouTube, Snapchat, Threads, and others—a sweeping intervention that's already reshaping how the world thinks about kids and digital platforms.
The decision rests on research that's hard to ignore. A government-commissioned study found 96% of Australian children aged 10 to 15 use social media, and 70% have encountered harmful content: misogyny, violence, material promoting suicide and eating disorders. One in seven reported being groomed online. More than half experienced cyberbullying. The platforms themselves, the government argued, are designed to keep kids scrolling—algorithmically engineered to hook young brains with content that damages their wellbeing.
How It Works
Children under 16 can no longer create or maintain accounts. Existing ones must be deactivated. (They can still watch passively—the ban targets participation, not consumption.) YouTube Kids, WhatsApp, and Google Classroom are exempt. So are online games like Roblox and Discord, though some have quietly added age checks anyway.
The burden of enforcement falls entirely on platforms. They face fines of up to A$49.5 million (about US$32 million) if they fail to take "reasonable steps" to keep under-16s out. This means moving beyond the honor system of self-declared birthdays. Companies are now scrambling to implement age-assurance technology: government ID checks, facial recognition, voice analysis, or algorithms that infer age from behavior patterns.
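To see why self-declared birthdays amount to an honor system, and how platforms might layer the signals the law now demands, here is a minimal, purely illustrative sketch. The `AgeSignals` structure, the `allow_account` function, and all thresholds are assumptions for illustration; no platform's actual system is described in the source.

```python
from dataclasses import dataclass
from typing import Optional

MIN_AGE = 16  # the Australian threshold

@dataclass
class AgeSignals:
    declared_age: int                    # self-reported birthday (easily faked)
    inferred_age: Optional[float] = None # hypothetical behavioural estimate
    id_verified: bool = False            # passed an ID or facial-age check

def allow_account(signals: AgeSignals) -> bool:
    """Illustrative gate: may this user hold an account under the ban?

    A verified ID check is treated as authoritative. Otherwise the
    declared age must clear the threshold AND any behavioural estimate
    must not strongly contradict it.
    """
    if signals.id_verified:
        return signals.declared_age >= MIN_AGE
    if signals.declared_age < MIN_AGE:
        return False
    # Honor-system declaration: reject if the (hypothetical) behavioural
    # model suggests the user is well under the claimed age.
    if signals.inferred_age is not None and signals.inferred_age < MIN_AGE - 1:
        return False
    return True
```

The sketch makes the enforcement gap concrete: with only a declared birthday, the first branch never fires and a 14-year-old typing "18" passes, which is exactly the loophole the law's "reasonable steps" language targets.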
The Messy Reality
The law is well-intentioned and imperfect—which is partly why it matters. Facial recognition has a documented problem with accuracy across age groups and ethnicities. Privacy advocates worry about the biometric data being collected and stored. Tech companies, including Google, called the rollout "rushed" and warned it could backfire by removing safety tools kids actually rely on.
There are loopholes too. Dating apps, gaming platforms, and AI chatbots escaped the ban despite documented issues with predatory behavior. And the teenagers most affected seem unimpressed—they're already planning workarounds: fake accounts, shared family profiles, VPNs.
Yet Australia's government made a calculation: imperfect action beats paralysis. While other democracies have debated, studied, and delayed, Australia chose to move. The law is a test case, and the world is watching.
What happens next will depend less on the technology than on whether platforms genuinely adapt their business models—or simply find new ways to work around the rules. Either way, this is no longer a question only Australia is asking.