The Social Media Ban: Law vs Reality (a mum’s view)
Australia’s social media ban looks good on paper. But real life reveals gaps that maybe only a parent can see.
You’ll be pleased to know the social media ban worked.
People were blocked.
And by ‘people’, I don’t mean my 13-year-old or any of her vast network of friends, who continue scrolling freely despite the ban.
I mean me.
Snapchat flagged my adult account as an age violation and promptly shut it down.
Australia’s Online Safety Amendment (Social Media Minimum Age) Act 2024 came into force on 10 December 2025 — a lauded world-first law banning Australians under 16 from holding accounts on major social media platforms. Since its launch, Snapchat says it has blocked more than 415,000 accounts it believes belong to under-16s. Across all platforms, around 4.7 million accounts were reportedly restricted or removed.
These numbers sound impressive, but real life tells a different story. Many under-16s quickly found workarounds: fake dates of birth, VPNs, alternative apps and swapping devices. Some parents even enabled them because teen social worlds are so tightly built around these platforms.
The obvious loopholes
From the very first debate, the law contained glaring omissions. Platforms warned of significant technical gaps in fulfilling the mandate. As a result, the AI-driven checks misfired: many kids slipped through, while others (like me) were wrongly banned. Many under-16s created their profiles with fake information to begin with, so how could AI ever verify truth when honesty was never required? Meanwhile, the government has left kids, parents and itself entirely unaccountable. Only the platforms face penalties (up to A$49.5 million) for failing to take ‘reasonable steps’ to prevent minors from having accounts, and yet the Act doesn’t define what those steps must be.
It’s a policy that assumes tech alone can fix behavioural and social problems. In reality, it’s like locking your front door to prevent burglaries but leaving your windows wide open.
A cultural shift or hollow gesture?
Despite its flaws, the Act didn’t trigger spiralling backlash or a policy meltdown, as critics predicted. Classrooms didn’t descend into chaos either. Perhaps the rollout was intentionally cautious, prioritising education and communication over aggressive enforcement - and that’s not nothing.
But the absence of failure is not the same as the presence of impact. While the law didn’t spark disruption, it also didn’t solve the problem. It’s just given us the illusion of action without any real substance.
The question is: are kids safer or are the risks simply displaced? Banning accounts doesn’t stop teens from seeing harmful content that’s public, nor does it prevent them from migrating to other platforms not covered by the law.
How the ban could actually work
1. All platforms must be accountable, but in a meaningful way.
Identity verification technology already exists, and it works. We use it for passports, police clearances and Working with Children Checks (WWCC). It involves taking a live, date-stamped image of yourself while holding your ID next to your face. If there’s doubt, you verify in person. Why is this considered ‘too hard’ for social media, yet routine everywhere else?
Rather than just blocking accounts, platforms should also be responsible for harm reduction, by limiting how content is served to young users. They could curate feeds, implement hard time limits and redesign addictive interfaces, much like gambling laws regulate play and enforce time-outs.
2. Parents must be empowered and supported
No hammer of legislation can replace real parental involvement because, ultimately, rules start and end in the home. But parents need government support for the rules to work. Otherwise, the burden falls on the few who try to comply while the majority face no consequences at all. What’s needed is education and frameworks that empower families to enforce limits together.
3. The government must provide the infrastructure
Responsibility must be shared. If platforms reduce harm and parents enforce limits, the government needs to build the infrastructure that makes compliance realistic. This policy should have included robust age verification from day one and closer cooperation with parents, schools and communities.
So, what now?
The social media ban is about more than protecting kids’ mental health or shielding them from online bullying. What is often overlooked is the effect on attention.
Right now, kids are consuming ‘digital junk food’ at pace, swiping faster than adults can process, streaming TV while gaming and half-listening to conversations. It’s no wonder attention spans are collapsing, and kids are increasingly being diagnosed with ADHD. What we’re seeing isn’t a sudden epidemic - it’s chronic overstimulation in action.
I don’t doubt the good intentions behind the ban. But intentions aren’t enough. If the law is to do more than make headlines, it needs revision and a real ecosystem, not just box‑ticking. If we care about children’s mental health, we need real-world policies.
And if we really want to address the problem, we need to speak to the experts first.
And by ‘experts’, I mean mums, living it in real time.
This article was originally published in Darling Magazine