
With access to social media now regulated for Australian children under 16, there are still big challenges ahead, particularly because of US hostility to similar legislation.
From an international cyber safety perspective, we will be living in “interesting times” over the next few years.
Over the past decade, there has been a significant rise in pressure from international organisations for the regulation of social media and the ethical design of those environments.
Australia’s very own eSafety Commissioner and the European Commission have become world leaders in pressuring Big Tech to adopt safety-by-design principles for their networks.
That pressure has included billions of dollars’ worth of fines over the past decade for breaches of international law or the non-removal of harmful content.
Now, from 10 December 2025, children under 16 in Australia can’t continue to operate a current social media account or make a new account on many social media networks, because of the Online Safety Amendment (Social Media Minimum Age) Act 2024 coming into force.
The Act was legislated to help protect young people from aspects of the platforms that try to keep them scrolling for too long, and from content that can affect their safety, health and wellbeing.
It’s important to view this law not as a ban, but as a delay to help children under 16 build skills and awareness about the risks of technology.
The legislation is aimed at any service or environment that meets the definition of a “social media platform”. The platforms currently targeted by the law are Facebook, Instagram, Snapchat, TikTok, X, YouTube, Reddit, Kick and Twitch.
The networks currently included are likely to grow or change in the coming months as Australia’s eSafety Commissioner, Julie Inman Grant, makes decisions on which platforms might fall under that definition.

At this stage, the excluded environments are messaging apps (WhatsApp, Messenger, Signal, Telegram, etc), email services, voice/video calling apps (Zoom, Teams, FaceTime, etc), online games (Fortnite, Roblox, Minecraft, etc), and professional networking (LinkedIn).
The key component of the legislation is the requirement for declared social media networks to take “reasonable steps” to ensure under-16s in Australia are removed from their platform within a reasonable amount of time once an underage account is identified, and to prevent a child from creating a new account before they turn 16.
The eSafety Commissioner has published regulatory guidance describing examples of “reasonable steps” platforms can take to identify underage accounts, such as age verification or estimation tools, parental-control options, and better detection options.
If the networks fail to take these actions, they can be fined up to $49.5 million.
The legislation will have to rely on cooperation from the platforms, whose response will be based on their own interpretation of it, especially the “reasonable steps” requirement. Excuse the cynicism, but I’m not very confident about what they will deem to be “reasonable”.
I’m strongly of the opinion they will argue that trying to actively remove all Australian children under 16 from their environments is not reasonable.
I believe they will introduce detection methods that pick up only the most basic under-16 identifiers. They might also make a few alterations to code for Australian accounts, or design a specific algorithm that flags signals such as a date of birth added to an account or recently changed; a daily location at an Australian school; the ages of most of a user’s “friends”; and the nature of the websites they’re visiting.
Based on these very simple steps, the networks might well remove 100,000 Aussie kids. They could deem that to be “reasonable” under the law, despite Australia having about two million children aged 10 to 15!
I expect the networks will also rely heavily on parents, schools and the eSafety Commissioner reporting underage accounts and essentially doing the work for them.
While Australia is tackling this head-on, the government that needs to act the most, in the country where the vast majority of the most popular social media apps are based, sadly refuses to do so.
In fact, instead of introducing legislation to address online harm, the US Congress in the 1990s introduced laws to protect Big Tech from civil and criminal litigation.
Section 230 of the USβs Communications Decency Act has allowed online environments to flourish without regulation for years, and Big Tech giants have taken full advantage of the protection that law provides.
The stubborn nature of Big Tech has remained a constant hurdle for legislators and regulators such as our own eSafety Commissioner.
Even politicians within the US who support the global drive for social media regulation are facing staunch opposition, often being beaten down by the “free speech” argument.
This defiance by Big Tech is set to ensure that the collective army of cyber-safety advocates and legislators outside of the US pushing for much-needed increases in online safety will continue to encounter a massive wall of resistance.
Who can forget Meta boss Mark Zuckerberg’s announcement in January that he would be scaling back on moderation and content filtering?
His pushback against those who seek to address the lack of safety and regulation on social media networks was condemned internationally.
But Zuckerberg has been here many times before. He continues to deny the clear and present danger of social media.
As he told a US Senate Hearing into Child Online Safety in 2024: “I do not support the conclusion that social media causes changes in adolescent mental health.”
And he said that with more than 30 parents sitting behind him who had lost children to suicide on Instagram and Facebook.
His statement would be utterly destroyed by those poor parents, and by the families of the 11 children in my Australian schools over my 15 years of working in this space, who were lost to suicide as a result of harms experienced from social media.
Echoing her boss, Meta’s Vice-President and Global Head of Safety, Antigone Davis, told our Senate’s Joint Select Committee on Social Media and Australian Society in June last year: “I don’t think social media (Facebook & Instagram) does harm to our children.”
Recently there was a great deal of hype around Instagram, owned by Meta, introducing a “PG-13” setting to their network. A number of Instagram influencers (most on Meta’s payroll) touted this as a game changer.
For his part, Instagram head Adam Mosseri says how committed he (and Meta) is to protecting teens on the platform, and how this change will help parents. I don’t trust that apparent commitment, nor any such statement coming out of Meta.
To support why I say this, here’s what Mosseri told a US Senate hearing in 2021: “I believe at Instagram we try and respond to all reports; and if we fail to do so, that is a mistake and something we should correct.” But an in-depth online study I conducted last year showed that less than 2% of teen reports of harm were acted on by Instagram in the first instance, and only 22% of reports were responded to after multiple attempts at removal of harmful material. So, the supposed “mistake” has not been corrected, despite Mosseri’s commitment four years ago to fix it.
Just weeks before Instagram’s PG-13 hype, at yet another congressional hearing in the US about the unethical practices of Meta, two former employees came forward to rip back the curtain on exactly what Meta thinks of our children.
On 9 September this year, whistleblowers Cayce Savage and Jason Sattizahn provided testimony to a US Senate Judiciary Committee hearing on Privacy, Technology and the Law about how Meta buried child safety research.
Over nearly two hours of evidence, the pair went into explicit detail regarding their experiences.
Most concerning was Meta’s alleged handling of research into the harms faced by children using Meta’s VR headsets, especially within Horizon Worlds.
“Meta are fully aware children are being harmed in VR and all Meta products.”
– Cayce Savage
“Meta is aggressively ambivalent to people!”
– Jason Sattizahn
The statement by Ms Savage, who worked at Meta from 2019 to 2023, directly contradicts what Antigone Davis told the Australian Parliament last year.
Mr Sattizahn, who has a PhD in Integrative Neuroscience, was at Meta from 2018 to 2024 and worked exclusively in user research and safety, tasked with understanding users and their needs. He used this research to try to make Meta products safer.
He testified that Meta deliberately deleted evidence of children being exposed to harm on their products, and failed to report such harm to investigative or judicial bodies.
He and others in research were told not to investigate harms against children, and all research was monitored by a “legal surveillance” team in the wake of the testimony of Facebook whistleblower Frances Haugen in 2021.
For the record, Meta has described these recent claims by Mr Sattizahn and Ms Savage as “nonsense”, but their sworn testimony perfectly reflects what other online safety advocates and I have been saying for years.