Australia Sets AI Age-Gating Deadline; 60% of Platforms Unprepared
Australia's internet regulator has issued formal compliance warnings to AI platforms ahead of a March 9, 2026 deadline, threatening fines of up to A$49.5 million (approximately SG$44.4 million, or US$33 million) for services that fail to restrict minors from harmful content.
60% of AI Platforms Unprepared as Deadline Arrives
A Reuters review of the 50 most popular text-based AI products found that 30 platforms, or 60%, had taken no visible steps toward compliance just days before the deadline. Only nine platforms had rolled out or announced age verification systems. Eleven others implemented blanket content filters or planned to block all Australian users entirely.

The eSafety Commissioner's office stated it would use "the full range of our powers" for non-compliance, explicitly naming "gatekeeper services such as search engines and app stores that provide key points of access" as enforcement targets. That language puts Apple's App Store and Google Play directly in the regulator's crosshairs.
Companion chatbots showed the worst compliance profile. Three-quarters had no functioning filters or age verification, and one-sixth lacked even a published email address for reporting breaches, which is itself a separate legal requirement under Australian online safety law.
High-Profile Cases Illustrate Compliance Divide
Among platforms that acted, OpenAI's ChatGPT, Anthropic's Claude, and Character.AI represent the clearest compliance benchmarks. ChatGPT and Claude began rolling out age assurance systems ahead of the deadline. Character.AI restricted open-ended chat for minors.
At the other end, Elon Musk's Grok had no age assurance measures or content filters in place as of the compliance review, despite being under investigation in multiple jurisdictions for suspected failure to prevent synthetic sexualized imagery of children.
The eSafety Commissioner's office cited children as young as 10 using AI tools up to six hours daily, and warned that "AI companies are leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage."
Lisa Given, Director of RMIT University's Centre for Human-AI Information Environments, noted that "most of these tools are being designed without a view to potential harms and the need for those kinds of safety controls."
Australia's Regulatory Architecture Extends Beyond Age Gating
The March 9 deadline is one part of a broader regulatory build-up. Australia's Online Safety Amendment Act came into force on December 10, 2025, establishing a tiered age verification framework now being extended to AI services.
The Australian Communications and Media Authority's Commercial Radio Code of Practice 2026, effective July 1, 2026, will require broadcasters to disclose AI-generated synthetic voices in scheduled programs and news bulletins. Separately, Privacy Act amendments effective late 2026 will require disclosure of automated decision-making in privacy policies.
Jennifer Duxbury, Head of Policy at Digital Industry Group Inc., stated plainly: "Any service operating in Australia is responsible for understanding its legal obligations."
What This Means for AI Vendor Decisions Across Asia-Pacific
Australia's social media ban for under-16s, legislated in December 2024, prompted world leaders to announce similar plans, establishing a documented pattern of policy diffusion. Australia's National AI Strategy explicitly targets international alignment, suggesting the current framework is designed with regional export in mind.
For marketing and technology leaders managing AI tools across Asia-Pacific, the Reuters finding that 60% of platforms showed no compliance steps is a vendor risk signal. Brands using non-compliant AI tools in customer-facing applications face potential reputational exposure by association.
The compliance divide between ChatGPT and Claude on one side, and companion chatbots on the other, now provides a practical benchmark for AI procurement decisions in markets where similar requirements are anticipated.