APAC Labels Assess Brand Risk as AI Music Clones Spread
Unauthorized AI-generated music is appearing on verified artist profiles, exposing brands, labels, and streaming platforms in APAC to new risks.
Unauthorized AI-generated music is infiltrating verified artist profiles on Spotify and Apple Music, forcing brands, labels, and platform operators across Asia-Pacific to confront new reputation and intellectual property risks.
British folk musician Emily Portman and Australian artist Paul Bender both discovered fraudulent AI tracks uploaded under their names in recent months, highlighting security gaps that allow scammers to earn royalties by impersonating legitimate artists.
The incidents underscore a broader crisis in streaming platform verification. Bender found four fake tracks on his band's profile and launched a petition, signed by 24,000 people, calling for stronger platform security measures. Portman spent eight weeks appealing to platforms to remove an AI-generated album called Orca that fans mistook for her work.
Platform Response Falls Short of Prevention
Spotify removed 75 million spam tracks over the 12 months spanning 2024 and 2025 and introduced new policies that require artist consent for AI voice cloning and flag metadata mismatches. Yet these measures remain largely reactive rather than preventive.
The scale of the issue extends beyond individual impersonation cases. The Velvet Sundown, a band later confirmed as 100% AI-generated, grew from 850,000 to more than one million monthly Spotify listeners within weeks. Deezer's own analysis found that 70% of AI track streams were fraudulent, while its survey with Ipsos found that most listeners cannot distinguish AI-generated music from human-created work.
Dougie Brown of UK Music explained the economic incentive: "The reason that music was uploaded under her (Portman's) name was essentially to make sure that they could gain royalties from (it)." In the most extreme case, US scammer Michael Smith earned US$10 million through AI-generated tracks with names like "Zygotic Washstands," using bots to inflate stream counts and peaking at US$110,000 per month in royalties.

Regulatory Gap Leaves Asian Markets Exposed
Legal protections remain fragmented across markets. California has enacted specific AI impersonation laws, but the UK and most Asia-Pacific jurisdictions rely on outdated copyright frameworks that do not address AI-generated content. Philip Morris of the Musicians' Union warned that these limited protections leave artists vulnerable: "AI-generated music can impersonate real artists' work, leaving musicians exposed."
Though no Asia-specific impersonation cases have been publicly reported, the platforms run the same upload and verification systems across APAC markets, meaning the same weaknesses exist in Singapore, Hong Kong, Tokyo, and other regional hubs. For labels and brand partnerships dependent on artist authenticity, the eight-week takedown process Portman experienced represents significant reputation risk.

Operational Implications for APAC Stakeholders
Brand and label executives should track several operational risk indicators as this issue evolves: takedown response times, false-positive rates in fraud detection systems, and the strength of distributor know-your-customer checks. The Hi-Tide label case, in which an AI surf rock album called Surf Tides was fraudulently uploaded to indie bands' profiles on Qobuz and Spotify, shows how third-party distributors like DistroKid can become vectors for fraud.
Platform accountability will likely drive procurement decisions for AI detection and brand protection vendors in the coming quarters. While Deezer has deployed AI detection tools and Spotify has enhanced fraud monitoring with distributors, the gap between detection and prevention remains wide.