Why CEO Avatars Are Creating Trust Crises (Not Solving Them)
AI CEO avatars promise efficiency but erode employee trust. Research shows a 43-point collapse when leaders use synthetic replacements.
In April 2026, Kaltura CEO Ron Yekutiel deployed his own AI digital twin to handle executive communications on his behalf. The avatar speaks 30+ languages, can answer questions in real time, and runs on the exact same platform Kaltura sells to enterprise clients. His reason? Too many people to reach, not enough hours in the day.
It sounds like a compelling solution to an old problem. But there is a more uncomfortable question sitting just beneath the surface: when a CEO deploys a synthetic version of themselves, are they solving a communications problem, or creating a very expensive one?
The Self-Referential Marketing Problem
Yekutiel is not the first tech CEO to do this. Klarna's CEO used an AI avatar on a Q1 2025 earnings call. Zoom's Eric Yuan did the same shortly after, using the company's own Clips tool. In every case, the product being demonstrated was the CEO's own platform.
This is clever marketing. It is also a conflict of interest that rarely gets named. When a vendor uses its own product as a proof of concept, the demonstration is indistinguishable from the pitch. Kaltura has not disclosed independent ROI data from Yekutiel's deployment. Without it, the announcement functions more as a product demo than a business case. The distinction matters.
It also sets a precedent that may be difficult to walk back. HeyBoss founder Xiaoyin Qu appointed an AI avatar as the official CEO of her startup during a US$3.5 million seed round backed by the OpenAI Startup Fund. The synthetic executive concept has moved from a communications novelty into a governance question at the funding stage.
What the Research Actually Says About Trust
The business case for synthetic executives rests on a simple idea: expand reach without expanding the calendar. But the research on how employees and stakeholders receive AI-assisted communications tells a different story.

A study of 1,100 professionals found that only 40-52% of employees viewed supervisors as sincere when those supervisors used high levels of AI in their messages. For low-assistance messages, the sincerity rating was 83%. That is a 31-43 percentage point trust collapse tied directly to the perception of AI substitution.
The perception problem is the real problem. HBR research found that employees rated messages as less helpful when they believed AI was involved, even when the message was not AI-generated at all. The damage happens at the belief level, not the content level. A CEO avatar does not need to say anything wrong to erode trust. It just needs to exist.
For APAC markets, the stakes are higher. Asian corporate cultures often assign specific relational weight to direct leader communication. A CEO appearing in person at a town hall or investor briefing is not just a delivery mechanism; it is a signal of organizational seriousness. Replacing that presence with a synthetic proxy does not replicate the signal. It may cancel it.
The Governance Gap Nobody Is Talking About
The deeper issue is not authenticity. It is accountability.
Regulatory frameworks for AI-generated executive communications vary dramatically across APAC. Singapore, Hong Kong, Australia, and Japan are at different stages of AI governance legislation. A synthetic executive making strategic disclosures in one market may trigger securities law concerns in another. The patchwork creates real legal exposure that most deployment announcements do not address.
The deepfake risk compounds this. In January 2026, the Bombay Stock Exchange issued an urgent warning after deepfake videos of its CEO spread online promoting fraudulent stock tips. The mechanism for exploitation is straightforward: when companies train their stakeholders to accept synthetic executives as normal, they lower the verification threshold that protects against impersonation. Only 32% of corporate executives say they are prepared to handle a deepfake incident, even as executive avatar adoption accelerates.
Qantas learned a related lesson in 2025, when an AI-assisted apology email to 5.7 million customers after a data breach was later scrutinized for AI involvement. The backlash was not about the content. It was about whether the apology was genuine. In a trust-critical moment, the synthetic origin of the message became the story.
When Scale Becomes the Problem, Not the Solution
There is a legitimate use case for executive AI avatars: repeatable, low-stakes, high-volume communications where the relational cost of automation is minimal. Product walkthroughs, FAQ responses, training content. These are the contexts where scale genuinely helps and where substitution carries little trust risk.
But earnings calls, employee announcements, crisis responses, and customer apologies are not that. These are the moments when the person behind the message matters. The decision to automate them is not a bandwidth fix. It is a choice about how much of the leader's own time and attention the message visibly cost, and audiences notice.
"Trust starts eroding when leaders treat relational moments as if they were transactional," one internal communications platform noted in its guidance on AI avatars. "Over time, employees will start to question whether leadership cares."
Only 10% of organizations report significant ROI from agentic AI, with most taking two to four years to see satisfactory returns. For an approach that undermines executive trust in the near term while delivering uncertain financial returns over a multi-year horizon, the business case for synthetic CEO deployment remains more aspirational than proven. That may not stop the trend. But it should inform how executives in the region approach it.
