This article was originally published in The Interline’s AI Report 2025. To read other opinion pieces, exclusive editorials, and detailed profiles and interviews with key vendors, download the full AI Report 2025 completely free of charge and ungated.
Key Takeaways:
- AI-generated influencers are emerging as a viable tool for brands seeking speed, flexibility, and low-cost content production. While adoption appears to be growing, especially for visual campaigns, their rise is prompting new questions around connection, personality, and what audiences expect from influencer marketing in the first place.
- Early evidence suggests AI personas may be most effective in aesthetic-first categories, with limitations emerging where trust, texture, or personal experience play a bigger role. As expectations shift, authenticity is being reframed through tone, consistency, and emotional relevance rather than through human presence alone.
- Legal frameworks are starting to address AI-generated endorsements using existing advertising rules, though clarity remains limited. As regulation evolves, brands may need to reconsider how claims are made and interpreted, particularly when synthetic voices are used to simulate first-hand experience.
AI-generated models and influencers are no longer a looming hypothetical. They’re already here, and they’re already being put to work. What began as a backstage tool for prototyping garments has been pushed straight into the spotlight. Brands like H&M are now creating photorealistic digital clones for campaigns, raising awkward new questions about creative labour, and what kind of connection consumers actually expect from fashion media.
AI-generated influencers go further. They don’t just wear the clothes, they communicate. They can talk, narrate, share routines, and speak directly to their followers. The voices are synthetic, the scripts auto-generated, but the performance is familiar enough to pass, and getting better with every new model.
So what happens when fashion’s most overused word – authenticity – starts drifting away from anything remotely human? What kinds of products work in this new setup, and where does it all fall apart? To find out, I spoke with The Clueless, the creators behind some of the most popular AI-generated influencers; Venture Beyond, a DTC growth agency that incorporates AI-generated content into its campaigns; and William Lawrence, Counsel at Venable LLP, who practices at the intersection of antitrust, IP, and advertising law.
Why AI Influencers Work
The logic is simple. AI-generated influencers are fast, reliable, and cheap. For brands looking to scale content quickly, that’s more than enough. Unlike traditional influencer campaigns, which rely on real people and the unpredictability that comes with them, AI-generated influencers can deliver content tailored to different brands and audiences at a fraction of the price of a human influencer. Their narratives can be completely controlled, and the turnaround is almost instant. They never need a break and can be present in multiple places at the same time. In theory, there are almost no limits to how, when, and where they can appear, making them an endlessly adaptable marketing asset. And for the creators behind them, this means virtually limitless commercial opportunities.
As a result, the AI-generated influencer landscape is evolving rapidly. PR agencies now represent AI talent and negotiate brand deals on their behalf; dedicated awards celebrate AI talent – at a 2024 AI beauty pageant, Moroccan AI-generated influencer Kenza Layli was crowned Miss AI; and brands and marketing agencies are increasingly integrating these personas into their social media strategies.
Venture Beyond is one of the agencies leading this shift. I spoke with its founder, Shahbaz Khokhar, about how they are using AI-generated influencers, their commercial value, and how their clients typically respond to them.
According to Khokhar, how a brand responds to AI-generated content largely depends on who you are speaking to. Talk to the owner of the business – someone driven by profit – and they are a lot more open to it, he explained. “They’ll usually say: just run whatever you need to run to get sales”. On the other hand, when dealing with people employed by the brand, “the ethics become more important than the profit. What they stand for personally starts to shape their decision”. Khokhar also noted that concerns around job security can create resistance to AI-generated content, with some employees wary of technologies that could threaten their roles. However, he believes that “those walls are going to come crumbling down pretty quickly”, with the cost-saving benefits of using AI in advertising proving almost irresistible. “It’s a massive cost versus almost no cost”, he added. “If someone is planning a high-budget fashion shoot for £50,000, for example, we could generate the same kind of image for a few cents and create thousands of variations for them to choose from”.
There’s no creative director. No call times. No models, stylists, location scouts, or reshoots. Just a prompt and a few clicks – and enough content to flood your feed for weeks.
It’s not just imagery anymore. With new tools – many of them built directly into platforms like Gemini and ChatGPT – brands can generate, post, and optimise content without any humans in the loop. AI agents can now run the whole pipeline: create the visuals, publish them, track performance, and tweak the next batch based on the results.
It’s worth considering, though, that this kind of scale has a cost. In the endless optimisation cycle, something goes missing. The offbeat, the unpolished, the personality. The very things that once made influencer marketing feel like it had a heartbeat.
Where AI Falls Short
In fashion and beauty, not everything is sold on looks alone. Some products are bought for how they function, others for how they feel, and that changes what consumers expect from the person (or persona) doing the selling.
Take fashion accessories like jewellery, belts, or handbags. These are visual purchases. If an AI-generated influencer models a pair of earrings and they catch someone’s eye, that might be all it takes. “When it comes to fashion, you’re just trying to present something in an aesthetic way,” says Khokhar. “You’re not making any kind of claim where the customer’s going to be upset because they didn’t have a real person backing it up.”
But move into footwear, and things get trickier. Comfort, fit, and feel start to play a role, and those are things an AI-generated influencer can’t experience or describe. Sure, the shoes might still be bought for their style, but if a follower wants insight into stretch, support, or how they hold up after a day of walking, an AI-generated influencer has nothing genuine to offer.
Beauty and skincare raise the bar even further. A face moisturiser, for example, can be described – its ingredients listed, its claims repeated – but a script can’t say how it really feels on the skin, whether it absorbs well, or if it caused irritation. In categories where trust and performance go hand in hand, like skincare, cosmetics, or supplements, first-hand experience still counts. And for now, that’s something AI-generated influencers simply can’t deliver.
Then again, how real is any of this to begin with? Influencer content is already a hall of mirrors. Most followers know the posts are paid for, filtered, and approved in advance. So if most of it’s a curated illusion, does it really matter whether the person behind the post is real at all?
That’s not to say human influencers have lost all credibility – far from it. But the trust they trade on is more fragile than it once was. As someone who follows plenty of them, I’ve learned to approach endorsements with caution. I don’t assume they’ve used the product, or liked it, or even written the caption themselves.
So, what does that mean for influencer marketing? In a world where we’re already questioning the credibility of human influencers – and where our feeds are increasingly filled with AI-generated, and soon perhaps fully automated, content – are we simply looping back to the era before influencer marketing began?
A time when advertising was just that: ads on TV or in magazines, fronted by people paid to demonstrate a product with no pretence of personal experience. Only now, those ads live on social media and go by a different name.
Rethinking Authenticity
Authenticity has always been the currency of trust. In influencer marketing, it’s long been the reason people listen, engage, and ultimately, buy. But what happens to authenticity when the influencer isn’t human at all? I wrote last year about the idea of virtual influencers being ‘authentically fake’, how being upfront about their artificial nature can, oddly enough, create a kind of trust. A year later, the same question still lingers: in a world that’s rapidly changing, is it time to rethink what we really mean by authenticity?
Rubén Cruz and Diana Núñez, the CEOs of The Clueless and creators of Aitana Lopez – Spain’s first AI-generated influencer – believe authenticity doesn’t depend on biology. Instead, it depends on “the intention behind the character and how it interacts with the world around it”. They explain that authenticity in the context of AI “is about the character’s ability to communicate in a consistent, meaningful, and emotionally relevant way”. On Aitana specifically, they note: “we don’t just publish visual content – we’ve also developed her backstory, her passions (like fitness and gaming), her relationships, and her tone of voice”. This, Cruz and Núñez explain, is exactly what allows many people to relate to her, follow her, and feel like they’re connecting with someone who has something to say, even if she was created by AI.
This posthuman view of authenticity turns it into a design challenge rather than a human one. If an AI-generated influencer is consistent, meaningful, emotionally resonant – does it matter that they were manufactured? Speaking personally, I believe that meanings are fluid and constantly evolving, and so are the ways we interact and communicate with each other.
We once wrote letters. Now we swipe through curated lives and communicate through digital “performance”. So yes, part of me flinches at the idea of authenticity being redefined, but another part accepts that change never asks for permission.
The redefinition of authenticity is already happening. As Cruz and Núñez point out, people aren’t just accepting AI-generated influencers, they’re following them, engaging with them, and forming real emotional connections.
When people learn that Aitana is entirely AI-generated, the most common reaction is surprise. Many don’t notice at first. But once they do, curiosity tends to take over. “They’re often fascinated by the level of realism and the narrative world that surrounds her,” they said.
In their experience, most people move past the initial shock and begin interacting with her just like they would with any other influencer. “For most, it’s not about whether she’s real or not, but what she stands for, how she communicates, and the kind of content she shares.”
The proof, say Cruz and Núñez, is in the numbers. According to them, Aitana now earns over €3,000 a month, and in some cases, more than €10,000, thanks to growing demand from brands both in Spain, where The Clueless is based, and internationally. For many of those brands, especially those looking to reach younger audiences in new ways, AI-generated influencers like Aitana aren’t just a novelty, they’re delivering results.
Beyond the obvious cost and convenience, Cruz and Núñez point to media impact. For instance, in one campaign for a haircare brand, Aitana’s content doubled the average engagement rate on Instagram. In another, her post became one of the most-viewed on a sustainable fashion brand’s profile.
The Legal Grey Zone
As brands increasingly experiment with AI-generated influencers, the regulatory landscape around them is becoming harder to ignore. The technology is evolving fast, but the rules that govern advertising haven’t kept up, leaving brands and platforms to operate in grey areas.
Take Meta. The platform now requires users to disclose when content includes photorealistic video or realistic-sounding audio generated or altered by AI, with potential penalties for non-compliance. But the actual implementation feels vague. When an AI label appears on Instagram, the info popup reads: “[username] added an AI label to this content. AI may have been used for a wide range of purposes, from photo retouching to generating entirely new content.”
It’s a broad brush, one that offers little clarity. There’s no distinction between a lightly edited image and a fully AI-generated avatar, leaving users unsure how to interpret what they’re seeing. And while Meta has said its approach will “evolve as people’s expectations and the technology evolve,” the current system does little to offer meaningful transparency.
While there is undoubtedly an urgent need for regulations to evolve with the fast pace of technology, regulators are drawing on existing frameworks to address the current landscape surrounding AI in advertising. In the U.S., the Federal Trade Commission (FTC) is already applying long-standing advertising laws to AI-generated content. In September 2024, the agency launched Operation AI Comply, bringing five enforcement actions against companies for allegedly deceptive behaviour rooted in AI technology. These cases were all based on existing truth-in-advertising rules that the FTC has been applying for decades. As William Lawrence, Counsel at Venable LLP, explains, quoting FTC Chair Lina Khan: “there is no AI exemption from the laws on the books”. The rules requiring any representation to consumers to be truthful and accurate have not changed.
“While specific regulations may lag behind technologies, regulators are much quicker in adapting to using old laws to attack what they perceive as new bad behaviour”, he noted. For example, when asked what happens if an AI-generated influencer is scripted to give a product recommendation, Lawrence pointed to existing FTC guidelines: “The FTC has pretty detailed guidelines on what makes an endorsement misleading from the perspective of the FTC. That guidance is clear that when an advertisement represents that the endorser has used the product, the endorser must have been a bona fide user at the time the endorsement was given”, noting that “AI or not, brands should be very careful to be truthful and not misleading”.
All of this underscores just how sensitive the space around AI-generated influencers really is, especially when it comes to product claims. If they can’t physically use or experience a product, can they ever count as a bona fide user under current regulations? That raises a broader question: are AI-generated influencers effectively limited to promoting surface-level aesthetics, while anything tied to performance, credibility, or wellbeing remains out of reach?
Where Does That Leave Us?
As brands continue to explore the potential of AI-generated influencers and content, fashion is reaching a critical inflection point. What once felt like a speculative experiment is now a commercial reality, where long-held ideas about authenticity are being questioned and redefined.
For some brands, the cost-efficiency of AI is too compelling to ignore. For others, the ethical and legal uncertainties are enough to slow adoption. Those selling pure aesthetics may find smoother ground, while brands dealing in credibility or health claims face far more complex terrain.
And as content becomes increasingly AI-generated and automated, the core question becomes harder to avoid: who are consumers actually listening to – a real person, a persona, or just a prompt?