Efficiency vs. Ethics in Consumer-Facing AI Campaigns

Key Takeaways:

  • While AI offers significant efficiency gains for fashion brands, its application in consumer-facing campaigns is met with unpredictable public scrutiny. The core of the issue is not visual quality, but a perceived lack of human authorship. For brands, this emotional backlash signals that the audience’s trust and acceptance of AI-generated content is conditional and highly sensitive to context, particularly in areas of brand identity and storytelling.
  • The business case for AI is undeniable. Brands are pulling the efficiency lever, with Zalando using AI for 70% of its editorial images to cut production time from six weeks to three days and reduce costs by 90%. Other studies show AI-generated visuals can increase click-through rates by up to 50%.
  • Audiences tend to accept AI in functional product shots, where clarity and accuracy matter most. The backlash grows when AI appears in editorial or campaign imagery, because those images carry fashion’s creative identity. What once felt novel in 2024 is now met with suspicion in 2025, as both saturation and anxiety about creative livelihoods intensify.


AI fashion imagery keeps driving debate, with consumer reactions remaining complicated

It’s the start of another week in which generative AI as an efficiency tool publicly collides with fashion consumers’ attitudes towards automation. Ahead of the weekend, US brand J.Crew became the latest to experiment with consumer-facing AI imagery, and in doing so offered up another case study of how audiences are responding to these tools: curiosity from some, criticism from others, yet more questions about how much automation audiences will welcome in spaces that are meant to define identity, and a new set of benchmarks for where gen AI will be most scrutinised and how brands should disclose its use.

What makes these flare-ups interesting has less to do with the technology itself and more to do with the way people interpret the intent behind its use. And market traction isn’t a useful yardstick either: J.Crew’s Vans collaboration is close to selling out, proving that commercial uptake is still there. But online reaction – some of it, it must be said, from dedicated AI sleuths who are committed to identifying these instances and spotlighting them – focused on whether the images felt “real” or contained identifiable enough flaws to mark them as ‘worse’ than photography.


Overall, the images in this particular case were largely successful by most criteria. They were on-brand, instantly recognisable as part of the J.Crew heritage aesthetic, and avoided falling into the uncanny, glossy, mixed-media look that plagued a lot of early AI ‘photography’.

So what, exactly, was the problem? A line from Blackbird Spyplane (the sleuth blog that broke the story) summed it up, suggesting the campaign looked like the brand had “counterfeit their own vibes.” Whether fair or not, that framing captures the cultural unease that follows AI imagery when it’s used in public-facing campaigns, even if, as was probably the case here, the usual ethical objection that AI is trained on the aggregate output of other people’s work didn’t apply.

That phrasing is striking because it frames AI imagery in cultural terms, and it raises the broader question of whether a brand can even be said to counterfeit itself simply by automating the creation of new content from its own past. Is the sole difference whether or not a human hand is in the mix?

We wrote about this dynamic in Authorship in the Age of Automation: about how authorship shifts when a brand is both the creator and the dataset. And while the provenance of the models underpinning these images isn’t clear, J.Crew doesn’t appear to have been stealing from anyone else, but rather sampling and remixing its own past.

A similar question surfaced in our AI Report article Designed with Data, where we looked at how data-driven systems might influence creativity. The takeaway was that there’s a difference between using archives as inspiration and using archives as raw input for automated recombination. Both draw on history, but only one feels like authorship. We even suggested that brands’ campaigns could begin to echo their own earlier campaigns, and that once AI-generated work is fed back into the dataset, those echoes only intensify.

J.Crew, of course, is far from the only brand to hit this nerve. Mango drew similar debate when AI-generated models appeared in its ecommerce listings. Guess found itself under the same gaze after AI-created models were used in Vogue, despite the presence of a disclosure footnote. Numerous other cases have surfaced over the last 12 months, each sparking a familiar wave of questions. Why does the cultural discomfort seem most pointed when AI is used in campaigns and editorials, rather than in more functional contexts such as ecommerce photography?

Before answering that, though, it’s important to remember that there is an efficiency lever being pulled here; brands and retailers are not deploying AI just for the sake of experimentation. Zalando, for instance, claims to have cut campaign production times from six weeks to three days, with cost savings approaching 90 percent, and around 70 percent of its editorial images are now made using AI. What’s more, Alibaba’s generative design experiments found that AI-generated items lifted click-throughs by over 13 percent compared to human-designed equivalents.


And a large-scale study involving more than a quarter of a million human evaluations found that, in controlled tests, AI-generated marketing images were consistently rated higher for quality and realism than professional stock photography. In the same study, live banner ad experiments showed that the best AI-generated images delivered up to 50 percent higher click-through rates than their human-made counterparts. Whether people realised these were generated images is a different question, of course.

These figures represent significant financial impact, but this isn’t simply about saving money. AI also fills in gaps that used to cost time: background swaps that once needed reshoots; product shots that had to be repeated across multiple sizes, colours, and body types. The promise of AI is that it shortens cycles, simplifies iteration, and lets brands adapt creatively at the speed of commerce.

The friction, it would seem, comes when AI is applied in the “wrong places”, though exactly what constitutes “wrong” is still up for debate, and is one of those hazy cultural/technological frontiers that are unique to fashion. In functional product photography (in many ways the digital equivalent of catalogue shots) there seems to be much less resistance. Shoppers want clarity and accuracy so they can make informed buying decisions. If the hemline is visible and the fabric and colour look true enough to life, the job is done – at least insofar as a conversion has taken place. Add in the ability to show a dress across different bodies (and in particular different body types) or settings without booking a new shoot, and the value is obvious. But in campaigns and editorials, the story changes, and it’s here that AI application becomes “wrong” in the consumer’s eyes. The image may look convincing, but the thread of authorship, identity, and brand-to-consumer value exchange and alignment is missing, and in that absence trust begins to fray.

That’s why Mango faced accusations of deception, Guess drew headlines despite disclosure, and J.Crew’s Vans imagery stirred debate. And it’s also why, just twelve months ago, Etro, Prada, and Moncler could run AI campaigns with little pushback. Back then, the technology still felt novel, something unusual enough to dismiss as a curiosity. Today, the mood has shifted.

What’s changed is twofold: volume and anxiety. In 2024, an AI campaign was a novelty, and one that was easily identifiable; using AI was easy to frame as a gimmick or a pilot. But in 2025, the experiments have multiplied, the tech has matured, and the results are improving with every new model. With that in mind, audiences are reacting both to saturation (according to some studies, 70% of social media images are now made with or assisted by AI) and to a growing fear that AI is becoming less visible as quality improves, and that it is swallowing parts of the labour chain. The creatives both in front of and behind the lens are on high alert, because even if AI can’t fully replace them yet, it’s starting to resemble their output enough to make their roles look ever more precarious.

That’s one likely reason the strongest pushback comes when AI shows up in campaigns and editorials. Catalogue shots and ecommerce photography are easier to accept: audiences just want to see the product, and if AI delivers that convincingly, it has accomplished the task it was assigned. But campaigns carry authorship and aspiration. They define who the brand is. When AI appears there, it threatens the livelihoods and identities tied to fashion’s creative core.

Still, the technology keeps improving. The glitches and physical defects that make these images straightforward to spot today will fade, people will begin to care less, and what’s left may be harder to tell apart from the photography we’re used to. The question is whether the unease fades with it. Right now, the positives are too numerous for experimentation to slow down. What’s less certain is how audiences will respond as the surface improves. Do the doubts fade along with the glitches, or does the desire for a human presence in the image linger even when everything looks flawless?
