Key Takeaways:

  • As fashion and beauty brands embrace digital transformation, they are taking on the data governance risks of a software company. A recent breach at the dating app Tea, which exposed sensitive user photos and IDs from a legacy system, underscores how a failure to secure user data, especially when using AI-assisted services like virtual try-on, can be catastrophic to a brand’s reputation and its contract with its consumers.
  • A recent incident where an AI coding assistant on Replit deleted a production database and then ‘lied’ about it highlights the risks of using AI in critical software development. This finding is reinforced by a Veracode study, which found that 45% of code produced by AI contains security vulnerabilities, demonstrating that these models prioritise plausible output over security.
  • Third-party vendor vulnerabilities remain a critical weakness, accounting for half of all breaches last year. As fashion’s technology stacks deepen and rely more on external platforms and AI agents, the industry inherits these vulnerabilities, increasing the risk of data exposure even if a brand’s internal systems are secure.

If fashion and beauty brands now want to operate like software businesses (and the widening scope of digital transformation and digital methods of engagement with consumers suggests that they do), then those same companies are, sooner rather than later, going to come up against the difficult frontiers that digital platform owners have to navigate.

This week, two stories from outside our industry have highlighted just how quickly the contract between brand and user changes when personally identifiable information enters the picture, and just how much the brand’s responsibility in that relationship could change as fashion and beauty brands push into uncharted – for them – digital space.

In one, a dating app called Tea, built around the idea of offering women a safer space for connection, suffered a major breach. The scope of the breach (covering photos, messages, and other potentially identifiable data) was devastating, since it both effectively undermined the stated purpose of the service, and also served as a lightning rod for an active cultural debate about personal information security. 

The cause, according to early reports – and some shifty responses from the company itself – wasn’t a sophisticated cyber attack, but a legacy system that had never been properly rebuilt after launch. (A public Amazon S3 bucket, housing user photos, is the kind of schoolboy security error that should rarely, if ever, make it into a consumer-facing application.)
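For readers on the engineering side, the baseline control for that class of mistake is neither new nor complicated. As a rough sketch only – assuming an AWS environment, the boto3 SDK, and a bucket name and object key that are entirely hypothetical – it amounts to blocking public access at the bucket level and serving individual photos via short-lived pre-signed URLs:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-user-photos"  # hypothetical bucket name, for illustration only

# Block every form of public access at the bucket level.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Serve a single photo through a short-lived pre-signed URL instead of a public read.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": BUCKET, "Key": "photos/user-123.jpg"},  # hypothetical key
    ExpiresIn=300,  # link expires after five minutes
)
print(url)
```

None of this is exotic; it is the kind of configuration that a routine security review would be expected to catch.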

Now, fashion and beauty brands have not historically needed to collect photos of their users in order to deliver their service, unlike dating apps and similar platforms that rely on authenticating users through images. But as both industries lean further into virtual try-on (AI-assisted and otherwise), the nature and sensitivity of the identifiable information they hold about their customers are set to change.

And while not all VTO services involve generative AI, AI itself is already being used extensively in the software engineering side of beauty and fashion, and this week saw another cautionary tale emerge in the assisted-coding space. An AI coding assistant running on Replit’s platform deleted a live production database, wiping more than a thousand company records during what was supposed to be a controlled test.

The agent had been explicitly told not to make changes, but it did anyway. When questioned, it fabricated explanations, claimed tests had passed when they hadn’t, and eventually admitted it had “panicked” and made a “catastrophic error in judgment.” If that sounds strange to read, believe us, it felt even stranger to write, but this is where we are: working with tools that not only act, but apparently lie when they feel the need to.

For the software engineers in the audience, there’s likely to be no small amount of schadenfreude in reading this story. Not just because it represents some pretty serious teething problems in the rush to roll out AI in DevOps, but because doing anything experimental in production is an easily avoided mistake in the first place.
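The “not in production” part is also the easiest to enforce in code. As a minimal sketch – the environment variable, guard function, and table name below are our own invention, not anything Replit uses – the idea is simply that any destructive step an agent (or a human) can trigger has to pass an environment check first:

```python
import os


class ProductionGuardError(RuntimeError):
    """Raised when a destructive operation is pointed at production."""


def require_non_production(operation: str) -> None:
    # APP_ENV is a hypothetical convention; default to the safest assumption.
    env = os.getenv("APP_ENV", "production").lower()
    if env == "production":
        raise ProductionGuardError(
            f"Refusing to run '{operation}' against production. "
            "Point the agent at a staging or disposable copy of the data instead."
        )


def reset_test_fixtures(connection) -> None:
    # Every destructive step routes through the guard before touching the database.
    require_non_production("reset_test_fixtures")
    cursor = connection.cursor()
    cursor.execute("DELETE FROM companies_fixture")  # hypothetical fixture table
    connection.commit()
```

Pair a guard like this with least-privilege database credentials for anything automated, and an agent that “panics” has far less it can actually destroy.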

In this case, the data was later restored from backups, and Replit’s CEO issued a public apology, calling the incident unacceptable and outlining immediate structural fixes. But a recent study from Veracode suggests that this is far from an isolated incident. Looking at 80 curated coding tasks across more than 100 generative AI models, researchers found that 45% of the code produced contained known security vulnerabilities, even when a safer method was available. The report suggests that this isn’t a question of model size or technical sophistication. It may, as is often the case with AI, simply come down to training data, which dutifully reflects the flawed code found in real-world examples. After all, these tools are designed to produce plausible output, not secure output.
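To make “plausible but not secure” concrete, one classic example of the kind of flaw such studies flag is a query built by string interpolation when a parameterised version was available all along. The function and table names below are illustrative, not taken from the Veracode report:

```python
import sqlite3


def find_user_insecure(conn: sqlite3.Connection, email: str):
    # Plausible-looking output: the query is assembled by string interpolation,
    # which allows SQL injection if 'email' is attacker-controlled.
    query = f"SELECT id, email FROM users WHERE email = '{email}'"
    return conn.execute(query).fetchone()


def find_user_safe(conn: sqlite3.Connection, email: str):
    # The safer method: a parameterised query, so user input is never treated as SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE email = ?", (email,)
    ).fetchone()
```

Both functions return the same result for well-behaved input, which is exactly why the insecure version looks acceptable to a model trained to produce something that works.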

As Vijay Dilwale, a principal consultant at Black Duck, puts it: “LLMs can write code that works, but not necessarily code that’s safe. They don’t understand threat models or risk context the way security engineers do. As GenAI becomes a bigger part of how software gets built, we have to remember: security doesn’t come for free. It has to be part of the process.”

Why does this matter to beauty and fashion? Because, whether we like to admit it or not, just as the industries are moving into spaces that require them to hold, secure, and govern a much wider corpus of data about their users, they are also beginning (like all industries) to deepen their technology ecosystems – which means outsourcing more and more core processes to systems provided by third-party vendors.

According to recent analysis, half of all breaches last year could be traced to a third-party vulnerability. The breach at a storied UK brand earlier this year, which reportedly exposed sensitive customer data, is thought to have originated with a third-party supplier that had access to sensitive systems and data. So while all the key fashion and beauty technology companies that The Interline is aware of have invested heavily in their own information security, and have hardened against attack to the level required of enterprise software vendors, there is no accounting for the risk vectors that can be exposed by third parties – especially when those third parties either use generative AI to deliver their services, or are AI model providers or agents themselves.

“What about companies that don’t want to experiment with virtual try-on?” you might ask. Well, those brands are likely to avoid an initial wave of potential breaches, but in the long run it may become desirable (or even inevitable) to age-gate content, experiences, or certain purchases. And age-gating itself, not to mention the third-party platforms that support it, is also at the root of another story this week.

Over the last few days, the UK government has come under pressure over its choice of age-gating providers, after questions were raised about how some of these services were storing and managing sensitive identity data. 

Many of the same platforms now being deployed in the UK to age-check users of web services have already seen testing in other areas (legal contracts in property purchasing, vehicle leasing and the like have required users to take selfies and scan driving licences for a while), but it’s only when a wider roll-out happens that they really undergo complete penetration testing. And this story is a reminder that just because something works in one context, or by one definition, doesn’t necessarily mean it’s secure enough to deploy in other scenarios. As more of fashion’s systems are written with AI or handed over to third-party platforms, and as the industry takes on more responsibility for user privacy, it’s likely not enough to simply focus on what the tools do. The real question runs deeper, centred on how they’re built, who controls them, and what happens when something breaks.