Key Takeaways:

  • The surge in AI companion usage, reflected in growing revenues and adoption rates, signals that many users value relational interaction with AI as much as – if not more than – its productivity functions. In theory, this could open the door for industries like fashion and beauty to blend emotional engagement with commercial utility, but the cultural considerations and brand risks are profound.
  • Embedding companion-like AI into branded product recommendation systems could transform brand–customer relationships by creating daily, trust-based touchpoints, potentially replicating some of the value retailers and brands get from influencer relationships. But emotional manipulation, sensitive data handling, and a potential homogenisation of style and expression all need to be stacked on the other side of the scale.

A recent Harvard Business Review article, highlighting the extent to which therapy and companionship have climbed the ranks of common use cases for AI, says as much about the technology as it does about the people using it. Pair that with the fact that AI companion apps are on track to generate $120 million this year, with downloads up 60 percent from last year, and you get a picture of a consumer audience less interested in AI as a tool for automation than as a way to find connection in a digital world that’s left a lot of people feeling rudderless.

There’s a social read on this news, which reveals a pervasive culture of loneliness and disconnection. Proponents of building AI into social products and platforms have already pointed out that people, on average, want more friends than they have – and the debate about whether large language models could, or should, fill that gap is raging. Just before this analysis was published, a Reuters investigation found that Meta’s companion personalities would offer up false medical advice and enter into romantic conversations with users known to be underage.

Hardly a quagmire that any self-respecting brand would feel like wading into, correct?

But there’s a counterpoint to the dystopian stories: more than 70% of teenagers in the United States have interacted with an AI companion, making these companions a potentially key touchpoint for brands that have become increasingly willing to find and engage prospective consumers where they are. Fashion and beauty are part of a cohort of industries where individual connection isn’t the product in and of itself, but where that kind of connection does demonstrably lead people (organically or otherwise) to another kind of product or service. The funnel is pretty well-defined.

And if the next generation are already comfortable talking to an LLM about their relationships, their anxieties, and the details of their everyday life, the groundwork is already being laid – at a whole-society level, whether we like it or not – to prime people to listen to what AI personalities have to tell them, and to act on it.

The leap from this to the various ways that fashion and beauty want their customer base to meet them in a personalised way isn’t a gigantic one, even if the chasm between the two use cases is evident. The tools, after all, are already in circulation when it comes to placing a friendly face between initial exposure and transaction. Virtual try-on that shows you a complete look before you commit to making a purchase. Recommendation engines that can filter by mood and occasion. And, closest to the companion model, natural language styling bots that respond to prompts like “something for a dinner party that feels relaxed but still smart” by either interpreting product ranges or following rules for making on-point recommendations.
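For readers curious about the mechanics, that last, rules-based idea can be sketched in a few lines of Python. Everything here – the catalogue, the tag vocabulary, the keyword rules – is an illustrative assumption rather than any retailer’s actual schema; a production system would use an LLM and a live product feed, but the shape is the same: map a free-text request to occasion and mood tags, then rank products by tag overlap.

```python
# Hypothetical sketch of a rules-based styling bot: free text in,
# tag-filtered product recommendations out. All data is invented.
from dataclasses import dataclass, field

@dataclass
class Product:
    name: str
    tags: set[str] = field(default_factory=set)

CATALOGUE = [
    Product("Relaxed linen blazer", {"dinner-party", "relaxed", "smart"}),
    Product("Sequin cocktail dress", {"dinner-party", "formal"}),
    Product("Cotton lounge set", {"relaxed", "home"}),
]

# Naive keyword-to-tag rules, standing in for what an LLM would infer.
RULES = {
    "dinner party": "dinner-party",
    "relaxed": "relaxed",
    "smart": "smart",
    "formal": "formal",
}

def recommend(prompt: str) -> list[Product]:
    # Infer which tags the request is asking for.
    wanted = {tag for phrase, tag in RULES.items() if phrase in prompt.lower()}
    # Score each product by how many requested tags it satisfies.
    scored = [(len(wanted & p.tags), p) for p in CATALOGUE]
    return [p for score, p in sorted(scored, key=lambda s: -s[0]) if score > 0]

if __name__ == "__main__":
    for p in recommend("something for a dinner party that feels relaxed but still smart"):
        print(p.name)
```

Swap the keyword rules for a language model and the hard-coded catalogue for a real inventory, and you have the skeleton of the styling bots already on the market – which is precisely why the remaining gap is relational rather than technical.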

That gap, right now, exists because nobody is making friends with a chatbot that serves as the frontend for an online marketplace. Nor, at this stage, are many people trusting AI to handle the subjective, personalised side of their shopping for them. But as unsavoury as it currently sounds, there’s definitely something developing in the middle.

What this week’s backlash to the removal of OpenAI’s GPT-4o model revealed – the model was retired following the launch of GPT-5, then reinstated for paying users once it became clear just how wedded the user community was to its “personality” – is that, between those two applications, one exerts a far stronger pull than the other. Which, in turn, raises the question of whether a “beauty companion” or “fashion friend” could wind up being the next form of the chatbot, bringing together the efficiency of a recommendation engine with the emotion of a personal interaction, in a way that can be measured in conversions.

If AI can learn your conversation patterns, remember what you told it last week, and respond in a way that feels consistent with a particular personality – to the extent that a large enough share of OpenAI’s 750-million-strong user base clamour for the return of a technically “worse” model – it’s not a stretch to imagine that same language model being given the proper tools and placed in the role of a stylist: one that knows your wardrobe, your habits, and the style you’re trying to channel. The fact that you can now choose the “personality” for ChatGPT’s latest version is a small but telling step in that direction, allowing users to select between robotic efficiency and a different kind of relationship.
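As a thought experiment, the persona-plus-memory pattern that would underpin such a stylist is simple to express. Everything in this sketch – the persona name, the remembered facts, the message structure – is an assumption for illustration, not any vendor’s actual product:

```python
# Hypothetical sketch: a fixed persona prompt plus lightweight memory of
# past sessions, assembled into the message list sent with each request.

STYLIST_PERSONA = (
    "You are Ren, a warm, candid personal stylist. You remember the "
    "user's wardrobe and the style they are trying to channel."
)

# Facts retained from earlier conversations (assumed storage layer).
user_memory = [
    "Owns a navy wool overcoat and prefers muted colours.",
    "Mentioned last week they want to dress 'quiet luxury' for work.",
]

def build_messages(user_prompt: str) -> list[dict]:
    """Combine the persona, remembered facts, and the new request."""
    memory_block = "Known about the user:\n- " + "\n- ".join(user_memory)
    return [
        {"role": "system", "content": STYLIST_PERSONA},
        {"role": "system", "content": memory_block},
        {"role": "user", "content": user_prompt},
    ]

if __name__ == "__main__":
    print(build_messages("What should I wear to Friday's client dinner?"))
```

The hard part, as the rest of this analysis argues, isn’t the engineering; it’s whether the relationship that structure creates is one brands should be cultivating at all.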

The closest examples today – buzzy platforms like Alta, Daydream, or Style DNA – make strides in personalisation and visualisation, but stop short of building a true persona for AI interactions. These services, and others like them, will map wardrobes and plan outfits, make informed recommendations, and build style profiles, but they are very much framed as applications, not real advisors or acquaintances.

Outside of fashion, though, the engagement potential of this model has already been proven. Replika, one of the most popular companion apps, has seen users develop long-term attachments to AI personas (even “marrying” them). These “relationships” form because the continuity of the interaction allows a sense of trust to develop, and that kind of trust is also a key foundation of the remote-social relationships that exist between, for example, influencers and their audiences.

From a commercial perspective, the incentives for translating some of that relationship value to product-centric industries are certainly developing. A fashion-led AI persona could act as a daily touchpoint between brand(s) and customer, naturally feeding into product recommendations, resale prompts, or rental suggestions in the same way a friend, colleague or human assistant might.

Most digital personalisation built out so far has been about relevance and efficiency: it tells you what might suit you based on attributes and a decision-making matrix, but rarely makes you feel anything about the choice. The gap between an assistant and a friend is that a friend understands why a purchasing decision matters to you, across a much broader spread of idiosyncratic indicators. 

But if the appeal is clear here, the risks are a klaxon sounding from every street corner. Wading into the “AI companion” quagmire is something many brands could see as a step too far when the cultural conversation is yet to resolve, and when stories about AI’s ability to coerce people into downward spirals of behaviour continue to emerge. And as we’ve covered in these analyses before, what happens if an AI is designed to flatter and encourage in order to sell, and then that goal becomes public? There is a distinction between guidance and emotional manipulation that even the best advertising firms have fallen foul of regulators for getting wrong, and delegating that kind of responsibility to an AI model would represent a real and unprecedented risk.

There’s also the inconvenient truth that any system that needs to know your mood, your social plans, and your body data to do its job is also a system holding extremely sensitive information – a fact that sits uneasily alongside the commercial motivations of fashion retail, and another factor we’ve weighed up in the last few weeks.

Even without bad intent, there is the possibility of subtle homogenisation. Large-scale AI training tends to smooth out the edges, and a stylist persona that feels deeply personal is, in reality, still likely drawing from the same mainstream data sources as every other system on the market. Advice might feel tailored in tone, but end up guiding people towards the same middle-ground aesthetic. After all, AI is designed this way: it looks to please by providing answers and solutions it can verify, which doesn’t line up with the kind of risk-taking that fashion expression often grounds itself in.

There’s also the broader cultural question of whether it is healthy to outsource the social and creative dimensions of getting dressed to something non-human. The HBR article we cited at the start of this analysis reflects a real and growing reliance on AI for connection – one that may be filling a gap for some people, but reinforcing isolation for others. Introducing that dynamic into fashion and beauty risks narrowing not just what we wear, but how much we engage with the people and culture around us.

These are not reasons to avoid exploring the idea entirely, but they are reasons to approach it with more care than many AI integrations have received so far. Fashion and beauty, more so than most industries, trade on identity. If that identity is shaped in partnership with an AI persona, the terms of that partnership need to be transparent. People should know when a recommendation is commercially motivated, how their data is being stored, and what the limits of AI’s understanding actually are – even if they elect to interact with a friendly persona with whom they develop some kind of kinship, rather than wanting simple, robotic answers.

For fashion and beauty, the question is not whether the technology could build a persona that feels like a friend; we’re in the age of AI companion marriages, after all. Instead, the point at issue is whether doing so would actually serve the people using it, or simply give brands a more intimate way to sell by allowing personalisation to take on a persona of its own.

Best from The Interline:

Kicking off this week, we talked to the Senior Manager of Data Science and Digital Transformation and the Technical Consultant, Retail Industry Expert at Kalypso about how fashion brands can move beyond generative AI hype to build human-centred, data-driven strategies that deliver real business value across the entire product lifecycle.

In our first news analysis of the week, we discuss the impact of China’s robotics push on fashion and beauty, and whether factory floor automation could offer a practical answer to the ongoing question of supply chain uncertainty.

Fashion and beauty are tackling many of the same digital challenges – personalisation, simulation, sustainability – but they’re approaching them from different foundations. Mark Harrop explores what these industries can teach one another about digital transformation.

Closing out the week, Bethanie Ryder on ‘AI & The Future Of Luxury, Status, Differentiation, And Ownership’.