Key Takeaways:
- Fashion’s decades-old idée fixe is finally being put to bed, but the most interesting part of the “Clueless closet” isn’t the generative AI; it’s the combination of a contextual layer and a novel application.
- Headlines continue to coalesce around the most sensationalist and least-plausible applications of AI, but a new case study reveals the start of some evidence that well-codified (if less exciting) AI use cases are delivering tangible value.
- Behind the scenes of both publicity-grabbing ideas and grounded AI applications, the unit economics of the commodity technology are shifting towards higher price multiples that could challenge the business case for automation.
Take part in the AI Survey 2026
It’s clear from this week’s stories that the scope of AI in fashion is becoming better-defined, and some old ghosts are finally getting laid to rest. But as part of our upcoming AI Report 2026, The Interline wants to capture deep fashion and beauty professional perspectives on how executives, end users, and technology implementers feel about the actual roll-out of the most potentially polarising technology in a long time.
From today through the end of May, readers can share their opinions, anonymously, through our AI Survey 2026. The survey should take 5-7 minutes to complete, and the results will be analysed in detail in our third free-to-download AI Report this summer.
The survey does not start from any biases: whether you like AI or hate it, use it or refuse to engage with it, we want to know!
Fashion’s ideas for AI are taking a familiar shape, and preliminary ROI statistics are rolling in, but as token subsidies start to expire, their financial contours are anything but predictable.
“There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.” When Hamlet said that, he probably wasn’t talking about the damn Clueless closet.
Anyone who’s worked in fashion technology for any length of time will have been haunted by the Clueless closet. For three decades it’s been such a dog-eared shorthand for what people expect fashion technology to be that it feels as though it’s forever lurking around the corner, ready to pounce on anybody who admits in polite society that they work in the tech side of the industry.
“Oh cool. Hey! When are we going to get that Clueless closet?!”
This week, beleaguered friends. This week. Maybe. It depends on whether we can afford the underlying models long-term.
If you happen not to have seen the 1995 film that gives it its name, the Clueless closet is a combination of some arcane behind-the-walls machinery that shuttles a vast selection of clothes around, and a weirdly prescient touchscreen interface on a CRT. Combined, they allow Alicia Silverstone’s trust-fund baby (not a lead character that would get a lot of legroom in today’s ‘eat the rich’ era) to scroll through a digital representation of everything she owns, and then have the approved outfit appear in her room.
Watching the scene back, The Interline has to admit that the team behind the film had a lot of solid ideas for what amounted to a fleeting gag. Virtual try-on is in there. Style recommendations are present and correct. The most jarring part is that Cher – the film does know how to laugh at itself – has all this advanced tech at home, but still does all her shopping at the mall.
That weird inversion of where the technology lives aside, fashion has chased down a lot of the ideas behind the Clueless closet between 1995 and today. We’ve had several cycles of VTO, even though widespread adoption never materialised under past paradigms. We have AI stylists.
But beyond its life on thousands of crappy pitch decks, and in a new cohort of vibe-coded hobby projects, the Closet hasn’t actually had a proper run – at least partially because we don’t all live in Beverly Hills.
That changed this week, with Google announcing the release of what it calls Google Photos Wardrobe. For all the jibes we’ve directed at this perennial idea, this instantiation of it actually represents something pretty interesting: a new generative AI layer on top of the kind of deep learning that’s been running behind the scenes of the major cloud photo storage services for a decade or more, and built on the contextual foundation that’s created.
We see The Interline’s audience skewing younger over time, so we know that some of our readers will not remember a time when you needed to manually tag photos if you wanted to semantically search their contents later. Not in the way that manifests today – occasionally needing to correct Apple Photos to point out that you and your sibling, or worse, your parent, aren’t the same person – but in the most fundamental way.
Before machine learning was pointed at cloud photo storage, if you wanted to find photos you took on a trip to Yellowstone, you’d have to remember the date you went there, or rely on geo-tagging. You couldn’t search by context, or by objects in the photos, the way you can today.
That exercise – turning your personal memories into a corpus of training data for deep learning – has paid off in some low-key ways over time, as a better experience interacting with those photo library products. But as tempting as it is to rib Google for falling down the Clueless closet rabbit hole, this week’s announcement is evidence that that same kind of infrastructure could pay off in some brand new ways when AI, by the contemporary definition, is applied on top of it.
Think about it this way: satellite mapping (especially Google Maps) was a useful navigation tool in isolation, but then applications like Uber turned its ubiquity into something different by architecting a new class of applications on top.
Google Photos Wardrobe may be a small experiment, in the grand scheme of Google’s big AI ambitions, but it’s also evidence of the importance of context layers to the creation of compelling consumer AI applications. If the idea takes off – and knowing Google’s history of retiring ideas, it potentially doesn’t have long to get properly rolling – then the truly valuable part, to end users, will be its ability to start with a bespoke wardrobe of items automatically captured and tagged from years’ worth of existing photos, rather than its ability to use commodity image generation and editing models to put people in clothes.
So we may not have the Clueless closet, mechanically speaking, but we did walk into a version of it that’s arguably more interesting because it skips the onboarding.
All of which raises the question: what horizon is fashion going to look towards now, to fill the void left by the infamous closet? What other wild ideas has fashion been pursuing with AI this week?
Well, Amazon Web Services (AWS) recently published a “generative AI playbook for retail,” which certainly puts a neat bow around a lot of the common ideas: virtual try-on, sizing recommendations, and personalisation of product search, discovery, and recommendations.
There’s nothing fundamentally new in there, but Amazon’s hyperscale business has never really been about front-facing innovation. Surely the customer-facing business unit of the world’s biggest online retailer has some other novel AI ideas?
What’s that? Generative AI podcasts where two fake hosts debate the polarised opinions that make up the typical marketplace reviews page? The Interline has been known to be wrong about these kinds of things in the past, but this feels as though it comes more from the bottom of the barrel than the top.
How about Google itself? Riding high on the Clueless wave, they must be ready to surprise us with some other novel applications! It’s… agentic checkouts for beauty, allowing shoppers to buy products from online beauty retailer Ulta, directly in Gemini and AI mode, without having to visit Ulta’s own website.
We don’t want to downplay the technical effort involved in building the plumbing and the standardisation here, but paraphrasing Jonathan Arena’s podcast episode from late last year, if the future is brands and retailers all becoming part of a vast, undifferentiated AI “mall” run by a single tech giant, then something vital to selling online will have been lost.
How about the startup ecosystem? There must be some inspiring new ideas coming out of there this week! Ah, it’s founders and celebrity investors promising to build “the Spotify of fashion”. The Netflix of fashion must be just around the corner!
We’re obviously being cheeky here, but if you were to benchmark the progress of AI ideas based solely on what makes the headlines, you’d be justified in saying we’re in the “throw stuff at the wall and see what sticks” era. Which is, not coincidentally, also the era where investors are desperately canvassing around for ways to spread-bet before one of the frontier labs goes public later this year.
The deployments that don’t justify excitable-sounding articles, though, are codifying into a much more sensible and recognisable shape. As readers of The Interline will see from this year’s AI Report (coming soon), there are now mature AI applications and integrated solutions across the familiar scope of the product journey – from generative design workspaces to shopfloor control, and from market intelligence to content generation.
These may not be “new” ideas in the strictest sense, but they are – ironically, like Google Photos Wardrobe – applications of novel technology to achieve well-defined and longstanding strategic aims.
And those solutions you don’t see in the mainstream press are also, at least according to a first-party case study released by Microsoft and ASOS yesterday, delivering a tangible ROI that doesn’t need to be measured with a reinvented yardstick.
The case study cites a couple of relevant figures: roughly half of all customer enquiries are now handled end-to-end by AI agents, with hand-off to humans for complex cases; and “about 15%” of the brand’s code is now AI-generated. On the more qualitative end, ASOS’s CEO is quoted in the release as saying that “We go from idea to shelf in about three weeks […] AI helps us find ideas faster and increases the productivity of our designers.”
But what do these tangible use cases and the kind of high-concept ideas in the news have in common? The same thing that unites essentially every application of generative AI that isn’t based on an entirely homegrown and pre-trained model: reliance on the same commodity technology under the hood.
There are, obviously, many image generation models, but the bulk of the actual generative work being done by applications that offer image-gen or AI-edit capabilities is being shouldered by either Google’s Nano Banana family, or by the new GPT Image 2 model. The same is also true of text-generation and agentic tool-calling: across fashion’s CRMs and accounting systems, its copilots and its conversational interfaces, the same small cohort of closed and open-weights models are running the show.
This is not, in and of itself, a problem. After all, most of those applications will also rely on the same small pool of cloud storage and distributed database and compute providers, which is why an AWS outage has a tendency to knock out a big tranche of the internet all at once.
Where it becomes a problem is when the true cost of providing those models needs to be passed on to the customer – and when that customer is a software-builder selling a subscription service to end users, all of a sudden the economics behind their offer can stop making sense.
The last week or so of news has poured a lot of fuel onto this fire. As Anthropic (the company behind the extremely popular Claude model series, and the brand leaders in the agentic coding space) gears up for a potential trillion-dollar IPO, potentially almost tripling its value in a single year, the company will undergo a period of intense P&L scrutiny that will expose just how heavily-subsidised its models are.
Those subsidies aren’t labelled as such, but the calculus isn’t complex: AI labs might make a small, inference-time margin on the token pricing they offer to developers through their APIs, but that margin does not come close to funding their other expenses, which include talent, marketing, and, of course, the training of new models.
Those other expenses have, so far, been entirely funded by the injection of new capital from investors who will, sooner or later, expect a return. And with no sign of the AI infrastructure build-out slowing down, the only way to trigger a return for those investors is to dramatically increase the cost of calling existing models.
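The arithmetic behind that claim can be sketched in a few lines. What follows is a minimal, illustrative model using entirely hypothetical figures – the function, the token volumes, and the dollar amounts are all our invention, not any lab’s disclosed numbers:

```python
# A back-of-the-envelope sketch of the subsidy calculus described above.
# Every number here is hypothetical -- none come from any lab's actual P&L.

def required_price_multiple(tokens_served: float,
                            price_per_m_tokens: float,
                            serving_cost_per_m_tokens: float,
                            fixed_costs: float) -> float:
    """How much token prices would need to rise for inference margin alone
    to cover fixed costs (training runs, talent, marketing)."""
    millions = tokens_served / 1e6
    # Per-million-token price needed: fixed costs spread over volume,
    # plus the raw cost of serving each million tokens.
    needed_price = fixed_costs / millions + serving_cost_per_m_tokens
    return needed_price / price_per_m_tokens

# Hypothetical year: 100 trillion tokens served at $10 per million,
# $8 per million to serve, and $5B in training/talent/other fixed costs.
print(required_price_multiple(100e12, 10.0, 8.0, 5e9))  # roughly 5.8x
```

Under those made-up assumptions, token prices would need to rise nearly sixfold before inference revenue alone covered the bills – which is the shape of the problem, even if the real numbers are different.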
Perhaps the most telling indicator of just how soon companies like Anthropic might alter their pricing structures to make this switch was this week’s announcement that GitHub Copilot (the coding agent built into the Microsoft-owned de facto development, storage, and collaboration hub) would be switching from a flat subscription-based pricing model to a usage-based one.
As the official blog post puts it, this shift is intended to “align […] Copilot pricing with actual usage,” and is “an important step toward a sustainable, reliable Copilot business and experience for all users”.
This is language that fashion customers of AI products might need to get used to hearing sooner rather than later. Because, if the cost of serving the underlying commodity goes up, there is only so much margin erosion the software provider can swallow – and this is as true of a wild swing like a “Spotify of fashion” as it is of a proven, enterprise-ready generative application with a much better-defined set of use cases.
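The same squeeze can be expressed from the software vendor’s side. A hypothetical sketch, with invented prices and usage figures, of what happens to a flat-rate subscription as upstream token costs rise:

```python
# A minimal sketch of why rising model costs squeeze flat-rate subscriptions.
# All figures are hypothetical illustrations, not any vendor's real pricing.

def monthly_margin(subscription_price: float,
                   tokens_per_user: float,
                   cost_per_m_tokens: float) -> float:
    """Per-user monthly margin for a vendor reselling commodity model access."""
    return subscription_price - (tokens_per_user / 1e6) * cost_per_m_tokens

price = 20.0   # hypothetical flat monthly subscription
usage = 5e6    # hypothetical tokens a heavy user burns per month
for cost in (2.0, 4.0, 8.0):  # upstream price per million tokens rising
    print(f"${cost}/M tokens -> margin ${monthly_margin(price, usage, cost)}")
```

At $2 per million tokens the vendor keeps $10 per heavy user; at $4 the margin hits zero; at $8 every heavy user costs the vendor $20 a month – which is exactly the point at which flat pricing stops making sense and usage-based pricing starts appearing.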
This week, we also saw NVIDIA’s VP of Deep Learning explain that that division spends more on AI tokens than it spends on human salaries – not to mention the fact that the AI and the human need to co-exist, essentially doubling the cost of every employee. And while these kinds of statements are hard to substantiate, and can easily be used as a smokescreen to shield layoffs that were already planned, Meta’s Mark Zuckerberg also explained yesterday that he believes the costs of AI contributed to the company’s need to let 8,000 people go.
Whether they fit into the “proven business solutions” bucket or the crazy ideas cohort, how many of fashion’s business cases for AI are going to stand the test of time if their essential inputs keep becoming more expensive? At what point might the industry realise that automation could wind up being more costly, at least per hour and per outcome, than human talent?
We can’t very well criticise Clueless itself for not engaging with any of these unit economics. After all, Cher was as insulated from consequence as it gets. But the generations inspired by that blasted closet are going to need to figure out how much they’re willing to pay for AI capabilities.
