Key Takeaways:

  • Headlines have focused on the speed of consumer uptake, but AI rollouts in business – rollouts, not necessarily uptake and adoption – across direct subscriptions and API usage, have speed-run the usual gatekeeping process.
  • The reliability and depth of integration observed in frontier models over the last 6-12 months have made them the foundation of new, mature applications from serious companies (not the solo-dev “ChatGPT wrappers” of old), but competition is especially fierce given the limited moats that fashion software companies can build around rapidly-commoditising underlying tech.
  • As the biggest AI companies in the world prune their own portfolios and abandon various lines of business, after realising that everyone is building essentially the same things, a similar market consolidation in fashion is inevitable.

Summarise and debate with AI:

Take the content and context of this article into a new, private debate with your AI chatbot of choice, as a prompt for your own thinking. (Requires an active account for ChatGPT or Claude. The Interline has no visibility into your conversations. AI can make mistakes.)

    Technology usually goes through a process of gatekeeping and progressive availability. That staging can be a function of complexity; a technology can be too raw for non-experts to use, and then need to undergo a process of refinement and commercialisation. Or it can be a result of the investment in early-stage development coming from a private sector that wants to guard its new competitive edge, and hold the reins on its creators’ roadmap, for as long as possible.

    Generative AI has bucked this trend. Everyone reading this is tired of being told how quickly ChatGPT became the most-downloaded consumer application, but the lesser-known claim is that OpenAI billed itself as “the fastest-growing business platform in history” last November, based on a combination of direct subscriptions and API / platform token consumption.

    That’s a very difficult claim to verify, which is precisely why the lab is comfortable making it, but the principle is at least testable if we treat it as provider-agnostic. 

    You can undertake that test by looking around you in the office, or the coworking space if you’re remote, or the airport bar if you’re travelling. If you work from home, maybe phone a friend or scrutinise your partner’s laptop. Whatever setting you’re in, take a quick peek at just how many people have a ChatGPT or Claude icon in their dock or menu bar (more likely to be the latter today, but that’s a different story). 

    Anecdotal evidence is not scientific evidence, obviously, but the odds are pretty good that you’ll see your share of AI users around you. And this is only the visible side of the sheer ubiquity we’re dealing with – it doesn’t account for the number of people who are using the same language models as features of other applications.

    As a quick bit of self-attested validation, around 80% of the apps in the dock of team members here at The Interline now have AI capabilities of varying utility, and the web applications we use follow the same pattern.

    Point being: the gatekeeping phase of AI, however truncated it was, is now very much over. Not just in the sense that there’s an LLM in every surface we interact with in our personal lives, but from the perspective of selling and selecting software.

    We’ve already talked, last week, about the likely trajectory for roll-your-own and, eventually, self-composing software, but there’s something much more near-term happening that doesn’t require the industry to reach an egalitarian builder-state or pass the weird event horizon where software assembles itself from a marketplace of tools, skills, and capabilities.

    In the here and now, state-of-the-art language and image generation models have reached a level of reliability and maturity that has made them attractive on two fronts: to deeply integrate into enterprise workflows in their self-contained or first-party-app-expressed forms (see the ascendance of Claude, Claude Code, Claude CoWork etc. in business applications), and to become the magic behind the curtain of a new cohort of business-focused applications that treat retrieval, tool-calling, skills, and so on as both mostly-solved problems and as the new architecture of enterprise software.
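    For readers wondering what “tool-calling as a mostly-solved problem” actually looks like under the hood, here is a minimal, vendor-neutral sketch in Python. The `lookup_fabric` tool and the hard-coded “model output” are illustrative assumptions, not any provider’s real API – in a production application, a frontier model would emit the structured call itself, and the application’s differentiation lives in the tools it registers, not in the plumbing.

```python
# Minimal sketch of the tool-calling loop that modern AI applications
# are built around: the model emits a structured request, the app routes
# it to a registered function, and the result is returned for the next
# model turn. Tool names and the simulated call are illustrative only.

import json

TOOLS = {}

def tool(fn):
    """Register a plain function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def lookup_fabric(name: str) -> dict:
    # Stand-in for a retrieval step against a PLM or materials library.
    catalogue = {"organic cotton": {"gsm": 180, "origin": "India"}}
    return catalogue.get(name.lower(), {})

def dispatch(model_output: str) -> str:
    """Parse a (simulated) model tool call, execute it, return the result."""
    call = json.loads(model_output)
    fn = TOOLS[call["tool"]]
    result = fn(**call["arguments"])
    return json.dumps({"tool": call["tool"], "result": result})

# A frontier model would generate this JSON itself; here it is hard-coded.
simulated_call = '{"tool": "lookup_fabric", "arguments": {"name": "Organic Cotton"}}'
print(dispatch(simulated_call))
```

    The point of the sketch is how little of it is the hard part: the loop itself is commodity, and the value sits in the registered tools and the workflow around them.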

    These are, we need to note, distinct from the early-stage avalanche of “GPT wrapper” applications. Where those barely earned the label of “application” at all, existing as extremely thin layers between the user and an LLM with a fancy system prompt, this new group of solutions marries the core capabilities of frontier models with real front-end engineering, workflow consideration, industry-specific tooling, best practices, talent, and taste.

    Neither are the companies behind them the same kind of fly-by-nights we saw in the earliest days of LLMs becoming available through APIs and platforms. The new wave of applications is being built by teams of ten and upwards, including fashion industry veterans and technology executives who have previously been part of major brands or of successful, widely-adopted non-AI solutions.

    All of which sounds, on the surface, like great news. And in one sense it is: go looking for an AI application that meets your business needs, and you’re probably going to find one. If you don’t, look again a month later and the market has probably evolved.

    So what’s the issue? The same challenge that is also being observed between the big AI players themselves: the difficulty of differentiating one application from another when all of them are built on the same underlying models, or built to serve an identical market need.

    And we’ve seen those challenges expressed in some headline-grabbing ways this week. 

    The most visible of them was OpenAI’s decision to shut down Sora, the video generation app that confusingly shared a name with the model behind it – a model that is also being abandoned, and will no longer be available through the OpenAI platform beyond the sunset period. Fashion never really did a great deal with Sora – at least not openly – but the model and the eponymous app were behind a billion-dollar agreement with Disney, which is now shelved.

    Weirdly, the Sora 2 model was the subject of an enterprise case study that OpenAI published just two months ago. Which is at least indicative of how quickly these kinds of decisions get made.

    The press coverage has focused on OpenAI’s decision as part of a focusing initiative that’s seen the company kill off a series of sideways pushes that its Applications head referred to as “side quests,” including the “erotic” mode that someone apparently thought was a good idea, and, most pertinently for our readers, the original vision for in-app checkouts.

    In its official announcement, also published this week, OpenAI bills this as a forward step, but the language used in the announcement reveals it as just as much of a backtrack:

    “We’ve found that the initial version of Instant Checkout did not offer the level of flexibility that we aspire to provide, so we’re allowing merchants to use their own checkout experiences while we focus our efforts on product discovery.”

    You can read this as a conscious pivot to focus where, arguably, the value of natural language intent-expression should have always been focused. But you can just as easily read it as a realisation that building a checkout was a largely pointless endeavour, because online transactions are an intensely consolidated market where a few key players have already won the majority of the share, and will be difficult to dislodge.

    This same philosophy is also behind another pullback this week: Google closing its AI try-on service Doppl. The Interline doesn’t believe that the company is shuttering its underlying virtual try-on models, which we got to try first-hand a couple of weeks ago, so this shift is instead emblematic of the idea that building something a lot of other companies are already building can end up being a waste of resources. (Well, that or Google had had enough, after nine months, of holding the infamously poisoned chalice that is VTO in fashion.)

    It’s also not as though Google isn’t on the other side of this race in a lot of other cases. The most famous is obviously YouTube. In theory, the web architecture exists to allow anyone to build a YouTube competitor, but nobody does because YouTube already exists. And the same logic looks like it could end up applying to low-latency voice synthesis, with this week’s release of Gemini 3.1 Flash Live audio, which is a crappy name for what could end up becoming the de facto model behind basically any use case that calls for instant-feeling conversations with an AI agent.

    In fashion, the race is currently on to map the areas where maturity in the core generative AI models most closely aligns with industry demand. And right now that’s manifesting itself in image and video generation, across a class of applications that aim the same (or a broadly similar) feature set at both early-funnel creative users and end-of-funnel marketers, content creators and communicators. 

    In an unusual way, this group of solutions has smushed – a very technical term, The Interline realises – the two poles of the product journey together, and has brought forward end-stage visualisation to the same environment as concepting. There are some deep questions to be asked about how well this target audience understands the garment engineering and commercialisation gap between creating an on-model, photoreal look at a garment that’s only in the early design stage, and actually making it. But there’s little question that this is an expanded toolset that creative teams want.

    And it’s a good job the demand is there, because there are at least fifteen applications offering very similar feature sets today – and they all rely on the same commodity image generation models under the hood. Is there room in the market for all of them? Definitely not.

    In one sense this is fascinating to watch, as observers and analysts of technology development and adoption – especially in creative use cases. It’s like sitting on the sidelines for the period where Adobe Illustrator hadn’t cornered the market all over again, but with tens of competitors instead of just a couple. (The Interline is also reminded, again, that CorelDRAW managed to successfully carve out persistent market share outside the West.)

    And, just as was the case with Illustrator, these tools also compete, feature-to-feature, with industry agnostic platforms that, again, use all the same image and video generation models, but are tuned for more general and adaptable creative workflows, rather than those unique to fashion. When we include those platforms, the competitive landscape is even more crowded.

    So what are fashion brands to do? The Interline’s suggestion would be to start framing the core technology of turning a sketch into a photoreal-looking “render” as a solved problem, just as we look at adding things to a cart and processing payments as commodity services with little differentiation between them. 

    Instead of being impressed by image generation as a core competency of mature models, the futureproof approach is to see it as the ground upon which software developers have to create great applications that your teams want to use, and that do something distinctive with the same tokens you can easily go out and burn in fifteen or more other places.