Key Takeaways:

  • After more than two years of selling AI on the promise of nebulous metrics like efficiency, productivity, and output, enterprises are now seeking more concrete metrics based on KPIs such as design adoption rates and margins.
  • Where AI capabilities are being added to existing enterprise products, with sliding-scale pricing based on tokens, technology vendors could soon find themselves having to price those tokens differently depending on the extent to which they contribute to business outcomes.
  • After an initial period of platform introductions and take-offs, the secondary market for fashion is now entering a second phase, characterised by consolidation. In the process, big tech’s hold on the channel between brand and consumer could become even stronger.

Summarise and debate with AI:

Take the content and context of this article into a new, private debate with your AI chatbot of choice, as a prompt for your own thinking. (Requires an active account for ChatGPT or Claude. The Interline has no visibility into your conversations. AI can make mistakes.)

    What happened to the vision for end-to-end 3D?

    Before we get underway with this week’s new analysis, it turns out the original driving principle behind digital product creation (a full, end-to-end workflow architected on top of a single 3D asset) is alive, well, and being steadily proven out in a market segment you might not be paying attention to: team sports.

    Find out more in The Interline’s recent collaboration with Embodee, which spotlights seamless workflows that run from configuration and product customisation right through to digital production.

    Is fashion a suitable candidate for outcome-based software pricing?

    Ever since the launch of ChatGPT, a stubborn gulf has remained between the promise of AI – a fundamentally different way to interact with software and do work today, and an eventual drop-in replacement for any knowledge worker – and its quantifiable impact.

    Everyone’s familiar with the infamous MIT study that forms the basis of the often-cited claim that “95% of AI pilots return no value,” but even without that report’s shaky methodology and unclear implications, there’s still a prevailing sense – especially in the enterprise – that AI should either be delivering more value than it is, or that the way it’s being deployed isn’t translating into ROI measurability.

    This is also tied to the widespread trend of “shadow AI” usage, where workers who use generative AI in their personal lives bring those same chatbots into the office (physical or remote) because they believe those tools will make them more productive at work. According to a study from last summer, this practice was common at around 90% of companies. That distribution is likely to have changed in the last six months, as enterprises have tightened data governance restrictions and underlined the severity of inputting sensitive data into off-the-shelf cloud models, and as enterprise software has progressively had more AI features attached – with wildly different degrees of success. But it remains the case that individuals perceive a gulf between the power and / or usability of the tools they have access to at home, and the software they interact with at work.

    The real difference between these two worlds of software, though, aside from the obvious fact that one segment is pitched at consumers and the other is sold to companies, is one of scope. 

    A consumer-facing chatbot has, by definition, to be all things to all people. Large language models are pre-trained on a huge spectrum of different domains precisely because they are intended to be infinitely adaptable workhorses that can just as easily compose a song – see this week’s addition of music generation, using the third iteration of the Lyria model, into the consumer Gemini app – as they can counsel someone through a break-up. The biggest AI labs continue to push the frontier of large-parameter-count “general intelligence” precisely because the vision is to sell a single product, either through a first party application or via API, that can fit virtually any intended use case.

    This is not, generally speaking, how enterprise software works. Obviously the footprint of behemoth categories like ERP and PLM has morphed over time, but business software has always been sold on the assumption that the return on investment would be quantifiable within a specific discipline, domain, or team.

    This disconnect has, The Interline believes, been behind a lot of the friction between the way big AI models (and the applications that are built on top of them) are pitched and the difficulties that organisations have when it comes to mapping an effectively infinite capability horizon to product journeys that are made up of discrete steps with pretty binary success and fail states.

    Or, to put it another way, unless the vision for AGI is ever achieved (in which case the very idea of a “role” ceases to matter, which should be a reminder of just how massive an idea that is!) then AI for the enterprise should be framed – and its success criteria should be judged – based on its ability to impact narrow disciplines, rather than its potential to be endlessly, generally transformative.

    Out in the open market, this mindset shift is already underway. As we wrote in last year’s AI Report, the initial flurry of companies attempting to foist giant, general purpose models onto their entire workforces, and asking people to create value from them, has been superseded by the “application era” of AI.

    To see this philosophical difference in action, ask yourself whether or not ChatGPT is a good enterprise product. Then ask yourself the same question about Claude Code. Most people will arrive at a different answer, even though the models underneath those businesses are more similar than they are different. The delta stems solely from how the model is productised, and from how successfully it has been bottled, marketed, and sold to users who have a very specific job to do.

    (The Interline is well aware that OpenAI Codex exists, and our team uses it, but the first-mover advantage in software engineering has been extremely pronounced.)

    But there is still a difference between AI that’s successfully packaged and positioned as an enterprise application, and the metrics that are used to determine whether that application is actually creating value. Sticking to the same SWE domain for a moment, consider the study (again, with limited scope and curious methodology) from 2024 which found that AI made developers feel more productive while objectively slowing them down.

    Now, we suspect that an update to that study would reveal different results, based on how successfully agentic coding has been productised and how much the state of the art models have improved, but even today, judging development work based on output volume, commit velocity, and similar metrics is not fundamentally different from historic promises that general-purpose AI would make us all “more productive”.

    Instead, the software sector is moving towards judging AI on much harder metrics: so-called Software Engineering Intelligence platforms and methods that connect daily coding tasks to longer-term business outcomes.

    This, to The Interline, feels like a necessary step in fashion workflows as well. It’s no secret that AI has served to explode the beginning and the end of the funnel, through generative image and video models that lower the barrier to creating inspiration or final pixels (static or moving) to effectively zero. But while the consumer-facing applications can be attached to concrete metrics like visitor conversions and return rates, increasing designers’ outputs is not a viable success metric, especially compared to KPIs like adoption rates.

    This switch to outcomes is one that we expect to see the fashion industry make over the next 12-24 months, and two news stories from across other technology sectors this week provide the foundation for that prediction.

    First, the Financial Times captured a raft of different headlines to produce an analysis of what was really at the core of the technology stock sell-off we documented last week, with the answer being that the user / seat-centric business model for selling software – predicated on the idea that empowering a single person to do more is the primary objective – may soon need to give way to a pricing model built on “tasks completed”. Or, by another label, outcomes.

    In some sectors of work, this shift is easy to visualise. A sales contract, for example, is either signed at the end of the cycle or it’s not. In fashion, there are several ways to approach the same transition, with varying degrees of abstraction. Should we judge AI’s impact when a product actually sells? Should we measure it when one or more sample cycles are cut? Should we benchmark it when margins are maintained or improved?

    This logical reframing from pay-out to pay-back is simple to describe, but the process of mapping it out will, we expect, become a unique exercise for each business. And for software vendors, as Snowflake CEO Sridhar Ramaswamy explained this week, the era of selling seats could soon be over, replaced by quantifiable value creation.

    And this is a particularly important consideration for technology companies that are now layering extra-charge AI capabilities on top of their solutions, with additive pricing based on tokens. 

    If the aim is to upcharge users for AI on a usage basis, then a different rubric for judging the value of that usage could be just around the corner. Just as not every AI application is equal, neither is every AI token.

    Marketplace consolidation and big tech control in the secondary market.

    As a short coda to this week’s analysis, it was announced on Wednesday that Depop, one of the largest secondary marketplaces for pre-loved fashion, was being acquired by eBay, from previous owner Etsy, for more than $1 billion USD.

    On the surface, this kind of changing-hands of platforms wouldn’t be remarkable, but eBay has played a subtle hand over the last couple of years to consolidate its hold on the market for used goods, and analysts believe the company’s roll-out of AI tools for sellers has a good chance of further increasing the amount of product that moves through its channels.


    And while eBay is certainly not a member of the big tech trillions club, the company is still a multi-billion dollar technology player that now owns an increased share of a fast-growing route between brand and consumer.

    For a while now, The Interline has observed fashion brands ceding control of an increasingly important channel for engaging with the people who buy and wear their products. There was a juncture where it seemed as though first-party used channels were on the rise, but outside of repair and replace programmes, brands now seem to have abandoned much of their ambition to own a slice of the used market, leaving it to other platforms to host.

    What happens when those platforms consolidate, and when ownership in them transfers over to technology giants, could prove to be a wake-up call for just how much ground fashion has allowed third parties to gain.

    Best from The Interline:

    The success of any comprehensive transformation initiative hinges not just on uptake on the brand side, but on the existence of class-leading capabilities upstream. But what does it look like for a leading supplier to build technology maturity that matches or exceeds the innovation its customers are pursuing?

    To answer this, Steve Dodd, Chief Digital Officer at MAS, joined The Interline Podcast earlier this week.

    Next up, we interviewed Kalypso’s Retail Industry Lead, Joshua Young, on why digital product creation only scales when it’s defined as a business model shift, not a technology rollout.

    In partnership with DeSL, we talked to Workwear Outfitters about how the unique forces driving the digitalisation of the workwear sector – as well as the surprising array of elements it shares with the broader fashion sector – created a strong business case for adopting PLM as a universal source of truth across teams, brands, and business units.