This article was originally published in The Interline’s AI Report 2025. To read other opinion pieces, exclusive editorials, and detailed profiles and interviews with key vendors, download the full AI Report 2025 completely free of charge and ungated.


Key Takeaways:

  • There are currently few concrete definitions of what AI agents actually are, but technology companies are nevertheless locked into a race to sell “digital labour” and “AI teammates”. How quickly fashion is willing to grant agents autonomy to reason, take action, and communicate remains to be seen.
  • The agentic use cases that have been depicted in media and analysis so far have tilted towards the consumer end of the spectrum, in product discovery and shopping. But as Model Context Protocol matures, it seems inevitable that AI models will be given access to structured systems and essential product data, creating a new class of process orchestration under the banner of product design and development.
  • Despite early over-hype, AI agents are poised at the very least to change the work of (and almost certainly, eventually, to replace) human workers in fashion’s knowledge roles and, after a period of scaffolding and reinforcement learning, in specialist disciplines as well.

Just behind “generative,” the word most often attached to “AI” in 2025 is “agentic”. From the world’s biggest enterprise software vendors, to individual designers, developers, programmers, and executives, everyone’s trading in, or being sold on, the idea that agents are what’s next in AI.

But how many of us have a real, working definition of what an agent is? How many AI pessimists know the true parameters of what they fear? How many AI optimists know the limitations of what they’re really building towards? And if even storied technology investors, with deep draughts of AI stock, struggle to come up with an agreed-upon understanding of what an agent is or should be capable of, is it down to the rest of us to shuffle through the pieces and try to assemble a whole?

None of these questions are new, but their urgency certainly is. At The Interline we wrote around this time last year about early-stage potential use cases and considerations for agentic AI, and then in 2025 we picked the narrative up again to talk about how AI agents could streamline the online shopping process to the point of becoming prescriptive – as well as how the personality that large language models express has the potential to stray into coercion and behavioural influencing, both problematic areas for brand-consumer interactions.

That’s a lot of adoption progress – and potentially a great deal of forced cultural evolution – in a pretty compressed window of time.

But those examples from the last year have really only focused on the consumer-facing side of agents: the idea that everyday ChatGPT or Gemini users will be (already are, in fact) asking those large models to canvass the internet for them, assemble a shortlist of things they want to buy through scraping, search indexes, extraction, and existing pretrained knowledge, and then – thanks to the well-documented partnership between OpenAI and Shopify – feed into an in-app checkout or into purchase journeys that end in a traditional channel but have a very new starting point.

The consumer end is also where a lot of the early investment in agentic AI, at least in fashion, has been directed. Not just into finding products and services, and streamlining the experience of procuring them, but into the point cloud of education, support, customer service, and brand engagement interactions that exist around that transaction – with Cisco predicting that AI agents will handle close to 70% of that workload within three years.

But if we believe that assigning an AI model a task and having it go away and run that task, observed or otherwise, is going to make a difference to our lives as consumers, we have to also reckon with just how large of an impact the same outcome will have on the rest of the product journey, upstream of the point of purchase. Anything that changes the outer world to the extent that agentic AI is proposing to will also inevitably change the world of work. And anything that fundamentally alters how people research, evaluate, and choose fashion or beauty products will also end up altering how, where, and by whom (or by “what”!) those products are conceived and made.

When we first talked about AI agents, back in the spring of 2024, we wrote that when it came to the idea of behind-the-scenes applications of them in planning, design, development and so on: “The current consensus – although this is far from proven in practice – seems to be that creatives will continue to bring original ideas to the table, and technology will then expedite the execution process.”

And if you want the simplest operating model for what an AI agent is intended to be, this is still about as ground-level as it gets. People set a task, and then send AI models, equipped with tools in the form of robotic process automation (spinning up a browser in a virtualised environment and clicking and scrolling through websites or web applications on a person’s behalf) or codified API-like interfaces, to work on the execution – ideally faster or better than the human could have managed for themselves.

In a very literal sense, the agentic AI promise is a case of telling a computer program to go and do work on your behalf, but with a probabilistic, natural language frontend rather than pre-coded, deterministic instructions. 
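To make that less abstract, here is a minimal sketch of that loop in Python. The tool names, the message format, and the `call_model` stand-in are all hypothetical – this is the shape of the pattern, not any particular vendor’s API:

```python
# A minimal sketch of the agentic loop described above. Hypothetical
# tool names and a placeholder model call -- not a real vendor API.
import json

def search_web(query: str) -> str:
    """Hypothetical tool: fetch search results on the agent's behalf."""
    return f"results for: {query}"

TOOLS = {"search_web": search_web}

def run_agent(task: str, call_model, max_steps: int = 10) -> str:
    """The model picks a tool, we execute it, and feed the result back."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(history)          # probabilistic frontend
        if reply["type"] == "final_answer":  # the model decides it's done
            return reply["content"]
        tool = TOOLS[reply["tool_name"]]     # deterministic execution
        result = tool(**reply["arguments"])
        history.append({"role": "tool", "content": json.dumps(result)})
    return "Step budget exhausted without a final answer."
```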

This might not sound revolutionary, especially to an audience that’s pretty well-versed in deploying technology to do a significant amount of heavy lifting for us. As simple as that all sounds in principle, though, that high-level view leaves a lot of practical detail, and a deep reservoir of implications, still on the table. 

Most pointedly: how, exactly, is any of that AI direction and unsupervised task completion supposed to work in an environment where even broadly-intelligent and generally-capable human beings struggle to navigate individual platforms and solutions, let alone ecosystems made up of tools with varying degrees of integration? Even the best-orchestrated companies, with the most refined hiring processes, don’t always successfully get people to do unsupervised work – and even the most hype-prone AI labs are not promising that large language models, as they exist today, have anything like the processing space (defined in parameters for AI models, and synapses for humans) that the average human brain does.

And what does it mean if this vision does take hold, models prove they can perform complicated knowledge work, and today’s creative professionals find their jobs redefined in three years’ time to the same extent that their counterparts in customer service already have – teeing up concepts and then curating the output of AI models rather than doing the direct work themselves?

There are proposed answers to the technical questions I’ve just asked (scaffolding, scaling, tools, and reinforcement learning are all things we’re going to hear a lot more about, since they are intended to close the gaps between human and machine capability) but far fewer suggestions on what we should do with the cultural ones. Or, to put it another way, people are spending a lot of time and effort finding ways to make AI more capable of doing people’s work alongside them, but decidedly less time on how the people who hold those jobs should feel.

So from this starting point of fairly lop-sided concern, perhaps the easiest thing to address is what AI agents are not – at least not right now. And the answer is: a straight, drop-in replacement for human labour. In the majority of domains, people sitting at computers and using applications and services themselves are still currently the number one choice for difficult, novel, and demanding knowledge work, and I personally think the timeline where that order might flip is likely to be years away. (The horizon may be closer for coding and software engineering.)

This protected status is also benefitting from some additional guardrails, thanks to high-profile examples of executives betting on the labour-automation potential of AI too early – examples that have helped to set a sceptical tone for the wider world.

Most famously, buy-now-pay-later giant Klarna claimed, in early 2024, that it had frozen hiring, and that AI was doing the work of 700 customer support staff – an initiative that CEO Sebastian Siemiatkowski quietly tempered last month, saying that a drive to cut costs had resulted in a reduction in the quality of support – although the pendulum has really only swung back halfway, as we’ll see.

Klarna were certainly not alone in making very big, evidently premature bets on just how capable AI models would prove to be when it came to replacing human labour last year. But even the most charitable read will tell you that this was a temporary pause rather than a complete shift in a different cardinal direction – for Klarna and for the wider business community. 

Big public corporates, which are benchmarked and governed according to the rubrics of profit maximisation and cost reduction, will inevitably come back to the well of automating work as AI becomes more capable across a wider process surface, and in deeper domains – both things that AI researchers expect to happen soon, thanks to the aforementioned scaffolding, scaling, and on-the-job learning.

Salesforce, for instance, are already positioning themselves as sellers of “digital labour” that, to quote the company’s literature, “not only handles monotonous low-value work, but orchestrates and carries out high-value, multistep tasks”. If this isn’t the promise of labour automation that goes beyond the most perfunctory, rote tasks, I don’t know what is. And as that surface I mentioned gets broader and deeper, we’re very likely to see AI supplementing (and later potentially overtaking) people in roles that we currently think of as untouchable precisely because of their long horizons and multi-stage complexity.

Or, to put it another way, even the “high-value” tasks aren’t safe from the possibility of automation. And fashion has a lot of high-value tasks.

This has, unsurprisingly, been one of the AI sector’s most-debated topics: the question of whether forward momentum means an inevitable loss of jobs. Dario Amodei, CEO of Anthropic (one of the big three AI research labs, alongside OpenAI and Google, and the company behind the Claude model series) went on record in late May of 2025 to say that AI has the potential to replace fully half of all entry-level knowledge jobs sooner rather than later. His counterpart, Sundar Pichai of Google, sees it changing the job market to favour people with AI-appropriate skills but to actually generate a net increase in jobs – at least at Google itself. And famed business personality Mark Cuban extrapolates from there to say that AI will catalyse the creation of so many new companies that total employment will increase, society-wide, although implied in this is the understanding that these new jobs will not be directly comparable to the ones that went before.

To cut a very circuitous and hotly-contested story short, nobody can quite agree whether AI – especially of the agentic variety – is going to lead to a paucity or a proliferation of jobs. But regardless of that uncertainty, innovation in the AI space is certain to continue, new capabilities will reveal themselves over time, and we will, at the very least, end up with a change in the nature of jobs. That seems, for all intents and purposes, locked in.

Like a lot of things in the wide-format rollout of novel technology, then, the reality (again, at least for now) of the first phase is likely to be much more nuanced than the headlines on either side suggest. And then the reality of the waves that come after will be where the extent of the transformation becomes much more clearly-defined, and much harder to ignore.

We can, though, actually get a kind of pragmatic read here by closing the Klarna loop: just before this report [AI Report 2025] went to press, in June 2025, Siemiatkowski appeared at the SXSW London event, where he revealed that the company had shed around 2,500 staff – not directly attributable to AI, I should point out – and that it was identifying where AI can eliminate “boring jobs” and elevate human customer service and connection to be “a VIP thing”. Which sounds like an acceptable compromise until we remember that a good share of those “boring jobs” will be entry-level positions that, for all intents and purposes, represent the bottom rung on the career ladder that could be about to get harder to climb.

So, in this grace period where we have some early visibility into what AI agents might do to the job market in fashion, but are not yet under the yoke of immediate time pressure to prepare for more sweeping changes to the definition of labour, it’s worth building a more detailed working definition for how AI agents are actually going to be expressed in fashion.

If an AI model shows up to help do your job, in the short term, or to eventually perhaps do it for you, what is it going to look like?

My own personal compass for understanding and defining AI agents is comparing them to computer process threads as we know them today. Unless you’re reading this on some reasonably outdated hardware, your device’s central processing unit (CPU) and its integrated or discrete graphics processing unit (GPU) are not single entities, but rather packages of multiple hardware cores. Those cores can be deployed to run different tasks in parallel, or different segments of the same task, and this process segmentation is referred to as “threading,” with processor performance benchmarked in single-thread and multi-thread domains.

This all happens in the background, and modern software, rendering engines and so on are architected to take advantage of multi-threading where it’s logical. 
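To make the analogy concrete, here is what task threads look like in miniature – a sketch using Python’s standard library, with a stand-in function for the actual work:

```python
# The threading analogy in miniature: a pool of workers runs independent
# tasks in parallel, and the caller collects results as they complete.
from concurrent.futures import ThreadPoolExecutor, as_completed

def lookup(style_code: str) -> str:
    # Stand-in for real work (a database query, a render job, etc.)
    return f"{style_code}: done"

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(lookup, code) for code in ("ST-001", "ST-002", "ST-003")]
    for future in as_completed(futures):
        print(future.result())
```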

I find it helpful to picture an AI agent in the same way: as a task thread that will eventually operate out of sight, but that for the moment requires some amount of babysitting and approval to execute a defined task. The “agency” or autonomy of those threads is what will scale upwards over time, with the vision being that AI agents will eventually complete their tasks without feedback from the commissioning user.

Where AI agents and process threads differ is in how they accomplish those tasks. Unlike traditional computing processes, which have codified success / fail states and firm, deterministic ways to interact with other processes, an AI agent can use some measure of its own judgment in deciding how to deploy the tools and the reasoning effort made available to it. And unlike process threads, which are completely opaque to the end user outside of performance benchmarking and activity monitoring, an AI agent will communicate its progress in natural language – and can be steered using the same paradigm.

In brief, that, to me, feels like the right way to think about agents: as independent task threads you can talk to, equip with tools like web scraping and memory, and then point at a single task (or even a body of work) with the hope that the systems it will need to interact with on the receiving end are ready for it.

And with that definition in mind, just as the state of agentic AI itself has advanced over the last twelve months, so, too, have the endpoints and platforms that those agents are expected to call on and add data to in order to get their work done.

Currently, the most obvious manifestation of that trajectory is the amount of investment being made in Model Context Protocol, which is an open standard that was proposed by Anthropic towards the end of last year, and which is now seeing near-universal buy-in from AI model creators and platform developers as the optimum way to allow large language models to query, extract data from, and add information to other applications and services.  

Just as APIs (Application Programming Interfaces) serve to standardise the way that different remote applications interact with one another, MCP servers allow AI agents – and AI models in general – to approach third party databases and systems, and to perform predictable actions with foreseeable results. 

Prior to MCP, AI models and agents would, in a very literal sense, rely on asking other systems nicely for things using their interpretation of data schemas. If, prior to the dawn of MCP, you tried an AI chat interface that had an integration with a remote filestore or a task database, and had it return out-of-date or nonsensical results, this was probably the reason.

With MCP, AI agents can interact with documented tools to fetch and filter data, to create entries, to manipulate existing entries, and so on – and they can also chain different tool calls together (like a person would) to accomplish multi-step tasks.

Think of this as the difference between asking ChatGPT to look at a screenshot of your production calendar and infer things about it based on optical character recognition, and having a model explicitly interact with your go-to-market timeline through well-defined tools that allow it to extract, act on, write to, and report on information that should be reliable.
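For the technically curious, this is roughly what exposing that go-to-market timeline as MCP tools might look like, using the FastMCP helper from the official Python MCP SDK – the tool names and the calendar store itself are invented for illustration:

```python
# A minimal MCP server sketch exposing a hypothetical go-to-market
# calendar as documented tools (pip install mcp). The tool names and
# the data store are invented for illustration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("gtm-calendar")

# Stand-in for a real production calendar database.
MILESTONES = {"FW26-proto-review": "2025-09-15"}

@mcp.tool()
def get_milestone(name: str) -> str:
    """Return the date of a named go-to-market milestone."""
    return MILESTONES.get(name, "unknown milestone")

@mcp.tool()
def set_milestone(name: str, date: str) -> str:
    """Create or update a milestone -- the 'write to' case."""
    MILESTONES[name] = date
    return f"{name} set to {date}"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, ready for an agent to call
```

Once a server like this is registered with a capable model, the agent can discover both tools from their descriptions and decide for itself when to call each one.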

Now, the “write to” part is where things are going to get a little bit more problematic for fashion’s roll-out of agentic AI. As an industry, we already struggle with accountability, version control, data governance, and reconciliation, and for a lot of companies, the idea of giving language models read and write access to production databases is a scary one.

This is why most MCP integrations default to constant, step-by-step approvals. If you have access to a capable AI model that you can link with an MCP server (Claude 4 Sonnet and Claude 4 Opus have this natively built in, and other frontend applications for interfacing with multiple AI models also support it) then you can test this today. Link your model with an MCP server of your choice, and watch its reasoning chain as it figures out what tools to use to accomplish your task, requests your approval to call them, and then reports on its successes and failures. (You can turn these approval gates off, but I certainly wouldn’t recommend it when you’re dealing with mission-critical systems.)
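In pseudo-practical terms, the approval gate is just a human-in-the-loop wrapper around every tool call. Real MCP clients implement this in their own interfaces, but the pattern reduces to something like this illustrative sketch:

```python
# A sketch of the approval-gate pattern: every tool call is surfaced to
# the user before it executes. Purely illustrative.
def gated_call(tool_name: str, arguments: dict, tool_fn):
    print(f"Agent wants to call {tool_name} with {arguments}")
    if input("Approve? [y/N] ").strip().lower() != "y":
        return "User denied the tool call."
    return tool_fn(**arguments)
```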

Testing out an AI model with agentic capabilities on a task that involves interacting with MCP servers in this way can actually help to take a lot of the mystery out of what an AI agent actually is. They may be programs you can instruct by talking to them, and they may have what feels, at times, like a pretty limitless set of capabilities, but in order to accomplish meaningful work, or to provide you with answers outside their training data, they are still programs that need to call on other systems, using predefined tools, in order to get things done.

But as enlightening as it can be to look under the hood, the “magic” of agentic AI, when it’s applied to fashion in particular, is going to be created through abstraction. Just as people don’t think about computer cores and how they’re balanced, or really consider what cloud storage and compute are actually doing, I suspect we’re quickly approaching a point where AI agents are judged the way we judge people: based on performance.

And when that performance is deemed “good enough,” that’s when we’ll begin to see rapid, snowballing change.

This is something we’ve already observed directly at The Interline, in another domain where general AI usage is giving way to agentic behaviours: search and web content consumption. As a free-to-read publication written for, and read by, industry experts, we’re not exactly surprised to see ChatGPT search becoming a larger referrer in our traffic, but the speed with which “a few casual searches” has transitioned into “routine deep research queries” has been surprising.

There are, to be clear, still not many robots reading The Interline in the grand scheme of things, but it’s also clear from industry conversations I’ve personally had, off the record, that direct-to-consumer brands and retailers are starting to see the same transition – with exclusively organic search and shopping referrals giving way to a traffic mix where AI search and agentic research, scraping, and extraction play a bigger part.

This is the kind of platform shift that can pick up pace very quickly. Maybe, by the end of this year, 5% of all traffic to your ecommerce storefront is from AI models. By the end of 2026 it’s then 10-15%. And perhaps by 2028 we’re approaching at least half of that Cisco prediction, but from completely the opposite side of the equation.

This trajectory is by no means guaranteed, of course. There are plenty of variables that could derail it: successful copyright cases against AI model creators; widespread blocking of scrapers; exclusive partnerships signed between particular brands and specific AI companies; and many more. But if we take it as an axiom that agentic AI use amongst consumers is set to increase further from here then, again, it follows that we’ll see the same trend in enterprise.

And at that point, the discussion around agentic AI’s ability to automate labour, whether it’s creative, technical, or consumer-facing, stops being about capabilities and starts more heavily indexing towards other attributes.

Because, unlike people, AI agents can be run in parallel. They can be run tirelessly in series, limited only by token, reasoning, and MCP budgets. Agents don’t get tired, sick, or bereaved. They might not ever be as ingenious as your most venerated designers, or as capable as your best patternmakers or material developers, but they might just be “good enough” to step in to cover the rest.

After all, why employ 50 people to do sequential or simultaneous tasks when you have the potential to employ 5 people to each prompt, oversee, coordinate, and quality-check the work of 10 agents? Why have 50 junior designers sketching concepts, or coming up with moodboards, when 5, or even 1, could do the work instead?
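The shape of that argument is easy to sketch: one supervising process fans a body of work out to many agent tasks, then reviews what comes back. The agent below is a stub, but the fan-out is the point:

```python
# Illustration of the parallelism argument: one supervisor coroutine
# fans work out to N agent tasks and quality-checks the results.
import asyncio

async def agent(task: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for reasoning and tool calls
    return f"draft for {task}"

async def supervisor(tasks: list[str]) -> list[str]:
    drafts = await asyncio.gather(*(agent(t) for t in tasks))
    return [d for d in drafts if d]  # stand-in for human quality-checking

print(asyncio.run(supervisor([f"concept {i}" for i in range(10)])))
```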

How that calculus feels, culturally and commercially, is going to vary company-by-company, and will be influenced by consumer values. Eventually, though, if this train really gets rolling then I don’t expect it to be one that stops. And I think that, perhaps five years from now, we’ll be looking at a fundamentally different model of interacting with technology – and the services it plays host to – than the one we have today.

To wit: we can already see said train rolling pretty quickly in other industries. This year we’ve already seen Microsoft and OpenAI independently – with the GitHub Copilot Coding Agent and with Codex respectively – expressing agentic AI through platforms intended for coders, managers, and engineers in software development, allowing them to assign work to independent AI agents that will then carry out that work and report back afterwards.

And we’re also seeing the early impacts of this kind of agentic assistance in the spaces where software development crosses over into the retail space. According to reporting from the New York Times, published in late May 2025, programmers working for Amazon who have been given access to AI coding assistance are being asked to meet much higher performance and commit benchmarks than ever before, or to at least maintain the same standards after seeing headcount reductions of up to half – leaving those developers feeling as though their roles have shifted towards curating AI models rather than programming or engineering themselves.

There is a lot that’s unique about Amazon, culturally and in terms of market position, to be sure, but it’s easy to extrapolate this same principle to other big retailers and brands. Because as much as we think about fashion design, trend forecasting, and similar disciplines as being sacred, there’s much more, operationally, that a typical brand has in common with a software company today than there’s ever been. And as uncomfortable as it may be to admit, even the most sacrosanct roles can be replicated through observation, trial and error, training and reinforcement learning.

There’s also evidence to support this concerning idea that universal and specialised disciplines could actually end up converging into a kind of smoothed-out notion of work. According to the Stanford University AI Index for this year, generative AI deployments have acted as skill-levellers, compressing performance gaps between teams, removing the need for managers to supervise junior professionals, and helping workers in one functional area to either learn another skill first-hand… or to learn how to wrangle AI agents that already have that skill, or that have the potential to learn it.

And if this trend is happening in the offices of brands in the USA, EU, UK, and other consumption markets, then it’s almost certainly happening in sourcing and manufacturing hubs where, in many cases, AI skills are more evenly-distributed.

So, for fashion professionals considering what AI agents are likely to do for them, it’s important to also consider what those agents are doing for their counterparts on other continents – so that when AI agents interact with one another, they’re doing so in a way that’s both mutually beneficial and based on live data securely shared between both parties.

That last part is, I think, a critical one to let your mind dwell on for a moment. How much time, in a typical fashion workflow, is spent trading information back and forth between people? How much of that information is lost in interpretation? How often do PLM implementations end up with one party having live access to essential product data, technical specifications, and other critical information, and the other having to rely on third party chats and static PDFs?

How much would it improve the typical workflow if AI agents could interact not just with other systems, but with other agents? This, unsurprisingly, is also pretty high up the agenda for the major AI research companies: Google is proposing an “A2A,” agent-to-agent, interoperability standard, and offering an agent development kit to allow companies to build these things at scale.
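Based on the A2A materials published so far, an exchange between two agents looks something like the sketch below: one agent fetches the other’s “agent card” to discover its capabilities, then posts it a task as a JSON-RPC message. Treat the endpoint and payload details here as illustrative of the draft spec’s shape rather than authoritative, and the partner URL as hypothetical:

```python
# A heavily simplified sketch of an A2A-style exchange: discovery via
# the peer's agent card, then task delegation over JSON-RPC. Details
# are illustrative of the draft spec, not authoritative.
import json
import urllib.request

PEER = "https://supplier.example.com"  # hypothetical partner agent

# 1. Discovery: fetch the peer's agent card to learn its capabilities.
card = json.load(urllib.request.urlopen(f"{PEER}/.well-known/agent.json"))

# 2. Delegation: send a task, in natural language, as a JSON-RPC request.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",
    "params": {
        "id": "task-001",
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Confirm lead time for style ST-001."}],
        },
    },
}
request = urllib.request.Request(
    f"{PEER}/a2a",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.load(urllib.request.urlopen(request)))
```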

Now, as much distance as we’ve now travelled in just this one article (from the fundamentals of agents to full-blown, “have your people call my people” agentic multi-stakeholder interactions) we need to travel a few steps further.

Because, when we look at the mid-term horizon and take it as read that the thing missing from a complete, end-to-end, agentic fashion system isn’t capability or connectivity, we arrive at the final barrier: domain-specific knowledge and expertise. This barrier is where so many cross-industry technology initiatives have faltered when they’ve come into the fashion industry, because there are just so many idiosyncrasies (and so much unique inertia) in our sector that what works in a general sense often falls short when it’s applied to one industry in particular.

If AI has shown anything, though, it’s the ability to make progress in narrower domains. We’ve already seen how much AI is impacting what it means to be a programmer, but there’s a strong line of reasoning that this is not because of anything in-built in the models themselves, but rather because programming and software engineering are the two closest disciplines to their creators’ hearts. And while AI creators do recognise that there is a distinction between domains where success is a binary completion / fail state and domains where “taste” dominates, it also seems probable that greater investment in reinforcement learning in individual areas will deliver similar progress.

To put it bluntly, there are universal frameworks being put into place to help AI learn to do not just general work, but work that is useful to particular job functions and industries. And from our perspective this means that fashion’s AI product era is going to be defined not by how successfully AI agents can perform generic tasks, but how well they can do fashion tasks.

This is where technology companies selling AI to fashion customers are going to separate themselves from the pack. Some will promise that this onboarding, or grounding, cycle is already complete – that pretraining in fashion operations and fine-tuning on a brand-by-brand basis is sufficient. Others will approach things segment by segment, undertaking reinforcement learning cycles in footwear, eyewear, and so on. Some, no doubt, will overpromise and under-deliver, while others will bring to market products – in this case agents – that can actually do what they say they can do.

And this, really, is the key to understanding the outlook for agents that will operate behind the scenes of fashion and beauty.

Today, the industry is hung up on the idea of AI agents as an automated army, ready to storm the ramparts of its eCommerce storefronts. This is a fair consideration, too, since it raises fundamental questions about what it means to continue to operate human-readable websites if AI agents are going to do all the browsing and buying by querying backend databases using MCP.

But we can extend the same logic to upstream workflows and systems. Will people continue to sit down and populate backend systems like PLM and ERP? Probably. But will the people who need to query those systems actually interact with their user interfaces, or will they task agents with finding the right product data instead? And will this make the best-connected and most deeply-deployed solutions more attractive than the best-designed ones?

Whether greater uptake of AI agents leads to a smoothing-out of jobs, a net loss of opportunity, or a net gain in growth is really anyone’s guess. But I do think it’s fair to say that the way we interact with product data, market data, supply chain data, and the software that contains them, is going to change rapidly. And that’s already creating an urgent mandate for technology companies to figure out what that means – just as their customers need to quickly work to understand how rapidly reliable agentic workflows can really be built, and how to turn them to their advantage.

We’re definitely still in the industry-specific learning phase when it comes to AI in fashion, but you only need to look at the technology executive interviews contained in this publication to see that things are going to progress quickly in that respect. We won’t be living in the cross-industry, general purpose phase of the product era for too long, I suspect.

So I’ll make one more personal prediction: right now, people talk about software licenses as “seats,” because for decades it was a guarantee that the solution would be interacted with by a real person, with a chair to sit on, and a keyboard and mouse and a display to interact with. I suspect that, sooner than a lot of us are comfortable with, a new class of fashion software seats will have artificial teammates sitting in them by default.