Key Takeaways:

  • Peak holiday shopping data for 2025 shows AI, as a channel to purchase, firmly on the rise. But the usefulness of “agents” is still constrained by the fact that most eCommerce websites are designed for human use, while the lack of a codified standard for agentic access to databases and tools has held back further development.
  • Anthropic’s recent donation of the Model Context Protocol to the Linux Foundation provides greater trust in governance and non-partisanship where the future of “APIs for AI” is concerned, and this direct line to the data housed in brand and retail backends could quickly increase the viability of AI shopping.
  • For retail executives, there are now decisions to be taken about how and where to permit AI models to interact with their systems, and what value doing so will return. The Interline will be interviewing two technology executives about this choice in early 2026.

Over the holidays, a year’s worth of behaviour and macro indicators gets compressed into a short window, and it becomes tempting to over-analyse as a result. This year was no different: spending across the Black Friday to Cyber Monday period pushed to new nominal highs, and total holiday eCommerce was projected to exceed $250 billion.

Looked at plainly, those numbers reveal that nothing about the season felt especially out of the ordinary for an industry predicated on hitting new high watermarks for growth with every trip around the sun. Adjusted for inflation, the 2025 peak shopping season looked more like continuity than a step change, even at a time when the general economic climate is far from rosy.

And, as you might expect given that this year’s news cycle has been heavily tilted towards the ill-defined promise of “Agentic AI” shopping, we observed a steady run of headlines crediting AI systems with helping to drive some of the season’s performance. AI was clearly present during the holidays in ways that were measurable and easy to point to. Adobe’s reporting showed a sharp rise in traffic (even larger than expected) arriving from AI-driven sources, and some retailers themselves spoke about the growing role of automated layers across search and customer service.

Even with sharp growth in AI usage, though, that path to purchase still represents a small slice of shopping activity, and much of that AI use remains concentrated on desktop at a time when mobile continues to dominate how people actually shop. Compared with a few years ago, most shopping still takes place in familiar places, just navigated with a few extra layers helping things along.

For all the talk of databases and backend systems being reoriented to serve AI agents shopping on people’s behalf, the internet people shop through remains organised around human navigation. Online shops show people what they need to decide whether to buy something, presenting prices, stock messages, and delivery estimates. And those same frontends are also where brands and retailers deploy behavioural techniques to influence those paths to purchase.

Most of the time, AI is just reading the same pages shoppers read. It picks up cues from layout, copy, and promotions, rather than tapping straight into how inventory or pricing is being managed, and without being given anything even approaching backend access to retailers’ databases.
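To make that fragility concrete, here is a minimal sketch of the kind of guesswork an agent does today when it reads a human-facing product page. The markup, class names, and pricing copy are invented for illustration; the point is that the "current price" has to be inferred from promotional layout rather than requested from a system of record.

```python
import re

# A fragment of the kind of human-oriented markup an agent has to interpret.
# The class names and copy here are invented for illustration.
PAGE = """
<div class="product-card">
  <span class="price">Was <s>£40.00</s> now £25.00!</span>
  <span class="stock-flash">Only 3 left - selling fast!</span>
</div>
"""

def infer_price(html: str):
    """Guess the live price from promotional copy: fragile by design."""
    prices = re.findall(r"£(\d+\.\d{2})", html)
    # Heuristic: assume the last price mentioned is the current one.
    # A "Was X now Y" layout happens to satisfy this; many layouts won't.
    return float(prices[-1]) if prices else None

print(infer_price(PAGE))  # 25.0 -- but only because the layout guess held
```

If the retailer reorders the copy, switches currency symbols, or moves the price into a script tag, the heuristic silently breaks; nothing in the page tells the agent what the price actually is.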

For the moment, at least, AI agents are clicking through websites in place of people, in small-but-growing numbers, but the predicted step change where eCommerce starts to actually reorient itself to turn those agents into first-class citizens hasn’t happened yet.

Last week, though, Anthropic made a move aimed at addressing part of that issue. Different AI labs (and the creators of web content) have hesitated to fully adopt the most widely deployed standard for agentic interactions with web tools and information – the Model Context Protocol – because it was owned by a single company rather than stewarded by a foundation or equivalent. Now, Anthropic has donated the Model Context Protocol to the Linux Foundation, placing it under neutral governance within the newly formed Agentic AI Foundation.

At a practical level, MCP is an attempt to make the way AI works with other software and databases more structured. It lays out a common way for tools to say what they can do, gives models a menu to work from when they ask for those things explicitly, and allows for responses to come back in a form that makes it clear what happened and under what conditions. The underlying idea is straightforward enough: if AI systems are going to interact with the same services that already run businesses, it makes more sense to give them something closer to a direct API line into those systems than to keep asking them to infer meaning from interfaces that were built for people to look at.
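That "menu" exchange can be sketched in plain terms. MCP runs over JSON-RPC, with servers advertising tool descriptions and clients invoking them explicitly; the shapes below follow that general structure, but the tool name (`check_stock`), its schema, and the toy inventory backend are illustrative assumptions rather than a real retailer integration or the official SDK.

```python
import json

# A tool description of the kind an MCP server might advertise.
# Name, description, and schema are invented for this example.
STOCK_TOOL = {
    "name": "check_stock",
    "description": "Report stock level and price for a product SKU.",
    "inputSchema": {
        "type": "object",
        "properties": {"sku": {"type": "string"}},
        "required": ["sku"],
    },
}

# A toy backend standing in for a retailer's inventory system.
INVENTORY = {"TEE-001": {"in_stock": 12, "price_gbp": 25.00}}

def handle_request(request: dict) -> dict:
    """Dispatch a JSON-RPC-style request the way a tool server would."""
    if request["method"] == "tools/list":
        result = {"tools": [STOCK_TOOL]}
    elif request["method"] == "tools/call":
        sku = request["params"]["arguments"]["sku"]
        item = INVENTORY.get(sku)
        result = item if item else {"error": "unknown SKU"}
    else:
        result = {"error": "unknown method"}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# An agent first discovers what it can do, then calls a tool explicitly.
menu = handle_request({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
call = handle_request({
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "check_stock", "arguments": {"sku": "TEE-001"}},
})
print(json.dumps(call["result"]))  # a structured answer, not scraped HTML
```

The value is the contract, not the dispatcher: the schema tells the model exactly what a tool accepts, and the structured result tells it exactly what happened – no layout heuristics required.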

As well-adopted as it already is, though, a lot of what’s happening around MCP is still pointed at solving a fundamental problem, rather than building anything novel: AI agents simply can’t do what they’re asked to do in all but the most narrowly-scoped scenarios. Teams aren’t deploying fully autonomous systems at scale, but they run into that same limitation as soon as they ask AI to do anything more sophisticated than answering questions.

On that basis, we don’t expect to see immediate results from the codification of MCP as the standard for the agentic web. But the establishment of a standard does provide confidence to fashion companies who see value in optimising their collections and experiences for AI agents, allowing them to invest in projects without worrying about a rug-pull at the most foundational level.

As we head into 2026, expect this space to get much hotter and more contested quickly. The Interline will be speaking to executives active in this space for The Interline Podcast early in the new year, and our AI Report 2026 – due in the springtime – will be taking a look at the inner workings and longer-term rollout of MCP.

Best from The Interline:

Kicking off this week, Marguerite Le Rolland, Global Insight Manager – Fashion at Euromonitor International, joined The Interline Podcast, looking ahead at what the next four years have in store at a global market and technological level, and how fashion brands are likely to respond.

In our first news analysis this week: threading the needle on sloppy AI Christmas ads, metaverse funding cuts, and how perennial digital platforms like Fortnite are holding onto their relevance in fashion and beauty.

Next up, The Interline’s annual examination of the creative and commercial impact, market outlook, and evolving definition of 3D and digital product creation (DPC)! Exclusive brand stories, cutting-edge process and pipeline profiles, top-flight executive interviews, and frank analysis of the AI-touched future of 3D and digital product creation for fashion and beauty.

Lastly, an exclusive collaboration between The Interline and Threedium. To differentiate themselves, brands are seeking to put interactive 3D content and experiences in front of consumers at scale. This is placing a new emphasis on accelerating the AI-native toolchain for creating, distributing, automating, and monetising the digital twins at the heart of the next era of engagement.