Released in The Interline’s AI Report 2025, this executive interview with Bamboo Rose is one in an eight-part series that sees The Interline quiz executives from companies that have either introduced new AI solutions or added meaningful new AI capabilities to their existing platforms.

For more on artificial intelligence in fashion, download the full AI Report 2025 completely free of charge and ungated.


Key Takeaways:

  • AI is evolving beyond human imitation, with 2025 expected to be the year AI models become autonomous “decision engines.” This transition will make rigid software logic more fluid, emphasising high-quality, well-governed data as the crucial long-term asset for training AI agent behaviour and reasoning.
  • To power smarter decision-making, AI platforms like Bamboo Rose’s require the unification of three data layers: core operational data (from PLM & Sourcing), contextual data (from merchandising, OMS, POS), and external signals (like weather and social trends). While few retailers currently have all three connected, a “critical mass of clean, labelled events” can be a starting point.
  • AI agents, capable of perceiving context, reasoning towards a goal, and acting autonomously, are set to transform retail. Key applications include “Sense” (multi-variant demand forecasting), “Shape” (generative mix optimisation), “Commit” (autonomous PO agents), and “Protect” (intelligent markdown/promotion sequencing), which have already demonstrated double-digit margin gains in pilot programs.
Where do you believe we currently are on the progression curve from AI as an extremely broad set of capabilities and promises, to AI as the foundation for applications and services that can deliver a measurable return on investment in well-defined areas?

If the curve were a runway, I’d say the wheels have lifted but the plane is still climbing through low clouds. 2023-24 proved that generative AI can imitate human output; 2025 is the year those models start to act on their own, collaboratively, moving from “answer engines” to true decision engines. In other words, we’re crossing the line where the logic layer of software—those rigid rules and workflows we’ve coded for decades—becomes fluid, and even disposable. What remains as a long-term asset is high-quality, well-governed and secured data, because that’s the fuel every AI agent will need to train its behavior and reason in context.

For retailers, that means a handful of concrete pilots (sourcing recommendations, dynamic markdown optimisation, or automated PO creation) have already shown double-digit margin gains. Most companies, though, still treat these proofs as exotic add-ons rather than the start of a new, lasting operating model. So the ROI is real—but unevenly distributed.

When we look at the full extent of the modern product lifecycle, there’s a huge amount of information being created, but not necessarily having its full value extracted. This is a key problem that you’re pointing the concept of “Decision Intelligence” at, so let’s start from the foundations: what are the primary data sources that AI needs to draw from in order to support smarter decision-making, where does that data live today, and is the average organisation ready to apply AI to it?

At Bamboo Rose, we usually group data into three concentric circles:

  1. Core operational data—product hierarchies, vendor catalogs, cost sheets, purchase orders, receipts, etc. Most of this sits in PLM & Sourcing platforms like Bamboo Rose’s TotalPLM suite.
  2. Contextual data—sell-through, returns, stock positions, prices and promotions, and so on. That’s usually in merchandising, OMS, and POS systems.
  3. External signals—weather, raw material prices, social sentiment, competitor prices, freight indices, exchange rates, TikTok trends, and so on. These live outside the firewall of enterprise IT.

A decision-intelligence platform has to unify those three layers and keep them trustworthy. Technically, that means robust APIs, a semantic data organisation, and clear data-ownership and security rules. We are at the very beginning of Decision Intelligence, so very few retailers have all three circles connected today; the good news is that you don’t need perfection to start, just a critical mass of clean, labelled events the agent can learn from.
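
As an illustration only, here is a minimal Python sketch of what one such clean, labelled event might look like when it draws on all three circles; the field names and signal choices are assumptions, not Bamboo Rose’s actual data model.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Illustrative only: field names are assumptions, not Bamboo Rose's actual schema.

@dataclass
class DecisionEvent:
    """One clean, labelled event an agent can learn from, spanning all three circles."""
    # 1. Core operational data (PLM & Sourcing)
    style_id: str
    vendor_id: str
    unit_cost: float
    po_quantity: int
    # 2. Contextual data (merchandising / OMS / POS)
    sell_through_rate: float          # share of received units sold to date
    on_hand_units: int
    current_price: float
    # 3. External signals (outside the enterprise firewall)
    external_signals: dict = field(default_factory=dict)   # e.g. weather, freight index
    # The label: the outcome the agent should learn to predict or optimise
    event_date: date = date(2025, 1, 1)
    outcome_margin: Optional[float] = None

event = DecisionEvent(
    style_id="DRS-1042", vendor_id="V-88", unit_cost=7.40, po_quantity=3000,
    sell_through_rate=0.62, on_hand_units=1140, current_price=29.95,
    external_signals={"avg_temp_c": 21.3, "freight_index": 1.18},
    event_date=date(2025, 3, 14), outcome_margin=0.54,
)
print(event.style_id, event.outcome_margin)
```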

Walk us through what you see as some of the most potent use cases for Decision Intelligence. There’s obviously a broad spectrum of choices that get made, in-house and in collaboration with upstream and downstream partners, to bring a successful product to market, and the background to those choices is changing faster than ever. Where do you see AI adding the most value?

I would group the early winners into four verbs that in my opinion matter most to fashion retail: Sense, Shape, Commit, Protect.

  • Sense future demand: multi-variant forecasting that ingests regional preferences, weather, promotional events, etc., to propose highly accurate demand curves by style–color and location, removing the reliance on manual buyer intuition.
  • Shape assortments: generative mix optimisation that balances margin, novelty, and sustainability targets, then creates “what-if” line plans in minutes instead of weeks.
  • Commit supply: autonomous PO agents that watch real-time sell-through and supplier capacity and trigger reorder quantities—or cancel dormant commitments—without waiting for a weekly meeting (a rough sketch of this loop follows below).
  • Protect margin: intelligent markdown and promotion sequencing that treats every item like a tiny financial asset, learning the elasticity curve continuously instead of locking a promo calendar six months ahead.

Each of those use-cases touches partners upstream or downstream, so Decision Intelligence has to be collaborative by design, not a black-box stuck in silos.
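
To ground the “Commit supply” example above, here is a minimal, hypothetical sketch of the kind of decision loop such a PO agent might run; the thresholds, quantities, and function name are illustrative assumptions rather than a description of Bamboo Rose’s product.

```python
# Hypothetical sketch of a "Commit supply" agent loop; thresholds and names are illustrative.

def reorder_decision(sell_through: float, weeks_of_cover: float,
                     supplier_capacity_units: int, open_commitment_units: int) -> dict:
    """Propose a reorder, a cancellation, or no action from live signals."""
    if sell_through > 0.7 and weeks_of_cover < 4:
        # Selling fast with thin cover: reorder up to what the supplier can take.
        qty = min(supplier_capacity_units, 2000)
        return {"action": "reorder", "quantity": qty}
    if sell_through < 0.2 and open_commitment_units > 0:
        # Dormant style with stock still on order: free up the commitment.
        return {"action": "cancel_open_commitment", "quantity": open_commitment_units}
    return {"action": "hold", "quantity": 0}


# Example: a fast seller with only three weeks of cover left.
print(reorder_decision(sell_through=0.78, weeks_of_cover=3.0,
                       supplier_capacity_units=1500, open_commitment_units=0))
# -> {'action': 'reorder', 'quantity': 1500}
```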

So far we’ve been talking about using AI to augment people’s capability to make decisions, by equipping them with relevant, new insights. But while there’s no doubt going to remain a set of decisions that companies agree should remain in the hands of human decision-makers, it seems inevitable that at least some of what we think of as important choices today could be automated in the near future. How do you define an AI agent, how should brands and retailers be thinking about which choices an agent could automate, and what does it look like to put that into practice?

An AI agent is software with three extra muscles: it can perceive context, reason toward a goal, and then act through an API or UI—without a human prompt. Think of it as a junior colleague who never sleeps and keeps learning.

The litmus test for agent-worthy decisions is a 2×2: impact on the P&L vs. variance in the decision pattern. High impact + low behavioural variance (for example, safety-stock top-ups or freight mode selection) should be automated first. High impact but high variance (for example, a theme buy for next season) stays human-led, with AI copiloting.
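
Expressed as a quick illustration of that triage (the two low-impact quadrants below are assumptions added only to complete the grid):

```python
# Illustrative triage of decisions by P&L impact vs behavioural variance.

def agent_readiness(pnl_impact: str, variance: str) -> str:
    """Return how a decision should be handled, given 'high'/'low' impact and variance."""
    if pnl_impact == "high" and variance == "low":
        return "automate first"            # e.g. safety-stock top-ups, freight mode selection
    if pnl_impact == "high" and variance == "high":
        return "human-led, AI copiloting"  # e.g. theme buy for next season
    if pnl_impact == "low" and variance == "low":
        return "automate when convenient"
    return "leave manual for now"

print(agent_readiness("high", "low"))   # automate first
print(agent_readiness("high", "high"))  # human-led, AI copiloting
```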

Practically, you combine an LLM-based reasoning core with a policy engine (“never exceed the credit limit”, “respect ethical-sourcing codes”) and give the agent narrow, auditable permissions—start with “recommend + ask” before “recommend + do”. That staged autonomy builds trust and delivers value within weeks, not quarters.
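
A minimal sketch of that staged-autonomy pattern, assuming a hypothetical policy list and action shape; none of the names below come from a real Bamboo Rose API.

```python
# Hypothetical policy gate around an agent's proposed action; rules and names are illustrative.

POLICIES = [
    ("never exceed credit limit",
     lambda action: action.get("order_value", 0) <= action.get("credit_limit", float("inf"))),
    ("respect ethical-sourcing codes",
     lambda action: action.get("vendor_ethics_approved", False)),
]

def review(action: dict, autonomy: str = "recommend_and_ask") -> str:
    """Check the proposed action against hard policies, then apply the autonomy stage."""
    violated = [name for name, rule in POLICIES if not rule(action)]
    if violated:
        return "blocked: " + ", ".join(violated)
    if autonomy == "recommend_and_ask":
        return "recommendation queued for human approval"
    return "executed via API"   # "recommend + do": only after trust has been earned

proposal = {"order_value": 48_000, "credit_limit": 50_000, "vendor_ethics_approved": True}
print(review(proposal))                               # queued for human approval
print(review(proposal, autonomy="recommend_and_do"))  # executed via API
```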

Taking everything into account and looking to the longer-term horizon, it’s clear that you don’t see AI as a simple bolt-on to the existing technology ecosystem, but as potentially the start of a completely different model for information systems and the data that flows between them. How should readers be thinking about this shift from technology as a series of discrete applications to technology that’s rearchitected around the different decisions that need to be made to optimise, for example, merchandising, product development, sourcing, and production processes?

The mental shift is from apps to outcomes. Classic software captured screens and clicks; AI-native systems capture intent and orchestrate whatever micro-services deliver that intent. In this world, the only architectural constant is the decision graph—an explicit map of dependencies (for example, “fabric MOQ affects lead time, which affects the launch window, which affects markdown risk”). These logic flows are generated on the fly, and soon interfaces will be too; they become ephemeral UI slices that appear when the user or agent needs them and disappear afterward.
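
Taking the example chain above, the decision graph can be pictured as a small, explicit data structure that agents or ephemeral UI slices traverse on demand; the representation below is an illustrative sketch, not a prescribed format.

```python
# Illustrative decision graph from the example in the text: each key depends on its listed parents.

DECISION_GRAPH = {
    "markdown_risk": ["launch_window"],
    "launch_window": ["lead_time"],
    "lead_time": ["fabric_moq"],
    "fabric_moq": [],
}

def upstream_dependencies(decision: str, graph: dict) -> list:
    """Walk the graph to list everything a decision ultimately depends on."""
    deps, stack = [], list(graph.get(decision, []))
    while stack:
        node = stack.pop()
        if node not in deps:
            deps.append(node)
            stack.extend(graph.get(node, []))
    return deps

print(upstream_dependencies("markdown_risk", DECISION_GRAPH))
# -> ['launch_window', 'lead_time', 'fabric_moq']
```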

For a retailer this means:

  • Break the monolith around functional silos—PLM, OMS, Allocation—and expose their data through decision APIs.
  • Instrument every decision with feedback loops (did the supplier actually hit the new target cost? did the style sell at the expected velocity?), as sketched below.
  • Let agent networks learn which chain of micro-decisions maximises value under real-time constraints.
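
As a rough sketch of the second point above, instrumenting a decision can be as simple as recording what the decision expected against what actually happened, so agents (and people) can learn from the gap; the field names here are assumptions.

```python
# Illustrative feedback-loop record: compare what a decision expected with what actually happened.

def decision_feedback(decision_id: str, expected: dict, actual: dict) -> dict:
    """Return per-metric gaps, e.g. target cost vs achieved cost, planned vs actual velocity."""
    return {
        "decision_id": decision_id,
        "gaps": {k: round(actual[k] - expected[k], 4) for k in expected if k in actual},
    }

print(decision_feedback(
    "PO-2231",
    expected={"target_cost": 7.20, "weekly_velocity": 350},
    actual={"target_cost": 7.55, "weekly_velocity": 298},
))
# -> {'decision_id': 'PO-2231', 'gaps': {'target_cost': 0.35, 'weekly_velocity': -52}}
```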

It’s a journey, but companies that treat their decision catalogue as the new enterprise architecture will out-learn the competition.

What do you believe are the next steps for how AI is deployed and used? Is it more likely that AI will solidify its place as a new human interface paradigm, the frontend of tools and workflows? Or is its future closer to what cloud infrastructure has become today – a quieter commodity that is still the foundation for the next generation of applications, but in a less obvious way than what we’ve seen over the last couple of years? Or is it both?

Both—sequentially. In the short term, conversational interfaces are the on-ramp; they lower cognitive friction and democratise data. Over time, as agents prove more and more reliable, they will recede into a “quiet yet indispensable” substrate—much like electricity or cloud computing today. Code is getting so cheap that we’ll spin up disposable workflows, let them run a season, then archive them just like last season’s mood-board. 

What remains visible to the user is purpose-driven storytelling: “Here’s what the agent did, here’s why, here’s the risk distribution.” That transparency is crucial, because humans still own accountability and intuition.

So expect a world where the interface is conversational, the infrastructure is largely invisible, and the differentiator is the retail brain trust that curates the right data and governs the agents ethically.