Analysing AI with Chelsea Barnes of Kalypso

Released in The Interline’s AI Report 2024, this executive interview with Kalypso is one of a fourteen-part series in which The Interline quizzes executives from companies that have either introduced new AI solutions or added meaningful new AI capabilities to their existing platforms.

For more on artificial intelligence in fashion, download the full AI Report 2024 completely free of charge and ungated.


What’s your working definition of AI? Does it differ from the public understanding, which is currently dominated by large language models and generative text-to-image models?

Many think of artificial intelligence (AI) as machines performing tasks like humans – think perception, learning and problem-solving. This absolutely includes capabilities like large language models and generative AI, but our definition at Kalypso extends much further, from expert systems that automate routine tasks to predictive models and advanced neural networks that enhance decision-making.

The rise in generative AI and large language models has helped bring AI into the mainstream, making it much more real and tangible. On the flip side, the popularity of these models, and the human-like outputs they generate, can lead to the misconception that a large language model (LLM) can solve all problems.

While an LLM can be an excellent research assistant, editor and trip planner, it falls short when you’re looking for more deterministic outputs like forecasting, anomaly detection and outcome optimization.

These technologies are valuable tools in the broader data science toolkit. When you’re looking to implement AI, the key is to begin by first pinpointing the problem at hand. Then, choose the AI tool that best addresses the specific challenge. It’s all about matching the right AI capability with the right problem, not starting with a technology and asking what problems it can solve.
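To illustrate the distinction above, the sketch below shows the kind of deterministic task – simple statistical anomaly detection on order values – that classical techniques handle directly and an LLM is the wrong tool for. The data and the two-standard-deviation threshold are illustrative assumptions, not from the interview:

```python
# Deterministic anomaly detection with z-scores: the sort of task where a
# statistical method, not an LLM, is the right tool from the toolkit.
import statistics

# Hypothetical order values; one obvious outlier.
order_values = [52.0, 49.5, 51.2, 50.8, 48.9, 310.0, 50.3]

mean = statistics.mean(order_values)
stdev = statistics.stdev(order_values)

# Flag any value more than 2 standard deviations from the mean.
anomalies = [v for v in order_values if abs(v - mean) / stdev > 2.0]
```

The same matching logic applies: the problem (catch outlier transactions) comes first, and the technique – here a trivial z-score test – is chosen to fit it.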

There are currently a wide array of possibilities for AI, and little agreement on how to pin that potential down to discrete use cases. How do you approach categorising AI use cases across the different stages of the product lifecycle into segments such as Automation, Assistance, Make, Move, and Sell? And in your opinion, how far are we on the journey to having ready-made applications in each of those areas?

The first thing I’d recommend for pinning down the potential of AI is to start with the business problems to solve, working through a standard set of qualifying questions to define and prioritize each business use case.

Once we’ve pinpointed the high-value problems to solve, we can categorize them by type of AI solution needed – to automate, assist, optimize or generate – and define the product lifecycle stage it impacts – discover, create, make, move. With that solution description in hand, you can begin assessing if there’s a tool to meet your need or if you require something custom.

In terms of ready-made applications, there’s huge variability across different segments of the product lifecycle. Areas that have long been data rich and address “commodity” (common) problems, like targeted advertising optimization and inventory management, are more likely to have ready-made solutions available. In contrast, domains heavy in proprietary processes or unique brand characteristics often require bespoke solutions. The upfront investment may be higher in those scenarios, but they also offer significant upside in competitive differentiation.

One of the cornerstone themes of this report is the need for brands and retailers to better understand the inner workings of AI, and to use that knowledge to formulate strategies for change management. There are two significant barriers to building that understanding, though: fear that AI will replace, rather than augment, human talent; and a lack of transparency and openness into how models are trained and how they operate. How do you encourage clients to try and overcome these?

Absolutely, there’s inherent distrust around AI, and overcoming that distrust is critical to driving adoption and realizing its value.

The three things I recommend for increasing AI adoption are the following:

  1. Educate and communicate. Build your organization’s “AI literacy” through things like optional trainings or AI community forums. Communicate goals, “what’s in it for me” benefits and progress around AI initiatives.
  2. Take a human-centered approach. Involve users in problem definition, iterate on solution development together and focus on user-friendly, well-integrated design so AI tools are approachable and easy to use.
  3. Prioritize explainability. Opt for algorithms that offer transparency over opaque, “black box” models. Lean on “analytics translators” who can interpret model output into actionable insights for users. For example, what data or context could be added to an AI’s recommendation to help users make informed decisions?

To the last point, this is also why we’re very selective about where Generative AI solutions are used. These are typically black box models, where even their developers can’t fully explain why they act the way they do, including oddities such as performing better when praised or acting “lazier” in the month of December. It underscores the importance of choosing the solutions that fit specific needs and keeping humans in the loop to ensure responsible usage and control.

There may be some limited crossover between today’s AI applications and traditional BI tools, but drawing parallels between the two – or conceptualising AI as just a newer version of business intelligence – runs the risk of selling the possibilities short. Can you explain where you draw the line between what you call retrospective analytics (which covers traditional BI capabilities) and the new set of advanced analytics, which includes pattern recognition, deep learning, computer vision and so on? At an enterprise level, how do these differ? And where does generative AI fit into the picture?

Traditional business intelligence (BI) tools are retrospective in nature. They analyze past data to give us reports and dashboards that help understand our current state – great for status checks and comparisons.

AI and advanced analytics, on the other hand, are proactive. They use historical data to predict future events and recommend actions. In other words:

BI, including descriptive and diagnostic analytics, tells you what’s happening and why.

AI, including predictive and prescriptive analytics, forecasts what will happen next and suggests how to respond.

These two analytical approaches often complement each other at the enterprise level. For instance, BI can highlight the areas that need the most improvement, where AI can be put in place to drive action towards meeting those improvement targets.
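To make the retrospective-versus-predictive distinction concrete, here is a minimal sketch: the “BI view” summarizes past sales, while the “predictive view” fits a plain least-squares trend to forecast the next month. The sales figures are invented, and real forecasting would use a proper model rather than a hand-rolled linear fit:

```python
# Contrast: descriptive BI (what happened) vs. predictive analytics (what's next).
# Hypothetical monthly unit sales; numbers are illustrative only.
monthly_sales = [120, 132, 128, 145, 150, 162]

# Descriptive (BI): summarize the past.
n = len(monthly_sales)
average = sum(monthly_sales) / n

# Predictive (AI/advanced analytics): fit a simple linear trend
# (ordinary least squares on the month index) and forecast month n + 1.
xs = range(n)
x_mean = sum(xs) / n
y_mean = average
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, monthly_sales)) / \
        sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean
forecast_next = intercept + slope * n  # predicted sales for the next month

print(f"Average so far (BI view): {average:.1f}")
print(f"Forecast for next month (predictive view): {forecast_next:.1f}")
```

The two views complement each other exactly as described: the descriptive summary flags where performance stands, and the forecast recommends what to plan for next.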

I think of generative AI sliding pretty seamlessly into that mix. It could simplify key performance indicator (KPI) reporting through natural language prompts, especially when it comes to creating customized dashboards or doing ad hoc queries. And generative AI can also be a great asset in achieving KPI improvements, especially for productivity-focused goals.

Each type of analytics has a strategic place within the enterprise ecosystem, and when used together, companies see the most benefit.

When we talk about enterprise AI here, it’s clear that we’re often talking about new ways of extracting additional value and creating new experiences from existing data. And some brands will be concerned about the demands this is going to place on data governance practices that are still under development, or on data sources that are still siloed or fragmented. How justifiable is that worry? How can brands start to get their data prepared for AI and advanced analytics? And, thinking further down the line, how can they start to actually use AI to streamline this process in the future?

Concerns about data governance and fragmented data are legitimate as brands look to leverage AI. However, waiting for perfect conditions means potentially never starting.

I’d recommend three new ways to tackle that challenge – starting now and looking ahead.

  1. Start with what you have. Many companies are surprised by how much they can achieve with their existing data. Prioritize use cases that strike the value potential vs. complexity balance, choosing ones that can be achieved with current data sources.
  2. Accelerate data cleanup with data science. Traditional data cleansing processes are highly manual and very time-consuming. The new way to approach those tasks is aided by AI: machine learning algorithms can be used to help sort, organize, and cleanse data in a much more automated (and often higher-quality) way.
  3. Leverage LLMs for unstructured data. LLMs are particularly useful for extracting value from unstructured data – data sources that were previously off-limits for many analytics efforts. Now, analyzing rich information stores, like transcriptions from customer service interactions or e-commerce product descriptions, is easily facilitated with these powerful language models.
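As one hedged illustration of the automated cleanup described in point 2, the sketch below flags likely duplicate product names using simple string similarity. The names, threshold, and `likely_duplicates` helper are hypothetical, and a production pipeline might use a trained entity-matching model rather than `difflib`:

```python
# Semi-automated data cleanup: flagging probable duplicate records so a
# human only reviews candidates instead of the whole catalogue.
from difflib import SequenceMatcher

product_names = [
    "Classic Denim Jacket",
    "classic denim jacket ",   # casing/whitespace variant
    "Wool Overcoat",
    "Classic Denm Jacket",     # typo variant
]

def normalize(name: str) -> str:
    """Lowercase and collapse whitespace before comparing."""
    return " ".join(name.lower().split())

def likely_duplicates(names, threshold=0.9):
    """Return index pairs whose normalized similarity exceeds the threshold."""
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            ratio = SequenceMatcher(None, normalize(names[i]),
                                    normalize(names[j])).ratio()
            if ratio >= threshold:
                pairs.append((i, j))
    return pairs

dupes = likely_duplicates(product_names)
```

Even this crude similarity check turns a manual scan of every record into a short review list, which is the spirit of AI-assisted cleansing.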

What do you see as the near-term future of AI? Do you believe it will be a transformative class of technologies the way people expect? And what does the roadmap to value look like?

Over the past few decades, tools leveraged in the fashion industry have evolved rapidly. New technology is reshaping how brands operate and bring product to market. Today, the industry works in a combination of old-world tools (draping, sketching, Excel planning) and advanced tools (vector tools, 3D design, visual assortments) that are all poised for explosive evolution once again with the integration of AI. This is what I see as the near-term future of AI – enhancing and streamlining existing tools and workflows.

However, as AI technology continues to advance, it’s absolutely on a trajectory to transform the way we work, especially in creative fields and industries that rely heavily on intellectual capital.

Each brand’s roadmap will be unique, but most successful strategies will find a balance between embracing risk – leaning into the paradigm shifts offered by AI – and maintaining a realistic perspective grounded in what truly drives their business. This balanced approach will equip brands to harness AI not just for incremental improvements, but for transformative changes that redefine their creative and operational processes.
