This article was originally published in The Interline’s AI Report 2024. To read other opinion pieces, exclusive editorials, and detailed profiles and interviews with key vendors, download the full AI Report 2024 completely free of charge and ungated.


Key Takeaways:

  • AI is often seen as complicated, niche, or too expensive due to the tech industry’s tendency to oversell its capabilities. AI is not universally powerful but excels in specific use cases such as data parsing.
  • Traditional AI focuses on narrow, analytical tasks. But generative AI creates new outputs like text, audio, and images, opening up new possibilities in industries like fashion.
  • The success of AI initiatives depends heavily on the underlying infrastructure. Scalability, elasticity, and flexibility are crucial for supporting AI workloads and accommodating evolving demands.

AI needs a facelift. The top concerns I hear day in and day out when I speak with customers are that AI is complicated, that companies lack the skills internally to make effective use of it, that its use cases don’t apply to their specific market niches, or that the resource cost of implementing AI is just too high. Unfortunately for the companies who are letting inertia beat innovation, AI is changing the way we think about enterprise and consumer technology, and those who don’t take the time to understand it will be at a disadvantage moving forward.

So what about AI makes it so easy to compartmentalise as being complicated, niche, or overly expensive? The reality is that this misconception is born from the tech industry’s tendency to oversell AI’s capabilities. Surely, if it’s this world-changing technology that will revolutionise everything from how consumers interact with brands to how electronics and manufacturing equipment communicate with each other, it must be universally powerful and capable, right? Not so much.

AI has some extremely strong use cases, especially in the world of fashion, but it’s not a panacea. It won’t suddenly turn every business on earth into a trillion-dollar titan of industry with completely new levels of productivity. What it will do though, is drive efficiencies in carefully selected areas. AI is phenomenal, for example, at parsing large data sets, but therein lies one of the devils among all the details: AI is only as good as the data it’s trained on and what it’s trained to do with it.

Starting from the top

The first step in any AI journey is to choose a model, be that open source, a pre-packaged commercially available model, or the decision to build your own. But here’s where the first misconception originates: AI isn’t a monolith. There are certainly different ways to acquire a large language model or a diffusion image creation model, but those represent only two architectures and use cases that sit under a very broad AI umbrella.

The loudest kind we see in the market at present is Generative AI (GenAI), which synthesises an output, be that text, audio, image or video, based on an input. The architectures behind these generative models represented a step change in taking AI mainstream in that they broke from what is sometimes called “traditional AI”. Whereas GenAI can take the data it has been trained on and synthesise new data, “traditional AI” can primarily serve very narrow, typically analytical purposes.

To put this in the context of the fashion industry, “traditional AI” might have been used for data-analysis and prediction tasks like demand forecasting or customer segmentation. These are valuable uses of the technology – especially in an industry that struggles with deriving insights from information quickly enough to effect change – but are also somewhat limited in scope. GenAI, on the other hand, opens up avenues like creating virtual fashion designs, personalising customer experiences with virtual try-ons, massively scaling up product photography, or even generating new fabric textures, patterns, or embroideries based on existing designs.
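To make that contrast concrete, here is a minimal Python sketch rather than a production recipe: the first half uses a simple clustering model of the kind “traditional AI” has long relied on for customer segmentation, and the commented-out second half shows how a generative model would be prompted to synthesise something new. The customer figures, model name, and prompt are illustrative assumptions, not a description of any particular brand’s stack.

```python
# Illustrative contrast between "traditional" and generative AI.
# Requires scikit-learn; the customer figures and the prompt are
# hypothetical placeholders, not real data.

import numpy as np
from sklearn.cluster import KMeans

# --- "Traditional" AI: analytical customer segmentation ---
# Each row is a customer: [annual spend, orders per year, avg basket value]
customers = np.array([
    [1200.0, 14, 85.7],
    [150.0,   2, 75.0],
    [980.0,  10, 98.0],
    [60.0,    1, 60.0],
])

# Group shoppers into two segments (e.g. frequent vs occasional buyers)
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(customers)
print("Customer segments:", segments)

# --- Generative AI: synthesising a new output from a prompt ---
# (Needs a GPU and the `diffusers` library, so it is left commented out.)
# from diffusers import StableDiffusionPipeline
# pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# image = pipe("seamless jacquard fabric texture, floral motif").images[0]
# image.save("generated_texture.png")
```

The first task analyses data that already exists; the second creates data that didn’t – and it is the second that places the much heavier demands discussed below.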

But here’s the catch: while GenAI offers exciting possibilities, it also demands more from the hardware and software infrastructure that supports it. The shift from “traditional” to generative models has placed a huge – and ever-growing – weight on the systems and the pure silicon that sit underneath the AI boom. And this means that businesses in every industry need to rethink their AI infrastructure strategy, because this broad spectrum of potential use cases will only be realised if organisations are able to rely on the foundations that will run training, inference, and general use of AI models.

The elastic waistband of AI infrastructure

Building out the infrastructure for AI, though, is not just about investing in the latest hardware or software. It’s about creating an ecosystem that supports experimentation, agility, and scalability. And at a society-wide level, increasing AI adoption and putting new possibilities in the hands of creators, enterprises, and entrepreneurs means lowering the barriers to entry for businesses looking to adopt AI and push the envelope for what it can accomplish in their specific verticals.

As heavy as they already are, AI workloads will grow and evolve further, and the supporting cloud infrastructure needs to scale seamlessly to accommodate these changes. Scalability ensures that infrastructure can handle larger datasets, longer training runs, more complex models, and increased user interactions without compromising performance.

Elasticity goes hand in hand with scalability. While scalability refers to the ability to grow, elasticity refers to the ability to adapt to changing demands in real time. To picture it in practice: one month you might be running simulations for a new fabric blend, and the next you’re handling a surge in demand for virtual try-ons. AI infrastructure needs to be able to scale up or down based on these demands, which can’t always be forecast. An elastic infrastructure means that you only pay for the resources you use, making it cost-effective and efficient.
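As a rough illustration of what that elasticity means in practice, the short Python sketch below turns queue depth into a worker count that can rise during a surge and fall back afterwards, so idle capacity (and cost) is released once demand passes. The thresholds, per-worker capacity, and workload names are hypothetical; a real deployment would lean on its provider’s autoscaling rather than hand-rolled logic.

```python
# Minimal sketch of an elastic scaling rule: the worker count follows demand
# up and down, so unused capacity (and cost) is released when a surge passes.
# The thresholds, capacity figure, and workload names are hypothetical.

def workers_needed(pending_jobs: int, jobs_per_worker: int = 50,
                   min_workers: int = 1, max_workers: int = 40) -> int:
    """Return how many inference workers to run for the current queue depth."""
    desired = -(-pending_jobs // jobs_per_worker)  # ceiling division
    return max(min_workers, min(desired, max_workers))

# A quiet month of fabric-blend simulations...
print(workers_needed(pending_jobs=120))    # -> 3 workers

# ...versus a sudden surge in virtual try-on requests.
print(workers_needed(pending_jobs=1800))   # -> 36 workers
```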

Flexibility is another crucial aspect of AI infrastructure. AI projects are often iterative, requiring frequent updates and adjustments. A flexible infrastructure allows you to experiment with new AI models, algorithms, and techniques without major overhauls. You might need more computing power during peak seasons like Black Friday, or you could be trialling a particularly intensive model during an off-peak season.
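One common way to preserve that flexibility is to keep the model choice behind a small registry or configuration switch, so trialling a heavier model (or rolling it back) is a configuration change rather than a re-architecture. The sketch below is an assumed, simplified pattern; the model names and the registry itself are purely illustrative.

```python
# Sketch of keeping the model choice behind a small registry, so swapping a
# model in or out is a configuration change rather than a rebuild.
# Model names and the registry contents are illustrative assumptions.

from typing import Callable, Dict

MODEL_REGISTRY: Dict[str, Callable[[str], str]] = {
    "baseline-tagger": lambda text: f"[baseline tags for] {text}",
    "experimental-llm": lambda text: f"[LLM-generated copy for] {text}",
}

def describe_product(text: str, model_name: str = "baseline-tagger") -> str:
    """Route a request to whichever model the configuration currently names."""
    return MODEL_REGISTRY[model_name](text)

# Off-peak experiment: try the heavier model without touching calling code.
print(describe_product("merino wool crew-neck jumper", model_name="experimental-llm"))
```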

The privacy question

As AI continues to reshape fashion (along with many other industries), the issue of data privacy and ownership will continue to be on the tip of everyone’s tongue. With the vast amounts of data being generated and processed, questions about who owns this data and how it’s used are becoming increasingly important.

When brands store their data on the cloud, they’re essentially entrusting cloud service providers with sensitive information about their customers, products, and operations. This is precisely why many companies are imposing limits on employees interacting with the most popular cloud-based language models – for fear of sensitive information leaking. And this is also why paid enterprise accounts with companies like OpenAI include stipulations that user data will not be used to train new models or newer iterations of existing ones.

But these concerns about data ownership and control need not feel so imposing. As an independent infrastructure provider, for example, we have strict policies and processes in place that ensure the complete privacy and security of all user data, and we are further evidence of the idea that AI is not a monolith at the infrastructure level any more than it is at the solution level.

This is just one of the reasons why independent infrastructure providers could be more desirable than the ‘hyperscalers’ or large cloud service providers. Just like with other enterprise technology initiatives, vendor lock-in is set to become a key concern when we think about the cloud services that underpin AI – especially when the default option is to work with hyperscale providers like AWS, Google Cloud, or Microsoft Azure.

Vendor lock-in occurs when a company becomes overly dependent on a single provider’s services, making it difficult and costly to switch to another provider or bring services in-house.

The main argument, casting our eyes slightly forward to a point where AI use cases and demand for compute are scaling up quickly, is that vendor lock-in can seriously hamper a brand’s AI ambitions. Scalability, elasticity, and flexibility aren’t typically associated with hyperscalers’ offerings, which can cause trouble for fashion brands in particular.

Is your infrastructure behind the trend?

Fashion brands are particularly susceptible to some of these issues. The industry tends to face more fluctuating demand, seasonal trends, and evolving consumer preferences than others, making it imperative to ensure its AI infrastructure has all the defining features I just touched on. As AI rollouts mature, these issues will only become more magnified – and in fashion, unlike other sectors, the timeline for growing infrastructure and scaling capabilities and performance is potentially going to be short and unpredictable.

Early on in an AI programme, flexibility is paramount. Whether it’s launching a new product line, rolling out a personalised marketing campaign, or integrating new data sources, a flexible AI infrastructure allows brands to experiment, iterate, and innovate without being constrained by technology limitations. With a flexible infrastructure, fashion brands can easily customise their AI models, scale their operations, and integrate with other platforms and technologies. This adaptability is, in my opinion, what’s going to enable brands to stay ahead of the competition, drive innovation, and deliver exceptional customer experiences through real AI applications instead of theoretical ones or pilot programmes.

As AI implementations mature, data volumes increase, AI workloads become more complex, and demand for AI-driven insights grows. Brands therefore need an infrastructure that can scale seamlessly to meet these evolving needs. A scalable AI infrastructure allows fashion brands to handle large volumes of data, support more complex AI models, and serve a growing number of users without compromising performance or reliability. This scalability ensures that brands can continue to leverage AI effectively as their business grows, without the need for costly and disruptive infrastructure upgrades.

Elasticity, then, provides brands with the agility to respond quickly to market opportunities and challenges, ensuring that they can capitalise on trends, optimise operations, and deliver value to customers when it matters most.

It might sound hyperbolic, but I believe that the choice of AI infrastructure – and a recognition of just how much it matters – is going to be as important for brands as the decisions they take around what they want to accomplish with AI in the first place.

Sewing it shut

Looking across the extended fashion value chain, everything from design, production, and distribution, right through to point of sale can potentially be improved by AI. Those possibilities have not just been dreamed up out of whole cloth: AI is here and it’s set to make a major difference to how fashion operates.

Two guiding rules, though: never expect AI to do everything, but always make sure you have the infrastructure to support what it can do.