We are well into the ‘disruption’ era of artificial intelligence. Isn’t it funny to even say that, considering it’s only been about 3 years since this tech actually landed in our laps? Things move fast.
Of course AI has been around for decades, as people are fond of pointing out, but not as something sitting on everyone’s desk and in everyone’s pocket every Monday morning – and definitely not as part of a massive ongoing discussion around the future of software for fashion.
Like many, I bought into the early hype. I even considered building a design app centered around a single AI image generation model – which, looking back, would have been a drop in a gigantic wave of similar applications, and a very risky business proposition. A lot of early adopters jumped into Midjourney and Stable Diffusion for image generation, and Runway or Sora for video generation, because those were the cutting-edge models at the time. Now there are countless closed and open source models that make the outputs from just a couple of years ago look ancient.
As strong as the temptation to build was, it’s fair to point out that few durable companies started at that time are still scaling – at least not in user-facing applications. New database architectures and platforms have had a much smoother ride.
While all that first-stage enthusiasm was spreading, and optimists were putting wrappers on what would become commodity technology incredibly fast, more pragmatic (and probably smarter) people and brands were pausing and asking what the implications of this new technology were. How ready should we be to share our data and designs with the companies developing these AI models? Which of the companies developing the models will even be around in five years’ time? There were plenty of good reasons to wait and see, and time – even the limited time we’re talking about here – has been kind to those who were cautious.
There are also positive things to be said about those early adopters, though. They experienced teething issues and failures, and developed methodology and instincts that gave them a head start. This hard-earned experience now guides how they use AI in their workflows, whether they have founded new companies or developed their roles within leading brands and retailers.
Today, some of those same professionals are deploying much more mature generative AI effectively – as seen with brands like Zalando turning static PDP imagery (some of which, itself, is probably generated) into video for eCommerce, or in the range of different consumer-facing virtual try-on deployments.
But I don’t think the bulk of that head start is visible when we look at those downstream applications. While consumer-facing marketing imagery grabs the headlines, there’s significant leverage hidden upstream, where planning, range-building, design, sales, and marketing all have an opportunity to employ the same generative tools to transform the way they communicate and make decisions.
If you’ve read the piece Nick Eley of ASOS wrote for The Interline, you’ll know that designers who would typically have worked 3D-first, in labour-intensive tools, are now taking to generative AI – turning sketches and early designs into rendered design images, or generating videos from their new collection designs.
High-end 3D computer graphics (CGI) can look extraordinary, but that level of realism is built on time, budget, and specialist expertise. In my opinion, 3D-DPC platforms were never truly designed for the pace fashion designers work at, or the level of fidelity they need to target. So as terrific as the output of 3D workflows can be, they also represented a missed opportunity to change workflows deeper into the product journey, because they simply couldn’t scale. Generative AI image and video tools change that dynamic by being fast and sufficiently realistic to accelerate creative storytelling and decision making.
I’ve seen this all play out in my own work. In one project – advising in interior design, a field with creative dynamics very similar to fashion’s – we watched AI take over a big part of the communication and delivery of design concepts and imagery to clients, at a level of speed, quality and accuracy that was not possible with more traditional methods. In a very real way, in real production environments, AI is being used to dramatically streamline processes that were incredibly time- and talent-intensive in the past.
And there’s also a place in these same workflows for the chatbots many of us use now! Conversations with clients, partners, or colleagues in different departments can be recorded and transcribed accurately using AI. Large language models (LLMs), like ChatGPT, Gemini and Claude, can be instructed to capture every detail in those conversations – preferences, constraints and tensions. Armed with those insights, the designer can focus their full attention on the client or prospect, instead of juggling incomplete note-taking during conversations. And provided those notes then make it into a system with proper accountability, they also represent a much faster – and likely better – way to capture key thoughts and intent throughout the product journey.
The same AI models are also being trained to build precise design briefs that then go on to instruct AI image-generation models, producing faithful representations of the designer’s vision.
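To make that concrete, here’s a minimal sketch of the transcript-to-brief step, written against the OpenAI Python SDK. The prompt wording, model name and file name are my own illustrative assumptions, not a description of any particular team’s pipeline:

```python
# A minimal sketch of the transcript-to-brief step. The prompt and the
# model name are illustrative assumptions, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRIEF_PROMPT = """You are a design assistant. From the meeting transcript
below, extract a structured design brief with three sections:
1. Preferences (styles, colours, materials the client responded to)
2. Constraints (budget, timeline, sizing, compliance)
3. Tensions (points where stakeholders disagreed or hesitated)

Transcript:
{transcript}"""

def transcript_to_brief(transcript: str) -> str:
    """Turn a raw meeting transcript into a structured design brief."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable LLM endpoint would work here
        messages=[{"role": "user",
                   "content": BRIEF_PROMPT.format(transcript=transcript)}],
    )
    return response.choices[0].message.content

brief = transcript_to_brief(open("client_meeting.txt").read())
print(brief)  # review and edit before handing off to image generation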
In these real-world cases, the designer no longer spends hours building static mood boards, worrying over presentations and explaining via chat and email what the visuals are trying to convey. The result is AI videos that communicate to the clients what they are paying for (in the case of a studio or agency model) or that convey complete creative intent to colleagues who then work through the commercialisation lifecycle, in a way that no mood board or 3D simulation could.
This same workflow and logic obviously applies to fashion.
With 2D software like Adobe Illustrator, or 3D-DPC platforms, the drawings and 3D base models can inform the generative output. Teams can immediately produce photographic-quality visuals of their new collections without waiting for samples or 3D renders. They can generate motion studies of a garment rotating, with visual approximations of how the fabric would drape as the wearer walks, sits and runs, and test colourways all at the same time – in minutes.
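As a rough illustration of that image-to-image step, here’s a hedged sketch. The endpoint URL, authentication scheme and parameter names are placeholders invented for illustration, standing in for whichever vetted Gen-AI image service a team adopts – most image-to-image APIs expose similar controls:

```python
# Hypothetical image-to-image call: the endpoint, auth scheme and parameter
# names below are placeholders, not a real service's API.
import requests

API_URL = "https://api.example-genai.com/v1/image-to-image"  # placeholder

def render_from_sketch(sketch_path: str, colourway: str) -> bytes:
    """Turn a flat 2D sketch into a photographic-quality render."""
    with open(sketch_path, "rb") as sketch:
        response = requests.post(
            API_URL,
            headers={"Authorization": "Bearer <YOUR_API_KEY>"},
            files={"init_image": sketch},
            data={
                "prompt": f"photorealistic studio shot, {colourway} colourway, "
                          "natural fabric drape, soft lighting",
                "strength": 0.6,  # how far the model may depart from the sketch
            },
            timeout=120,
        )
    response.raise_for_status()
    return response.content

# Batch-test colourways from the same base sketch in one pass
for colourway in ("ecru", "navy", "burnt orange"):
    with open(f"render_{colourway.replace(' ', '_')}.png", "wb") as out:
        out.write(render_from_sketch("flat_sketch.png", colourway))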
Over the next couple of years, the gap between these high-velocity workflows and digital product creation will widen further. AI video will sit alongside real footage in quality, and it’ll be accessible to all at a fraction of the cost.
Other internal processes can also benefit from this. The sales team could walk into a meeting with a retail buyer earlier, remove the guesswork, and get to a firm ‘yes’ much faster. And since accurate, high-quality visualisation happens so early, it can also feed the marketing team, which can start gauging consumer interest and planning digital content before the brand even commits to a full production run.
I do, though, continue to caution companies about how they use closed source, cloud-hosted AI models – and I strongly advise them to clean the data they share. The big tech companies have shown us for years that our data gets shared and sold between them, and regulators have also proven they are not going to police these companies properly.
Meeting transcripts, project files, presentations, design briefs and source imagery can all be useful to our AI assistants. Lean into using these language models – but remove important proprietary data and anonymise the files you upload into them first.
Additionally, clean the ‘noise’ out of your data. If an AI model accesses files that contain irrelevant information, or information that skews the results in an unwanted direction, then removing that information is the path to better results.
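Here’s a minimal sketch of what that pre-upload scrubbing can look like, assuming a simple brand-specific blocklist (the terms and file names are invented for illustration); a production setup would pair this with a proper PII-detection or entity-recognition tool:

```python
# Minimal pre-upload scrub: blocklist terms and file names are illustrative.
import re

# Terms your brand considers proprietary -- hypothetical examples only
PROPRIETARY_TERMS = ["Project Atlas", "FW26 capsule", "Acme Mills"]

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrub(text: str) -> str:
    """Redact proprietary terms and email addresses before upload."""
    for term in PROPRIETARY_TERMS:
        text = text.replace(term, "[REDACTED]")
    return EMAIL_RE.sub("[EMAIL]", text)

with open("meeting_transcript.txt") as f:
    clean_text = scrub(f.read())
# clean_text is now far safer to paste into a hosted LLM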
Then there’s the fear that the second you upload new designs to a Gen-AI image or video model, you’ve lost your protection. It’s a valid concern, and there are many online platforms that should be avoided if a brand is sensitive to this.
This fear, though, should not be a dealbreaker. It just means you have to do the technical homework. Some of the major players, like Google with Gemini and Adobe with Firefly, do offer privacy protections. There are also third-party platforms – built on open source models, and on APIs to leading Gen-AI image and video models – designed specifically to help design teams, where your data is walled off and protected.
So what are the steps to take if you’re not already making use of generative AI for image and video in both consumer-facing and in-house use cases?
- Review your workflows. Ask where you’re losing time and spending most of your resources. What are your biggest pain points, and can AI assist? With technology developing this fast, make it a habit to ask these questions regularly.
- Maintain an R&D attitude to AI, allocating some resources to experiment with and test new tools relevant to your actual workflows, to prove the benefits. Having the right people in the team for this is crucial.
- Develop proprietary AI prompts and methods for your chosen AI tools that work to your brand identity and guidelines.
- Based on the above, replace parts of your existing workflows with AI tools: a suitable LLM, and the Gen-AI image and video generators that have proven they can deliver for your team on both quality and IP protection.
Some organisations will build these capabilities internally. Others will look to an outsider’s advice for perspective – someone who isn’t tied to existing habits, and who can ask the uncomfortable questions and help see what actually needs changing. Not all parts of a workflow need AI, after all.
The opportunity for new, smarter workflows is at your fingertips if you understand your processes well and learn how today’s AI tools – which are markedly more mature than the ones before them – can operate within them. Generative AI is now a permanent part of our lives, so the sooner you make it a permanent part of your product lifecycles, the better.
