Released in The Interline’s DPC Report 2023, this executive interview is one of a sixteen-part series that sees The Interline quiz executives from major DPC companies on the evolution of 3D and digital product creation tools and workflows, and ask their opinions on what the future holds for the extended possibilities of digital assets.
For more on digital product creation in fashion, download the full DPC Report 2023 completely free of charge and ungated.
Key Takeaways:
- This year, brands and retailers will likely build out their DPC ecosystems by investing in solutions, training, talent, and integrations, with the ambition of getting a little closer to the vision of a digital twin of product and process.
- AI is also facilitating the capture of real-world data through technologies like body scanning and product digitisation, and is being used to train generative models on trend and design DNA to swiftly generate new materials, product styles, and colourways.
- Infrastructure-level barriers for DPC – including managing vast amounts of data, optimising compute resources for AI models, and ensuring seamless integration of various digitisation processes – require investment and the adoption of scalable AI solutions.
What do you believe are the greatest opportunities that are realistically achievable, in 2024, through investment in DPC talent and tools?
This year and beyond, I expect to see brand and retail businesses continuing to build out their DPC ecosystems by investing in solutions, training, talent, and integrations – all with the ambition of getting a little bit closer to the vision for a digital twin of product and process. But the biggest advances in the next twelve months are going to come from the use of generative AI to automate and accelerate workflows across those extended ecosystems. We are already seeing the impact of generative models in other creative sectors, and fashion will be no different.
It’s interesting to see the extent to which DPC and AI strategies are starting to converge. What are some genuine use cases where the two strands cross over?
Beyond what we’ve already seen with off-the-shelf models, brands are already beginning to either train existing generative models or build entirely new ones using their own trend and design DNA as the foundations. The objective here is to enable their creative teams to quickly bring new materials, new product styles, new colourways, and new options to life in a way that’s on-brand and that avoids the persistent problem of consumer-facing models being trained on datasets from multiple different brands. This kind of bespoke text-to-image output will potentially cut ideation times down to a matter of minutes.
We’re also already seeing the use of AI in capturing the world – whether it’s body / foot scanning, or digitising full products and environments. I’m aware of several AI applications that have the potential to rapidly expand on brands’ and retailers’ DPC footprints by enabling them to more quickly and easily capture parts of the real world.
We’re talking about a fully next-level DPC ecosystem here. What do you see as being the major barriers, at the infrastructure level, to delivering it?
A truly all-encompassing DPC ecosystem will generate a huge amount of data and consume a mammoth amount of compute when we consider all the different point solutions and the multiple lanes of digitisation that need to merge to create a complete workflow. And that’s before we introduce AI – both within the individual processes themselves, and at the oversight and orchestration level where it needs to work with a mix of structured and unstructured data.
AI models currently require a tremendous amount of compute, which is largely done in the cloud. But the near future is going to be defined by different tiers of models that can run locally, in general data centres, and in specialised AI clusters. That infrastructure is still very much being built out, and it’s likely to become a barrier sooner, not later.
There’s a lot of focus on individual process areas in this report, because the current level of DPC maturity is still heavily geared around individual tasks and manual work. How do you see automation developing over the next couple of years, and what DPC process areas will it touch first?
There is no technical reason why we can’t continue to accelerate the end-to-end design to product development process – in both areas where it’s already touched by DPC, and areas where DPC tools and workflows are only just beginning to have an impact.
The fashion industry’s focus should be on deploying generative AI-powered applications that share common LLM datasets to automate mundane, time-consuming tasks that do not add value to the finished product. There are many of these manual tasks – from data collection and entry to scanning, file organisation, and colour-switching – that are prevalent across trend analysis, storyboarding, stitching, patternmaking, and many other areas. The objective of this kind of automation is not to take over these activities, but to remove the manual, non-productive work that people currently undertake.
What’s the likelihood that DPC and DPC-adjacent jobs will be lost to this kind of automation?
Over the next couple of years, I believe fashion is going to face a steep learning curve when it comes to understanding and capitalising on the opportunities that are presented by holistic automation in DPC. This is not a key that’s going to be turned quickly or easily, and I don’t expect that it’s going to lead to job displacements any time soon.
What I do see, though, are teams becoming much more efficient as DPC automation is rolled out, which will raise some deep organisational questions about where people’s talents are best deployed, and how to make use of them to create new opportunities and greater choice for consumers.
I can see designers adopting AI copilots as design assistants to help augment their creativity, and I expect their jobs – and the roles and responsibilities of the wider creative teams – to change as they have the opportunity to learn new skills, improve the way their designs are created and tagged so that they become a more seamless part of the post-design pipeline, and push their creative frontiers even further.
How would you describe the ideal 3D / DPC pipeline – category-specific or generalised – and what barriers are currently preventing it from being built and widely adopted? What pieces still need to be put in place for fashion to stand the best chance of achieving what you define as the full-scale vision for DPC?
A truly future-ready DPC pipeline needs to be highly flexible and category-agnostic, and it should make use of shared, smart (LLM) datasets. To help realise this, we need to concentrate on creating seamless integrations across existing technologies and processes, to enable frictionless collaborative creation that unifies planners, data scientists, designers, 3D artists, material and colour technical experts, and many more – all in a single shared ecosystem with an intuitive UI/UX, built on a powerful new IT infrastructure.