The End State For Digital Product Creation

Multi-coloured textile pattern on an elegant dress, generated by artificial intelligence

Key Takeaways:

  • With fashion deeply immersed in the mechanics of delivering on the vision for digital assets and digital workflows, the industry runs the risk of losing sight of how that vision was developed, and how to recognise it when it arrives.
  • Other sectors, where 3D adoption runs deeper, and pipelines are more mature, can offer guidance, cross-industry tools and best practices, as well as potential templates for entirely new approaches to computer graphics.
  • As demonstrated by significant investments made by major brands, fashion is pursuing an end state for digital product creation where every creative and commercial decision that’s currently made based on a physical asset can be made from a digital one instead – building universal, unqualified trust in 3D.

We will someday reach a point where fashion can say that the scaffolding it needs to support the full scope of digital product creation is built.

This doesn’t mean that the work of DPC itself will be done. Just as nobody in architectural visualisation, videogaming, or visual effects has seen their job automated away by a mature pipeline, no-one in fashion is suggesting that the DPC technology and process ecosystem – when it comes into full focus – is going to leave them sitting idle.

When we reach this destination, the craft of designing, developing, engineering, visualising, marketing, and making products digitally, with as few compromises as possible, will still be in incredibly high demand. The toolkit those craftspeople use, though, will no longer have any major missing blocks, data dead-ends, incomplete modules, fragmented standards, or other rough edges.

At the risk of trivialising a lot of tough graft, commercial dealmaking, creative input, best practices, and hard-fought standardisation, the time will come when this all ‘just works’. And, like someone standing back after spending years kitting out a workshop to cover every possible eventuality, the industry will find itself asking what it can create, now that the tools are all in the right place. And it’ll realise the answer is “basically anything”. Which is going to be potent and paralysing at the same time.

That’s why I believe that thinking forward to what that kitted-out garage looks like, and getting an advance taste of that vertiginous feeling of a wide-open possibility space, is a useful exercise. We’re all guilty, at one time or another, of becoming so immersed in a journey, or the incremental steps and successes along the way, that we forget to re-evaluate where we’re headed – and what the benefits of arriving will be.

And this is where looking at the road that those other industries – VFX, architecture, games and more – have travelled is valuable. Because they, with their comparatively pristine, complete workbenches, are, on average, farther down the road towards their own end state for digital product creation and all-round digital transformation. So, in theory at least, they have plenty to teach us.

But first: how did that happen? How did other sectors steal a march on fashion and build deeper 3D / DPC capabilities and systems, sooner?

Part of the reason is a simple function of effort over time: on average, those sectors started embracing digital design and 3D product definitions earlier than fashion did. (There were notable brand and supplier exceptions, who did pioneering, early work in 3D in fashion.) An equally large part is down to the nature of the products and components those sectors are designing, developing, engineering and visualising in 3D – and how well those elements have been served by the existing paradigms of computer graphics.

I want to really zero in on the last part. Approximating the direct and indirect optical properties of light, for example, is a hard problem, but one that’s had a series of different and progressively better solutions – from hand-drawing based on real reference, through pre-baking and rasterisation, to radiosity, ray tracing, path tracing and other methods of simulation.

So if your primary challenge is simulating light to help create more realistic-looking scenes, then you’re pretty well-served by the different techniques and tools that exist today, even if none are fully accurate. And you’ve also likely seen serious strides being made in how effectively and efficiently this work is done during your lifetime.
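To make that ladder of techniques concrete, here is a minimal sketch of its bottom rung: direct Lambertian diffuse shading, the simplest physically-motivated model, which more sophisticated methods like path tracing build on by adding indirect bounces. This is an illustrative toy, not any renderer’s actual implementation.

```python
def lambertian_shade(normal, light_dir, albedo, light_intensity=1.0):
    """Direct diffuse (Lambertian) term: reflected light scales with the
    cosine of the angle between the surface normal and the light direction.
    Vectors are assumed to be unit-length; albedo is an RGB triple."""
    cos_theta = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return [a * light_intensity * cos_theta for a in albedo]

# Light hitting the surface head-on returns the full albedo;
# light arriving parallel to the surface contributes nothing.
print(lambertian_shade((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), [1.0, 0.5, 0.25]))
print(lambertian_shade((0.0, 0.0, 1.0), (1.0, 0.0, 0.0), [1.0, 0.5, 0.25]))
```

Everything beyond this – shadows, reflections, multi-bounce indirect lighting – is progressively harder, which is why each new technique in that lineage was a meaningful step.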

I was 13 years old when the original Toy Story came out, and a single frame of that film took hours (or even days) to render on racks of the best local hardware available at the time. I’m now 41, and there’s a smartphone on my desk that supports hardware-based ray tracing. It’s certainly not going to render anything at the same image quality as a CG film, but the point still stands: I can, if I want, run real-time simulations of reflectance, multi-bounce indirect lighting and other complex calculations at passable frame-rates, alongside rendering characters, materials, and environments that would put that 1995 film to shame – all on a portable consumer device.

Now, this isn’t a lesson about getting older around technology. (Although that’s definitely a feeling I have as we come to the end of another year!) This is, instead, a spotlight on how far the simulation of difficult things can come, and how fast, when software and specialised hardware are applied to the problem – and on how ecosystems, standards, pipelines and workflows accrete around those core capabilities.

Across those other sectors, a lot of smart people have devoted a huge amount of time and effort to applying technology and ingenuity to precisely those kinds of complex problems. That work has led to industry-agnostic watershed moments like physically-based rendering (PBR), programmable shaders, particle systems, procedural generation, and much more – all of which have collectively contributed to a pronounced leap in the believability of both offline rendered and real-time 3D graphics, applicable across a huge spectrum of different use cases.

Fashion, by dint of borrowing tools and best practices from other sectors, now benefits from all of these advances. But our industry has an additional, just as computationally difficult, problem to solve on top: material simulation that captures both the visual and the granular physical and performance attributes of fabrics, construction techniques, and the garments, shoes, and accessories that they combine to create.

This is, to be clear, a fundamentally different problem to the task of cloth simulation in film or videogames, where aesthetics and animations take precedence over accuracy, and where garments and materials are manipulated to conform to the desired visual outcome, with form trumping function instead of the other way around. For fashion brands, the mandate is to simulate materials and garments in a way that doesn’t just look right, but that behaves with total accuracy at rest and in motion.
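One common building block behind physically-grounded cloth behaviour is the mass-spring system: fabric is modelled as particles connected by springs whose stiffness and rest lengths encode the material’s properties. The sketch below is a deliberately minimal explicit-Euler version – an illustration of the idea, not any vendor’s simulation engine, which would use far more robust integrators and constraint models.

```python
import math

def spring_force(p1, p2, rest_length, stiffness):
    """Hooke's-law force on p1, pulling it toward its rest distance from p2."""
    dx = [b - a for a, b in zip(p1, p2)]
    dist = math.sqrt(sum(d * d for d in dx)) or 1e-9
    magnitude = stiffness * (dist - rest_length)
    return [magnitude * d / dist for d in dx]

def step(positions, velocities, springs, dt=0.01, mass=1.0, gravity=-9.81):
    """One explicit-Euler step: accumulate spring forces plus gravity,
    then integrate velocities and positions in place."""
    forces = [[0.0, 0.0, mass * gravity] for _ in positions]
    for i, j, rest, k in springs:
        f = spring_force(positions[i], positions[j], rest, k)
        for axis in range(3):
            forces[i][axis] += f[axis]
            forces[j][axis] -= f[axis]
    for idx in range(len(positions)):
        for axis in range(3):
            velocities[idx][axis] += forces[idx][axis] / mass * dt
            positions[idx][axis] += velocities[idx][axis] * dt
    return positions, velocities
```

The fashion-specific difficulty is that the spring parameters can’t just be tuned until the cloth “looks right” – they have to be measured from, and stay faithful to, the real fabric.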

So fashion is facing a multi-pronged challenge: it needs 3D tools that deliver high-quality visualisation and uncompromising fabric simulation, and that also carry the bills of materials, construction and engineering details, sizing and fit, and other elements that are necessary to create digital products that can actually be made – without needing to reverse-engineer those 3D models in other solutions to accomplish the tasks of production.

This is the combined challenge that the 3D / DPC technology vendor community has been collectively and separately working to address. And their successes in individual lanes (and in integrated workflows) are what led to the marked uptick in adoption of DPC solutions that we benchmarked in last year’s DPC Report. Some of that upward trend in uptake can be laid at the door of the pandemic, of course, since COVID suddenly took physical prototyping and sampling off the table, but a much larger contributor was the maturity of the technology, and the level of trust that people were able to place in 3D assets – allowing them to make creative and commercial decisions with confidence.

And while digital assets don’t always travel well from one use case to the next (at least currently), they can, largely, fulfil most of the purposes that physical assets are deployed to achieve. On that basis, it’s easy to see why fashion has forged ahead with DPC strategies even without the proximate threat of the pandemic – because the value, measured in time, cost, creativity and other key indicators, of substituting a digital asset for a physical one has been proven many times over.

Even in those individual use cases, there’s still work to be done – and simulation engines are getting better all the time. But realistically the biggest roadblock to completing fashion’s DPC technology toolset isn’t deeper simulation, but deeper integration, interoperability, and standardisation. At a strategic level, the work of getting fabrics to drape a little better pales in comparison to the importance of making sure that design inspiration, patterns, materials data, avatars, trims, sewing operations, costs, emissions calculations, and other digital product attributes become part of a cohesive, rolled-up “digital twin” – and that, by extension, it can all travel through the entire technology estate and into every conceivable use case.
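To illustrate what “rolled-up” means in practice, here is a toy data model that gathers those product attributes into one record. The field names are hypothetical assumptions for illustration only – there is no single industry-standard schema for a fashion digital twin, and that absence is precisely the standardisation gap being described.

```python
from dataclasses import dataclass, field

@dataclass
class MaterialSpec:
    """Illustrative material entry: visual and physical attributes together."""
    name: str
    composition: str          # e.g. "100% organic cotton"
    weight_gsm: float         # grams per square metre
    drape_coefficient: float  # physical behaviour, not just appearance

@dataclass
class DigitalTwin:
    """Hypothetical roll-up of the attributes named above into one record.
    Field names are assumptions, not an industry-standard schema."""
    style_id: str
    pattern_files: list = field(default_factory=list)      # 2D pattern pieces
    materials: list = field(default_factory=list)          # MaterialSpec entries
    avatars: list = field(default_factory=list)            # fit-model references
    trims: list = field(default_factory=list)
    sewing_operations: list = field(default_factory=list)  # ordered construction steps
    cost_breakdown: dict = field(default_factory=dict)     # component -> unit cost
    emissions_kg_co2e: float = 0.0
```

The hard part isn’t defining a structure like this – it’s getting every tool in the estate to read, write, and preserve it without data dead-ends.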

This is the scaffolding that, in fashion, is still very much under construction. And it’s also the scaffolding that other industries are now concentrating their attentions on with frameworks and standards like Universal Scene Description (USD or OpenUSD), which are explicitly designed to make collaboration, non-destructive editing, and different intersecting workstreams the standard in computer graphics.
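USD’s core idea – that a scene is composed from a stack of layers, where stronger layers’ “opinions” override weaker ones and the base asset is never destructively edited – can be sketched in a few lines. This toy is only an analogy: real OpenUSD composition is far richer (sublayers, references, variants, and a defined strength ordering), and this dict-based sketch mirrors just the layered-override concept.

```python
def compose(layer_stack):
    """Toy analogue of layered composition: each layer is a dict of
    attribute 'opinions'. Earlier (stronger) layers win; weaker layers
    fill in whatever is still unset. No layer is ever modified."""
    composed = {}
    for layer in layer_stack:            # strongest layer first
        for attr, value in layer.items():
            composed.setdefault(attr, value)
    return composed

base = {"fabric": "cotton twill", "colour": "navy"}
session_edit = {"colour": "sand"}        # a non-destructive override
print(compose([session_edit, base]))     # colour comes from the edit layer,
                                         # fabric from the untouched base
```

The value of this pattern is that many people can work on the same asset in parallel – each contributing a layer – without anyone’s edits clobbering the source data.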

But even while that scaffolding is being built, there are massive, multinational brands that are making huge strategic investments in DPC tools, processes, and talent right here, right now – a testament to just how vital they expect these resources and capabilities to be in the near future.

As you’re reading through this year’s publication (which we recognise is not a single-sitting engagement for most people) you’ll see opinions from a range of different brands and suppliers who are at different, advanced, stages of their DPC journeys, telling their stories and sharing their thoughts on where things go from here. And in the last twelve months I’ve personally had off-the-record conversations with even more brands, across all shapes, sizes, and product mixes – all of whom have major ambitions for what’s possible today with digital representations of physical products, and what’s going to be possible in the future when those digital assets become more fully-featured digital twins.

Speaking on the record, HUGO BOSS – a brand routinely held out, for good reason, as leading the vanguard of 3D adoption – provided us with a new corporate statement that demonstrates just how transformative DPC has been for the company already, and how fundamental it is to the organisation’s near-term future:

“To support our vision of becoming the leading premium tech-driven fashion platform worldwide, HUGO BOSS was one of the early companies to explore the potential of 3D and immersive design in fashion. By leveraging cutting-edge technology, we combined creativity with eco-conscious practices to minimise waste, maximise efficiency, and pave the way for a more sustainable future. Our teams can now review designs digitally, bypassing the need to create and ship samples back and forth between the suppliers and vendors. We have already reduced the number of physical prototypes by 30%, and we aim to keep making progress in this respect. We have a goal of achieving 90% 3D product design and development by 2025. Today we have around 600 employees working with these innovative tools. Our ultimate objective is to enable everyone related to product creation – from design to development – to work with 3D at HUGO BOSS.”

These are big results and bold targets, but also ambitions that the brand has certainly made the right long-term commitments to achieve. And while it’s likely to take organisations that are just beginning their DPC journeys far longer to reach them, the hope is that these goals and results will serve as a lighthouse for the wider industry to target.

So let’s come back around to that original question. What should it look like when the DPC ecosystem is fully built out to the extent that the full complement of job roles in fashion design, development, sourcing, production, sales, marketing and more can work on either creating, using, or interacting with 3D assets? What is the “end state” for digital product creation?

My personal answer to that hasn’t changed in the last few years. I believe the industry-wide DPC toolset and process library can be considered ‘complete’ when anyone who currently makes a creative, strategic, technical, commercial, promotional, or sustainability choice based on either a physical asset, a flat sketch, or a database is able to make the same choices – with the same degree of confidence – based on a digital twin.

Of course, people will also use those same assets to create new experiences and new possibilities that go far beyond what it’s currently possible to achieve using a physical asset or a set of data at the product level, and eventually entire digital twins that model the full set of complexities of their extended supply chain. But in terms of accomplishing the work that’s already begun, a digital asset that can stand in for a physical one in every conceivable scenario is an effective measure of success.

Which raises the question: what’s the distance from here to there? Obviously the answer is extremely subjective at the individual brand level, but from a whole-industry perspective the solutions are likely to be fairly common: more training, more talent, deeper integration and interoperability between solutions, codification of standards, collaboration between in-house departments and external partners, and other methods of keeping closer alignment between physical products and their digital twins.

Now, this all assumes that no breakthroughs or alternative approaches emerge in the interim – or anything else that sparks a more fundamental rearchitecting of the underlying technology, or challenges any of the core assumptions of digital product creation in a way that shortens the distance between physical and digital.

But that kind of radical rethink is what the DigitalCore Consortium is proposing. Billed as an “alliance of global industry leaders who are collaborating to establish a groundbreaking standard and 100% virtual ecosystem for digitisation of systems, objects, and processes,” the Consortium spans cross-industry organisations – and one of its key members is Mode Maison.

Mode Maison have worked to build what they call a “multi-brand retail platform” on top of the Consortium’s proposed framework standard. I wanted to quiz Steven Gay, their Co-Founder and CEO, about how the DigitalCore standard could factor into a possible end state for DPC, and what “convergence” – a word they use a lot – might mean in an industry with as many moving parts as fashion has:

“The future of fashion and other product-centric industries is going to be defined by increasing complexity and universal digitisation. Trying to tackle those challenges through the classic, piecemeal, computational approach is always going to fall short of the ultimate goal: creating digital products and building digital experiences around them that both accurately represent the interconnected rules that govern the physical world, and that anchor those digital assets in scientific reality. This is what’s meant by convergence: building a shared foundation for digitisation and digital creation that’s predicated exclusively on data, and that represents real-world materials in a way that’s accurate, flexible, and exponentially scalable.”

If the DigitalCore Consortium’s goal of solving essentially the whole of known physics in a computer sounds like a tall order – it certainly is. And time will tell whether this novel approach challenges the current pillars of digitisation and digital creation, but the idea of sidestepping the isolated challenges of lighting, materials, and soft-body avatars, and instead pursuing a cohesive approach to simulation is, in theory, one that resonates well with the vision for complete, comprehensive digital twins.

When we think forward to what the future should look like, with the goal of empowering every job role to take any creative, commercial, or strategic decision based on digital assets, it will be absolutely vital for those assets to be standardised, complete, and composable. Instead of just siloed representations of parts and finished products, the ambition would be to create a single, layered, scalable source of truth that reflects reality with no compromises, and that allows people to work with it in an additive, non-destructive way.

At the complete opposite end of the spectrum is generative AI, which – currently at least – does nothing to address first principles or fundamental physics, but which is nevertheless capable of doing a strong job of inference, generating outputs that look as though they were created by a system that understands those principles. And while the focus of our DPC Report 2023 is on the objectives, structures, and systems of core human creativity, it would be naive not to see that generative models are approaching this problem from the other end of the spectrum, creating new examples for a range of different use cases.

Do I, personally, see generative AI taking over essential product creation tasks like creative design and accurate 3D modelling? Not yet. Do I expect it to play a major role in virtual photography, where it can instantly elevate lighting and materials in a finished render? Absolutely. Is it already being deployed in specialised applications like embroidery, material tiling, and other areas that straddle the line between aesthetics and engineering? Definitely.

And while it may not be the case that generative models are creating production-ready patterns, geometry, and styles, there is a strong, looming suspicion that more generalised (although not fully general) intelligence is further along its own track than some people realise. If the last year’s journey from ChatGPT to Google Gemini has taught us anything, it’s not to underestimate the scale of the rug-pull that sudden AI unveilings can accomplish – especially when new modalities are brought to bear.

However the current AI race shakes out, though (a deep-dive of which we’ll be running this year), it’s clear that fashion is going to make progress towards that end state vision in unique and idiosyncratic ways, with different pathways per category, and different philosophies per brand beneath those. We only need to take a look behind the curtain at the way different designers, different brands, different suppliers, and different technology companies work to realise that everyone is building their own scaffolding as they work.

And that makes it difficult for anyone to really take a step back and notice that they’re all heading in the same direction. Some brands, like HUGO BOSS and the others that contributed to our latest DPC Report, are trailblazers, playing an active role in defining future standards and building workflows that will eventually benefit the whole industry. Others are specialising in their own focus areas, and creating new downstream experiences, or new methods of connecting with their partners using digital assets as the foundation.

But, on balance, everyone is working to an end state where a universal, unqualified level of trust can be placed in digital twins of physical products, for essentially any use case throughout the extended value chain. And when we get there, a different, deeper kind of work will begin. And it will be a different kind of fashion industry doing that work – not just one that does a better job of digital product creation, but one that has fully internalised the idea of representing itself and governing itself digitally.
