Released in the first-ever DPC Report 2022, this executive interview is one of a twenty-part series that sees The Interline quiz executives from major DPC companies on the evolution of 3D and digital product creation tools and workflows, and ask their opinions on what the future holds for the extended possibilities of digital assets.

For more on digital product creation in fashion, download the full DPC Report 2022 completely free of charge and ungated.

Digital product creation in fashion seems to have reached critical mass, with more brands than ever kick-starting or scaling DPC strategies. Why now?

I believe it’s a combination of many things happening all at the same time.

The first is the democratisation of modelling tools – CLO and Browzwear in apparel, for example, or Gravity Sketch for footwear – that have solved a lot of the roadblocks that used to exist at the beginning of the pipeline. The ability to convincingly render scenes has been there for a long time, but before the wide adoption of those tools we were missing the 3D garments, sneakers, and accessories to populate them. Those assets really are the starting point for everything that comes afterwards, and the fashion industry has now been empowered to create those assets in much higher quality and higher volume because modelling tools like the ones I mentioned have reached a tipping point of usability, accessibility, and performance.

The second is the real-time technology wave that began in the game industry. Substance has been part of this wave, as a key force in texturing and digital materials that’s now being used in many industries outside video games, but the growth of game engines such as Unreal and Unity into new sectors has made it more intuitive than ever to create real-time 3D experiences – either in virtual reality or on flat screens. Fashion being given the ability to make changes to products and environments in real time has been really impactful.

The third and final thing is the general move towards digital twins and the Metaverse, which has created a sudden need for 3D assets in huge numbers, since brands who want to build a presence in those virtual worlds need to populate them with compelling 3D content and experiences.

All of these forces occurring together is, in my opinion, why progress towards digital product creation and digital-native working has happened so fast. Bringing them together is generating a huge amount of value for both creators (fashion designers, for example) and the final consumer. That final consumer gets an experience that’s more immersive, and they can engage with phygital fashion, where the distance is being closed between the virtual twin and the physical object, leading to new blended experiences. And at the same time creators are better able to reach their potential because they have access to tools that allow them to bring their ideas to life in entirely new ways.

The Adobe Substance suite has seen extensive use in other industries where 3D-native working is perhaps more embedded than it is in fashion. What lessons can the fashion industry learn from those sectors? And, conversely, how much of what fashion needs to do in digital product creation is unique to our industry?

That’s a very good question, and I would say that fashion has as much to learn from other industries as those industries have to learn from fashion.

Fashion, for example, has a strong culture built around colour: colour management, colour creativity, colour theory and so on. It’s a fundamental part of how that industry operates, and it’s also something that you see reflected in other industries such as visual effects, where colour calibration and grading are key parts of a final look. So there are crossover areas where one industry can benefit from the knowledge, experiences, and best practices that are built for another.

Take the gaming industry as another example. The entire culture there is built around real-time performance, and achieving results super-quickly, in-engine. That’s something that the fashion industry is working towards right now, because there’s such creative power and such a strong business case for designers, merchandisers, marketing teams and more being able to visualise their ideas as fast as possible. So I think there’s considerable cross-pollination of tools and workflows that’s already taking place between those industries, and there’s a lot more still to come.

The challenge is in understanding where fashion needs to build its own workflows and processes – because there are certainly things that are specific to the fashion industry – and where it will be better served by grabbing portions of workflows from other industries. Because fashion does not always need to use solutions that were built specifically for fashion; the industry can make use of cross-industry tools and workflows where its aims align with the aims of other sectors.

Our new, accessible 3D modelling tool, Modeler, for example, was not designed just for footwear, but we’re excited to see how the footwear industry is already making use of it. And if you take Gravity Sketch as another example, that’s a tool that was born in the automotive industry but that has since gone on to find a huge audience in footwear – to the extent that you could easily assume it was conceived as a footwear tool.

The Substance suite is very similar. Those tools began life in texturing characters and environments in game design, but when you look at how they’re being used in fashion today (we regularly hear fashion designers tell us that Substance has changed their working lives) you might imagine they were designed specifically for fashion.

That’s what leads to the important realisation that, while every industry is different in its outputs and in its demands, many of them have objectives and processes in common. And that’s why I believe that the most important innovations might originate in one industry, but their benefits will eventually be shared by the others.

One of the primary values of digital assets and digital materials is their ability to stand in for physical alternatives in a range of decision-making scenarios. How do you see DPC workflows enabling faster, more flexible routes to market?

Our primary focus recently has been on building workflows and tools that allow people to move from the digital world to the physical world, or vice versa, at any time.

The starting point of that journey was building our image-to-material capability in Substance 3D Sampler, where you have a physical fabric in your hands that you want to apply to a garment created in, say, CLO, and you need to bring that real material into the digital world. We’re making it as easy as possible to turn a reference photo, or a quick smartphone snapshot, into a high-quality 3D material. Now, we are also about to release a new capability in Sampler that allows users to capture a physical object through photogrammetry and turn it into a digital model. So you have two options to digitise real-world assets in a user-friendly way, which is something we see as being the heart of 3D workflows in fashion and other industries.

Image by Pauline Boiteux.

We’ve also spent a lot of time working on taking that journey in the opposite direction with 3D printing. Together with Mimaki, Stratasys, and other partners, we’re building streamlined ways to turn a digital object into a physical one, at any time. And that’s something we have also worked towards with digital fabric printing and on-demand production, as we showcased with our partnership with Gerber to bring an artist’s vision to life digitally and then physically.

The goal here is not to replace every physical asset or every physical workflow with a digital one. We believe the right approach is one that transcends digital and physical, and that allows the user to move back and forth between those two worlds, so that the physical can be augmented by the digital – and the other way around.

Procedural generation and the adaptability of parametric materials open up a huge possibility space in virtual materials, empowering material designers to create almost anything they can imagine. This is terrific in a digital-for-digital workflow (where the final output is a digital asset) but some questions remain around how materials that begin life virtually can then hook into physical production. What is your take on where the creative flexibility of virtual and the achievability of physical should meet?

The best way to create a digital material is the one that best serves what you’re trying to achieve. A scan-based workflow is faster, but it also locks you into a particular track since you only have the ability to make a narrow set of changes once the initial digitisation has taken place. A procedural approach can, in the end, deliver an asset that has greater value because it’s more flexible and because it opens up more creative possibilities, but it requires you to have a very clear view from the beginning of where you want to go, because all your parameters need to be defined the right way – which means spending time up front to gain back more time later.
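As a loose illustration of that trade-off (a toy sketch, not Substance’s actual material model), a procedural material behaves like a function of named parameters that stay editable, while a scan behaves like a fixed capture:

```python
# Hypothetical sketch of the scan-based vs procedural distinction --
# not Substance's real data model, just an illustration of the idea.

def procedural_stripe(width, stripe_width, colors):
    """A 'procedural' one-row pattern: every parameter stays editable,
    so the same definition can yield endless variations later on."""
    return [colors[(x // stripe_width) % len(colors)] for x in range(width)]

# A 'scan' is a fixed capture: fast to acquire, but the stripe width and
# colours are baked in and can no longer be changed independently.
scanned_stripe = ["navy", "navy", "cream", "cream", "navy", "navy"]

# The procedural definition can reproduce the scanned result...
assert procedural_stripe(6, 2, ["navy", "cream"]) == scanned_stripe

# ...and, unlike the scan, can be re-parameterised at any time.
wide_variant = procedural_stripe(6, 3, ["navy", "cream"])
```

The up-front cost described above corresponds to designing the function and its parameters well; once that is done, every downstream variation is nearly free.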

The other component to procedural materials is that they can potentially help to drive manufacturing. This is something we’re still in the early stages of, but it’s possible for digital knitting machines, for example, to leverage the data from procedural digital materials. And from an authoring point of view, we can put guardrails in place to ensure that anything you can generate with a procedural material can indeed be manufactured using a proven process. We’re only at the beginning of this journey right now, but we’re working to build deep connections between digital working and digital manufacturing that we believe will bring the creative power of virtual and the mechanics of physical manufacturing closer together.

The Substance suite is very much geared around comprehensive 3D working, from modelling to staging. Beyond the solutions perspective, extracting that extended value from digital assets, upstream and downstream, is going to require a high level of standardisation and interoperability. How close do you believe the fashion industry is to achieving that standardisation?

A big part of the DPC revolution is going to be standardisation. From the outset we wanted to make sure that the core Substance SBSAR format was as widely supported as possible, and today it’s accessible in solutions from CLO / Marvelous, Browzwear, Lectra, and many, many more – bringing the power of parametric materials into the environments that fashion users already know.

The next step on that journey is embracing USD (Universal Scene Description), which is a framework for bringing together all the different elements of 3D graphics. The animation industry can provide a glimpse into the future here, because there has been a move away from having separate meshes, materials, and textures that are merged by the animation tool, towards having a single file that contains everything, using USD.

A useful way to think about this is how Photoshop or Illustrator hold multiple layers in a single file. That is what the move to USD promises to do for 3D assets – giving users the ability to consolidate information and to produce more variations, more easily.
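To make that analogy concrete, here is a minimal sketch of what a USD ASCII (.usda) file can look like, with geometry and its material declared together in one document (the prim names and structure are purely illustrative, not taken from any real asset):

```usda
#usda 1.0
(
    defaultPrim = "Sneaker"
)

def Xform "Sneaker"
{
    def Mesh "Upper"
    {
        rel material:binding = </Sneaker/Knit>
    }

    def Material "Knit"
    {
    }
}
```

Because the mesh and the material live side by side in one file, a variation can be produced by overriding a single prim rather than re-exporting separate mesh, material, and texture files.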

It’s important to note, too, that we’re definitely not alone in moving in this direction. We’re part of a much wider consortium of companies – including NVIDIA – that is working towards democratising and standardising 3D even further.

Image by Anthony Salvi.

Where do you see digital product creation – and digital assets – going from here? What does the near-future look like for fashion as a whole, and for Adobe specifically?

Creation tools have never been more accessible than they are today. A single person, with a consumer-grade laptop, with software that’s affordable and easy to use, can go through a complete, end-to-end 3D workflow and arrive at either a super high quality static render or a real-time experience, starting from the initial sketch.

I think this is a unique point in history. The amount of power that’s being placed in the hands of anyone who’s interested in creativity is unprecedented. And that’s the world that the next generation of fashion designers are entering – one where you can create and share in ways that would have been unimaginable before.

It’s also even more exciting to think about that cross-pollination of tools and expertise I talked about earlier. The next great fashion designer might not even start their career in fashion! Perhaps they start as a game designer, or a movie director, before moving on to use the same toolsets and the same workflows to create art of a completely different type. In that context, silos are making less and less sense every day, because the world is becoming one big crowd of creators.