(Re)defining DPC with Adam Hankin of Gemell

Released in The Interline’s DPC Report 2026, this executive interview is one of an eight-part series that sees The Interline quiz executives from major DPC companies on the evolution of 3D and digital product creation tools and workflows, and ask their opinions on what the future holds for the extended possibilities of digital assets.

For more on digital product creation in fashion, download the full DPC Report 2026 completely free of charge and ungated.

For a while now the broad shape and scope of 3D and DPC strategies have been generally accepted, but now companies are asking some fundamental questions about how far those initiatives should stretch. Some see a clear opportunity to take them further. Others potentially see arguments for either ringfencing them where they stand, or possibly even scaling them back. Technology footprints will always morph over time, but this feels like a deviation from the standard. What’s your perspective?

My perspective? We have a huge opportunity in front of us. The only reason any company would consider scaling back the use of digital tools is because the technology they have today is not benefitting them in real manufacturing scenarios. Using DPC tools can be fun, but if the output does not translate into something manufacturable, where the physical product actually resembles the digital design, then what was the point?

Technology in DPC has to earn its place. It needs to make you more efficient, more accurate, reduce cost, shorten timelines, cut waste, and stay connected to the realities of production. If it cannot support real decisions, companies will naturally question how far they should take it.

Gemell is a new-age technology business. We are not held back by legacy systems or scanning-based limitations. We can rethink the pipeline from the ground up and push DPC further upstream into the places where most sampling, cost and waste occur. Once you start modelling at the fiber, yarn, and fabric level using real manufacturing data, the digital output becomes something you can trust. And when you can trust it, you stop asking whether DPC should be scaled back. You start realising how far it can go.

Different parties might not always agree on whose responsibility the actual work is, but the industry has nevertheless largely standardised on the idea that getting fabrics into the digital product ecosystem means scanning physical samples or swatches that already exist. You’re proposing a different approach based on simulation and digital twins of fibers and yarns. Why take that route when the rest of the industry is set up a different way?

All good ideas come from experiencing pain, and for us that pain was scanning thousands of physical samples. We saw first-hand how frustrating, slow, unscalable, and restrictive it was. And in the corner of the room, there were the boxes of samples waiting to be thrown out. It was obvious the industry needed a better approach.

My cofounder, Rathe Hollingum, has a deep technical, mathematical and VFX background, paired with an almost obsessive curiosity about how textiles are made. When you look closely at textiles, you realise the entire process is built on data. Fibers come with lab test information. Yarns are spun to exact technical parameters. Fabrics are woven or knitted from design files that run directly on the machines. None of that data was being used in DPC.

We realised that if we could 3D model the fiber, digitally spin the yarn, and digitally weave or knit the fabric using the same manufacturing data mills already rely on, we could create a fully procedural workflow with photorealistic, fiber-level accuracy. No samples. No scanning. No post-production. No waste.

This unlocks a different way of working. Designers get editable digital twins of yarn and fabric that behave like real materials inside CLO or Browzwear. If someone wants to change a yarn or adjust a colourway, they no longer have to wait two weeks for a new sample to be produced and scanned. They update it in Gemell, re-render, and continue designing within minutes.

Walk us through how you’re aiming to improve the accuracy of yarn and fabric representation in 3D, not just at the construction level, but in aesthetic areas – especially colour.

Accurate digital materials start with accurate colour, and you cannot get that from scanning or surface photography. Colour in textiles does not live on the surface. It is the result of how thousands of fibers, each with their own colour response, blend and scatter light through a yarn.

This is why we start at the fiber level. We capture fiber material, length, diameter, luster and how the fiber scatters and absorbs light. We use spectral colour data rather than RGB values so the colour behaves correctly under different lighting conditions. That is essential for apparel because the same fabric looks different in daylight, in store lighting and under studio lights.

Once the fibers are modelled, we generate the yarn using the actual manufacturing parameters. Blend ratios, twist, yarn count and melange recipes all influence how colour mixes inside the yarn. For example, melange yarns are not simply grey with flecks of colour. They are the result of multiple coloured fibers interlocking and catching light differently at every angle. Our system recreates that behaviour rather than painting a texture on top.

After that, we build the fabric using the real weave or knit design file. Colour behaves differently in a twill compared with a jersey knit because the yarn paths change and light scatters in different directions across the structure. By modelling the structure in 3D, we produce colour that shifts naturally with lighting, scale and viewing angle.

The result is colour accuracy that is not only visually correct, but grounded in how the material is actually constructed. This is what allows designers to trust what they see on screen and make decisions with confidence.

As any company with enough digital product creation experience under their belt will tell you, there’s a fundamental difference between visually representing something and actually simulating the physical properties that govern the way it behaves. With the ideal end goal of DPC being a full-blown digital twin that can stand in for its physical counterpart wherever decisions need to be made, it seems clear that any fiber-forward simulation would need to include both aesthetic and physical characteristics, in order for the yarn or fabric to be considered “complete”. How far are you covering both elements right now?

Our initial focus has been on visualisation, but the major step forward comes in 2026 with the release of our fully procedural physics simulation for yarn and fabric. This becomes especially important as recycled and next-gen fibers grow, because their behaviour depends heavily on fiber-level properties and blend ratios.

Since Gemell models materials from the fiber upward, we already capture the data needed for real physics: fiber length, density, tenacity, elongation, plus yarn characteristics like the recipe, yarn count, ply count, and twist. Using these inputs, we have demonstrated the ability to simulate stretch and even the breaking point of yarn digitally.

When you combine that with the weave or knit structure, you can simulate stretch, drape, and deformation at the fabric level without producing a physical sample or using testing equipment. That is the direction we are moving in. Visuals help you recognise a material. Physics lets you trust it. Our goal is to bring both together so that digital twins can support real production decisions.

A clear picture is emerging – both in this year’s DPC Report and in the wider industry – of where 3D and AI are likely to specialise and diverge, and the key edge for 3D feels like its ability to simulate rather than just visualise, and to provide the foundational data layer behind a lot of other initiatives and ideas. Given that Gemell is emphasising full simulation, it seems like perhaps you’re in the right place to go after that new distinction and definition for DPC. Do you see that opportunity? And what else do you believe your approach puts you in the right position to go after in the next few years?

AI can generate an endless stream of images, but it cannot replace the technical understanding required for production decisions. That is the line that matters. If you are choosing a yarn, a fabric construction, or a supplier, you need more than a nice picture. You need to know how that material is built and how it will behave in the real world.

Right now, one of the biggest reasons brands are cautious about AI is that most AI tools invent texture. They fake the weave, the yarn structure, the fiber mix. For a brand that spends time and money developing beautiful, complex surfaces, that is a non-starter. Nobody wants a system that “guesses” what their hero fabric should look like.

The only way to make AI genuinely useful is to ground it in real, production-based knowledge. That is where Gemell sits. We know the fiber data, the yarn recipes, the constructions and the lab results. We can use AI to speed up workflows, not to alter geometry or appearance. It becomes a way to enhance what we already know, not overwrite it.

And visualisation and simulation are only the first layer. Once you understand materials at this level, you can start to analyse entire manufacturing chains. For example, with a yarn spinner in India we have taken their raw fiber test data, their yarn lab results and their production settings, then shown them how to optimise sourcing and processing for better yield and consistency.

So yes, we see the opportunity in using simulation as the foundation for DPC. But we also see something bigger. The same skill set lets you turn material simulation into process intelligence, and that is where things get really interesting.

What’s the most useful question that companies can ask themselves, right now, to better understand what they want to accomplish next with 3D – whether that’s driven by their own ambitions, or by changes in the market?

How much time and money would you save, and how much quicker to market would you be if you could fully trust what you are looking at on a screen?

My biggest piece of advice is to question every process. Imagine you arrived in this industry with no prior knowledge and were asked to design a workflow from scratch. With the technology available today, there is no chance you would build the same analogue, sample-driven pipeline that we still see across most of the supply chain. You would design something faster, cleaner and more digital from the beginning.

Asking that question helps teams understand what they really want from 3D and where the biggest opportunities sit. Once you identify the places where you rely on physical samples to make decisions, you also identify where digital materials can make the most impact.
