Over the past decade or more, fashion has invested huge energy, money, and optimism into 3D design and development. The promise was compelling: faster samples, fewer errors, improved fit accuracy, clearer supplier communication, quicker content creation, and a more sustainable product pipeline. And in theory, 3D is still capable of delivering all of that.

But after six years trying to embed 3D design across the extended design community here at ASOS, I’ve arrived at a question that I think the wider industry also needs to ask itself: is 3D always the right digital tool for every one of those objectives? Or can some of those benefits be realised more quickly or more easily by using other tools – especially, today, with the help of generative AI?

An important caveat: asking this question is not the same as saying that 3D is obsolete, or walking back from the deeper drive to digitise the way our product creation teams work. It means, at least to my mind, that by pursuing a “generalist” approach and tasking DPC with doing all the heavy lifting across the product journey, we as an industry misunderstood where 3D actually creates value and which companies and teams benefit from it most, and that we over-indexed on places where that value simply isn’t found.

And it also means – again, to me – that the future of digital product creation lies in mapping and zeroing in on the areas where specialism, technical accuracy, and depth of capability matter. This can be a department-by-department exercise or, in cases like ours, it can be shaped by market segment.

In mass market fashion, with its constant push for speed, option width, and newness, our experience has shown that the steep learning curve of 3D, and the requirement to work with technical precision from the very beginning of the product lifecycle, simply doesn’t match the realities of how designers need to work. By contrast, AI has arrived with minimal onboarding friction and slotted naturally into the parts of the design process that genuinely needed transformation, but that were always going to be resistant to the kind of fundamental architectural and talent re-think that 3D demands.

So this opinion piece is not an argument for AI over 3D, or vice versa. The case I want to make is for choosing the right digital tool for the right task, and allowing designers, not technologies, to dictate how digital creation evolves from here.

The Pressures on Modern Product Teams

To understand why AI has taken off so quickly and to identify the cases where 3D has struggled in our sector, we need to look at the environment today’s fashion designers actually operate in.

Across our segment, product teams come into work every day and stare down three intensifying pressures:

The first is speed. The cycle from idea to sample has tightened relentlessly. Designers are asked to create more, faster, with less room for exploration or experimentation.

The second is option count. Our approach to fashion thrives on breadth in a way that simply isn’t reflected in other market lanes. Every week brings new themes, new drops, new categories to react to. Designers often work across dozens of styles simultaneously.

Finally, we have the weight of data that comes from the first two points. The availability, velocity, and sheer variety of sales data – click-throughs, add-to-basket rates, returns – has shifted decision-making for fashion companies. Where instinct once guided design, the pressure now is to “play the odds,” replicating proven winners and minimising perceived risk.

I’ve often heard people express the concern that technology is turning creatives into robots. Anyone who’s been in 3D long enough has had that opinion put to them, by creative professionals who feel as though their jobs are being tilted towards administration and data-entry. And everyone today has been asked, or has asked themselves, whether AI is about to change creators into curators of machine output.

The real irony, though, is that many designers have already been behaving like robots long before AI arrived – not by choice, but because the structures around them de-emphasise the parts of their roles that we broadly consider to be “creative”.

What gets squeezed out in this environment? The highest-value creative activities, basically. Things like visiting vintage markets, discovering new references, or exploring physical archives. These are the activities that lead to genuine originality, they’re the ones designers consistently tell me they miss the most, and they’ve received very short shrift in conversations around both 3D and AI.

So let’s keep this concept in mind when we think about how any digital tool is going to be able to prove its value, or otherwise.

For some sectors, a fully digital pipeline might not be practical

When 3D first hit its stride in fashion, the wider industry (not just mass market fashion) saw it as the future of design and product creation. The end-to-end pitch, as I’ve said, was seductive: hyper-accurate fit; digital prototypes replacing samples; frictionless supplier collaboration; a fully digital pipeline from design to production.

Many years into the journey, those are still the objectives. And there are plenty of brands that have made big and measurable strides towards them, but there are also companies that tried to apply the end-to-end 3D philosophy universally and technology-first, and then encountered barriers that had nothing to do with vision or enthusiasm, and everything to do with practical deployment and user fit.

At ASOS, we found that the learning curve for 3D was, in most cases, simply too steep to fit into the time available. Our designers are exceptional at what they do, but they also work at extraordinary pace. They simply don’t have the time to spend months becoming competent 3D specialists. The ramp-up period was incompatible with the speed of their day job.

Another key challenge was that our approach to fashion lacks stable blocks. 3D, as we’ve learnt, excels when blocks remain consistent season after season. For market segments like sportswear, leisurewear, tailoring, luxury and so on, this creates a significant business case and a clear user fit for DPC, but at ASOS, newness is the primary model. Shapes change constantly. Blocks are updated constantly. There is very little “repeatability” for designers to anchor to, and consequently precious little purchase for 3D to hold onto.

In our specific case, governance barriers and gatekeepers also created structural friction. Patterns and foundational blocks sit naturally with garment technologists, and their responsibility is to maintain accuracy and fit integrity. This meant that designers couldn’t freely change blocks in 3D without risking the consistency of calibrated fit. The result here was structural, not anything that can be attributed to the solutions. We had the tools, but no one could use them.

We explored a tiered approach that allowed designers to make minor changes, but that required collaboration for major ones, and even discussed a modular system of sleeves, collars, and components. But the governance model alone slowed our progress.

Finally, we found that 3D in isolation wasn’t where the real value would lie – it was going to be in the entire DPC ecosystem. So we set out to explore fabric scanning, materials libraries, avatar refinement, and more. Each one helped get us closer to the vision, but each also added new complexity.

3D, we came to realise, just doesn’t make sense in isolation; it requires this infrastructure, and while some brands are built for that (wholesale brands, sportswear, footwear and others), other segments of fashion simply aren’t there yet because of the demands of speed and scale that it places on each of those different components.

Again: this doesn’t mean 3D has no place in fashion the way we create it. It means that we need to be careful, as a whole industry, to really understand which environments, market segments, roles, and organisational structures it fits best.

Why some designers have embraced AI as an easier entry point to digital creation

As we’ll see in a moment, I don’t believe that AI is going to replace 3D. In fact, as you’ll read elsewhere in this report, it’s becoming progressively clearer where the two technology strands complement each other, where they converge, where they diverge, and where they can, sometimes, exist in tension.

But from the point of view of wanting to document our experiences with digital product creation, and to start understanding why the wider industry is pitting these things against one another, it’s important to know where and why generative AI has slotted into our designers’ workflows in places where 3D eventually ended up not making sense.

The most obvious of these is that AI is relatively frictionless. It’s slotted into designers’ natural workflows rather than asking them to learn entirely new ones. And today at ASOS, almost all designers use AI in some part of their process. Not because they were told to, or as part of a whole end-to-end transformation initiative, but because it solves one or more of their everyday pain points.

For designers whose objective is to make the leap from sketch to photoreal visual, the clearest benefit is speed. What they want to accomplish can be done in minutes, rather than hours or days, and if they’re capable of sketching then they’re able to work with AI, without acquiring any additional skills.

In an environment where breadth and rapid-fire iteration matter a lot, the fact that AI doesn’t think programmatically or linearly is a benefit rather than a drawback. That lack of predictability would, obviously, be anathema to a 3D pipeline, but with AI it allows our teams to find unexpected outputs and “happy accidents” that are invaluable for sparking ideas at the start of the process.

Perhaps the biggest difference, though, is in the flexibility of AI tools when it comes to providing solutions for a wider audience. Where 3D designers needed to fit a specific mould, and their work had to be exported and shared with other people in order to create value for a wider audience, we’ve seen three different ‘archetypes’ of AI users naturally emerge:

  • Ideators – about 30% of our user base – use AI to rapidly explore shape, proportion, and newness.
  • Visualisers – which is almost everyone – make sketches photoreal, iterate on materials, colours, and trims, and use AI output for internal sign-off and supplier clarity.
  • Storytellers – about 10% of the user base – build full looks, backgrounds, and environments, using AI imagery to convey mood and styling.

One unexpected but significant benefit has been that putting AI first has allowed us to build better relationships with our buyers. Because visualisation is now effectively instant, designers can show far more wildcard ideas without investing hours sketching them. Buyers see broader ranges, faster, with clearer rationale, and decision-making has become more dynamic and collaborative as a result.

Where 3D still wins (and why DPC still very much matters)

Despite the results we’ve obtained from our designers working with AI, and the relative ease of adoption, I’m under no illusion that AI is a replacement for 3D. 

And the reasons for this all have one common root: fit, technical, and pattern accuracy remain out of the reach of AI. Generative visuals do not accurately represent pattern-piece integrity, fabric behaviour, weight, or stretch. 

For ideation and visualisation use cases, this delta between visual representation and accuracy isn’t a big barrier. For other categories – performance wear, tailoring, dresses, suiting – this accuracy is non-negotiable.

And from that same foundation of accuracy come several other places where 3D will remain the right choice for not just specialist teams, or designers with more runway to learn different methods, but also top-level strategic objectives for their parent brands:

  • Digital product passports
  • True digital twins
  • Physics-based simulation
  • Downstream automation
  • AI × 3D hybrid workflows

We’re actually already experimenting with early hybrids: simple garments created in 3D, fitted to ASOS avatars, rendered into a variety of poses, and then made photoreal through AI post-processing. This is something I expect we’re going to see much more of in the near future, too, as different design, development, and product teams find the right tools for their workflows, rather than being asked to conform to a vision that might not be right for the way they actually work.

From here, based on our experience, I see the market dividing along logical lines:

  • Mass market fashion → mostly AI-first
  • Technical categories → 3D-first
  • The wider future → hybrid, with AI providing the easy onramp and offramp, and 3D providing precision where it matters

To put it as succinctly as I can: 3D isn’t disappearing, it’s becoming correctly embedded into the parts of the industry where it truly belongs and where it can create transformational value, without necessarily needing overstretched designers to learn how to become part of a complete digital pipeline.

Designer-first, not tool-first

One of the biggest mistakes of the past decade, I believe, was assuming 3D was “for everyone.” 

If we reframe this statement to “the value of 3D is for everyone” then it continues to hold water, because the beneficiaries of everything I’ve just written about (technical accuracy, DPPs, true simulation, automation) will be scattered across the extended product journey.

But if we confine ourselves to looking at which tools specific teams want to use, then the statement can be made even simpler: tools should adapt to designers, not designers to tools.

This is something that I think the DPC community tends to overlook, and it’s also part of the reason that the design community has such a strong emotional reaction to AI along two very different axes. 

I’ll never forget a hand-painted print designer I spent time with, who was upset the first time she realised AI could mimic her “style” in seconds. Her fear wasn’t irrational at all: it was deeply human, and it sat at the complete opposite end of the spectrum to the reaction that creative professionals can sometimes have to 3D. She wasn’t concerned that what she was seeing was too hard to use, or that it would create too much of a burden on her time. She wanted reassurance that a tool that seemed so easy to use was going to complement her eye, not replace it. She wanted to know that her composition, detail, and brand handwriting weren’t just going to be replicated by a model, but that the model could become a useful collaborator for her.

Creatives understand that their value is foundational. They have skills in fit, function, fabric behaviour, art, and other specialist areas that they want to be able to express, to elevate, and to communicate using technology. This is how they judge the value of design and development solutions, and it’s how we should judge them as well.

Neither 3D nor AI is about creating talent; both are about amplifying it in ways that make sense to the people who have acquired those skills and who use them to create value for brands like ours.

If I were given the task of redesigning fashion education today, I’d mandate those foundations first, followed by AI literacy to give people the widest possible pool of technology options with the lowest skill floor.

In that curriculum, 3D would remain a high-value specialist discipline with a high skill ceiling, and a key data and technical foundation for the broader transformation initiatives that are slowly reshaping the industry, because in most environments, that’s where it works best.