Key Takeaways:

  • With 2D image generation set to be well and truly commoditised, new leaps are happening in 2D-to-3D AI workflows and video-to-video – with implications across fashion’s perception of DPC and marketing.
  • A change in the way 3D assets are made available to creators working with real-time engines like Unreal could provide even more support for fashion’s journey into real-time 3D.
  • In one of the first instances of its kind, consumers are prioritising a smart version of a wearable product over its “dumb” alternative. Could this be a lasting trend?

Sourcing, PLM, and supply chain professionals: share your perspective!

Are you a user of PLM, supply chain management, or sourcing technology? Right now, The Interline is collecting anonymous perspectives from professionals who work in these areas.

In exchange for five minutes of your time, you can help shape another forward-looking report (from The Interline and Bamboo Rose) that will contain actionable insights for your sourcing strategies and the solutions you use.

Take part in the survey today, or take in the key findings from last year’s edition here at The Interline.

Upcoming Webinar: How AI Is Already Reshaping Apparel Supply Chains

Next month, The Interline will be hosting a live, free-to-attend webinar in partnership with TradeBeyond – building on the themes of this year’s AI Report and Sustainability Report to examine how, and where, AI is transforming fashion’s supply chains.

The webinar will run for an hour, at 9AM New York (2PM London, 3PM Paris) on 7th November 2024, and will feature scene-setting opening remarks from our Editor-in-Chief, Ben Hanson, a deep-dive presentation from Jeff Alpert, Founder & CEO of Pillar AI, and a back-and-forth discussion about the tangible applications of AI in the supply chain with André Von Appen, TradeBeyond’s VP of Retail Solutions in Europe.

AI comes for concepting, early-stage visualisation, and animation.

With the upcoming release of Apple’s branded generative AI tools (Apple Intelligence, which is being pushed out in a staggered way following the release of the company’s latest phones), it’s fair to say that 2D image generation has been well and truly commoditised. 


It’s fascinating to think just how compressed, in fact, that adoption curve has been: from the early days of trying to deploy local models on ill-suited hardware, or using Midjourney through Discord, to the broad availability of Apple’s “Image Playground,” AI images have come a long way – at least in terms of how easy they are to access. Quite how artists and creatives should feel about that ubiquity of access is another matter entirely. 

But while the jury is still out on how generative image models will impact how we think about consumer photography and art in the long term, developers have not rested on their laurels. The leading 2D image generation models have become more capable (see the latest versions of Midjourney, Flux and others) and controllable without the need for external tools and tuning, and The Interline’s anecdotal experience is that the results are cropping up everywhere – often without attribution.


But perhaps the biggest impacts generative AI could have on fashion are happening in areas that are adjacent to image creation: video generation (with performance mapping) and the extrapolation of 2D sketches. Both of these areas saw potentially landmark new announcements in the last ten days. 

First, Adobe’s MAX conference included a demonstration of one of the company’s “Sneaks” (early-stage internal innovations) called Project Turntable, which is a very unremarkable name for a proof of concept that has some profound implications.

The idea behind Project Turntable is to use generative AI – trained, Adobe says, on properly licensed data – to turn flat vector sketches into rotatable 3D models. By itself, this is already a pretty mind-blowing sentence to write (see that compressed adoption curve we mentioned earlier) but the demonstration hammers home just how fundamental a shift this is in how we think about consumer graphics.

Consider it through the lens of the fashion creation pipeline. A huge amount of effort today is expended in turning 2D creative sketches into 3D visualisations for the initial purpose of demonstrating the viability of those ideas. Once adopted, those styles undergo technical design, patternmaking, commercialisation and all those necessary steps, but the first stage (turning 2D concepts into 3D concepts) is currently both labour-intensive and talent-heavy.


What if that process could be replicated, at least to some extent, in seconds? Again: the idea would not be for technical design or even fine-grained creative choices to be conducted this way, but if a creative designer simply wants to bring an idea to life – moving from two dimensions to three – as quickly as possible, so they can share that idea with other people, would they be better served by learning 3D design or by simply prompting right in Illustrator?

Now, as a “Sneak” there’s no suggestion that Project Turntable is ready for primetime any time soon. And there would be deep questions to answer in fashion about how, for instance, materials would be handled. But as a demonstration of just how quickly and comprehensively AI could sidestep a lot of what we think about as “3D” today, what’s been shown so far should be prompting a lot of re-evaluation of the purpose of digital product creation strategies.

Second, we have this week’s release of Runway Act-One, the latest step from the AI video generation company (and research lab), which uses relatively simple video inputs – i.e. a piece-to-camera recording of a performer delivering a line – and a text prompt to generate AI video that approximates something like broad-brush performance capture.

Fashion, as we know, has made some halting starts towards replacing human models with generated ones – opening up a sizeable can of ethical worms in the process. The cultural discussion that all of this kick-started has definitely not settled yet, but from a technical point of view the same improvements to image generation and control have moved the needle in terms of how good the output of those static models can look.

Much more limited inroads have been made, though, towards using generated models for video marketing – primarily because the so-called ‘uncanny valley’ effect is much stronger in motion. And while Runway’s approach currently seems to work best with stylised characters, there now seems to be a much shorter distance to travel from current reality to a near-term future where performance capture from one lifestyle shoot or runway show can be endlessly repurposed to create new footage.

How far that is considered to be a good idea will, like a lot of AI-related discussions, be as much of a cultural debate as it is a technical one – but based on this week’s stories that technical discussion is advancing quickly.

Centralising the toolkit for real-time creation.

A shift of a different kind has started this week in real-time graphics, with the release of Epic Games’ new “Fab” store. This consolidates what had historically been separate parts of the Unreal Engine ecosystem into a single content creation marketplace for 3D assets, textures, models, environments and more – with support for Unreal Engine, UEFN, and even Unity.

While a lot of the publicity around Fab is, understandably, focused on the videogame and VFX industries, there is a major push happening in fashion towards creating environments, experiences, and marketing materials in real-time rendering engines. And the wider, easier availability of environments, characters, objects and more – all suited for real-time interactions, cinematics, or staged renders – could be a key unlock for even greater uptake of real-time engines in telling the story of apparel, footwear, and accessories the way they have for other sectors.

The Interline believes that fashion is on the cusp of a real-time revolution, and these kinds of moves towards maturing the content creation ecosystem and community even further are steps in the right direction.

A surprising milestone in consumer uptake of wearable technology.

A short, sharp headline, this one: Meta’s AI-enhanced smart glasses, the fruit of their partnership with EssilorLuxottica, are now outselling “dumb” Ray-Bans in a majority of EMEA stores – even though the actual AI features of the glasses are being held back in the EU pending regulatory clarity.


For fashion (and especially eyewear) brands, this could be a month-on-month outlier. Or it could prove to be the first indicator that embedded systems – particularly of the generative AI variety – could actually begin driving purchasing decisions where fashion is concerned.

The best from The Interline:

This week on The Interline, we started with a deep-dive opinion piece from Kitty Yeung, examining the unusual adoption pattern of virtual try-on (VTO) solutions for consumers, and asking whether generative AI is likely to change that curve.

Next we announced the upcoming release of The DPC Report 2024 – the next instalment in our landmark annual series looking behind the curtains of 3D and digital product creation.

In partnership with ettos, we then published a new collaboration looking at what it takes to be prepared for regulatory compliance in fashion – and why first-hand supply chain knowledge and visibility is just as essential as technology.

Next, we released the first standalone piece from our Sustainability Report 2024: Darya Badiei’s investigation into the root causes of overproduction, and how they might be addressed through intelligence.

And finally we released the second piece from the same report, with our News Editor Emma Feldner-Busztin charting the landscape of sustainability legislation.