Every week, The Interline rounds up the most vital talking points from across the landscape of fashion technology news. This roundup is also delivered to Interline Insiders by email.
Real-time models and avatars take a big stride in believability and accessibility.
Fashion needs a lot from digital models.
The videogame sector has to walk a tightrope between convincing (or well-stylised) aesthetic representations of characters and usability concerns like animation priority. The movie and TV VFX industry emphasises surface believability and natural-looking animation, but has the relative luxury of rendering those things offline, rather than in real time.
Across both of those applications of digital characters, skin, hair, and clothing need to pass as real, but don’t always have to behave with absolute accuracy. There is a semi-famous example of this in the CG movie Frozen, where lead character Elsa’s hair phases through her arm during a pivotal moment in a big musical number. But this was not an animation error – it was a choice made for artistic reasons, to prioritise motion and feeling over technical precision.
To really work in fashion, digital characters – and the artists who work with them – cannot afford to compromise in any of these areas. For human models that aim at photorealism (stylised characters are exempt from those demands), dressed in garments that must be both photorealistic and unbreakably faithful to their patterns, it is hard to find corners that can be cut. To stand in for real models, digital humans need to look real, move like real people, and wear clothes that are also manufacturable. And, for most use cases, they need to render in real time.
Scalability and accessibility will be equally critical to successful digital models for fashion retail. Shoppers are incredibly diverse even within a single brand’s target demographic, and for multi-brand groups, the variety of people who want to see themselves represented in try-ons, virtual photoshoots, and CGI marketing materials is extremely broad.
All of which is important context for understanding the impact of this week’s unveiling of Epic Games’ MetaHuman Creator: an intuitive, scalable way of making convincing digital humans for use in real-time Unreal Engine applications, with full animation rigging and the potential for real-time facial performance capture through Epic’s Live Link Face mobile application.
Fashion has had digital models before, but they have been the result of huge amounts of artistic time and creative endeavour. With the MetaHuman Creator, Epic seems poised to put that creative power directly into the hands of individual brands and designers – giving anyone with the hardware to render them in real time the tools to build models that look and move the way they want.
Crucially, MetaHumans borrow the game engine concept of LODs (levels of detail), where models are shown at full complexity when they are close to the camera or positioned as the “hero” in a scene, but with reduced geometric and texture detail in other settings. By automating this process, the MetaHuman Creator can scale the same digital human for different use cases – and even produce a model that can render in real time on a mobile GPU for augmented reality applications.
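The underlying mechanic is straightforward: at render time, the engine picks the cheapest level of detail whose distance threshold still covers the subject’s distance from the camera. Here is a minimal, self-contained sketch of that selection logic in C++; the level structure, distance thresholds, and triangle and texture budgets below are illustrative assumptions for the sketch, not Epic’s actual MetaHuman specifications.

```cpp
#include <cstdio>
#include <cstddef>

// One entry per LOD: the camera distance (in metres) up to which this
// level is used, plus illustrative geometry and texture budgets.
// These numbers are assumptions for the sketch, not MetaHuman's real values.
struct LodLevel {
    float maxDistance;
    int   triangleCount;
    int   textureSize; // square texture resolution in pixels
};

// Pick the first (most detailed) LOD whose threshold covers the camera
// distance. A "hero" subject is pinned to LOD 0 regardless of distance,
// mirroring the hero-framing behaviour described above.
std::size_t selectLod(const LodLevel* lods, std::size_t count,
                      float cameraDistance, bool isHero) {
    if (isHero) return 0;
    for (std::size_t i = 0; i < count; ++i) {
        if (cameraDistance <= lods[i].maxDistance) return i;
    }
    return count - 1; // beyond every threshold: fall back to the coarsest level
}

int main() {
    // Four hypothetical levels, from close-up hero detail down to a
    // mobile-GPU-friendly budget for augmented reality use.
    const LodLevel lods[] = {
        {  2.0f, 80000, 4096 }, // LOD 0: close-up / hero
        {  8.0f, 20000, 2048 }, // LOD 1: mid-shot
        { 30.0f,  5000, 1024 }, // LOD 2: background
        { 1e9f,   1000,  512 }, // LOD 3: distant or mobile AR
    };

    const float distances[] = { 1.5f, 6.0f, 25.0f, 100.0f };
    for (float d : distances) {
        std::size_t lod = selectLod(lods, 4, d, /*isHero=*/false);
        std::printf("distance %6.1fm -> LOD %zu (%5d tris, %4dpx textures)\n",
                    d, lod, lods[lod].triangleCount, lods[lod].textureSize);
    }
    return 0;
}
```

In a production engine the switch would also blend between levels to avoid visible popping, but the threshold lookup above is the core of how a single authored asset can scale from a hero render down to a mobile AR budget.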
It’s rare for The Interline to describe any technology as groundbreaking, but this is one of those times. The MetaHuman Creator appears to have every creative and commercial base covered… so all that’s left is to see how it can be made to integrate with apparel and footwear 3D design solutions to realise the vision of drag-and-drop product try-ons on an essentially infinite number of different people.
A major move holds promise for digital design-to-manufacturing workflows
While not as immediately impactful as the idea of turnkey digital models, this week’s other announcement is no less significant: Lectra’s acquisition of Gerber Technology.
For the small number of readers who may not recognise those names, Lectra and Gerber Technology have been two of the most dominant forces in the digitisation of the fashion value chain – each with a portfolio of hardware and software that has been developed, deployed, and refined over decades. Unifying those two portfolios under a single umbrella could be a watershed moment, opening an entirely new era of digitisation in the design-to-manufacture workflow.
The Interline has been privileged enough to visit the R&D campuses and showrooms of both vendors many times in the past (read our report on Gerber Technology’s Ideation 2020 event here, now worth revisiting through this new lens), and where each company’s combination of PLM, 3D, DAM, and other software with cutting, spreading, and other manufacturing hardware was compelling on its own, the combined whole is going to be formidable.
As is the case with any acquisition, there are questions to be answered about which products will be carried forward. But by extrapolating from this week’s announcement alone, The Interline can envisage a future where a clear and unambiguous standard for design-to-machine not only exists, but percolates throughout the supply chain as a function of the two companies’ combined reach into production.
So even while we await specifics, it does not feel premature to say that this acquisition – a massive coming-together of software and hardware from two industry giants – could spark an even stronger appetite across the fashion industry to understand, control, and measure manufacturing.
(The image used for the header of this article is also courtesy of Epic Games / Unreal Engine.)