Last month, I had a deep conversation with a leading vendor. Afterwards, I was inspired to write a series of articles covering all things 3D – to help the ever-growing numbers of businesses that are exploring the use of 3D in design, development, collaboration, or marketing.
This article is the fourth in that series. And in case you haven’t been following so far, here’s a quick refresher.
First, I looked at the practicalities of material scanning, an area which is constantly developing both in technology terms and in the scope of its applications. Second, I explained the importance of best practices for virtual materials, ensuring that digital draping can replicate the feel and performance of physical fabrics. Third, I looked back at the history of population-wide size surveys and analysed the way new advances in 3D scanning are enabling retailers to access more granular sizing data and employ machine learning to improve fit, lower return rates, and enable all-new business models through personalisation.
In this article, I want to pick up that thread and talk about how that scanned customer sizing data can be used to create 3D avatars (once anonymised and aggregated) and even individual virtual twins.
The phrase “virtual twin” is one that’s already in use for products in a range of industries – from automotive to apparel – and is one of the cornerstones of the smart manufacturing or Industry 4.0 revolution. It’s a very descriptive phrase, since it refers to digital versions of physical products that are accurate to patterns, seams, colours, styling details, construction, components and so on. Simply put: a virtual twin is a precise replica of a physical product in a digital space. Today, I see a clear opportunity to develop virtual twins of customers’ bodies, for use across eCommerce.
But first, I want to look at the potential for the creation of new, accurate, retailer-specific avatars. With modern scanning technologies and best practices, it has become possible to gather very specific sizing data that can then be used to create a virtual fit model of the average consumer. And, using the same data, you can then develop up-to-date size ranges – each with its own detailed, data-driven body shapes and points of measure.
The approved sizing datasets can then be used to develop both engineered block patterns for manufacturing and 3D avatars that should accurately represent your end customers. It’s important to keep in mind, though, that just as manual patternmakers create a library of 2D block patterns on standard-sized mannequins or real-life fit models for different product types, your virtual patternmakers will need to duplicate the same process. This will first mean creating multiple accurately sized avatars, using a detailed range of measurement points, body shapes, weights and muscle tone profiles – especially if your target customers come from the athletics sector.
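For the technically minded, the core of this step – turning a pool of anonymised scan records into the measurements for a size-specific fit-model avatar – is essentially an aggregation exercise. Here is a toy sketch in Python; every field name and number is illustrative rather than drawn from any real survey:

```python
from statistics import mean

# Hypothetical anonymised scan records: each maps points of measure
# (in cm) to values, plus the size band the customer falls into.
scans = [
    {"size": "M", "chest": 96.0, "waist": 82.0, "hip": 100.0},
    {"size": "M", "chest": 98.5, "waist": 84.0, "hip": 101.5},
    {"size": "L", "chest": 104.0, "waist": 90.0, "hip": 107.0},
    {"size": "L", "chest": 106.5, "waist": 92.5, "hip": 109.0},
]

def fit_model_measurements(records, size):
    """Mean of each point of measure within one size band -- the
    data-driven basis for that size's fit-model avatar."""
    sized = [r for r in records if r["size"] == size]
    points = [k for k in sized[0] if k != "size"]
    return {p: round(mean(r[p] for r in sized), 1) for p in points}

print(fit_model_measurements(scans, "M"))
```

In practice a retailer would run this over thousands of scans and many more points of measure, but the principle – per-size averages feeding an avatar – is the same.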
In the 2D world, we would style our blocks and then apply the grading increments for our domestic or international markets (these will be different!), and 3D should follow the same process – especially if you want to use the output as part of the manufacturing process.
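Grading itself is a simple, mechanical operation, which is exactly why it translates so cleanly into a 3D workflow. As a minimal sketch – with made-up base measurements and increments, since real grade rules vary by brand and market:

```python
# Hypothetical base block (size M) points of measure, in cm.
BASE_M = {"chest": 98.0, "waist": 84.0, "back_length": 44.0}

# Increment per one-size step -- note the rules differ by market.
GRADE_RULES = {
    "UK": {"chest": 4.0, "waist": 4.0, "back_length": 0.5},
    "JP": {"chest": 3.0, "waist": 3.0, "back_length": 0.4},
}

def grade(base, market, steps):
    """Grade the base block by `steps` sizes (+1 = one size up)."""
    rules = GRADE_RULES[market]
    return {p: base[p] + steps * rules[p] for p in base}

# One size up from the base block for the UK market:
print(grade(BASE_M, "UK", +1))
# {'chest': 102.0, 'waist': 88.0, 'back_length': 44.5}
```

The same rule table, applied to an avatar’s geometry instead of a flat block, is what lets a 3D system generate a whole graded size run from a single approved sample.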
Typically, these are time-consuming processes conducted with physical dress-forms based on historical size surveys, but today it’s possible to instead visualise a customer-accurate avatar in virtual or augmented reality, and to make fit, style, and range-building decisions based on that visualisation.
The potential for this application of accurate digital avatars is massive. Just imagine completing a sample review meeting moments after the design and development team have finished work on their latest creations. No longer will you have to wait for weeks to turn your samples around, as a 3D asset suitable for internal use – lower fidelity than one for external consumption or design use – can be created in less than an hour.
One possible application of a more photorealistic avatar could be in design and product creation, where perfect technical fit can take a back seat to creative expression and aesthetics. Working on a 3D avatar that they can be confident represents their average customer, designers can quickly make style changes by adding or removing style lines, changing seam or stitch types, experimenting with colours, prints, stripes, plaids, checks and artwork, and much more.
Beyond this, designers can also then use their in-house libraries of pattern shapes and assets (things like collars, cuffs, pockets, plackets, fronts, backs and sleeves) to make changes, or they can choose to draw new shapes directly onto the avatar. Either way, the goal is to be able to design new prototypes that can be draped and quickly visualised in a short space of time, then shared with buying, marketing, and the supply chain partners involved in the design and co-creation process.
The third use case I can envision for virtual twin avatars is using high-fidelity 3D assets as part of a connected workflow to enable the development of product configurators. Simply speaking, these are 3D modelling applications which use parts that have been pre-texture-mapped and then added to a database of materials, components, and embellishments. Every material, component, part and colour option is limited to what is in a pre-planned product offering. The consumer can then customise or configure a product from those pre-assigned options in a virtual space, on a virtual avatar, giving them the feeling of purchasing a personalised, one-of-a-kind product. To some degree you could argue they are – but in fact, this is what is known as mass-customisation.
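Under the hood, the logic of a configurator is straightforward: every customer choice is checked against the pre-planned offering. A toy sketch, with illustrative option names of my own invention:

```python
# Hypothetical pre-planned offering: the only options the
# merchandising team has approved for this product.
OFFERING = {
    "fabric": {"cotton-twill", "recycled-poly"},
    "collar": {"classic", "button-down"},
    "colour": {"navy", "white", "olive"},
}

def configure(**choices):
    """Validate a customer's configuration against the offering."""
    for option, value in choices.items():
        allowed = OFFERING.get(option)
        if allowed is None:
            raise ValueError(f"unknown option: {option}")
        if value not in allowed:
            raise ValueError(f"{value!r} is not offered for {option}")
    return choices

order = configure(fabric="cotton-twill", collar="classic", colour="navy")
print(order)
```

A real configurator would wire each validated choice to a pre-mapped 3D part and re-render the garment on the avatar, but the constraint to a pre-assigned option set – the essence of mass-customisation – is exactly this check.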
We have now arrived at a time when our 3D samples can not only be engineered with almost complete accuracy, but can also be shared with our manufacturing partners to build the physical products through real-time collaboration. It’s possible today to make our new virtual designs so photorealistic that they are indistinguishable from a real product within a photograph. These improvements have been brought about by ongoing developments in the processing speeds and capacities of graphics processing units.
Alongside these general improvements in computing, we are also benefitting from huge advancements in the scanning technologies used for materials, components and bodies, and from the use of smartphones to scan both materials and people.
Advancements in gaming engines and team collaborations are also helping to build new platforms. The march of graphical technology in videogaming has led to the introduction of new lighting and geometry rendering techniques, as well as texturing and material approaches, that can enhance the entire creative process. And we can expect these new advancements to find their way into our 3D fashion systems in the very near future – leading to more believable virtual avatars, virtual scenes, and digital products that look correct in those settings.
To summarise, 3D body scanning and the resulting body data are just two examples of the huge variety of use cases that exist for 3D within the fashion industry today. From the sustainability revolution that is virtual sampling, to the ability to communicate and collaborate visually with suppliers, to the power that can be put in consumers’ hands with product configurators, 3D is allowing the fashion sector to create, automate and simulate in real-time – making the unreal a reality and the impossible possible!